Feed aggregator

New OA Framework 12.2.6 Update 10 Now Available

Steven Chan - Thu, 2018-02-22 11:16

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Oracle Application Framework (FWK) Release 12.2.6 Bundle 10 (Patch 27308923:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF bundle patch are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 
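For reference, a hedged sketch of how such a bundle patch is typically applied with EBS 12.2 online patching and then verified; the exact sequence should always be taken from the readme of Patch 27308923, not from this outline:

adop phase=prepare
adop phase=apply patches=27308923
adop phase=finalize,cutover,cleanup
# then confirm the bug number is registered (run the query as the APPS user)
sqlplus apps <<'EOF'
select bug_number, last_update_date from ad_bugs where bug_number = '27308923';
EOF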

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

In addition, this latest bundle patch includes fixes for the following issues:

  • When a Descriptive Flexfield (DFF) is in an advanced table under a query bean, some of its segments may not be displayed. Instead, other columns of the table may be repeated, or the same segment may be repeated.


Categories: APPS Blogs

Oracle Linux 7 for ARM64 updated to OL7.4

Wim Coekaerts - Thu, 2018-02-22 10:56

We just updated the Oracle Linux 7 for ARM64 content.

Oracle Linux 7 for ARM64 (64-bit only) is freely downloadable from OTN.

The release is now at the same level as x64 (Oracle Linux 7 Update 4).

The ARM64 yum repositories are also updated with the latest content. Keep in mind that we have a devtool set release for ARM as well.

Two important features on the latest ARM ISO:

- first preview of UEK5 (Linux kernel 4.14.14+) as the default kernel

- gcc 7.2 and gcc 7.3 are included on the ISO (and in the yum repo), giving easy and free access to the latest gcc for ARM64

Remember that our ARM port is a preview release: it is for test and development only and is not a GA-supported product today. However, it is on par with x64 in terms of packages, and it is completely free to download and use; no vendor authorization code or anything similar is required.
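A quick, hedged way to confirm the two highlights above on an installed system (the expected values come from the notes above, not from a guarantee):

uname -r        # expect a 4.14.14-based UEK5 preview kernel
gcc --version   # expect gcc 7.2 or 7.3 from the ISO / yum repo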


Software Collections 3.0 for Oracle Linux 6 and Oracle Linux 7, Oracle Linux EPEL, Oracle Cloud ...

Wim Coekaerts - Thu, 2018-02-22 10:29

We just recently released a new Software Collections update on our yum server.

SCL 3.0 in the Software Collections yum repo:

On Oracle Linux 7, this adds maven 3.5, nginx 1.12, nodejs8, php7.1 and python 3.6, plus the usual updates to other developer packages.
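As an illustration, a minimal sketch of pulling one of these collections on Oracle Linux 7; the repo id and collection name are assumptions, so check yum.oracle.com for the exact names:

yum-config-manager --enable ol7_software_collections
yum install -y rh-python36
scl enable rh-python36 -- python3 --version   # should report Python 3.6.x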

Updates in the Oracle Linux 7 Developer repo:

We released the latest updates of the Oracle Cloud Infrastructure Python SDK (1.3.14) and CLI (2.4.16), which is built on the Python SDK. This makes it very easy to install the tools you need.

We updated the terraform OCI provider to 2.0.7.
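A hedged sketch of pulling these from the developer repo; the repo id and package names are assumptions, so verify them on yum.oracle.com:

yum-config-manager --enable ol7_developer
yum install -y python-oci-cli terraform-provider-oci
oci --version    # expect 2.4.16 per the note above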

Tons of updates in the Oracle Linux 7 EPEL repo. Too many to list here though. We now have over 10000 RPMs in the EPEL repo.

As a reminder:

You can keep up to date with the new RPMs we add on a daily basis by looking at our yum what's new page. (Sneak preview: we're also going to add an announcement mailing list for those who prefer email over web pages.)

Keep in mind that we have added many new yum repositories of late, so if you have an existing install, consider updating your yum repo file, or at least look at the https://yum.oracle.com pages to see which repos are new.
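For an existing install, a minimal sketch of refreshing the repo definition; the URL and path are the conventional ones, so check https://yum.oracle.com for the current instructions:

cd /etc/yum.repos.d && curl -O https://yum.oracle.com/public-yum-ol7.repo
yum repolist all | grep -i ol7    # review which of the newer repos you want to enable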

latest ol7 yum repo file

OL7 yum page





Can you shed some light on this ora error [session idle bit]

Tom Kyte - Thu, 2018-02-22 10:06
SYS@XYZ> select sum(ksmchsiz) ||' bytes' "ToSHRPOOLMem" from x$ksmsp; ^C ^C select sum(ksmchsiz) ||' bytes' "ToSHRPOOLMem" from x$ksmsp * ERROR at line 1: ORA-00603: ORACLE server session ...
Categories: DBA Blogs

How to restore packages from rman backups

Tom Kyte - Thu, 2018-02-22 10:06
Hello, Is it possible to restore packages from rman backups? I know the export method that can do it, but I want to know if extended rman can do such a thing?
Categories: DBA Blogs


Tom Kyte - Thu, 2018-02-22 10:06
Hi Tom - facing this issue of incorrect string sizing while decrypting. Usage of trim doesn't solve the issue. Below are the codes. 1.CREATE OR REPLACE FUNCTION CRYPT( P_STR IN VARCHAR2 ) RETURN VARCHAR2 AS ...
Categories: DBA Blogs

ODA: Some curious password management rules

Yann Neuhaus - Thu, 2018-02-22 09:33

While deploying an ODA based on the DCS stack (odacli), it is mandatory to provide a “master” password at appliance creation. The web GUI provides a small tooltip describing the rules applied to password management. However, it looks like there is some flexibility in those rules. Let's try to check this out with some basic tests.

First of all here are the rules as provided by the ODA interface:


So basically it has to start with an alpha character and be at least 9 characters long. My first reaction was that 9 characters is not too bad, even if 10 would be a better minimum. Unfortunately, no additional complexity is required, such as mixing uppercase, lowercase and numbers… My second reaction, like that of most IT guys, was to try breaking these rules and see what happens :-P
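To make the documented rule explicit, here it is expressed as a small shell test; this is my own reading of the tooltip, not the actual ODA validation code:

# starts with a letter and is at least 9 characters long
[[ "$PASSWORD" =~ ^[[:alpha:]].{8,}$ ]] && echo "passes the documented rule" || echo "rejected"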

I started really basically by using a “highly secure” password: test


Perfect: the ODA reacted as expected and told me I should read the rules once again. The next step was to try something a bit more complicated: manager

…and don't tell me you never used it in any Oracle environment ;-)


Fine: manager is still not 9 characters long (only 7, in fact), and the installer is still complaining. So far, everything is okay.
The next step was to try a password respecting the 9-character rule: welcome123


Still a faultless reaction from the ODA!

Then I had the strange idea to test the historical ODA password: welcome1


Oops! The password starts with an alpha character, fine, but if I'm right, welcome1 is only 8 characters long :-?
If you don't believe me, try counting the dots in the picture above… and I swear I didn't use Gimp to “adjust” it ;-)

Finally, just to be sure, I tried another 8-character password: welcome2


Ah, that looks better. This time the installer sees that the password is not long enough and shows a warning.

…but would that mean welcome1 is hard-coded somewhere?


No matter, let's continue and run the appliance creation with welcome123. Once done, I try to log in over SSH to my brand new ODA using the new master password


It doesn't work! 8-O

I tried multiple combinations: welcome123, welcome1, Welcome123 and many more. Unfortunately, none of them worked.
At this point there are only two ways to connect back to your ODA:

  1. There is still a shell connected as root on the ODA, in which case the root password can easily be changed using passwd (see the sketch just after this list)
  2. No session is open on the ODA anymore, in which case you have to open the remote console and reboot the ODA in single-user mode :-(
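For solution 1, a minimal sketch run from the shell that is still open as root, covering the three accounts the master password is supposed to set (as described below); the new password values are up to you:

passwd root
passwd grid
passwd oracle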

As the master password should be set for the root, grid and oracle users, I tried the password for grid and oracle too:


Same thing there: the master password provided during appliance creation hasn't been set properly.

Hope it helps!


The post ODA: Some curious password management rules appeared first on Blog dbi services.

Oracle Communications Network Charging and Control Enables Mobile Service Providers to Differentiate and Monetize Their Brand

Oracle Press Releases - Thu, 2018-02-22 07:00
Press Release
Oracle Communications Network Charging and Control Enables Mobile Service Providers to Differentiate and Monetize Their Brand Delivers agile online charging for high-growth mobile, cloud and IoT services

Redwood Shores, Calif.—Feb 22, 2018

Oracle today announced the latest version of Oracle Communications Network Charging and Control (NCC), a key product in Oracle’s complete digital monetization portfolio which addresses communications, cloud and IoT services. A modern, standards-based online charging system for prepaid dominant mobile markets, Oracle Communications NCC expands the reach of Oracle’s digital monetization portfolio to help service providers, mobile virtual network enablers (MVNEs) and operators (MVNOs) in high growth markets, introduce innovative and interactive mobile offers to rapidly and cost effectively monetize their brands. Key capabilities introduced in this new release include 3GPP advanced data charging and policy integration together with support for contemporary, cost effective deployments on Oracle Linux.

The pre-paid market for consumer mobile broadband and Intelligent-Network (IN) voice services continues to grow globally. Ovum forecasts [1] that the market for pre-paid mobile voice and data subscriptions will grow from 5.5B subscriptions in 2017 to 6.0B subscriptions in 2022, with the highest net growth in developing markets. In addition, the GSMA estimates there to be almost 1,000 MVNOs globally with more than 250 mobile network operator (MNO) sub-brands, all seeking growth through brand differentiation. [2]

For such operators, Oracle Communications NCC provides advanced mobile broadband and IN monetization, intuitive graphical service logic design and complete prepaid business management in a single solution. It supports flexible recharge and targeted real-time promotions, complete and secure voucher lifecycle management, and a large set of pre-built and configurable service templates for the rapid launch of new innovative offers. This is critical as competitive pressures and customer expectations mount, requiring service providers to rethink their services and how they can increase brand engagement and loyalty. With this evolution in services, it’s imperative that underlying charging systems evolve to meet these changing business requirements—across digital, cloud and IoT services.

“ASPIDER-NGI builds, supports and operates innovative MVNO and IoT platforms for Operator, Manufacturer and Enterprise sectors,” said David Traynor, Chief Marketing Officer, ASPIDER-NGI. “We use Oracle Communications Network Charging and Control as part of our MVNE infrastructure, allowing our clients to quickly deploy new mobile data and intelligent network services. Our clients demand the controls to deliver competitive offerings to specific customer segments and to support their own IoT business models. This release provides us the agility to accelerate our pace of innovation with an online charging platform that supports the latest 3GPP technologies.”

Oracle Communications NCC aligns with 3GPP Release 14 Policy and Charging Control (PCC) standards, including Diameter Gy data services charging, and supports comprehensive SS7 Service Control (CAP, INAP, and MAP) for IN services. In addition, it supports integration with Policy and Charging Rules Function (PCRF) deployments, including Oracle Communications Policy Management, via the Diameter Sy interface. Such integration provides support for a wide range of value added scenarios from on-demand bandwidth purchases for video or data intensive services to fair usage policies that gracefully reduce mobile bandwidth as threshold quotas are met to ensure an optimal customer experience. Oracle Communications NCC may be deployed in a virtualized or bare metal configuration on Oracle Linux using the Oracle Database to provide a highly cost effective, performant and scalable online charging solution.

“This major release of Oracle Communications Network Charging and Control reiterates Oracle’s continued commitment to provide a complete and cost effective online charging and business management platform for the pre-paid consumer mobile market,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “With new features including support for policy integration and deployment flexibility on a contemporary, open platform, we are offering our customers a modern alternative to traditional IN platforms, enabling them to differentiate and grow their brands, and in turn, delight their customers.”

In addition to Oracle Communications Network Charging and Control, Oracle’s digital monetization portfolio also includes Oracle Communications Billing and Revenue Management and Oracle Monetization Cloud, which collectively support the rapid introduction and monetization of subscription and consumption based offerings.

Oracle Communications provides the integrated communications and cloud solutions that enable users to accelerate their digital transformation journey—from customer experience to digital business to network evolution. See Oracle Communications NCC in action at Mobile World Congress, Barcelona, February 26–March 1, 2018, Hall 3, Booth 3B30.

1. Ovum, TMT Intelligence, Informa, Active Users, Prepaid and Postpaid Mobile Subscriptions, February 09, 2018
2. GSMA Intelligence—Segmenting the global MVNO footprint—https://www.gsmaintelligence.com/research/2015/03/infographic-segmenting-the-global-mvno-footprint/482/

Contact Info
Katie Barron
Kristin Reeves
Blanc & Otus
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM

Yann Neuhaus - Thu, 2018-02-22 06:45

This is a step-by-step demonstration of how to migrate ASM disk groups from one cluster to another. It may be used with or without virtualization, and may be combined with storage-layer snapshots for fast environment provisioning.

Step 01 – Shutdown source database(s) on VMWARE servers

Shut down all databases hosted in the targeted disk groups for which you want consistency, then unmount the disk groups.

$ORACLE_HOME/bin/srvctl stop database -db cdb001

$ORACLE_HOME/bin/asmcmd umount FRA

$ORACLE_HOME/bin/asmcmd umount DATA


Step 02 – Re-route LUNs from the storage array to the new servers

Create a snapshot and make the snapshot LUNs visible to the Oracle Virtual Server (OVS) nodes, according to the third-party storage technology in use.

Step 03 – Add LUNs to DomUs (VMs)

Then we refresh the storage layer from OVM Manager to present the LUNs to each OVS:

OVM> refresh storagearray name=STORAGE_ARRAY_01

Step 04 – Tell OVM Manager to add the LUNs to the VMs to which we want our databases migrated

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac001
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac001
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac001
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac001
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac001
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac001
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac001
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac001

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac002
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac002
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac002
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac002
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac002
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac002
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac002
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac002

At this stage, all the LUNs of both disk groups (DATA and FRA) are available on both nodes of the cluster.

Step 05 – Migrate disks into AFD

We can rename the disk groups if required, or if a disk group with the same name already exists on the target cluster:

renamedg phase=both dgname=DATA newdgname=DATAMIG verbose=true asm_diskstring='/dev/xvdr1','/dev/xvds1','/dev/xvdt1','/dev/xvdu1','/dev/xvdv1','/dev/xvdw1'
renamedg phase=both dgname=FRA  newdgname=FRAMIG  verbose=true asm_diskstring='/dev/xvdx1','/dev/xvdy1'


Then we migrate the disks into the AFD configuration:

$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdr1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvds1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdt1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdu1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdv1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdw1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdx1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdy1 --migrate
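Before mounting, a quick hedged check (on both nodes) that AFD now sees the relabelled disks; both are standard asmcmd sub-commands:

$ORACLE_HOME/bin/asmcmd afd_lsdsk
$ORACLE_HOME/bin/asmcmd lsdsk --candidate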


Step 06 – Mount disk groups on the new cluster and add database(s) to the cluster

$ORACLE_HOME/bin/asmcmd mount DATAMIG
$ORACLE_HOME/bin/asmcmd mount FRAMIG


Then add the database(s) to the cluster (repeat for each database):

$ORACLE_HOME/bin/srvctl add database -db cdb001 \
-oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 \
-dbtype RAC \
-spfile +DATAMIG/CDB001/spfileCDB001.ora
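For a RAC database, the instances also need to be registered; a hedged sketch with assumed instance and node names:

$ORACLE_HOME/bin/srvctl add instance -db cdb001 -instance cdb0011 -node rac001
$ORACLE_HOME/bin/srvctl add instance -db cdb001 -instance cdb0012 -node rac002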


Step 07 – Start up the database

In this case we renamed the disk groups, so we need to modify file locations and some parameter values:

create pfile='/tmp/initcdb001.ora' from spfile='+DATAMIG/<spfile_path>' ;
-- modify controlfiles, recovery area and any other relevant parameters
create spfile='+DATAMIG/CDB001/spfileCDB001.ora' from pfile='/tmp/initcdb001.ora' ;

ALTER DATABASE RENAME FILE '+DATA/<datafile_paths>' TO '+DATAMIG/<datafile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<tempfile_paths>' TO '+DATAMIG/<tempfile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<onlinelog_paths>' TO '+DATAMIG/<onlinelog_paths>';
ALTER DATABASE RENAME FILE '+FRA/<onlinelog_paths>' TO '+FRAMIG/<onlinelog_paths>';
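If there are many files, a hedged sketch of generating those RENAME statements instead of typing them (run with the database mounted); the simple +DATA to +DATAMIG substitution assumes no other path changes:

$ORACLE_HOME/bin/sqlplus -s / as sysdba <<'EOF'
set pagesize 0 linesize 400 feedback off
select 'ALTER DATABASE RENAME FILE '''||name||''' TO '''||replace(name,'+DATA/','+DATAMIG/')||''';' from v$datafile
union all
select 'ALTER DATABASE RENAME FILE '''||name||''' TO '''||replace(name,'+DATA/','+DATAMIG/')||''';' from v$tempfile
union all
select 'ALTER DATABASE RENAME FILE '''||member||''' TO '''||replace(replace(member,'+DATA/','+DATAMIG/'),'+FRA/','+FRAMIG/')||''';' from v$logfile;
EOF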


Then start the database:

$ORACLE_HOME/bin/srvctl start database -db cdb001


This method can be used to easily migrate terabytes of data with almost no pain, reducing the downtime period as much as possible. For near zero-downtime migration, just add a GoldenGate replication on top of that.

The method described here is also perfectly applicable to ASM snapshots in order to duplicate huge volumes from one environment to another. This permits fast environment provisioning without the need to duplicate data over the network or to load the storage layer with intensive I/O.

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.




The post Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM appeared first on Blog dbi services.

Oracle Systems Partner Webcast-Series: SPARC value for Partners


We share our skills to maximize your revenue!
Categories: DBA Blogs

Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker

Amis Blog - Thu, 2018-02-22 03:22

It is nice: push code to a branch in a Git repository and, after a little while, find the freshly built application up and running in the live environment. That is exactly what Wercker can do for me.


The Oracle + Wercker Cloud service allows me to define applications based on Git repositories. For each application, one or more workflows can be defined, composed of one or more pipelines (steps). A workflow can be triggered by a commit on a specific branch in the Git repository. A pipeline can do various things, including: build a Docker container from the sources as a runtime for the application, push the Docker container to a container registry, and deploy containers from this container registry to a Kubernetes cluster.

In this article, I will show the steps I went through to set up the end to end workflow for a Node JS application that I had developed and tested locally and then pushed to a repository on GitHub. This end to end workflow is triggered by any commit to the master branch. It builds the application runtime container, stores it and deploys it to a Kubernetes Cluster running on Oracle Cloud Infrastructure (the Container Engine Cloud).

The starting point is the application – eventmonitor-microservice-soaring-clouds-sequel – in the GitHub repository at https://github.com/lucasjellema/eventmonitor-microservice-soaring-clouds-sequel. I already have a free account on Wercker (http://www.wercker.com/).

The steps:

1. Add an Application to my Wercker account


2. Step through the Application Wizard:


Select GitHub (in my case).

Since I am logged in into Wercker using my GitHub account details, I get presented a list of all my repositories. I select the one that holds the code for the application I am adding:


Accept checking out the code without SSH key:


Step 4 presents the configuration information for the application. Press Create to complete the definition of the application.


The successful creation of the application is indicated.


3. Define the build steps in a wercker.yml

The build steps that Wercker executes are described by a wercker.yml file. This file is expected in the root of the source repository.

Wercker offers help with the creation of the build file. For a specific language, it can generate a skeleton wercker.yml file that already refers to the base box (a language-specific runtime) and outlines the steps to build and push a container.


In my case, I have created the wercker.yml file manually and already included it in my source repo.

Here is part of that file.


Based on the box node8 (the base container image), it defines three building blocks: build, push-to-releases and deploy-to-oke. The first one is standard for Node applications and builds the application (well, it gathers all Node modules). The second one takes the resulting container image from the first step and pushes it to the Wercker Container Registry with a tag composed from the branch name and the Git commit id. The third one is a little more elaborate: it takes the container image from the Wercker registry and creates a Kubernetes deployment that is subsequently pushed to the Kubernetes cluster indicated by the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN.
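Since only part of the file is shown above, here is a minimal sketch of what a wercker.yml along these lines could look like. The step names (npm-install, internal/docker-push, bash-template, kubectl), the registry path and the extra variables are my assumptions based on commonly used Wercker steps, not copied from the actual repository:

cat > wercker.yml <<'EOF'
box: node:8

build:
  steps:
    - npm-install                                   # gather the Node modules

push-to-releases:
  steps:
    - internal/docker-push:
        repository: wcr.io/my-user/eventmonitor-ms          # hypothetical registry path
        tag: $WERCKER_GIT_BRANCH-$WERCKER_GIT_COMMIT        # branch name + commit id

deploy-to-oke:
  steps:
    - bash-template                                 # fill variables in the *.template files (assumed step)
    - kubectl:
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f kubernetes-deployment.yml
EOF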

4. Define Pipelines and Workflow

In the Wercker console, I can define workflows for my application. These workflows consist of pipelines, organized in a specific sequence. Each pipeline is triggered by the completion of the previous one. The first pipeline is typically triggered by a commit event in the source repository.



Before I can compose the workflow I need, I first have to set up the pipelines, corresponding to the build steps in the wercker.yml file in the application source repo. Click on Add new pipeline.

Define the name for the new pipeline (anything you like) and the name of the YML Pipeline – this one has to correspond exactly with the name of the building block in the wercker.yml file.


Click on Create.

Next, create a pipeline for the “deploy-to-oke” step in the YML file.


Press Create to also create this pipeline.

With all three pipelines available, we can complete the workflow.


Click on the plus icon to add a step in the workflow. Associate this step with the pipeline push-docker-image-to-releases.

Next, add a step for the final pipeline:


This completes the workflow. If you now commit code to the master branch of the GitHub repo, the workflow will be triggered and will start to execute. The execution will fail, however: the wercker.yml file contains various references to variables that need to be defined for the application (or the workflow, or even the individual pipeline) before the workflow can succeed.


Crucial in making the deployment to Kubernetes successful are the files kubernetes-deployment.yml.template and ingress.yml.template. These files are used as templates for the Kubernetes deployment and ingress definitions that are applied to Kubernetes. They define important details such as:

  • Container Image in the Wercker Container Registry to create the Pod for
  • Port(s) to be exposed from each Pod
  • Environment variables to be published inside the Pod
  • URL path at which the application’s endpoints are accessed (in ingress.yml.template)


5. Define environment variables

Click on the Environment tab. Set values for all the variables used in the wercker.yml file. Some of these define the Kubernetes environment to which deployment should take place; others provide values that are injected into the Kubernetes Pod and made available as environment variables to the application at run time.

6. Trigger a build of the application

At this point, the application is truly ready to be built and deployed. One way to trigger this is by committing something to the master branch. Another option is shown here:


The build is triggered. The output from each step is available in the console:

When the build is done, the console reflects the result.


Each pipeline can be clicked to inspect details for all individual steps, for example the deployment to Kubernetes:


Each step can be expanded for even more details:


In these details, we can find the values that have been injected for the environment variables.

7. Access the live application

This final step is not specific to Wercker. It is, however, the icing on the cake: actually making use of the application.

The ingress definition for the application specifies:


This means that the application can be accessed at the endpoint for the K8S ingress at the path /eventmonitor-ms/app/.
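As an illustration, a minimal sketch of what an ingress definition with that path could look like; the resource name, service name and port are assumptions, and the API version is the one Kubernetes used at the time of writing:

cat > ingress.yml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eventmonitor-ms-ingress            # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /eventmonitor-ms/app
        backend:
          serviceName: eventmonitor-ms-service   # hypothetical service
          servicePort: 8080                      # hypothetical port
EOF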

Given the external IP address for the ingress service, I can now access the application:


Note: /health is one of the operations supported by the application.
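For example, a hedged one-liner to hit that endpoint; the IP address is a placeholder for the ingress service's external IP:

curl http://<ingress-external-ip>/eventmonitor-ms/app/health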

8. Change the application and Roll out the Change – the ultimate proof

The real proof of this pipeline is in changing the application and having that change rolled out as a result of the Git commit.

I make a tiny change, commit the change to GitHub


and push the changes. Almost immediately, the workflow is triggered:


After a minute or so, the workflow is complete:

and the updated application is live on Kubernetes:


Check the live logs in the Pod:


And access the application again – now showing the updated version:


The post Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker appeared first on AMIS Oracle and Java Blog.

Huge Pages

Jonathan Lewis - Thu, 2018-02-22 03:03

A useful quick summary from Neil Chandler replying to a thread on Oracle-L:

Topic: RAC install on Linux

You should always be using Hugepages.

They give a minor performance improvement and a significant memory saving in terms of the amount of memory needed to handle the pages – fewer Translation Lookaside Buffer (TLB) entries, which also means fewer TLB misses (which are expensive).

You are handling the memory chopped up into 2MB pieces instead of 4K. But you also have a single shared memory TLB for Hugepages.

The kernel has less work to do, bookkeeping fewer pointers in the TLB.

You also have contiguous memory allocation and it can’t be swapped.

If you are having problems with Hugepages, you have probably overallocated them (I've seen this several times at client sites, so it's not uncommon). Hugepages can *only* be used for your SGAs. All of your SGAs should fit into the Hugepages, and that should generally be no more than about 60% of the total server memory (but there are exceptions), leaving plenty of "normal" memory (small pages) for PGA, O/S and other stuff like monitoring agents.
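As a hedged illustration of that sizing advice, assuming a single instance with a 20 GB SGA and 2 MB hugepages (the numbers are examples, not a recommendation):

SGA_MB=20480
echo "vm.nr_hugepages = $(( SGA_MB / 2 + 8 ))" >> /etc/sysctl.conf   # SGA / 2 MB pages, plus a little headroom
sysctl -p
grep -i hugepages /proc/meminfo   # HugePages_Free should drop once the instance starts using large pages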

As an added bonus, AMM can't use Hugepages, so you are forced to use ASMM. AMM doesn't work well and has been kind-of deprecated by Oracle anyway – dbca won't let you set up AMM if the server has more than 4GB of memory.


Oracle database backup

Tom Kyte - Wed, 2018-02-21 15:46
Hi Developers, I am using Oracle 10g. I need to take a backup of my database. I can take a backup of tables, triggers etc. using SQL Developer's Database Backup option, but there are multiple users created in that database. Can you please support ...
Categories: DBA Blogs

How do you purge stdout files generated by DBMS_SCHEDULER jobs?

Tom Kyte - Wed, 2018-02-21 15:46
When running scheduler jobs, logging is provided in USER_SCHEDULER_JOB_LOG and USER_SCHEDULER_JOB_RUN_DETAILS, and stdout is provided in $ORACLE_HOME/scheduler/log. The database log tables are purged by default after 30 days (log_history attribute)....
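For the database log tables mentioned in the question, a hedged sketch of the relevant DBMS_SCHEDULER calls; note that this does not remove the OS-level stdout files under $ORACLE_HOME/scheduler/log:

sqlplus -s / as sysdba <<'EOF'
exec dbms_scheduler.set_scheduler_attribute('log_history','7');   -- keep 7 days instead of the default 30
exec dbms_scheduler.purge_log(log_history => 7);                  -- purge entries older than 7 days now
EOF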
Categories: DBA Blogs

V$SQL history

Tom Kyte - Wed, 2018-02-21 15:46
How many records/entries are there in v$sql and v$session, and how are they flushed: weekly, or under space pressure? Thanks
Categories: DBA Blogs

Dynamic SQL in regular SQL queries

Tom Kyte - Wed, 2018-02-21 15:46
Hi, pardon me for asking this question (I know I can do this with the help of a PL/SQL function) but would like to ask just in case. I'm wondering if this is doable in a regular SQL statement without using a function? I'm trying to see if I can write a ...
Categories: DBA Blogs

Adding hash partitions and spreading data across

Tom Kyte - Wed, 2018-02-21 15:46
Hi, I have a table with a certain number of range partitions, and for each partition I have eight hash subpartitions. Is there a way to increase the subpartition count to ten and distribute the rows evenly? I have tried "alter tabl...
Categories: DBA Blogs

Bug when using 1 > 0 at "case when" clause

Tom Kyte - Wed, 2018-02-21 15:46
Hello, guys! Recently, I've found a peculiar situation when building a SQL query. The purpose was to add a "where" clause using a "case" statement that was intended to verify whether a determined condition was greater than zero. I've reproduced it using a "wit...
Categories: DBA Blogs

Difference between explain and execute plan and actual execute plan

Tom Kyte - Wed, 2018-02-21 15:46
Hi, I have often got questions around explain plan and execution plan. As per my knowledge, explain plan gives you the execution plan of the query. But I have also read that the execution plan is the plan which the Oracle Optimizer intends to use for the query and...
Categories: DBA Blogs

Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data

Oracle Press Releases - Wed, 2018-02-21 12:55
Press Release
Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data Nine Leading Retail Automotive Marketing Agencies Are First to Complete Comprehensive Program, Receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) Designation

Redwood City, Calif.—Feb 21, 2018

Oracle Data Cloud today launched an advanced data training and marketing program to help savvy auto dealer agencies better use digital data. Oracle also announced the first nine leading Tier 3 auto marketing agencies to qualify for the rigorous program and receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) designation. Those companies included: C-4 Analytics, Dealer Inspire, Dealers United, Goodway Group, L2TMedia, SocialDealer, Stream Marketing, Team Velocity, and TurnKey Marketing. Oracle’s Auto Elite Data Marketer program will help agencies effectively allocate their marketing resources as advertising budgets shift from offline media to digital platforms.

“As the automotive industry goes through an era of transformational change, dealers are literally where the rubber meets the road, and they need cutting edge marketing tools to help maintain or grow market share,” said Joe Kyriakoza, VP and GM of Automotive for the Oracle Data Cloud. “Tier 3 marketers know that reaching the right audience drives measurable campaign results. By increasing the data skills of our marketing agency partners, Oracle can help them directly impact and improve their clients’ campaign results.”

Oracle Data Cloud’s Auto Elite Data Marketer Program includes:

  1. Education & training - Expert training for the marketing agency and their extended teams on advanced targeting strategies and audience planning techniques.

  2. Customized collateral - Co-branded collateral pieces to support client marketing efforts, including summary sheets, decks, activation guides, and other materials.

  3. Co-branded marketing - Co-branded marketing initiatives through thought leadership, speaking opportunities, and co-hosted webinars.

  4. Strategic sales support - Access to Oracle's specialized Retail Solutions Team and the Oracle Data Hotline to support strategic pitches, events, and RFP inquiries.

“We are proud to have worked with Oracle Data Cloud since the beginning, shaping the program together to drive more business for dealers using audience data,” said Joe Chura, CEO of Dealer Inspire. “Our team is excited to continue this relationship as an Elite Data Marketer, empowering Dealer Inspire clients with the unique advantage of utilizing Oracle data for automotive retail targeting.”

“We are consumed with data that allows for hyper-personalization and better targeting of in-market consumers,” said David Boice, CEO and Chairman of Team Velocity Marketing. “Oracle is a new goldmine of data to drive excellent sales and service campaigns and a perfect complement to our Apollo Technology Platform.”

According to Joe Castle, Founder of SOCIALDEALER, “We are excited to be one of the few Auto Elite Data Marketers, which provides us a deeper level of custom audience data access from Oracle. Our companies look forward to working closely to further deliver a superior ROI to all our dealership and OEM relationships.”

Through the Auto Elite Data Marketer program, retail marketers learn how to use Oracle's expansive selection of automotive audiences, which cover the entire vehicle ownership lifecycle, such as in-market car shoppers, existing owners, and individuals needing auto finance, credit assistance, or vehicle service. This comprehensive data set allows clients to precisely target the right prospects for any automotive retail campaign. Oracle has teamed up with industry-leading data providers to build this robust dataset, such as IHS Markit's Polk for vehicle ownership and intent data, Edmunds.com for online car shopper data, and TransUnion, the trusted source for consumer finance audiences.

Oracle Data Cloud plans to expand the Auto Elite Data Marketer program to include additional dealer marketing agencies, as well as to work directly with dealers, dealer groups and their media partners to use data effectively for advanced targeting and audience planning efforts. For more information about the Auto Elite Data Marketer program, please contact the Oracle Auto team at dealersolutions@oracle.com.

Oracle Data Cloud

Oracle Data Cloud operates the BlueKai Data Management Platform and the BlueKai Marketplace, the world’s largest audience data marketplace. Leveraging more than $5 trillion in consumer transaction data, more than five billion global IDs and 1,500+ data partners, Oracle Data Cloud connects more than two billion consumers around the world across their devices each month. Oracle Data Cloud is made up of AddThis, BlueKai, Crosswise, Datalogix and Moat.

Oracle Data Cloud helps the world's leading marketers and publishers deliver better results by reaching the right audiences, measuring the impact of their campaigns and improving their digital strategies. For more information and a free data consultation, contact The Data Hotline at www.oracle.com/thedatahotline.

Contact Info
Simon Jones
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Simon Jones

  • +1.650.506.0325


Subscribe to Oracle FAQ aggregator