Feed aggregator

Oracle Backup and Recovery in the Cloud

Gerger Consulting - Tue, 2018-01-30 10:17
Attend our free webinar on February 13th and learn how you can implement a robust backup and recovery solution for your Oracle database running in the cloud.

About the Webinar:
A common misconception is that once you move your database to the cloud, you are done and data protection stops being your problem: it automatically backs itself up, recovers magically and, of course, it all happens securely. Sadly, nothing could be further from the truth.
In reality, the cloud opens up new opportunities but it also comes with its own challenges, especially when it comes to implementing a secure, reliable and fast database backup and recovery solution.
In this webinar you'll learn how the cloud changes your database backup and recovery strategy and what solutions are available to use with your Oracle Database in the cloud.
Specifically, we'll cover the following topics:
  • Backup and recovery strategies for databases running in the cloud
  • Pros and cons of various backup and recovery strategies
  • How to do native, multi-cloud data protection
  • How to streamline and automate backups in the cloud to reduce costs and improve efficiency
We will also do a live demo with Oracle database running in the Amazon Cloud.



Categories: Development

Partner Webcast - Oracle Big Data Cloud Service – Compute Edition: Getting Started Hands-on

Oracle Cloud provides several big data services and deployment models. The big data service choices enable you or your customer to start at the cost and capability level suitable to your use case and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Miroglio Fashion Deploys Oracle Retail Omnichannel Solutions in 7 Months

Oracle Press Releases - Tue, 2018-01-30 08:00
Press Release
Miroglio Fashion Deploys Oracle Retail Omnichannel Solutions in 7 Months
Italian Fashion Company Offers Customers a Rich Shopping Experience with Oracle Retail Xstore and Oracle Retail Customer Engagement

Redwood Shores, Calif.—Jan 30, 2018

Today, Oracle announced that Italian fashion company Miroglio Fashion has deployed Oracle Retail Xstore Point-of-Service and Oracle Retail Customer Engagement to drive innovation and create connections between product, data and people. The Italian fashion retailer operates more than 1,100 stores, six commerce sites and 2,400 wholesale outlets in 34 countries and has 12 distinctive womenswear brands, including Elena Mirò, Fiorella Rubino, Motivi and Oltre.

“As the consumer continues to evolve, our business must follow and innovate. With Oracle Retail, our physical and digital locations can now operate as part of a network to better serve our customer with unified stock management and unified shopping experiences across channels. Customers are empowered to engage with our brand as they choose,” said Hans Hoegstedt, CEO of Miroglio Fashion.

Miroglio Fashion launched the Retail 4.0 program as an initiative to foster innovation, drive efficiency and increase performance. With the help of Oracle Retail Consulting, Miroglio Fashion implemented Oracle Retail Xstore Point-of-Service and Oracle Retail Customer Engagement. In 2013, Oracle declared Oracle Retail Xstore Point-of-Service its strategic omnichannel platform for delivering more innovation to the market.

“We needed speed, efficiency and reliability to execute toward the vision of our Retail Innovation Program. Our leadership team declared explicit business targets. We designed the roll-out and a progressive migration process in advance of the implementation,” said Francesco Cavarero, Group Chief Information Officer, Miroglio Fashion.

“We are thankful for the collaboration with the IT and business leadership teams of Miroglio Fashion. The clarity and focus of the project and commitment to a vanilla implementation allowed my team to deliver on-time and on-budget in seven working months,” said Lou Frio, Vice President of Global Consulting, Oracle Retail. “Miroglio Fashion leverages a remote deployment process to keep an aggressive yet healthy pace for the rollout.”

Oracle will convene a global community of retail leadership at Oracle Industry Connect April 10-11, 2018 in New York, NY. Oracle Industry Connect provides attendees with intimate peer networking opportunities in addition to over 30 customer-led presentations on adapting to market changes, simplifying operations and empowering authentic brand experiences. To learn more about Oracle Industry Connect 2018 and register to attend visit: www.oracle.com/oracleindustryconnect/

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Miroglio Fashion

Miroglio Fashion is the womenswear company of the leading textiles and apparel Miroglio Group, founded in 1947. Today it is the number three company in Italy in this market. It creates, produces and distributes 12 brands through 1,188 branded sales outlets, 6 e-commerce websites and a wholesale network of 2,400 stores.

Miroglio Fashion’s growth path draws on the qualities that have made Italian entrepreneurship great, such as focusing on people, distinctive brand offerings, attention to detail, and an intimate, humane shopping experience, all backed up by latest generation technology.

www.mirogliofashion.com

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584

Index Skip Scan: Potential Use Case or Maybe Not ? (Shine On You Crazy Diamond)

Richard Foote - Tue, 2018-01-30 05:57
While answering a recent question on a LinkedIn forum, it got me thinking whether there’s a potential use case for using an INDEX SKIP SCAN I hadn’t previously considered. I’ve discussed Index Skip Scans previously (as I did here), a feature introduced around Oracle9i that allows an index to be considered by the CBO even […]
Categories: DBA Blogs

Histogram Threat

Jonathan Lewis - Tue, 2018-01-30 02:07

Have you ever seen a result like this:


SQL> select sql_id, count(*) from V$sql group by sql_id having count(*) > 1000;

SQL_ID		COUNT(*)
------------- ----------
1dbzmt8gpg8x7	   30516

A client of mine, who had recently upgraded to 12.2.0.1 RAC and was using DRCP (database resident connection pooling) for a PHP application, was seeing exactly this type of behaviour for a small number of very simple SQL statements and wanted to find out what was going on, because they were also seeing an undesirable level of contention in the library cache when the system load increased.

In this note I just want to highlight a particular detail of their problem – with an example – showing how easily histograms can introduce problems if you don’t keep an eye out for the dangers.

One of their queries really was as simple as this:

select count(*), sum(skew2) from t1 where skew = :b1;

And I’m going to use this query to model the problem. All I have to do is arrange for a data set that results in a hybrid (or height-balanced) histogram being created on the skew column, and then run the query lots of times with different input bind values. In the case of the client there were around 18,000 possible values for the column, and the number of rows per value varied from 1 to about 20,000 – but whatever the number of rows selected the optimum execution plan was always going to be an indexed access.


rem
rem     Script:         acs_child_cursors.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem

create table t1 (
        id, skew, skew2, padding
)
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 3e3
)
select
        rownum  id,
        g1.id   id1,
        g2.id   id2,
        rpad('x',100)
from
        generator       g1,
        generator       g2
where
        g2.id <= g1.id     -- > comment to avoid WordPress format issue
order by
        g2.id, g1.id
;

alter table t1 modify skew not null;
alter table t1 modify skew2 not null;

create index t1_skew on t1(skew);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1 for columns skew size 254'
--              method_opt       => 'for all columns size 1'
        );
end;
/

variable b1 number
exec :b1 := 0;

set termout off
@start_1000
@start_1000
@start_1000
set termout on

set linesize 120
set trimspool on

column sql_text format a55

select
        child_number, plan_hash_value, executions,
        is_bind_sensitive,
        is_bind_aware,
        is_shareable,
        sql_text
from
        v$sql
where   sql_id = 'b82my582cnvut'
;

The data set contains 3,000 distinct values for skew and the way I’ve generated the rows means that the value N will appear N times – so there’s one row with the value 1 and 3,000 rows with the value 3,000 and so on for a total of 4,501,500 rows. If you want to run the test, the code is likely to take a couple of minutes to complete, requiring roughly 700 MB of disk space.

The mechanism of the script start_1000 is something I published a few years ago, and essentially it executes a script called start_1.sql 1,000 times which, for this test, contains the following two lines:


exec :b1 := :b1 + 1

select count(*), sum(skew2) from t1 where skew = :b1;
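
If you don’t have a copy of start_1000.sql to hand, one simple way to generate an equivalent script (a sketch, not necessarily the version I originally published) is to spool 1,000 calls to start_1 from SQL*Plus:

set echo off
set heading off
set feedback off
set pagesize 0
spool start_1000.sql
select '@@start_1' from dual connect by level <= 1e3;
spool off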

The net effect of the 3 calls to start_1000.sql is that my simple SQL statement is called once in turn for each value of skew from 1 to 3,000. The SQL_ID of the statement is ‘b82my582cnvut’ which I’ve used to query v$sql when the run is complete, with the following result:


CHILD_NUMBER PLAN_HASH_VALUE EXECUTIONS I I I SQL_TEXT
------------ --------------- ---------- - - - -------------------------------------------------------
	   0	  1041516234	    498 Y N N select count(*), sum(skew2) from t1 where skew = :b1
	   1	  1041516234	     25 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   2	  1041516234	    104 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   3	  1041516234	    308 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   4	  1041516234	    429 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   5	  1041516234	    640 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   6	  1041516234	     31 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   7	  1041516234	    305 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   8	  1041516234	    660 Y Y Y select count(*), sum(skew2) from t1 where skew = :b1

9 rows selected.

I’ve got 9 child cursors, all with the same execution plan, all except the last labelled as not shareable (you may find that you don’t get exactly the same result; on repeated tests I got between 5 and 9 cursors). Given enough time all these cursors except the last (shareable) one would be aged out of the library cache. In the case of the client, who had a shared pool that was probably quite a bit larger than needed, the number of non-shareable cursors could get very large and they were hanging around for ages. Typically most of the cursors would report an execution count less than 30, with many showing just one or two executions and a handful showing execution counts in the region of several hundred (and that handful were the ones that were still marked as shareable).

After eliminating the critical histogram (using dbms_stats.delete_column_stats()) and eliminating the redundant child cursors (using dbms_shared_pool.purge()) the massive proliferation stopped happening and the performance threat disappeared. The only remaining task was to change the table preferences for stats collection on this table, adding the clause “for columns size 1 skew”, so that the histogram would not be recreated at the next gather.
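
As a sketch of the two stats steps (parameter values are illustrative; check the dbms_stats documentation for your version):

rem  Delete the column stats (including the histogram) on skew
begin
        dbms_stats.delete_column_stats(
                ownname => user,
                tabname => 'T1',
                colname => 'SKEW'
        );
end;
/

rem  Stop the histogram from reappearing at the next gather
begin
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'METHOD_OPT',
                pvalue  => 'for all columns size auto for columns size 1 skew'
        );
end;
/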

Further Observations.

I suspect that part of the client’s problem – something that exaggerated the count rather than causing it – could be attributed to using DRCP (database resident connection pooling), which probably still has some quirky bits of behaviour. It was also true that the client’s connection pool was probably much bigger than it needed to be, so if there were any funny little bits of emergent behaviour at scale the client would probably have hit them.

The problem of escalating child cursors is one that Oracle has been working on for quite a long time, and there’s a (hidden) parameter that was introduced late in 11gR2 (though I think that the 11g mechanism first appeared through a fix control) to allow Oracle to mark a parent cursor obsolete if it acquired too many child cursors. There’s a note on MoS that the client had read on this topic – Doc ID 2298504.1: “Cursor Mutex X Wait Events: After Upgrading To 12.2” – which looked as if it was describing their symptoms, so they had set this parameter (_cursor_obsolete_threshold) from 8192 (the 12.2 default) down to 1024 (the default for 12.1 and earlier versions). This had been of some help with the library cache problem. When the sql_id at the top of this article reported 30,516 child cursors that would be 29 “obsolete” parent cursors with 1,024 child cursors each and one “live” parent cursor with 820 child cursors.
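
For reference, the change would look something like the following – but remember this is a hidden parameter, so it should only be set on the advice of Oracle Support, and it takes effect after an instance restart:

alter system set "_cursor_obsolete_threshold" = 1024 scope = spfile;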

You can appreciate that if Oracle has to find a parent cursor and pin it while walking a chain of 30,516 child cursors, that’s likely to take a lot more time than walking a chain of 30 parent cursors (while holding a library cache mutex, perhaps) to find the one non-obsolete parent, then acquiring the parent mutex to walk a chain of 820 child cursors.

I don’t know the exact details of the mechanisms involved with manipulating the various chains – but there are likely to be times when one process needs exclusive mutexes/latches to change a chain while other processes are holding shared mutexes/latches to search the chain. When you’ve got 30,000 child cursors in total the important question becomes: “where’s the trade-off between making the child chains shorter and making the parent search longer?” (I’d expect – or hope – that the Oracle developers had actually found a way to code the obsolescence so that the new parent was first in the search, and the rest were never visited, of course.)

One of the suggestions I made to try to alleviate the problem – which I had assumed was due in part to the degree of concurrent execution of the statement – was to mark the cursor as “hot”. This resulted in 36 different sql_ids for the same statement (the client machine had 72 CPUs). This had some effect but ultimately meant that there were 36 chains of parents that would eventually end up with lots of child cursors – the only difference was the rate at which the total cursor count was growing (a lot slower), balanced against the threat that they might all manage to get to 30,000+ child cursors! Instead, as a quick and dirty workaround, I supplied the client with a script that could be run at quieter moments during the day to call dbms_shared_pool.purge() for the sql_id to flush all its cursors from the library cache.
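
A minimal sketch of such a purge – the name passed to dbms_shared_pool.purge() for a cursor is the address and hash_value from v$sqlarea, and the flag ‘C’ identifies a cursor:

begin
        for c in (
                select  address, hash_value
                from    v$sqlarea
                where   sql_id = '1dbzmt8gpg8x7'
        ) loop
                dbms_shared_pool.purge(c.address || ',' || c.hash_value, 'C');
        end loop;
end;
/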

One final oddity – which isn’t from the client site. When I changed my test above to avoid generating the histogram (using the commented out method_opt “for all columns size 1”) I found that I still got two child cursors; the first ended up marked as bind-aware but insensitive and non-shareable, the second appeared (time and time again) when my bind value got to 499 and was marked bind-sensitive, bind-aware and shareable.  I still have no idea why this happened.

Footnote:

When repeating the test I started with a “flush shared_pool” – but I’ve removed this line from the code above in case anyone ran it without considering the possible side effects. You do need to clear previous copies of the key statement from the library cache, though, if you want to avoid confusing the issue on repeated runs of the test.


Oracle Utilities Application Framework V4.3.0.5.0 Release Summary

Anthony Shorten - Mon, 2018-01-29 20:51

The latest release of Oracle Utilities Application Framework, namely 4.3.0.5.0 (or 4.3 SP5 for short) will be included in new releases of Oracle Utilities products over the next few months. This release is quite diverse with a range of new and improved capabilities that can be used by implementations of the new releases.

The key features in this release include the following:

  • Mobile Framework release - The initial release of a new REST-based channel that allows Oracle Utilities products to provide mobile device applications. This release is a port of the Mobile Communication Platform (MCP) used in the Oracle Mobile Workforce Management product to the Oracle Utilities Application Framework. This initial release is restricted to allowing Oracle Utilities products to provide mobile experiences for use within an enterprise. As with other channels in the Oracle Utilities Application Framework, it can be deployed alone or in conjunction with other channels.
  • Support For Chrome for Business - In line with Oracle direction, the Oracle Utilities Application Framework supports Chrome for Business as a browser alternative. A new browser policy, in line with Oracle direction, has been introduced to clarify support arrangement for Chrome and other supported browsers. Check individual product release notes for supported versions.
  • Improved Security Portal - To reduce effort in managing security definitions within the product, the application service portal has been extended to show secured objects or objects that an application service is related to.
  • Attachment Changes - In the past, adding attachments to an object required custom UI maps to link attachment types to objects. In this release, a generic zone has been added, reducing the need for custom UI Maps. The attachment object now also records the extension of the attachment, to reduce issues where an attachment type can have multiple extensions (e.g. DOC vs DOCX).
  • Support for File Imports in Plug In Batch - In past releases, Plug In Batch was introduced as a configuration-based approach that replaces the need for Java programming in batch processing. Previously, SQL processing and File Exports were supported for batch processing. In this release, importing files in CSV, fixed or XML format is now supported using Plug In Batch (using Groovy-based extensions). Samples are supplied with the product that can be copied and altered accordingly.
  • Improvements in identifying related To Do's - The logic determining related To Do's has been enhanced to provide additional mechanisms for finding related To Do's, to improve the closing of related work. This will allow a wider range of To Do's to be found than previously.
  • Web Service Categories - To aid in API management (e.g. when using Integration Cloud Service and other cloud services) Web Service categories can be attached to Inbound Web Services, Outbound Message Types and legacy XAI services that are exposed via Inbound Web Services. A given web service or outbound message can be associated with more than one category. Categories are supplied with the product release and custom categories can be added.
  • Extended Oracle Web Services Manager Support - In past releases Oracle Web Services Manager could provide additional transport and message security for Inbound Web Services. In this release, Oracle Web Services Manager support has been extended to include Outbound Messages and REST Services.
  • Outbound Message Payload Extension - In this release it is possible to include the Outbound Message Id as part of the payload as a reference for use in the target system.
  • Dynamic URL support in Outbound Message - In the past, Outbound Message destinations were static for the environment. In this release, the URL used for the destination can vary according to the data, or be dynamically assembled programmatically if necessary.
  • SOAP Header Support in Outbound Messages - In this release it is possible to dynamically set SOAP Header variables in Outbound Messages.
  • New Groovy Imports Step Type - A new step type has been introduced to define classes to be imported for use in Groovy members. This promotes reuse and allows for coding without the need for the fully qualified package name in Groovy Library and Groovy Member step types. 
  • New Schema Designer - A newly redesigned Schema Editor has been introduced to reduce total cost of ownership and improve schema development. Color coding has now been included in the raw format editor.
  • Oracle Jet Library Optimizations - To improve integration with the Oracle Jet libraries used by the Oracle Utilities Application Framework, a new UI Map fragment has been introduced to include in any Jet based UI Map to reduce maintenance costs.
  • YUI Library Removal - With the desupport of the YUI libraries, they have been removed from this release in the Oracle Utilities Application Framework. Any custom code directly referencing the YUI libraries should use the Oracle Utilities Application Framework equivalent function.
  • Proxy Settings now at JVM level - In past releases, proxy settings were required on individual connections where needed. In this release, the standard HTTP Proxy JVM options are now supported at the container/JVM layer to reduce maintenance costs.

This is just a summary of some of the new features in the release. A full list is available in the release notes of the products using this service pack.

Note: Some of these enhancements have been back ported to past releases. Check My Oracle Support for those patches.

Over the next few weeks, I will be writing a few articles about a few of these enhancements to illustrate the new capabilities.

Performance issue while processing Huge XML file

Tom Kyte - Mon, 2018-01-29 18:06
I'm seeing performance issue while processing the xml file which has more then 10K records, it's working fine if the file has 100 records. Below is the sample procedure I'm using.. <code>++++++++++++++++ CREATE OR REPLACE PROCEDURE process_xml_fi...
Categories: DBA Blogs

Audit logon and logoff of specific users eg sys and system

Tom Kyte - Mon, 2018-01-29 18:06
Hi, I need to write a trigger to audit logon and logoff activities of specific users 'SYS' and 'SYSTEM'. I have one trigger but its not inserting records into the table mentioned in the trigger. Please, help me to fix the issue.Your help will be...
Categories: DBA Blogs

How to unpivot table data with out displaying column names

Tom Kyte - Mon, 2018-01-29 18:06
Hi Tom, Am working on Oracle DB and below query <code> select t1.id as dbid, t2.mid as askid, t3.m2idd as amid from table1 t1, table2 t2, table3 t3 where t1.actid = t2.senid and t2.denid = t2.mkid ...
Categories: DBA Blogs

SQL query that returns the difference between the latest

Tom Kyte - Mon, 2018-01-29 18:06
Problem Statement [question redacted] Comment from Connor, Jan 29 2018 ================================ You can see from our initial answer that we were thrilled that you provided us a complete test case. However, when things seem too good to b...
Categories: DBA Blogs

SQL Query Optimization

Tom Kyte - Mon, 2018-01-29 18:06
We have a table called PROTOKOLL, with the following definition: <code> CREATE TABLE "PROTOKOLL" ( "UUID" CHAR(36 BYTE) NOT NULL ENABLE, "VERSIONSNUMMER" VARCHAR2(20 BYTE), "TYP" CHAR(1 BYTE) NOT NULL ENABLE, "BENUTZER_ID" VARCHAR2(25...
Categories: DBA Blogs

Spectre and Meltdown Vulnerability and Oracle Utilities Products

Anthony Shorten - Mon, 2018-01-29 16:18

As you may or may not be aware, a set of hardware-based security vulnerabilities known as Spectre and Meltdown has been identified. Vendors are quickly issuing software patches to address these hardware-based vulnerabilities. Oracle has issued a number of patches to address this issue in its January 2018 patchsets.

Customers should refer to Addendum to the January 2018 CPU Advisory for Spectre and Meltdown (Doc ID 2347948.1) for details of the patches available to address this issue and the state of patches for other products.

At this time, no patches are expected for Oracle Utilities products, as the vulnerabilities are addressed by applying the patches outlined in the above article. It is highly recommended that Oracle Utilities customers apply the patches outlined in that article to protect their systems. For customers on non-Oracle platforms, it is recommended to refer to the relevant vendor site for any operating system or related patches for those platforms.

Edge Conference 2018 is coming - Technical Sessions

Anthony Shorten - Mon, 2018-01-29 16:16

It is that time of year again, Customer Edge conference time. This year we will be once again holding a Technical stream which focuses on the Oracle Utilities Application Framework and related products. Once again, I will be holding the majority of the sessions at the various conferences.

The sessions this year are focused on giving valuable advice as well as a window into our future plans for the various technologies we are focusing upon. As usual, there will be a general technical session covering our road map as well as a specific set of sessions targeting important topics. The technical sessions planned for this year include:

  • Reducing Your Storage Costs Using Information Life-cycle Management - Maintaining storage and satisfying business data retention rules can be increasingly costly and challenging. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
  • Integration using Inbound Web Services and REST with Oracle Utilities - Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.
  • Optimizing Your Implementation - Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost Of Ownership.
  • Testing Your On-Premise and Cloud Implementations - Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.
  • Securing Your Implementations - With the increase in cybersecurity concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
  • Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option - The Oracle Database In-Memory option allows both OLTP and Analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
  • Mobile Application Framework Overview - The Oracle Utilities Application Framework has introduced a new Mobile Framework for use in the Oracle Utilities products. This session gives an overview of the mobile framework capabilities for future releases.
  • Developing Extensions using Groovy - Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: This session will be very technical in nature.
  • Ask Us Anything Session - Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

This year we have decided not only to discuss capabilities but also to give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs, for you to use as a template for on-premise and hybrid implementations.

For customers and partners interested in attending, registration for the USA Edge Conference is available.


Avoid Blind SQL Call from ADF Task Flow Method

Andrejus Baranovski - Mon, 2018-01-29 15:42
Keep an eye open for ADF Task Flow Method Call activities where methods from ADF Bindings are called. JDEV 12c sets deferred refresh for ADF binding iterators related to TF Method Call activities, and this causes blind SQL to be executed. Blind SQL - a query without bind variables.

Let me explain the use case, so that it's clearer what I'm talking about.

Common example - TF initialization method call where data is prepared. Typically this involves VO execution with bind variables:


Such a method call could invoke a binding operation either directly (pay attention - the bind variable value is set):


Or through a Java bean method using the API:


My example renders a basic UI form in the fragment, after the TF method call was invoked:


If you log the SQL queries executed during form rendering, you will see two queries instead of the expected one. The first query is executed without bind variables, while the second gets the correct bind variable assigned:


What is the cause of the first query without bind variables? It turns out the iterator (with Refresh = deferred) from the page definition mapped to the TF method call is causing this. Somehow the iterator is initialized at the wrong time, when the bind variable is not yet assigned, and this causes the blind SQL call:


The workaround is to set Refresh = never:


With Refresh = never, only one query is executed as expected, with the bind variable assigned:


This may look minor, but trust me - with complex queries such a fix could be a great help for performance tuning. Avoid executing SQL queries without bind variables.

Download sample application - ADFTFCallBindingApp.zip.

Result Cache: when *not* to use it

Yann Neuhaus - Mon, 2018-01-29 15:39

I recently encountered a case where the result cache was used incorrectly, leading to high contention when the application hit a peak of load. It was no surprise to see that the function was called with an ‘ID’ as argument, which may have thousands of values in this system. I mentioned to the software vendor that the result cache must be used only when the function is frequently called with the same arguments, not with random values, even if each value has 2 or 3 identical calls. And, to back this up, I looked at the Oracle Documentation for the part which explains when the result cache can be used and when it should be avoided.

But I found nothing relevant. This is another(*) case where the Oracle Documentation is completely useless. Without explaining how a feature works, you completely fail to get the feature used properly. Most people will not take the risk of using it, and a few will use it in the wrong place, before blacklisting the feature for good.

(*) By another case, I’m thinking about Kamil Stawiarski’s presentation about Pragma UDF and the lack of useful documentation about it.

Oracle documentation

So this is what I found in the Database Performance Tuning Guide about the Benefits of Using the Server Result Cache:

  1. The benefits of using the server result cache depend on the application
  2. OLAP applications can benefit significantly from its use.
  3. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.

So, this is vague (‘depends’, ‘can benefit’, ‘good candidates’) and doesn’t help you decide when the feature can be used.
The ‘access a high number of rows but return a small number’ is an indication of why cache hits can be beneficial. However, there is no mention of the most important things, which are:

  • The cached result is invalidated by any DML on the tables the result relies on (see the query sketch after this list).
  • A cache miss, when the result has been invalidated, is expensive.
  • A cache miss, when the result is not in the result cache, is expensive.
  • The ‘expensive’ here is a scalability issue: not detected in unit tests, but big contention when load increases.
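As a quick way of seeing the state of cached results (including invalidations), you can query v$result_cache_objects – a minimal sketch:

select status, count(*)
from   v$result_cache_objects
where  type = 'Result'
group by status;
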
Real things to know

The first thing to know is that the Result Cache memory is protected by a latch:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 2 0 0 0 0
0000000060047870 Result Cache: SO Latch 0 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

This latch has no children:

SQL> select * from v$latch_children where name like '%Result Cache%';
 
no rows selected

Only one latch to protect the whole result cache: concurrent sessions – even for different functions – have to serialize their access on the same latch.

This latch is acquired in exclusive mode when the session has to write to the result cache (cache miss, invalidation,…) or in shared mode – since 11gR2 when reading only. This has been explained by Alex Fatkulin http://afatkulin.blogspot.ch/2012/05/result-cache-latch-in-11gr2-shared-mode.html.

This means that, whatever the Oracle Documentation says, the benefit of the result cache comes only on a cache hit: when the result of the function is already there and has not been invalidated. If you call the same function frequently, always with the same parameters, and with no changes in the related tables, then you are in the good case.

But if there has been a modification of one of the tables, even to rows that have nothing to do with the result, then you will have an overhead: an exclusive latch get. And if you call the function with new values for the arguments, that’s also a cache miss which has to take this exclusive latch. And if you have multiple sessions experiencing cache misses, they will spin on CPU to get the exclusive latch. This can be disastrous with a large number of sessions. I have seen this kind of contention for hours with connection pools set to 100 sessions when the function is called frequently with different values.

To show it, I create a demo table (just to have a dependency) and a result_cache function:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table created.
 
SQL> create or replace function F(n number) return number result_cache as begin for i in (select * from DEMO where DEMO.n=F.n) loop return i.n; end loop; end;
2 /
Function created.

I have just restarted the instance and my latch statistics are reset:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 2 0 0 0 0
0000000060047870 Result Cache: SO Latch 0 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

Result Cache Hit

This will call the function always with the same argument, and no change in the table it relies on:
SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(1); end loop; end;
2 /
PL/SQL procedure successfully completed.

So, the first call is a cache miss and the next 999 calls are cache hits. This is the perfect case for Result Cache.

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 1009 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

So, that’s about 1000 latch gets. With cache hits you get the latch once per execution, and this is a shared latch, so no contention here.
Do you want to check that it is a shared latch? Just set a breakpoint with gdb on the ksl_get_shared_latch function (up to 12.1, because 12.2 uses ksl_get_shared_latch_int) and print the arguments (as explained by Stefan Koehler and Frits Hoogland):

As my RC latch is at address 00000000600477D0, I set a breakpoint on ksl_get_shared_latch where the first argument is 0x600477d0 and display the other arguments:

break ksl_get_shared_latch
condition 1 $rdi == 0x600477d0
commands
silent
printf "ksl_get_shared_latch laddr:%x, willing:%d, where:%d, why:%d, mode:%d\n", $rdi, $rsi, $rdx, $rcx, $r8
c
end

Then one call with cache hit displays:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8

Mode 8 is shared: many concurrent sessions can do the same without waiting. Shared is scalable: cache hits are scalable.

Cache miss – result not in cache

Here each call will have a different value for the argument, so that they are all cache misses (except the first one):

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

Now the ‘RC latch’ statistics have increased further:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 6005 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

This is about 5000 additional latch gets, which means 5 per execution. And, because it writes, you can expect them to be exclusive.

Here is my gdb script output when I call the function with a value that is not already in cache:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5347, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5374, mode:16

Mode 16 is exclusive. And we have 3 of them in addition to the shared one. You can imagine what happens when several sessions are running this: spin and wait, all sessions on the same resource.

Cache miss – result in cache but invalid

I run the same again, where all values are in cache now:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

So this is only 1000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 7005 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

The function depends on DEMO table, and I do some modifications on it:

SQL> insert into DEMO values (0)
1 row created.
SQL> commit;
Commit complete.

This has invalidated all previous results. A new run will be all cache misses:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

And this is 5000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 12007 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

So what?

The important thing to know is that each cache miss requires exclusive access to the Result Cache, multiple times. Those must be avoided. The Result Cache is good for a static set of results. It is not a short-term cache to work around an application design where the function is called two or three times with the same values. This is, unfortunately, not explained in the Oracle Documentation. But it becomes obvious when we look at the implementation, or when we load-test it with multiple sessions. The consequence can be this kind of high contention lasting minutes or hours:

Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
latch free 858,094 1,598,387 1863 78.8
enq: RC - Result Cache: Contention 192,855 259,563 1346 12.8

Without either knowledge of the implementation or relevant load tests, the risk is that a developer, happy with good results in unit testing, implements Result Cache in every function. The consequence will be seen too late, in production, at a time of peak load. If this happens to you, you can disable the result cache (DBMS_RESULT_CACHE.BYPASS(TRUE);) but the risk is performance degradation in the ‘good cases’. Or recompile the procedures with RESULT_CACHE removed, but you may then bring new contention on the library cache.
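
As a sketch of the emergency options mentioned above (both procedures are in the standard dbms_result_cache package):

-- stop using the result cache instance-wide
exec dbms_result_cache.bypass(true);
-- optionally release the cached results as well
exec dbms_result_cache.flush;
-- re-enable once the fix is in place
exec dbms_result_cache.bypass(false);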


The article Result Cache: when *not* to use it first appeared on Blog dbi services.

New Web ADI Update Available for EBS 12.1.3

Steven Chan - Mon, 2018-01-29 11:48

Oracle Web Applications Desktop Integrator (Web ADI) allows you to use Microsoft Office applications such as Excel, Word, and Project to upload data to Oracle E-Business Suite. For example, you can use Excel to create formatted spreadsheets on your desktop to download, view, edit, validate, and upload Oracle E-Business Suite data. 

We have just released a Web ADI update for EBS 12.1.3 with new functionality:

  • Digital Signature Support

    Some security options within Microsoft Excel allow files containing macros to be opened only if the files are digitally signed to identify them as coming from a trusted source. You can now set a profile option to have Oracle Web Applications Desktop Integrator affix a digital signature to the spreadsheets you create. Users will then be able to open the spreadsheets created by Oracle Web Applications Desktop Integrator with the higher Microsoft Excel security settings.

  • Desktop Integration Framework

    The Oracle E-Business Suite Desktop Integration Framework user interface is enhanced to let you define Java contents, an uploader, and an importer for an integrator, as well as to let you test an integrator definition and view an integrator definition in read-only mode.

You can download the update here:

This is a cumulative bundle patch update that includes all previously-released Web ADI updates for EBS 12.1.3.


Categories: APPS Blogs

Office Depot Pivots to a New Future with Oracle Cloud Applications

Oracle Press Releases - Mon, 2018-01-29 07:00
Press Release
Office Depot Pivots to a New Future with Oracle Cloud Applications
Leading office products retailer to modernize its legacy systems in the cloud and accelerate omnichannel business success

Redwood Shores, Calif.—Jan 29, 2018

Office Depot, Inc. (NASDAQ:ODP), a leading omni-channel provider of business services, products and technology, selected Oracle Cloud Applications to modernize its IT systems and transform its business processes to meet the demands of today’s digital economy. By enhancing the performance and outcomes of key supply chain, HR, and financial management functions with Oracle Cloud Applications, Office Depot will be able to accelerate its omnichannel growth, simplify its IT infrastructure, and lower its costs.

To enable strategic business model changes and achieve future growth plans, Office Depot needed a comprehensive solution that could enhance its current infrastructure and extend its existing IT investments. After a competitive review, Office Depot selected Oracle Supply Chain Management (SCM) Cloud, Oracle Human Capital Management (HCM) Cloud, and Oracle Enterprise Performance Management (EPM) Cloud to speed time to market and control costs.

“Office Depot is pivoting away from being a traditional office products retailer to a broader omnichannel services and products company and that causes a lot of complexity given our existing technology,” said Damon Venger, senior director, IT Applications, Office Depot. “We evaluated multiple vendors, but only Oracle could deliver the end-to-end solutions we needed. Oracle Cloud applications continue to evolve and expand—ensuring they can grow with us over time. This was important when looking for a company that could be a true partner and understand our unique business needs.”

To support high order volumes and a large range of products, Office Depot chose Oracle Supply Chain Management (SCM) Cloud for an end-to-end business solution that can help get products to market faster and offer real-time insight into day-to-day logistics, procurement, and forecasting. With Oracle HCM Cloud, Office Depot will be empowered with insightful and actionable data to manage its global workforce for a more positive and consistent employee experience worldwide, while eliminating individual and disjointed point solutions. Oracle Enterprise Performance Management (EPM) Cloud will help transform the finance function – from planning to financial close – to enable data-driven, strategic decision making, timely and accurate financial reporting, and lower IT complexity.

“Office Depot continues to digitally transform its business to better serve the needs of its constituents — from employees to customers,” said Steve Miranda, executive vice president of applications development, Oracle. “With Oracle Cloud Applications, the company will be well equipped to capitalize on new opportunities for growth, while delivering a differentiated, compelling service to its customers.”

Additional Information
To learn more about Oracle EPM Cloud, follow @OracleEPMCloud on Twitter or Facebook, or read the Modern Finance Leader blog.

More information on Oracle HCM Cloud can be found on the Modern HR in the Cloud blog, or by following @OracleHCM on Twitter or Facebook.

For additional information on Oracle SCM Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Contact Info
Jennifer Yamamoto
Oracle PR
916-761-9555
jennifer.yamamoto@oracle.com
Rebecca Rakitin
Office Depot, Inc.
561-438-1450
Rebecca.Rakitin@officedepot.com
About Office Depot, Inc.

Office Depot, Inc. is a leading provider of office supplies, business products and services delivered through an omnichannel platform.

The company had 2016 annual sales of approximately $11 billion, employed approximately 38,000 associates, and served consumers and businesses in North America and abroad with approximately 1,400 retail stores, award-winning e-commerce sites and a dedicated business-to-business sales organization – with a global network of wholly owned operations, franchisees, licensees and alliance partners. The company operates under several banner brands including Office Depot®, OfficeMax®, BizBox, CompuCom®, Complete Office and Grand&Toy®. The company’s portfolio of exclusive product brands includes TUL®, Foray®, Brenton Studio®, Ativa®, WorkPro®, Realspace® and Highmark®.

Office Depot, Inc.’s common stock is listed on the NASDAQ Global Select Market under the symbol “ODP.”

Office Depot, Foray, Ativa and Realspace are trademarks of The Office Club, Inc. OfficeMax, TUL, Brenton Studio, WorkPro and Highmark are trademarks of OMX, Inc. CompuCom is a trademark of CompuCom Systems, Inc. and Complete Office is a trademark of Complete Office Solutions, LLC. Grand&Toy is a trademark of Grand & Toy, LLC in Canada. ©2017 Office Depot, Inc. All rights reserved. Any other product or company names mentioned herein are the trademarks of their respective owners.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Jennifer Yamamoto

  • 916-761-9555

Rebecca Rakitin

  • 561-438-1450

Case Study – 1

Jonathan Lewis - Mon, 2018-01-29 04:45

It has been some time since I wrote an article walking through the analysis of information on an AWR report, but a nice example appeared a few weeks ago on Twitter that broke a big AWR picture into a sequence of bite-sized chunks that made a little story, so here it is, replayed in sync with my ongoing thoughts. The problem started with the (highly paraphrased) question – “How could I get these headline figures when all the ‘SQL ordered by’ sections of the report show captured SQL accounting for 0.0% of Total?”. The report was a 30 minute snapshot from 11.2.0.3, and here’s the Load Profile:

As you can see, the database was doing quite a lot of work as measured by the physical reads and writes, the number of block changes and size of redo, and the fact that the average CPU usage by the foreground sessions in this instance accounts for 9.5 CPUs. Generally speaking the 49 seconds per second of DB time is also a fairly good hint, when combined with the other numbers, that the instance is busy, but, in the absence of any other information, that could be one session holding a blocking lock with 48 other sessions waiting for the lock to be released.

There are a couple of unusual (though not impossible) features to this profile. Given the amount of work the figures for user calls and executes look very small – again not technically impossible, just unlikely in “normal” database processing given the apparent workload – and bear in mind that the “SQL ordered by ..” figures seem to account for none of the workload. Similarly the figures for logical reads and block changes are a little unusual (for “normal” processing) – on average this instance seems to have modified every block it visited (without any work showing up in the captured SQL).

Next in the set of images was the Time Model:As you can see, the foreground time (DB time) is 85,944 seconds or which foreground CPU usage (DB CPU) is 16,735 seconds – with about 69,000 seconds unaccounted ! THere’s virtually no time spend on PL/SQL or SQL, and Rman doesn’t even make an appearance  (I mention rman specifically because there was at least one version of Oracle where the rman time was accidentally lost from this summary).

So what does the Top Timed Events look like:

It’s nice to see that this is consistent with the Load Profile: the DB CPU matches, and there’s a lot of physical reads (and a quick check says that 6,560,642/1,800 = 3,644 … which is pretty close to the 3,746 physical reads per second in the Load Profile). There’s one major anomaly here, though: the huge number of (and time spent on) latch: row cache objects. And even though it’s not the single largest component of time it’s the most obvious one to pursue, so the next bit of the report to check is the Dictionary Cache Statistics, with the Tablespace IO Stats and Segments by Physical Reads to follow. I don’t have an image for the dictionary cache stats, but the answer to the question “What were all the rowcache object gets for?” was: “dc_tablespaces (214,796,434)” – which (probably) told me everything I needed to know.
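
If you want to check the equivalent information outside an AWR report, the dictionary cache activity is exposed in v$rowcache – a minimal sketch:

select parameter, gets, getmisses
from   v$rowcache
order by gets desc;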

I could show you the specific Instance Activity statistic that I wanted to see next, but I’ll postpone that for a moment and jump to the Tablespace IO Stats – which we were planning to do, and might have done anyway if we hadn’t known the rocket science behind the massive number of gets on dc_tablespaces.

That’s a horrendous number of (single block) reads of the undo tablespace – and why would they be happening? The instance was engaged in some massive rollback activity (and the transactions being rolled back are on objects in the GIRO tablespace – which is why it is also suffering a lot of single block reads), and this is the point where we jump to the relevant Instance Activity statistic to confirm the claim:

There are two other “undo records applied” statistics, but we don’t need to think about them – the match between the count of records applied and the gets on the dc_tablespaces latch is nearly perfect. Almost everything that this instance is doing is rolling back – there must have been some enormous data changes (possibly only one, possibly parallel-enabled) that failed in the previous half hour and now the entire mess is being cleaned up.
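
For reference, the same statistics can be watched outside AWR with a simple query against v$sysstat – a minimal sketch:

select name, value
from   v$sysstat
where  name like '%undo records applied%';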

One little detail to note – the “undo records applied” per second is 122,355, but the Load Profile reported 247,885 “Block changes” per second. The two figures are consistent with each other. Each application of an undo record is two block changes – the first when you modify the original data block, the second when you update the undo record itself to flag it as “applied”: 122,355 * 2 = 244,710, which is a good match for 247,885.

Final Thoughts

There is a second reason why you could see lots of gets on dc_tablespaces – but I think it was a bug in 9i relating to temporary tablespace groups. The phenomenon as a side effect of rolling back was something I discovered in the 8i timeline and I’ve not actually checked what an AWR report really would look like if I forced a massive rollback to take place as the only workload across the interval – so here’s a quick test I constructed and ran to finish the thread off:


set timing on

create table t1 as
with g as (select rownum id from dual connect by level <= 1000)
select rownum id, rownum n1, rpad ('x',150) padding from g,g
;

create index t1_i1 on t1(id);

begin
        for i in 1..1000000 loop
                update t1 set n1 = n1 where id = i;
        end loop;
end;
/

alter system flush buffer_cache;

pause take AWR snapshot from another session

rollback;

prompt take AWR snapshot from another session

On the small VM I used to run the test it took a little over 60 seconds to run the update, and the same again to complete the rollback. The “DB time” shown in the Time Model section of the AWR report was 62.0 seconds, while the “sql execute elapsed time” was 3.9 seconds (which was the SQL executed while capturing the AWR data).

Conclusion

This was a problem where the database seemed to have done a lot of work that couldn’t be attributed to any SQL. While I had a little rocket science up my sleeve that may have allowed me to identify the source more quickly and with more confidence than the average DBA, all I’ve done in this note is highlight a couple of oddities and big numbers that anyone could have spotted, and follow a couple of simple guesses:

a) DB time was large, but sql (and pl/sql) time was non-existent

b) Top timed events were I/O and latches – so identify the latch

c) The specific latch pointed towards tablespaces – so check the Tablespace I/O and note the UNDO tablespace

d) Look at any Instance Activity statistics about “undo”.

e) Job done – but a quick test case is the icing on the cake.


Creating Indexes on Custom Table

Tom Kyte - Sun, 2018-01-28 23:46
Hi Tom, I have below query <code> select a.order_number from xxdm.XXARX_INV_SOURCE_TBL_28_jan1 a,XXARX_INV_CNV_TBL_28th_jan1 b where a.trx_number = b.trx_number and a.customer_number_source = b.customer_number_source and a.trx_number = :t...
Categories: DBA Blogs

ContractOracle.com is for sale

ContractOracle - Sun, 2018-01-28 20:30
If you would like to purchase this domain, email Robert.Geier@ContractOracle.com

This domain is perfect if you are a contractor working with Oracle software. 
Categories: DBA Blogs
