Feed aggregator

Oracle and LinkedIn Collaborate to Improve Candidate and Employee Experience

Oracle Press Releases - Wed, 2018-10-10 07:00
Press Release
Oracle and LinkedIn Collaborate to Improve Candidate and Employee Experience
Expanded relationship helps organizations address the challenges of today’s rapidly changing global talent marketplace

Redwood Shores, Calif.—Oct 10, 2018

Oracle and LinkedIn today announced that they are working together to help HR professionals meet the shifting demands of the talent economy. A series of new integrations between Oracle’s Human Capital Management Cloud (Oracle HCM Cloud) and Taleo Enterprise Edition, and LinkedIn, will help HR teams attract, engage and retain employees by growing their talent pool, improving the candidate experience, enhancing internal mobility and increasing career development opportunities.

“The world of work is rapidly changing, and this is creating new opportunities and challenges for talent leaders,” said Scott Roberts, Vice President of Business Development, LinkedIn. “We are excited to be working with Oracle to create better solutions to make hiring and developing talent as seamless and effective as possible.”

Technology continues to transform the global talent marketplace with the rise of automation and the shrinking shelf life of skills. To successfully manage these changes and address escalating recruitment costs and increasing employee turnover, HR teams need to swiftly evolve their strategies and technologies. The new integrations between LinkedIn and Oracle HCM Cloud and Taleo Enterprise Edition address this need by enabling HR teams to take a holistic view of their talent’s experience, skills and career aspirations in order to achieve a meaningful alignment between each employee’s job responsibilities and an organization’s overall business objectives. 

“The rapidly changing global talent market is forcing organizations across industries to rethink how they attract, engage and retain employees,” said Nagaraj Nadendla, Group Vice President, Product Development, Oracle. “Navigating these changes is one of the biggest challenges organizations face today and requires HR teams to take a holistic view of the candidate and employee experience. Working closely with LinkedIn, we are uniquely placed to help HR teams meet heightened candidate and employee expectations by combining future-proofed Oracle HCM Cloud and Taleo Enterprise platforms with one of the world’s largest talent marketplaces.”    

The new integrations between Oracle HCM Cloud and LinkedIn include:

  • Talent Profile Import: Helps organizations enhance internal talent mobility by enabling employees to choose to import key elements of their LinkedIn profiles into their Oracle HCM Cloud Talent Profile. 

  • Recommended Matches and Embedded Search: Helps organizations streamline recruitment by enabling LinkedIn Recruiter seat holders to search LinkedIn members and see those that best match a job requisition or project within Oracle Recruiting Cloud and Taleo Enterprise Edition.

  • Referral Recommendations: Improves the candidate experience by enabling candidates to apply for a job via Oracle Recruiting Cloud or Taleo Enterprise Edition and identify and contact (via InMail) their LinkedIn connections who can best refer them for that job.

  • Recruiter System Connect: Provides a seamless and efficient recruiter experience by surfacing transactional recruiting data, from both Oracle Recruiting Cloud and Taleo Enterprise Edition, as well as LinkedIn, in LinkedIn Recruiter.

  • Deeper Integration with Oracle Learning Cloud: Increases career development opportunities through access to LinkedIn Learning courses, automatic course catalog integrations, and up-to-date insight on learner engagement within Oracle Learning Cloud.

Part of Oracle Cloud Applications, Oracle HCM Cloud enables HR professionals to simplify the complex in order to meet the increasing expectations of an ever-changing workforce and business environment. By providing a complete and powerful platform that spans the entire employee life cycle, Oracle HCM Cloud helps HR professionals deliver superior employee experience, align people strategy to evolving business priorities, and cultivate a culture of continuous innovation.

Contact Info
Simon Jones
Oracle
+1.415.202.4574
s.jones@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Simon Jones

  • +1.415.202.4574

Survey Highlights 5G Adoption as Catalyst for Advanced Digital Service Ecosystems

Oracle Press Releases - Wed, 2018-10-10 07:00
Press Release
Survey Highlights 5G Adoption as Catalyst for Advanced Digital Service Ecosystems
Oracle Study Identifies Key Services Opportunities, Including Smart Cities and Connected and Autonomous Vehicles

Redwood Shores, Calif.—Oct 10, 2018

Digital services dominate the current and future strategies of telecoms and enterprise companies across industries, according to a new Oracle Communications survey. In the Digital Services: Disrupt or Be Disrupted report, Oracle found nearly all organizations surveyed (96 percent) have already implemented, or are planning to implement, digital services in the next 12 months. Almost half (45 percent) are looking to deploy digital customer engagement tools within their mobile apps and websites to further advance their digital transformation in the same timeframe.

In September 2018, Oracle surveyed 292 individuals with decision-making authority for IT and communications software in their companies. Seventy-three percent of respondents were at the President/CxO or Director/Department head level. A total of 18 industries, from 14 countries, were represented, with half of those surveyed from telecom and the other half from computer software/equipment, education, healthcare, and manufacturing.

“The digital opportunity is clearly recognized by companies across all industries,” said Doug Suriano, senior vice president, general manager, Oracle Communications. “While telecom companies have significant experience providing digital services, they face the challenge of delivering differentiated new revenue streams in an agile manner. Other industries, who in many cases are beginning to offer digital subscriptions and services enabled by physical products, also find they need more sophisticated and modern digital services monetization systems to capitalize on their digital opportunity.”

Racing to Deliver Improved Digital Experiences

5G will accelerate opportunities: When looking to the future of high-speed, real-time, sensor-enabled digital services, respondents indicated that sensor-driven services such as those for smart cities and connected and autonomous vehicles are the top two opportunities for digital service ecosystems enabled by 5G technology. The number of non-telecoms companies exploring smart cities opportunities will rise from 34 percent today to 50 percent when 5G is enabled. The connected and autonomous vehicle opportunity will rise from 29 percent today to 46 percent when 5G is enabled. This illustrates the cross-industry value for such digital service ecosystems enabled by 5G adoption.

Speed and agility are key: Ninety-one percent of respondents prioritize speed to market over a fully integrated IT environment, with 69 percent preferring a lightweight solution that can be deployed quickly over a fully featured solution that takes much longer to deploy. Speed to market is critical, with half believing that launching new digitally enabled services should take days – not weeks or months.

Not launching services reduces competitiveness and risks disruption: Almost two-thirds (65 percent) reported reduced revenue growth as the greatest concern of not launching digitally enabled services, followed by 55 percent who said they will be unable to competitively differentiate in the market. Half are concerned about loss of brand perception and relevance.

More sophisticated pricing models are essential: For companies to drive sustainable market differentiation and revenue growth, the survey indicated that more sophisticated pricing models for digital services beyond simple subscriptions will be essential. Companies recognized the importance of packaging and bundling multiple products and/or digital services to create more compelling propositions. Respondents also emphasized the importance of usage/consumption-based charging followed closely by outcome/performance-based charging for such propositions.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Brent Curry
Hill+Knowlton Strategies
+1.312.255.9131
brent.curry@hkstrategies.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world, from network evolution to digital business to customer experience. www.oracle.com/communications

To learn more about Oracle Communications industry solutions, visit Oracle Communications on LinkedIn, or join the conversation on Twitter @OracleComms.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Brent Curry

  • +1.312.255.9131

12c attribute clustering vs old index cluster

Tom Kyte - Wed, 2018-10-10 06:26
Hi, does attribute clustering work only for direct path inserts? Will it support normal inserts? If not, what's the way to have the data in a table stored in an ordered way, especially for normal inserts (not direct path)? Can you explain ...
Categories: DBA Blogs

SPACE GROWTH procedure for data volume calculation for datawarehouse

Tom Kyte - Wed, 2018-10-10 06:26
Hi, I am currently working on the data sizing for each table to be loaded in a datawarehouse. Ref: https://docs.oracle.com/cd/E11882_01/doc.112/e26211/app_size.htm#DMAOG301 I have seen a procedure from https://docs.oracle.com/database/121/AR...
Categories: DBA Blogs

OBJECT_GROWTH_TREND function

Tom Kyte - Wed, 2018-10-10 06:26
Hi, I am trying to use the script below to analyze the data consumed for a table, since I am planning the storage capacity for a data warehouse. Using this value (for every table), I will then be able to plan data for 18 months. I am writing a scrip...
Categories: DBA Blogs

Generate formatted excel and pdf report in apex

Tom Kyte - Wed, 2018-10-10 06:26
Please tell us the options for generating formatted Excel and PDF reports in APEX. It would be helpful if there is an open source option.
Categories: DBA Blogs

difference with blocks from DBA_SEGMENTS and Blocks From TSQ$

Tom Kyte - Wed, 2018-10-10 06:26
I have a schema and I look for the size by using the following <code> select segment_type,sum(blocks) from dba_segments where owner='W_GBA_000' group by segment_type; </code> SEGMENT_TYPE SUM(BLOCKS) ------------------ ----------- LOBIN...
Categories: DBA Blogs

How does insert and delete use space regarding to indexes?

Tom Kyte - Wed, 2018-10-10 06:26
Hello Tom I have such as test: SQL> create table t as select * from dba_objects; Table created. SQL> desc t Name OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP ...
Categories: DBA Blogs

Gather Index Stats vs Rebuild Index

Tom Kyte - Wed, 2018-10-10 06:26
Hi Team, we encountered a production issue related to 100% CPU usage. A new script was introduced that was rebuilding indexes on a table of size 450GB with parallel degree 8. I read one article on AskTom where they explained rebuild recreates the Ind...
Categories: DBA Blogs

Oracle and NVIDIA Bring the Power of the Cloud to the Next Generation of Analytics, Machine Learning and AI

Oracle Press Releases - Wed, 2018-10-10 05:00
Press Release
Oracle and NVIDIA Bring the Power of the Cloud to the Next Generation of Analytics, Machine Learning and AI
First public cloud provider to support NVIDIA HGX-2 to give customers access to a unified data science, HPC and AI computing architecture

Munich, Germany – GPU Technology Conference—Oct 10, 2018

Oracle (NYSE:ORCL) and NVIDIA (NASDAQ: NVDA) today announced that Oracle is the first public cloud provider to support the NVIDIA HGX-2™ platform on Oracle Cloud Infrastructure, to meet the needs of the next generation of analytics, machine learning and AI. The companies are also announcing the general availability of support for GPU-accelerated deep learning and HPC containers from the NVIDIA GPU Cloud™ (NGC) container registry on Oracle Cloud Infrastructure.

From enabling autonomous vehicles to driving global climate simulations, rapid progress in AI and HPC has transformed entire industries while also demanding massive increases in complexity and compute power. HGX-2 is designed for multi-precision computing to accelerate the most demanding applications by unleashing 2 petaflops of computing power and half a terabyte (TB) of total GPU memory with 16 NVIDIA Tesla® V100 Tensor Core GPUs interconnected with NVSwitch™. Supporting HGX-2 on both Oracle Cloud Infrastructure bare-metal and virtual machine instances, Oracle and NVIDIA are helping customers solve the greatest AI and HPC challenges for the most complex workloads.

Oracle is also announcing support for RAPIDS™ open-source software, introduced today by NVIDIA, for executing end-to-end data science training pipelines accelerated on NVIDIA GPUs. RAPIDS is now generally available on Oracle Cloud Infrastructure via NGC. RAPIDS dramatically accelerates data science pipelines by moving workflows onto the GPU. This optimizes machine learning training with more iterations for better model accuracy. Data scientists can adopt RAPIDS with hassle-free integration and minimal code changes, enabling them to significantly accelerate the Python data science toolchain. With this new offering and support for NGC containers, Oracle and NVIDIA are allowing customers to easily deploy containerized applications and frameworks for HPC, data science and AI and run them seamlessly on Oracle Cloud Infrastructure.

Oracle Cloud Infrastructure is also working with NVIDIA to support RAPIDS across its platform, including the Oracle Data Science Cloud, to further accelerate customers’ end-to-end data science workflows. RAPIDS software runs seamlessly on the Oracle Cloud, allowing customers to support their HPC, AI and data science needs, all while taking advantage of the portfolio of GPU instances available on Oracle Cloud Infrastructure.

“We are very excited about this collaboration with NVIDIA. As the world of computing continues to push the boundaries of what’s possible, we are providing our customers with the software, tools and cloud infrastructure needed to solve the most complex challenges,” said Clay Magouyrk, senior vice president, software development, Oracle Cloud Infrastructure. “Whether you are an engineer, data scientist, researcher or developer, we are bringing the power of compute and cloud to your fingertips.”

“This new collaboration with Oracle will help fuel incredible innovation across a wide range of industries,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “By taking advantage of NVIDIA’s latest technologies, Oracle is well positioned to meet surges in demand for GPU acceleration for deep learning, high-performance computing, data analytics and machine learning.”

Contact Info
Danielle Tarp
Oracle
+1.650.506.2905
danielle.tarp@oracle.com
Kristin Bryson
NVIDIA
+1.203.241.9190
kbryson@nvidia.com
About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is the enterprise Infrastructure as a Service (IaaS) platform. Companies of all sizes rely on Oracle Cloud to run enterprise and cloud native applications with mission-critical performance and core-to-edge security. By running both traditional and new workloads on a comprehensive cloud that includes compute, storage, networking, database, and containers, Oracle Cloud Infrastructure dramatically increases operational efficiency and lowers total cost of ownership. For more information, visit https://cloud.oracle.com/iaas

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About NVIDIA

NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI—the next era of computing—with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Certain statements in this press release including, but not limited to, statements as to: the benefits and impact of Oracle supporting the NVIDIA HGX-2 platform on Oracle Cloud Infrastructure and supporting RAPIDS; and the benefits, impact, and features of HGX-2 and RAPIDS software are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission, or SEC, including its Form 10-Q for the fiscal period ended July 29, 2018. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Talk to a Press Contact

Danielle Tarp

  • +1.650.506.2905

Kristin Bryson

  • +1.203.241.9190

Few Exadata MOS Docs to review

Syed Jaffar - Wed, 2018-10-10 04:20
If you have MOS login credentials and are managing Exadata database machines, below is a list of a few MOS documents that are worth reading:

  • 888828.1, "Exadata Database Machine and Exadata Storage Server Supported Versions"
  • 1070954.1, "Oracle Exadata Database Machine Exachk or HealthCheck"
  • 1353073.2, "Exadata Diagnostic Collection Guide"
  • 1500257.1, "Exadata Write-Back Flash Cache - FAQ"
  • 1553103.1, "dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr"
  • 1589868.1, "Procedure to check for corrupted root file system on Exadata Storage Servers and Linux database servers"

Join Cardinality – 3

Jonathan Lewis - Tue, 2018-10-09 07:01

In the previous posting I listed the order of precision of histograms as:

  • Frequency
  • Top-Frequency
  • Hybrid
  • Height-balanced
  • None

Having covered the Frequency/Frequency join (for a single column, no nulls, equijoin) in the previous posting I’ve decided to work down the list and address Frequency/Top-Frequency in this posting. It gets a little harder to generate data as we move to the less precise histograms since we need to have skew, we want some gaps, and (for Top-Frequency) we need to have some data that can be “ignored”. On the plus side, though, I want to keep the output of any queries I run fairly short, so I’m going to stick with a small number of buckets, which means the “small” volume of “ignorable” data (the “spare” bucket) can be relatively large. Here’s the code I used to generate data for my investigation – 100 rows for the table with a frequency histogram and 800 rows for the table with a top-frequency.


rem
rem     Script:         freq_hist_join_05.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem     Purpose:        
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem

execute dbms_random.seed(0)

create table t1 (
        id              number(6),
        n04             number(6),
        n05             number(6),
        n20             number(6),
        j1              number(6)
)
;

create table t2(
        id              number(8,0),
        n20             number(6,0),
        n30             number(6,0),
        n50             number(6,0),
        j2              number(6,0)      
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   4) + 1                    n04,
        mod(rownum,   5) + 1                    n05,
        mod(rownum,  20) + 1                    n20,
        trunc(2.5 * trunc(sqrt(v1.id*v2.id)))   j1
from
        generator       v1,
        generator       v2
where
        v1.id <= 10 -- > comment to avoid WordPress format issue
and     v2.id <= 10 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   20) + 1                   n20,
        mod(rownum,   30) + 1                   n30,
        mod(rownum,   50) + 1                   n50,
        28 - round(abs(7*dbms_random.normal))        j2      
from
        generator       v1
where
        rownum <= 800 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1 for columns j1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T2',
                method_opt       => 'for all columns size 1 for columns j2 size 16'
        );
end;
/

In this example I’ve used the sqrt() function and the dbms_random.normal() function to generate the data. The scaling and truncating I’ve done on the results has given me two sets of data which have a nice skew, some gaps, but different patterns (though both have a small number of small values and a larger number of larger values). The data from dbms_random.normal() will produce 22 distinct values, so I’ve requested a histogram with 16 buckets and checked that this will produce a Top-Frequency histogram. (If I want a Hybrid histogram – for the next thrilling installment in the series – I’ll just reduce the number of buckets slightly).

Here are the resulting stats, preceded by the code that reported them:


select  table_name, column_name, histogram, num_distinct, num_buckets, density
from    user_tab_cols
where   table_name in ('T1','T2')
and     column_name in ('J1','J2')
order by table_name
;

select  table_name, num_rows
from    user_tables
where   table_name in ('T1','T2')
order by table_name
;

break on table_name skip 1 on report skip 1

with f1 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T1' 
and     column_name = 'J1'
order by 
        endpoint_value
),
f2 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T2' 
and     column_name = 'J2'
order by 
        endpoint_value
)
select f1.* from f1
union all
select f2.* from f2
order by 1,2
;


TABLE_NAME           COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS    DENSITY
-------------------- -------------------- --------------- ------------ ----------- ----------
T1                   J1                   FREQUENCY                 10          10       .005
T2                   J2                   TOP-FREQUENCY             22          16    .000625

TABLE_NAME             NUM_ROWS
-------------------- ----------
T1                          100
T2                          800

TABLE_NAME                VALUE ROW_OR_BUCKET_COUNT ENDPOINT_NUMBER
-------------------- ---------- ------------------- ---------------
T1                            2                   5               5
                              5                  15              20
                              7                  15              35
                             10                  17              52
                             12                  13              65
                             15                  13              78
                             17                  11              89
                             20                   7              96
                             22                   3              99
                             25                   1             100

T2                            1                   1               1
                             13                  14              15
                             15                  11              26
                             16                  22              48
                             17                  34              82
                             18                  31             113
                             19                  36             149
                             20                  57             206
                             21                  44             250
                             22                  45             295
                             23                  72             367
                             24                  70             437
                             25                  87             524
                             26                 109             633
                             27                  96             729
                             28                  41             770

Table t1 reports 100 rows, 10 distinct values and a Frequency histogram with 10 buckets.
Table t2 reports 800 rows, 22 distinct values and a Top-Frequency histogram with 16 buckets.

Things we notice from the histograms are: t1 has a range from 2 to 25, while t2 has a range from 1 to 28. We also notice that the highest endpoint_number for t2 is only 770 out of a possible 800 – we’ve “lost” 30 rows. We don’t really care what they are for the purposes of the arithmetic, but if we did a quick “select j2, count(*)” query we’d see that we had lost the following:


SQL> select j2, count(*) from t2 group by j2 order by count(*), j2;

        J2   COUNT(*)
---------- ----------
         1          1
         9          1  *
         8          3  *
        11          4  *
        10          5  *
        12          8  *
        14          9  *
        15         11
...

The reason why the total number of rows accounted for is less than the total number of rows in the table comes in two parts. The Top-Frequency histogram is designed to hold the Top N most popular entries in the table, so there will be some entries that don’t make an appearance in the histogram despite contributing rows to the total table count; the number of “lost” rows can then be increased because the Top N popular values may not include the column low and high values, and these two values must appear in the histogram. Looking at the output above we can see that we could have reported 14 as the 16th most popular value; instead we have to record 1 (the low value), losing a further 9 rows and regaining 1.
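
If you prefer not to eyeball the asterisks, here's a quick cross-check (an illustrative query of my own against the same t2 data, not part of the original test) that lists the values present in the table but absent from the top-frequency histogram:

select
        j2, count(*)    row_count
from
        t2
where
        j2 not in (
                -- values captured as endpoints in the top-frequency histogram
                select  endpoint_value
                from    user_tab_histograms
                where   table_name  = 'T2'
                and     column_name = 'J2'
        )
group by
        j2
order by
        j2
;

For this data set that returns the six starred values above (8, 9, 10, 11, 12 and 14), which between them account for the 30 "lost" rows.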

Let’s test the pure join query on the two tables to see what the optimizer is predicting as the join cardinality, and then try to re-create that cardinality from the histogram data:


alter session set statistics_level = all;
alter session set events '10053 trace name context forever';
alter session set tracefile_identifier='BASELINE';

select
        count(*) 
from
        t1, t2
where
        t1.j1 = t2.j2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;
alter session set events '10053 trace name context off';


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |      41 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |      41 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |   1608 |   1327 |00:00:00.01 |      41 |  2545K|  2545K| 1355K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |       7 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    800 |    800 |00:00:00.01 |       7 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."J1"="T2"."J2")

Our target is to work out how we can query the histogram data in a way that gets the result 1,608. Ideally we’ll also think of a rationale for justifying our method, and then we’ll apply the same method with 15 buckets and 17 buckets, and with a couple of variations to the data (e.g. update all rows where j1 = 25 to set j1 = 28), to see if the method still gets the right result.

All we did with the frequency/frequency join was to join the two histograms on matching values, multiply the frequencies on each resulting row, then sum down the set, and this automatically eliminated rows which were outside the “highest low” and “lowest high” (i.e. we only examined rows where the histograms overlapped). We might hope that things shouldn’t be too different when one of the histograms is a top-frequency histogram.
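
To make that concrete, here's a minimal sketch (my own illustration, not the optimizer's internal code) of that frequency/frequency step, written against the same f1/f2 histogram subqueries used elsewhere in this post:

with f1 as (
        select
                endpoint_value                                                             value,
                endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number)  frequency
        from    user_tab_histograms
        where   table_name  = 'T1'
        and     column_name = 'J1'
),
f2 as (
        select
                endpoint_value                                                             value,
                endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number)  frequency
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'J2'
)
select
        -- sum of (frequency * frequency) over the values common to both histograms
        sum(f1.frequency * f2.frequency)        join_card_overlap_only
from
        f1, f2
where
        f2.value = f1.value
;

For this data set the inner join picks up only the five values (15, 17, 20, 22 and 25) that appear in both histograms, i.e. 143 + 374 + 399 + 135 + 87 = 1,138 from the PRODUCT column shown later, which falls well short of the 1,608 target, hence the refinements that follow.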

There is an important difference, though, between frequency and top-frequency histograms – in the latter case there are values in the table which will not be in the histogram, so we ought to make some allowance for these (even though it’s only “one bucket’s worth”). It’s possible that some of these values might match values in the frequency histogram so we need to include a mechanism for adding in a factor to allow for them. So as a first step let’s work out the “average number of rows per value” for the missing values.

We have 22 distinct values and 16 end points so there are 6 missing values. We have 800 rows in the table but only 770 rows reported in the histogram so there are 30 missing rows. So let’s say the missing values have an average cardinality of 30/6 = 5 (and we might extend that to say they have an average selectivity of 5/800 = 0.00625).
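
Those figures can be pulled straight from the dictionary; here's a quick sketch (an illustrative query of my own, assuming the same t2 stats as above) that reports the missing rows, the missing values and the resulting average:

select
        t.num_rows,
        c.num_distinct,
        c.num_buckets,
        max(h.endpoint_number)                                  rows_in_histogram,
        t.num_rows - max(h.endpoint_number)                     missing_rows,
        c.num_distinct - c.num_buckets                          missing_values,
        (t.num_rows - max(h.endpoint_number)) /
                (c.num_distinct - c.num_buckets)                avg_rows_per_missing_value
from
        user_tables             t,
        user_tab_cols           c,
        user_tab_histograms     h
where
        t.table_name  = 'T2'
and     c.table_name  = t.table_name
and     c.column_name = 'J2'
and     h.table_name  = c.table_name
and     h.column_name = c.column_name
group by
        t.num_rows, c.num_distinct, c.num_buckets
;

With the stats above this reports 30 missing rows spread across 6 missing values, i.e. an average of 5 rows per value (selectivity 5/800 = 0.00625).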

Let’s bring that value into the query we wrote for the frequency/frequency case by using an outer join (which I’ll write as an “ANSI” Full Outer Join) with a predicate in place that restricts the result to just the overlapping range, which is [2,25], the “higher low value” and “lower high value” across the two histograms. Here’s some code – with an odd little detail included:


column product format 999,999,999.99
compute sum of product on report

compute sum of t1_count on report
compute sum of t1_value on report
compute sum of t2_count on report
compute sum of t2_value on report

with f1 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T1' 
and     column_name = 'J1'
order by 
        endpoint_value
),
f2 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T2' 
and     column_name = 'J2'
order by 
        endpoint_value
)
select
        f1.value f1_value,
        f2.value f2_value,
        nvl(f1.row_or_bucket_count,0.00) t1_count, 
        nvl(f2.row_or_bucket_count,800*0.00625) t2_count,
        nvl(f1.row_or_bucket_count,0.00) * 
        nvl(f2.row_or_bucket_count,800*0.006250) product
from
        f1
full outer join
        f2
on
        f2.value = f1.value
where
        coalesce(f1.value, f2.value) between 2 and 25
order by
        coalesce(f1.value, f2.value)
;

I’ve included an nvl() on the columns for the top-frequency histograms that convert nulls (i.e. the preserved rows derived from the frequency histogram) into the average frequency we’ve just calculated, using the “num_rows * selectivity” representation. The odd little detail that I commented on above does something similar for the preserved rows derived from the top-frequency histogram because this first guess at the calculation was wrong and needed an adjustment which I’m anticipating. Here are the results I got with this code:

  T1_VALUE   T2_VALUE   T1_COUNT   T2_COUNT         PRODUCT
---------- ---------- ---------- ---------- ---------------
         2                     5          5           25.00
         5                    15          5           75.00
         7                    15          5           75.00
        10                    17          5           85.00
        12                    13          5           65.00
                   13          0         14             .00
        15         15         13         11          143.00
                   16          0         22             .00
        17         17         11         34          374.00
                   18          0         31             .00
                   19          0         36             .00
        20         20          7         57          399.00
                   21          0         44             .00
        22         22          3         45          135.00
                   23          0         72             .00
                   24          0         70             .00
        25         25          1         87           87.00
---------- ---------- ---------- ---------- ---------------
       135        233        100        548        1,463.00

The figure is too low, so there has to be an adjustment. What if the code is allowing for the “maybe there are other values” algorithm that the optimizer uses with frequency histograms? If you’ve gathered a frequency histogram on a column but query it with a value that isn’t in the histogram then Oracle applies an algorithm that looks like: “if you’re asking for something that isn’t in the histogram I’ll assume that there must be some data there and use a frequency that’s half the lowest frequency I have recorded” **Important footnote. The value 25 appears once in our histogram so let’s include a fudge-factor of 0.5 (i.e. half a row) in the nvl() expression for the t1 frequencies and see what happens. This is what the new results look like:


  T1_VALUE   T2_VALUE   T1_COUNT   T2_COUNT         PRODUCT
---------- ---------- ---------- ---------- ---------------
         2                     5          5           25.00
         5                    15          5           75.00
         7                    15          5           75.00
        10                    17          5           85.00
        12                    13          5           65.00
                   13         .5         14            7.00
        15         15         13         11          143.00
                   16         .5         22           11.00
        17         17         11         34          374.00
                   18         .5         31           15.50
                   19         .5         36           18.00
        20         20          7         57          399.00
                   21         .5         44           22.00
        22         22          3         45          135.00
                   23         .5         72           36.00
                   24         .5         70           35.00
        25         25          1         87           87.00
---------- ---------- ---------- ---------- ---------------
       135        233      103.5        548        1,607.50

Since we were looking for 1,608 I’m going to call that a success. I can check precision, of course, by looking at the 10053 trace file. Extracting a few critical lines:

egrep -e"Density" -e"Join Card" orcl12c_ora_6520_BASELINE.trc

    AvgLen: 3 NDV: 22 Nulls: 0 Density: 0.006250 Min: 1.000000 Max: 28.000000
    AvgLen: 3 NDV: 10 Nulls: 0 Density: 0.005000 Min: 2.000000 Max: 25.000000

Join Card:  1607.500000 = outer (100.000000) * inner (800.000000) * sel (0.020094)

The “Density” lines come from the column statistics – note the 0.00625 that matches the “average selectivity” I derived from the top-frequency figures. You might also note that the “half the least frequent value” could be derived from the t1.j1 density (0.005) * t1.num_rows (100).
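
As a quick cross-check of that last observation (an illustrative query only, assuming the same schema), multiplying the dictionary density for t1.j1 by its num_rows reproduces the 0.5 fudge factor:

select
        c.table_name,
        c.column_name,
        t.num_rows,
        c.density,
        -- 100 * 0.005 = 0.5, the value injected into the nvl() expression
        t.num_rows * c.density          half_lowest_frequency
from
        user_tab_cols   c,
        user_tables     t
where
        t.table_name  = c.table_name
and     c.table_name  = 'T1'
and     c.column_name = 'J1'
;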

The “Join Card” line is exactly what it says – the join cardinality calculation showing that the plan’s prediction of 1,608 rows was actually a rounded 1607.5

There is one more important thing to check before I start tweaking the data to see if there are any other factors involved. Is the 0.5 I stuck into the query really the value of “half the least common frequency” or is it a fixed value in all cases. A nice easy way of testing this is to update the t1 table to change one row from 22 to 25 (22 will still be present in the table and histogram before and after this test, so it’s a minimal and safe change). Making this change and re-running the calculation query leaving the 0.5 unchanged gives the following:


update t1 set j1 = 25 where j1 = 22 and rownum = 1;

...

                   21         .5         44           22.00
        22         22          2         45           90.00
                   23         .5         72           36.00
                   24         .5         70           35.00
        25         25          2         87          174.00
                      ---------- ---------- ---------------
sum                        103.5        548        1,649.50

Without reporting all the details:

  • the estimate in the plan went up from 1,608 to 1,794
  • leaving 0.5 in the query the derived result was 1,649.5 (last few lines of output above)
  • changing the 0.5 to 1.0 the derived result was 1,794.0

Conclusion – the “fudge factor” is consistent with the model the optimizer uses with frequency histogram calculations. The optimizer models the “missing” rows in the join calculation as “half the number of rows for the least frequently occurring value” **Important footnote.

Filter Predicates:

After a dozen tests varying the number of buckets in the top-frequency histogram (and checking it really was still a top-frequency histogram), and tweaking the t1 (frequency histogram) data to use values on the boundaries of, or outside, the range of the t2 (top-frequency) data, I concluded that my approach was probably correct. Outer join the two histograms, restrict to the overlap, supply the “num_rows * density” figure on the top-frequency side, and “half the lowest frequency” **Important footnote on the frequency side, and the query produces the same result as the optimizer for the pure join cardinality.

So the next step is to check what happens when you add filter predicates on one, or both, sides. I listed a fragment of code earlier on to execute the pure join and count the number of rows it produced, enabling the 10053 trace and pulling the actual plan from memory at the same time. I repeated this code with 3 variations and checked the “Join Card” lines from the resulting trace files:


select count(*) from  t1, t2 where  t1.j1 = t2.j2
select count(*) from  t1, t2 where  t1.j1 = t2.j2 and t1.n04 = 2
select count(*) from  t1, t2 where  t1.j1 = t2.j2                and t2.n30 = 25
select count(*) from  t1, t2 where  t1.j1 = t2.j2 and t1.n04 = 2 and t2.n30 = 25

egrep -e"Join Card" orcl12c_ora_10447*.trc

orcl12c_ora_10447_BASELINE.trc:Join Card:  1607.500000 = outer (800.000000) * inner (100.000000) * sel (0.020094)
orcl12c_ora_10447_FILTERJ1.trc:Join Card:  401.875000 = outer (800.000000) * inner (25.000000) * sel (0.020094)
orcl12c_ora_10447_FILTERJ2.trc:Join Card:  53.583333 = outer (100.000000) * inner (26.666667) * sel (0.020094)
orcl12c_ora_10447_FILTJ1J2.trc:Join Card:  13.395833 = outer (26.666667) * inner (25.000000) * sel (0.020094)

As you can see, in all 4 cases Oracle reports an inner and an outer cardinality estimate and a join selectivity. The join selectivity remains unchanged throughout; it’s the value we can derive from our pure join test (0.020094 = 1607.5 / (100 * 800)). All that changes is that the individual table predicates are applied to the base tables before the join selectivity is applied to the product of the filtered base table cardinalities (a small sanity-check query follows the list below):

  • Column n04 has 4 distinct values in 100 rows – filter cardinality = 100/4 = 25
  • Column n30 has 30 distinct values in 800 rows – filter cardinality = 800/30 = 26.66666…
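
Putting those pieces together, here's a small sanity-check (illustrative arithmetic of my own, run from dual) that reproduces the four "Join Card" figures from the filtered cardinalities and the constant join selectivity:

select
        -- pure join selectivity: 1607.5 / (100 * 800)
        1607.5 / (100 * 800)                            join_selectivity,
        100      * 800      * 1607.5 / (100 * 800)      card_baseline,
        (100/4)  * 800      * 1607.5 / (100 * 800)      card_t1_n04_filter,
        100      * (800/30) * 1607.5 / (100 * 800)      card_t2_n30_filter,
        (100/4)  * (800/30) * 1607.5 / (100 * 800)      card_both_filters
from
        dual
;

This gives 1,607.5, 401.875, 53.583 and 13.396 respectively, matching the trace output above to rounding.
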
Conclusion

For a single-column equijoin on columns with no nulls, where one column has a frequency histogram and the other has a top-frequency histogram, the optimizer calculates the “pure” join cardinality (and hence the join selectivity) using the overlapping range of column values and two approximating frequencies; it then applies the base table filter predicates to get the filtered cardinalities, takes their product (the cartesian join of the filtered data sets), and multiplies that by the pure join selectivity.


**Important Footnote  Until Chinar Aliyev questioned what I had written, I had never noticed that the “half the lowest frequency” that I describe at various points in the arithmetic was anything other than a fixed fudge factor. In fact, in perfect symmetry with the expression used for the average selectivity in the top-frequency part of the calculation, this “fudge factor” is simply “num_rows * column_density” for the column with the frequency histogram. (Whether the “half the lowest frequency” drops out as a side effect of the density calculation, or whether the column density is derived from half the lowest frequency, is another matter.)

Al Nahdi Leverages Oracle Retail Science Cloud Services to Identify Revenue Opportunities and Reward Customer Loyalty

Oracle Press Releases - Tue, 2018-10-09 07:00
Press Release
Al Nahdi Leverages Oracle Retail Science Cloud Services to Identify Revenue Opportunities and Reward Customer Loyalty
Artificial Intelligence and Machine Learning Power New Business Model That Correlates Customer Engagement and Efficacy of Marketing Campaigns

Redwood Shores, Calif.—Oct 9, 2018

Saudi Arabian pharmacy retailer Nahdi Medical Company has started utilizing the new marketing analytic models found on the Innovation Workbench, part of Oracle Retail Science Cloud Services, to develop customer churn analytics that will encourage repeat purchases. With 85 million annual customers across stores in 125 cities, understanding customer churn and identifying revenue opportunities became more complex as Nahdi scaled operations. Al Nahdi has started creating a new business model incorporating artificial intelligence and machine learning to correlate customer engagement with marketing campaigns in order to identify low-churn customers and reward loyalty. The scale of the cloud-based Oracle Retail Science Platform, the flexibility of the Innovation Workbench and the power of artificial intelligence and machine learning allow Al Nahdi to position itself for more strategic analysis of consumer behavior while minimizing manual queries as it continues to grow.

With this new model, Nahdi can better understand campaign effectiveness and its impact on driving store traffic, increasing market basket size and encouraging customer loyalty. Nahdi can measure the likelihood of churn, which helps define long-term retention initiatives and campaigns. When using predictive churn probability, the business can flag customers for upcoming email campaigns and define targeted offers strategically within promotional campaigns. A common cloud-based retail science platform has significantly increased the speed of operationalizing innovation using a comprehensive retail data model in conjunction with productized sciences and Oracle’s industry-leading data science toolkit. These accelerators empowered the Nahdi team to focus on strategic initiatives that directly drive business growth, rather than spending cycles on technology, integration and data analysis. Nahdi has already implemented Advanced Clustering, Customer Segmentation, Customer Decision Trees and Demand Transference to optimize the performance and processes with Oracle Retail Category Management.

With Oracle Retail as a partner, Nahdi has all the tools necessary to focus on increasing value to their business through data science instead of building new solutions, algorithms, databases, and integrations.

“The Oracle Retail Innovation Workbench will allow us to leverage the latest artificial intelligence and machine learning to build our own predictive and cognitive analytics," said Dr. Khalid Tadlaoui, Vice President of Information Technology, Nahdi Medical Company.

“By taking the innovative Predictive models provided by Oracle Retail Science Cloud Service and couple them with our Analytics intellectual property, we’ll be able to understand and predict our guests’ behavior which will give us the opportunities to serve them better," said Dr. Ayman Abdalazem, Head of Business Intelligence and Analytics, Nahdi Medical Company.

Selecting a cloud platform for data science has enabled us to quickly implement the solution while minimizing technical debt and accelerating the value we extract for Nahdi Medical Company.

“Organizations are quickly understanding the value of the cloud to analyze consumer behavior as they wrestle with the complexity of their continued growth," said Mike Webster, senior vice president, and general manager, Oracle Retail. "Oracle Retail Science Cloud Services allow retailers to leverage AI, machine learning and decision science to detect trends, learn from results and increase their accuracy, adding contextual data to paint a clearer picture on what motivates consumers. Retailers can focus on increasing value to their business through data science instead of building new solutions, algorithms, databases, and integrations. Moreover, with a cloud delivery model, Al Nahdi can focus on driving a more strategic business while leveraging the latest industry innovations delivered by Oracle.”

Join Oracle Retail at ORIF Dubai Tuesday October 16

At Oracle Retail Industry Forum, retail executives gather to share deep domain expertise and best practices on the mission-critical applications that are core to their businesses and their success. Program content is focused on customers providing perspectives on identifying and overcoming complex industry challenges. Join us in Dubai on 16 October 2018. Registration is limited to retailers and our sponsor community.

Contact Info
Matt Torres
Oracle
1.415.595.1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 1.415.595.1584

What are you doing at Oracle OpenWorld and Code One 2018?

Christopher Jones - Tue, 2018-10-09 02:11

The time for the Oracle OpenWorld and Code One conferences is fast approaching.  These two conferences run concurrently in San Francisco over October 22 - 25.  If you are flying in, you will have already sorted out your conference tickets, but if you are local there are options, from the free Discover pass (search the page for 'Discover') on upwards, that you should take advantage of.

There are plenty of things to attend and do - you can get your hands dirty in any number of ways! There are so many sessions running that you need to keep a clear head.  If you want to see sessions that are related to the areas this blog covers, check out our 'Focus On' document.  There is a short URL, https://tinyurl.com/AppDevOOW18, which will also take you there.  This document has the abstracts and will be up to date if there are any changes, but for ease of reference here is the current list of wonderful events:

HOL (Hands-on Lab) Sessions

Python and Oracle Database: Scripting for the Future - BYOL [HOL5052]
Monday, Oct 22, 9:00 a.m. - 11:00 a.m. | Moscone West - Overlook 2A (HOL)

Python and Oracle Database 18c: Scripting for the Future [HOL6329]
Tuesday, Oct 23, 2:15 p.m. - 3:15 p.m. | Marriott Marquis (Yerba Buena Level) - Salon 3/4

Developer Sessions

Getting Started with R and ROracle for Data Science with Oracle Database [DEV5049]
Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2016

The Go Language: Principles and Practices for Oracle Database [DEV5047]
Monday, Oct 22, 12:30 p.m. - 1:15 p.m. | Moscone West - Room 2001

How to Build Geospatial Analytics with Python and Oracle Database [DEV5185]
Monday, Oct 22, 1:30 p.m. - 2:15 p.m. | Moscone West - Room 2003

How Does Ruby on Rails Work with MySQL, Oracle Database, and Other Databases [DEV4948]
Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2014

Getting Started with GraphQL APIs on Oracle Database with Node.js [DEV4879]
Tuesday, Oct 23, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2012

Product Training Sessions

Oracle Net Services: Best Practices for Database Performance and High Availability [TRN4073]
Monday, Oct 22, 3:45 p.m. - 4:30 p.m. | Moscone West - Room 3009

A Database Proxy for Transparent High Availability, Performance, Routing, and Security [TRN4070]
Wednesday, Oct 24, 11:15 a.m. - 12:00 p.m. | Moscone West - Room 3009

Application High Availability Best Practices and New Features [TRN4078]
Thursday, Oct 25, 10:00 a.m. - 10:45 a.m. | Moscone West - Room 3009

Using Location in Cloud Applications with Python, Node.js, and More [TRN4089]
Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3001

Building Real-Time Data in Web Applications with Node.js [TRN4081]
Thursday, Oct 25, 1:00 p.m. - 1:45 p.m. | Moscone West - Room 3009

IoT for Oracle Database: Soldering, Python, and a Little PL/SQL [TRN4077]
Thursday, Oct 25, 2:00 p.m. - 2:45 p.m. | Moscone West - Room 3009

Tips and Tricks Sessions

Python and Oracle Database on the Table [TIP4076]
Tuesday, Oct 23, 12:30 p.m. - 1:15 p.m. | Moscone West - Room 3009

Node.js: Async Data In and Data Out with Oracle Database [TIP4080]
Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3009

Performance and Scalability Techniques for Oracle Database Applications [TIP4075]
Thursday, Oct 25, 12:00 p.m. - 12:45 p.m. | Moscone West - Room 3009
Meet the Experts

Node.js, Python, PHP, and Go with Oracle Database [MTE6765]
Wednesday, Oct 24, 3:00 p.m. - 3:50 p.m. | Moscone West - The Hub - Lounge B

Demos

We'll also have a demo booth in one of the demo halls:

Python, Node.js, Go, C, and C++ Application Development for Oracle Database [APD-A03]

Other Happenings

Some other 'Focus On' Documents from my wider group are:


“Hidden” Efficiencies of Non-Partitioned Indexes on Partitioned Tables Part II (Aladdin Sane)

Richard Foote - Tue, 2018-10-09 01:16
In Part I of this series, I highlighted how a Non-Partitioned Global Index on a Partitioned Table is able to effectively perform “Partition Pruning” by reading only the associated index entries to access just the table blocks of interest from relevant table partitions when the table partitioned keys are specified in an SQL Predicate. Understanding […]
Categories: DBA Blogs

Oracle Utilities Documentation

Anthony Shorten - Mon, 2018-10-08 22:20

One of the most common questions I get from partners and customers is the location of the documentation for the product. In line with most Oracle products there are three locations for documentation:

  • Online Help. The product ships with online help which has information about the screens and advice on the implementation and extension techniques available. Of course, this assumes you have installed the product first. Help can be accessed using the assigned keyboard shortcut or using the help icon on the product screens.
  • Oracle Software Delivery Cloud. Along with the installation media for the products, it is possible to download PDF versions of all the documentation for offline use. This is usually indicated on the download when selecting the version of the product to download, though it can be downloaded at any time.
  • Oracle Utilities Help Center. As with other Oracle products, all the documentation is available online via the Oracle Help Center (under Industries --> Utilities).

The following documentation is available:

  • Release Notes: Summary of the changes and new features in the Oracle Utilities product. Audience: Implementation Teams.
  • Quick Install: Summary of the technical installation process, including prerequisites. Audience: UNIX Administrators.
  • Installation Guide: Detailed software installation guide for the Oracle Utilities product. Audience: UNIX Administrators.
  • Optional Products Installation: Summary of any optional additional or third-party products used for the Oracle Utilities product. This guide only exists if optional products are certified/supported with the product. Audience: UNIX Administrators.
  • Database Administrator's Guide: Installation, management and guidelines for use with the Oracle Utilities product. Audience: DBA Administrators.
  • Licensing Information User Manual: Legal license information relating to the Oracle Utilities product and related products. Audience: UNIX Administrators.
  • Administrative User Guide: Offline copy of the Administration documentation for the Oracle Utilities product. This is also available via the online help installed with the product. Audience: Implementation Teams, Developers.
  • Business User Guide: Offline copy of the Business and User documentation for the Oracle Utilities product. This is also available via the online help installed with the product. Audience: Implementation Teams, Developers.
  • Package Organization Summary: Summary of the different packages included in the Oracle Utilities product. This may not exist for single product installations. Audience: Implementation Teams.
  • Server Administration Guide: Guide to the technical configuration settings, management utilities and other technical architecture aspects of the Oracle Utilities product. Audience: UNIX Administrators.
  • Security Guide: Guide to the security aspects of the Oracle Utilities product, centralized in a single document. Covers both security functionality and technical security capabilities, and is designed for use by security personnel to design their security solutions. Audience: Implementation Teams.
  • API Reference Notes: Summary of the APIs provided with the Oracle Utilities product. This is also available via online features. Audience: Developers.
  • Developers Guide: The Oracle Utilities SDK guide for using the Eclipse-based development tools to extend the Oracle Utilities product using Java. Partners using the ConfigTools or Groovy should use the Administrative User Guide instead. Audience: Developers.

Be familiar with this documentation, as well as Oracle Support, which has additional Knowledge Base articles.

Oracle E-Business Suite 12.2.8 Now Available

Steven Chan - Mon, 2018-10-08 15:21

I am pleased to announce that Oracle E-Business Suite 12.2.8 is now available.

This update includes several significant functional innovations around customer requests and voting, Enterprise Command Centers, user experience, data privacy standards, accounting standards changes, and automation to help customers move EBS environments to Oracle Cloud Infrastructure.

For details about the 2018 innovations, see:

Cliff Godwin talks through the latest innovations and what they mean for customers in this video:

Instructions for downloading and applying this latest release update pack (RUP) for the EBS 12.2 codeline can be found here:

What Does Release 12.2.8 Include?

As a consolidated suite-wide patchset, this RUP includes new features, statutory and regulatory updates, and enhancements for stability, performance, and security.

Release 12.2.8 is cumulative. That means that, in addition to providing new updates for this release, it also includes updates that were originally made available as one-off patches for earlier 12.2 releases.

For a complete list of new features, refer to:

Common Questions and Answers About Upgrading

  • Q: Is there a direct upgrade path from Release 11i to Release 12.2.8?
    A: No. Release 11i customers must first upgrade to Release 12.2 before applying 12.2.8.
  • Q: Is there a direct upgrade path from EBS 12.0 to 12.2.8?
    A: No. Release 12.0 customers must first upgrade to Release 12.2 before applying 12.2.8.
  • Q: Is there a direct upgrade path from EBS 12.2 to 12.2.8?
    A: Yes. Release 12.2 customers can apply 12.2.8 directly to their environments. EBS 12.2.8 is an online patch, so it can be applied while an existing Release 12.2 system is running.
Categories: APPS Blogs

From Oracle to Postgres with the EDB Postgres Migration Portal

Yann Neuhaus - Mon, 2018-10-08 10:28

EnterpriseDB is a valuable player in the PostgreSQL world. In addition to providing support, they also deliver very useful tools to manage your Postgres environments easily. Among these we can mention EDB Enterprise Manager, EDB Backup & Recovery Tool, EDB Failover Manager, and so on.
With this post I will present one of the latest additions to the family, EDB Postgres Migration Portal, a helpful tool for migrating from Oracle to Postgres.

To access the Portal, use your EDB account or create one if you don't have one. By the way, with your account you can also connect to PostgresRocks, a very interesting community platform. Go take a look :) .

Once connected, click on "Create project".

Fill in the fields and click on "Create". Currently it is only possible to migrate from Oracle 11 or 12 to EDB Postgres Advanced Server 10.

All your projects are displayed at the bottom of the page. Click on the "Assess" link to continue.

The migration steps consist of the following :

  1. Extracting the DDL metadata from Oracle database using the EDB’s DDL Extractor script
  2. Running assessment
  3. Correcting conflicts
  4. Downloading and running the new DDL statements adapted to your EDB Postgres database
  5. Migrating data
1. Extracting the DDL metadata from Oracle database

The DDL Extractor script is easy to use. You just need to specify the schema name whose DDL you want to extract and the path where the DDL file will be stored. As you can guess, the script runs the Oracle dbms_metadata.get_ddl function to extract the object definitions:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select object_type, count(*) from dba_objects where owner='HR' and status='VALID' group by object_type order by 1;

OBJECT_TYPE COUNT(*)
----------------------- ----------
INDEX 19
PROCEDURE 2
SEQUENCE 3
TABLE 7
TRIGGER 2

SQL>

SQL> @edb_ddl_extractor.sql
# -- EDB DDL Extractor Version 1.2 for Oracle Database -- #
# ------------------------------------------------------- #
Enter SCHEMA NAME to extract DDLs : HR
Enter PATH to store DDL file : /home/oracle/migration


Writing HR DDLs to /home/oracle/migration_gen_hr_ddls.sql
####################################################################################################################
## DDL EXTRACT FOR EDB POSTGRES MIGRATION PORTAL CREATED ON 03-10-2018 21:41:27 BY DDL EXTRACTION SCRIPT VERSION 1.2
##
## SOURCE DATABASE VERSION: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
####################################################################################################################
Extracting SYNONYMS...
Extracting DATABASE LINKS...
Extracting TYPE/TYPE BODY...
Extracting SEQUENCES...
Extracting TABLEs...
Extracting PARTITION Tables...
Extracting CACHE Tables...
Extracting CLUSTER Tables...
Extracting KEEP Tables...
Extracting INDEX ORGANIZED Tables...
Extracting COMPRESSED Tables...
Extracting NESTED Tables...
Extracting EXTERNAL Tables..
Extracting INDEXES...
Extracting CONSTRAINTS...
Extracting VIEWs..
Extracting MATERIALIZED VIEWs...
Extracting TRIGGERs..
Extracting FUNCTIONS...
Extracting PROCEDURE...
Extracting PACKAGE/PACKAGE BODY...


DDLs for Schema HR have been stored in /home/oracle/migration_gen_hr_ddls.sql
Upload this file to the EDB Migration Portal to assess this schema for EDB Advanced Server Compatibility.


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@vmrefdba01:/home/oracle/migration/ [DB1]
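
For reference, the heavy lifting the extractor script performs boils down to dbms_metadata calls like the sketch below (my own illustration, not the actual extractor code; the HR schema and the restriction to tables are assumptions to keep it short):

-- Minimal sketch, run in SQL*Plus as a privileged user:
-- pull the DDL of every table owned by HR, roughly what the
-- EDB DDL Extractor automates across all supported object types.
set long 100000 pagesize 0 linesize 200
select dbms_metadata.get_ddl(object_type, object_name, owner)
from   dba_objects
where  owner = 'HR'
and    object_type = 'TABLE'
order  by object_name;

The real script then repeats this for synonyms, sequences, indexes, constraints, views, triggers, procedures and packages, as the output above shows.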

2. Assessment

Go back to your browser. It's time to check whether the Oracle schema can be imported into Postgres or not. Upload the output file and click on "Run assessment" to start the check.
The result is presented as follows:

3. Correcting conflicts

We can notice an issue in the report above: the BFILE type is not supported by EDB PPAS. You can click on the concerned table to get more details about the issue.
Tip: when you need to manage BFILE columns in Postgres, you can use the external_file extension.
Of course, several other conversion issues can happen. A very good point with the Portal is that it provides a knowledge base to help solve conflicts. You will find all the necessary information and workarounds by navigating to the "Repair handler" and "Knowledge base" tabs. Moreover, you can make the corrections directly from the Portal.

4. Creating the objects in Postgres database

Once you have corrected the conflicts and the assessment report shows a 100% success ratio, click on the top-right "Export DDL" button to download the new creation script adapted for EDB Postgres:
Then connect to your Postgres instance and run the script :
postgres=# \i Demo_HR.sql
CREATE SCHEMA
SET
CREATE SEQUENCE
CREATE SEQUENCE
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE PROCEDURE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TRIGGER
postgres=#

Quick check:
postgres=# select object_type, count(*) from dba_objects where schema_name='HR' and status='VALID' group by object_type order by 1;
object_type | count
-------------+-------
INDEX | 19
PROCEDURE | 2
SEQUENCE | 3
TABLE | 7
TRIGGER | 2
(5 rows)

Sounds good! All objects have been created successfully.

5. Migrating data

The Migration Portal doesn't provide an embedded solution to import the data, so to do that you can use the EDB Migration Toolkit (MTK).
Let's see how it works…
You will find MTK in the edbmtk directory of the {PPAS_HOME}. Inside etc, the toolkit.properties file is used to store the connection parameters for the source and target databases:
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb] cat toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.101:1521:DB1
SRC_DB_USER=system
SRC_DB_PASSWORD=manager

TARGET_DB_URL=jdbc:edb://localhost:5444/postgres
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=admin123
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb]

MTK uses JDBC to connect to the Oracle database. You need to download the Oracle JDBC driver (ojdbc7.jar) and store it in the following location:
postgres@ppas01:/home/postgres/ [PG10edb] ll /etc/alternatives/jre/lib/ext/
total 11424
-rw-r--r--. 1 root root 4003800 Oct 20 2017 cldrdata.jar
-rw-r--r--. 1 root root 9445 Oct 20 2017 dnsns.jar
-rw-r--r--. 1 root root 48733 Oct 20 2017 jaccess.jar
-rw-r--r--. 1 root root 1204766 Oct 20 2017 localedata.jar
-rw-r--r--. 1 root root 617 Oct 20 2017 meta-index
-rw-r--r--. 1 root root 2032243 Oct 20 2017 nashorn.jar
-rw-r--r--. 1 root root 3699265 Jun 17 2016 ojdbc7.jar
-rw-r--r--. 1 root root 30711 Oct 20 2017 sunec.jar
-rw-r--r--. 1 root root 293981 Oct 20 2017 sunjce_provider.jar
-rw-r--r--. 1 root root 267326 Oct 20 2017 sunpkcs11.jar
-rw-r--r--. 1 root root 77962 Oct 20 2017 zipfs.jar
postgres@ppas01:/home/postgres/ [PG10edb]

As HR's objects already exist, let's start the data migration with the -dataOnly option:
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb] ./runMTK.sh -dataOnly -truncLoad -logBadSQL HR
Running EnterpriseDB Migration Toolkit (Build 51.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.101:1521:DB1
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/postgres
user =postgres
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '10.5.12'
Importing redwood schema HR...
Loading Table Data in 8 MB batches...
Disabling FK constraints & triggers on hr.countries before truncate...
Truncating table COUNTRIES before data load...
Disabling indexes on hr.countries before data load...
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.054 Total Rows: 25
Disabling FK constraints & triggers on hr.departments before truncate...
Truncating table DEPARTMENTS before data load...
Disabling indexes on hr.departments before data load...
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.046 Total Rows: 27
Disabling FK constraints & triggers on hr.employees before truncate...
Truncating table EMPLOYEES before data load...
Disabling indexes on hr.employees before data load...
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.168 Total Rows: 107 Total Size(MB): 0.0087890625
Disabling FK constraints & triggers on hr.jobs before truncate...
Truncating table JOBS before data load...
Disabling indexes on hr.jobs before data load...
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.01 Total Rows: 19
Disabling FK constraints & triggers on hr.job_history before truncate...
Truncating table JOB_HISTORY before data load...
Disabling indexes on hr.job_history before data load...
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.035 Total Rows: 10
Disabling FK constraints & triggers on hr.locations before truncate...
Truncating table LOCATIONS before data load...
Disabling indexes on hr.locations before data load...
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.053 Total Rows: 23 Total Size(MB): 9.765625E-4
Disabling FK constraints & triggers on hr.regions before truncate...
Truncating table REGIONS before data load...
Disabling indexes on hr.regions before data load...
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Enabling FK constraints & triggers on hr.countries...
Enabling indexes on hr.countries after data load...
Enabling FK constraints & triggers on hr.departments...
Enabling indexes on hr.departments after data load...
Enabling FK constraints & triggers on hr.employees...
Enabling indexes on hr.employees after data load...
Enabling FK constraints & triggers on hr.jobs...
Enabling indexes on hr.jobs after data load...
Enabling FK constraints & triggers on hr.job_history...
Enabling indexes on hr.job_history after data load...
Enabling FK constraints & triggers on hr.locations...
Enabling indexes on hr.locations after data load...
Enabling FK constraints & triggers on hr.regions...
Enabling indexes on hr.regions after data load...
Data Load Summary: Total Time (sec): 0.785 Total Rows: 215 Total Size(MB): 0.01

Schema HR imported successfully.
Migration process completed successfully.

Migration logs have been saved to /home/postgres/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Tables: 7 out of 7

Total objects: 7
Successful count: 7
Failed count: 0
Invalid count: 0

*************************************************************
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb]

Quick check:
postgres=# select * from hr.regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
(4 rows)

Conclusion

Easy, isn't it?
Once again, EnterpriseDB provides a very practical, user-friendly and easy-to-handle tool. In my demo the HR schema is pretty simple; migrating a more complex schema can be more challenging. Currently only migrations from Oracle are available, but SQL Server and other legacy databases should be supported in future versions. In the meantime, you must use the EDB Migration Toolkit for those.

That’s it. Have fun and… be ready to say goodbye to Oracle :-)

 

The post From Oracle to Postgres with the EDB Postgres Migration Portal appeared first on Blog dbi services.

Random Upgrade

Jonathan Lewis - Mon, 2018-10-08 07:36

Here’s a problem that (probably) won’t affect the day to day running of most systems – but it could be a pain in the backside for people who write programs to generate repeatable test data. I’m not going to say much about the problem, just leave you with a test script.


rem
rem	Script	random_upgrade.sql
rem	Author:	Jonathan Lewis
rem	Dated:	Oct 2018
rem
rem	Last tested
rem		18.3.0.0
rem		12.2.0.1
rem	Notes
rem	In the upgrade from 12.2.0.1 something
rem	changed that meant
rem		create as select dbms_random
rem	gets different data from
rem		select dbms_random
rem

drop table t4 purge;
drop table t3 purge;
drop table t2 purge;
drop table t1 purge;
drop table t0 purge;

set feedback off

create table t0 as
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
;


execute dbms_random.seed(0);

create table t1
as
select dbms_random.normal
from
	t0
;

execute dbms_random.seed(0);

create table t2
as
with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	dbms_random.normal
from
	g1
;

prompt	=================
prompt	Diff the two CTAS
prompt	=================

select count(*)
from (
select * from t1
minus
select * from t2
union all
select * from t2
minus
select * from t1
)
;


create table t3 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

create table t4 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

execute dbms_random.seed(0)

insert into t3
select dbms_random.normal
from
	t0
;

execute dbms_random.seed(0)

insert into t4
with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	dbms_random.normal
from
	g1
;


prompt	===================
prompt	Diff the two Insert
prompt	===================

select count(*)
from (
select * from t3
minus
select * from t4
union all
select * from t4
minus
select * from t3
)
;


prompt	===========
prompt	Sum of CTAS
prompt	===========

select sum(normal) from t1;

prompt	=============
prompt	Sum of Insert
prompt	=============

select sum(normal) from t3;


execute dbms_random.seed(0)

prompt	=============
prompt	Sum of select
prompt	=============

with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select sum(n) from (
select
	dbms_random.normal n
from
	g1
)
;


I’m repeatedly using dbms_random.seed(0) to reset the random number generator and trying to generate 10,000 normally distributed numbers. (I’ve chosen the normal distribution because that happened to be the function in a script I sent someone with the comment that “this will recreate the data for the demonstration” – and they wrote back to say that it didn’t.)

I’ve got two “create as select”, and two “insert as select”. One of each pair selects from a real existing table to get 10,000 rows, the other uses the “select dual connect by” trick to generate rows. I’ve written SQL that shows whether or not the two pairs of tables end up with the same data (they do, pairwise), then I’ve summed one table from each pair to see if the different mechanisms produce the same data – and that depends on the version of Oracle you’re using. Finally I’ve reset the random number generator and summed across a pure select to see what that produces.

If you run this code on 12.2.0.1 or earlier you’ll see that the “diffs” report zeros and the “sums” report -160.39249. If you upgrade to 18.3 the diffs will still report zeros and some of the sums will still report -160.39249 but the sum of the CTAS will report -91.352172.

Bottom Line

If you’ve got code that you wrote to create reproducible test cases and the code uses: “create table … as select … dbms_random …” then it won’t produce the same data when you upgrade to 18.3. You’ll have to modify the code to do “create table (); insert as select …”.
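
As a minimal sketch of that workaround (my own illustration based on the script above, with a hypothetical table name), the CTAS is split into an empty create followed by an insert as select, which is the path that still reproduces the pre-18.3 values:

rem	Sketch only: create the table empty, then let the
rem	INSERT ... SELECT path generate the random data.
execute dbms_random.seed(0)

create table t_repro (normal number);

insert into t_repro
select dbms_random.normal
from	t0
;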

As of this afternoon I have 1,209 test scripts on my laptop that use the dbms_random package to model data distribution patterns. It is almost certain that I will end up modifying every single one of them eventually.

There are words to express how I feel about this – but not ones that I would consider publishing.

(EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache

Syed Jaffar - Mon, 2018-10-08 07:19
While reviewing the latest Exachk report on an X5-2 machine, the following critical alarms were observed:

The details show the following description:

MOS Note 1270094.1 explains the following:

According to MOS Doc 2356460.1, this behavior is due to bug 27372426, which applies to Exadata storage software versions 12.2.1.1.0 to 12.2.1.1.5 and 18.1.0.0.0 to 18.1.3.0.0.

Impact:

If you are running GI 11.2.0.4 or 12.1 with the Exadata versions listed above and with Flash Cache configured in write-back mode, the following ORA errors may be encountered during ASM rebalancing operations, disk group mounts and disk group consistency checks. Review the ASM alert.log:

ORA-00600: internal error code, arguments: [kfdAuDealloc2]

WARNING: cache read a corrupt block: group=1(DATA) fn=381 indblk=27 disk=110 (DATA_CD_04_DM01CEL01)
ORA-15196: invalid ASM block header [kfc.c:26411] [endian_kfbh]

ORA-00600: internal error code, arguments: [kfrValAcd30]

ORA-00600: internal error code, arguments: [kfdAuPivotVec2], [kfCheckDG]

ERROR: file +DATADG1.3341.962251267: F3341 PX38530 => D55 A765853 => F1677
PX1647463: fnum mismatch
ERROR: file +DATADG1.3341.962251267: F3341 PX38531 => D15 A205431 => F3341
PX56068: xnum mismatch



Workaround:
To fix the bug, the following action plan needs to be applied:

1) Update the storage server software to >=12.2.1.1.6 or >=18.1.4.0.0
2) Apply patch 27510959 and scan the ASM metadata (a sketch of such a check is shown below)
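
As a minimal sketch of such a metadata scan (my own illustration; check MOS Doc ID 2356460.1 for the exact recommended procedure, and replace DATA with your disk group name), a logical consistency check of a disk group can be triggered from the ASM instance:

-- Sketch only: run as SYSASM on the ASM instance; any inconsistencies
-- found are reported in the ASM alert.log.
alter diskgroup DATA check norepair;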


Note:

The issue doesn't impact GI 12.2, or environments running an Exadata software version higher than those mentioned in this bug.
The bug also doesn't apply if the Flash Cache mode is WriteThrough.

References:

Exadata Critical Issues (Doc ID 1270094.1)


(EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache (Doc ID 2356460.1)
