Feed aggregator

Canada Alliance 2018

Jim Marion - Fri, 2018-10-05 10:02

Calling all Canadian Higher Education and Government customers! Canada Alliance is next month and boasts a great lineup of speakers and sessions. JSMPROS will host two pre-conference workshops Monday prior to the main conference. Please bring a laptop if you wish to participate. Please note: space is limited.

  • Configure, Don't Customize! PeopleSoft Page and Field Configurator – Monday, November 12, 10:00 AM–12:30 PM, Coast Hotel: Acadia Room
  • Advanced Query – Monday, November 12, 1:30 PM–4:00 PM, Coast Hotel: Acadia Room

For further details, please visit the Canada Alliance 2018 Workshop page. I look forward to seeing you soon!

Walking through the Zürich ZOUG Event – September the 18th

Yann Neuhaus - Fri, 2018-10-05 09:38

What a nice and interesting new experience… my first ZOUG event! It was a great opportunity to meet some great people and listen to some great sessions. I had the chance to attend Markus Michalewicz's sessions. Markus is Senior Director, Database HA and Scalability Product Management at Oracle, and was the special guest at this event.

https://soug.ch/events/soug-day-september-2018/

The introduction session was given by Markus. He presented the different HA solutions as a lead-in to MAA. Oracle Maximum Availability Architecture (MAA) is, from my understanding, more of a service delivered by Oracle to help customers find their best solution at the lowest cost and complexity according to their constraints.

I was really looking forward to the next session, from Robert Bialek of Trivadis, about Oracle database service high availability with Data Guard. Bialek gave a nice presentation of Data Guard and how it works, and provided some good tips on how it should be configured.

The best session was certainly the next one, given by my colleague Clemens Bleile, Oracle Technology Leader at dbi. What a great sharing of experience from his past years as one of the managers of the Oracle Support Performance team EMEA. Clemens talked about SQLTXPLAIN, a performance troubleshooting tool, covering its history and its future, and also presented the SQLT tool.

The last session I attended was chaired by Markus. The subject was the autonomous database and all the automatic features that have arrived with the latest Oracle releases. Will this enable databases to manage themselves? The future will tell us. :-)

Thanks to dbi management for giving me the opportunity to join this ZOUG event!

 

The post Walking through the Zürich ZOUG Event – September the 18th appeared first on Blog dbi services.

Join Cardinality – 2

Jonathan Lewis - Fri, 2018-10-05 09:37

In the previous note I posted about Join Cardinality I described a method for calculating the figure that the optimizer would give for the special case where you had a query that:

  • joined two tables
  • used a single-column to join on equality
  • had no nulls in the join columns
  • had a perfect frequency histogram on the columns at the two ends of the join
  • had no filter predicates associated with either table

The method simply said: “Match up rows from the two frequency histograms, multiply the corresponding frequencies” and I supplied a simple SQL statement that would read and report the two sets of histogram data, doing the arithmetic and reporting the final cardinality for you. In an update I also added an adjustment needed in 11g (or, you might say, removed in 12c) where gaps in the histograms were replaced by “ghost rows” with a frequency that was half the lowest frequency in the histogram.
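
To make that 11g adjustment concrete (a made-up illustration, not taken from the original data): suppose a value appears in one table’s histogram with a frequency of 4 but is missing from the other table’s histogram, whose lowest recorded frequency is 1. In 11g the missing value would be treated as present with a frequency of 0.5 (half of 1), contributing 0.5 * 4 = 2 to the total; from 12c onwards a missing value simply contributes nothing.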

This is a nice place to start as the idea is very simple, and it’s likely that extensions of the basic idea will be used in all the other cases we have to consider. There are 25 possibilities that could need separate testing – though only 16 of them ought to be relevant from 12c onwards. Oracle allows for four kinds of histograms – in order of how precisely they describe the data they are:

  • Frequency – with a perfect description of the data
  • Top-N (a.k.a. Top-Frequency) – which describes all but a tiny fraction (ca. one bucket’s worth) of data perfectly
  • Hybrid – which can (but doesn’t usually, by default) describe up to 2,048 popular values perfectly and gives an approximate distribution for the rest
  • Height-balanced – which can (but doesn’t usually, by default) describe at most 1,024 popular values with some scope for misinformation.

Finally, of course, we have the general case of no histogram, using only 4 numbers (low value, high value, number of rows, number of distinct values) to give a rough picture of the data – and the need for histograms appears, of course, when the data doesn’t look anything like an even distribution of values between the low and high with close to “number of rows”/”number of distinct values” for each value.

So there are 5 possible statistical descriptions for the data in a column – which means there are 5 * 5 = 25 possible options to consider when we join two columns, or 4 * 4 = 16 if we label height-balanced histograms as obsolete and ignore them (which would be a pity because Chinar has done some very nice work explaining them).

Of course, once we’ve worked out a single-column equijoin between two tables there are plenty more options to consider:  multi-column joins, joins involving range-based predicates, joins involving more than 2 tables, and queries which (as so often happens) have predicates which aren’t involved in the joins.

For the moment I’m going to stick to the simplest case – two tables, one column, equality – and comment on the effects of filter predicates. It seems to be very straightforward, as I’ll demonstrate with a new model:

rem
rem     Script:         freq_hist_join_03.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem

execute dbms_random.seed(0)

create table t1(
        id      number(8,0),
        n0040   number(4,0),
        n0090   number(4,0),
        n0190   number(4,0),
        n0990   number(4,0),
        n1      number(4,0)
)
;

create table t2(
        id      number(8,0),
        n0050   number(4,0),
        n0110   number(4,0),
        n0230   number(4,0),
        n1150   number(4,0),
        n1      number(4,0)
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   40) + 1                   n0040,
        mod(rownum,   90) + 1                   n0090,
        mod(rownum,  190) + 1                   n0190,
        mod(rownum,  990) + 1                   n0990,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   50) + 1                   n0050,
        mod(rownum,  110) + 1                   n0110,
        mod(rownum,  230) + 1                   n0230,
        mod(rownum, 1150) + 1                   n1150,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
end;
/

You’ll notice that in this script I’ve created empty tables and then populated them. This is because of an anomaly that appeared in 18.3 when I used “create as select”, and it should allow the results from 18.3 to be an exact match for 12c. You don’t need to pay much attention to the Nxxx columns; they were there so I could experiment with a few variations in the selectivity of filter predicates.

Given the purpose of the demonstration I’ve gathered histograms on the column I’m going to use to join the tables (called n1 in this case), and here are the summary results:


TABLE_NAME           COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS
-------------------- -------------------- --------------- ------------ -----------
T1                   N1                   FREQUENCY                119         119
T2                   N1                   FREQUENCY                124         124

     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2488       2619    6,516,072
         1       2693       2599    6,999,107
         2       2635       2685    7,074,975
         3       2636       2654    6,995,944
...
       113          1          3            3
       115          1          2            2
       116          4          3           12
       117          1          1            1
       120          1          2            2
                                 ------------
sum                               188,114,543
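
For reference, here’s a minimal sketch (not the exact script from the previous post) of the kind of query that can produce a report like the one above. It assumes simple frequency histograms on t1.n1 and t2.n1, and relies on endpoint_number in user_tab_histograms holding the cumulative frequency, so that the per-value frequency is the difference from the preceding endpoint_number; summing the product column reproduces the total.

rem
rem     Sketch only: match the two frequency histograms on value
rem     and multiply the corresponding frequencies.
rem

with f1 as (
        select
                endpoint_value   value,
                endpoint_number -
                        lag(endpoint_number,1,0) over (order by endpoint_number) frequency
        from    user_tab_histograms
        where   table_name  = 'T1'
        and     column_name = 'N1'
),
f2 as (
        select
                endpoint_value   value,
                endpoint_number -
                        lag(endpoint_number,1,0) over (order by endpoint_number) frequency
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'N1'
)
select
        f1.value,
        f1.frequency,
        f2.frequency,
        f1.frequency * f2.frequency   product
from
        f1, f2
where
        f2.value = f1.value
order by
        f1.value
;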

We’ve got frequency histograms, and we can see that they don’t have a perfect overlap. I haven’t printed every single line from the cardinality query, just enough to show you the extreme skew, a few gaps, and the total. So here are three queries with execution plans:


set serveroutput off

alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));


select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
and     t2.n1150 = 25
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

I’ve queried the pure join – the count was exactly the 188,114,543 predicted by the cardinality query, of course – then I’ve applied a filter to one table, then to both tables. The first filter n0990 = 20 will (given the mod(,990) definition) identify one row in 990 from the original 100,000 in t1; the second filter n1150 = 25 will identify one row in 1150 from t2. That’s filtering down to 101 rows and 87 rows respectively from the two tables. So what do we see in the plans:


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:23.47 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:23.47 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    188M|    188M|00:00:23.36 |     748 |  6556K|  3619K| 8839K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.02 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.02 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    190K|    200K|00:00:00.02 |     748 |  2715K|  2715K| 1647K (0)|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T1"."N0990"=20)



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    165 |    165 |00:00:00.01 |     748 |  2715K|  2715K| 1678K (0)|
|*  3 |    TABLE ACCESS FULL| T2   |      1 |     87 |     87 |00:00:00.01 |     374 |       |       |          |
|*  4 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T2"."N1150"=25)
   4 - filter("T1"."N0990"=20)


The first execution plan shows an estimate of 188M rows – but we’ll have to check the trace file to confirm whether that’s only an approximate match to our calculation, or whether it’s an exact match. So here’s the relevant pair of lines:


Join Card:  188114543.000000 = outer (100000.000000) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 188114543 Computed: 188114543.000000

Yes, the cardinality calculation and the execution plan estimates match perfectly. But there are a couple of interesting things to note. First, Oracle seems to be deriving the cardinality by multiplying the individual cardinalities of the two tables with a figure it calls “sel” – the thing that Chinar Aliyev has labelled Jsel, the “Join Selectivity”. Secondly, Oracle can’t do arithmetic (or, removing tongue from cheek, the value reported for the join selectivity is given to only 6 decimal places but stored to far greater precision). What is the Join Selectivity, though? It’s the figure we derive from the cardinality SQL divided by the cardinality of the cartesian join of the two tables – i.e. 188,114,543 / (100,000 * 100,000).
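
Spelling that arithmetic out with the figures from this example:

Jsel = 188,114,543 / (100,000 * 100,000)
     = 188,114,543 / 10,000,000,000
     = 0.0188114543

which rounds to the 0.018811 reported in the trace file.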

With the clue from the first trace file, can we work out why the second and third plans show 190K and 165 rows respectively? How about this – multiply the filtered cardinalities of the two separate tables, then multiply the result by the join selectivity:

  • 1a)   n0990 = 20: gives us 1 row in every 990.    100,000 / 990 = 101.010101…    (echoing the rounded execution plan estimate).
  • 1b)   100,000 * (100,000/990) * 0.0188114543 = 190,014.69898989…    (which is in the ballpark of the plan and needs confirmation from the trace file).

 

  • 2a)   n1150 = 25: gives us 1 row in every 1,150.    100,000 / 1,150 = 86.9565217…    (echoing the rounded execution plan estimate)
  • 2b)   (100,000/990) * (100,000/1,150) * 0.0188114543 = 165.2301651..    (echoing the rounded execution plan estimate).

Cross-checking against extracts from the 10053 trace files:


Join Card:  190014.689899 = outer (101.010101) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 190015 Computed: 190014.689899

Join Card:  165.230165 = outer (86.956522) * inner (101.010101) * sel (0.018811)
Join Card - Rounded: 165 Computed: 165.230165

Conclusion.

Remembering that we’re still looking at very simple examples with perfect frequency histograms: it looks as if we can work out a “Join Selectivity” (Jsel) – the selectivity of a “pure” unfiltered join of the two tables – by querying the histogram data, and then use that value to calculate cardinalities for simple two-table equi-joins by multiplying together the individual (filtered) table cardinality estimates and scaling by the Join Selectivity.
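
Restating the rule with the figures from the third example above:

Join Card = (filtered cardinality of t2) * (filtered cardinality of t1) * Jsel
          = 86.956522 * 101.010101 * 0.0188114543
          = 165.23…

which matches the optimizer’s computed figure of 165.230165.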

Acknowledgements

Most of this work is based on a document written by Chinar Aliyev in 2016 and presented at the Hotsos Symposium the same year. I am most grateful to him for responding to a recent post of mine and getting me interested in spending some time to get re-acquainted with the topic. His original document is a 35 page pdf file, so there’s plenty more material to work through, experiment with, and write about.

 

Alliance Down Under 2018 Workshops

Jim Marion - Fri, 2018-10-05 08:15

Today marks the 30-day countdown to Alliance Down Under, an incredible opportunity for Oracle customers to network, share experiences, and learn more about Oracle products. On Monday and Tuesday, November 5 - 6, I am partnering with Presence of IT to deliver several pre-conference workshops at Alliance Down Under. For more details and to register, please visit the Alliance Down Under pre-conference workshop page. Workshops available:

  • Building better-than-breadcrumbs navigation
  • Configure, Don’t Customize! Event Mapping and Page and Field Configurator
  • Chatbot Workshop
  • Data Migration Framework: Deep Dive
  • App Designer for Functional Business Analysts (including building CIs for Excel to CI)
  • Advanced PeopleTools Tips & Techniques
  • Fluid Design/Configuration for Functional Business Analysts

We look forward to seeing you there!

[BLOG] Oracle EBS (R12) Financial: Production Based Depreciation in Fixed Assets

Online Apps DBA - Fri, 2018-10-05 04:23

✔What is Depreciation ✔What are the fixed assets ✔Production Based Depreciation in Fixed Assets & much more… Check at https://k21academy.com/financial13

The post [BLOG] Oracle EBS (R12) Financial: Production Based Depreciation in Fixed Assets appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Cloud Jump Start With Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is the cloud for your most demanding workloads. It combines the elasticity and utility of public cloud with the granular control, security, and predictability of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The identity column jumps its value if using merge into statement

Tom Kyte - Thu, 2018-10-04 22:06
Hi, I have one table defined as below; one of the columns is defined as an identity type: create table TEST ( col1 VARCHAR2(10), col2 NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 999999999999999...
Categories: DBA Blogs

dbms_lob.compare and length

Tom Kyte - Thu, 2018-10-04 22:06
Hello, I'm trying within a trigger to compare two clobs to see if there is any change. I am trying to prevent any unnecessary writes. Prior to writing to the audit trail I compare two values. v_clob_compare := dbms_lob.compare( :old.clob_tex...
Categories: DBA Blogs

Using function in conjunction with WITH query clause

Tom Kyte - Thu, 2018-10-04 22:06
Bit of a newbie, and hoping I can get pointed in the right direction. I've simplified things to demonstrate the issue I'm experiencing (and I'm really struggling to get a clear answer on other posts). When running the following: with f...
Categories: DBA Blogs

SQL Query to Convert Ten Rows with One Column to Five Rows With One Column

Tom Kyte - Thu, 2018-10-04 22:06
I have a table with a column named "value". There are 10 rows in the table. The desired output is to be displayed as two columns: the first 5 rows as column A and rows 6 to 10 as column B, next to each other as 5 rows of data, like this: A B...
Categories: DBA Blogs

Calculate a variable date value and use it in a where clause to return all rows after that date

Tom Kyte - Thu, 2018-10-04 22:06
Long time SQL user of many flavors but brand new to PL/SQL and struggling to learn the "Oracle way". I've seen MANY examples of using variables in queries online and in documentation, but I've been unsuccessful finding a sample of what I want to do ...
Categories: DBA Blogs

SP execution plan should depend on input parameter

Tom Kyte - Thu, 2018-10-04 22:06
Hi guys, I have a SP having input parameters and the execution plan should depend on the parameters provided to the procedure. Ex : PROCEDURE GetData( DataType int, DataValue int ) I want this procedure to search DataValue in column1 if DataType =...
Categories: DBA Blogs

Understanding SQL Profiles

Tom Kyte - Thu, 2018-10-04 22:06
Hi Tom, My understanding of using SQL Profiles has always been that they would prevent (frequent) changes in access paths of SQL statement. This morning I noticed that, despite the fact that an SQL profile was connected to a statement and statias...
Categories: DBA Blogs

Oracle Utilities Technical Best Practices whitepaper updated

Anthony Shorten - Thu, 2018-10-04 18:21

With the release of Oracle Utilities Application Framework V4.3.0.6.0 the Technical Best Practices whitepaper has been updated with the latest advice and latest information.

The following changes have been made:

  • Overview of the Health Check capability
  • Preparing your Implementation for the Oracle Cloud - An overview of the objects that need to be changed to prepare for the migration from on-premise to the Oracle Cloud
  • Optimization techniques for minimizing costs.

The latest version is located in Technical Best Practices (Doc Id: 560367.1) available from My Oracle Support.

Oracle’s AI-driven Risk Management Makes Corporate Finances More Secure

Oracle Press Releases - Thu, 2018-10-04 11:00
Press Release
Oracle’s AI-driven Risk Management Makes Corporate Finances More Secure Advanced Access Controls parlay AI to help finance teams bolster security and risk analysis

Redwood Shores, Calif.—Oct 4, 2018

To help protect customers from ever-increasing fraud and security threats, Oracle today unveiled the enterprise software industry’s first AI-driven security and risk management solution. Designed specifically for Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Risk Management Cloud’s new Advanced Access Controls enable organizations to continuously monitor for segregation of duties (SOD), financial compliance (SOX), privacy risks, proprietary information and payment risks.

The new controls embed self-learning, artificial intelligence (AI) techniques to constantly examine all users, roles and privileges against a library of active security rules. The offering includes more than 100 best practices (configurable rules) across general ledger, payables, receivables and fixed assets.

“As the pace of business accelerates, organizations can no longer rely on time-consuming manual processes, which leave them vulnerable to fraud and human error,” said Laeeq Ahmed, managing director at KPMG. “With adaptive capabilities and AI, products such as Oracle Risk Management Cloud can help organizations manage access controls and monitor activity at scale to protect valuable data and reduce exposure to risk.”

Key benefits of AI-driven Security & Risk Management include:

  • Continuous protection: Constant monitoring of user and application activity

  • Instant best practices: More than 100 proven ERP security rules

  • Self-learning: Embedded AI and self-learning for precise results

  • Augmented incident response: Ensures that issues are directed to analysts for tracking, investigation and closure

“On-going disruption in the marketplace and regulatory landscape presents continually evolving operational and financial risks,” said Bill Behen, principal at Grant Thornton. “Oracle’s unique approach to risk management and expertise applying AI technology enables organizations to securely move to the cloud and continuously protect their business from a host of external and internal threats.”

The pre-packaged audit-approved security rules automate access analysis during the role design phase to significantly accelerate ERP implementations. In addition, the intuitive workbench, visualization and simulation features within Advanced Access Controls make it easy to add new rules and further optimize user access. Once live, the solution continuously monitors and automatically routes incidents to security analysts.

To help customers analyze complex, recursive and dynamic security data across all users, roles and privileges, Advanced Access Controls uses graph-based analysis and self-learning algorithms. This enables organizations to accurately and reliably review and visualize the entire path by which any user is able to access and execute sensitive functions.

“Advanced Access Controls automate the time-consuming analysis needed to protect business data from insider threats, fraud, misuse and human error,” said Sid Sinha, vice president of Risk Management Cloud Product Strategy, at Oracle. “This service is part of an integrated, practical solution to effectively protect information in business applications using the latest data analysis and exception management techniques.”

For more information on Oracle Advanced Access Controls and Oracle Risk Management Cloud, go to cloud.oracle.com/risk-management-cloud.

To learn more about Risk Management Cloud at Oracle OpenWorld, please visit the sessions catalog.

Contact Info
Bill Rundle
Oracle PR
+1 650 506 1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


New OA Framework 12.2.6 Update 14 Now Available

Steven Chan - Thu, 2018-10-04 10:31

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Oracle Application Framework (FWK) Release 12.2.6 Bundle 14 (Patch 28183913:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

In addition, this latest bundle patch includes fixes for the following issues:

  • The details of an expanded row are not visible in the viewport area when a table has more than 30 rows.
  • On the iSupport Service Request creation page, the length of the Problem Summary data input field is inadequate.
  • Messages filtered for viruses are not added to the confirmation message in a specific product flow.
  • Validation of required fields is not triggered when a new row is added to a table.
  • With the application session language set to Arabic, in the IE 11 browser a popup does not appear when the Delete button is clicked.


Categories: APPS Blogs

Local Governments to Modernize Community Development with Oracle Cloud

Oracle Press Releases - Thu, 2018-10-04 07:00
Press Release
Local Governments to Modernize Community Development with Oracle Cloud New Public Sector Community Development solution reduces permitting and licensing complexity to drive economic development

Redwood Shores, Calif.—Oct 4, 2018

Oracle has launched Oracle Public Sector Community Development, enabling local governments to improve quality-of-life for their constituents with faster, more efficient land management. A modern cloud-native application for state and local government agencies, the offering transforms cumbersome permitting and licensing into an end-to-end solution for reliable execution of building regulatory processes that can foster economic growth, while helping deliver public safety and accountability.

The new SaaS (Software-as-a-Service) application leverages Oracle’s scalable and secure cloud computing capabilities, advanced analytics and platform development services. The combination of market-leading, emerging technologies such as chatbots and artificial intelligence (AI), and pre-built, easy-to-configure applications, designed specifically for government agencies, modernizes how local governments deliver services and results for their citizens and jurisdictions.

“Today’s digitally-savvy citizens and business owners expect to interact with their local government in the same way they do so with commercial companies offering multichannel, always available, 24/7 access to goods and services,” said Mark Johnson, senior vice president, Oracle Public Sector. “The Oracle Public Sector Community Development solution helps make this possible for state and local governments.”

Drive Economic Development While Helping Provide Compliance

Oracle Public Sector Community Development is a fully-configurable, built-for-purpose cloud solution for state and local government that:

  • Modernizes the compliance and regulatory process for land-use and building infrastructure, focusing first on building permits and inspections.

  • Accommodates diverse permit types and associated workflows that are easily configurable and extensible to each agency’s unique needs through built-in process automation tools from Oracle Integration Cloud, designed for use by departmental staff—not constrained IT personnel.

  • Allows frictionless interactions between the public and local government with an intuitive, role-based user experience on any device, enhanced with guided interactions and natural voice interactions.

  • Increases strategic insight and enables better decision making through contextually relevant data visualization, leveraging Oracle’s business intelligence and analytics.

  • Provides deeper permit understanding and greater processing efficiency through integration with Esri’s ArcGIS developer API. Esri is an established provider of location intelligence.

  • Empowers inspectors to ensure public safety and accountability with a mobile application, enforcing code compliance quickly and easily while accelerating occupancy rates.

Lower Costs and Innovate Faster

Oracle Public Sector Community Development is offered as a subscription cloud service. This eliminates the need for capital expenditures, enabling government to innovate faster, and lowers the total cost of ownership since cloud services are rapid to deploy and easy to maintain. “As-a-service” solutions lower the demand on IT staff, scale with the growth of a city or county, and allow government agencies to continuously receive new functionality.

“Governments face technology demands from two directions,” wrote Gartner in a recent report1. “The first is from employees, with consumer technology and employee expectations having surpassed enterprise technology and government workplace experiences. The second is from the pace of technology generally, where expectations of citizens and commercial services have outstripped governments’ ability to keep pace.”

Leveraging its unique scale and extensive investments in research and development, Oracle will continue to rapidly deliver new capabilities to market. Oracle’s roadmap includes expanding its Community Development offerings to include planning and zoning and code enforcement functionality and delivering new applications that address additional regulatory challenges at the local and state level, including business and professional licenses.

By employing open standards and building on the common technology foundation of Oracle’s robust cloud applications for finance, citizen services, procurement, and human capital management, governments can streamline operations and deploy integrated, extensible, end-to-end cross-agency processes that facilitate a holistic digital platform effort.

Today, thousands of public sector customers use Oracle’s digital solutions to transform how their constituents engage with government. Across the nation, cities and states have harnessed the power of cloud, IoT, analytics and big data to predict and prepare for a better future for all citizens. Read their stories here.

For more information, go to www.oracle.com/communitydevelopment.

1 Source, Gartner, Inc., A Master CIO in Government, Rick Holgate, Alia Mendonsa, and Alvaro Mello, February 23, 2018.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


[BLOG] 1z0-161 Oracle JAVA Cloud Service (JCS) Certification Roadmap

Online Apps DBA - Thu, 2018-10-04 06:38

Are you looking forward to clearing the Oracle JAVA Cloud Service Certificate Associate Exam (1Z0-161)? Get clarity about the exam by visiting https://k21academy.com/jcs15 where we cover: ✔Why Java Cloud Service Certification (1Z0-161)? ✔Certification Details ✔Oracle 1Z0-161 Certification Topics & much more…

The post [BLOG] 1z0-161 Oracle JAVA Cloud Service (JCS) Certification Roadmap appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Understanding Distribution in #Exasol

The Oracle Instructor - Thu, 2018-10-04 04:12
Exasol doesn’t need much administration but getting distribution right matters

Exasol uses a clustered shared-nothing architecture with many sophisticated internal mechanisms to deliver outstanding performance without requiring much administration. Getting the distribution of rows between cluster nodes right is one of the few critical tasks left, though. To explain this, let’s say we have two tables t1 and t2:

The two tables are joined on the column JoinCol, while WHERE conditions for filtering are done with the column WhereCol. Other columns are not shown to keep the sketches small and simple. Now say these two tables are stored on a three-node cluster. Again, for simplicity only active nodes are on the sketch – no reserve nodes or license nodes. We also ignore the fact that small tables will be replicated across all active nodes.

Distribution will be random if no distribution key is specified

Without specifying a distribution key, the rows of the tables are distributed randomly across the nodes like this:

Absence of proper distribution keys: global joins

The two tables are then joined:

SELECT <something> FROM t1 JOIN t2 ON t1.JoinCol = t2.JoinCol;

Internally, this is processed as a global join, which means network communication between the nodes is required on behalf of the join. This is the case because some rows do not find local join partners on the same node:

Distribution on join columns: local joins

If the two tables were distributed on their join columns with statements like these

ALTER TABLE t1 DISTRIBUTE BY JoinCol;

ALTER TABLE t2 DISTRIBUTE BY JoinCol;

then the same query can be processed internally as a local join:

Here every row finds a local join partner on the same node, so no network communication between the nodes is required on behalf of the join. The performance with this local join is much better than with the global join, although it’s the same statement as before.
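
As a side note – a sketch only, so check the exact syntax against your Exasol version – the distribution key can also be declared when a table is created, instead of afterwards with ALTER TABLE (the column types here are made up for illustration):

-- Declaring the distribution key at creation time (sketch)
CREATE TABLE t1 (
    JoinCol   DECIMAL(18,0),
    WhereCol  VARCHAR(10),
    DISTRIBUTE BY JoinCol
);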

Why you shouldn’t distribute on WHERE-columns

While it’s generally a good idea to distribute on JOIN columns, it is by contrast a bad idea to distribute on columns that are used for filtering with WHERE conditions. If both tables had been distributed on their WhereCol columns, it would look like this:

This distribution is actually worse than the initial random distribution! Not only does it cause global joins between the two tables, as already explained, but statements like e.g.

<Any DQL or DML> WHERE t2.WhereCol='A';

will utilize only one node (the first node in this example, which holds all the rows satisfying this WHERE condition), and that effectively disables one of Exasol’s best strengths, the Massively Parallel Processing (MPP) functionality. This distribution leads to poor performance because all the other nodes in the cluster have to stand by idle while one node does all the work alone.

Examine existing distribution with iproc()

The function iproc() helps investigate the existing distribution of rows across cluster nodes. This statement shows the distribution of the table t1:

SELECT iproc(),COUNT(*) FROM t1 GROUP BY 1 ORDER BY 1;

Evaluate the effect of distribution keys with value2proc()

The function value2proc() can be used to display the effect that a (new) distribution key would have:

SELECT home_node,COUNT(*) FROM (SELECT value2proc(JoinCol) AS home_node FROM t1) GROUP BY 1 ORDER BY 1;
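
Putting the two functions together, a possible workflow (a sketch assembled only from the statements shown above) could look like this:

-- 1. Check how t1 is currently distributed across the nodes.
SELECT iproc(),COUNT(*) FROM t1 GROUP BY 1 ORDER BY 1;

-- 2. Preview the distribution a key on JoinCol would produce.
SELECT home_node,COUNT(*) FROM (SELECT value2proc(JoinCol) AS home_node FROM t1) GROUP BY 1 ORDER BY 1;

-- 3. If the preview looks reasonably even, set the key.
ALTER TABLE t1 DISTRIBUTE BY JoinCol;
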
Conclusion

Distribution on JOIN-columns leads to local joins which perform better than global joins: Do that!

Distribution on WHERE-columns leads to global joins and disables the MPP functionality, both causing poor performance: Don’t do that!

Categories: DBA Blogs

“Hidden” Efficiencies of Non-Partitioned Indexes on Partitioned Tables Part I (The Jean Genie)

Richard Foote - Thu, 2018-10-04 03:00
When it comes to indexing a partitioned table, many automatically opt for Local Indexes, as it’s often assumed they’re simply easier to manage and more efficient than a corresponding Global Index. Having smaller index structures that are aligned to how the table is partitioned certainly has various advantages. The focus in this little series is on […]
Categories: DBA Blogs
