Feed aggregator

dbms_log

Jonathan Lewis - Fri, 2018-10-12 10:39

I’ve been a long-time, though occasional, user of the undocumented dbms_system package, typically using it to write messages or insert break marks in trace files (or the alert log). Thanks to an email from Cary Millsap I’ve recently discovered that the procedures for writing to trace files have been copied to a separate dbms_log package – which is nice, because some of the things in dbms_system shouldn’t be made available to general code. For example, dbms_system has a procedure kcfrms which resets a number of the “max time” columns in various dynamic performance views. It can be very useful occasionally – you might even want to call it just before or just after every AWR snapshot – but I’d rather that no-one else was able to call it if I thought that I needed to do so.

The dbms_log package is also (currently – even in 18.3) undocumented, but maybe one day soon it will appear in the PL/SQL Packages and Types reference manual. The procedures available in the package are as follows:

SQL> desc dbms_log
PROCEDURE KSDDDT
PROCEDURE KSDFLS
PROCEDURE KSDIND
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 LVL				BINARY_INTEGER		IN
PROCEDURE KSDWRT
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DEST				BINARY_INTEGER		IN
 TST				VARCHAR2		IN
  • ksdddt moves to a new line in the trace file, writes the date and moves to the next line – but it won’t do anything if the trace file has not already been opened, so if you want a datestamp at the top of the output you actually have to start with a ksdwrt or ksdind call.
  • ksdfls flushes any pending writes to the trace file / alert log and closes the file – which isn’t relevant in my example, but would make a difference to when you see the text that has been written from inside a pl/sql block.
  • ksdind writes an “indent” of lvl colon (‘:’) symbols to the trace file. This is a one-off effect, it doesn’t set an indent for all future lines, it merely writes the ‘:’ so that the next call to ksdwrt appends its text after the colons.
  • ksdwrt writes tst to the trace file if dest = 1, to the alert log if dest = 2, and to both if dest = 3, adding a new-line at the end of the text.

Here’s a sample of code calling the procedures to write something to my trace file:


execute dbms_log.ksdddt
execute dbms_log.ksdwrt(1,'Starting')
execute dbms_log.ksdddt
execute dbms_log.ksdwrt(1,'Underlining')
execute dbms_log.ksdind(20)
execute dbms_log.ksdwrt(1,'Indented')
execute dbms_log.ksdwrt(1,'Not Indented')
execute dbms_log.ksdind(30)
execute dbms_log.ksdddt
execute dbms_log.ksdwrt(1,'Finished')
execute dbms_log.ksdind(30)
execute dbms_log.ksdfls


Here’s the text that appears in the trace files:


Starting

*** 2018-10-04T16:31:15.525515+01:00 (ORCL(3))
Underlining
::::::::::::::::::::Indented
Not Indented
::::::::::::::::::::::::::::::
*** 2018-10-04T16:31:15.532881+01:00 (ORCL(3))
Finished
::::::::::::::::::::::::::::::

Note how the call to ksdddt in line 1 of the code didn’t write a date into the trace file. The call to ksdwrt in line 2 writes ‘Starting’ and moves to a new line, so we get a blank line when the call to ksdddt in line 3 moves to a new line and writes the date. At line 5 we “indent 20”, so the ksdwrt at line 6 starts after the string of colons, then moves to a new line where the indent is not repeated. We indent again at line 8, which leaves us at the end of a line, so when we call ksdddt it moves to the start of the next line and writes the date there – we don’t get a blank line.

Footnote: when I saw Cary Millsap’s note I assumed that the procedures had been copied across in a recent version of Oracle; in fact dbms_log has been around since at least 11.2.0.4

Footnote 2: While checking my library for references to dbms_system I came across a script I’d used to create a local version of dbms_system that allowed me to execute a call to “dbms_system.set_bool_param_in_session(‘#_SILVER_BULLET’, true)”. I used it at an IOUG conference a few years ago to demonstrate that if you set this “very hidden” parameter to true then some of your queries could run faster. (What the call actually did was enable query rewrite and disable function-based indexes so that a special time-wasting index I’d created couldn’t get used.)

 

Get More from Your Oracle Premier Support Investment

Chris Warticki - Fri, 2018-10-12 10:31

Our conference sessions give you security and how-to information that you can apply when you are back on the job. Hear directly from Oracle Support experts and Oracle Support customers, and learn how you can maximize the value Oracle Premier Support provides your company.

Bookmark the Focus on Oracle Premier Support document for quick reference on your cell phone and you will have all the Premier Support sessions and activities available at your fingertips. Click on the + button for a session to pre-register and add the session to your Oracle OpenWorld schedule.

Oracle Premier Support—Hear from Customers

Oracle Support customers tell the story best—why they rely on Oracle Premier Support and how to get more from their Oracle products—from best practices for security, compliance, and governance to moving Oracle Applications Unlimited products to Oracle Cloud Infrastructure. Whether your Oracle Applications Unlimited products run on premises or in the cloud, Oracle Premier Support is there with you.

 

See all the Oracle Premier Support customer sessions.

 

Oracle Premier Support Security—Hear from Security Experts

Oracle Premier Support provides customers with an enhanced security posture. Presented by security experts with operational security experience, these sessions will give you insight into current security trends, common causes of security breaches, and specific recommendations to help you secure your environment.

 

See all Oracle Premier Support Security Sessions

 

Oracle Applications Unlimited—Through at Least 2030

Get the latest details on Oracle Applications Unlimited products and their availability—whether you run Applications Unlimited products on premises or in Oracle Cloud Infrastructure.

See all Oracle Applications Unlimited sessions

 

Oracle Platinum Services

See how Oracle Platinum Services helps maximize availability and reduce support complexity through remote fault monitoring, decreased response times, and patching performed by Oracle experts—at no additional cost.

See all the Oracle Platinum Services sessions

 

Find all the Oracle Support sessions, activities, and events on the Support and Services Oracle OpenWorld page.

Getting started with Red Hat Satellite – Installation

Yann Neuhaus - Fri, 2018-10-12 06:44

This is the start of a series of posts I have wanted to write for a long time: Getting started with Red Hat Satellite. Just in case you don’t know what it is, this statement from the official Red Hat website summarizes it quite well: “As your Red Hat® environment continues to grow, so does the need to manage it to a high standard of quality. Red Hat Satellite is an infrastructure management product specifically designed to keep Red Hat Enterprise Linux® environments and other Red Hat infrastructure running efficiently, properly secured, and compliant with various standards.” This first post is all about the installation of Satellite, which is surprisingly easy. Let’s go.

What you need as a starting point is a Red Hat Enterprise Linux minimal installation, either version 6 or 7. In my case it is the latest 7 release as of today:

[root@satellite ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Of course the system should be fully registered so that you are able to install updates/fixes and additional packages (this requires a Red Hat subscription):

[root@satellite ~]$ subscription-manager list
+-------------------------------------------+
    Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.5
Arch:           x86_64
Status:         Subscribed
Status Details: 
Starts:         11/20/2017
Ends:           09/17/2019

As time synchronization is critical, that should be up and running before proceeding. For Red Hat Enterprise Linux, chrony is the tool of choice:

[root@satellite ~]$ yum install -y chrony
[root@satellite ~]$ systemctl enable chronyd
[root@satellite ~]$ systemctl start chronyd

Satellite requires a fully qualified hostname, so let’s add that to the hosts file (of course you would do this with DNS in a real environment):

[root@satellite mnt]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.11 satellite.it.dbi-services.com satellite
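As a quick sanity check (a hypothetical helper, not part of the original post), you can verify that the configured name actually has a domain part before running the installer, which expects a fully qualified hostname:

```shell
# Hypothetical pre-flight check: the installer expects a fully qualified name.
is_fqdn() {
  # succeeds if $1 contains at least one dot (i.e. has a domain part)
  [ "${1#*.}" != "$1" ]
}

fqdn="satellite.it.dbi-services.com"
if is_fqdn "$fqdn"; then
  echo "OK: $fqdn looks fully qualified"
else
  echo "WARN: $fqdn is not fully qualified"
fi
```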

As a Satellite server only makes sense when clients can connect to it, a few ports need to be opened (I am not going into the details here; that will be the topic of another post):

[root@satellite ~]$ firewall-cmd --permanent \
                                 --add-port="53/udp" --add-port="53/tcp" \
                                 --add-port="67/udp" --add-port="69/udp" \
                                 --add-port="80/tcp"  --add-port="443/tcp" \
                                 --add-port="5000/tcp" --add-port="5647/tcp" \
                                 --add-port="8000/tcp" --add-port="8140/tcp" \
                                 --add-port="9090/tcp"
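The same port list can also be expressed as a loop, which is easier to maintain. This is just a sketch: the echo prefix turns it into a dry run that only prints the commands, so drop the echo to actually apply the rules (followed by a firewall-cmd --reload to activate the permanent rules):

```shell
# Dry-run sketch: print the firewall-cmd invocations for all Satellite ports.
satellite_ports="53/udp 53/tcp 67/udp 69/udp 80/tcp 443/tcp 5000/tcp 5647/tcp 8000/tcp 8140/tcp 9090/tcp"

for p in $satellite_ports; do
  echo firewall-cmd --permanent --add-port="$p"   # drop 'echo' to apply
done
echo firewall-cmd --reload
```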

That’s basically all you need to do as preparation. There are several methods to install Satellite; I will use the downloaded ISO as the source (the so-called “Disconnected Installation”, which is what you will usually need in enterprise environments):

[root@satellite ~]$ ls -la /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso 
-rw-r--r--. 1 root root 3041613824 Oct 11 18:16 /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso

First of all the required packages need to be installed so we need to mount the iso:

[root@satellite ~]$ mount -o ro,loop /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso /mnt
[root@satellite ~]$ cd /mnt/
[root@satellite mnt]# ls
addons  extra_files.json  install_packages  media.repo  Packages  repodata  RHSCL  TRANS.TBL

Installing the packages required for Satellite is just a matter of calling the “install_packages” script:

[root@satellite mnt]$ ./install_packages 
This script will install the satellite packages on the current machine.
   - Ensuring we are in an expected directory.
   - Copying installation files.
   - Creating a Repository File
   - Creating RHSCL Repository File
   - Checking to see if Satellite is already installed.
   - Importing the gpg key.
   - Installation repository will remain configured for future package installs.
   - Installation media can now be safely unmounted.

Install is complete. Please run satellite-installer --scenario satellite

The output already tells us what to do next: executing “satellite-installer” (I will go with the defaults here, but there are many options you could already specify at this point):

[root@satellite mnt]$ satellite-installer --scenario satellite
This system has less than 8 GB of total memory. Please have at least 8 GB of total ram free before running the installer.
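This check can be reproduced up front before launching the installer; a rough sketch (Linux only, it reads /proc/meminfo; the 8 GB threshold is taken from the installer message above):

```shell
# Approximate the installer's memory pre-check: require >= 8 GB total RAM.
required_kb=$((8 * 1024 * 1024))                       # 8 GB expressed in kB
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)  # total memory in kB

if [ "$mem_kb" -lt "$required_kb" ]; then
  echo "only $((mem_kb / 1024)) MB of RAM - satellite-installer would refuse to run"
else
  echo "memory check passed ($((mem_kb / 1024)) MB)"
fi
```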

Hm, I am running this locally in a VM, so let’s increase the memory at least for the duration of the installation and try again:

[root@satellite ~]$ satellite-installer --scenario satellite
Installing             Package[grub2-efi-x64]                             [0%] [                                         ]

… and here we go. A few minutes later the configuration/installation is complete:

[root@satellite ~]$ satellite-installer --scenario satellite
Installing             Done                                               [100%] [.......................................]
  Success!
  * Satellite is running at https://satellite.it.dbi-services.com
      Initial credentials are admin / L79AAUCMJWf6Y4HL

  * To install an additional Capsule on separate machine continue by running:

      capsule-certs-generate --foreman-proxy-fqdn "$CAPSULE" --certs-tar "/root/$CAPSULE-certs.tar"

  * To upgrade an existing 6.2 Capsule to 6.3:
      Please see official documentation for steps and parameters to use when upgrading a 6.2 Capsule to 6.3.

  The full log is at /var/log/foreman-installer/satellite.log

Ready:
[screenshot: Selection_065]

Before we go into the details of how to initially configure the system in the next post, let’s check what we have running. A very good choice (at least if you ask me :) ) is to use PostgreSQL as the repository database:

[root@satellite ~]$ ps -ef | grep postgres
postgres  1264     1  0 08:56 ?        00:00:00 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
postgres  1381  1264  0 08:56 ?        00:00:00 postgres: logger process   
postgres  2111  1264  0 08:57 ?        00:00:00 postgres: checkpointer process   
postgres  2112  1264  0 08:57 ?        00:00:00 postgres: writer process   
postgres  2113  1264  0 08:57 ?        00:00:00 postgres: wal writer process   
postgres  2114  1264  0 08:57 ?        00:00:00 postgres: autovacuum launcher process   
postgres  2115  1264  0 08:57 ?        00:00:00 postgres: stats collector process   
postgres  2188  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36952) idle
postgres  2189  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36954) idle
postgres  2193  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36958) idle
postgres  2194  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36960) idle
postgres  2218  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36964) idle
postgres  2474  1264  0 08:58 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2541  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36994) idle
postgres  2542  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36996) idle
postgres  2543  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36998) idle
postgres  2609  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2618  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2627  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2630  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2631  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2632  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2634  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2660  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2667  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2668  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2672  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2677  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2684  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2685  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2689  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
root      2742  2303  0 08:59 pts/0    00:00:00 grep --color=auto postgres

Let’s quickly check whether this is a supported version of PostgreSQL:

[root@satellite ~]$ cat /var/lib/pgsql/data/PG_VERSION 
9.2
[root@satellite ~]$ su - postgres
-bash-4.2$ psql
psql (9.2.24)
Type "help" for help.

postgres=# select version();
                                                    version                                                    
---------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.2.24 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Hm, 9.2 is already out of support. It is nothing we would recommend to our customers, but as long as Red Hat itself supports it, it is probably fine. Just do not expect to get any fixes for that release from the PostgreSQL community. Going a bit further into the details, the PostgreSQL instance contains two additional users:

postgres=# \du
                             List of roles
 Role name |                   Attributes                   | Member of 
-----------+------------------------------------------------+-----------
 candlepin |                                                | {}
 foreman   |                                                | {}
 postgres  | Superuser, Create role, Create DB, Replication | {}

That corresponds to the connections to the instance we can see in the process list:

-bash-4.2$ ps -ef | egrep "foreman|candlepin" | grep postgres
postgres  2541  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36994) idle
postgres  2542  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36996) idle
postgres  2543  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36998) idle
postgres  2609  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2618  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2627  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2630  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2631  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2632  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2634  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2677  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2684  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2685  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2689  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  3143  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37114) idle
postgres  3144  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37116) idle
postgres  3145  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37118) idle
postgres  3146  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37120) idle
postgres  3147  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37122) idle

Foreman is responsible for the life cycle management and Candlepin for the subscription management. Both are fully open source and can also be used on their own. What else do we have?

[root@satellite ~]$ ps -ef | grep -i mongo
mongodb   1401     1  0 08:56 ?        00:00:08 /usr/bin/mongod --quiet -f /etc/mongod.conf run
root      3736  2303  0 09:11 pts/0    00:00:00 grep --color=auto -i mongo

In addition to the PostgreSQL instance there is also a MongoDB process running. What is it for? It is used by Katello, a Foreman plugin that brings “the full power of content management alongside the provisioning and configuration capabilities of Foreman”.

The next component is Pulp:

[root@satellite ~]# ps -ef | grep pulp
apache    1067     1  0 08:56 ?        00:00:03 /usr/bin/python /usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler
apache    1076     1  0 08:56 ?        00:00:02 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac
apache    1085     1  0 08:56 ?        00:00:11 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache    1259     1  0 08:56 ?        00:00:12 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --maxtasksperchild=2
apache    1684  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1685  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1686  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1687  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1688  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1689  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1690  1042  0 08:56 ?        00:00:01 (wsgi:pulp_forg -DFOREGROUND
apache    2002  1085  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache   17757  1259  0 09:27 ?        00:00:00 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --maxtasksperchild=2
root     18147  2303  0 09:29 pts/0    00:00:00 grep --color=auto pulp

Pulp is responsible “for managing repositories of software packages and making it available to a large numbers of consumers”. So much for the main components – we will have a more in-depth look at them in one of the next posts.

 

The post Getting started with Red Hat Satellite – Installation appeared first on Blog dbi services.

[BLOG] Introduction: Integration Cloud Service (ICS)

Online Apps DBA - Fri, 2018-10-12 05:45

Integration Cloud Service is one of the two cloud integration products launched by Oracle in recent times (the second one is SOA Cloud Service). It is a lightweight service that can be used to build simple, lightweight integrations that do not require much complexity. Start learning about ICS by visiting https://k21academy.com/ics11 where […]

The post [BLOG] Introduction: Integration Cloud Service (ICS) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[BLOG] Oracle E-Business Suite (EBS) 12.2.8 Now Available: Documentation & Download Software

Online Apps DBA - Fri, 2018-10-12 03:01

Oracle recently released EBS R12.2.8. Visit https://k21academy.com/appsdba34 to check: ✔Link to Download 12.2.8 ✔Docs to Refer to Apply 12.2.8 ✔Upgrade Path to go to R12.2.8

The post [BLOG] Oracle E-Business Suite (EBS) 12.2.8 Now Available: Documentation & Download Software appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to migrate Grid Infrastructure from release 12c to release 18c

Yann Neuhaus - Fri, 2018-10-12 02:01

Oracle Clusterware 18c builds on this innovative technology by further enhancing support for larger multi-cluster environments and improving the overall ease of use. Oracle Clusterware is leveraged in the cloud in order to provide enterprise-class resiliency where required and dynamic as well as online allocation of compute resources where needed, when needed.
Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application.
HA in consolidated environments is no longer simple active/standby failover.

In this blog we will see how to upgrade our Grid Infrastructure stack from 12cR2 to 18c.

Step 1: You are required to patch your GI home with patch 27006180

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/OPatch/opatchauto apply /u90/Kit/27006180/ -oh /u91/app/grid/product/12.2.0/grid/

Performing prepatch operations on SIHA Home........

Start applying binary patches on SIHA Home........

Performing postpatch operations on SIHA Home........

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u91/app/grid/product/12.2.0/grid successfully
OPatchAuto successful.

Step 2: Check the list of patches applied

grid@dbisrv04:/u90/Kit/ [+ASM] /u91/app/grid/product/12.2.0/grid/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Lsinventory Output file location : /u91/app/grid/product/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-10-11_09-06-44AM.txt

--------------------------------------------------------------------------------
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.


Interim patches (1) :

Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018
Unique Patch ID:  21761216
Patch description:  "OCW Interim patch for 27006180"
   Created on 5 Dec 2017, 09:12:44 hrs PST8PDT
   Bugs fixed:
     13250991, 20559126, 22986384, 22999793, 23340259, 23722215, 23762756
........................
     26546632, 27006180
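In a scripted upgrade you might want to verify the presence of the prerequisite patch programmatically. A hypothetical sketch (not from the post); on the server you would pipe in the output of "$GRID_HOME/OPatch/opatch lsinventory" instead of the captured line used here for illustration:

```shell
# Hypothetical helper: check lsinventory output (on stdin) for a patch number.
has_patch() {
  if grep -q "$1"; then
    echo "patch $1 present"
  else
    echo "patch $1 missing - apply it before upgrading"
  fi
}

# Dry run against a line captured from the inventory listing above:
echo "Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018" | has_patch 27006180
# → patch 27006180 present
```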

 

Step 3: Upgrade the binaries to release 18c

[screenshot: upgrade_grid]

[screenshot: directory_new_grid]

– it is recommended to run the rootupgrade.sh script manually

[screenshot: run_root_script]

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u90/app/grid/product/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u90/app/grid/product/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/dbisrv04/crsconfig/roothas_2018-10-11_09-21-27AM.log

2018/10/11 09:21:29 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2018/10/11 09:21:30 CLSRSC-363: User ignored prerequisites during installation
2018/10/11 09:21:31 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2018/10/11 09:21:34 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2018/10/11 09:22:25 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2018/10/11 09:23:52 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2018/10/11 09:23:57 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbisrv04 successfully pinned.
2018/10/11 09:24:00 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2018/10/11 09:24:02 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2018/10/11 09:24:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/10/11 09:24:49 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.
CRS-4123: Oracle High Availability Services has been started.


dbisrv04     2018/10/11 09:25:58     /u90/app/grid/product/18.3.0/grid/cdata/dbisrv04/backup_20181011_092558.olr     70732493   

dbisrv04     2018/07/31 15:24:14     /u91/app/grid/product/12.2.0/grid/cdata/dbisrv04/backup_20180731_152414.olr     0
2018/10/11 09:25:59 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbisrv04'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'dbisrv04'
CRS-2677: Stop of 'ora.driver.afd' on 'dbisrv04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbisrv04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/10/11 09:27:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

– you can ignore the warning related to the memory resources

[screenshot: ignore_prereq]

[screenshot: completed_succesfully]

– once the installation is finished, verify what has been done

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA3.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.db18c.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
--------------------------------------------------------------------------------
 

The post How to migrate Grid Infrastructure from release 12c to release 18c appeared first on Blog dbi services.

How to grant entire Schema access?

Tom Kyte - Thu, 2018-10-11 19:06
Hello!! I did some research, but I still need help, please. I have eight schemas: A, B, C, D, E, F, G and H. I need to give schema A full access (select, insert, update, delete) to schemas B, C, D, E, F, G and H (all objects: TABLE, VIEW, PACKA...
Categories: DBA Blogs

DBMS_SCHEDULER: Emailing after inserting into database tables

Tom Kyte - Thu, 2018-10-11 19:06
Hello Tom, You provided an example for "Emailing after inserting into database tables". I’m working with 10g, and the 10g documentation recommends using DBMS_SCHEDULER. The code you provided using DBMS_JOB is listed below. Would be possible to prov...
Categories: DBA Blogs

Why does the optimizer in Oracle Database not use an index for create-table-as-select?

Tom Kyte - Thu, 2018-10-11 19:06
hi when I execute following queries see completely different result where is problem? here I execute a simple select query which takes a few seconds to complete: <code> SELECT * FROM PRODUCTION.VERY_SMALL_TABLE L INNER JOIN PR...
Categories: DBA Blogs

Inaugural Startup Class Collaborates on Enterprise Growth and Cloud Innovation at Oracle’s First U.S. Startup Hub

Oracle Press Releases - Thu, 2018-10-11 07:00
Press Release
Inaugural Startup Class Collaborates on Enterprise Growth and Cloud Innovation at Oracle’s First U.S. Startup Hub Austin, Texas hub welcomes six startups to the global program and Capital Factory co-working space

Redwood Shores, Calif.—Oct 11, 2018

Oracle today announced the first class of startups joining the Oracle Global Startup Ecosystem hub in Austin, Texas. Announced earlier this year, Austin is the program’s first U.S. hub location. The six startups in the hub’s inaugural class will work closely with Oracle’s research and development and enterprise sales teams to enable scalable growth and drive cloud-based innovation.

Oracle’s global startup program offers personalized experiences for startups, mentoring from technical and business experts, state-of-the-art technology, free Oracle Cloud credits, and access to Oracle’s more than 430,000 global customers, as well as partners and investors. Unlike traditional accelerator programs, Oracle’s is driven by the R&D team and does not take equity.

“In the global marketplace, one size does not fit all, and that’s especially true for today’s startup ecosystem,” said Reggie Bradford, senior vice president, Oracle Global Startup Ecosystem. “That’s why we’ve taken a different approach – reimagining the startup-enterprise relationship and offering personalized experiences for the startups to achieve true collaboration, co-development and co-innovation.”

Selected from a competitive and diverse pool of applicants, the six startups joining the Austin-based hub include:

  • data.world is a platform for modern data teamwork. Its smart data catalog can wake up the hidden data workforce within every enterprise, unify silos, and create data-driven cultures faster. 

  • Eventador is a fully managed, enterprise-grade stream processing as a service platform that addresses a critical business problem: to alleviate the complexities and enable the building of applications that use real-time data at their core. Eventador enables developers to quickly create and painlessly manage real-time data pipelines to gain time-to-market, actionable insights and technology advantages.

  • Pilosa enables real-time retrieval of massive, fragmented datasets. Pilosa introduces the concept of splitting the index from the database by making the index a first-class citizen in the modern technology stack. Pilosa’s software can store a knowledge representation of underlying data in memory, making it orders of magnitude smaller and unimaginably fast.

  • ROIKOI built one of the first talent recommendation graphs. Companies use ROIKOI software and services to hire a more diverse set of candidates more efficiently than they could through sourcing platforms or job boards, and can find referral-quality candidates at a scale greater than traditional referral platforms.

  • Senseye is building a direct link between humans and computers. Senseye’s technology can read the brain wirelessly, providing a vast array of use cases and solutions for many key industries. They are working to enable you to use the human brain tomorrow the way a mouse and keyboard are used today.

  • Transmute is a decentralized app engine for enterprise development teams that can configure decentralized applications to work seamlessly with public and private clouds. The inevitable rise of Distributed Ledger Technology (DLT) and the resulting shift to decentralized applications will be as disruptive to enterprise IT as the transition to cloud computing. Transmute’s mission is to help customers use decentralization when and where it delivers real value, using code they’ve already written.   

“The six startups entering our inaugural Austin program are ready to scale innovative enterprise technologies, and we are looking forward to being a catalyst for their next phase of growth,” said JD Weinstein, head of the Oracle Global Startup Ecosystem in Austin. “We will work closely with each startup to create a personalized journey that enables them to tap into Oracle’s rich enterprise ecosystem, cloud solutions, global customers, and market expertise.”

The startups will work out of Oracle’s hub at Capital Factory and also benefit from Capital Factory’s network of local entrepreneurs, prominent CEOs, venture capitalists, corporate partners, and government officials.

The Oracle Global Startup Ecosystem enables startups to tailor their program experience based on their unique journey. The global program offers an immersive co-working environment, a virtual-style experience, and an approach that is a hybrid of the two. The Oracle Global Startup Ecosystem has co-working spaces in Austin, Bangalore, Bristol, Delhi, Mumbai, Paris, São Paulo, Singapore and Tel Aviv, and works with startups virtually in many other locations around the world.

Startup Founder Quotes
  • “We are honored to be selected into this exclusive startup program. Oracle can help us continue to innovate, scale, and improve our ability to help enterprises create data-driven cultures faster.” – Brett Hurt, CEO and Co-Founder, data.world

  • “Oracle can help accelerate our ability to scale with its vast global resources and technical and business expertise. It’s an incredibly exciting time for our company, as every day we see new real-time use cases and customers building increasingly complex line of business streaming applications.” – Kenny Gorman, Co-founder and CEO, Eventador

  • "We are incredibly excited to be given the opportunity to access Oracle’s vast global resources, technologies, and customer ecosystem. At this stage in our growth, this program is exactly what we need to help enterprises serve analytical type queries in real-time at scale." – HO Maycotte, CEO, Pilosa

  • "Oracle understands what we're working to accomplish and can provide us the resources and expertise across its cloud solutions, sales support, recruiting, and global customer ecosystem. It's an honor to be selected to this global program, and we look forward to a long-standing relationship with Oracle." – Andy Wolfe, CEO, ROIKOI

  • “Oracle can help accelerate our capabilities to scale with its leading cloud technologies and customer ecosystem. It’s an incredibly exciting time for our growth, as we continue to push the boundaries of neuroscience and technology.” – David Zakariaie, CEO, Senseye

  • “Being part of this global startup program will be invaluable for us. Not only will it allow us to work directly with Oracle Cloud and Oracle engineers to optimize our enterprise platform offering, but we will also benefit from Oracle’s vast global resources, customer ecosystem, and business expertise.” – Karyl Fowler, CEO, Transmute

Contact Info
Julia Allyn
Oracle Corporate Communications
+1.650.607.1338
julia.allyn@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julia Allyn

  • +1.650.607.1338

ADOPValidations EBS 12.2

Senthil Rajendran - Thu, 2018-10-11 05:21
ADOPValidations_detailed.log reported a warning message for one of the critical errors, "Cross Reference":

ValidateFSConfigFilesForCrossRef:
        [WARNING]: Found invalid cross references in FS config files.

A cross reference is nothing but incorrect values in the EBS 12.2 context file.

For example: the run file system context file contains paths that point to the patch file system.


$ echo $CONTEXT_FILE
/prod/inst/fs1/inst/apps/PROD_node1/appl/admin/PROD_node1.xml

$ grep -i fs2 /prod/inst/fs1/inst/apps/PROD_node1/appl/admin/PROD_node1.xml
         /prod/applmgr/fs2
         /prod/applmgr/fs2/EBSapps/comn/adopclone_node1
$
 

$ grep -i fs1 /prod/inst/fs2/inst/apps/PROD_node1/appl/admin/PROD_node1.xml
         /prod/applmgr/fs1
         /prod/applmgr/fs1/EBSapps/comn/adopclone_node1
         /prod/applmgr/fs2/EBSapps/comn/util/jdk/jre/lib/sparcv9/server:/prod/applmgr/fs2/EBSapps/comn/util/jdk/jre/lib/sparcv9:/prod/applmgr/fs2/EBSapps/appl/pay/12.0.0/vendor/quantum/lib:/prod/applmgr/fs2/FMW_Home/webtier/jdk/jre/lib/sparcv9:/prod/applmgr/fs2/FMW_Home/webtier/jdk/jre/lib/sparcv9/native_threads:/prod/applmgr/fs2/EBSapps/appl/cz/12.0.0/bin64:/prod/applmgr/fs2/EBSapps/appl/sht/12.0.0/lib64:/prod/applmgr/fs1/EBSapps/comn/adopclone_node1/oui/lib/solaris
$

Here, on the patch file system, there is a cross reference, which can be a very costly mistake if you ignore it, so consider having all validations reviewed.
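The grep checks above can be automated with a small script that scans a context file for references to the other edition's file system. This is a sketch of my own, not an Oracle-supplied utility; the function name is mine:

```python
import re

def find_cross_references(context_file, other_fs):
    """Return (line_number, line) pairs in an EBS context file that
    mention the other edition's file system, e.g. fs2 paths inside
    the fs1 (run) context file."""
    pattern = re.compile(re.escape(other_fs), re.IGNORECASE)
    hits = []
    with open(context_file) as f:
        for lineno, line in enumerate(f, start=1):
            if pattern.search(line):
                hits.append((lineno, line.rstrip()))
    return hits

# Example: flag fs2 references in the run (fs1) context file,
# mirroring the grep checks shown above.
# hits = find_cross_references(
#     "/prod/inst/fs1/inst/apps/PROD_node1/appl/admin/PROD_node1.xml", "fs2")
```

Any hit still needs a human eye: some entries (such as the adopclone path) legitimately reference the other file system, so treat the output as candidates to review rather than automatic errors.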

How to fix it, then? Simple:

Just update the context file and upload it to the database:

$ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile= logfile=/tmp/patchctxupload.log

then run ADOP operations...

Golden Gate moving from Classic to Integrated

Senthil Rajendran - Thu, 2018-10-11 05:07
Golden Gate moving from Classic to Integrated was an option taken in one of our projects to get better performance. A lot of effort was put into converting all the extracts and replicats to Integrated mode, and some optimization parameters were used, but there was still no significant performance improvement.

Then I took a different route and examined the AWR and ASH reports for the replicat processes. Wow, our old friend: INDEXES. Since we were dealing with a bulk load, some of the unwanted indexes were dropped and Golden Gate was super fast. Both Integrated and Classic then performed much better. After the bulk load we rebuilt the indexes.

Better consult a Dr.DBA always.


ODC Appreciation Day: Oracle Cloud PSM Cli

Rittman Mead Consulting - Thu, 2018-10-11 03:11
 Oracle Cloud PSM Cli

Oracle Developer Community (ODC) Appreciation Day (previously known as OTN Appreciation Day) is a day, started as an initiative of Tim Hall, when everyone can share their thanks to the Oracle community by writing about a favourite product, an experience, or a story related to Oracle technology.

 Oracle Cloud PSM Cli

Last year I wrote about OBIEE Time Hierarchies and how they are very useful to perform time comparison, shifts, and aggregations.

This year I want to write about Oracle Paas Service Manager (PSM) Client!
I've already written a blog post about it in detail; basically, Oracle PSM allows Oracle Cloud administrators to manage their instances via the command line instead of forcing them to use the web UI.

 Oracle Cloud PSM Cli

PSM Cli allows you to create an Oracle Analytics Cloud instance by just calling

psm analytics create-service -c <CONFIG_FILE> -of <OUTPUT_FORMAT>

and passing a JSON <CONFIG_FILE>, which can easily be downloaded after following the creation process in the Web-UI, a bit like the response file in on-premises OBIEE, which can be saved and customised for future reuse after the first UI installation. Examples of the PSM JSON payloads can be found here.

OAC instances can also easily be started/stopped/restarted with the command

psm analytics start/stop/restart -s <INSTANCE_NAME>

And the status of each command can be tracked with

psm analytics operation-status -j <JOB_ID>

As mentioned in my previous post, PSM Cli also opens the door to instance management automation, which is a requirement for providing cost-effective, fully isolated, feature-related OAC instances when thinking about DevOps practices. Because PSM Cli is a command-line tool, it can be integrated into any automation tool, like Jenkins, and thus into any DevOps flow being designed in any company.
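As a sketch of that automation, a thin wrapper can shell out to PSM Cli and poll `operation-status` until a job completes. The `psm analytics` sub-commands and the `-of json` output format are the ones shown above; the JSON key names (`details`, `jobId`, `operationStatus`) and status values are my assumptions about the payload, not a documented contract:

```python
import json
import subprocess
import time

def psm(args, runner=subprocess.run):
    """Run a psm analytics sub-command and return its parsed JSON output.
    `runner` is injectable so the wrapper can be tested without the psm
    client installed."""
    cmd = ["psm", "analytics"] + args + ["-of", "json"]
    result = runner(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def stop_and_wait(instance, poll_seconds=30, runner=subprocess.run):
    """Stop an OAC instance and poll operation-status until the job
    finishes. The job-id key and the terminal status values below are
    assumptions about the JSON payload."""
    job = psm(["stop", "-s", instance], runner=runner)
    job_id = job["details"]["jobId"]                      # assumed key
    while True:
        status = psm(["operation-status", "-j", str(job_id)], runner=runner)
        if status["operationStatus"] in ("SUCCEED", "FAILED"):  # assumed values
            return status
        time.sleep(poll_seconds)
```

Because the runner is injectable, the flow can be unit-tested with a fake runner that never contacts Oracle Cloud; in real use the default `subprocess.run` drives the actual psm client, and the same pattern drops straight into a Jenkins job.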

So Thank you, Oracle, for enabling such automation with PSM Cli!

Follow the #ThanksODC hashtag on Twitter to check which posts have been published on the same theme!

Categories: BI & Warehousing

Procedure having an OUT parameter of CLOB type: how will we see its value?

Tom Kyte - Thu, 2018-10-11 00:46
procedure having out parameter of clob type, how we will see the value of it,please advise? Hello Tom, we are having one procedure as below: PROCEDURE P1 ( p_dat_file_name out varchar2, p_zip_file_name out varchar2, p_metad...
Categories: DBA Blogs

Join me at Oracle OpenWorld

David Haimes - Wed, 2018-10-10 18:41

In under two weeks Oracle will have its annual conference here in San Francisco and it should be a very exciting time. I will be talking about Blockchain technology and how it is applicable to Enterprise Applications and business processes, with a particular focus on ERP applications.

I will also be putting my faith in the conference WiFi and showing some demos of prototypes we are working on.

My session details are below, last year my session was full so I advise you to register (here) as soon as possible to be sure you get a spot.

Blockchain Applications Use Cases and Live Demos [PRO6215]

In this session learn where Blockchain can add real business value and how it fits into existing enterprise business applications ecosystems. See the types of patterns across multiple industries, including real-world examples. The session also includes live demos of a number of Blockchain applications.

 

David Haimes, Senior Director, Oracle

Wednesday, Oct 24, 11:15 a.m. – 12:00 p.m. | Moscone South – Room 154

Categories: APPS Blogs

Discover Oracle University Training at Oracle OpenWorld

Chris Warticki - Wed, 2018-10-10 17:25

Over 57%[1] of IT decision makers see training as important to mitigating the skills gaps they now face.

Technology is changing and skills gaps are widening. Are you ready for the challenge?

Oracle OpenWorld provides avenues for you to network and learn about new products and services. It also provides you with the opportunity to speak with Oracle product experts and to gain new skills or brush-up on the ones you already have.

As part of Oracle’s Support Services organization, Oracle University is excited to offer you a variety of ways to expand your product knowledge while at Oracle OpenWorld or Code One.

New this year:  Free “Deep Dive” training sessions! These sessions provide four hours of in-depth product training taught by our expert Oracle instructors.

Take a look at these FREE topic sessions offered to Oracle OpenWorld and Code One attendees:

Need something shorter? Visit the Oracle University theatre within the Exchange in Moscone South. We will be running a series of short “how to” product training sessions to help you advance your skill set.

Want to talk to Oracle University about your training and certification needs? Stop by the Oracle Support Stars Bar to talk to Oracle University and other support experts from across Oracle’s product portfolio.

Make the most of your OpenWorld trip with training at OpenWorld! Reserve your seat today at education.oracle.com/oow.

Find all the Oracle Support and Services sessions, activities and events on the Support and Services Oracle OpenWorld page.

 

One Click Access on the Shop Floor to Part Information with PTC ThingWorx® Navigate®

Shop floor technicians and operators involved in assembly and other processes are at the critical, final steps in the manufacturing process. Unfortunately, these workers are often at the mercy of out-of-date, less-than-accurate paper documentation, or they need to access multiple systems to find associated parts information. These issues create bottlenecks that can impact quality and on-time shipments, as well as lead to employee frustration.

The following is a summary of the problems Fishbowl Solutions has seen at customers when it comes to accessing parts information needed for assembly:

  • Having the design engineers create manufacturing documentation, print it, and deliver physical copies to shop floor workers
  • Storing associated parts information on a network drive, making it hard to find for shop floor staff, especially new workers
  • Quality alerts and other processes are not integrated with any systems
  • Parts information contained within PDFs requires excessive scrolling to get to the information needed
  • The MPMLink viewer requires multiple clicks to get to relevant parts information

To solve these problems, Fishbowl has worked with customers to leverage PTC ThingWorx to build shop floor viewing applications that can surface relevant information to workers in one simple view.

To read the rest of this blog post and see sample screenshots of the shop floor viewing application, please click over to the PTC LiveWorx 2019 blog.

The post One Click Access on the Shop Floor to Part Information with PTC ThingWorx® Navigate® appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Hybrid Fake

Jonathan Lewis - Wed, 2018-10-10 09:12

Oracle 12c introduced the “Hybrid” histogram – a nice addition to the available options and one that (ignoring the bug for which a patch has been created) supplies the optimizer with better information about the data than the equivalent height-balanced histogram. There is still a problem, though, in the trade-off between accuracy and speed: just as it does with height-balanced histograms, when using auto_sample_size Oracle samples (typically) about 5,500 rows to create a hybrid histogram, and the SQL it uses to generate the necessary summary is essentially an aggregation of the sample, so either you have a small sample with the risk of lower accuracy, or a large sample with an increase in workload. This being the case, it’s worth knowing how to create a hybrid histogram using the dbms_stats.set_column_stats() API.

It’s fairly easy to identify the cases where a hybrid histogram could be helpful.  You have a large volume of data spread over a large number (more than 2048) of distinct values, but a few values (typically less than 250) which are responsible for a significant fraction of the data. You would like to tell Oracle about the special “extreme” cases so that the optimizer can take defensive action if you query for one of those values, but at the same time you would like to give Oracle a picture of the way the rest of the data is distributed. This is similar in some respects to the Top-N (a.k.a. Top-Frequency) histogram, which says to Oracle “We have a small number of popular values, and some odds and ends on the side that are pretty ignorable”; the critical difference is that you need the hybrid histogram when it’s not safe to “ignore” the odds and ends.

Here’s an example of creating some data and then generating a completely artificial hybrid histogram. The code demonstrates three points – the principal feature of creating hybrid histograms and a couple of generic details about Oracle’s histograms:

  • The main point is that Oracle 12c introduces a new numeric array in the dbms_stats.statrec structure. This allows each row (bucket) in a histogram to hold a second statistic about the bucket so we can now store a frequency figure for the bucket as a whole, and a “repeat-count” figure for the highest value in the bucket. (Warning – there is a counter-intuitive conflict between the name of the new structure and the way it is used for hybrid histograms).
  • As a side-point, I’ve included a code variation that shows you the remarkable similarity between generating a Frequency histogram and a Hybrid histogram.
  • As a second side-point I have also highlighted the effect you see in the dba_tab_histograms view when your popular values are “too similar” to each other – i.e. when they match on the first 6 characters.

We start by creating a table as a copy of the view all_objects – then we’re going to create a hybrid histogram on the object_type column that looks nothing like the data. The histogram will say:

  • for every 15,000 rows (where the column is not null)
    • 5,000 will have values less than or equal to ‘C’, of which 3,000 will have the value ‘C’
    • The next 2,000 (i.e. running total 7,000) will have values greater than ‘C’ and up to ‘PPPPPP1’, but ‘PPPPPP1’ itself is not a popular value
    • The next 2,000 (i.e. running total 9,000) will have values greater than ‘PPPPPP1’ and up to ‘PPPPPP2’, but ‘PPPPPP2’ itself is not a popular value
    • The next 2,000 (i.e. running total 11,000) will have values greater than ‘PPPPPP2’ and up to ‘PPPPPP3’, but ‘PPPPPP3’ itself is not a popular value
    • The last 4,000 (i.e. running total 15,000) will have values greater than ‘PPPPPP3’ and up to ‘X’ of which 3,000 will have the value ‘X’

Note particularly that the “how many rows hold the endpoint value” figures are stored in the statrec.bkvals array – just as they would be for a frequency histogram – and the cumulative count of rows is stored in the statrec.rpcnts structure. All we have to do to create a frequency histogram instead of a hybrid histogram is to store zeros in the statrec.rpcnts structure, or leave it uninitialized.

You’ll notice that since I’m creating a histogram on a character column I’ve used an array of type dbms_stats.chararray to hold the list of values (in ascending order) that I want the histogram to describe.


rem
rem     Script:         12c_hybrid_histogram_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2018
rem 

create table t1
as
select * from all_objects
;

begin
        dbms_stats.gather_table_stats(
                ownname         => null,
                tabname         => 't1',
                method_opt      => 'for all columns size 1'
        );
end;
/

declare
                c_array         dbms_stats.chararray;
                m_rec           dbms_stats.statrec;
                m_distcnt       number;
                m_density       number;
                m_nullcnt       number;
                m_avgclen       number;

begin
        dbms_stats.get_column_stats(
                ownname         => user,
                tabname         => 'T1',
                colname         => 'OBJECT_TYPE', 
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => m_rec,
                avgclen         => m_avgclen
        );

        m_rec.epc    := 5;

        c_array      := dbms_stats.chararray( 'C',  'PPPPPP1',  'PPPPPP2',  'PPPPPP3',   'X');
        m_rec.bkvals := dbms_stats.numarray (3000,          1,          1,          1,  3000);

        m_rec.rpcnts := dbms_stats.numarray (5000,       7000,       9000,      11000, 15000);
--      m_rec.rpcnts := dbms_stats.numarray (0000,       0000,       0000,       0000, 00000);

        dbms_stats.prepare_column_values(m_rec, c_array);

        dbms_stats.set_column_stats(
                ownname         => user,
                tabname         => 'T1',
                colname         => 'OBJECT_TYPE', 
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => m_rec,
                avgclen         => m_avgclen
        ); 
end;
/

That’s it – it’s remarkably simple. To show the effect of running this code I can report the content of user_tab_histograms for the column. I’ve actually run the code and queried the results twice; first for the case where I created the hybrid histogram and then after modifying the PL/SQL block to set the rpcnts array to zeros to create a frequency histogram.


column endpoint_actual_value format a22
column endpoint_value        format 999,999,999,999,999,999,999,999,999,999,999,999

select
        endpoint_number, endpoint_value, endpoint_actual_value, endpoint_repeat_count
from
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = 'OBJECT_TYPE'
order by
        endpoint_value
;

With non-zero rpcnts (hybrid histogram)
=======================================
ENDPOINT_NUMBER                                   ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE  ENDPOINT_REPEAT_COUNT
--------------- ------------------------------------------------ ---------------------- ---------------------
           3000  347,883,889,521,833,000,000,000,000,000,000,000 C                                       3000
           7000  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP1                                    1
           9000  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP2                                    1
          11000  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP3                                    1
          15000  456,922,123,551,065,000,000,000,000,000,000,000 X                                       3000


With rpcnts set to zero (frequency histogram)
=============================================
ENDPOINT_NUMBER                                   ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE  ENDPOINT_REPEAT_COUNT
--------------- ------------------------------------------------ ---------------------- ---------------------
           3000  347,883,889,521,833,000,000,000,000,000,000,000 C                                          0
           3001  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP1                                    0
           3002  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP2                                    0
           3003  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP3                                    0
           6003  456,922,123,551,065,000,000,000,000,000,000,000 X                                          0

I made a comment earlier on that the naming and use of the rpcnts structure was somewhat counter-intuitive. As you can see in the results above, when I created the hybrid histogram the values I stored in the rpcnts structure are not the values reported as the “repeat count”, the numbers reported as the “repeat count” are from the bkvals (bucket values).  As far as I’m concerned this means I have to go back to my basic examples every time I want to fake a histogram because I’m never too sure which arrays I should populate with what values – and whether I should use absolute or cumulative values.

One last minor point: you’ll see that the endpoint_actual_value has been populated in this example. This is because (with Oracle’s interesting transformation from character to numeric) the three ‘PPPPPPx’ character values turn into the same number – so Oracle stores the first 64 bytes (or 32 for versions of Oracle prior to 12c) of the actual value.
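That transformation can be modelled with a short sketch. The packing-and-rounding below is my reconstruction of the observed behaviour (not a documented formula), but it reproduces the endpoint values shown in the query output above and demonstrates why the three ‘PPPPPPx’ strings collide:

```python
def endpoint_value(s):
    """Sketch of Oracle's character-to-number endpoint conversion
    (inferred behaviour, not a documented formula): pack the first
    15 bytes of the string into a big-endian integer, zero-padded
    on the right, then round to 15 significant decimal digits."""
    b = s.encode("ascii")[:15].ljust(15, b"\x00")
    n = int.from_bytes(b, "big")
    digits = len(str(n))
    if digits <= 15:
        return n
    scale = 10 ** (digits - 15)
    return (n + scale // 2) // scale * scale  # round half up

# 'C' and 'X' reproduce the endpoint_value figures in the query output above
assert endpoint_value("C") == 347883889521833 * 10**21
assert endpoint_value("X") == 456922123551065 * 10**21
# the three 'PPPPPPx' strings collapse to a single numeric endpoint
assert len({endpoint_value(v) for v in ("PPPPPP1", "PPPPPP2", "PPPPPP3")}) == 1
```

With only 15 significant digits available, anything beyond roughly the first six characters is lost in the rounding, which is why Oracle has to fall back on endpoint_actual_value to tell such values apart.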

 

Oracle Offline Persistence Toolkit - Applying Server Changes

Andrejus Baranovski - Wed, 2018-10-10 09:00
This is my final post related to the Oracle Offline Persistence Toolkit. I will show a simple example which explains how to apply server changes when a data conflict comes up. Read my previous post: Oracle Offline Persistence Toolkit - Submitting Client Changes.

Applying server changes is easier than applying client changes. You need to remove the failed request from the sync queue and fetch the server data to the client by key.

Example of data conflict during sync:


The user decides to cancel his changes and bring the data back from the server. A GET is executed to fetch the latest data and push it to the client:


In the JS code, first of all we remove the request from the sync queue; in the promise we read the key value for that request, and then refetch the data:


Download sample code from GitHub repository.

Eye, Eye, Cap’n

Jonathan Lewis - Wed, 2018-10-10 08:00

By the time you read this I will have had the lenses in both my eyes replaced, so I won’t be staring at a computer screen for a while – and that means, in particular, I won’t be doing any further investigation into join cardinality for a while.

For those who might otherwise feel deprived of my exquisite prose I have, however, prepared a couple of lightweight articles for automatic launching over the next few days. But please don’t expect any prompt follow-ups if you add comments or send email over the next couple of weeks.
