Feed aggregator

Join of huge tables taking too much time

Tom Kyte - Mon, 2018-12-24 00:26
Hi Team, Background: We have 40-45 applications who have SQL server as a database (fake name : <<SQLAppDB>>). Now, this <<SQLAppDB>> is having some views which point to Oracle's Views via linked server connection (fake name : <<Or...
Categories: DBA Blogs

java.sql.SQLException: SQL string is not Query - While updating BLOB column

Tom Kyte - Mon, 2018-12-24 00:26
Greetings: We are facing below exception, while executing a query "UPDATE TRN_WSF_SERVICE_AUDIT SET REQUEST=EMPTY_CLOB() WHERE TRN_WSF_SERVICE_AUDIT_SID = 75002202" here REQUEST is a CLOB datatype. Detailed error ----------------------------...
Categories: DBA Blogs

session id for sys user

Tom Kyte - Mon, 2018-12-24 00:26
Hi Tom, Wondering your thoughts on why the session id of sys user is hard coded to 429496729. SQL> select sys_context('userenv','sessionid') from dual; When you run this connected as sysdba it always returns 429496729. The same select using sid i...
Categories: DBA Blogs

Oracle Security Blog Posts

Pete Finnigan - Mon, 2018-12-24 00:26
I teach many training classes on Oracle security to lots of students worldwide, both on-site and on-line, and one area I often cover quite briefly is where you can find more information or keep up to date on Oracle security....[Read More]

Posted by Pete On 23/12/18 At 05:53 PM

Categories: Security Blogs

Understanding Attributes Enum in ADF BC Row Class

Andrejus Baranovski - Sun, 2018-12-23 03:59
Did you ever wonder why an Attributes Enum is generated by JDeveloper in the Entity or View Row class? The Attributes Enum holds a collection of attribute names, and there is a set of static variables with attribute indexes. These indexes are used to locate an attribute in the getter/setter. The Attributes Enum is a structure that JDeveloper requires at design time to generate Java code. At runtime, the Attributes Enum is needed only as long as you use a static variable index in the getter/setter.

Attributes Enum and list of static indexes in View Row class:


The static index is used in the getter/setter to access the attribute:


The Attributes Enum mirrors the attribute order in the VO/EO; you can think of it as attribute metadata. It is not mandatory to use the index from the Attributes Enum. In some use cases, you could get the attribute index directly from the VO/EO Def and use it to access the attribute:


The first name is fetched correctly using the overridden getter:


Download sample code from GitHub
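The generated pattern can be sketched in plain Java. This is a simplified, self-contained illustration of the idea only, not the real ADF BC classes (which extend oracle.jbo.server.ViewRowImpl and are generated by JDeveloper); names such as EmployeeRowSketch and the attribute list are invented for the example:

```java
// Simplified sketch of the Attributes Enum pattern JDeveloper generates.
// All names here are illustrative, not the actual ADF BC API.
public class EmployeeRowSketch {

    // Mirrors the attribute order in the VO/EO; ordinal() is the index.
    public enum Attributes {
        EMPLOYEEID, FIRSTNAME, LASTNAME;
        public int index() { return ordinal(); }
    }

    // Static index constant, as in generated Row classes
    public static final int FIRSTNAME = Attributes.FIRSTNAME.index();

    // Attribute storage, indexed by attribute position
    private final Object[] values = new Object[Attributes.values().length];

    // Stand-in for ViewRowImpl.getAttributeInternal(int)
    protected Object getAttributeInternal(int index) { return values[index]; }
    protected void setAttributeInternal(int index, Object val) { values[index] = val; }

    // Typed getter/setter resolve the attribute through the static index
    public String getFirstName() { return (String) getAttributeInternal(FIRSTNAME); }
    public void setFirstName(String value) { setAttributeInternal(FIRSTNAME, value); }

    public static void main(String[] args) {
        EmployeeRowSketch row = new EmployeeRowSketch();
        row.setFirstName("Ada");
        System.out.println(row.getFirstName()); // prints "Ada"
    }
}
```

The typed getter locates the attribute through its static index, which is why the enum order must stay in sync with the attribute order defined in the VO/EO.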

Control distribution scripts with "whenever sqlerror"

Tom Kyte - Fri, 2018-12-21 17:26
My company uses distribution scripts for deploying database updates to multiple environments. I need to control the scripts such that an "error" (defined as resulting in an "Ora-#" code) abends the script, but a warning (e.g. Warning: compiled but w...
Categories: DBA Blogs

Automatically archive data based on timestamp

Tom Kyte - Fri, 2018-12-21 17:26
Hi Tom I need to perform data archival in Oracle 12c, after which the archived data must be accessible by the application, when queried for. Is there a way to automatically archive data which are >1 year old based on the 'updated_date' column for ...
Categories: DBA Blogs

ODA : Free up space on local filesystems

Yann Neuhaus - Fri, 2018-12-21 10:02
Introduction

When you work on an ODA, you sometimes struggle with free space on the local filesystems. An ODA has terabytes of space on its data disks, but the local disks are still limited to a RAID-1 array of two 480GB disks, and only a few GB are dedicated to the / and /u01 filesystems. You do not need hundreds of GB on these filesystems, but you probably prefer to keep at least 20-30% of free space. And if you plan to patch your ODA, you will surely need more space to get through all the steps without reaching a dangerous fill level. Here is how to reclaim free space on these filesystems.

Use additional purgeLogs script

The purgeLogs script is provided as an additional tool from Oracle. It should have been available with oakcli/odacli, but it's not: download it from MOS note 2081655.1. As this tool is not part of the official ODA toolset, please test it before using it in a production environment. It's quite easy to use: put the zip in a folder, unzip it, and run it as the root user. You can call the script with a single parameter that cleans up all the logfiles, for all the Oracle products, older than a given number of days:


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 29G 23G 56% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 43G 45G 50% /u01

cd /tmp/
unzip purgeLogs.zip
du -hs /opt/oracle/oak/log/*
11G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
232K /opt/oracle/oak/log/test
./purgeLogs -days 1

--------------------------------------------------------
purgeLogs version: 1.43
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc.
--------------------------------------------------------

2018-12-20 09:20:06: I adrci GI purge started
2018-12-20 09:20:06: I adrci GI purging diagnostic destination diag/asm/+asm/+ASM1
2018-12-20 09:20:06: I ... purging ALERT older than 1 days

2018-12-20 09:20:47: S Purging completed succesfully!
du -hs /opt/oracle/oak/log/*
2.2G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
28K /opt/oracle/oak/log/test


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 18G 34G 35% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 41G 47G 48% /u01

In this example, you just freed up about 13GB. If your ODA is composed of 2 nodes, don't forget to run the same script on the other node.

Truncate hardware log traces

Hardware-related traces quietly fill up the filesystem if your ODA has been running for a long time. These traces are located under /opt/oracle/oak/log/`hostname`/adapters. I don't know whether every model behaves this way, but here is an example from an old X4-2 that has been running for 3 years:

cd /opt/oracle/oak/log/aprhodap02db0/adapters
ls -lrth
total 2.2G
-rw-r--r-- 1 root root 50M Dec 20 09:26 ServerAdapter.log
-rw-r--r-- 1 root root 102M Dec 20 09:27 ProcessorAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:28 MemoryAdapter.log
-rw-r--r-- 1 root root 110M Dec 20 09:28 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 318M Dec 20 09:30 NetworkAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:30 CoolingAdapter.log
head -n 3 CoolingAdapter.log
[Mon Apr 27 18:02:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
In CoolingAdapter.scr
[Mon Apr 27 18:07:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
head -n 3 MemoryAdapter.log
[Mon Apr 27 18:02:26 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery
In MemoryAdapter.scr
[Mon Apr 27 18:07:25 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery

Let's keep only the most recent lines in these files:

for a in *.log ; do tail -n 200 "$a" > tmpfile ; cat tmpfile > "$a" ; rm -f tmpfile ; done
ls -lrth
total 176K
-rw-r--r-- 1 root root 27K Dec 20 09:32 CoolingAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ProcessorAdapter.log
-rw-r--r-- 1 root root 30K Dec 20 09:32 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 29K Dec 20 09:32 NetworkAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 MemoryAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ServerAdapter.log

That's 2GB of traces you'll never use! Don't forget the second node on an HA ODA.

Purge old patches in the repository: simply because they are useless

If you have successfully patched your ODA at least twice, you can remove the oldest patch from the ODA repository. As you may know, patches are quite big because they include a lot of components, so it's good practice to remove the oldest patches once you have successfully patched your ODA. To see whether old patches are still on your ODA, dig into the folder /opt/oracle/oak/pkgrepos/orapkgs/. Purging old patches is easy:

df -h / >> /tmp/dbi.txt
oakcli manage cleanrepo --ver 12.1.2.6.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.3.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base
df -h / >> /tmp/dbi.txt
cat /tmp/dbi.txt
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 28G 24G 54% /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 21G 31G 41% /

Increase /u01 filesystem with remaining space

This only concerns bare-metal ODAs. You may have noticed that not all the disk space is allocated to the ODA local filesystems. On modern ODAs, you have two 480GB M.2 SSDs in a RAID-1 configuration for the system, and only half of the space is allocated. As the appliance uses LVM logical volumes, you can very easily extend the size of your /u01 filesystem.

This is an example on a X7-2M:


vgdisplay
--- Volume group ---
VG Name VolGroupSys
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 446.00 GiB
PE Size 32.00 MiB
Total PE 14272
Alloc PE / Size 7488 / 234.00 GiB
Free PE / Size 6784 / 212.00 GiB
VG UUID wQk7E2-7M6l-HpyM-c503-WEtn-BVez-zdv9kM


lvdisplay
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRoot
LV Name LogVolRoot
VG Name VolGroupSys
LV UUID icIuHv-x9tt-v2fN-b8qK-Cfch-YfDA-xR7y3W
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:00 +0100
LV Status available
# open 1
LV Size 30.00 GiB
Current LE 960
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:0
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolU01
LV Name LogVolU01
VG Name VolGroupSys
LV UUID ggYNkK-GfJ4-ShHm-d5eG-6cmu-VCdQ-hoYzL4
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:07 +0100
LV Status available
# open 1
LV Size 100.00 GiB
Current LE 3200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:2
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolOpt
LV Name LogVolOpt
VG Name VolGroupSys
LV UUID m8GvKZ-zgFF-2gXa-NSCG-Oy9l-vTYd-ALi6R1
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:30 +0100
LV Status available
# open 1
LV Size 60.00 GiB
Current LE 1920
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:3
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolSwap
LV Name LogVolSwap
VG Name VolGroupSys
LV UUID 9KWiYw-Wwot-xCmQ-uzCW-mILq-rsPz-t2X2pr
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:44 +0100
LV Status available
# open 2
LV Size 24.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:1
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolDATA
LV Name LogVolDATA
VG Name VolGroupSys
LV UUID oTUQsd-wpYe-0tiA-WBFk-719z-9Cgd-ZjTmei
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:4
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRECO
LV Name LogVolRECO
VG Name VolGroupSys
LV UUID mJ3yEO-g0mw-f6IH-6r01-r7Ic-t1Kt-1rf36j
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:5

212GB are available. Let's take 100GB of that to extend /u01:


lvextend -L +100G /dev/mapper/VolGroupSys-LogVolU01
Size of logical volume VolGroupSys/LogVolU01 changed from 100.00 GiB (3200 extents) to 200.00 GiB.
Logical volume LogVolU01 successfully resized.

The filesystem then needs to be resized:

resize2fs /dev/mapper/VolGroupSys-LogVolU01
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 13
Performing an on-line resize of /dev/mapper/VolGroupSys-LogVolU01 to 52428800 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 52428800 blocks long.

Now /u01 is bigger:

df -h /dev/mapper/VolGroupSys-LogVolU01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
197G 77G 111G 41% /u01

Conclusion

Don't hesitate to clean up your ODA before you have to deal with space pressure.

The article ODA : Free up space on local filesystems appeared first on Blog dbi services.

PAM Authentication With PostgreSQL

Yann Neuhaus - Fri, 2018-12-21 08:52

PostgreSQL supports many authentication methods. The PAM authentication method operates similarly to password authentication, except that it uses PAM (Pluggable Authentication Modules) as the authentication mechanism. The user must exist in the database before PAM can be used for authentication.
In this blog I will configure PAM authentication for a PostgreSQL 11 cluster running on CentOS 7.

postgres=# select version();
                                                 version
---------------------------------------------------------------------------------------------------------
 PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

postgres=#

[root@dbi-pg-essentials ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@dbi-pg-essentials ~]# hostname
dbi-pg-essentials

We assume that PostgreSQL is already installed with the PAM module; this should be the case if the installation was done with yum. If you build from source, be sure to configure with the option --with-pam.
After installation, a service file named postgresql should exist in the /etc/pam.d directory. If not, you have to create one for PostgreSQL.

[postgres@dbi-pg-essentials pam.d]# pwd
/etc/pam.d
[root@dbi-pg-essentials pam.d]# ls -l postgresql
-rw-r--r--. 1 root root 71 Nov  7 12:37 postgresql
[root@dbi-pg-essentials pam.d]#
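If the service file is missing, you can create it yourself. A minimal /etc/pam.d/postgresql that delegates to the system's standard password stack might look like the following (a sketch based on common CentOS 7 defaults, where the password-auth stack exists; check your distribution's PAM configuration before using it):

```
#%PAM-1.0
auth      include   password-auth
account   include   password-auth
```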

The first step is then to configure PostgreSQL to accept PAM authentication. Like other authentication methods, we have to add the corresponding entries in the pg_hba.conf file

[postgres@dbi-pg-essentials data]$ grep pamservice pg_hba.conf
host    all             all            192.168.22.0/24        pam pamservice=postgresql
[postgres@dbi-pg-essentials data]$

Note the option pamservice=postgresql. Don't forget to reload or restart your cluster after modifying the pg_hba.conf file.
In my case I also had to change the permissions of the /etc/shadow file to the following:

[postgres@dbi-pg-essentials data]$ ls -l /etc/shadow
-r--r--r-- 1 root root 988 Dec 21 11:20 /etc/shadow
[postgres@dbi-pg-essentials data]$

And the configuration is done. For the test, let's create a Linux user on the server, named for example usrpam:

[root@dbi-pg-essentials ~]# useradd -m usrpam
[root@dbi-pg-essentials ~]# passwd usrpam
Changing password for user usrpam.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@dbi-pg-essentials ~]#

As specified earlier, the user must exist in the database before PAM authentication can be used. So let's create the same user in PostgreSQL, but without a password:

postgres=# create user usrpam with login;
CREATE ROLE
postgres=#

And now usrpam should be able to connect from any client in the 192.168.22.0 network:

[usr2@dbi-pg-essentials_2 ~]$  psql -h dbi-pg-essentials -U usrpam -d postgres  
Password for user usrpam:
psql (11.1)
Type "help" for help.

postgres=> select user;
  user
--------
 usrpam
(1 row)

postgres=>

The article PAM Authentication With PostgreSQL appeared first on Blog dbi services.

Find Database Growth Using OEM Repository

Michael Dinh - Fri, 2018-12-21 08:20

Typically, what has been done is to schedule a job for each database to collect database growth.

This may be problematic, as the job can be forgotten when new databases are created, just as a new database can be forgotten when adding targets to OEM monitoring.

EM12c, EM13c : Querying the Repository Database for Building Reports using Metric Information (Doc ID 2347253.1)

Those raw data are inserted in various tables like EM_METRIC_VALUES for example. 
EM aggregates those management data by hour and by day. 
Those raw data are kept 7 days; the one hour aggregated data are kept 31 days, while one day aggregated data are kept one year.

How to obtain the Historical Database Total Used and Allocated Size from OEM Repository

The above blog post provided a good starting point.

This post uses a query that collects database size (metric_name='DATABASE_SIZE') rather than tablespace size (metric_name='tbspAllocation'), to avoid having to sum all tablespaces to determine the database size.

OMS: 13.2.0 and EMREP DB: 12.2.0

Comparison for METRIC_COLUMN between DATABASE_SIZE and tbspAllocation.

For tbspAllocation, the unit was not clear and I did not research it further, but it appears to be GB.

SQL> select distinct metric_name, METRIC_COLUMN from sysman.mgmt$metric_daily where metric_name='tbspAllocation' order by 1;

METRIC_NAME                                                      METRIC_COLUMN
---------------------------------------------------------------- ----------------------------------------------------------------
tbspAllocation                                                   spaceUsed
tbspAllocation                                                   spaceAllocated

SQL> select distinct METRIC_COLUMN from sysman.mgmt$metric_daily WHERE metric_name='DATABASE_SIZE';

METRIC_COLUMN
----------------------------------------------------------------
ALLOCATED_GB
USED_GB

TARGET_TYPE used (not all results presented):

SQL> select distinct target_type from sysman.mgmt$metric_daily order by 1;

TARGET_TYPE
----------------------------------------------------------------
oracle_database
oracle_pdb
rac_database

METRIC_NAME used (not all results presented):

SQL> select distinct metric_name from sysman.mgmt$metric_daily order by 1;

METRIC_NAME
----------------------------------------------------------------
DATABASE_SIZE
tbspAllocation

DEMO:

SQL> @dbsize.sql
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off
Enter value for 1: perf

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
xxxxperf                                           rac_database    01-MAR-18  2698.6       3526.8
                                                   rac_database    01-APR-18  2709.9       3526.8         2698.6        11.31
                                                   rac_database    01-MAY-18  2728.8       3526.8         2709.9        18.86
                                                   rac_database    01-JUN-18  2735.4       3548.8         2728.8         6.61
                                                   rac_database    01-JUL-18  2746.4       3548.8         2735.4        11.01
                                                   rac_database    01-AUG-18  2758.7       3548.8         2746.4        12.27
                                                   rac_database    01-SEP-18  2772.5       3548.8         2758.7        13.82
                                                   rac_database    01-OCT-18  4888.8       6207.8         2772.5       2116.3
                                                   rac_database    01-NOV-18  4647.8       6207.8         4888.8         -241
                                                   rac_database    01-DEC-18  3383.2       6207.8         4647.8        -1265
yyyyperf                                           oracle_database 01-MAR-18   63.07       395.58
                                                   oracle_database 01-APR-18   63.19       395.58          63.07          .12
                                                   oracle_database 01-MAY-18   64.33       395.58          63.19         1.14
                                                   oracle_database 01-JUN-18   64.81       395.58          64.33          .48
                                                   oracle_database 01-JUL-18    65.1       395.58          64.81          .29
                                                   oracle_database 01-AUG-18   65.22       395.58           65.1          .12
                                                   oracle_database 01-SEP-18   65.79       395.58          65.22          .57
                                                   oracle_database 01-OCT-18   68.18       395.58          65.79         2.39
                                                   oracle_database 01-NOV-18   75.79       395.72          68.18         7.61
                                                   oracle_database 01-DEC-18    80.4       395.72          75.79         4.61

29 rows selected.

SQL> @dbsize
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off
Enter value for 1: *

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
CDByyyy_xxxxxxxxxxxxxxxxxxxxxxxxxx_CDBROOT         oracle_pdb      01-MAR-18    7.96        94.73
                                                   oracle_pdb      01-APR-18    3.44        94.73           7.96        -4.52
                                                   oracle_pdb      01-MAY-18   12.26        95.07           3.44         8.82
                                                   oracle_pdb      01-JUN-18   76.18        95.12          12.26        63.92
                                                   oracle_pdb      01-JUL-18   70.87        95.15          76.18        -5.31
                                                   oracle_pdb      01-AUG-18   77.63        95.15          70.87         6.76
                                                   oracle_pdb      01-SEP-18     4.9        95.15          77.63       -72.73
                                                   oracle_pdb      01-OCT-18       4        95.15            4.9          -.9
                                                   oracle_pdb      01-NOV-18   41.34        95.15              4        37.34
                                                   oracle_pdb      01-DEC-18   33.52        95.15          41.34        -7.82
CDByyyy_xxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxPDB        oracle_pdb      01-MAR-18  1610.6         2571
                                                   oracle_pdb      01-APR-18  1644.9         2571         1610.6        34.27
                                                   oracle_pdb      01-MAY-18  1659.3       2571.3         1644.9        14.43
                                                   oracle_pdb      01-JUN-18  1694.7       2571.4         1659.3        35.32
                                                   oracle_pdb      01-JUL-18  1753.8       2571.4         1694.7        59.18
                                                   oracle_pdb      01-AUG-18  1827.9       2571.4         1753.8        74.06
                                                   oracle_pdb      01-SEP-18  1900.8       2571.4         1827.9        72.91
                                                   oracle_pdb      01-OCT-18  1977.2       2571.4         1900.8        76.43
                                                   oracle_pdb      01-NOV-18  2044.8       2571.4         1977.2         67.6
                                                   oracle_pdb      01-DEC-18  2144.5       2571.4         2044.8        99.64

Script:

set line 200 verify off trimspool off tab off pages 1000 numw 6 echo on
-- Michael Dinh : Dec 20, 2018
set echo off
/*
How to obtain the Historical Database Total Used and Allocated Size from OEM Repository
*/
col target_name for a50
col target_type for a15
undefine 1
break on target_name
WITH dbsz AS (
SELECT
target_name, target_type, month_dt,
SUM(DECODE(metric_column, 'USED_GB', maximum)) used_gb,
SUM(DECODE(metric_column, 'ALLOCATED_GB', maximum)) allocated_gb
FROM (
SELECT target_name, target_type, trunc(rollup_timestamp,'MONTH') month_dt, metric_column, MAX(maximum) maximum
FROM sysman.mgmt$metric_daily
WHERE target_type IN ('rac_database','oracle_database','oracle_pdb')
AND metric_name = 'DATABASE_SIZE'
AND metric_column IN ('ALLOCATED_GB','USED_GB')
AND REGEXP_LIKE(target_name,'&&1','i')
GROUP BY target_name, target_type, trunc(rollup_timestamp,'MONTH'), metric_column
)
GROUP BY target_name, target_type, month_dt
ORDER BY target_name, month_dt
)
SELECT target_name, target_type, month_dt, used_gb, allocated_gb,
LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY month_dt) previous_month,
used_gb-LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY month_dt) diff_used_gb
FROM dbsz
ORDER BY target_name, month_dt
;

@OracleIMC Season's Greeting & Happy 2019


We share our skills to maximize your revenue!
Categories: DBA Blogs

QC vs. PX

Jonathan Lewis - Fri, 2018-12-21 06:26

One last post before closing down for the Christmas break.

Here's a little puzzle with a remarkably easy and obvious solution that Ivica Arsov presented at the UKOUG Tech2018 conference. It's a brilliant little puzzle that makes a very important point, because it reminded me that most problems are easy and obvious only after you've seen them at least once. If you've done a load of testing and investigation into something, it's easy to forget that there may be many scenarios you haven't even thought of testing – so when you see the next puzzle, your mind follows all the things you've done previously and doesn't consider that you might be looking at something new.

In this case I had to wait until the end of the presentation to discover how “easy and obvious” the solution was. Here’s a query with its results, all I’m going to do is join a session (from v$session) with all its parallel execution slaves by looking for the matching qcsid in v$px_session:


break on server_group skip 1 duplicate
 
select
        px.sid, px.qcsid, 
        px.server_group, px.server_set, px.server#,
        ss.sql_id
from
        V$px_session px,
        v$session ss
where
        ss.username = 'TEST_USER'
and     ss.sid = px.sid
order by
        px.server_group nulls first, px.server_set, px.server#
;

     QCSID        SID SERVER_GROUP SERVER_SET    SERVER# SQL_ID
---------- ---------- ------------ ---------- ---------- -------------
       357        357                                    b4wg6286xn324

       357        125            1          1          1 bppfad1y1auhj
       357        246                       1          2 bppfad1y1auhj
       357        364                       1          3 bppfad1y1auhj

       357          7            2          1          1 5vdbyjy0c7dam
       357        133                       1          2 5vdbyjy0c7dam
       357        253                       1          3 5vdbyjy0c7dam

As you can see, session 357 is reported as a query coordinator session with two parallel server groups of 3 slave processes each. Strangely, though, the coordinator and the two groups of parallel query slaves report different SQL_IDs; this is probably contrary to the experience that most of us have had. When a parallel query (or DML or DDL statement) is executing, the query coordinator and all its slave processes should report the same SQL_ID. So what's happening here?

Little pause for thought …
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
… and no doubt some of you were immediately aware of the probable explanation. It’s very simple if you’ve come across the phenomenon before. Here’s the SQL that allowed me (from another session) to capture this result:


rem
rem     Script: px_qc_joke_2.sql
rem     Author: Jonathan Lewis
rem     Dated:  Dec 2018
rem

create table t1 nologging 
as
select ao.*
from 
        all_objects ao, 
        (
         select rownum id
         from   dual 
         connect by level <= 10 ) -- > comment to avoid wordpress format issue
;

create table t2 nologging as select * from t1;
create table t3 nologging as select * from t1;

begin
        dbms_stats.gather_table_stats(user,'t1', method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats(user,'t2', method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats(user,'t3', method_opt=>'for all columns size 1');
end;
/


prompt  =====================
prompt  Starting PL/SQL block
prompt  Prepare to query v$
prompt  =====================

declare
        cursor c1 is select /*+ parallel (t1 3) */ object_id from t1;
        cursor c2 is select /*+ parallel (t2 3) */ object_id from t2;
        m_id1 number;
        m_id2 number;
begin
        open c1;
        fetch c1 into  m_id1;

        open c2;
        fetch c2 into  m_id2;

        for r in (select owner from t3 where object_id = least(m_id1,m_id2)) loop
                dbms_output.put_line(r.owner);
                dbms_lock.sleep(1);
        end loop;

        dbms_output.put_line(m_id1 || ' - ' || m_id2); 

        close c2;
        close c1;
end;
/

I’ve opened two cursors on parallel queries inside an anonymous PL/SQL block. The driving session is the query co-ordinator for two different parallel queries at the same time, because it’s keeping two cursors open, and it’s also executing the (serial) cursor that drives the for loop. If we check v$sql for the three SQL_IDs reported from v$px_session this is what we see:
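The check itself is just a lookup against v$sql using the three SQL_IDs reported by the earlier query (column formatting commands omitted):

```sql
select  sql_id, sql_text
from    v$sql
where   sql_id in ('b4wg6286xn324', 'bppfad1y1auhj', '5vdbyjy0c7dam')
;
```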


SQL_ID
-------------
SQL_TEXT
--------------------------------------------------------------------------------
b4wg6286xn324
declare  cursor c1 is select /*+ parallel (t1 3) */ object_id from t1;  cursor c
2 is select /*+ parallel (t2 3) */ object_id from t2;  m_id1 number;  m_id2 numb
er; begin  open c1;  fetch c1 into  m_id1;   open c2;  fetch c2 into  m_id2;   f
or r in (select owner from t3 where object_id = least(m_id1,m_id2)) loop   dbms_
output.put_line(r.owner);   dbms_lock.sleep(1);  end loop;   dbms_output.put_lin
e(m_id1 || ' - ' || m_id2);   close c2;  close c1; end;

bppfad1y1auhj
SELECT /*+ parallel (t1 3) */ OBJECT_ID FROM T1

5vdbyjy0c7dam
SELECT /*+ parallel (t2 3) */ OBJECT_ID FROM T2


Apart from the warning that it’s easy to be misled by a problem because you keep thinking of all the cases you’ve seen before, there’s another important point behind this little quiz. It’s often said that when you run a parallel query you may actually use “2 * DOP” parallel query slaves. This is true (though for more complicated queries you may get multiple DFO trees at once, each with its own “2 * DOP” slaves), but it’s worth remembering that even with very simple queries a single session can have many cursors open at once, holding “2 * DOP” slaves for each one, and can ruin every other session’s response time because every other session ends up running serial queries.
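As a footnote, a quick way to spot a session hogging slaves in this way is to count the slaves per co-ordinator in v$px_session; this is just a sketch, but against the example above it would report qcsid 357 holding 6 slaves:

```sql
--
--      Count the PX slaves currently attached to each query co-ordinator.
--      The co-ordinator's own row (sid = qcsid) is excluded from the count.
--
select
        qcsid, count(*) as slave_count
from
        v$px_session
where
        sid <> qcsid
group by
        qcsid
order by
        slave_count desc
;
```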

Data types

Tom Kyte - Thu, 2018-12-20 23:06
Hi Tom Tom I am sure you have answered my question several times as " floats are synonyms for numbers. But I have a situation at work where my colleague wont agree with me that there is no difference between oracle float and number datatype with ...
Categories: DBA Blogs

join example, new vs old syntax

Tom Kyte - Thu, 2018-12-20 23:06
Hi Tom, I know in oracle9i we have the cross join and full outer join. Do they exist for 8i if so can you point me to the documentation. If they don't can you give an example/breakdown of how to rewrite in 8i. Thanks in advance,
Categories: DBA Blogs

Show previous YTD if query is ran in January else Current YTD

Tom Kyte - Thu, 2018-12-20 23:06
I'm looking for the logic to filter data Previous YTD if the query is ran in January else filter Current YTD. Any help would be greatly appreciated. Thank you Marc
Categories: DBA Blogs

SQL Plus doesn't consider special characters in filenames

Tom Kyte - Thu, 2018-12-20 23:06
<b>There are files that contain special characters (i.e. $, #) in the name, and SQL Plus cannot find them. </b> <code>$ sqlplus -L user/password@host:port/SID @/path/to/file/create$spec_view.sql SQL*Plus: Release 12.2.0.1.0 Production on Thu Dec...
Categories: DBA Blogs

ORA-29283: invalid file operation

Tom Kyte - Thu, 2018-12-20 23:06
Hi Tom, Thank you for being with us and for all your support over the years. From last few days I am getting an error while writing file from my application i.e Exception Info: ORA-29283: invalid file operation. But if I run same procedure f...
Categories: DBA Blogs

Partner Webcast – Oracle on Serverless Computing: Developing FaaS with Oracle Fn Project

It is probably one of the most wrongly named ideas in the world of IT (what do you mean ‘serverless’? There are no servers? Where should I run my code then?), which is quite likely to take...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Virtualbox 6.0 released

Dietrich Schroff - Thu, 2018-12-20 10:33
Today Oracle released virtualbox version 6.0:

 For Linux the following distributions are supported:

 So let's install the new version:

 dpkg -i virtualbox-6.0_6.0.0-127566~Ubuntu~bionic_amd64.deb 
Vormals nicht ausgewähltes Paket virtualbox-6.0 wird gewählt.
dpkg: Betreffend virtualbox-6.0_6.0.0-127566~Ubuntu~bionic_amd64.deb, welches virtualbox-6.0 enthält:
 virtualbox-6.0 kollidiert mit virtualbox
  virtualbox-5.2 liefert virtualbox und ist vorhanden und installiert.

dpkg: Fehler beim Bearbeiten des Archivs virtualbox-6.0_6.0.0-127566~Ubuntu~bionic_amd64.deb (--install):
 Kollidierende Pakete - virtualbox-6.0 wird nicht installiert
Fehler traten auf beim Bearbeiten von:
 virtualbox-6.0_6.0.0-127566~Ubuntu~bionic_amd64.deb
Ok - first deinstall the old version:
# dpkg -r virtualbox-5.2

(Lese Datenbank ... 421401 Dateien und Verzeichnisse sind derzeit installiert.)

Entfernen von virtualbox-5.2 (5.2.10-122088~Ubuntu~bionic) ...

Trigger für shared-mime-info (1.9-2) werden verarbeitet ...

Trigger für hicolor-icon-theme (0.17-2) werden verarbeitet ...

Trigger für desktop-file-utils (0.23-1ubuntu3.18.04.2) werden verarbeitet ...

Trigger für gnome-menus (3.13.3-11ubuntu1.1) werden verarbeitet ...

Trigger für bamfdaemon (0.5.3+18.04.20180207.2-0ubuntu1) werden verarbeitet ...

Rebuilding /usr/share/applications/bamf-2.index...

Trigger für mime-support (3.60ubuntu1) werden verarbeitet ...
and once again:
# dpkg -i virtualbox-6.0_6.0.0-127566~Ubuntu~bionic_amd64.deb 
(Lese Datenbank ... 420583 Dateien und Verzeichnisse sind derzeit installiert.)
Vorbereitung zum Entpacken von virtualbox-6.0_6.0.0-127566~Ubuntu~bionic_amd64.deb ...
Entpacken von virtualbox-6.0 (6.0.0-127566~Ubuntu~bionic) ...
virtualbox-6.0 (6.0.0-127566~Ubuntu~bionic) wird eingerichtet ...
addgroup: Die Gruppe »vboxusers« existiert bereits als Systemgruppe. Programmende.
Trigger für systemd (237-3ubuntu10.9) werden verarbeitet ...
Trigger für ureadahead (0.100.0-20) werden verarbeitet ...
Trigger für desktop-file-utils (0.23-1ubuntu3.18.04.2) werden verarbeitet ...
Trigger für gnome-menus (3.13.3-11ubuntu1.1) werden verarbeitet ...
Trigger für bamfdaemon (0.5.3+18.04.20180207.2-0ubuntu1) werden verarbeitet ...
Rebuilding /usr/share/applications/bamf-2.index...
Trigger für mime-support (3.60ubuntu1) werden verarbeitet ...
Trigger für hicolor-icon-theme (0.17-2) werden verarbeitet ...
Trigger für shared-mime-info (1.9-2) werden verarbeitet ...
The new GUI has a new submenu for each vm:

If you choose "Logs", you can directly open the logfiles for this vm:
Very nice feature!



Lightning Web Components - Events, and listening to your children

Robert Baillie - Thu, 2018-12-20 09:13
Another fantastic inclusion in Lightning Web Components is the completely reworked events model. De-composing functionality and building smaller, more generic building blocks has become much simpler and much more intuitive.

In the world of Lightning Components I never got on with events. The idea of adding a Salesforce configuration for an event, registering events on the dispatcher template, and then registering listeners on the receiving template seemed really cumbersome. And then added onto that were the differences in syntax between component and application events. They just felt really unnatural.

In Lightning Web Components all this has become significantly simpler, and much more in keeping with the standard HTML / Javascript model. We've already seen how we can use @api to allow state to be passed into our components. Now we're talking about notifying our parents when events occur. I could go into deep detail on how this is done, but the documentation on this area is...
