Yann Neuhaus

dbi services technical blog

Autonomous Database

Sun, 2017-10-08 12:09

Larry Ellison has announced Oracle 18c as the Autonomous Database: the self-driving database that needs no human labor. Here is a quick recap of what lies behind the marketing words. My opinion only.

Autonomous

Since Oracle decided to be a public cloud PaaS provider, they have announced ‘unmanaged’ and ‘managed’ services. The managed service for DBaaS is where you are not the database administrator. You have full administration rights to deploy and develop your application, but the system administration is done by Oracle: provisioning, patching, upgrades, system tuning, availability. The first managed service was announced last year at OOW16: the Oracle Exadata Express Cloud Service, for developers. And the second managed service is the Oracle Data Warehouse Cloud Service, for data warehousing. It is planned for December 2017 and will be based on a new version: Oracle 18c.
And let’s be clear, the ‘autonomous’ label came at the last minute, as the marketing message for Open World, to show that the managed service is fully automated, because Cloud is about automation.

So, is that only marketing? I don’t think so. There are real features behind it. And some of them have existed for a long time and just had to be enhanced further in 18c.

Availability

The features for full automation have been there for a long time. RAC protects from instance or server crashes and Data Guard protects from all other failures. Both are fully automated with no human intervention: service relocation in RAC, Fast-Start Failover in Data Guard. And both can fail over transparently with Application Continuity. Oracle also recently introduced Sharding, to tie data availability to different datacenters. And this technique will also be used for RAC instance affinity in Oracle 18c.

Patch and Upgrade

You can apply patches in a rolling manner with RAC. The problem was OJVM, but this will also be rolling in 18c. You can do rolling upgrades with Data Guard. And 18c will have automated and faster upgrades at PDB plug-in. From a system point of view, everything is fully automated. However, we will still need human intervention for testing, for planning it at the right time, and of course for troubleshooting when something goes wrong. The autonomous database is not incompatible with that. With logical replication (GoldenGate, transient logical standby) or versioning (Edition-Based Redefinition, Workspace Manager), Oracle has the tools to automatically provide the environment to test the upgrade before it is opened in production.

We can also imagine that other features may help to avoid regression. For example, SQL Plan Management can prevent execution plan regressions on common use-cases, and let the (human) user accept evolved plans later. This can also be done automatically (but humans still have to define the capture). Of course, we all know the limits of the advisors and automatic implementation. But there are also some applications where it can be fine. This ‘autonomous’ Cloud Service is a possibility, not a universal solution.
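A sketch of the SQL Plan Management mechanics behind that (documented parameters and package; the SQL handle below is a hypothetical value):

-- capture a baseline for every repeatable statement
alter system set optimizer_capture_sql_plan_baselines=true;
-- only accepted plans are used; new plans are queued for verification
alter system set optimizer_use_sql_plan_baselines=true;
-- later, a task (or a human) verifies and accepts the evolved plans
select dbms_spm.evolve_sql_plan_baseline(sql_handle=>'SQL_1234abcd') from dual;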

Tuning

Again, we all know that the database cannot be optimized without knowledge of the data and of the application design. But Oracle also has a lot of features to automate some common practices: Automatic Data Optimization, Segment Advisor, SQL Access Advisor, Auto DOP, automatic choice of buffered or direct reads,… We have seen a lot of drawbacks with SQL Plan Directives, but that was mainly in OLTP. This new managed service is for DWH, where dynamic sampling is not a bad idea.
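As an illustration of ‘automating a common practice’, an Automatic Data Optimization policy declares the housekeeping once and lets the database apply it. A minimal sketch on a hypothetical SALES table (Heat Map must be enabled for ADO):

alter system set heat_map=on;
-- compress rows that have not been modified for 30 days
alter table sales ilm add policy
  row store compress advanced row
  after 30 days of no modification;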

The idea is to ‘load data and run’ and let Oracle take care of indexes, partitioning, etc. You create the tables and declare primary keys and foreign keys (RELY DISABLE NOVALIDATE – see a previous blog post about that). Then I suppose that Oracle can guess which are the dimension tables and which are the fact tables, and then do the common things for such a model: partition on the date dimension (if there is only one – maybe it has to detect some load/query patterns), create bitmap indexes on all fact foreign keys. Online statistics gathering will be extended in 18c to incrementally maintain statistics on bulk inserts, and this may include histograms.
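Declaring such constraints without the validation overhead is one DDL per key. A sketch with hypothetical fact and dimension tables:

alter table fact_sales add constraint fk_sales_date
  foreign key (date_id) references dim_date(date_id)
  rely disable novalidate;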

I’m very skeptical on that point, because I’ve seen lots of data warehouse databases where, even on big hardware, there is always a need for optimization. But my point of view may be biased. I’m a consultant, so I see only the databases where people think they need human analysis and troubleshooting. There may be some non-critical data warehouse databases where nobody is doing any optimization, and there, implementing some default optimizations may be sufficient to make them a bit more efficient. This autonomous elastic cloud service may be a good start for some projects, when it is difficult to plan the hardware and human resources that will be needed. But I’m quite sure that after a while, designing an efficient and scalable infrastructure and data model will still require our DBA skills and human intelligence.

Load

This service is provisioned as a PDB where what we can do is limited by the multitenant lockdown profiles. We can connect easily (with a credentials .zip) from SQL Developer, and we can load data from an object store using the DBMS_CLOUD package to define the credentials (Swift) and load text files. Once again, it is an evolution of existing features like external tables and preprocessors.
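A minimal sketch of that load workflow, assuming the DBMS_CLOUD procedures as documented for this service (the credential name, target table and object store URI below are hypothetical):

begin
  dbms_cloud.create_credential(
    credential_name => 'OBJ_STORE_CRED',  -- hypothetical credential name
    username        => 'cloud_user',
    password        => 'swift_auth_token');
  dbms_cloud.copy_data(
    table_name      => 'SALES',           -- hypothetical target table
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://swift.example.com/v1/mytenant/mybucket/sales.csv',
    format          => json_object('delimiter' value ',', 'skipheaders' value '1'));
end;
/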

Scalability

This service is Elastic: it can scale the CPU resource up and down without stopping the service. Again this can use existing features: OVM for the host, and Resource Manager for the CDB and PDB.
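A sketch of how CPU can be shared and capped between PDBs with the Resource Manager (the plan and PDB names are hypothetical):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_cdb_plan(plan => 'ELASTIC_PLAN');
  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'ELASTIC_PLAN',
    pluggable_database => 'PDB1',
    shares             => 3,
    utilization_limit  => 50);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/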

So what?

Some will say that it is only marketing with nothing behind it, the same announcement that was made for each previous version. Others will say that it is really autonomous, self-driving, self-securing, self-repairing. This ‘self-driven’ idea is an analogy with Tesla (Elon Musk is a big friend of Larry Ellison), but for the moment, there is still a human inside a Tesla. Autonomous is a vision for the future, not the present. The present is a new managed service, more online operations, and easier upgrades.

What I really like is the integration of existing features to serve a new marketing trend. Do you remember when the flashback features came out? Flashback query had existed internally since the invention of rollback segments (and maybe even before, with the Before Image). It was exposed in 9i with a DBMS package, and in 10g with the SQL ‘AS OF’ clause. Everything was already there to bring a feature that no other RDBMS is capable of. The existing features support the marketing message at the right time, and this marketing message encourages the development of new features again, like new online operations. And those do not only benefit the managed cloud services, as they go into the main branch of Oracle Database.
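For reference, this is the 10g ‘AS OF’ syntax mentioned above, on the classic SCOTT demo table:

select * from scott.emp
  as of timestamp systimestamp - interval '10' minute
 where empno = 7839;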

The other thing I like is the idea of having a configuration dedicated to specific needs. In the OOW17 hands-on lab, there was even a “_cloud_service_type” parameter set to DWCS. Here this service is for DWH, and there will be one dedicated to OLTP in mid-2018. This is something I would like to see in future versions. For example, there was a lot of discussion about 12.1 enabling adaptive statistics by default, and 12.2 disabling them. This kind of default setting could be chosen depending on the database type: more conservative for OLTP upgrades, more adaptive for new data warehouse projects.
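As a side note, hidden parameters like this one are normally not visible in V$PARAMETER; the classic way to peek at them (as SYS, and certainly not from inside a lockdown-profile PDB) is a query along these lines:

select i.ksppinm name, v.ksppstvl value
  from x$ksppi i join x$ksppcv v on v.indx = i.indx
 where i.ksppinm = '_cloud_service_type';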

 


Changing the LOCALE in CentOS/RedHat 7

Fri, 2017-10-06 07:04

What I really don’t like is this: someone wants me to work on a Linux system for whatever purpose, and then I get messages like this:

postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permission non accordée

or:

postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permiso denegado

It is not that I don’t like French or Spanish, but usually (well, not in that simple case) I am not able to translate that without help from others. So how can you change that to English (which should be the default anyway nowadays)? Let’s go:

A quick fix for getting the above messages in English would be:

postgres@pgbox:/home/postgres/ [pg960final] export LANG="en_EN.UTF8"
postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permission denied

The good thing with this approach is that the setting is temporary and valid only for my current session. Neither the default for that particular user nor the system default is touched. But what if you want to make this persistent for this user? That is easy as well:

postgres@pgbox:/home/postgres/ [pg960final] echo "LANG=\"en_EN.UTF8\"" >> ~/.bash_profile 
postgres@pgbox:/home/postgres/ [pg960final] echo "export LANG" >> ~/.bash_profile 

Once you have that, every new session will have it set. The system default is defined in /etc/locale.conf:

postgres@pgbox:/home/postgres/ [pg960final] cat /etc/locale.conf 
LANG="en_US.UTF-8"

So when you want to make it the persistent default for the whole system then change it there. Not a big deal, but good to know.

 


And finally it is there…PostgreSQL 10

Thu, 2017-10-05 08:45

Selection_001

What are you waiting for?

 


udev rules for ASM devices on RHEL 7

Thu, 2017-10-05 06:28

Preparing the storage devices for use with ASM is one of the first tasks you should do. When you do not want to use ASMLib, udev is what you should use for this. As the order/naming of the devices in the device tree is not guaranteed (e.g. dm-1 is not guaranteed to be the same device after a reboot of the system), you need a way to make it persistent, and that is what udev can be used for.

The first step is to get the UUIDs of all the devices you want to use with ASM (dm-1 to dm-3, in this case). This assumes that multipathing is already set up:

[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-1 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a819ff0000018559bf68d7
[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-2 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a81955000001b359bf6920
[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-3 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a819ff0000018359bf68ce

Having this, the udev rules should look like this (of course the names do not need to be the same as below):

[root@xxx ~]$ cat /etc/udev/rules.d/30-oracle.rules 
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a81955000001b359bf6920", NAME="asm-crs",  OWNER="oracle", GROUP="asmadmin", MODE="0600"
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a819ff0000018359bf68ce", NAME="asm-fra",  OWNER="oracle", GROUP="asmadmin", MODE="0600"
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a819ff0000018559bf68d7", NAME="asm-data", OWNER="oracle", GROUP="asmadmin", MODE="0600"

Once the system has rebooted, the permissions on the devices should be fine and the naming persistent:

[root@xxx ~]$ ls -al /dev/mapper/
total 0
drwxr-xr-x  2 root root     280 Oct  4 08:50 .
drwxr-xr-x 20 root root    4180 Oct  4 08:50 ..
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-crs -> ../dm-2
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-data -> ../dm-1
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-fra -> ../dm-3
crw-------  1 root root 10, 236 Oct  4 08:50 control
lrwxrwxrwx  1 root root       7 Oct  4 12:51 disk00 -> ../dm-4
lrwxrwxrwx  1 root root       7 Oct  4 08:50 vg_root-lv_openafs -> ../dm-8
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_root -> ../dm-0
lrwxrwxrwx  1 root root       8 Oct  4 12:51 vg_root-lv_swap -> ../dm-10
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_tmp -> ../dm-7
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var -> ../dm-6
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var_log -> ../dm-5
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var_log_audit -> ../dm-9
[root@xxx ~]$ ls -la /dev/dm*
brw-rw---- 1 root   disk     253,  0 Oct  4 12:51 /dev/dm-0
brw------- 1 oracle asmadmin 253,  1 Oct  4 13:11 /dev/dm-1
brw-rw---- 1 root   disk     253, 10 Oct  4 12:51 /dev/dm-10
brw------- 1 oracle asmadmin 253,  2 Oct  4 13:11 /dev/dm-2
brw------- 1 oracle asmadmin 253,  3 Oct  4 13:11 /dev/dm-3
brw-rw---- 1 root   disk     253,  4 Oct  4 12:51 /dev/dm-4
brw-rw---- 1 root   disk     253,  5 Oct  4 12:51 /dev/dm-5
brw-rw---- 1 root   disk     253,  6 Oct  4 12:51 /dev/dm-6
brw-rw---- 1 root   disk     253,  7 Oct  4 12:51 /dev/dm-7
brw-rw---- 1 root   disk     253,  8 Oct  4 08:50 /dev/dm-8
brw-rw---- 1 root   disk     253,  9 Oct  4 12:51 /dev/dm-9

Hope this helps.

 


Going from SLES12 SP2 to SLES12 SP3, online

Wed, 2017-10-04 05:59

SLES 12 SP3 was released some time ago and, as we have customers running on it, I thought it might be a good idea to test the upgrade from SP2 to SP3. It turned out this is quite easy and can be done online. The supported methods for doing this are either YaST or Zypper directly. As I wanted to stay on the command line, I’ll use Zypper for the scope of this post. Let’s go…

As said above I’ll be upgrading from SLES 12 SP2:

postgres@sles12sp2:/home/postgres/ [pg963] cat /etc/os-release 
NAME="SLES"
VERSION="12-SP2"
VERSION_ID="12.2"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP2"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp2"

When you have another setup make sure that the upgrade path is supported, which you can check here.

My current kernel version is:

postgres@sles12sp2:/home/postgres/ [pg963] uname -a
Linux sles12sp2 4.4.74-92.38-default #1 SMP Tue Sep 12 19:43:46 UTC 2017 (545c055) x86_64 x86_64 x86_64 GNU/Linux

The first thing you should do is to apply the latest patches:

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper patch
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Software_Development_Kit_12_SP2_x86_64'.
Retrieving repository 'SLES12-SP2-Updates' metadata ...............................................................................................[done]
Building repository 'SLES12-SP2-Updates' cache ....................................................................................................[done]
Retrieving repository 'SLE-SDK12-SP2-Updates' metadata ............................................................................................[done]
Building repository 'SLE-SDK12-SP2-Updates' cache .................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 4 NEW patches are going to be installed:
  SUSE-SLE-SERVER-12-SP2-2017-1597 SUSE-SLE-SERVER-12-SP2-2017-1602 SUSE-SLE-SERVER-12-SP2-2017-1606 SUSE-SLE-SERVER-12-SP2-2017-1622

The following 8 packages are going to be upgraded:
  autofs gtk3-data gtk3-lang gtk3-tools libgtk-3-0 typelib-1_0-Gtk-3_0 xinetd yast2-xml

8 packages to upgrade.
Overall download size: 6.4 MiB. Already cached: 0 B. After the operation, additional 12.0 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package autofs-5.0.9-28.3.5.x86_64                                                                       (1/8), 407.0 KiB (  2.0 MiB unpacked)
Retrieving delta: ./x86_64/autofs-5.0.9-27.2_28.3.5.x86_64.drpm, 160.5 KiB
Retrieving: autofs-5.0.9-27.2_28.3.5.x86_64.drpm ..................................................................................................[done]
Applying delta: ./autofs-5.0.9-27.2_28.3.5.x86_64.drpm ............................................................................................[done]
Retrieving package gtk3-data-3.20.10-17.3.20.noarch                                                                 (2/8), 162.7 KiB ( 10.8 KiB unpacked)
Retrieving: gtk3-data-3.20.10-17.3.20.noarch.rpm ..................................................................................................[done]
Retrieving package xinetd-2.3.15-8.8.1.x86_64                                                                       (3/8), 126.6 KiB (286.4 KiB unpacked)
Retrieving delta: ./x86_64/xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm, 17.8 KiB
Retrieving: xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm .................................................................................................[done]
Applying delta: ./xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm ...........................................................................................[done]
Retrieving package yast2-xml-3.1.2-2.3.1.x86_64                                                                     (4/8),  48.2 KiB (127.7 KiB unpacked)
Retrieving delta: ./x86_64/yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm, 10.3 KiB
Retrieving: yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm ..........................................................................................[done]
Applying delta: ./yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm ....................................................................................[done]
Retrieving package gtk3-tools-3.20.10-17.3.20.x86_64                                                                (5/8), 237.8 KiB (294.2 KiB unpacked)
Retrieving: gtk3-tools-3.20.10-17.3.20.x86_64.rpm .................................................................................................[done]
Retrieving package libgtk-3-0-3.20.10-17.3.20.x86_64                                                                (6/8),   2.5 MiB (  8.4 MiB unpacked)
Retrieving delta: ./x86_64/libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm, 175.6 KiB
Retrieving: libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm ...........................................................................................[done]
Applying delta: ./libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm .....................................................................................[done]
Retrieving package typelib-1_0-Gtk-3_0-3.20.10-17.3.20.x86_64                                                       (7/8), 335.2 KiB (879.3 KiB unpacked)
Retrieving delta: ./x86_64/typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm, 159.9 KiB
Retrieving: typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm ..................................................................................[done]
Applying delta: ./typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm ............................................................................[done]
Retrieving package gtk3-lang-3.20.10-17.3.20.noarch                                                                 (8/8),   2.6 MiB ( 18.8 MiB unpacked)
Retrieving delta: ./noarch/gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm, 178.6 KiB
Retrieving: gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm ............................................................................................[done]
Applying delta: ./gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm ......................................................................................[done]
Checking for file conflicts: ......................................................................................................................[done]
(1/8) Installing: autofs-5.0.9-28.3.5.x86_64 ......................................................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/autofs...


(2/8) Installing: gtk3-data-3.20.10-17.3.20.noarch ................................................................................................[done]
(3/8) Installing: xinetd-2.3.15-8.8.1.x86_64 ......................................................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/xinetd...


(4/8) Installing: yast2-xml-3.1.2-2.3.1.x86_64 ....................................................................................................[done]
(5/8) Installing: gtk3-tools-3.20.10-17.3.20.x86_64 ...............................................................................................[done]
(6/8) Installing: libgtk-3-0-3.20.10-17.3.20.x86_64 ...............................................................................................[done]
(7/8) Installing: typelib-1_0-Gtk-3_0-3.20.10-17.3.20.x86_64 ......................................................................................[done]
(8/8) Installing: gtk3-lang-3.20.10-17.3.20.noarch ................................................................................................[done]

Once everything is patched, make sure you have the zypper-migration-plugin installed on your system (it is already there in my case):

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper install zypper-migration-plugin
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Software_Development_Kit_12_SP2_x86_64'.
Loading repository data...
Reading installed packages...
'zypper-migration-plugin' is already installed.
No update candidate for 'zypper-migration-plugin-0.10-12.4.noarch'. The highest available version is already installed.
Resolving package dependencies...

Nothing to do.
postgres@sles12sp2:/home/postgres/ [pg963] 

Ready to migrate to SP3:

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper migration

Executing 'zypper  refresh'

Retrieving repository 'SLE-Module-Containers12-Pool' metadata .....................................................................................[done]
Building repository 'SLE-Module-Containers12-Pool' cache ..........................................................................................[done]
Repository 'SLE-Module-Containers12-Updates' is up to date.                                                                                              
Repository 'SLES12-SP2-12.2-0' is up to date.                                                                                                            
Retrieving repository 'SLES12-SP2-Pool' metadata ..................................................................................................[done]
Building repository 'SLES12-SP2-Pool' cache .......................................................................................................[done]
Repository 'SLES12-SP2-Updates' is up to date.                                                                                                           
Retrieving repository 'SLE-SDK12-SP2-Pool' metadata ...............................................................................................[done]
Building repository 'SLE-SDK12-SP2-Pool' cache ....................................................................................................[done]
Repository 'SLE-SDK12-SP2-Updates' is up to date.                                                                                                        
All repositories have been refreshed.

Executing 'zypper  --no-refresh patch-check --updatestack-only'

Loading repository data...
Reading installed packages...

0 patches needed (0 security patches)

Available migrations:

    1 | SUSE Linux Enterprise Server 12 SP3 x86_64
        SUSE Linux Enterprise Software Development Kit 12 SP3 x86_64
        Containers Module 12 x86_64 (already installed)
       

[num/q]: 1

Executing 'snapper create --type pre --cleanup-algorithm=number --print-number --userdata important=yes --description 'before online migration''

Upgrading product SUSE Linux Enterprise Server 12 SP3 x86_64.
Found obsolete repository SLES12-SP2-12.2-0
Disable obsolete repository SLES12-SP2-12.2-0 [y/n] (y): y
... disabling.
Upgrading product SUSE Linux Enterprise Software Development Kit 12 SP3 x86_64.
Upgrading product Containers Module 12 x86_64.

Executing 'zypper --releasever 12.3 ref -f'

Warning: Enforced setting: $releasever=12.3
Forcing raw metadata refresh
Retrieving repository 'SLE-Module-Containers12-Pool' metadata ..............................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-Module-Containers12-Pool' cache ...................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-Module-Containers12-Updates' metadata ...........................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-Module-Containers12-Updates' cache ................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLES12-SP3-Pool' metadata ...........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLES12-SP3-Pool' cache ................................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLES12-SP3-Updates' metadata ........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLES12-SP3-Updates' cache .............................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-SDK12-SP3-Pool' metadata ........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-SDK12-SP3-Pool' cache .............................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-SDK12-SP3-Updates' metadata .....................................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-SDK12-SP3-Updates' cache ..........................................................................................................[done]
All repositories have been refreshed.

Executing 'zypper --releasever 12.3  --no-refresh  dist-upgrade --no-allow-vendor-change '

Warning: Enforced setting: $releasever=12.3
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Loading repository data...
Reading installed packages...
Computing distribution upgrade...

The following 21 NEW packages are going to be installed:
  blog crash-kmp-default-7.1.8_k4.4.73_5-3.9 kernel-default-4.4.82-6.9.1 libefivar1 libfastjson4 libfreebl3 libibverbs libibverbs1 libmlx5-1 librados2 libsnapper4
  libsoftokn3 libtidyp-1_04-0 lttng-ust mozilla-nss mozilla-nss-certs patterns-sles-yast2 perl-HTML-Tidy python-talloc rdma-core sles-release-POOL

The following NEW pattern is going to be installed:
  yast2

The following 2 packages are going to be REMOVED:
  libsnapper3 pytalloc

The following 168 packages are going to be upgraded:
  aaa_base aaa_base-extras augeas augeas-lenses autoyast2 autoyast2-installation binutils cpupower crash cups-filters-ghostscript dbus-1 dbus-1-x11 dracut
  efibootmgr ethtool filesystem gdb grub2 grub2-i386-pc grub2-snapper-plugin grub2-systemd-sleep-plugin hwinfo ipmitool iscsiuio kdump kexec-tools kpartx
  libaugeas0 libblkid1 libcpupower0 libdbus-1-3 libdcerpc0 libdcerpc-binding0 libdrm2 libdrm_amdgpu1 libdrm_intel1 libdrm_nouveau2 libdrm_radeon1 libfdisk1
  libgbm1 libgnutls28 libldb1 libLLVM libmount1 libndr0 libndr-krb5pac0 libndr-nbt0 libndr-standard0 libnetapi0 libp11-kit0 libparted0 libsamba-credentials0
  libsamba-errors0 libsamba-hostconfig0 libsamba-passdb0 libsamba-util0 libsamdb0 libsmartcols1 libsmbclient0 libsmbconf0 libsmbldap0 libsolv-tools libstorage7
  libstorage-ruby libtalloc2 libtasn1 libtasn1-6 libtdb1 libtevent0 libtevent-util0 libuuid1 libwbclient0 libx86emu1 libyui7 libyui-ncurses7 libyui-ncurses-pkg7
  linux-glibc-devel logrotate makedumpfile mcelog mdadm Mesa Mesa-libEGL1 Mesa-libGL1 Mesa-libglapi0 multipath-tools open-iscsi openslp openslp-server openssh
  openssh-helpers p11-kit p11-kit-tools parted patterns-sles-base patterns-sles-laptop patterns-sles-Minimal perl-Bootloader perl-solv postfix python-solv
  release-notes-sles rollback-helper rsyslog ruby2.1-rubygem-cfa ruby2.1-rubygem-cfa_grub2 samba-client samba-libs samba-winbind sle-sdk-release
  sle-sdk-release-POOL sles-release snapper snapper-zypp-plugin sudo SUSEConnect SuSEfirewall2 systemd-presets-branding-SLE sysvinit-tools util-linux
  util-linux-lang util-linux-systemd yast2 yast2-add-on yast2-bootloader yast2-ca-management yast2-core yast2-country yast2-country-data yast2-dhcp-server
  yast2-dns-server yast2-firewall yast2-ftp-server yast2-hardware-detection yast2-http-server yast2-installation yast2-iscsi-client yast2-kdump yast2-ldap
  yast2-mail yast2-migration yast2-network yast2-nis-client yast2-ntp-client yast2-packager yast2-pam yast2-perl-bindings yast2-pkg-bindings yast2-printer
  yast2-registration yast2-ruby-bindings yast2-samba-client yast2-schema yast2-services-manager yast2-slp yast2-smt yast2-snapper yast2-storage yast2-support
  yast2-theme-SLE yast2-trans-en_US yast2-update yast2-users yast2-vm yast2-ycp-ui-bindings zypper zypper-lifecycle-plugin zypper-log

The following 3 patterns are going to be upgraded:
  base laptop Minimal

The following 2 products are going to be upgraded:
  "SUSE Linux Enterprise Server 12 SP2" "SUSE Linux Enterprise Software Development Kit 12"

The following 19 packages are going to be downgraded:
  branding-SLE grub2-branding-SLE iptables kernel-firmware libapparmor1 libICE6 libiptc0 libjasper1 libldap-2_4-2 libxtables10 libzypp nfs-client
  nfs-kernel-server openldap2-client openldap2-devel plymouth-branding-SLE wallpaper-branding-SLE xtables-plugins yast2-auth-client

The following package is going to change architecture:
  yast2-network  x86_64 -> noarch


The following 5 packages are not supported by their vendor:
  libtidyp-1_04-0 openldap2-devel perl-HTML-Tidy sle-sdk-release sle-sdk-release-POOL

168 packages to upgrade, 19 to downgrade, 21 new, 2 to remove, 1 to change arch.
Overall download size: 171.1 MiB. Already cached: 0 B. After the operation, additional 184.6 MiB will be used.
Continue? [y/n/...? shows all options] (y): y

As I am fine with the summary I’ll proceed:

(Use arrows or pgUp/pgDown keys to scroll the text by lines or pages.)

In order to install 'SUSE Linux Enterprise Software Development Kit 12 SP3' (product), you must agree to terms of the following license agreement:

SUSE(R) Linux Enterprise Software Development Kit 12 Service Pack 3
 
SUSE End User License Agreement 

Of course you need to agree to the license agreement and then the migration starts. At the end you should see something like this:

    dracut: *** Creating image file '/boot/initrd-4.4.82-6.9-default' ***
    dracut: *** Creating initramfs image file '/boot/initrd-4.4.82-6.9-default' done ***

There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.

Executing 'snapper create --type post --pre-number 29 --cleanup-algorithm=number --print-number --userdata important=yes --description 'after online migration''

That’s all:

postgres@sles12sp2:/home/postgres/ [pg963] cat /etc/os-release 
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"

Of course the kernel is still the old one and the new one will become active after the next reboot:

postgres@sles12sp2:/home/postgres/ [pg963] sudo systemctl reboot
...
...
postgres@sles12sp2:/home/postgres/ [pg963] uname -a
Linux sles12sp2 4.4.82-6.9-default #1 SMP Tue Sep 19 06:38:15 UTC 2017 (d4a2d80) x86_64 x86_64 x86_64 GNU/Linux

Quite easy.

 


ODA X7

Tue, 2017-10-03 14:58
ODA X6-2

So, the ODA X6 end of life seems to be planned for October and, be careful, as far as I know this also concerns the storage expansion.

The main feedback items on the X6-2 series were:

  • not enough cores when consolidating multiple databases in M, L and HA
  • not enough storage on HA when compared with X5-2
  • Standard Edition on HA would be great for cheaper RAC
  • The position of the ‘Large’ version of ODA Lite was not clear
ODA X7-2S

The Small version of the ODA Lite has the same number of cores as the X6-2: 10 cores in one socket, an Intel Xeon Silver 4114 processor (2.2GHz). Some more memory: 6x32GB instead of 4x32GB, so 192GB, expandable to 384GB as before.

And the big new thing is that it can run virtualized (X6-2 was Bare Metal only). Even if it is still OVM, the hypervisor is KVM. OVM is accepted by Oracle to license per vCPU, but for the moment the database cannot run on a VM so you need to get database licences for all the cores enabled on the ODA.

The storage for database is two 6.4TB Flash NVMe, which means 12.8TB raw capacity, 4.8TB double-mirrored, 3.2TB triple-mirrored.

ODA X7-2 M

You can look at it in the 3D view

The larger version of the ODA Lite has more cores: 36 instead of 20 before, with two sockets of Intel Xeon Gold 6140 processors (2.3GHz). Some more memory: 12x32GB=384GB instead of 256GB, expandable to 768GB as before.

As the Small one, it can be virtualized with KVM.

The storage for database is two 6.4TB Flash NVMe, and you can add 6 additional ones to bring the storage to 51.2TB raw capacity.

Because of this new capacity in the ‘Medium’ one, and the large possibility to add disks, there is no ‘Large’ version of ODA X7-2.

ODA X7-2 HA

You can look at it in the 3D view

The HA version of the ODA has more cores: 72 instead of 40 before, with two 18-core sockets of Intel Xeon Gold 6140 processors (2.3GHz) in two nodes. Some more memory: 768GB instead of 256GB, expandable to 1.5TB as before.

The big new thing is that you are allowed to run Standard Edition, and thus RAC with no option. In the latter case, with SE2, only one socket per node is enabled.

The second important thing is that the storage, which was limited (12TB expandable to 24TB, or 48TB with a second shelf), now comes in two flavors:

  • High Performance: 3.2TB SSDs for 16TB raw (which is 8TB in normal redundancy or 5.3TB in high redundancy), expandable to 64TB, or 128TB with an additional shelf
  • High Capacity: starts the same with 4 SSDs, but you add mechanical 10TB disks for an additional 150TB to 300TB

For the REDO, there are 4 additional 800GB disks (4 additional in the expansion shelf).

In the initial release of X7 the virtualization on HA still used OVM/Xen.

Common new features

You will be able to install 12cR2 (12.2.0.1).

You have the choice for the network between:

  • two 10 GbE ports (RJ45)
  • or two 10/25 GbE ports (SFP28)

OAKCLI will be replaced everywhere by ODACLI

CaptureODA-X7-2

 


SQL Server 2017 RTM is available: time to try!

Tue, 2017-10-03 09:47

Now SQL Server 2017 RTM  is available. :-D
The first step is to go to the Evaluation Center here

Install_00

After completing the registration with your contact information, you download the file “SQL Server 2017-SSEI-Eval.exe”.
Install_02
Run SQL Server 2017-SSEI-Eval.exe and now it begins!

A first screen gives me 3 choices to install SQL Server

  • Basic: easy and fast to download and install SQL Server 2017
  • Custom: Download the setup package and launch the SQL Installation Center
  • Download Media: choose between an ISO or a CAB

Install_03

Basic Installation

In this mode, you have 2 steps to follow:

    • Accept the license

Install_basic_01

 

    • Choose the destination folder

Install_basic_02

And now the installation begins…
Install_basic_03
At the end, you have a SQL Server installed with a default instance as you can see in this summary window.
Install_basic_04
If you click on the “Connect now” button, the interface runs SQLCMD.
Install_basic_05_connectnow

The second button, “Customize”, runs the SQL Server setup that you already know, to install a SQL Server instance.
Install_basic_05_customize

The “Install SSMS” button returns you to the web page to download SQL Server Management Studio (SSMS).

Install_basic_05_ssms

Here is the direct link to SSMS.

At the end, you can also see the installation folder of your default instance…
Install_basic_06
Be very careful with this installation type: at the end, you have a default instance installed!

The last point to notice is that in C:\SQLServer2017Media you will find the full setup of SQL Server 2017.
Install_basic_07

Custom installation

This installation asks for the target location to download the installation files (the default is C:\SQLServer2017Media).

installation_custom

 

Remark: if, like me, you installed Basic before and then click the custom installation, it will detect that you already have the installation files (the download phase is skipped!).

After the download, it runs the “classic” setup (SQL Server Installation Center) to install an instance.

Install_custom02

Download Media

This type of installation gives you the language choice, the package choice (.iso or .cab) and the folder for the download file.
Install_04
Click Download and let’s go!
Install_05
After a few minutes, the file (an ISO in my case) is there.
Install_06

This last installation type is the best for me because I can mount the iso file on VMs to install and test SQL Server 2017!

Now, it’s time to test this new version with my team and write a lot of blog posts for you! 8-)

 


Oracle 18c RU and RUR for pioneers and followers

Tue, 2017-10-03 08:44

In a previous post, I explained the announcement of the new release model, with Release Updates and Release Update Revisions replacing Proactive Bundle Patches and PSUs. Here is a description that will help you plan your patching activity.

Which DBA are you?

There are currently a lot of discussions about the different types of DBAs. When it comes to patching, the DBA type is important.

Some DBAs are very conservative: do not use new features, do not upgrade to ‘.1’ releases, because every change can break something. At least, maybe, they apply just the SPU for the security fixes. Some are more confident and apply the PSUs, with more fixes. The problem is that what is not fixed may be subject to a bug one day, and then need an urgent one-off patch.

And some others are very pro-active: they patch and upgrade the development environment as soon as there is a new Proactive Bundle Patch, then they always have the latest fixes.

Actually, this does not depend only on the personality of the DBA but also on the ability to test. If you have only the production database with a representative volume and activity, then any patch set can be dangerous. If, in addition to that, any intervention or decision takes 3 months, the problems encountered are even worse. But if you are in DevOps, where developers work on a production copy and integration tests run every night, covering most of the critical use cases, then you can be confident that once those databases have run a few days with the latest Proactive Bundle Patch, you can do the same in production. And if you hit the low-probability case of a regression, you can react immediately with workarounds because you are agile.

The SPUer, the PSUer and the PBPer

In the previous release model, still used for 11g and 12c up to 12.2, this context defines whether you are an SPUer (only security patches, with a very low risk of regression), a PSUer (the critical fixes), or a Proactive BPer (more fixes). But even the latter still does not include optimizer fixes, and may therefore still require one-offs.

The .0, the .1 and the .2

With the new release model, you will have releases (R) every year, like 18c, increasing the first number, such as 18.1.0, which should arrive in January. Then, during the support window, you will have Release Updates (RU) released quarterly to proactively bring the software to the latest fixes, increasing the second number, such as 18.2.0. But, because those may bring some regressions, you will have additional Release Update Revisions (RUR) which fix the issues encountered in the Release Updates of the last 6 months, increasing the third number, such as 18.2.1 and 18.2.2.

Then, there will be 3 types of approaches

The pioneer will go to the latest software

The pioneer is confident in his ability to cover most of the critical use cases in his automated regression tests and is agile enough to fix quickly any issue in the non-covered cases. So, he will upgrade to release 18.1.0 in January 2018. He will apply the Release Updates as soon as they are there: 18.2.0 in April, 18.3.0 in July, 18.4.0 in October. In January 2019 he can apply the 18c Release Update 18.5.0 or upgrade to the 19c release 19.1.0.

The wise will be on the latest Release Updates (RU)

Because each release brings a bunch of new features, they may come with regressions on existing ones. So, the wise DBA will prefer to wait until the pioneers have encountered them and Oracle has fixed them in Release Updates. He can upgrade to 18.2.0 in April 2018, then apply the RUs as they are released: 18.3.0 in July, 18.4.0 in October, 18.5.0 in January 2019,… He can continue for a few years according to the support lifecycle (MOS 742060.1).

Of course, he doesn’t need to apply the RUs immediately and can choose to lag behind by a few, in order to be sure that more pioneers or eager wise ones have encountered more issues. But then he will also lag on the security fixes, which is not good. And this is where Release Update Revisions come in.

The follower will be on the latest revision (RUR)

When you want to be up to date only on security fixes, with a minimal risk of regression, you will stay on the RUR branch, where the potential issues encountered in the RUs are fixed. This is the same philosophy as applying only the SPU in the previous release model, but without lagging for years on the other fixes. In the new release model, RURs are provided only for the two previous RUs. This extends the lifetime of an RU by 6 months. The follower can apply the latest security fixes, but they will also come with other fixes. However, the risk is lower here because the other fixes included are only those that have been tested for 6 months by other users. This 6-month lag on proactive fixes keeps the risk minimal without running old software. He will upgrade to 18.2.1 in July for a 3-month lag, or wait until October to upgrade to 18.3.2 for a 6-month lag.

Of course, you don’t have to upgrade to 18c immediately or to patch every 3 months. But when you want to patch, you have the choice to be on the latest software to get new features immediately, on the latest RU to get new fixes immediately, or on the latest RUR to let others endure the early problems for a few months. Be aware that RUs contain a bit more than what the Proactive BP included. In an RU you may have some optimizer fixes, but those are disabled by default if there is a risk of plan change. You may also find some minor enhancements. You have the flexibility to switch between RU and RUR. And in the unlikely case that you need a one-off patch, it will be delivered quickly for the latest RU and RUR.

So what?

This new model helps you run software that is not too old, which is more and more mandatory, at least for security reasons. In the end, it is all about testing. Do you prefer to let others test the quarterly upgrades on their databases, in the hope that what they run is similar to what you run on your database? Then be a ‘.1’ or ‘.2’ RUR follower. Or improve and automate your own testing, and you can reduce the lag by being an up-to-date ‘.0’ RU patcher.

 


SQL Server 2017 on RHEL first installation

Mon, 2017-10-02 04:24

At the beginning of last week, Microsoft announced the general availability of SQL Server 2017 for today, the second of October. At the same time, on a more marketing-oriented note, Microsoft announced a SQL Server on Red Hat Enterprise Linux offer; more information here.
It looked like the right time for me to do my first installation of SQL Server 2017 on Linux.

My first concern was to download an ISO of Red Hat Enterprise Linux. I found it here and mapped it to my new Hyper-V virtual machine. I created the virtual machine with small settings: just one vCPU, 4GB RAM and a 50GB disk.
To be honest, for a Windows guy, having to work in a Linux environment is a little bit stressful…
The first step was to install Linux.
I did it with the minimal possible configuration and let the installer automatically configure the needed mount points.
Once the installation was finished, the tricky part for me was to configure my two network cards: one for internal purposes, in order to interconnect with my other VMs, and a second one to connect to the Internet… what a challenge… After some questions to my Oracle colleagues and some tests, I finally managed to set up both cards ;-)

I started the installation with the YUM Red Hat package manager following the Microsoft documentation.
First error: my Red Hat Enterprise Linux system is not registered… In order to register and automatically subscribe, I ran this command with the user id and password I used to create an account before downloading the RHEL ISO:

# subscription-manager register --username <username> --password <password> --auto-attach

After this registration, the first step was to download the Microsoft SQL Server Red Hat repository file into the /etc/yum.repos.d folder and to run a yum update to install the latest available updates… more than 240 for me…
Once done, I was able to execute the following commands to download and install the SQL Server packages:

SQL2017_1

SQL2017_2

During the installation of the packages, I was asked to run the SQL Server configuration setup to finalize the setup of my SQL Server. I did so and needed to accept the license agreement, choose an edition and give a password to the SA login:

SQL2017_3

SQL2017_4

It’s already done!
My SQL Server instance is installed and processes are running:

SQL2017_5

SQL2017_6

At this point, it was already possible to connect to my instance via Management Studio installed in another virtual machine. I used the IP address of my virtual machine (“ip a” to get it on Linux) and my system administrator user and password; the port is the default one, 1433:

SQL2017_10

SQL2017_11

To be able to query my instance directly from my Linux machine, I installed the SQL Server tools by downloading, as before, the Microsoft Red Hat repository configuration file (1 in the picture below) and then installing the tools from the previously downloaded packages (2 in the picture):

SQL2017_7

SQL2017_8

After this installation, I was able to query my instance directly from Linux with the following command:

SQL2017_9

This first step with Microsoft SQL Server 2017 on Red Hat Enterprise Linux was a good experience and also a good way to remember some Linux commands which I hadn’t used for a while…

Before concluding, a small tip if you want to change the port of your SQL Server instance.
As you know, by default SQL Server listens on port 1433. To change it on SQL Server 2017, I used mssql-conf, a configuration script installed with this new version of SQL Server. This script is used to set parameters like default directories, collation, trace flags…
Here I set the port to 48100:

SQL2017_12

I needed to restart my SQL Server service, and afterwards I could run a sqlcmd command against my new port:

SQL2017_13

That’s all, folks!
It was quite fun to play around with the command line, as I learnt to do during my studies many years ago ;-)

 

 

 

 

 

 


SQL Server 2016: New Dynamic Management Views (DMVs)

Fri, 2017-09-29 08:32

In SQL Server 2016, you will discover a lot of new Dynamic Management Views (DMVs).
In this article, I will just give you a little overview of these useful views for us as DBAs.

SQL Server 2012 has 145 DMVs and SQL Server 2014 has 166 DMVs.
Now, SQL Server 2016 has 185 DMVs.
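These numbers are easy to reproduce on each version with a simple count against sys.all_objects (a quick sketch, using the same filter as the query shown further down):

SELECT COUNT(*) AS dmv_count
FROM sys.all_objects
WHERE type = 'V' AND name LIKE 'dm_%';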

How to see it?

It is very easy to have a look using the sys.all_objects view:

SELECT * FROM sys.all_objects WHERE type='V' AND name LIKE 'dm_%' ORDER BY name ASC

DMV_SQL2016

From SQL Server 2012 to SQL Server 2014, we can notice that a lot of the new DMVs came with the In-Memory technology, with names of the form “dm_xtp_xxxxxxxx” or “dm_db_xtp_xxxxxxxx”.

In SQL Server 2016, a lot of new “dm_exec_xxxxxxxx” views are present.

All definitions for these views come from the Microsoft documentation or web site.

To begin, you will see 10 DMVs for the PolyBase technology:

  • dm_exec_compute_node_status
  • dm_exec_dms_workers

A useful MSDN page summarizes all the DMVs for these new views here.

Other dm_exec_xxx views are also quite useful, for example (a sample query follows this list):

  • dm_exec_query_optimizer_memory_gateways
    • Returns the current status of resource semaphores used to throttle concurrent query optimization.
    • Microsoft Reference here
  • dm_exec_session_wait_stats
    • Returns information about all the waits encountered by threads that executed for each session
    • Microsoft Reference here
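A quick sketch of how sys.dm_exec_session_wait_stats can be queried for the current session (column names as documented):

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID
ORDER BY wait_time_ms DESC;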

3 new DMVs for the Columnstore technology (a sample query follows this list):

  • dm_column_store_object_pool
  • dm_db_column_store_row_group_operational_stats
    • Returns current row-level I/O, locking, and access method activity for compressed rowgroups in a columnstore index.
    • Microsoft Reference here
  • dm_db_column_store_row_group_physical_stats
    • Provides current rowgroup-level information about all of the columnstore indexes in the current database
    • Microsoft Reference here
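And a sketch of a typical rowgroup health check with sys.dm_db_column_store_row_group_physical_stats:

SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id, state_desc, total_rows, deleted_rows, size_in_bytes
FROM sys.dm_db_column_store_row_group_physical_stats
ORDER BY table_name, row_group_id;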

2 new DMVs for Stretch Databases in the database context, prefixed with rda (remote data archive):

  • dm_db_rda_migration_status
    • For the current database, list of state information of the remote data archive schema update task.
    • Microsoft Reference here

This list can change if a Service Pack is applied.
It is just meant to give you a little reference about these useful views! 8-)

 


“_suppress_identifiers_on_dupkey” – the SAP workaround for bad design

Fri, 2017-09-29 08:01

In SQL, ‘upsert’ is a conditional insert or update: if the row is there, you update it, but if it is not there, you insert it. In Oracle, you should use a MERGE statement for that. You are clearly doing it wrong if you code something like:

begin
insert...
exception
when dup_val_on_index then update...
end;


But it seems that there are many applications with this bad design, and Oracle has introduced an underscore parameter for them: “_suppress_identifiers_on_dupkey”. You won’t be surprised that this one is part of the long list of parameters required for SAP.

Let’s investigate this.

Insert – Exception – Update

So the idea is to try an insert first, rely on the unique constraint (primary key) to get an exception if the row exists, and in that case update the existing row. There are several flaws with that.

The first problem is that it is not as easy as it looks. If a concurrent session deletes the row between your insert and your update, then the update will fail. You have to manage this. The failed insert cannot leave a lock on the row that was not inserted.

The second problem is that the SQL engine is optimized for transactions that commit. When the ‘dup_val_on_index’ exception occurs, you have already inserted the table row, updated some indexes, etc. And all of that has to be rolled back when the exception occurs. This generates unnecessary contention on the index leaf block, and unnecessary redo.

Then the third problem, and probably the worst one, is that an exception is an error. And error management has a lot of work to do, such as looking into the dictionary for the violated constraint name in order to give you a nice error message.

I’ve created the following table:

create table demo as select * from dual;
create unique index demo on demo(dummy);

And I’ve run 10 million inserts on it, all with duplicates:

exec for i in 1..1e7 loop begin insert into demo values('x'); exception when others then null; end; end loop;

Here are some extracts from the AWR, from manual snapshots taken before and after.

Elapsed: 20.69 (mins)
DB Time: 20.69 (mins)

This has run for 20 minutes.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 33.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 92.31 % Non-Parse CPU: 94.90
Flash Cache Hit %: 0.00

The ‘Execute to Parse %’ shows that 2/3 of the statements are parsed each time.


SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 19-20
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 180,125,740
-> Captured SQL account for 127.7% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.80094E+08 1 1.800942E+08 100.0 1,239.8 99.5 .3 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
1.60094E+08 10,000,000 16.0 88.9 983.1 100.3 .4 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')
 
49,999,995 9,999,999 5.0 27.8 201.1 103.2 0 2skwhauh2cwky
PDB: PDB1
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.u
ser#
 
19,999,998 9,999,999 2.0 11.1 148.5 98.9 0 2jfqzrxhrm93b
PDB: PDB1
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# =
cd.con# and cd.enabled = :1 and c.owner# = u.user#

My failed inserts read on average 16 blocks per attempt. That’s too much for doing nothing. And in addition to that, I see two expensive statements parsed and executed each time: one to get the object name and one to get the constraint name.
This is how the error message is retrieved, which is:

 
ORA-00001: unique constraint (SCOTT.DEMO) violated
 

This is a big waste of resources. I did this test in PL/SQL, but if you accumulate all the worst practices and run those inserts row by row, then you will see these colors:
CaptureBreakReset

The orange is ‘log file sync’ because you generate more redo than necessary.
The green is ‘CPU’ because you read more blocks than necessary.
The red is ‘SQL*Net break/reset to client’ when the server process sends the error.

_suppress_identifiers_on_dupkey

When you set “_suppress_identifiers_on_dupkey” to true, Oracle will not return the name of the constraint which is violated, but only the information which is already there in the session context.

Here is the message that you get:

 
ORA-00001: unique constraint (UNKNOWN.obj#=73375) violated
 

Where 73375 is the OBJECT_ID of the index where the unique constraint exception has been violated.

You have less information, but it is faster:

Elapsed: 15.45 (mins)
DB Time: 15.48 (mins)

There is no Soft Parse overhead:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 96.43
Execute to Parse %: 99.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 90.38 % Non-Parse CPU: 99.95
Flash Cache Hit %: 0.00

Our statement is now the only one using CPU, and it reads fewer blocks:

SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 21-22
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 110,132,467
-> Captured SQL account for 81.8% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.10091E+08 1 1.100906E+08 100.0 926.2 98.8 1 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
90,090,580 10,000,000 9.0 81.8 515.7 99.1 1.9 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')

This parameter is a workaround for bad design, but not a solution.

Update – no rows – Insert

In order to avoid all this rollback and exception management overhead, there is another idea: start with the update and, when no row was found, insert it. This is easy with SQL%ROWCOUNT.

begin
update ...
if SQL%ROWCOUNT = 0 then insert ...

This is more efficient, but still subject to a concurrent session inserting the row between your update and your insert. At least you manage the different scenarios with a condition on SQL%ROWCOUNT rather than with an exception, which is more scalable.
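Here is a minimal runnable sketch of this pattern on the DEMO table used above (the column name DUMMY is an assumption, as the table definition is not shown here):

begin
  -- try the update first
  update DEMO set dummy = 'x' where dummy = 'x';
  -- no row updated: the row does not exist yet, so insert it
  if sql%rowcount = 0 then
    insert into DEMO values ('x');
  end if;
end;
/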

So what?

Always use the database in the expected way. Exceptions and errors are not meant for the normal scenario of a use case; exceptions should be unusual. The solution is to use the MERGE statement, which has been implemented exactly for this reason: do an upsert without the error management overhead, and with statement-level isolation which prevents errors in a multi-user environment.
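As an illustration of the MERGE approach, here is a sketch on a hypothetical key/value table (DEMO_KV is not part of the test case above, it is only here to show the syntax):

-- hypothetical table: create table DEMO_KV (k varchar2(10) primary key, val number);
merge into DEMO_KV d
using (select 'x' as k, 1 as val from dual) s
on (d.k = s.k)
when matched then
  update set d.val = s.val
when not matched then
  insert (k, val) values (s.k, s.val);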

 

Cet article “_suppress_identifiers_on_dupkey” – the SAP workaround for bad design est apparu en premier sur Blog dbi services.

Oracle Big Data Cloud Service – Compute Edition

Fri, 2017-09-29 03:00

In this blog post, we will see how to create a Big Data cluster through the Oracle Cloud Services. If you want more details about the Oracle Big Data cloud services offering, you can refer to my previous blog Introduction to Oracle Big Data.

First, you need to create your trial account through the following link: https://cloud.oracle.com/tryit. Note that, when you create your trial account, all information (phone number, address, credit card…) must be from the same country. Otherwise, you will get an error message.

Then you will get an email from Oracle with your connection information. The 4 main pieces of connection information are:

During the first connection, you need to change your password and answer 3 secret questions.

You are now logged into the Oracle Cloud Services Dashboard. Select the “Big Data – Compute Edition” service to create your cluster.

BDCS-CE Dashboard

Click on “Service” and “Create Service”.

BDCS-CE Create Cluster

First, complete the service information (cluster name, description…) and click on “Next”.

BDCS-CE Cluster creation

Then, you enter the details of your Big Data cluster (configuration, credentials, storage…).

Cluster configuration:

Use the “full” deployment. It will provision a cluster with Spark, MapReduce, Zeppelin, Hive, Spark Thrift, Big Data File System.

Credentials:

Generate an SSH public key and insert it (see screenshot below). Update or keep the current administrative user / password, which is very important for the next operations.
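If you do not have a key pair yet, it can be generated on any Linux box like this (the file name is just an example); the content of the .pub file is what you paste into the wizard:

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/bdcsce_key
$ cat ~/.ssh/bdcsce_key.pub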

Storage:

Oracle Public Cloud works with object storage containers, which means that a storage container can be used by all cloud services. For the Big Data service, you need to use an existing storage container or create one. The storage container name must follow a specific syntax:

https://<identity_domain>.storage.oraclecloud.com/v1/Storage-<identity_domain>/<container_name>

Example: https://axxxxxx.storage.oraclecloud.com/v1/Storage-axxxxxx/dbistorage
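If you prefer to create the container beforehand, this can be done through the Swift-compatible REST API of the storage service. The calls below are only a sketch (identity domain, user, password and token are placeholders, and the exact endpoint may differ for your account):

# request an authentication token
$ curl -s -i -H "X-Storage-User: Storage-axxxxxx:first.last@example.com" \
             -H "X-Storage-Pass: MyPassword" \
             https://axxxxxx.storage.oraclecloud.com/auth/v1.0

# create the container, using the X-Auth-Token returned by the previous call
$ curl -s -i -X PUT -H "X-Auth-Token: <token>" \
             https://axxxxxx.storage.oraclecloud.com/v1/Storage-axxxxxx/dbistorage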

You can find the complete configuration below.

BDCS-CE Configuration Overview

Confirm your cluster configuration and click on “Next”.

During the cluster deployment, you can take the time to read the documentation: https://docs.oracle.com/en/cloud/paas/big-data-cloud/index.html

Once your service has been deployed, you can use the Big Data Cluster Console to monitor and access your cluster.

BDCS-CE Cluster Overview

 

BDCS-CE Cluster Console

OBDCS-CE Monitoring

You have now deployed a Big Data cluster composed of 3 nodes, based on the Hortonworks distribution, with the following tools:

  • HDFS = Hadoop Distributed FileSystem
  • YARN = Resources management for the cluster
  • Hive = Data Warehouse for managing large data sets using SQL
  • Spark = Data processing framework
  • Pig = High-level platform for creating programs that run on Hadoop
  • ZooKeeper = Distributed coordination service for the cluster
  • Zeppelin = Web-based data scientist workbench
  • Alluxio = Memory-speed virtual distributed storage
  • Tez = Framework for YARN-based data processing applications in Hadoop

Your Oracle Big Data cluster, deployed through Oracle Big Data Cloud Service – Compute Edition, is now ready to use.

Enjoy ;-)

 

Cet article Oracle Big Data Cloud Service – Compute Edition est apparu en premier sur Blog dbi services.

Introduction to Oracle Big Data Services

Fri, 2017-09-29 01:00

A few years ago, Oracle decided to move forward in the Big Data area, like its main competitors. The goal of this blog post is to explain how the Oracle Big Data offering is composed.

As the Oracle Big Data offering is continuously improving, I’m always open to your feedback :-)

Oracle Big Data offering is split in 2 parts:

  • On-Premise
  • Public Cloud

Note: It’s important to know that the 2 main Big Data distributions on the market are Cloudera and Hortonworks. We will see later how Oracle positions itself with these 2 distributions.

On-premise: Oracle Big Data Appliance:

The main product of the Oracle Big Data offering is the Oracle Big Data Appliance. OBDA is an engineered system based on the Cloudera distribution. The Big Data Appliance offers you an easy-to-deploy solution, with Cloudera Manager for managing a Big Data cluster, including a complete ready-to-use Hadoop ecosystem.

Oracle Big Data Appliance starts with a “Starter” rack of 6 nodes for a storage capacity of 96TB. Below is the detailed configuration per node.

Oracle X6-2 server:

  • 2 × 22-Core Intel ® Xeon ® E5 Processors
  • 64GB Memory
  • 96TB disk space

Oracle Big Data Appliance is a combination of open source software and proprietary software from Oracle (e.g. Oracle Big Data SQL). Below is a high-level overview of the Big Data Appliance software.

Screen Shot 2017-09-27 at 08.25.45

Oracle Big Data Cloud Machine:

On the customer side, Oracle offers the Oracle Big Data Cloud Machine (BDCM), a PaaS service (Platform as a Service) fully managed by Oracle but running on the customer’s infrastructure and designed to provide the Big Data Cloud Service on-premises. In other words, the BDCM is a Big Data Appliance managed and operated by Oracle in the customer’s data center.

The Big Data Cloud Machine starts with a “Starter Pack” of 3 nodes. Below is the minimal configuration:

  • 3 nodes
  • 32 OCPUs per node
  • 256GB RAM per node
  • 48TB disk space per node

Oracle Big Data Cloud Machine pricing: https://cloud.oracle.com/en_US/big-data/cloudmachine/pricing

Oracle Public Cloud:

Oracle provides several deployment and services for Big Data:

  • Oracle Big Data Cloud Services
  • Oracle Big Data Cloud Services – Compute Edition
  • Event Hub Cloud Services (Kafka as a Service)
  • Oracle Big Data SQL Cloud Service

Oracle public cloud services, including Big Data, are available with two payment methods: metered and non-metered.

  • Metered: You are charged based on the actual usage of the service resources:
    • OCPU/hour
    • Environment/hour
    • Host/hour
    • For the storage : GB or TB/month
  • Non-metered: Monthly or annual subscription for a service, independent of the actual resource usage. Charging is performed monthly.

For more information you can refer to the following links:

https://blogs.oracle.com/pshuff/metered-vs-un-metered-vs-dedicated-services

Oracle Big Data Cloud Services:

OBDCS is a dedicated Big Data Appliance in the public cloud: an engineered system managed and pre-configured by Oracle. OBDCS is a large system from the start, with terabytes of storage.

The offering starts with a “Starter pack” of 3 nodes, including:

  • Platform as a Service
  • 2 payments methods: metered and non-metered
  • SSH connection to cluster nodes
  • Cloudera’s Distribution including Apache Hadoop, Enterprise Data Hub Edition
  • Oracle Big Data Connectors
  • Oracle Copy to Hadoop
  • Oracle Big Data Spatial and Graph

The entry cost is quite high, which is why this service is recommended for large and more mature business cases.

Pricing information: https://cloud.oracle.com/en_US/big-data/big-data/pricing

Oracle Big Data Cloud Services – Compute Edition:

OBDCS-CE provides you with a dedicated Hadoop cluster based on the Hortonworks distribution. The entry cost is lower than for Oracle Big Data Cloud Service, which is why this service is more suitable for small business use cases and proofs of concept.

OBDCS-CE offering details:

  • Platform as a Service
  • 2 payments methods: metered and non-metered
  • Apache Hadoop cluster based on Hortonworks distribution
  • Free choice of the number of nodes for the deployment – 3 nodes is the minimum for a High Availability cluster, which is recommended for production. You can actually have one-node clusters, but this is obviously not recommended.
  • Apache Zeppelin for Hive and Spark analytics
  • 3 access methods:
    • BDCS-CE console (GUI)
    • REST API
    • SSH

Pricing information: https://cloud.oracle.com/en_US/big-data-cloud/pricing

Summary

  • On-Premise (customer side):
    • Engineered systems: Big Data Appliance (BDA), Big Data Cloud Machine (a BDA managed by Oracle)
    • PaaS: Oracle Cloud Machine (OCM) + BDCS – Compute Edition
  • Oracle Public Cloud:
    • Engineered systems: Big Data Cloud Service (BDCS) – a BDA in the Oracle public cloud – Cloudera distribution
    • PaaS: Big Data Cloud Service – Compute Edition – Hortonworks distribution

More details about Oracle PaaS offering:

http://www.oracle.com/us/corporate/contracts/paas-iaas-public-cloud-2140609.pdf

I hope this blog will help you to better understand the Oracle Big Data offering and products.

 

Cet article Introduction to Oracle Big Data Services est apparu en premier sur Blog dbi services.

Announcing the dbi OpenDB Appliance

Thu, 2017-09-28 07:04

As already announced on Twitter and LinkedIn, here is the blog post describing our OpenDB appliance in more detail. I am sure you wonder what this is about, so let me explain why we are doing this. What we see day by day at our customers is that more and more databases get consolidated onto a VMWare deployment. This is not only true for the smaller ones of those but also for the critical, potentially much bigger ones. What makes it complicated, especially for smaller companies that do not necessarily have the know-how for the specific database, is that you need to apply the best practices not only to the database deployment but also to the operating system and the VMWare deployment. But even if you have this already in place: do you know how to deploy the PostgreSQL binaries, how to set up a PostgreSQL instance, how to monitor it, and how to back up and restore all that? Do you know how to do this with MySQL/MariaDB, MongoDB, Cassandra? If your answer to this is no but you need to have a PostgreSQL/MySQL/MariaDB/MongoDB/Cassandra instance ready quite fast, then the dbi OpenDB Appliance might be the solution for you. Let’s dig into some details.

OpenDB-logo

A typical use case: you are forced to support an application which is running on a specific database. What do you do? Quickly set up a Linux VM, download the installer, click next, next, next, hopefully make the application connect to what you just installed, and then cross your fingers and hope that nothing ever goes wrong? You laugh? There are deployments out there which got set up in exactly this way. Another option would be to hire someone who is experienced in that area. This will not help you either, as you’d need at least two people (because people tend to want to go on holidays from time to time). The next option would be to work together with external consultants, which will probably work as long as you work with the right ones. Completely outsourcing the stuff is another option (or even going to the cloud), if you want to do that. With the dbi OpenDB Appliance you get another option: we deliver a fully pre-configured VMWare-based virtual machine image which you can easily plug into your existing VMWare landscape. Can that work? Let me explain what you would get:

As said just before, you get an image which you can import into your VMWare ESX. I said this image is pre-configured, so what does that mean? Well, when you start it up it boots into a CentOS 7.3 x64 Linux operating system. No magic, I know :) Additionally you’ll get four pre-configured disks with the following file system layout:

/       15GB    The Linux operating system
/boot	1GB	The boot images (kernels)
/u01	50GB	All files belonging to the OpenDB itself
                All required DMK packages
                All source files (PostgreSQL, MariaDB, MongoDB, Cassandra)
                The Linux yum repositories
                The HOMEs of all product installations
                The admin directories for the initialized products
/u02	10GB	The data files belonging to the initialized products
/u03	10GB	The redo/wal files belonging to the initialized products
/u04	10GB	Backups

You are not supposed to touch the root, /boot and /u01 partitions, but of course you will be able to resize /u02 to /u04. The 10GB provided initially are just meant as a minimum setup. Resize your VMWare disk images (vmdks) and the dbi OpenDB command line utility will resize the file systems as well, with just a single call. At this point you probably wonder what the dbi OpenDB command line utility is about. In short, it is a wrapper around our various DMK packages. Using one of the various DMK packages you can deploy and monitor databases even today. The command line utility makes use of that and wraps around the various DMKs. The interface is menu driven, to make it as easy as possible for you, and helps you with initializing the appliance (setting the hostname, network configuration and disk resizing). In addition, you can install the products we support and create database instances on top of that without knowing the details. We take care of implementing the best practices in the background (kernel settings, file system layout, initialization parameters, …). But that is not all: we’ll go a step further and implement monitoring, alerting and backup procedures as well. The idea is that you really do not need to take care of such things: it just comes when you set up a product.

To give you an idea, you’ll get something like this when you fire up the command line utility:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit):                       =
=                                                                                            =
=      1. Deploy a database home                                                             =
=      2. List the deployed database homes                                                   =
=      3. Setup a database instance                                                          =
=      4. List the deployed database instances                                               =
=                                                                                            =
=     10. Stop and remove a database instance                                                =
=     11. Remove a database home                                                             =
=                                                                                            =
=                                                                                            =
=     99. Initialize the appliance                                                           =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

You would start with “Initialize the appliance” to set your preferred host name, initialize the network and provide the monitoring credentials. Once done, you can go on and start deploying product homes (e.g. PostgreSQL) and instances on top of them. Of course you can deploy multiple instances on the same home, and you can install several homes of the same product version.

What do we mean by a “product”? A product is what we support with a specific release of the appliance. Initially this probably will be:

  • PostgreSQL 9.6.5
  • PostgreSQL 9.5.9

So the menu would offer you something like this for deploying the binaries:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     000 - PostgreSQL 9.6.5                                                                 =
=     001 - PostgreSQL 9.5.9                                                                 =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

Once you have deployed the homes you require, you can list them:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     The following homes are available for deploying instances on:                          =
=                                                                                            =
=                                                                                            =
=     pg965:/u01/app/opendb/product/PG96/db_5/:dummy:9999:D                                  =
=     PG959:/u01/app/opendb/product/PG95/db_9/:dummy:9999:D                                  =
=     PG959_1:/u01/app/opendb/product/PG95/db_9_0:dummy:9999:D                               =
=     PG965_1:/u01/app/opendb/product/PG96/db_5_0:dummy:9999:D                               =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

Here you can see that you can have multiple homes of the same release (two for PostgreSQL 9.6.5 and two for PostgreSQL 9.5.9 in this case). The path and naming for a home follow our best practices and are generated automatically. Having the homes in place, you can start deploying your instances:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     Please specify an alias for your new instance                                          =
=       The alias needs to be at least 4 characters                                          =
=       The alias needs to be at most  8 characters                                          =
=                                                                                            =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: MYINST1 

What happens in the background is that the PostgreSQL cluster is initialized, started and added to the autostart configuration (systemd), so that the instance is properly shut down when the appliance is stopped and comes up again when the appliance is started (a sketch of such a unit file is shown after the listing below). Listing the deployed instances is possible, too, of course:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     The following instances are currently deployed:                                        =
=                                                                                            =
=                                                                                            =
=     MYINST1:/u01/app/opendb/product/PG96/db_5/:/u02/opendb/pgdata/MYINST1:5432:Y           =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 
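Regarding the systemd integration mentioned above: the unit file generated by the appliance is not shown here, but a minimal sketch of what such a unit could look like for MYINST1 would be (paths are taken from the listings above, everything else is illustrative):

# illustration only - not the unit actually generated by the appliance
[Unit]
Description=PostgreSQL instance MYINST1
After=network.target

[Service]
Type=forking
User=opendb
Environment=PGDATA=/u02/opendb/pgdata/MYINST1
ExecStart=/u01/app/opendb/product/PG96/db_5/bin/pg_ctl start -D ${PGDATA} -w -t 300
ExecStop=/u01/app/opendb/product/PG96/db_5/bin/pg_ctl stop -D ${PGDATA} -m fast
ExecReload=/u01/app/opendb/product/PG96/db_5/bin/pg_ctl reload -D ${PGDATA}
TimeoutSec=300

[Install]
WantedBy=multi-user.target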

The cronjobs for monitoring, alerting and backup have been created as well:

[opendb@opendb ~]$ crontab -l
00 01 * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-pg-dump.sh -s MYINST1 -t /u04/opendb/pgdata/MYINST1/dumps >/dev/null 2>&1
58 00 * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-pg-badger-reports.sh -s MYINST1 >/dev/null 2>&1
*/10 * * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-check-postgres.sh -s MYINST1 -m  >/dev/null 2>&1

With every new release/update of the appliance we plan to include more products, such as MariaDB/MongoDB/Cassandra, provide patch sets for the existing ones and update the Linux operating system. Updates will be delivered as tarballs and the command line utility will take care of the rest; you do not need to worry about that. You can expect updates twice a year.

To visualize this:
OpenDB-big-picture

/u02 will hold all the files that contain your user data. /u03 is there for redo/wal/binlog where required, and /u04 is for holding the backups. This is fixed and must not be changed. Independently of which product you choose to deploy, you’ll get a combination of pcp (Performance Co-Pilot) and Vector to do real-time performance monitoring of the appliance (of course configured automatically).
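As an illustration only (the appliance configures this automatically, so this is not its exact setup), Performance Co-Pilot can be enabled on a plain CentOS 7 box roughly like this (package and service names may vary with the PCP version):

$ sudo yum install -y pcp
$ sudo systemctl enable pmcd pmlogger
$ sudo systemctl start pmcd pmlogger
# Vector then connects to the PCP web API on the monitored host (an additional PCP package)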

Alerting will be done by a combination of third-party (open source) projects and DMK. The tools we’ll use for PostgreSQL will be check_postgres and pgBadger, for example. For the other products, we’ll announce what we will use when they are included in a future release.

In addition to the VMWare template, you can also get the appliance in the Hidora Cloud as a pay-as-you-go service (although that is not fully ready yet).

If you have any questions, just send us an email at: opendb[at]dbi-services[dot]com

 

Cet article Announcing the dbi OpenDB Appliance est apparu en premier sur Blog dbi services.

Searching wikipedia from the command line

Thu, 2017-09-28 06:41

Wouldn’t it be nice if you could search Wikipedia from the command line? I often need to quickly look up a definition or want to know more about a specific topic when I am working on the command line. So here is how you can do it …

What you need is npm and wikit. On my debian based system I can install both with:

$ sudo apt-get install npm
$ sudo npm install wikit -g
$ sudo ln -s /usr/bin/nodejs /usr/bin/node

The link is to avoid the following issue:

$ wikit postgresql
/usr/bin/env: ‘node’: No such file or directory

For Fedora/RedHat/CentOS, use yum:

$ sudo yum install npm -y
$ sudo npm install wikit -g

Once you have that, you can use wikit to query Wikipedia (it returns the summary of the article):

$ wikit postgresql
 PostgreSQL, often simply Postgres, is an object-relational database management system
 (ORDBMS) with an emphasis on extensibility and standards compliance. As a database
 server, its primary functions are to store data securely and return that data in
 response to requests from other software applications. It can handle workloads ranging
 from small single-machine applications to large Internet-facing applications (or
 for data warehousing) with many concurrent users; on macOS Server, PostgreSQL is
 the default database; and it is also available for Microsoft Windows and Linux (supplied
 in most distributions). PostgreSQL is ACID-compliant and transactional. PostgreSQL
 has updatable views and materialized views, triggers, foreign keys; supports functions
 and stored procedures, and other expandability. PostgreSQL is developed by the PostgreSQL
 Global Development Group, a diverse group of many companies and individual contributors.
 It is free and open-source, released under the terms of the PostgreSQL License,
 a permissive software license.

Cool. When you want to read the output in your default browser instead of the console, you can do this as well by adding the “-b” flag:

$ wikit postgresql -b

When you want to open the “disambiguation” page in your browser:

$ wikit postgresql -d

Selection_013

Changing the language is possible as well with the “--lang” switch:

$ wikit --lang de postgresql 
 PostgreSQL (englisch [,pəʊstgɹɛs kjʊ'ɛl]), oft kurz Postgres genannt, ist ein freies,
 objektrelationales Datenbankmanagementsystem (ORDBMS). Seine Entwicklung begann
 in den 1980er Jahren, seit 1997 wird die Software von einer Open-Source-Community
 weiterentwickelt. PostgreSQL ist weitgehend konform mit dem SQL-Standard ANSI-SQL
 2008, d.h. der Großteil der Funktionen ist verfügbar und verhält sich wie definiert.
 PostgreSQL ist vollständig ACID-konform (inklusive der Data Definition Language),
 und unterstützt erweiterbare Datentypen, Operatoren, Funktionen und Aggregate. Obwohl
 sich die Entwicklergemeinde sehr eng an den SQL-Standard hält, gibt es dennoch eine
 Reihe von PostgreSQL-spezifischen Funktionalitäten, wobei in der Dokumentation bei
 jeder Eigenschaft ein Hinweis erfolgt, ob dies dem SQL-Standard entspricht, oder
 ob es sich um eine spezifische Erweiterung handelt. Darüber hinaus verfügt PostgreSQL
 über ein umfangreiches Angebot an Erweiterungen durch Dritthersteller, wie z.B.
 PostGIS zur Verwaltung von Geo-Daten. PostgreSQL ist in den meisten Linux-Distributionen
 enthalten. Apple liefert ab der Version Mac OS X Lion (10.7) PostgreSQL als Standarddatenbank

Quite helpful …

 

Cet article Searching wikipedia from the command line est apparu en premier sur Blog dbi services.

When deterministic function is not

Tue, 2017-09-26 14:47

When you declare a function-based index, the function must be deterministic, which means that calling it with the same arguments will always return the same result. This is required because the indexed values, which are the result of the function, are stored in the index. But what happens if you declare a function deterministic when it is not?

I wanted to do this test after reading the following documents about Postgres HOT and WARM.

The relevant read is https://t.co/rCzqdz9RyJ and https://t.co/rFis1lqlFf

— Vladimir Sitnikov (@VladimirSitnikv) August 2, 2017

They say that they cannot vacuum one page at a time because index entries must be cleaned, and there’s a risk when trying to find an index entry from the table in case a user indexed a function which is not actually deterministic. This could lead to logical corruption. So, it seems that Postgres will always navigate from the index to the table and not the opposite. And that is possible in Postgres because they don’t implement DELETE and UPDATE physically. They only do an INSERT with the new version of the whole row and mark the old version as stale.

But Oracle is far more complex than that. Critical OLTP applications must be able to update in-place, without row movement, or the index maintenance would kill the performance and the redo generation would be orders of magnitude larger. An update is done in-place, and the index related to the updated column must be maintained. And deletes will also delete all the index entries. So, Oracle needs to navigate from the table to the index. This is done with a lookup of the value in the index structure. The value is either a value stored in the table row, or derived with a deterministic function.

So what happens if I declare a function deterministic when it is not?

Here is a table:

SQL> create table DEMO (n not null) pctfree 99 as select rownum from xmltable('1 to 5');
 
Table created.

And here is a function which returns a random number. But I declare it deterministic:

SQL> create or replace function DEMO_FUNCTION(n number) return number deterministic as
2 begin
3 return dbms_random.value;
4 end;
5 /
 
Function created.

I declare an index on it:

SQL> create index DEMO_FUNCTION on DEMO(DEMO_FUNCTION(n));
 
Index created.

Oracle cannot verify if the function is deterministic or not, and trusts me.

A full table scan re-calculates the value each time, and does not raise any error.

SQL> select /*+ full(DEMO) */ DEMO_FUNCTION(n),rowid,n from DEMO where DEMO_FUNCTION(n) is not null;
 
DEMO_FUNCTION(N) ROWID N
---------------- ------------------ ----------
.743393494 AAAR5kAAMAAABXbAAA 1
.075404174 AAAR5kAAMAAABXbAAB 2
.601606733 AAAR5kAAMAAABXbAAC 3
.716335239 AAAR5kAAMAAABXbAAD 4
.253810651 AAAR5kAAMAAABXbAAE 5

If you run it several times, you will see different values.

An index access will always show the same values because they come from the index:

SQL> select /*+ index(DEMO) */ DEMO_FUNCTION(n),rowid,n from DEMO where DEMO_FUNCTION(n) is not null;
 
DEMO_FUNCTION(N) ROWID N
---------------- ------------------ ----------
.135108581 AAAR5kAAMAAABXbAAE 5
.440540027 AAAR5kAAMAAABXbAAD 4
.480565266 AAAR5kAAMAAABXbAAA 1
.546056579 AAAR5kAAMAAABXbAAB 2
.713949559 AAAR5kAAMAAABXbAAC 3

Oracle could have run the function on the value from the table, compared it with the value from the index, and then raised an error. But that would be more expensive.

But then, what happens if I delete a row? Oracle will try to find the index entry by running the function, but then the value is not found in the index:

SQL> delete from DEMO where n=3 and DEMO_FUNCTION(n) is not null;
delete from DEMO where n=3 and DEMO_FUNCTION(n) is not null
*
ERROR at line 1:
ORA-08102: index key not found, obj# 73317, file 12, block 5603 (2)

This is a logical corruption caused by the bug in the function, which was declared deterministic but is not. Verifying that a function really is deterministic would require running it several times, and even that would not detect values that change only after days. It is the developer’s responsibility to tell the truth. This was just a test. If you are in this situation, make the index unusable and fix the function before rebuilding it.
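That last step could look like this (a sketch on the demo index above):

SQL> alter index DEMO_FUNCTION unusable;
-- fix DEMO_FUNCTION so that it really is deterministic, then:
SQL> alter index DEMO_FUNCTION rebuild;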

 

Cet article When deterministic function is not est apparu en premier sur Blog dbi services.

Using WebLogic 12C RESTful management for monitoring WebLogic Domains

Tue, 2017-09-26 01:08

WebLogic 12.2.1 provides a new RESTful management interface with full access to all WebLogic Server resources. This interface offers an alternative to WLST scripting or JMX development for the management and monitoring of WebLogic domains.

The rest of this blog provides a few examples demonstrating the simplicity and efficiency of these REST requests.

WebLogic Server State
When checking a WebLogic Server state, we are interested in the state and in the “ActivationTime”, which shows the last time the server was started. A low value can indicate that the server was restarted recently, possibly automatically by the node manager after a crash.
The following RESTful URL requests the state of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1?fields=state,activationTime&links=none

Output:

{
"state": "RUNNING",
"activationTime": 1470982524188
}
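The same request can be issued from any HTTP client, for example with curl (a sketch: the credentials are placeholders and -k is only there because of a self-signed demo certificate):

$ curl -s -k --user monitor:password \
       -H "Accept: application/json" \
       "https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1?fields=state,activationTime&links=none"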

The following RESTful URL requests the state of all WebLogic Servers in a domain:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes?fields=name,state,activationTime&links=none

Output:

{
"name": "AdminServer",
"state": "RUNNING",
"activationTime": 1473234973467
},
{
"name": "server2",
"state": "RUNNING",
"activationTime": 1473235510682
},
{
"name": "server1",
"state": "RUNNING",
"activationTime": 1473235506258
}

Get JVM Key Metrics
Monitoring the memory usage provides a good view of how the application behaves regarding memory consumption. Even if memory management is a JVM task, the WebLogic Server MBeans can be queried for some heap usage information.
The following RESTful URL requests the JVM key metrics of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/JVMRuntime?links=none&fields=heapSizeCurrent,heapFreeCurrent,heapFreePercent,heapSizeMax

Output:

{
"heapSizeCurrent": 259588096,
"heapFreeCurrent": 101962840,
"heapSizeMax": 518979584,
"heapFreePercent": 72
}

Get WebLogic Threads key metrics
When WebLogic opens up too many threads to service the load, there is a decrease in performance due to the resource (CPU, memory) usage.

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/threadPoolRuntime?links=none&fields=executeThreadIdleCount,hoggingThreadCount,pendingUserRequestCount,completedRequestCount,throughput,healthState

Output:

{
"executeThreadIdleCount": 0,
"completedRequestCount": 4652,
"pendingUserRequestCount": 0,
"throughput": 1.999000499750125,
"hoggingThreadCount": 0,
"healthState": {
"state": "ok",
"subsystemName": null,
"partitionName": null,
"symptoms": []
}
}

Get JDBC Runtime Metrics
“Active Connections Current Count”, “Active Connections High Count” and “Waiting for Connection High Count” allow you to validate that you have the correct amount of resources available to service the clients’ needs; they are also helpful to determine if you need to increase or decrease the pool size. “Wait Seconds High Count”, “Waiting for Connection Failure Total” and “Connection Delay Time” can be used to determine the database responsiveness and how the clients are impacted by the connection pool size.

https://host01:7002/management/weblogic/12.2.1.0/domainRuntime/serverRuntimes/server1/JDBCServiceRuntime/JDBCDataSourceRuntimeMBeans/myDS?links=none&fields=name,activeConnectionsCurrentCount,activeConnectionsHighCount,waitingForConnectionHighCount,waitSecondsHighCount,waitingForConnectionFailure

Output:

{
"name": “myDS",
"state": "Running",
"activeConnectionsCurrentCount": 4,
"activeConnectionsHighCount": 8,
"waitingForConnectionFailureTotal": 0,
"waitingForConnectionHighCount": 0,
"waitSecondsHighCount": 0,
}

Get Application Runtime Metrics
We can fetch some useful information, like how many sessions have been opened on the application in total and how many are currently open.

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/applicationRuntimes/SimpleAuctionWebAppDb/componentRuntimes?fields=openSessionsCurrentCount,sessionsOpenedTotalCount,openSessionsHighCount&links=none

Output:

{
"openSessionsCurrentCount": 12,
"sessionsOpenedTotalCount": 255,
"openSessionsHighCount": 15
}

The WebLogic REST management interface allows direct access to the WebLogic MBeans without any additional resource cost. Monitoring tools can only benefit from this WebLogic RESTful interface. As the WebLogic RESTful requests used for monitoring are simple, the Java plugin for the dbi monitoring has been developed in two days.

 

Cet article Using WebLogic 12C RESTful management for monitoring WebLogic Domains est apparu en premier sur Blog dbi services.

Am I a DBA 3.0 or just an SQL*DBA?

Sun, 2017-09-24 13:47

There are currently a lot of new buzzwords and re-namings which suggest that our DBA role is changing, most of them escorted by a #cloud hashtag. Oracle Technology Network is now called Oracle Developer Community. Larry Ellison announced the database that does not need to be operated by humans. And people are talking about the death of the DBA, about the future of the DBA, about DBA 3.0,…

Those are all generalizations and universal statements, and I don’t like generalizations. There is not only one type of DBA role. The DBA role in big US companies (where most of those claims come from) is very different from the DBA role in medium-sized European companies (where I do most of my job). The only thing I like about generalizations is that a simple counterexample is sufficient to prove that the universal statement is wrong. And I’ll take the example I know best: mine.

CaptureEM
What do you call the DBA role? Is it only server management, or database management? Are you a Dev DBA or an Ops DBA? And when you go multitenant, will you become a CDB DBA or a PDB DBA? And on Engineered Systems, are you still a Database Administrator or a Database Machine Administrator?

So here is my experience. Let’s flashback to sysdate-8000.
6580a20eb9faaba67ff143dcd7bfcbdc
My first job with an Oracle Database was in 1994. I was working at Alcatel in Paris. The IT was typical of big companies at that time: all IBM, databases were DB2 on mainframe, and applications were developed through a multi-year waterfall cycle from specifications to production. But I was not working there. I was in the accounting department, which had its own little IT with some small Borland Paradox applications. This IT department was one senior person, from whom I learned everything, and me, a junior developer doing some Borland Paradox things. When the need for a new application came up, the idea was to build a prototype in this ‘rebel’ IT service rather than wait for the whole development cycle managed by the official IT department.

mainpict
We asked SUN to lend us a workstation. We asked Oracle to lend us an Oracle 7.1 database. That was not difficult: both of them were happy to try to get into this all-IBM company by another way. And Borland had a new product, Delphi, so this is what I used to build the prototype. I had everything to learn there: I had never installed a Unix system, I had never installed a database, I had never even configured a TCP/IP network. But with the books (no internet then) and the help of my manager (did I say I owe him everything?) we got the environment ready within one month.

aba590f96f7b8138deb71fe28526fb93
Today we are talking about Cloud PaaS as the only way to quickly get an environment to start a new development project. The marketing explains that you can get the environment with a few clicks, operational one hour later. But in real life, I know several projects where the environment is not ready after one month, for different reasons (time to choose which service, evaluate the cost, set it up). Remember that in my case, 23 years ago, it took one or two months, but I had a full server, enough storage, available 24/7, with the workstation on my desk. And all that for free.

CaptureDelphi
So I started to develop the application. The business users were in the next room. A short meeting, a small doc, and the design of the first prototype was ready. Development with Delphi was really quick (remember RAD – Rapid Application Development?) and as soon as I had a usable prototype, one user had access to it, giving me feedback for future evolutions. We built something very clever: easy to evolve, fitting the business needs, with good performance, and easy to deploy. It was replaced years later by an application provided by the IT department, but our application was used as the specification.

So, what was my role here? I was clearly a developer and not a DBA. But I was already designing a system, installing a server, creating a database, modeling the data and analyzing performance. When interacting with the database, I was just a DBA doing some SQL. If I wanted to invent new words myself, I would call that an SQL*DBA, like the name of the tool that was being replaced at that time by svrmgrl for the DBA stuff related to server management. All that was consolidated into the same tools later: sqlplus does the DBA and developer stuff, Enterprise Manager does both, SQL Developer does both…

During the 20 years that followed, I evolved into a DBA. I’ve learned new technologies each time, but I don’t think that my DBA role has changed. I’ve done BI (called ‘infocenter’ at that time and ‘datawarehouse’ later) with Business Objects, DB2 and Data Propagator (yes, logical replication, and that was in 1996). All this was designed in cooperation with the business end-users. In 2001 I managed terabyte databases full of numbers, doing what we call Big Data today. All maintenance tasks (capacity planning, upgrade, recovery) were done very close to the business utilization of this data. I administered telecom databases at a time when mobile phone providers came up with a new idea to implement every month. We would call that ‘agile’ today. I’ve set up databases for quick cloning with transportable tablespaces. I’ve set up continuous integration tests of databases based on flashback technologies and Workspace Manager. I’ve configured parallel query and XMLDB to do analytics on unstructured data, with better results than a MapReduce PoC. Technology evolves, names are modernized, but my DBA role is the same and I still use SQL.

The latest thing I’ve read about this changing role is Penny Avril’s interview, and it is a very good explanation of all the announced changes. I totally agree with all of that, except that I see no change in my role there. Let’s take this: DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.
I have always known which business users and which business cases are interacting with the database. My resume has always mentioned the business area of each project. I’ve always interacted with end-users and developers for any database I administered, whether they are DEV, TEST or PROD databases. You cannot set up a backup/recovery plan without knowing the data and the business requirements. You cannot upgrade or migrate without interacting with users to test and validate the migration. You cannot address performance without interacting with users and developers. You cannot size the SGA without knowing how the data is used, at the different times of the day or the week. You cannot keep a database healthy without knowing how it is used. You cannot build an infrastructure architecture without the support of the business for the software costs.

My DBA job is not mechanics to keep processes running on a server. That would be a Database System Administrator. But we are talking about a DataBase Administrator. The major part of my job, and the main reason why I like it, is the human interaction around the database. You talk to server/storage/network administrators, you talk to all kinds of business users, you talk to developers, you talk to managers, you talk to vendors,… You have to understand OS schedulers, network security and mutexes, and you also have to understand banking, retail and hospital data workflows. So I don’t worry about the self-driven/no-human-labor part of the DBA role that may be moved to the cloud provider. Those are boring things that we already automated a long time ago. For example, at dbi services we have included all this automation into the DMK. And this goes further for open source databases with the OpenDB Appliance. Do you think a consulting company would provide this for free to its customers if this automation took the whole DBA job away? The boring and recurring things are automated to avoid errors, and all the intelligent stuff is provided by experienced human DBAs talking SQL with the system. As always.

 

Cet article Am I a DBA 3.0 or just an SQL*DBA? est apparu en premier sur Blog dbi services.

Wrong result with multitenant, dba_constraints and current_schema

Sat, 2017-09-23 15:03

Multitenant architecture is not such a big change, and this is why I recommend it when you start a project in 12c or if you upgrade to 12.2 – of course after thoroughly testing your application. However, there is a point where you may encounter problems with dictionary queries, because internally it is really a big change. The dictionary separation has several side effects. You should carefully test the queries you run on the dictionary views to get metadata. Here is an example of a bug I recently encountered.

This happened with a combination of things you should not do very often, and not in a critical use case: querying the dictionary for constraints owned by your current schema, when it is different from the user you connect with.

I create two users: USER1 and USER2
SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> grant dba to USER1 identified by USER1 container=current;
Grant succeeded.
SQL> grant dba to USER2 identified by USER2 container=current;
Grant succeeded.

USER1 owns a table which has a constraint:

SQL> connect USER1/USER1@//localhost/PDB1
Connected.
SQL> create table DEMO(dummy constraint pk primary key) as select * from dual;
Table DEMO created.

USER2 can access to the table either by prefixing it with USER1 or by setting the current_schema to USER1

SQL> connect USER2/USER2@//localhost/PDB1
Connected.
SQL> alter session set current_schema=USER1;
Session altered.

Bug

Ok, now imagine you want to read constraint metadata for the current schema you have set:

SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 /
 
no rows selected

No rows selected is a wrong result here because my current_schema is USER1 and USER1 has constraints:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = 'USER1'
4 /
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

So, where’s the problem? Let’s have a look at the execution plan:

SQL_ID 2fghqwz1cktyf, child number 0
-------------------------------------
select sys_context('USERENV','CURRENT_SCHEMA'), a.* from
sys.dba_constraints a where owner =
sys_context('USERENV','CURRENT_SCHEMA')
 
Plan hash value: 1258862619
 
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.32 | 2656 |
| 1 | PARTITION LIST ALL | | 1 | 2 | 0 |00:00:00.32 | 2656 |
|* 2 | EXTENDED DATA LINK FULL| INT$INT$DBA_CONSTRAINTS | 2 | 2 | 0 |00:00:00.32 | 2656 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR
("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER(SY
S_CONTEXT('USERENV','CON_ID')))) AND "OWNER"=SYS_CONTEXT('USERENV','CURRENT_SCHEMA')))

I am in 12.2, where DBA_CONSTRAINTS reads from INT$DBA_CONSTRAINTS, which reads from INT$INT$DBA_CONSTRAINTS. In multitenant, this view is an extended data view and will read from CDB$ROOT and from the current container. This is why we see EXTENDED DATA LINK FULL in the execution plan, and up to this point the predicates are correct: “OWNER”=SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’)

The execution through the data link is run on each container by parallel processes: they switch to the container and run the underlying query on the view. But when I look at the SQL trace of the parallel process running the query on my PDB, I can see that, in the predicate on OWNER, the SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’) has been replaced with a hardcoded value:

SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) OPT_PARAM('_ENABLE_VIEW_PDB', 'FALSE') */ OWNER,CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,OBJECT_TYPE#,SEARCH_CONDITION,SEARCH_CONDITION_VC,R_OWNER,R_CONSTRAINT_NAME,DELETE_RULE,STATUS,DEFERRABLE,DEFERRED,VALIDATED,GENERATED,BAD,RELY,LAST_CHANGE,INDEX_OWNER,INDEX_NAME,INVALID,VIEW_RELATED,ORIGIN_CON_ID FROM NO_COMMON_DATA(SYS."INT$INT$DBA_CONSTRAINTS") "INT$INT$DBA_CONSTRAINTS" WHERE ("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR "INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER('3')) AND "INT$INT$DBA_CONSTRAINTS"."OWNER"=q'"USER2"'

And unfortunately, this value is not the right one: USER2 is my connected user, but not the CURRENT_SCHEMA that I have set. In the same trace, I can see where this value comes from:

select 'q''"' || SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || '"''' from sys.dual

but it seems that the current_schema was lost through the call to the parallel process and the PDB switch to my container.

Workaround

The problem is easy to work around. This works:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = ( select sys_context('USERENV','CURRENT_SCHEMA') from dual )
4 /
 
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

And anyway, it is better to get the current schema beforehand and pass it as a bind variable. Bind variables are passed correctly through data link queries:


SQL> variable v varchar2(30)
SQL> exec select sys_context('USERENV','CURRENT_SCHEMA') into :v from dual;
 
PL/SQL procedure successfully completed.
 
SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 --where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 where owner = :v
5 /

So what?

The multitenant architecture is a real challenge for dictionary views. The dictionary is separated: system metadata in CDB$ROOT and user metadata in the PDB. But, because of compatibility with the non-CDB architecture, the dictionary views must show both of them, and this is where it becomes complex: what was separated on purpose now has to be merged. And complexity is subject to bugs. If you want to get an idea, have a look at dcore.sql in ORACLE_HOME/rdbms/admin and compare the 11g version with the 12c ones, with all the evolution in 12.1.0.1, 12.1.0.2 and 12.2.0.1.

 

Cet article Wrong result with multitenant, dba_constraints and current_schema est apparu en premier sur Blog dbi services.

Documentum – RCS 7.3 – Issue with projections

Sat, 2017-09-23 11:42

In the last two blogs I posted, I mentioned that I was working on silent installations with CS 7.3 (in HA). The issue I will describe in this blog is linked to the HA installation. I faced this issue using the silent installation, but I’m guessing it should also occur using the GUI installation.

 

So basically, on the first Content Server, I installed a Global Registry and a few other docbases. For the creation of the docbases, there was obviously a docbroker installed. This is the global docbroker, which is targeted by default by all docbases through the server.ini. In addition to that, I installed 2 other docbrokers for specific docbases. The purpose here is to have docbroker N°1 only for the GR + DocBase1 and docbroker N°2 only for the GR + DocBase2 + DocBase3. So I started to configure the projections to achieve that goal.

 

I will use the GR below as the example. If you followed what I mentioned above, I should see – at the end – two projections in the docbase (+ the default one from the server.ini). So let's configure the projections (with only one Content Server installed):

[dmadmin@content-server-01 ~]$ iapi GR_DocBase
Please enter a user (dmadmin):
Please enter password for dmadmin:


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025


Connecting to Server using docbase GR_DocBase
[DM_SESSION_I_SESSION_START]info:  "Session 010f123450003d07 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_server_config
...
3d0f123450000102
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  title                           :
  ...
  projection_targets            []: <none>
  projection_ports              []: <none>
  projection_proxval            []: <none>
  projection_notes              []: <none>
  projection_enable             []: <none>
  ...
  i_vstamp                        : 28

API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
                                                                                                    
(2 rows affected)

API> set,c,l,projection_targets[0]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[0]
SET> 1493
...
OK
API> set,c,l,projection_proxval[0]
SET> 1
...
OK
API> set,c,l,projection_notes[0]
SET> Dedicated Docbroker N°2
...
OK
API> set,c,l,projection_enable[0]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
  projection_ports             [0]: 1493
  projection_proxval           [0]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
  projection_enable            [0]: T
  ...
  i_vstamp                        : 29

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

API> set,c,l,projection_targets[1]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[1]
SET> 1491
...
OK
API> set,c,l,projection_proxval[1]
SET> 1
...
OK
API> set,c,l,projection_notes[1]
SET> Dedicated Docbroker N°1
...
OK
API> set,c,l,projection_enable[1]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
  projection_ports             [0]: 1493
                               [1]: 1491
  projection_proxval           [0]: 1
                               [1]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
  projection_enable            [0]: T
                               [1]: T
  ...
  i_vstamp                        : 30

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

 

Up to this point, everything looks good and everything is working. If you access DA, you will see two projections for the Global Registry, which are the two we defined above:

[Screenshot: Projection1]

 

So this is working. Now, what is the goal of this blog post? Well, it's not what is above… So let's talk about the issue now! The next step in this environment was to set it up in High Availability. As mentioned in my last blog post, I faced some issues with the Remote Content Server (RCS) installation, but I was finally able to install a Content-File Server. When I finished the installation of GR_DocBase on the RCS, I wanted to set up the projections between the CS1 and the CS2… For that, I opened DA again and went to the same screen that you can see above (in the screenshot).

 

What I was expecting to see was the exact same thing as above, meaning two projections for GR_DocBase with the two docbrokers installed on the CS1. What I saw was a little bit different…

[Screenshot: Projection2]

 

I can assure you that between the two screenshots above, the only thing I did was to install the CFS for GR_DocBase on the RCS… So why is this screen not working anymore? What's the issue? "An error has occurred. 1 >= 1". Yeah sure, what else? ;)

 

I checked the Documentum Administrator logs and found the following stack trace:

05:45:07,980 ERROR [[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'] com.documentum.web.common.Trace - 1 >= 1
java.lang.ArrayIndexOutOfBoundsException: 1 >= 1
        at java.util.Vector.elementAt(Vector.java:474)
        at com.documentum.fc.common.DfList.get(DfList.java:427)
        at com.documentum.fc.common.DfList.getString(DfList.java:561)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.loadConnectionBrokerProjections(ServerConnectionBrokerList.java:78)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.onInit(ServerConnectionBrokerList.java:51)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1604)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1489)
        at com.documentum.web.form.FormProcessor.fireOnInitEvent(FormProcessor.java:1154)
        at com.documentum.web.form.ControlTag.fireFormOnInitEvent(ControlTag.java:794)
        at com.documentum.web.formext.control.component.ComponentIncludeTag.renderEnd(ComponentIncludeTag.java:135)
        at com.documentum.web.form.ControlTag.doEndTag(ControlTag.java:928)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jsp__tag25(__propertysheetwizardcontainer.java:1501)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jspService(__propertysheetwizardcontainer.java:368)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.ResponseHeaderControlFilter.doFilter(ResponseHeaderControlFilter.java:351)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.CompressionFilter.doFilter(CompressionFilter.java:96)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.env.WDKController.processRequest(WDKController.java:144)
        at com.documentum.web.env.WDKController.doFilter(WDKController.java:131)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3683)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3649)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2433)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2281)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2259)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1691)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1651)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)

 

As mentioned in the stack trace (which is more useful than the error message), the problem might come from the docbroker projections… But we checked them just before installing the remote docbase and they were OK… With CS 7.3, the RCS installation automatically adds a projection on the Primary Content Server dm_server_config object to point to the remote docbroker, and this is what causes the error…

 

So I checked the projections on the GR_DocBase primary dm_server_config object to see what the issue was, and I got this:

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote.
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 34

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote.      1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

So this looks good, doesn't it? And yet, through DA, it shows this ArrayIndexOutOfBoundsException… I tried a lot of things, but the only way I could solve this issue was to remove the projection that the RCS installer automatically adds and then to add it again… To simplify the process, I included that in our silent scripts so it is done automatically after the creation of the CFS. Just below is a very small extract of the code I added to our silent scripts (shell) in order to remove the projection added by the installer (some variables are obviously defined before this section of code…):

...
echo "  **  "
echo "  **  HA Docbase Configuration - projections"
echo "  **  "
dql_select_docbase="${script_folder}/rcs_select_${docbase}"
dql_update_docbase="${script_folder}/rcs_update_${docbase}"
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_select_docbase}.log 2>&1
SELECT 'projection_id=', i_position*-1-1 as projection_id FROM dm_server_config WHERE LOWER(object_name) like LOWER('${docbase}') AND projection_notes='Projecting primary to remote.' AND LOWER(projection_targets) like LOWER('%`hostname -s`%') enable(ROW_BASED)
go
EOF
if [[ ${?} != 0 ]]; then
  echo "ERROR - There was a problem while gathering the index for ${docbase}. Please see the file '${dql_select_docbase}.log' for error. Exiting."
  exit
fi
index=`grep "^projection_id=[[:space:]]" ${dql_select_docbase}.log | sed 's,^.*=[[:space:]]*\([0-9]*\)[[:space:]]*$,\1,'`
if [[ ${index} == "" || ${index} == '""' ]]; then
  echo "INFO - There is no existing projections from the primary to the remote."
else
  echo -e "INFO - The index of 'dm_server_config.projection_targets' is:  \t${index}"
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_update_docbase}.log 2>&1
UPDATE dm_server_config object REMOVE projection_targets[${index}], REMOVE projection_ports[${index}], REMOVE projection_proxval[${index}], REMOVE projection_notes[${index}], REMOVE projection_enable[${index}] WHERE LOWER(object_name) like LOWER('${docbase}') enable(ROW_BASED)
go
EOF
  nb_error=`grep "_E_" ${dql_update_docbase}.log | wc -l`
  cat "${dql_update_docbase}.log"
  echo ""
  if [[ ${nb_error} != 0 ]]; then
    echo "ERROR - There was a problem while updating the object for ${docbase}. Please see above. Exiting."
    exit
  fi
fi
...

 

This small section of code just shows how this kind of thing can be automated. I'm simply executing a first DQL to gather the index of the projection added by the RCS installer. This is the correct index because the note added by the RCS is always "Projecting primary to remote." and the target host is obviously the RCS… To confirm that DA is working again, you can reload the docbroker projection screen; for me it was indeed working again.

 

So the next step is to add the projection again, but this time without the "." at the end of the note (to differentiate them), because we still need this projection for the CS2 to be aware of the docbases on the CS1. You can do it either programmatically using iapi/idql or with DA directly; see the sketch just below.
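A minimal iapi sketch, simply mirroring the commands used earlier for the first two projections; it assumes the two dedicated docbroker projections are still at indexes [0] and [1], so the re-added one goes to index [2]:

API> set,c,l,projection_targets[2]
SET> content-server-02
...
OK
API> set,c,l,projection_ports[2]
SET> 1489
...
OK
API> set,c,l,projection_proxval[2]
SET> 2
...
OK
API> set,c,l,projection_notes[2]
SET> Projecting primary to remote
...
OK
API> set,c,l,projection_enable[2]
SET> T
...
OK
API> save,c,l
...
OK
API> reinit,c
...
OK

After re-adding this third projection, I got the following in the docbase (notice the only difference is the missing "." in the note):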

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 36

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote       1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

The result is the same, but I was finally able to see the three projections through DA… I don't know what exactly was wrong with the original entry, because I simply re-added the same thing with a slightly different note, but I'm sure it was caused by the RCS installation…

[Screenshot: Projection3]

 

Yet another interesting behaviour of Documentum… So many ways to have fun! :)

 

 

The post Documentum – RCS 7.3 – Issue with projections appeared first on Blog dbi services.
