Yann Neuhaus

dbi services technical blog

SQL Server 2017 on RHEL first installation

Mon, 2017-10-02 04:24

At the beginning of last week, Microsoft announced the general availability of SQL Server 2017 for today, the second of October. At the same time, on a more marketing note, Microsoft announced a SQL Server on Red Hat Enterprise Linux offer; more information here.
It looked like the right time for me to do my first installation of SQL Server 2017 on Linux.

My first concern was to download a Red Hat Enterprise Linux ISO. I found it here and mapped it into my new Hyper-V virtual machine, which I created with small settings: just one vCPU, 4GB RAM and a 50GB disk.
To be honest, for a Windows guy, having to work in a Linux environment is a little bit stressful…
The first step was to install Linux.
I did it with the most minimal configuration possible and let the installer configure the needed mount points automatically.
Once the installation finished, the tricky part for me was configuring my two network cards: one for internal purposes, to interconnect with my other VMs, and a second one to connect to the Internet… what a challenge… after a few questions to my Oracle colleagues and some tests, I finally managed to set up both cards ;-)

I started the installation with YUM, the Red Hat package manager, following the Microsoft documentation.
First error: my Red Hat Enterprise Linux system is not registered… In order to register and automatically attach a subscription, I ran this command with the user id and password I had used to create an account before downloading the RHEL ISO:

# subscription-manager register --username <username> --password <password> --auto-attach

After this registration, the first step was to download the Microsoft SQL Server Red Hat repository file into the /etc/yum.repos.d folder and to run a YUM update to install the latest available updates… more than 240 for me…
Once done, I was able to execute the following commands to download and install the SQL Server packages:

SQL2017_1

SQL2017_2
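For reference, the commands behind the screenshots above look roughly like this (repository URL and package name as documented by Microsoft for RHEL at the time of writing; double-check the current documentation before copying them):

# Download the SQL Server 2017 repository definition and install the engine package
sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
sudo yum update -y
sudo yum install -y mssql-server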

During the installation of the packages, I was asked to run the SQL Server configuration setup to finalize the setup of my SQL Server. I did it and had to accept the license agreement, choose an edition and give a password to the SA login:

SQL2017_3

SQL2017_4
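In text form, this configuration step is a single command (the path is the standard installation location used by the mssql-server package):

# Accept the license, choose the edition and set the SA password
sudo /opt/mssql/bin/mssql-conf setup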

It’s already done!
My SQL Server instance is installed and processes are running:

SQL2017_5

SQL2017_6
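If you want to verify this from the shell rather than from the screenshots, a quick check could look like this:

# Check the service state and the running SQL Server processes
sudo systemctl status mssql-server
ps -ef | grep sqlservr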

At this point it was already possible to connect to my instance via Management Studio installed in another virtual machine. I used the IP address of my virtual machine (“ip a” to get it on Linux), my system administrator (SA) user and password, and the default port 1433:

SQL2017_10

SQL2017_11

To be able to query my instance directly from my Linux machine, I installed the SQL Server tools: as before, I downloaded the Microsoft Red Hat repository configuration file (1 in the picture below) and then installed the tools from the packages previously downloaded (2 in the picture):

SQL2017_7

SQL2017_8
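A sketch of those two steps in text form (again, repository URL and package names as documented by Microsoft for RHEL; adjust to your release):

# (1) Add the Microsoft "prod" repository, (2) install sqlcmd/bcp and the ODBC driver
sudo curl -o /etc/yum.repos.d/msprod.repo https://packages.microsoft.com/config/rhel/7/prod.repo
sudo yum install -y mssql-tools unixODBC-devel
# Put sqlcmd/bcp on the PATH for future sessions
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc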

After this installation, I was able to query my instance directly from Linux with the following command:

SQL2017_9
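The command is a classic sqlcmd call; something like the following, with a placeholder password, should return the server name and version:

sqlcmd -S localhost -U SA -P '<YourPassword>' -Q "SELECT @@SERVERNAME, @@VERSION;"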

This first step with Microsoft SQL Server 2017 on Red Hat Enterprise Linux was a good experience and also a good way to remember some Linux commands which I hadn’t used for a while…

Before concluding, a small tip if you want to change the port of your SQL Server instance.
As you know, by default SQL Server runs on port 1433. To change it on SQL Server 2017, I used mssql-conf, a configuration script installed with this new version of SQL Server. This script is used to set parameters like default directories, collation, trace flags…
Here I set the port to 48100:

SQL2017_12
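The corresponding command, assuming the standard installation path, is simply:

# Change the TCP port the instance listens on
sudo /opt/mssql/bin/mssql-conf set network.tcpport 48100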

I needed to restart my SQL Server service, and then I could run a sqlcmd command with my new port:

SQL2017_13
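In text form, the restart and the connection on the new port look like this (note the comma between host and port in the sqlcmd server name; the password is a placeholder):

sudo systemctl restart mssql-server
sqlcmd -S localhost,48100 -U SA -P '<YourPassword>' -Q "SELECT @@SERVERNAME;"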

That’s all Folks!
It was quite fun to play around with the command line, as I did during my studies many years ago ;-)



SQL Server 2016: New Dynamic Management Views (DMVs)

Fri, 2017-09-29 08:32

In SQL Server 2016, you will discover a lot of new Dynamic Management Views (DMVs).
In this article, I will just give you a little overview of these views, which are very useful for us as DBAs.

SQL Server 2012 has 145 DMVs and SQL Server 2014 has 166 DMVs.
Now, SQL Server 2016 has 185 DMVs.

How to see it?

It is very easy to have a look using the sys.all_objects view:

SELECT * FROM sys.all_objects WHERE type = 'V' AND name LIKE 'dm_%' ORDER BY name ASC

DMV_SQL2016
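To get the count per version, a small variation of the same query can be run on each instance, for example through sqlcmd (the SA credentials here are only placeholders):

sqlcmd -S localhost -U SA -P '<YourPassword>' -Q "SELECT COUNT(*) AS dmv_count FROM sys.all_objects WHERE type = 'V' AND name LIKE 'dm_%';"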

From SQL Server 2012 to SQL Server 2014, we can notice that a lot of the new DMVs came with the In-Memory technology, with names of the form “dm_xtp_xxxxxxxx” or “dm_db_xtp_xxxxxxxx”.

In SQL Server 2016, a lot of new “dm_exec_xxxxxxxx” views are present.

All definitions for these views come from the Microsoft documentation or web site.

To begin, you will see 10 new DMVs for the PolyBase technology, including:

  • dm_exec_compute_node_status
  • dm_exec_dms_workers

A useful MSDN page summarizes all of these new DMVs here

Other dm_exec_xxx views are also very useful, for example:

  • dm_exec_query_optimizer_memory_gateways
    • Returns the current status of resource semaphores used to throttle concurrent query optimization.
    • Microsoft Reference here
  • dm_exec_session_wait_stats
    • Returns information about all the waits encountered by threads that executed for each session
    • Microsoft Reference here

3 new DMVs for the Columnstore technology:

  • dm_column_store_object_pool
  • dm_db_column_store_row_group_operational_stats
    • Returns current row-level I/O, locking, and access method activity for compressed rowgroups in a columnstore index.
    • Microsoft Reference here
  • dm_db_column_store_row_group_physical_stats
    • Provides current rowgroup-level information about all of the columnstore indexes in the current database
    • Microsoft Reference here

2 new DMVs for Stretch Databases, in the database context and prefixed with rda (remote data archive), including:

  • dm_db_rda_migration_status
    • For the current database, list of state information of the remote data archive schema update task.
    • Microsoft Reference here

This list can change if a Service Pack is applied.
It is just meant to give you a little reference about these useful views! 8-)

 


“_suppress_identifiers_on_dupkey” – the SAP workaround for bad design

Fri, 2017-09-29 08:01

In SQL, ‘upsert’ is a conditional insert or update: if the row is there, you update it, but if it is not there, you insert it. In Oracle, you should use a MERGE statement for that. You are clearly doing it wrong if you code something like:

begin
insert...
exception
when dup_val_on_index then update...
end;


But it seems that there are many applications with this bad design, and Oracle has introduced an underscore parameter for them: “_suppress_identifiers_on_dupkey”. You won’t be surprised that this one is part of the long list of parameters required for SAP.

Let’s investigate this.

Insert – Exception – Update

So the idea is to first try an insert, rely on the unique constraint (primary key) to raise an exception if the row already exists, and in that case update the existing row. There are several flaws with that.

The first problem is that it is not as easy as it looks. If a concurrent session deletes the row between your insert and your update, then the update will fail. You have to manage this. The failed insert cannot leave a lock on the row that was not inserted.

The second problem is that the SQL engine is optimized for transactions which commit. When the ‘dup_val_on_index’ exception occurs, you have already inserted the table row, updated some indexes, etc. All of that has to be rolled back when the exception occurs. This generates unnecessary contention on the index leaf block, and unnecessary redo.

The third problem, and probably the worst one, is that an exception is an error, and error management has a lot of work to do, such as looking into the dictionary for the violated constraint name in order to give you a nice error message.

I’ve created the following table:

create table demo as select * from dual;
create unique index demo on demo(dummy);

And I’ve run 10 million inserts on it, all with duplicates:

exec for i in 1..1e7 loop begin insert into demo values('x'); exception when others then null; end; end loop;

Here are some extracts from the AWR report, based on manual snapshots taken before and after.

Elapsed: 20.69 (mins)
DB Time: 20.69 (mins)

This has run for 20 minutes.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 33.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 92.31 % Non-Parse CPU: 94.90
Flash Cache Hit %: 0.00

The ‘Execute to Parse %’ shows that 2/3 of the statements are parsed each time.


SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 19-20
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 180,125,740
-> Captured SQL account for 127.7% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.80094E+08 1 1.800942E+08 100.0 1,239.8 99.5 .3 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
1.60094E+08 10,000,000 16.0 88.9 983.1 100.3 .4 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')
 
49,999,995 9,999,999 5.0 27.8 201.1 103.2 0 2skwhauh2cwky
PDB: PDB1
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.u
ser#
 
19,999,998 9,999,999 2.0 11.1 148.5 98.9 0 2jfqzrxhrm93b
PDB: PDB1
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# =
cd.con# and cd.enabled = :1 and c.owner# = u.user#

My failed inserts have read 16 blocks on average for each attempt. That’s too much for doing nothing. And in addition to that, I see two expensive statements parsed and executed each time: one to get the object name and one to get the constraint name.
This is how Oracle retrieves the error message, which is:

 
ORA-00001: unique constraint (SCOTT.DEMO) violated
 

This is a big waste of resources. I did this test in PL/SQL, but if you accumulate all the worst practices and run those inserts row by row, then you will see those colors:
CaptureBreakReset

The Orange is ‘Log File Sync’ because you generate more redo than necessary.
The Green is ‘CPU’ because you read more blocks than necessary.
The red is ‘SQL*Net break/reset to client’, when the server process sends the error.

_suppress_identifiers_on_dupkey

When you set “_suppress_identifiers_on_dupkey” to true, Oracle will not return the name of the constraint which is violated, but only the information which is already there in the session context.

Here is the message that you get:

 
ORA-00001: unique constraint (UNKNOWN.obj#=73375) violated
 

Here, 73375 is the OBJECT_ID of the index on which the unique constraint has been violated.

You have less information, but it is faster:

Elapsed: 15.45 (mins)
DB Time: 15.48 (mins)

There is no Soft Parse overhead:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 96.43
Execute to Parse %: 99.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 90.38 % Non-Parse CPU: 99.95
Flash Cache Hit %: 0.00

Our statement is the only one using the CPU, and it reads fewer blocks:

SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 21-22
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 110,132,467
-> Captured SQL account for 81.8% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.10091E+08 1 1.100906E+08 100.0 926.2 98.8 1 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
90,090,580 10,000,000 9.0 81.8 515.7 99.1 1.9 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')

This parameter is a workaround for bad design, but not a solution.

Update – no rows – Insert

In order to avoid all this rollback and exception management overhead, there is another idea. Start with the update and, when no row is found, insert it. This is easy with SQL%ROWCOUNT.

begin
update ...;
if SQL%ROWCOUNT = 0 then insert ...; end if;
end;

This is more efficient but still subject to a concurrent session inserting the row between your update and your insert. But at least you manage the different scenarios with a condition on SQL%ROWCOUNT rather than with an exception, which is more scalable.

So what?

Always use the database in the expected way. Exceptions and errors are not for the normal scenario of a use case; exceptions should be unusual. The solution is to use the MERGE statement, which has been implemented exactly for this reason: it does an upsert without the error management overhead, and with the statement isolation level which prevents errors in a multi-user environment.
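To give an idea of what that looks like, here is a minimal sketch of a MERGE upsert, run from the shell through SQL*Plus. The credentials and the two-column table DEMO_KV are hypothetical, only there to illustrate the syntax:

# DEMO_KV is a hypothetical key/value table: create table demo_kv (k number primary key, v varchar2(10));
sqlplus -s scott/tiger <<'EOF'
merge into demo_kv t
using (select 1 as k, 'x' as v from dual) s
on (t.k = s.k)
when matched then update set t.v = s.v
when not matched then insert (k, v) values (s.k, s.v);
commit;
EOF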

 


Oracle Big Data Cloud Service – Compute Edition

Fri, 2017-09-29 03:00

In this blog post, we will see how to create a Big Data cluster through the Oracle Cloud Services. If you want more details about the Oracle Big Data cloud services offering, you can refer to my previous blog Introduction to Oracle Big Data.

First, you need to create your trial account through the following link: https://cloud.oracle.com/tryit. Note that when you create your trial account, all information (phone number, address, credit card…) must be from the same country. Otherwise, you will get an error message.

Then you will get an email from Oracle with your connection information, including the 4 main connection details.

During the first connection you need to change your password and answer 3 secret questions.

You are now logged into the Oracle Cloud Services dashboard. Select the “Big Data – Compute Edition” service to create your cluster.

BDCS-CE Dashboard

Click on “Service” and “Create Service”.

BDCS-CE Create Cluster

First, complete the service information (cluster name, description…) and click on “Next”.

BDCS-CE Cluster creation

Then, you enter the details of your Big Data cluster (configuration, credentials, storage…).

Cluster configuration:

Use the “full” deployment. It will provision a cluster with Spark, MapReduce, Zeppelin, Hive, Spark Thrift, Big Data File System.

Credentials:

Generate an SSH public key and insert it (see the screenshot below). Update or keep the current administrative user / password, which is very important for the next operations.
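If you do not already have a key pair, a minimal way to generate one on a Linux or macOS client is shown below (the file name is just an example); the content of the .pub file is what you paste into the form:

# Generate an RSA key pair and display the public part
ssh-keygen -t rsa -b 2048 -f ~/.ssh/bdcsce_key
cat ~/.ssh/bdcsce_key.pub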

Storage:

Oracle Public Cloud works with object storage containers, which means that a storage container can be used by all cloud services. For the Big Data service you need to use an existing storage container or create one. The storage container name must follow a specific syntax:

https://<identity_domaine>.storage.oraclecloud.com/v1/Storage-<identity_domaine>/<container_name>

Example: https://axxxxxx.storage.oraclecloud.com/v1/Storage-axxxxxx/dbistorage

You can find the complete configuration below.

BDCS-CE Configuration Overview

Confirm your cluster configuration and click on “Next”.

During the cluster deployment, you can take the time to read the documentation: https://docs.oracle.com/en/cloud/paas/big-data-cloud/index.html

Once your service has been deployed, you can open the Big Data Cluster Console to monitor and access your cluster.

BDCS-CE Cluster Overview

 

BDCS-CE Cluster Console

OBDCS-CE Monitoring

You have now deployed a Big Data cluster composed of 3 nodes, based on the Hortonworks distribution, with the following tools:

  • HDFS = Hadoop Distributed File System
  • YARN = Resource management for the cluster
  • Hive = Data warehouse for managing large data sets using SQL
  • Spark = Data processing framework
  • Pig = High-level platform for creating programs that run on Hadoop
  • ZooKeeper = Distributed coordination service for the cluster
  • Zeppelin = Web-based data scientist workbench
  • Alluxio = Memory-speed virtual distributed storage
  • Tez = Framework for YARN-based data processing applications in Hadoop

Your Oracle Big Data cluster, through Oracle Big Data Cloud Service – Compute Edition is now ready to use.

Enjoy ;-)

 


Introduction to Oracle Big Data Services

Fri, 2017-09-29 01:00

A few years ago, Oracle decided to move forward in the Big Data area, like its main competitors. The goal of this blog post is to explain how the Oracle Big Data offering is composed.

As the Oracle Big Data offering is continuously improving, I’m always open to your feedback :-)

The Oracle Big Data offering is split into 2 parts:

  • On-Premise
  • Public Cloud

Note: it’s important to know that the 2 main Big Data distributions on the market are Cloudera and Hortonworks. We will see later how Oracle positions itself with these 2 distributions.

On-premise: Oracle Big Data Appliance:

The main product of the Oracle Big Data offering is the Oracle Big Data Appliance. OBDA is an engineered system based on the Cloudera distribution. The Big Data Appliance offers you an easy-to-deploy solution, with Cloudera Manager for managing a Big Data cluster, including a complete ready-to-use Hadoop ecosystem.

Oracle Big Data Appliance starts with a “Starter” rack of 6 nodes for a storage capacity of 96TB. Below is the detailed configuration per node.

Oracle X6-2 server:

  • 2 × 22-Core Intel ® Xeon ® E5 Processors
  • 64GB Memory
  • 96TB disk space

Oracle Big Data Appliance is a combination of open source software and proprietary software from Oracle (e.g. Oracle Big Data SQL). Below is a high-level overview of the Big Data Appliance software.

Screen Shot 2017-09-27 at 08.25.45

Oracle Big Data Cloud Machine:

On the customer side, Oracle offers the Oracle Big Data Cloud Machine (BDCM). It is fully managed by Oracle, as it is a PaaS (Platform as a Service) offering based on the customer’s infrastructure and designed to provide the Big Data Cloud Service. The BDCM is a Big Data Appliance managed and operated by Oracle in the customer’s data center.

The Big Data Cloud Machine starts with a “Starter Pack” of 3 nodes. Below is the minimal configuration:

  • 3 nodes
  • 32 OCPU’s per node
  • 256GB RAM per node
  • 48TB disk space per node

Oracle Big Data Cloud Machine pricing: https://cloud.oracle.com/en_US/big-data/cloudmachine/pricing

Oracle Public Cloud:

Oracle provides several deployment and services for Big Data:

  • Oracle Big Data Cloud Services
  • Oracle Big Data Cloud Services – Compute Edition
  • Event Hub Cloud Services (Kafka as a Service)
  • Oracle Big Data SQL Cloud Service

Oracle public cloud services, including Big Data, are available with two payment methods: metered and non-metered.

  • Metered: you are charged based on the actual usage of the service resources:
    • OCPU/hour
    • Environment/hour
    • Host/hour
    • For storage: GB or TB/month
  • Non-metered: monthly or annual subscription for a service, independent of the resource usage. Charging is performed monthly.

For more information you can refer to the following links:

https://blogs.oracle.com/pshuff/metered-vs-un-metered-vs-dedicated-services

Oracle Big Data Cloud Services:

OBDCS is a dedicated Big Data Appliance in the public cloud: an engineered system managed and pre-configured by Oracle. OBDCS is a large system from the start, with terabytes of storage.

The offering starts with a “Starter pack” of 3 nodes, including:

  • Platform as a Service
  • 2 payment methods: metered and non-metered
  • SSH connection to cluster nodes
  • Cloudera’s Distribution including Apache Hadoop, Enterprise Data Hub Edition
  • Oracle Big Data Connectors
  • Oracle Copy to Hadoop
  • Oracle Big Data Spatial and Graph

The entry cost is very high, which is why this service is recommended for large and more mature business cases.

Pricing information: https://cloud.oracle.com/en_US/big-data/big-data/pricing

Oracle Big Data Cloud Services – Compute Edition:

OBDCS-CE provides you with a dedicated Hadoop cluster based on the Hortonworks distribution. The entry cost is lower than for Oracle Big Data Cloud Service, which is why this service is more suitable for small business use cases and proofs of concept.

OBDCS-CE offering details:

  • Platform as a Service
  • 2 payment methods: metered and non-metered
  • Apache Hadoop cluster based on the Hortonworks distribution
  • Free choice of the number of nodes for the deployment – 3 nodes is the minimum for a high-availability cluster, recommended for production. You can actually have one-node clusters, but this is obviously not recommended.
  • Apache Zeppelin for Hive and Spark analytics
  • 3 access methods:
    • BDCS-CE console (GUI)
    • REST API
    • SSH

Pricing information: https://cloud.oracle.com/en_US/big-data-cloud/pricing

Summary:

  • On-Premise (customer side)
    • Engineered systems: Big Data Appliance (BDA), Big Data Cloud Machine (a BDA managed by Oracle)
    • PaaS: Oracle Cloud Machine (OCM) + BDCS – Compute Edition
  • Oracle Public Cloud
    • Engineered systems: Big Data Cloud Service (BDCS) – a BDA in the Oracle public cloud – Cloudera distribution
    • PaaS: Big Data Cloud Service – Compute Edition – Hortonworks distribution

More details about Oracle PaaS offering:

http://www.oracle.com/us/corporate/contracts/paas-iaas-public-cloud-2140609.pdf

I hope this blog will help you to better understand the Oracle Big Data offering and products.

 


Announcing the dbi OpenDB Appliance

Thu, 2017-09-28 07:04

As already announced on Twitter and LinkedIn, here is the blog post describing our OpenDB appliance in more detail. I am sure you wonder what this is about, so let me explain why we are doing this. What we see day by day at our customers is that more and more databases get consolidated onto a VMWare deployment. This is not only true for the smaller ones but also for the critical, potentially much bigger ones. What makes it complicated, especially for smaller companies that do not necessarily have the know-how for the specific database, is that you need to apply the best practices not only to the database deployment but also to the operating system and the VMWare deployment. But even if you have this already in place: do you know how to deploy the PostgreSQL binaries, how to set up a PostgreSQL instance, how to monitor it and how to back up and restore all that? Do you know how to do this with MySQL/MariaDB, MongoDB, Cassandra? If your answer to this is no but you need to have a PostgreSQL/MySQL/MariaDB/MongoDB/Cassandra instance ready quite fast, then the dbi OpenDB Appliance might be the solution for you. Let’s dig into some details.

OpenDB-logo

A typical use case: you are forced to support an application which is running on a specific database. What do you do? Quickly set up a Linux VM, download the installer, click next, next, next, hopefully make the application connect to what you just installed, and then cross your fingers and hope that nothing ever goes wrong? You laugh? There are deployments out there which got set up in exactly this way. Another option would be to hire someone who is experienced in that area. This will not help you either, as you’d need at least two people (because people tend to want to go on holiday from time to time). The next option would be to work together with external consultants, which will probably work as long as you work with the right ones. Completely outsourcing the stuff is another option (or even going to the cloud), if you want to do that. With the dbi OpenDB Appliance you get another option: we deliver a fully pre-configured VMWare based virtual machine image which you can easily plug into your existing VMWare landscape. Can that work? Let me explain what you would get:

As said just before you get an image which you can import into your VMWare ESX. I said this image is pre-configured, what does that mean? Well, when you start it up it boots into a CentOS 7.3 x64 Linux operating system. No magic, I know :) Additionally you’ll get four pre-configured disks:

/       15GB    The Linux operating system
/boot	1GB	The boot images (kernels)
/u01	50GB	All files belonging to the OpenDB itself
                All required DMK packages
                All source files (PostgreSQL, MariaDB, MongoDB, Cassandra)
                The Linux yum repositories
                The HOMEs of all product installations
                The admin directories for the initialized products
/u02	10GB	The data files belonging to the initialized products
/u03	10GB	The redo/wal files belonging to the initialized products
/u04	10GB	Backups

You are not supposed to touch the root, /boot and /u01 partitions but of course you will be able to resize /u02 to /u04. The 10GB provided initially are just meant as minimum setup. Resize your VMWare disk images (vmdks) and the dbi OpenDB command line utility offers you to resize the file systems as well with just a single call. At this point you probably wonder what the dbi OpenDB command line utility is about. In short this is a wrapper around our various DMK packages. Using one of the various DMK packages you can deploy and monitor databases even today. The command line utility makes use of that and wraps around the various DMKs. The interface is menu driven to make it as easy as possible for you and helps you with initializing the appliance (setting the hostname, network configuration and disk resizing). In addition you can install the products we support and create database instances on top of that without knowing the details. We take care of implementing the best practices in the background (kernel settings, file system layout, initialization parameters, …). But that is not all: We’ll go a step further and implement monitoring, alerting and backup procedures as well. The idea is that you really do not need to take care of such things: It just comes when you setup a product.

To give you an idea you’ll get something like this when you fire up the command line utility:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit):                       =
=                                                                                            =
=      1. Deploy a database home                                                             =
=      2. List the deployed database homes                                                   =
=      3. Setup a database instance                                                          =
=      4. List the deployed database instances                                               =
=                                                                                            =
=     10. Stop and remove a database instance                                                =
=     11. Remove a database home                                                             =
=                                                                                            =
=                                                                                            =
=     99. Initialize the appliance                                                           =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

You would start by “Initialize the appliance” to set your preferred host name, to initialize the network and to provide the monitoring credentials. Once done you can go on and start deploying product homes (e.g. PostgreSQL) and instances on top of that. Of course you can deploy multiple instances on the same home and you can install several homes of the same product version.

What do we mean by a “product”? A product is what we support with a specific release of the appliance. Initially this probably will be:

  • PostgreSQL 9.6.5
  • PostgreSQL 9.5.9

So the menu would offer you something like this for deploying the binaries:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     000 - PostgreSQL 9.6.5                                                                 =
=     001 - PostgreSQL 9.5.9                                                                 =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

Once you have deployed the homes you require you can list them:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     The following homes are available for deploying instances on:                          =
=                                                                                            =
=                                                                                            =
=     pg965:/u01/app/opendb/product/PG96/db_5/:dummy:9999:D                                  =
=     PG959:/u01/app/opendb/product/PG95/db_9/:dummy:9999:D                                  =
=     PG959_1:/u01/app/opendb/product/PG95/db_9_0:dummy:9999:D                               =
=     PG965_1:/u01/app/opendb/product/PG96/db_5_0:dummy:9999:D                               =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

Here you can see that you can have multiple homes of the same release (two for PostgreSQL 9.6.5 and two for PostgreSQL 9.5.9 in this case). The path and naming for a home follow our best practices and are generated automatically. Having the homes, you can start deploying your instances:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     Please specify an alias for your new instance                                          =
=       The alias needs to be at least 4 characters                                          =
=       The alias needs to be at most  8 characters                                          =
=                                                                                            =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: MYINST1 

What happens in the background is that the PostgreSQL cluster is initialized, started and added to the auto-start configuration (systemd), so that the instance will properly shut down when the appliance is stopped and come up when the appliance is started. Listing the deployed instances is possible, too, of course:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     The following instances are currently deployed:                                        =
=                                                                                            =
=                                                                                            =
=     MYINST1:/u01/app/opendb/product/PG96/db_5/:/u02/opendb/pgdata/MYINST1:5432:Y           =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

The cronjobs for monitoring, alerting and backup have been created as well:

[opendb@opendb ~]$ crontab -l
00 01 * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-pg-dump.sh -s MYINST1 -t /u04/opendb/pgdata/MYINST1/dumps >/dev/null 2>&1
58 00 * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-pg-badger-reports.sh -s MYINST1 >/dev/null 2>&1
*/10 * * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-check-postgres.sh -s MYINST1 -m  >/dev/null 2>&1

With every new release/update of the appliance we plan to include more products such as MariaDB/MongoDB/Cassandra, provide patch sets for the existing ones and update the Linux operating system. Updates will be delivered as tarballs and the command line utility will take care of the rest, you do not need to worry about that. You can expect updates twice a year.

To visualize this:
OpenDB-big-picture

/u02 will hold all the files that contain your user data. /u03 is there for redo/wal/binlog where required and /u04 is for holding the backups. This is fixed and must not be changed. Independent of which product you choose to deploy you’ll get a combination of pcp (Performance Co-Pilot) and vector to do real time performance monitoring of the appliance (of course configured automatically).

Alerting will be done by a combination of third party (open source) projects and DMK. The tools we’ll use for PostgreSQL will be check_postgres and pgbadger, for example. For the other products we’ll announce what we will use when it will be included in a future release.

In addition to the VMWare template you can have the appliance also in the Hidora Cloud as a pay as you go service (although that is not fully ready).

If you have any questions just send us an email at: opendb[at]dbi-services[dot]com

 


Searching wikipedia from the command line

Thu, 2017-09-28 06:41

Wouldn’t it be nice if you could search wikipedia from the command line? I often need to quickly look up a definition or want to know more about a specific topic when I am working on the command line. So here is how you can do it …

What you need is npm and wikit. On my Debian-based system I can install both with:

$ sudo apt-get install npm
$ sudo npm install wikit -g
$ sudo ln -s /usr/bin/nodejs /usr/bin/node

The link is to avoid the following issue:

$ wikit postgresql
/usr/bin/env: ‘node’: No such file or directory

For Fedora/RedHat/CentOS you should use yum:

$ sudo yum install npm -y
$ sudo npm install wikit -g

Once you have that you can use wikit to query wikipedia (summary):

$ wikit postgresql
 PostgreSQL, often simply Postgres, is an object-relational database management system
 (ORDBMS) with an emphasis on extensibility and standards compliance. As a database
 server, its primary functions are to store data securely and return that data in
 response to requests from other software applications. It can handle workloads ranging
 from small single-machine applications to large Internet-facing applications (or
 for data warehousing) with many concurrent users; on macOS Server, PostgreSQL is
 the default database; and it is also available for Microsoft Windows and Linux (supplied
 in most distributions). PostgreSQL is ACID-compliant and transactional. PostgreSQL
 has updatable views and materialized views, triggers, foreign keys; supports functions
 and stored procedures, and other expandability. PostgreSQL is developed by the PostgreSQL
 Global Development Group, a diverse group of many companies and individual contributors.
 It is free and open-source, released under the terms of the PostgreSQL License,
 a permissive software license.

Cool. When you want to read the output in your default browser instead of the console, you can do this as well by adding the “-b” flag:

$ wikit postgresql -b

When you want to open the “disambiguation” page in your browser:

$ wikit postgresql -d

Selection_013

Changing the language is possible as well with the “--lang” switch:

$ wikit --lang de postgresql 
 PostgreSQL (englisch [,pəʊstgɹɛs kjʊ'ɛl]), oft kurz Postgres genannt, ist ein freies,
 objektrelationales Datenbankmanagementsystem (ORDBMS). Seine Entwicklung begann
 in den 1980er Jahren, seit 1997 wird die Software von einer Open-Source-Community
 weiterentwickelt. PostgreSQL ist weitgehend konform mit dem SQL-Standard ANSI-SQL
 2008, d.h. der Großteil der Funktionen ist verfügbar und verhält sich wie definiert.
 PostgreSQL ist vollständig ACID-konform (inklusive der Data Definition Language),
 und unterstützt erweiterbare Datentypen, Operatoren, Funktionen und Aggregate. Obwohl
 sich die Entwicklergemeinde sehr eng an den SQL-Standard hält, gibt es dennoch eine
 Reihe von PostgreSQL-spezifischen Funktionalitäten, wobei in der Dokumentation bei
 jeder Eigenschaft ein Hinweis erfolgt, ob dies dem SQL-Standard entspricht, oder
 ob es sich um eine spezifische Erweiterung handelt. Darüber hinaus verfügt PostgreSQL
 über ein umfangreiches Angebot an Erweiterungen durch Dritthersteller, wie z.B.
 PostGIS zur Verwaltung von Geo-Daten. PostgreSQL ist in den meisten Linux-Distributionen
 enthalten. Apple liefert ab der Version Mac OS X Lion (10.7) PostgreSQL als Standarddatenbank

Quite helpful …

 


When deterministic function is not

Tue, 2017-09-26 14:47

When you declare a function-based index, the function must be deterministic, which means that calling it with the same arguments will always return the same result. This is required because the indexed values, which are the result of the function, are stored in the index. But what happens if you declare a function deterministic when it is not?

I wanted to do this test after reading the following documents about Postgres HOT and WARM.

The relevant read is https://t.co/rCzqdz9RyJ and https://t.co/rFis1lqlFf

— Vladimir Sitnikov (@VladimirSitnikv) August 2, 2017

They say that they cannot vacuum one page at a time because index entries must be cleaned, and there’s a risk when trying to find an index entry from the table in case a user indexed a function which is not actually deterministic. This could lead to logical corruption. So, it seems that Postgres will always navigate from the index to the table and not the opposite. And that is possible in Postgres because they don’t implement DELETE and UPDATE physically. They only do an INSERT with the new version of the whole row and mark the old version as stale.

But Oracle is far more complex than that. Critical OLTP applications must be able to update in-place, without row movement, or the index maintenance would kill the performance and the redo generation would be orders of magnitude larger. An update is done in-place and the updated column must maintain the related index. And deletes will also delete all the index entries. So Oracle needs to navigate from the table to the index. This is done with a lookup of the value in the index structure. The value is either a value stored in the table row, or derived with a deterministic function.

So what happens if I declare a function deterministic when it is not?

Here is a table:

SQL> create table DEMO (n not null) pctfree 99 as select rownum from xmltable('1 to 5');
 
Table created.

And here is a function which returns a random number. But I declare it deterministic:

SQL> create or replace function DEMO_FUNCTION(n number) return number deterministic as
2 begin
3 return dbms_random.value;
4 end;
5 /
 
Function created.

I declare an index on it:

SQL> create index DEMO_FUNCTION on DEMO(DEMO_FUNCTION(n));
 
Index created.

Oracle cannot verify if the function is deterministic or not, and trusts me.

A full table scan re-calculates the value each time, and does not raise any error.

SQL> select /*+ full(DEMO) */ DEMO_FUNCTION(n),rowid,n from DEMO where DEMO_FUNCTION(n) is not null;
 
DEMO_FUNCTION(N) ROWID N
---------------- ------------------ ----------
.743393494 AAAR5kAAMAAABXbAAA 1
.075404174 AAAR5kAAMAAABXbAAB 2
.601606733 AAAR5kAAMAAABXbAAC 3
.716335239 AAAR5kAAMAAABXbAAD 4
.253810651 AAAR5kAAMAAABXbAAE 5

If you run it several times, you will see different values.

An index access will always show the same values because they come from the index:

SQL> select /*+ index(DEMO) */ DEMO_FUNCTION(n),rowid,n from DEMO where DEMO_FUNCTION(n) is not null;
 
DEMO_FUNCTION(N) ROWID N
---------------- ------------------ ----------
.135108581 AAAR5kAAMAAABXbAAE 5
.440540027 AAAR5kAAMAAABXbAAD 4
.480565266 AAAR5kAAMAAABXbAAA 1
.546056579 AAAR5kAAMAAABXbAAB 2
.713949559 AAAR5kAAMAAABXbAAC 3

Oracle could have run the function on the value from the table, compared it with the value from the index, and then raised an error. But that would be more expensive.

But then, what happens if I delete a row? Oracle will try to find the index entry by running the function, but then the value is not found in the index:

SQL> delete from DEMO where n=3 and DEMO_FUNCTION(n) is not null;
delete from DEMO where n=3 and DEMO_FUNCTION(n) is not null
*
ERROR at line 1:
ORA-08102: index key not found, obj# 73317, file 12, block 5603 (2)

This is a logical corruption caused by the bug in the function which was declared deterministic but is not. Verifying the deterministic claim would require running the function several times, and even that would not detect values that change after days. It is the developer’s responsibility to tell the truth. This was just a test. If you are in this case, make the index unusable and fix the function before rebuilding it.

 


Using WebLogic 12C RESTful management for monitoring WebLogic Domains

Tue, 2017-09-26 01:08

WebLogic 12.2.1 provides a new RESTful management interface with full access to all WebLogic Server resources. This interface offers an alternative to WLST scripting or JMX development for the management and monitoring of WebLogic domains.

The rest of this blog provides a few examples demonstrating the simplicity and efficiency of these REST requests.

WebLogic Server State
When checking a WebLogic Server state, we are interested in the state and in the “ActivationTime”, which shows the last time the server was started. A low value can indicate that the server was restarted recently, possibly automatically by the node manager after a crash.
The following RESTful URL requests the state of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1?fields=state,activationTime&links=none

Output:

{
"state": "RUNNING",
"activationTime": 1470982524188
}
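These URLs can be called with any HTTP client. A minimal sketch with curl could look like this (the credentials are placeholders, and -k is only needed if the server uses a self-signed demo certificate):

# Query the state of server1; the REST management interface uses HTTP basic authentication
curl -k -u weblogic:welcome1 -H "Accept: application/json" \
  "https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1?fields=state,activationTime&links=none"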

The following RESTful URL requests the state of all WebLogic Servers in a domain:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes?fields=name,state,activationTime&links=none

Output:

{
"name": "AdminServer",
"state": "RUNNING",
"activationTime": 1473234973467
},
{
"name": "server2",
"state": "RUNNING",
"activationTime": 1473235510682
},
{
"name": "server1",
"state": "RUNNING",
"activationTime": 1473235506258
}

Get JVM Key Metrics
Monitoring the memory usage provides a good view of how the application behaves regarding memory consumption. Even if memory management is a JVM task, the WebLogic Server MBeans can be queried for some heap usage information.
The following RESTful URL requests the JVM key metrics of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/JVMRuntime?links=none&fields=heapSizeCurrent,heapFreeCurrent,heapFreePercent,heapSizeMax

Output:

{
"heapSizeCurrent": 259588096,
"heapFreeCurrent": 101962840,
"heapSizeMax": 518979584,
"heapFreePercent": 72
}

Get WebLogic Threads key metrics
When WebLogic opens up too many threads to service the load, there is a decrease in performance due to the resource (CPU, memory) usage.

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/threadPoolRuntime?links=none&fields=executeThreadIdleCount,hoggingThreadCount,pendingUserRequestCount,completedRequestCount,throughput,healthState

Output:

{
"executeThreadIdleCount": 0,
"completedRequestCount": 4652,
"pendingUserRequestCount": 0,
"throughput": 1.999000499750125,
"hoggingThreadCount": 0,
"healthState": {
"state": "ok",
"subsystemName": null,
"partitionName": null,
"symptoms": []
}
}

Get JDBC Runtime Metrics
“Active Connection Current Count”, “Current Connection High Count” and “Waiting for Connection High Count” allow you to validate that you have the correct amount of resources available to service the clients’ needs; they are also helpful to determine whether you need to increase or decrease the pool size. “Wait Seconds High Count”, “Waiting for Connection Failure Total” and “Connection Delay Times” can be used to determine DB responsiveness and how the clients are impacted by the connection pool size.

https://host01:7002/management/weblogic/12.2.1.0/domainRuntime/serverRuntimes/server1/JDBCServiceRuntime/JDBCDataSourceRuntimeMBeans/myDS?links=none&fields=name,activeConnectionsCurrentCount,activeConnectionsHighCount,waitingForConnectionHighCount,waitSecondsHighCount,waitingForConnectionFailure

Output:

{
"name": “myDS",
"state": "Running",
"activeConnectionsCurrentCount": 4,
"activeConnectionsHighCount": 8,
"waitingForConnectionFailureTotal": 0,
"waitingForConnectionHighCount": 0,
"waitSecondsHighCount": 0,
}

Get Application Runtime Metrics
We can fetch some useful information such as “how many sessions were connected to the application” and “how many concurrent sessions are open”.

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/applicationRuntimes/SimpleAuctionWebAppDb/componentRuntimes?fields=openSessionsCurrentCount,sessionsOpenedTotalCount,openSessionsHighCount&links=none

Output:

{
"openSessionsCurrentCount": 12,
"sessionsOpenedTotalCount": 255,
"openSessionsHighCount": 15
}

The WebLogic REST management interface allows direct access to the WebLogic MBeans without any additional resource cost. Monitoring tools can only benefit from this WebLogic RESTful interface. As the WebLogic RESTful requests used for monitoring are simple, the Java plugin for the dbi monitoring tool was developed in two days.

 


Am I a DBA 3.0 or just an SQL*DBA?

Sun, 2017-09-24 13:47

There are currently a lot of new buzzwords and re-namings which suggest that our DBA role is changing, most of them escorted by a #cloud hashtag. Oracle Technology Network is now called Oracle Developer Community. Larry Ellison announced the database that does not need to be operated by humans. And people are talking about the death of the DBA, about the future of the DBA, about DBA 3.0,…

Those are all generalizations and universal statements, and I don’t like generalizations. There is not only one type of DBA role. The DBA role in big US companies (where most of those claims come from) is very different from the DBA role in medium-sized European companies (where I do most of my work). The only thing I like about generalizations is that a simple counterexample is sufficient to prove that the universal statement is wrong. And I’ll take the example I know best: mine.

CaptureEM
What do you call the DBA role? Is it only server management, or database management? Are you a Dev DBA or an Ops DBA? And when you go multitenant, will you become a CDB DBA or a PDB DBA? And on Engineered Systems, are you still a database administrator or a Database Machine administrator?

So here is my experience. Let’s flashback to sysdate-8000.
6580a20eb9faaba67ff143dcd7bfcbdc
My first job with an Oracle Database was in 1994. I was working at Alcatel in Paris. The IT was typical of big companies at that time: all IBM, databases were DB2 on the mainframe and applications were developed through a multi-year waterfall cycle from specifications to production. But I was not working there. I was in the accounting department, which had its own little IT with some small Borland Paradox applications. This IT department was one senior person, from whom I’ve learned everything, and me, a junior developer doing some Borland Paradox things. When the need for a new application came up, the idea was to build a prototype in this ‘rebel’ IT service rather than wait for the whole development cycle managed by the official IT department.

mainpictWe asked SUN to lend us a workstation. We asked Oracle to lend us Oracle 7.1 database. That was not difficult. Both of them were happy to try to come into this all-IBM company by another way. And Borland had a new product: Delphi so this is where I started to build the prototype. I had everything to learn there: I had never installed a Unix system, I had never installed a database, I even never configured a TCP/IP network. But with the books (no internet then) and the help of my manager (did I say I owe him everything?) we got the environment ready within one month.

aba590f96f7b8138deb71fe28526fb93Today we are talking about Cloud PaaS as the only way to get quickly an environment to start a new development project. The marketing explains that you can get the environment with a few clicks and operational one hour later. But in real life, I know several projects where the environment is not ready after one month, for different reasons (time to choose which service, evaluate the cost, set-it up). Remember that in my case, 23 years ago, it took one or two months but I had a full server, enough storage, available 24/7, with the workstation on my desk. And all that for free.

CaptureDelphiSo I started to develop the application. The business users were there in next room. A short meeting, a small doc, and the conception of the first prototype was ready. Development with Delphi was really quick (remember RAD – Rapid Application Development?) and as soon as I had a usable prototype, one user had access to it, giving me their feedback for future evolutions. We have built something very clever: easy to evolve, fit to business needs, with good performance, and easy to deploy. It has been replaced years later by an application provided by the IT department, but our application was used a the specification.

So, what was my role here? I was clearly a developer and not a DBA. But I was already designing a system, installing a server, creating a database, modeling the data and analyzing performance. When interacting with the database, I was just a DBA doing some SQL. If I wanted to invent new words myself, I would call that an SQL*DBA, after the name of the tool that was being replaced at that time by svrmgrl for the DBA stuff related to server management. All that was later consolidated into the same tools: sqlplus does both the DBA and developer stuff, Enterprise Manager does both, SQL Developer does both…

Over the following 20 years, I evolved into a DBA. I’ve learned new technologies each time, but I don’t think that my DBA role has changed. I’ve done BI (called ‘infocenter’ at that time and ‘datawarehouse’ later) with Business Objects, DB2 and Data Propagator (yes, logical replication, and that was in 1996). All this was designed in cooperation with the business end-users. In 2001 I managed terabyte databases full of numbers, doing what we call Big Data today. All maintenance tasks (capacity planning, upgrade, recovery) were done very close to the business use of this data. I administered telecom databases at a time when mobile phone providers came up with a new idea every month to be implemented. We would call that ‘agile’ today. I’ve set up databases for quick cloning with transportable tablespaces. I’ve set up continuous integration tests of databases based on flashback technologies and workspace manager. I’ve configured parallel query and XMLDB to do analytics on unstructured data, with better results than a MapReduce PoC. Technology evolves, names are modernized, but my DBA role is the same and I still use SQL.

The latest thing I’ve read about this changing role is Penny Avril’s interview, and it is a very good explanation of all those announced changes. I totally agree with all of that. Except that I see no change in my role there. Let’s take this: “DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.”
I have always known which business users and which business cases are interacting with the database. My resume has always mentioned the business area of each project. I’ve always interacted with end-users and developers for any database I administered, whether they are DEV, TEST or PROD databases. You cannot set up a backup/recovery plan without knowing the data and the business requirements. You cannot upgrade or migrate without interacting with users to test and validate the migration. You cannot address performance without interacting with users and developers. You cannot size the SGA without knowing how the data is used at the different times of the day or the week. You cannot keep a database healthy without knowing how it is used. You cannot build an infrastructure architecture without the support of the business for the software costs.

My DBA job is not the mechanics of keeping processes running on a server. That would be a Database System administrator. But we are talking about a DataBase Administrator. The major part of my job, and the main reason why I like it, is the human interaction around the database. You talk to server/storage/network administrators, you talk to all kinds of business users, you talk to developers, you talk to managers, you talk to vendors,… You have to understand OS schedulers, network security, and mutexes, and you also have to understand banking, retail and hospital data workflows. So I don’t worry about the self-driven/no-human-labor part of the DBA role that may be moved to the cloud provider. Those are boring things that we automated a long time ago. For example, at dbi services we have included all this automation in the DMK. And this goes further for open source databases with the OpenDB Appliance. Do you think a consulting company would provide this for free to their customers if this automation took the whole DBA job away? The boring and recurring things are automated to avoid errors, and all the intelligent stuff is provided by experienced human DBAs talking SQL with the system. As always.

 

Cet article Am I a DBA 3.0 or just an SQL*DBA? est apparu en premier sur Blog dbi services.

Wrong result with multitenant, dba_constraints and current_schema

Sat, 2017-09-23 15:03

Multitenant architecture is not such a big change and this is why I recommend it when you start a project in 12c or if you upgrade to 12.2 – of course after thoroughly testing your application. However, there is a point where you may encounter problems with dictionary queries, because internally it really is a big change. The dictionary separation has several side effects. You should carefully test the queries you run on the dictionary views to get metadata. Here is an example of a bug I recently encountered.

This happened with a combination of things you should not do very often, and not in a critical use case: querying the dictionary for constraints owned by your current schema, when it is different from the user you connect with.

I create two users: USER1 and USER2
SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> grant dba to USER1 identified by USER1 container=current;
Grant succeeded.
SQL> grant dba to USER2 identified by USER2 container=current;
Grant succeeded.

USER1 owns a table which has a constraint:

SQL> connect USER1/USER1@//localhost/PDB1
Connected.
SQL> create table DEMO(dummy constraint pk primary key) as select * from dual;
Table DEMO created.

USER2 can access the table either by prefixing it with USER1 or by setting the current_schema to USER1:

SQL> connect USER2/USER2@//localhost/PDB1
Connected.
SQL> alter session set current_schema=USER1;
Session altered.

Bug

Ok, now imagine you want to read constraint metadata for the current schema you have set:

SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 /
 
no rows selected

No rows selected is a wrong result here because my current_schema is USER1 and USER1 has constraints:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = 'USER1'
4 /
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

So, where’s the problem? Let’s have a look at the execution plan:

SQL_ID 2fghqwz1cktyf, child number 0
-------------------------------------
select sys_context('USERENV','CURRENT_SCHEMA'), a.* from
sys.dba_constraints a where owner =
sys_context('USERENV','CURRENT_SCHEMA')
 
Plan hash value: 1258862619
 
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.32 | 2656 |
| 1 | PARTITION LIST ALL | | 1 | 2 | 0 |00:00:00.32 | 2656 |
|* 2 | EXTENDED DATA LINK FULL| INT$INT$DBA_CONSTRAINTS | 2 | 2 | 0 |00:00:00.32 | 2656 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR
("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER(SY
S_CONTEXT('USERENV','CON_ID')))) AND "OWNER"=SYS_CONTEXT('USERENV','CURRENT_SCHEMA')))

I am in 12.2, and DBA_CONSTRAINTS reads from INT$DBA_CONSTRAINTS, which reads from INT$INT$DBA_CONSTRAINTS. In multitenant, this view is an extended data view, so it reads from CDB$ROOT and from the current container. This is why we see EXTENDED DATA LINK FULL in the execution plan, and up to this point the predicates are correct: “OWNER”=SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’)
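If you want to follow this view chain yourself, here is a minimal sketch (only standard dictionary views are used; the connect string and password are simply the ones from the example above):

sqlplus -s sys/oracle@//localhost/PDB1 as sysdba <<'EOF'
set long 100000 pagesize 0 linesize 200
-- show the definition of the public view and of its first internal layer
select view_name, text from dba_views
 where owner='SYS' and view_name in ('DBA_CONSTRAINTS','INT$DBA_CONSTRAINTS');
EOF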

The execution through the data link is run on each container by parallel processes: they switch to the container and run the underlying query on the view. But when I look at the SQL trace of the parallel process running the query on my PDB, I can see that in the predicate on OWNER the SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’) has been replaced with a hardcoded value:

SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) OPT_PARAM('_ENABLE_VIEW_PDB', 'FALSE') */ OWNER,CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,OBJECT_TYPE#,SEARCH_CONDITION,SEARCH_CONDITION_VC,R_OWNER,R_CONSTRAINT_NAME,DELETE_RULE,STATUS,DEFERRABLE,DEFERRED,VALIDATED,GENERATED,BAD,RELY,LAST_CHANGE,INDEX_OWNER,INDEX_NAME,INVALID,VIEW_RELATED,ORIGIN_CON_ID FROM NO_COMMON_DATA(SYS."INT$INT$DBA_CONSTRAINTS") "INT$INT$DBA_CONSTRAINTS" WHERE ("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR "INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER('3')) AND "INT$INT$DBA_CONSTRAINTS"."OWNER"=q'"USER2"'

And unfortunately, this value is not the right one: USER2 is my connected user, but not the CURRENT_SCHEMA that I have set. In the same trace, I can see where this value comes from:

select 'q''"' || SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || '"''' from sys.dual

but it seems that the current_schema was lost through the call to the parallel process and the PDB switch to my container.

Workaround

The problem is easy to work around. This works:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = ( select sys_context('USERENV','CURRENT_SCHEMA') from dual )
4 /
 
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

And anyway, it is better to get the current schema beforehand and pass it as a bind variable. Bind variables are passed correctly through data link queries:


SQL> variable v varchar2(30)
SQL> exec select sys_context('USERENV','CURRENT_SCHEMA') into :v from dual;
 
PL/SQL procedure successfully completed.
 
SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 --where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 where owner = :v
5 /

So what?

The multitenant architecture is a real challenge for dictionary views. The dictionary is separated: system metadata in CDB$ROOT and user metadata in the PDB. But, because of compatibility with the non-CDB architecture, the dictionary views must show both of them, and this is where it becomes complex: what was separated on purpose now has to be merged. And complexity is subject to bugs. If you want to get an idea, have a look at dcore.sql in ORACLE_HOME/rdbms/admin and compare the 11g version with the 12c ones, with all the evolution across 12.1.0.1, 12.1.0.2 and 12.2.0.1.
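A minimal sketch of such a comparison, assuming you still have an 11g home and a 12c home installed somewhere (the two paths below are placeholders):

diff /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/dcore.sql \
     /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/dcore.sql | less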

 

Cet article Wrong result with multitenant, dba_constraints and current_schema est apparu en premier sur Blog dbi services.

Documentum – RCS 7.3 – Issue with projections

Sat, 2017-09-23 11:42

In the last two blogs I posted, I mentioned that I was working on silent installations with CS 7.3 (in HA). The issue I will describe in this blog is linked to the HA installation. I faced this issue using the silent installation, but I’m guessing it should also occur using the GUI installation.

 

So basically, on the first Content Server, I installed a Global Registry and a few other docbases. For the creation of the docbases, there was obviously a docbroker installed. This is the global docbroker, which is targeted by default by all docbases through the server.ini. In addition to that, I installed two other docbrokers for specific docbases. The purpose here is to have docbroker N°1 only for the GR + DocBase1 and docbroker N°2 only for the GR + DocBase2 + DocBase3. So I started to configure the projections to achieve that goal.

 

I will use the GR below. If you followed what I mentioned above, I should see – at the end – two projections in the docbase (plus the default one from the server.ini). So let’s configure the projections (with only one Content Server installed):

[dmadmin@content-server-01 ~]$ iapi GR_DocBase
Please enter a user (dmadmin):
Please enter password for dmadmin:


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025


Connecting to Server using docbase GR_DocBase
[DM_SESSION_I_SESSION_START]info:  "Session 010f123450003d07 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_server_config
...
3d0f123450000102
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  title                           :
  ...
  projection_targets            []: <none>
  projection_ports              []: <none>
  projection_proxval            []: <none>
  projection_notes              []: <none>
  projection_enable             []: <none>
  ...
  i_vstamp                        : 28

API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
                                                                                                    
(2 rows affected)

API> set,c,l,projection_targets[0]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[0]
SET> 1493
...
OK
API> set,c,l,projection_proxval[0]
SET> 1
...
OK
API> set,c,l,projection_notes[0]
SET> Dedicated Docbroker N°2
...
OK
API> set,c,l,projection_enable[0]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
  projection_ports             [0]: 1493
  projection_proxval           [0]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
  projection_enable            [0]: T
  ...
  i_vstamp                        : 29

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

API> set,c,l,projection_targets[1]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[1]
SET> 1491
...
OK
API> set,c,l,projection_proxval[1]
SET> 1
...
OK
API> set,c,l,projection_notes[1]
SET> Dedicated Docbroker N°1
...
OK
API> set,c,l,projection_enable[1]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
  projection_ports             [0]: 1493
                               [1]: 1491
  projection_proxval           [0]: 1
                               [1]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
  projection_enable            [0]: T
                               [1]: T
  ...
  i_vstamp                        : 30

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

 

Up to this point, everything sounds good and everything is working. If you try to access DA, you will see two projections for the Global Registry that are the two we defined above:

Projection1

 

So this is working. Now what is the goal of this blog? Well, it’s not what is above… So let’s talk about the issue now! The next step in this environment was to set it up in High Availability. As mentioned in my last blog, I faced some issues with the Remote Content Server (RCS) installation, but I was finally able to install a Content-File Server. When I finished the installation of GR_DocBase on the RCS, I wanted to set up the projections between the CS1 and CS2… For that, I opened DA again and went to the same screen that you can see above (in the screenshot).

 

What I was expecting to see was the exact same thing as above, meaning two projections of the GR_DocBase with the two docbrokers installed on the CS1. What I saw was a little bit different…:

Projection2

 

I can assure you that between the two screenshots above, the only thing I did was to install the CFS for GR_DocBase on the RCS… So why is this screen not working anymore? What’s the issue? “An error has occurred. 1 >= 1”. Yeah sure, what else? ;)

 

I checked the Documentum Administrator logs and found the following stack trace:

05:45:07,980 ERROR [[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'] com.documentum.web.common.Trace - 1 >= 1
java.lang.ArrayIndexOutOfBoundsException: 1 >= 1
        at java.util.Vector.elementAt(Vector.java:474)
        at com.documentum.fc.common.DfList.get(DfList.java:427)
        at com.documentum.fc.common.DfList.getString(DfList.java:561)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.loadConnectionBrokerProjections(ServerConnectionBrokerList.java:78)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.onInit(ServerConnectionBrokerList.java:51)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1604)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1489)
        at com.documentum.web.form.FormProcessor.fireOnInitEvent(FormProcessor.java:1154)
        at com.documentum.web.form.ControlTag.fireFormOnInitEvent(ControlTag.java:794)
        at com.documentum.web.formext.control.component.ComponentIncludeTag.renderEnd(ComponentIncludeTag.java:135)
        at com.documentum.web.form.ControlTag.doEndTag(ControlTag.java:928)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jsp__tag25(__propertysheetwizardcontainer.java:1501)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jspService(__propertysheetwizardcontainer.java:368)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.ResponseHeaderControlFilter.doFilter(ResponseHeaderControlFilter.java:351)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.CompressionFilter.doFilter(CompressionFilter.java:96)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.env.WDKController.processRequest(WDKController.java:144)
        at com.documentum.web.env.WDKController.doFilter(WDKController.java:131)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3683)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3649)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2433)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2281)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2259)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1691)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1651)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)

 

As mentioned in the stack trace (which is more useful than the error message), the problem might come from the docbroker projections… But we checked them just before installing the Remote docbase and they were OK… With CS 7.3, the RCS installation automatically adds a projection on the Primary Content Server dm_server_config object to point to the Remote docbroker, and this is what causes this error…

 

So I checked the projections on the GR_DocBase Primary dm_server_config object to see what the issue was, and I got this:

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote.
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 34

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote.      1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

So this looks good, doesn’t it? And yet, through DA, it shows this ArrayIndexOutOfBoundsException… I tried a lot of things, but the only way I could solve this issue was to remove the projection that the RCS installer automatically adds and then add it again… To simplify the process, I included that in our silent scripts so it is done automatically after the creation of the CFS. Just below is a very small extract of the code I added to our silent scripts (shell) in order to remove the projection added by the installer (some variables are obviously defined before this section of code…):

...
echo "  **  "
echo "  **  HA Docbase Configuration - projections"
echo "  **  "
dql_select_docbase="${script_folder}/rcs_select_${docbase}"
dql_update_docbase="${script_folder}/rcs_update_${docbase}"
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_select_docbase}.log 2>&1
SELECT 'projection_id=', i_position*-1-1 as projection_id FROM dm_server_config WHERE LOWER(object_name) like LOWER('${docbase}') AND projection_notes='Projecting primary to remote.' AND LOWER(projection_targets) like LOWER('%`hostname -s`%') enable(ROW_BASED)
go
EOF
if [[ ${?} != 0 ]]; then
  echo "ERROR - There was a problem while gathering the index for ${docbase}. Please see the file '${dql_select_docbase}.log' for error. Exiting."
  exit
fi
index=`grep "^projection_id=[[:space:]]" ${dql_select_docbase}.log | sed 's,^.*=[[:space:]]*\([0-9]*\)[[:space:]]*$,\1,'`
if [[ ${index} == "" || ${index} == '""' ]]; then
  echo "INFO - There is no existing projections from the primary to the remote."
else
  echo -e "INFO - The index of 'dm_server_config.projection_targets' is:  \t${index}"
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_update_docbase}.log 2>&1
UPDATE dm_server_config object REMOVE projection_targets[${index}], REMOVE projection_ports[${index}], REMOVE projection_proxval[${index}], REMOVE projection_notes[${index}], REMOVE projection_enable[${index}] WHERE LOWER(object_name) like LOWER('${docbase}') enable(ROW_BASED)
go
EOF
  nb_error=`grep "_E_" ${dql_update_docbase}.log | wc -l`
  cat "${dql_update_docbase}.log"
  echo ""
  if [[ ${nb_error} != 0 ]]; then
    echo "ERROR - There was a problem while updating the object for ${docbase}. Please see above. Exiting."
    exit
  fi
fi
...

 

This small section of code just shows how this kind of thing can be automated. I’m executing a first DQL to gather the index of the projection added by the RCS installer (with enable(ROW_BASED), the i_position of a repeating attribute is returned as a negative number starting at -1, hence the i_position*-1-1 to turn it into a zero-based index). This is the correct entry because the note added by the RCS is always “Projecting primary to remote.” and the target host is obviously the RCS… To confirm that DA is now working again, you can reload the docbroker projection screen; for me it was indeed working again.

 

So the next step is to add the projection again, but this time without the “.” at the end of the note (to differentiate them), because we still need this projection for the CS2 to be aware of the docbases on the CS1. You can either do it programmatically using iapi/idql or with DA directly. After adding this third projection again, I got the following in the docbase (notice that the only difference is the missing “.” in the note):

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 36

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote       1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

The result is the same, but I was finally able to see the three projections through DA… I don’t know exactly what caused this error, because I simply re-added the same thing with a slightly different note, but I’m sure it was caused by the RCS installation…

Projection3

 

Yet another interesting behaviour of Documentum… So many ways to have fun! :)

 

 

Cet article Documentum – RCS 7.3 – Issue with projections est apparu en premier sur Blog dbi services.

Documentum – RCS 7.3 – dm_server_config cannot contain dash

Sat, 2017-09-23 09:54

As mentioned in this blog, I was recently installing a new 7.3 P05 environment. Actually, this environment was in HA and it was the first silent installation of a Remote Content Server for me. I have already written a lot of blogs related to issues with CS 7.3, but since this was my first RCS 7.3 installation, I was kind of expecting to face some issues… I wasn’t disappointed!

 

So let’s start with the first issue I faced during the creation of the Content-File Server (CFS) on the Remote Content Server. As always, I prepared the properties file with all the needed information for an RCS (the properties file isn’t the same as for a normal docbase creation on the Primary CS) and then launched the process to install the CFS in silent mode for my Global Registry. After only 30 seconds or so, the installation was “done”, so I knew something wasn’t right… When you install a CFS, if there is no docbroker present on the host, it will install one before creating the docbase… Please note that below, the docbroker had already been created using a previous silent script, so here I’m just trying to create the CFS.

 

The problem with the silent installation is that if something goes wrong, you have absolutely no feedback… Using the GUI, you always have a message (more or less meaningful) printed on the screen, so you know something wasn’t right and you can start checking why. So as a best practice, I would recommend always checking the installation log file for errors when using the silent installation.
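A minimal sketch of such a check, run right after the silent installer returns (the log location is the one used just below):

# print any error line from the installation log, or confirm that there is none
grep -E "ERROR|FATAL" $DM_HOME/install/logs/install.log \
  || echo "No ERROR/FATAL entries found in the installation log"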

 

So what is printed in the installation log file? (don’t look at the first WARN message, it’s because there is no global registry locally, yet)

[dmadmin@content-server-02 ~]$ cd $DM_HOME/install/logs
[dmadmin@content-server-02 logs]$ cat install.log
10:57:51,620  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
10:57:51,640  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsInitializeImportantServerVariables - The installer is gathering system configuration information.
10:57:51,651  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Start to verify the password
10:57:52,088  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/104190.tmp/dfc.keystore
10:57:52,445  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair - generated RSA (2,048-bit strength) mutiformat key pair in 334 ms
10:57:52,473  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential - certificate created for DFC <CN=dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a,O=EMC,OU=Documentum> valid from Thu Sep 7 10:52:52 UTC 2017 to Sun Sep 5 10:57:52 UTC 2027:

10:57:52,474  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/104190.tmp/dfc.keystore
10:57:52,485  INFO [main] com.documentum.fc.client.security.impl.InitializeKeystoreForDfc - [DFC_SECURITY_IDENTITY_INITIALIZED] Initialized new identity in keystore, DFC alias=dfc, identity=dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a
10:57:52,486  INFO [main] com.documentum.fc.client.security.impl.AuthenticationMgrForDfc - identity for authentication is dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a
10:57:52,490  INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 7.3.0040.0025
10:57:52,504  INFO [Timer-2] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
10:57:52,512  WARN [main] com.documentum.fc.client.security.internal.RegistrationMgr - [DFC_SECURITY_GR_PUBLICATION_FAILED] Publication of DFC instance with global registry failed
DfException:: THREAD: main; MSG: [DFC_BOF_GLOBAL_REGISTRY_NOT_CONFIGURED] A global registry is not configured; ERRORCODE: ff; NEXT: null
        at com.documentum.fc.client.security.impl.DfcIdentityPublisher.<init>(DfcIdentityPublisher.java:51)
        ...
        at com.zerog.lax.LAX.main(Unknown Source)
10:57:52,955  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:57:52,984  INFO [main] com.documentum.fc.client.security.internal.AuthenticationMgr - new identity bundle <dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a  1602102952       content-server-02         hudL6VVqhfy1whQlqbqfn3xZoOlola5zscimwba4o0MDQbigdzAOi+l54BHFvqc/3auMipaihywp65a5bR4vqvzP55CzzuFjSD+UZa3vJOGiwpKlctdmg45Kl0aOTwrfYH5jEupQ79oUVNY1cNQmAxn3odYFwguvaEp3VxezAbO+cPh8svnKjhvZJm/DFVrmdLnGPu+PHf3jWHbYSfhc+TWDLPqk8dlFTzJTjLnGLnGPu+PHf3jWHbYSfhc+TWDLPqk/13OmxsnXcSUL59QbhU+BBW2/4lsXvSvwxP/8A+/GKsGPSLoTFZJ2nlbJnq0TX5XWPTAG7Emgjeil35cbfax6D/rUQp8kHWPlLWtPvpOaKQ==>
10:57:54,477  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:57:54,526  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Server config object name is invalid
10:57:54,526 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Server config name only can contain characters: A-Z, a-z, 0-9, underscore(_) and dot(.)
com.documentum.install.shared.common.error.DiException: Server config name only can contain characters: A-Z, a-z, 0-9, underscore(_) and dot(.)
        at com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation.setup(DiWASilentRemoteServerValidation.java:82)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:75)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
10:57:54,527  INFO [main]  - The INSTALLER_UI value is SILENT
10:57:54,527  INFO [main]  - The KEEP_TEMP_FILE value is true
10:57:54,527  INFO [main]  - The common.installOwner.password value is ******
10:57:54,527  INFO [main]  - The SERVER.SECURE.ROOT_PASSWORD value is ******
10:57:54,527  INFO [main]  - The common.upgrade.aek.lockbox value is null
10:57:54,527  INFO [main]  - The common.old.aek.passphrase.password value is null
10:57:54,527  INFO [main]  - The common.aek.algorithm value is AES_256_CBC
10:57:54,527  INFO [main]  - The common.aek.passphrase.password value is ******
10:57:54,527  INFO [main]  - The common.aek.key.name value is CSaek
10:57:54,527  INFO [main]  - The common.use.existing.aek.lockbox value is null
10:57:54,528  INFO [main]  - The SERVER.ENABLE_LOCKBOX value is true
10:57:54,528  INFO [main]  - The SERVER.LOCKBOX_FILE_NAME value is lockbox.lb
10:57:54,528  INFO [main]  - The SERVER.LOCKBOX_PASSPHRASE.PASSWORD value is ******
10:57:54,528  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
10:57:54,528  INFO [main]  - The SERVER.DOCBROKER_ACTION value is null
10:57:54,528  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_HOST value is content-server-01
10:57:54,528  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_PORT value is 1489
10:57:54,528  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content-server-02
10:57:54,528  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
10:57:54,528  INFO [main]  - The SERVER.FQDN value is content-server-02
10:57:54,528  INFO [main]  - The SERVER.DOCBASE_NAME value is GR_DocBase
10:57:54,528  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is GR_DocBase
10:57:54,528  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
10:57:54,528  INFO [main]  - The SERVER.SECURE.REPOSITORY_PASSWORD value is ******
10:57:54,528  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
10:57:54,529  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
10:57:54,529  INFO [main]  - The SERVER.REPOSITORY_HOSTNAME value is content-server-01
10:57:54,529  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
10:57:54,529  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_NAME value is
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_PORT value is
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_CONNECT_MODE value is null
10:57:54,529  INFO [main]  - The SERVER.USE_CERTIFICATES value is false
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_FILE_NAME value is null
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME value is null
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_CIPHER_LIST value is null
10:57:54,529  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE value is null
10:57:54,529  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE_PASSWORD value is ******
10:57:54,529  INFO [main]  - The SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE value is null
10:57:54,529  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
10:57:54,529  INFO [main]  - The SERVER.DOCUMENTUM_DATA value is $DOCUMENTUM/data
10:57:54,530  INFO [main]  - The SERVER.DOCUMENTUM_SHARE value is $DOCUMENTUM/share
10:57:54,530  INFO [main]  - The CFS_SERVER_CONFIG_NAME value is content-server-02_GR_DocBase
10:57:54,530  INFO [main]  - The SERVER.DOCBASE_SERVICE_NAME value is GR_DocBase
10:57:54,530  INFO [main]  - The CLIENT_CERTIFICATE value is null
10:57:54,530  INFO [main]  - The RKM_PASSWORD value is ******
10:57:54,530  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED value is true
10:57:54,530  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is 1489
10:57:54,530  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is content-server-01
10:57:54,530  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is GR_DocBase
10:57:54,530  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is dm_bof_registry
10:57:54,530  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
10:57:54,530  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
10:57:54,530  INFO [main]  - The SERVER.COMPONENT_NAME value is null
10:57:54,530  INFO [main]  - The SERVER.DOCBASE_NAME value is GR_DocBase
10:57:54,531  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
10:57:54,531  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content-server-02
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
10:57:54,531  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is GR_DocBase
10:57:54,531  INFO [main]  - The SERVER.DOCBROKER_NAME value is
10:57:54,531  INFO [main]  - The SERVER.DOCBROKER_PORT value is
10:57:54,531  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
10:57:54,531  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED_KEY value is null
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is 1489
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is content-server-01
10:57:54,531  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is GR_DocBase
10:57:54,532  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is dm_bof_registry
10:57:54,532  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
10:57:54,532  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
10:57:54,532  INFO [main]  - The SERVER.COMPONENT_NAME value is null
10:57:54,532  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is GR_DocBase
10:57:54,532  INFO [main]  - The SERVER.DOCBASE_NAME value is GR_DocBase
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
10:57:54,532  INFO [main]  - The env PATH value is: /usr/xpg4/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
[dmadmin@content-server-02 logs]$

 

As you can see above, the installation fails because the properties aren’t “correct”. More specifically, the installation fails because the dm_server_config name provided doesn’t match the validation… Starting with the CS 7.3 GA release, the Content Server team introduced a new validation which forces the dm_server_config name to contain only letters (lower/upper case), numbers, underscores or dots. If you are familiar with Remote Content Servers, you should know that by default, the CFS installation will create a new dm_server_config object with a name of the form “<hostname>_<service_name>”. So if you are installing a CFS for a docbase “GR_DocBase” with a service name set to “GR_DocBase” (it doesn’t have to be the same value as the docbase name) on a Content Server with a hostname equal to “content-server-02”, then the dm_server_config will, by default, have the following name: content-server-02_GR_DocBase.

 

So if you compare this default value with the one from the log file, you will see that it is the same… Because I knew which default value the GUI installer would have chosen, I put this value in the silent properties file so that we keep our naming conventions on all our environments (7.3 or not). So this is the default value that the installer itself would have chosen, and yet the installation fails, all because the hostname contains a simple dash… Funny, isn’t it?
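By the way, this kind of bad surprise can be caught before launching the silent installer with a small pre-check. Here is a minimal sketch (the regex simply mirrors the character set listed in the installer error message; the service name is an assumption):

# default CFS server config name is <hostname>_<service_name>
cfs_name="$(hostname -s)_GR_DocBase"
if [[ "${cfs_name}" =~ ^[A-Za-z0-9_.]+$ ]]; then
  echo "OK: '${cfs_name}' passes the CS 7.3 dm_server_config name validation"
else
  echo "KO: '${cfs_name}' contains characters rejected by the CS 7.3 installer (only A-Z, a-z, 0-9, underscore and dot are accepted)"
fi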

 

Since this is clearly an issue, I opened an SR with OTX to ask them to either:

  • Explain why the dash (‘-‘) isn’t accepted in the dm_server_config name and, since this is the default value, where it is documented that the hostname of the Content Servers cannot contain any dashes
  • Provide a hotfix for this issue and fix it in the next patch as well

 

OTX obviously recognized that this validation is too strict and they are therefore working on providing us with a hotfix (via CS-57533) and implementing a more reliable validation. At the moment, I got a first draft hotfix that only solves the silent installation part (so the issue is still present using the GUI). I’m pretty sure it won’t be hard for OTX to solve that on the GUI side as well and to include the fix in a future patch, but I don’t know which one at the moment!

 

 

Cet article Documentum – RCS 7.3 – dm_server_config cannot contain dash est apparu en premier sur Blog dbi services.

Documentum – CS 7.3 – Issue with dmqdocbroker

Sat, 2017-09-23 09:23

At the beginning of this month, I was building a new environment in 7.3 P05 (using the silent installation, but this isn’t relevant here) and I found a strange behaviour of the dmqdocbroker, so I wanted to share it. You will see below an error message “Input has special characters …”. This is specific to CS 7.3; we opened an SR with EMC/OTX about it some months ago (at the beginning of this year, during the first CS 7.3 tests we performed), but the outcome of this SR was that it shouldn’t have any impact on the proper behaviour of the docbroker, so I didn’t blog about it back then. Now, however, it might very well be linked to the issue I will discuss here.

 

So let’s talk about the issue with the dmqdocbroker. In this new environment, we configured several docbases and several docbrokers so that we can have separated applications. I will write another blog related to the projections (with another issue), so here I’m getting straight to the point. The issue I faced is that the first (default) docbroker was working properly, responding to the ping, listing all docbases, and so on… But when I created two additional docbrokers, these two docbrokers weren’t behaving in the same way…

 

To show that, I can simply execute the ping or getdocbasemap command for each docbroker (for this example, I will use the following ports: first docbroker on 1489/1490, second one on 1491/1492, third one on 1493/1494). We are always using secure communications, which is why you will always see the secure port below in the replies:

[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1489
Successful reply from docbroker at host (content-server-01) on port(1490) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1491 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1491
dmqdocbroker: Error returned from docbroker request:
[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1491" failed

[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1491" failed

[DM_SESSION_E_RPC_ERROR]error:  "Server communication failure"

java.net.ConnectException: Connection refused

[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1493 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1493
dmqdocbroker: Error returned from docbroker request:
[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DM_SESSION_E_RPC_ERROR]error:  "Server communication failure"

java.net.ConnectException: Connection refused

[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$

 

As you can see, the request to the docbroker “:<port>” failed… But wait, why is there only the port here? Why is the hostname not mentioned in the error message? Well, that’s a pretty good question! For an unknown reason, the request to the first docbroker is working properly (ping successful) but the request to the two others isn’t. It’s just as if the hostname were known for the first one but not for the others…

 

So since it is possible to specify which host to target, what happens if you specify it?

[dmadmin@content-server-01 ~]$ echo `hostname -f`
content-server-01
[dmadmin@content-server-01 ~]$ 
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
Successful reply from docbroker at host (content-server-01) on port(1490) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1491 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1491
Successful reply from docbroker at host (content-server-01) on port(1492) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1493 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1493
Successful reply from docbroker at host (content-server-01) on port(1494) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$

 

And here all three docbrokers are replying successfully… And in case you didn’t notice, even the message “Input has special characters …” isn’t displayed here. So for me, this shows that this message might very well be linked to this particular behaviour.
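As a small workaround until the root cause is understood, a wrapper that always passes the target host explicitly can help. Here is a minimal sketch (the function name is mine; the options are the standard dmqdocbroker ones used above):

# to be added to the Installation Owner's profile, for example
dmqping () {
  # usage: dmqping <docbroker_port> [command]   e.g.: dmqping 1493 getdocbasemap
  local port="${1:?usage: dmqping <docbroker_port> [command]}"
  local cmd="${2:-ping}"
  dmqdocbroker -t "$(hostname -f)" -p "${port}" -c "${cmd}"
}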

 

I’m not 100% sure what the issue is (I didn’t get time to open an SR for this yet) but if I had to guess, I would suggest you carefully read the next blogs I will post today, because this issue might be linked to the dash (‘-‘) (<– no, this isn’t a smiley!) in the hostname.

 

I only showed the ping above but I can reproduce the same behaviour with the getdocbasemap for example:

[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1493 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1493
dmqdocbroker: Error returned from docbroker request:
[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DM_SESSION_E_RPC_ERROR]error:  "Server communication failure"

java.net.ConnectException: Connection refused

[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1493 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1493
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content-server-01
Docbroker port            : 1494
Docbroker network address : INET_ADDR: 03 52f 91f62f98 content-server-01 156.174.191.17
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : DocBase1
Docbase id          : 1000002
Docbase description : DocBase1 Repository
Govern docbase      :
Federation name     :
Server version      : 7.3.0050.0039  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : GR_DocBase
Docbase id          : 1000001
Docbase description : GR_DocBase Repository
Govern docbase      :
Federation name     :
Server version      : 7.3.0050.0039  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@content-server-01 ~]$

 

If you don’t see this issue with a hostname that doesn’t contain any dash, please let me know! This would confirm that both issues are linked to that… As mentioned earlier, you will see in the next blog why I think this has something to do with the dash ;).

 

 

Cet article Documentum – CS 7.3 – Issue with dmqdocbroker est apparu en premier sur Blog dbi services.

Be careful when putting the Oracle ADR on xfs, or better 4K sector format drives

Thu, 2017-09-21 07:09

Today, after we did a fresh setup of a Grid Infrastructure cluster (12.1.0.2.170814), we faced two issues reported in the alert.log of the ASM instances (in fact you would see the same in the alert log of any instance in that configuration, but we did not have any other instance up and running at that time):

This:

ORA-00700: soft internal error, arguments: [dbgrfrbf_1], [/disk00/app/grid/diag/asm/+asm/+ASM2/metadata/INC_METER_SUMMARY.ams], [0], [4], [], [], [], [], [], [], [], []
ORA-27072: File I/O error
Linux-x86_64 Error: 22: Invalid argument
Additional information: 4
Additional information: 1
Additional information: 4294967295

… and this:

ERROR: create the ADR schema in the specified ADR Base directory [/disk00/app/grid]
ERROR: The ORA-48178 error is caused by the ORA-48101 error. 
ORA-48101: error encountered when attempting to read a file [block] [/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ADR_INTERNAL.mif] [0]
ORA-27072: File I/O error
Linux-x86_64 Error: 22: Invalid argument
Additional information: 4
Additional information: 1
Additional information: 4294967295

As it turned out, this was an issue with the xfs block size. In the configuration we had, the block size was set to 4096 (this was chosen by default when the file system was created):

$:/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ [+ASM1] xfs_info /disk00
meta-data=/dev/mapper/vg_root-lv_disk00 isize=512    agcount=4, agsize=13107200 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=52428800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=25600, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$:/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ [+ASM1] 

After changing that to 512 all was fine again:

$:/home/ [+ASM1] xfs_info /disk00/
meta-data=/dev/mapper/disk00     isize=256    agcount=4, agsize=104857600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=512    blocks=419430400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=512    blocks=204800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$:/home/ [+ASM1] 

Indeed, this was not so easy because the Cisco UCS systems we used came with internal disks that have a 4K sector size, so we could not even create a new xfs file system with the required settings:

[root@xxxx ~]# mkfs.xfs -s size=512 -m crc=0 -b size=512 /dev/mapper/vg_root-lv_disk00_test

illegal sector size 512; hwsector is 4096

The solution was to get a LUN from the storage and use that instead. Another important note from the data sheet linked above:

NOTE: 4K format drives are supported and qualified as bootable with Cisco UCS Manager Release 3.1(2b) and later versions. However, 4K sector format drives do not support VMware and require UEFI boot.
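
For reference, on a LUN that reports a 512-byte hardware sector size, the same mkfs.xfs invocation goes through. This is a quick sketch only; the device path below is hypothetical:

# Sketch: same options as above, run against a LUN presenting 512-byte sectors
# (the device name is hypothetical)
mkfs.xfs -s size=512 -m crc=0 -b size=512 /dev/mapper/mpath_lun01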

Be careful …

 

Cet article Be careful when putting the Oracle ADR on xfs, or better 4K sector format drives est apparu en premier sur Blog dbi services.

SQL Saturday Toulouse 2017: SQL Server on Linux – Slide deck

Mon, 2017-09-18 03:39

#SQLSatToulouse was definitely a great event. Firstly, a big thanks to the organization team (@Guss and @Fredg_31) as well as all the sponsors and attendees (without whom this kind of event would not be possible).

blog 124 - sqlonlinuxsqlsattoulouse

As promised, here is the slide deck of my session “Introduction to SQL Server on Linux for DBAs”. It was a great moment of sharing for me.

Thanks to all!

 

 

Cet article SQL Saturday Toulouse 2017: SQL Server on Linux – Slide deck est apparu en premier sur Blog dbi services.

DOAG IMC Day 2017

Mon, 2017-09-18 03:04

Screen Shot 2017-09-18 at 09.59.38

This year, before taking part in the DOAG conference in Nuremberg as a speaker – yes, I will present at the DOAG this year – I had the opportunity to attend the DOAG IMC Day in Berlin. It was the SIG IMW, dedicated to infrastructure and the operating system.

It was my first time in Berlin :) Very nice city.

The event was well organized at the DOAG “Dienstleistung”. All the facilities were there.

So let’s get back to the event: there were lots of interesting sessions. At the beginning, Jan-Peter Timmermann presented the DOAG community and how it is split up. The event I took part in was part of the Middleware community.

After that, Franck Burkhardt started with a session about Forms and security. He shared his experience from an implementation of Oracle Access Management for an existing CRM environment that was already using an SSO mechanism with AD. The challenge was to use the Windows Native Authentication provided by OAM with WebGate and Forms applications. The Forms applications were configured to use SSO, and the architectural challenges were presented.

Then Jan-Peter Timmermann gave a useful session about troubleshooting Oracle FMW 12c Forms/Reports. He presented some issues he had encountered and provided information on how to begin the investigation and solve them.

Then there were two interesting sessions around the Cloud: one given by Jan Brosowski and one by Danilo Schmiedel.

Then came the most exciting session of the day for me, around Ansible and WebLogic. The presenter, Grzegorz Lysko, explained what Ansible is and what we can do with it. He gave some samples showing how Fusion Middleware can easily be deployed on multiple hosts, including clustering features and Fusion Middleware components. It was the most interesting for me because I have also developed Ansible scripts to deploy production environments running Oracle Fusion Middleware components, and I was able to compare his approach with mine. The scripts I developed allow deploying an OFM infrastructure, including Reports and Forms, in less than 25 minutes, all based on our defined best practices. They also check whether the required Linux packages are installed and install them if they are missing. They deploy the JDK at the same time, as well as our DMK for WebLogic, a kind of management kit which makes it easy to manage a WebLogic domain and its system components, if any.

The last session covered JVM internals and some known issues we can have with the JVM, mostly OOM issues with the perm space, heap size, swap space and so on.

It was really an interesting day in Berlin with the DOAG. Let’s see them again next year, maybe to present something myself.

 

Cet article DOAG IMC Day 2017 est apparu en premier sur Blog dbi services.

Active Data Guard services in Multitenant

Fri, 2017-09-15 17:36

A database (or the CDB in multitenant) registers its name as the default service. When a standby database is on the same server, or same cluster, you have no problem because this database name is the db_unique_name which is different between the primary and the standby(s).

In multitenant, in addition to that, each PDB registers its name as a service. But the PDB name is the same in the primary and the standby database. This means that we have the same service name registered for the PDB in primary and standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

We cannot change that, so it is strongly recommended to create different services for the PDB on the primary and on the standby.

The PDB default service name

Here is what we want to avoid.
I have a container database (db_name=CDB2) with its primary (db_unique_name=CDB2A) and standby (db_unique_name=CDB2B) on the same server, registered to the same listener:

Service "59408d6bed2c1c8ee0536a4ea8c0cfa9" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2A" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2AXDB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGMGRL" has 1 instance(s).
Instance "CDB2A", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2B" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2BXDB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGMGRL" has 1 instance(s).
Instance "CDB2B", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2_CFG" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

The PDB1 service is registered by both instances, and when I use it in my connection string I am connected at random to the primary or the standby:

22:27:46 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:27:51 SQL> select * from v$instance;
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:00 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:06 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:07 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:10 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:11 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:13 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE

I don’t want to use a service that connects at random, so I need to create different services.

Read-Only service for the Active Data Guard standby

I’m in Oracle Restart and I create the service with srvctl (but you can also create it with dbms_service when not running with Grid Infrastructure):


srvctl add service -db cdb2b -service pdb1_ro -pdb pdb1 -role physical_standby

This creates the service for the standby database (CDB2B) to be started when in physical standby role, and the service connects to the pluggable database PDB1.
But I cannot start it:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1
 
 
PRCD-1084 : Failed to start service pdb1_ro
PRCR-1079 : Failed to start resource ora.cdb2b.pdb1_ro.svc
CRS-5017: The resource action "ora.cdb2b.pdb1_ro.svc start" encountered the following error:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_SERVICE", line 5
ORA-06512: at "SYS.DBMS_SERVICE", line 288
ORA-06512: at line 1
. For details refer to "(:CLSN00107:)" in "/u01/app/12.2/diag/crs/vm106/crs/trace/ohasd_oraagent_oracle.trc".
 
CRS-2674: Start of 'ora.cdb2b.pdb1_ro.svc' on 'vm106' failed

The reason is that the service information must be stored in the dictionary (the SYS.SERVICE$ table), and you cannot do that on a read-only database.

This has been explained a long time ago by Ivica Arsov on his blog: https://iarsov.com/oracle/data-guard/active-services-on-physical-standby-database/ and nothing has changed. You need to create the service on the primary so that the update of SYS.SERVICE$ is propagated to the standby database through log shipping:


srvctl add service -db cdb2a -service pdb1_ro -pdb pdb1 -role physical_standby

This is not sufficient because the insert in SYS.SERVICE$ does not occur yet:

SQL> alter session set container=PDB1;
 
Session altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- --------- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1

As explained by Ivica in his blog post, we need to start the service once to have the row inserted in SERVICE$:

srvctl start service -db cdb2a -service pdb1_ro -pdb pdb1
srvctl stop service -db cdb2a -service pdb1_ro

Now the service information is persistent in the dictionary:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1
1 pdb1_ro 1562179816 pdb1_ro 15-SEP-17 1301388390 0 0 0 8 PDB1 86400 300 DYNAMIC ANY 0 0 0 0

This is from the primary, but after the redo has been transported and applied, I have the same on the standby. Now I can start the service I’ve created for the standby:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1

Here is the new service registered on the listener, which I can use to connect to the read-only PDB1 on the Active Data Guard standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).

Read-Write service for the primary

You can see above that in order to select from SERVICE$ I connected to CDB$ROOT and switched to PDB1 with ‘set container’. There’s no other choice because using the service name directs me at random to any instance. So I need a service to connect to the primary only, and I’ll call it PDB1_RW as the PDB is opened in Read Write mode there.

srvctl add service -db cdb2a -service pdb1_rw -pdb pdb1 -role primary
srvctl start service -db cdb2a -service pdb1_rw

Finally, here are the services registered from the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...

I’ll probably never use the ‘PDB1’ service because I want to know where I connect to.

In case of switchover, I also create the Read Write service for the standby:

srvctl add service -db cdb2b -service pdb1_rw -pdb pdb1 -role primary

Here are the resources when CDB2A is the primary:

$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2a.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------

I test a switchover to CDB2B:

$ dgmgrl sys/oracle
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Sep 15 23:41:26 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
Connected to "CDB2B"
Connected as SYSDG.
DGMGRL> switchover to cdb2b;
Performing switchover NOW, please wait...
New primary database "cdb2b" is opening...
Oracle Clusterware is restarting database "cdb2a" ...
Switchover succeeded, new primary is "cdb2b"

Here are the services:

[oracle@VM106 blogs]$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2a.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
--------------------------------------------------------------------------------

So what?

The recommendations are not new here:

  • Always do the same on the primary and the standby: create the services on both sites, then have them started depending on the role
  • Always use one or several application services rather than the default one, in order to have better control and flexibility on where you connect

In multitenant, because services are mandatory to connect to a container with a local user, all the recommendations about services are even more important than before. If you follow them, you will see that multitenant is not difficult at all.

This case may seem improbable, because you probably don’t put the standby on the same server or cluster as the primary. But you may have several standby databases on the same server. About the service registered from the PDB name, just don’t use it. I’m more concerned by the GUID service name (here 59408d6bed2c1c8ee0536a4ea8c0cfa9) which is also declared by both databases. If you plan to use online PDB relocate in a Data Guard configuration then be careful with that. I’ve not tested it, but it is probably better to keep the standby PDB closed, or at least do not register it on the same listener.

 

Cet article Active Data Guard services in Multitenant est apparu en premier sur Blog dbi services.

12c Access Control Lists

Sun, 2017-09-10 14:39

There is already enough information about the new simplified 12c way to define Access Control Lists, such as in oracle-base.
I’m just posting my example here to show how easy it is.

If, as a non-SYS user, you want to connect to a host over TCP, you get an error:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
SQL>
SQL>
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
Error starting at line : 27 File @ /media/sf_share/122/blogs/12cacl.sql
In command -
declare
c utl_tcp.connection;
n number:=0;
begin
c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
end;
Error report -
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 19
ORA-06512: at "SYS.UTL_TCP", line 284
ORA-06512: at line 5
24247. 00000 - "network access denied by access control list (ACL)"
*Cause: No access control list (ACL) has been assigned to the target
host or the privilege necessary to access the target host has not
been granted to the user in the access control list.
*Action: Ensure that an access control list (ACL) has been assigned to
the target host and the privilege necessary to access the target
host has been granted to the user.
SQL>

Here are the ACLs defined by default:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE

So, I add an ACL to allow access to towel.blinkenlights.nl on the telnet port (23) for my user DEMO1:

SQL> exec dbms_network_acl_admin.append_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list =>xs$name_list('connect'),principal_name=>'DEMO1',principal_type =>xs_acl.ptype_db));
 
PL/SQL procedure successfully completed.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
towel.blinkenlights.nl 23 23 NETWORK_ACL_5876ADC67B6635CEE053684EA8C0F378 000000008000281F SYS
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE
towel.blinkenlights.nl 23 23 1 GRANT NO DEMO1 DATABASE CONNECT

Now I can connect from my user:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
 
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
PL/SQL procedure successfully completed.
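
To revert the change later, the ACE can be removed in the same simplified way. A quick sketch, using the same host, port and principal as above:

SQL> exec dbms_network_acl_admin.remove_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list =>xs$name_list('connect'),principal_name=>'DEMO1',principal_type =>xs_acl.ptype_db));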

If you don’t know why I used towel.blinkenlights.nl, then just try to telnet to it and have fun…

 

Cet article 12c Access Control Lists est apparu en premier sur Blog dbi services.

Create constraints in your datawarehouse – why and how

Fri, 2017-09-08 14:13

We still see some developers not declaring referential integrity constraints in datawarehouse databases because they think they don’t need them (the integrity of the data has already been validated by the ETL). Here is a small demo I did to show why you need to declare them, and how to do it without adding any overhead to the ETL.

Test case

I create 3 dimension tables and 1 fact table:

21:01:18 SQL> create table DIM1 (DIM1_ID number, DIM1_ATT1 varchar2(20));
Table DIM1 created.
 
21:01:19 SQL> create table DIM2 (DIM2_ID number, DIM2_ATT1 varchar2(20));
Table DIM2 created.
 
21:01:20 SQL> create table DIM3 (DIM3_ID number, DIM3_ATT1 varchar2(20));
Table DIM3 created.
 
21:01:21 SQL> create table FACT (DIM1_ID number, DIM2_ID number, DIM3_ID number,MEAS1 number);
Table FACT created.

I insert 10 million rows into the fact table:

21:01:22 SQL> insert into FACT select mod(rownum,3),mod(rownum,5),mod(rownum,10),rownum from xmltable('1 to 10000000');
10,000,000 rows inserted.
 
Elapsed: 00:00:18.983

and fill the dimension tables from it:

21:01:42 SQL> insert into DIM1 select distinct DIM1_ID,'...'||DIM1_ID from FACT;
3 rows inserted.
 
Elapsed: 00:00:01.540
 
21:01:52 SQL> insert into DIM2 select distinct DIM2_ID,'...'||DIM2_ID from FACT;
5 rows inserted.
 
Elapsed: 00:00:01.635
 
21:01:57 SQL> insert into DIM3 select distinct DIM3_ID,'...'||DIM3_ID from FACT;
10 rows inserted.
 
Elapsed: 00:00:01.579
 
21:01:58 SQL> commit;
Commit complete.

Query joining fact with one dimension

I’ll run the following query:

21:01:58 SQL> select count(*) from FACT join DIM1 using(DIM1_ID);
 
COUNT(*)
--------
10000000
 
Elapsed: 00:00:01.015

Here is the execution plan:

21:02:12 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4pqjrjkc7sn17, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID)
 
Plan hash value: 1826335751
 
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7514 (100)| |
| 1 | SORT AGGREGATE | | 1 | 26 | | |
|* 2 | HASH JOIN | | 10M| 253M| 7514 (2)| 00:00:01 |
| 3 | TABLE ACCESS FULL| DIM1 | 3 | 39 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| FACT | 10M| 126M| 7482 (1)| 00:00:01 |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

Actually, we don’t need that join. A dimension table has two goals:

  • filter facts on the dimension attributes. Example: filter on customer last name
  • add dimension attributes to the result. Example: add customer first name

Here, there is no WHERE clause on DIM1 columns, and no columns from DIM1 selected. We don’t need to join to DIM1. However, we often see those useless joins for two reasons:

  • We query a view that joins the fact with all dimensions
  • The query is generated by a reporting tool which always joins to the dimensions

Join elimination

The Oracle optimizer is able to remove those kinds of unnecessary joins. But one piece of information is missing here for the optimizer. We know that all rows in the fact table have a matching row in each dimension, but Oracle does not know that. And if a row had no matching row in the dimension, then the inner join should not return it. For this reason, the join must be done.

Let’s give this information to the optimizer: declare the foreign key from FACT to DIM1 so that Oracle knows that there is a many-to-one relationship:

21:02:17 SQL> alter table DIM1 add constraint DIM1_PK primary key(DIM1_ID);
Table DIM1 altered.
 
Elapsed: 00:00:00.051
 
21:02:20 SQL> alter table FACT add constraint DIM1_FK foreign key(DIM1_ID) references DIM1;
Table FACT altered.
 
Elapsed: 00:00:03.210

I’ve spent 3 seconds here creating this foreign key (it would have been much longer with a real fact table and lots of columns and rows), but now the optimizer is able to eliminate the join:

21:02:24 SQL> select count(*) from FACT join DIM1 using(DIM1_ID);
 
COUNT(*)
--------
10000000
 
21:02:25 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4pqjrjkc7sn17, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7488 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 10M| 126M| 7488 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM1_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

No join is needed here, and the query is faster. This is exactly the point of this blog post: to show you that declaring constraints improves the performance of queries. It adds information for the optimizer, like statistics do. Statistics give estimated cardinalities; foreign keys give the exact cardinality (many-to-one).

No validate

When loading a datawarehouse, you usually don’t need to validate the constraints because data was bulk loaded from a staging area where all data validation has been done. You don’t want to spend time validating constraints (the 3 seconds in my small example above) and this is why some datawarehouse developers do not declare constraints.

However, we can declare constraints without validating them. Let’s do that for the second dimension table:

21:02:34 SQL> alter table DIM2 add constraint DIM2_PK primary key(DIM2_ID) novalidate;
Table DIM2 altered.
 
Elapsed: 00:00:00.018
 
21:02:35 SQL> alter table FACT add constraint DIM2_FK foreign key(DIM2_ID) references DIM2 novalidate;
Table FACT altered.
 
Elapsed: 00:00:00.009

That was much faster than the 3 seconds we had for the ‘validate’ constraint, which is the default. Creating a constraint with NOVALIDATE is immediate and does not depend on the size of the table.

However this is not sufficient to get the join elimination:

21:02:39 SQL> select count(*) from FACT join DIM2 using(DIM2_ID);
 
COUNT(*)
--------
10000000
 
21:02:40 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4t9g2n6duw0jf, child number 0
-------------------------------------
select count(*) from FACT join DIM2 using(DIM2_ID)
 
Plan hash value: 3858910383
 
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7518 (100)| |
| 1 | SORT AGGREGATE | | 1 | 26 | | |
|* 2 | HASH JOIN | | 10M| 253M| 7518 (2)| 00:00:01 |
| 3 | INDEX FULL SCAN | DIM2_PK | 5 | 65 | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| FACT | 10M| 126M| 7488 (1)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

The constraint ensures that no new rows can be inserted without a matching row in the dimension. However, because Oracle has not validated the existing data itself, it does not apply the join elimination, just in case a previously existing row has no matching dimension row.

Rely novalidate

If you want the optimizer to do the join elimination on a ‘novalidate’ constraint, then it has to trust you and rely on the constraint that you have validated yourself.

RELY is an attribute of the constraint that you can set:

21:02:44 SQL> alter table DIM2 modify constraint DIM2_PK rely;
Table DIM2 altered.
 
Elapsed: 00:00:00.016
 
21:02:45 SQL> alter table FACT modify constraint DIM2_FK rely;
Table FACT altered.
 
Elapsed: 00:00:00.010

But this is not sufficient. You told Oracle to rely on your constraint, but Oracle must trust you.

Trusted

The join elimination is a rewrite of the query and, by default, rewrite is enabled but only when integrity is enforced by Oracle:

21:02:50 SQL> show parameter query_rewrite
NAME TYPE VALUE
----------------------- ------ --------
query_rewrite_enabled string TRUE
query_rewrite_integrity string ENFORCED

Let’s allow the rewrite transformations in our session to trust our RELY constraints:

21:02:52 SQL> alter session set query_rewrite_integrity=trusted;
Session altered.

Now, joining to DIM2 without using DIM2 columns outside of the join allows join elimination:

21:02:57 SQL> select count(*) from FACT join DIM2 using(DIM2_ID);
 
COUNT(*)
--------
10000000
 
Elapsed: 00:00:00.185
21:02:58 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4t9g2n6duw0jf, child number 0
-------------------------------------
select count(*) from FACT join DIM2 using(DIM2_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7494 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 10M| 126M| 7494 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM2_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- rely constraint used for this statement

In 12.2, the execution plan has a note to show that the plan depends on a RELY constraint.

From this example, you can see that you can, and should, create RELY NOVALIDATE constraints on tables where you know the existing data is valid. They are created immediately, without any overhead on the load process, and they help to improve the queries generated on your dimensional model.
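
For reference, when creating such constraints from scratch, the whole state can be declared in a single statement. Here is a small sketch on the example tables (not part of the demo steps above; note that the referenced primary key must be RELY as well, otherwise ORA-25158 is raised, as mentioned later in this post):

-- Sketch: declare the dimension PK and the fact FK directly as RELY NOVALIDATE
alter table DIM2 add constraint DIM2_PK primary key (DIM2_ID) rely enable novalidate;
alter table FACT add constraint DIM2_FK foreign key (DIM2_ID) references DIM2 rely enable novalidate;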

Rely Disable

I said that a NOVALIDATE constraint has no overhead when created, but you may have further inserts or updates in your datawarehouse. Then those constraints will have to be verified, and this may add a little overhead. In this case, you can go further and disable the constraint:

21:03:04 SQL> alter table DIM3 add constraint DIM3_PK primary key(DIM3_ID) rely;
Table DIM3 altered.
 
Elapsed: 00:00:00.059
 
21:03:05 SQL> alter table FACT add constraint DIM3_FK foreign key(DIM3_ID) references DIM3 rely disable novalidate;
Table FACT altered.
 
Elapsed: 00:00:00.014

Note that I had to set the referenced constraint DIM3_PK to RELY here, even though it is enabled and validated, or I would get: ORA-25158: Cannot specify RELY for foreign key if the associated primary key is NORELY.

My session still trusts RELY constraints for query rewrite:

21:03:07 SQL> show parameter query_rewrite
 
NAME TYPE VALUE
----------------------- ------ -------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED

Now, the join elimination occurs:

21:03:08 SQL> select count(*) from FACT join DIM3 using(DIM3_ID);
 
COUNT(*)
--------
10000000
 
21:03:09 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 3bhs523zyudf0, child number 0
-------------------------------------
select count(*) from FACT join DIM3 using(DIM3_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7505 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 11M| 138M| 7505 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM3_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- rely constraint used for this statement

So, we can still benefit from the query optimization even with the RELY DISABLE NOVALIDATE.

But I would not recommend this. Be careful. Here are my foreign key constraints:

21:03:15 SQL> select table_name,constraint_type,constraint_name,status,validated,rely from all_constraints where owner='
DEMO' and table_name='FACT' order by 4 desc,5 desc,6 nulls last;
 
TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME STATUS VALIDATED RELY
---------- --------------- --------------- ------ --------- ----
FACT R DIM1_FK ENABLED VALIDATED
FACT R DIM2_FK ENABLED NOT VALIDATED RELY
FACT R DIM3_FK DISABLED NOT VALIDATED RELY

For DIM1_FK and DIM2_FK, the constraints prevent inconsistencies:

21:03:17 SQL> insert into FACT(DIM1_ID)values(666);
 
Error starting at line : 1 in command -
insert into FACT(DIM1_ID)values(666)
Error report -
ORA-02291: integrity constraint (DEMO.DIM1_FK) violated - parent key not found

But the disabled one will allow inconsistencies:

21:03:19 SQL> insert into FACT(DIM3_ID)values(666);
1 row inserted.

That’s bad. I roll this back immediately:

21:03:20 SQL> rollback;
Rollback complete.

Star transformation

Join elimination is not the only transformation that needs to know about the many-to-one relationship between fact tables and dimensions. You usually create a bitmap index on each foreign key to a dimension, to get higher selectivity when accessing the fact table rows from a combination of criteria on the dimension attributes.

21:03:24 SQL> create bitmap index FACT_DIM1 on FACT(DIM1_ID);
Index FACT_DIM1 created.
 
21:03:29 SQL> create bitmap index FACT_DIM2 on FACT(DIM2_ID);
Index FACT_DIM2 created.
 
21:03:33 SQL> create bitmap index FACT_DIM3 on FACT(DIM3_ID);
Index FACT_DIM3 created.

Here is the kind of query with predicates on the attributes of each dimension:

21:03:35 SQL> select count(*) from FACT
2 join DIM1 using(DIM1_ID) join DIM2 using(DIM2_ID) join DIM3 using(DIM3_ID)
3 where dim1_att1='...0' and dim2_att1='...0' and dim3_att1='...0';
 
COUNT(*)
--------
333333

By default, the optimizer applies those predicates on the dimensions and does a cartesian join to get all accepted combinations of dimension IDs. Then the rows can be fetched from the fact table:

21:03:37 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 01jmjv0sz1dpq, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID) join DIM2
using(DIM2_ID) join DIM3 using(DIM3_ID) where dim1_att1='...0' and
dim2_att1='...0' and dim3_att1='...0'
 
Plan hash value: 1924236134
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 5657 (100)| |
| 1 | SORT AGGREGATE | | 1 | 114 | | |
| 2 | NESTED LOOPS | | 55826 | 6215K| 5657 (1)| 00:00:01 |
| 3 | MERGE JOIN CARTESIAN | | 1 | 75 | 9 (0)| 00:00:01 |
| 4 | MERGE JOIN CARTESIAN | | 1 | 50 | 6 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | DIM1 | 1 | 25 | 3 (0)| 00:00:01 |
| 6 | BUFFER SORT | | 1 | 25 | 3 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
| 8 | BUFFER SORT | | 1 | 25 | 6 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | DIM3 | 1 | 25 | 3 (0)| 00:00:01 |
| 10 | BITMAP CONVERSION COUNT | | 55826 | 2126K| 5657 (1)| 00:00:01 |
| 11 | BITMAP AND | | | | | |
|* 12 | BITMAP INDEX SINGLE VALUE| FACT_DIM3 | | | | |
|* 13 | BITMAP INDEX SINGLE VALUE| FACT_DIM2 | | | | |
|* 14 | BITMAP INDEX SINGLE VALUE| FACT_DIM1 | | | | |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
5 - filter("DIM1"."DIM1_ATT1"='...0')
7 - filter("DIM2"."DIM2_ATT1"='...0')
9 - filter("DIM3"."DIM3_ATT1"='...0')
12 - access("FACT"."DIM3_ID"="DIM3"."DIM3_ID")
13 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
14 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")

Here, rows are fetched from the fact table through a nested loop from the cartesian join on the dimensions, using the bitmap index access for each loop. If there are lots of rows to fetch, then the optimizer will choose a hash join and will have to full scan the fact table, which is expensive. To lower that cost, the optimizer can add an ‘IN (SELECT DIM_ID FROM DIM WHERE DIM_ATT)’ subquery for very selective dimensions. This is the star transformation, and it relies on the foreign key constraints.
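
Conceptually, the transformed statement looks something like the following sketch (an illustration of the idea only, not the exact internal rewrite, which also combines the bitmap indexes and joins back to the dimensions when their columns are selected):

-- Sketch of the idea behind the star transformation for the query above:
-- the joins are replaced by IN subqueries on the selective dimensions
select count(*)
from FACT
where DIM1_ID in (select DIM1_ID from DIM1 where DIM1_ATT1 = '...0')
  and DIM2_ID in (select DIM2_ID from DIM2 where DIM2_ATT1 = '...0')
  and DIM3_ID in (select DIM3_ID from DIM3 where DIM3_ATT1 = '...0');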

It is not enabled by default:

21:03:43 SQL> show parameter star
NAME TYPE VALUE
---------------------------- ------- -----
star_transformation_enabled string FALSE

We can enable it, and then it is a cost-based transformation:

21:03:45 SQL> alter session set star_transformation_enabled=true;
Session altered.

Here is my example:

21:03:47 SQL> select count(*) from FACT
2 join DIM1 using(DIM1_ID) join DIM2 using(DIM2_ID) join DIM3 using(DIM3_ID)
3 where dim1_att1='...0' and dim2_att1='...0' and dim3_att1='...0';
 
COUNT(*)
--------
333333

The star transformation, changing a join into an ‘IN ()’ subquery, is possible only when we know that there is a many-to-one relationship. We have all the constraints for that, disabled or not, validated or not, but all with RELY. Then the star transformation can occur:

21:03:51 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 01jmjv0sz1dpq, child number 1
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID) join DIM2
using(DIM2_ID) join DIM3 using(DIM3_ID) where dim1_att1='...0' and
dim2_att1='...0' and dim3_att1='...0'
 
Plan hash value: 1831539117
 
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 68 (100)| |
| 1 | SORT AGGREGATE | | 1 | 38 | | |
|* 2 | HASH JOIN | | 2 | 76 | 68 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
| 4 | VIEW | VW_ST_62BA0C91 | 8 | 104 | 65 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 8 | 608 | 56 (0)| 00:00:01 |
| 6 | BITMAP CONVERSION TO ROWIDS| | 8 | 427 | 22 (5)| 00:00:01 |
| 7 | BITMAP AND | | | | | |
| 8 | BITMAP MERGE | | | | | |
| 9 | BITMAP KEY ITERATION | | | | | |
|* 10 | TABLE ACCESS FULL | DIM1 | 1 | 25 | 3 (0)| 00:00:01 |
|* 11 | BITMAP INDEX RANGE SCAN| FACT_DIM1 | | | | |
| 12 | BITMAP MERGE | | | | | |
| 13 | BITMAP KEY ITERATION | | | | | |
|* 14 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
|* 15 | BITMAP INDEX RANGE SCAN| FACT_DIM2 | | | | |
| 16 | BITMAP MERGE | | | | | |
| 17 | BITMAP KEY ITERATION | | | | | |
|* 18 | TABLE ACCESS FULL | DIM3 | 1 | 25 | 3 (0)| 00:00:01 |
|* 19 | BITMAP INDEX RANGE SCAN| FACT_DIM3 | | | | |
| 20 | TABLE ACCESS BY USER ROWID | FACT | 1 | 25 | 43 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("ITEM_1"="DIM2"."DIM2_ID")
3 - filter("DIM2"."DIM2_ATT1"='...0')
10 - filter("DIM1"."DIM1_ATT1"='...0')
11 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")
14 - filter("DIM2"."DIM2_ATT1"='...0')
15 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
18 - filter("DIM3"."DIM3_ATT1"='...0')
19 - access("FACT"."DIM3_ID"="DIM3"."DIM3_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- star transformation used for this statement
- this is an adaptive plan

Here, each dimension drives a range scan on its bitmap index: the predicate on the dimension table returns the dimension IDs for the index lookup on the fact table. The big advantage of bitmap indexes here is that when this access path is used for several dimensions, the bitmap results can be combined before going to the table. This transformation avoids the join, so you must be sure that there is a many-to-one relationship.

In summary

As you should rely on the integrity of the data in your datawarehouse, here are the parameters to look at when querying fact-dimension schemas:

NAME TYPE VALUE
---------------------------- ------- ------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED
star_transformation_enabled string FALSE

And you should define all the constraints. When you are sure about the integrity of the data, those constraints can be created RELY ENABLE NOVALIDATE. If some processing must be optimized by not enforcing the constraint verification, then you may create those constraints as RELY DISABLE NOVALIDATE, but the gain will probably be minimal, and the risk high. And remember that well-controlled processes are not the only things that update data: one day you may have to do a manual update to fix something, and an enabled constraint can prevent terrible errors.

I have not covered all the optimizer transformations that rely on constraints. When using materialized views, the query rewrite capability also relies on constraints. Relationship cardinality is one of the most important pieces of information in database design, and this information must be known by the optimizer.
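
As an illustration of that last point, here is a minimal materialized view sketch on the example schema (the view name and the aggregation are made up for this example): with the RELY constraints and query_rewrite_integrity=trusted, a query joining FACT to DIM1 and grouping on a DIM1 attribute can be rewritten to use this much smaller summary, joining back to DIM1 through the declared keys.

-- Sketch only: a summary that query rewrite can use (name and columns are illustrative)
create materialized view FACT_PER_DIM1
  enable query rewrite
  as select DIM1_ID, count(*) cnt, sum(MEAS1) sum_meas1
     from FACT
     group by DIM1_ID;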

 

Cet article Create constraints in your datawarehouse – why and how est apparu en premier sur Blog dbi services.
