Yann Neuhaus

dbi services technical blog

Using WebLogic 12C RESTful management for monitoring WebLogic Domains

Tue, 2017-09-26 01:08

WebLogic 12.2.1 provides a new RESTful management interface with full accesses to all WebLogic Server resources. This interface offers an alternative to the WLST scripting or JMX developments for the management and the monitoring of WebLogic Domains.

The rest of this blog provides a few examples demonstrating the simplicity and the efficiency of these REST requests.

WebLogic Server State
When checking a WebLogic Server state, we are interested in the state and in the “ActivationTime”, which shows the last time the server was started. A low value can indicate that the server was restarted recently, possibly automatically by the node manager after a crash.
The following RESTful URL requests the state of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1?fields=state,activationTime&links=none

Output:

{
"state": "RUNNING",
"activationTime": 1470982524188
}
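For reference, these URLs can be queried with any HTTP client. Here is a minimal curl sketch, assuming basic authentication with a WebLogic administrator account and a self-signed certificate on the administration port (the WLS_USER/WLS_PASSWORD values and the -k option are placeholders and assumptions, not part of the original setup):

# Credentials are placeholders; -k skips certificate validation for a self-signed certificate
WLS_USER=weblogic
WLS_PASSWORD=welcome1
curl -s -k -u "${WLS_USER}:${WLS_PASSWORD}" -H "Accept: application/json" \
  "https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1?fields=state,activationTime&links=none"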

The following RESTful URL requests the state of all WebLogic Servers in a domain:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes?fields=name,state,activationTime&links=none

Output:

{
"name": "AdminServer",
"state": "RUNNING",
"activationTime": 1473234973467
},
{
"name": "server2",
"state": "RUNNING",
"activationTime": 1473235510682
},
{
"name": "server1",
"state": "RUNNING",
"activationTime": 1473235506258
}

Get JVM Key Metrics
Monitoring the memory usage provides a good view on how the application behaves regarding memory consumption. Even if the memory management is a JVM task, the WebLogic Server mbeans can be queried for some heap usage information.
The following RESTful URL requests the JVM key metrics of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/JVMRuntime?links=none&fields=heapSizeCurrent,heapFreeCurrent,heapFreePercent,heapSizeMax

Output:

{
"heapSizeCurrent": 259588096,
"heapFreeCurrent": 101962840,
"heapSizeMax": 518979584,
"heapFreePercent": 72
}
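Such a request is easy to turn into a threshold check. Here is a minimal shell sketch, assuming the same placeholder credentials as above and that the jq utility is available to parse the JSON reply (both are assumptions, not part of the original setup):

# Warn when the free heap of server1 drops below 20%
URL="https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/JVMRuntime?links=none&fields=heapFreePercent"
free_pct=$(curl -s -k -u "${WLS_USER}:${WLS_PASSWORD}" -H "Accept: application/json" "${URL}" | jq -r '.heapFreePercent')
if [ "${free_pct}" -lt 20 ]; then
  echo "WARNING - server1 heap free is only ${free_pct}%"
fi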

Get WebLogic Threads key metrics
When WebLogic opens too many threads to service the load, performance decreases because of the additional resource (CPU, memory) usage. The following RESTful URL requests the thread pool key metrics of a WebLogic Server named “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/threadPoolRuntime?links=none&fields=executeThreadIdleCount,hoggingThreadCount,pendingUserRequestCount,completedRequestCount,throughput,healthState

Output:

{
"executeThreadIdleCount": 0,
"completedRequestCount": 4652,
"pendingUserRequestCount": 0,
"throughput": 1.999000499750125,
"hoggingThreadCount": 0,
"healthState": {
"state": "ok",
"subsystemName": null,
"partitionName": null,
"symptoms": []
}
}

Get JDBC Runtime Metrics
“Active Connection Current Count”, “Current Connection High Count” and “Waiting for Connection High Count” allow you to validate that you have the correct amount of resources available to service the clients’ needs; they also help determine whether you need to increase or decrease the pool size. “Wait Seconds High Count”, “Waiting for Connection Failure Total” and “Connection Delay Times” can be used to determine database responsiveness and how the clients are impacted by the connection pool size. The following RESTful URL requests the JDBC runtime metrics of the “myDS” data source on “server1”:

https://host01:7002/management/weblogic/12.2.1.0/domainRuntime/serverRuntimes/server1/JDBCServiceRuntime/JDBCDataSourceRuntimeMBeans/myDS?links=none&fields=name,activeConnectionsCurrentCount,activeConnectionsHighCount,waitingForConnectionHighCount,waitSecondsHighCount,waitingForConnectionFailure

Output:

{
"name": “myDS",
"state": "Running",
"activeConnectionsCurrentCount": 4,
"activeConnectionsHighCount": 8,
"waitingForConnectionFailureTotal": 0,
"waitingForConnectionHighCount": 0,
"waitSecondsHighCount": 0,
}

Get Application Runtime Metrics
We can fetch some useful information such as “how many sessions were connected to the application” and “how many concurrent sessions are open”. The following RESTful URL requests the session metrics of the SimpleAuctionWebAppDb application deployed on “server1”:

https://host01:7002/management/weblogic/latest/domainRuntime/serverRuntimes/server1/applicationRuntimes/SimpleAuctionWebAppDb/componentRuntimes?fields=openSessionsCurrentCount,sessionsOpenedTotalCount,openSessionsHighCount&links=none

Output:

{
"openSessionsCurrentCount": 12,
"sessionsOpenedTotalCount": 255,
"openSessionsHighCount": 15
}

The WebLogic REST management interface gives direct access to the WebLogic MBeans without any additional resource cost, and monitoring tools can only benefit from it. As the WebLogic RESTful requests used for monitoring are simple, the Java plugin for the dbi monitoring solution was developed in two days.
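To give a rough idea of what such a monitoring poll can look like, here is a minimal shell sketch (my own sketch, not the dbi plugin) that lists the state of every server in the domain; it assumes the same placeholder credentials as above, the jq utility, and that the collection is returned under the usual "items" wrapper, which the output excerpts above omit:

# Print name, state and activation time of every server runtime in the domain
BASE="https://host01:7002/management/weblogic/latest/domainRuntime"
curl -s -k -u "${WLS_USER}:${WLS_PASSWORD}" -H "Accept: application/json" \
  "${BASE}/serverRuntimes?fields=name,state,activationTime&links=none" \
  | jq -r '.items[] | "\(.name) \(.state) \(.activationTime)"'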

 

The article Using WebLogic 12C RESTful management for monitoring WebLogic Domains appeared first on Blog dbi services.

Am I a DBA 3.0 or just an SQL*DBA?

Sun, 2017-09-24 13:47

There are currently a lot of new buzzwords and renamings that suggest our DBA role is changing, most of them escorted by a #cloud hashtag. Oracle Technology Network is now called Oracle Developer Community. Larry Ellison announced the database that does not need to be operated by humans. And people are talking about the death of the DBA, about the future of the DBA, about DBA 3.0,…

Those are all generalizations and universal statements, and I don’t like generalizations. There is not only one type of DBA role. The DBA role in big US companies (where most of those claims come from) is very different from the DBA role in medium-sized European companies (where I’m doing most of my job). The only thing I like about generalization is that a simple counterexample is sufficient to prove that the universal statement is wrong. And I’ll take the example I know best: mine.

What do you call DBA role? Is it only server management, or database management? Are you a Dev DBA or an Ops DBA? And when you will go multitenant, will you become a CDB DBA or PDB DBA? And on Engineered Systems, are you still a Database administrator or a Database Machine administrator?

So here is my experience. Let’s flashback to sysdate-8000.
My first job with an Oracle Database was in 1994. I was working at Alcatel in Paris. The IT was typical of big companies at that time: all IBM, databases were DB2 on mainframe and applications were developed through a multi-year waterfall cycle from specifications to production. But I was not working there. I was in the accounting department, which had its own little IT with some small Borland Paradox applications. This IT department was one senior person, from whom I learned everything, and me, a junior developer doing some Borland Paradox things. When the need for a new application came, the idea was to build a prototype in this ‘rebel’ IT service rather than wait for the whole development cycle managed by the official IT department.

We asked SUN to lend us a workstation. We asked Oracle to lend us an Oracle 7.1 database. That was not difficult: both of them were happy to try to get into this all-IBM company by another way. And Borland had a new product, Delphi, so this is where I started to build the prototype. I had everything to learn there: I had never installed a Unix system, I had never installed a database, I had never even configured a TCP/IP network. But with the books (no internet then) and the help of my manager (did I say I owe him everything?) we got the environment ready within one month.

Today we are talking about Cloud PaaS as the only way to quickly get an environment to start a new development project. The marketing explains that you can get the environment with a few clicks, operational one hour later. But in real life, I know several projects where the environment is not ready after one month, for different reasons (time to choose which service, evaluate the cost, set it up). Remember that in my case, 23 years ago, it took one or two months, but I had a full server, enough storage, available 24/7, with the workstation on my desk. And all that for free.

So I started to develop the application. The business users were in the next room. A short meeting, a small doc, and the design of the first prototype was ready. Development with Delphi was really quick (remember RAD – Rapid Application Development?) and as soon as I had a usable prototype, one user had access to it, giving me feedback for future evolutions. We built something very clever: easy to evolve, fit to business needs, with good performance, and easy to deploy. It was replaced years later by an application provided by the IT department, but our application was used as the specification.

So, what was my role here? I was clearly a developer and not a DBA. But I was already designing a system, installing a server, creating a database, modeling the data and analyzing performance. When interacting with the database, I was just a DBA doing some SQL. If I wanted to invent new words myself, I would call that an SQL*DBA, like the name of the tool that was being replaced at that time by svrmgrl for the DBA stuff related to server management. All of that was consolidated into the same tools later: sqlplus does the DBA and developer stuff, Enterprise Manager does both, SQL Developer does both…

Over the following 20 years, I evolved into a DBA. I learned new technologies each time, but I don’t think my DBA role has changed. I did BI (called ‘infocenter’ at the time and ‘datawarehouse’ later) with Business Objects, DB2 and Data Propagator (yes, logical replication, and that was in 1996). All this was designed in cooperation with the business end-users. In 2001 I managed terabyte databases full of numbers, doing what we call Big Data today. All maintenance tasks (capacity planning, upgrade, recovery) were done very closely to the business utilization of this data. I administered telecom databases at a time when mobile phone providers came up with a new idea to implement every month. We would call that ‘agile’ today. I set up databases for quick cloning with transportable tablespaces. I set up continuous integration tests of databases based on flashback technologies and Workspace Manager. I configured parallel query and XML DB to do analytics on unstructured data, with better results than a MapReduce PoC. Technology evolves, names are modernized, but my DBA role is the same and I still use SQL.

The latest I’ve read about this changing role is Penny Avril’s interview, and it is a very good explanation of all those announced changes. I totally agree with all of it, except that I see no change in my role there. Let’s take this: “DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.”
I have always known which business users and which business cases are interacting with the database. My resume always mentioned the business area of each project. I’ve always interacted with end-users and developers for any database I administered, whether they are DEV, TEST or PROD databases. You cannot set up a backup/recovery plan without knowing the data and the business requirements. You cannot upgrade or migrate without interacting with users to test and validate the migration. You cannot address performance without interacting with users and developers. You cannot size the SGA without knowing how the data is used at the different times of the day or the week. You cannot keep a database healthy without knowing how it is used. You cannot build an infrastructure architecture without the support of the business for the software costs.

My DBA job is not mechanical work to keep processes running on a server. That would be a database system administrator. But we are talking about a DataBase Administrator. The major part of my job, and the main reason why I like it, is the human interaction around the database. You talk to server/storage/network administrators, you talk to all kinds of business users, you talk to developers, you talk to managers, you talk to vendors,… You have to understand OS schedulers, network security and mutexes, and you also have to understand banking, retail and hospital data workflows. So I don’t worry about the self-driven/no-human-labor part of the DBA role that may be moved to the cloud provider. Those are boring things that we already automated a long time ago. For example, at dbi services we have included all this automation into the DMK. And this goes further for open source databases with the OpenDB Appliance. Do you think a consulting company would provide this for free to its customers if this automation took the whole DBA job away? The boring and recurring things are automated to avoid errors, and all the intelligent stuff is provided by experienced human DBAs talking SQL with the system. As always.

 

The article Am I a DBA 3.0 or just an SQL*DBA? appeared first on Blog dbi services.

Wrong result with multitenant, dba_constraints and current_schema

Sat, 2017-09-23 15:03

Multitenant architecture is not such a big change and this is why I recommend it when you start a project in 12c or if you upgrade to 12.2 – of course after thoroughly testing your application. However, there is a point where you may encounter problems on dictionary queries, because it is really a big change internally. The dictionary separation has several side effects. You should test carefully the queries you do on the dictionary views to get metadata. Here is an example of a bug I recently encountered.

This happened with a combination of things you should not do very often, and not in a critical use case: querying the dictionary for constraints owned by your current schema when it is different from the user you connect with.

I create two users: USER1 and USER2
SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> grant dba to USER1 identified by USER1 container=current;
Grant succeeded.
SQL> grant dba to USER2 identified by USER2 container=current;
Grant succeeded.

USER1 owns a table which has a constraint:

SQL> connect USER1/USER1@//localhost/PDB1
Connected.
SQL> create table DEMO(dummy constraint pk primary key) as select * from dual;
Table DEMO created.

USER2 can access the table either by prefixing it with USER1 or by setting the current_schema to USER1:

SQL> connect USER2/USER2@//localhost/PDB1
Connected.
SQL> alter session set current_schema=USER1;
Session altered.

Bug

Ok, now imagine you want to read constraint metadata for the current schema you have set:

SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 /
 
no rows selected

No rows selected is a wrong result here because my current_schema is USER1 and USER1 has constraints:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = 'USER1'
4 /
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

So, where’s the problem? Let’s have a look at the execution plan:

SQL_ID 2fghqwz1cktyf, child number 0
-------------------------------------
select sys_context('USERENV','CURRENT_SCHEMA'), a.* from
sys.dba_constraints a where owner =
sys_context('USERENV','CURRENT_SCHEMA')
 
Plan hash value: 1258862619
 
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.32 | 2656 |
| 1 | PARTITION LIST ALL | | 1 | 2 | 0 |00:00:00.32 | 2656 |
|* 2 | EXTENDED DATA LINK FULL| INT$INT$DBA_CONSTRAINTS | 2 | 2 | 0 |00:00:00.32 | 2656 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR
("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER(SY
S_CONTEXT('USERENV','CON_ID')))) AND "OWNER"=SYS_CONTEXT('USERENV','CURRENT_SCHEMA')))

I am in 12.2, and DBA_CONSTRAINTS reads from INT$DBA_CONSTRAINTS, which reads from INT$INT$DBA_CONSTRAINTS; in multitenant, this view, being an extended data view, will read from CDB$ROOT and from the current container. This is why we see EXTENDED DATA LINK FULL in the execution plan, and up to this point the predicate is correct: “OWNER”=SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’)

The execution through the data link is run on each container with parallel processes: they switch to the container and run the underlying query on the view. But when I look at the SQL trace of the parallel process running the query on my PDB, I can see that in the predicate on OWNER the SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’) has been replaced with a hardcoded value:

SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) OPT_PARAM('_ENABLE_VIEW_PDB', 'FALSE') */ OWNER,CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,OBJECT_TYPE#,SEARCH_CONDITION,SEARCH_CONDITION_VC,R_OWNER,R_CONSTRAINT_NAME,DELETE_RULE,STATUS,DEFERRABLE,DEFERRED,VALIDATED,GENERATED,BAD,RELY,LAST_CHANGE,INDEX_OWNER,INDEX_NAME,INVALID,VIEW_RELATED,ORIGIN_CON_ID FROM NO_COMMON_DATA(SYS."INT$INT$DBA_CONSTRAINTS") "INT$INT$DBA_CONSTRAINTS" WHERE ("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR "INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER('3')) AND "INT$INT$DBA_CONSTRAINTS"."OWNER"=q'"USER2"'

And unfortunately, this value is not the right one: USER2 is my connected user, but not the CURRENT_SCHEMA that I have set. In the same trace, I can see where this value comes from:

select 'q''"' || SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || '"''' from sys.dual

but it seems that the current_schema was lost through the call to the parallel process and the PDB switch to my container.

Workaround

The problem is easy to workaround. This works:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = ( select sys_context('USERENV','CURRENT_SCHEMA') from dual )
4 /
 
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

And anyway, it is better to get the current schema first and pass it as a bind variable. Bind variables are passed correctly through data link queries:


SQL> variable v varchar2(30)
SQL> exec select sys_context('USERENV','CURRENT_SCHEMA') into :v from dual;
 
PL/SQL procedure successfully completed.
 
SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 --where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 where owner = :v
5 /

So what?

The multitenant architecture is a real challenge for dictionary views. The dictionary is separated: system metadata in CDB$ROOT and user metadata in the PDB. But, because of compatibility with the non-CDB architecture, the dictionary views must show both of them, and this is where it becomes complex: what was separated on purpose now has to be merged. And complexity is subject to bugs. If you want to get an idea, have a look at dcore.sql in ORACLE_HOME/rdbms/admin and compare the 11g version with the 12c ones, with all the evolution in 12.1.0.1, 12.1.0.2 and 12.2.0.1.
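If you want to take that look yourself, here is a minimal sketch, assuming both Oracle Homes are installed on the same machine (the paths are only examples):

# Compare the dictionary view creation scripts between an 11g and a 12.2 home
diff /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/dcore.sql \
     /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/dcore.sql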

 

The article Wrong result with multitenant, dba_constraints and current_schema appeared first on Blog dbi services.

Documentum – RCS 7.3 – Issue with projections

Sat, 2017-09-23 11:42

In the last two blogs I posted, I mentioned that I was working on silent installations with CS 7.3 (in HA). The issue I will describe in this blog is linked to the HA installation. I faced it using the silent installation, but I’m guessing it would also occur using the GUI installation.

 

So basically, on the first Content Server, I installed a Global Registry and a few other docbases. For the creation of the docbases, there was obviously a docbroker installed. This is the global docbroker, which is targeted by default by all docbases through the server.ini. In addition to that, I installed two other docbrokers for specific docbases. The purpose here is to have docbroker N°1 only for the GR + DocBase1 and docbroker N°2 only for the GR + DocBase2 + DocBase3. So I started to configure the projections to achieve that goal.

 

I will use the GR below. If you followed what I mentioned above, I should see – at the end – two projections in the docbase (plus the default one from the server.ini). So let’s configure the projections (with only one Content Server installed):

[dmadmin@content-server-01 ~]$ iapi GR_DocBase
Please enter a user (dmadmin):
Please enter password for dmadmin:


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025


Connecting to Server using docbase GR_DocBase
[DM_SESSION_I_SESSION_START]info:  "Session 010f123450003d07 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_server_config
...
3d0f123450000102
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  title                           :
  ...
  projection_targets            []: <none>
  projection_ports              []: <none>
  projection_proxval            []: <none>
  projection_notes              []: <none>
  projection_enable             []: <none>
  ...
  i_vstamp                        : 28

API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
                                                                                                    
(2 rows affected)

API> set,c,l,projection_targets[0]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[0]
SET> 1493
...
OK
API> set,c,l,projection_proxval[0]
SET> 1
...
OK
API> set,c,l,projection_notes[0]
SET> Dedicated Docbroker N°2
...
OK
API> set,c,l,projection_enable[0]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
  projection_ports             [0]: 1493
  projection_proxval           [0]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
  projection_enable            [0]: T
  ...
  i_vstamp                        : 29

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

API> set,c,l,projection_targets[1]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[1]
SET> 1491
...
OK
API> set,c,l,projection_proxval[1]
SET> 1
...
OK
API> set,c,l,projection_notes[1]
SET> Dedicated Docbroker N°1
...
OK
API> set,c,l,projection_enable[1]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
  projection_ports             [0]: 1493
                               [1]: 1491
  projection_proxval           [0]: 1
                               [1]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
  projection_enable            [0]: T
                               [1]: T
  ...
  i_vstamp                        : 30

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

 

Up to this point, everything sounds good and everything is working. If you try to access DA, you will see two projections for the Global Registry that are the two we defined above:

(screenshot: Projection1)

 

So this is working. Now, what is the goal of this blog? Well, it’s not what is above… So let’s talk about the issue now! The next step in this environment was to set it up in High Availability. As mentioned in my last blog, I faced some issues with the Remote Content Server (RCS) installation, but I was finally able to install a Content-File Server. When I finished the installation of GR_DocBase on the RCS, I wanted to set up the projections between CS1 and CS2… For that, I opened DA again and went to the same screen that you can see above (in the screenshot).

 

What I was expecting to see was the exact same thing as above, meaning two projections of the GR_DocBase with the two docbrokers installed on the CS1. What I saw was a little bit different…:

(screenshot: Projection2)

 

I can assure you that between the two screenshots above, the only thing I did was to install the CFS for GR_DocBase on the RCS… So why is this screen not working anymore, and what’s the issue? “An error has occurred. 1 >= 1”. Yeah sure, what else? ;)

 

I checked the Documentum Administrator logs and found the following stack trace:

05:45:07,980 ERROR [[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'] com.documentum.web.common.Trace - 1 >= 1
java.lang.ArrayIndexOutOfBoundsException: 1 >= 1
        at java.util.Vector.elementAt(Vector.java:474)
        at com.documentum.fc.common.DfList.get(DfList.java:427)
        at com.documentum.fc.common.DfList.getString(DfList.java:561)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.loadConnectionBrokerProjections(ServerConnectionBrokerList.java:78)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.onInit(ServerConnectionBrokerList.java:51)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1604)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1489)
        at com.documentum.web.form.FormProcessor.fireOnInitEvent(FormProcessor.java:1154)
        at com.documentum.web.form.ControlTag.fireFormOnInitEvent(ControlTag.java:794)
        at com.documentum.web.formext.control.component.ComponentIncludeTag.renderEnd(ComponentIncludeTag.java:135)
        at com.documentum.web.form.ControlTag.doEndTag(ControlTag.java:928)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jsp__tag25(__propertysheetwizardcontainer.java:1501)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jspService(__propertysheetwizardcontainer.java:368)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.ResponseHeaderControlFilter.doFilter(ResponseHeaderControlFilter.java:351)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.CompressionFilter.doFilter(CompressionFilter.java:96)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.env.WDKController.processRequest(WDKController.java:144)
        at com.documentum.web.env.WDKController.doFilter(WDKController.java:131)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3683)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3649)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2433)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2281)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2259)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1691)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1651)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)

 

As mentioned in the stack trace (which is more useful than the error message), the problem might come from the docbroker projections… But we checked them just before installing the remote docbase and everything was OK… With CS 7.3, the RCS installation automatically adds a projection on the Primary Content Server dm_server_config object to point to the remote docbroker, and this is what causes this error…

 

So I checked the projections on the GR_DocBase Primary dm_server_config object to see what the issue was, and I got this:

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote.
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 34

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote.      1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

So this looks good, doesn’t it? And yet through DA, it shows this ArrayIndexOutOfBoundsException… I tried a lot of things, but the only way I could solve this issue was by removing the projection that the RCS installer automatically adds and then adding it again… To simplify the process, I included that in our silent scripts so it is done automatically after the creation of the CFS. Just below is a very small extract of the code I added in our silent scripts (shell) in order to remove the projection added by the installer (some variables are obviously defined before this section of code…):

...
echo "  **  "
echo "  **  HA Docbase Configuration - projections"
echo "  **  "
dql_select_docbase="${script_folder}/rcs_select_${docbase}"
dql_update_docbase="${script_folder}/rcs_update_${docbase}"
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_select_docbase}.log 2>&1
SELECT 'projection_id=', i_position*-1-1 as projection_id FROM dm_server_config WHERE LOWER(object_name) like LOWER('${docbase}') AND projection_notes='Projecting primary to remote.' AND LOWER(projection_targets) like LOWER('%`hostname -s`%') enable(ROW_BASED)
go
EOF
if [[ ${?} != 0 ]]; then
  echo "ERROR - There was a problem while gathering the index for ${docbase}. Please see the file '${dql_select_docbase}.log' for error. Exiting."
  exit
fi
index=`grep "^projection_id=[[:space:]]" ${dql_select_docbase}.log | sed 's,^.*=[[:space:]]*\([0-9]*\)[[:space:]]*$,\1,'`
if [[ ${index} == "" || ${index} == '""' ]]; then
  echo "INFO - There is no existing projections from the primary to the remote."
else
  echo -e "INFO - The index of 'dm_server_config.projection_targets' is:  \t${index}"
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_update_docbase}.log 2>&1
UPDATE dm_server_config object REMOVE projection_targets[${index}], REMOVE projection_ports[${index}], REMOVE projection_proxval[${index}], REMOVE projection_notes[${index}], REMOVE projection_enable[${index}] WHERE LOWER(object_name) like LOWER('${docbase}') enable(ROW_BASED)
go
EOF
  nb_error=`grep "_E_" ${dql_update_docbase}.log | wc -l`
  cat "${dql_update_docbase}.log"
  echo ""
  if [[ ${nb_error} != 0 ]]; then
    echo "ERROR - There was a problem while updating the object for ${docbase}. Please see above. Exiting."
    exit
  fi
fi
...

 

This small section of code just shows how this kind of thing can be automated. I’m executing a first DQL to gather the index of the projection added by the RCS installer. This is the correct index because the note added by the RCS is always “Projecting primary to remote.” and the target host is obviously the RCS… To confirm that DA is working again, you can reload the docbroker projection screen; for me it was indeed working.
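Re-adding the projection (the next step, described below) can be scripted in the same way. Here is a minimal iapi sketch, reusing the same ${docbase} and ${install_owner} variables as the extract above, and assuming that index [2] is the next free position and that the target values match the dump shown further below:

# Re-add the primary-to-remote projection, this time without the trailing dot in the note
iapi ${docbase} -U${install_owner} -Pxxx <<EOF
retrieve,c,dm_server_config where object_name='${docbase}'
set,c,l,projection_targets[2]
content-server-02
set,c,l,projection_ports[2]
1489
set,c,l,projection_proxval[2]
2
set,c,l,projection_notes[2]
Projecting primary to remote
set,c,l,projection_enable[2]
T
save,c,l
reinit,c
EOF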

 

So then the next step is to add the projection again, but this time without the “.” at the end of the note (to differentiate them), because we still need this projection for the CS2 to be aware of the docbases on the CS1. You can either do it programmatically using iapi/idql (as sketched above) or with DA directly. After adding this third projection again, I got the following in the docbase (notice that the only difference is the “.” in the notes):

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 36

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote       1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

The result is the same, but I was finally able to see the three projections through DA… I don’t know what exactly caused this error, because I simply re-added the exact same thing with a slightly different note, but I’m sure it was caused by the RCS installation…

(screenshot: Projection3)

 

Yet another interesting behaviour of Documentum… So many ways to have fun! :)

 

 

The article Documentum – RCS 7.3 – Issue with projections appeared first on Blog dbi services.

Documentum – RCS 7.3 – dm_server_config cannot contain dash

Sat, 2017-09-23 09:54

As mentioned in this blog, I was recently installing a new 7.3 P05 environment. Actually, this environment was in HA and it was my first silent installation of a Remote Content Server. I have already written a lot of blogs related to issues with CS 7.3, but since this was my first RCS 7.3 installation, I was kind of expecting to face some issues… I wasn’t disappointed!

 

So let’s start with the first issue I faced during the creation of the Content-File Server (CFS) on the Remote Content Server. As always, I prepared the properties file with all the needed information for an RCS (the properties file isn’t the same as for a normal docbase creation on the Primary CS) and then I launched the process to install the CFS in silent mode for my Global Registry. After only 30 seconds or so, the installation was “done”, so I knew something wasn’t right… When you install a CFS, if there is no docbroker present on the host, it will install one before creating the docbase… Please note that below, the docbroker was already created using a previous silent script, so here I’m just trying to create the CFS.

 

The problem with the silent installation is that if something goes wrong, you have absolutely no feedback… Using the GUI, you always have a message (more or less meaningful) printed on the screen, so you know something isn’t right and you can start checking why. So as a best practice, I would recommend always checking the installation log file for errors when using the silent installation.

 

So what is printed in the installation log file? (don’t look at the first WARN message, it’s because there is no global registry locally, yet)

[dmadmin@content-server-02 ~]$ cd $DM_HOME/install/logs
[dmadmin@content-server-02 logs]$ cat install.log
10:57:51,620  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
10:57:51,640  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsInitializeImportantServerVariables - The installer is gathering system configuration information.
10:57:51,651  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Start to verify the password
10:57:52,088  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/104190.tmp/dfc.keystore
10:57:52,445  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair - generated RSA (2,048-bit strength) mutiformat key pair in 334 ms
10:57:52,473  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential - certificate created for DFC <CN=dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a,O=EMC,OU=Documentum> valid from Thu Sep 7 10:52:52 UTC 2017 to Sun Sep 5 10:57:52 UTC 2027:

10:57:52,474  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/104190.tmp/dfc.keystore
10:57:52,485  INFO [main] com.documentum.fc.client.security.impl.InitializeKeystoreForDfc - [DFC_SECURITY_IDENTITY_INITIALIZED] Initialized new identity in keystore, DFC alias=dfc, identity=dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a
10:57:52,486  INFO [main] com.documentum.fc.client.security.impl.AuthenticationMgrForDfc - identity for authentication is dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a
10:57:52,490  INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 7.3.0040.0025
10:57:52,504  INFO [Timer-2] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
10:57:52,512  WARN [main] com.documentum.fc.client.security.internal.RegistrationMgr - [DFC_SECURITY_GR_PUBLICATION_FAILED] Publication of DFC instance with global registry failed
DfException:: THREAD: main; MSG: [DFC_BOF_GLOBAL_REGISTRY_NOT_CONFIGURED] A global registry is not configured; ERRORCODE: ff; NEXT: null
        at com.documentum.fc.client.security.impl.DfcIdentityPublisher.<init>(DfcIdentityPublisher.java:51)
        ...
        at com.zerog.lax.LAX.main(Unknown Source)
10:57:52,955  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:57:52,984  INFO [main] com.documentum.fc.client.security.internal.AuthenticationMgr - new identity bundle <dfc_q5Dd2d8FbuDOh98h1Ef2aeEcp3a  1602102952       content-server-02         hudL6VVqhfy1whQlqbqfn3xZoOlola5zscimwba4o0MDQbigdzAOi+l54BHFvqc/3auMipaihywp65a5bR4vqvzP55CzzuFjSD+UZa3vJOGiwpKlctdmg45Kl0aOTwrfYH5jEupQ79oUVNY1cNQmAxn3odYFwguvaEp3VxezAbO+cPh8svnKjhvZJm/DFVrmdLnGPu+PHf3jWHbYSfhc+TWDLPqk8dlFTzJTjLnGLnGPu+PHf3jWHbYSfhc+TWDLPqk/13OmxsnXcSUL59QbhU+BBW2/4lsXvSvwxP/8A+/GKsGPSLoTFZJ2nlbJnq0TX5XWPTAG7Emgjeil35cbfax6D/rUQp8kHWPlLWtPvpOaKQ==>
10:57:54,477  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:57:54,526  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Server config object name is invalid
10:57:54,526 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Server config name only can contain characters: A-Z, a-z, 0-9, underscore(_) and dot(.)
com.documentum.install.shared.common.error.DiException: Server config name only can contain characters: A-Z, a-z, 0-9, underscore(_) and dot(.)
        at com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation.setup(DiWASilentRemoteServerValidation.java:82)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:75)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
10:57:54,527  INFO [main]  - The INSTALLER_UI value is SILENT
10:57:54,527  INFO [main]  - The KEEP_TEMP_FILE value is true
10:57:54,527  INFO [main]  - The common.installOwner.password value is ******
10:57:54,527  INFO [main]  - The SERVER.SECURE.ROOT_PASSWORD value is ******
10:57:54,527  INFO [main]  - The common.upgrade.aek.lockbox value is null
10:57:54,527  INFO [main]  - The common.old.aek.passphrase.password value is null
10:57:54,527  INFO [main]  - The common.aek.algorithm value is AES_256_CBC
10:57:54,527  INFO [main]  - The common.aek.passphrase.password value is ******
10:57:54,527  INFO [main]  - The common.aek.key.name value is CSaek
10:57:54,527  INFO [main]  - The common.use.existing.aek.lockbox value is null
10:57:54,528  INFO [main]  - The SERVER.ENABLE_LOCKBOX value is true
10:57:54,528  INFO [main]  - The SERVER.LOCKBOX_FILE_NAME value is lockbox.lb
10:57:54,528  INFO [main]  - The SERVER.LOCKBOX_PASSPHRASE.PASSWORD value is ******
10:57:54,528  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
10:57:54,528  INFO [main]  - The SERVER.DOCBROKER_ACTION value is null
10:57:54,528  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_HOST value is content-server-01
10:57:54,528  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_PORT value is 1489
10:57:54,528  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content-server-02
10:57:54,528  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
10:57:54,528  INFO [main]  - The SERVER.FQDN value is content-server-02
10:57:54,528  INFO [main]  - The SERVER.DOCBASE_NAME value is GR_DocBase
10:57:54,528  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is GR_DocBase
10:57:54,528  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
10:57:54,528  INFO [main]  - The SERVER.SECURE.REPOSITORY_PASSWORD value is ******
10:57:54,528  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
10:57:54,529  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
10:57:54,529  INFO [main]  - The SERVER.REPOSITORY_HOSTNAME value is content-server-01
10:57:54,529  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
10:57:54,529  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_NAME value is
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_PORT value is
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_CONNECT_MODE value is null
10:57:54,529  INFO [main]  - The SERVER.USE_CERTIFICATES value is false
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_FILE_NAME value is null
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME value is null
10:57:54,529  INFO [main]  - The SERVER.DOCBROKER_CIPHER_LIST value is null
10:57:54,529  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE value is null
10:57:54,529  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE_PASSWORD value is ******
10:57:54,529  INFO [main]  - The SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE value is null
10:57:54,529  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
10:57:54,529  INFO [main]  - The SERVER.DOCUMENTUM_DATA value is $DOCUMENTUM/data
10:57:54,530  INFO [main]  - The SERVER.DOCUMENTUM_SHARE value is $DOCUMENTUM/share
10:57:54,530  INFO [main]  - The CFS_SERVER_CONFIG_NAME value is content-server-02_GR_DocBase
10:57:54,530  INFO [main]  - The SERVER.DOCBASE_SERVICE_NAME value is GR_DocBase
10:57:54,530  INFO [main]  - The CLIENT_CERTIFICATE value is null
10:57:54,530  INFO [main]  - The RKM_PASSWORD value is ******
10:57:54,530  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED value is true
10:57:54,530  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is 1489
10:57:54,530  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is content-server-01
10:57:54,530  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is GR_DocBase
10:57:54,530  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is dm_bof_registry
10:57:54,530  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
10:57:54,530  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
10:57:54,530  INFO [main]  - The SERVER.COMPONENT_NAME value is null
10:57:54,530  INFO [main]  - The SERVER.DOCBASE_NAME value is GR_DocBase
10:57:54,531  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
10:57:54,531  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content-server-02
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
10:57:54,531  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is GR_DocBase
10:57:54,531  INFO [main]  - The SERVER.DOCBROKER_NAME value is
10:57:54,531  INFO [main]  - The SERVER.DOCBROKER_PORT value is
10:57:54,531  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
10:57:54,531  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
10:57:54,531  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED_KEY value is null
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is 1489
10:57:54,531  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is content-server-01
10:57:54,531  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is GR_DocBase
10:57:54,532  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is dm_bof_registry
10:57:54,532  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
10:57:54,532  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
10:57:54,532  INFO [main]  - The SERVER.COMPONENT_NAME value is null
10:57:54,532  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is GR_DocBase
10:57:54,532  INFO [main]  - The SERVER.DOCBASE_NAME value is GR_DocBase
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
10:57:54,532  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
10:57:54,532  INFO [main]  - The env PATH value is: /usr/xpg4/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
[dmadmin@content-server-02 logs]$

 

As you can see above, the installation fails because the properties aren’t “correct”. More specifically, the installation fails because the dm_server_config name provided doesn’t match the requirement/validation… Starting with the CS 7.3 GA release, the Content Server team introduced a new validation which forces the dm_server_config name to contain only letters (lower/upper case), numbers, underscores or dots. If you are familiar with Remote Content Servers, you should know that, by default, the CFS installation will create a new dm_server_config object with a name of the form “<hostname>_<service_name>”. So if you are installing a CFS for a docbase “GR_DocBase” with a service name set to “GR_DocBase” (it doesn’t have to be the same value as the docbase name) on a Content Server with a hostname equal to “content-server-02”, then the dm_server_config will, by default, have the following name: content-server-02_GR_DocBase.

 

So if you compare this default value with the one from the log file, you will see that it is the same… Because I knew which default value the GUI installer would have chosen, I put this value in the silent properties file so that we keep our naming conventions on all our environments (7.3 or not). So this is the default value the installer itself would have chosen, and yet the installation fails, all because the hostname contains a simple dash… Funny, isn’t it?
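Until the validation is relaxed, a simple safeguard is to check the intended server config name against the allowed character set before launching the silent installer. A minimal shell sketch (the variable name and the check itself are mine; only the character set comes from the error message above):

# The CS 7.3 CFS installer only accepts A-Z, a-z, 0-9, underscore (_) and dot (.)
cfs_server_config_name="content-server-02_GR_DocBase"
if [[ ! "${cfs_server_config_name}" =~ ^[A-Za-z0-9_.]+$ ]]; then
  echo "ERROR - '${cfs_server_config_name}' would be rejected by the CFS installer validation"
  exit 1
fi

With this hostname the check fails immediately, which is exactly what the installer complains about.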

 

Since this is clearly an issue, I opened an SR with OTX to ask them to either:

  • Explain why the dash (‘-‘) isn’t accepted in the dm_server_config name and, since this is the default value, where it is documented that the hostname of the Content Servers cannot contain any dashes
  • Provide a hotfix for this issue and fix it in the next patch as well

 

OTX obviously recognized that this validation is too strict and they are therefore working on providing a hotfix (via CS-57533) and implementing a more reliable validation. At the moment, I got a first draft hotfix that only solves the silent installation part (so the issue is still present using the GUI). I’m pretty sure it won’t be hard for OTX to solve it on the GUI side as well and to include the fix in a future patch, but I don’t know which one at the moment!

 

 

The article Documentum – RCS 7.3 – dm_server_config cannot contain dash appeared first on Blog dbi services.

Documentum – CS 7.3 – Issue with dmqdocbroker

Sat, 2017-09-23 09:23

At the beginning of this month, I was building a new environment in 7.3 P05 (using the silent installation, but this isn’t relevant here) and I found a strange behaviour of the dmqdocbroker, so I wanted to share it. You will see below an error message “Input has special characters …”. This is specific to CS 7.3; we opened an SR with EMC/OTX about it some months ago (at the beginning of this year, during the first CS 7.3 tests we performed), but the outcome of that SR was that it shouldn’t have any impact on the proper behaviour of the docbroker, so I didn’t blog about it back then. Now, however, it might very well be linked to the issue I will discuss here.

 

So let’s talk about the issue with the dmqdocbroker. In this new environment, we configured several docbases and several docbrokers so that we can have separated applications. I will write another blog related to the projections (with another issue), so here I’m getting directly to the point. The issue I faced is that the first (default) docbroker was working properly: responding to the ping, listing all docbases, and so on… But when I created two additional docbrokers, they weren’t behaving in the same way…

 

To show that, I can simply execute the ping or getdocbasemap command for each docbroker (for this example, I will use the following ports: first docbroker on 1489/1490, second one on 1491/1492, third one on 1493/1494). We are always using secure communications, so that’s why you will always see the secure port below in the replies:

[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1489
Successful reply from docbroker at host (content-server-01) on port(1490) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1491 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1491
dmqdocbroker: Error returned from docbroker request:
[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1491" failed

[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1491" failed

[DM_SESSION_E_RPC_ERROR]error:  "Server communication failure"

java.net.ConnectException: Connection refused

[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1493 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1493
dmqdocbroker: Error returned from docbroker request:
[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DM_SESSION_E_RPC_ERROR]error:  "Server communication failure"

java.net.ConnectException: Connection refused

[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$

 

As you can see, the request to the docbroker “:<port>” failed… But wait, why is there only the port here? Why is the hostname not mentioned in the error message? Well, that’s a pretty good question! For an unknown reason, the request to the first docbroker is working properly (ping successful) but the requests to the two others aren’t. It’s just as if the hostname were known for the first one but not for the others…

 

Since it is possible to specify which host to target, what happens if you do specify it?

[dmadmin@content-server-01 ~]$ echo `hostname -f`
content-server-01
[dmadmin@content-server-01 ~]$ 
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
Successful reply from docbroker at host (content-server-01) on port(1490) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1491 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1491
Successful reply from docbroker at host (content-server-01) on port(1492) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1493 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1493
Successful reply from docbroker at host (content-server-01) on port(1494) running software version (7.3.0050.0039  Linux64).
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$

 

And here all three docbrokers reply successfully… In case you didn’t notice, even the message “Input has special characters …” isn’t displayed here. So for me, this shows that this message might very well be linked to this particular behaviour.
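Until this is fixed, a simple workaround is to never rely on “Targeting current host” and to always pass the target explicitly. A small wrapper function (just a sketch, the function name is mine) could do that for you:

# sketch: always pass the current FQDN to dmqdocbroker instead of relying
# on "Targeting current host"; function name is mine, adjust as needed
dmqdocbroker_t () {
  dmqdocbroker -t "$(hostname -f)" "$@"
}
# usage: dmqdocbroker_t -p 1493 -c ping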

 

I’m not 100% sure what the issue is (I didn’t get time to open a SR for this yet) but if I had to guess, I would suggest you read carefully the next blogs I will post today, because this issue might be linked to the dash (‘-‘) (<– no, this isn’t a smiley!) in the hostname.

 

I only showed the ping above but I can reproduce the same behaviour with getdocbasemap, for example:

[dmadmin@content-server-01 ~]$ dmqdocbroker -p 1493 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Targeting current host
$DM_HOME/bin/dmawk: Input has special characters ...
 source line number 624
Using specified port: 1493
dmqdocbroker: Error returned from docbroker request:
[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DFC_DOCBROKER_REQUEST_FAILED] Request to Docbroker ":1493" failed

[DM_SESSION_E_RPC_ERROR]error:  "Server communication failure"

java.net.ConnectException: Connection refused

[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$
[dmadmin@content-server-01 ~]$ dmqdocbroker -t `hostname -f` -p 1493 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1493
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content-server-01
Docbroker port            : 1494
Docbroker network address : INET_ADDR: 03 52f 91f62f98 content-server-01 156.174.191.17
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : DocBase1
Docbase id          : 1000002
Docbase description : DocBase1 Repository
Govern docbase      :
Federation name     :
Server version      : 7.3.0050.0039  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : GR_DocBase
Docbase id          : 1000001
Docbase description : GR_DocBase Repository
Govern docbase      :
Federation name     :
Server version      : 7.3.0050.0039  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@content-server-01 ~]$

 

If you don’t see this issue with a hostname that doesn’t contain any dash, please let me know! That would support the theory that both issues are linked to it… As mentioned earlier, you will see in the next blog why I think this has something to do with the dash ;).

 

 

Cet article Documentum – CS 7.3 – Issue with dmqdocbroker est apparu en premier sur Blog dbi services.

Be careful when putting the Oracle ADR on xfs, or better 4K sector format drives

Thu, 2017-09-21 07:09

Today, after a fresh setup of a Grid Infrastructure cluster (12.1.0.2.170814), we faced two issues reported in the alert.log of the ASM instances (in fact you would see the same in the alert log of any instance in that configuration, but we did not have any other instance up and running at that time):

This:

ORA-00700: soft internal error, arguments: [dbgrfrbf_1], [/disk00/app/grid/diag/asm/+asm/+ASM2/metadata/INC_METER_SUMMARY.ams], [0], [4], [], [], [], [], [], [], [], []
ORA-27072: File I/O error
Linux-x86_64 Error: 22: Invalid argument
Additional information: 4
Additional information: 1
Additional information: 4294967295

… and this:

ERROR: create the ADR schema in the specified ADR Base directory [/disk00/app/grid]
ERROR: The ORA-48178 error is caused by the ORA-48101 error. 
ORA-48101: error encountered when attempting to read a file [block] [/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ADR_INTERNAL.mif] [0]
ORA-27072: File I/O error
Linux-x86_64 Error: 22: Invalid argument
Additional information: 4
Additional information: 1
Additional information: 4294967295

As it turned out, this was an issue with the xfs block size. In the configuration we had, the block size was set to 4096 (chosen by default when the file system was created):

$:/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ [+ASM1] xfs_info /disk00
meta-data=/dev/mapper/vg_root-lv_disk00 isize=512    agcount=4, agsize=13107200 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=52428800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=25600, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$:/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ [+ASM1] 

After changing that to 512 all was fine again:

$:/home/ [+ASM1] xfs_info /disk00/
meta-data=/dev/mapper/disk00     isize=256    agcount=4, agsize=104857600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=512    blocks=419430400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=512    blocks=204800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$:/home/ [+ASM1] 

This was not so easy, though, because the Cisco UCS systems we used came with internal disks that have a 4K sector size, so we could not even create a new xfs file system with the required settings on them:

[root@xxxx ~]# mkfs.xfs -s size=512 -m crc=0 -b size=512
        /dev/mapper/vg_root-lv_disk00_test 

      illegal sector size 512; hwsector is 4096
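If you face something similar, it may be worth checking which sector sizes the underlying device actually reports before creating the file system. A possible check (the device path is only an example) could be:

# sketch: show the logical sector size and physical block size of the device
blockdev --getss --getpbsz /dev/mapper/vg_root-lv_disk00_test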

The solution was to get a LUN from the storage and use that instead. Another important note from the data sheet linked above:

NOTE: 4K format drives are supported and qualified as bootable with Cisco UCS Manager Release 3.1(2b) and later versions. However, 4K sector format drives do not support VMware and require UEFI boot.

Be careful …

 

Cet article Be careful when putting the Oracle ADR on xfs, or better 4K sector format drives est apparu en premier sur Blog dbi services.

SQL Saturday Toulouse 2017: SQL Server on Linux – Slide deck

Mon, 2017-09-18 03:39

#SQLSatToulouse was definitely a great event. Firstly, a big thanks to the organization team (@Guss and @Fredg_31) as well as all the sponsors and attendees (without whom this kind of event would not be possible).


As promised, here is the slide deck of my session “Introduction to SQL Server on Linux for DBAs“. It was a great moment of sharing for me.

Thanks to all!

 

 

Cet article SQL Saturday Toulouse 2017: SQL Server on Linux – Slide deck est apparu en premier sur Blog dbi services.

DOAG IMC Day 2017

Mon, 2017-09-18 03:04


This year, before taking part in the DOAG conference in Nuremberg as a speaker – yes, I will present at the DOAG this year – I had the opportunity to attend the DOAG IMC Day in Berlin. It was the SIG IMW, dedicated to infrastructure and operating systems.

It was my first time in Berlin :) Very nice city.

The event was well organized at the DOAG “Dienstleistung” office. All the facilities were there.

So let’s get back to the event: there were a lot of interesting sessions. At the beginning, Jan-Peter Timmermann presented the DOAG community and how it is split up. The event I took part in was part of the Middleware community.

After that, Franck Burkhardt started with a session about Forms and security. He shared his experience from the implementation of Oracle Access Management for an existing CRM environment that was already using an SSO mechanism with AD. The challenge was to use the Windows Native Authentication provided by OAM with WebGate and Forms applications. The Forms applications were configured to use SSO. The architectural challenges were presented.

Then Jan-Peter Timmermann gave a useful session about troubleshooting Oracle FMW 12c Forms/Reports. He presented some issues he had faced and provided information on how to start the investigation and solve them.

Then there were two interesting sessions around the Cloud, one given by Jan Brosowski and one by Danilo Schmiedel.

Then came the most exciting session of the day for me, about Ansible and WebLogic. The presenter, Grzegorz Lysko, explained what Ansible is and what we can do with it. He gave some samples showing how Fusion Middleware can easily be deployed on multiple hosts, including clustering features and Fusion Middleware components. It was the most interesting for me because I also developed Ansible scripts to deploy production environments running Oracle Fusion Middleware components, and I was able to compare the approach he had chosen with mine. The scripts I developed allow an OFM infrastructure including Reports and Forms to be deployed in less than 25 minutes, all based on our defined best practices. They also check whether the required Linux packages are installed and install them if they are missing. They deploy at the same time the JDK and our WebLogic DMK, which is a kind of management kit that makes it easy to manage a WebLogic domain and its system components, if any.

The last session covered JVM internals and some known issues we can have with the JVM, mostly the OOM issues with perm space, heap size, swap space and so on.

It was really an interesting day at the DOAG in Berlin. Let’s see them again next year. Why not to present something myself.

 

Cet article DOAG IMC Day 2017 est apparu en premier sur Blog dbi services.

Active Data Guard services in Multitenant

Fri, 2017-09-15 17:36

A database (or the CDB in multitenant) registers its name as the default service. When a standby database is on the same server, or same cluster, you have no problem because this database name is the db_unique_name which is different between the primary and the standby(s).

In multitenant, in addition to that, each PDB registers its name as a service. But the PDB name is the same in the primary and the standby database. This means that we have the same service name registered for the PDB in primary and standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

We cannot change that, so it is strongly recommended to create different services for the PDB on the primary and on the standby.

The PDB default service name

Here is what we want to avoid.
I’ve a container database (db_name=CDB2) with its primary (db_unique_name=CDB2A) and standby (db_unique_name=CDB2B) on the same server, registered to the same listener:

Service "59408d6bed2c1c8ee0536a4ea8c0cfa9" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2A" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2AXDB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGMGRL" has 1 instance(s).
Instance "CDB2A", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2B" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2BXDB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGMGRL" has 1 instance(s).
Instance "CDB2B", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2_CFG" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

The PDB1 service is registered from both instances, and then when I use it in my connection string I’m connected at random to the primary or the standby:

22:27:46 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:27:51 SQL> select * from v$instance;
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:00 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:06 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:07 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:10 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:11 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:13 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE

I don’t want to use a service that connects at random, so I need to create different services.

Read-Only service for the Active Data Guard standby

I’m in Oracle Restart and I create the service with srvctl (but you can also create it with dbms_service when not running with Grid Infrastructure):


srvctl add service -db cdb2b -service pdb1_ro -pdb pdb1 -role physical_standby

This creates the service for the standby database (CDB2B) to be started when in physical standby role, and the service connects to the pluggable database PDB1.
But I cannot start it:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1
 
 
PRCD-1084 : Failed to start service pdb1_ro
PRCR-1079 : Failed to start resource ora.cdb2b.pdb1_ro.svc
CRS-5017: The resource action "ora.cdb2b.pdb1_ro.svc start" encountered the following error:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_SERVICE", line 5
ORA-06512: at "SYS.DBMS_SERVICE", line 288
ORA-06512: at line 1
. For details refer to "(:CLSN00107:)" in "/u01/app/12.2/diag/crs/vm106/crs/trace/ohasd_oraagent_oracle.trc".
 
CRS-2674: Start of 'ora.cdb2b.pdb1_ro.svc' on 'vm106' failed

The reason is that the service information must be stored in the dictionary, SYS.SERVICE$ table, and you cannot do that on a read-only database.

This has been explained a long time ago by Ivica Arsov on his blog: https://iarsov.com/oracle/data-guard/active-services-on-physical-standby-database/ and nothing has changed. You need to create the service on the primary so that the update of SYS.SERVICE$ is propagated to the standby database through log shipping:


srvctl add service -db cdb2a -service pdb1_ro -pdb pdb1 -role physical_standby

This is not sufficient because the insert in SYS.SERVICE$ does not occur yet:

SQL> alter session set container=PDB1;
 
Session altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- --------- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1

As explained by Ivica in his blog post, we need to start the service once to have the row inserted in SERVICE$:

srvctl start service -db cdb2a -service pdb1_ro -pdb pdb1
srvctl stop service -db cdb2a -service pdb1_ro

Now the service information is persistent in the dictionary:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1
1 pdb1_ro 1562179816 pdb1_ro 15-SEP-17 1301388390 0 0 0 8 PDB1 86400 300 DYNAMIC ANY 0 0 0 0

This is from the primary, but after the redo has been transported and applied, I have the same on the standby. Now I can start the service I’ve created for the standby:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1

Here is the new service registered on the listener, which I can use to connect to the read-only PDB1 on the Active Data Guard standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).

Read-Write service for the primary

You can see above that in order to select from SERVICE$ I connected to CDB$ROOT and switched to PDB1 with ‘set container’. There’s no other choice because using the service name directs me at random to any instance. Then, I need a service to connect to the primary only, and I’ll call it PDB1_RW as it is opened in Read Write there.

srvctl add service -db cdb2a -service pdb1_rw -pdb pdb1 -role primary
srvctl start service -db cdb2a -service pdb1_rw

Finally, here are the services registered from the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...

I’ll probably never use the ‘PDB1’ service because I want to know where I connect.

In case of switchover, I also create the Read Write service for the standby:

srvctl add service -db cdb2b -service pdb1_rw -pdb pdb1 -role primary

Here are the resources when CDB2A is the primary:

$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2a.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------

I test a switchover to CDB2B:

$ dgmgrl sys/oracle
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Sep 15 23:41:26 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
Connected to "CDB2B"
Connected as SYSDG.
DGMGRL> switchover to cdb2b;
Performing switchover NOW, please wait...
New primary database "cdb2b" is opening...
Oracle Clusterware is restarting database "cdb2a" ...
Switchover succeeded, new primary is "cdb2b"

Here are the services:

[oracle@VM106 blogs]$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2a.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
--------------------------------------------------------------------------------

So what?

The recommendations are not new here:

  • Always do the same on the primary and the standby: create services on both sites, then have them started depending on the role
  • Always use one or several application services rather than the default one, in order to have better control and flexibility over where you connect

In multitenant, because services are mandatory to connect to a container with a local user, all the recommendations about services are even more important than before. If you follow them, you will see that multitenant is not difficult at all.

This case may seem improbable, because you probably don’t put the standby on the same server or cluster as the primary. But you may have several standby databases on the same server. As for the service registered from the PDB name, just don’t use it. I’m more concerned about the GUID service name (here 59408d6bed2c1c8ee0536a4ea8c0cfa9), which is also declared by both databases. If you plan to use online PDB relocate in a Data Guard configuration, then be careful with that. I’ve not tested it, but it is probably better to keep the standby PDB closed, or at least not to register it on the same listener.

 

Cet article Active Data Guard services in Multitenant est apparu en premier sur Blog dbi services.

12c Access Control Lists

Sun, 2017-09-10 14:39

There is already enough information about the new simplified 12c way to define Access Control Lists, such as in oracle-base.
I’m just posting my example here to show how easy it is.

If, as a non-SYS user you want to connect to a host with TCP, you get an error:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
SQL>
SQL>
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
Error starting at line : 27 File @ /media/sf_share/122/blogs/12cacl.sql
In command -
declare
c utl_tcp.connection;
n number:=0;
begin
c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
end;
Error report -
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 19
ORA-06512: at "SYS.UTL_TCP", line 284
ORA-06512: at line 5
24247. 00000 - "network access denied by access control list (ACL)"
*Cause: No access control list (ACL) has been assigned to the target
host or the privilege necessary to access the target host has not
been granted to the user in the access control list.
*Action: Ensure that an access control list (ACL) has been assigned to
the target host and the privilege necessary to access the target
host has been granted to the user.
SQL>

Here are the ACLs defined by default:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE

So, I add an ACL to access towel.blinkenlights.nl on the telnet port (23) for my user DEMO1:

SQL> exec dbms_network_acl_admin.append_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list =>xs$name_list('connect'),principal_name=>'DEMO1',principal_type =>xs_acl.ptype_db));
 
PL/SQL procedure successfully completed.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
towel.blinkenlights.nl 23 23 NETWORK_ACL_5876ADC67B6635CEE053684EA8C0F378 000000008000281F SYS
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE
towel.blinkenlights.nl 23 23 1 GRANT NO DEMO1 DATABASE CONNECT

Now I can connect from my user:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
 
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
PL/SQL procedure successfully completed.
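And if you need to remove this access later, there is a corresponding remove procedure. The following is just a sketch mirroring the append call above:

exec dbms_network_acl_admin.remove_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list=>xs$name_list('connect'),principal_name=>'DEMO1',principal_type=>xs_acl.ptype_db));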

If you don’t know why I used towel.blinkenlights.nl, then just try to telnet to it and have fun…

 

Cet article 12c Access Control Lists est apparu en premier sur Blog dbi services.

Create constraints in your datawarehouse – why and how

Fri, 2017-09-08 14:13

We still see developers not declaring referential integrity constraints in datawarehouse databases because they think they don’t need them (the integrity of the data has already been validated by the ETL). Here is a small demo I did to show why you need to declare them, and how to do it without adding any overhead to the ETL.

Test case

I create 3 dimension tables and 1 fact table:

21:01:18 SQL> create table DIM1 (DIM1_ID number, DIM1_ATT1 varchar2(20));
Table DIM1 created.
 
21:01:19 SQL> create table DIM2 (DIM2_ID number, DIM2_ATT1 varchar2(20));
Table DIM2 created.
 
21:01:20 SQL> create table DIM3 (DIM3_ID number, DIM3_ATT1 varchar2(20));
Table DIM3 created.
 
21:01:21 SQL> create table FACT (DIM1_ID number, DIM2_ID number, DIM3_ID number,MEAS1 number);
Table FACT created.

I insert 10 million rows into the fact table:

21:01:22 SQL> insert into FACT select mod(rownum,3),mod(rownum,5),mod(rownum,10),rownum from xmltable('1 to 10000000');
10,000,000 rows inserted.
 
Elapsed: 00:00:18.983

and fill the dimension tables from it:

21:01:42 SQL> insert into DIM1 select distinct DIM1_ID,'...'||DIM1_ID from FACT;
3 rows inserted.
 
Elapsed: 00:00:01.540
 
21:01:52 SQL> insert into DIM2 select distinct DIM2_ID,'...'||DIM2_ID from FACT;
5 rows inserted.
 
Elapsed: 00:00:01.635
 
21:01:57 SQL> insert into DIM3 select distinct DIM3_ID,'...'||DIM3_ID from FACT;
10 rows inserted.
 
Elapsed: 00:00:01.579
 
21:01:58 SQL> commit;
Commit complete.

Query joining fact with one dimension

I’ll run the following query:

21:01:58 SQL> select count(*) from FACT join DIM1 using(DIM1_ID);
 
COUNT(*)
--------
10000000
 
Elapsed: 00:00:01.015

Here is the execution plan:

21:02:12 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4pqjrjkc7sn17, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID)
 
Plan hash value: 1826335751
 
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7514 (100)| |
| 1 | SORT AGGREGATE | | 1 | 26 | | |
|* 2 | HASH JOIN | | 10M| 253M| 7514 (2)| 00:00:01 |
| 3 | TABLE ACCESS FULL| DIM1 | 3 | 39 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| FACT | 10M| 126M| 7482 (1)| 00:00:01 |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

Actually, we don’t need that join. A dimension table has two goals:

  • filter facts on the dimension attributes. Example: filter on customer last name
  • add dimension attributes to the result. Example: add customer first name

Here, there is no WHERE clause on DIM1 columns, and no columns from DIM1 selected. We don’t need to join to DIM1. However, we often see those useless joins for two reasons:

  • We query a view that joins the fact with all dimensions
  • The query is generated by a reporting tool which always joins to the dimensions
Join elimination

The Oracle optimizer is able to remove those kinds of unnecessary joins. But one piece of information is missing here for the optimizer. We know that all rows in the fact table have a matching row in each dimension, but Oracle doesn’t know that. And if a row had no matching row, then the inner join should not return it. For this reason, the join must be done.

Let’s give this information to the optimizer: declare the foreign key from FACT to DIM1 so that Oracle knows that there is a many-to-one relationship:

21:02:17 SQL> alter table DIM1 add constraint DIM1_PK primary key(DIM1_ID);
Table DIM1 altered.
 
Elapsed: 00:00:00.051
 
21:02:20 SQL> alter table FACT add constraint DIM1_FK foreign key(DIM1_ID) references DIM1;
Table FACT altered.
 
Elapsed: 00:00:03.210

I’ve spent 3 seconds here to create this foreign key (it would have been much longer with a real fact table with lots of columns and rows) but now the optimizer is able to eliminate the join:

21:02:24 SQL> select count(*) from FACT join DIM1 using(DIM1_ID);
 
COUNT(*)
--------
10000000
 
21:02:25 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4pqjrjkc7sn17, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7488 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 10M| 126M| 7488 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM1_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

No join is needed here, and the query is faster. This is exactly the point of this blog post: to show you that declaring constraints improves query performance. It gives information to the optimizer, just like statistics do. Statistics give estimated cardinalities; foreign keys give the exact cardinality (many-to-one).

No validate

When loading a datawarehouse, you usually don’t need to validate the constraints because data was bulk loaded from a staging area where all data validation has been done. You don’t want to spend time validating constraints (the 3 seconds in my small example above) and this is why some datawarehouse developers do not declare constraints.

However, we can declare constraints without validating them. Let’s do that for the second dimension table:

21:02:34 SQL> alter table DIM2 add constraint DIM2_PK primary key(DIM2_ID) novalidate;
Table DIM2 altered.
 
Elapsed: 00:00:00.018
21:02:35 SQL> alter table FACT add constraint DIM2_FK foreign key(DIM2_ID) references DIM2 novalidate;
Table FACT altered.
 
Elapsed: 00:00:00.009

That was much faster than the 3 seconds we had for the ‘validate’ constraint, which is the default. Creating a constraint in NOVALIDATE is immediate and does not depend on the size of the table.

However this is not sufficient to get the join elimination:

21:02:39 SQL> select count(*) from FACT join DIM2 using(DIM2_ID);
 
COUNT(*)
--------
10000000
 
21:02:40 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4t9g2n6duw0jf, child number 0
-------------------------------------
select count(*) from FACT join DIM2 using(DIM2_ID)
 
Plan hash value: 3858910383
 
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7518 (100)| |
| 1 | SORT AGGREGATE | | 1 | 26 | | |
|* 2 | HASH JOIN | | 10M| 253M| 7518 (2)| 00:00:01 |
| 3 | INDEX FULL SCAN | DIM2_PK | 5 | 65 | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| FACT | 10M| 126M| 7488 (1)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

The constraint ensures that no rows will be inserted without a matching row in the dimension. However, because Oracle has not validated the existing rows itself, it does not apply the join elimination, just in case a previously existing row has no matching dimension.

Rely novalidate

If you want the optimizer to do the join elimination on a ‘novalidate’ constraint, then it has to trust you and rely on the constraint that you have validated yourself.

RELY is an attribute of the constraint that you can set:

21:02:44 SQL> alter table DIM2 modify constraint DIM2_PK rely;
Table DIM2 altered.
 
Elapsed: 00:00:00.016
 
21:02:45 SQL> alter table FACT modify constraint DIM2_FK rely;
Table FACT altered.
 
Elapsed: 00:00:00.010

But this is not sufficient. You told Oracle to rely on your constraint, but Oracle must trust you.

Trusted

The join elimination is a rewrite of the query and, by default, rewrite is enabled but only when integrity is enforced by Oracle:

21:02:50 SQL> show parameter query_rewrite
NAME TYPE VALUE
----------------------- ------ --------
query_rewrite_enabled string TRUE
query_rewrite_integrity string ENFORCED

Let’s allow our session’s rewrite transformations to trust our RELY constraints:

21:02:52 SQL> alter session set query_rewrite_integrity=trusted;
Session altered.

Now, joining to DIM2 without using DIM2 columns outside of the join allows join elimination:

21:02:57 SQL> select count(*) from FACT join DIM2 using(DIM2_ID);
 
COUNT(*)
--------
10000000
 
Elapsed: 00:00:00.185
21:02:58 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4t9g2n6duw0jf, child number 0
-------------------------------------
select count(*) from FACT join DIM2 using(DIM2_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7494 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 10M| 126M| 7494 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM2_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- rely constraint used for this statement

In 12.2 the execution plan has a note to show that the plan depends on a RELY constraint.

From this example, you can see that you can, and should, create RELY NOVALIDATE constraints on tables where you know the existing data is valid. They are created immediately, without any overhead on the load process, and help to improve the queries generated on your dimensional model.

Rely Disable

I said that a NOVALIDATE constraint has no overhead when created, but you may have further inserts or updates in your datawarehouse. And then, those constraints will have to be verified and this may add a little overhead. In this case, you can go further and disable the constraint:

21:03:04 SQL> alter table DIM3 add constraint DIM3_PK primary key(DIM3_ID) rely;
Table DIM3 altered.
 
Elapsed: 00:00:00.059
 
21:03:05 SQL> alter table FACT add constraint DIM3_FK foreign key(DIM3_ID) references DIM3 rely disable novalidate;
Table FACT altered.
 
Elapsed: 00:00:00.014

Note that I had to set the referenced constraint DIM3_PK to RELY here, even though it is enabled and validated, or I would get: ORA-25158: Cannot specify RELY for foreign key if the associated primary key is NORELY.

My session still trusts RELY constraints for query rewrite:

21:03:07 SQL> show parameter query_rewrite
 
NAME TYPE VALUE
----------------------- ------ -------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED

Now, the join elimination occurs:

21:03:08 SQL> select count(*) from FACT join DIM3 using(DIM3_ID);
 
COUNT(*)
--------
10000000
 
21:03:09 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 3bhs523zyudf0, child number 0
-------------------------------------
select count(*) from FACT join DIM3 using(DIM3_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7505 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 11M| 138M| 7505 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM3_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- rely constraint used for this statement

So, we can still benefit from the query optimization even with the RELY DISABLE NOVALIDATE.

But I would not recommend this. Be careful. Here are my foreign key constraints:

21:03:15 SQL> select table_name,constraint_type,constraint_name,status,validated,rely from all_constraints where owner='
DEMO' and table_name='FACT' order by 4 desc,5 desc,6 nulls last;
 
TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME STATUS VALIDATED RELY
---------- --------------- --------------- ------ --------- ----
FACT R DIM1_FK ENABLED VALIDATED
FACT R DIM2_FK ENABLED NOT VALIDATED RELY
FACT R DIM3_FK DISABLED NOT VALIDATED RELY

For DIM1_FK and DIM2_FK, the constraints protect us from inconsistencies:

21:03:17 SQL> insert into FACT(DIM1_ID)values(666);
 
Error starting at line : 1 in command -
insert into FACT(DIM1_ID)values(666)
Error report -
ORA-02291: integrity constraint (DEMO.DIM1_FK) violated - parent key not found

But the disabled one will allow inconsistencies:

21:03:19 SQL> insert into FACT(DIM3_ID)values(666);
1 row inserted.

That’s bad. I roll this back immediately:

21:03:20 SQL> rollback;
Rollback complete.

Star transformation

Join elimination is not the only transformation that needs to know about the many-to-one relationship between fact tables and dimensions. You usually create a bitmap index on each foreign key to the dimensions, to get the best selectivity when accessing the fact rows from a combination of criteria on the dimension attributes.

21:03:24 SQL> create bitmap index FACT_DIM1 on FACT(DIM1_ID);
Index FACT_DIM1 created.
 
21:03:29 SQL> create bitmap index FACT_DIM2 on FACT(DIM2_ID);
Index FACT_DIM2 created.
 
21:03:33 SQL> create bitmap index FACT_DIM3 on FACT(DIM3_ID);
Index FACT_DIM3 created.

Here is the kind of query with predicates on the attributes of each dimension:

21:03:35 SQL> select count(*) from FACT
2 join DIM1 using(DIM1_ID) join DIM2 using(DIM2_ID) join DIM3 using(DIM3_ID)
3 where dim1_att1='...0' and dim2_att1='...0' and dim3_att1='...0';
 
COUNT(*)
--------
333333

By default, the optimizer applies those predicates to the dimensions and does a cartesian join to get all accepted combinations of dimension IDs. Then the rows can be fetched from the table:

21:03:37 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 01jmjv0sz1dpq, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID) join DIM2
using(DIM2_ID) join DIM3 using(DIM3_ID) where dim1_att1='...0' and
dim2_att1='...0' and dim3_att1='...0'
 
Plan hash value: 1924236134
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 5657 (100)| |
| 1 | SORT AGGREGATE | | 1 | 114 | | |
| 2 | NESTED LOOPS | | 55826 | 6215K| 5657 (1)| 00:00:01 |
| 3 | MERGE JOIN CARTESIAN | | 1 | 75 | 9 (0)| 00:00:01 |
| 4 | MERGE JOIN CARTESIAN | | 1 | 50 | 6 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | DIM1 | 1 | 25 | 3 (0)| 00:00:01 |
| 6 | BUFFER SORT | | 1 | 25 | 3 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
| 8 | BUFFER SORT | | 1 | 25 | 6 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | DIM3 | 1 | 25 | 3 (0)| 00:00:01 |
| 10 | BITMAP CONVERSION COUNT | | 55826 | 2126K| 5657 (1)| 00:00:01 |
| 11 | BITMAP AND | | | | | |
|* 12 | BITMAP INDEX SINGLE VALUE| FACT_DIM3 | | | | |
|* 13 | BITMAP INDEX SINGLE VALUE| FACT_DIM2 | | | | |
|* 14 | BITMAP INDEX SINGLE VALUE| FACT_DIM1 | | | | |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
5 - filter("DIM1"."DIM1_ATT1"='...0')
7 - filter("DIM2"."DIM2_ATT1"='...0')
9 - filter("DIM3"."DIM3_ATT1"='...0')
12 - access("FACT"."DIM3_ID"="DIM3"."DIM3_ID")
13 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
14 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")

Here, rows are fetched from the fact table through a nested loop from the cartesian join on the dimensions, using the bitmap index access for each loop. If there are a lot of rows to fetch, then the optimizer will choose a hash join and will have to full scan the fact table, which is expensive. To lower that cost, the optimizer can add an ‘IN (SELECT DIM_ID FROM DIM WHERE DIM_ATT)’ for very selective dimensions. This is the STAR transformation, and it relies on the foreign key constraints.

It is not enabled by default:

21:03:43 SQL> show parameter star
NAME TYPE VALUE
---------------------------- ------- -----
star_transformation_enabled string FALSE

We can enable it and then it is a cost based transformation:

21:03:45 SQL> alter session set star_transformation_enabled=true;
Session altered.

Here is my example:

21:03:47 SQL> select count(*) from FACT
2 join DIM1 using(DIM1_ID) join DIM2 using(DIM2_ID) join DIM3 using(DIM3_ID)
3 where dim1_att1='...0' and dim2_att1='...0' and dim3_att1='...0';
 
COUNT(*)
--------
333333

The star transformation, changing a join into an ‘IN()’, is possible only when we know that there is a many-to-one relationship. We have all the constraints for that, disabled or not, validated or not, but all in RELY. So the star transformation can occur:

21:03:51 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 01jmjv0sz1dpq, child number 1
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID) join DIM2
using(DIM2_ID) join DIM3 using(DIM3_ID) where dim1_att1='...0' and
dim2_att1='...0' and dim3_att1='...0'
 
Plan hash value: 1831539117
 
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 68 (100)| |
| 1 | SORT AGGREGATE | | 1 | 38 | | |
|* 2 | HASH JOIN | | 2 | 76 | 68 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
| 4 | VIEW | VW_ST_62BA0C91 | 8 | 104 | 65 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 8 | 608 | 56 (0)| 00:00:01 |
| 6 | BITMAP CONVERSION TO ROWIDS| | 8 | 427 | 22 (5)| 00:00:01 |
| 7 | BITMAP AND | | | | | |
| 8 | BITMAP MERGE | | | | | |
| 9 | BITMAP KEY ITERATION | | | | | |
|* 10 | TABLE ACCESS FULL | DIM1 | 1 | 25 | 3 (0)| 00:00:01 |
|* 11 | BITMAP INDEX RANGE SCAN| FACT_DIM1 | | | | |
| 12 | BITMAP MERGE | | | | | |
| 13 | BITMAP KEY ITERATION | | | | | |
|* 14 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
|* 15 | BITMAP INDEX RANGE SCAN| FACT_DIM2 | | | | |
| 16 | BITMAP MERGE | | | | | |
| 17 | BITMAP KEY ITERATION | | | | | |
|* 18 | TABLE ACCESS FULL | DIM3 | 1 | 25 | 3 (0)| 00:00:01 |
|* 19 | BITMAP INDEX RANGE SCAN| FACT_DIM3 | | | | |
| 20 | TABLE ACCESS BY USER ROWID | FACT | 1 | 25 | 43 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("ITEM_1"="DIM2"."DIM2_ID")
3 - filter("DIM2"."DIM2_ATT1"='...0')
10 - filter("DIM1"."DIM1_ATT1"='...0')
11 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")
14 - filter("DIM2"."DIM2_ATT1"='...0')
15 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
18 - filter("DIM3"."DIM3_ATT1"='...0')
19 - access("FACT"."DIM3_ID"="DIM3"."DIM3_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- star transformation used for this statement
- this is an adaptive plan

Here, each dimension drives a range scan on the bitmap index: the predicate on the dimension table returns the dimension ID for the index lookup on the fact table. The big advantage of bitmap indexes here is that when this access path is used for several dimensions, the bitmap results can be combined before going to the table. This transformation avoids the join, so you must be sure that there is a many-to-one relationship.

In summary

As you should rely on the integrity of the data in your datawarehouse, these are the parameters to consider for queries on fact-dimension schemas:

NAME TYPE VALUE
---------------------------- ------- ------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED
star_transformation_enabled string FALSE

And you should define all constraints. When you are sure about the integrity of the data, those constraints can be created RELY ENABLE NOVALIDATE. If some processing must be optimized by not enforcing the constraint verification, then you may create those constraints as RELY DISABLE NOVALIDATE, but the gain will probably be minimal and the risk high. And remember that well-controlled processes are not the only things that update data: one day you may have to do a manual update to fix something, and an enabled constraint can prevent terrible errors.
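Putting it together, the recommended pattern could look like the following minimal template (table, column and constraint names are placeholders, to be adapted to your model):

-- placeholders only: adapt table, column and constraint names
alter session set query_rewrite_integrity=trusted;
alter session set star_transformation_enabled=true;
alter table DIM1 add constraint DIM1_PK primary key(DIM1_ID) rely enable novalidate;
alter table FACT add constraint DIM1_FK foreign key(DIM1_ID) references DIM1 rely enable novalidate;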

I have not covered all the optimizer transformations that rely on constraints. When using materialized views, the rewrite capability also relies on constraints. Relationship cardinality is one of the most important pieces of information in database design, and it must be known by the optimizer.

 

Cet article Create constraints in your datawarehouse – why and how est apparu en premier sur Blog dbi services.

impdp content=metadata_only locks the stats

Wed, 2017-09-06 15:20

With Oracle you can learn something every day. Today, preparing a migration to 12.2, I found that all tables had locked statistics. I learned that this is the expected behavior since 10.2 when importing metadata_only including statistics: it avoids the automatic statistics gathering job coming along and replacing the stats with ‘0 rows’.

It is documented in DataPump Import Without Data Locks Table Statistics (Doc ID 415081.1), but I was really surprised by it (and also frustrated to learn a 10.2 thing when migrating to 12.2), so I wanted to test it myself.

I create a table DEMO with statistics:

SQL> connect sys/oracle@//localhost/pdb1 as sysdba
Connected.
SQL> create table demo.demo as select * from dual;
Table created.
 
SQL> create index demo.demo on demo.demo(dummy);
Index created.
 
SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.
 
SQL> create or replace directory TMP as '/tmp';
Directory created.
 
SQL> select count(*) from DEMO.DEMO;
 
COUNT(*)
----------
1
 
SQL> select object_type from dba_objects where owner='DEMO' and object_name='DEMO';
 
OBJECT_TYPE
-----------------------
TABLE
INDEX
 
SQL> select owner,table_name,last_analyzed,stattype_locked,num_rows from dba_tab_statistics where owner='DEMO' and table_name='DEMO';
 
OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO 06-SEP-17 1

I export it:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "DEMO"."DEMO" 5.054 KB 1 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/tmp/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Sep 6 19:14:44 2017 elapsed 0 00:00:09

And drop it:

SQL> connect sys/oracle@//localhost/pdb1 as sysdba
Connected.
SQL> drop table demo.demo;
Table dropped.

Now I import the metadata only (for example because I want to change the NLS semantics before importing the data):

Import: Release 12.2.0.1.0 - Production on Wed Sep 6 19:21:28 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP content=metadata_only exclude=index
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_TABLE_01" successfully completed at Wed Sep 6 19:21:39 2017 elapsed 0 00:00:11

If I check the statistics:

SQL> select owner,table_name,last_analyzed,stattype_locked,num_rows from dba_tab_statistics where owner='DEMO' and table_name='DEMO';
 
OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO 06-SEP-17 ALL 1

Stats are locked. I suppose the idea is that you have tables with the same statistics as production, for example, so that you can load them with a subset of data but still expect the same execution plans as in production. But this is not what I want for a migration.

One possibility is to unlock the stats once you have imported the data.
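For example, for the DEMO table used above (a quick sketch; dbms_stats.unlock_schema_stats does the same for a whole schema at once):

SQL> exec dbms_stats.unlock_table_stats('DEMO','DEMO');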

The other possibility is to import metadata without the statistics:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP content=metadata_only exclude=index exclude=table_statistics
Processing object type TABLE_EXPORT/TABLE/TABLE
Job "SYS"."SYS_IMPORT_TABLE_01" successfully completed at Wed Sep 6 21:11:03 2017 elapsed 0 00:00:03

Then the table statistics are not locked:

OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO

Once you have changed what you wanted on the tables, you import the data (table_exists_action=truncate) and then you import the rest: indexes, ref_constraints, triggers.
This is where you can also add include=table_statistics:

Starting "SYS"."SYS_IMPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP table_exists_action=truncate include=index include=table_statistics
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER

So that you have the statistics from the source, unlocked.

OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO 06-SEP-17 1

 

Cet article impdp content=metadata_only locks the stats est apparu en premier sur Blog dbi services.

12c dbms_stats.gather_table_stats on GTT do not commit

Mon, 2017-09-04 14:21

In my UKOUG OracleScene article on 12c online statistics and GTT I mentioned the following:

A final note about those 12c changes in statistics gathering on GTT. In 11g the dbms_stats did a commit at the start. So if you did gather stats after the load, you had to set the GTT as ON COMMIT PRESERVE ROWS. Or you just vacuum what you’ve loaded. That has changed in 12c. If you now choose to do a conventional insert followed by dbms_stats (having set private stats of course) then you don’t need to set on commit preserve rows anymore.

Today, I realized that I’ve never explained exactly when dbms_stats.gather_table_stats commits the transaction and when it does not. Because, of course, it depends. In summary, it does not commit only in this case: 12c, a GTT not owned by SYS, with private statistics.

Here is an example. I connect as non-SYS user:

SQL> connect demo/demo@//localhost/pdb1
Connected.
SQL> show user
USER is "DEMO"

I create a permanent table and a global temporary table:

SQL> create table DEMO(text varchar2(20));
Table created.
 
SQL> create global temporary table DEMOGTT(text varchar2(20));
Table created.

In the permanent table, I insert my row. The goal is to be sure that this insert is not committed and can be rolled back at the end:

SQL> insert into DEMO values('Forget me, please!');
1 row created.

In the global temporary table I insert one row. The goal is to be sure that the row remains until the end of my transaction (on commit delete rows):

SQL> insert into DEMOGTT values('Preserve me, please!');
1 row created.

Here it is:

SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!
 
SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!

Then, I gather statistics on the GTT:

SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.

I check that my row in the GTT is still there, which proves that no commit happened:

SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!

And I check that, as no commit happened, I can rollback my previous insert on the permanent table:

SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
no rows selected

This is the new behavior in 12c. The same in 11g would have committed my transaction before and after the call to dbms_stats.

GTT only

Here is the same example when gathering the stats on the permanent table:
SQL> show user
USER is "DEMO"
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
SQL> select * from DEMOGTT;
no rows selected
SQL> rollback;
Rollback complete.
SQL> select * from DEMO;
TEXT
--------------------
Forget me, please!

The transaction was committed by dbms_stats here: no more rows in the GTT (on commit delete rows), and the insert into the permanent table was committed before my rollback.

Not for SYS

When connected as SYS:
SQL> show user
USER is "SYS"
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
no rows selected
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!

The transaction was committed by dbms_stats here: when the table is owned by SYS, dbms_stats commits.

I mean, not for SYS owner

If I’m connected as SYS but gather stats on a non-SYS table, dbms_stats does not commit:

SQL> show user
USER is "SYS"
SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
no rows selected

The behaviour is not related to the user who runs dbms_stats, but to the owner of the GTT.

Private statistics only

The default in 12c for GTT is private statistics, visible to session only. Trying the same with shared statistics (as in 11g):
SQL> show user
USER is "DEMO"
 
SQL> select dbms_stats.get_prefs(ownname=>user,tabname=>'DEMO_GTT',pname=>'GLOBAL_TEMP_TABLE_STATS') from dual;
 
DBMS_STATS.GET_PREFS(OWNNAME=>USER,TABNAME=>'DEMO_GTT',PNAME=>'GLOBAL_TEMP_TABLE
--------------------------------------------------------------------------------
SESSION
 
SQL> exec dbms_stats.set_table_prefs(user,'DEMO_GTT','GLOBAL_TEMP_TABLE_STATS','SHARED' );
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.
SQL> select * from DEMOGTT;
no rows selected
SQL> rollback;
Rollback complete.
SQL> select * from DEMO;
TEXT
--------------------
Forget me, please!
 
SQL> exec dbms_stats.set_table_prefs(user,'DEMO_GTT', 'GLOBAL_TEMP_TABLE_STATS',null);
PL/SQL procedure successfully completed.

The dbms_stats did commit my transaction here.

So what?

Private session statistics for GTTs is a great feature. Use it: gather statistics after filling the GTT. And don’t worry about ON COMMIT DELETE ROWS GTTs (the default) because this statistics gathering does not commit the transaction.

 

Cet article 12c dbms_stats.gather_table_stats on GTT do not commit est apparu en premier sur Blog dbi services.

When PDB name conflicts with CDB name

Sun, 2017-09-03 09:24

Going to the multitenant architecture is not a big change. The administration tasks (DBA, monitoring, backups) connect to the CDB and the applications connect to the PDB. Even without the multitenant option, it is still recommended to go to the CDB architecture. The non-CDB is deprecated and the multitenant architecture brings interesting features. People often ask how to name the CDB and the PDB, especially when they have naming rules or policies in the company. My recommendation is to name the PDB as you are used to naming the databases: the name often gives an idea of the data that is inside, the application, and the environment. The CDB is the container, and in my opinion, you should apply the same naming rules as for servers. Don’t forget that pluggable databases are made to be moved across CDBs, so the CDB name should not depend on the content.

But with single tenant you have a one-to-one relationship between the CDB and the PDB, and then the idea may come to give the same name to the CDB and the PDB… I’m not sure whether it is supported or not, and please, don’t do that.

Service Name

There’s one rule: the service name must be unique on a server, especially when registered to the same listener. The PDB name is the default service name registered by the PDB, and the DB_UNIQUE_NAME of the CDB is the default service name registered by the CDB. So the PDB name must be different from the DB_UNIQUE_NAME.

With this rule, it should be possible to have the same name for the CDB (the DB_NAME) and the PDB, given that we have set a different DB_UNIQUE_NAME.

Here is an example. The name of my Container Database is CDB1. But as it is part of a Data Guard configuration I changed the unique name to CDB1A (and standby will be CDB1B).

Here are the services from my CDB:

SQL> select * from v$services;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH GOAL DTP AQ_HA_NOTIFICATION CLB_GOAL COMMIT_OUTESSION_STATE_CONSISTENCY GLOBAL PDB SQL_TRANSLATION_PROFILE MAX_LAG_TIME STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT CON_ID
---------- ---- --------- ------------ ------------- ------------------ ---- --- ------------------ -------- ---------------------------------- ------ --- ----------------------- ------------ ----------- ---------------- ------------- ------
7 CDB1A 3104886812 CDB1A 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1
1 SYS$BACKGROUND 165959219 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
2 SYS$USERS 3427055676 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
0 pdb1 1888881990 pdb1 0 NONE N NO SHORT NO NO PDB1 NONE NONE 0 4
6 CDB1XDB 1202503288 CDB1XDB 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1

All are default services: CDB1A is the DB_UNIQUE_NAME, SYS$BACKGROUND is for the background processes, SYS$USERS is used when connecting without a service name, and CDB1XDB is used to connect to the XDB dispatchers. PDB1 is the default service of my pluggable database PDB1.

I can also look at the services registered in the listener:


SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 28-AUG-2017 20:34:36
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 27-AUG-2017 20:41:33
Uptime 0 days 23 hr. 53 min. 3 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "57c2283990d42152e053684ea8c05ea0" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1A" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

There is just one additional service here: the GUID service of my PDB (see https://blog.dbi-services.com/service-696c6f76656d756c746974656e616e74-has-1-instances/)

ORA-65149

Do you see any service named ‘CDB1’ here? No. Then I should be able to create a PDB with this name.

SQL> create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1');
 
Error starting at line : 1 in command -
create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1')
Error report -
ORA-65149: PDB name conflicts with existing service name in the CDB or the PDB
65149. 00000 - "PDB name conflicts with existing service name in the CDB or the PDB"
*Cause: An attempt was made to create a pluggable database (PDB) whose
name conflicts with the existing service name in the container
database (CDB) or the PDB.
*Action: Choose a different name for the PDB.

OK, so this is impossible. However, the error message is not accurate: my PDB name does not conflict with any existing service name. It may conflict with the instance name or the DB_NAME, but not with a service.

NID

As I’m not satisfied with this, I try to find another way to have the same name for the CDB and the PDB. I have a pluggable database named ‘PDB1’ and I’ll try to change the CDB name to this:


[oracle@VM104 ~]$ nid dbname=PDB1 target=sys/oracle
 
DBNEWID: Release 12.2.0.1.0 - Production on Mon Aug 28 20:40:08 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Connected to database CDB1 (DBID=926862412)
 
Connected to server version 12.2.0
 
Control Files in database:
/u01/oradata/CDB1A/control01.ctl
/u01/fast_recovery_area/CDB1A/control02.ctl
 
The following datafiles are read-only:
/u01/oradata/CDB1A/PDB1/USERS2.db (17)
These files must be writable by this utility.
 
Change database ID and database name CDB1 to PDB1? (Y/[N]) => Y
 
Proceeding with operation
Changing database ID from 926862412 to 3460932968
Changing database name from CDB1 to PDB1
Control File /u01/oradata/CDB1A/control01.ctl - modified
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - modified
Datafile /u01/oradata/CDB1A/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/users01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS2.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/temp01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/temp012017-08-27_18-30-16-741-PM.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/temp012017-08-27_18-30-16-741-PM.db - dbid changed, wrote new name
Control File /u01/oradata/CDB1A/control01.ctl - dbid changed, wrote new name
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - dbid changed, wrote new name
Instance shut down
 
Database name changed to PDB1.
Modify parameter file and generate a new password file before restarting.
Database ID for database PDB1 changed to 3460932968.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
 
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
ORA-01103: database name 'PDB1' in control file is not 'CDB1'
 
SQL> alter system set db_name=PDB1 scope=spfile;
 
System altered.
 
SQL> shutdown immediate
ORA-01507: database not mounted
 
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
 
SQL> alter database open resetlogs;
 
Database altered.

That’s done.
My CDB is named PDB1:
SQL> select * from v$database;
 
DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS_TIME PRIOR_RESETLOGS_CHANGE# PRIOR_RESETLOGS_TIME LOG_MODE CHECKPOINT_CHANGE# ARCHIVE_CHANGE# CONTROLFILE_TYPE CONTROLFILE_CREATED CONTROLFILE_SEQUENCE# CONTROLFILE_CHANGE# CONTROLFILE_TIME OPEN_RESETLOGS VERSION_TIME OPEN_MODE PROTECTION_MODE PROTECTION_LEVEL REMOTE_ARCHIVE ACTIVATION# SWITCHOVER# DATABASE_ROLE ARCHIVELOG_CHANGE# ARCHIVELOG_COMPRESSION SWITCHOVER_STATUS DATAGUARD_BROKER GUARD_STATUS SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_PK SUPPLEMENTAL_LOG_DATA_UI FORCE_LOGGING PLATFORM_ID PLATFORM_NAME RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION# CURRENT_SCN FLASHBACK_ON SUPPLEMENTAL_LOG_DATA_FK SUPPLEMENTAL_LOG_DATA_ALL DB_UNIQUE_NAME STANDBY_BECAME_PRIMARY_SCN FS_FAILOVER_STATUS FS_FAILOVER_CURRENT_TARGET FS_FAILOVER_THRESHOLD FS_FAILOVER_OBSERVER_PRESENT FS_FAILOVER_OBSERVER_HOST CONTROLFILE_CONVERTED PRIMARY_DB_UNIQUE_NAME SUPPLEMENTAL_LOG_DATA_PL MIN_REQUIRED_CAPTURE_CHANGE# CDB CON_ID PENDING_ROLE_CHANGE_TASKS CON_DBID FORCE_FULL_DB_CACHING
---- ---- ------- ----------------- -------------- ----------------------- -------------------- -------- ------------------ --------------- ---------------- ------------------- --------------------- ------------------- ---------------- -------------- ------------ --------- --------------- ---------------- -------------- ----------- ----------- ------------- ------------------ ---------------------- ----------------- ---------------- ------------ ------------------------- ------------------------ ------------------------ ------------- ----------- ------------- ---------------------------- ---------------------- ----------- ------------ ------------------------ ------------------------- -------------- -------------------------- ------------------ -------------------------- --------------------- ---------------------------- ------------------------- --------------------- ---------------------- ------------------------ ---------------------------- --- ------ ------------------------- -------- ---------------------
3460932968 PDB1 27-AUG-17 1495032 28-AUG-17 1408558 27-AUG-17 ARCHIVELOG 1495035 0 CURRENT 27-AUG-17 2574 1496538 28-AUG-17 NOT ALLOWED 27-AUG-17 READ WRITE MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE ENABLED 3460947145 3460947145 PRIMARY 0 DISABLED NOT ALLOWED DISABLED NONE NO NO NO NO 13 Linux x86 64-bit 3 3 1497050 NO NO NO CDB1A 0 DISABLED 0 NO NO YES 0 NOT APPLICABLE 3460932968 NO

And I have a PDB with the same name:

SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED
 
SQL> alter pluggable database PDB1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO

What was forbidden with a misleading error message was made possible with this other way.

So what?

Please, do not take this as a solution. There is clearly a problem here. Maybe the documentation and the error message are wrong. Maybe NID has a bug, allowing something that should be blocked. Or CREATE PLUGGABLE DATABASE has a bug, blocking something that should be possible. Until this is fixed (SR opened) I would recommend keeping the PDB name different from the CDB name, independently of the service names. Well, I would recommend it anyway, as the same name brings a lot of confusion: when you mention a database name, people will not know whether you are referring to the CDB or the PDB.

 

Cet article When PDB name conflicts with CDB name est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths XI – Sample Scan

Sat, 2017-08-26 09:33

I was going to end this series with the previous post because the last access path available in Postgres is a bit special: a Seq Scan that returns only a sample of the rows, at random. However, it is the occasion to come back to the difference between random and sequential reads.

I’m still working on the same table as in the previous posts, with 10000 rows in 1429 pages. 5% of rows is 500 rows and 5% of blocks is about 72 pages.

Rows

Sometimes you can answer your business question on a sample of rows, when you need an approximate result, a trend or a pattern. Let’s say that you want to sum() over only 5 percent of the rows:
explain (analyze,verbose,costs,buffers) select sum(a) from demo1 tablesample bernoulli(5) ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1435.25..1435.26 rows=1 width=8) (actual time=1.940..1.940 rows=1 loops=1)
Output: sum(a)
Buffers: shared hit=1429
-> Sample Scan on public.demo1 (cost=0.00..1434.00 rows=500 width=4) (actual time=0.007..1.890 rows=509 loops=1)
Output: n, a, x
Sampling: bernoulli ('5'::real)
Buffers: shared hit=1429
Planning time: 0.373 ms
Execution time: 1.956 ms

This row sampling reads all rows and picks a sample of them at random. Unfortunately, it has to read all blocks, because you cannot get a good sample if you don’t know how many rows there are in each block. Working on a sample can still make sense if you want to apply complex operations on the result. Here the cost in the database is similar to a Seq Scan: 1429 blocks read at seq_page_cost=1, plus the sum() applied to 500 rows at cpu_operator_cost=0.0025, plus 500 tuples from the scan and 1 tuple for the result at cpu_tuple_cost=0.01.
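Putting those numbers together, here is my breakdown of the planner estimate shown above:

Sample Scan: 1429 pages * seq_page_cost(1) + 500 rows * cpu_tuple_cost(0.01) = 1434.00
Aggregate:   1434.00 + 500 * cpu_operator_cost(0.0025) = 1435.25 (startup cost)
Result row:  1435.25 + 1 * cpu_tuple_cost(0.01) = 1435.26 (total cost)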

From the execution statistics, you can see that the result is very close to what we asked for: 509 rows were sampled, for an estimate of 500.

Oracle has a different syntax and different algorithm:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1tsadjdd9ddam, child number 0
-------------------------------------
select /*+ */ sum(a) from demo1 sample(5)
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 397 (100)| 1 |00:00:00.01 | 581 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 581 |
| 2 | TABLE ACCESS SAMPLE| DEMO1 | 1 | 500 | 397 (0)| 478 |00:00:00.01 | 581 |
-----------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("A")[22]
2 - (rowset=256) "A"[NUMBER,22]

Here we have not read all the blocks, only about 40% of them (581 buffers out of 1429). This is faster than the Postgres approach, but the drawback is that the result is not exact: 478 rows were returned here.

Blocks

When we can afford an approximate sampling, we can sample on blocks rather than on rows:
explain (analyze,verbose,costs,buffers) select sum(a) from demo1 tablesample system(5) ;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Aggregate (cost=290.25..290.26 rows=1 width=8) (actual time=0.479..0.480 rows=1 loops=1)
Output: sum(a)
Buffers: shared hit=73
-> Sample Scan on public.demo1 (cost=0.00..289.00 rows=500 width=4) (actual time=0.016..0.377 rows=511 loops=1)
Output: n, a, x
Sampling: system ('5'::real)
Buffers: shared hit=73
Planning time: 0.698 ms
Execution time: 0.509 ms

The number of rows is still close here, but the result depends on which blocks are sampled. Only 73 blocks were read, which is about 5%, and of course the rows may be distributed differently within the blocks. The advantage is that it is faster because it reads fewer blocks. But those blocks being picked at random, they are by definition random reads: 71 pages (the planner estimate) read at random_page_cost=4 and, as in the previous case, 501 tuples at cpu_tuple_cost and 500 operations at cpu_operator_cost.
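Here is my breakdown of this estimate, with the same cost parameters:

Sample Scan: 71 pages * random_page_cost(4) + 500 rows * cpu_tuple_cost(0.01) = 289.00
Aggregate:   289.00 + 500 * cpu_operator_cost(0.0025) = 290.25 (startup cost)
Result row:  290.25 + 1 * cpu_tuple_cost(0.01) = 290.26 (total cost)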

With block sampling, Oracle reads a smaller number of blocks than with row sampling, but still more than 5%, and the number of rows is not exact: 798 rows here:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fqgbwqfavgdrn, child number 0
-------------------------------------
select /*+ */ sum(a) from demo1 sample block(5)
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 22 (100)| 1 |00:00:00.01 | 134 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 134 |
| 2 | TABLE ACCESS SAMPLE| DEMO1 | 1 | 500 | 22 (0)| 798 |00:00:00.01 | 134 |
-----------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("A")[22]
2 - (rowset=256) "A"[NUMBER,22]

Again, as for the previous access paths: same concepts but a different implementation between Postgres and Oracle. Everything looks similar and easily portable from a high-level overview, but going into the details you see all those little differences which make it not so easy to be database agnostic or easily portable.

Summary

This is the end of this series comparing Postgres access paths with the Oracle ones. The goal is not to tell you that one is better than the other. They have a different approach, different targets, a different price, a different history and probably a different future. But understanding how they work and how they estimate the cost is a good way to learn. I’m myself learning a lot about Postgres while writing those posts, matching things I discover on Postgres with those I have known for a while in Oracle.

Here is the list of posts on Access Path:

  1. Postgres vs. Oracle access paths – intro
  2. Postgres vs. Oracle access paths I – Seq Scan
  3. Postgres vs. Oracle access paths II – Index Only Scan
  4. Postgres vs. Oracle access paths III – Partial Index
  5. Postgres vs. Oracle access paths IV – Order By and Index
  6. Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX
  7. Postgres vs. Oracle access paths VI – Index Scan
  8. Postgres vs. Oracle access paths VII – Bitmap Index Scan
  9. Postgres vs. Oracle access paths VIII – Index Scan and Filter
  10. Postgres vs. Oracle access paths IX – Tid Scan
  11. Postgres vs. Oracle access paths X – Update
  12. Postgres vs. Oracle access paths XI – Sample Scan

I think my next series will be on Join methods.

 

Cet article Postgres vs. Oracle access paths XI – Sample Scan est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths X – Update

Thu, 2017-08-24 15:03

In the previous post we have seen the cheapest way to get one row, reading only one block from its physical location. But that’s the optimal case, where the row has not moved. I’ll (nearly) conclude this series about access paths with an update.

ROWID in Oracle

Here is the ROWID of one row in Oracle:

select rowid from demo1 where n=1000;
ROWID
------------------
AAAR4WAAMAAAAEaAAF

There’s enough information here to get directly to the block with file_name and offset:
select file_name,dbms_rowid.rowid_block_number('AAAR4WAAMAAAAEaAAF')*block_size offset
from dba_data_files join dba_tablespaces using(tablespace_name)
where file_id=dbms_rowid.rowid_to_absolute_fno('AAAR4WAAMAAAAEaAAF','DEMO','DEMO1');
 
FILE_NAME OFFSET
---------------------------------------- ----------
/u01/oradata/CDB1A/PDB/users01.dbf 2310144

The ROWID also contains the index of the row within the block’s row directory:

select dbms_rowid.rowid_row_number('AAAR4WAAMAAAAEaAAF') from dual;
 
DBMS_ROWID.ROWID_ROW_NUMBER('AAAR4WAAMAAAAEAAAF')
-------------------------------------------------
5

TID in Postgres

And the TID of similar row in Postgres:

select ctid from demo1 where n=1000;
ctid
---------
(142,6)

The file is known from the table, as there is only one file per table:

show data_directory;
data_directory
----------------------------
/usr/share/postgresql/data
 
select pg_relation_filepath('demo1');
pg_relation_filepath
----------------------
base/16437/125852

The blocksize is common for the whole database:

show block_size;
block_size
------------
8192

Then the block is at offset 142*8192=1163264 in this file.
Within the block, the row is at index 6.
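If you want the database to do this arithmetic for you, here is a quick query of my own (not part of the original demo), using the block number from the ctid and the block_size setting:

select 142 * current_setting('block_size')::int as block_offset;
 block_offset
--------------
      1163264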

SELECT

We have seen in the previous post that we can select using the ROWID/TID, and Oracle and Postgres behave the same: only one block to read, and a cost estimation based on one random read:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8mj3ms08x0sfh, child number 0
-------------------------------------
select /*+ */ a from demo1 where rowid='AAAR4WAAMAAAAEaAAF'
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 1 |
| 1 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22]

Different units but the same meaning: cost=1 for Oracle is one random read, while for Postgres cost=1 is the unit for a sequential read and a random read is estimated at cost=4:

explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.007..0.007 rows=1 loops=1)
Output: a
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.358 ms
Execution time: 0.016 ms

Oracle UPDATE

Now I’m updating this row, changing the column X which contains 1000 ‘x’ characters to 1000 ‘y’ characters:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gpqv4k6m1q20y, child number 0
-------------------------------------
update /*+ */ demo1 set x=lpad('y',1000,'y') where rowid='AAAR4WAAMAAAAEaAAF'
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | | 1 (100)| 0 |00:00:00.01 | 4 |
| 1 | UPDATE | DEMO1 | 1 | | | 0 |00:00:00.01 | 4 |
| 2 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
2 - (upd=2) ROWID[ROWID,10], "X"[VARCHAR2,1000]

In addition to the access to the block (1 buffer), the update had to read 3 additional buffers. There are no indexes on this updated column, so Oracle has no index maintenance to do. One buffer is the table block to update (the TABLE ACCESS BY USER ROWID was a consistent get; the update needs the current version of the block).

The additional buffers are from the UNDO tablespace, for MVCC (Multi Version Concurrency Control). This is the first modification in my transaction, so it has to update the transaction table and the undo segment, which is why we see 2 additional buffers. Another update within the same transaction reads only two buffers in total:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gpqv4k6m1q20y, child number 0
-------------------------------------
update /*+ */ demo1 set x=lpad('z',1000,'z') where rowid='AAAR4WAAMAAAAEaAAF'
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | | 1 (100)| 0 |00:00:00.01 | 2 |
| 1 | UPDATE | DEMO1 | 1 | | | 0 |00:00:00.01 | 2 |
| 2 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
2 - (upd=2) ROWID[ROWID,10], "X"[VARCHAR2,1000]

Only the table blocks are read: one consistent read as of the beginning of the query (or of the transaction when in serializable isolation level) and one for the current block. Oracle has an optimization called In-Memory UNDO to avoid frequent access to undo blocks.

No further re-visits are needed. Oracle may choose to come back at commit if it can be done quickly (few blocks still in the buffer cache) but that’s not required. The block can stay like this for years without the need to read it again for cleanup. If another session has to read it, then the cleanup may be done by that session.

Postgres UPDATE

Here is the same update in Postgres:

explain (analyze,verbose,costs,buffers) update demo1 set x=lpad('y',1000,'y') where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Update on public.demo1 (cost=0.00..4.01 rows=1 width=46) (actual time=0.214..0.214 rows=0 loops=1)
Buffers: shared hit=6 dirtied=3
-> Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=46) (actual time=0.009..0.009 rows=1 loops=1)
Output: n, a, 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'::text, ctid
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.405 ms
Execution time: 0.232 ms

The Tid Scan is the same as for the select. Then the update has read 5 additional buffers and modified 3 of them. The update in Postgres is processed as a delete+insert. Here is my guess about those numbers. The new version is inserted, in a new block if there is no free space in the same block. The old version is marked as deleted. And the index must be maintained. Those are the 3 blocks to modify. Here, the row was directly accessed through its TID, but we still must find the index entry to update it. The row contains the indexed value, so an index scan is possible: two block reads for this small index having only one branch level.

SELECT again

I said that with Oracle the row is updated in-place and doesn’t need further cleanup. If I run the same SELECT as the one I did before the UPDATE, I still have only one block to read:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8mj3ms08x0sfh, child number 0
-------------------------------------
select /*+ */ a from demo1 where rowid='AAAR4WAAMAAAAEaAAF'
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 1 |
| 1 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22]

In Postgres, because the update was processed as delete+insert, running the same query also reads only one block, but it returns no rows:

explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)
Output: a
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.442 ms
Execution time: 0.028 ms

The new version is in another block, so the TID to find it is different:

select ctid from demo1 where n=1000;
ctid
----------
(1428,5)
(1 row)

There was not enough space for another version of the whole row within the same block, so free space was found in the last block (1428). Of course, this is why the index was updated even though the indexed column did not change: the index entry had to address a different block.
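As a side note, when the new version fits in the same block and no indexed column changes, Postgres can do a HOT (Heap Only Tuple) update and skip the index maintenance. My own addition, not part of the demo: you can see how many updates were HOT from pg_stat_user_tables:

select n_tup_upd, n_tup_hot_upd from pg_stat_user_tables where relname='demo1';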

Let’s query with the new TID:

explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(1428,5)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.008..0.008 rows=1 loops=1)
Output: a
TID Cond: (demo1.ctid = '(1428,5)'::tid)
Buffers: shared hit=1
Planning time: 0.449 ms
Execution time: 0.023 ms

Only one buffer read. However, as we have seen with the Index Only Scan, there is a need for cleanup to avoid Heap Fetches. There are also the old tuples that should be removed later, or the updated tables and indexes would grow forever.

There’s only one Postgres access path remaining. That’s for the last post of this series, which will include the table of contents.

 

Cet article Postgres vs. Oracle access paths X – Update est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths IX – Tid Scan

Wed, 2017-08-23 15:29

In the previous post we have seen how Postgres and Oracle find the table row from the index entry, using the TID / ROWID. I’ll focus on this access path, and then I will have covered all Postgres access paths to table data.

Oracle ACCESS BY ROWID

I start with Oracle because we have already seen the TABLE ACCESS BY ROWID. I’ll decompose an index access to the table. The first step is getting the ROWID from the index entry:

SQL> select /*+ */ rowid from demo1 where n=1000;
 
ROWID
------------------
AAASPkAAMAAABIaAAF

The ROWID contains the data object ID (to identify the segment and then the tablespace), the relative file number within the tablespace, the block number within this file, and the row number within the block. This can be stored in 10 bytes. In an index entry, except when this is a global index on a partitioned table, we don’t need the object ID (because there’s a one-to-one relationship between the table and the index objects) and then only 6 bytes are stored in the index entry.
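As an illustration, the dbms_rowid package can decompose those components from the ROWID above (my own query; output not shown here):

select dbms_rowid.rowid_object('AAASPkAAMAAABIaAAF') data_object_id,
 dbms_rowid.rowid_relative_fno('AAASPkAAMAAABIaAAF') relative_fno,
 dbms_rowid.rowid_block_number('AAASPkAAMAAABIaAAF') block_number,
 dbms_rowid.rowid_row_number('AAASPkAAMAAABIaAAF') row_in_block
from dual;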

This is a simple index access and the output (projection) is the ROWID:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 32tsqy19ctmd4, child number 0
-------------------------------------
select /*+ */ rowid from demo1 where n=1000
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 2 |
|* 1 | INDEX UNIQUE SCAN| DEMO1_N | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 2 |
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("N"=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - ROWID[ROWID,10]

Now with the ROWID, I query a column from the table:
SQL> select /*+ */ a from demo1 where rowid='AAASPkAAMAAABIaAAF';
 
A
----------
1

And the plan is exactly the ‘TABLE ACCESS’ part we have seen in previous posts on index scans:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID c46nq5t0sru8q, child number 0
-------------------------------------
select /*+ */ a from demo1 where rowid='AAASPkAAMAAABIaAAF'
Plan hash value: 3196731035
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 1 |
| 1 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22]

There’s no Predicate section visible here, but the access is done on the ROWID which contains the file number, block number, and row number. This is the fastest way to get one row: reading only one buffer.

Postgres Tid Scan

Same idea in Postgres, where we can query the TID (Tuple ID):

select ctid from demo1 where n=1000 ;
ctid
---------
(142,6)
(1 row)

Because my table is stored in a file (no tablespace with multiple data files here) the TID contains only the block number and the row number within the block.
explain (analyze,verbose,costs,buffers) select ctid from demo1 where n=1000 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Index Scan using demo1_n on public.demo1 (cost=0.29..8.30 rows=1 width=6) (actual time=0.009..0.009 rows=1 loops=1)
Output: ctid
Index Cond: (demo1.n = 1000)
Buffers: shared hit=3
Planning time: 0.429 ms
Execution time: 0.023 ms

We have already seen the cost of this operation: 116 startup operations at cpu_operator_cost, 2 index pages read at random_page_cost=4 and 1 result row at cpu_tuple_cost=0.01 (note that the query planner does not count the cpu_index_tuple_cost here).
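In other words, my reading of this estimate:

startup: 116 operations * cpu_operator_cost(0.0025) = 0.29
total:   0.29 + 2 pages * random_page_cost(4) + 1 row * cpu_tuple_cost(0.01) = 8.30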

Then here is the query using this TID:
explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=1)
Output: a
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.351 ms
Execution time: 0.017 ms

The cost estimation is very simple here: 1 seek()+read() at random_page_cost=4 plus 1 result row at cpu_tuple_cost=0.01, which gives 4.01.

Since the post on Index Only Scan, I’m working on a vacuumed table with no modifications. Now that I have the simplest access path, I’ll show it with an update, in the next post.

 

Cet article Postgres vs. Oracle access paths IX – Tid Scan est apparu en premier sur Blog dbi services.

Improving Statspack Experience

Wed, 2017-08-23 14:41

A few months ago I published an article in the UKOUG OracleScene magazine on Improving Statspack Experience: quick setup script, changing settings, filling Idle Events, etc. In the article, I used dbms_job to schedule the snapshots, because I have had this script for years and never took the time to rewrite it with dbms_scheduler. Thanks to Nicolas Jardot, here is the equivalent script using dbms_scheduler.

The idea is to have a script to run on each instance (when in RAC) in order to have a job calling statspack.snap and statspack.purge on each instance.

DECLARE
instno NUMBER;
snapjob VARCHAR2(30);
purgejob VARCHAR2(30);
BEGIN
select instance_number into instno from v$instance;
snapjob := 'PERFSTAT.STATSPACK_SNAP_' || instno;
purgejob := 'PERFSTAT.STATSPACK_PURGE_' || instno;
 
DBMS_SCHEDULER.CREATE_JOB (
job_name => snapjob,
job_type => 'PLSQL_BLOCK',
job_action => 'statspack.snap;',
number_of_arguments => 0,
start_date => systimestamp,
repeat_interval => 'FREQ=HOURLY;BYTIME=0000;BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN',
end_date => NULL,
enabled => TRUE,
auto_drop => FALSE,
comments => 'Take hourly Statspack snapshot');
 
DBMS_SCHEDULER.CREATE_JOB (
job_name => purgejob,
job_type => 'PLSQL_BLOCK',
job_action => 'statspack.purge(i_num_days=>31,i_extended_purge=>true);',
number_of_arguments => 0,
start_date => systimestamp,
repeat_interval => 'FREQ=WEEKLY;BYTIME=120000;BYDAY=SUN',
end_date => NULL,
enabled => TRUE,
auto_drop => FALSE,
comments => 'Weekly purge Statspack snapshot');
 
DBMS_SCHEDULER.SET_ATTRIBUTE( name => snapjob, attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF );
DBMS_SCHEDULER.SET_ATTRIBUTE( name => snapjob, attribute => 'INSTANCE_ID', value=>instno);
DBMS_SCHEDULER.SET_ATTRIBUTE( name => snapjob, attribute => 'INSTANCE_STICKINESS', value=>TRUE);
 
DBMS_SCHEDULER.SET_ATTRIBUTE( name => purgejob, attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF );
DBMS_SCHEDULER.SET_ATTRIBUTE( name => purgejob, attribute => 'INSTANCE_ID', value=>instno);
DBMS_SCHEDULER.SET_ATTRIBUTE( name => purgejob, attribute => 'INSTANCE_STICKINESS', value=>TRUE);
END;
/

I used the Oracle Cloud Service to quickly provision a two-node RAC database to validate it, and I’ll check the scheduling:

[oracle@rac-dg01-1 admin]$ alias sql='JAVA_HOME=$ORACLE_HOME/jdk bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'
[oracle@rac-dg01-1 admin]$ TWO_TASK=//10.31.143.86/pdb1.a511644.oraclecloud.internal sql sys/"Ach1z0#d" as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Wed Aug 23 18:57:12 2017
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
 
SQL> set sqlformat ansiconsole
SQL> select job_name, state, enabled, next_run_date, instance_stickiness, instance_id from dba_scheduler_jobs where owner='PERFSTAT';
JOB_NAME STATE ENABLED NEXT_RUN_DATE INSTANCE_STICKINESS INSTANCE_ID
-------- ----- ------- ------------- ------------------- -----------
STATSPACK_SNAP_1 SCHEDULED TRUE 23-AUG-17 07.00.00.981193000 PM +00:00 TRUE 1
STATSPACK_PURGE_1 SCHEDULED TRUE 27-AUG-17 12.00.00.074939000 PM +00:00 TRUE 1
STATSPACK_SNAP_2 SCHEDULED TRUE 23-AUG-17 07.00.00.644681000 PM +00:00 TRUE 2
STATSPACK_PURGE_2 SCHEDULED TRUE 27-AUG-17 12.00.00.755685000 PM +00:00 TRUE 2

One hour later, the job has run on each instance:

JOB_NAME STATE ENABLED NEXT_RUN_DATE INSTANCE_STICKINESS INSTANCE_ID
-------- ----- ------- ------------- ------------------- -----------
STATSPACK_SNAP_1 SCHEDULED TRUE 23-AUG-17 08.00.00.325755000 PM +00:00 TRUE 1
STATSPACK_PURGE_1 SCHEDULED TRUE 27-AUG-17 12.00.00.074939000 PM +00:00 TRUE 1
STATSPACK_SNAP_2 SCHEDULED TRUE 23-AUG-17 08.00.00.644681000 PM +00:00 TRUE 2
STATSPACK_PURGE_2 SCHEDULED TRUE 27-AUG-17 12.00.00.755685000 PM +00:00 TRUE 2

Now running a spreport to see the instances having snapshots:

[oracle@rac-dg01-1 admin]$ TWO_TASK=//10.31.143.86/pdb1.a511644.oraclecloud.internal/cdb12 sqlplus sys/"Ach1z0#d" as sysdba @ spreport
 
Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
924704606 2 CDB1 cdb12 rac-dg01-2
924704606 1 CDB1 cdb11 rac-dg01-1
 
Using 924704606 for database Id
Using 2 for instance number

Here it is. dbms_job is deprecated. Let’s use dbms_scheduler.

 

Cet article Improving Statspack Experience est apparu en premier sur Blog dbi services.

Bequeath connect to PDB: set container in logon trigger?

Wed, 2017-08-23 00:54

There are a few changes when you go to the multitenant architecture, and one of them is that you must connect with a service name. You cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with a bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by the ORACLE_SID. And I want to migrate to a CDB without changing anything on the client connection configuration side. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.

As we can connect only to the CDB$ROOT with a bequeath connection, we have to create a common user. Because the idea is not to change anything in the client configuration, and there’s very little chance that the existing user name starts with C##, I’ll start by removing the mandatory prefix for common users.


SQL> show parameter common_user_prefix
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
common_user_prefix string
 
SQL> alter system set common_user_prefix='' scope=spfile;
System altered.
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
...

Then I create my common user:

SQL> create user MYOLDUSER identified by covfefe container=all;
User created.

This user must be able to connect to the CDB:

SQL> grant create session to MYOLDUSER container=current;
Grant succeeded.

And then I want it to switch immediately to PDB1 using a logon trigger:

SQL> create or replace trigger SET_CONTAINER_AT_LOGON after logon on database
2 when (user in ('MYOLDUSER'))
3 begin
4 execute immediate 'alter session set container=PDB1';
5 end;
6 /
Trigger created.

Once on PDB1 this user will have some privileges, and for the example I will grant him a default role:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> create role MYROLE;
Role created.
 
SQL> grant MYROLE to MYOLDUSER container=current;
Grant succeeded.

The documentation says that “When you grant a role to a user, the role is granted as a default role for that user and is therefore enabled immediately upon logon”, so I don’t need to:

SQL> alter user MYOLDUSER default role MYROLE;
User altered.

But the doc says ‘logon’ and technically I do not log on to PDB1: I just set the container. However, if you test it interactively, you will see that default roles are also set on ‘set container’. And anyway, we cannot set a role in a procedure, neither with ‘set role’ nor with dbms_session.set_role:

ORA-06565: cannot execute SET ROLE from within stored procedure

Then, I can now connect locally to the CDB$ROOT with this user:

SQL> connect MYOLDUSER/covfefe
Connected.

And I’m automatically switched to the PDB1:

SQL> show con_name
 
CON_NAME
------------------------------
PDB1

Issue #1: default roles

However the default roles are not set:

SQL> select * from session_roles;
 
no rows selected

I have to set the role once connected:

SQL> set role all;
Role set.
 
SQL> select * from session_roles;
 
ROLE
--------------------------------------------------------------------------------
MYROLE

This is probably not what we want when we cannot change anything on the application side. This is considered a bug (Bug 25081564 : ALTER SESSION SET CONTAINER IN “ON LOGON TRIGGER” IS NOT WORKING), fixed in 18.1 (expected in Q1 2018), and there’s a patch for 12.1 and 12.2: https://updates.oracle.com/download/25081564.html

Issue #2: core dump

There’s another issue. If you run the same with SQLcl you get a core dump in the client library libclntsh.so, in kpuSetContainerNfy:

SQLcl: Release 17.2.0 Production on Tue Aug 22 22:00:52 2017
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
SQL> connect MYOLDUSER/covfefe
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcaa172faf6, pid=31242, tid=140510230116096
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libclntsh.so.12.1+0x11d8af6] kpuSetContainerNfy+0x66
#
# Core dump written. Default location: /media/sf_share/122/blogs/core or core.31242

There’s an SR opened for that. This is not a showstopper because, the context being that nothing changes on the client side, sqlplus will probably be used anyway. However, that’s another point which shows that ‘set container’ in a logon trigger may have some implementation problems.

Issue #3: security

In my opinion, there is a bigger problem here. With sqlplus (or with sqlcl not using a local connection) I can connect to the CDB$ROOT and switch to PDB1. But look at all the commands above… where did I grant the ‘set container’ privilege to MYOLDUSER on the PDB1 container? Nowhere. MYOLDUSER has no create session and no set container privilege there, but is able to connect to PDB1 thanks to the logon trigger. Of course, the logon trigger is defined by a DBA who knows what he does. But in my opinion, it is not a good idea to bypass the privilege checking.
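For comparison, here is a sketch of what the explicit privilege model would look like, granting the privileges locally in PDB1 (which is exactly what I did not do above):

SQL> alter session set container=PDB1;
SQL> grant create session, set container to MYOLDUSER container=current;

With those grants, the user could open a session in PDB1 or switch to it explicitly, with the privilege checking in place.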

So what?

With no default roles set, and a connection made without the required privileges, the security model is bypassed here. And disabling the common user prefix will raise other issues one day with plugging operations. So, in my opinion, this is not a solution to work around the need to connect with a service, especially in a context where you run a legacy application with no possibility to change the way it connects: you just postpone the problems into bigger ones later.

The real solution is to connect to a service (and that’s not difficult, even when you can’t change the code, thanks to the TWO_TASK environment variable).
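For example, a minimal sketch, assuming the default service PDB1 is registered on the local listener on port 1521 (host and port here are illustrative):

export TWO_TASK=//localhost:1521/PDB1
sqlplus MYOLDUSER/covfefe

The client still runs a plain ‘sqlplus user/password’ without a connect string, but the connection now goes through the listener to the PDB service, with the normal privilege checks.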

 

Cet article Bequeath connect to PDB: set container in logon trigger? est apparu en premier sur Blog dbi services.
