Yann Neuhaus

dbi services technical blog

Multitenant internals: INT$ and INT$INT$ views

Sun, 2017-11-05 13:24

This month, I'll talk – with lots of demos – about multitenant internals at the DOAG conference.
The multitenant dictionary architecture starts with a simple idea: system metadata and data are in CDB$ROOT, and user metadata and data are in the PDB. And it could have been that simple. When a session connected to a PDB needs to read some system information, the session context is switched to the CDB$ROOT container, reads from the SYS tablespace and objects in CDB$ROOT, and then switches back to the PDB container. This is implemented by metadata and data links: the PDB lists the objects for which the session has to switch to CDB$ROOT to get the metadata or the data.
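
To get a quick feeling of how many objects use each sharing mode in a container, you can query the SHARING column of DBA_OBJECTS. This is just a quick sketch (run it in the container you are interested in):

select sharing, count(*)
from dba_objects
group by sharing
order by 2 desc;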

But, for compatibility reasons and ease of administration, the dictionary views must display information from both containers transparently, and then things become a little more complex with common views and extended data views.

At Oracle Open World, the multitenant architects, in the #PDBExpert session, answered questions about the multitenant architecture posted on Twitter. My first question (because I was investigating a bug at that time) was about some views, such as INT$INT$DBA_CONSTRAINTS, introduced to implement the complexity of showing the same information in the dictionary views as the one we had in non-CDB databases. Of course, the architects didn't want to go too far on this and had a very accurate answer: INT$ is for internal, and here you have two 'INT$', so you shouldn't look at that.

But I like to understand how things work, and here is the explanation of these INT$INT$ views. I'm not even sure that INT stands for 'internal'; it may be 'intermediate'. But for sure, the trailing $ is used by Oracle internal dictionary objects.

INT$ Extended Data views

We are used to seeing all objects, system ones and user ones, listed by the dictionary views. For example, DBA_PROCEDURES shows all procedures, system and user ones, and thus has to read from both containers (current PDB and CDB$ROOT) through extended data links. ALL_PROCEDURES shows all procedures accessible by the user, and it also has to switch to CDB$ROOT if the user has been granted access to system objects. USER_PROCEDURES shows only the objects owned by the current user, and thus can read from the current container only.

For ease of definition, in 12c all the joins on the underlying tables (such as procedureinfo$, user$, obj$) are done by an intermediate view such as INT$DBA_PROCEDURES, which is defined as an EXTENDED DATA link to read from CDB$ROOT in addition to the local tables. Then DBA_PROCEDURES, ALL_PROCEDURES and USER_PROCEDURES are defined on top of it, with the required where clauses to filter on owner and privilege accessibility.
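
You can see this layering for yourself. Here is a minimal sketch (output omitted) listing the objects involved for DBA_PROCEDURES:

select object_name, object_type, sharing
from dba_objects
where owner='SYS'
and object_name in ('DBA_PROCEDURES','ALL_PROCEDURES','USER_PROCEDURES','INT$DBA_PROCEDURES')
order by object_name;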

INT$INT$ Extended Data views

In this post, I'll detail the special case of DBA_CONSTRAINTS, because things are more complex there to get the multitenant architecture to behave the same as a non-CDB.

There are several types of constraints, which are identified by the CONSTRAINT_TYPE column of DBA_CONSTRAINTS, or by TYPE# in the underlying table CDEF$.

Here, I query the underlying table with the CONTAINER() function to see what is stored in each container:

SQL> select decode(type#,1,'C',2,'P',3,'U',4,'R',5,'V',6,'O',7,'C',8,'H',9,'F',10,'F',11,'F',13,'F','?') constraint_type,
2 type#,con_id,count(*) from containers(cdef$)
3 group by type#,con_id order by type#,con_id;
 
CONSTRAINT_TYPE TYPE# CON_ID COUNT(*)
--------------- ----- ------ --------
C 1 1 74
C 1 3 74
P 2 1 843
P 2 3 844
U 3 1 238
U 3 3 238
R 4 1 324
R 4 3 324
V 5 1 11
O 6 1 172
O 6 3 26
C 7 1 5337
C 7 3 5337
F 11 1 11
F 11 3 11
? 12 1 3
? 12 3 3

I have very few user objects in this database. CON_ID=1 is CDB$ROOT and CON_ID=3 is my PDB. What we can see here is that we have nearly the same number of rows in both containers for the following constraint types: C (check constraint on a table), P (primary key), U (unique key), R (referential integrity), and other types related to tables. And some types have most of their rows in CDB$ROOT only: V (check option on a view) and O (read only on a view).

That's an implementation specificity of the multitenant architecture which makes things more complex for the dictionary views. For some objects (such as procedures and views) the metadata is stored in only one container: system objects have all their information in CDB$ROOT and the PDB has only a link, which is a dummy row in OBJ$ that mentions the sharing (such as metadata link), the owner and name (to match the object in CDB$ROOT), and a signature (to verify that the DDL creating the object is the same). But other objects (such as tables) have their information for system objects duplicated in all containers (CDB$ROOT, PDB$SEED and all user PDBs). This is the reason why we see rows in both containers for constraint definitions when they are related to a table.

Example on view constraint

I'll take a constraint on a system view as an example: constraint SYS_C003357 on the view SYS.DBA_XS_SESSIONS.


SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='DBA_XS_SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS DBA_XS_SESSIONS VIEW METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from containers(dba_constraints) where owner='SYS' and table_name='DBA_XS_SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS DBA_XS_SESSIONS O SYS_C003357

I’m looking at the dependencies for the DBA_CONSTRAINTS view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS DBA_CONSTRAINTS SYS GETLONG
SYS DBA_CONSTRAINTS SYS INT$DBA_CONSTRAINTS

So DBA_CONSTRAINTS is a view on INT$DBA_CONSTRAINTS, as we have seen above. However, this view does not read the dictionary tables directly but goes through yet another view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$DBA_CONSTRAINTS SYS GETLONG
SYS INT$DBA_CONSTRAINTS SYS INT$INT$DBA_CONSTRAINTS

Here is our additional INT$INT$ view which is reading the tables:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$INT$DBA_CONSTRAINTS SYS USER$
SYS INT$INT$DBA_CONSTRAINTS SYS CDEF$
SYS INT$INT$DBA_CONSTRAINTS SYS OBJ$
SYS INT$INT$DBA_CONSTRAINTS SYS CON$
SYS INT$INT$DBA_CONSTRAINTS SYS _CURRENT_EDITION_OBJ
SYS INT$INT$DBA_CONSTRAINTS SYS _BASE_USER
SYS INT$INT$DBA_CONSTRAINTS SYS GETLONG

In summary, the EXTENDED DATA view which reads the tables in each container (CDB$ROOT and PDB) is here INT$INT$DBA_CONSTRAINTS, and INT$DBA_CONSTRAINTS is another intermediate view between it and the final DBA_CONSTRAINTS view.


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('DBA_CONSTRAINTS','INT$DBA_CONSTRAINTS','INT$INT$DBA_CONSTRAINTS') order by object_id desc;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
PUBLIC DBA_CONSTRAINTS SYNONYM METADATA LINK
SYS DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$INT$DBA_CONSTRAINTS VIEW EXTENDED DATA LINK

In this example, we do not yet see the reason for the additional intermediate view, because they all return the same number of rows in each container:


SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357

The only difference is a few additional columns from the object definition (OWNERID, OBJECT_ID, OBJECT_TYPE#, SHARING) in the INT$ and INT$INT$ views, which are not selected in the final view:

SQL> select * from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED ORIGIN_CON_ID CON_ID
----- --------------- --------------- ---------- ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------------- ------
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17

If we look at the INT$DBA_CONSTRAINTS definition, we see some filters on those object definition columns:

SQL> ddl INT$DBA_CONSTRAINTS
 
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."INT$DBA_CONSTRAINTS" ("OWNER", "OWNERID", "CONSTRAINT_NAME", "CONSTRAINT_TYPE", "TABLE_NAME", "OBJECT_ID", "OBJECT_TYPE#", "SEARCH_CONDITION", "SEARCH_CONDITION_VC", "R_OWNER", "R_CONSTRAINT_NAME", "DELETE_RULE", "STATUS", "DEFERRABLE", "DEFERRED", "VALIDATED", "GENERATED", "BAD", "RELY", "LAST_CHANGE", "INDEX_OWNER", "INDEX_NAME", "INVALID", "VIEW_RELATED", "SHARING", "ORIGIN_CON_ID") AS
select OWNER, OWNERID, CONSTRAINT_NAME, CONSTRAINT_TYPE,
TABLE_NAME, OBJECT_ID, OBJECT_TYPE#, SEARCH_CONDITION,
SEARCH_CONDITION_VC, R_OWNER, R_CONSTRAINT_NAME, DELETE_RULE, STATUS,
DEFERRABLE, DEFERRED, VALIDATED, GENERATED,
BAD, RELY, LAST_CHANGE, INDEX_OWNER, INDEX_NAME,
INVALID, VIEW_RELATED, SHARING, ORIGIN_CON_ID
from INT$INT$DBA_CONSTRAINTS INT$INT$DBA_CONSTRAINTS
where INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 4 /* views */
OR (INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 2 /* tables */
AND (INT$INT$DBA_CONSTRAINTS.ORIGIN_CON_ID
= TO_NUMBER(SYS_CONTEXT('USERENV', 'CON_ID'))));

For views (OBJECT_TYPE#=4) there is no filter, which explains why we see the same number of rows in the previous example. But for tables (OBJECT_TYPE#=2) there’s an additional filter to keep the row from the current container only.

Example on table constraint

Then, I’ll take another example with a constraint definition for a table:

SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='RXS$SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS RXS$SESSIONS TABLE METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from dba_constraints where owner='SYS' and table_name='RXS$SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS RXS$SESSIONS C SYS_C003339

From the INT$INT$ view, we get a duplicate when we query from a PDB, because for tables the PDB does not only hold a dummy row in OBJ$: the full information about the table is duplicated into other tables such as TAB$ and CDEF$:

SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339
3 C SYS_C003339

This is the reason for the additional intermediate view: it filters out those duplicates by removing the rows coming from CDB$ROOT when queried from a PDB.

SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339

Thanks to that, the duplicates are not visible in the end-user views DBA_CONSTRAINTS and CDB_CONSTRAINTS.

You may wonder why only DBA_CONSTRAINTS needs these views and not DBA_TABLES, DBA_INDEXES or DBA_TAB_COLUMNS. That's because all the information about system tables and indexes is replicated in all PDBs, so there is no need for EXTENDED DATA and context switches there. DBA_CONSTRAINTS has the particularity of showing information about both tables and views, which implement their metadata links in different ways.
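
If you are curious which other dictionary views follow this double-intermediate pattern, a simple sketch like this lists the INT$INT$ views present in your release:

select object_name, sharing
from dba_objects
where owner='SYS' and object_type='VIEW' and object_name like 'INT$INT$%'
order by object_name;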

 

The article Multitenant internals: INT$ and INT$INT$ views appeared first on Blog dbi services.

Multitenant dictionary: what is consolidated and what is not

Sun, 2017-11-05 11:38

The documentation says that, for reduction of duplication and ease of database upgrade, Oracle-supplied objects such as data dictionary table definitions and PL/SQL packages are represented only in the root.

Unfortunately, this is only partly true. System PL/SQL packages are only in the root, but system table definitions are replicated into all PDBs.

This post is an extension of a previous blog post which was on 12cR1. This one is on 12cR2.

As I did at Open World and will do at DOAG, I show multitenant internals by creating a metadata link procedure. When I do a simple 'describe' while connected to a PDB, the sql_trace shows that the session switches to CDB$ROOT to get the procedure information:

*** 2017-11-05T16:17:36.339126+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140420856738440 len=143 dep=1 uid=0 oct=3 lid=0 tim=101728244788 hv=2206365737 ad='7f60a7f0' sqlid='9fjf75a1s4y19'
select procedure#,procedurename,properties,itypeobj#, properties2 from procedureinfo$ where obj#=:1 order by procedurename desc, overload# desc
END OF STMT

All information about the system PL/SQL procedures is stored in the root only. The PDB has only a dummy row in OBJ$ to mention that it is a metadata link. And this is why you pay for the multitenant option: consolidation of all system dictionary objects into the root only. You save space (on disk and related memory) and you have only one place to upgrade.
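
You can verify this for any system package. Here is a minimal sketch, assuming you are connected to CDB$ROOT as a common user with access to all containers:

select con_id, owner, object_name, object_type, sharing
from containers(dba_objects)
where owner='SYS' and object_name='DBMS_OUTPUT' and object_type='PACKAGE'
order by con_id;

You should see the package in every container with SHARING = METADATA LINK; as described above, the PDB entries are just stubs pointing to the definition stored in CDB$ROOT.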

But this is implemented only for some objects, like PL/SQL procedures, and not for others like tables and indexes. If you 'describe' a metadata link table when connected to a PDB, you will not see any switch to CDB$ROOT in the sql_trace:

*** 2017-11-05T13:01:53.541231+01:00 (PDB1(3))
PARSING IN CURSOR #139835947128936 len=86 dep=1 uid=0 oct=3 lid=0 tim=98244321664 hv=2195287067 ad='75f823b8' sqlid='32bhha21dkv0v'
select col#,intcol#,charsetid,charsetform from col$ where obj#=:1 order by intcol# asc
END OF STMT
PARSE #139835947128936:c=0,e=158,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=3765558045,tim=98244321664
BINDS #139835947128936:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=08 fl2=1000001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f2e124fef10 bln=22 avl=03 flg=05
value=747
EXEC #139835947128936:c=1000,e=603,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=3765558045,tim=98244322311
FETCH #139835947128936:c=0,e=15,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,plh=3765558045,tim=98244322342
FETCH #139835947128936:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=3765558045,tim=98244322356
FETCH #139835947128936:c=0,e=4,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3765558045,tim=98244322369
STAT #139835947128936 id=1 cnt=2 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=3 pr=0 pw=0 str=1 time=16 us cost=3 size=234 card=13)'
STAT #139835947128936 id=2 cnt=2 pid=1 pos=1 obj=21 op='TABLE ACCESS CLUSTER COL$ (cr=3 pr=0 pw=0 str=1 time=11 us cost=2 size=234 card=13)'
STAT #139835947128936 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=6 us cost=1 size=0 card=1)'
CLOSE #139835947128936:c=0,e=1,dep=1,type=3,tim=98244322439

Here all the information about the columns is read from COL$ in the PDB. And if you look at TAB$ (tables), COL$ (table columns), IND$ (indexes), CONS$ and CDEF$ (constraints), you will see that they contain rows in a PDB where no user objects have been created. This is the case for all information related to tables: it is stored in CDB$ROOT and replicated into all the other containers: PDB$SEED and all user-created PDBs. Only the information related to non-data objects is stored in one container only.

I’ve run a query to count the rows in CDB$ROOT and PDB$SEED and here is the result:
[Figure: number of rows per dictionary table in CDB$ROOT and PDB$SEED]
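
The exact query behind this figure is not reproduced here, but a minimal sketch giving similar per-container counts (assuming you run it as SYS in CDB$ROOT) could look like this:

select 'OBJ$' dictionary_table, con_id, count(*) num_rows from containers(sys.obj$) group by con_id
union all
select 'TAB$', con_id, count(*) from containers(sys.tab$) group by con_id
union all
select 'COL$', con_id, count(*) from containers(sys.col$) group by con_id
union all
select 'PROCEDUREINFO$', con_id, count(*) from containers(sys.procedureinfo$) group by con_id
order by 1, 2;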

All rows in OBJ$ are replicated, which is expected because this is where the metadata link information is stored. But you also see that all the information related to tables is replicated, such as the 100000+ columns in COL$. And this is the reason why you do not see a big consolidation benefit when you look at the size of the SYSTEM tablespace in pluggable databases which do not contain any user data:

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 820 SYSTEM YES /u01/oradata/CDB1A/system01.dbf
3 630 SYSAUX NO /u01/oradata/CDB1A/sysaux01.dbf
4 80 UNDOTBS1 YES /u01/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u01/oradata/CDB1A/pdbseed/system01.dbf
6 390 PDB$SEED:SYSAUX NO /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u01/oradata/CDB1A/users01.dbf
8 100 PDB$SEED:UNDOTBS1 NO /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
9 270 PDB1:SYSTEM YES /u01/oradata/CDB1A/PDB1/system01.dbf
10 440 PDB1:SYSAUX NO /u01/oradata/CDB1A/PDB1/sysaux01.dbf
11 100 PDB1:UNDOTBS1 YES /u01/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u01/oradata/CDB1A/PDB1/users01.dbf

Here I have 250MB in PDB$SEED, which is supposed to contain only links to the 820MB CDB$ROOT SYSTEM tablespace, but there is a lot more than that.
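
If you prefer to check this with SQL rather than from the RMAN report, a quick sketch on CDB_DATA_FILES gives the SYSTEM tablespace size per container:

select con_id, round(sum(bytes)/1024/1024) system_mb
from cdb_data_files
where tablespace_name='SYSTEM'
group by con_id
order by con_id;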

So, basically, not all of the dictionary is consolidated in multitenant, but only the non-data part such as the PL/SQL packages and the dictionary view definitions. You can think of the multitenant consolidation as an extension of sharing the Oracle Home among several databases: it concerns the software part only. The part of the dictionary which contains data about system objects is replicated into all containers, and is read locally without a context switch. This also means that a patch or upgrade on that part has to be run in all containers.

With the fact that some information is replicated and some is not comes the complexity of managing that in the dictionary views, and this is the subject of the next blog post about the INT$INT$ views.

 

The article Multitenant dictionary: what is consolidated and what is not appeared first on Blog dbi services.

Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication?

Fri, 2017-11-03 04:03

When you start using PostgreSQL 10 logical replication you might think it is a good idea to set up bi-directional replication, so you end up with two or more masters that are all writable. I will not go into the details of multi-master replication here (conflict resolution, …) but will show what happens when you try to do that. Let's go …

My two instances run on the same host, one on port 6000 the other one on 6001. To start I’ll create the same table in both instances:

postgres=# create table t1 ( a int primary key, b varchar(50) );
CREATE TABLE
postgres=# alter table t1 replica identity using INDEX t1_pkey;
ALTER TABLE
postgres=# \d+ t1
                                            Table "public.t1"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer               |           | not null |         | plain    |              | 
 b      | character varying(50) |           |          |         | extended |              | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a) REPLICA IDENTITY

Create the same publication on both sides:

postgres=# create publication my_pub for table t1;
CREATE PUBLICATION
postgres=# select * from pg_publication;
 pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete 
---------+----------+--------------+-----------+-----------+-----------
 my_pub  |       10 | f            | t         | t         | t
(1 row)
postgres=# select * from pg_publication_tables;
 pubname | schemaname | tablename 
---------+------------+-----------
 my_pub  | public     | t1
(1 row)

Create the same subscription on both sides (except for the port, of course):

postgres=# show port;
 port 
------
 6000
(1 row)
postgres=# create subscription my_sub connection 'host=localhost port=6001 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
---------+---------+----------+------------+--------------------------------------------------------+-------------+-
   13212 | my_sub  |       10 | t          | host=localhost port=6001 dbname=postgres user=postgres | my_sub      | 
(1 row)


### second instance

postgres=# show port;
 port 
------
 6001
(1 row)

postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
---------+---------+----------+------------+--------------------------------------------------------+-------------+-
   13212 | my_sub  |       10 | t          | host=localhost port=6000 dbname=postgres user=postgres | my_sub      | 
(1 row)

So far, so good, everything worked until now. Now let's insert a row into the first instance:

postgres=# insert into t1 (a,b) values (1,'a');
INSERT 0 1
postgres=# select * from t1;
 a | b 
---+---
 1 | a
(1 row)

That seemed to work as well, as the row is there on the second instance too:

postgres=# show port;
 port 
------
 6001
(1 row)

postgres=# select * from t1;
 a | b 
---+---
 1 | a
(1 row)

But: When you take a look at the log file of the first instance you’ll see something like this (which is repeated over and over again):

2017-11-03 09:56:29.176 CET - 2 - 10687 -  - @ ERROR:  duplicate key value violates unique constraint "t1_pkey"
2017-11-03 09:56:29.176 CET - 3 - 10687 -  - @ DETAIL:  Key (a)=(1) already exists.
2017-11-03 09:56:29.178 CET - 29 - 10027 -  - @ LOG:  worker process: logical replication worker for subscription 16437 (PID 10687) exited with exit code 1
2017-11-03 09:56:34.198 CET - 1 - 10693 -  - @ LOG:  logical replication apply worker for subscription "my_sub" has started

Now the second instance is constantly trying to insert the same row back into the first instance, and that obviously cannot work as the row is already there. So the answer to the original question: do not try to do that, it will not work anyway.
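
If you already ended up in this situation, the way out is to break the loop by dropping the subscription on one of the two sides, for example on the first instance (this also removes the corresponding replication slot on the other side):

drop subscription my_sub;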

 

The article Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication? appeared first on Blog dbi services.

Quick history on database growth

Thu, 2017-11-02 12:13

AWR collects segment statistics, and this can be used to quickly understand an abnormal database growth. Here is a script I use to get, from the AWR history, the segments that have grown by more than 1% of the database size, in one hour.

First, I must mention that this uses only the part of AWR which is not subject to any additional licensing option. This even works in Standard Edition:
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access string NONE

So here is the query, easy to modify with a different threshold:
set echo on pagesize 1000
set sqlformat ansiconsole
select * from (
select
round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024) GBYTE_ALLOCATED
,trunc(max(end_interval_time),'hh24') snap_time
,round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024*24*(cast(max(end_interval_time) as date)-cast(min(begin_interval_time) as date))) "GB/hour"
,owner,object_name,subobject_name,object_type
from DBA_HIST_SEG_STAT join DBA_HIST_SEG_STAT_OBJ using (dbid,ts#,obj#,dataobj#) join dba_hist_snapshot using(dbid,snap_id)
group by trunc(end_interval_time,'hh24'),owner,object_name,subobject_name,object_type
) where "GB/hour" > (select sum(bytes)/1024/1024/1024/1e2 "one percent of database size" from dba_data_files)
order by snap_time
;

and the sample output, showing only the snapshots and segments where more than 1% of the database size has been allocated within one hour:

GBYTE_ALLOCATED SNAP_TIME GB/hour OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE
--------------- --------- ------- ----- ----------- -------------- -----------
4 25-OCT-2017 19:00:00 4 BIGDATA SYS_LOB0000047762C00006$$ LOB
9 25-OCT-2017 20:00:00 9 BIGDATA SYS_LOB0000047762C00006$$ LOB
9 25-OCT-2017 21:00:00 9 BIGDATA SYS_LOB0000047762C00006$$ LOB
3 25-OCT-2017 22:00:00 3 BIGDATA SYS_LOB0000047762C00006$$ LOB
5 26-OCT-2017 00:00:00 5 BIGDATA SYS_LOB0000047762C00006$$ LOB
6 26-OCT-2017 01:00:00 6 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 02:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 03:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 04:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
5 26-OCT-2017 05:00:00 5 BIGDATA SYS_LOB0000047762C00006$$ LOB
2 26-OCT-2017 06:00:00 2 BIGDATA SYS_LOB0000047719C00008$$ LOB
2 26-OCT-2017 06:00:00 2 BIGDATA SYS_LOB0000047710C00006$$ LOB

With this, it is easier to ask the application owners whether this growth is normal or not.
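
Keep in mind that how far back you can go depends on your AWR snapshot interval and retention, which you can check with a quick query like this:

select snap_interval, retention from dba_hist_wr_control;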

 

The article Quick history on database growth appeared first on Blog dbi services.

Are large objects supported in PostgreSQL 10 logical replication

Thu, 2017-11-02 01:32

Another interesting topic that popped up last week during pgconfeu: Are large objects supported with logical replication in PostgreSQL 10? The only truth is a test, isn't it? Let's go …

Obviously we need a table containing some large objects to start with:

postgres=# create table t1 ( a int, b oid);
CREATE TABLE

Before inserting some data, let's create a publication for that table right now:

postgres=# create publication my_pub for table t1;
CREATE PUBLICATION

Ok, that works. Now we need a subscription for that, so on a second instance:

postgres=# create table t1 ( a int, b oid);
CREATE TABLE
postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION

So far, so good. Let's insert some data on the publishing instance and see what happens:

postgres=# \! which cp
/usr/bin/cp
postgres=# insert into t1 (a,b) values (1, lo_import('/usr/bin/cp'));
INSERT 0 1

That worked. What do we see on the subscription side?

postgres=# select * from t1;
 a |   b   
---+-------
 1 | 16418
(1 row)

postgres=# select * from pg_size_pretty ( pg_relation_size ( 't1' ) );
 pg_size_pretty 
----------------
 8192 bytes
(1 row)

So, at least “something” is there. Let's prove it:

postgres=# select lo_export(b,'/tmp/cp') from t1;
ERROR:  large object 16418 does not exist
postgres=# 

Hm, this is not what was expected, right? Doing the same on the publishing side works:

postgres=# select lo_export(b,'/tmp/cp') from t1;
 lo_export 
-----------
         1
(1 row)

postgres=# \! chmod +x /tmp/cp
postgres=# \! /tmp/cp --help | head -1
Usage: /tmp/cp [OPTION]... [-T] SOURCE DEST

This means the OID is replicated but not the large object itself. So the answer is: no, large objects cannot be used with PostgreSQL 10 logical replication.
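
If you need the content itself to be replicated, a common workaround (not tested above, just a sketch) is to store the data in a bytea column instead of a large object, as regular column data is replicated:

-- on both instances:
create table t2 ( a int primary key, b bytea );
-- on the publishing instance:
alter publication my_pub add table t2;
-- on the subscribing instance:
alter subscription my_sub refresh publication;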

 

The article Are large objects supported in PostgreSQL 10 logical replication appeared first on Blog dbi services.

Pass Summit 2017

Wed, 2017-11-01 23:52

Today starts the Pass Summit 2017 taking place in Seattle.
After a long flight over the ocean yesterday (more than 10 hours…), and a nice jet lag which kept me from sleeping later than 4 AM this morning, I arrived at the Convention Center in Seattle where the PASS takes place.


I started this first day with the session of Itzik Ben-Gan: T-SQL Tips and Tricks.
As part of the session, Itzik spoke about batch processing (available starting with 2012), which boosts the execution of T-SQL scripts compared to row execution mode.
The problem is that batch mode is only available with columnstore indexes, so if you don't have a columnstore index on your table you cannot benefit from this feature.
To work around this drawback, Itzik showed us the possibility to create a filtered columnstore index (filtered CI, starting with 2016) which returns no rows but enables the use of batch processing.
Well done!
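
For reference, the trick could look like the sketch below (table and column names are made up, and the exact syntax Itzik showed may differ): the filter can never be true, so the index stays empty, but its presence allows the optimizer to consider batch mode on the table.

create nonclustered columnstore index ncci_batchmode_dummy
on dbo.MyFactTable (MyIntColumn)
where MyIntColumn = -1 and MyIntColumn = -2;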

After a quick lunch, I continued this first day with the session of Drew Furgiuele:

PowerShell

After having explained why to use PowerShell (automation, bridge between tools…) and how to install the SQLSERVER module (Install-Module SQLSERVER or Save-Module SQLServer), Drew showed how to use this module.
The first interesting point is how to browse SQL Server once the module has been installed.
For that just execute the PS script:

cd SQLSERVER:\

And after connecting to your SQL Server instance with cd sql\<servername>\default for a SQL Server default instance, or \<instancename> for a named instance, it's possible to browse your complete instance as you can do via SQL Server Management Studio, with commands like:

$dbs = Get-ChildItem
$dbs = Get-ChildItem | Where-Object {$_.Name -eq 'AdventureWorks2014'}

Easy for a first step with PowerShell.
Of course, Drew showed us much more, with PowerShell scripts copying tables from one instance to another, managing backups identically across your whole environment or executing a point-in-time restore.
Well done Drew.

The last session of the day had two parts and was given by Glenn Berry about migration to SQL Server 2017.
Glenn explained that there are plenty of reasons to upgrade to SQL Server 2017: great new features, features now available with Standard Edition (starting with 2016 SP1)…
But he also pointed out that there are big performance differences between Standard and Enterprise Edition, with examples using columnstore indexes or when running a dbcc checkdb.
So it's not just new features that are available with Enterprise Edition: it can also provide a great performance gain, which is often forgotten.
There are also limitations on memory, sockets and physical core usage with Standard Edition, so don't build a virtual machine for a Standard Edition with too much memory or too many sockets/cores because it will not be able to use them ;-) You can learn more on Glenn Berry's blog.

This first day was great, with lots of interesting sessions.
It's now time to visit Seattle a little bit and wait for tomorrow's second day with some other great sessions and speakers!

 

 

The article Pass Summit 2017 appeared first on Blog dbi services.

PASS SUMMIT 2017 – SQL Server Security

Wed, 2017-11-01 18:05

Today is the first day of the PASS SUMMIT 2017 in Seattle (WA). The weather is cloudy and we have only 11°C… but where is the problem? Everything happens inside! (at the Convention Center).


In this blog, I will make a summary of the main attack vectors against MSSQL environments, based on Argenis FERNANDEZ's session called “Modern Security Attack Vectors Against SQL Server Environments”.

METASPLOIT

Metasploit is a penetration testing framework to exploit known security vulnerabilities. This tool is able to scan a server, given its IP address, and to list all the security vulnerabilities you can find on TechNet.

After your environment has been scanned, you can exploit these vulnerabilities on every non-patched server. This kind of tool reminds us how important it is to keep environments up to date with security updates!

Metasploit can also be used to hack SQL Server login passwords with a brute-force method. Time to remember that Windows logins are recommended over SQL logins.

 

PowerSploit

PowerSploit is a collection of PowerShell modules (CodeExecution, ScriptModification, Exfiltration…) which can be used to extract information and data from a compromised machine. This collection includes the famous Mimikatz cmdlet, which can be used to extract plaintext passwords, hashes, PIN codes and Kerberos tickets from memory.

 

SQLMAP

sqlmap is a penetration testing tool. It can detect and exploit different SQL injection types, like Boolean-based blind or Time-based blind.

 

RDP Hijacking

Imagine a Domain Admin (or a SQL Admin) connects to a remote Windows server. When his work is done, he disconnects from his session (so the session is still available). It happens to many administrators, doesn’t it? And now imagine this Windows server has been compromised and the hacker has local administrator privileges. He is able to hijack the domain admin session, and so retrieve all his privileges…

But how is it possible? You can either use the PsExec Sysinternals tool (but it needs to be there), or create a service which will hijack the user's session. You can find the demonstration made by Alexander Korznikov.

As a consequence, it is highly recommended to completely log off from your RDP sessions!

 

CONCLUSION

This session was pretty interesting because it provided various approaches which can be used to attack your MSSQL environment. It also provided different best practices to be aware of, and I know I will always log off from my sessions ;-)

 

The article PASS SUMMIT 2017 – SQL Server Security appeared first on Blog dbi services.

Enable Trusted Content Services (TCS) license in Documentum Content Server

Wed, 2017-11-01 08:12

The Trusted Content Services is a pack of features that you can enable/purchase to gain more security for your Content Server. The main key features are listed in this documentation:

https://www.emc.com/collateral/software/data-sheet/h3138-security-ds.pdf

In our case, we wanted to benefit from the Repository Encryption feature:

Repository Encryption: Documentum TCS prevents intruders from accessing information even if they obtain unauthorized access to repository files at the file-system or storage level. This capability protects content against an operating system level security breach and enables you to securely store back-up media containing information assets in encrypted form.

To enable TCS, you must first get a license key from OpenText; they will send you a key string. Put that key alone in a file located here: $DOCUMENTUM/dba/tcs_license

Then log in to the docbase with IAPI and enable TCS for this repository with:

retrieve,c,dm_server_config
set,c,l,r_trusted_mode
1
save,c,l
reinit,c

No need to restart, the TCS license is only checked dynamically when a TCS feature is called.
To see if the license has been properly integrated, login to Documentum Administrator and on the Admin page you should see:
Trusted Mode: Enabled
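
If you prefer to check it with a query instead of Documentum Administrator, a DQL sketch like this (run through idql, for example) should show the same flag:

select object_name, r_trusted_mode from dm_server_config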

 

The article Enable Trusted Content Services (TCS) license in Documentum Content Server appeared first on Blog dbi services.

Dbvisit replicate – SQL Developer chart

Tue, 2017-10-31 14:27

Here is a quick SQL Developer report which displays a chart of the Dbvisit replicate lag over the last hours.

The idea is to have the following chart showing the lag in MINE and APPLY processes. Here is an example where I stopped the replication to show some lag.
[Screenshot: SQL Developer chart showing the MINE and APPLY lag]

The query is on DBVREP.DBRSCOMMON_LAG_STATS on the APPLY side, which displays the wallclock time together with the timestamps from MINE and from APPLY.

Here is the SQL Developer report .xml:

<?xml version="1.0" encoding="UTF-8" ?>
<displays>
<display id="72f37b59-015f-1000-8002-0af7a3b47766" type="" style="Chart" enable="true">
	<name><![CDATA[Dbvisit replicate gap]]></name>
	<description><![CDATA[]]></description>
	<tooltip><![CDATA[]]></tooltip>
	<drillclass><![CDATA[]]></drillclass>
	<CustomValues>
		<Y1AXIS_TITLE_TEXT><![CDATA[min]]></Y1AXIS_TITLE_TEXT>
		<PLOT_DATATIPS_TEXT><![CDATA[true]]></PLOT_DATATIPS_TEXT>
		<Y2AXIS_SCALE_MAXIMUM><![CDATA[80.0]]></Y2AXIS_SCALE_MAXIMUM>
		<XAXIS_TICK_LABEL_FONT.COLOR><![CDATA[-12565927]]></XAXIS_TICK_LABEL_FONT.COLOR>
		<LEGEND_LOCATION><![CDATA[AUTOMATIC]]></LEGEND_LOCATION>
		<PLOT_SERIES_OPTIONS_COLOR><![CDATA[\,-1344256,-16756836,-10066279,-16751002,-26368]]></PLOT_SERIES_OPTIONS_COLOR>
		<DATA_MAP_COLUMNS><![CDATA[\,"WALLCLOCK","WALLCLOCK"]]></DATA_MAP_COLUMNS>
		<Y1AXIS_SCALE_MAXIMUM><![CDATA[60.0]]></Y1AXIS_SCALE_MAXIMUM>
		<Y1AXIS_SCALE_INCREMENT_AUTOMATIC><![CDATA[false]]></Y1AXIS_SCALE_INCREMENT_AUTOMATIC>
		<XAXIS_TICK_LABEL_ROTATE><![CDATA[HORIZONTAL]]></XAXIS_TICK_LABEL_ROTATE>
		<TYPE><![CDATA[BAR_VERT_CLUST]]></TYPE>
		<DATA_MAP_COUNT><![CDATA[2]]></DATA_MAP_COUNT>
		<STYLE><![CDATA[Default]]></STYLE>
		<TITLE_ALIGNMENT><![CDATA[LEFT]]></TITLE_ALIGNMENT>
		<XAXIS_TICK_LABEL_FONT.NAME><![CDATA[Courier New]]></XAXIS_TICK_LABEL_FONT.NAME>
		<TITLE_TEXT><![CDATA[Dbvisit replicate gap (in minutes)]]></TITLE_TEXT>
		<Y2AXIS_TICK_LABEL_ROTATE><![CDATA[HORIZONTAL]]></Y2AXIS_TICK_LABEL_ROTATE>
		<PLOT_HGRID><![CDATA[true]]></PLOT_HGRID>
		<PLOT_DATATIPS_VALUE><![CDATA[true]]></PLOT_DATATIPS_VALUE>
		<Y2AXIS_LINE_WIDTH><![CDATA[THINNEST]]></Y2AXIS_LINE_WIDTH>
		<Y1AXIS_TICK_LABEL_ROTATE><![CDATA[HORIZONTAL]]></Y1AXIS_TICK_LABEL_ROTATE>
		<PLOT_HGRID_WIDTH><![CDATA[THINNER]]></PLOT_HGRID_WIDTH>
		<XAXIS_TICK_LABEL_AUTO_ROTATE><![CDATA[true]]></XAXIS_TICK_LABEL_AUTO_ROTATE>
		<Y1AXIS_SCALE_INCREMENT><![CDATA[60.0]]></Y1AXIS_SCALE_INCREMENT>
		<Y1AXIS_LINE_WIDTH><![CDATA[THINNEST]]></Y1AXIS_LINE_WIDTH>
		<Y1AXIS_TITLE_ALIGNMENT><![CDATA[CENTER]]></Y1AXIS_TITLE_ALIGNMENT>
		<LEGEND_ALIGNMENT><![CDATA[LEFT]]></LEGEND_ALIGNMENT>
		<XAXIS_LINE_WIDTH><![CDATA[THINNEST]]></XAXIS_LINE_WIDTH>
		<XAXIS_TICK_LABEL_FONT.SIZE><![CDATA[14]]></XAXIS_TICK_LABEL_FONT.SIZE>
		<XAXIS_TITLE_ALIGNMENT><![CDATA[CENTER]]></XAXIS_TITLE_ALIGNMENT>
		<PLOT_DATALABELS><![CDATA[false]]></PLOT_DATALABELS>
		<Y1AXIS_LOGARITHMIC_BASE><![CDATA[BASE_10]]></Y1AXIS_LOGARITHMIC_BASE>
		<GRID_WIDTH><![CDATA[THINNER]]></GRID_WIDTH>
		<PLOT_DATALABELS_BAR_POSITION><![CDATA[ABOVE]]></PLOT_DATALABELS_BAR_POSITION>
		<FOOTNOTE_ALIGNMENT><![CDATA[LEFT]]></FOOTNOTE_ALIGNMENT>
		<XAXIS_TICK_LABEL_SKIP_MODE><![CDATA[MANUAL]]></XAXIS_TICK_LABEL_SKIP_MODE>
		<XAXIS_TICK_LABEL_FONT.UNDERLINE><![CDATA[false]]></XAXIS_TICK_LABEL_FONT.UNDERLINE>
		<DATA_MAP_COLNAMES><![CDATA[\,"APPLY lag seconds","APPLY_PROCESS_NAME","DDC_ID","MINE lag seconds","MINE_PROCESS_NAME","WALLCLOCK"]]></DATA_MAP_COLNAMES>
		<DATA_MAP_SERIES><![CDATA[\,"MINE_PROCESS_NAME","APPLY_PROCESS_NAME"]]></DATA_MAP_SERIES>
		<Y2AXIS_LOGARITHMIC_BASE><![CDATA[BASE_10]]></Y2AXIS_LOGARITHMIC_BASE>
		<Y2AXIS_SCALE_MINIMUM><![CDATA[10.0]]></Y2AXIS_SCALE_MINIMUM>
		<XAXIS_TICK_LABEL_FONT.POSTURE><![CDATA[false]]></XAXIS_TICK_LABEL_FONT.POSTURE>
		<DATA_MAP_VALUES><![CDATA[\,"MINE lag seconds","MINE lag seconds"]]></DATA_MAP_VALUES>
		<PLOT_VGRID><![CDATA[true]]></PLOT_VGRID>
		<TITLE><![CDATA[true]]></TITLE>
		<Y1AXIS_TITLE><![CDATA[false]]></Y1AXIS_TITLE>
		<Y2AXIS_SCALE_INCREMENT><![CDATA[20.0]]></Y2AXIS_SCALE_INCREMENT>
		<PLOT_VGRID_WIDTH><![CDATA[THINNER]]></PLOT_VGRID_WIDTH>
		<Y2AXIS_TITLE_ALIGNMENT><![CDATA[CENTER]]></Y2AXIS_TITLE_ALIGNMENT>
		<SUBTITLE_ALIGNMENT><![CDATA[LEFT]]></SUBTITLE_ALIGNMENT>
		<XAXIS_TICK_LABEL_FONT.WEIGHT><![CDATA[false]]></XAXIS_TICK_LABEL_FONT.WEIGHT>
	</CustomValues>
	<query>
		<sql><![CDATA[SELECT  ddc_id,mine_process_name,apply_process_name,"WALLCLOCK", "APPLY lag seconds", "MINE lag seconds" FROM(
select ddc_id,mine_process_name,apply_process_name,
to_char(trunc(wallclock_date,'hh24'),'dd-mon hh24:mi') wallclock,
max(round((wallclock_date-apply_date)*24*60*60)) "APPLY lag seconds", max(round((wallclock_date-mine_date)*24*60*60)) "MINE lag seconds"
from DBVREP.DBRSCOMMON_LAG_STATS 
where wallclock_date>sysdate-3
group by ddc_id,mine_process_name,apply_process_name,to_char(trunc(wallclock_date,'hh24'),'dd-mon hh24:mi')
order by wallclock
)]]></sql>
	</query>
		<pdf version="VERSION_1_7" compression="CONTENT">
			<docproperty title="null" author="null" subject="null" keywords="null" />
			<cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
			<column>
				<heading font="null" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
				<footing font="null" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
				<blob blob="NONE" zip="false" />
			</column>
			<table font="null" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
			<header enable="false" generatedate="false">
				<data>
				null				</data>
			</header>
			<footer enable="false" generatedate="false">
				<data value="null" />
			</footer>
			<pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
		</pdf>
</display>
</displays>
 

The article Dbvisit replicate – SQL Developer chart appeared first on Blog dbi services.

Are temporary tables auto vacuumed in PostgreSQL?

Tue, 2017-10-31 02:03

While doing the EDB quiz at their booth last week at pgconfeu, one of the questions was: Are temporary tables auto vacuumed? What do you think? My first thought was yes, but let's see. The first question we need to answer is: How can we check if a table (no matter if temporary or not, for now) was auto vacuumed or not? PostgreSQL comes with many views that expose statistical information, and one of those is pg_stat_all_tables. Let's have a look …

When you describe that view, there is a column named “last_autovacuum”:

postgres=# \d pg_stat_all_tables 
                      View "pg_catalog.pg_stat_all_tables"
       Column        |           Type           | Collation | Nullable | Default 
---------------------+--------------------------+-----------+----------+---------
 relid               | oid                      |           |          | 
 schemaname          | name                     |           |          | 
 relname             | name                     |           |          | 
 seq_scan            | bigint                   |           |          | 
 seq_tup_read        | bigint                   |           |          | 
 idx_scan            | bigint                   |           |          | 
 idx_tup_fetch       | bigint                   |           |          | 
 n_tup_ins           | bigint                   |           |          | 
 n_tup_upd           | bigint                   |           |          | 
 n_tup_del           | bigint                   |           |          | 
 n_tup_hot_upd       | bigint                   |           |          | 
 n_live_tup          | bigint                   |           |          | 
 n_dead_tup          | bigint                   |           |          | 
 n_mod_since_analyze | bigint                   |           |          | 
 last_vacuum         | timestamp with time zone |           |          | 
 last_autovacuum     | timestamp with time zone |           |          | 
 last_analyze        | timestamp with time zone |           |          | 
 last_autoanalyze    | timestamp with time zone |           |          | 
 vacuum_count        | bigint                   |           |          | 
 autovacuum_count    | bigint                   |           |          | 
 analyze_count       | bigint                   |           |          | 
 autoanalyze_count   | bigint                   |           |          | 

That should give us the time of the last autovacuum, right? Before we begin, here are my autovacuum settings which are all at their defaults:

postgres=# select name,setting from pg_settings where name like '%autovacuum%' order by 1;
                name                 |  setting  
-------------------------------------+-----------
 autovacuum                          | on
 autovacuum_analyze_scale_factor     | 0.1
 autovacuum_analyze_threshold        | 50
 autovacuum_freeze_max_age           | 200000000
 autovacuum_max_workers              | 3
 autovacuum_multixact_freeze_max_age | 400000000
 autovacuum_naptime                  | 60
 autovacuum_vacuum_cost_delay        | 20
 autovacuum_vacuum_cost_limit        | -1
 autovacuum_vacuum_scale_factor      | 0.2
 autovacuum_vacuum_threshold         | 50
 autovacuum_work_mem                 | -1
 log_autovacuum_min_duration         | -1
(13 rows)

Does that mean autovacuum should kick in as soon as we change 50 rows in a table, because autovacuum_vacuum_threshold is set to 50? The table:

postgres=# create table t1 (a int, b varchar(50));
CREATE TABLE
postgres=# insert into t1 (a,b) select a, md5(a::varchar) from generate_series ( 1, 1000000 ) a;
INSERT 0 1000000
postgres=# select count(*) from t1;
  count  
---------
 1000000
(1 row)

As soon as we change 50 or more rows we should see the last_autovacuum column updated in pg_stat_all_tables, so let's check:

postgres=# update t1 set a = a + 1 where a < 1000;
UPDATE 999
postgres=# select pg_sleep(10);
 pg_sleep 
----------
 
(1 row)
postgres=# select relname,last_autovacuum from pg_stat_all_tables where relname = 't1';
 relname | last_autovacuum 
---------+-----------------
 t1      | 
(1 row)

Hm, not really what was expected. When you check the documentation there is a formula we need to consider for our test, which is

vacuum threshold = autovacuum_vacuum_threshold +  autovacuum_vacuum_scale_factor * pg_class.reltuples

In our case that is:

postgres=# show autovacuum_vacuum_threshold;
 autovacuum_vacuum_threshold 
-----------------------------
 50
(1 row)

postgres=# show autovacuum_vacuum_scale_factor;
 autovacuum_vacuum_scale_factor 
--------------------------------
 0.2
(1 row)

postgres=# select reltuples::int from pg_class where relname = 't1';
 reltuples 
-----------
   1000000
(1 row)

postgres=# select 50 + 0.2 * 1000000;
 ?column? 
----------
 200050.0
(1 row)

This means we need to change at least 200050 rows to get autovacuum kicked in?

postgres=# update t1 set a = a + 1;
UPDATE 1000000

That should be fine as we updated all the rows in the table which is way more than 200050:

postgres=# select relname,last_autovacuum from pg_stat_all_tables where relname = 't1';
 relname |        last_autovacuum        
---------+-------------------------------
 t1      | 2017-10-31 07:40:56.553194+01
(1 row)

… and here we go. Now that we know how to check that on a regular table, we can do the same test on a temporary table:

postgres=# create temporary table tt1 as select * from t1;
SELECT 1000000
postgres=# update tt1 set a = a + 1;
UPDATE 1000000
postgres=# select relname,last_autovacuum from pg_stat_all_tables where relname = 'tt1';
 relname | last_autovacuum 
---------+-----------------
 tt1     | 
(1 row)

There is one point to consider: the parameter autovacuum_naptime defaults to one minute, so it might take some time until autovacuum has really done its work. But even when you wait for 10 minutes, you will not see last_autovacuum updated in pg_stat_all_tables for a temporary table. So, the answer is: no, there is no autovacuum on temporary tables, but of course you can still do that manually:
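
The manual vacuum itself is not shown in the transcript; it would simply be something like this (which is what produces the last_vacuum timestamp below):

vacuum tt1;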

postgres=# select relname,last_autovacuum, last_vacuum from pg_stat_all_tables where relname = 'tt1';
 relname | last_autovacuum |          last_vacuum          
---------+-----------------+-------------------------------
 tt1     |                 | 2017-10-31 07:50:58.041813+01
(1 row)

The same is true for the statistics used by the planner: you might need to analyze your temporary table manually:

postgres=# select last_analyze, last_autoanalyze from pg_stat_all_tables where relname = 'tt1';
 last_analyze | last_autoanalyze 
--------------+------------------
              | 
(1 row)

postgres=# analyze tt1;
ANALYZE
postgres=# select last_analyze, last_autoanalyze from pg_stat_all_tables where relname = 'tt1';
         last_analyze          | last_autoanalyze 
-------------------------------+------------------
 2017-10-31 07:52:27.690117+01 | 
(1 row)

Btw: This is clearly written in the documentation: “Temporary tables cannot be accessed by autovacuum. Therefore, appropriate vacuum and analyze operations should be performed via session SQL commands.”

Hope this helps …

 

The article Are temporary tables auto vacuumed in PostgreSQL? appeared first on Blog dbi services.

Configuring Fusion Middleware 12C Reports Server with Single Sign On and External LDAP server

Fri, 2017-10-27 12:50

Most of the time, when the Reports Server is configured with a Single Sign On server, it uses the same external LDAP server as the Single Sign On server.
In our case, we used Oracle Access Manager as Single Sign On and Oracle Internet Directory as LDAP server.
Of course, Fusion Middleware needs to be configured with the external LDAP server. This means an Oracle Internet Directory authentication provider declared in the WebLogic domain,
and an OAM Identity Asserter for the Single Sign On part. But this is part of the normal Fusion Middleware configuration with SSO and an external LDAP.

With this configuration in place, once the Reports Server is configured to use SSO, the following error is raised:

REP-56071: User does not exist in Id Store

This is because the Reports Server is not using the default Fusion Middleware policy store (jps-config.xml) but the Java policy store (jps-config-jse.xml). Checking this file, we can see that the identity store is configured to use a file and not the external LDAP server.

The rest of this blog provides the configuration to put in place for the Reports Server to use the external LDAP server as identity store.

1. Configure Report Server in SSO mode

cd $DOMAIN_HOME
 
vi  ./config/fmwconfig/servers/WLS_REPORTS/applications/reports_12.2.1/configuration/rwservlet.properties
<?xml version="1.0" encoding="UTF-8"?>
<rwservlet xmlns="http://xmlns.oracle.com/reports/rwservlet" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
   <server>rep_wls_reports_dev12cvm</server>
   <singlesignon>no</singlesignon>
   <inprocess>yes</inprocess>
   <webcommandaccess>L2</webcommandaccess>
</rwservlet>
Change the singlesignon value to yes:
<?xml version="1.0" encoding="UTF-8"?>
<rwservlet xmlns="http://xmlns.oracle.com/reports/rwservlet" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
   <server>rep_wls_reports_dev12cvm</server>
   <singlesignon>yes</singlesignon>
   <inprocess>yes</inprocess>
   <webcommandaccess>L2</webcommandaccess>
</rwservlet>

2. Configure the identity store for Report Server

Edit jps-config-jse.xml, located in $DOMAIN_HOME/config/fmwconfig, and add an additional service instance for the identity store:

<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
        <property name="idstore.type" value="OID"/>
        <property name="security.principal.alias" value="OidCredMap"/>
        <property name="security.principal.key" value="OidCredKey"/>
        <property name="ldap.url" value="ldap://vm01.dbi-workshop.com:3060"/>
        <property name="max.search.filter.length" value="500"/>
        <property name="username.attr" value="uid"/>
        <extendedProperty>
          <name>user.search.bases</name>
          <values>
            <value>cn=users,dc=geneva,dc=agicoa,dc=org</value>
          </values>
        </extendedProperty>
        <extendedProperty>
          <name>group.search.bases</name>
          <values>
            <value>cn=groups,dc=geneva,dc=agicoa,dc=org</value>
           </values>
        </extendedProperty>
      </serviceInstance>
It must be added between the <serviceInstances> and </serviceInstances> tags; keep the other service instances.
At the bottom of the same file, change the default jpsContext to use the idstore.ldap
Change:
   <jpsContexts default="default">
      <jpsContext name="default">
         <serviceInstanceRef ref="credstore.db"/>
         <serviceInstanceRef ref="keystore.db"/>
         <serviceInstanceRef ref="policystore.db"/>
         <serviceInstanceRef ref="audit.db"/>
         <serviceInstanceRef ref="trust"/>
         <serviceInstanceRef ref="pdp.service"/>
         <serviceInstanceRef ref="attribute"/>
         <serviceInstanceRef ref="idstore.xml"/>
         <serviceInstanceRef ref="idstore.loginmodule"/>
      </jpsContext>

to:

  <jpsContexts default="default">
      <jpsContext name="default">
         <serviceInstanceRef ref="credstore.db"/>
         <serviceInstanceRef ref="keystore.db"/>
         <serviceInstanceRef ref="policystore.db"/>
         <serviceInstanceRef ref="audit.db"/>
         <serviceInstanceRef ref="trust"/>
         <serviceInstanceRef ref="pdp.service"/>
         <serviceInstanceRef ref="attribute"/>
         <serviceInstanceRef ref="idstore.ldap"/>
         <serviceInstanceRef ref="idstore.loginmodule"/>
      </jpsContext>

Save and quit

We need to create a Credential Map for this LDAP connection. Browse to the Enterprise Manager and sign in as weblogic user.

http://vm01.dbi-workshop.com:7003/em

Move to the Security -> credentials

Click on Create Map and name the new credential map: OidCredMap

Click OK

Select the credential map OidCredMap and click create key button


Key: OidCredKey
Type: Password
UserName: cn=orcladmin
Password: *****************
Confirm Password:*****************
Click OK

3. Stop and Start the full Reports WebLogic Domain

4. Configure the Report Server Security for Jobs

a.    Browse to the Enterprise Manager and sign in as weblogic user

http://vm01.dbi-workshop.com:7003/em

b.    Navigate to the EM MBean browser Weblogic Domain > System MBean Browser
c.    Navigate to reports server mbean
Folder: Application Defined MBeans
–> Folder:oracle.reportsApp.config
–> Server:<managedServername>,Application=reports, type=ReportsApp, name=rwserver, ReportsApp.Job
d.    Click child mbean: rwEngrwJaznSec
e.    For property securityId enter value as follows:

rwJaznSec – for OPSS based security


5. Configure the Report Server Roles

Browse to the Enterprise Manager and sign in as weblogic user

http://vm01.dbi-workshop.com:7003/em

From the drop-down menu, select Security -> Application Roles. The following displays:

Set Application Stripe on Reports and click the search arrow


Select each role you want to add members to and click Edit. In the next wizard, search for the user, group, or role and click Add.

 

 

The article Configuring Fusion Middleware 12C Reports Server with Single Sign On and External LDAP server appeared first on Blog dbi services.

New installed Fusion Middleware 12C Reports or Forms WebLogic Servers fails to start after configuring SSL

Fri, 2017-10-27 12:23

We installed Fusion Middleware Reports and Forms 12.2.1.2.0 on a Linux server.
This was a single-node Reports and Forms installation.
After disabling the non-SSL listen port, we got the error below in the logs while starting the Managed Server with only the SSL port enabled.

####<Oct 27, 2017, 2:38:23,265 PM CEST> <Info> <Deployer> <host01.example.com> <WLS_FORMS> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <728708fe-00a1-4078-bb31-f12fb2c6beae-00000030> <1509107903265> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149060> <Module dms.war of application DMS Application [Version=12.2.1.1.0] successfully transitioned from STATE_ACTIVE to STATE_ADMIN on server WLS_FORMS.>
####<Oct 27, 2017, 2:38:23,765 PM CEST> <Critical> <WebLogicServer> <host01.example.com> <WLS_FORMS> <main> <<WLS Kernel>> <> <728708fe-00a1-4078-bb31-f12fb2c6beae-00000031> <1509107903765> <[severity-value: 4] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000386> <Server subsystem failed. Reason: A MultiException has 4 exceptions. They are:
1. java.lang.AssertionError: No replication server channel for WLS_FORMS
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.cluster.replication.ReplicationService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.cluster.singleton.SingletonServicesBatchManager errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.cluster.singleton.SingletonServicesBatchManager

A MultiException has 4 exceptions. They are:
1. java.lang.AssertionError: No replication server channel for WLS_FORMS
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.cluster.replication.ReplicationService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.cluster.singleton.SingletonServicesBatchManager errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.cluster.singleton.SingletonServicesBatchManager
at org.jvnet.hk2.internal.Collector.throwIfErrors(Collector.java:89)
at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:249)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:357)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:471)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:232)
at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:85)
at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2020)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:114)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:88)
at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1213)
at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1144)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:666)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)
Caused By: java.lang.AssertionError: No replication server channel for WLS_FORMS
at weblogic.cluster.replication.ReplicationManagerServerRef.initialize(ReplicationManagerServerRef.java:128)
at weblogic.cluster.replication.ReplicationManagerServerRef.<clinit>(ReplicationManagerServerRef.java:84)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at weblogic.rmi.internal.BasicRuntimeDescriptor.getServerReferenceClass(BasicRuntimeDescriptor.java:496)
at weblogic.rmi.internal.BasicRuntimeDescriptor.createServerReference(BasicRuntimeDescriptor.java:452)
at weblogic.rmi.internal.OIDManager.makeServerReference(OIDManager.java:193)
at weblogic.rmi.internal.OIDManager.getReplacement(OIDManager.java:173)
at weblogic.rmi.utils.io.RemoteObjectReplacer.replaceRemote(RemoteObjectReplacer.java:107)
at weblogic.rmi.utils.io.RemoteObjectReplacer.replaceObject(RemoteObjectReplacer.java:90)
at weblogic.rmi.extensions.server.ServerHelper.exportObject(ServerHelper.java:252)
at weblogic.cluster.replication.ReplicationServicesImplBase.exportSelf(ReplicationServicesImplBase.java:17)
at weblogic.cluster.replication.ReplicationManager.startService(ReplicationManager.java:305)
at weblogic.cluster.replication.ReplicationService.start(ReplicationService.java:46)
at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:76)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1262)
at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:332)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:374)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:471)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:232)
at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:85)
at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2020)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:114)
at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:693)
at org.jvnet.hk2.internal.ThreeThirtyResolver.resolve(ThreeThirtyResolver.java:78)
at org.jvnet.hk2.internal.ClazzCreator.resolve(ClazzCreator.java:211)
at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:234)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:357)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:471)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:232)
at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:85)
at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2020)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:114)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:88)
at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1213)
at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1144)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:666)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)
>
####<Oct 27, 2017, 2:38:23,815 PM CEST> <Notice> <WebLogicServer> <host01.example.com> <WLS_FORMS> <main> <<WLS Kernel>> <> <728708fe-00a1-4078-bb31-f12fb2c6beae-00000031> <1509107903815> <[severity-value: 32] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000365> <Server state changed to FAILED.>

Even if no cluster is used, the default Reports and Forms domain configuration creates a WebLogic Cluster.

To solve the “Servers do not have a common channel to communicate over” error, perform the following steps as an administrator:

  1. Login into the Administration Console
  2. Expand the Environment tab and navigate to Clusters
  3. Select the cluster and click the ‘Replication’ subtab in the Configuration tab
  4. Find the Replication Channel setting and check the box beside “Secure Replication Enabled” (to set it to true)
  5. Save
  6. Rerun the steps 2 to 5 for all clusters
  7. Activate the changes
  8. Restart the full WebLogic domain

or in WLST:
Start wlst.sh and connect to the WebLogic Domain

# connect to the AdminServer on its SSL (t3s) port
connect('user','password','t3s://host01.example.com:7002')

startEdit()

# navigate to the Forms cluster and enable secure replication over the replication channel
cd('/Clusters/cluster_forms')
cmo.setClusterType('none')
cmo.setPersistSessionsOnShutdown(false)
cmo.setReplicationChannel('ReplicationChannel')
cmo.setSecureReplicationEnabled(true)

save()
activate()

Rerun the same for the reports_cluster

Once done restart the full WebLogic Domain

 

The article New installed Fusion Middleware 12C Reports or Forms WebLogic Servers fails to start after configuring SSL appeared first on Blog dbi services.

2017.pgconf.eu, some impressions

Fri, 2017-10-27 11:00

After we survived the EDB Postgres Rocks cafe on Tuesday, Wednesday was packed with interesting sessions. Especially listening to Robert Haas is always fun and interesting. Getting information directly from the people who work on the core code is one of the beauties of the PostgreSQL community. Several other core developers had sessions as well, and all of them were great. Thanks for all of that.


On Thursday morning, finally, Jan (EDB) and I had the pleasure of talking about “What happens when 30 years of Oracle experience hit PostgreSQL”. As far as I can tell the session was well received and we had interesting discussions afterwards. The main goal was to highlight that working in the PostgreSQL area can be very confusing at the very beginning when your only background is Oracle. It seems we hit the goal and the people attending had fun.


A very big thanks to the organizers of the event: everything, from the registration, the rooms, the food and the drinks to, of course, the sessions, was great. I do not doubt that next year will be great as well.

Another big thanks to the EDB people (especially Anja and Jan) who let me drop my jacket and notebook at their booth when required. Another big thanks to Devrim for announcing the rpm packages for SLES 12 during the lightning talks which is what I need for a customer project.

Once uploaded, all the slides should be available on the PostgreSQL wiki. Check them out, there is really great content.

Btw: There are interesting choices of beer in Poland:

 

The article 2017.pgconf.eu, some impressions appeared first on Blog dbi services.

Max PDBs in Standard Edition

Thu, 2017-10-26 14:26

Here is a small warning. In Standard Edition, you may expect that the features that are not available for your edition are blocked, but in 12.2 you can create more than one PDB in Standard Edition, and you should set MAX_PDBS to prevent that.

12cR1

In 12.1 Standard Edition, when you try to create more than one PDB (i.e. with CON_ID > 3) you get an error:

ORA-65010: maximum number of pluggable databases created

12cR2

But it seems that this has been lost in 12.2:


oracle@SE122 ~$ sqlplus / as sysdba
 
SQL*Plus: Release 12.2.0.1.0 Production on Thu Oct 10 11:41:56 2017
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
 
SQL> create pluggable database PDB2 admin user admin identified by me;
 
Pluggable database created.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED
SQL> alter pluggable database PDB2 open;
 
Pluggable database altered.

So, if you are in Standard Edition, don’t forget to set MAX_PDBS to 1:

SQL> alter pluggable database PDB2 close immediate;
 
Pluggable database altered.
 
SQL> drop pluggable database PDB2 including datafiles;
 
Pluggable database dropped.
 
SQL> alter system set max_pdbs=1;
 
System altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
 
12:18:28 SQL> create pluggable database PDB2 admin user pdbadmin identified by oracle;
create pluggable database PDB2 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created
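As a quick sanity check on an existing Standard Edition CDB, a minimal SQL*Plus sketch like the following (standard parameter and view; the output is omitted) shows the current limit and the PDBs already created; anything beyond PDB$SEED and one user PDB is more than the edition allows:

SQL> show parameter max_pdbs
SQL> select con_id, name, open_mode from v$pdbs;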

 

The article Max PDBs in Standard Edition appeared first on Blog dbi services.

StandbyFileManagement is set to MANUAL : Don’t panic

Tue, 2017-10-24 12:59

As you may know the parameter STANDBY_FILE_MANAGEMENT enables or disables automatic standby file management. When automatic standby file management is enabled, operating system file additions and deletions on the primary database are replicated on the standby database.
It is generally recommended to set this parameter to AUTO, but it can happen that it is set to MANUAL for various reasons. If this parameter is set to MANUAL, the replication will stop when we create a new tablespace or add new data files on the primary, until we manually create the same files on the standby.
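If you just want to check or change the parameter directly in SQL*Plus, a minimal sketch looks like this (assuming an spfile; when the broker is in place, prefer the DGMGRL property shown later in this post so that the broker configuration stays consistent):

SQL> show parameter standby_file_management
SQL> alter system set standby_file_management=AUTO scope=both;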
In this article we are going to show an example of tablespace creation in our Oracle 12c Data Guard environment.
Below our configuration.

DGMGRL> show configuration;
Configuration - prod
Protection Mode: MaxPerformance
Members:
prod - Primary database
proddr - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 30 seconds ago)

StandbyFileManagement is set to manual for both primary and standby databases

DGMGRL> show database PROD StandbyFileManagement;
StandbyFileManagement = 'MANUAL'
DGMGRL> show database PRODDR StandbyFileManagement;
StandbyFileManagement = 'MANUAL'
DGMGRL>

Let’s show data files on the primary

SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD READ WRITE
.
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/datafile/o1_mf_system_dymg6h55_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_sysaux_dymg6loy_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_undotbs1_dymg6nsx_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_users_dymg6o9o_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_data_dymwrtph_.dbf

And in the standby

SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD MOUNTED
.
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_sysaux_dymtytz6_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_undotbs1_dymtzbbr_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_users_dymtzko2_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_data_dymz1b8y_.dbf

Now let’s create a tablespace in the primary database PROD, then create a table and insert some data into it.

SQL> create tablespace TABDATA datafile size 100M;
Tablespace created.
.
SQL> create table test1(id number) tablespace data;
Table created.
.
SQL> insert into test1 values(1);
1 row created.
.
SQL> commit;
Commit complete.

And then let’s do some switches

SQL> alter system switch logfile;
System altered.
.
SQL> alter system switch logfile;
System altered.

As we may expect, this new tablespace is not automatically replicated on the standby as our StandbyFileManagement is set to MANUAL and our configuration should return errors.

DGMGRL> show configuration;
Configuration - prod
Protection Mode: MaxPerformance
Members:
prod - Primary database
proddr - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 9 seconds ago)

The primary database is fine

DGMGRL> show database PROD;
Database - prod
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
PROD
Database Status:
SUCCESS

But the standby is not synchronized

DGMGRL> show database PRODDR;
Database - proddr
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 2 minutes 14 seconds (computed 1 second ago)
Average Apply Rate: 2.00 KByte/s
Real Time Query: OFF
Instance(s):
PROD
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
DGMGRL>

In the standby alert log file we can see some ORA- errors:

File #6 added to control file as 'UNNAMED00006' because
the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
The file should be manually created to continue.

MRP0: Background Media Recovery terminated with error 1274
2017-10-20T15:49:43.368202+02:00
Errors in file /u01/app/oracle/diag/rdbms/proddr/PROD/trace/PROD_mrp0_7182.trc:
ORA-01274: cannot add data file that was originally created as
'/u01/app/oracle/oradata/PROD/datafile/o1_mf_tabdata_dymzqmt4_.dbf'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
2017-10-20T15:49:43.448802+02:00
Errors in file /u01/app/oracle/diag/rdbms/proddr/PROD/trace/PROD_m000_7250.trc:
ORA-01110: data file 1: '/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf'
Recovery stopped due to failure in applying recovery marker (opcode 17.30).
Datafiles are recovered to a consistent state at change 380449 but controlfile could be ahead of datafiles.
2017-10-20T15:49:43.618264+02:00

If we check the data files on the standby, we can see that a new UNNAMED file has been created under $ORACLE_HOME/dbs

SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD MOUNTED
.
SQL> select name from v$datafile;
NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_sysaux_dymtytz6_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_undotbs1_dymtzbbr_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_users_dymtzko2_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_data_dymz1b8y_.dbf
/u01/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00006
6 rows selected.

How can I solve the issue and restart the replication? It’s easy. As I am using Oracle-Managed Files
(OMF), I first have to re-create the data file in the correct directory using the following command.

SQL> alter database create datafile '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00006' as new;
Database altered.
.
SQL> select name from v$datafile;
NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_sysaux_dymtytz6_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_undotbs1_dymtzbbr_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_users_dymtzko2_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_data_dymz1b8y_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_tabdata_dyn06zm2_.dbf
6 rows selected.
SQL>

If OMF is not used we have to use something like

alter database create datafile '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00006' as 'datafile_location_and_name';

Then I decide to update the StandbyFileManagement property to AUTO on both the primary and the standby.

DGMGRL> edit database PROD set property StandbyFileManagement = 'AUTO';
Property "standbyfilemanagement" updated
DGMGRL> edit database PRODDR set property StandbyFileManagement = 'AUTO';
Property "standbyfilemanagement" updated

And finally I have to restart the redo apply on the standby

DGMGRL> edit database PRODDR set STATE='APPLY-ON';
Succeeded.
DGMGRL>

The synchronization should be fine now and the command show configuration should return success

DGMGRL> show configuration;
Configuration - prod
Protection Mode: MaxPerformance
Members:
prod - Primary database
proddr - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 15 seconds ago)

On the primary database

DGMGRL> show database PROD;
Database - prod
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
PROD
Database Status:
SUCCESS

On the standby database

DGMGRL> show database PRODDR;
Database - proddr
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 83.00 KByte/s
Real Time Query: OFF
Instance(s):
PROD
Database Status:
SUCCESS
DGMGRL>

If we want to verify the newly created table, we can open the standby database in read-only mode.
But if we don’t have the Active Data Guard option, the redo apply must be stopped on the standby database before opening it.

DGMGRL> edit database PRODDR set STATE='APPLY-OFF';
Succeeded.

And then open the database

SQL> alter database open read only;
Database altered.
.
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
DATA
TABDATA
7 rows selected.
.
SQL> select * from test1;
ID
----------
1

 

The article StandbyFileManagement is set to MANUAL : Don’t panic appeared first on Blog dbi services.

2017.pgconf.eu started, and look who is there …

Tue, 2017-10-24 03:19

So, finally 2017.pgconf.eu started today and while checking the schedule I noticed something that I could not even imagine some years ago. There is a session from Microsoft: Latest update on Azure managed service for PostgreSQL. Personally I really like to see that Microsoft is more and more present at Open Source conferences and starts engaging in the community. But, of course this is not the only session that will be interesting. There is an impressive list of sessions from well known community members and hackers, full list here.


There are plenty of speaker interviews you might want to read as an introduction, here.

A final highlight this evening will be the EDB Postgres Rocks Cafe.


I am already curious who will be there and what discussions we’ll have. Not much more to tell right now, stay tuned.

 

The article 2017.pgconf.eu started, and look who is there … appeared first on Blog dbi services.

Database is more than persistence. It is shared. My thoughts about Microservice Database.

Sun, 2017-10-22 14:36

The development teams often see the database as a ‘persistence layer’ only. And sometimes it is mentioned that the database is always the cause of the problems, especially when implementing continuous integration and trying to be agile. Then comes the idea to have this ‘persistence layer’ in an isolated environment, such as a Docker container, with a database for each developer. However, this overlooks the real cause of the problems, which is not the persistence function of the database, but the fact that it is shared. And when you share something in a multi-user environment, you reach another level of complexity. And if you are not prepared for that, you perceive it as a problem.

This philosophical blog post contains some cool words which, in my opinion, must be carefully considered when dealing with databases: agile, persistence, stateless, Docker, microservices, NoSQL, containers, lake, cloud,…


When I hear about a ‘persistence layer’ it always reminds me of a meeting a few years ago where the architects of a new application drew on the whiteboard something that they could have called microservices a few years later: dedicated web services reading data, processing it, and writing the result to the ‘persistence layer’. And when the data was only a transient interface between two of those services, they wrote ‘temporary persistence’ on the whiteboard diagram. When you come up with two antonyms to define something, there is probably something wrong in the design.

Commitment

In order to understand what’s beyond persistence, you should think about what happens when you:
SQL> commit work;
Commit complete.

First, the commit statement closes a transaction that has been started by the first DML. That’s the first purpose of a database management system: processing transactions. This is already something that is not very clear for some development architects. They came up with stateless services, just because thinking about transactions is not easy when there is no one-to-one relationship between the database session and the end-user (as in client/server on a dedicated terminal). Then they encountered issues such as lost updates as soon as they had more than one user, and implemented optimistic locking in their application server, not always in the right layer. I recommend this talk from Vlad Mihalcea about transactions.
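To illustrate what optimistic locking in the right layer can look like, here is a minimal SQL sketch (the ACCOUNTS table and its VERSION column are hypothetical): the update succeeds only if nobody committed a change since the row was read, and the row count tells the stateless service whether it must re-read and retry.

update accounts
   set balance = :new_balance,
       version = version + 1
 where account_id = :id
   and version = :version_read;  -- 0 rows updated means a concurrent change: re-read and retry
commit;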

Second, the commit statement ensures that changes are persistent and durable. The changes are written to disk, to backups, and to standby databases. This could be as simple as a write call, but it is a bit more complex for performance reasons: random writes are done in the cache and written to disk asynchronously, while the redo transaction log is written sequentially and synced on commit. Here, the developer does not need to care about the underlying mechanisms, beyond trying to change only what is needed and commit only when needed.

Third, the commit marks the changes as publicly visible, because the data is shared among multiple users. This is why developing and testing on your own personal database is limited to unit tests. As soon as you have to work on implementing a real use-case, you must work on a database shared by different services. I have seen enterprises adopting Database as a Service early for agile development, where each developer was working on his own copy (thin clone) of the database. They are now asking for common environments where multiple developers can work and test their different parts of the application. Back to this pre-agile idea, and back to the same problem: the database is shared.

Finally, I think that some developers like to see the database as only a ‘persistence layer’ just because it is easier. You can be lazy and let the database system and the database administrators manage the durability of the data. The non-optimal performance will be compensated by software and hardware. And ignoring the two other properties of a database system is just walking away from the complexity: NoSQL to ignore transactions and consistency, and containers to ignore the sharing concern.

Impedance Mismatch

This is not new. ‘Object Oriented’ was the buzzword before ‘buzzword’ itself was a buzzword. And OO development was ok until it had to manipulate data from the database. They called ‘impedance mismatch’ the problems encountered when trying to match the object-oriented model with the relational model. And they built frameworks to do this matching as if it were simple, such as matching the Object Identity concept with the Primary Key concept. My experience is that this was nice for building good Proofs of Concept, but failed in production on consistency, performance, and scalability. Object Oriented development is good for non-shared transient objects. A GUI is based on objects, such as a window or a text field, where the object identity is the address in memory. If you restart the application, it is another object, with a different memory address. And the GUI on your colleague’s computer is again another object. It is not persistent but transient. And it is not shared but all different. Applying this model to data doesn’t work. You can simulate persistence with an Object-Relational Mapping (ORM) but sharing will be a problem. Those ORMs usually work on cached proxy objects in the application server, trying to re-invent the management of concurrent changes, without using the database system which is built for that.

RDBMS

The current shift from ‘database technology’ to ‘developer community’ is probably a good idea, but only if we do not make the same errors, such as using the database as a black box to persist objects. We must keep in mind what Relational Database Management Systems were built for.

Before RDBMS, data was modeled hierarchically. This was good for performance (because data is clustered) but was good for only one use-case. Other use-cases had to replicate the data into another physical design. Relational modeling stores data in a way that it can be used by all use-cases. For example, you can look at the orders from one customer, or at the orders on one product. The business entities are in different tables and are joined dynamically when queried. This is a data point of view. The developer builds something different because the developer works on one use-case. This was the trend for XML a few years ago, and JSON now, bringing back the hierarchical model that failed decades ago. Very good to work on your use-case, but very bad when data is shared with other use-cases. You have a good Proof of Concept and good unit tests. But integration will be impossible. I think that we will see the same with microservices: each one designed for its own usage without thinking about other (including future) use-cases.
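As a small illustration (the ORDERS and ORDER_ITEMS tables are hypothetical), the same relational model answers both use-cases with nothing more than a different predicate, where a hierarchical or document design would need a second physical structure:

select o.order_id, o.order_date, i.product_id, i.quantity
from orders o join order_items i on i.order_id = o.order_id
where o.customer_id = :customer_id;    -- the orders of one customer
-- same tables, other use-case: where i.product_id = :product_id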

Before RDBMS, data definition was included in the code for each use-case. A COBOL program had a data division describing the structure of the data that would be accessed by the program. This was not good for shared databases because one structural change required changing all the programs. And this is why the relational model was introduced with physical and logical independence. The logical model is designed for the application, and it is the common interface between the developers and the database administrators. Here again, I’ve seen some application architects going backward, using ORM mappings in the same way the COBOL data division was used in the past.

Microservice Database

Today, developers are running their code in containers. This has the same properties as the OO development I described before: isolated and transient. Docker starts a few processes in its own image of the disk. This is perfect. But, like they did with OO, they try to bring this idea to the database. And that will fail again because, except in the early stages of development, you need a database that is shared and persistent.

In my opinion, running the Oracle Database in a Docker container gives a wrong message to the developer community. A database involves a lot of resources and is highly coupled with the OS. My lab databases on my laptop are VirtualBox VMs. Easy to build, easy to start, and easy to share. I tested databases on Docker 3 years ago to see how we could build data virtualization and thin clones for development, with a standby database on Docker. I’ve written an article about that for DOAG. The technology has evolved, but I think that it is not a good idea except for the fun of trying something new.

Today, we have containers on the database itself, with multitenant pluggable databases and application containers. You can provision a pluggable database for each docker container running the application. And you have easy ways to copy, clone, refresh or have common data. This is ok for development.

Application containers have been introduced for SaaS: each application tenant has its own pluggable database. The data in the PDB is still shared by all application use-cases, and metadata (and some reference tables) is shared in application root by several applications.

However, we also see in 12cR2, and probably more in 18c, some cross-PDB DML which allows queries and modification among several pluggable databases. It is not yet stable, using database links for modifications (see here). But I can imagine that it will be fixed in 18c or later being able to update different PDBs within the same transaction.

And then, I foresee how this will be perceived by some developer architects (those ignoring that the database must be shared). They will build microservices, with small application servers running in a Docker container. And they will map one PDB for each service, doing something like a Microservice Database. Some will store data in XML, others in JSON, and some in relational tables. All those will be consolidated into a multi-tenant database to be managed easily. And my fear is that we will see a large bowl of spaghetti to exchange data between those Microservice PDBs using database links, common views, cross-PDB DML, and maybe some logical replication.

In my opinion, microservices can be an interesting design for future applications, but trying to map the same idea to the database will be a failure if developers don’t realize that the database is not there only to persist data but also to share it. But that’s just my own current opinion and I’ve no idea about the future. Maybe all this data modified by microservices will be shared in real time in a big Data Lake universally available on the cloud, and all IT problems about concurrency and latency will be solved.

 

The article Database is more than persistence. It is shared. My thoughts about Microservice Database. appeared first on Blog dbi services.

Archivelog deletion policy on Data Guard configuration

Sun, 2017-10-22 14:02

The archivelog deletion policy in a Data Guard configuration should be:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
for the site where you don’t back up. It can be the standby or the primary.

and:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
for the site where you do the backups. It can be the primary or the standby.

I’ve always configured it in this way, but I recently discovered that the order of the subclauses matters. Do not CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY; because no archivelogs will be reclaimable, and your recovery area will be full. This is probably a bug. I’ll update this post when I have more information about this.
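To double-check which policy is actually stored in the control file, a quick query like this one (V$RMAN_CONFIGURATION is standard; the LIKE filter is just a sketch) avoids even opening an RMAN session:

SQL> select name, value from v$rman_configuration where name like '%DELETION POLICY%';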

Test case

I’m on the standby database where I do the backups:

 
Recovery Manager: Release 12.2.0.1.0 - Production on Sun Oct 22 17:37:18 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
RMAN-06568: connected to target database: CDB2 (DBID=693620717, not open)
 
RMAN>
RMAN-03029: echo set on
 

I start with all default configuration:

RMAN> show all;
RMAN-06607: RMAN configuration parameters for database with db_unique_name CDB2B are:
RMAN-01005: CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
RMAN-01005: CONFIGURE BACKUP OPTIMIZATION OFF; # default
RMAN-01005: CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
RMAN-01005: CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
RMAN-01005: CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
RMAN-01005: CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
RMAN-01005: CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN-01005: CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN-01005: CONFIGURE MAXSETSIZE TO UNLIMITED; # default
RMAN-01005: CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
RMAN-01005: CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
RMAN-01005: CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
RMAN-01005: CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
RMAN-01005: CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/snapcf_CDB2B.f'; # default

I have one full backup and then some archivelog backups:

RMAN> list backup summary;
RMAN-06345: List of Backups
RMAN-06346: ===============
RMAN-06347: Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
RMAN-06348: ------- -- -- - ----------- --------------- ------- ------- ---------- ---
RMAN-06349: 9 B F A DISK 22-OCT 17:05:16 1 1 NO TAG20171022T170250
RMAN-06349: 10 B F A DISK 22-OCT 17:06:23 1 1 NO TAG20171022T170250
RMAN-06349: 11 B F A DISK 22-OCT 17:07:18 1 1 NO TAG20171022T170250
RMAN-06349: 12 B F A DISK 22-OCT 17:07:35 1 1 NO TAG20171022T170732
RMAN-06349: 13 B A A DISK 22-OCT 17:15:09 1 1 NO TAG20171022T171502
RMAN-06349: 14 B F A DISK 22-OCT 17:15:19 1 1 NO TAG20171022T171518
RMAN-06349: 15 B A A DISK 22-OCT 17:27:28 1 1 NO TAG20171022T172721
RMAN-06349: 16 B F A DISK 22-OCT 17:27:39 1 1 NO TAG20171022T172737

I have no obsolete backups:

RMAN> report obsolete;
RMAN-06524: RMAN retention policy will be applied to the command
RMAN-06511: RMAN retention policy is set to redundancy 1
RMAN-06147: no obsolete backups found

APPLIED ON ALL STANDBY

I have 1.08% of the recovery area that has been applied and is now reclaimable:

RMAN> configure archivelog deletion policy to applied on all standby;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.08 3

BACKED UP 1 TIMES TO DISK

I have 1.3% of the recovery area that has been backed up:

RMAN> configure archivelog deletion policy to backed up 1 times to disk;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.3 3

This looks good. I have some archived logs that have been applied but not backed up yet.

Both in the ‘bad’ order

But now I want to combine both:

RMAN> configure archivelog deletion policy to backed up 1 times to disk applied on all standby;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 0 3

Nothing is reclaimable here, which is in my opinion a bug.

Both in the ‘right’ order

Trying the same but with different order:

RMAN> configure archivelog deletion policy to applied on all standby backed up 1 times to disk;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.08 3

This looks good: 1.08% has been applied and backed up.

I’ve verified the same behavior when backups are done on the primary or on the standby, and the same in 11.2.0.4, 12.1 and 12.2 with the latest RU.
When the deletion policy starts with the backup clause before the standby clause, the files are not marked as reclaimable. However, they are deleted by a ‘delete archivelog’ statement even without the ‘force’ option.
The behavior is the same with ‘shipped to’ instead of ‘applied on’.

So what?

Be careful with the syntax: the ‘applied’ or ‘shipped’ clause must be written before the ‘backup’ one.
Check that archived logs are reclaimable. The query I use for that is in: https://blog.dbi-services.com/archivelog-deletion-policy-for-standby-database-in-oracle-data-guard/. When more archived log detail is required, I also have this query: https://blog.dbi-services.com/drilling-down-vrecoveryareausage/
Always have a look at the recovery area usage after you change the deletion policy, the site where you backup, or after a switchover.
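The queries linked above give the full detail; as a rough sketch of the idea, something along these lines on V$ARCHIVED_LOG shows how many archived logs in the recovery area are applied, backed up, and therefore candidates to be reclaimed:

select applied, deleted,
       case when backup_count > 0 then 'YES' else 'NO' end backed_up,
       count(*) logs, round(sum(blocks*block_size)/1024/1024) size_mb
  from v$archived_log
 where is_recovery_dest_file='YES'
 group by applied, deleted, case when backup_count > 0 then 'YES' else 'NO' end;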

 

The article Archivelog deletion policy on Data Guard configuration appeared first on Blog dbi services.

VirtualBox 5.2 exports the VM to the Oracle Cloud

Sat, 2017-10-21 10:32

The new release of Oracle VM VirtualBox (aka VirtualBox) is there, with a new feature to export a VM to Oracle Cloud Compute (aka Oracle Cloud Infrastructure). That can be interesting to prepare a VM on my laptop and move it to the Cloud to make it accessible from everywhere. Here’s my first try. In my opinion, it’s a good idea but it probably needs further evolution.

VirtualBox

Here is what is new: in addition to .ova you can export to an Oracle Public Cloud image:

This takes some time, as it compresses and writes all the disk images


The result is a .tar.gz for each disk attached to my VM. It is actually the image of the disk (.img) that is tar-ed and then gzipped. My VM (called VM101) had two disks (VM101-disk1.vdi and VM101-disk2.vdi). The export generated: VM101.tar.gz (containing VM101-disk002.img which looks like my first disk) and VM101-disk003.tar.gz (VM101-disk003.img which looks like my second disk)

Here is the content:


$ tar -ztvf VM101.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 4294967296 2017-10-19 21:23 VM101-disk002.img
 
$ tar -ztvf VM101-disk003.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 27917287424 2017-10-19 21:25 VM101-disk003.img

The .img is the image of the disk, with the partition and boot sector.

Compute Cloud

In the Oracle Public Cloud I can import this image: Compute Classic -> Images -> Upload Image


I upload only the image of the first disk, which contains the root filesystem:


And then I create the compute instance with the ‘Associate Image’ button:


Now, I’m ready to create an instance for it: Instance -> Customize -> Private Images


Then, I can define the shape (OCPU and memory), upload my SSH public key, and add storage (I could add my second disk here) and create the instance.

Here I’ve started it:


Unfortunately, my VM still has the network interface defined for my VirtualBox environment and so I have no way to connect to it. I hope that this feature will evolve to also export the virtual network interfaces.

I have not seen any way to open a terminal on the console. The only thing I can do is take snapshots of it:


Ok, so there’s a problem way before the network interfaces. My VM from Oracle VM VirtualBox (aka VirtualBox) now starts on Oracle VM (aka OVM) and besides the similar marketing name, they are different hypervisors (OVM running XEN). Probably a driver is missing to access block devices and maybe this Bug 21244825.

That’s probably all my tests on this until the next version. It is currently not easy to have a VM that can be started on different hypervisors and network environments.

So what?

Nothing very special here. Moving a VM from one hypervisor to the other is not an easy thing, but it is a good idea. And I hope that the integration into Oracle Cloud will be easier in the future with virtual disk and network interfaces. For the Oracle Cloud, it will be nice to have access to the console, but at least a screenshot may help to troubleshoot.

 

The article VirtualBox 5.2 exports the VM to the Oracle Cloud appeared first on Blog dbi services.

PostgreSQL Index Suggestion With Powa

Fri, 2017-10-20 09:21

Some time ago my colleague Daniel wrote a blog about POWA. In a nice article he showed how this tool can be used to monitor our PostgreSQL.
In this article I am going to show how this powerful tool can help by suggesting indexes which can optimize our queries.
I am using PostgreSQL 9.6

[root@pgservertools extension]# yum install postgresql96-server.x86_64
[root@pgservertools extension]# yum install postgresql96-contrib.x86_64

And then I initialize a cluster

[root@pgservertools extension]# /usr/pgsql-9.6/bin/postgresql96-setup initdb
Initializing database ... OK

POWA requires the following extensions:
pg_qualstats: gathers statistics on predicates found in WHERE statements and JOIN clauses
pg_stat_kcache: gathers statistics about real reads and writes done by the filesystem layer
hypopg: adds hypothetical indexes to PostgreSQL. This extension can be used to see if PostgreSQL would use an index or not
btree_gist: provides GiST index operator classes that implement B-tree equivalent behavior for various data types
powa_web: provides access to POWA via a web browser

We will just note that the following packages were installed to resolve some dependencies during the installation of these extensions.

yum install python-backports-ssl_match_hostname.noarch
rpm -ivh python-tornado-2.2.1-8.el7.noarch.rpm

And then extensions are installed using yum

yum install powa_96.x86_64 pg_qualstats96.x86_64 pg_stat_kcache96.x86_64 hypopg_96.x86_64 powa_96-web.x86_64
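Once the packages are installed, a quick check from psql (a standard catalog view, nothing POWA-specific) confirms that all the extensions are available to the cluster before going further:

select name, default_version, installed_version
  from pg_available_extensions
 where name in ('powa','pg_qualstats','pg_stat_kcache','hypopg','pg_stat_statements','btree_gist');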

After the installation the postgresql.conf is modified to load the extensions

[root@pgservertools data]# grep shared_preload_libraries postgresql.conf | grep -v ^#
shared_preload_libraries = 'pg_stat_statements,powa,pg_stat_kcache,pg_qualstats' # (change requires restart)
[root@pgservertools data]#

And then restart the PostgreSQL

[root@pgservertools data]# systemctl restart postgresql-9.6.service

For POWA configuration, the first step is to create a user for powa

postgres=# CREATE ROLE powa SUPERUSER LOGIN PASSWORD 'root';
CREATE ROLE

and the repository database we will use.

postgres=# create database powa;
CREATE DATABASE

The extensions must be created in the repository database and in all databases we want to monitor

postgres=#\c powa
powa=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
powa=# CREATE EXTENSION btree_gist;
CREATE EXTENSION
powa=# CREATE EXTENSION powa;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_qualstats;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_stat_kcache;
CREATE EXTENSION
powa=# CREATE EXTENSION hypopg;
CREATE EXTENSION

We can verify that extensions are loaded in the database using

powa=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
powa=#

Now let’s create a database named mydb for our tests and let’s create all extensions inside the database.

[postgres@pgservertools ~]$ psql
psql (9.6.5)
Type "help" for help.
postgres=# create database mydb;
CREATE DATABASE
postgres=#

Let’s again verify the extensions in the mydb database

mydb=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
mydb=#

In the mydb database we create a table mytab and insert some rows into it

mydb=# \d mytab
Table "public.mytab"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
val | text |
.
mydb=# select count(*) from mytab;
count
-----------
100000000
(1 row)

The last step is to edit the powa-web configuration file. Below is our file

[root@pgservertools etc]# pwd
/etc
[root@pgservertools etc]# cat powa-web.conf
servers={
'main': {
'host': 'localhost',
'port': '5432',
'database': 'powa',
'query': {'client_encoding': 'utf8'}
}
}
cookie_secret="secret"
[root@pgservertools etc]#

And then powa-web can be started with the following command

[root@pgservertools etc]# powa-web &
[1] 5600
[root@pgservertools etc]# [I 171006 13:54:42 powa-web:12] Starting powa-web on http://0.0.0.0:8888

We can now log in at http://localhost:8888/ with the powa user we created.

And then we can choose the mydb database to monitor it

Now let’s run some queries. As my load is very low I set my pg_qualstats.sample_rate=1 in the postgresql.conf file (thanks to Julien Rouhaud)

[postgres@pgservertools data]$ grep pg_qualstats.sample_rate postgresql.conf
pg_qualstats.sample_rate = 1


mydb=# select * from mytab where id in (75,25,2014,589);
id | val
------+-----------
25 | line 25
75 | line 75
589 | line 589
2014 | line 2014
(4 rows)

Time: 9472.525 ms
mydb=#

Using the Index suggestions tab, we click on Optimize the database. We can see that an index creation is recommended, along with the potential gain.
We will just note that PostgreSQL uses the hypopg extension to see whether the index would be used or not. Let’s see how this extension works. Hypothetical indexes are useful to know if specific indexes can increase the performance of a query. They do not cost any resources to build, as they don’t really exist.
Let’s create a virtual index in mydb database

mydb=# select * from hypopg_create_index('create index on mytab (id)');
indexrelid | indexname
------------+-----------------------
55799 | btree_mytab_id
(1 row)
mydb=#

We can verify the existence of the virtual index by

mydb=# SELECT * FROM hypopg_list_indexes();
indexrelid | indexname | nspname | relname | amname
------------+-----------------------+---------+---------+--------
55799 | btree_mytab_id | public | mytab | btree
(1 row)

Using explain, we can see that PostgreSQL will use the index.

mydb=# explain select * from mytab where id in (75,25,2014,589);
QUERY PLAN
-------------------------------------------------------------------------------------
Index Scan using btree_mytab_id on mytab (cost=0.07..20.34 rows=4 width=17)
Index Cond: (id = ANY ('{75,25,2014,589}'::integer[]))
(2 rows)

Just note that EXPLAIN ANALYZE will not use the virtual index.
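Once the hypothetical index has proven its worth, it can be created for real and the hypothetical definitions cleaned up (the index name below is just an example; hypopg_drop_index() and hypopg_reset() are provided by the hypopg extension):

mydb=# create index mytab_id_idx on mytab (id);
mydb=# select hypopg_drop_index(55799);   -- drop one hypothetical index by its indexrelid
mydb=# select hypopg_reset();             -- or drop them all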
Conclusion
In this article we have seen how POWA can help with optimizing our PostgreSQL database.

References: https://pgxn.org/dist/hypopg/; http://powa.readthedocs.io/en/latest/

 

The article PostgreSQL Index Suggestion With Powa appeared first on Blog dbi services.
