Yann Neuhaus

dbi services technical blog

GDPR compliant by installing software editors tools?

Fri, 2018-01-05 02:51

In a few months (25 May 2018) the EU General Data Protection Regulation (GDPR) will come into force and replace the Data Protection Directive 95/46/EC. Its goals are to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy and to reshape the way organizations across the region approach data privacy. You can find all information related to GDPR on https://www.eugdpr.org and the official PDF of the Regulation (EU) 2016/679 on https://gdpr-info.eu/


 

The biggest change of the GDPR compared to the Data Protection Directive 95/46/EC is perhaps the extended jurisdiction: it applies to all companies processing the personal data of data subjects residing in the Union, regardless of the company’s location. But other key points such as penalties, consent, breach notification, right to access, right to be forgotten, data portability, privacy by design and data protection officers have been added to this Regulation.

From 25 May 2018 on, non-compliant organizations will face heavy penalties in terms of fines and reputation. Indeed, according to the Regulation, non-compliant organizations can be fined up to 4% of annual global turnover or €20 million (whichever is greater) for breaching the GDPR. However, there is no minimum fine or even an automatic fine in case of violation. To decide whether to impose a fine and its amount, the following items can be taken into consideration: the nature, severity and duration of the violation, the number of persons impacted, the impact on the persons, the measures taken to mitigate the damage, first or subsequent breach, and finally the cooperation with the authority to remedy the violation. There is currently no clear procedure for a foreign authority to collect a fine imposed on a Swiss company without presence in the EU.

The impact of the GDPR is worldwide since it does not only affect EU organizations but all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location. This means that all countries (even Switzerland and the UK) are affected. You can find the official information regarding GDPR provided by the Swiss Confederation on the following link and the current Federal Act on Data Protection here (Note: the Swiss Data Protection Act, which is currently under revision, will incorporate key regulations similar to the GDPR). You can find the status of the UK Data Protection Bill on the following link.

According to the GDPR the personal data definition is the following:
Any information related to a natural person or ‘Data Subject’, that can be used to directly or indirectly identify the person. It can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address.

To protect this personal data, the GDPR has 99 articles divided into 11 chapters. Two of these articles, Article 25 (Data protection by design and by default) and Article 32 (Security of processing), are usually put forward by software vendors such as Oracle, Microsoft and IBM. These vendors usually use these articles to promote encryption and anonymization tools, which can make sense depending on the personal data hosted, as the GDPR requires an adapted approach depending on the nature and the scope of the personal data impacted. Indeed, encryption at rest may be appropriate depending on the circumstances, but it is not mandated by the GDPR in every instance.

In other terms, technical tools can help you be in conformity with a small subset of the Regulation, but the GDPR is mostly about processes related to personal data identification and processing. You will probably have a better understanding of what I mean by having a look at the two checklists (one for data controllers and one for data processors) provided by the ICO (UK Information Commissioner’s Office) on the following link. After having completed a checklist you will get an overall rating, a list of suggested actions and guidance.

 


SQL Server 2017: TRIM not only a replacement of RTRIM and LTRIM

Thu, 2018-01-04 10:38

Last month, I presented a session giving an overview of SQL Server 2017 at the IT-Tage 2017 in Frankfurt am Main.
During my session I gave a demo of one of the new string T-SQL commands: TRIM.
My demo was very simple but I also got two very good questions.

Before writing about these questions, I will show you what TRIM is.
This function is a replacement for RTRIM and LTRIM, deleting all space characters before and after a string.

SELECT Name, RTRIM(LTRIM(Name)) as RLTrim_name, TRIM(Name) as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim01

As you can see in the result of the query, TRIM() does the same as RTRIM(LTRIM()).

But TRIM also has a second usage: you can give it a set of characters to be deleted, and this use case is very useful.

I made an example with this query to delete all ‘A’, ‘n’ and ‘e’ characters from the names in my previous query:

SELECT Name, TRIM('A,n,e' FROM Name)  as Trim_name, TRIM('A,n,e, ' FROM Name)  as Trim_name2
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim02

In the result, the column Trim_name does not contain what I expected… The problem is that all name values begin or end with several space characters.
If you look at the Trim_name2 column and its associated query, you will see that I added a space character to the list of characters to trim.
The first interesting question was:

Can I use the char(xx) to define the character to be deleted?

The ASCII code for a space is 32 (20 in hex). I test my query with char(32) like this:

SELECT Name, TRIM(char(32) FROM Name)  as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

I run the query…

trim03
Yes, it works! I can see that my result is without space characters.
Now, I try with another character: char(65), which is ‘A’.

SELECT Name, TRIM(char(32) FROM Name) as Trim_name , TRIM(char(65) FROM Name)  as Trim_name2 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim04

The result is what I expected, with the ‘A’ deleted at the beginning of the string.
My last test is with both characters together in the same Trim function like this:

SELECT Name, TRIM(char(32),char(65) FROM Name) as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim05

As you can see, I get an error. Incorrect syntax near the keyword ‘FROM’.

This is perhaps not the right way, so I try another syntax with ‘char(32),char(65)’:

SELECT Name, TRIM('char(32),char(65)' FROM Name) as Trim_name

FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim06

The result is very strange: the character ‘A’ at the beginning of the string is deleted but not the space character… In fact the string ‘char(32),char(65)’ is not evaluated: it is taken as a literal list of characters to trim, not as ASCII codes.

This does not work! :oops:

Let’s move on to the second question:

Can I use a variable?

My first test with a variable is very simple, with the same character set that I used in my demo:

DECLARE @characters nvarchar(10);
SET @characters= 'A,n,e, '
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim07
It’s working fine.
Now, I will try with the ASCII code.

First, I try with one character:

DECLARE @characters nvarchar(10);
SET @characters= char(32)
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim11

Then I try with several characters:

DECLARE @characters nvarchar(10);
SET @characters= 'char(32),char(65)'
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim08
As you can see, the variable holding several characters written as ASCII codes does not work either: the text ‘char(32),char(65)’ is stored as a literal string and is never evaluated.
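Assuming the goal is to pass ASCII codes, a workaround is to let SQL Server evaluate CHAR() and concatenate the results into the variable instead of storing the literal text ‘char(32),char(65)’. A minimal sketch, reusing the table and filter from the demo above:

DECLARE @characters nvarchar(10);
-- concatenate the evaluated characters: 'A', 'n', 'e' and a space
SET @characters = CHAR(65) + CHAR(110) + CHAR(101) + CHAR(32);
SELECT Name, TRIM(@characters FROM Name) AS Trim_name
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'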

Summary

To finish, I summarize all my tests in a little report:

Direct characters set: ok
Space character in ASCII code: ok
One character in ASCII code: ok
Multiple characters in ASCII code: nok
Variable with several characters: ok
Variable with one character in ASCII code: ok
Variable with multiple characters in ASCII code: nok

 

 


GoldenGate 12.2.0.2 installation in a Grid Infrastructure 12cR2

Thu, 2018-01-04 08:55

This post is a demonstration of how to deploy a fully supported Oracle GoldenGate instance as a set of cluster resources.

We are going to use the Oracle Grid Infrastructure Agents, called XAG, to integrate the GoldenGate instance within our cluster as a cluster-managed resource. This tool is provided by Oracle in the binaries with release 12c and later, but it is recommended to download and use the latest version instead of the bundled one (available here).

XAG provides an interface, “agctl”, which enables us to interact with GoldenGate resources the same way “srvctl” does for Oracle database resources within the Oracle cluster. This tool supports various products that can be integrated into the Oracle cluster, like Tomcat, MySQL and so on.

Here is what can be done with XAG:

[oracle@rac001 ~]$ agctl
Manages Apache Tomcat, Apache Webserver, Goldengate, JDE Enterprise Server, MySQL Server, Peoplesoft App Server, Peoplesoft Batch Server, Peoplesoft PIA Server, Siebel Gateway, Siebel Server, WebLogic Administration Server as Oracle Clusterware Resources

Usage: agctl <verb> <object> [<options>]
   verbs: add|check|config|disable|enable|modify|query|relocate|remove|start|status|stop
   objects: apache_tomcat|apache_webserver|goldengate|jde_enterprise_server|mysql_server|peoplesoft_app_server|peoplesoft_batch_server|peoplesoft_pia_server|siebel_gateway|siebel_server|weblogic_admin_server
For detailed help on each verb and object and its options use:
   agctl <verb> --help or
   agctl <verb> <object> --help

We are going to deploy that GoldenGate HA configuration on top of our existing 12.2 Grid Infrastructure. For this demo I chose to deploy the whole GoldenGate instance and its binaries to a dedicated mount point using ACFS, to be sure my GoldenGate resources have access to their data regardless of the node where the GoldenGate instance is running. It could also be DBFS or NFS.

Step 1 – Create an ACFS mount point called /acfsgg

[oracle@rac001 Disk1]$ /u01/app/12.2.0/grid/bin/asmcmd volcreate -G DGFRA -s 4G --width 1M --column 8 ACFSGG

[root@rac001 ~]# /sbin/mkfs -t acfs /dev/asm/acfsgg-215
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl add filesystem -d /dev/asm/acfsgg-215 -m /acfsgg -u oracle -fstype ACFS -autostart ALWAYS
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl start filesystem -d /dev/asm/acfsgg-215
[root@rac001 ~]# chown oracle:oinstall /acfsgg
[root@rac001 ~]# chmod 775 /acfsgg

[oracle@rac001 ~]$ ./execall "df -hT /acfsgg"
rac001: Filesystem          Type  Size  Used Avail Use% Mounted on
rac001: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg
rac002: Filesystem          Type  Size  Used Avail Use% Mounted on
rac002: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg

 

Step 2 – install GoldenGate binaries in /acfsgg

cd /u01/install/
unzip 122022_fbo_ggs_Linux_x64_shiphome.zip
cd fbo_ggs_Linux_x64_shiphome/Disk1

[oracle@rac001 Disk1]$ ./runInstaller -silent -nowait -showProgress \
INSTALL_OPTION=ORA12c \
SOFTWARE_LOCATION=/acfsgg \
START_MANAGER=false \
MANAGER_PORT= \
DATABASE_LOCATION= \
INVENTORY_LOCATION=/u01/app/oraInventory \
UNIX_GROUP_NAME=oinstall

[...]
Finish Setup successful.
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2018-01-04_06-37-02AM.log' for more details.
Successfully Setup Software.

 

Step 3 – install the latest version of XAG (recommended by Oracle) on all nodes

# creation of the XAG home on all nodes as root
[root@rac001 ~]# mkdir /u01/app/xag
[root@rac001 ~]# chown oracle. /u01/app/xag

# back as oracle
[oracle@rac001 ~]$ cd /u01/install/
[oracle@rac001 install]$ unzip xagpack.zip
[oracle@rac001 xag]$ cd xag

[oracle@rac001 xag]$ export XAG_HOME=/u01/app/xag

[oracle@rac001 xag]$ ./xagsetup.sh --install --directory $XAG_HOME --all_nodes
Installing Oracle Grid Infrastructure Agents on: rac001
Installing Oracle Grid Infrastructure Agents on: rac002
Done.

[oracle@rac001 xag]$ /u01/app/12.2.0/grid/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 7.1.0

[oracle@rac001 xag]$ $XAG_HOME/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 8.1.0

 

Step 4 – configure GoldenGate instance and the manager

[oracle@rac002 acfsgg]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
[oracle@rac002 acfsgg]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
[oracle@rac002 acfsgg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.2.2 OGGCORE_12.2.0.2.0_PLATFORMS_170630.0419_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jun 30 2017 16:12:28
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


GGSCI (rac001) 1> CREATE SUBDIRS

Creating subdirectories under current directory /acfsgg

Parameter files                /acfsgg/dirprm: created
Report files                   /acfsgg/dirrpt: created
Checkpoint files               /acfsgg/dirchk: created
Process status files           /acfsgg/dirpcs: created
SQL script files               /acfsgg/dirsql: created
Database definitions files     /acfsgg/dirdef: created
Extract data files             /acfsgg/dirdat: created
Temporary files                /acfsgg/dirtmp: created
Credential store files         /acfsgg/dircrd: created
Masterkey wallet files         /acfsgg/dirwlt: created
Dump files                     /acfsgg/dirdmp: created

GGSCI (rac001) 2> edit params mgr

PORT 7809
AUTORESTART ER *, RETRIES 3, WAITMINUTES 3, RESETMINUTES 10
AUTOSTART ER *
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
LAGCRITICALMINUTES 5
LAGREPORTMINUTES 60
LAGINFOMINUTES 0

 

Step 5 – Add the GoldenGate resource to the cluster

[root@rac001 ~]# export XAG_HOME=/u01/app/xag
[root@rac001 ~]# $XAG_HOME/bin/agctl add goldengate GGAPP01 --gg_home /acfsgg --instance_type source --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1 --ip 192.168.179.15 --network 1 --user oracle --filesystems ora.dgfra.acfsgg.acfs
Calling POSIX::isdigit() is deprecated at /u01/app/xag/agcommon.pm line 809.

## all parameters in above command are required except the FS specification

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl start goldengate GGAPP01

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl status goldengate GGAPP01
Goldengate  instance 'GGAPP01' is running on rac001

[oracle@rac001 ~]$ /u01/app/12.2.0/grid/bin/crsctl stat res -t
[...]
Cluster Resources
--------------------------------------------------------------------------------
[...]
xag.GGAPP01-vip.vip
      1        ONLINE  ONLINE       rac001                   STABLE
xag.GGAPP01.goldengate
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

At this stage we have an operational GoldenGate fully managed by the cluster and ready to be configured for replication.

In case of failure of the node running the Manager, the cluster will restart the Manager process on the first remaining available node. As soon as the Manager starts, it will automatically restart all Extract and Replicat processes, as we instructed it to in the Manager configuration file (parameter AUTOSTART). The restart of the Extract and Replicat processes is therefore done by the Manager and not by the cluster (so far with this release).

We can also add Extract and Replicat processes as cluster resources, but in that case they will only be monitored by the cluster, which will update their states to ONLINE, OFFLINE, INTERMEDIATE or UNKNOWN depending on the scenario.
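For illustration only, recent XAG releases expose options to declare which processes the agent should monitor – a sketch with hypothetical Extract/Replicat names, to be checked against the options of your XAG release:

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl modify goldengate GGAPP01 --monitor_extracts EXT1 --monitor_replicats REP1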

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 


Experiencing update statistics on a big table with circuitous ways

Wed, 2018-01-03 14:56

This is my first blog post of this new year – and my first in a while, by the way. Indeed, last year I put all my energy into realigning my Linux skills with the new Microsoft strategy that opens SQL Server to the open source world. But at the same time, I did a lot of interesting work at customer sites and I decided to write about one of them to begin this new year 2018.

blog 124 - 0 - stats

In this blog post, I will highlight what I consider a distinctive approach to optimizing an update statistics operation for one particular, big table. I already had to manage such maintenance tasks in one of my previous jobs as a DBA, and I continue to learn more about it, but from a different position now. As a consultant, I usually try to provide my customer with the best approach regarding both the issue and the context. In reality, from my experience, the latter is much more important than you may think, and sometimes we have to consider other ways to reach the expected outcome. I think this is not necessarily a bad thing, because following a different path (not the best one) may reveal other interesting options to consider for our final recipe.

Let’s go back to my customer case and set the context: one big database (1.7TB) on SQL Server 2014 SP2 and an update statistics operation that is part of a global database maintenance strategy and takes a very long time (roughly 30 hours in the best-case scenario). We identified that the biggest part of the execution time is related to one big, non-partitioned table (let’s say dbo.bigtable) with the following figures: 148 million rows, 222GB in size, 119 columns, 94 statistics and 14 indexes. Regarding the last two figures, the majority of the statistics were in fact auto-generated by the SQL Server engine over time and across different application release campaigns. Furthermore, it is worth mentioning that the DBA had to use a customized sample value (between 10 and 50 percent) to minimize the impact of the update statistics operation on this particular table, given the table size and the configured maintenance window timeframe.

 

My first and obvious approach

My first approach consisted of warning the database administrator about the number of statistics on this table (probably a lot of them are not in use anymore?) as well as the development team about the badly designed table. In addition, the number of rows in this table may also indicate that it contains a lot of archive data, and we could reduce the overall size by using archiving processes (why not the built-in partitioning features and incremental statistics, since we’re running Enterprise Edition?). However, reviewing the model was not an option for my customer because it would require a lot of work, and the DBA wasn’t keen on the idea of archiving data (business constraints) or removing auto-generated statistics on this specific table. So, what I considered a good approach did not get adopted, and the human aspect was definitely a big part of it. In a nutshell, at this stage, a standstill …

 

When a high-performance storage comes into rescue …

In parallel, my customer was considering replacing his old storage with a new one, and Pure Storage was in the loop. Pure Storage is one of the flash storage providers on the market, and the good news is that I was already in touch with @scrimi78 (Systems Engineer at Pure Storage) in Switzerland. During this project, we had a lot of interesting discussions and interactions about Pure Storage products and I appreciated his availability to provide technical documentation and explanations. At the same time, they generously lent us a Pure Storage array to play with snapshot volumes to be used for database refreshes between production and dev environments.

In the context of my customer, we were already aware of the poor performance capabilities of the old storage, and the replacement by a Pure Storage //M20 model was very beneficial for the database environments, as shown by the following figures:

I only put the IO-related wait statistics of the production environment from the last year, and we may notice a significant drop in average wait time after moving the production database files to the new storage layout. Figures are by year and month.

blog 124 - 1 - wait stats

We noticed the same from the file IO statistics figures for the concerned database.

blog 124 - 3 - file io stats

Very impressive, isn’t it? But what about our update statistics here? We naturally observed a significant drop in execution time, down to 6 hours (an 80% improvement), because generally speaking this is an IO-bound operation, and especially so in our case. The yellow columns represent operations we had to stop manually to avoid impacting the current application workload (> 35 hours of execution time). You may also notice we changed the sample value to 100 percent after installing the Pure Storage //M20 model, compared to previous runs with a sample value of 50 percent.

blog 124 - 3 - update stats execution time

 

Changing the initial way to update statistics for this table …

The point here is that we may now rely on the storage performance to update statistics efficiently, so why not push the limit of the storage by changing the way we run our update statistics operation – by default sequential, one statistic at a time. We decided to write a custom script to carry out the update operation in parallel to boost the overall execution performance. Since SQL Server 2014 SP1 CU6 we may benefit from improved support for parallel statistics updates by using trace flag 7471, which changes the locking behavior so that the SQL Server engine no longer acquires an X lock on the UPDSTATS resource of the table. The script consists of creating a pool of parallel SQL Agent jobs, each updating one particular statistic on a single table. I put it below if you want to use it, but in the meantime, let’s mention that it is also possible to go through an interesting, smart alternative solution based on Service Broker capabilities here. What is certain is that we will integrate one or the other version – with some adjustments – into our DMK management kit tool. Here is my proposal based on concurrent SQL jobs (feel free to comment):

SET NOCOUNT ON;

DECLARE 
	@database_name sysname = N'AdventureWorks2012', -- Target database
	@table_name sysname = N'bigTransactionHistory', -- Target table
	@schema_name sysname = N'dbo',                  -- Target schema
	@batch_upd_stats INT = 3,                       -- Number of simultaneous jobs
	@stat_sample INT = 0, 					        -- 0 = default sample rate value | 100 = FULLSCAN | Otherwise WITH SAMPLE @stat_sample PERCENT
	@debug TINYINT = 1								-- 0 = debug mode disabled | 1 = debug mode | 99 = verbose mode (BIT cannot hold 99)
	

-- @stats_sample variable table
-- Will contains statistics with custom sampling rate value
-- Otherwise will use the default sample rate value from SQL Server
-- You may also use an user table for more flexibility that that will 
-- be used from this script 
DECLARE @stats_sample TABLE
(
	column_name sysname,
	stats_name sysname,
	sample_stat tinyint
);

INSERT @stats_sample VALUES ('TransactionID', '_WA_Sys_00000001_4CC05EF3', 100),
						    ('ProductID', '_WA_Sys_00000002_4CC05EF3', 100);



-- working parameters
DECLARE
	@nb_jobs_running INT = 0,
	@count INT = 0,
	@i INT,
	@j INT,
	@max INT,
	@sqlcmd NVARCHAR(MAX) = N'',
	@job_name_pattern sysname,
	@start_date DATETIME2(0),
	@end_date DATETIME2(0),
	@stats_name sysname,
	@sample_stat int
	;


-- Creation of update stat jobs
IF OBJECT_ID('tempdb..#update_stats_tt', 'U') IS NOT NULL
	DROP TABLE #update_stats_tt;

SELECT 
	id = IDENTITY(INT, 1,1),
	s.name AS [schema_name],
	t.name AS table_name,
	st.name AS stats_name,
	sample_stat,
	'USE [msdb]
	
	DECLARE @jobId BINARY(16);
	
	EXEC  msdb.dbo.sp_add_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', @job_id = @jobId OUTPUT
	--select @jobId
	
	EXEC msdb.dbo.sp_add_jobserver @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @server_name = N''' + @@SERVERNAME + '''
	
	EXEC msdb.dbo.sp_add_jobstep @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @step_name=N''UPDATE STATS'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_fail_action=2, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ' + st.name + CASE COALESCE(sample_stat, @stat_sample, 0)
																											WHEN 0 THEN ' '
																											WHEN 100 THEN ' WITH FULLSCAN'
																											ELSE ' WITH SAMPLE ' + CAST(COALESCE(sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																										END + ''', 
		@database_name=N''' + @database_name + ''', 
		@flags=0

	EXEC msdb.dbo.sp_update_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@start_step_id=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@description=N'''', 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', 
			@notify_email_operator_name=N'''', 
			@notify_netsend_operator_name=N'''', 
			@notify_page_operator_name=N''''
	' AS upd_stats_cmd
INTO  #update_stats_tt
FROM 
	sys.stats AS st
JOIN 
	sys.tables AS t ON st.object_id = t.object_id
JOIN
	sys.schemas AS s ON s.schema_id = t.schema_id
LEFT JOIN 
	@stats_sample AS ss ON ss.stats_name = st.name
WHERE 
	t.name =  @table_name
	AND s.name = @schema_name
ORDER BY 
	stats_id;

IF @debug = 99
	SELECT * FROM #update_stats_tt;

-- Enable traceflag 7471 to allow U lock while stat is updating
PRINT '--> Enable trace flag 7471 during update stats operation';

SET @sqlcmd = N'DBCC TRACEON(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

PRINT '-----------------------------------';

SET @start_date = CURRENT_TIMESTAMP;

SET @max = (SELECT MAX(id) FROM #update_stats_tt);
SET @i = 0;

IF @debug = 99
	SELECT 'DEBUG -->  @max (nb stats) : ' + CAST(@max AS VARCHAR(15));

-- Entering to the loop ...
WHILE (@i <= @max OR @nb_jobs_running <> 0)
BEGIN

	SET @j = @i + 1;

	IF @debug = 99
	BEGIN
		SELECT 'DEBUG -->  @i : ' + CAST(@i AS VARCHAR(15));
		SELECT 'DEBUG --  @j = @i + 1 : ' + CAST(@j AS VARCHAR(15));
	END

	-- Computing number of update stats jobs to create
	-- regarding both the configured max number of simultaneous jobs and the currently running jobs
	SET @count = @batch_upd_stats - @nb_jobs_running;

	IF @debug = 99
		SELECT 'DEBUG --  @count : ' + CAST(@count AS VARCHAR(15));

	-- Here we go ... creating update stats sql_jobs
	WHILE (@j <= @i + @count)
	BEGIN

		SET @sqlcmd = '';
		SET @stats_name = NULL;
		SET @sample_stat = NULL;
		SET @sqlcmd = NULL;

		SELECT 
			@stats_name = stats_name,
			@sample_stat = sample_stat,
			@sqlcmd = upd_stats_cmd + CHAR(13) + '---------------------' + CHAR(13)
		FROM 
			#update_stats_tt
		WHERE 
			id = @j;

		IF @debug = 99
		BEGIN
			SELECT 'DEBUG --  @j loop : ' + CAST(@j AS VARCHAR(15));
			SELECT @stats_name, @sample_stat
		END

		IF @debug = 1
			PRINT 'UPDATE STATISTICS ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' ' + @stats_name + CASE COALESCE(@sample_stat, @stat_sample, 0)
																															WHEN 0 THEN ' '
																															WHEN 100 THEN ' WITH FULLSCAN'
																															ELSE ' WITH SAMPLE ' + CAST(COALESCE(@sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																														  END + '';
		IF @debug IN (0,1) 
		BEGIN
			PRINT '--> Create SQL job UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + @stats_name + '';
			
			EXEC sp_executesql @sqlcmd;
		END

		SET @j += 1;

	END

	-- We need to rewind by 1 to target the next stat to update
	SET @j -= 1;

	PRINT '-----------------------------------';

	-- Start all related update stats jobs 
	SET @sqlcmd = N'';

	SELECT @sqlcmd += 'EXEC msdb.dbo.sp_start_job @job_name = ''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + stats_name + ''';' + CHAR(13)
	FROM #update_stats_tt
	WHERE id BETWEEN (@i + 1) AND (@i + @count);

	IF @debug = 1
		PRINT @sqlcmd;
	
	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Starting UPDATE_STATS_' + @schema_name + '_' + @table_name + ' jobs';
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	-- Waiting 10 seconds before checking running jobs
	WAITFOR DELAY '00:00:10';

	-- Construction job pattern to the next steps - check running jobs and stop terminated jobs
	SET @job_name_pattern = 'UPDATE_STATS_' + @schema_name + '_' + @table_name + '_';
	
	IF @debug = 99
		SELECT 'DEBUG - @job_name_pattern = ' + @job_name_pattern

	SELECT 
		@nb_jobs_running = COUNT(*)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	JOIN 
		msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
	WHERE 
		ja.session_id = (
							SELECT TOP 1 session_id 
							FROM msdb.dbo.syssessions  (NOLOCK) 
							ORDER BY agent_start_date DESC)
		AND start_execution_date is not null
		AND stop_execution_date is NULL
		AND j.name LIKE @job_name_pattern + '%';

	IF @debug = 99
		SELECT 'DEBUG --  @nb_jobs_running : ' + CAST(@nb_jobs_running AS VARCHAR(15));
	
	IF @nb_jobs_running = @batch_upd_stats
		PRINT '--> All SQL jobs are running. Waiting for 5s ...';

	-- Waiting until at least one job is terminated ...
	WHILE (@nb_jobs_running = @batch_upd_stats)
	BEGIN

		-- Count nb of running jobs only
		SELECT 
			@nb_jobs_running = COUNT(*)
		FROM 
			msdb.dbo.sysjobactivity ja (NOLOCK)
		LEFT JOIN 
			msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
		JOIN 
			msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
		JOIN 
			msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
		WHERE ja.session_id = (
								SELECT TOP 1 session_id 
								FROM msdb.dbo.syssessions  (NOLOCK) 
								ORDER BY agent_start_date DESC)
			AND start_execution_date is not null
			AND stop_execution_date is NULL
			AND j.name LIKE @job_name_pattern + '%'

		WAITFOR DELAY '00:00:05';

	END

	PRINT '-----------------------------------';

	-- Delete terminated SQL jobs 
	SET @sqlcmd = '';

	SELECT 
		@sqlcmd += 'EXEC msdb.dbo.sp_delete_job  @job_name = ''' + j.name + ''';' + CHAR(13)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	WHERE 
		j.name LIKE @job_name_pattern + '%'
		AND start_execution_date is not null
		AND stop_execution_date is not null

	IF @debug = 1
		PRINT @sqlcmd;

	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Removing terminated UPDATE_STATS_' + @schema_name + '_' + @table_name + '_XXX jobs'
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	SET @i = @j;

	IF @debug = 99
		SELECT 'DEBUG --  @i = @j : ' + CAST(@j AS VARCHAR(15));
	
END

-- Disable trace flag 7471 (only enabled for the duration of the update stats maintenance)
PRINT '-----------------------------------';
PRINT '--> Disable trace flag 7471';
SET @sqlcmd = N'DBCC TRACEOFF(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

SET @end_date = CURRENT_TIMESTAMP;

-- Display execution time in seconds
SELECT DATEDIFF(SECOND, @start_date, @end_date) AS duration_S;

 

We initially ran the test on a QA environment with 4 vCPUs and 22GB of RAM – pretty close to the production environment. We noticed that when we increased the number of parallel jobs above 3, we encountered RESOURCE_SEMAPHORE waits. This is because of the memory grants required by each update statistics command. Unfortunately, there was no chance here to increase the amount of memory to push the limit further, but we still noticed an improvement factor of 1.5 on average (still with a sample of 100 percent).

blog 124 - 5 - update stats execution time
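If you want to verify whether the parallel jobs are being throttled by memory grants, a quick look at the memory grant DMV can help (a minimal sketch – it requires the VIEW SERVER STATE permission; sessions still waiting for their grant typically show up as RESOURCE_SEMAPHORE waits):

-- Sessions whose memory grant is still pending (grant_time IS NULL)
SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;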

At this point I asked myself if we could rely only on the storage performance to update statistics. After all, we managed to reduce the execution time below the maximum maintenance window timeframe – fixed at 8 hours in our context.

Analyzing further the data distribution

Relying on the storage performance and the new SQL Server capabilities was a good thing for us, but I kept in mind that updating 94 statistics was probably not a smart idea, because I was convinced that a big part of them was pretty useless. There is no easy way to verify it, because we had a mix of stored procedures and ad-hoc statements from different applications referring to this database (let’s say we also have super users who run queries directly from SSMS). So I decided to put the question differently: if we may not remove some auto-generated statistics, do we necessarily have to update all of them with FULLSCAN for this specific table? What about the data distribution of the columns involved in the auto-generated statistics? In the context of dbo.bigtable, and regarding the number of rows, we may easily run into cardinality estimation issues if the data distribution is not correctly represented, especially in case of skewed data. Nevertheless, manually analyzing the histogram steps of each statistic may be cumbersome, and we decided to go through the stored procedures provided by Kimberly Tripp here. However, the version we got did not support analyzing columns not involved in an index, as mentioned below:

-- Considering for v2
--	, @slowanalyze	char(5) = 'FALSE'		
-- No index that exists with this column as the 
-- high-order element...analyze anyway. 
-- NOTE: This might require MULTIPLE table scans. 
-- I would NOT RECOMMEND THIS. I'm not even sure why I'm allowing this...

 

We had to update a little bit of code to fix it, but don’t get me wrong, Kimberly’s recommendation still makes sense because in the background SQL Server orders the data of the concerned column to get a picture of the data distribution. Without any index on this column, analyzing the data distribution may be a very time- and resource-consuming task (including tempdb for sorting data), especially when the table becomes big, as illustrated by the following sample of code executed by SQL Server while updating statistics.

SELECT StatMan([SC0]) 
FROM 
(
	SELECT TOP 100 PERCENT [TransactionID] AS [SC0] 
	FROM [dbo].[bigTransactionHistory] WITH (READUNCOMMITTED)  
	ORDER BY [SC0] 
) AS _MS_UPDSTATS_TBL  OPTION (MAXDOP 16)

This was the case for my dbo.bigtable, and I had to cut the operation into small pieces spread across several maintenance window timeframes.

So, we added to the sp_SQLskills_AnalyzeColumnSkew stored procedure the creation of a temporary index on the concerned column before analyzing the data, as sketched below. This compromise allowed us to gain a lot of time and was validated on the QA environment.
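Conceptually the adjustment looks like this (a hypothetical sketch, not the actual code we added to the stored procedure – the column name is made up):

-- Disposable index on the analyzed column so the data can be read pre-sorted
CREATE NONCLUSTERED INDEX ix_tmp_skew_analysis ON dbo.bigtable (SomeColumn) WITH (SORT_IN_TEMPDB = ON);

-- ... run the column skew analysis for SomeColumn here ...

DROP INDEX ix_tmp_skew_analysis ON dbo.bigtable;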

We tried different combinations of parameters and finally ended up using the following ones (@difference = 10000, @numofsteps = 5), based on our history of query performance and cardinality estimation issues on this specific table. We got an interesting output, as shown below:

blog 124 - 6 - update stats execution time

The number of columns with skewed data is low compared to those with data distributed uniformly. A [Partial skew] value means we only got differences from the [Nb steps (factor)] column => 2.5 by default.

That was pretty good news, because for columns with no skewed data we could consider updating them either using the default sampling rate used by SQL Server (a nonlinear algorithm under the control of the optimizer) or specifying a custom sampling rate value to make sure we are not scanning too much data. In my context, mixing parallel jobs and the default sampling rate value for column statistics with no skewed data seems to be good enough (no query plan regression at this moment), but we will probably have to change it in the future. Anyway, we managed to reduce the execution time to one hour, as shown below:

blog 124 - 7 - update stats execution time

For columns with skewed data we are still keeping the FULLSCAN option, and we plan to investigate filtered statistics to further enhance cardinality estimation in the future.

The bottom line of this story is that I would probably never have thought of going through all the aforementioned options if the customer had accepted my first proposal (who knows?). Technically and humanly speaking, it was a good learning experience. Obviously, you might think it was not the best or the simplest approach, and you would be right. Indeed, there are drawbacks here, such as adding overhead and the complexity of writing custom scripts and maintaining individual statistics over time. But from my experience in the consulting world, everything is not always black or white, and we also have to work with plenty of variations in each customer’s context to achieve, not necessarily what we consider the best, but a satisfactory outcome for the business.

Happy new year 2018!


12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS

Sat, 2017-12-30 13:54

In the previous post https://blog.dbi-services.com/12c-multitenant-internals-pdb-replay-ddl-for-common-users I did some DDL on a common user to show how it is replayed later for PDBs that were not opened at that time. But what happens when one of the DDL statements fails on one PDB?

PDB$LASTREPLAY

In the last post, the C##USER1 common user was created and all pluggable databases (PDB1 with con_id=3 and PDB2 with con_id=4) were opened and synchronized:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

When REPLAY# in the PDB is equal to the CDB$ROOT one, this means that there are no additional statements to replicate on the PDB.

I have PDB1 opened read write and PDB2 in read only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ ONLY NO

For the demo my user’s default tablespace is SYSTEM:

SQL> select con_id,username,common,default_tablespace from cdb_users where username='C##USER1' order by 1;
 
CON_ID USERNAME COMMON DEFAULT_TABLESPACE
------ -------- ------ ------------------
1 C##USER1 YES SYSTEM
3 C##USER1 YES SYSTEM
4 C##USER1 YES SYSTEM

Failure in opened containers

I want to change the default tablespace for C##USER1 and I have a USERS tablespace in CDB$ROOT (but not in the PDBs):

SQL> alter user C##USER1 default tablespace USERS;
 
Error starting at line : 50 File @ common-users-pdb-sync.sql
In command -
alter user C##USER1 default tablespace USERS
Error report -
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

As we have seen in the last post, the DDL is executed on all containers that are opened read write. Here it is fine on CDB$ROOT but fails on PDB1.

Then I create the USERS tablespace in PDB1:

SQL> alter session set container=PDB1;
Session altered.
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB1/users.dbf' size 5M;
Tablespace USERS created.
SQL> alter session set container=CDB$ROOT;
Session altered.

And now, the statement is successful in CDB$ROOT, replicated on PDB1:

SQL> alter user C##USER1 default tablespace USERS;
User C##USER1 altered.

This is nice: the statement either succeeds in all containers or fails. When it succeeds, the statements are recorded in PDB_SYNC$:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 1467010 29-dec-17 09:27:01 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@

Failure in replay at open for closed containers

But PDB2 is not synchronized because it was not opened read write:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

But I don’t have a USERS tablespace in PDB2, so the replay will fail:

SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
 
SQL> alter pluggable database PDB2 open;
ORA-24344: success with compilation error
Pluggable database PDB2 altered.

This is only a warning. The SQLcl feedback is a bit misleading, mentioning a compilation error because this is where we used to have warnings, but the SQL*Plus message is clearer:

SQL> alter pluggable database PDB2 open;
Warning: PDB altered with errors.

PDB2 cannot be left closed, because you need to create the tablespace there. But it cannot be opened to everyone, because it is not in sync with CDB$ROOT. So what happens is that the PDB is opened in restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE YES

Note that if you look at PDB_SYNC$ in the PDB at that time, it looks like REPLAY#=11 has increased, but you also see rows for the statement that still has to be run. You have to connect to the PDB because containers() does not query containers opened in restricted session mode:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 1469022 29-dec-17 09:27:02 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@
 
SQL> alter session set container=CDB$ROOT;
Session altered.

Actually, the attempt to sync has inserted the statements and pushed the last replay indicator. Now, the PDB has all the information needed to do a sync without going to CDB$ROOT. The DDL was not replayed, but it has been stored locally. When the sync is successful, the statements will be removed from the local PDB_SYNC$, leaving only the LASTREPLAY indicator.

PDB_PLUG_IN_VIOLATIONS

More info about the warning is stored in PDB_ALERT$, which you query through PDB_PLUG_IN_VIOLATIONS (the strange name is reminiscent of the TRANSPORT_SET_VIOLATIONS view used by DBMS_TTS):

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR PENDING Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.03.266780000 PM

Here you have the statement that failed and the error number, but no recommended ACTION. However, ORA-959 is “tablespace ‘%s’ does not exist” which gives a clue about the problem encountered.

As the PDB is opened a DBA (with RESTRICTED SESSION privilege) can add the tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB2/users.dbf' size 5M;
Tablespace USERS created.

But you cannot simply disable restricted session:

SQL> alter system disable restricted session;
 
Error starting at line : 74 File @ common-users-pdb-sync.sql
In command -
alter system disable restricted session
Error report -
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
65144. 00000 - "ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted"
*Cause: An attempt was made to disable a restricted session while an unresolved error existed in PDB_PLUG_IN_VIOLATIONS.
*Action: Resolve all of the errors before trying to disable a restricted session.

One solution is to close and open the PDB to get the DDL replay:
SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
SQL> alter pluggable database PDB2 open;
Pluggable database PDB2 altered.

The other solution is to call DBMS_PDB.SYNC_PDB and disable restricted mode:

SQL> exec dbms_pdb.sync_pdb; commit;
PL/SQL procedure successfully completed.
 
SQL> alter system disable restricted session;
System DISABLE altered.

In both cases, there is no warning and no restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

and the PDB_PLUG_IN_VIOLATIONS is updated to flag the issue as resolved:

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR RESOLVED Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.04.093659000 PM

At that time, the local PDB_SYNC$ table in PDB2 contains only the PDB$LASTREPLAY row, with the same value as in the CDB$ROOT table. The rows with the statements have been deleted once the DDL has been successfully replayed:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11

So what?

The mechanism is simple: record what is done in CDB$ROOT, replicate it to the PDBs when possible (opened read write), try to replay it, and mark the last replay step. For containers that were not writable, at open time the DDL is replicated to the PDBs that lag behind CDB$ROOT and the replay step is updated. Then the DDL is replayed. When successful, the statement is removed from the replicated DDL. When it fails, you get a warning, a message in PDB_PLUG_IN_VIOLATIONS, and the PDB is opened in restricted session mode to let you solve the problem.
If you can fix the issue so that the replayed DDL succeeds, you can just sync and disable restricted session, or simply close and re-open the PDB.
If you can’t fix it, I suppose you need to hack the statements in the local PDB_SYNC$ – with Oracle Support agreement, of course – and make sure that you arrive at a state which is consistent with the other containers, especially CDB$ROOT.

 


12c Multitenant internals: PDB replay DDL for common users

Fri, 2017-12-29 16:05

In multitenant, you can create common users, roles, and profiles. You create them in CDB$ROOT with the CONTAINER=ALL clause (which is optional because it is the only possible value when connected to CDB$ROOT), but they are visible to all containers. As the goal of multitenant is to avoid duplicating common metadata across all containers, you may think that they are visible through those magic metadata links. But that’s actually wrong: they are simply replicated with a very simple mechanism: the DDL for common objects is replayed into each user PDB.

I’m connected to CDB2’s CDB$ROOT and I have two pluggable databases:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED

PDB1 is opened and PDB2 is closed.

PDB_SYNC$

In this example, I’ll query PDB_SYNC$ which is the table where Oracle stores all DDL for common users, roles, or profiles in order to be able to replay it later:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

I excluded bitand(flags,8)=8 because it concerns application containers. I query with containers() to show the con_id, but this is for con_id=1, which is CDB$ROOT.

You can see some DDL for CTXSYS recorded on January 26th, which is the day when this release (12.2.0.1) was built – I used a template with datafiles to create the CDB with DBCA. And you see some DDL to unlock SYS and SYSTEM on December 23rd, when I created the database. You can also see that they are ordered in sequence by REPLAY#.

More interesting is OPCODE=-1, which is PDB$LASTREPLAY and looks like the last value of REPLAY#. This means that in this container, CDB$ROOT, all statements with REPLAY#<=5 were run.

With a similar query, I query the opened PDBs:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

There is only one row here for CON_ID=3, which is PDB1: the PDB$LASTREPLAY row mentioning that all statements up to REPLAY#=5 have also been run in this container.
I don’t see PDB2 (CON_ID=4) here because the containers() clause cannot query closed containers.

CONTAINER=ALL DDL

I’ll run some common DLL to create a profile, a role and a user:

SQL> create profile C##PROFILE1 limit inactive_account_time 15 container=all;
Profile C##PROFILE1 created.
 
SQL> create role C##ROLE1 container=all;
Role C##ROLE1 created.
 
SQL> create user C##USER1 identified by oracle container=all;
User C##USER1 created.
 
SQL> alter user C##USER1 profile C##PROFILE1;
User C##USER1 altered.
 
SQL> grant C##ROLE1 to C##USER1 container=all;
Grant succeeded.

The C## prefix is mandatory to isolate the common user namespace. You can change it with the common_user_prefix parameter. You can even set it to the empty string, but then you risk namespace collisions when you plug a PDB between CDBs having different common profiles or roles.
CONTAINER=ALL is the default and the only possibility when connected to CDB$ROOT, so it is optional. I recommend mentioning it explicitly in order to avoid problems when running the same DDL in CDB$ROOT and in PDBs.

All these DDL statements have been recorded into PDB_SYNC$ and the REPLAY# has been increased:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

PDB1 (CON_ID=3) was opened read write, and then has been synchronized (the DDL has been run in the container to create the same profile, role and user) and the PDB$LASTREPLAY has been updated in this container to show that all has been done:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

MOUNTED or READ ONLY

I open PDB2 read only because I want to see what is in PDB_SYNC$ there. But READ ONLY means that the DDL cannot be replayed, because no write is allowed in the local dictionary.

SQL> alter pluggable database PDB2 open read only;
Pluggable database PDB2 altered.

Running the same query as above, I can see that PDB2 (CON_ID=4) is synchronized only up to the statements with REPLAY#=5, because my DDL was not replicated there:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

Sync at OPEN

When I open PDB2 in read write mode, the DDL can be synchronized:

SQL> alter pluggable database PDB2 open read write force;
Pluggable database PDB2 altered.

At open, the DDL from REPLAY#>5 has been replayed and once opened the PDB is in sync with CDB$ROOT:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

So what?

The common users, roles and profiles are not stored only in CDB$ROOT to be shared, but rather replicated to all PDBs. The DDL is replicated synchronously to all pluggable databases opened read write, and stored into the CDB$ROOT PDB_SYNC$ table to be replayed later when non-synced PDBs are opened. I'll show in the next post what happens when the DDL is in error.

Note that even when all pluggable databases are opened read write, the DDL statements are stored and are purged only later (once replayed on all PDBs), because they are still needed when you create a new PDB and open it. The PDB$SEED has REPLAY#=0 for PDB$LASTREPLAY, which means that all statements will be replayed there.

This is 12.2.0.1, where this mechanism covers only common users, roles and profiles whose DDL is run in CDB$ROOT. With application containers, more than that is recorded: all DML and DDL run between the 'begin install/upgrade/patch' and 'end install/upgrade/patch' markers in the application root. Then, the statements can be replayed into the application PDBs with a simple SYNC command. In the next release (18c) we expect that application root behaviour to be ported to CDB$ROOT so that we don't have to run catupgrd.sql in all containers. Then the PDB will probably be patched or upgraded when opened.
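
As an illustration (my own sketch, with a hypothetical application name APP1, not part of this demo), the replay into an application PDB is done from that PDB with something like:

SQL> alter pluggable database application APP1 sync;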

 

Cet article 12c Multitenant internals: PDB replay DDL for common users est apparu en premier sur Blog dbi services.

Drop PDB including datafiles may keep files open

Mon, 2017-12-25 13:14

I like that Oracle drops the datafiles from the operating system when we remove them from the database (with drop tablespace or drop pluggable database) because I don't like to have orphan files remaining in the filesystem. However, to ensure that space is reclaimed, we must be sure that Oracle did not leave a process with the file open. Linux allows removing an open file, but it only removes the directory entry; the inode and the data stay on disk. The consequence is that we no longer see the file, but space is not reclaimable until the process closes its handle.
Here is a case I hit in 12.2 where plugging a PDB is done in parallel and the parallel processes keep the files open even after we drop the pluggable database.
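
As a quick illustration of this Linux behaviour (a generic sketch of my own, independent of Oracle and of this test case), you can reproduce it with any process holding a file open:

$ dd if=/dev/zero of=/tmp/bigfile bs=1M count=100
$ tail -f /tmp/bigfile &
$ rm /tmp/bigfile
$ df -h /tmp                  # the space is still used
$ lsof +L1 | grep bigfile     # the deleted file is still held open by tail
$ kill %1 ; df -h /tmp        # space is reclaimed once the handle is closed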

I have 1.2 GB free on my filesystem:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 1.9G 1.2G 61% /

Plug

I plug a PDB from a PDB archive:
SQL> create pluggable database PDB0
2 using '/u01/app/temp/PDB0.pdb'
3 file_name_convert=('/u01/app/temp','/u02/oradata')
4 /
 
Pluggable database PDB0 created.

Open

In my example the PDB was from an older PSU level. I open it:
SQL> alter pluggable database PDB0 open;
ORA-24344: success with compilation error
 
Pluggable database PDB0 altered.

I got a warning and the PDB is in restricted session mode:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB0 READ WRITE YES
4 PDB1 READ WRITE NO

The reason is that a PSU installed in the CDB$ROOT was not there when the PDB was unplugged:

SQL> select status,message from pdb_plug_in_violations;
STATUS MESSAGE
------ ------------------------------------------------------------------------------------------------------------
PENDING DBRU bundle patch 171017 (DATABASE RELEASE UPDATE 12.2.0.1.171017): Installed in the CDB but not in the PDB.

I can run datapatch, but let’s say that I realize it’s the wrong PDB archive and I want to drop what I’ve imported. Then I expect to reclaim the space in order to be able to import the right one again.

Processes

Before closing the PDB here are all the processes having one of the datafiles opened:
SQL> host for i in $(fuser /u02/oradata/*) ; do ps --no-headers -p $i ; done
/u02/oradata/sysaux01.dbf:
/u02/oradata/system01.dbf:
/u02/oradata/temp01.dbf:
/u02/oradata/undotbs01.dbf:
/u02/oradata/users01.dbf:
6053 ? 00:00:00 ora_dbw0_cdb1
6122 ? 00:00:01 ora_p003_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6120 ? 00:00:01 ora_p002_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6118 ? 00:00:00 ora_p001_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6116 ? 00:00:00 ora_p000_cdb1
6594 ? 00:00:01 oracle_6594_cdb

There are my session's shadow process, the DBWR, and the Pnnn parallel processes that copied the datafiles during the plug.

Close

So, I want to drop it and then I close it:
SQL> alter pluggable database PDB0 close;
 
Pluggable database PDB0 altered.

Close means that all files are closed. Is it true? Actually not:
SQL> host for i in $(fuser /u02/oradata/*) ; do ps --no-headers -p $i ; done
/u02/oradata/sysaux01.dbf:
/u02/oradata/system01.dbf:
/u02/oradata/undotbs01.dbf:
/u02/oradata/users01.dbf:
6122 ? 00:00:01 ora_p003_cdb1
6120 ? 00:00:01 ora_p002_cdb1
6118 ? 00:00:00 ora_p001_cdb1
6116 ? 00:00:00 ora_p000_cdb1

The parallel processes still have the datafiles opened. This is probably a bug and I’ll open a SR referencing this blog post.

So I want to reclaim space:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 2.6G 527M 84% /

I expect the available space to go back from 527 MB to 1.2 GB once I drop the PDB.

Drop including datafiles

I cannot drop the PDB and keep the datafiles, or I'll get: ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged
As I don't want to unplug it but just drop it, I must mention 'including datafiles':

SQL> drop pluggable database PDB0 including datafiles;
 
Pluggable database PDB0 dropped.

Unfortunately, the space is not reclaimed:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 2.6G 527M 84% /

As you have seen, the Pnnn processes were still there after the close, so you know the reason. Linux has removed the directory entry, but the file data is still there in the filesystem until the processes close their handles (or the processes are killed). You can see them with lsof or from the /proc filesystem:

SQL> host find /proc/*/fd -ls 2>/dev/null | grep deleted
79174 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6116/fd/257 -> /u02/oradata/users01.dbf\ (deleted)
79195 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6118/fd/257 -> /u02/oradata/undotbs01.dbf\ (deleted)
79216 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6120/fd/257 -> /u02/oradata/system01.dbf\ (deleted)
79237 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6122/fd/257 -> /u02/oradata/sysaux01.dbf\ (deleted)
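
With lsof, an equivalent check (my own variant of the command above) would be something like:

SQL> host lsof +L1 | grep /u02/oradata

as +L1 lists open files whose link count is below one, i.e. files that have been deleted but are still held open.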

On a running CDB I cannot kill the background processes because they may be doing something useful. Until I can restart the instance, the only way to reclaim the space is to truncate those files to zero bytes. I cannot use the file names, which have been deleted, but I can use the /proc links:

SQL> host find /proc/*/fd -ls 2>/dev/null | grep -E " [/]u02[/]oradata[/].* [(]deleted[)]" | awk '{print $11}' | while read f ; do : > $f ; done

And finally space is reclaimed:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 1.9G 1.2G 61% /

So what?

I encountered this issue with a PDB plug, but I'm quite sure we can encounter it in other situations where the parallel processes have opened the PDB datafiles. You can imagine the consequence on a multitenant CDB for DBaaS where you have hundreds of PDBs and constantly create and drop them, probably in an automated way. Space not reclaimed means that at some point the provisioning will fail. I hope this bug will be filed and fixed: closing a PDB should make sure that all processes close its files. Until then, the safe way is to do something like I did, emptying the deleted files, with proper verification that the old files belong to a dropped PDB.

 

Cet article Drop PDB including datafiles may keep files open est apparu en premier sur Blog dbi services.

DOAG 2017 – Middleware: Automation and Docker container streams

Fri, 2017-12-22 09:43

doag2017

At the end of last month I participated as a presenter at DOAG 2017. It was interesting to see the growing interest in automated deployment tools.
I presented how Fusion Middleware cloning can be used with such an automation tool to lower the time needed to install an environment.
Fusion Middleware cloning is based on a master installation where all patches, configuration and system components are installed and configured.
This brings a real time reduction when several identical installations are required (TEST, CA, PROD), and the use of such an automation tool guarantees that all environments are totally identical.
This automation was done using Ansible.

Oracle Fusion Middleware cloning is no longer supported in 12.2.1.3, so we decided to move to full installation, patching, WebLogic domain creation and configuration using Ansible scripts.
This is again a gain of time compared to manual installation, even if those are silent installations. Using an Ansible playbook guarantees that all installations are fully identical and avoids human errors in repetitive tasks.

Another interesting stream at the DOAG was the Docker one. There were a few presentations about the use of Docker to run WebLogic instances.

Container im Schwarm – Weblogic und Docker Swarm
Docker Container für das Deployment eines WebLogic Clusters
Docker und Oracle’s JDK Grundlagen für Entwickler
Docker for developers

Docker is now supported by Oracle for the WebLogic deployments. There are even Oracle Docker images that can be downloaded.

The presentation quality increases year after year. This is a new challenge to prepare for next year: DOAG 2018.

 

 

 

Cet article DOAG 2017 – Middleware: Automation and Docker container streams est apparu en premier sur Blog dbi services.

12cR2 Subquery Elimination

Thu, 2017-12-21 15:27

More and more we can see crazy queries generated by ORM frameworks or BI query generators. They are built to be easily generated rather than to be optimal. Then, the optimizer has to implement more and more transformations to get an efficient execution plan. Here is a new one that appeared in Oracle 12cR2: Subquery Elimination, when the subquery does not filter any rows.

A semi-join is a join where we do not need to match all rows, but only one. We write it with an EXISTS subquery, or with an equivalent IN, =ANY or =SOME subquery.
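
For instance, the IN subquery used in the test below could equally be written with EXISTS (a rewrite of my own, not part of the original test case):

SQL> select * from EMP e where exists (select 1 from EMP i where i.ename=e.ename);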

12.1

Here is the behaviour in 12.1.0.2 when the subquery does not filter any row because it reads the same table as the outer query, without any predicate:
SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
-----------------
SQL_ID az1jcra46h5ua, child number 1
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 977554918
 
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 13 | | | |
|* 1 | HASH JOIN SEMI | | 1 | 14 | 14 |00:00:00.01 | 13 | 1098K| 1098K| 889K (0)|
| 2 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 7 | | | |
| 3 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | | | |
----------------------------------------------------------------------------------------------------------------

We read the same table twice, join all rows and finally return exactly the same rows as those coming from the first full scan. This is not efficient.

12.2

Here is the same query in 12.2 where we can see that the table is scanned only once because the optimizer knows that the subquery does not filter anything:

SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
SQL_ID az1jcra46h5ua, child number 0
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 9 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 9 |
------------------------------------------------------------------------------------

This is mentioned in the outline hints with ELIMINATE_SQ:

Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$D0DB0F9A")
ELIMINATE_SQ(@"SEL$2")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
FULL(@"SEL$D0DB0F9A" "EMP"@"SEL$1")
END_OUTLINE_DATA
*/

This feature can be disabled with the following parameter:

SQL> alter session set "_optimizer_eliminate_subquery"=false;
Session altered.

Or with the following hint:

SQL> select * from EMP where ename in (select /*+ NO_ELIMINATE_SQ */ ename from EMP);

Finally, here is what you can see in the optimizer (10053) trace:

SQE: Trying SQ elimination.
SQE: Trying SQ elimination.
SQE: Query valid for SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE "EMP"."ENAME"=ANY (SELECT "EMP"."ENAME" "ENAME" FROM "SCOTT"."EMP" "EMP")
Registered qb: SEL$D0DB0F9A 0xe7035778 (SUBQUERY ELIMINATE SEL$1; SEL$2)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$D0DB0F9A nbfros=1 flg=0
fro(0): flg=0 objn=73253 hint_alias="EMP"@"SEL$1"
 
SQE: Query after SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE 0=0 AND "EMP"."ENAME" IS NOT NULL

This example is simple, and it is obvious that the SQL should be rewritten. But with large generated queries on complex views, this is the kind of pattern that ends up in the resulting query, and this transformation helps to avoid unnecessary work.

 

Cet article 12cR2 Subquery Elimination est apparu en premier sur Blog dbi services.

Oracle Licensing (R)evolution

Thu, 2017-12-21 08:49

In 2015 I wrote a blog named “All you need to know about Oracle Database licensing with VMware”. This blog generated lots of comments and hopefully helped some DBAs and IT Managers understand the potential issues that they can face in case of an Oracle LMS audit.

In two years I have gathered some new experience related to Oracle LMS audits, and I'd like to share it with you to provide up-to-date information. What is written in this blog is nothing more than a summary of facts encountered with Swiss Oracle customers. The situation described here concerns one of our customers, the SITel (Service de l'Informatique et des Télécommunications) of Fribourg in Switzerland, and has also been encountered by some of our other customers. I cannot guarantee that Oracle applies the same strategy worldwide. Please note that this article has been written with the approval of all customers named in this blog.

What didn’t change?

Oracle on VMware not certified: Oracle still does not certify any of its products on VMware virtualized environments, as explained in My Oracle Support Note ID 249212.1. In summary, it's supported but not certified, meaning that Oracle does not guarantee the proper working of these products on a VMware architecture, but will support you in case of incidents not related to the VMware infrastructure. In cases where VMware could be involved, Oracle support may ask you to reproduce the incident on a certified environment. VMware on its side has its own Oracle support policy available here.

Oracle audit reputation: As mentioned on clearlicensing.org, Oracle did not improve its audit reputation. Indeed, according to this website, Oracle was voted worst vendor during the audit process (cf. graph below).

Clearlicensing

Who is the least helpful vendor in terms of audits (aggressive behavior, focused on short-term revenue)?

Soft and hard partitioning policy: VMware is still a soft partitioning technology according to Oracle. The distinction between hard and soft partitioning is explained in this Oracle pdf file. In addition, VMware still explains to its customers that “DRS Host Affinity rules can be used to appropriately restrict the movement of virtual machines within the cluster. DRS Host Affinity is a vSphere feature that enables you to ensure that your Oracle applications are restricted to move only between a subset of the hosts—that is, not all hardware in the cluster is “available” to the Oracle software” (cf. UNDERSTANDING ORACLE CERTIFICATION, SUPPORT AND LICENSING FOR VMWARE ENVIRONMENTS). That is certainly true, but Oracle does not recognize this feature as making VMware a hard partitioning solution, and it won't be taken into consideration during an LMS audit.

What changed?

Oracle Edition: As you know, Standard Edition One and Standard Edition have been discontinued, and if you want to upgrade to 12.1.0.2 (12c patchset 1) you have to go to Standard Edition Two. You can find a good blog from Franck Pachot related to the installation of Standard Edition Two here, as well as the tests related to the 16-thread limitation here. Many questions related to Oracle Standard Edition 2 and the differences between Standard Edition One and Standard Edition Two are answered in this Oracle pdf.

Oracle LMS auditors: In Switzerland we now have a company, Seven Eighths Schweiz GmbH (registered on 15.02.2017), doing Oracle audits. The JPE program has definitively stopped.

Oracle commercial approach for VMware customers: That is perhaps the most important point, and the one that inspired the title of this blog. Up to the middle of last year, Oracle customers using VMware to host their Oracle products had to be licensed for all processors where Oracle products were installed and/or running, but Oracle basically took into consideration all processors where Oracle products could be running. This has been well illustrated by the following famous image created by House of Brick.

House Of Brick Oracle Parking

House Of Brick Oracle Parking

Meaning that since VMware 5.1, all the physical hosts managed by a vCenter Server instance have to be licensed, whatever the Oracle footprint on the virtual servers. This decision has probably been taken by Oracle because of the new features coming with VMware vSphere 5.1, particularly vSphere vMotion (zero-downtime migration) without the need for shared storage configurations. Of course, with VMware 6.x and according to this same principle, all the vSphere environments of the company have to be licensed. Since last year some customers came to me requesting my advice on a proposal made by Oracle. Basically, Oracle proposed to validate their Oracle/VMware infrastructure and to license only the processors that are really used to execute Oracle products, with these two conditions:

  1. Totally isolating Oracle products in their VMware infrastructure
  2. Oracle requires a “certain volume of business” (can be in terms of Oracle Cloud Credit) to validate the customer infrastructure schema

It is important to take into consideration that Oracle validated these infrastructures only for some specific versions of VMware, usually the current customer’s VMware version and the next one.

What does “totally isolating Oracle products” mean?

Customers sent me the infrastructure examples that had been provided to them by Oracle. You can find them below. (Note that the customers gave me their approval to publish these pictures.)

Oracle VMware VLAN and storage configuration

Oracle VMware VLAN and storage configuration

 

vCenter, VLAN and storage configuration

vCenter, VLAN and storage configuration

What you can see on these infrastructure schemas is:

  1. A dedicated vCenter Server Instance with dedicated physical hosts
  2. A dedicated VLAN
  3. A Storage isolation through LUN Masking, Zoning or any approved restriction

What does a “certain volume of business” mean?

In the cases we encountered, Oracle asked for a “certain volume of business” to validate the Oracle infrastructure on VMware. This volume of business has in some cases been realized by acquiring Oracle Cloud Credits. I cannot provide an exact budget for this compensation since it seems to vary according to the customer infrastructure. In the cases we encountered with our customers, we spoke of amounts above 40.000 US$ (CHF 40'000).

Conclusion

It seems that Oracle is no longer as restrictive regarding the licensing of Oracle on a VMware infrastructure. VMware is still not certified with any Oracle product, but at least Oracle customers can speak with Oracle salespeople in order to avoid having to license all the processors of their vSphere environments. In some circumstances, and according to our experience, this discussion can lead to the acquisition of some other Oracle products or Cloud Credits to get the infrastructure validated by Oracle for specific VMware versions.

The alternative to VMware is Oracle VM Server. Although the support is chargeable, Oracle VM itself is free. Unfortunately, even if Oracle made some great improvements over the last years, Oracle VM does not have the same functionalities as VMware, but it can perfectly fit depending on your needs. You can find the documentation to configure it as hard partitioning, with CPU pinning, here, since Oracle VM Server has to be configured specifically in order to be considered a hard partitioning technology.

I do hope that you find this blog helpful, and would like to thank the SITel for its precious collaboration in reviewing this blog.

 

Cet article Oracle Licensing (R)evolution est apparu en premier sur Blog dbi services.

Online datafile move in a 12c dataguard environment

Wed, 2017-12-20 12:13

Oracle 12c introduces online datafile move. One question we might ask is how moving a datafile online behaves in a Data Guard environment. In this blog we will do some tests.
Below is our configuration; we are using Oracle 12.2.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 8 seconds ago)
DGMGRL>

The StandbyFileManagement property is set to auto for both the primary and the standby database.

DGMGRL> show database 'MYCONT_SITE' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL> show database 'MYCONT_SITE1' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL>
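
If it were not already the case, the property could be set through the broker, for example:

DGMGRL> edit database 'MYCONT_SITE1' set property StandbyFileManagement='AUTO';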

Below are the datafiles of the pluggable database PDB1, on both primary and standby:

SQL> show con_name;
CON_NAME
------------------------------
PDB1
SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

Now let’s move for example /u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf to a new location on the primary PDB1

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

We can verify the new location on the primary

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

As StandbyFileManagement is set to auto for both databases, we might think that the datafile is also moved on the standby, so let's check:

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

The answer is no.
Ok but is my dataguard still working? Let’s query the broker

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 0 seconds ago)

Yes the configuration is fine.
Ok, now can I move the datafile online to the new location on the standby server? Let's try:

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

And we can verify that datafile was moved.

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

And we also can verify that my dataguard configuration is still fine

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 46 seconds ago)
DGMGRL>

Conclusion
We can see that
1- The StandbyFileManagement property does not apply to online datafile move
2- Moving a datafile online on the primary does not move the datafile on the standby
3- Online datafile move can be done on the standby database

 

Cet article Online datafile move in a 12c dataguard environment est apparu en premier sur Blog dbi services.

Oracle 12.2 Dataguard : PDB Flashback on the Primary

Wed, 2017-12-20 12:12

Recently I was discussing with a colleague about database flashback for a pluggable database in a Data Guard environment. I did some tests and present the results in this blog.
Below is our broker configuration. Oracle 12.2 is used.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 35 seconds ago)
DGMGRL>

The primary database has the flashback database set to YES.

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE READ WRITE YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

Same for the standby database

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE1 READ ONLY WITH APPLY YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ ONLY
PDB2 READ ONLY
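
Both databases already have flashback enabled here. If it were not the case, it could be enabled at the CDB level (as a reminder, not part of this test, and provided a fast recovery area is configured with db_recovery_file_dest and db_recovery_file_dest_size) with:

SQL> alter database flashback on;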

For the tests we are going to do a flashback database for the primary PDB1.
Let’s connect to PDB1

10:15:59 SQL> alter session set container=pdb1;
Session altered.
.
10:16:15 SQL> show con_name;
CON_NAME
------------------------------
PDB1
10:16:22 SQL>

And let's create a table named article, with some data, for reference:

10:16:22 SQL> create table article (idart number);
Table created.
.
10:18:12 SQL> insert into article values (1);
1 row created.
10:18:31 SQL> insert into article values (2);
1 row created.
.
10:18:34 SQL> select * from article;
IDART
----------
1
2
.
10:18:46 SQL> commit;

Now let’s do a database flashback of primary pdb1 before the creation of the table article.

10:28:12 SQL> show con_name
CON_NAME
------------------------------
PDB1
.
10:28:16 SQL> shut immediate;
Pluggable Database closed.
.
10:28:28 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 MOUNTED
10:28:54 SQL>
.
10:28:54 SQL> FLASHBACK PLUGGABLE DATABASE PDB1 TO TIMESTAMP TO_TIMESTAMP('2017-12-20 10:16:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
10:30:14 SQL>

Now let’s open PDB1 with resetlogs option

10:31:08 SQL> alter pluggable database PDB1 open resetlogs;
Pluggable database altered.
10:32:15 SQL>

And let’s query the table article. As expected the table is no longer present

10:32:15 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 READ WRITE
.
10:32:59 SQL> select * from article;
select * from article
*
ERROR at line 1:
ORA-00942: table or view does not exist
10:33:06 SQL>

Now if we check the status of our dataguard in the broker, we have errors

12/20/2017 10:23:07 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 48 seconds ago)
12/20/2017 10:34:40 DGMGRL>

The status of the Primary database is fine

12/20/2017 10:34:40 DGMGRL> show database 'MYCONT_SITE';
Database - MYCONT_SITE
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYCONT
Database Status:
SUCCESS

But the standby status is returning some errors
12/20/2017 10:35:11 DGMGRL> show database 'MYCONT_SITE1';
Database - MYCONT_SITE1
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 3 minutes 10 seconds (computed 1 second ago)
Average Apply Rate: 7.00 KByte/s
Real Time Query: OFF
Instance(s):
MYCONT
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
12/20/2017 10:35:15 DGMGRL>

And if we check the alert log of the standby database, we can find the following errors:

(3):Recovery of pluggable database PDB1 aborted due to pluggable database open resetlog marker.
(3):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 2518041, or timestamp before 12/20/2017 10:16:01, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2017-12-20T10:32:05.565085+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 2520607
2017-12-20T10:32:05.612394+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
2017-12-20T10:32:05.612511+01:00
MRP0: Background Media Recovery process shutdown (MYCONT)

On the primary PDB, we can query the current INCARNATION_SCN in the v$pdb_incarnation view. And we can see that the current incarnation SCN is the same as the one mentioned in the standby alert log (2518041):

11:08:11 SQL> show con_name
CON_NAME
------------------------------
PDB1
11:08:56 SQL> select status,INCARNATION_SCN from v$pdb_incarnation;
STATUS INCARNATION_SCN
------- ---------------
CURRENT 2518041
PARENT 2201909
PARENT 1396169
11:08:59 SQL>

And then, as specified in the alert log, we have to flashback the standby PDB to an SCN lower than 2518041.
First, let's stop the redo apply on the standby:

12/20/2017 11:13:14 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-OFF';
Succeeded.
12/20/2017 11:13:59 DGMGRL>

And then let's flashback to 2518039 (i.e. 2518041 - 2) for example.
Let's shut down the standby container MYCONT and start it up in mount state:

11:18:42 SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
.
11:19:10 SQL> startup mount
ORACLE instance started.
Total System Global Area 956301312 bytes
Fixed Size 8799656 bytes
Variable Size 348129880 bytes
Database Buffers 595591168 bytes
Redo Buffers 3780608 bytes
Database mounted.
11:19:50 SQL>
.
11:19:50 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED MOUNTED
PDB1 MOUNTED
PDB2 MOUNTED

Now let’s flashback PDB1 on the standby

11:20:19 SQL> flashback pluggable database PDB1 to SCN 2518039;
Flashback complete.
11:20:40 SQL>

The last step is to enable again the redo apply for the standby container

12/20/2017 11:13:59 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-ON';
Succeeded.
12/20/2017 11:23:08 DGMGRL>

And then we can verify that the configuration is now fine

12/20/2017 11:25:05 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
12/20/2017 11:25:07 DGMGRL>

Conclusion
In this article we saw that flashback in a Data Guard environment works the same way for a container database as for a non-container database. The only difference is the SCN we must consider to flashback the pluggable database: this SCN should be queried from v$pdb_incarnation, and not from v$database as we usually do for a non-container database.

 

Cet article Oracle 12.2 Dataguard : PDB Flashback on the Primary est apparu en premier sur Blog dbi services.

Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02)

Wed, 2017-12-20 07:58

In this post we are going to deploy a R.A.C system ready to run a production load, with near-zero knowledge of R.A.C, Oracle Clusterware or the Oracle database.

We are going to use the “Deploy Cluster Tool”, which is provided by Oracle to perform the deployment of the many kinds of database architectures you may need, like Oracle single instance, Oracle Restart or Oracle R.A.C. This tool lets you choose whether you want an Enterprise Edition or a Standard Edition, and whether you want Oracle Release 11g or 12c.

For this demonstration we are going to deploy a R.A.C 12cR2 in Standard Edition.

What you need at this stage

  • An OVM infrastructure as described in this post. In this infrastructure we have
    • 2 virtual machines called rac001 and rac002 with the required network cabling and disk configuration to run R.A.C
    • The 2 VMs are created from the Oracle template, which includes everything needed to deploy whichever configuration you need
  • A copy of the last release of the “Deploy Cluster Tool” available here

The most important part here is to edit 2 configuration files to describe the configuration we want:

  • deployRacProd_SE_RAC_netconfig.ini: network parameters needed for the deployment
  • deployRacProd_SE_RAC_params.ini: parameters related to database memory, name, ASM disk groups, user UIDs and so on (an excerpt is shown after this list)
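
As an illustration, here is a small excerpt of the params.ini used for this run; only the settings that are explicitly echoed in the build log further down are shown, the full file also covers memory, database name, ASM disk groups and user UIDs:

# build only the cluster, no database is created during the deployment
BUILD_RAC_DATABASE=no
BUILD_SI_DATABASE=no
# do not enforce the memory prerequisite checks on these small demo VMs
CLONE_SKIP_MEMORYCHECKS=yes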

This is the content of the network configuration used for this infrastructure:

-bash-4.1# egrep -v "^$|^#" deployRacProd_SE_RAC_netconfig.ini
NODE1=rac001
NODE1IP=192.168.179.210
NODE1PRIV=rac001-priv
NODE1PRIVIP=192.168.3.210
NODE1VIP=rac001-vip
NODE1VIPIP=192.168.179.211
NODE2=rac002
NODE2IP=192.168.179.212
NODE2PRIV=rac002-priv
NODE2PRIVIP=192.168.3.212
NODE2VIP=rac002-vip
NODE2VIPIP=192.168.179.213
PUBADAP=eth1
PUBMASK=255.255.255.0
PUBGW=192.168.179.1
PRIVADAP=eth2
PRIVMASK=255.255.255.0
RACCLUSTERNAME=cluprod01
DOMAINNAME=
DNSIP=""
NETCONFIG_DEV=/dev/xvdc
SCANNAME=cluprod01-scan
SCANIP=192.168.179.205
FLEX_CLUSTER=yes
FLEX_ASM=yes
ASMADAP=eth3
ASMMASK=255.255.255.0
NODE1ASMIP=192.168.58.210
NODE2ASMIP=192.168.58.212

Let's start from the OVM Manager server, going into the “Deploy Cluster Tool” directory and initiating the first stage of the deployment:

-bash-4.1# cd /root/deploycluster3
-bash-4.1# ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini
Oracle DB/RAC OneCommand (v3.0.5) for Oracle VM - deploy cluster - (c) 2011-2017 Oracle Corporation
 (com: 29100:v3.0.4, lib: 231275:v3.0.5, var: 1800:v3.0.5) - v2.6.5 - ovmm (x86_64)
Invoked as root at Mon Dec 18 14:19:48 2017  (size: 43900, mtime: Tue Feb 28 01:03:00 2017)
Using: ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini

INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:

INFO: Attempting to connect to Oracle VM Manager...

Oracle VM Manager Core WS-API Shell 3.4.2.1384 (20160914_1384)

Copyright (C) 2007, 2016 Oracle. All rights reserved.
See the LICENSE file for redistribution information.


Connecting to https://localhost:7002/...

INFO: Oracle VM Client CONNECTED to Oracle VM Manager (3.4.4.1709) UUID (0004fb00000100001f20e914973507f6)

INFO: Inspecting /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini for number of nodes defined....
INFO: Detected 2 nodes in: /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini

INFO: Located a total of (2) VMs;
      2 VMs with a simple name of: ['rac001', 'rac002']

INFO: Detected a RAC deployment...

INFO: Starting all (2) VMs...

INFO: VM with a simple name of "rac001" is in a Stopped state, attempting to start it.................................OK.

INFO: VM with a simple name of "rac002" is in a Stopped state, attempting to start it.................................OK.

INFO: Verifying that all (2) VMs are in Running state and pass prerequisite checks.....

INFO: Detected that all (2) VMs specified on command line have (9) common shared disks between them (ASM_MIN_DISKS=5)

INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
      netconfig.ini (Network setup): /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini
      params.ini (Overall build options): /root/deploycluster3/deployRacProd_SE_RAC_params.ini
      buildcluster: yes

INFO: Starting to send configuration details to all (2) VM(s).................................................................
INFO: Sending to VM with a simple name of "rac001"...........................................................................................................................................................................................................................................................
INFO: Sending to VM with a simple name of "rac002"..............................................................................................................................................................

INFO: Configuration details sent to (2) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (rac001)...

INFO: deploycluster.py completed successfully at 14:21:28 in 100.4 seconds (0h:01m:40s)
Logfile at: /root/deploycluster3/deploycluster23.log

 

At this stage we have 2 nodes with the required network configuration, like host names and IP addresses. The deployment script has also pushed the configuration files mentioned previously into the VMs.

So we connect to the first VM, rac001:

-bash-4.1# ssh root@192.168.179.210
Warning: Permanently added '192.168.179.210' (RSA) to the list of known hosts.
root@192.168.179.210's password:
Last login: Mon Dec 11 10:31:03 2017
[root@rac001 ~]#

Then we go to the deployment directory, which is part of the template, and we can execute the deployment:

[root@rac001 racovm]# ./buildcluster.sh -s
Invoking on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:43 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
   Step(s): buildcluster

INFO (node:rac001): Skipping confirmation, flag (-s) supplied on command line
2017-12-18 09:06:43:[buildcluster:Start:rac001] Building 12cR2 RAC Cluster

INFO (node:rac001): No database created due to (BUILD_RAC_DATABASE=no) & (BUILD_SI_DATABASE=no) setting in params.ini
2017-12-18 09:06:45:[setsshroot:Start:rac001] SSH Setup for the root user...
..
INFO (node:rac001): Passwordless SSH for the root user already configured, skipping...
2017-12-18 09:06:46:[setsshroot:Done :rac001] SSH Setup for the root user completed successfully
2017-12-18 09:06:46:[setsshroot:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:46:[copykit:Start:rac001] Copy kit files to remote nodes
Kit files: buildsingle.sh buildcluster.sh netconfig.sh netconfig.ini common.sh cleanlocal.sh diskconfig.sh racovm.sh ssh params.ini doall.sh  netconfig GetSystemTimeZone.class kitversion.txt mcast

INFO (node:rac001): Copied kit to remote node rac002 as root user
2017-12-18 09:06:48:[copykit:Done :rac001] Copy kit files to (1) remote nodes
2017-12-18 09:06:48:[copykit:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:06:48:[usrsgrps:Start:rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (create/modify mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)
....
INFO (node:rac001): Passwordless SSH for the Oracle user (oracle) already configured to all nodes; not re-setting users passwords
2017-12-18 09:06:55:[usrsgrps:Done :rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
2017-12-18 09:06:55:[usrsgrps:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Parameters loaded from params.ini...
  Users & Groups:
   Role Separation: no  Running as: root
   OInstall    : oinstall       GID: 54321
   RAC Owner   : oracle         UID: 54321
    DB OSDBA   : dba            GID: 54322
    DB OSOPER  :                GID:
    DB OSBACKUP: dba            GID:
    DB OSDGDBA : dba            GID:
    DB OSKMDBA : dba            GID:
    DB OSRAC   : dba            GID:
   Grid Owner  : oracle         UID: 54321
    GI OSDBA   : dba            GID: 54322
    GI OSOPER  :                GID:
    GI OSASM   : dba            GID: 54322
  Software Locations:
   Operating Mode: RAC                   Database Edition: STD
   Flex Cluster: yes      Flex ASM: yes
   Central Inventory: /u01/app/oraInventory
   Grid Home: /u01/app/12.2.0/grid  (Detected: 12cR2, Enterprise Edition)
   Grid Name: OraGrid12c
   RAC Home : /u01/app/oracle/product/12.2.0/dbhome_1  (Detected: 12cR2, Enterprise Edition)
   RAC Name : OraRAC12c
   RAC Base : /u01/app/oracle
   DB/RAC OVM kit : /u01/racovm
   Attach RAC Home: yes   GI Home: yes  Relink Homes: no   On OS Change: yes
   Addnode Copy: no
  Database & Storage:
   Database : no         DBName: ORCL  SIDName: ORCL  DG: DGDATA   Listener Port: 1521
   Policy Managed: no
   DBExpress: no         DBExpress port: 5500
   Grid Management DB: no   GIMR diskgroup name:
   Separate GIMR diskgroup: no
   Cluster Storage: ASM
   ASM Discovery String: /dev/xvd[k-s]1
   ASM diskgroup: dgocrvoting      Redundancy: EXTERNAL   Allocation Unit (au_size): 4
      Disks     : /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1 /dev/xvdo1
   Recovery DG  : DGFRA            Redundancy: EXTERNAL
      Disks     : /dev/xvdr1 /dev/xvds1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Extra DG #1  : DGDATA           Redundancy: EXTERNAL
      Disks     : /dev/xvdp1 /dev/xvdq1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Persistent disknames: yes  Stamp: yes  Partition: yes  Align: yes  GPT: no Permissions: 660
   ACFS Filesystem: no

Network information loaded from netconfig.ini...
  Default Gateway: 192.168.179.1  Domain:
  DNS:
  Public NIC : eth1  Mask: 255.255.255.0
  Private NIC: eth2  Mask: 255.255.255.0
  ASM NIC    : eth3  Mask: 255.255.0.0
  SCAN Name: cluprod01-scan  SCAN IP: 192.168.179.205  Scan Port: 1521
  Cluster Name: cluprod01
  Nodes & IP Addresses (2 of 2 nodes)
  Node  1: PubIP : 192.168.179.210 PubName : rac001
     (Hub) VIPIP : 192.168.179.211 VIPName : rac001-vip
           PrivIP: 192.168.3.210   PrivName: rac001-priv
           ASMIP : 192.168.58.210
  Node  2: PubIP : 192.168.179.212 PubName : rac002
     (Hub) VIPIP : 192.168.179.213 VIPName : rac002-vip
           PrivIP: 192.168.3.212   PrivName: rac002-priv
           ASMIP : 192.168.58.212
Running on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:55 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
2017-12-18 09:06:56:[printparams:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:56:[setsshora:Start:rac001] SSH Setup for the Oracle user(s)...
..
INFO (node:rac001): Passwordless SSH for the oracle user already configured, skipping...
2017-12-18 09:06:57:[setsshora:Done :rac001] SSH Setup for the oracle user completed successfully
2017-12-18 09:06:57:[setsshora:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:57:[diskconfig:Start:rac001] Storage Setup
2017-12-18 09:06:58:[diskconfig:Start:rac001] Running in configuration mode (local & remote nodes)
.
2017-12-18 09:06:58:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq............................OK
2017-12-18 09:07:02:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1.
2017-12-18 09:07:02:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with stamping (existence check)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo......../dev/xvdr./dev/xvds...../dev/xvdp./dev/xvdq........OK
2017-12-18 09:07:23:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002....................OK
2017-12-18 09:07:52:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:07:54:[diskconfig:Disks:rac001] Setting disk permissions for next startup (all nodes)...
.....OK
2017-12-18 09:07:56:[diskconfig:ClearPartTables:rac001] Clearing partition tables...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:03:[diskconfig:CreatePartitions:rac001] Creating 'msdos' partitions on disks (as needed)...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:13:[diskconfig:CleanPartitions:rac001] Cleaning new partitions...
./dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1...OK
2017-12-18 09:08:13:[diskconfig:Done :rac001] Done configuring and checking disks on all nodes
2017-12-18 09:08:13:[diskconfig:Done :rac001] Storage Setup
2017-12-18 09:08:13:[diskconfig:Time :rac001] Completed successfully in 76 seconds (0h:01m:16s)
2017-12-18 09:08:15:[clearremotelogs:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:08:15:[check:Start:rac001] Pre-install checks on all nodes
..

INFO (node:rac001): Check found that all (2) nodes have the following (25586399 26609817 26609966) patches applied to the Grid Infrastructure Home (/u01/app/12.2.0/grid), the following (25811364 26609817 26609966) patches applied to the RAC Home (/u01/app/oracle/product/12.2.0/dbhome_1)
.2017-12-18 09:08:20:[checklocal:Start:rac001] Pre-install checks
2017-12-18 09:08:21:[checklocal:Start:rac002] Pre-install checks
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (check only mode)..
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac001): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac001): Running disk checks on all nodes, persistent disk names (/u01/racovm/diskconfig.sh -n 2 -D 1 -s)
2017-12-18 09:08:23:[diskconfig:Start:rac001] Running in dry-run mode (local & remote nodes, level 1), no stamping, partitioning or OS configuration files will be modified...(assuming persistent disk names)

INFO (node:rac002): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac002): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac002): Running network checks...
......
2017-12-18 09:08:24:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.............................OK
2017-12-18 09:08:29:[diskconfig:Disks:rac001] Checking existence of automatically renamed disks (localhost)...
/dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1.
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking permissions of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1..
2017-12-18 09:08:31:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with NO stamping (existence check)...
rac002........OK
2017-12-18 09:08:37:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002........
INFO (node:rac001): Waiting for all checklocal operations to complete on all nodes (At 09:08:50, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 13222 13365)...
...............
INFO (node:rac002): Check completed successfully
2017-12-18 09:09:07:[checklocal:Done :rac002] Pre-install checks
2017-12-18 09:09:07:[checklocal:Time :rac002] Completed successfully in 46 seconds (0h:00m:46s)
.......OK
2017-12-18 09:09:11:[diskconfig:Remote:rac001] Checking existence of automatically renamed disks (remote nodes)...
rac002...
2017-12-18 09:09:17:[diskconfig:Remote:rac001] Checking permissions of disks (remote nodes)...
rac002....
2017-12-18 09:09:21:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:09:26:[diskconfig:Done :rac001] Dry-run (local & remote, level 1) completed successfully, most likely normal run will too
..
INFO (node:rac001): Running multicast check on 230.0.1.0 port 42050 for 2 nodes...

INFO (node:rac001): All nodes can multicast to all other nodes on interface eth2 multicast address 230.0.1.0 port 42050...

INFO (node:rac001): Running network checks...
....................
INFO (node:rac001): Check completed successfully
2017-12-18 09:10:11:[checklocal:Done :rac001] Pre-install checks
2017-12-18 09:10:11:[checklocal:Time :rac001] Completed successfully in 111 seconds (0h:01m:51s)

INFO (node:rac001): All checklocal operations completed on all (2) node(s) at: 09:10:12
2017-12-18 09:10:12:[check:Done :rac001] Pre-install checks on all nodes
2017-12-18 09:10:13:[check:Time :rac001] Completed successfully in 117 seconds (0h:01m:57s)
2017-12-18 09:10:13:[creategrid:Start:rac001] Creating 12cR2 Grid Infrastructure
..
2017-12-18 09:10:16:[preparelocal:Start:rac001] Preparing node for Oracle installation

INFO (node:rac001): Resetting permissions on Oracle Homes... May take a while...
2017-12-18 09:10:17:[preparelocal:Start:rac002] Preparing node for Oracle installation

INFO (node:rac002): Resetting permissions on Oracle Homes... May take a while...

INFO (node:rac001): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:27:[preparelocal:Done :rac001] Preparing node for Oracle installation
2017-12-18 09:10:27:[preparelocal:Time :rac001] Completed successfully in 11 seconds (0h:00m:11s)

INFO (node:rac002): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:31:[preparelocal:Done :rac002] Preparing node for Oracle installation
2017-12-18 09:10:31:[preparelocal:Time :rac002] Completed successfully in 14 seconds (0h:00m:14s)
2017-12-18 09:10:32:[prepare:Time :rac001] Completed successfully in 19 seconds (0h:00m:19s)
....
2017-12-18 09:10:40:[giclonelocal:Start:rac001] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:41:[giattachlocal:Start:rac001] Attaching Grid Infratructure Home on node rac001

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
2017-12-18 09:10:41:[giclonelocal:Start:rac002] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:42:[giattachlocal:Start:rac002] Attaching Grid Infratructure Home on node rac002

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc

INFO (node:rac001): Waiting for all giclonelocal operations to complete on all nodes (At 09:11:06, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 18135 18141)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac001): Running on: rac001 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:08:[giattachlocal:Done :rac001] Attaching Grid Infratructure Home on node rac001
2017-12-18 09:11:08:[giattachlocal:Time :rac001] Completed successfully in 27 seconds (0h:00m:27s)
2017-12-18 09:11:09:[girootlocal:Start:rac001] Running root.sh on Grid Infrastructure home

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac001_2017-12-18_09-11-09-287116939.log for the output of root script
2017-12-18 09:11:09:[girootlocal:Done :rac001] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:09:[girootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac002): Running on: rac002 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:10:[giattachlocal:Done :rac002] Attaching Grid Infratructure Home on node rac002
2017-12-18 09:11:10:[giattachlocal:Time :rac002] Completed successfully in 28 seconds (0h:00m:28s)
2017-12-18 09:11:10:[girootlocal:Start:rac002] Running root.sh on Grid Infrastructure home

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac002_2017-12-18_09-11-10-934545273.log for the output of root script
2017-12-18 09:11:11:[girootlocal:Done :rac002] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:11:[girootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
2017-12-18 09:11:11:[giclonelocal:Done :rac001] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:11:[giclonelocal:Time :rac001] Completed successfully in 33 seconds (0h:00m:33s)
2017-12-18 09:11:13:[giclonelocal:Done :rac002] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:13:[giclonelocal:Time :rac002] Completed successfully in 34 seconds (0h:00m:34s)

INFO (node:rac001): All giclonelocal operations completed on all (2) node(s) at: 09:11:14
2017-12-18 09:11:14:[giclone:Time :rac001] Completed successfully in 42 seconds (0h:00m:42s)
....
2017-12-18 09:11:18:[girootcrslocal:Start:rac001] Running rootcrs.pl

INFO (node:rac001): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_<timestamp>.log

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_2017-12-18_09-11-19AM.log
2017/12/18 09:11:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:11:30 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/12/18 09:12:19 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:12:25 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:12:28 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:12:40 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:14:06 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/12/18 09:14:27 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:14:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/12/18 09:15:48 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/12/18 09:16:56 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:18:14 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:18:23 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2676: Start of 'ora.driver.afd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded

Disk label(s) created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.
Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.


2017/12/18 09:24:06 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e0321712cd544fa6bf438b5849f11155.
Successfully replaced voting disk group with +dgocrvoting.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e0321712cd544fa6bf438b5849f11155 (AFD:DGOCRVOTING1) [DGOCRVOTING]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac001'
CRS-2677: Stop of 'ora.crsd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac001'
CRS-2673: Attempting to stop 'ora.crf' on 'rac001'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac001'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac001'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac001'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac001' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac001'
CRS-2677: Stop of 'ora.crf' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac001'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac001'
CRS-2677: Stop of 'ora.evmd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac001'
CRS-2677: Stop of 'ora.cssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac001'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac001'
CRS-2677: Stop of 'ora.driver.afd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac001' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:26:52 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac001'
CRS-2676: Start of 'ora.ctssd' on 'rac001' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac001
CRS-6016: Resource auto-start has completed for server rac001
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:30:15 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:30:15 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.DGOCRVOTING.dg' on 'rac001'
CRS-2676: Start of 'ora.DGOCRVOTING.dg' on 'rac001' succeeded
2017/12/18 09:32:38 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/12/18 09:34:22 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:34:26:[girootcrslocal:Done :rac001] Running rootcrs.pl
2017-12-18 09:34:27:[girootcrslocal:Time :rac001] Completed successfully in 1388 seconds (0h:23m:08s)
2017-12-18 09:34:49:[girootcrslocal:Start:rac002] Running rootcrs.pl

INFO (node:rac002): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_<timestamp>.log

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_2017-12-18_09-34-50AM.log
2017/12/18 09:35:03 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:35:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:35:18, elapsed: 0h:00m:31s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:35:44 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:35:44 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
.2017/12/18 09:35:51 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:36:02 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
..2017/12/18 09:36:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:37:01 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:37:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:37:16 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:37:17 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
.2017/12/18 09:37:23 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:37:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:38:21, elapsed: 0h:03m:34s, 1) node remaining, all background pid(s): 7263)...
.2017/12/18 09:39:00 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
...2017/12/18 09:40:24 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:41:25, elapsed: 0h:06m:38s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:41:40 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:41:42 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
.CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac002'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac002' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:42:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
.....
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:44:27, elapsed: 0h:09m:40s, 1) node remaining, all background pid(s): 7263)...
.CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac002'
CRS-2672: Attempting to start 'ora.evmd' on 'rac002'
CRS-2676: Start of 'ora.mdnsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac002'
CRS-2676: Start of 'ora.gpnpd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac002'
CRS-2676: Start of 'ora.gipcd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac002'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac002'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac002'
CRS-2676: Start of 'ora.diskmon' on 'rac002' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac002'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac002'
CRS-2676: Start of 'ora.ctssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac002'
CRS-2676: Start of 'ora.crf' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac002'
CRS-2676: Start of 'ora.crsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6017: Processing resource auto-start for servers: rac002
CRS-2672: Attempting to start 'ora.net1.network' on 'rac002'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002'
CRS-2676: Start of 'ora.net1.network' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac002'
CRS-2676: Start of 'ora.ons' on 'rac002' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6016: Resource auto-start has completed for server rac002
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:45:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:45:22 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
.2017/12/18 09:45:49 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
..2017/12/18 09:46:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:46:40:[girootcrslocal:Done :rac002] Running rootcrs.pl
2017-12-18 09:46:40:[girootcrslocal:Time :rac002] Completed successfully in 711 seconds (0h:11m:51s)

INFO (node:rac001): All girootcrslocal operations completed on all (2) node(s) at: 09:46:42
2017-12-18 09:46:42:[girootcrs:Time :rac001] Completed successfully in 2128 seconds (0h:35m:28s)
2017-12-18 09:46:42:[giassist:Start:rac001] Running RAC Home assistants (netca, asmca)

INFO (node:rac001): Creating the node Listener using NETCA... (09:46:44)

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/netca /orahome /u01/app/12.2.0/grid /instype typical /inscomp client,oraclenet,javavm,server,ano /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/12.2.0/grid/network/install/netca_typ.rsp /silent /orahnam OraGrid12c

Parsing command line arguments:
    Parameter "orahome" = /u01/app/12.2.0/grid
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server,ano
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/12.2.0/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "orahnam" = OraGrid12c
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/asmca -silent -postConfigureASM

Post configuration completed successfully


INFO (node:rac001): Setting initial diskgroup name dgocrvoting's attributes as defined in RACASMGROUP_ATTRIBUTES ('compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0')...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:38: alter diskgroup dgocrvoting set attribute 'compatible.asm'='12.2.0.1.0';

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:40: alter diskgroup dgocrvoting set attribute  'compatible.rdbms'='12.2.0.1.0';

Diskgroup altered.
2017-12-18 09:48:44:[creatediskgroups:Start:rac001] Creating additional diskgroups

INFO (node:rac001): Creating Recovery diskgroup (DGFRA) at: 09:50:31...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGFRA" EXTERNAL redundancy disk 'AFD:RECO1','AFD:RECO2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:09.34

INFO (node:rac001): Creating Extra diskgroup (DGDATA) at: 09:52:22...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGDATA" EXTERNAL redundancy disk 'AFD:DGDATA1','AFD:DGDATA2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:10.48

INFO (node:rac001): Successfully created the following ASM diskgroups (DGFRA DGDATA), setting them for automount on startup and attempting to mount on all nodes...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:52:35: alter system set asm_diskgroups='DGDATA', 'DGFRA';

System altered.

INFO (node:rac001): Successfully set the ASM diskgroups (DGDATA DGFRA) to automount on startup

INFO (node:rac001): Attempting to mount diskgroups on nodes running ASM: rac001 rac002

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:38: alter diskgroup "DGFRA" mount;

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:39: alter diskgroup "DGDATA" mount;

Diskgroup altered.

INFO (node:rac001): Successfully mounted the created (DGFRA DGDATA) ASM diskgroups on all nodes running an ASM instance (rac001 rac002)
2017-12-18 09:52:41:[creatediskgroups:Done :rac001] Creating additional diskgroups
2017-12-18 09:52:41:[creatediskgroups:Time :rac001] Completed successfully in 237 seconds (0h:03m:57s)

WARNING (node:rac001): Management Database not created due to CLONE_GRID_MANAGEMENT_DB=no. Note that starting with release 12.1.0.2 and higher, the Management Database (GIMR) is required for a fully supported environment
2017-12-18 09:52:41:[giassist:Done :rac001] Running RAC Home assistants (netca, asmca)
2017-12-18 09:52:41:[giassist:Time :rac001] Completed successfully in 359 seconds (0h:05m:59s)
2017-12-18 09:52:41:[creategrid:Done :rac001] Creating 12cR2 Grid Infrastructure
2017-12-18 09:52:41:[creategrid:Time :rac001] Completed successfully in 2548 seconds (0h:42m:28s)

INFO (node:rac001): Skipping CVU post crsinst checks, due to CLONE_SKIP_CVU_POSTCRS=yes
2017-12-18 09:52:41:[cvupostcrs:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 09:52:41:[racclone:Start:rac001] Cloning 12cR2 RAC Home on all nodes
..

INFO (node:rac001): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:04:[racclonelocal:Start:rac001] Cloning 12cR2 RAC Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac001"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6740 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4059 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-12AM. Please wait ...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:53:14, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 12271 12277)...

INFO (node:rac002): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:15:[racclonelocal:Start:rac002] Cloning 12cR2 RAC Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac002"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6757 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4082 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-42AM. Please wait ....You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.
.
Copy files successful.

Link binaries in progress.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log
...................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.
...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:56:19, elapsed: 0h:03m:35s, 2) nodes remaining, all background pid(s): 12271 12277)...
..
Link binaries successful.

Setup files in progress.
.
Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac001]


..................................................   100% Done.

INFO (node:rac001): Relinking the oracle binary to disable Database Enterprise Edition options (09:58:45)...
..
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:59:25, elapsed: 0h:06m:42s, 2) nodes remaining, all background pid(s): 12271 12277)...
..2017-12-18 10:00:28:[racrootlocal:Start:rac001] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac001_2017-12-18_10-00-28-680953042.log for the output of root script
2017-12-18 10:00:29:[racrootlocal:Done :rac001] Running root.sh on RAC Home
2017-12-18 10:00:29:[racrootlocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:00:29:[racclonelocal:Done :rac001] Cloning 12cR2 RAC Home
2017-12-18 10:00:29:[racclonelocal:Time :rac001] Completed successfully in 464 seconds (0h:07m:44s)

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac002]


..................................................   100% Done.

INFO (node:rac002): Relinking the oracle binary to disable Database Enterprise Edition options (10:01:17)...
..2017-12-18 10:02:21:[racrootlocal:Start:rac002] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac002_2017-12-18_10-02-21-660060386.log for the output of root script
2017-12-18 10:02:22:[racrootlocal:Done :rac002] Running root.sh on RAC Home
2017-12-18 10:02:22:[racrootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:02:22:[racclonelocal:Done :rac002] Cloning 12cR2 RAC Home
2017-12-18 10:02:22:[racclonelocal:Time :rac002] Completed successfully in 576 seconds (0h:09m:36s)

INFO (node:rac001): All racclonelocal operations completed on all (2) node(s) at: 10:02:24
2017-12-18 10:02:24:[racclone:Done :rac001] Cloning 12cR2 RAC Home on all nodes
2017-12-18 10:02:24:[racclone:Time :rac001] Completed successfully in 583 seconds (0h:09m:43s)

INFO (node:rac002): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:28:[rmsshrootlocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:31:[rmsshrootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 10:02:31:[rmsshroot:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Current cluster state (10:02:31)...

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/olsnodes -n -s -t
rac001  1       Active  Hub     Unpinned
rac002  2       Active  Hub     Unpinned
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
Oracle Clusterware version on node [rac001] is [12.2.0.1.0]
CRS Administrator List: oracle root
Cluster is running in "flex" mode
CRS-41008: Cluster class is 'Standalone Cluster'
ASM Flex mode enabled: ASM instance count: 3
ASM is running on rac001,rac002

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac001                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac001                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

INFO (node:rac001): For an explanation on resources in OFFLINE state, see Note:1068835.1
2017-12-18 10:02:40:[clusterstate:Time :rac001] Completed successfully in 9 seconds (0h:00m:09s)
2017-12-18 10:02:40:[buildcluster:Done :rac001] Building 12cR2 RAC Cluster
2017-12-18 10:02:40:[buildcluster:Time :rac001] Completed successfully in 3357 seconds (0h:55m:57s)

INFO (node:rac001): This entire build was logged in logfile: /u01/racovm/buildcluster3.log

 

We are now going to multiplex both the cluster heartbeat and the ASM network, because the current configuration is not HA. We also only need 2 Flex ASM instances for this 2-node RAC:

[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl modify asm -count 2

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is disabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

/u01/app/12.2.0/grid/bin/srvctl add listener -asmlistener -l ASMNET2LSNR_ASM -subnet 192.168.58.0
/u01/app/12.2.0/grid/bin/srvctl start listener -l ASMNET2LSNR_ASM


[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: ASMNET2LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.58.0
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:



[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect
eth3  192.168.58.0   global  asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth2/192.168.3.0:cluster_interconnect,asm
[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth3/192.168.58.0:cluster_interconnect,asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect,asm
eth3  192.168.58.0   global  cluster_interconnect,asm

 

Let’s check the overall cluster state:

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac002                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac002                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac002                   STABLE
--------------------------------------------------------------------------------

The cluster is now up, functional and resilient to network failures. We could have chosen to create a database as part of the deployment process, since the “Deploy Cluster Tool” allows it. Nevertheless, for this demonstration, I chose to run the database creation manually on top of this deployment.
So we add a new database to the cluster:

 

[oracle@rac001 ~]$ dbca -silent -ignorePreReqs \
> -createDatabase \
> -gdbName app01 \
> -nodelist rac001,rac002 \
> -templateName General_Purpose.dbc \
> -characterSet AL32UTF8 \
> -createAsContainerDatabase false \
> -databaseConfigType RAC \
> -databaseType MULTIPURPOSE \
> -dvConfiguration false \
> -emConfiguration NONE \
> -enableArchive true \
> -memoryMgmtType AUTO_SGA \
> -memoryPercentage 75 \
> -nationalCharacterSet AL16UTF16 \
> -adminManaged \
> -storageType ASM \
> -diskGroupName DGDATA \
> -recoveryGroupName DGFRA \
> -sysPassword 0rAcle-Sys \
> -systemPassword 0rAcle-System \
> -useOMF true

Copying database files
1% complete
2% complete
15% complete
27% complete
Creating and starting Oracle instance
29% complete
32% complete
36% complete
40% complete
41% complete
43% complete
45% complete
Creating cluster database views
47% complete
63% complete
Completing Database Creation
64% complete
65% complete
68% complete
71% complete
72% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/app01/app01.log" for further details.

[oracle@rac001 ~]$ export ORACLE_SID=app011
[oracle@rac001 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 20 08:54:03 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from gv$instance ;

HOST_NAME
----------------------------------------------------------------
rac001
rac002
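
As an additional check (a hedged example: the app012 instance name on rac002 is assumed from the admin-managed creation above, only app011 appears explicitly in this post), srvctl should report one running instance per node:

[oracle@rac001 ~]$ /u01/app/oracle/product/12.2.0/dbhome_1/bin/srvctl status database -d app01
Instance app011 is running on node rac001
Instance app012 is running on node rac002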

 

We now have a RAC 12.2 Standard Edition cluster ready to run our critical applications, with the latest patch level for both OS and Oracle, and following all best practices and requirements.

So with this post we have a demonstration of how a good underlying OVM infrastructure makes your life simpler when deploying various kinds of Oracle database infrastructure. The same automation can easily be applied to other technologies such as PostgreSQL, “Big Data” stacks like Hortonworks, or any other application.

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 

This article, Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02), first appeared on Blog dbi services.

Oracle docker image from docker store

Fri, 2017-12-15 00:54

Did you notice that each time you want to download the Oracle binaries from the OTN website (for non-production usage) you have to click on “Accept License Agreement”? This is because you have to agree to the OTN license agreement, and that makes it difficult to distribute an automated way to build an environment.
The only exception I have seen was the sealed envelopes provided for the RACattack:

Software is provided for @ludodba #racattack thanks to @OracleRACpm pic.twitter.com/1GQTVVn2BS

— Franck Pachot (@FranckPachot) September 17, 2015

In both cases, there must be a physical action involved to agree legally with the license terms.

Docker

There is now a new possibility: you click on ‘Agree’ only once, in the Docker Store, and can then download (aka ‘pull’) a container image with the binary distribution. You just go to https://store.docker.com/images/oracle-database-enterprise-edition, log in (you can create an account in two minutes with a username, e-mail address and password) and accept the license agreement:
[Screenshot: accepting the license agreement for the Oracle Database image on the Docker Store]

Once this is done, you will be able to pull the Oracle containers from the command line, after a ‘docker login’.

It may not be easy to use Docker on your laptop, especially if you are on Windows and don’t want to enable Hyper-V. Here is how I run it on a VirtualBox VM running Oracle Enterprise Linux. You may wonder what the point is of running containers within a VM, but I think you don’t have a choice here. The docker processes run within the host, which means you need an OS that is supported (and Oracle Enterprise Linux is the best fit to run Oracle Database). It also means that you need to adapt the kernel parameters, shm, limits, create the oracle user, etc. Better to do that in a VM dedicated to Oracle Database.
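
To illustrate what that VM preparation means (a rough sketch with illustrative values only, not Oracle’s official prerequisites; adapt them to your database footprint, and the oracle user/groups only matter if you also install Oracle directly in the VM):

[root@localhost ~]# cat > /etc/sysctl.d/97-oracle.conf <<'EOF'
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
EOF
[root@localhost ~]# sysctl --system
[root@localhost ~]# cat > /etc/security/limits.d/97-oracle.conf <<'EOF'
oracle  soft  nofile  1024
oracle  hard  nofile  65536
oracle  soft  nproc   16384
oracle  hard  nproc   16384
EOF
[root@localhost ~]# groupadd oinstall && groupadd dba
[root@localhost ~]# useradd -g oinstall -G dba oracle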

Then you may wonder what the point is of running in a container, given that you have to do all those installation prerequisites anyway, and installing Oracle is just two more commands (runInstaller and dbca). Well, it seems that the main reason is that it’s cool. In my opinion, any database user (from developer to administrator) should have installed an Oracle Database at least once with the DBCA GUI, because it is a good way to understand what a database, a listener, a datafile, the character set,… actually are. But let’s be cool and pull it instead of installing it.

Docker evolves quickly, so I remove any old releases just in case:

[root@localhost oracle]# yum -y remove docker-ce docker docker-common docker-selinux docker-engine
Loaded plugins: ulninfo
No Match for argument: docker-ce
No Match for argument: docker
No Match for argument: docker-common
No Match for argument: docker-selinux
No Match for argument: docker-engine
No Packages marked for removal

I’m on the latest OEL7:

[oracle@localhost ~]$ cat /etc/oracle-release
Oracle Linux Server release 7.4
 
[root@localhost oracle]# yum upgrade
Loaded plugins: ulninfo
No packages marked for update
 
[oracle@localhost ~]$ uname -a
Linux localhost.localdomain 4.1.12-103.10.1.el7uek.x86_64 #2 SMP Tue Dec 5 15:42:37 PST 2017 x86_64 x86_64 x86_64 GNU/Linux

I don’t want Docker to fill my / filesystem, and images containing an Oracle Database are big, so I create a link from /var/lib/docker to /u01:

[root@localhost oracle]# mkdir /u01/docker
[root@localhost oracle]# ln -s /u01/docker /var/lib/docker
[root@localhost oracle]# ls -ld /var/lib/docker
lrwxrwxrwx. 1 root root 11 Dec 10 15:48 /var/lib/docker -> /u01/docker
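
Note that, as an alternative to the symlink (an assumption on my side that you prefer daemon configuration; the data-root option exists in recent Docker CE releases such as the 17.09 installed below), you can point the daemon directly at /u01 once Docker is installed:

[root@localhost oracle]# mkdir -p /etc/docker
[root@localhost oracle]# cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/u01/docker"
}
EOF
[root@localhost oracle]# systemctl restart docker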

Installing Docker (Community Edition):

[root@localhost oracle]# yum -y install docker-ce
Loaded plugins: ulninfo
docker-ce-stable | 2.9 kB 00:00:00
ol7_UEKR4 | 1.2 kB 00:00:00
ol7_addons | 1.2 kB 00:00:00
ol7_latest | 1.4 kB 00:00:00
docker-ce-stable/x86_64/primary_db | 10 kB 00:00:00
(1/7): ol7_UEKR4/x86_64/updateinfo | 135 kB 00:00:00
(2/7): ol7_addons/x86_64/updateinfo | 40 kB 00:00:00
(3/7): ol7_addons/x86_64/primary | 78 kB 00:00:00
(4/7): ol7_latest/x86_64/group | 681 kB 00:00:00
(5/7): ol7_latest/x86_64/updateinfo | 1.6 MB 00:00:02
(6/7): ol7_UEKR4/x86_64/primary | 25 MB 00:00:25
(7/7): ol7_latest/x86_64/primary | 31 MB 00:00:26
ol7_UEKR4 506/506
ol7_addons 251/251
ol7_latest 23517/23517
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:17.09.1.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-17.09.1.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.21-1.el7 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
docker-ce x86_64 17.09.1.ce-1.el7.centos docker-ce-stable 21 M
Installing for dependencies:
container-selinux noarch 2:2.21-1.el7 ol7_addons 28 k
 
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
 
Total download size: 21 M
Installed size: 76 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): container-selinux-2.21-1.el7.noarch.rpm | 28 kB 00:00:01
(2/2): docker-ce-17.09.1.ce-1.el7.centos.x86_64.rpm | 21 MB 00:00:07
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 2.7 MB/s | 21 MB 00:00:07
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:container-selinux-2.21-1.el7.noarch 1/2
Installing : docker-ce-17.09.1.ce-1.el7.centos.x86_64 2/2
Verifying : docker-ce-17.09.1.ce-1.el7.centos.x86_64 1/2
Verifying : 2:container-selinux-2.21-1.el7.noarch 2/2
 
Installed:
docker-ce.x86_64 0:17.09.1.ce-1.el7.centos
 
Dependency Installed:
container-selinux.noarch 2:2.21-1.el7
 
Complete!

Starting Docker:

[root@localhost oracle]# systemctl start docker

I have to log in with my credentials. This login is what ties the connection to the agreement I accepted online:

[root@localhost oracle]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: franck.pachot
Password:
Login Succeeded

Then I pull the container image provided by Oracle. The Oracle software is quite large when it includes all features, so I choose the ‘slim’ one:

[root@localhost oracle]# docker pull store/oracle/database-enterprise:12.2.0.1-slim
12.2.0.1-slim: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for store/oracle/database-enterprise:12.2.0.1-slim
[root@localhost oracle]#

Here is the image:

[root@localhost oracle]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 3 months ago 2.08GB

To run a database, you just have to run the container. In order to connect to it, you need to forward port 1521:

[root@localhost oracle]# docker run -p 0.0.0.0:9001:1521 store/oracle/database-enterprise:12.2.0.1-slim
Setup Oracle Database
Oracle Database 12.2.0.1 Setup
Sun Dec 10 19:09:14 UTC 2017
 
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
 
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log

OK, this takes some time. The ORACLE_HOME is unzipped, then the database is created…

I’ll not describe it further; just go to the Setup Instructions on https://store.docker.com/images/oracle-database-enterprise-edition, where everything is clearly explained.

In my opinion, it is good to try it and think about it. Docker was created to containerize an application with its process(es) and memory. A database is a different beast. The database is persistent, so you should store it in an external volume, because you don’t want to create a new empty database each time you start the container. There are also the configuration files, which should be persistent: should they belong to the container or be external? And the logs under ORACLE_BASE/diag: do you want to keep them, purge them, or just let the image grow, which can happen very quickly if you fill the Recovery Area? Finally, do you want to run a container inside a Virtual Machine, this container running Oracle Database 12c, which is a Container Database (CDB), itself containing containers (PDBs)? Personally, I’m very skeptical about using Docker for an Oracle Database. But I also gave it an early try some years ago (read here). And you may see things differently in your context. The very good point is that Oracle now provides an easy way to test whether it helps you run a database quickly or not, with a way to get the binaries without an ‘accept’ click for each download.
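
As a minimal sketch of that external-volume idea (assumptions: the container name orcl12c is arbitrary, /ORCL as the in-container datafile location must be verified against the image’s Setup Instructions, and ownership/SELinux labels on the host directory may need extra care):

[root@localhost oracle]# mkdir -p /u01/oradata
[root@localhost oracle]# docker run -d --name orcl12c \
    -p 0.0.0.0:9001:1521 \
    -v /u01/oradata:/ORCL \
    store/oracle/database-enterprise:12.2.0.1-slim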

 

This article, Oracle docker image from docker store, first appeared on Blog dbi services.

Does pg_upgrade in check mode raise a failure when the old cluster is running?

Tue, 2017-12-12 14:31

Today I had the pleasure of having Bruce Momjian in my session about PostgreSQL Upgrade Best Practices at the IT Tage 2017 in Frankfurt. While browsing through the various options you have for upgrading, there was one slide where I claimed that the old cluster needs to be down before you run pg_upgrade in check mode, as you will hit a (non-critical) failure message otherwise. Let’s see if that really is the case or whether I did something wrong…

To start with, let’s initialize a new 9.6.2 cluster:

postgres@pgbox:/home/postgres/ [PG962] initdb --version
initdb (PostgreSQL) 9.6.2 dbi services build
postgres@pgbox:/home/postgres/ [PG962] initdb -D /tmp/aaa
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/aaa ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/aaa -l logfile start

Start that:

postgres@pgbox:/home/postgres/ [PG962] pg_ctl -D /tmp/aaa -l logfile start
postgres@pgbox:/home/postgres/ [PG962] psql -c "select version()" postgres
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.2 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

Time: 0.861 ms

To be able to upgrade we’ll need a new cluster, so:

postgres@pgbox:/home/postgres/ [PG10] initdb --version
initdb (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] initdb -D /tmp/bbb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/bbb ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/bbb -l logfile start

We’ll not start that one but will just run pg_upgrade in check mode from the new binaries:

postgres@pgbox:/home/postgres/ [PG10] pg_upgrade --version
pg_upgrade (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] export PGDATAOLD=/tmp/aaa
postgres@pgbox:/home/postgres/ [PG10] export PGDATANEW=/tmp/bbb
postgres@pgbox:/home/postgres/ [PG10] export PGBINOLD=/u01/app/postgres/product/96/db_2/bin/
postgres@pgbox:/home/postgres/ [PG10] export PGBINNEW=/u01/app/postgres/product/10/db_0/bin/
postgres@pgbox:/home/postgres/ [PG10] pg_upgrade -c

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
...

… and here we go. From the log:

postgres@pgbox:/home/postgres/ [PG10] cat pg_upgrade_server.log

-----------------------------------------------------------------
  pg_upgrade run on Tue Dec 12 21:23:43 2017
-----------------------------------------------------------------

command: "/u01/app/postgres/product/96/db_2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/tmp/aaa" -o "-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700" start >> "pg_upgrade_server.log" 2>&1
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  lock file "postmaster.pid" already exists
HINT:  Is another postmaster (PID 2194) running in data directory "/tmp/aaa"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.
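The workaround is straightforward: stop the old cluster before running the check. A quick sketch using the same paths and environment variables as exported above (hedged, adjust to your setup):

# stop the old 9.6 cluster (PGDATAOLD/PGDATANEW etc. are still exported)
/u01/app/postgres/product/96/db_2/bin/pg_ctl -D /tmp/aaa stop -m fast
# the check run should now report "*Clusters are compatible*"
/u01/app/postgres/product/10/db_0/bin/pg_upgrade -c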

So, @Bruce: Something to improve :)
Again: It was a pleasure to have you there and I hope we’ll meet again at one of the conferences in 2018.

 


#UKOUG_TECH17

Fri, 2017-12-08 15:40
Award

I've received an award for an article I wrote last year, the Most Read Oracle Scene Article in 2016 – Technology. I like to write for Oracle Scene magazine. It is rare today to have a magazine that is both available in a paper version and also publicly available on the web. And I must say that, as an author, all the people behind it are doing great work. Thanks to them and thanks for the award. Seeing that what I write helps is the motivation to keep doing so.

The article is: Is your AWR/Statspack Report Relevant.

This is the occasion to link to other articles I’ve written for the UKOUG Magazine. Statspack, because not everybody has Enterprise Edition with optional Diagnostic Pack: Improving Statspack Experience. One on the parameter I hate the most: CBO Choice between Index & Full Scan: the Good, the Bad & the Ugly parameters. Another about the statistics that should replace this awful parameter: Demystifying WORKLOAD System Statistics Gathering.

Two interesting features in 12c: 12c Online Statistics Gathering & Session Private Statistics. A big change that happened in 12.2.0.2 concerning availability: I/O Error on Datafile: Instance Crash or Datafile Offline?. My SLOB performance tests when the ODA Lite came with NVMe SSDs: Performance for All Editions on ODA X6-2S/M. And finally, the article on the great Oracle features for physical transport/clone/migration: From Transportable Tablespaces to Pluggable Databases

I’ve stolen a few pictures to illustrate this blog post, from UKOUG photo albums, and from friends.

Presentations

The article on Transportable Tablespaces and Pluggable Databases is actually based on one presentation I did. I was in that big Hall 1 [I realize that some may make a joke about this, but I'm talking about the room] where I look like Ant-Man from the back of the room. But doing live demos is great on this huge screen.

The idea for this presentation came one year ago when preparing a 5-minute talk for the Oracle Open World 2016 ACED talks (blog post about this), and this subject is so exciting that I wanted to share more about it. Online PDB clone and relocate will be the features you will like the most when going to Multitenant.

My second presentation was more developer focused, covering all the join methods that can magically construct your query result in record time or, when a bad join method is chosen, leave your 2-second query still running after an hour. I explained the join methods in 3 ways: theory with .ppt animations, execution plans with dbms_xplan, and live while running queries, with SQL Monitor, showing the different operations and A-Rows increasing in real time.
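As a side note, if you want to look at the A-Rows column yourself, here is a hedged sketch (not the demo from the talk) of the dbms_xplan part:

-- run the query with rowsource statistics collected
select /*+ gather_plan_statistics */ count(*)
from dba_objects o join dba_users u on o.owner = u.username;
-- display the last cursor with estimated (E-Rows) and actual (A-Rows) rows per operation
select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));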

I was also co-presenting in a roundtable on Oracle Cloud PaaS, sharing my modest experience with DBaaS. Everybody talks about Cloud, and it is good to talk about the problems encountered and how to deal with them.

Round tables

Despite strong competition from good sessions, I also attended a few round tables. These conferences are a good opportunity to meet and share with other users and product managers. Oracle is a big company, and we sometimes think that they only care about their biggest customers, but that is not accurate. There are several product managers who really listen to customers. A great discussion was about something that has been slowly changing for a few years: the documentation, which was really precise in the past – explaining the when, why, and how – is now more vague.

Community

Geeking at the Oracle Developer Community lounge, eating something decent near the conference center, drinking while discussing technical stuff, the ACE dinner (and the first ACED briefing outside the US), posing with award winners, and the amazing chocolate tasting…


 


SQL Server Tips: an orphan user owns a database role

Fri, 2017-12-08 02:15

A few days ago, I conducted an audit to detect all orphaned Windows accounts in a database, and I was surprised to get an error during the DROP USER query.

 

The first step is to find all orphaned Windows accounts in the database:

USE [dbi_database]
GO

/* Step 1: Search the orphan users */
SELECT * FROM sys.database_principals a
LEFT OUTER JOIN sys.server_principals b ON a.sid = b.sid
WHERE b.sid IS NULL
AND   a.type IN ('U', 'G')
AND   a.principal_id > 4

 

I find the user called “dbi\orphan_user” and run the query to drop it

/* Drop orphan user */
DROP USER [dbi\orphan_user]
GO


But as you can see, I receive the error message:

Msg 15421, Level 16, State 1, Line 4
“The database principal owns a database role and cannot be dropped.”

 

This user is the owner of database roles…

Be careful not to confuse it with this error message:

Msg 15138, Level 16, State 1, Line 4
The database principal owns a schema in the database, and cannot be dropped.

In this case, the user is the owner of a schema.

Do not confuse these two error messages:

  • Msg 15421 is for database role
  • Msg 15138 is for schema
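For the schema case (Msg 15138) the approach is analogous. A hedged sketch, not part of the original audit, using the same orphan user:

/* list the schemas owned by the orphan user */
SELECT s.name AS schema_name
FROM sys.schemas s
JOIN sys.database_principals dp ON s.principal_id = dp.principal_id
WHERE dp.name = 'dbi\orphan_user';

/* give each of them back to dbo before dropping the user */
ALTER AUTHORIZATION ON SCHEMA::<schema name> TO dbo;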

 

The goal is to find all database roles owned by the user dbi\orphan_user:

/* Search database roles owned by this orphan user */
SELECT dp2.name, dp1.name FROM sys.database_principals AS dp1
    JOIN sys.database_principals AS dp2
    ON dp1.owning_principal_id = dp2.principal_id
    WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';

As you can see in my SELECT, I use the view sys.database_principals twice to cross-check the owning_principal_id against the principal_id.


After that, I change the owner of these roles to the right one (dbo by default).

/* Change the owner of these database roles */
ALTER AUTHORIZATION ON ROLE::<database role> TO dbo;


And I drop the orphan user without any problem…

/* Drop orphan user */
DROP USER [dbi\orphan_user]
GO


To finish, I give you a Santa Claus gift:

I also rewrote the query to generate the “ALTER AUTHORIZATION” statement directly in the SELECT. You just have to copy/paste the result and execute it:

SELECT dp2.name, dp1.name, 'ALTER AUTHORIZATION ON ROLE::' + dp1.name + ' TO dbo;' AS query
FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';

 

Et voila! 8-)

 


How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7?

Wed, 2017-12-06 10:17

DISCLAIMER: I know there are other solutions to do this.

Pre-requisites:
– a virtual machine (or not) with CentOS7 installed
– a free disk or partition

I use a VirtualBox machine to which I added a new hard disk (sdb below); we will create a 5 GiB partition on it.

We list the disks and partitions to check that our new hard disk has been added:

[root@deploy ~]$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0   20G  0 disk
├─sda1                       8:1    0    1G  0 part /boot
└─sda2                       8:2    0   19G  0 part
  ├─cl-root                253:0    0   21G  0 lvm  /
  └─cl-swap                253:1    0    2G  0 lvm  [SWAP]
sdb                          8:16   0   10G  0 disk

Good, we can continue.

Let’s partition the disk using fdisk

[root@deploy ~]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x76a98fa2.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now, we need to inform the kernel that the partition table has changed. To do that, either we reboot the server or we run partprobe

[root@deploy ~]$ partprobe /dev/sdb1
[root@deploy ~]$

We create a physical volume

[root@deploy ~]$ pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@deploy ~]$ pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sda2  cl lvm2 a--  19.00g       0
  /dev/sdb1     lvm2 ---   5.00g    5.00g
  /dev/sdc2  cl lvm2 a--   5.00g 1020.00m

We create a volume group

[root@deploy ~]$ vgcreate  vg_deploy /dev/sdb1
  Volume group "vg_deploy" successfully created

We check that the volume group was created properly

[root@deploy ~]$ vgdisplay vg_deploy
  --- Volume group ---
  VG Name               vg_deploy
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / 5.00 GiB
  VG UUID               5ZhlvC-lpor-Ti8x-mS9P-bnxW-Gdtw-Gynocl

Here, I set the size of the logical volume in PE (Physical Extents). One PE represents 4.00 MiB, so the 1000 PE used below correspond to roughly 3.91 GiB, which is the LV Size reported by lvdisplay.

We create a logical volume on our volume group

[root@deploy ~]$ lvcreate -l 1000 -n lv_deploy vg_deploy
  Logical volume "lv_deploy" created.

Let's have a look at our new logical volume “lv_deploy”:

[root@deploy ~]$ lvdisplay /dev/vg_deploy/lv_deploy
  --- Logical volume ---
  LV Path                /dev/vg_deploy/lv_deploy
  LV Name                lv_deploy
  VG Name                vg_deploy
  LV UUID                2vxcDv-AHfB-7c2x-1PM8-nbn3-38M5-c1QoNS
  LV Write Access        read/write
  LV Creation host, time deploy.example.com, 2017-12-05 08:15:59 -0500
  LV Status              available
  # open                 0
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Let’s create our file system on the new logical volume

[root@deploy ~]$ mkfs.xfs  /dev/vg_deploy/lv_deploy
meta-data=/dev/vg_deploy/lv_deploy isize=512    agcount=4, agsize=256000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1024000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We now create a new directory, “/mysqldata” for example:

[root@deploy ~]$ mkdir /mysqldata

We add an entry for our new logical volume to /etc/fstab:

[root@deploy ~]$ echo "/dev/mapper/vg_deploy-lv_deploy     /mysqldata      xfs     defaults      0 0" >> /etc/fstab

We mount it

[root@deploy ~]$ mount -a

We check the filesystem is mounted properly

[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.9G   33M  3.9G   1% /mysqldata

We add some files to the /mysqldata directory (a for loop will help us)

[root@deploy mysqldata]$ for i in 1 2 3 4 5; do dd if=/dev/zero  of=/mysqldata/file0$i bs=1024 count=10; done
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000282978 s, 36.2 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000202232 s, 50.6 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000255617 s, 40.1 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000195752 s, 52.3 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000183672 s, 55.8 MB/s
[root@deploy mysqldata]$ ls -l
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

NOW the interesting part is coming, because we are going to reduce our /mysqldata filesystem.
But first, let's make a backup of our current /mysqldata FS.

[root@deploy mysqldata]$ yum -y install xfsdump
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

Bad news! We cannot reduce an xfs partition directly, so we need to (the whole sequence is condensed just below, then detailed step by step):
– back up our filesystem
– umount the filesystem && delete the logical volume
– re-create the logical volume with a smaller size and a new xfs FS
– restore our data
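Condensed, the whole sequence looks like this (same paths as in this post):

# 1. backup, 2. unmount and drop the LV, 3. recreate it smaller with a new xfs FS, 4. restore
xfsdump -f /tmp/mysqldata.dump /mysqldata
umount /mysqldata
lvremove /dev/vg_deploy/lv_deploy
lvcreate -l 800 -n lv_deploy vg_deploy
mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
mount -a
xfsrestore -f /tmp/mysqldata.dump /mysqldata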

Back up the filesystem:

[root@deploy mysqldata]$ xfsdump -f /tmp/mysqldata.dump /mysqldata
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> test
session label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of deploy.example.com:/mysqldata
xfsdump: dump date: Tue Dec  5 08:36:20 2017
xfsdump: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsdump: session label: "test"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 83840 bytes

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> test
media label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 75656 bytes
xfsdump: dump size (non-dir files) : 51360 bytes
xfsdump: dump complete: 5 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /tmp/mysqldata.dump OK (success)
xfsdump: Dump Status: SUCCESS

Then, we unmount the filesystem and delete the logical volume

[root@deploy ~]$ umount /mysqldata/

[root@deploy ~]$ df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        21G  8.7G   13G  42% /
devtmpfs            devtmpfs  910M     0  910M   0% /dev
tmpfs               tmpfs     920M     0  920M   0% /dev/shm
tmpfs               tmpfs     920M  8.4M  912M   1% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  227M  788M  23% /boot
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2          iso9660   4.3G  4.3G     0 100% /media/iso

[root@deploy ~]$ lvremove /dev/vg_deploy/lv_deploy
Do you really want to remove active logical volume vg_deploy/lv_deploy? [y/n]: y
  Logical volume "lv_deploy" successfully removed

We recreate the logical volume with a lower size (from 1000 PE to 800 PE)

[root@deploy ~]$ lvcreate -l 800 -n lv_deploy vg_deploy
WARNING: xfs signature detected on /dev/vg_deploy/lv_deploy at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg_deploy/lv_deploy.
  Logical volume "lv_deploy" created.

We build the XFS filesystem

[root@deploy ~]$ mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
meta-data=/dev/mapper/vg_deploy-lv_deploy isize=512    agcount=4, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=819200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We remount the filesystem

[root@deploy ~]$ mount -a
[root@deploy ~]$
[root@deploy ~]$
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.2G   33M  3.1G   2% /mysqldata

We list the content of /mysqldata directory

[root@deploy ~]$ ls -l /mysqldata
total 0

Let’s restore our data

[root@deploy ~]$ xfsrestore -f /tmp/mysqldata.dump /mysqldata
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: deploy.example.com
xfsrestore: mount point: /mysqldata
xfsrestore: volume: /dev/mapper/vg_deploy-lv_deploy
xfsrestore: session time: Tue Dec  5 08:36:20 2017
xfsrestore: level: 0
xfsrestore: session label: "test"
xfsrestore: media label: "test"
xfsrestore: file system id: 84832e04-e6b8-473a-beb4-f4d59ab9e73c
xfsrestore: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsrestore: media id: 8fda43c1-c7de-4331-b930-ebd88199d0e7
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 5 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /tmp/mysqldata.dump OK (success)
xfsrestore: Restore Status: SUCCESS

Our data are back

[root@deploy ~]$ ls -l /mysqldata/
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

Hope this helps :-)

 


Naming of archivelog files with a non-existing top-level archivelog directory

Wed, 2017-12-06 06:33

In Oracle 12.2, an archive log destination is accepted even if the top-level archivelog directory (the last component of the destination path) does not exist:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/
 total 2267920
 drwxr-xr-x. 2 oracle dba        96 Dec  6 05:36 arch ...

Now, the database accepts this non-existing archivelog destination:

SQL> alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
System altered.

But not this one, where two directory levels are missing:

SQL> alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4';
 alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4'
 *
 ERROR at line 1:
 ORA-02097: parameter cannot be modified because specified value is invalid
 ORA-16032: parameter LOG_ARCHIVE_DEST_4 destination string cannot be translated
 ORA-07286: sksagdi: cannot obtain device information.
 Linux-x86_64 Error: 2: No such file or directory

The log file format is set as follows:

SQL> show parameter log_archive_format;
NAME                                 TYPE        VALUE
 ------------------------------------ ----------- ------------------------------
 log_archive_format                   string      %t_%s_%r.dbf
 SQL>

 

Now let's see what the archive log files look like in log_archive_dest_3 (%t is the thread, %s the sequence, %r the resetlogs id):

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/arch/arch2*
 -rw-r-----. 1 oracle dba 3845120 Dec  6 05:36 /u02/oradata/DMK/arch/arch21_5_960106002.dbf

So Oracle just prepends the name of the non-existing directory to the archivelog filename.
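If you actually want the archived logs inside that directory, the obvious remedy (a sketch, not part of the original test) is to create it beforehand:

mkdir /u02/oradata/DMK/arch/arch2

Subsequent archived logs for log_archive_dest_3 should then be written into arch2/ with the normal log_archive_format name, without the directory name glued to the file name.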

 


No journal messages available before the last reboot of your CentOS/RHEL system?

Tue, 2017-12-05 02:20

As you have probably noticed, RedHat as well as CentOS switched to systemd with version 7 of their operating system release. This also means that instead of looking at /var/log/messages you are supposed to use journalctl to browse the messages of the operating system. One issue with that is that messages from before the last reboot of your system will not be available, which is probably not what you want.

Let's say I started my RedHat Linux system just now:

Last login: Tue Dec  5 09:12:34 2017 from 192.168.22.1
[root@rhel7 ~]$ uptime
 09:14:14 up 1 min,  1 user,  load average: 0.33, 0.15, 0.05
[root@rhel7 ~]$ date
Die Dez  5 09:14:15 CET 2017

Asking for any journal logs before that will not show anything:

[root@rhel7 ~]$ journalctl --help  | grep "\-\-since"
  -S --since=DATE          Show entries not older than the specified date
[root@rhel7 ~]$ journalctl --since "2017-12-04 00:00:00"
-- Logs begin at Die 2017-12-05 09:13:07 CET, end at Die 2017-12-05 09:14:38 CET. --
Dez 05 09:13:07 rhel7.localdomain systemd-journal[86]: Runtime journal is using 6.2M (max allowed 49.6M, trying to 
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuset
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpu
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuacct

Nothing for yesterday, which is bad. The issue here is the default configuration:

[root@rhel7 ~]$ cat /etc/systemd/journald.conf 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

“Storage=auto” means that the journal will only be persistent if this directory exists (it does not in the default setup):

[root@rhel7 ~]$ ls /var/log/journal
ls: cannot access /var/log/journal: No such file or directory

As soon as this is created and the service is restarted the journal will be persistent and will survive a reboot:

[root@rhel7 ~]$ mkdir /var/log/journal
[root@rhel7 ~]$ systemctl restart systemd-journald.service
[root@rhel7 ~]$ ls -al /var/log/journal/
total 4
drwxr-xr-x.  3 root root   46  5. Dez 09:15 .
drwxr-xr-x. 10 root root 4096  5. Dez 09:15 ..
drwxr-xr-x.  2 root root   28  5. Dez 09:15 a473db3bada14e478442d99da55345e0
[root@rhel7 ~]$ ls -al /var/log/journal/a473db3bada14e478442d99da55345e0/
total 8192
drwxr-xr-x. 2 root root      28  5. Dez 09:15 .
drwxr-xr-x. 3 root root      46  5. Dez 09:15 ..
-rw-r-----. 1 root root 8388608  5. Dez 09:15 system.journal

Of course, you should also have a look at the other parameters that control the size of the journal as well as its rotation.
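For example, a hedged sketch of /etc/systemd/journald.conf that makes persistence explicit and caps disk usage (the values are arbitrary examples, not recommendations):

[Journal]
# always write to /var/log/journal instead of relying on the directory being created manually
Storage=persistent
# limit disk usage and retention (example values)
SystemMaxUse=500M
MaxRetentionSec=1month

A systemctl restart systemd-journald.service is needed afterwards, as shown above.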

 

