Feed aggregator

Using CSS Frameworks with PeopleSoft Fluid

Jim Marion - Wed, 2017-12-20 15:20

Fluid is Oracle's strategic direction. If you have experience with Fluid development, you know that dragging, dropping, and aligning fields on a canvas isn't enough to develop a multi-form factor Fluid user interface. With Fluid comes an emphasis on CSS3 for multi-form factor support. This means you must add CSS class names to page fields to support different screen sizes. PeopleTools comes with an extensive list of predefined CSS class names. The challenge is identifying which CSS class to use. Fortunately, Oracle published two extremely helpful documents:

If you learn the style classes in these two documents, you will do quite well with Fluid layout (actually, just learn a few dozen of these classes and you will do quite well). But what if you want to use features of CSS that don't exist in Fluid-delivered CSS classes? Flexbox, for example, is one of the most powerful features of CSS3. When I find myself terribly annoyed with Fluid layout, I throw a Flexbox at my layout issue and the problem is solved. If we want to apply styling using CSS attributes that PeopleSoft hasn't already defined, we have two options:

  • Define our own CSS classes by creating a freeform stylesheet, or
  • Borrow someone else's CSS class names from a CSS framework.

Writing your own CSS can be a rewarding experience. I can often solve layout problems with just a few lines of CSS. My concern, however, is maintenance. What starts as a one-time CSS "fix" (or hack) for a layout often turns into a copy/paste exercise replicated at least a dozen times, with each page using some of the same CSS and some different CSS. Then what? Do we create a separate CSS file for each Fluid page? Do we refactor common CSS, moving similar code into a shared library?

Given the age and history of the internet, most web layout problems have already been solved. Since PeopleSoft is just another web application, we can leverage the work of the world wide web's pioneers. The solutions to most of our layout problems exist in today's common CSS frameworks, Bootstrap being the most popular. There are many PeopleSoft consultants happily using Bootstrap to enhance PeopleSoft Fluid pages. Here is how they do it:

  1. Import Bootstrap into a Freeform Stylesheet
  2. Use AddStylesheet to insert Bootstrap into a PeopleSoft page
  3. Apply Bootstrap style classes to Fluid page elements
  4. Create a "reset stylesheet" to fix everything Bootstrap broke.

Yes, you read that last line correctly, "... fix everything Bootstrap broke." Please don't misread this. There are many developers successfully using Bootstrap with PeopleSoft. But here is the problem: Most CSS frameworks directly style HTML elements. This is actually good. Developers call this a "reset" stylesheet. What makes this a problem for PeopleSoft is that PeopleTools ALSO applies CSS directly to HTML elements. PeopleTools includes its own reset stylesheet. In a sense, we could say that Fluid is a CSS framework itself. The end result is a mixture of styles applied to HTML elements by two competing and complementing CSS frameworks. I call this "Fluid-strap." Consultants work around this problem by creating a reset for the competing reset stylesheets — a reset for the reset.

Here is another alternative: Use a CSS framework that does NOT style HTML elements, but instead relies on class names. This type of CSS framework was designed for compatibility. This type of framework understands that another CSS framework is in charge. My personal favorite CSS compatibility library is Oracle JET. In Oracle JET's GitHub repository, you will find oj-alta-notag.css, a CSS file containing a lot of CSS class names and no element declarations. To use this library, follow the first three steps described above, skipping the final step:

  1. Import oj-alta-notag.css into a Freeform Stylesheet
  2. Use AddStylesheet to insert the Oracle JET Stylesheet into a PeopleSoft page
  3. Apply Oracle JET style classes to Fluid page elements

The key difference is we don't have to create a reset for the reset. The Oracle JET stylesheet silently loads into a PeopleSoft page without changing any styling unless specifically asked to style an element through the element's Default Style Name (Style Classes on 8.56) property.

Consider a PeopleSoft page built with 4 group boxes aligned horizontally as demonstrated in the following screenshot.

In Classic, what you see is mostly what you get, so the online rendering would look nearly the same as the Application Designer screenshot. In Fluid when viewed online, however, each group box will render vertically as follows:

We can fix this by applying a CSS Flexbox to the 4 group boxes. With Flexbox, the 4 group boxes will align horizontally as long as the device has enough horizontal real estate. If the display is too small, any group boxes that don't fit horizontally will move to the next row. For this example, we will use Oracle JET's Flex Layout. Here are the steps:

  1. Add a container group box around the 4 horizontal group boxes and mark it as Layout Only
  2. While still setting container group box properties, set the group box's style class to oj-flex oj-sm-flex-items-1
  3. Likewise, to each of the 4 horizontal group boxes, add the Style Class oj-flex-item
  4. Create a Freeform stylesheet definition containing oj-alta-notag-min.css
  5. Use the AddStylesheet PeopleCode function in PageActivate to insert the Stylesheet into your page

The end result will look something like this:

Several years ago, I read the book Test Driven Development by Kent Beck. In that book, Kent identifies the first step of each project as the hardest step. Why? Because each new project contains significant uncertainty. Software development seems to involve a lot of unknowns (if the solution were known, someone would have created it, automated it, and published it). His advice? Start with what you know and work towards what you don't know. This is how we teach Fluid at JSMPros. Your developers understand Classic development and we use that knowledge to springboard students into a higher level of Fluid understanding. If you are ready to take the Fluid challenge, I encourage you to register for one of our monthly Fluid classes at www.jsmpros.com/training-live-virtual. Have a group of eight or more developers? Contact us to schedule your own personalized Fluid training event.

Benefits and Use of Analytics for Insurance

Nilesh Jethwa - Wed, 2017-12-20 15:11

The world in which we live is a volatile world, a fact that doesn’t bode well for the insurance industry whose greatest ally is “certainty”. Analytics, however, can somehow fill the gap. In this regard, BI reporting tools such as data visualization software are extremely useful.

Advanced analytics for insurance allows all stakeholders in the industry to identify new growth opportunities and risk factors as soon as they take shape. Insurance companies such as yours can benefit from analytics in two ways.

  • Protect your enterprise.
  • Optimize your business’s growth.

Three important things to remember

Read more at http://www.infocaptor.com/dashboard/benefits-and-use-of-analytics-for-insurance

Online datafile move in a 12c dataguard environment

Yann Neuhaus - Wed, 2017-12-20 12:13

Oracle 12c introduced online datafile move. One question we might ask is: what about moving a datafile online in a Data Guard environment? In this blog we will do some tests.
Below is our configuration; we are using Oracle 12.2.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 8 seconds ago)
DGMGRL>

The StandbyFileManagement property is set to auto for both primary and standby database.

DGMGRL> show database 'MYCONT_SITE' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL> show database 'MYCONT_SITE1' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL>
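
This broker property maps to the STANDBY_FILE_MANAGEMENT initialization parameter, so (as a side note, not part of the original test) the same setting could also be checked from SQL*Plus on either site:

-- should return AUTO on both the primary and the standby
SQL> select value from v$parameter where name='standby_file_management';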

Below are the datafiles of the pluggable database PDB1, on both the primary and the standby:

SQL> show con_name;
CON_NAME
------------------------------
PDB1
SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

Now let's move, for example, /u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf to a new location on the primary PDB1:

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

We can verify the new location on the primary

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

As StandbyFileManagement is set to auto for both databases, we might think that the datafile is also moved on the standby, so let's check:

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

The answer is no.
Ok, but is my Data Guard configuration still working? Let's query the broker:

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 0 seconds ago)

Yes, the configuration is fine.
Ok, now can I move my datafile online to the new location on the standby server? Let's try:

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

And we can verify that the datafile was moved:

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

And we can also verify that the Data Guard configuration is still fine:

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 46 seconds ago)
DGMGRL>

Conclusion
We can see that:
1- The StandbyFileManagement property does not apply to online datafile move
2- Moving a datafile online on the primary does not move the datafile on the standby
3- An online datafile move can be done on the standby database
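
As a quick extra check (not part of the original test), one could also confirm from SQL*Plus on the standby that redo apply keeps up after these two independent moves, for example:

SQL> select name, value from v$dataguard_stats where name in ('transport lag','apply lag');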

 

The post Online datafile move in a 12c dataguard environment appeared first on Blog dbi services.

Oracle 12.2 Dataguard : PDB Flashback on the Primary

Yann Neuhaus - Wed, 2017-12-20 12:12

The other day I was discussing database flashback for a pluggable database in a Data Guard environment with a colleague. I did some tests and present the results in this blog.
Below is our broker configuration; Oracle 12.2 is used.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 35 seconds ago)
DGMGRL>

The primary database has the flashback database set to YES.

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE READ WRITE YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

Same for the standby database

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE1 READ ONLY WITH APPLY YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ ONLY
PDB2 READ ONLY

For the tests we are going to do a flashback database for the primary PDB1.
Let’s connect to PDB1

10:15:59 SQL> alter session set container=pdb1;
Session altered.
.
10:16:15 SQL> show con_name;
CON_NAME
------------------------------
PDB1
10:16:22 SQL>

And let's create a table article with some data for reference

10:16:22 SQL> create table article (idart number);
Table created.
.
10:18:12 SQL> insert into article values (1);
1 row created.
10:18:31 SQL> insert into article values (2);
1 row created.
.
10:18:34 SQL> select * from article;
IDART
----------
1
2
.
10:18:46 SQL> commit;

Now let's do a database flashback of the primary PDB1 to a point before the creation of the table article.

10:28:12 SQL> show con_name
CON_NAME
------------------------------
PDB1
.
10:28:16 SQL> shut immediate;
Pluggable Database closed.
.
10:28:28 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 MOUNTED
10:28:54 SQL>
.
10:28:54 SQL> FLASHBACK PLUGGABLE DATABASE PDB1 TO TIMESTAMP TO_TIMESTAMP('2017-12-20 10:16:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
10:30:14 SQL>

Now let's open PDB1 with the resetlogs option

10:31:08 SQL> alter pluggable database PDB1 open resetlogs;
Pluggable database altered.
10:32:15 SQL>

And let’s query the table article. As expected the table is no longer present

10:32:15 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 READ WRITE
.
10:32:59 SQL> select * from article;
select * from article
*
ERROR at line 1:
ORA-00942: table or view does not exist
10:33:06 SQL>

Now if we check the status of our Data Guard configuration in the broker, we have errors

12/20/2017 10:23:07 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 48 seconds ago)
12/20/2017 10:34:40 DGMGRL>

The status of the Primary database is fine

12/20/2017 10:34:40 DGMGRL> show database 'MYCONT_SITE';
Database - MYCONT_SITE
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYCONT
Database Status:
SUCCESS

But the standby status is returning some errors
12/20/2017 10:35:11 DGMGRL> show database 'MYCONT_SITE1';
Database - MYCONT_SITE1
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 3 minutes 10 seconds (computed 1 second ago)
Average Apply Rate: 7.00 KByte/s
Real Time Query: OFF
Instance(s):
MYCONT
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
12/20/2017 10:35:15 DGMGRL>

And if we check the alert log of the standby database, we can find the following errors

(3):Recovery of pluggable database PDB1 aborted due to pluggable database open resetlog marker.
(3):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 2518041, or timestamp before 12/20/2017 10:16:01, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2017-12-20T10:32:05.565085+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 2520607
2017-12-20T10:32:05.612394+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
2017-12-20T10:32:05.612511+01:00
MRP0: Background Media Recovery process shutdown (MYCONT)

On the primary PDB, we can query the current INCARNATION_SCN in the v$pdb_incarnation view. And we can see that the current incarnation SCN is the same as the one specified in the standby alert log (2518041).

11:08:11 SQL> show con_name
CON_NAME
------------------------------
PDB1
11:08:56 SQL> select status,INCARNATION_SCN from v$pdb_incarnation;
STATUS INCARNATION_SCN
------- ---------------
CURRENT 2518041
PARENT 2201909
PARENT 1396169
11:08:59 SQL>

And then, as specified in the alert log, we have to flashback the standby PDB to an SCN lower than 2518041.
First let's stop the redo apply on the standby:

12/20/2017 11:13:14 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-OFF';
Succeeded.
12/20/2017 11:13:59 DGMGRL>

And then let's flashback to SCN 2518039 (i.e. 2518041 - 2), for example.
Let's shut down the standby container MYCONT and start it up in mount state:

11:18:42 SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
.
11:19:10 SQL> startup mount
ORACLE instance started.
Total System Global Area 956301312 bytes
Fixed Size 8799656 bytes
Variable Size 348129880 bytes
Database Buffers 595591168 bytes
Redo Buffers 3780608 bytes
Database mounted.
11:19:50 SQL>
.
11:19:50 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED MOUNTED
PDB1 MOUNTED
PDB2 MOUNTED
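
As an extra precaution (not part of the original test), it can also be worth checking on the standby that the target SCN is still covered by the flashback window before issuing the flashback:

-- run on the standby CDB root; 2518039 is the target SCN derived above
SQL> select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;
-- with flashback logging, the target SCN generally needs to be no older than oldest_flashback_scn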

Now let’s flashback PDB1 on the standby

11:20:19 SQL> flashback pluggable database PDB1 to SCN 2518039;
Flashback complete.
11:20:40 SQL>

The last step is to re-enable redo apply for the standby container

12/20/2017 11:13:59 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-ON';
Succeeded.
12/20/2017 11:23:08 DGMGRL>

And then we can verify that the configuration is now fine

12/20/2017 11:25:05 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
12/20/2017 11:25:07 DGMGRL>

Conclusion
In this article we saw that flashback in a Data Guard environment works the same way for a container database as for a non-container database. The only difference is the SCN we must consider to flashback the pluggable database. This SCN should be queried from v$pdb_incarnation and not from v$database as we usually do for a non-container database.
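
To restate that as a short sketch (the v$database query is only shown for comparison with the usual non-container procedure):

-- non-container (or whole CDB) case: on the primary, after the flashback/open resetlogs
SQL> select resetlogs_change# from v$database;
-- PDB case, as used in this post: on the primary, inside the PDB
SQL> select incarnation_scn from v$pdb_incarnation where status='CURRENT';
-- the standby PDB is then flashed back to a value slightly below that SCN (2518041 - 2 here)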

 

The post Oracle 12.2 Dataguard : PDB Flashback on the Primary appeared first on Blog dbi services.

nVision Performance Tuning 12: Hinting nVision with SQL Profiles

David Kurtz - Wed, 2017-12-20 10:00
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.  It is a PeopleSoft specific version of a posting on my Oracle blog.

As I explained earlier in this series, it is not possible to add hints to nVision.  The dynamic nature of the nVision SQL means that it is not possible to use SQL Patches.  nVision SQL statements contain literal values and never use bind variables.  When dynamic selectors are used, the SELECTOR_NUM will be different for every execution. A SQL_ID found in one report will not be seen again in another report.  Even static selector numbers will change after the tree is updated or when a new tree is created.
It is possible to use SQL Profiles to introduce hints because they can optionally match the force match signature of a SQL.  SQL statements that differ only in the literal values they contain will have different SQL IDs but will have the same force matching signature.  Although you will still have a lot of force matching signatures, you should find that you have far fewer force matching signatures than SQL_IDs.   Picking out the signatures that account for the most elapsed execution time and creating profiles for them is manageable.
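For example, a first cut at picking out those signatures might look something like this (a sketch against AWR only; the module filter assumes nVision runs through RPTBOOK, as in the examples later in this post):
SELECT s.force_matching_signature
,      SUM(s.elapsed_time_delta)/1e6 elapsed_secs
,      COUNT(DISTINCT s.sql_id) sql_ids
FROM   dba_hist_sqlstat s
WHERE  s.module = 'RPTBOOK'
AND    s.force_matching_signature > 0
GROUP BY s.force_matching_signature
ORDER BY elapsed_secs DESC
FETCH FIRST 10 ROWS ONLY
/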
Note: SQL Profiles require the Tuning Pack to be licenced.
As far as is possible, good nVision performance should be achieved by setting appropriate tree performance options at tree level.  These are global settings.  You may find that a particular setting on a particular tree is not optimal for all reports.  You may then choose to override the tree-level setting in specific layouts.  You may also find that you still need hints to control execution plans.
In particular, parallel query can be an effective tactic in nVision performance tuning.  However, you should not put a degree of parallelism on PS_LEDGER or PS_LEDGER_BUDG because that will invoke parallelism in many other processes.  I have found that even putting a degree of parallelism on a summary ledger table can easily result in too many concurrent parallel queries.   On OLTP systems, such as PeopleSoft, I recommend that parallelism should be used sparingly and in a highly controlled and targeted fashion.
Example
Let's take the following nVision query as an example.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
/
We can tell from the equality join conditions that the two selectors still joined to the ledger view are dynamic selectors.
A third selector on DEPTID has been suppressed with the 'use literal values' performance option.  The number of DEPTID predicates in the statement will depend on the tree and the node selected for the report.  Note that if these change then the statement will not force match the same profile.  SQL profiles might suddenly cease to work due to a tree or selection criteria change.
This is the plan I get initially and without a profile. It doesn't perform well.
Plan hash value: 808840077
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 10408 (100)| | | |
| 1 | HASH GROUP BY | | 517 | 50666 | 10408 (1)| 00:00:01 | | |
| 2 | HASH JOIN | | 517 | 50666 | 10407 (1)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 4 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 5 | HASH JOIN | | 518 | 41440 | 10404 (1)| 00:00:01 | | |
| 6 | PARTITION RANGE SINGLE | | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 7 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 8 | PARTITION RANGE ITERATOR | | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_X_LEDGER_ACCTS | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 10 | SORT CLUSTER BY ROWID BATCHED | | 5373 | | 5177 (1)| 00:00:01 | | |
| 11 | INDEX SKIP SCAN | PS_X_LEDGER_ACCTS | 5373 | | 5177 (1)| 00:00:01 | 28 | 40 |
-----------------------------------------------------------------------------------------------------------------------------------
These are the hints I want to introduce (on Oracle 12c).
SELECT /*+OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_min_time_threshold',2) 
OPT_PARAM('parallel_degree_limit',4) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)*/…
  • Use automatic parallel degree, statement queuing and in-memory parallel execution.
  • Invoke parallelism if the statement is estimated to run for at least 2 seconds
  • However, I will also limit the automatic parallelism to a degree of 4
  • Force materialized view rewrite
  • Use a Bloom filter when joining to the materialized view.
I have created a data-driven framework to create the profiles. I have created a working storage table to hold details of each force matching signature for which I want to create a profile.
CREATE TABLE dmk_fms_profiles
(force_matching_signature NUMBER NOT NULL
,sql_id VARCHAR2(13)
,plan_hash_value NUMBER
,module VARCHAR2(64)
,action VARCHAR2(64)
,report_id VARCHAR2(32) /*Application Specific*/
,tree_list CLOB /*Application Specific*/
,sql_profile_name VARCHAR2(30)
,parallel_min_time_threshold NUMBER
,parallel_degree_limit NUMBER
,other_hints CLOB
,delete_profile VARCHAR2(1)
,sql_text CLOB
,CONSTRAINT dmk_fms_profiles_pk PRIMARY KEY (force_matching_signature)
,CONSTRAINT dmk_fms_profiles_u1 UNIQUE (sql_id)
,CONSTRAINT dmk_fms_profiles_u2 UNIQUE (sql_profile_name)
)
/
Using conditional parallelism with PARALLEL_MIN_TIME_THRESHOLD, but limited with PARALLEL_DEGREE_LIMIT, is an effective tactic with nVision, so I have specified columns in the metadata table for those hints; any other hints are injected via a string. I identified the problematic SQL by analysis with ASH, and hence I also obtained the FORCE_MATCHING_SIGNATURE. The metadata is keyed by FORCE_MATCHING_SIGNATURE. I have specified a meaningful name for the SQL profile.
INSERT INTO dmk_fms_profiles (force_matching_signature, parallel_min_time_threshold, parallel_degree_limit, other_hints, sql_profile_name) 
VALUES (16625752171077499412, 1, 4, 'REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)', 'NVS_GBGL123I_BU_CONSOL_ACCOUNT');
COMMIT;
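For reference, the kind of ASH query used to pick up that force matching signature might look something like this (a sketch only; it again assumes MODULE is set to RPTBOOK by the nVision instrumentation):
SELECT force_matching_signature
,      COUNT(*) ash_samples
FROM   v$active_session_history
WHERE  module = 'RPTBOOK'
AND    force_matching_signature > 0
GROUP BY force_matching_signature
ORDER BY ash_samples DESC
/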
Profiles are created using the text of a SQL rather than the SQL_ID or FORCE_MATCHING_SIGNATURE directly. Therefore the SQL_TEXT must be extracted from the AWR, so this method also requires that the SQL statement has been captured by an AWR snapshot.
UPDATE dmk_fms_profiles a
SET (module, action, sql_id, plan_hash_value, sql_text)
= (SELECT s.module, s.action, s.sql_id, s.plan_hash_value, t.sql_text
FROM dba_hist_sqlstat s
, dba_hist_sqltext t
WHERE t.dbid = s.dbid
AND t.sql_id = s.sql_id
AND s.force_matching_signature = a.force_matching_signature
AND s.snap_id = (
SELECT MAX(s1.snap_id)
FROM dba_hist_sqlstat s1
WHERE s1.force_matching_signature = a.force_matching_signature
AND s1.module = 'RPTBOOK' /*Application Specific*/
AND s1.action LIKE 'PI=%:%:%' /*Application Specific*/)
AND s.module = 'RPTBOOK' /*Application Specific*/
AND s.action LIKE 'PI=%:%:%' /*Application Specific*/
AND ROWNUM = 1)
WHERE sql_id IS NULL
/

MERGE INTO dmk_fms_profiles u
USING (
SELECT a.sql_id, a.force_matching_signature, p.name
FROM dmk_fms_profiles a
, dba_sql_profiles p
WHERE p.signature = a.force_matching_signature
) s
ON (s.force_matching_signature = u.force_matching_signature)
WHEN MATCHED THEN UPDATE
SET u.sql_profile_name = s.name
/
Columns REPORT_ID and TREE_LIST contain application specific information extracted from the application instrumentation and tree selector logging.
/*Application Specific - extract report ID from ACTION*/
UPDATE dmk_fms_profiles a
SET report_id = substr(regexp_substr(s.action,':([A-Za-z0-9_-])+',1,1),2)
WHERE report_id IS NULL
AND action IS NOT NULL
/
/*Application Specific - extract financial analysis tree from application logging*/
UPDATE dmk_fms_profiles a
SET tree_list =
(SELECT LISTAGG(tree_name,', ') WITHIN GROUP (ORDER BY tree_name)
FROM (select l.tree_name, MAX(l.length) length
FROM dba_hist_sql_plan p
, ps_nvs_treeslctlog l
WHERE p.plan_hash_value = a.plan_hash_value
AND p.sql_id = a.sql_id
AND p.object_name like 'PS%TREESELECT__'
AND p.partition_start = partition_stop
AND p.partition_start = l.selector_num
AND l.tree_name != ' '
GROUP BY l.tree_name)
)
WHERE tree_list IS NULL
/

Now I can produce a simple report of the metadata in order to see what profiles should be created.
column sql_text word_wrapped on format a110
column module format a8
column report_id heading 'nVision|Report ID'
column tree_list word_wrapped on format a20
column plan_hash_value heading 'SQL Plan|Hash Value' format 9999999999
column parallel_min_time_threshold heading 'Parallel|Min Time|Threshold' format 999
column parallel_degree_limit heading 'Parallel|Degree|Limit' format 999
set long 500
SELECT * FROM dmk_fms_profiles
/

SQL Plan
FORCE_MATCHING_SIGNATURE SQL_ID Hash Value MODULE ACTION
------------------------ ------------- ----------- -------- ----------------------------------------------------------------
Parallel Parallel
nVision Min Time Degree
Report ID TREE_LIST SQL_PROFILE_NAME Threshold Limit D
-------------------------------- -------------------- ------------------------------ --------- -------- -
OTHER_HINTS
--------------------------------------------------------------------------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------
12803175998948432502 5pzxhha3392cs 988048519 RPTBOOK PI=3186222:USGL233I:10008
USGL233I BU_GAAP_CONSOL, NVS_GBGL123I_BU_CONSOL_ACCOUNT 1 4
GAAP_ACCOUNT
REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)
SELECT L2.TREE_NODE_NUM,A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_GBMGT' AND A.FISCAL_YEAR=2017 AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND (A.DEPTID BETWEEN
'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID
BETWEEN 'B9165' AND 'B9999' OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' OR
A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DE
Next, this PL/SQL block will create or recreate SQL profiles from the metadata. The various hints can be concatenated into a single string and passed as a parameter to SQLPROF_ATTR. The SQL text is passed as a parameter when the profile is created.
set serveroutput on
DECLARE
l_signature NUMBER;
h SYS.SQLPROF_ATTR;
e_no_sql_profile EXCEPTION;
PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
l_description CLOB;
BEGIN

FOR i IN (
SELECT f.*, s.name
FROM dmk_fms_profiles f
LEFT OUTER JOIN dba_sql_profiles s
ON f.force_matching_signature = s.signature
) LOOP

BEGIN
IF i.name IS NOT NULL AND i.delete_profile = 'Y' THEN
dbms_sqltune.drop_sql_profile(name => i.name);
END IF;
EXCEPTION WHEN e_no_sql_profile THEN NULL;
END;

IF i.delete_profile = 'Y' THEN
NULL;
ELSIF i.sql_text IS NOT NULL THEN
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_degree_policy'',''AUTO'') ' END||
CASE WHEN i.parallel_degree_limit >=0 THEN 'OPT_PARAM(''parallel_degree_limit'',' ||i.parallel_degree_limit ||') ' END||
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_min_time_threshold'','||i.parallel_min_time_threshold||') ' END||
i.other_hints,
q'[END_OUTLINE_DATA]');

l_signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(i.sql_text);
l_description := 'coe nVision '||i.report_id||' '||i.tree_list||' '||i.force_matching_signature||'='||l_signature;
dbms_output.put_line(i.sql_profile_name||' '||l_description);

DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => i.sql_text,
profile => h,
name => i.sql_profile_name,
description => l_description,
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );

END IF;
END LOOP;
END;
/
I can verify that the profile has been created, and the hints that it contains, thus:
SELECT profile_name,
xmltype(comp_data) as xmlval
FROM dmk_fms_profiles p
, dba_sql_profiles s
, dbmshsxp_sql_profile_attr x
WHERE x.profile_name = p.sql_profile_name
AND s.name = p.sql_profile_name
AND s.status = 'ENABLED'
ORDER BY 1
/

PROFILE_NAME
------------------------------
XMLVAL
------------------------------------------------------------------------------------------------
NVS_GBGL123I_BU_CONSOL_ACCOUNT
<![CDATA[BEGIN_OUTLINE_DATA]]>
<![CDATA[OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_degree_limit',4) OPT_PARAM('parallel_min_time_threshold',1) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)]]>
<![CDATA[END_OUTLINE_DATA]]>
And now when the application runs, I get the plan that I wanted.
  • The query runs in parallel.
  • The SQL is rewritten to use the materialized view.
  • There are no indexes on the materialized view, so it must full scan it.
  • It generates a Bloom filter from PSTREESELECT10 and applies it to the materialized view.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2219 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10004 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 4 | PX RECEIVE | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 5 | PX SEND HASH | :TQ10003 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | P->P | HASH |
| 6 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 7 | HASH JOIN | | 536 | 47704 | 2218 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 8 | PX RECEIVE | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10002 | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | | | | | | | Q1,02 | PCWC | |
| 11 | HASH JOIN | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 12 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 13 | JOIN FILTER CREATE | :BF0000 | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 14 | PX RECEIVE | | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 236 | 3776 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 16 | PARTITION RANGE SINGLE | | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 17 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 18 | JOIN FILTER USE | :BF0000 | 8859 | 475K| 2213 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 19 | PX BLOCK ITERATOR | | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWC | |
| 20 | MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_XX_SUM_GCNSL_MV | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWP | |
| 21 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 22 | PX RECEIVE | | 731 | 13158 | 3 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 23 | PX SEND HYBRID HASH | :TQ10001 | 731 | 13158 | 3 (0)| 00:00:01 | | | | S->P | HYBRID HASH|
| 24 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
| 25 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Conclusion
SQL Profiles can be used in much the same way as SQL Patches to introduce hints into application SQL without changing the code, the difference being that SQL Profiles can force match SQL.  However, SQL Profiles do require the Tuning pack to be licenced, whereas SQL Patches and Baselines do not.
Applying force matching SQL profiles to nVision is an effective, though reactive tactic.   Tree changes can result in changes to the number of literal criteria in nVision SQL statements that may, therefore, cease to match existing profiles.  nVision will always require on-going monitoring and introduction of new profiles.
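One way to support that ongoing monitoring (a sketch, reusing the same RPTBOOK module assumption as above) is to check which cached statements are actually picking up a profile, and which recent signatures have no profile at all:
-- statements currently in the shared pool that are using a SQL profile
SELECT sql_id, plan_hash_value, sql_profile
FROM   v$sql
WHERE  module = 'RPTBOOK'
AND    sql_profile IS NOT NULL
/
-- recent nVision signatures in AWR with no matching profile
SELECT s.force_matching_signature
,      SUM(s.elapsed_time_delta)/1e6 elapsed_secs
FROM   dba_hist_sqlstat s
WHERE  s.module = 'RPTBOOK'
AND    s.force_matching_signature > 0
AND    NOT EXISTS (SELECT 1 FROM dba_sql_profiles p WHERE p.signature = s.force_matching_signature)
GROUP BY s.force_matching_signature
ORDER BY elapsed_secs DESC
/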

Introducing Pixel Perfect Reporting in Oracle Analytics Cloud

Tim Dexter - Wed, 2017-12-20 08:30

 

For all you BI Publisher fans, here is the good news - BI Publisher is now available with Oracle Analytics Cloud !!

Oracle Analytics Cloud (OAC) is a scalable and secure public cloud service that provides a full set of capabilities to explore and perform collaborative analytics for your enterprise. You can take data from any source, explore with Data Visualization and collaborate with real-time data. It is available in three flavors - Standard Edition, Data Lake Edition and Enterprise Edition, with Standard Edition giving the base ability to explore data, Data Lake Edition allowing insights into big data, and Enterprise Edition offering the full platter of data exploration, big data analytics, dashboard, enterprise reporting, Essbase etc. Refer to this documentation for additional details on different editions.

With OAC 17.4.5 Enterprise Edition, you can now create pixel perfect reports and deliver them to a variety of destinations such as email, printer, fax, file server using FTP or WebDAV, WebCenter Content and Content & Experience Cloud. The version of BI Publisher here is 12.2.4.0.

If you have used BI Publisher On-prem, the experience will be very similar feature wise and look-and-feel wise, and therefore you will find it easy to get on-board. If you are new to BI Publisher, you will now be able to create pixel perfect and highly formatted business documents in OAC such as Invoices, Purchase Orders, Dunning Letters, Marketing Collateral, EFT & EDI documents, Financial Statements, Government Forms, Operational Reports, Management Reports, Retail Reports, Shipping Labels with barcodes, Airline boarding passes with PDF417 barcode, Market to Mobile content using QR code, Contracts with fine-print on alternate page, Cross-tab reports, etc.

You can connect to a variety of data sources including BI Subject Areas, BI Analysis and RPD; Schedule your report to run once or as a recurring job; and even burst documents to render in multiple formats and be delivered to multiple destinations.

 

Can we move from BI Publisher on-prem to BI Publisher on OAC?

Well yes, you can. You will have to understand your on-prem deployment and plan accordingly. If your data can be migrated to OAC, that will be the best option; otherwise you can plan to extend your network to Oracle Cloud, allowing OAC to access your on-prem data. The repository can be migrated by the archive and unarchive mechanism. User data management will be another task, where application roles from on-prem will need to be added to OAC application roles. Details on this will be coming soon.

 

Benefits of BI Publisher on OAC

First of all, OAC comes with many great features around data exploration and visualization with advanced analytics capabilities. BI Publisher complements this environment with pixel perfect reporting. So now you have an environment that is packed with industry-leading BI products, providing an end-to-end solution for an enterprise.

Managing server instances will be a cakewalk now; with just a few clicks you will be able to scale up/down to a different compute shape or scale out/in to manage nodes in the cluster, saving you both time and money.

There are also many self-service features to manage reports and server-related resources.

 

What's new in BI Publisher 12.2.4.0?

BI Publisher in OAC includes all features of 12.2.1.3 and has the following new features in this release:

  • Accessible PDF Support (Tagged PDF & PDF/UA-1)
  • New Barcodes - QR Code and PDF417
  • Ability to purge Job History
  • Ability to view diagnostic log for online report
  • Widow-orphan support for RTF template

 

So why wait? You can quickly check this out by creating a free trial account here. Once you log in, you are on the OAC home page. To get to BI Publisher you need to click on the Page Menu at the top right of the page and then select the option "Open Classic Home". BI Publisher options are available under Published Reporting in the classic home page.

For further details on pixel perfect reporting, check the latest Oracle Analytics Cloud Documentation.

 

Stay tuned for more updates on upgrade and new features !

Categories: BI & Warehousing

Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02)

Yann Neuhaus - Wed, 2017-12-20 07:58

In this post we are going to deploy a R.A.C system ready to run production load with near-zero knowledge of R.A.C, Oracle cluster or Oracle database.

We are going to use the “Deploy Cluster Tool”, which is provided by Oracle to perform the deployment of the many kinds of database architectures you may need, like Oracle single instance, Oracle Restart or Oracle R.A.C. This tool lets you choose whether you want an Enterprise Edition or a Standard Edition and whether you want an Oracle release 11g or 12c.

For this demonstration we are going to deploy a R.A.C 12cR2 in Standard Edition.

What you need at this stage

  • An OVM infrastructure as described in this post. In this infrastructure we have
    • 2 virtual machines called rac001 and rac002 with the required network cabling and disk configuration to run R.A.C
    • The 2 VMs are created from the Oracle template, which includes the stuff needed to deploy whichever configuration you need
  • A copy of the latest release of the “Deploy Cluster Tool”, available here

The most important part here is to edit 2 configuration files to describe the configuration we want:

  • deployRacProd_SE_RAC_netconfig.ini: network parameters needed for the deployment
  • deployRacProd_SE_RAC_params.ini: parameters related to database memory, name, ASM disk groups, User UID and so on

This is the content of the network configuration used for this infrastructure:

-bash-4.1# egrep -v "^$|^#" deployRacProd_SE_RAC_netconfig.ini
NODE1=rac001
NODE1IP=192.168.179.210
NODE1PRIV=rac001-priv
NODE1PRIVIP=192.168.3.210
NODE1VIP=rac001-vip
NODE1VIPIP=192.168.179.211
NODE2=rac002
NODE2IP=192.168.179.212
NODE2PRIV=rac002-priv
NODE2PRIVIP=192.168.3.212
NODE2VIP=rac002-vip
NODE2VIPIP=192.168.179.213
PUBADAP=eth1
PUBMASK=255.255.255.0
PUBGW=192.168.179.1
PRIVADAP=eth2
PRIVMASK=255.255.255.0
RACCLUSTERNAME=cluprod01
DOMAINNAME=
DNSIP=""
NETCONFIG_DEV=/dev/xvdc
SCANNAME=cluprod01-scan
SCANIP=192.168.179.205
FLEX_CLUSTER=yes
FLEX_ASM=yes
ASMADAP=eth3
ASMMASK=255.255.255.0
NODE1ASMIP=192.168.58.210
NODE2ASMIP=192.168.58.212

Let's start from the OVM Manager server, going into the “Deploy Cluster Tool” directory and initiating the first stage of the deployment:

-bash-4.1# cd /root/deploycluster3
-bash-4.1# ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini
Oracle DB/RAC OneCommand (v3.0.5) for Oracle VM - deploy cluster - (c) 2011-2017 Oracle Corporation
 (com: 29100:v3.0.4, lib: 231275:v3.0.5, var: 1800:v3.0.5) - v2.6.5 - ovmm (x86_64)
Invoked as root at Mon Dec 18 14:19:48 2017  (size: 43900, mtime: Tue Feb 28 01:03:00 2017)
Using: ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini

INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:

INFO: Attempting to connect to Oracle VM Manager...

Oracle VM Manager Core WS-API Shell 3.4.2.1384 (20160914_1384)

Copyright (C) 2007, 2016 Oracle. All rights reserved.
See the LICENSE file for redistribution information.


Connecting to https://localhost:7002/...

INFO: Oracle VM Client CONNECTED to Oracle VM Manager (3.4.4.1709) UUID (0004fb00000100001f20e914973507f6)

INFO: Inspecting /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini for number of nodes defined....
INFO: Detected 2 nodes in: /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini

INFO: Located a total of (2) VMs;
      2 VMs with a simple name of: ['rac001', 'rac002']

INFO: Detected a RAC deployment...

INFO: Starting all (2) VMs...

INFO: VM with a simple name of "rac001" is in a Stopped state, attempting to start it.................................OK.

INFO: VM with a simple name of "rac002" is in a Stopped state, attempting to start it.................................OK.

INFO: Verifying that all (2) VMs are in Running state and pass prerequisite checks.....

INFO: Detected that all (2) VMs specified on command line have (9) common shared disks between them (ASM_MIN_DISKS=5)

INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
      netconfig.ini (Network setup): /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini
      params.ini (Overall build options): /root/deploycluster3/deployRacProd_SE_RAC_params.ini
      buildcluster: yes

INFO: Starting to send configuration details to all (2) VM(s).................................................................
INFO: Sending to VM with a simple name of "rac001"...........................................................................................................................................................................................................................................................
INFO: Sending to VM with a simple name of "rac002"..............................................................................................................................................................

INFO: Configuration details sent to (2) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (rac001)...

INFO: deploycluster.py completed successfully at 14:21:28 in 100.4 seconds (0h:01m:40s)
Logfile at: /root/deploycluster3/deploycluster23.log

 

At this stage we have 2 nodes with the required network configuration, such as host names and IP addresses. The deployment script has also pushed the configuration files mentioned previously into the VMs.

So we connect to the first VM rac001:

-bash-4.1# ssh root@192.168.179.210
Warning: Permanently added '192.168.179.210' (RSA) to the list of known hosts.
root@192.168.179.210's password:
Last login: Mon Dec 11 10:31:03 2017
[root@rac001 ~]#

Then we go to the deployment directory, which is part of the template, and we can execute the deployment:

[root@rac001 racovm]# ./buildcluster.sh -s
Invoking on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:43 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
   Step(s): buildcluster

INFO (node:rac001): Skipping confirmation, flag (-s) supplied on command line
2017-12-18 09:06:43:[buildcluster:Start:rac001] Building 12cR2 RAC Cluster

INFO (node:rac001): No database created due to (BUILD_RAC_DATABASE=no) & (BUILD_SI_DATABASE=no) setting in params.ini
2017-12-18 09:06:45:[setsshroot:Start:rac001] SSH Setup for the root user...
..
INFO (node:rac001): Passwordless SSH for the root user already configured, skipping...
2017-12-18 09:06:46:[setsshroot:Done :rac001] SSH Setup for the root user completed successfully
2017-12-18 09:06:46:[setsshroot:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:46:[copykit:Start:rac001] Copy kit files to remote nodes
Kit files: buildsingle.sh buildcluster.sh netconfig.sh netconfig.ini common.sh cleanlocal.sh diskconfig.sh racovm.sh ssh params.ini doall.sh  netconfig GetSystemTimeZone.class kitversion.txt mcast

INFO (node:rac001): Copied kit to remote node rac002 as root user
2017-12-18 09:06:48:[copykit:Done :rac001] Copy kit files to (1) remote nodes
2017-12-18 09:06:48:[copykit:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:06:48:[usrsgrps:Start:rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (create/modify mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)
....
INFO (node:rac001): Passwordless SSH for the Oracle user (oracle) already configured to all nodes; not re-setting users passwords
2017-12-18 09:06:55:[usrsgrps:Done :rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
2017-12-18 09:06:55:[usrsgrps:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Parameters loaded from params.ini...
  Users & Groups:
   Role Separation: no  Running as: root
   OInstall    : oinstall       GID: 54321
   RAC Owner   : oracle         UID: 54321
    DB OSDBA   : dba            GID: 54322
    DB OSOPER  :                GID:
    DB OSBACKUP: dba            GID:
    DB OSDGDBA : dba            GID:
    DB OSKMDBA : dba            GID:
    DB OSRAC   : dba            GID:
   Grid Owner  : oracle         UID: 54321
    GI OSDBA   : dba            GID: 54322
    GI OSOPER  :                GID:
    GI OSASM   : dba            GID: 54322
  Software Locations:
   Operating Mode: RAC                   Database Edition: STD
   Flex Cluster: yes      Flex ASM: yes
   Central Inventory: /u01/app/oraInventory
   Grid Home: /u01/app/12.2.0/grid  (Detected: 12cR2, Enterprise Edition)
   Grid Name: OraGrid12c
   RAC Home : /u01/app/oracle/product/12.2.0/dbhome_1  (Detected: 12cR2, Enterprise Edition)
   RAC Name : OraRAC12c
   RAC Base : /u01/app/oracle
   DB/RAC OVM kit : /u01/racovm
   Attach RAC Home: yes   GI Home: yes  Relink Homes: no   On OS Change: yes
   Addnode Copy: no
  Database & Storage:
   Database : no         DBName: ORCL  SIDName: ORCL  DG: DGDATA   Listener Port: 1521
   Policy Managed: no
   DBExpress: no         DBExpress port: 5500
   Grid Management DB: no   GIMR diskgroup name:
   Separate GIMR diskgroup: no
   Cluster Storage: ASM
   ASM Discovery String: /dev/xvd[k-s]1
   ASM diskgroup: dgocrvoting      Redundancy: EXTERNAL   Allocation Unit (au_size): 4
      Disks     : /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1 /dev/xvdo1
   Recovery DG  : DGFRA            Redundancy: EXTERNAL
      Disks     : /dev/xvdr1 /dev/xvds1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Extra DG #1  : DGDATA           Redundancy: EXTERNAL
      Disks     : /dev/xvdp1 /dev/xvdq1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Persistent disknames: yes  Stamp: yes  Partition: yes  Align: yes  GPT: no Permissions: 660
   ACFS Filesystem: no

Network information loaded from netconfig.ini...
  Default Gateway: 192.168.179.1  Domain:
  DNS:
  Public NIC : eth1  Mask: 255.255.255.0
  Private NIC: eth2  Mask: 255.255.255.0
  ASM NIC    : eth3  Mask: 255.255.0.0
  SCAN Name: cluprod01-scan  SCAN IP: 192.168.179.205  Scan Port: 1521
  Cluster Name: cluprod01
  Nodes & IP Addresses (2 of 2 nodes)
  Node  1: PubIP : 192.168.179.210 PubName : rac001
     (Hub) VIPIP : 192.168.179.211 VIPName : rac001-vip
           PrivIP: 192.168.3.210   PrivName: rac001-priv
           ASMIP : 192.168.58.210
  Node  2: PubIP : 192.168.179.212 PubName : rac002
     (Hub) VIPIP : 192.168.179.213 VIPName : rac002-vip
           PrivIP: 192.168.3.212   PrivName: rac002-priv
           ASMIP : 192.168.58.212
Running on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:55 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
2017-12-18 09:06:56:[printparams:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:56:[setsshora:Start:rac001] SSH Setup for the Oracle user(s)...
..
INFO (node:rac001): Passwordless SSH for the oracle user already configured, skipping...
2017-12-18 09:06:57:[setsshora:Done :rac001] SSH Setup for the oracle user completed successfully
2017-12-18 09:06:57:[setsshora:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:57:[diskconfig:Start:rac001] Storage Setup
2017-12-18 09:06:58:[diskconfig:Start:rac001] Running in configuration mode (local & remote nodes)
.
2017-12-18 09:06:58:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq............................OK
2017-12-18 09:07:02:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1.
2017-12-18 09:07:02:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with stamping (existence check)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo......../dev/xvdr./dev/xvds...../dev/xvdp./dev/xvdq........OK
2017-12-18 09:07:23:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002....................OK
2017-12-18 09:07:52:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:07:54:[diskconfig:Disks:rac001] Setting disk permissions for next startup (all nodes)...
.....OK
2017-12-18 09:07:56:[diskconfig:ClearPartTables:rac001] Clearing partition tables...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:03:[diskconfig:CreatePartitions:rac001] Creating 'msdos' partitions on disks (as needed)...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:13:[diskconfig:CleanPartitions:rac001] Cleaning new partitions...
./dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1...OK
2017-12-18 09:08:13:[diskconfig:Done :rac001] Done configuring and checking disks on all nodes
2017-12-18 09:08:13:[diskconfig:Done :rac001] Storage Setup
2017-12-18 09:08:13:[diskconfig:Time :rac001] Completed successfully in 76 seconds (0h:01m:16s)
2017-12-18 09:08:15:[clearremotelogs:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:08:15:[check:Start:rac001] Pre-install checks on all nodes
..

INFO (node:rac001): Check found that all (2) nodes have the following (25586399 26609817 26609966) patches applied to the Grid Infrastructure Home (/u01/app/12.2.0/grid), the following (25811364 26609817 26609966) patches applied to the RAC Home (/u01/app/oracle/product/12.2.0/dbhome_1)
.2017-12-18 09:08:20:[checklocal:Start:rac001] Pre-install checks
2017-12-18 09:08:21:[checklocal:Start:rac002] Pre-install checks
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (check only mode)..
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac001): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac001): Running disk checks on all nodes, persistent disk names (/u01/racovm/diskconfig.sh -n 2 -D 1 -s)
2017-12-18 09:08:23:[diskconfig:Start:rac001] Running in dry-run mode (local & remote nodes, level 1), no stamping, partitioning or OS configuration files will be modified...(assuming persistent disk names)

INFO (node:rac002): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac002): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac002): Running network checks...
......
2017-12-18 09:08:24:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.............................OK
2017-12-18 09:08:29:[diskconfig:Disks:rac001] Checking existence of automatically renamed disks (localhost)...
/dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1.
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking permissions of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1..
2017-12-18 09:08:31:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with NO stamping (existence check)...
rac002........OK
2017-12-18 09:08:37:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002........
INFO (node:rac001): Waiting for all checklocal operations to complete on all nodes (At 09:08:50, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 13222 13365)...
...............
INFO (node:rac002): Check completed successfully
2017-12-18 09:09:07:[checklocal:Done :rac002] Pre-install checks
2017-12-18 09:09:07:[checklocal:Time :rac002] Completed successfully in 46 seconds (0h:00m:46s)
.......OK
2017-12-18 09:09:11:[diskconfig:Remote:rac001] Checking existence of automatically renamed disks (remote nodes)...
rac002...
2017-12-18 09:09:17:[diskconfig:Remote:rac001] Checking permissions of disks (remote nodes)...
rac002....
2017-12-18 09:09:21:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:09:26:[diskconfig:Done :rac001] Dry-run (local & remote, level 1) completed successfully, most likely normal run will too
..
INFO (node:rac001): Running multicast check on 230.0.1.0 port 42050 for 2 nodes...

INFO (node:rac001): All nodes can multicast to all other nodes on interface eth2 multicast address 230.0.1.0 port 42050...

INFO (node:rac001): Running network checks...
....................
INFO (node:rac001): Check completed successfully
2017-12-18 09:10:11:[checklocal:Done :rac001] Pre-install checks
2017-12-18 09:10:11:[checklocal:Time :rac001] Completed successfully in 111 seconds (0h:01m:51s)

INFO (node:rac001): All checklocal operations completed on all (2) node(s) at: 09:10:12
2017-12-18 09:10:12:[check:Done :rac001] Pre-install checks on all nodes
2017-12-18 09:10:13:[check:Time :rac001] Completed successfully in 117 seconds (0h:01m:57s)
2017-12-18 09:10:13:[creategrid:Start:rac001] Creating 12cR2 Grid Infrastructure
..
2017-12-18 09:10:16:[preparelocal:Start:rac001] Preparing node for Oracle installation

INFO (node:rac001): Resetting permissions on Oracle Homes... May take a while...
2017-12-18 09:10:17:[preparelocal:Start:rac002] Preparing node for Oracle installation

INFO (node:rac002): Resetting permissions on Oracle Homes... May take a while...

INFO (node:rac001): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:27:[preparelocal:Done :rac001] Preparing node for Oracle installation
2017-12-18 09:10:27:[preparelocal:Time :rac001] Completed successfully in 11 seconds (0h:00m:11s)

INFO (node:rac002): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:31:[preparelocal:Done :rac002] Preparing node for Oracle installation
2017-12-18 09:10:31:[preparelocal:Time :rac002] Completed successfully in 14 seconds (0h:00m:14s)
2017-12-18 09:10:32:[prepare:Time :rac001] Completed successfully in 19 seconds (0h:00m:19s)
....
2017-12-18 09:10:40:[giclonelocal:Start:rac001] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:41:[giattachlocal:Start:rac001] Attaching Grid Infratructure Home on node rac001

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
2017-12-18 09:10:41:[giclonelocal:Start:rac002] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:42:[giattachlocal:Start:rac002] Attaching Grid Infratructure Home on node rac002

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc

INFO (node:rac001): Waiting for all giclonelocal operations to complete on all nodes (At 09:11:06, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 18135 18141)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac001): Running on: rac001 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:08:[giattachlocal:Done :rac001] Attaching Grid Infratructure Home on node rac001
2017-12-18 09:11:08:[giattachlocal:Time :rac001] Completed successfully in 27 seconds (0h:00m:27s)
2017-12-18 09:11:09:[girootlocal:Start:rac001] Running root.sh on Grid Infrastructure home

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac001_2017-12-18_09-11-09-287116939.log for the output of root script
2017-12-18 09:11:09:[girootlocal:Done :rac001] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:09:[girootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac002): Running on: rac002 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:10:[giattachlocal:Done :rac002] Attaching Grid Infratructure Home on node rac002
2017-12-18 09:11:10:[giattachlocal:Time :rac002] Completed successfully in 28 seconds (0h:00m:28s)
2017-12-18 09:11:10:[girootlocal:Start:rac002] Running root.sh on Grid Infrastructure home

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac002_2017-12-18_09-11-10-934545273.log for the output of root script
2017-12-18 09:11:11:[girootlocal:Done :rac002] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:11:[girootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
2017-12-18 09:11:11:[giclonelocal:Done :rac001] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:11:[giclonelocal:Time :rac001] Completed successfully in 33 seconds (0h:00m:33s)
2017-12-18 09:11:13:[giclonelocal:Done :rac002] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:13:[giclonelocal:Time :rac002] Completed successfully in 34 seconds (0h:00m:34s)

INFO (node:rac001): All giclonelocal operations completed on all (2) node(s) at: 09:11:14
2017-12-18 09:11:14:[giclone:Time :rac001] Completed successfully in 42 seconds (0h:00m:42s)
....
2017-12-18 09:11:18:[girootcrslocal:Start:rac001] Running rootcrs.pl

INFO (node:rac001): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_<timestamp>.log

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_2017-12-18_09-11-19AM.log
2017/12/18 09:11:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:11:30 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/12/18 09:12:19 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:12:25 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:12:28 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:12:40 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:14:06 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/12/18 09:14:27 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:14:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/12/18 09:15:48 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/12/18 09:16:56 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:18:14 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:18:23 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2676: Start of 'ora.driver.afd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded

Disk label(s) created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.
Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.


2017/12/18 09:24:06 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e0321712cd544fa6bf438b5849f11155.
Successfully replaced voting disk group with +dgocrvoting.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e0321712cd544fa6bf438b5849f11155 (AFD:DGOCRVOTING1) [DGOCRVOTING]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac001'
CRS-2677: Stop of 'ora.crsd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac001'
CRS-2673: Attempting to stop 'ora.crf' on 'rac001'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac001'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac001'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac001'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac001' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac001'
CRS-2677: Stop of 'ora.crf' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac001'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac001'
CRS-2677: Stop of 'ora.evmd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac001'
CRS-2677: Stop of 'ora.cssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac001'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac001'
CRS-2677: Stop of 'ora.driver.afd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac001' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:26:52 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac001'
CRS-2676: Start of 'ora.ctssd' on 'rac001' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac001
CRS-6016: Resource auto-start has completed for server rac001
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:30:15 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:30:15 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.DGOCRVOTING.dg' on 'rac001'
CRS-2676: Start of 'ora.DGOCRVOTING.dg' on 'rac001' succeeded
2017/12/18 09:32:38 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/12/18 09:34:22 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:34:26:[girootcrslocal:Done :rac001] Running rootcrs.pl
2017-12-18 09:34:27:[girootcrslocal:Time :rac001] Completed successfully in 1388 seconds (0h:23m:08s)
2017-12-18 09:34:49:[girootcrslocal:Start:rac002] Running rootcrs.pl

INFO (node:rac002): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_<timestamp>.log

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_2017-12-18_09-34-50AM.log
2017/12/18 09:35:03 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:35:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:35:18, elapsed: 0h:00m:31s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:35:44 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:35:44 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
.2017/12/18 09:35:51 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:36:02 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
..2017/12/18 09:36:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:37:01 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:37:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:37:16 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:37:17 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
.2017/12/18 09:37:23 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:37:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:38:21, elapsed: 0h:03m:34s, 1) node remaining, all background pid(s): 7263)...
.2017/12/18 09:39:00 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
...2017/12/18 09:40:24 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:41:25, elapsed: 0h:06m:38s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:41:40 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:41:42 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
.CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac002'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac002' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:42:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
.....
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:44:27, elapsed: 0h:09m:40s, 1) node remaining, all background pid(s): 7263)...
.CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac002'
CRS-2672: Attempting to start 'ora.evmd' on 'rac002'
CRS-2676: Start of 'ora.mdnsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac002'
CRS-2676: Start of 'ora.gpnpd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac002'
CRS-2676: Start of 'ora.gipcd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac002'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac002'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac002'
CRS-2676: Start of 'ora.diskmon' on 'rac002' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac002'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac002'
CRS-2676: Start of 'ora.ctssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac002'
CRS-2676: Start of 'ora.crf' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac002'
CRS-2676: Start of 'ora.crsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6017: Processing resource auto-start for servers: rac002
CRS-2672: Attempting to start 'ora.net1.network' on 'rac002'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002'
CRS-2676: Start of 'ora.net1.network' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac002'
CRS-2676: Start of 'ora.ons' on 'rac002' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6016: Resource auto-start has completed for server rac002
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:45:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:45:22 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
.2017/12/18 09:45:49 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
..2017/12/18 09:46:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:46:40:[girootcrslocal:Done :rac002] Running rootcrs.pl
2017-12-18 09:46:40:[girootcrslocal:Time :rac002] Completed successfully in 711 seconds (0h:11m:51s)

INFO (node:rac001): All girootcrslocal operations completed on all (2) node(s) at: 09:46:42
2017-12-18 09:46:42:[girootcrs:Time :rac001] Completed successfully in 2128 seconds (0h:35m:28s)
2017-12-18 09:46:42:[giassist:Start:rac001] Running RAC Home assistants (netca, asmca)

INFO (node:rac001): Creating the node Listener using NETCA... (09:46:44)

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/netca /orahome /u01/app/12.2.0/grid /instype typical /inscomp client,oraclenet,javavm,server,ano /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/12.2.0/grid/network/install/netca_typ.rsp /silent /orahnam OraGrid12c

Parsing command line arguments:
    Parameter "orahome" = /u01/app/12.2.0/grid
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server,ano
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/12.2.0/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "orahnam" = OraGrid12c
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/asmca -silent -postConfigureASM

Post configuration completed successfully


INFO (node:rac001): Setting initial diskgroup name dgocrvoting's attributes as defined in RACASMGROUP_ATTRIBUTES ('compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0')...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:38: alter diskgroup dgocrvoting set attribute 'compatible.asm'='12.2.0.1.0';

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:40: alter diskgroup dgocrvoting set attribute  'compatible.rdbms'='12.2.0.1.0';

Diskgroup altered.
2017-12-18 09:48:44:[creatediskgroups:Start:rac001] Creating additional diskgroups

INFO (node:rac001): Creating Recovery diskgroup (DGFRA) at: 09:50:31...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGFRA" EXTERNAL redundancy disk 'AFD:RECO1','AFD:RECO2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:09.34

INFO (node:rac001): Creating Extra diskgroup (DGDATA) at: 09:52:22...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGDATA" EXTERNAL redundancy disk 'AFD:DGDATA1','AFD:DGDATA2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:10.48

INFO (node:rac001): Successfully created the following ASM diskgroups (DGFRA DGDATA), setting them for automount on startup and attempting to mount on all nodes...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:52:35: alter system set asm_diskgroups='DGDATA', 'DGFRA';

System altered.

INFO (node:rac001): Successfully set the ASM diskgroups (DGDATA DGFRA) to automount on startup

INFO (node:rac001): Attempting to mount diskgroups on nodes running ASM: rac001 rac002

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:38: alter diskgroup "DGFRA" mount;

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:39: alter diskgroup "DGDATA" mount;

Diskgroup altered.

INFO (node:rac001): Successfully mounted the created (DGFRA DGDATA) ASM diskgroups on all nodes running an ASM instance (rac001 rac002)
2017-12-18 09:52:41:[creatediskgroups:Done :rac001] Creating additional diskgroups
2017-12-18 09:52:41:[creatediskgroups:Time :rac001] Completed successfully in 237 seconds (0h:03m:57s)

WARNING (node:rac001): Management Database not created due to CLONE_GRID_MANAGEMENT_DB=no. Note that starting with release 12.1.0.2 and higher, the Management Database (GIMR) is required for a fully supported environment
2017-12-18 09:52:41:[giassist:Done :rac001] Running RAC Home assistants (netca, asmca)
2017-12-18 09:52:41:[giassist:Time :rac001] Completed successfully in 359 seconds (0h:05m:59s)
2017-12-18 09:52:41:[creategrid:Done :rac001] Creating 12cR2 Grid Infrastructure
2017-12-18 09:52:41:[creategrid:Time :rac001] Completed successfully in 2548 seconds (0h:42m:28s)

INFO (node:rac001): Skipping CVU post crsinst checks, due to CLONE_SKIP_CVU_POSTCRS=yes
2017-12-18 09:52:41:[cvupostcrs:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 09:52:41:[racclone:Start:rac001] Cloning 12cR2 RAC Home on all nodes
..

INFO (node:rac001): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:04:[racclonelocal:Start:rac001] Cloning 12cR2 RAC Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac001"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6740 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4059 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-12AM. Please wait ...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:53:14, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 12271 12277)...

INFO (node:rac002): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:15:[racclonelocal:Start:rac002] Cloning 12cR2 RAC Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac002"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6757 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4082 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-42AM. Please wait ....You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.
.
Copy files successful.

Link binaries in progress.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log
...................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.
...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:56:19, elapsed: 0h:03m:35s, 2) nodes remaining, all background pid(s): 12271 12277)...
..
Link binaries successful.

Setup files in progress.
.
Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac001]


..................................................   100% Done.

INFO (node:rac001): Relinking the oracle binary to disable Database Enterprise Edition options (09:58:45)...
..
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:59:25, elapsed: 0h:06m:42s, 2) nodes remaining, all background pid(s): 12271 12277)...
..2017-12-18 10:00:28:[racrootlocal:Start:rac001] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac001_2017-12-18_10-00-28-680953042.log for the output of root script
2017-12-18 10:00:29:[racrootlocal:Done :rac001] Running root.sh on RAC Home
2017-12-18 10:00:29:[racrootlocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:00:29:[racclonelocal:Done :rac001] Cloning 12cR2 RAC Home
2017-12-18 10:00:29:[racclonelocal:Time :rac001] Completed successfully in 464 seconds (0h:07m:44s)

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac002]


..................................................   100% Done.

INFO (node:rac002): Relinking the oracle binary to disable Database Enterprise Edition options (10:01:17)...
..2017-12-18 10:02:21:[racrootlocal:Start:rac002] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac002_2017-12-18_10-02-21-660060386.log for the output of root script
2017-12-18 10:02:22:[racrootlocal:Done :rac002] Running root.sh on RAC Home
2017-12-18 10:02:22:[racrootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:02:22:[racclonelocal:Done :rac002] Cloning 12cR2 RAC Home
2017-12-18 10:02:22:[racclonelocal:Time :rac002] Completed successfully in 576 seconds (0h:09m:36s)

INFO (node:rac001): All racclonelocal operations completed on all (2) node(s) at: 10:02:24
2017-12-18 10:02:24:[racclone:Done :rac001] Cloning 12cR2 RAC Home on all nodes
2017-12-18 10:02:24:[racclone:Time :rac001] Completed successfully in 583 seconds (0h:09m:43s)

INFO (node:rac002): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:28:[rmsshrootlocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:31:[rmsshrootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 10:02:31:[rmsshroot:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Current cluster state (10:02:31)...

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/olsnodes -n -s -t
rac001  1       Active  Hub     Unpinned
rac002  2       Active  Hub     Unpinned
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
Oracle Clusterware version on node [rac001] is [12.2.0.1.0]
CRS Administrator List: oracle root
Cluster is running in "flex" mode
CRS-41008: Cluster class is 'Standalone Cluster'
ASM Flex mode enabled: ASM instance count: 3
ASM is running on rac001,rac002

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac001                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac001                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

INFO (node:rac001): For an explanation on resources in OFFLINE state, see Note:1068835.1
2017-12-18 10:02:40:[clusterstate:Time :rac001] Completed successfully in 9 seconds (0h:00m:09s)
2017-12-18 10:02:40:[buildcluster:Done :rac001] Building 12cR2 RAC Cluster
2017-12-18 10:02:40:[buildcluster:Time :rac001] Completed successfully in 3357 seconds (0h:55m:57s)

INFO (node:rac001): This entire build was logged in logfile: /u01/racovm/buildcluster3.log

 

We are now going to multiplex both networks, the cluster heartbeat and ASM, because the current configuration is not highly available. We also need only 2 Flex ASM instances for this 2-node RAC:

[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl modify asm -count 2

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is disabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

/u01/app/12.2.0/grid/bin/srvctl add listener -asmlistener -l ASMNET2LSNR_ASM -subnet 192.168.58.0
/u01/app/12.2.0/grid/bin/srvctl start listener -l ASMNET2LSNR_ASM


[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: ASMNET2LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.58.0
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:



[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect
eth3  192.168.58.0   global  asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth2/192.168.3.0:cluster_interconnect,asm
[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth3/192.168.58.0:cluster_interconnect,asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect,asm
eth3  192.168.58.0   global  cluster_interconnect,asm

 

Let's check the overall cluster state:

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac002                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac002                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac002                   STABLE
--------------------------------------------------------------------------------

The cluster is now up, functional and resilient to network failures. We could have chosen to create a database as part of the deployment process, because the “Deploy Cluster Tool” allows us to do that. Nevertheless, for this demonstration, I chose to run the database creation manually on top of this deployment.
So we add a new database to the cluster:

 

[oracle@rac001 ~]$ dbca -silent -ignorePreReqs \
> -createDatabase \
> -gdbName app01 \
> -nodelist rac001,rac002 \
> -templateName General_Purpose.dbc \
> -characterSet AL32UTF8 \
> -createAsContainerDatabase false \
> -databaseConfigType RAC \
> -databaseType MULTIPURPOSE \
> -dvConfiguration false \
> -emConfiguration NONE \
> -enableArchive true \
> -memoryMgmtType AUTO_SGA \
> -memoryPercentage 75 \
> -nationalCharacterSet AL16UTF16 \
> -adminManaged \
> -storageType ASM \
> -diskGroupName DGDATA \
> -recoveryGroupName DGFRA \
> -sysPassword 0rAcle-Sys \
> -systemPassword 0rAcle-System \
> -useOMF true

Copying database files
1% complete
2% complete
15% complete
27% complete
Creating and starting Oracle instance
29% complete
32% complete
36% complete
40% complete
41% complete
43% complete
45% complete
Creating cluster database views
47% complete
63% complete
Completing Database Creation
64% complete
65% complete
68% complete
71% complete
72% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/app01/app01.log" for further details.

[oracle@rac001 ~]$ export ORACLE_SID=app011
[oracle@rac001 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 20 08:54:03 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from gv$instance ;

HOST_NAME
----------------------------------------------------------------
rac001
rac002

 

We now have a RAC 12.2 Standard Edition cluster ready to run our critical applications, with the latest patch levels for the OS and Oracle, and built according to all best practices and requirements.

So with this post we have a demonstration of how much simpler it is, with a good underlying OVM infrastructure, to deploy various kinds of Oracle database infrastructure. The same automation can also easily be applied to other technologies such as PostgreSQL databases, “Big Data” platforms like Hortonworks, or any other application.

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 

The post Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02) appeared first on Blog dbi services.

Podcast: Blockchain: Beyond Bitcoin

OTN TechBlog - Wed, 2017-12-20 07:00

Blockchain originally gained attention thanks to its connection to Bitcoin. But blockchain has emerged from under the crypto-currency’s shadow to become a powerful trend in enterprise IT -- and something that should be on every developer's radar.  For this program we’ve assembled a panel of blockchain experts to discuss the technology's impact, examine some use cases, and offer suggestions for developers who want to learn more in order to take advantage of the opportunities blockchain represents.

 

This program was recorded on Thursday November, 9, 2017.

 

The Panelists

Listed alphabetically

Lonneke Dikmans
Chief Product Officer, eProseed, Utrecht, NL
Oracle Developer Champion

John King
Tech Enablement Specialist/Speaker/Trainer/Course Developer, King Training Resources, Scottsdale, AZ

Robert van Mölken
Senior Integration / Cloud Specialist, AMIS, Utrecht, NL
Oracle Developer Champion

Arturo Viveros
SOA/Cloud Architect, Sysco AS, Oslo, NO
Oracle Developer Champion

 

Additional Resources Coming Soon
  • Combating Complexity
    Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss the creeping complexity in software development and strategies for heading off the "software apocalypse."
  • DevOps: Can This Marriage be Saved
    Nicole Forsgen, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge discuss the state of DevOps, where organizations get it wrong, and what developers can do to thrive in a DevOps environment.
Subscribe

Never miss an episode! The Oracle Developer Podcast is available via...

Green Mountain Higher Education Consortium Selects Oracle to Streamline Operations

Oracle Press Releases - Wed, 2017-12-20 07:00
Press Release
Green Mountain Higher Education Consortium Selects Oracle to Streamline Operations GMHEC chooses Oracle Cloud Applications to improve operational efficiencies and embrace modern best practices

Redwood Shores, Calif.—Dec 20, 2017

The Green Mountain Higher Education Consortium (GMHEC) has selected Oracle Cloud Applications to modernize critical finance and human resources business processes to increase customer service levels while reducing administrative costs. With Oracle Human Capital Management (HCM) Cloud, Oracle Enterprise Resource Planning (ERP) Cloud and Oracle Enterprise Performance Management (EPM) Cloud, GMHEC will be able to empower faculty, staff, and students to embrace modern best practices, drive institutional innovation and optimize the overall learning experience.

GMHEC is a collaborative endeavor of three Vermont colleges: Champlain College, Middlebury College and Saint Michael’s College. In order to continue to deliver the best possible student and staff experiences, GMHEC needed to replace its existing legacy on-premises business systems - which rely on manual processes and operate in a silo - with a unified business platform supporting modern best practices, real-time data insights, and transparent reporting across its campuses. After a careful evaluation of competing solutions, GMHEC selected Oracle ERP Cloud, Oracle EPM Cloud and Oracle HCM Cloud to modernize its finance, reporting, procurement, recruitment and on-boarding processes.

“GMHEC aims to create a collaborative environment where we are sharing best practices across all three colleges to directly address rising educational costs and continue to make higher education accessible and affordable to all students,” said Corinna Noelke, executive director, Green Mountain Higher Education Consortium. “Collectively, we selected Oracle as it has a proven history of success in the higher education market, as well as the resources to continue to drive innovation and to allow implementation of standardized best business practices within a single instance for all member colleges.”

With Oracle ERP Cloud, GMHEC will be able to leverage a complete, modern and secure solution to streamline and optimize financial processes. Oracle EPM Cloud will enable GMHEC to strategically analyze data for accurate forecasting. Oracle HCM Cloud will enable the HR teams of the GMHEC schools to spend less time on manual processes and more time on sourcing, developing and retaining top talent through self-service capabilities that empower employees to engage with each other.

“Higher-education institutions are constantly challenged to reduce their administrative costs and optimize business processes in order to meet rapidly changing student and faculty demands,” said Steve Miranda, executive vice president of applications development, Oracle. “With Oracle Cloud Applications, the GMHEC team will be able to improve operational efficiency and increase productivity in order to spend more time focused on the student and staff experience.”

Additional Information

To get the latest news and insights about Oracle ERP Cloud and Oracle EPM Cloud, follow @OracleERPCloud on Twitter or Facebook, @OracleEPMCloud on Twitter or Facebook, or read the Modern Finance Leader blog.

More information about Oracle HCM Cloud can be found on the Modern HR in the Cloud blog, or by following @OracleHCM on Twitter or Facebook.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Green Mountain Higher Education Consortium

The Green Mountain Higher Education Consortium (GMHEC) is a collaborative endeavor of three Vermont colleges: Champlain College, Middlebury College and Saint Michael’s College. The colleges created the consortium in 2013 to create and foster economic efficiencies and improved business and administrative practices across their campuses.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

OSB 12c Customization in WLST, some new insights: use the right jar for the job!

Darwin IT - Wed, 2017-12-20 04:41
Problem setting and investigation
Years ago I created a Release & Deploy framework for Fusion Middleware, also supporting Oracle Service Bus. Recently I revamped it for 12c. It uses WLST to import the OSB project into the Service Bus, including the execution of the customization file.

There are lots of examples of how to do this, but I want to zoom in on the execution of the customization file.

The WLST function that I use to do this is as follows:
# Imports needed by this function (if not already present elsewhere in the script):
from java.io import FileInputStream
from java.util import ArrayList, HashSet
from com.bea.wli.config.customization import Customization

#=======================================================================================
# Function to execute the customization file.
#=======================================================================================
def executeCustomization(ALSBConfigurationMBean, createdRefList, customizationFile):
  if customizationFile != None:
    print 'Loading customization File', customizationFile
    inputStream = FileInputStream(customizationFile)
    if inputStream != None:
      customizationList = Customization.fromXML(inputStream)
      if customizationList != None:
        filteredCustomizationList = ArrayList()
        setRef = HashSet(createdRefList)
        print 'Filter to remove None customizations'
        print "-----"
        # Apply a filter to all the customizations to narrow the target to the created resources
        print 'Number of customizations in list: ', customizationList.size()
        for customization in customizationList:
          print "Add customization to list: "
          if customization != None:
            print 'Customization: ', customization, " - ", customization.getDescription()
            newCustomization = customization.clone(setRef)
            filteredCustomizationList.add(newCustomization)
          else:
            print "Customization is None!"
          print "-----"
        print 'Number of resulting customizations in list: ', filteredCustomizationList.size()
        ALSBConfigurationMBean.customize(filteredCustomizationList)
      else:
        print 'CustomizationList is null!'
    else:
      print 'Input Stream for customization file is null!'
  else:
    print 'No customization File provided, skip customization.'

The parameter ALSBConfigurationMBean can be fetched with:
...
sessionName = createSessionName()
print 'Created session', sessionName
SessionMBean = getSessionManagementMBean(sessionName)
print 'SessionMBean started session'
ALSBConfigurationMBean = findService(String("ALSBConfiguration.").concat(sessionName), "com.bea.wli.sb.management.configuration.ALSBConfigurationMBean")
...
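The snippet above relies on two helper functions, createSessionName and getSessionManagementMBean, that are not shown in this post. A minimal sketch of how such helpers are commonly written in OSB WLST scripts, assuming the script is already connected and in the domain runtime tree (the session-name prefix is an arbitrary choice):
from java.lang import System
from com.bea.wli.sb.management.configuration import SessionManagementMBean

# Generate a reasonably unique name for the Service Bus change session.
def createSessionName():
  return 'WlstScriptSession_' + str(System.currentTimeMillis())

# Create the session and return the SessionManagementMBean that manages it.
def getSessionManagementMBean(sessionName):
  SessionMBean = findService(SessionManagementMBean.NAME, SessionManagementMBean.TYPE)
  SessionMBean.createSession(sessionName)
  return SessionMBean

The session name only needs to be unique enough to avoid clashing with other concurrent change sessions on the same Service Bus domain.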

The other parameter is the createdRefList, which is built up from the default ImportPlan during the import of the config jar:
...
# Assumes earlier imports, for example: from java.util import HashMap, ArrayList
# plus from com.bea.wli.config import Refs and from com.bea.wli.sb.management.importexport import ALSBImportOperation
print 'OSB project', project, 'will get updated'
osbJarInfo = ALSBConfigurationMBean.getImportJarInfo()
osbImportPlan = osbJarInfo.getDefaultImportPlan()
osbImportPlan.setPassphrase(passphrase)
operationMap = HashMap()
operationMap = osbImportPlan.getOperations()
print
print 'Default importPlan'
printOpMap(operationMap)
set = operationMap.entrySet()

osbImportPlan.setPreserveExistingEnvValues(true)

# boolean
abort = false
# list of created artifact references
createdRefList = ArrayList()
for entry in set:
  ref = entry.getKey()
  op = entry.getValue()
  # set different logic based on the resource type
  # note: getTypeId needs parentheses, otherwise the comparison below silently never matches
  type = ref.getTypeId()
  if type == Refs.SERVICE_ACCOUNT_TYPE or type == Refs.SERVICE_PROVIDER_TYPE:
    if op.getOperation() == ALSBImportOperation.Operation.Create:
      print 'Unable to import a service account or a service provider on a target system', ref
      abort = true
  else:
    # keep the list of created resources
    print 'ref: ', ref
    createdRefList.add(ref)
if abort == true:
  print 'This jar must be imported manually to resolve the service account and service provider dependencies'
  SessionMBean.discardSession(sessionName)
  raise
print
print 'Modified importPlan'
printOpMap(operationMap)
importResult = ALSBConfigurationMBean.importUploaded(osbImportPlan)
printDiagMap(importResult.getImportDiagnostics())
if importResult.getFailed().isEmpty() == false:
  print 'One or more resources could not be imported properly'
  raise
...

The point is to build up a set of references to the created artefacts, so that the customizations are narrowed down and executed only on the artefacts that were actually imported.
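The import snippet also calls two small helpers, printOpMap and printDiagMap, that are not shown in this post. A minimal sketch of what they might look like, assuming both arguments are java.util.Map instances (as they are in this script) and that plain string output is good enough:
# Print each resource reference and its planned import operation.
def printOpMap(operationMap):
  for entry in operationMap.entrySet():
    print 'Ref:', entry.getKey(), '- operation:', entry.getValue().getOperation()

# Print the import diagnostics returned by importUploaded().
def printDiagMap(diagMap):
  for entry in diagMap.entrySet():
    print 'Ref:', entry.getKey(), '- diagnostics:', entry.getValue()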

Now, back to the executeCustomization function. It first creates an InputStream on the customization file:
inputStream = FileInputStream(customizationFile)

on which it builds a list of customizations using the .fromXML method of the Customization object:
        customizationList = Customization.fromXML(inputStream)

These customizations are read from the customization file. If you open it, you will find several customization elements:
 <cus:customization xsi:type="cus:EnvValueActionsCustomizationType">
<cus:description/>
...
<cus:customization xsi:type="cus:FindAndReplaceCustomizationType">
<cus:description/>
...
<cus:customization xsi:type="cus:ReferenceCustomizationType">
<cus:description/>


These are all mapped to subclasses of Customization. And now the reason I am writing this blog post: I ran into a problem with my import tooling. The EnvValueActionsCustomizationType elements are where the endpoint replacements for the target environments are done, and those weren't executed. In fact, these customizations were present in the customizationList, but as None/Null objects. Executing the complete list with ALSBConfigurationMBean.customize(filteredCustomizationList) would therefore run into an exception referring to a null object in the customization list; that's why they're filtered out. But why weren't these elements interpreted by the .fromXML() method?
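A quick way to see which entries fromXML actually resolved is to print the runtime class of each parsed customization; entries whose xsi:type cannot be mapped on the current classpath come back as None. A minimal diagnostic sketch, reusing the customizationList from the function above:
# Diagnostic: show which customization types were resolved and which came back as None.
idx = 0
for customization in customizationList:
  if customization is None:
    print 'Customization', idx, 'could not be resolved (None) - check which configfwk jar is on the classpath'
  else:
    print 'Customization', idx, ':', customization.getClass().getSimpleName(), '-', customization.getDescription()
  idx = idx + 1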

Strangely enough, in the Java API docs of 12.2.1 the class EnvValueActionsCustomization does not appear, but EnvValueCustomization does. Searching My Oracle Support, however, turns up Note 1679528.2: 'A new customization type EnvValueActionsCustomizationType is available in 12c which is used when creating a configuration plan file.' And here in the Java API doc (click on com.bea.wli.config.customization) it is stated that EnvValueCustomization is deprecated and EnvValueActionsCustomization should be used instead.
Apparently the docs have not been updated completely.
It also seemed that I was using the wrong jar file: the customization file was created using the console, and executing it through the console did perform the endpoint replacements. So I figured I must be picking up a wrong version of the jar file.
So I searched my BPM QuickStart installation (12.2.1.2) for the class EnvValueCustomization.
Jar files containing EnvValueCustomization:
  • C:\Oracle\JDeveloper\12210_BPMQS\osb\lib\modules\oracle.servicebus.configfwk.jar/com\bea\wli\config\customization\EnvValueCustomization.class
  • C:\Oracle\JDeveloper\12210_BPMQS\oep\spark\lib\spark-osa.jar/com\bea\wli\config\customization\EnvValueCustomization.class
  • C:\Oracle\JDeveloper\12210_BPMQS\oep\common\modules\com.bea.common.configfwk_1.3.0.0.jar/com\bea\wli\config\customization\EnvValueCustomization.class
And then I did the same search for EnvValueActionsCustomization (a small script to automate this kind of jar scan is sketched after the list below).
Jar files containing EnvValueActionsCustomization:
  • C:\Oracle\JDeveloper\12210_BPMQS\osb\lib\modules\oracle.servicebus.configfwk.jar/com\bea\wli\config\customization\EnvValueActionsCustomization.class
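Searching installation directories for a class by hand is tedious, so the scan above can be scripted. A minimal, hypothetical sketch in plain Python (run outside WLST; the root directory and class name are just the values from this post and can be changed):
import os
import zipfile

# Walk a directory tree and report every jar that contains the given class.
def find_class_in_jars(root_dir, class_name):
    entry_suffix = class_name.replace('.', '/') + '.class'
    for dirpath, dirnames, filenames in os.walk(root_dir):
        for filename in filenames:
            if not filename.endswith('.jar'):
                continue
            jar_path = os.path.join(dirpath, filename)
            try:
                jar = zipfile.ZipFile(jar_path)
            except zipfile.BadZipfile:
                continue  # skip corrupt or non-zip files
            try:
                for entry in jar.namelist():
                    if entry.endswith(entry_suffix):
                        print jar_path, 'contains', entry
            finally:
                jar.close()

find_class_in_jars(r'C:\Oracle\JDeveloper\12210_BPMQS',
                   'com.bea.wli.config.customization.EnvValueCustomization')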
Solution
It turns out that in my ANT script I used:
<path id="library.osb">
<fileset dir="${fmw.home}/oep/common/modules">
<include name="com.bea.common.configfwk_1.3.0.0.jar"/>
</fileset>
<fileset dir="${weblogic.home}/server/lib">
<include name="weblogic.jar"/>
<include name="wls-api.jar"/>
</fileset>
<fileset dir="${osb.home}/lib">
<include name="alsb.jar"/>
</fileset>
</path>

Where I should have used:
<path id="library.osb">
<fileset dir="${fmw.home}/osb/lib/modules">
<include name="oracle.servicebus.configfwk.jar"/>
</fileset>
<fileset dir="${weblogic.home}/server/lib">
<include name="weblogic.jar"/>
<include name="wls-api.jar"/>
</fileset>
<fileset dir="${osb.home}/lib">
<include name="alsb.jar"/>
</fileset>
</path>
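With the corrected classpath in place, a quick runtime check can confirm which jar the Customization class is actually being loaded from. A minimal sketch that could be dropped into the WLST script, assuming the class is on the script's classpath (it simply uses standard Java reflection):
from java.lang import Class

# Show which jar the Customization class comes from; it should point at oracle.servicebus.configfwk.jar.
customizationClass = Class.forName('com.bea.wli.config.customization.Customization')
codeSource = customizationClass.getProtectionDomain().getCodeSource()
if codeSource != None:
  print 'Customization loaded from:', codeSource.getLocation()
else:
  print 'Customization was loaded without a code source (bootstrap classloader?)'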
Conclusion
It took me quite some time to debug this, but I learned how the customization mechanism works. I found quite a few examples that use com.bea.common.configfwk_1.X.0.0.jar, and apparently during my revamping I updated this classpath (actually I had 1.7, and found only 1.3 in my environment). Somehow, though, Oracle found it sensible to replace it with oracle.servicebus.configfwk.jar while keeping the old jar files around.
So use the right Jar for the job!

Early Bird Extension – UK February Dates: “Oracle Indexing Internals and Best Practices” Seminar

Richard Foote - Wed, 2017-12-20 00:49
As a Christmas present to those in the UK looking at attending my “Oracle Indexing Internals and Best Practices” seminar in February next year, the Early Bird rates are now available until 19th January 2018. Take this opportunity to attend this highly acclaimed seminar that is packed full of information designed to significantly improve the […]
Categories: DBA Blogs

How to create Teradata Server Dashboard

Nilesh Jethwa - Tue, 2017-12-19 15:10

Teradata delivers better business outcomes through technology-enabled solutions in the areas that matter most: from operational excellence and asset optimization, to customer experience and product innovation, to finance transformation and risk mitigation. It works with leading businesses in over 75 countries worldwide including many of the top performers and best-known brands in telecom, transportation, consumer packaged goods, financial services and manufacturing.

Are you using Teradata Server for your data marts or data warehouse? If so, build a free Teradata Server dashboard with the InfoCaptor dashboard software.

Read more at http://www.infocaptor.com/ice-database-connect-dashboard-to-teradata-sql

Grouping punch clock data into 24 hour periods, starting with the time_in punches

Tom Kyte - Tue, 2017-12-19 12:06
We have punch data coming in from employees tasking into work orders. As these records are encountered, I need to know based on employee group if their calc_hrs (field in table) for a 24 hour period is > 13 hours. I was able to get it started: <cod...
Categories: DBA Blogs

Update Table X with data from Table Y with Parallel DBMS EXECUTE

Tom Kyte - Tue, 2017-12-19 12:06
Hi TOM, I am trying to use the DBMS_PARALLEL_EXECUTE package to update one column of around 40 million rows from a 50 million rows table(X) with the column data from another table(Y) which has a many to one relationship with the table to be updated. ...
Categories: DBA Blogs

unusual behavior on mview refresh

Tom Kyte - Tue, 2017-12-19 12:06
Hi Connor, I asked this question on OTN: We have some mviews with joins, and we found out sometimes an update on the base table will result in insert+delete on the mview, not delete+insert. Because insert comes first, we have to make the primary ...
Categories: DBA Blogs

JSON_QUERY operator queries

Tom Kyte - Tue, 2017-12-19 12:06
Hi Guys DB version: 12.2.0.1.0 I have the following "Collection" table created in DB which contains various relationship information for a user in the system. <code> DESC "USERRELATIONSHIPS"; Name Null Type ------...
Categories: DBA Blogs

how to infer proper precision and scale from NUMBER data type columns without precision and scale

Tom Kyte - Tue, 2017-12-19 12:06
one of our Oracle data sources has hundreds of tables with all numeric columns defined using NUMBER data type without precision and scale. But in fact, a column can store pure integer values or decimal values - there is no way to tell that by looking...
Categories: DBA Blogs

Nonprofits Making a Bigger, Better Difference with NetSuite

Oracle Press Releases - Tue, 2017-12-19 08:00
Press Release
Nonprofits Making a Bigger, Better Difference with NetSuite
FH Canada, the Caring and Sharing Exchange, and the Legal Aid Society of Rochester, NY Streamline Processes to Do More Good with NetSuite Social Impact

SAN MATEO, Calif.—Dec 19, 2017

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced three nonprofit customers that are leveraging the NetSuite platform to create efficiencies, raise additional funds and further their mission. Charitable organizations and social enterprises are abandoning entry-level desktop applications, disparate systems and manual work for more effective and streamlined operations in the cloud. Nonprofits such as FH Canada (Canadian Food for the Hungry), the Caring and Sharing Exchange, and the Legal Aid Society of Rochester are effectively tackling key challenges in program management, regulatory reporting, fundraising, and tracking donors and beneficiaries with NetSuite. These organizations are realizing sizable cost savings and operational efficiencies that they can channel into social impact programs for education, healthcare, clean water, sustainable agriculture, legal aid and other support for populations in need, whether in a nonprofit’s home city or halfway around the world.

Traditionally, nonprofits have been slower than other industries in adopting technology to improve their performance. Limitations in funding, working capital and IT resources create a barrier to entry that can be difficult to overcome. As a result, many nonprofits struggle with standalone Excel spreadsheets, Access databases and server-based applications to manage financials and constituents, driving up administrative costs. That also hurts a nonprofit’s accountability and transparency ratings on sites like Charity Navigator, which individual, corporate and philanthropic donors use when evaluating prospective beneficiaries. Ultimately, outdated technology and labor-intensive manual work can undermine a nonprofit’s fundraising and ability to fully achieve its mission.

Founded in 2006, the NetSuite Social Impact group is empowering nonprofits to use NetSuite to further their mission, regardless of their ability to pay. More than 1,000 nonprofits and social enterprises around the world are supported by NetSuite Social Impact, which makes available free and discounted software licensing to qualified organizations. The program also includes Suite Pro Bono, under which NetSuite employees provide their expertise to help nonprofits with training and customizations to make the most of the platform. To learn more about NetSuite Social Impact, please visit www.netsuite.com/socialimpact.

Nonprofits using NetSuite for such functions as fund accounting, grants management, FASB reporting, fundraising, constituent management, and program management are gaining greater efficiencies and visibility into operations. FH Canada, the Caring and Sharing Exchange and the Legal Aid Society of Rochester are among the many nonprofits to improve efficiency and heighten their social impact with NetSuite’s unified cloud business management platform.

Food for the Hungry Canada (FH Canada) (www.fhcanada.org), part of the global Food for the Hungry (FH) organization, works with communities in Africa, Asia and Latin America who end poverty and become self-sustaining through leadership training and programs to improve healthcare, education, clean water, agriculture and more. Since 1994, the Christian nonprofit has “graduated” 63 communities after 10 years of collaboration to address the root causes of poverty. Today, Food for the Hungry staff, the vast majority of whom are hired from local countries, work with communities in Burundi, Ethiopia, Rwanda, Uganda, Cambodia, Bangladesh, Haiti and Guatemala. FH Canada relies heavily on individual donors, who may sponsor a child or purchase a gift such as school supplies, gardening tools or livestock for recipients through FH Canada’s online store.

Live in 2005, NetSuite has given FH Canada a unified system for all key processes, including its ecommerce web store. Fundraising income has grown by over 20 percent annually in recent years as the nonprofit utilizes a central donor database for personalized communications. Reallocation of savings on IT personnel has allowed FH Canada to funnel resources into fulfilling its mission, while real-time data has markedly improved insights and collaboration across the 20-person organization. “Having everything integrated in a single system lets us re-purpose resources to our mission in supporting programs and fundraising,” said Mark Petzold, Director of Communications and Technology, FH Canada. “NetSuite makes it so much easier to maintain and build donor relationships with one snapshot across all interactions.”

The Caring and Sharing Exchange (www.caringandsharing.ca) has been making a big difference for needy people in Ottawa, the capital of Canada, since 1915. Its flagship Christmas Exchange program helps about 20,000 people each holiday season with food hampers or vouchers. Since 2011, a program called Sharing in Student Success has equipped thousands of students with backpacks full of school supplies, including more than 2,500 students in 2016. Importantly, the nonprofit serves as a data repository that identifies duplicate recipients across 350 partner agencies, saving the community $600,000 a year and ensuring equitable distribution. With NetSuite live since 2013, Caring and Sharing is more effective than ever as need continues to grow. The nonprofit is saving $50,000 a year in bookkeeping and administrative staff, freeing funds for program work and fundraising. It estimates it has eliminated 1,000 hours a year of manual data entry. And fundraising income is up 20 percent a year, with NetSuite supplying a unified donor database that supports more personalized outreach, while duplication checks are faster and easier for both Caring and Sharing and partner agencies.

“To have everything in one place helps us run more efficiently and puts more resources where they are needed,” said Cindy Smith, Caring and Sharing Executive Director. “Because we’re able to spend more time on programs and fundraising, we’re able to make more of a difference where it really counts.”

The Legal Aid Society of Rochester, NY (LASROC) (www.lasroc.org), incorporated in 1921, provides pro bono and low-cost legal services in such areas as domestic violence, child support, housing and immigration for about 10,000 individuals a year in a nine-county region of Western New York. Like dozens of other Legal Aid chapters across the U.S., LASROC continuously juggles a rising caseload against limited resources and funding uncertainty. A staff of more than 80 committed legal and administrative professionals, as well as volunteers, are essential to LASROC’s success. So is identifying time and cost savings that enable LASROC to focus resources on social impact. NetSuite plays a key role, enabling LASROC to avoid about $50,000 a year in licensing, hardware and maintenance costs—nearly the salary of a full-time starting attorney who can handle up to 400 cases a year. Those savings are in addition to better efficiency and visibility across the organization. LASROC managers have real-time insights into costs vs. revenue, while accounting is far faster. For instance, payroll journal entries that previously took hours are complete in five minutes with NetSuite, giving finance personnel flexibility to contribute elsewhere across the organization.

“Because NetSuite is so efficient, we spend a lot less time on administration and put those savings directly into client services,” said Kathia Casion, Civil Division Director at LASROC. “If we don’t have to spend as much time on record-keeping, we’re able to help more clients.”

Contact Info
Christine Allen
Oracle NetSuite
603-743-4534
PR@netsuite.com
About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world's first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials/Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Elliptic Curve Cryptography Certificates Now Certified with EBS Release 12.2

Steven Chan - Tue, 2017-12-19 07:54

We are pleased to announce that Elliptic Curve Cryptography (ECC) certificates are now certified for use with Oracle E-Business Suite Release 12.2.

Key Points

  • Elliptic Curve Cryptography supports both forward secrecy and stronger cipher suites (a quick way to check the negotiated cipher suite is sketched after this list).
  • Apple's App Transport Security mandates forward secrecy, and we expect this to be a requirement for mobile clients.
  • For additional information, and instructions about deploying ECC certificates with Oracle E-Business Suite Release 12.2, refer to Enabling TLS in Oracle E-Business Suite Release 12.2 (Note 1367293.1).
  • We are currently working on certification of ECC certificates with EBS 12.1.3. Subscribe to this blog for the latest news about this and other EBS technology certification developments.
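Once ECC certificates are deployed per the note above, a quick client-side check can confirm that an ECDHE/ECDSA cipher suite is actually being negotiated. A minimal sketch in Python; the host name is a placeholder for your own EBS web entry point, and any TLS-capable client would do just as well:
import socket
import ssl

# Placeholder host for illustration; replace with your own EBS web entry point.
HOST = 'ebs.example.com'
PORT = 443

context = ssl.create_default_context()
sock = socket.create_connection((HOST, PORT))
ssock = context.wrap_socket(sock, server_hostname=HOST)
try:
    # An ECDHE key exchange indicates forward secrecy; an ECDSA suite indicates an ECC server certificate.
    print 'Negotiated cipher suite:', ssock.cipher()
    print 'Certificate subject    :', ssock.getpeercert()['subject']
finally:
    ssock.close()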

Categories: APPS Blogs

Partner Webcast - Storage Integration with Oracle Database (ZFS)

          ...

We share our skills to maximize your revenue!
Categories: DBA Blogs
