Feed aggregator

Migrating (and Upgrading!) Your EM12c Repository Database

Don Seiler - Mon, 2014-04-07 08:00
This week I migrated our EM12c repository database to a new server as part of its promotion to production status. Just to make it a little more exciting, the migration also involved an in-flight upgrade from 11.2.0.3 to 11.2.0.4. Much of this post is directly inspired by Martin Bach's post on the same subject. I ran into a few other snags that weren't mentioned there, so I thought it would be worthwhile to document the experience here for your benefit.

I'm assuming you have all the software installed (and patched to the latest PSU, right?). Alright then, let's begin!

Stop OMS
We want to make sure there are no more changes coming, and nothing needs to access the repository database, so be sure to stop all OMS instances:

$ emctl stop oms -all

Backup PFILE
We need to create a pfile from the current repo's spfile and copy it into place on the new host:

SQL> create pfile='/mnt/emrepo_backup/initemrepo.ora' from spfile;

I use /mnt/emrepo_backup here because that is the directory that I'll be backing the database up to and copying to the new host afterward. If you create your pfile somewhere else, be sure to copy it to the new host under $ORACLE_HOME/dbs/.
Backup Repo Database
Next we back up the repo database. Here's a snippet from the ksh script I used:


#!/bin/ksh

BACKUPDIR=/mnt/emrepo_backup
LOGFILE=backup_emrepo.log

mkdir -p $BACKUPDIR

rman log=$LOGFILE <<EOF
connect target /
set echo on


run {

        allocate channel c1 device type disk format '$BACKUPDIR/%U';
        allocate channel c2 device type disk format '$BACKUPDIR/%U';
        allocate channel c3 device type disk format '$BACKUPDIR/%U';
        allocate channel c4 device type disk format '$BACKUPDIR/%U';

        backup as compressed backupset database
                include current controlfile
                plus archivelog;
}
EOF

When the backup is finished, review the RMAN log and make note of which backup piece contains the controlfile backup. We'll need to refer to it by name as part of the restore process.
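If you don't want to hunt through the log, you can also ask RMAN directly while still connected to the source database:

RMAN> list backup of controlfile;

The backup piece handle (file name) appears in the output.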

If your backup directory is an NFS mount, then you can simply unmount it from here and mount it to the new server. Otherwise, be sure to copy the files there after the backup is complete, for example:

$ scp -r /mnt/emrepo_backup newhost:/path/to/emrepo_backup

After this, it should be safe to shut down the old repository database.

$ sqlplus / as sysdba
SQL> shutdown immediate

If you use Oracle Restart:

$ srvctl stop database -d emrepo
$ srvctl disable database -d emrepo

Prepare New Host for Repo DB
Now we need to set things up on the new host for the emrepo DB.

Create oratab Entry
First let's create an entry in /etc/oratab for this DB under the new 11.2.0.4 home. For example:

emrepo:/oracle/app/product/11.2.0.4:N

Edit PFILE and Create SPFILE
Then let's copy that parameter file into place.

$ . oraenv
ORACLE_SID = [oracle] ? emrepo
The Oracle base has been set to /oracle/app
$ cd $ORACLE_HOME/dbs/
$ cp /mnt/emrepo_backup/initemrepo.ora .

Now edit that file and make sure you update the parameters that require updating. In my case, I'm using Oracle Managed Files (OMF) so I set db_create_file_dest and db_create_online_log_dest_1. I also set db_recovery_file_dest for the FRA. I then set the control_files parameter to specify where I want the control file(s) restored to from the backup when I get to that point.
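For illustration, the relevant entries in my edited pfile looked roughly like this (all paths and sizes here are hypothetical; substitute your own):

*.control_files='/oracle/app/oradata/data/EMREPO/control01.ctl'
*.db_create_file_dest='/oracle/app/oradata/data'
*.db_create_online_log_dest_1='/oracle/app/oradata/logs'
*.db_recovery_file_dest='/oracle/app/oradata/fra'
*.db_recovery_file_dest_size=50G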

Now, Martin Bach noted in his blog post that he did not have to specify a db_file_name_convert or log_file_name_convert. I was having some difficulty during the restore phase, and added these parameters out of pure speculation. They didn't help the problem, but I left them in for the duration of my process. I only mention this as an FYI if you end up comparing your settings to mine.

Once you have all your parameters set as desired, create the SPFILE:

$ sqlplus / as sysdba
SQL> create spfile from pfile;

Now, let's restore the database.
Restore Repo DB on New Host
The restore was done largely as part of a ksh script, which I'll reference snippets of here. Let's start by defining some variables:

BACKUPDIR=/mnt/emrepo_backup
DESTDIR=/oracle/app/oradata/data/EMREPO
LOGFILE=restore_emrepo.log   # log file name is illustrative


Restore Controlfile and Mount Database
From the script, we call RMAN to start the instance in nomount mode, restore the controlfile from the specified backup piece, and mount the database:

rman log=$LOGFILE <<EOF
connect target /
set echo on
startup force nomount;
restore controlfile from '$BACKUPDIR/1abcd123_1_1';
alter database mount;

catalog start with '$BACKUPDIR' noprompt;
EOF

We end by cataloging the backup files, as you can see.

Generate SET NEWNAME Script
Here I dip into sqlplus to generate a script for RMAN that calls SET NEWNAME for each of the datafiles. Without this, RMAN would try to restore the datafiles to their old paths from the original host. Here I set them to the path that OMF will use:


sqlplus -s /nolog <<EOF
connect / as sysdba
set head off pages 0 feed off echo off verify off
set lines 200
spool rename_datafiles.rman
select 'set newname for datafile ' || FILE# || ' to ''' || '$DESTDIR/datafile/' || substr(name,instr(name,'/',-1)+1) || ''';' from v\$datafile;
select 'set newname for tempfile ' || FILE# || ' to ''' || '$DESTDIR/tempfile/' || substr(name,instr(name,'/',-1)+1) || ''';' from v\$tempfile;
spool off
EOF
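The spooled rename_datafiles.rman file should contain one SET NEWNAME command per file, along these (hypothetical) lines:

set newname for datafile 1 to '/oracle/app/oradata/data/EMREPO/datafile/system01.dbf';
set newname for datafile 2 to '/oracle/app/oradata/data/EMREPO/datafile/sysaux01.dbf';
set newname for tempfile 1 to '/oracle/app/oradata/data/EMREPO/tempfile/temp01.dbf';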

Restore & Recover Database
Now we're ready to restore the database and perform recovery. Again, we call RMAN and run this:

run {
  allocate channel c1 device type disk;
  allocate channel c2 device type disk;
  allocate channel c3 device type disk;
  allocate channel c4 device type disk;
  @rename_datafiles.rman
  restore database;
  switch datafile all;
  switch tempfile all;
  recover database;
}


At this point we're done with the restore and recovery. Normally I would OPEN RESETLOGS, but remember that we're restoring this to an 11.2.0.4 home, so we still need to UPGRADE the database!
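Before opening, it doesn't hurt to confirm that the datafiles now point at their new locations:

$ sqlplus / as sysdba
SQL> select name from v$datafile;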


Open and Upgrade Database
First we still call OPEN RESETLOGS, but with the UPGRADE option. This replaces the "STARTUP UPGRADE" command you would find in the manual upgrade instructions.

$ sqlplus / as sysdba
SQL> alter database open resetlogs upgrade;

Now we follow the rest of the manual upgrade instructions. I'll just post the commands here, but you should definitely review the documentation:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
SQL> spool upgrade.log
SQL> @catupgrd.sql

-- Start database again
SQL> startup;

-- Check status of components, some will be fixed by utlrp.sql
SQL> @utlu112s.sql

-- Rebuild everything
SQL> @catuppst.sql
SQL> @utlrp.sql

-- Confirm everything is OK now
SQL> SELECT count(*) FROM dba_objects WHERE status = 'INVALID';
SQL> SELECT DISTINCT object_name FROM dba_objects WHERE status = 'INVALID';
SQL> @utlu112s.sql


The utlu112s.sql script should now report all components as VALID. If not, you'll want to refer to the upgrade documentation for troubleshooting.

At this point the database is upgraded and open. Make sure you have a listener running and that the new database is registered. The only thing left is to tell your OMS servers to look for the repository database in its new location.
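For example, assuming a default listener configuration:

$ lsnrctl status

If the new service isn't listed yet, you can force the instance to register:

SQL> alter system register;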

Update OMS Repository Settings
First we need to start just the administration server:

$ emctl start oms -admin_only

This is necessary if you used the "-all" option when stopping OMS earlier. If you did not use "-all" then the admin server should still be running.

Now, update the store_repos_details setting in the OMS configuration:

$ emctl config oms -store_repos_details -repos_port 1521 \
  -repos_sid emrepo -repos_host newhost.mydomain.com \
  -repos_user sysman -repos_pwd xxx

Repeat this step for all your OMS servers (emctl should remind you to do so when changing the config). Then on each, completely shutdown and restart OMS:

$ emctl stop oms -all
$ emctl start oms

And that should be it! Don't forget to drop/delete the database from the original server when you're comfortable doing so.
Categories: DBA Blogs

OAMSSA-06252 after patching

Frank van Bortel - Sun, 2014-04-06 05:01
Once upon a time... you had a working environment with WebLogic, Access and Identity Management (or Discoverer, or ...) and all of a sudden things start failing. Symptoms: you notice the dreaded OAMSSA-06252 (Policy Store not Available) while starting up, and start fearing the worst. Also, it seems as if you cannot log in to the OAM management console anymore; your credentials are accepted, but you…

Bangalore Coonoor on Royal Enfield

Vattekkat Babu - Sat, 2014-04-05 12:28

Route is indicated by green icons on the map. The return was on the next day, indicated by red icons. Each marker was placed where I had stopped for at least a 5-minute break. Click/hover on a marker to get info and the odometer reading.

Open Google Route Map

Onward

Outside Ramanagaram.
Outside Ramanagaram at 6:30am. From here till Mysore, 2 hours non-stop ride!

  • Early morning traffic was peaceful. Nobody on the roads to Mysore really. Very different when you are driving on weekends though.
  • Route was Sony World - Hosur Road - NICE Road - Ramanagaram - Mysore City - Nanjangud - Gundlupet - Bandipur - Theppakkadu - Masinagudi - Ooty
  • Overall about 330km. Took about 8 hours with about 1.25 hours break.
  • I was apprehensive of climbing the Kallatti Ghat on a relatively new bike. Just pushed it - it climbed with no drama.
  • Mostly held a steady speed of 60kmph. Once in a while, went up to 70kmph for less than 1km, just to try the bike.
  • The waterhole near Bandipur visitor's center has all the trees burned down. Quite bad. This is the place where I had seen elephants, bisons and even a tiger across the road once before.

Fusion Accounting Hub is the talk of Las Vegas

David Haimes - Fri, 2014-04-04 14:01

As I was planning my agenda for the Collaborate 14 (#C14LV) conference next week, I noticed a lot of sessions on Fusion Accounting Hub.  In my earlier post defining Fusion Accounting Hub (FAH) I introduced this idea of a reporting platform; it is a compelling reporting and consolidation solution and as such is gaining a lot of attention, particularly when coexisting with E-Business Suite, PeopleSoft or JD Edwards ERP systems.

I will be presenting Monday at Collaborate on Fusion Accounting Hub coexisting with E-Business Suite R12 and how it was used to implement a global chart of accounts.  The multinational SIG has a series on reporting, and there are several other sessions from various customers and partners on Fusion Accounting Hub and coexistence.  I'm listing them below so you can plan accordingly. I will be at most of these and look forward to seeing you there too.

Monday April 7th Reaping the Benefits of Coexistence Strategy With Fusion Accounting Hub: A Case Study

1:00 PM-2:00 PM

Session ID: 14480
Room: Level 3, Murano – 3201
Abstract: Oracle’s Fusion Application provides organizations with the option of next generation of enterprise applications which can optimize their business user experience and productivity. With Fusion Applications, organizations now also have a choice to coexist with their existing IT investments. This session will focus on the Fusion Application Coexistence strategy with the use of Fusion Accounting Hub and show case a few case studies of how fusion accounting hub was used to integrate EBS, PeopleSoft and JD Edwards.

  • Sujoy Hajra, Deloitte Consulting USA
  • Ganesh Bhoominathan, Deloitte Consulting USA
  • Rattan Singh, Deloitte Consulting USA
E-Business Suite Coexistence With Fusion Accounting Hub and Implementing a Global Chart of Accounts

3:20 PM-4:20 PM

Session ID: 14977
Room: Level 1, Marco Polo – 805
Abstract: Hear how Oracle implemented Fusion Accounting Hub coexisting with its R12 EBS Financials and transitioned to a new single global chart of accounts in less than a year. You will learn the principles and process used to determine and reach agreement on the new chart of accounts, as well as how the final result looks. In addition, you will learn how over 100 ledgers are consolidated in Fusion Accounting Hub and the time to close reduced. We will discuss the best practice recommendations for Fusion Accounting Hub.

Tuesday April 8th Fusion Financials Cloud & Fusion Accounting Hub

1:45 PM-2:45 PM

Session ID: 105810
Room: Level 3, Murano 3202
Abstract: Get up and running on Fusion Financials at the speed of Cloud using the Fusion Financials implementation tools. Upload tools allow you to upload repetitive-type entries, such as the chart of accounts, directly from Excel to Fusion Financials Cloud. The Functional Setup Manager guides you through the essential steps to set up and configure your applications using the most common features and best practice considerations. Also discover how Oracle Fusion Accounting Hub addresses common problems such as integrating accounting from both Oracle and non-Oracle applications. All this will allow you to reap the benefits of Fusion on the Cloud or on premise faster than ever before.

Making the Choice: E-Business Suite 12.1, 12.2 and/or Fusion

3:00 PM-4:00 PM

Session ID: 14953
Room: Sands, Level 1 – 307
Abstract: With the availability of E-Business Suite 12.1, 12.2, and Fusion, Oracle customers who are still on 11i have a whole host of choices to consider when migrating from their current 11i system. This presentation will outline both the benefits and risks in upgrading/implementing all three ERP systems and how you can develop a business case and migration strategy based on this knowledge. Fusion co-existence will also be discussed and considered within this roadmap.

Co-Existence Versus Replacement, the Future of Global Implementations, a Multi-National User Panel

4:15 PM-5:15 PM

Session ID: 14436
Room: Sands, Level 1 – 307
Abstract: We need to do better! Global ERP implementations rarely have provided the expected business benefits and never seem completed. We need more flexibility, nimbleness, and adaptability to change: prepare for co-existence of systems, not just replacement. At the same time we need to promote multi-country compliance solutions and parallel multi-country integration. Rigorous focus on data validation, global process standards, and minimizing applications diversity are other cornerstones. We will have a spirited discussion.

 

Thursday April 10th Using Fusion Accounting Hub to Consolidate Third Party Accounting Systems into PeopleSoft Financials

1:00 PM-2:00 PM

Session ID: 108720
Room: Level 4, Lando 4303
Abstract: If you have many third party systems that require specific accounting rules and have built individual integrations into Oracle’s PeopleSoft Financials, then this session is for you. Let us show you how Oracle’s Fusion Accounting Hub can streamline the maintenance and creation of accounting from the many feeder systems. This session will also deep dive into the integration built between the accounting creation of the hub into PeopleSoft General Ledger. We will show you how you can use the power of PeopleSoft and the power of the Fusion Accounting Hub to reduce costs, maintain higher levels of controls and auditability.

Friday April 11th Fusion Accounting Hub: How to Maximize Your Existing Investment in Oracle Apps Unlimited Products?

09:45 AM-10:45 AM

Session ID: 14399
Room: Level 3, Murano – 3201
Abstract: Fusion Accounting Hub (FAH) provides a complete set of accounting tools. The product supports all financial management and analytical reporting needs. There are various approaches to adopting FAH in co-existence with various systems. Oracle has provided pre-built integrators using GoldenGate from EBS and via ODI from PeopleSoft to integrate to FAH. Non-Oracle sources can also be integrated. FAH offers the product's full reporting, analytic, drill-down and integration capabilities to Apps Unlimited (AU) customers.

Fusion Financials: How to leverage Fusion Financials ( New Implementation vs Coexistence)

12:15 PM-1:15 PM

Session ID: 15147
Room: Level 3, Murano – 3201
Abstract: This presentation will provide an overview of Fusion Financials. We will review the new functionality that exists in Fusion Financials and compare it to E-Business Suite. We will also review the implementation options and compare a new Financial Implementation with a Coexistence model alongside E-Business Suite.

  • Eric Deering, Apps Associates
  • Srinivas Pothireddy, Apps Associates

Categories: APPS Blogs

SQL Developer’s Interface for GIT: Cloning a GitHub Repository

Galo Balda's Blog - Wed, 2014-04-02 23:05

SQL Developer 4 provides an interface that allows us to interact with Git repositories. In this post, I'm going to show how to clone a repository from GitHub (a web-based hosting service for software development projects that uses the Git revision control system).

First you need to sign up for a GitHub account. You can skip this step if you already have one.

Your account will give you access to public repositories that can be cloned, but I suggest you create your own repository so that you can play with SQL Developer and see what the different options do.

Once you have an account, click on the green button that says “New Repository”. It will take you to a screen like this:

[Screenshot: github_create_repo]

Give your repository a name, decide if you want it to be public or private (private repositories require a paid plan), click on the check box and then click on the green button. Now you should be taken to the main repository page.

[Screenshot: github_repo]

Pay attention to the red arrow on the previous image. It points to a text box that contains the HTTPS clone URL that we’re going to use in SQL Developer to connect to GitHub.

Let’s go to SQL Developer and click on Team –> Git –> Clone… to open the “Clone from Git Wizard”. Click on the next button and you should see the screen that lets you enter the repository details:

[Screenshot: remote_repo]

Enter the repository name, the HTTPS clone URL, your GitHub user name and your password. Click on next to connect to the repository and see the remote branches that are available.

[Screenshot: remote_branch]

The master branch gets created by default for every new repository. Take the defaults on this screen and click on next to get to the screen where you specify the destination for your local Git repository.

[Screenshot: destination]

Enter the path for the local repository and click on next. A summary screen is displayed showing the options you chose. Click on finish to complete the setup.

How do we know if it worked? Go to the path of your local repository and it should contain the same structure as in the online repository.

[Screenshot: local_repo]
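If you have the git command-line client installed, you can also verify the clone from a shell (the path is whatever you chose in the wizard):

$ cd /path/to/local/repository
$ git remote -v
$ git log --oneline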

In a later post I'll show how to commit changes to the local repository and how to push them to GitHub.


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control
Categories: DBA Blogs

MaxPermSize Be Gone!

Steve Button - Wed, 2014-04-02 18:48

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
No further commentary required.

Cupcake Wars at NoCOUG Spring Conference

Iggy Fernandez - Tue, 2014-04-01 15:49
FOR IMMEDIATE RELEASE Cupcake Wars at NoCOUG Spring Conference on May 15 at UCSC Extension Silicon Valley SILICON VALLEY (APRIL 1, 2014) – In a bold experiment aimed at increasing attendance at its awesome educational conferences, the Northern California Oracle Users Group (NoCOUG) is considering changing the format of its spring conference to that of […]
Categories: DBA Blogs

User Groups and Speaking About Support and diag tools.

Fuad Arshad - Tue, 2014-04-01 08:58
The Chicago Oracle Users Group (COUG) is finally in its reboot mode. Thanks to Alfredo Abate for taking on the responsibility and bringing the enthusiasm to bring the community back together. Jeremy Schneider has blogged about this here. There is a LinkedIn group now open for business, and I would recommend everyone contribute; let's make this reboot a success.

I am also going to be presenting at the Ohio Users Group on April 17th along with Jeremy Schneider. The details of the event can be found at http://www.ooug.org. If you are in the area, please stop by and say hi. I'll be talking about various support tools that Oracle has and how to use them effectively.



The Art of Exploiting Injection Flaws

Slavik Markovich - Mon, 2014-03-31 16:14
Sid is doing his popular course, The Art of Exploiting Injection Flaws, at this year’s Black Hat. You can find more details here. Definitely highly recommended.

Customized pages with Distributed Credential Collector (DCC)

Frank van Bortel - Mon, 2014-03-31 13:29
One of the worst documented areas in OAM: customizing pages with DCC. One revelation: you must use login.pl when you want logout.pl to work, as login.pl seems to build the "Callback URL" list that logout.pl uses to destroy the session cookies. Update Sept 2014: This blog entry of the A-Team looks promising; part two is on how to customize DCC login pages.

APEX World 2014

Rob van Wijk - Mon, 2014-03-31 04:23
The fifth edition of OGh APEX World took place last Tuesday at Hotel Figi, Zeist in the Netherlands. Again it was a beautiful day full of great APEX sessions. Every year I think we've reached the maximum number of people interested in APEX and we'll never attract more participants. But, after welcoming 300 attendees last year, 347 people showed up this year. Michel van Zoest and Denes Kubicek…

Bash infinite loop script

Vattekkat Babu - Mon, 2014-03-31 00:42

There are times when cron won't do for automated jobs. A classic example is when you need to start a script, enter some data, and then have it keep running like a cron job. Now, in most cases this can be handled by having env variables or config files. However, what if someone needs to enter a secret value? You don't want that stored in the filesystem anywhere. For situations like these, you can get inspiration from the following script.
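A minimal sketch of such a script (the job command and interval are placeholders, not the author's original):

#!/bin/bash
# Prompt once for the secret so it is never stored in the filesystem.
read -rs -p "Enter secret value: " SECRET
echo
# Loop forever, doing the work that cron would otherwise schedule.
while true; do
    run_the_job "$SECRET"   # placeholder for the real job command
    sleep 300               # run every 5 minutes
done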

ORA-00600 [kkzlpllg:5] When Dropping MView Log

Don Seiler - Sun, 2014-03-30 18:43
This week a co-worker and I have been doing some rapid-fire testing to improve fast-refresh performance on an old materialized view, which does some summary aggregations on a 1.9 billion row (and growing) master table. One of the things we tested was using the new-in-11gR2 COMMIT SCN feature. There is a great blog post describing the benefits of this feature by Alberto Dell'Era. To quickly summarize and over-simplify, it provides a much faster way to update rows in the materialized view log that are eligible for refresh and purging. This definitely sounds like something we'd want, so let's roll!

Well we quickly hit a snag when testing our creation script the second time around, when it wouldn't let us drop the materialized view log:
SQL> DROP MATERIALIZED VIEW LOG ON FOO.BAR
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkzlpllg:5], [], [], [], [], [], [], [], [], [], [], []


We found that we also could no longer perform fast refreshes of the materialized view, getting the same ORA-00600 error. Our initial MOS search turned up Doc ID 14158012.8, which indicates Bug 14158012 (Orphan rows in SNAP_LOGDEP$ causes ORA-600 [kkzlpllg:5]). The bug description is:
With this bug, when there are orphan rows in SNAP_LOGDEP$ with RSCN=NULL,
a CREATE MATERIALIZED VIEW or DROP MATERIALIZED VIEW statement will report ORA-600 [kkzlpllg:5].


I verified that we did have such NULL RSCN values in SNAP_LOGDEP$ related to the master table here.

SQL> select d.tableobj#, o.name
  2  from sys.snap_logdep$ d, sys.obj$ o
  3  where d.tableobj# = o.obj#
  4  and o.name='BAR'
  5  and d.rscn is null;

 TABLEOBJ# NAME
---------- ------------------------------
    605668 BAR
    605668 BAR

Unfortunately, that doc also said there was no workaround other than to apply a one-off patch (otherwise fixed in 11.2.0.4 and 12c)! Not exactly the quick fix we were hoping for. 

However, we did some more searching and found Doc 1310296.1: Internal Error ORA-00600 [kkzlpllg:5] When Executing Drop Materialized View Log

Pretty much the same thing, only this gives us a workaround:

Since the information stored in the Materialized View log is bogus in any case, the Materialized View Log is not useable anymore. The only option is to drop this object and re-create it. To make this possible, a dummy non-NULL value needs to be written into the RSCN column for that table.

So we update those rows to set RSCN to a dummy value (9999):

SQL> UPDATE snap_logdep$
  2  SET RSCN = 9999
  3  WHERE tableobj# = 605668
  4  AND RSCN IS NULL;

And we were able to drop the materialized view log and resume testing afterwards.

Hopefully this article saves someone from hitting the same initial roadblock that I did. Especially nice to see the Oracle support docs contradicting themselves!

We also made another interesting find with this particular materialized view that I'll be blogging about later. Definitely an eye-opener, face-palmer and forehead-smacker all in one.

UPDATE - 8 April 2014

Nothing mind blowing, but a little time saver if you know the object name. This doesn't take the owner into account, so be careful out there.

update snap_logdep$
set rscn = 9999
where tableobj# = (
        select distinct d.tableobj#
        from sys.snap_logdep$ d, sys.obj$ o
        where d.tableobj# = o.obj#
        and d.rscn is null
        and o.name='BAR'
)
and rscn is null;

commit;
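If you do want to restrict the update to a particular owner, the subquery can be joined through sys.user$ as well. A sketch (the FOO schema name is illustrative):

update snap_logdep$
set rscn = 9999
where tableobj# = (
        select distinct d.tableobj#
        from sys.snap_logdep$ d, sys.obj$ o, sys.user$ u
        where d.tableobj# = o.obj#
        and o.owner# = u.user#   -- join object to its owning schema
        and u.name = 'FOO'
        and o.name = 'BAR'
        and d.rscn is null
)
and rscn is null;

commit;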

Categories: DBA Blogs

Ubuntu upgrade to 12.04: grub timeout does not work anymore

Dietrich Schroff - Sun, 2014-03-30 14:11
After doing the upgrade and solving some issues with my screen resolution, another grub problem hit me:
The timeout for booting the standard kernel did not work anymore.
Inside /etc/default/grub I had

GRUB_TIMEOUT=10

and after running update-grub, grub still behaved as if

GRUB_TIMEOUT=-1

were set. If you need a good manual, just look here, but that did not help me either.

After some tries, I did the following:

In /boot/grub/grub.cfg I changed from

terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=-1
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=10
  fi
fi

to

terminal_output gfxterm
recordfail=0
if [ "${recordfail}" = 1 ] ; then
  set timeout=-1
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=10
  fi
fi

This works, but I have to redo the change every time update-grub is run...
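Possibly a cleaner fix, untested here and assuming your grub2 packaging supports it (Ubuntu added a dedicated setting for exactly this recordfail case): set the timeout in /etc/default/grub

GRUB_RECORDFAIL_TIMEOUT=10

and run update-grub again; that change should survive future update-grub runs.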

Ubuntu upgrade to 12.04: screen only works with 1024x768

Dietrich Schroff - Sun, 2014-03-30 13:57
After upgrading a laptop-system to 12.04 (precise pangolin) X only starts with a resolution of 1024x768 and not with 1366x768.
I followed many postings and tried many things:
  • add new resolution with xrandr (here)
  • create a xorg.conf file (here)
  • add new drivers (here)
  • ...
But there was only one thing wrong: /etc/default/grub

After changing the following line everything worked:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.modeset=0"

to

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

(I had added i915.modeset=0 because with the older Ubuntu version hibernate did not work and this configuration fixed it. [link])
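Afterwards, the available and active modes can be checked from a terminal in the X session:

$ xrandr

With the fix in place, 1366x768 should be listed as the current mode.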

Running an Airport

Pete Scott - Sun, 2014-03-30 09:26
If Manston Airport is up for sale (and the current owner’s motivation to make money may mean that she has other plans) I’d buy the place. The price has to be right though as a lot of extra money will need to be invested to give a chance of success. Sadly, I can’t stretch to […]

One Queue to Rule them All

Antony Reynolds - Fri, 2014-03-28 16:16
Using a Single Queue for Multiple Message Types with SOA Suite

Problem Statement
You use a single JMS queue for sending multiple message types / service requests.  You use a single JMS queue for receiving multiple message types / service requests.  You have multiple SOA JMS Adapter interfaces for reading and writing these queues.  In a composite it is random which interface gets a message from the JMS queue.  It is not a problem having multiple adapter instances writing to a single queue; the problem is only with having multiple readers, because each reader gets the first message on the queue.

Background

The JMS Adapter is unaware of who receives the messages.  Each adapter instance just takes the message from the queue and delivers it to its own configured interface, one interface per adapter instance.  The SOA infrastructure is then responsible for routing that message, usually via a database table and an in memory notification message, to a component within a composite.  Each message will create a new composite but the BPEL engine and Mediator engine will attempt to match callback messages to the appropriate Mediator or BPEL instance.
Note that message type, including XML document type, has nothing to do with the preceding statements.

The net result is that if you have a sequence of two receives from the same queue using different adapters then the messages will be split equally between the two adapters, meaning that half the time the wrong adapter will receive the message.  This blog entry looks at how to resolve this issue.

Note that the same problem occurs whenever you have more than 1 adapter listening to the same queue, whether they are in the same composite or different composites.  The solution in this blog entry is also relevant to this use case.

Solutions
In order to deliver the messages to the correct interface we need to identify the interface they should be delivered to.  This can be done by using JMS properties.  For example, the JMSType property can be used to identify the type of the message.  A message selector can be added to the JMS inbound adapter that will cause the adapter to filter out messages intended for other interfaces.  For example, if we need to call three services that are implemented in a single application:
  • Service 1 receives messages on the single outbound queue from SOA and sends responses back on the single inbound queue.
  • Similarly, Service 2 and Service 3 also receive messages on the single outbound queue from SOA and send responses back on the single inbound queue.
First we need to ensure the messages are delivered to the correct adapter instance.  This is achieved as follows:
  • The inbound JMS adapter is configured with a JMS message selector.  The message selector might be "JMSType='Service1'" for responses from Service 1.  Similarly, the selector would be "JMSType='Service2'" for the adapter waiting on a response from Service 2.  The message selector ensures that each adapter instance will retrieve the first message from the queue that matches its selector.
  • The sending service needs to set the JMS property (JMSType in our example) that is used in the message selector; a sample selector expression follows this list.
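JMS message selectors use an SQL-92 style conditional syntax, so a selector can combine several properties if needed. A hypothetical example (JMSPriority is a standard JMS header usable in selectors; the values are illustrative):

JMSType = 'Service1' AND JMSPriority > 4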
Now that our messages are being delivered to the correct interface, we need to make sure that they get delivered to the correct Mediator or BPEL instance.  We do this with correlation.  There are several correlation options:
  1. We can do manual correlation with a correlation set, identifying parts of the outbound message that uniquely identify our instance and matching them with parts of the inbound message to make the correlation.
  2. We can use a Request-Reply JMS adapter which by default expects the response to contain a JMSCorrelationID equal to the outgoing JMSMessageID.  Although no configuration is required for this on the SOA client side, the service needs to copy the incoming JMSMessageID to the outgoing JMSCorrelationID.
Special Case - Request-Reply Synchronous JMS Adapter

When using a synchronous Request-Reply JMS adapter we can omit the message selector, because the Request-Reply JMS adapter will immediately do a listen with a message selector for the correlation ID rather than processing the incoming message asynchronously.
The synchronous request-reply will block the BPEL process thread and hold open the BPEL transaction until a response is received, so this should only be used when you expect the request to be completed in a few seconds.

The JCA Connection Factory used must point to a non-XA JMS Connection Factory and must have the isTransacted property set to “false”.  See the documentation for more details.

Sample

I developed a JDeveloper SOA project that demonstrates using a single queue for multiple incoming adapters.  The overall process flow is shown in the picture below.  The BPEL process on the left receives messages from jms/TestQueue2 and sends messages to jms/TestQueue.  A Mediator is used to simulate multiple services and also provide a web interface to initiate the process.  The correct adapter is identified by using JMS message properties and a selector.

 

The flow above shows that the process is initiated from EM using a web service binding on the mediator.  The mediator, acting as a client, posts the request to the inbound queue with a JMSType property set to "Initiate".  Stage by stage:

Inbound Request
Client: receives the web service request and posts the request to the inbound queue with JMSType='Initiate'.
BPEL: the JMS adapter with a message selector "JMSType='Initiate'" receives the message and causes a composite to be created.  The composite in turn causes the BPEL process to start executing.  The BPEL process then sends a request to Service 1 on the outbound queue.
Key Points
  • Initiate message can be used to initiate a correlation set if necessary
  • Selector required to distinguish initiate messages from other messages on the queue

Separate Request and Reply Adapters
Service: Service 1 receives the request and sends a response on the inbound queue with JMSType='Service1' and JMSCorrelationID = incoming JMS message ID.
BPEL: the JMS adapter with a message selector "JMSType='Service1'" receives the message and causes a composite to be created.  The composite uses a correlation set to in turn deliver the message to BPEL, which correlates it with the existing BPEL process.  The BPEL process then sends a request to Service 2 on the outbound queue.
Key Points
  • Separate request & reply adapters require a correlation set to ensure that the reply goes to the correct BPEL process instance
  • Selector required to distinguish Service 1 response messages from other messages on the queue

Asynchronous Request-Reply Adapter
Service: Service 2 receives the request and sends a response on the inbound queue with JMSType='Service2' and JMSCorrelationID = incoming JMS message ID.
BPEL: the JMS adapter with a message selector "JMSType='Service2'" receives the message and causes a composite to be created.  The composite in turn delivers the message to the existing BPEL process using native JMS correlation.  The BPEL process then sends a request to Service 3 on the outbound queue using a synchronous request-reply.
Key Points
  • Asynchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the CorrelationID to ensure that the reply goes to the correct BPEL process instance
  • Selector still required to distinguish Service 2 response messages from other messages on the queue

Synchronous Request-Reply Adapter
Service: Service 3 receives the request and sends a response on the inbound queue with JMSType='Service3' and JMSCorrelationID = incoming JMS message ID.
BPEL: the synchronous JMS adapter receives the response without a message selector, correlates it to the BPEL process using native JMS correlation, and sends the overall response to the outbound queue.
Key Points
  • Synchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the CorrelationID to ensure that the reply goes to the correct BPEL process instance
  • Selector also not required to distinguish Service 3 response messages from other messages on the queue because the synchronous adapter is doing a selection on the expected CorrelationID

Outbound Response
Client: receives the response on an outbound queue.

Summary

When using a single JMS queue for multiple purposes, bear in mind the following:

  • If multiple receives use the same queue then you need to have a message selector.  The corollary to this is that the message sender must add a JMS property to the message that can be used in the message selector.
  • When using a request-reply JMS adapter then there is no need for a correlation set, correlation is done in the adapter by matching the outbound JMS message ID to the inbound JMS correlation ID.  The corollary to this is that the message sender must copy the JMS request message ID to the JMS response correlation ID.
  • When using a synchronous request-reply JMS adapter then there is no need for the message selector because the message selection is done based on the JMS correlation ID.
  • Synchronous request-reply adapter requires a non-XA connection factory to be used so that the request part of the interaction can be committed separately to the receive part of the interaction.
  • Synchronous request-reply JMS adapter should only be used when the reply is expected to take just a few seconds.  If the reply is expected to take longer then the asynchronous request-reply JMS adapter should be used.
Deploying the Sample

The sample is available to download here and makes use of the following JMS resources:

  • jms/TestQueue (Queue): Outbound queue from the BPEL process
  • jms/TestQueue2 (Queue): Inbound queue to the BPEL process
  • eis/wls/TestQueue (JMS Adapter Connector Factory): can point to an XA or non-XA JMS Connection Factory such as weblogic.jms.XAConnectionFactory
  • eis/wls/TestQueueNone-XA (JMS Adapter Connector Factory): must point to a non-XA JMS Connection Factory such as weblogic.jms.ConnectionFactory and must have isTransacted set to “false”

To run the sample, just use the test facility in the EM console or the soa-infra application.


db.person.find( { "role" : "DBA" } )

Tugdual Grall - Fri, 2014-03-28 09:00
Wow! It has been a while since I posted something on my blog. I have been very busy, moving to MongoDB, learning, learning, learning… finally I can breathe a little and answer some questions. Last week I was helping my colleague Norberto deliver a MongoDB Essentials Training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that…
