Feed aggregator

Product Manager at Microsoft

Chris Grillone - Tue, 2008-06-03 12:38
Hi,

I am grateful for all of your support while I was at Oracle. I am now a Product Manager at Microsoft for Windows Essential Business Server (EBS), formerly code named "Centro". EBS is scheduled to be released later this year. EBS is a server solution for mid-size businesses, 25-300 clients. Please check out my blog for EBS: http://www.microsoft.com/windowsserver/essential/default.mspx.

Cheers,

Chris

DBMS_STATS becomes INVALID after Refresh/Database Upgrade.

Madan Mohan - Mon, 2008-06-02 21:04
Issue:
******

One of the DBMS jobs failed and recorded an error in the alert log as follows:

From ORCL database alert log file /ORCL/ORCLDB/10.2.0/admin/ORCL_ctloraerp06/bdump/alert_ORCL.log

9:ORA-12012: error on auto execute of job 282253
10:ORA-04063: ORA-04063: package body "SYS.DBMS_STATS" has errors
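
Before touching anything, it can help to see what job 282253 actually runs; a quick, read-only check against dba_jobs (a sketch, using the job number from the alert log):

select job, log_user, what, failures
  from dba_jobs
 where job = 282253;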


Observation:
************


SYS.DBMS_STATS Package Body is Invalid.

select object_name,object_type ,owner, status from dba_objects where object_name='DBMS_STATS';

OBJECT_NAME OBJECT_TYPE OWNER STATUS
-------------------- ------------------- ------------------------------ -------
DBMS_STATS PACKAGE SYS VALID
DBMS_STATS PACKAGE BODY SYS INVALID
DBMS_STATS SYNONYM PUBLIC VALID

Solution
*********


Rebuild/recompile the DBMS_STATS objects as follows:

connect / as sysdba

@?/rdbms/admin/dbmsstat.sql
@?/rdbms/admin/prvtstas.plb
@?/rdbms/admin/prvtstat.plb
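
Once the scripts have run, a quick sanity check (plain SQL, not specific to this case) confirms that both the spec and body are VALID again; if only the body is invalid, a simple recompile is sometimes enough on its own:

alter package sys.dbms_stats compile body;

select object_name, object_type, owner, status
from dba_objects
where object_name = 'DBMS_STATS' and owner = 'SYS';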

The right combination of options to trace configurator servlet..

Gaurav Verma - Sun, 2008-06-01 22:57

A long while back, we had faced a weird situation with configurator and desperately needed to trace it to find out what was happening behind the scenes.

Despite consulting many Metalink notes, however hard I tried, the tracing simply would not happen. The right sequence of characters and options is quite tricky and not very well documented (not surprising, as Configurator is an acquired product).

After many attempts, the tracing did work, and this article is as much a reminder to myself as to anyone else out there who wants to trace the configurator servlet. Be warned, though, that this tracing is more like FRD log tracing (for Oracle Forms) and is NOT SQL tracing.

Having said so, here are the magical options that you need to add to $IAS_ORACLE_HOME/Apache/Jserv/etc/zone.properties file:

zone.properties
==========
....
....
servlets.default.initArgs=cz.activemodel=|/tl=@$TRACEDIR/ciotracesupport.lce| /gt=$TRACEDIR/ciotracesupport.java|/nolp|/nodp|/noatp|
....

....

where $TRACEDIR is the directory in which the trace (.lce) file will be produced. Also, please note that the /gt= option has been wrapped onto the next line only for readability; in reality, it must be on the same line as the /tl= option.
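
For illustration, if $TRACEDIR were, say, /u01/app/cztrace (a made-up path), the single unwrapped line would read:

servlets.default.initArgs=cz.activemodel=|/tl=@/u01/app/cztrace/ciotracesupport.lce|/gt=/u01/app/cztrace/ciotracesupport.java|/nolp|/nodp|/noatp|

A bounce of the Apache/Jserv services is normally needed before the new initArgs take effect.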

If anyone of you reading this article has a better idea than this, please let me know through the comment mechanism.


OTL Time Keeper

RameshKumar Shanmugam - Sun, 2008-06-01 18:13
There are multiple ways you can enter time into OTL

  • Self Service time - employee entering his own time
  • Line Manager time Entry - Manager Entering time for his direct reports
  • Timestore Deposit API - Time import using the interface from the third party system
  • Timekeeper - one person entering time for the group of employees based on the group assigned to them
  • Authorized Delegate - One person entering time for the group of employees based on the security profile attached in the user/responsibility
In this section I am going to explain how to set up the Timekeeper module for entering time for a group of employees.

Before starting the Timekeeper configuration, if you go directly to the OTL Super Timekeeper responsibility and click the Timekeeper Entry function, you will get the following error message.

The error message clearly tells you which setup steps you need to complete to enable the Timekeeper.

The first two steps are to be done in the OTL Application developer Responsibility
1. Timekeeper Misc Setup Items
2. Timekeeper Layout attribute

The third step should be done in the OTL super Timekeeper Responsibility
3. Timekeeper Group is created

The fourth step should be done in the System Administrator responsibility; it is set at the user level and applies only to the super timekeeper.
4. Profile OTL: Allow Change Group Timekeeper

I'll be explaining each of the above steps in detail in my next post
Categories: APPS Blogs

So you thought you upgraded your OLAP database to 10g..

Gaurav Verma - Sat, 2008-05-31 02:01


Well, so did we. In fact, the dba_registry view showed the version of the OLAP option as 10.2.0.3. Here, see it for yourself:

  1  select comp_name , version, status from dba_registry
  2* where comp_name like  '%Analytic%'
SQL> /

COMP_NAME                  VERSION      STATUS
-------------------------- ------------ -------
OLAP Analytic Workspace    10.2.0.3.0   VALID


Seems pretty convincing, does it not? One would think so.

An unexpected error..

But, a month later, when the customer tried to do an archive and restore of a demand plan, they received the following error:

Demand Planning: Version : 11.5.0

Copyright (c) 1979, 1999, Oracle Corporation. All rights
reserved.

MSDRESTORE module: Restore demand plan

+---------------------------------------------------------------------------+

Current system time is 30-MAY-2008 08:24:19

+---------------------------------------------------------------------------+

**Starts**30-MAY-2008 08:24:19

**Ends**30-MAY-2008 08:24:21

ORA-35071: EIF file ARCH4027.eif cannot be imported because
analytic workspace ODPCODE has not been upgraded to version 10.0.0.0.

+---------------------------------------------------------------------------+

Start of log messages from FND_FILE

+---------------------------------------------------------------------------+

Error in restore process: see batch log for details

ORA-35071: EIF file ARCH4027.eif cannot be imported because
analytic workspace ODPCODE has not been upgraded to version 10.0.0.0.

+---------------------------------------------------------------------------+

End of log messages from FND_FILE

+---------------------------------------------------------------------------+

Executing request completion options...

Finished executing request completion options.

+---------------------------------------------------------------------------+

Exceptions posted by this request:

Concurrent Request for "Restore demand plan" has
completed with error.

+---------------------------------------------------------------------------+
Metalink, our friend in need..

So we turned to our friend, Metalink, and came across Note 390004.1, which covers the case where an Analytic Workspace has been imported into a different database release but the AWs are still at the old version.

Eh! So we tried to confirm the same and found that there were indeed 4 analytic workspaces still at version 9.1, including the ODPCODE AW:

SQL> select aw_name , aw_version
     from all_aws
     where aw_version like '9%';

AW_NAME                        AW_V
------------------------------ ----
ZPBANNOT                       9.1
ZPBDATA                        9.1
MSD4029                        9.1
ODPCODE                        9.1


Strangely, the other AWs were all 10.2:

SQL> select aw_name , aw_version
     from all_aws
     where aw_version like '10%';

AW_NAME                        AW_V
------------------------------ ----
EXPRESS                        10.2
AWMD                           10.2
AWCREATE                       10.2
AWCREATE10G                    10.2
AWXML                          10.2
AWREPORT                       10.2
XWDEVKIT                       10.2
MMSD4027                       10.2
MSD4027                        10.2
MMMSD4027                      10.2
PMMSD4027A0                    10.2
PMMSD4027A21                   10.2
PMMSD4027A22                   10.2
PMMSD4027A26                   10.2
PMMSD4027A32                   10.2
PMMSD4027A33                   10.2
PMMSD4027A55                   10.2
PMMSD4027A57                   10.2
PMMSD4027A58                   10.2
MSD4027A58                     10.2
PMMSD4027A78                   10.2
PMMSD4027A79                   10.2
PMMSD4027A80                   10.2
PMMSD4027A86                   10.2
PMMSD4027A87                   10.2
PMMSD4027A88                   10.2
PMMSD4027A93                   10.2
PMMSD4027A95                   10.2
PMMSD4029A                     10.2
PMMSD4029A5                    10.2
PMMSD4029A3                    10.2
PMMSD4029A1                    10.2
MMMSD4029                      10.2
MMSD4029                       10.2
ZPBCODE                        10.2

35 rows selected.

SQL>

Where did our OLAP upgrade procedure go wrong..

So the question before us was: how did our production upgrade procedure, the one we thought was so meticulously planned and executed, go wrong?

For upgrading the ODP workspaces, we had followed Note 418200.1 - ODP: Steps When Upgrading the RDBMS Version Used With ODP. We had even got a clarification from the author of that note about what steps were needed for the ODP workspace upgrade.


Unfortunately, the following note does not have any direct reference to upgrading OLAP workspaces:


Note 362203.1

Interoperability Notes: Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0)
We had found the following Notes on OLAP upgrade/exporting/clone:

472215.1 How to Export AWs or a Plan in Demand Planning
418200.1 ODP Steps When Upgrading the RDBMS Version Used With ODP
412308.1 How to Check the Version for ODP 11.5.10?
339110.1 How To Clone the OLAP Portion of ODP 11.5.10

Note 418200.1 is the most relevant for the upgrade activity, but unfortunately even that makes no reference to checking the aw_version column of each workspace in the dba_aws view and using the dbms_aw.convert API for any workspaces that did not get upgraded for whatever reason.
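
As a rough sketch of that missing step (keeping in mind, as we found out below, that dbms_aw.convert works only against AWs owned by the schema you are logged into), the post-upgrade check could be as simple as:

-- connected as the workspace owner (e.g. APPS) after the database upgrade
select aw_name, aw_version from user_aws where aw_version like '9%';

-- then convert each AW that is still at the old version
exec dbms_aw.convert('ODPCODE');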

So that is definitely one thing that should be part of that note. We will be sending a note to the author asking him to include these steps in Note 418200.1.

Another gotcha while implementing the solution (exec dbms_aw.convert API) is that it does not accept the OWNER name of the workspace in the first argument. Note 390004.1 actually sets a false expectation on this front.

Also, you need to be logged in as the owner of the workspace while issuing the dbms_aw API, or you will get this error:

SQL> show user
USER is "SYS"
SQL> exec dbms_aw.convert('ODPCODE');
BEGIN dbms_aw.convert('
ODPCODE'); END;

*
ERROR at line 1:
ORA-33262: Analytic workspace
ODPCODE does not exist.
ORA-06512: at "SYS.DBMS_AW", line 58
ORA-06512: at "SYS.DBMS_AW", line 134
ORA-06512: at "SYS.DBMS_AW", line 1245
ORA-06512: at line 1


Oh, Oh.. A Gotcha while implementing the solution..

To make matters worse, through a copy-paste slip, I specified the owner name along with the workspace, like this:

SQL> exec dbms_aw.convert('apps.odpcode');

It came back with an error saying that you cannot specify the owner name.

After that, the name of the AW had changed to ODPCODE_ (with a trailing underscore), but it did show as upgraded to 10.2 as per dba_aws and all_aws.

SQL> select aw_name, aw_version
     from all_aws
     where aw_name like 'ODP%';


AW_NAME                        AW_V
------------------------------ ----
ODPCODE_                       10.2


Essentially, dbms_aw.convert had left the ODPCODE workspace in a transient state and had not done a good job of cleaning up after the exception.

A workaround for cleaning up...

Well, there is an API called dbms_aw.aw_rename that does the trick of putting the name back to ODPCODE, like this:

SQL> show user
USER is APPS

SQL> exec dbms_aw.aw_rename('ODPCODE_','ODPCODE');

PL/SQL procedure successfully completed.

SQL> select aw_name, aw_version from all_aws where aw_name='ODPCODE';

AW_NAME                         AW_V
------------------------------ ----
ODPCODE                        10.2

Now just to make sure that it was really 10.2, we ran the dbms_aw.convert API again and got this message, which implied that it was already in 10.2 format:

SQL> exec dbms_aw.convert('ODPCODE');
BEGIN dbms_aw.convert('ODPCODE'); END;

*
ERROR at line 1:
ORA-33298: Analytic Workspace ODPCODE is already in the newest format allowed
by the current compatibility setting
ORA-06512: at "SYS.DBMS_AW", line 1267
ORA-06512: at line 1


After this, the MSDRESTORE module: Restore demand plan concurrent job completed successfully.

Conclusion...

There were multiple learnings from this Sev 1 experience in a production instance:
  • Ask your customer to test exhaustively. In this case, the customer is a very small shop with an IT department of 5-6 people that has yet to graduate to enterprise-level thinking for managing its IT systems. Any testing was good testing for them, and this error was discovered only when they were about to go live with another project phase implementation. Unfortunately, in this case, we had very little say in their testing plan.
  • We were caught a little off guard because we did not have exhaustive expertise in OLAP upgrades and had to rely on published Metalink notes, which did not cross-link all the known upgrade issues. It would have paid to check the output of the all_aws view too. Similarly, other Oracle server options may have their own views.
  • While executing the dbms_aw APIs, it is best to be logged in as the owner of the analytic workspace being worked on, or you might land in a worse soup than you signed up for. As per our observation, even alter session set current_schema=<OWNER> does not work for the dbms_aw APIs.


Never move house….

Lisa Dobson - Fri, 2008-05-30 15:30
….unless you are fully prepared for it to take over your life for a couple of months and push your stress levels through the roof, only for it to come crashing down round your feet on the day it’s all supposed to complete. The worst part is knowing that now that it’s back on the market, I’m going to have to go through the whole thing again in the not too distant future. Anyway, that sort of

FND_GLOBAL affected by New Global Performance Changes

Aviad Elbaz - Thu, 2008-05-29 04:59

After applying ATG Rollup 5 patch (and above) we discovered an issue with some of our custom developments.
For some processes we got the following errors:

ORA-20001: Oracle error -20001: ORA-20001: Oracle error -4092: ORA-04092: cannot SET NLS in a trigger
has been detected in fnd_global.set_nls.set_parameter('NLS_LANGUAGE','AMERICAN').
has been detected in fnd_global.set_nls.
ORA-06512: at "APPS.APP_EXCEPTION", line 72
ORA-06512: at "APPS.FND_GLOBAL", line 240
ORA-06512: at "APPS.FND_GLOBAL", line 1410
ORA-06512: at "APPS.FND_GLOBAL", line 1655
ORA-06512: at "APPS.FND_GLOBAL", line 2170
ORA-06512: at "APPS.FND_GLOBAL", line 2312
ORA-06512: at "APPS.FND_GLOBAL", line 2250

and this:

ORA-20001: Oracle error -2074: ORA-02074: cannot SET NLS in a distributed transaction has been
detected in
fnd_global.set_nls.set_paramenters('NLS_LANGUAGE','AMERICAN').

After some debug work we found that this issue happens when FND_GLOBAL.apps_initialize is executed more than once, within a trigger or via a db link, in the same transaction.

According to Note 556391.1 - "ORA-02074: Cannot SET NLS in a Distributed Transaction", this issue is caused by new global performance changes.

Oracle Development said: "Very sorry if the new global performance changes have exposed you to this error, but there is no way we can back out these changes. They are not only complex and wide spread but required to maintain functional performance levels. Using fnd_global to change user/resp context from a trigger is not only not supported it is ill advised."

OK, so we had to find a workaround for this issue, and we found two...

I'll start with a sample of the new behavior of fnd_global to demonstrate the issue and the solutions/workarounds will come right after.

SQL> create table test1 (a number, b number);
Table created

SQL> insert into test1 (a) values (1001);
1 row inserted

SQL> insert into test1 (a) values (1002);
1 row inserted

SQL> commit;
Commit complete

SQL> create or replace trigger test1_trg_bi
  2  after update on test1
  3  for each row
  4  begin
  5       fnd_global.APPS_INITIALIZE(:new.a,1,1);
  6       -- fnd_request.submit_request...
  7       -- ....
  8       -- ....
  9  end;
10  /
Trigger created

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
        -1

SQL> update test1 set b=1101 where a=1001;
1 row updated

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
      1001

SQL> update test1 set b=1102 where a=1002;

update test1 set b=1102 where a=1002

ORA-20001: Oracle error -20001: ORA-20001: Oracle error -4092: ORA-04092: cannot SET NLS in a trigger
has been detected in fnd_global.set_nls.set_parameter('NLS_LANGUAGE','AMERICAN').
has been detected in fnd_global.set_nls.
ORA-06512: at "APPS.APP_EXCEPTION", line 72
ORA-06512: at "APPS.FND_GLOBAL", line 240
ORA-06512: at "APPS.FND_GLOBAL", line 1410
ORA-06512: at "APPS.FND_GLOBAL", line 1655
ORA-06512: at "APPS.FND_GLOBAL", line 2170
ORA-06512: at "APPS.FND_GLOBAL", line 2312
ORA-06512: at "APPS.FND_GLOBAL", line 2250
ORA-06512: at "APPS.TEST1_TRG_BI", line 2
ORA-04088: error during execution of trigger 'APPS.TEST1_TRG_BI'

As you can see, the second update failed because apps_initialize was executed for the second time in the same transaction.

Now I'll show two ways to workaround this issue:

1) As suggested in Note 556391.1 - "ORA-02074: Cannot SET NLS in a Distributed Transaction", one possible solution is a wrapper Concurrent Request that sets the context (apps_initialize) and then submits the original request.

Instead of:

create or replace trigger test1_trg_bi
after update on test1
for each row
declare
     . . .
begin
     fnd_global.APPS_INITIALIZE(:new.a,1,1);
     ret_code := fnd_request.submit_request ('OWNER', 'ORIGINAL_CONC', . . .);
     . . .
     . . .
end;

create the following trigger:

create or replace trigger test1_trg_bi
after update on test1
for each row
declare
     . . .
begin
     ret_code := fnd_request.submit_request ('OWNER', 'WRAPPER_CONC', . . . , :new.a, . . . );
     . . .
     . . .
end;

Additionally, create a new PL/SQL concurrent program (WRAPPER_CONC) that calls fnd_global.apps_initialize and then submits the ORIGINAL_CONC concurrent request (a sketch follows below).

This way, the apps_initialize call is executed in a separate transaction, with no error.

This is the preferred and recommended solution by Oracle.
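
For reference, here is a minimal sketch of what the WRAPPER_CONC program body might look like; the procedure name, the application short name 'OWNER', the program 'ORIGINAL_CONC' and the hard-coded responsibility ids are purely illustrative assumptions:

create or replace procedure xx_wrapper_conc
  (errbuf    out varchar2,
   retcode   out varchar2,
   p_user_id in  number) is
  l_request_id number;
begin
  -- the apps context is now set inside the wrapper request's own transaction
  fnd_global.apps_initialize (user_id => p_user_id, resp_id => 1, resp_appl_id => 1);

  -- submit the original concurrent program (assumed application/program names)
  l_request_id := fnd_request.submit_request
                    (application => 'OWNER',
                     program     => 'ORIGINAL_CONC',
                     description => null,
                     start_time  => null,
                     sub_request => false);
  commit;

  if l_request_id = 0 then
    retcode := '2';              -- submission failed
    errbuf  := fnd_message.get;
  else
    retcode := '0';
    errbuf  := 'Submitted request ' || l_request_id;
  end if;
end xx_wrapper_conc;
/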

2) The second solution is easier to implement and works fine, but according to Note 556391.1 it is not supported, since it contains calls to fnd_global within a database trigger.

Anyway...

The idea is to call the apps_initialize in an Autonomous Transaction procedure.

Follow this sample:

SQL> create or replace procedure test1_apps_init (p_user_id number) is
  2  pragma autonomous_transaction;
  3  begin
  4       fnd_global.APPS_INITIALIZE(p_user_id,1,1);
  5       commit;
  6  end;
  7  /

Procedure created

SQL> create or replace trigger test1_trg_bi
  2  after update on test1
  3  for each row
  4  begin
  5       test1_apps_init (:new.a);
  6       -- fnd_request.submit_request...
  7       -- ....
  8       -- .....
  9  end;
10  /

Trigger created

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
        -1

SQL> update test1 set b=1101 where a=1001;
1 row updated

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
      1001

SQL> update test1 set b=1102 where a=1002;
1 row updated

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
      1002

As you can see, the update statements were executed successfully this time and the session was updated with the appropriate user context in each update statement.

Both solutions work fine, but keep in mind that the second one is not supported.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

Site Specific Browser for EPM Applications

Oracle EPM Smart Space - Wed, 2008-05-28 11:00

So for the past few posts I have been talking about blurring the lines between desktop and web applications and the technologies that will get us there. In my first post on this topic I mentioned a technology called 'site specific browsers'. Site specific browsers are just that: a browser window dedicated to one site or, in the case of BI and EPM, to one web based application. The installed application gets a shortcut on the user's desktop or in the start menu and is a simple click away from running. You can read more about site specific browsers and how Mozilla Labs describes their product 'Prism'.


So in doing some searches I found that there are 2 major options with site specific browsers, Fluid and Prism. Fluid is Mac only so for my testing I used Prism. I decided to throw a couple of BI applications at it and see what I thought of the user experience. (I ignored minor bugs or issues as this stuff is still not official product). So I put Workspace (9.3.1) and OBI (10.1.3.3.2) in Prism and played around a bit. Here are some screen shots:


Workspace with a planning form loaded.



OBI with the default Paint Dashboard.

Both applications worked pretty well out of the box and it was nice not to have all the clutter and overhead that a browser carries with it. It was also nice to have the applications launch from a shortcut on the desktop. Some of the things I didn't like were as follows:


  1. There were times where it would have been nice to have navigation (ability to go back). This is not an issue with Prism itself, it was the fact that the application was designed to live in a browser.
  2. This does not change the application experience that much. (I did not expect it to) Basically if you are happy with the application experience this will improve on it. If you hate the way the application works then this will do little.

Overall this is a very cool concept but you are simply making web applications run as if they were desktop applications. I will use Prism for things like Google Mail or Yahoo Mail but at the end of the day, from a user experience perspective, I still prefer Thunderbird or Outlook for email. It would be cool to see if an RIA (Rich Internet Application) developed in Flex or Silverlight can run in a site specific browser…

Categories: Development

ORA-12547 while creating ASM instance using DBCA in 10gR2

Madhu Thatamsetty - Wed, 2008-05-28 04:39
ORA-12547 - TNS Lost Contact while creating ASM instance using DBCA.

Cause: It seems the Oracle binaries were not relinked properly.

[oracle@testsrv01 ~]$ ldd `which oracle`
lddlibc4: cannot read header from `/oracle01/oracle/product/10.2.0/db_1/bin/oracle'
[oracle@testsrv01 ~]$

Fix: shut down the listener, kill any stale processes referring to the executable in your ORACLE_HOME, and cd $ORACLE_HOME/
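
The post is cut off at that point; purely as a sketch of the usual relink sequence (an assumption, not the truncated original text), the remaining steps would typically be along the lines of:

$ lsnrctl stop
$ cd $ORACLE_HOME/bin
$ relink all            # relink the Oracle executables in this ORACLE_HOME
$ ldd `which oracle`    # should now resolve the shared libraries cleanly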

Recommended by Joe

Mark A. Williams - Tue, 2008-05-27 18:21

A friend of mine at Oracle (that is to say Greg is still at Oracle whilst I am not) pointed out to me that Microsoft's Joe Stagner has Pro .NET Oracle Programming as a recommended book on the Oracle - ASP.NET forum. Currently the recommended books list looks like this (subject to change naturally):

[Screenshot: the recommended books list on the Oracle - ASP.NET forum]

That got me to thinking a bit. It has been just over 4 years since I started writing that book. (I started the first chapter in March 2004). Certainly a lot has changed since then with several releases of the Oracle Data Provider for .NET, Oracle Database, Visual Studio, the Oracle Developer Tools for Visual Studio .NET, etc. I was just at one of the major booksellers here locally over the weekend and the "computer section" has dwindled immensely. I wonder if it would be worthwhile to update the book? There are several things I would want to change to be sure, but would it be worth it? Do people get their technical information from OTN and MSDN mostly now?

More on JavaFX

Oracle EPM Smart Space - Tue, 2008-05-27 14:18

OK, I will be totally honest: I don't have a whole lot on this one, simply because it is the newest entry in the market. It does look promising from the videos I have seen, and what is being said about platform support sounds great. I just hope we can avoid all the issues I have seen over the years with JRE and version compatibility… I have signed up to preview the SDK and when I get hold of it I will be sure to share more. I do want to share one cool feature I have seen on video: the ability to start with the application in the browser and then drag it to the desktop. Here is the video that shows this:



The feature I am talking about is about 2:10 into the video and I think this will be a key differentiator that the other RIA (Rich Internet Application) players will quickly try to copy.

Categories: Development

Are your projects failing? How to avoid the Pitfalls

Project Directions - Tue, 2008-05-27 10:20

In a recent article entitled Why Projects Fail (And How to Avoid the Pitfalls), published by Enterprise Systems, Senior Director of Strategy for Oracle Projects Colleen Baumbach outlines many of the common mistakes that lead to project failure.

I think one of the best points Ms. Baumbach makes is at the end where she says the accumulated years of project failures almost creates a mindset from the start that a project is doomed.  As she notes, there are countless studies that have been prepared showing how dismal project success rates are.

How are companies addressing this?  According to a Forrester study published in early 2007, twenty-six percent of IT leaders planned to hire project managers and 59 percent planned to train their current staff in project management in 2007.  They noted that those numbers changed very little from 2002.

Further reasoning behind the rush to acquire or train more qualified project managers: 

“The reason for the continued emphasis on project management skills is because IT’s value to business remains contingent on it’s ability to deliver projects which meet business requirements both on time and on budget. IT staff accustomed to more technical roles struggle to transition to project management, CIO’s argue, and complain that educational institutions are not putting adequate focus on these skills through coursework.”

It should be a good time to be a project manager as long as you know how to avoid the pitfalls.


"Demo It To Oracle" (DITO) - CamStudio Help

Pankaj Chandiramani - Sun, 2008-05-25 22:03

Now you can record and share the issues you are facing with Oracle Support.

https://metalink.oracle.com/metalink/plsql/f?p=130:14:7679480494464650902::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,11.1,1,1,0,helvetica

It's a nice way to share an error and show the support engineers how it occurred, for faster reproducibility.

Categories: DBA Blogs

Time saving tactic: How we saved 6 hrs of downtime in production 10g upgrade

Gaurav Verma - Sat, 2008-05-24 03:57

Preface

So you want to upgrade your database to 10g from 9i. Well, welcome to the club.

If you have RAC, then you will definitely need to install the CRS and database binaries, along with some RDBMS patches.

When we were challenged with keeping the downtime to within 24 hrs by a customer, that set us thinking as to what time saving tactics could be employed to achieve this end.
Divide and conquer the downtime..

The old strategy of divide and conquer is time tested and works well in most paradigms. In this article, we demonstrate how it's possible to split the 10g upgrade downtime into two logical parts:

Part 1)

Install the 10g technology stack ahead of time, including the database, any patchsets (usually 10.2.0.3) and RDBMS patches. Before doing this, shut down the 9i RAC/DB binaries. After the 10g CRS/DB technology stack installation, shut down the 10g CRS, bring up the 9i RAC processes and carry on as if nothing happened.


   At this stage, the 9i and 10g technology stacks co-exist peacefully with each other. The production system can run on 9i RAC for a week or more until you decide to do the actual upgrade.

Part 2)

In the subsequent main upgrade outage, you shut down the 9i RAC/DB, bring up the database using the 10g CRS/DB Oracle home and do the actual upgrade. This outage could last anywhere from 10-24 hrs, or even up to 32 hrs, depending on your pre-production timing practice. It is assumed that one would do at least 2-3 practice rounds of the upgrade before gaining the confidence to do the production round.


Adopting this split strategy saved us about 6 hrs in the main upgrade downtime window and we were able to do the upgrade in a window of 16-20 hrs. The size of the database was ~1 TB on HP-UX PA RISC 64 bit OS.

How was it done..

When you do a 9i->10g upgrade for RAC, the following happens:

1) OCR device location is automatically picked up from /var/opt/oracle/srvConfig.loc (9i  setup file)

2) The OCR device's contents are upgraded to 10g format

3) A new file called /var/opt/oracle/ocr.loc is created with the OCR device name

Since we had to preserve the 9i OCR device for running 9i RAC after the 10g CRS/DB techstack installation, we did the following:

1) Got a new set of OCR and Voting device for 10g. This was a separate set of devices in addition to the 9i OCR and voting disks. Then we brought down the 9i CRS processes.

A caveat here was that HP Serviceguard (the vendor cluster solution) had to be up in order to perform the 10g CRS installation.

2) Copied the contents of the 9i OCR into the 10g OCR device using the dd command:

$ whoami
oracle

$ dd if=/dev/rawdev/raw_9i_ocr.dbf  of=/dev/rawdev/raw_10g_ocr.dbf   bs=1024


3) Edited the srvConfig.loc file to point to the /dev/rawdev/raw_10g_ocr.dbf file

4) Did the 10g CRS installation and ran root.sh that upgraded the OCR device to 10g format

5) We then installed the 10.2.0.1 DB binaries and applied the 10.2.0.3 patchset, along with RDBMS patches. This was the major activity and took about 4-5 hrs.

Another option to save time here would have been to clone the 10.2.0.3 ORACLE_HOME from the UAT servers, but since this was production, we wanted a clean start and did not want to carry over any mistakes from UAT.

6) Brought down 10g CRS services with $ORA_CRS_HOME/bin/crsctl stop crs and also disabled the automatic startup of CRS with /etc/init.d/init.crs crs disable

7) Re-pointed the 9i OCR device back in /var/opt/oracle/srvConfig.loc as  /dev/rawdev/raw_9i_ocr.dbf

Then we brought up the 9i CRS services again and then brought up the Database with 9i binaries.
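
Pulled together, the switch-back in steps 6 and 7 looked roughly like this (a sketch; the srvconfig_loc entry format is quoted from memory, so treat it as an assumption):

# step 6: stop the 10g clusterware and keep it from auto-starting
$ORA_CRS_HOME/bin/crsctl stop crs
/etc/init.d/init.crs disable

# step 7: point /var/opt/oracle/srvConfig.loc back at the 9i OCR device,
# i.e. an entry along the lines of
#   srvconfig_loc=/dev/rawdev/raw_9i_ocr.dbf
# then restart the 9i CRS processes and open the database with the 9i binaries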

At this point, the 9i and 10g binaries co-existed with each other peacefully as if 10g techstack was never there.
Conclusion

There might be better ways of reducing downtime during the upgrade, but this split was one of the selling points in projecting a successful 10g upgrade to the customer, especially as we were under extreme pressure to keep the main downtime window to less than 24 hrs.




puzzling RMAN: No channel to restore a backup or copy of datafile?

Gaurav Verma - Fri, 2008-05-23 06:43

An RMAN puzzle  
In this article, we will look at a puzzling situation that happened at one of our environments, when we were carrying out an RMAN restoration. The RMAN backup was taken using a catalog (not using controlfile) and our mission was to restore PROD to UAT. Simple, right?

Wrong!
An Arrrgh! Moment..

We were using the following restoration script:

RMAN>
RMAN> run
2> {
3> set until time  "to_date('05/13/08 14:40:00','MM/DD/YY hh24:mi:ss')";
4> allocate auxiliary channel 'adisk_0' type DISK;
5> allocate auxiliary channel 'adisk_1' type DISK;
6> allocate auxiliary channel 'adisk_2' type DISK;
7> allocate auxiliary channel 'adisk_3' type DISK;
8> DUPLICATE TARGET DATABASE TO UAT;
9> }


The backup location was available and so were adequate backup channels, but we always ended up getting this laconic error:

...
...

Starting restore at 13-MAY-08

released channel: adisk_0
released channel: adisk_1
released channel: adisk_2
released channel: adisk_3
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/13/2008 15:25:38
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06100: no channel to restore a backup or copy of datafile 471
RMAN-06100: no channel to restore a backup or copy of datafile 466

RMAN>
RMAN> **end-of-file**

RMAN>

Recovery Manager complete.


No channel to restore?...Huh?

Well, as you can see, the channels were very well defined, and even the RMAN log showed the channels being allocated. We even tried increasing the channels to 8 or 10, and also tried allocating auxiliary channels (which was irrelevant), but to no avail.

Another option we tried was a plain and simple restore using the RMAN> restore datafile 1; command. Although that failed, since the path of datafile 1 had not been switched to UAT's convention, it did not prove anything.

My teammates, Dhandapani Perungulam and Sandeep Rebba, were trying out different options. Then someone had the bright idea of checking the timestamp of the completed backup. It was possible that we were trying to restore to a point in time before the backup had completed.

Verifying the rman backup completion time from catalog

First of all, we needed to set the NLS_DATE_FORMAT variable to include the HH24:MI format:

$  export NLS_DATE_FORMAT='dd-mon-rr hh24:mi'

Then we needed to connect to the target (the source instance for duplication, in RMAN terminology) and the RMAN catalog:

$  rman catalog rman@rman target sys@prd

Recovery Manager: Release 10.2.0.3.0 - Production on Fri May 23 14:34:02 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

rman database Password:

target database Password:

connected to target database: PRD (DBID=2530951715)
connected to recovery catalog database

RMAN>

This command will give us details of the backupsets and the time when the backup completed:

RMAN> list backup of database completed between '12-may-2008' and '14-may-2008';

...
...
...
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2794098 Incr 0  53.28G     SBT_TAPE    00:38:54     13-may-08 23:11
        BP Key: 2794121   Status: AVAILABLE  Compressed: NO  Tag: TAG20080513T203101
        Handle: racprod1_PRD<PRD_116563:654647578:1>.dbf   Media:
  List of Datafiles in backup set 2794098
  File LV Type Ckp SCN    Ckp Time        Name
  ---- -- ---- ---------- --------------- ----
  7    0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/owad01.dbf
  295  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/system06.dbf
  379  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/rbs11.dbf
  459  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_interface01.dbf
  460  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_summary01.dbf
  481  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_tx_data12.dbf
  509  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_archive02.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2794099 Incr 0  55.62G     SBT_TAPE    00:39:54     13-may-08 23:12
        BP Key: 2794122   Status: AVAILABLE  Compressed: NO  Tag: TAG20080513T203101
        Handle: racprod1_PRD<PRD_116562:654647562:1>.dbf   Media:
  List of Datafiles in backup set 2794099
  File LV Type Ckp SCN    Ckp Time        Name
  ---- -- ---- ---------- --------------- ----
  5    0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/system05.dbf
  17   0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/rbs15.dbf
  18   0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_interface03.dbf
  22   0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_summary05.dbf
  398  0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/discoverer01.dbf
  480  0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_tx_data11.dbf
  508  0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_nologging02.dbf

RMAN>

As we can see, 13-may-08 22:32 is the golden time. Anything after that in the until time statement will do.
Further progress..

So then, we tried this script:

RMAN>
RMAN> run {
2> allocate auxiliary channel 'adisk_0' type DISK;
3> allocate auxiliary channel 'adisk_1' type DISK;
4> allocate auxiliary channel 'adisk_2' type DISK;
5> allocate auxiliary channel 'adisk_3' type DISK;
6> allocate auxiliary channel 'adisk_4' type DISK;
7>
8> set until time  "to_date('13/05/2008 22:40:00','DD/MM/YYYY HH24:MI:SS')";
9>
10> DUPLICATE TARGET DATABASE TO UAT;
11> }

Although we got the following error for the archivelogs, we were able to open the database after manually supplying them and then issuing SQL> alter database open resetlogs; :

...
...

Starting recover at 13-MAY-08

starting media recovery

Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/oracle/uatracdata/system01.dbf'

released channel: adisk_0
released channel: adisk_1
released channel: adisk_2
released channel: adisk_3
released channel: adisk_4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/13/2008 22:47:51
RMAN-03015: error occurred in stored script Memory Script
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06102: no channel to restore a backup or copy of log thread 2 seq 111965 lowscn 7246877796511
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 143643 lowscn 7246877796417
Conclusion

Some RMAN errors can be really hard to understand or debug. In particular, "no channel to restore a backup or copy" seems to be a very generic error message in RMAN, which could really mean anything and can be very misleading.

For example, we can see that at the end it gave the RMAN-06102: no channel to restore a backup or copy of log thread 2 seq 111965 lowscn 7246877796511 error, but that simply meant that it could not find the archive logs. After supplying the archive logs, the database opened up fine with the resetlogs option.
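
For the record, the manual supplying of the archive logs amounted to something along these lines (a sketch only; the archive log path shown is hypothetical):

-- restore the missing archived logs to disk (from tape or the source), then:
SQL> recover database using backup controlfile until time '2008-05-13:22:40:00';
-- supply the requested log file names at the prompt, e.g.
--   /u01/oracle/uatarch/arch_2_111965.arc
SQL> alter database open resetlogs;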


upcoming posts

Pankaj Chandiramani - Thu, 2008-05-22 20:37

It's been a long time since I posted something; the reason being that I have started in new directions... working with OIM, OID & related stuff these days.

So I will be writing a post on installing/configuring OIM in the next couple of days & will keep posting regularly on the topic.



 

Categories: DBA Blogs

Hurray Patchset!

Carl Backstrom - Thu, 2008-05-22 16:02
As Joel blogged here, our patchset for Application Express 3.1 is out.

Hopefully this fixes any issues you ran into with APEX 3.1 so we can start on creating all brand new ones in future versions ;)

CFO’s rate Data Integrity a Critical Issue

Project Directions - Thu, 2008-05-22 10:23

In a recent article from Baseline titled One in Nine CFOs See High Return Benefits from IT, they report that, of the 629 CFOs surveyed, close to half rate improving data quality in their enterprise as a critical issue.

Here in Oracle Projects, one of our greatest assets has always been our tight integration with the rest of the E-Business Suite, such as Financials and HR. When products are built from the ground up to be integrated together, you eliminate many of the problems that come from having disparate 3rd party systems all trying to share data.

Even if you must use special best-of-breed or niche products to help manage your business, CFOs and CIOs should at least look into our AIA strategy to help build tighter integration and hence improve data integrity.


Using mod_rails with Rails applications on Oracle

Raimonds Simanovskis - Tue, 2008-05-20 16:00

Like many others, I got interested in the new mod_rails deployment solution for Rails applications. And when I read how to use it for development environment needs, I decided to try it out.

As you probably know, I am using a Mac for development and an Oracle database for many Rails applications. So if you do the same, you first need to set up Ruby and Oracle on your Mac.

After that I installed and set up mod_rails according to these instructions and these additional notes.

One additional thing I had to do was change the user under which the Apache httpd server runs, as otherwise the default www user did not see my Rails application directories. You do this in /etc/apache2/httpd.conf:

User yourusername
Group yourusername

And then I started to fight with the issue that the ruby started from mod_rails could not load the ruby-oci8 library, as it could not find the Oracle Instant Client shared library. The reason was that mod_rails launched ruby with a very minimal list of environment variables; for example, since the DYLD_LIBRARY_PATH environment variable was not set, ruby-oci8 could not find the Oracle Instant Client libraries.

The problem is that there is no documented way to pass the necessary environment variables to mod_rails. Unfortunately mod_rails ignores SetEnv settings from the Apache httpd.conf file. Therefore I needed a workaround, and I settled on the following solution.

I created executable script file /usr/local/bin/ruby_with_env:

#!/bin/bash
# set the Oracle client environment that mod_rails does not pass through
export DYLD_LIBRARY_PATH="/usr/local/oracle/instantclient_10_2:$DYLD_LIBRARY_PATH"
export SQLPATH=$DYLD_LIBRARY_PATH
export TNS_ADMIN="/usr/local/oracle/network/admin"
export NLS_LANG="AMERICAN_AMERICA.UTF8"
# forward all arguments to the real ruby binary ("$@" preserves quoting)
/usr/bin/ruby "$@"

and then in Apache httpd.conf file I changed RailsRuby line to

RailsRuby /usr/local/bin/ruby_with_env

As a result, I was able to set the necessary environment variables before Ruby and Rails started, and after this change the ruby-oci8 libraries were loaded successfully.

You can also use this solution on Linux hosts where you deploy Rails applications in production.
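
On Linux the same idea applies, except that the dynamic linker looks at LD_LIBRARY_PATH rather than DYLD_LIBRARY_PATH; a sketch of the equivalent wrapper (the Instant Client path is just an assumption):

#!/bin/bash
# ruby_with_env for Linux: set the Oracle client environment, then run ruby
export LD_LIBRARY_PATH="/usr/local/oracle/instantclient_10_2:$LD_LIBRARY_PATH"
export TNS_ADMIN="/usr/local/oracle/network/admin"
export NLS_LANG="AMERICAN_AMERICA.UTF8"
/usr/bin/ruby "$@"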

Currently I still have an issue with mod_rails in that it fails to execute RMagick library methods (RMagick is compiled against ImageMagick). I get strange errors in the Apache error_log:

The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug.
[error] [client ::1] Premature end of script headers:

When I was running the same application with Mongrel, everything worked correctly. If anyone has any idea what the reason could be, please leave a comment.

Categories: Development
