Feed aggregator

Concepts Guide: 8/27 - Memory Architecture

Charles Schultz - Thu, 2009-10-08 14:08
I like the picture on the 2nd page. Yes, I am visually oriented.

On page 4:
When automatic SGA memory management is enabled, the sizes of the different SGA
components are flexible and can adapt to the needs of a workload without requiring
any additional configuration.

However, the SGA components are flexible regardless of the setting for automatic SGA memory management (aka automatic shared memory management, aka ASMM). While I agree that ASMM as a conceptual feature is rather cool, there are known issues when allocations and deallocations oscillate rapidly, causing locks on the underlying pools and decreasing performance overall.

To be truly effective, dynamic memory management would allow the kernel to dynamically allocate memory segments from the OS. As it stands, 100% of greater(SGA_TARGET,SGA_MAX_SIZE) is allocated when the instance starts; if SGA_MAX_SIZE is greater, the memory is allocated but not used, saved away for potential future use. How is that dynamic?
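
To see just how "dynamic" this really is on your own instance, compare the two settings with a quick query against v$parameter (nothing exotic here):
select name, value
from   v$parameter
where  name in ('sga_target', 'sga_max_size');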

From page 5:
"As the workload changes, memory is redistributed to ensure optimal performance."

*cough cough* Unless of course you hit a bug....

Good overview and discussion of the SGA and buffer cache in general.

Page 6:
"If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer
to the write list and continues to search."

I did not realize that the user process could move dirty buffers to the write list; I thought only the DBWriter processes could do that. Hmm...

Slightly disappointed that there is no talk about the negative ramifications of setting a large buffer cache, specifically how it adversely affects the time to scan the LRU. The positive effects are fairly obvious. =) I was also hoping for more coverage of the Redo Log buffer. But the couple of paragraphs in the document (and more later in Chapter 9?) at least introduce the concept. I still find it strange that the defaults are so small.

By the time I get to page 10, I am ready for another picture; it would be nice to start with the overall picture, and then "zoom in" on each section to go along with the text a little bit.

Data Dictionary Cache, aka Row Cache. Is this the same cache that 11g is using to store SQL results?

Like the detailed steps on page 12; this is the kind of detail I have been waiting for. And now I can rip them up. =)
The first bullet of #2 states that the ANALYZE command (when used to update/modify stats) flushes the cursor from the shared pool. People still use the ANALYZE command in 10g? I thought the dbms_stats procedures were preferred and more popular. In any event, the underlying mechanics are still the same; if you alter the stats, the cursor is no longer valid and is dropped.
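
For reference, the two approaches being contrasted look something like this (dbms_stats being the preferred route in 10g):
analyze table employees compute statistics;
exec dbms_stats.gather_table_stats(user, 'EMPLOYEES');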

Now if only we had a way to surgically splice out individual cursors without having to make relatively "impactful" changes...

Bullet 3 is interesting; I had no idea that changing the global name flushed the shared pool. Wow.
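
For context, the change being talked about is the one made by a statement like this (the global name here is purely illustrative):
alter database rename global_name to prod.example.com;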

The couple of paragraphs about the Large Pool are succinct, a bit too much so. How does Oracle decide when to use Large Pool memory, as opposed to the Shared Pool? What are some good rules of thumb in terms of how much to allocate (probably covered in the Performance Tuning Guide, but the Concepts Guide should say _something_)? No Large Pool Advisor? =)

The section on the Streams Pool is even worse. It does not tell me what exactly the memory is used for, how to tune it, what to do with it, nothing. And the 2nd paragraph is quite confusing; the Streams Pool may come from different parent pools depending on how SGA_TARGET is set? Who signed off on that little bit of tom-foolery?

The section on Dynamic SGA is almost misleading. The shared memory allocated to the server instance does not actually change at all; 100% of the requested memory (even if it is not used) is given during instance startup. This concept of a "dynamic memory usage" feature totally ignores that fact. So if the instance gets 50GB of memory but only uses 500MB, you are wasting 49.5GB for no good reason. That is not dynamic memory.

"Both the buffer cache and the SGA pools can grow and shrink at runtime according to some internal, Oracle-managed policy."

This is the Concepts Guide, for crying out loud! Some internal, Oracle-managed policy?!? Who wrote that?!? Worse, this particular policy has some serious drawbacks, especially earlier in 10g. We observed that Oracle would thrash itself trying to resize the "dynamic" pools many times a second, apparently oscillating between two different "optimal" thresholds. Some
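
If you want to see that churn for yourself, a query along these lines against v$sga_resize_ops (10g and later; double-check the column names on your release) makes it visible:
select component, oper_type,
       initial_size/1024/1024 as initial_mb,
       final_size/1024/1024   as final_mb,
       status, start_time
from   v$sga_resize_ops
order  by start_time desc;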


LOCK_SGA: I have never heard of this parameter. And, oh, here is the reason why; we run Solaris. Good thing the Concepts Guide cleared that up. So, this is the Concepts Guide, right? I am not reading some PR BS, am I? Why would LOCK_SGA be a bad thing? If you are running a Production database, especially on a host that is dedicated to Oracle, why would I ever want memory pages to swap out?

SHARED_MEMORY_ADDRESS: Again, never heard of this one. In fact, Google is suspiciously quiet on this one as well. Why would I want to set this parameter? Don't get me wrong, I am glad the Concepts Guide at least says *something*, but some explanations would be nice. I assume this parameter is an offset; if you set the same thing in multiple databases, I would have a hard time believing that this parameter is actually meant to grab an absolute OS memory address. That would be just silly. Actually, I am surprised this is not an underscore parameter. Which makes me wonder: is this short little blurb the only thing that keeps this parameter from being "internally supported"?

Didn't realize the PGA had two independently-maintained subsections (persistent area and run-time area). Good to know.

It is puzzling that the author would choose to include v$sesstat and v$sysstat as means of collecting performance statistics for the PGA without giving any insight whatsoever as to what to look for; in fact, just browsing v$statname (the lookup table for the statistic#) does not make this any clearer either. I personally find it quite unhelpful when a document that purports to "demystify" something flippantly references a source that is not at all obvious.
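
For example, to get anything useful out of those views you end up writing something like this yourself (statistic names as they appear in v$statname on 10g; verify on your own release):
select s.sid, sn.name, round(ss.value/1024/1024, 1) as mb
from   v$sesstat ss, v$statname sn, v$session s
where  ss.statistic# = sn.statistic#
and    ss.sid = s.sid
and    sn.name in ('session pga memory', 'session pga memory max')
order  by ss.value desc;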

I'll wrap up with that. I still find it hard to wade through these chapters; they drain my brain. *grin* Overall I felt that I learned a few small things about the Memory Architecture, but I still feel like I have a ways to go before I fully grasp the skeleton that holds everything together. I hope this becomes more lucid as the chapters progress.

The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Chapter 4

Geert De Paep - Wed, 2009-10-07 09:00
Scenario 4: Loss of ocrmirror from the non-OCR MASTER

This is a follow-up of chapter 3.
Let's try to do the same thing as in scenario 3, but now hiding the lun from the node NOT being the OCR MASTER, while crs is running on both nodes.

What happens?

So we hide the ocrmirror lun from node 1, because after the previous test node 2 is still the master. At the moment of hiding the lun, nothing appears in any logfile on any node. This is an interesting fact, because when we removed the ocrmirror from the ocr master in scenario 3, we got the messages “problem writing the buffer …” and “Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [1/2] to [2/2]” immediately in the crsd logfile. So this indicates that only the ocr master constantly reads/writes(?) the ocr/ocrmirror and hence detects IO errors immediately. The non-ocr-master doesn't do anything.

To prove that the ocrmirror is really invisible to the non-ocr-master, we do on node 1:

(nodea01 /app/oracle/bin) $ dd if=/dev/oracle/ocrmirror of=/dev/null bs=64k count=1
dd: /dev/oracle/ocrmirror: open: I/O error

ocrcheck on node 2 (the master) has no problem, as it still sees both devices:

         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

But doing ocrcheck on node 1 gives:

PROT-602: Failed to retrieve data from the cluster registry

and its alert file says:

2008-07-23 09:33:43.221
[client(14867)]CRS-1011:OCR cannot determine that the OCR content contains the latest updates. Details in /app/oracle/crs/log/nodea01/client/ocrcheck_14867.log.

and the associated client logfile shows:

Oracle Database 10g CRS Release Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-07-23 09:33:43.210: [OCRCHECK][1]ocrcheck starts…
2008-07-23 09:33:43.220: [ OCRRAW][1]proprioini: disk 0 (/dev/oracle/ocr) doesn’t have enough votes (1,2)
2008-07-23 09:33:43.221: [ OCRRAW][1]proprinit: Could not open raw device
2008-07-23 09:33:43.221: [ default][1]a_init:7!: Backend init unsuccessful : [26]
2008-07-23 09:33:43.221: [OCRCHECK][1]Failed to access OCR repository: [PROC-26: Error while accessing the physical storage]
2008-07-23 09:33:43.221: [OCRCHECK][1]Failed to initialize ocrchek2
2008-07-23 09:33:43.221: [OCRCHECK][1]Exiting [status=failed]…

Now let’s see if node 1 really has a problem with this or not?

(nodea01 /app/oracle/bin) $ crsstat
HA Resource                                   Target     State
-----------                                   ------     -----
ora.ARES.ARES1.inst                           ONLINE     ONLINE on nodea01
ora.ARES.ARES2.inst                           ONLINE     ONLINE on nodeb01
ora.ARES.db                                   ONLINE     ONLINE on nodeb01
ora.AMIGO.AMIGO1.inst                         ONLINE     ONLINE on nodea01
ora.AMIGO.AMIGO2.inst                         ONLINE     ONLINE on nodeb01
ora.AMIGO.db                                  ONLINE     ONLINE on nodeb01
ora.nodea01.ASM1.asm                          ONLINE     ONLINE on nodea01
ora.nodea01.LSNRARES_NODEA01.lsnr             ONLINE     ONLINE on nodea01
ora.nodea01.LSNRAMIGO_NODEA01.lsnr            ONLINE     ONLINE on nodea01
ora.nodea01.gsd                               ONLINE     ONLINE on nodea01
ora.nodea01.ons                               ONLINE     ONLINE on nodea01
ora.nodea01.vip                               ONLINE     ONLINE on nodea01
ora.nodeb01.ASM2.asm                          ONLINE     ONLINE on nodeb01
ora.nodeb01.LSNRARES_NODEB01.lsnr             ONLINE     OFFLINE
ora.nodeb01.LSNRAMIGO_NODEB01.lsnr            ONLINE     OFFLINE
ora.nodeb01.gsd                               ONLINE     ONLINE on nodeb01
ora.nodeb01.ons                               ONLINE     ONLINE on nodeb01
ora.nodeb01.vip                               ONLINE     ONLINE on nodea01

Obviously not. I seem to be able to query the ocr without any problem. Even modifying the OCR succeeds from node 1:

srvctl add service -d ARES -s aressrv -r ARES1 -a ARES2

gives in the crs logfile:

2008-07-23 09:41:04.656: [ CRSRES][46723] Resource Registered: ora.ARES.aressrv.cs
2008-07-23 09:41:05.786: [ CRSRES][46724] Resource Registered: ora.ARES.aressrv.ARES1.srv

This seems to indicate again that all OCR manipulation (read + write) goes through the ocr master (node 2, who still sees both ocr and ocrmirror). Still, during all these actions nothing has appeared in any logfile of node 2 (the master who still sees both luns).

Now let’s do an interesting test and stop the master. We may assume that node 1 will become the new master then, however node 1 at this time sees only one device with one vote. So it cannot run CRS like that…

Let’s see what happens. After stopping crs on node 2, the crs logfile on node 1 shows:

2008-07-23 09:44:08.406: [ OCRMAS][25]th_master:13: I AM THE NEW OCR MASTER at incar 2. Node Number 1
2008-07-23 09:44:08.415: [ OCRRAW][25]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)
2008-07-23 09:44:08.418: [ OCRRAW][25]propriowv_bootbuf: Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [1/2] to [2/2]

This makes sense. Node 1 becomes the master because node 2 is leaving. However it evaluates its configuration and sees an ocr with one vote and no ocrmirror. This violates rule 3 and hence it updates the vote count (he can do that, he is the new master), and luckily he does NOT decide to crash…

And indeed all seems as expected now on node 1:

         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File unavailable

The situation is now that the unavailable ocrmirror still has one vote (because node 1 could not update its vote count) and the ocr has just received 2 votes from node 1.

Now we restart crs again on node 2 and we see in its logfile:

2008-07-23 09:47:19.583: [ OCRRAW][1]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (1), 1st set (1385758746,1866209186), 2nd set (0,0) my votes (2), total votes (2)
2008-07-23 09:47:19.583: [ OCRRAW][1]proprioo: for disk 1 (/dev/oracle/ocrmirror), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)

Node 2 may be a little confused, because when both ocr and ocrmirror are available, he would expect each of them to have one vote…

Now let's do an ocrcheck again on node 2 and get:

         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File needs to be synchronized with the other device

Aha, this is the right output when ocr has 2 votes and ocrmirror has 1 vote.

So let’s do as what Oracle tells in the output above, let’s synchronize the ocrmirror again with the other device:

So we do an “ocrconfig -replace ocrmirror /dev/oracle/ocrmirror” on node 2 (who can see both ocr and ocrmirror). Bad luck, this fails, because node 2 is not the master. Node 1 has become the master (see above) and node 1 cannot see ocrmirror. So node 1 cannot verify or correct the vote count on ocrmirror. Hence this last command gives the following in the alert file of node 1:

2008-07-23 09:57:26.251: [ OCROSD][35]utdvch:0:failed to open OCR file/disk /dev/oracle/ocrmirror, errno=5, os err string=I/O error
2008-07-23 09:57:26.251: [ OCRRAW][35]dev_replace: master could not verify the new disk (8)
[ OCRSRV][35]proas_replace_disk: Failed in changing configurations in the Master 8

After making the lun visible again on node 1 and repeating the last command, all succeeds again without error: the crs logfile of node 1 then shows:

2008-07-23 10:04:48.419: [ OCRRAW][33]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (1), 1st set (1385758746,1866209186), 2nd set (0,0) my votes (2), total votes (2)
2008-07-23 10:04:48.419: [ OCRRAW][33]propriogid:1: INVALID FORMAT
2008-07-23 10:04:48.484: [ OCRRAW][33]propriowv_bootbuf: Vote information on disk 1 [/dev/oracle/ocrmirror] is adjusted from [0/0] to [1/2]
2008-07-23 10:04:48.485: [ OCRRAW][33]propriowv_bootbuf: Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [2/2] to [1/2]
2008-07-23 10:04:48.557: [ OCRMAS][25]th_master: Deleted ver keys from cache (master)
2008-07-23 10:04:48.557: [ OCRMAS][25]th_master: Deleted ver keys from cache (master)

and in the same file on node 2 we see:

2008-07-23 10:04:48.492: [ OCRRAW][40]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)
2008-07-23 10:04:48.493: [ OCRRAW][40]proprioo: for disk 1 (/dev/oracle/ocrmirror), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)
2008-07-23 10:04:48.504: [ OCRMAS][25]th_master: Deleted ver keys from cache (non master)
2008-07-23 10:04:48.504: [ OCRMAS][25]th_master: Deleted ver keys from cache (non master)

The fact that the logfile on node 2 shows these messages indicates that the vote update which is done by the master (node 1) is propagated in some way to the other nodes, who in turn update some kind of local cache (I think).

Conclusion of Scenario 4

Hiding the lun from the non-ocr-master still can’t confuse CRS. However it takes longer for the cluster to detect that there is a problem with the storage, as only the master is able to detect io errors on the ocr/ocrmirror. But in the end it can be recovered again without downtime.

So you should be convinced now that we can’t confuse CRS, right?… Then you don’t know me yet, I still have scenario 5. Read on in the next chapter.

Oracle OpenWorld 2009: From Silicon Welly to Silicon Valley Again

Gareth Roberts - Wed, 2009-10-07 04:00

Off to Openworld 2009 in a couple of days. Given that it's my second time I know a lot more of what to expect. But of course my schedule is overflowing again - even more of a mixed bag this time with fewer conference sessions, but more meetups and unconference sessions that I should have looked at more closely last year!

Here's what my main schedule looks like:

Hope to see you there - And a huge thanks to Oracle for the blogger's pass - wouldn't be there without it!

Catch ya!


Book: Middleware Management with Enterprise Manager Grid Control

Debu Panda - Tue, 2009-10-06 23:46
I coauthored another book, Middleware Management with Oracle Enterprise Manager, with Arvind Maheshwari. It covers managing both Oracle Fusion Middleware and non-Oracle middleware such as IBM WebSphere, JBoss, and Microsoft middleware with Oracle Enterprise Manager.

Experiences on Java programming within the databases, tips & tricks, tools, open source libraries and more

Marcelo Ochoa - Tue, 2009-10-06 12:56
This is the topic of my talk during the unconference this year (Tuesday - Oct 13 - 10am - Overlook II). But what is an unconference?
An unconference is a conference where the content of the sessions is driven and created by the participants, generally day-by-day during the course of the event, rather than by a single organizer, or small group of organizers, in advance (Source: Wikipedia). Unconferences came from the realization that "the sum of the expertise of the people in the audience is greater than the sum of expertise of the people on stage" (Source: Dave Winer).
During this talk I would like to share my experience of Java programming within the Database since 1999.
Especially looking at three scenarios of work:
  • Traditional Java Stored Procedure programming
  • HTTP/HTTPS services through the XMLDB Servlet connector
  • Domain Index and pipelined table functions
These scenarios are covered by different Open Source projects used as practical examples: DBPrism CMS, XMLDB Restlet Connector and Lucene Domain Index.
Each project includes several Open Source libraries and many tools for deployment and installation.
I hope I can see you all there next week... Marcelo

Your hosted application is a virus vector

Peter O'Brien - Tue, 2009-10-06 06:02
We are all used to seeing SaaS as an acronym for Software as a Service. It also doubles as another acronym for the non-functional features of a SaaS architecture. When you're providing a hosted service there are a number of non-functional features that need to be considered, such as Scalability, Availability, Agility and Security. I'll flesh out these concepts at a later date as they are all quite broad and encompass a lot of features. Security, for example, includes, but is not limited to, ensuring the integrity of the system and the systems that connect to it.

One threat to system integrity is a computer virus. If your hosted application becomes a vector for the dissemination of viruses to your clients' systems, it becomes a threat to your business, not just your systems. So, how do you test your hosted application to make sure infected files are handled properly?

At the beginning of this year I started working on Oracle CRM Sales Library OnDemand, which permits the uploading of documents to be shared with others in your organisation. It provides a rich set of social networking features that enables users to share, rate, review, and tag PowerPoint presentations and MS Word documents so that everyone in your organization can leverage the most effective sales materials. To share a document, or its contents, it must be uploaded to Sales Library and of course, the upload process involves virus scanning. In fact, virus scanning is the first step. Irrespective of the file type, before any logical validation takes place, such as supported formats, the file is interrogated by an anti-virus system. When a file fails the virus scan, the user has to be informed with a meaningful message. So, while your hosted application does not have to be coded with anti-virus logic, it does need to be able to handle the situation, and report effectively, when a virus is found.

Once you have this logic in place, you have to be able to test that the behaviour of your application is correct. Now, you don't want your development or testing environment to be using files with viruses to test with. In fact, your organisation's anti-malware system would probably prevent you from storing the files anyway. Thankfully, the European Institute for Computer Antivirus Research has a solution. For testing the anti-virus error handling logic in our application we attempt to upload a file with just the EICAR test string as content.


The above is not a virus, but is a test string for anti-malware systems. In unit tests, this can be used as the only content of an InputStream. You should provide a dummy implementation of your virus scan system when unit testing too. Remember, you're not trying to test the anti-virus software. You want to test the behaviour of your application when the anti-virus software reports a virus is found. When testing the deployed Sales Library application, the text file can be saved with a .ppt or .doc extension and then uploaded. The only content of the file should be the test string.

I have yet to find an anti-virus system that does not comply with this EICAR anti-virus test string. A simple test in your own environment will reveal whether your anti-virus software does or not. I suspect that it will, and you can proceed with confidence that your hosted application is handling contaminated files properly for the protection of your and your customers' business.

Silicon Valley Code Camp 09

Peeyush Tugnawat - Sun, 2009-10-04 10:53

Attended Silicon Valley Code Camp this weekend. They had great sessions for people interested in looking at real code. Met some wonderful people really passionate about technologies.

Oracle had some interesting sessions there too.




The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Chapter 3

Geert De Paep - Fri, 2009-10-02 10:36
Scenario 3: Loss of OCRmirror from the OCR MASTER only

This is a follow-up of chapter 2.

As we have seen in scenario 1, the OCR MASTER will update the votecount. Now let’s hide the ocrmirror from only 1 node: the node being the OCR MASTER, while the other node continues to see the ocrmirror. Will CRS get confused about this?

Note: while doing this test, crs is running on both nodes.

In this scenario, node 2 is the OCR MASTER. In fact I haven’t found any command to query who is the master. The only way to find out is to compare the crsd logfiles on all nodes to find the most recent message “I AM THE NEW OCR MASTER”. If anyone knows a better way for determining this, please let me know.

So when hiding the ocrmirror from node 2, we see in its alert file, as expected:

2008-07-23 09:14:53.921
[crsd(8215)]CRS-1006:The OCR location /dev/oracle/ocrmirror is inaccessible. Details in /app/oracle/crs/log/nodeb01/crsd/crsd.log.

and in its logfile:

2008-07-23 09:14:53.920: [ OCROSD][14]utwrite:3: problem writing the buffer 1a33000 buflen 4096 retval -1 phy_offset 143360 retry 0
2008-07-23 09:14:53.920: [ OCROSD][14]utwrite:4: problem writing the buffer errno 5 errstring I/O error
2008-07-23 09:14:53.922: [ OCRRAW][34]propriowv_bootbuf: Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [1/2] to [2/2]

Nothing appears in the logfiles of the non-ocr-master, i.e. node 1. So until now this situation is still identical to scenario 1: it is the ocr master who updates the votecount after losing the other ocr.

The ocrcheck on node 2 (master) now gives:

         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File unavailable
         Cluster registry integrity check succeeded

But the output on node 1 (non-master) is different:

         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File needs to be synchronized with the other device
         Cluster registry integrity check succeeded

This makes sense, because node 2 cannot see the device (device/file unavailable) while node 1 sees both devices with different vote count (2 votes for ocr and 1 vote for ocrmirror, so it asks to resync just as in scenario 1 after the ocrmirror was visible again).

So now I want to try to confuse CRS. I will try to resync the ocrmirror again from node 1. So this will update the vote count of each device to 1. Technically this is possible because node 1 can see both devices, but if it succeeds, node 2 will be left with one ocr device having one vote, and we know from rule 3 that CRS cannot run in that case. Will crs then crash on node 2?…

So we do on node 1:

-bash-3.00# ocrconfig -replace ocrmirror /dev/oracle/ocrmirror

Bad luck, it fails with

PROT-21: Invalid parameter

Very clear, right… The interesting part however appears in the crsd logfile of node 2:

2008-07-23 09:19:34.712: [ OCROSD][32]utdvch:0:failed to open OCR file/disk /dev/oracle/ocrmirror, errno=5, os err string=I/O error
2008-07-23 09:19:34.712: [ OCRRAW][32]dev_replace: master could not verify the new disk (8)
[ OCRSRV][32]proas_replace_disk: Failed in changing configurations in the Master 8

So this teaches us that, when the ocrconfig command is issued on node 1, which is not the master, it is sent to node 2, the master, and node 2 executes it. What does NOT happen is that crs crashes, nor does node 1 take over the mastership from node 2. Nice to know.
Quite illogical, however, is that when issuing the last command above, the crs alert file of node 2 shows

2008-07-23 09:19:34.711
[crsd(8215)]CRS-1007:The OCR/OCR mirror location was replaced by /dev/oracle/ocrmirror.

This is WRONG. The ocrmirror was not replaced. The message should be: “Trying to replace the OCR/OCR mirror location by /dev/oracle/ocrmirror”. Just so you know.
The logs on node 1 are correct. Find the latest log in the “client” directory and read:

Oracle Database 10g CRS Release Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-07-23 09:19:34.694: [ OCRCONF][1]ocrconfig starts…
2008-07-23 09:19:34.716: [ OCRCLI][1]proac_replace_dev:[/dev/oracle/ocrmirror]: Failed. Retval [8]
2008-07-23 09:19:34.716: [ OCRCONF][1]The input OCR device either is identical to the other device or cannot be opened
2008-07-23 09:19:34.716: [ OCRCONF][1]Exiting [status=failed]…

Conclusion: we cannot confuse the crs!

After making the ocrmirror visible again and reissuing the replace command on node 1, we get in the crs logfile on node 2 (master):

2008-07-23 09:27:15.384: [ OCRRAW][32]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (2), total votes (2)
2008-07-23 09:27:15.384: [ OCRRAW][32]propriogid:1: INVALID FORMAT
2008-07-23 09:27:15.516: [ OCRRAW][32]propriowv_bootbuf: Vote information on disk 1 [/dev/oracle/ocrmirror] is adjusted from [0/0] to [1/2]
2008-07-23 09:27:15.517: [ OCRRAW][32]propriowv_bootbuf: Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [2/2] to [1/2]
2008-07-23 09:27:15.518: [ OCRMAS][25]th_master: Deleted ver keys from cache (master)
2008-07-23 09:27:15.628: [ OCRMAS][25]th_master: Deleted ver keys from cache (master)

and the crs logfile of node 1 (non-master):

2008-07-23 09:27:15.543: [ OCRRAW][36]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)
2008-07-23 09:27:15.543: [ OCRRAW][36]proprioo: for disk 1 (/dev/oracle/ocrmirror), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)
2008-07-23 09:27:15.571: [ OCRMAS][25]th_master: Deleted ver keys from cache (non master)
2008-07-23 09:27:15.572: [ OCRMAS][25]th_master: Deleted ver keys from cache (non master)

and the client logfile of node 1:

Oracle Database 10g CRS Release Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-07-23 09:27:15.346: [ OCRCONF][1]ocrconfig starts…
2008-07-23 09:27:15.572: [ OCRCONF][1]Successfully replaced OCR and set block 0
2008-07-23 09:27:15.572: [ OCRCONF][1]Exiting [status=success]…

and all is ok again. Each device has one vote again, and we are back in the ‘normal’ situation.


We cannot confuse the CRS when ocr or ocrmirror disappears from the ocr master node only.

But what if it disappears from the non-master node…? That's stuff for the next chapter.

New release of Lucene Domain Index based on Lucene 2.9.0

Marcelo Ochoa - Wed, 2009-09-30 06:47
A new binary distribution of Lucene Domain Index for Oracle 10g/11g has been released.
Lucene Domain Index is an integration of the Lucene Project running inside the Oracle JVM and integrated into the SQL layer by adding a new index type.
This new version uses the latest Lucene 2.9.0 core libraries and introduces some changes to the API.
Here is the complete list of changes:
  • Tested with Oracle 11gR2, 11gR1 and 10.2 databases.
  • DefaultUserDataStore does SAX parsing to get text nodes and attributes from an XMLType value.
  • A SimpleLRUCache is used to load rowids and their associated Lucene doc ids; this reduces memory consumption when querying very big tables. A new parameter, CachedRowIdSize (10000 by default), has been added to control the size of the LRU cache.
  • Lucene Domain Index core was updated to use TopFieldCollector and to avoid computation time when lscore() is not used.
  • Two new parameters have been added, including NormalizeScore, which controls whether to track the Max Score when querying; both parameters are a consequence of the new Lucene Collector API and boost performance when querying.
  • A table alias L$MT is defined for the master table associated with the index, to be used in complex queries combining columns from the master table and columns from dependent tables.
Full documentation is at Google Doc.
Download binary version for Oracle 10g and 11g.
One of the biggest changes introduced into the Lucene core libraries is in the TopCollector API; it introduces optimizations for when you don't need to compute the score, track the maximum score, or preserve Lucene document ID order.
These changes are reflected in Lucene Domain Index through the usage of the lscore() ancillary operator. For example, given a table:
create table emails (
emailFrom VARCHAR2(256),
emailTo VARCHAR2(256),
subject VARCHAR2(4000),
emailDate DATE,
bodyText CLOB)
and an index created as:
create index emailbodyText on emails(bodyText) indextype is lucene.LuceneIndex
parameters('Analyzer:org.apache.lucene.analysis.StopAnalyzer;ExtraCols:emailDate "emailDate",subject "subject",emailFrom "emailFrom",emailTo "emailTo"');
-- required to Sort by subject
alter index emailbodyText parameters('FormatCols:subject(NOT_ANALYZED),emailFrom(NOT_ANALYZED),emailTo(NOT_ANALYZED)');
-- do not track max score
alter index emailbodyText parameters('NormalizeScore:false');
These queries illustrate when the score is computed and when it is not.
SELECT /*+ DOMAIN_INDEX_SORT */ lscore(1) sc,subject
FROM emails where emailfrom like '%@gmail.com' and
lcontains(bodytext,'subject:lucene injection',1)>0
order by lscore(1) DESC;
  • The Lucene score is computed but the maximum score is not tracked; the result will be returned in descending relevance order (the default Lucene ordering) with no extra effort.
SELECT /*+ DOMAIN_INDEX_SORT */ subject
FROM emails
where lcontains(bodytext,'subject:lucene injection','subject:ASC',1)>0;
  • Lucene score is not computed and obviously the maximum score is not tracked.
Note that if we are querying using the optimizer hint DOMAIN_INDEX_SORT and lcontains(..,'subject:ASC',..), the score value is not relevant; the result will be returned ordered by the column subject, ascending.
A similar result can be obtained using an ORDER BY in SQL, but to do that the RDBMS first collects all the rows that match the lcontains() operator and then does the order by; the difference in this simple test is an optimizer cost of 3 versus 2.
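For comparison, the plain order-by form of that query would be roughly (the select list here is assumed):
SELECT subject
FROM emails
where lcontains(bodytext,'subject:lucene injection',1)>0
order by subject;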
The NormalizeScore parameter is true by default to preserve backward compatibility; applications which assume a score in the range 0..1 will run without any change, but if you want faster response times by avoiding the max score computation you can change it using the alter index parameters DDL command.
To see what the NormalizeScore parameter really means, look at this example using the Wikipedia English dump:
select /*+ FIRST_ROWS DOMAIN_INDEX_SORT */ lscore(1),
extractValue(object_value,'/page/revision/timestamp') "revisionDate"
from pages where lcontains(object_value,'rownum:[1 TO 2] AND sport','revisionDate:ASC',1)>0;
with NormalizeScore:false returns:
1.56775963306427001953125 SCCA 25-FEB-02 PM +00:00
1.79172527790069580078125 Sports utility vehicle 04-APR-02 PM +00:00
with NormalizeScore:true returns:
0.875 SCCA 25-FEB-02 PM +00:00
1 Sports utility vehicle 04-APR-02 PM +00:00
Both results are logical, because we are asking for the result ordered by revisionDate:ASC, but the computed scores are different.
Well, I will be at Oracle Open World 09; if anybody wants to know more about this kind of internal implementation, I will be talking at the Oracle Unconference and obviously we can meet at the OTN Lounge. See you there, Marcelo.

"conn / as sysdba" on Windows

Wijaya Kusumo - Tue, 2009-09-29 04:33
We can do the following when we want to connect to an Oracle db using "conn / as sysdba" in a *nix environment:

% export ORACLE_SID=ORCL
% sqlplus /nolog
% conn / as sysdba

Not a big deal. However, I got the following doing the same on Windows XP:

C:\>set ORACLE_SID=ORCL
C:\>sqlplus /nolog

SQL*Plus: Release - Production on Tue Sep 29 16:08:51 2009
Copyright (c) 1982, 2007, Oracle. All Rights

Module name for logon trigger in 11g R2 AWR report

Virag Sharma - Mon, 2009-09-28 00:53
Today while working on production performance tuning, we came across one difference between the AWR reports of 11g R1 and 11g R2.

In an Oracle 11g R1 AWR report, the logon trigger module name usually comes through like
sqlplus / perl / Mid Tier@server_name_from_it_login

But in an 11g R2 AWR report, the logon trigger module name usually comes through like oraagent.bin@DB_SERVER_WHERE_IT_RUN

The new behaviour looks more logical, because it is the DB server that runs the logon trigger code,
not the user.

# Excerpt from 11g R1 AWR report
# In the example below, app678 is the server name from where the
# user logged in to the database using sqlplus

Module: sqlplus@app678utl (TNS V1-V3)

# Excerpt from 11g R2 AWR report
# In the example below, apps001 is the database server

Module: oraagent.bin@apps001 (TNS V1-V3)

Categories: DBA Blogs

New features in ActiveRecord Oracle enhanced adapter version 1.2.2

Raimonds Simanovskis - Sun, 2009-09-27 16:00

During the last months many new features have been implemented for the ActiveRecord Oracle enhanced adapter, and they are now included in Oracle enhanced adapter version 1.2.2. You can find the full list in the change history file; here I will tell you about the main ones.


Now the Oracle enhanced adapter has improved RDoc documentation for all public methods. So you can go to the RDoc documentation of the installed gem or view the published documentation on-line.

Schema definition

There are many new features in schema definition methods that you can use in migration files:

  • When you use add_index then ActiveRecord automatically generates an index name using the format index_table_name_on_column1_and_column2_…, which previously could cause Oracle errors as Oracle identifiers can be at most 30 characters long. Now default index names are automatically shortened down to 30 or fewer characters (of course you can always also use the :name option to specify a shortened version yourself).
  • Now adapter is ignoring :limit option for :text and :binary columns (as in Oracle you cannot specify limit for CLOB and BLOB data types). Previously it could cause errors if you tried to migrate Rails application from e.g. MySQL where :text and :binary columns could have :limit in schema definition.
  • If you define a :string column with the :limit option then it will define a VARCHAR2 column with its size in characters and not in bytes (this makes a difference if you use UTF-8 with a language where one character might be stored as several bytes). This is the expected behavior from ActiveRecord: you define the maximum string size in UTF-8 characters.
  • Now you can use add_foreign_key and remove_foreign_key to define foreign key constraints in migrations (see RDoc documentation for details). Syntax and some implementation for foreign key definition was taken from the foreigner Rails plugin, as well as some ideas taken from the active_record_oracle_extensions plugin.
  • add_foreign_key definitions will be also extracted in schema.rb by rake db:schema:dump task. Therefore they will be also present in test database when you will recreate it from schema.rb file.
  • Foreign keys are also safe for loading of fixtures (in case you are still using them instead of factories :)). disable_referential_integrity method is implemented for Oracle enhanced adapter which is called by ActiveRecord before loading fixtures and which disables all currently active foreign key constraints during loading of fixtures.
  • You can use add_synonym and remove_synonym to define database synonyms to other tables, views or sequences. add_synonym definitions will also be extracted in schema.rb file.
  • It is possible to create tables with primary key trigger. There will be no difference in terms how you would create new records in such table using ActiveRecord but in case you have also need to do direct INSERTs into the table then it will be easier as you can omit primary key from INSERT statement and primary key trigger will populate it automatically from corresponding sequence.
  • ActiveRecord schema dumper is patched to work correctly when default table prefixes or suffixes are used – they are now removed from schema.rb dump to avoid duplicate prefixes and suffixes when recreating schema from schema.rb.
Legacy schema support

Some features which can support “weird” legacy database schemas:

  • If you are using ActiveRecord with a legacy schema which has tables with triggers that populate primary keys (and which do not follow the default Rails and Oracle enhanced adapter conventions) then you can use set_sequence_name :autogenerated in the class definition to tell the adapter to omit the primary key value from INSERTs.
  • You can use ActiveRecord also with tables that you can access over database link. To do that you need to define local synonym to remote table (and also remote sequence if you want to insert records as well) and then use local synonym in set_table_name in class definition. Previously adapter could not get remote table columns, now it will get table columns also over database link.
    But still you cannot specify remote table (like “table_name@db_link”) directly in set_table_name as table_name will be used as column prefix in generated SQL statements where “@db_link” will not be valid syntax.
    And when you define local synonyms then please use the new add_synonym feature :)
Connection options
  • The cursor_sharing option default value is changed from “similar” to “force” – please read the explanation in the discussion group post about what it is and why the new default value is the recommended choice.
  • When using JRuby and JDBC you can set TNS_ADMIN environment variable to tnsnames.ora directory and then use TNS database alias in database.yml file (specify just database: option and remove host: option). This might be useful for more complex TNS connection definitions, e.g. connection to load balanced Oracle RAC.
  • The adapter will not raise an error if it cannot locate the ojdbc14.jar file. So either put it in $JRUBY_HOME/lib or ensure that it will be loaded by the application server. Would love to hear feedback from people who are using this adapter with JRuby to find out if this behaves well now :)
  • Now you can get PL/SQL debugging information into your ActiveRecord log file. Use dbms_output.put_line in your PL/SQL procedures and functions (that are called from ActiveRecord models) and in your ActiveRecord model use connection.enable_dbms_output and connection.disable_dbms_output around your database calls to get dbms_output logging information into ActiveRecord log file. But please use it just in development environment with debug log level as in production it would add too much overhead for each database call. And this feature also requires that you install ruby-plsql gem.

As you see this probably is the largest “point” release that I have had :) Thanks also to the other contributors whose patches were included in this release.

As always you can install Oracle enhanced adapter on any Ruby platform (Ruby 1.8.6 / 1.8.7 or Ruby 1.9.1 or JRuby) with

gem install activerecord-oracle_enhanced-adapter

If you have any questions please use discussion group or post comments here.

Categories: Development

Getting Started with Oracle Pro*C on Linux (Instant Client Zip Files)

Mark A. Williams - Sat, 2009-09-26 13:57

The Oracle Pro*C Precompiler is a popular choice for developing Oracle applications in the C/C++ languages. The primary advantage of using the precompiler is that it allows you to embed SQL (and PL/SQL) directly into your application. Used in combination with Oracle Instant Client packages you can easily begin to develop applications using the Pro*C Precompiler capabilities. For information on getting started using Pro*C on Windows, please see my previous post entitled "Oracle Pro*C on Windows with Express Edition Products".

Many of the components used in the previous post are also used here and the steps are similar (if not the same). For this post, all components are installed on a single host (oel02) running Oracle Enterprise Linux. The components used are (valid at time of posting):

  • Oracle Database 10g Express Edition (available here)
  • Oracle Enterprise Linux (available here)
  • Oracle Instant Client Packages for Linux x86 (available here)
  •     Instant Client Package - Basic Lite
  •     Instant Client Package - SDK
  •     Instant Client Package - Precompiler
  •     Instant Client Package - SQL*Plus

NOTE: The precompiler uses the "standard" Oracle Technology Network license rather than the Instant Client license. Be sure to review the license!

You can, of course, allow for some variation in the above; however, you may then need to make adjustments to the steps that follow. For example, the database can be on another host and/or platform. In order to keep things as simple as possible I have elected to use a single host for everything. In order for the sample code to work unaltered you should have access to the HR sample schema user (included in Express Edition). If you require character set or language support not provided by the "Basic Lite" Instant Client package you should use the "Basic" package. In addition, version of the components should work as well (though I have not tested it).

Installing the Instant Client Packages

I downloaded the following Instant Client packages using the above provided link to my home directory on oel02:

  • instantclient-basiclite-linux32-
  • instantclient-precomp-linux32-
  • instantclient-sdk-linux32-
  • instantclient-sqlplus-linux32-

Installing each one is simply a matter of unzipping each file:

[markwill@oel02 ~]$ unzip instantclient-basiclite-linux32-
[markwill@oel02 ~]$ unzip instantclient-precomp-linux32-
[markwill@oel02 ~]$ unzip instantclient-sdk-linux32-
[markwill@oel02 ~]$ unzip instantclient-sqlplus-linux32-

The act of unzipping the files will create an "instantclient_11_2" directory in the directory where the files are unzipped (/home/markwill in my case). The complete list of files installed after unzipping each file is as follows:

[markwill@oel02 ~]$ cd instantclient_11_2/
[markwill@oel02 instantclient_11_2]$ pwd
[markwill@oel02 instantclient_11_2]$ find .
[markwill@oel02 instantclient_11_2]$

Configure the Environment

One of the nice things about using Pro*C on Linux (and UNIX) is that we do not normally have to configure the Pro*C configuration file (pcscfg.cfg) as would generally be necessary in Windows. As a convenience for setting environment variables (such as the PATH), in my home directory I have created a file I use to do this (I'm using the bash shell here):

[markwill@oel02 ~]$ cat oic11.env
export ORACLE_BASE=/home/markwill
export ORACLE_HOME=$ORACLE_BASE/instantclient_11_2
export PATH=$ORACLE_HOME:$ORACLE_HOME/sdk:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:~/bin
[markwill@oel02 ~]$

I can then set my environment to use the newly installed Oracle components as follows:

[markwill@oel02 ~]$ . ./oic11.env

Test the Oracle Installation

Since SQL*Plus was installed as part of the above packages, it can be used to quickly and easily test the installation. Here I connect to the Express Edition database as the HR sample schema user using the EZConnect syntax (host:port/Service_Name):

[markwill@oel02 ~]$ sqlplus hr/hr@oel02:1521/XE

SQL*Plus: Release Production on Sat Sep 26 13:27:59 2009

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Express Edition Release - Production

SQL> select user from dual;


SQL> exit
Disconnected from Oracle Database 10g Express Edition Release - Production
[markwill@oel02 ~]$

Create a Test Application

Now that I have tested that the Oracle software is working correctly, I create a simple test application. I call the test application "proctest" and create a new directory to hold the files:

[markwill@oel02 ~]$ mkdir -p Projects/proc/proctest
[markwill@oel02 ~]$ cd Projects/proc/proctest

Here's the content of the proctest.pc source file:

[markwill@oel02 proctest]$ cat proctest.pc
/*
** standard include files for Pro*C application
*/
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlca.h>
#include <sqlda.h>
#include <sqlcpr.h>

#define NAME_LEN 30
#define ERR_LEN 512

/*
** host variables to hold results of query
*/
VARCHAR username[NAME_LEN];
VARCHAR password[NAME_LEN];
VARCHAR database[NAME_LEN];
int     dept;
VARCHAR fname[NAME_LEN];
VARCHAR lname[NAME_LEN];
int     sal;
int     rn;

/*
** indicator variables used to determine null-ness
*/
short dept_ind;
short fname_ind;
short lname_ind;
short sal_ind;
short rn_ind;

/*
** standard Pro*C error handler function
*/
void sql_error(char *msg)
{
  char err_msg[ERR_LEN];
  size_t buf_len, msg_len;

  /* avoid recursing into this handler if the rollback itself fails */
  EXEC SQL WHENEVER SQLERROR CONTINUE;

  if (msg)
  {
    printf("%s\n", msg);
  }

  buf_len = sizeof (err_msg);
  sqlglm(err_msg, &buf_len, &msg_len);
  printf("%.*s", msg_len, err_msg);

  EXEC SQL ROLLBACK WORK RELEASE;

  exit(EXIT_FAILURE);
}

int main(int argc, char *argv[])
{
  /*
  ** setup username, password and database (ezconnect format)
  */
  strncpy((char *) username.arr, "hr", NAME_LEN);
  username.len = (unsigned short) strlen((char *) username.arr);

  strncpy((char *) password.arr, "hr", NAME_LEN);
  password.len = (unsigned short) strlen((char *) password.arr);

  strncpy((char *) database.arr, "oel02:1521/XE", NAME_LEN);
  database.len = (unsigned short) strlen((char *) database.arr);

  /*
  ** register the error handler function
  */
  EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error: \n");

  /*
  ** attempt to connect to the database
  */
  EXEC SQL CONNECT :username IDENTIFIED BY :password USING :database;

  /*
  ** indicate we want to break out of the loop when we get a no data
  ** found message (i.e. at end of fetch)
  */
  EXEC SQL WHENEVER NOT FOUND DO break;

  /*
  ** declare and open cursor using analytic function
  ** to get "top 3" in department by highest salary
  */
  EXEC SQL DECLARE emp_cursor CURSOR FOR
    select   *
    from
    (
      select   department_id,
               first_name,
               last_name,
               salary,
               row_number() over (partition by department_id
                                  order by salary desc) rn
      from     employees
      where    department_id is not null
    )
    where   rn <= 3;

  EXEC SQL OPEN emp_cursor;

  /*
  ** loop through cursor fetching results
  ** and printing as we go
  */
  for (;;)
  {
    EXEC SQL FETCH emp_cursor
      INTO :dept:dept_ind,
           :fname:fname_ind,
           :lname:lname_ind,
           :sal:sal_ind,
           :rn:rn_ind;

    fname.arr[fname.len] = '\0';
    lname.arr[lname.len] = '\0';

    printf("%d, %s %s, %d, %d\n", dept, fname.arr, lname.arr, sal, rn);
  }

  /*
  ** clean-up and exit
  */
  EXEC SQL CLOSE emp_cursor;

  EXEC SQL COMMIT WORK RELEASE;

  return EXIT_SUCCESS;
}
[markwill@oel02 proctest]$

The sample application is pretty bland - the only real point of interest is the use of an analytic function to get the "top 3" ranking of salaries by department. The use of the analytic function will allow us to see how options can be passed to the Pro*C precompiler.

Building the Sample

Rather than craft a custom Makefile or enter the various build/compiler commands by hand, the demonstration Makefile that is provided as part of the precompiler package installed earlier can be used with only small modification. To use this Makefile, copy it into the directory with the source file. On my system this was done as follows:

[markwill@oel02 proctest]$ cp $ORACLE_HOME/sdk/demo/demo_proc_ic.mk .

I use VIM to edit the file. Of course you can use whatever editor you wish.

[markwill@oel02 proctest]$ vi demo_proc_ic.mk

The important part is listed in the "NOTES" section in the Makefile:

#    1. Please change "cc/CC" and the "InstantClient directories" to point to
#       appropiate locations on your machine before using this makefile.

Because the CC and cc entries are already correct, I did not alter them. I did, however, change the Instant Client directory entries as follows:

# InstantClient Directories.
ICSDKHOME=$(ORACLE_HOME)/sdk/
ICLIBHOME=$(ORACLE_HOME)/

By using the ORACLE_HOME environment variable (which is set in my oic11.env file) in the Makefile I do not need to hard-code the actual path.

Before building the sample, take a minute or two to review the Makefile comments. I build the sample using the following command-line:

[markwill@oel02 proctest]$ make -f demo_proc_ic.mk build PROCFLAGS="common_parser=yes" \
> EXE=proctest OBJS="proctest.o"

Notice how PROCFLAGS is used to pass the "common_parser=yes" option to the proc binary (i.e. the Pro*C program itself). The EXE option determines the name of the binary executable produced and the OBJS option determines what object files are needed. In this case the options are simple, but larger and more complex projects will likely use more than a single object file and possibly other Pro*C options.

Once the build has completed, test the application:

[markwill@oel02 proctest]$ ./proctest
10, Jennifer Whalen, 4400, 1
20, Michael Hartstein, 13000, 1
20, Pat Fay, 6000, 2

[ snip ]

100, Nancy Greenberg, 12000, 1
100, Daniel Faviet, 9000, 2
100, John Chen, 8200, 3
110, Shelley Higgins, 12000, 1
110, William Gietz, 8300, 2
[markwill@oel02 proctest]$

One final addition I make to the Makefile is creating a new target called "dust". A Makefile traditionally has a "clean" target which can be used to remove various (non-source!) files. These files are generally used during the build process, but are not needed by the final executable. The "clean" target also removes the executable, however. Since I like the idea of being able to clean the no longer necessary files, but want to keep the executable, I add a target called "dust". Here's the "end" of the Makefile with the "dust" target added:

# Clean up all executables, *.o and generated *.c files
clean: $(CLNCACHE)
        $(REMOVE) $(PROCDEMO) $(PROCDEMO).o $(PROCDEMO).c $(PROCDEMO).lis

dust: $(CLNCACHE)
        $(REMOVE) $(PROCDEMO).o $(PROCDEMO).c $(PROCDEMO).lis

$(CLNCACHE):
        $(REMOVE) $(CACHEDIR)
        $(REMOVE) $(ICLIBHOME)libclntsh$(SO_EXT)

Of course, adding this target is purely optional; however, after building the sample, I then "dust" the directory:

[markwill@oel02 proctest]$ make -f demo_proc_ic.mk dust PROCDEMO=proctest
rm -rf SunWS_cachea
rm -rf /home/markwill/instantclient_11_2/libclntsh.so
rm -rf proctest.o proctest.c proctest.lis
[markwill@oel02 proctest]$

This then leaves the directory a bit cleaner with only the Makefile, source file, and executable:

[markwill@oel02 proctest]$ ls -al
total 28
drwxrwxr-x  2 markwill markwill 4096 Sep 26 14:31 .
drwxrwxr-x  3 markwill markwill 4096 Sep 26 13:40 ..
-rw-rw-r--  1 markwill markwill 3617 Sep 26 14:31 demo_proc_ic.mk
-rwxrwxr-x  1 markwill markwill 9798 Sep 26 14:29 proctest
-rw-rw-r--  1 markwill markwill 2900 Sep 26 14:14 proctest.pc
[markwill@oel02 proctest]$

Sanity check your ORM

Peter O'Brien - Fri, 2009-09-25 15:54
Have you ever had your application completely crash under load and wondered why something that just worked fine yesterday can drive you mental today?

Most of the time an Object-Relational Mapping package is the right solution for your applications to interact with a database. It does not matter if it is free, commercial or home grown; you will generally find that it reduces the amount of code. Moreover, the application code is better designed and easier to test. Depending on your chosen solution, you may even see performance improvements with caching and database specific optimisations.

However, you may also find substantial performance degradations when the data volume increases. It is important to remember that no matter how sophisticated your ORM package is, it may still produce inefficient SQL statements. If you had SQL statements embedded in your code you would review them as part of the code review. So if you are going to hand over a critical operation such as the generation of SQL statements, then the generated statements need to be reviewed.

Your application can become a performance crime scene. What can you do to prevent it? Early analysis.

The execution plan, sometimes called query plan or explain plan, for an SQL statement will provide insight into how the database will act on the SQL statement. It is available in most databases and is a useful tool to determine the efficiency of the statement. For example, an execution plan can indicate what indexes would be used. When code is being reviewed, the execution plans for SQL statements should also be reviewed, in much the same way that the running test classes would be reviewed for correctness.
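
In Oracle, for example, capturing a plan for review is as simple as something like the following (the statement itself is just an illustration):
explain plan for
select * from documents where type = 'Cheque';

select * from table(dbms_xplan.display);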

I learned this the hard way when I found a delete operation took too long and eventually timed out in a test environment that had a high volume of records. A review of the code showed a nice, clean object oriented design and a very simple, easy to understand ORM configuration. The application in question was used to manage financial documents. It had the concept of a ChequeBook, one or more Cheques in a ChequeBook, but also that both a ChequeBook and Cheque were a Document. The class diagram in this article illustrates that relationship. The relationship was also reflected in the database with corresponding tables. The 'type' attribute was used as a discriminator for the two concrete objects: ChequeBook and Cheque.

The problem arose when attempting to delete a ChequeBook. The ORM could delete the Cheque records using the parentContentId easily enough, but since there was no corresponding column on the Documents table, the ORM then produced this innocuous looking statement.
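
The statement followed this general pattern (a sketch only; the actual table and column names are assumptions):
delete from documents
where  type = 'Cheque'
and    content_id not in (select content_id from cheques);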


It appears quite innocent at first; in fact, it might even be thought quite a clever attempt to clean up all records on the DOCUMENTS table that should have a corresponding CHEQUES record but don't. In the development database with only a few hundred CHEQUES, this performs OK. Push that up to a few thousand and you soon realise the problem with doing this blanket select on CHEQUES, even if it does use an index.

In this case I had to code around the ORM with a hand crafted SQL statement. I could have just invoked remove() on each Cheque entity, but for large ChequeBooks this would have produced a lot of SQL statements. Another approach could have been to use a stored procedure but that would not have been easily portable.
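The hand-crafted statement is not shown in the post either, but it would have been along these lines, driving the delete off the ChequeBook being removed (the bind variable name is invented for this sketch):

DELETE FROM documents d
 WHERE d.id IN (SELECT c.id
                  FROM cheques c
                 WHERE c.parentContentId = :chequeBookId);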

The key lesson learned, though, was to sanity check the ORM before it drives you crazy with its unexpected eccentricities. Check your ORM documentation to see how you can get profile details of the SQL statements it generates. Go to your database server and get reports on SQL execution plans. The ORM is effectively another developer on your team generating SQL statements. Know and understand what it is producing.

Oracle R12 Payables Discount

V.J. Jain - Thu, 2009-09-24 14:54

One user posted a question on OTN forums for Payables – http://forums.oracle.com/forums/thread.jspa?messageID=3785430#3785430

Hi all,

I have a scenario involving a discount on the purchase of an item. My invoice, as received from the supplier, is as follows:

Item xxx 100
Discount (10)

Total 90
VAT (15%) 13.50

Net 103.50

When I create an invoice in AP, I add a line with amount $100 and assign a tax of 15%.
The above creates a total of $115, and I add this total as the header amount of the invoice.
AND in the Scheduled Payment tab at line level, I add a discount amount of $10 for the “First Discount” info.

When I make payment for the above invoice, a pop-up appears indicating discount will be applied. But the Payment amount is $105 (Invoice header amount minus discount).

Is this the only way to treat discount in AP?
I need the payment amount to be $103.50 after applying discount.

My response was:

If I understand correctly, it seems like you are manually calculating the tax and entering it to the invoice amount. If you configure your taxes in Oracle, and then enter the invoice for the actual amount, then Oracle will know the true invoice amount which is the discount basis and create the tax line according to the configurations (the way you want it). What you are doing now is adding the tax outside Apps configuration and then expecting Apps to calculate the appropriate discount. If you want to use the standard functionality, you need to set up all dependent parts of that functionality. If tax calculation is a dependency of the discount basis, which is certainly a dependency of discount payment terms, then you need to set it up properly.

This really demonstrates one major theme that I consistently encounter with Oracle Apps consultants.  There is a lack of true understanding of the fundamentals behind the applications.  I’m not saying that is the case for this particular user because I don’t know him/her but it is the case for the majority of consultants that are staffed into most projects.

Everything in the ERP system is part of a business flow.  For a company that sells products, it is something similar to
- Leads to Quotes to Sales to [PO|Manufacturing to] Shipping to Invoice to Receipt to Cash to Financials

For Payables it is something like
- Purchasing to Receipt to Accrual to Invoice to Payment

For Discount Payment Term it is going to be
- Invoice /w Discount Terms to Payment with Discount Taken

Most vendors are going to exclude tax and freight from the discount allowed for prompt payment of the invoice.  So obviously, if you want Oracle to calculate the amount to take on an invoice with discount terms, it is going to need to know what portion of the invoice is for freight and tax.  Expecting the discount to be applied to only the $100 portion of a $115 invoice ($15 of which is tax) when the invoice is entered as a flat $115 is not going to work.  Have you ever heard of a tax line?
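To put numbers on it (a quick sketch using the figures from the question, not output from the application): the supplier applies the 15% VAT to the discounted basis of $90, while the manually entered invoice applies the discount to a header amount that already includes tax.

select (100 - 10)        as discount_basis,  -- 90.00
       (100 - 10) * 0.15 as vat_on_basis,    -- 13.50
       (100 - 10) * 1.15 as supplier_net,    -- 103.50, the payment the poster expects
       100 * 1.15        as entered_header,  -- 115.00, tax calculated by hand
       100 * 1.15 - 10   as payment_taken    -- 105.00, header minus discount, what AP pays
  from dual;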

R12 does complicate the management of taxes with the E-Business Tax module.  I know of several consultants who are “experts” in R12 E-Tax and are getting great bill rates as they help setup the new module.

I learned R12 EBTax, but not without several weeks of continuous headaches.  I won’t take projects to set up R12 EBTax because it is tedious.  Regardless, it is important for Oracle Apps consultants to be aware of how taxes work in R12, or you run into situations where things are not working the way you want and you don’t know why.  To someone who understands the applications, it just seems like you lack common sense.

A trivial index demo

Claudia Zeiler - Wed, 2009-09-23 16:20
Today I was sitting in a public library, minding my own business, when a man whom I had never seen before leaned over and asked me, "Is it true that you can drop an index created explicitly, but not one created implicitly as part of a constraint?" My initial, unspoken reaction was: implicit/explicit? Please speak English. Once I sorted that out, there were two further reactions, also unspoken: "Duh, that's pretty obvious," and "Why me?"

"It is an Ancient DBA and he stoppeth one of three."

I demonstrated the matter a bit to him.

Logged on as scott, I create a play table:
SQL> create table tmp(f1 number(4), f2 number(4), f3 number(4));

Table created.

SQL> insert into tmp values (1, 2, 3);

1 row created.

SQL> create index tmp_f2 on tmp(f2);

Index created.

SQL> drop index tmp_f2;

Index dropped.

All straightforward. The index was created 'explicitly' and no constraint depends on it, so I can drop it without a problem.

Now to add a constraint, and an 'implicit' index.

SQL> alter table tmp modify f2 unique;

Table altered.



SQL> select index_name from user_indexes where table_name = 'TMP';

INDEX_NAME
------------------------------
SYS_C0011482

The constraint has been created, as has an index:

SQL> drop index SYS_C0011482;
drop index SYS_C0011482
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

And, as expected, the index cannot be dropped since it is used to enforce the constraint.

It is all very trivial.

What did get a little interesting was that I was able to enter multiple rows of nulls in a field with a 'unique' constraint. I need to review the properties of null.
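A quick follow-up sketch (my addition to the demo, using the same play table): because NULL is never equal to another NULL, and entirely-null keys are not stored in a single-column B-tree index, Oracle's unique constraint happily accepts any number of rows with a NULL in the constrained column.

SQL> insert into tmp values (2, null, 4);

1 row created.

SQL> insert into tmp values (3, null, 5);

1 row created.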

And the real question that was never answered

"Now wherefore stopp'st thou me?"

Concepts Guide: 7/27 - Data Dictionary

Charles Schultz - Wed, 2009-09-23 09:36
I still cannot believe "someone" had the gall to reduce the data dictionary to 5 pages. =) And one blank page to rub salt into the wound.

I had an interesting challenge when trying to explain the dictionary to a colleague. So we all have a general idea of what the data dictionary is and what it does (just read the first page of this chapter in the docs). When does it get built? Interesting question. Usually we say that catalog.sql builds the dictionary, or even bsq.sql. However, did you realize that 601 fixed tables exist in memory just from starting the instance (NOTE: no database!)? Try this one on for size:
/u01/app/oracle/product/ echo "db_name = SAC" > initSAC.ora
/u01/app/oracle/product/ . oraenv
ORACLE_HOME = [/u01/app/oracle] ? /u01/app/oracle/product/
/u01/app/oracle/product/ sqlplus / as sysdba

SQL*Plus: Release - Production on Wed Sep 23 09:32:35 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SAC_SQL > startup nomount
ORACLE instance started.

Total System Global Area 419430400 bytes
Fixed Size 2286584 bytes
Variable Size 114105352 bytes
Database Buffers 268435456 bytes
Redo Buffers 34603008 bytes

SAC_SQL > select count(*) from X$KQFTA;


If you want all the names of those fixed tables, select KQFTANAM.
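For example (a quick sketch; the exact names and count will vary by version and platform):

SQL> select kqftanam from X$KQFTA where rownum <= 10;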

So, even without running any scripts, we have "data dictionary" objects residing in memory. This helps me to understand what the dictionary is and what it does. For an analogy, it is like the overhead address space a program uses to store all its global variables. In this case, it just happens to be organized into relational tables with gobbledygook table/column names and normalized up the wazoo! =)

I have to confess, I was quite disappointed with this chapter. Of the 5 pages, the first 4 tell you not to change anything and give a brief overview of the differences between the USER, ALL and DBA views. The last page starts to get just a tiny bit juicy and at least mentions dynamic performance views, but then, like a cheap battery, dies out too soon.

There is some really cool stuff locked away in the x$ tables, let alone the rest of the db metadata. For a quick glimpse under the covers, I found eygle's list quite informative and helpful; it is allegedly a copy of Metalink note 22241.1, which is supposed to be internal only (I cannot see it at this point in time); it merely lists the fixed tables to demonstrate the naming convention, but it also gives us an idea how things are organized and what types of things the kernel tracks. I hope one day Oracle wakes up and realizes the advantage of sharing more information like this.

Yong Huang also has a listing.

Starting my blog, finally

V.J. Jain - Mon, 2009-09-21 16:06

I’ve been wanting to create a blog for a long time now and I’m finally getting around to it.  Originally, I was hoping that the Oracle Forums would be an interactive resource but with almost 20,000 registered members, it is too difficult to moderate.

So here it is, my new blog.  Feel free to join and comment!

- V.J. Jain

A new decade for Enterprise Java Applications

Peter O'Brien - Mon, 2009-09-21 14:54
We are coming up on the 10th anniversary of the EJB 1.1 final release. When you take a look at the J2EE specification and the Enterprise Java application programming models and frameworks that are out there now, you begin to appreciate what an important step the EJB specification was. Not so much for what you can do with it today, but for how it introduced a paradigm shift.

A more recent paradigm shift is cloud computing, which effectively puts the application container in a container. One to watch is CloudFoundry, which provides a pay-as-you-go platform using the SpringSource application server, a Spring-based Java application server built on Apache Tomcat. CloudFoundry makes this platform available on Amazon Web Services infrastructure. Although still in beta, it is available now.

