Feed aggregator

Fix for Rails 2.0 on Oracle with database session store

Raimonds Simanovskis - Mon, 2008-01-07 16:00

As I started to explore Rails 2.0, I tried to migrate to it an application that uses Oracle as its database. Here are some initial tips for Rails 2.0 on Oracle that I found along the way.

The Oracle adapter is no longer included in Rails 2.0, so you need to install it separately. It is also not yet published on gems.rubyforge.org, therefore you need to install it with:

sudo gem install activerecord-oracle-adapter --source http://gems.rubyonrails.org

The next issue that you will hit is the error message “select_rows is an abstract method”. You can find more information about it in this ticket. As suggested there, I fixed this issue with the following Oracle adapter patch, which I call from the environment.rb file:

module ActiveRecord
  module ConnectionAdapters
    class OracleAdapter
      # Supply the select_rows implementation that Rails 2.0 expects:
      # reuse select and return just the value arrays for each row.
      def select_rows(sql, name = nil)
        result = select(sql, name)
        result.map { |v| v.values }
      end
    end
  end
end

Then I ran into very strange behaviour: my Rails application was not working with the database session store – no session data was saved. When I changed the session store to cookies, everything worked fine.

When I continued investigating, I found that for each new session a new row was created in the “sessions” table, but no session data was saved in the “data” column. The “data” column is a text field, which maps to the CLOB data type in Oracle, so the Oracle adapter does not write it with INSERT or UPDATE statements but with a special “write_lobs” after_save callback (this hack is necessary because Oracle limits literal constants in SQL statements to 4000 characters). And then I found that the class CGI::Session::ActiveRecordStore::Session (which is responsible for the database session store) does not have this write_lobs after_save callback. Why?

As I understand it, the ActiveRecord class definition sequence has changed in Rails 2.0: now CGI::Session::ActiveRecordStore::Session is defined first (inheriting from ActiveRecord::Base), and only afterwards is OracleAdapter loaded, which adds the write_lobs callback to ActiveRecord::Base – but at that point the callback is not added to the already defined Session class. In Rails 1.2 OracleAdapter was loaded together with ActiveRecord, before the Session class definition, so there was no such issue.

So currently I solved this issue with simple patch in environment.rb file:

# Re-register the CLOB-writing callback that the Session class missed
# because it was defined before OracleAdapter was loaded.
class CGI::Session::ActiveRecordStore::Session
  after_save :write_lobs
end

Of course it would be nicer to force OracleAdapter to be loaded before the CGI::Session::ActiveRecordStore::Session definition (when ActionPack is loaded). If somebody knows how to do that, please write a comment :)
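One possibility I have not verified (the require path is an assumption about how the activerecord-oracle-adapter gem lays out its files) would be to require the adapter explicitly at the very top of environment.rb, before the Rails initializer defines the session class:

# Untested sketch: load the Oracle adapter - and thus register its
# write_lobs after_save callback - before ActionPack defines the
# Session class. The require path is an assumption.
require 'rubygems'
require 'active_record'
require 'active_record/connection_adapters/oracle_adapter'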

Categories: Development

Where has all my memory gone?

Christian Bilien - Sun, 2008-01-06 14:09
A while ago, I came across an interesting case of memory starvation on an Oracle DB server running Solaris 8 that was, for once, not directly related to the SGA or the PGA. The problem showed up from a user perspective as temporary “hangs” that only seemed to happen at a specific time of the […]
Categories: DBA Blogs

Happy New Year 2008

Peter Khos - Fri, 2008-01-04 22:57
I hope that 2007 has been good to you all, both in your professional and personal lives. 2007 has been eventful, with lots of stuff happening. We are now just about 2 years away from the 2010 Olympics (Feb 2010), and construction of the various venues and transportation systems is chugging along at full steam. In my own suburb, Richmond, we have the Canada Line (a light rail system linking the […]

Unconventional Oracle Installs, part One

Moans Nogood - Wed, 2008-01-02 18:24
You have to watch this:

http://www.youtube.com/watch?v=CHzV4LZnvHc

We'll follow it up with a few other initiatives in order to help the big companies bring down the time spent to install Oracle from, say, 50 hours to one or two.

Perrow and Normal Accidents

Moans Nogood - Wed, 2008-01-02 18:21
While reading the book 'Deep Survival' (most kindly given to me at the UKOUG conference in Birmingham by Sir Graham Wood of Oracle after the fire in my house) I happened on a description on page 107 of a book called 'Normal Accidents' by a fellow named Perrow (get it? per row - a perfect name for database nerds).

Perrow's thesis is that in any tightly coupled system - in which unexpected interactions can happen - accidents WILL happen, and they're NORMAL.

Also, he states that technological steps taken to remedy this will just make matters worse.

Perrow and IT systems
=====================
I have freely translated Perrow's thoughts into the following:

IT systems are tightly coupled. A change - a patch, a new application, or an upgrade - to a layer in the stack can cause accidents to happen, because they generate unexpected interactions between the components of the system.

This is normal and expected behaviour, and any technological gear added to the technology stack in order to minimize this risk will make the system more complex and therefore more prone to new accidents.

For instance, I find that two of the most complexing things you can do to an IT system are clusters and SANs.

These impressive technologies are always added in order to make systems more available and guard against unexpected accidents.

Hence, they will, in and by themselves, guarantee other normal accidents to happen to the system.

Complexing and de-complexing IT systems
=======================================
So you could say that it's a question of complexing or de-complexing IT systems.

I have found four situations that can complex IT systems (I'm being a bit ironic here):

1. To cover yourself (politics).
2. Exploration.
3. SMS decisions.
4. Architects.

1. Reason One: To cover yourself (politics)
===========================================
You might want to complex systems in order to satisfy various parties that you depend on or who insist on buying certain things they've heard about at vendor gatherings:

"Yes, we've done everything humanely possible, including buying state-of-the-art technology from leading vendors and asking independant experts to verify our setup".

This is known as CYB (Cover Your Behind).

2. Reason Two: Exploration
==========================
Ah, the urge to explore unknown territories and boldly go where no man has ever gone before...

Because you can.

The heightened awareness thus enabled might be A Good Thing for your system and your customers.

It could also create situations that you and others find way too interesting.

Reason Two is often done by men, because we love to do stupid or dangerous things.

3. Reason Three: SMS decisions
==============================
A third reason for complexing IT systems could be pure ignorance in what is commonly referred to as Suit Meets Suit (SMS) decisions - where a person of power from the vendor side with no technical insight talks to a person of power from the customer side with no technical insight.

These SMS situations tend to cause considerable increases in the GNP (just like road accidents and fires) of any country involved, because of all the - mostly unnecessary - work that follows.

The costs to humans, systems and users can be enormous. Economists tend to love it.

4. Reason Four: Architects
==========================
A fourth reason for complexing IT systems can be architects. Don't get me wrong: There are many good IT architects. The very best ones, though, tend not to call themselves architects.

One of my dear friends once stated that an architect is often a developer that can't be used as a developer any more. Very funny.

However, what I have witnessed myself is that the combination of getting further away from the technical reality and getting closer to the management levels (the C class, as it were) tends to make some architects less good at making architectural decisions after a while.

That's where the vendors get their chance of selling the latest and greatest and thus complexing new and upcoming systems.


Summary: The end of reasoning
=============================
Four reasons must be enough. There are probably more, but I cannot think of them right now.

Anyway, imagine what savings in costs and worries you can obtain by moving just a notch down that steep slope of complexity in your system.

You might be able to de-complex your system to a degree where it becomes
absolutely rock solid and enormously available.

That should be our goal in the years to come: to help our customers de-complex their systems, while of course trying everything we can to support those who choose to complex theirs.

Two new angles on tuning/optimising Oracle

Moans Nogood - Wed, 2008-01-02 18:00
Now and then some new angles and thoughts emerge in a field where a lot of people think there's not much new to be said.

Two examples:

1. James Morle told me a while ago, that he thinks all performance problems relate to skew, to latency, or to both. It's brilliant, I think. I hope James will one day write about it. He's a damn fine writer when he gets down to it.

2. This one from Dan Fink. Impressive piece, I think. Enjoy it.

http://optimaldba.blogspot.com/2007/12/how-useful-is-wait-interface.html

When I emailed Dan and told him I admired his angle on this, he responded:

"I think it is a matter of keeping an open mind and knowing that you have friends and colleagues who are open to new ideas. Support is absolutely critical, even when you don't necessarily agree with what is being said. That keeps the flow of information open.

I shall never forget walking into a conference room. In big letters on one of the whiteboards were the words "THINK OUTSIDE THE BOX". For emphasis...someone had drawn a nice large box around them! "

I like that one :-)).

Using NFS partitions on AIX

Mark Vakoc - Wed, 2008-01-02 13:29
Unless you are running an E1 enterprise server on an NFS partition on the AIX platform, you can probably skip this posting.

Still here? OK. This post outlines a potential problem with changing the tools release of an enterprise server when it is running on an NFS partition on AIX. It pertains to that combination only.

The AIX operating system has a feature that keeps shared libraries in memory even when the program that loads them terminates. Subsequent loads of that or any other program using the same library would be faster because the library is already in memory.

This behavior can cause some problems when the shared library is located on an NFS partition. Consider the case when Server Manager is performing a tools change for an enterprise server. The management agent will 1) stop the enterprise server, 2) delete the existing tools release, 3) extract and replace it with the new tools release.

So where's the problem? After stopping the enterprise server, the E1 shared libraries may remain cached by AIX even though no active processes are using them: AIX maintains an open file handle to the shared library. On UNIX-based platforms you are able to delete a file that is open by another process; although it will immediately disappear from the file system directory listings, it will not actually be removed until the last handle to that file is closed. This behavior is implemented within the filesystem.

The remote nature of the NFS file system requires a special implementation. When an open file is deleted on an NFS partition it will appear as a .nfsNNNN file in the same directory, where NNNN is a randomly assigned number. This file cannot be removed directly; it will disappear as soon as the last process holding the originally deleted file closes its handle.

So what does this have to do with E1 and Server Manager? The second step of performing a tools release change involves deleting the existing tools release. The caching of the shared libraries, and thus the presence of the .nfsNNNN files in the $EVRHOME/system/lib directories, will prevent the removal of the system directory. This will cause the tools release change to fail, and the previous tools release will be restored. Even root cannot delete these .nfs files directly.

What can be done is to stop the enterprise server using Server Manager, then sign on as root and run the command 'slibclean'. This instructs AIX to unload/uncache any shared libraries that are no longer being used by an active process. You may then change the tools release using Server Manager without any issue.
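To make the fix concrete, here is a minimal sketch of the manual sequence, assuming the tools release lives under $EVRHOME; slibclean itself takes no arguments, and the check afterwards is just an illustrative sanity test:

# Stop the enterprise server from Server Manager first, then as root:
slibclean
# Any cached-library .nfs files under the tools release should now be gone
# (the path is illustrative):
ls $EVRHOME/system/lib/.nfs* 2>/dev/null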

Solaris Express on a Toshiba Satellite Pro A200

Hampus Linden - Tue, 2008-01-01 05:25
I bought myself one of those cheap laptops the other month. I needed a small machine for testing, and since laptops are just as cheap as (if not cheaper than) desktops these days, I got a laptop.
The machine came with Vista but I wanted to triple boot Vista, Ubuntu and Solaris Express Community Edition.

  • Use diskmgmt.msc in Vista to shrink the partition the machine came with; Windows can do this natively, so there is no need to use Partition Magic or similar tools. Create at least three new partitions: one for Solaris, one for Linux, and one for Linux swap.
  • Secondly, install Solaris: boot off the CD and go through the basic installer. The widescreen resolution worked out of the box (as usual). Do a full install; spending time "fixing" a smaller install is just annoying. Solaris will install its grub boot loader on both the MBR and the superblock (on the Solaris partition). It probably makes sense to leave a large slice unused so it can be used with ZFS after the installation is done.
  • Install Ubuntu. Nothing simpler than that.
  • Edit Ubuntu's grub menu config (/boot/grub/menu.lst) to include Solaris. Simply point it to the Solaris partition (hd0,2 for me) by adding these lines at the end of the file:
    title Solaris
    root (hd0,2)
    chainloader +1
Done!

I had to install the gani NIC driver in Solaris to get the Ethernet card working and the Open Sound System sound card driver to get sound working.
The Atheros WiFi card is supposed to be supported but I couldn't get it to work, even after adding the pci device alias to the driver. I'll post an update if I get it to work.

Google - just another big, dumb, brutal organisation?

Moans Nogood - Mon, 2007-12-31 04:42
I found this article in The Economist interesting:

http://economist.com/business/displaystory.cfm?story_id=10328123

There's some truth there, I think. Google is buying stuff (like Blogger), is making pirate copies (sorry: clones) of other companies' software, and in general is trying to be as dominant and brutal as Microsoft, IBM, Oracle and the others. Yawn.

What the Hell happened to "Don't be evil"? Why did Google sell out to the Chinese horror regime?

They're just after the money and the happiness of shareholders. Boring stuff.

Mogens

R12 Global Deployment functionality

RameshKumar Shanmugam - Sat, 2007-12-29 17:04
It is a common operational process in any industry to move people around, transfer employees on a temporary basis for a particular project or assignment, or transfer them permanently to a different country.

This functionality was available in 11i for HR professionals to perform manually: if cross business group is enabled, they can update the organization and location in the assignment form; alternatively, they can terminate the employee and rehire them in the new business group.

Now in R12 this has become standard functionality in the Manager Self Service responsibility, under the 'Transfer' function:

Manager Self Service > Transfer



Select the employee you want to transfer and follow the wizard, which takes you through the complete process: salary change, new direct reports, location change, timecard approver, work schedule, etc. Finally you will reach a review summary page where you can review the change and submit it for approval.

Note: if you are an Oracle Payroll customer, you need to take the necessary actions for payroll taxation when changing the work location.
Try this out!!!
Categories: APPS Blogs

Oracle 11g NF Database Replay

Virag Sharma - Thu, 2007-12-27 22:10

Oracle 11g New Feature Database Replay

“Simulating production load is not possible” – you might have heard these words.

On one project, management has wanted for the last two years to migrate from a UNIX system to a Linux system (RAC), but they are still testing because they are not sure whether the Linux boxes will be able to handle the load. They have put a lot of effort and time into load testing, functional testing, etc., but still have not gained confidence.

After using this 11g feature, they will be able to migrate to Linux with full confidence, knowing how their system will behave after the migration/upgrade.

As per the datasheet on OTN:

Database Replay workload capture of external clients is performed at the database server level. Therefore, Database Replay can be used to assess the impact of any system change below the database tier level, such as:

  • Database upgrades, patches, parameter, schema changes, etc.
  • Configuration changes such as conversion from a single instance to RAC etc.
  • Storage, network, interconnect changes
  • Operating system, hardware migrations, patches, upgrades, parameter changes

DB Replay does this by capturing a workload on the production system with negligible performance overhead (my observation is 2-5% more CPU usage) and replaying it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. This makes possible a complete assessment of the impact of the change, including undesired results, new contention points, or performance regressions. Extensive analysis and reporting (AWR, ADDM and DB Replay reports) is provided to help identify any potential problems, such as new errors encountered and performance divergences. The ability to accurately capture the production workload results in significant cost and time savings, since it completely eliminates the need to develop simulation workloads or scripts. As a result, realistic testing of even complex applications, which previously took several months using load simulation tools/scripts, can now be accomplished in at most a few days with Database Replay and with minimal effort. Thus, using Database Replay, businesses can incur much lower costs and yet have a high degree of confidence in the overall success of the system change, and can significantly reduce production deployment risk.

Steps for Database Replay

  1. Workload Capture

Database calls are tracked and stored in binary files, called capture files, on the file system. These files contain all relevant information about the calls needed for replay, such as SQL text, bind values, wall clock time, SCN, etc.

1) Back up the production database #

2) Add/remove filters (if you want any)
By default, all user sessions are recorded during workload capture. You can use workload filters to specify which user sessions to include in or exclude from the workload. Inclusion filters enable you to specify user sessions that will be captured in the workload. This is useful if you want to capture only a subset of the database workload.
For example, suppose we don't want to capture load for the SCOTT user:

BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER (
    fname      => 'user_scott',
    fattribute => 'USER',
    fvalue     => 'SCOTT');
END;
/

Here the filter name "user_scott" is a user-defined name.
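If a filter turns out to be wrong, it can be dropped again by name; a minimal sketch (my own addition), using the filter defined above:

BEGIN
  DBMS_WORKLOAD_CAPTURE.DELETE_FILTER (fname => 'user_scott');
END;
/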

3) Create a directory, making sure there is enough space in it:

CREATE OR REPLACE DIRECTORY db_replay_dir
AS '/u04/oraout/test/db-replay-capture';

Remember, in the case of Oracle RAC the directory must be on shared disk; otherwise you will get the following error:

SQL> l
  1  BEGIN
  2    DBMS_WORKLOAD_CAPTURE.start_capture (name => 'capture_testing', dir => 'DB_REPLAY_DIR');
  3  END;
  4*

SQL> /
BEGIN
*
ERROR at line 1:
ORA-15505: cannot start workload capture because instance 2 encountered errors
while accessing directory "/u04/oraout/test/db-replay-capture"
ORA-06512: at "SYS.DBMS_WORKLOAD_CAPTURE", line 799
ORA-06512: at line 2



4) Capture the workload

BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (
    name     => 'capture_testing',
    dir      => 'DB_REPLAY_DIR',
    duration => NULL);
END;
/

duration => NULL means the capture will run until we stop it with the manual command shown below. Duration is an optional input that specifies the capture duration in seconds; the default is NULL.

5) Finish the capture

BEGIN
  DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/

# Take the backup of production before the load capture, so that we can restore the database on the test environment and run the replay at the same SCN level, to minimize data divergence.
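As a side note (my own addition, sketched from the documented DBMS_WORKLOAD_CAPTURE API; the capture id of 1 is illustrative), once the capture has finished you can look it up and export the AWR snapshots covering the capture window, which helps with the performance comparison later:

SELECT id, name, status FROM dba_workload_captures;

BEGIN
  DBMS_WORKLOAD_CAPTURE.EXPORT_AWR (capture_id => 1);
END;
/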

Note, as per the Oracle datasheet:

A workload that has been captured on Oracle Database release 10.2.0.4 and higher can also be replayed on an Oracle Database 11g release. So I think it simply means the NEW patch set, 10.2.0.4, will support the capture process. Does that mean the current patch set (10.2.0.3) does not support load capture?

2. Workload Processing

Once the workload has been captured, the information in the capture files has to be processed, preferably on the test system, because processing is a very resource-intensive job. This processing transforms the captured data and creates all the metadata needed for replaying the workload.

exec DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_DIR');

  3. Workload Replay

1) Restore the database backup taken in step one to the test system and start the database

2) Initialize

BEGIN
  DBMS_WORKLOAD_REPLAY.initialize_replay (
    replay_name => 'TEST_REPLAY',
    replay_dir  => 'DB_REPLAY_DIR');
END;
/

3) Prepare

exec DBMS_WORKLOAD_REPLAY.prepare_replay(synchronization => TRUE)
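Between prepare and starting the clients, one more step can be needed; this is my own hedged addition based on the documented DBMS_WORKLOAD_REPLAY API, with an illustrative connection_id and connect string. If the test system's connect strings differ from production, the captured connections can be remapped:

BEGIN
  DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION (
    connection_id     => 1,
    replay_connection => 'testhost:1521/testdb');
END;
/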

4) Start clients. First run wrc in calibrate mode, which reports how many client processes the captured workload needs:

$ wrc mode=calibrate replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:41 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Report for Workload in: /u03/oradata/test/db-replay-capture
-----------------------

Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).

Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 7

Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE




Then start the replay client(s) in replay mode:

$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (00:31:52)

5) Start Replay

BEGIN
DBMS_WORKLOAD_REPLAY.start_replay;
END;
/



The client session then shows the replay running to completion:

$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture

Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (00:31:52)
Replay started (00:33:32)
Replay finished (00:42:52)



  4. Analysis and Reporting

Generate the AWR, ADDM and DB Replay reports and compare them with the data gathered on production for the same time period when the load was captured. To generate the Database Replay report, run the following:

SQL> COLUMN name FORMAT A20
SQL> SELECT id, name FROM dba_workload_replays;

ID NAME
---------- --------------------
1 TEST_REPLAY

DECLARE
  v_report CLOB;
BEGIN
  v_report := DBMS_WORKLOAD_REPLAY.report (
                replay_id => 1,
                format    => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
  DBMS_OUTPUT.put_line (v_report);
END;
/
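Since DBMS_OUTPUT can truncate a large HTML report, an alternative (my own sketch, not from the original post) is to spool the CLOB straight from SQL*Plus; the file name is illustrative:

SET LONG 1000000 PAGESIZE 0 HEADING OFF
SPOOL replay_report.html
SELECT DBMS_WORKLOAD_REPLAY.report (1, 'HTML') FROM dual;
SPOOL OFF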


For a sample report [Click Here]



Reference: Chapter 22, Database Replay
Categories: DBA Blogs

Mobile phones, fats and backups

Moans Nogood - Thu, 2007-12-27 04:56
As with such things, life has been rather dull since the fire - relatively speaking. Fortunately, I had a wonderful thing happen to my mobile phone that brightened several of my days before Christmas.

It all started about half a year ago, when the menu button on my Nokia E60 stopped working. That's rather inconvenient, but I could still call people up and receive calls, so no big problem.

Then, one Saturday in December, the old team from the National Nurses' Dormitory in Copenhagen had our annual, traditional Danish Christmas lunch in a place called Told & Snaps in Copenhagen. When the first dish was brought in - pickled herrings, of course - my dear friend Ole and I decided to see if soft butter on the keys could bring the menu button back to life. So we, uhm, buttered the keyboard - and it worked! The menu button worked again!

Flushed with success, we decided to try to repair the problems I had with the microphone and loudspeaker in the E60. So we used fat (from a duck, I think) on the, eh, bottom of the phone. It didn't seem to have the desired effect. In fact, in the days that followed I had to shout louder and louder in order for people to hear me. It was getting silly - I had to be in the privacy of my car in order not to disturb the general population with my shouting.

But I could still send and receive SMS messages, so things were OK.

Then my wife Anette and I had dinner at restaurant Avanti, and that's when Anette hit the oil lamp on the table so that the E60 became soaked in oil. The display looked like a lava lamp and the keys became very loooong and soooft to use.

During the night, while I was asleep, the E60 sadly expired.

That's when I discovered that my 800 contacts were residing inside the E60, not on the mini-SD card. So I got a new phone from a friendly phone broker (an N73, which seems to be a fine phone, by the way), but every call I received was wonderfully new and exciting, since I didn't recognise any of the numbers.

Of course I didn't have any backup. I'm a man.

Then a miracle happened. Anette and some good Miracle folks managed to wake up the phone for a short while and unload my contacts. That was a good day.

So people have told me: This will teach you to remember to take a backup!

But look at it this way: I started carrying a mobile phone 25/8/370 back in 1991 and this was the first time I was in danger of losing everything. And I have never taken a backup.

Chances are it won't happen again anytime soon either, unless somebody steals it.

So I think I'll continue with my usual mobile phone backup strategy :-))).

Merry Christmas.

Mogens

Oracle 11g Database Replay

Virag Sharma - Thu, 2007-12-27 01:32

If your database is currently running on 10g R2 and you want to upgrade it to 11g, you can take advantage of Database Replay: as per the datasheet on OTN, workload captured on 10.2.0.4 can be replayed on 11g.

So it simply means that before you upgrade from 10g R2 to 11g, you can take advantage of the Database Replay feature: capture the workload on the production 10g R2 database, copy the workload to a test system, upgrade the test system to 11g, run the workload captured on production, and check how your system performs. That makes life easier, doesn't it?


Categories: DBA Blogs

Add / Delete a row from a SQL based Tabular Form (Static ID)

Duncan Mein - Tue, 2007-12-25 15:40
The current application I am working on involves a re-write of a 12-screen wizard that was written 18 months ago. Several of the screens make use of manually built tabular forms (SQL report regions) and collections to hold the values entered. Some of the screens in the wizard have multiple tabular forms on them as well.

Currently all tabular forms have 15 lines, which cannot be added to or deleted from. In the new version, we removed this limit and allow the user to add as many rows as he/she needs. Furthermore, specific rows can now be removed from the tabular form. Since all entered data is written into collections, we wanted to avoid "line by line" processing, i.e. submitting the form each time, updating the collection and branching back to the page. By utilising some simple JavaScript and the "Static ID" report region attribute new to APEX 3.0, all requirements could be met.

The Static ID attribute of the report region allows us to add our own (unique) ID to a report region. From there we can simply navigate down the DOM, clone a row in the form using cloneNode, and append it to the table using appendChild.

The JavaScript will work even if you have multiple report regions on the same page providing each report region has a unique Static ID value.

Method:
  • Create a new region on your page using the following details:
  1. Region Type: Report
  2. Report Implementation: SQL Report
  3. Title: Add Rows to Report
  4. Region Template: Reports Region
  5. SQL Query: view
  6. Report Template: Standard
  • Add the following JavaScript to your Page Header: view (a rough sketch of what these functions might look like appears at the end of this post)
  • Copy the region template: Reports Region and name it Reports Region (Static ID)
  • Edit the region template: Reports Region (Static ID) and replace the Substitution String #REGION_ID# with #REGION_STATIC_ID# in the Definition section

  • Edit the region: Add Rows to Report and insert the value: REPORT1 into the Static ID textbox found in the Identification section. Note that the values entered into the Static ID textbox must be unique within the page if using multiple report regions where you are specifying a Static ID. Then change the template of the region to use the newly created Reports Region (Static ID) template

  • Copy the report template: Standard and name it Standard (Static ID)

  • Edit the report template: Standard (Static ID) and replace the text: id="#REGION_ID#" with: id="datatable_#REGION_ID#" in the before rows section.

  • Edit the report attributes and change the report template to use the newly created: Standard (Static ID)

  • Add a button to the page using the following details:
  1. Select a Region for the Button: Add Rows to Report
  2. Position: Create a button in a region position
  3. Button Name: ADD_ROW
  4. Label: Add Row
  5. Button Type: HTML Button
  6. Action: Redirect to URL without submitting page
  7. Target is a: URL
  8. URL Target: javascript:addRow('REPORT1');

Please note that REPORT1 refers to the Static ID of the region you want to add your row to.

  • Test your add and delete row functionality

An example with all the source code can be seen here.
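Since the aggregator has dropped the code behind the "view" links, here is a rough sketch of what addRow and a matching removeRow might look like; the datatable_ id prefix follows the report template change described above, and all names here are illustrative rather than the author's actual code:

function addRow(staticId) {
  // Find the report table by the id the modified template gives it.
  var table = document.getElementById('datatable_' + staticId);
  var tbody = table.getElementsByTagName('tbody')[0] || table;
  var rows  = tbody.getElementsByTagName('tr');
  // Clone the last data row, form items included, and append it.
  var newRow = rows[rows.length - 1].cloneNode(true);
  tbody.appendChild(newRow);
}

function removeRow(staticId, rowIndex) {
  // Delete a specific row, always keeping at least one data row.
  var table = document.getElementById('datatable_' + staticId);
  var tbody = table.getElementsByTagName('tbody')[0] || table;
  var rows  = tbody.getElementsByTagName('tr');
  if (rows.length > 1) {
    tbody.removeChild(rows[rowIndex]);
  }
}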

Creating a physical standby Database in Oracle10g

Ayyappa Yelburgi - Mon, 2007-12-24 05:13
STEPS for creating 10g dataguard
Prerequisite: 9i dataguard setup knowledge
Step 1: Prepare the initSID.ora file for the primary and standby databases as follows. (STANDBY setup parameters are given in BOLD.)
Part A) **** Production database primary file ****
prod.__db_cache_size=125829120
prod.__java_pool_size=4194304
prod.__large_pool_size=4194304
prod.__shared_pool_size=79691776
prod.__streams_pool_size=0
*.[…]

Migrating a dictionary-managed tablespace to a locally managed tablespace

Ayyappa Yelburgi - Mon, 2007-12-24 05:08
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TEMPD')
BEGIN DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TEMPD'); END;
*
ERROR at line 1:
ORA-03245: Tablespace has to be dictionary managed, online and permanent to be
able to migrate
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 0
ORA-06512: at line 1

SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERSD')
PL/SQL procedure successfully completed.

Installation of oracle9i/10g on Linux & Solaris

Ayyappa Yelburgi - Mon, 2007-12-24 05:04
9i/10g install on Linux/Solaris

Install Oracle 9i Database on Linux RHEL AS 3. The following lines can be added to the /etc/sysctl.conf file:

kernel.shmmax = 2147483648
kernel.shmmni = 128
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

In addition, the following lines can be added to the /etc/security/limits.conf file:

oracle soft nofile […]
