Feed aggregator

NZOUG Conference 2010 Agenda online now!

Gareth Roberts - Wed, 2010-03-03 23:45

I'm pleased to advise that the agenda is now online for the NZOUG 2010 Conference in Rotorua, March 15th / 16th.

The lineup of speakers is excellent, including a stack of Oracle ACE Directors:

  • Tim Hall
  • Chris Muir
  • Daniel Morgan
  • Francisco Munoz Alvarez
  • Robert Freeman

Only a week or so is left, so register for the conference now and meet the ACEs!

Find out more at the NZOUG Conference 2010 web page.

Disclaimer: I'm on the NZOUG Committee
Note: The speaker lineup and abstract list are subject to change.

Catch ya!
This is a post from Gareth's blog at http://garethroberts.blogspot.com


Treasure Trove of Oracle Security Documents

Jared Still - Tue, 2010-03-02 11:32
This morning I stumbled across a veritable treasure trove of Oracle Security documents at the web site of the Defense Information Systems Agency.

I've only spent a few minutes glancing through a small sampling of them, and they appear to be fairly comprehensive. When you consider that these are documents used to audit sensitive database installations, it makes a lot of sense that they would cross all the t's and dot all the i's.

The index of documents can be found here: DISA IT Security Documents

As you can see for yourself, there are documents covering security concerns for a number of other systems.
Categories: DBA Blogs

Trying Out Apex 4.0 Dynamic Actions

Tony Andrews - Tue, 2010-03-02 10:34
In my spare moments I am currently familiarising myself with the Apex 4.0 Early Adopter edition. One of the many exciting new features is Dynamic Actions. These allow you to add functionality to your pages that would previously have required writing JavaScript, AJAX calls and On Demand PL/SQL processes, but can now be done declaratively. The following very simple example shows 3 uses: 1) Enabling ...

Simple MongoDB+Java example

Vattekkat Babu - Tue, 2010-03-02 04:03

MongoDB is pretty slick in its ease of use and scalability. As part of figuring out how to interface with MongoDB, I wrote a simple Java class that gives you an idea of how to create a collection and do CRUD on it. See below. Quite a bit of information is available on the web, but I thought a sample that shows off all the basic things would be useful. Good link - notes from a production MongoDB deployment
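
A minimal sketch of such a class, written against the 2.x MongoDB Java driver of that era (the database, collection and field names are made up for illustration):

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;

public class MongoCrudExample {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DB db = mongo.getDB("testdb");
        // the collection is created lazily on first insert
        DBCollection people = db.getCollection("people");

        // Create
        people.insert(new BasicDBObject("name", "Alice").append("age", 30));

        // Read
        DBCursor cursor = people.find(new BasicDBObject("name", "Alice"));
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }

        // Update: set age to 31 on all documents where name = Alice
        people.update(new BasicDBObject("name", "Alice"),
                      new BasicDBObject("$set", new BasicDBObject("age", 31)));

        // Delete
        people.remove(new BasicDBObject("name", "Alice"));

        mongo.close();
    }
}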

Compression for tables with more than 250 columns

Alejandro Vargas - Sun, 2010-02-28 19:15

Tables with more than 255 columns cannot be compressed; this restriction remains in place even in 11g R2.

In the 11g R2 SQL Language Reference manual, page 16-36, we can read:

Restrictions on Table Compression

* COMPRESS FOR OLTP and COMPRESS BASIC are not supported for tables with more than 255 columns.

This is a serious limitation, especially for telecoms, where CDR tables can have a number of columns way over 255.

The available workaround, sketched in SQL after this list:

  • Split the table into 2 sub-tables.

  • create table A as select pk,field 1 to 150 from origtable

  • create table B as select pk,field 151 to 300 from origtable

  • Each one will have fewer than 255 columns.

  • They will be joined by the primary key.

  • The table will be accessed using a view that has all the columns of the original table.

  • create view origtable as select a.pk,field a.1 to a.150, field b.151 to b.300 from a, b where a.pk=b.pk
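
A concrete sketch of the workaround in SQL, assuming a hypothetical 300-column source table cdr_orig with primary key pk and columns col_1 through col_300:

create table cdr_part_a compress for oltp as
  select pk, col_1, col_2 /* ... through col_150 */ from cdr_orig;

create table cdr_part_b compress for oltp as
  select pk, col_151, col_152 /* ... through col_300 */ from cdr_orig;

alter table cdr_part_a add constraint cdr_part_a_pk primary key (pk);
alter table cdr_part_b add constraint cdr_part_b_pk primary key (pk);

-- present the original shape through a view joining the halves on the primary key
create or replace view cdr_orig_v as
  select a.pk,
         a.col_1, a.col_2,    /* ... a.col_150, */
         b.col_151, b.col_152 /* ... b.col_300 */
  from cdr_part_a a, cdr_part_b b
  where a.pk = b.pk;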

Categories: DBA Blogs

ruby-plsql 0.4.2 - better support for object types and types in packages

Raimonds Simanovskis - Thu, 2010-02-25 16:00

I just released ruby-plsql version 0.4.2, which mainly adds support for more PL/SQL procedure parameter types. See the change history file for a more detailed list of changes.

Object types and object methods

Now you can use ruby-plsql to construct PL/SQL objects and call methods on these objects. For example, if you have the following type defined:

CREATE OR REPLACE TYPE t_address AS OBJECT (
  street    VARCHAR2(50),
  city      VARCHAR2(50),
  country   VARCHAR2(50),
  CONSTRUCTOR FUNCTION t_address(p_full_address VARCHAR2)
    RETURN SELF AS RESULT,
  MEMBER FUNCTION display_address(p_separator VARCHAR2 DEFAULT ',') RETURN VARCHAR2,
  MEMBER PROCEDURE set_country(p_country VARCHAR2),
  STATIC FUNCTION create_address(p_full_address VARCHAR2) RETURN t_address
);

Then you can construct PL/SQL objects and call methods on them:

# call default constructor with named parameters
address = plsql.t_address(:street => 'Street', :city => 'City', :country => 'Country')
# call default constructor with sequential parameters
address = plsql.t_address('Street', 'City', 'Country')
# call custom constructor
address = plsql.t_address('Street, City, Country')
address = plsql.t_address(:p_full_address => 'Street, City, Country')

# returned PL/SQL object is Hash object in Ruby
address == {:street => 'Street', :city => 'City', :country => 'Country'}

# but in addition you can call PL/SQL methods on it
address.display_address == 'Street, City, Country'
address.set_country('Other') == {:street => 'Street', :city => 'City', :country => 'Other'}

# or you can call object member methods also with explicit self parameter
plsql.t_address.display_address(:self => {:street => 'Street', :city => 'City', :country => 'Other'},
  :p_separator => ',') == 'Street, City, Country'

# or you can call static methods of type
plsql.t_address.create_address('Street, City, Country') ==
  {:street => 'Street', :city => 'City', :country => 'Country'}

Record types and table of record types inside packages

Now you can call PL/SQL procedures with parameters that have a record type, or a table of records type, defined inside a PL/SQL package. For example, if you have the following package:

CREATE OR REPLACE PACKAGE test_records IS
  TYPE t_employee IS RECORD(
    employee_id   NUMBER(15),
    first_name    VARCHAR2(50),
    last_name     VARCHAR2(50),
    hire_date     DATE
  );
  TYPE t_employees IS TABLE OF t_employee;
  TYPE t_employees2 IS TABLE OF t_employee
    INDEX BY BINARY_INTEGER;
  FUNCTION test_employee (p_employee IN t_employee)
    RETURN t_employee;
  FUNCTION test_employees (p_employees IN t_employees)
    RETURN t_employees;
  FUNCTION test_employees2 (p_employees IN t_employees2)
    RETURN t_employees2;
END;

Then you can call these package functions from Ruby:

employee = {
  :employee_id => 1,
  :first_name => 'First',
  :last_name => 'Last',
  :hire_date => Time.local(2010,2,26)
}
# PL/SQL record corresponds to Ruby Hash
plsql.test_records.test_employee(employee) == employee
# PL/SQL table corresponds to Ruby Array
plsql.test_records.test_employees([employee, employee]) == [employee, employee]
# PL/SQL index-by table corresponds to Ruby Hash
plsql.test_records.test_employees({1 => employee, 2 => employee}) == {1 => employee, 2 => employee}

If you use table types defined inside PL/SQL packages, then ruby-plsql will dynamically create session-specific temporary tables, which are used to pass and fetch the table parameter values. To ensure that these session-specific temporary tables are dropped, you need to explicitly call plsql.logoff to close the connection. For example, if you use ruby-plsql-spec for PL/SQL unit testing, then in spec_helper.rb include

at_exit do
  plsql.logoff
end

to ensure that the connection is closed with plsql.logoff before the Ruby script exits. If some script failure meant this was not executed, and you notice temporary tables with a RUBY_ prefix in your schema, you can call plsql.connection.drop_all_ruby_temporary_tables to drop all of those temporary tables.

Establish new connection

There is now a simpler connect! method for establishing a new ruby-plsql connection, when you need a connection just for ruby-plsql's needs. You can do it in several ways:

plsql.connect! username, password, database_tns_alias
plsql.connect! username, password, :host => host, :port => port, :database => database
plsql.connect! :username => username, :password => password, :database => database_tns_alias
plsql.connect! :username => username, :password => password, :host => host, :port => port, :database => database

And the good thing is that this method works with MRI 1.8 and 1.9 as well as with JRuby – you do not need to change the way you establish the connection to the database.


Savepoints

Now there is a simpler way to define savepoints and to roll back to a savepoint:

plsql.savepoint "before_something"
plsql.rollback_to "before_something"

Check validity of database objects

Now ruby-plsql checks whether the referenced database object is valid before trying to call it. If it is not valid, the corresponding compilation error is displayed. For example, if you have an invalid database object:

CREATE OR REPLACE FUNCTION test_invalid_function(p_dummy VARCHAR2) RETURN VARCHAR2 IS
  l_dummy invalid_table.invalid_column%TYPE;
BEGIN
  RETURN p_dummy;
END;

then when trying to call it

plsql.test_invalid_function('test')
you will get the following error message:

ArgumentError: Database object 'HR.TEST_INVALID_FUNCTION' is not in valid status
Error on line    2:   l_dummy invalid_table.invalid_column%TYPE;
     position   11: PLS-00201: identifier 'INVALID_TABLE.INVALID_COLUMN' must be declared
     position   11: PL/SQL: Item ignored

Other improvements

See the History.txt file for other new features and improvements, and see the RSpec tests in the spec directory for more usage examples.

Categories: Development

ActiveRecord Oracle enhanced adapter version 1.2.4

Raimonds Simanovskis - Tue, 2010-02-23 16:00

I have released a maintenance version of the ActiveRecord Oracle enhanced adapter with some bug fixes and some new features. This is the last maintenance version for Rails 2; I have already done the majority of the work to support Rails 3 in the next adapter versions, but that deserves another post when it is ready :).

Detailed changes can be found in the change history file and the commit log; here I will point out the main changes.

Schema and structure dump

There are several improvements in the schema dump (rake db:schema:dump) and structure dump (rake db:structure:dump) tasks. The structure dump now contains all schema objects as SQL statements.

Also, the db:test:purge rake task (which is run before recreating the test schema when running rake test or rake spec) is changed so that it deletes all schema objects from the test schema – including views, packages, procedures and functions, which are not recreated from schema.rb. So if you need additional database objects in your schema besides tables, indexes, sequences and synonyms (which are dumped in schema.rb), you need to recreate them after the standard rake task db:schema:load is run. Here is an example of how to execute additional tasks after db:schema:load (include this in a .rake file in the lib/tasks directory):

namespace :db do
  namespace :schema do
    # re-opening the standard db:schema:load task appends this action to run after it
    task :load do
      Rake::Task["db:schema:create_other_objects"].invoke
    end
    task :create_other_objects do
      # include code here which creates necessary views, packages etc.
    end
  end
end

Additional options for schema definition methods

You can pass the :temporary => true option to the create_table method to create temporary tables.

You can use the :tablespace => "tablespace name" option with the add_index method to create an index in a non-default Oracle tablespace, e.g. if your DBA requests it for performance reasons. You can also define function-based indexes using add_index, and they will be correctly dumped in schema.rb. Both options appear in the sketch below.
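
A minimal migration sketch (the table, column, index and tablespace names are hypothetical):

class AddReportingObjects < ActiveRecord::Migration
  def self.up
    # temporary table
    create_table :report_scratch, :temporary => true do |t|
      t.string  :name
      t.integer :amount
    end

    # index placed in a specific, non-default tablespace
    add_index :employees, :last_name, :tablespace => "indx"

    # function-based index, dumped correctly in schema.rb
    add_index :employees, "UPPER(last_name)", :name => "idx_employees_upper_name"
  end

  def self.down
    remove_index :employees, :name => "idx_employees_upper_name"
    remove_index :employees, :last_name
    drop_table :report_scratch
  end
end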

Savepoints and nested ActiveRecord transactions

oracle_enhanced adapter now supports ActiveRecord nested transactions using database savepoints.

ruby-oci8 version

As I am using and testing the oracle_enhanced adapter only with ruby-oci8 2.0.3, I have made that version a precondition (if you use MRI 1.8 or 1.9). So if you haven’t yet upgraded to the latest ruby-oci8 version, please do so before upgrading to oracle_enhanced 1.2.4.

JNDI connection support

If you are using oracle_enhanced with JRuby, you can now also use JNDI database connections – please see this issue and its comments for some examples.


As always you can install Oracle enhanced adapter on any Ruby platform (Ruby 1.8.6 / 1.8.7 or Ruby 1.9.1 or JRuby) with

gem install activerecord-oracle_enhanced-adapter

If you have any questions please use discussion group or post comments here.

Categories: Development

Improving your SQL Queries

Robert Vollman - Tue, 2010-02-23 15:27
Greetings! You probably found this page either through a search engine, an aggregator that doesn't delete blogs for inactivity, or possibly from a site with a very long blogroll, so let me introduce myself. My name is Robert Vollman. Those who were active in the on-line Oracle community from mid-2005 until mid-2007 may remember me either from this site or sites like it, or from the Oracle ...

Deploying several mappings using OMBPlus

Klein Denkraam - Tue, 2010-02-23 10:39

Every now and then it is a pain to deploy mappings from the Control Center Manager, especially when you have a slow connection to your control center. Retrieving all mappings that can be deployed to a location can take some time (putting it very politely), and you can't stop the control center from doing that; at least I have not found a way. Moreover, sometimes you have assembled a list of mappings you want to deploy, in Excel or a plain text file: mappings you have moved to your acceptance or production environment and now want to deploy. You can do that using OMBPlus. And, as this is a scripting language, you can create a script that does it for all the mappings you want, in one go. Unfortunately, deploying is not really very straightforward in OMBPlus, so I created a Tcl procedure that somewhat simplifies the task. Here it is.

# File   : deploy_map.tcl
# Purpose: deploy MAPPING (parameter)
#          in each target module (parameter)
#          of a project (parameter)
# Author : Kadenza - Eric Buddelmeijer 200908
# Prereq.: a valid repository connection is presumed here

proc deploy_map {p_project p_module p_map p_cc_pwd} {
   OMBCC '/${p_project}'
   # cleanup: drop any deployment action plans left over from earlier runs
   set deplPlan [OMBLIST DEPLOYMENT_ACTION_PLANS]
   foreach deplAction $deplPlan {
      OMBDROP DEPLOYMENT_ACTION_PLAN '$deplAction'
   }
   set moduleList [OMBLIST ORACLE_MODULES '${p_module}.*' ]
   foreach  moduleName $moduleList {
      puts "--------------------- $moduleName -----------------"
      OMBCC '/${p_project}/$moduleName'
      # start of objecttype loop ------------------------------------------
      # order of objecttypes is relevant!!!
      set typeList [list MAPPING ]
      set N 1
      set plural "S"
      foreach objectType $typeList {
         set listType [concat $objectType$plural]
         set J 1
         set objectList [OMBLIST $listType '${p_map}.*']
         puts "... Checking ${listType}"
         if {[llength $objectList] > 0} {
            foreach objectName $objectList {
               set deployAction [OMBRETRIEVE $objectType \
                                 '$objectName' GET PROPERTIES (DEPLOYABLE)]

               if {$deployAction == "true"} {
                  set valid [OMBVALIDATE $objectType '$objectName']
                  if {$valid == "Valid." } {
                     switch $objectType {
                           "TABLE"    {set deployAction "REPLACE"}
                           "SEQUENCE" {set deployAction "REPLACE"}
                           default    {set deployAction "REPLACE" }
                     }
                     puts "...... Adding ${J}.$objectType $objectName"
                     if {$J == 1 } {
                        OMBCREATE TRANSIENT DEPLOYMENT_ACTION_PLAN \
                                      'GEN_DEPLOY_${objectType}' \
                                      ADD ACTION 'DEPLOY_${objectName}' \
                                      SET PROPERTIES (OPERATION) \
                                          VALUES ('$deployAction') \
                                      SET REFERENCE $objectType '$objectName'
                     } else {
                            OMBALTER DEPLOYMENT_ACTION_PLAN \
                                     'GEN_DEPLOY_${objectType}' \
                                     ADD ACTION 'DEPLOY_A_${objectName}' \
                                     SET PROPERTIES (OPERATION) \
                                         VALUES ('$deployAction') \
                                     SET REFERENCE $objectType '$objectName'
                     }
                     incr J
                  } else { puts "###### SKIPPING $valid $objectType \
                              $objectName" }
               } else { puts "...... Skipping explicit NOT deployable \
                           $objectType $objectName" }
            }
            #end of objectName loop ---------------------------
            puts "... Deploying ${listType}"
            set exedeplPlan [OMBLIST DEPLOYMENT_ACTION_PLANS]
            foreach exedeplAction $exedeplPlan {
               OMBDEPLOY DEPLOYMENT_ACTION_PLAN '$exedeplAction'
               OMBDROP   DEPLOYMENT_ACTION_PLAN '$exedeplAction'
            }
            incr N
         }
         # end of if llength objectList > 0
      }
      # end of object type loop
   }
   # end of module loop
}

And the procedure can be executed with:

deploy_map <project name> <module name> <map name> <control center password>

This can of course be repeated several times. The procedure also expands the 'module name' and 'map name' parameters with a wildcard, so each parameter is treated as 'begins with'.
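
For example, an OMBPlus session might look like this (the repository connection details and the project, module and mapping names are made up):

OMBCONNECT owbrepo/owbpass@dwhost:1521:ORCL
source deploy_map.tcl
deploy_map MY_PROJECT DWH_TRG CUST my_cc_password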

Have fun deploying.

Excel and Oracle OLAP - Reporting No-Agg Measures

Keith Laker - Tue, 2010-02-23 08:41
I've run into this a few times recently, so here's a quick tip related to using Excel with Oracle OLAP (via the Simba MDX Provider for Oracle OLAP, of course).

Here's a situation that's been reported as a bug, but you really just need to know the right Excel Pivot Table option to choose. Consider a cube that has measures that do not aggregate, but is dimensioned by a dimension with a hierarchy. In this case, there is a cube with a Store dimension with levels Store > Store Type > All Stores. The stores are located in different countries and sell in local currencies. There is a Local Currency measure, with sales reported in whatever the local currencies might be (Euros, Dollars, Yen, etc.), and a Dollar Sales measure with the U.S. Dollar conversion. As a common currency, Dollars can be aggregated. Local currencies can't be aggregated.

Here's a sample report in Excel.

Note that Dollar Sales is reported for Direct and Indirect but Local Sales is not. That's correct because Local Sales doesn't aggregate.

But what if I happen to select only Local Sales (which is null at the aggregate members Direct and Indirect)? By default, Excel will display the report as shown below.

This isn't very useful because I can't drill down on the Direct member to get at the stores. The solution is simple, but a lot of people seem to miss it: just choose the "Show items with no data in rows" PivotTable option.

Now you will be able to see the Direct and Indirect members, allowing you to drill to stores.

Now, after the drill.

Categories: BI & Warehousing

APEX - Identify Report Columns Vulnerable to XSS

Duncan Mein - Tue, 2010-02-23 04:10
The following query is a very simple way of identifying all report columns within your APEX application that may be vulnerable to cross-site scripting (XSS).

XSS allows an attacker to inject web script (JavaScript) into an application; when this is rendered in a report, the script is interpreted by the browser rather than rendered as text.

To safeguard against this attack, APEX provides a "Display as Text (escape special characters)" report column attribute that can be applied to classic and Interactive Reports. This causes the script text to be displayed as text rather than interpreted by the browser. If you have any markup (HTML) within the query that the report is based on, this markup will also be displayed as text and not interpreted. I personally think this is a good by-product, as you should not really be coding look and feel into your raw SQL.
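
To see why this matters, here is a hypothetical illustration; demo_comments is a made-up table, and htf.escape_sc is the standard PL/SQL Web Toolkit escaping function:

create table demo_comments (id number, comment_text varchar2(200));

-- a row containing hostile markup: rendered unescaped in a report,
-- the browser executes the script instead of displaying it
insert into demo_comments values (1, '<script>alert(''XSS'');</script>');

-- escaped, the same value renders as harmless text
select htf.escape_sc(comment_text) from demo_comments;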

Anyway, I digress. Here is the query that will identify all vulnerable report columns within your APEX application:

SELECT application_id,
       page_id,
       column_alias
FROM apex_application_page_rpt_cols
WHERE display_as NOT IN
('Display as Text (escape special characters, does not save state)',
'Text Field')
AND workspace != 'INTERNAL'
AND application_id = :APP_ID
ORDER BY 1, 2, 3;


How is SaaS Product Management different from traditional Product Management?

Arvind Jain - Tue, 2010-02-23 00:27
As Enterprise Architects we are inclined always to question how a particular technical architecture is going to benefit the business strategy of the company. Along the same lines, I had a debate with my colleague about whether Product Management for a SaaS or cloud-based product is very different from a traditional approach to product management.

As a SOA architect I can see some of the challenges with reuse and creating global services. So here are some of the key differences between traditional product management and SaaS product management that I can think of. Please comment with your thoughts or elaborate.

In SaaS product management you have to worry about all of these additional things:

1) Data management of customer data (backup, recovery, export, migration).
2) Additional security around access and authorization.
3) You earn your money every day and every moment; it is not the traditional sell-once-and-forget-until-the-next-new-product model. If you fail, customers may not and will not renew their subscriptions, so you have to build stickiness into the SaaS offering, much like building a website with the lowest bounce rate and the highest click-through rate (CTR), so that there is the highest probability of customers renewing.
4) Special considerations for the on-demand / multi-tenancy aspects of the product or solution.
5) Much higher emphasis on disaster recovery, peak load and high availability.
6) One size does not fit all, so how do you provide innovation in the cloud? How do you empower customers in the cloud so that they can maintain their cutting edge through intelligent customizations?

I am thinking there will be additional issues, like multi-tenant pricing (based on usage patterns, product differentiation, etc.), so please comment with your thoughts or elaborate if you can.

My Interview Published in the Peer-To-Peer Column of Oracle Magazine

Sabdar Syed - Mon, 2010-02-22 13:55

This is to share with you all that Oracle published my interview in the
Peer-to-Peer column of Oracle Magazine (March/April 2010 edition).

Sabdar Syed.

Intel's cloud chip and physicalization

Vikas Jain - Mon, 2010-02-22 01:00
Per Intel CTO Justin Rattner, Intel is working on a single-chip cloud computer:
  • Parts of the chip will be powered down when not in use
  • First iteration involves a 48 core processor that consumes 25 - 125 watts
  • A new term, "physicalization", has been coined, meaning dedicating one or more cores to a specific application or a portion of it. This is the complete opposite of "virtualization", which means running applications on whatever processor resources are available
For complete story, see this Forbes article

Cisco's new urbanism

Vattekkat Babu - Sun, 2010-02-21 00:17

Fast Company's article about Cisco's vision for New Songdo is a fascinating read on smart cities. Perhaps everything will be connected, with every apartment able to do a lot over the network via highly functional phones running on mobile platforms?

Labelling in Outlook 2003 ala Gmail

Vattekkat Babu - Sun, 2010-02-21 00:01

I found that moving mails into project folders while they are still part of an open topic takes too much time. Gmail's "label" idiom really helps in this situation. It turns out we can do that with Outlook 2003 too, with some small macro work. First, see my entry on how to set up macros and arrange the toolbar in Outlook 2003. Then add the following macro to the module. Duplicate the second subroutine for as many categories as you have, and put toolbar entries on each of them. I think you should be able to manage with fewer than 10 categories. At times you may get mail on an old subject, which you can just read, act on and archive directly without tracking it.

Sub SetCategory(strCat As String)
    Dim Item As Object
    Dim SelectedItems As Selection
    Set SelectedItems = Outlook.ActiveExplorer.Selection
    For Each Item In SelectedItems
        With Item
            .Categories = strCat
            .Save ' persist the category change on the item
        End With
    Next Item
End Sub

Sub SetCategoryAdmin()
    SetCategory ("Admin")
End Sub

Cool but unknown RMAN feature

Jared Still - Fri, 2010-02-19 11:53
Unknown to me anyway until just this week.

Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.

The poster's RMAN scripts were written so that the only connection made while running backups was a local one, using only the control file for the RMAN repository.
rman target sys/manager nocatalog

After the backups were made, a connection was made to the RMAN catalog and a SYNC command was issued.
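
In outline, the approach looks something like this (the connection strings are placeholders; RESYNC CATALOG is the RMAN command that performs the sync):

rman target sys/manager nocatalog <<EOF
backup database plus archivelog;
EOF

rman target sys/manager catalog rman/password@rcat <<EOF
resync catalog;
EOF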

The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:

rman target sys/manager catalog rman/password@rcat

This week I found out this is not true.

Possibly this is news to no one but me, but I'm sharing anyway. :)

Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to set up TNS so that the catalog database could be found.

After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:

archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects

Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10

Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 16-FEB-10


Recovery Manager complete.

Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on Tue Feb 16 04:07:59 PST 2010 ====

That seemed rather strange, and it was happening in both of the new databases.
The key to this was to look at the top of the log file, where I found the following:

ORACLE_SID : oiddev
PWD_SID : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin

Recovery Manager: Release - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: OIDDEV (DBID=3982952863)

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified

Starting backup at 16-FEB-10
using target database controlfile instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset

Notice the line near the bottom of the displayed output?

The one that says "using target database controlfile instead of recovery catalog" ?

RMAN will go ahead with the backup of the database even though the connection to the catalog database failed. This apparently only works when running in a scripted environment; when I tried connecting on the command line, RMAN simply exited when the connection to the catalog could not be made.

The RMAN scripts are being run on a linux server in the following format:

$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE

rman commands go here

EOF
This was quite interesting to discover, and may be old news to many of you, but it was new to me.

This is not exactly a new feature either - one of the databases being backed up is an older release. And of course there is now no need to update the backup scripts.
Categories: DBA Blogs

Oracle SQL Developer on OS X Snow Leopard

Duncan Mein - Fri, 2010-02-19 05:40
I have been using Oracle SQL Developer Data Modeller for a while now within a Windows XP environment. It seems pretty good (albeit a little slow, but hey, show me an Oracle Java client application that is quick. Oracle Directory Manager? OWB Design Centre? I shall labour this point no more) and I was looking forward to trying it out on my new 27" iMac.

I promptly downloaded the software from OTN, and a quick read of the instructions suggested I needed to do no more than run the datamodeler.sh shell script, since I already had Java SE 6 installed.

As it turns out, the datamodeler.sh script in the root location does little more than call another script, also called datamodeler.sh, found in the /datamodeler/bin directory, which is the one you actually need to execute to fire up SQL Data Modeler.

When this script runs, it prompts you for the full J2SE path (which I had no idea where to find) before it will run. After a quick look around Google, I came across the java_home command, which when executed like this:

cd /usr/libexec
./java_home

prints the full path value that you need to open SQL Data Modeler

e.g. /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home

Now that we are armed with the full path needed, opening up SQL Data Modeller from a virgin command window goes like this:

cd Desktop/datamodeler/bin
. ./datamodeler.sh

Oracle SQL Developer Data Modeler
Copyright (c) 1997, 2009, Oracle and/or its affiliates. All rights reserved.

Type the full pathname of a J2SE installation (or Ctrl-C to quit), the path will be stored in ~/jdk.conf

and hey presto, SQL Data Modeller is up and running.

Once you have pointed the shell script at your J2SE installation, you won't have to do it again.

Now I can finally use Data Modeler on my 27" Screen :)

Unit Testing with SQL Developer

Peter O'Brien - Thu, 2010-02-18 16:03
December's release of SQL Developer 2.1 has a number of new bells and whistles. Two of the main new features are:
  • Data Model Viewer. This is a free, read-only viewer based on Oracle SQL Developer Data Modeler. With this viewer you can open existing data models as well as generate data models from your database. The generated data model cannot be saved, though.
  • Unit Testing. Based on the popular xUnit Four-Phase Test pattern, this feature makes testing of procedures and functions a breeze. Put simply, it allows one to construct a repository of unit test cases, which includes what one would expect of automated testing: setup, execute, assert result, record results, teardown.
The free Data Model Viewer is a nice introduction to the Data Modeler product, which is not free. The unit testing framework, though, really does mean that the quality of code in the database can be asserted and maintained much more easily than previously possible. There is a simple Oracle By Example tutorial available on the SQL Developer Unit Test feature.


VPD + bad ANYDATA practices can really bite

Charles Schultz - Thu, 2010-02-18 13:20
After several days of intense testing, 4 SRs with Oracle Support (and another with the ERP vendor), and the very helpful information from Maxim Demenko about "out-of-range" date values, I have developed a testcase that demonstrates how using bad ANYDATA practices in the context of VPD can really mess you up.

Some background:
We have an application that recently started to utilize ANYDATA. Unfortunately, the application did not implement validation checks, and the nature of ANYDATA makes table check constraints a near impossibility (I have not found any good ways to go about it). So we (not I, but colleagues) developed VPD rules to validate data. After a month of testing, a tester noticed that we had some really funny dates, ranging from 4290 BC to 5090 BC.

We tried tracing (10046, 10053, 10730), but nothing jumped out at us, except that we may have uncovered a new bug; more on that in a second. We tried using LogMiner, but Oracle Support finally convinced us that LogMiner does not support ANYDATA. :-( Finally we just started shooting in the dark, testing different combinations of rules and data inputs.

We stumbled upon the fact that using CAST to convert ANYDATA into a datatype has bad consequences. In particular, if you try something like cast(some_anydata_column as varchar2(1)) and the column is a DATE, for example, you get an ora-3113/ora-7445 (under the January 2010 PSU). The fine folks who had written our RLS policies had used CAST extensively, and the ironic part is that no errors were generated on the application side. Instead, bad dates were sneaking into the dataset.

After reading the documentation a bit more, I discovered that ANYDATA is an object-oriented type (much to my surprise), and it has member functions. We had a hard time figuring out exactly how to use the member functions, since one needs an instantiation first, and the documentation does not give any examples, let alone explain the use of "object-oriented" in a relational database. Finally I stumbled upon using sys.anydata as the instantiation, which seemed to work well for us.
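
For illustration, the member-function style of access looks like this against the gorsdav test table defined in the code further below; gettypename reports the stored type, and accessvarchar2 returns the value only when the stored type really is VARCHAR2:

select sys.anydata.gettypename(gorsdav_value)    as stored_type,
       sys.anydata.accessvarchar2(gorsdav_value) as varchar2_value
from gorsdav;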

Why did Oracle develop ANYDATA?!? It seems anti-RDBMS. And it makes things messy for us DBA types. As I explained to my colleagues, object-oriented data buckets are great for developers, up until they break. Then they are a pain to figure out.

I still have an outstanding question of exactly how the ANYDATA column overflows into the DATE field and gives us whacked out dates. If any Oracle gurus out there want to chime in, please do so.

Here is the code I used to replicate our issue:

drop user test cascade;
drop user test_no_vpd cascade;

create user test_no_vpd identified by test4#;
grant create session, EXEMPT ACCESS POLICY to test_no_vpd;

create user test identified by test3#;
grant create session, alter session, resource, create any context to test;
grant execute on dbms_rls to test;
connect test/test3#;

create table gorsdav (
   -- note: the first three column names below are assumed, to match the inserts that follow
   table_name      varchar2(61),
   gorsdav_value   sys.anydata,
   activity_date   date,
   pill_1 number default 1,
   pill_2 number default 2,
   pill_3 number default 3);

insert into gorsdav values ('some_table_1',sys.anydata.convertnumber(1),sysdate,0,0,0);
insert into gorsdav values ('some_table_1',sys.anydata.convertdate(sysdate),sysdate,0,0,0);
insert into gorsdav values ('some_table_1',sys.anydata.convertvarchar2('Y'),sysdate,1,0,0);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('Yes'),sysdate,0,0,0);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('Y'),sysdate,0,0,3);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('No'),sysdate,0,0,0);
insert into gorsdav values ('some_table_3',sys.anydata.convertvarchar2('MaybeSo'),sysdate,0,0,0);


-- Using FGAC example from http://www.orafusion.com/art_fgac.htm

-- A dummy procedure to satisfy the CREATE CONTEXT command; does not actually do anything

PROMPT Create Application Role Procedure
create or replace procedure
set_testapp_role(p_user varchar2 default sys_context('userenv', 'session_user')) is
v_ctx varchar2(16) := 'testapp_ctx';
begin
   null; -- dummy body: the procedure only needs to exist for CREATE CONTEXT
end;
/

PROMPT Create context
create or replace context testapp_ctx using set_testapp_role;

-- This is just a mock up test; I am not concerned about real-life roles or security,
-- thus I am returning the same predicate no matter who the user is

PROMPT Create security function
create or replace function testapp_security_function (p_schema varchar2, p_object varchar2)
return varchar2 is
begin
return '(sys.anydata.accessvarchar2(gorsdav_value) = ''Y'' and pill_1 = 1) or pill_1 <> 1';
end;
/

PROMPT Create RLS Table Policy
begin
   dbms_rls.add_policy (
      object_schema   => 'TEST',
      object_name     => 'GORSDAV',
      policy_name     => 'TESTAPP_POLICY',
      function_schema => 'TEST',
      policy_function => 'TESTAPP_SECURITY_FUNCTION',
      statement_types => 'SELECT,UPDATE,INSERT,DELETE',
      update_check    => TRUE,
      enable          => TRUE,
      static_policy   => FALSE);
end;
/

PROMPT Inserting a control row into the table to show the date and insert are fine
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('123456789'),sysdate,0,0,0);

PROMPT Selecting data from table - should return eight rows with no errors
select * from gorsdav;

-- The following function uses CAST to get the varchar2 data; however, a majority of the
-- data is larger than the CAST target, thus we get an error. Even if we use varchar2(200),
-- some datatypes are DATE and NUMBER.

PROMPT Create "bad" security function
create or replace function testapp_security_function (p_schema varchar2, p_object varchar2)
return varchar2 is
begin
return '((cast(gorsdav_value as varchar2(1)) = ''Y'' and pill_1 = 1) or pill_1 <> 1)';
end;
/

PROMPT Inserting into table - this will work with no problems.
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('Y'),sysdate,0,2,0);


PROMPT Inserting into table - this will complete successfully, but will insert a "bad" date
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('123456789'),sysdate,0,0,0);


-- PROMPT Selecting data from table - should hang for about 10 seconds and kick you out with
-- PROMPT ORA-3113 and ORA-7445 in the alert.log
-- select * from gorsdav;

grant select on test.gorsdav to test_no_vpd;

PROMPT Connecting as a non-VPD user (exempt access policy)
connect test_no_vpd/test4#
select * from test.gorsdav;

