Feed aggregator

FOTY0001: type error

Peeyush Tugnawat - Fri, 2009-02-27 08:05

If you are using the Cross References function (xref:lookupXRef1M) in your integration processes (BPEL/ESB) and ever come across the following error:

"XPath expression failed to execute.

Error while processing xpath expression, the expression is

Please verify the xpath query."

This exception can occur for any of the following reasons:

  • The cross reference table with the given name is not found

  • The specified column names are not found

  • The specified reference value is empty

  • Multiple values are found
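Each failure reason maps to one argument of the lookup. For context, a call like the following inside an XSLT transformation is where the error typically surfaces (the table and column names below are purely illustrative, and the exact function signature may vary slightly between releases):

```xml
<!-- Illustrative only: ORDER_XREF, COMMON and SIEBEL are hypothetical names -->
<xsl:value-of
    select="xref:lookupXRef1M('ORDER_XREF', 'COMMON', /order/commonId, 'SIEBEL')"/>
```

Checking that the cross reference table exists, that both column names are spelled exactly as defined, and that the reference value is populated usually resolves the error.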

Blogging at posulliv.com Now

Padraig O'Sullivan - Tue, 2009-02-24 22:59
I've decided to start blogging with WordPress now, as it just makes putting code samples and things like that so much easier! Also, I had a domain name for a while, so I figured I might as well use it, and it will encourage me to blog more. I really want to blog more at the moment since I'm working on an interesting project for my database course this semester. You can find my blog here from now on. Padraig

Oracle Technology Network (OTN) TechBlog

Peeyush Tugnawat - Tue, 2009-02-24 14:03

If you have not checked out the OTN TechBlog already, I would like to share this great resource blog from Justin Kestelyn. He is the OTN Editor-in-Chief.

Here is the link to his blog

http://blogs.oracle.com/otn/

 



Get Rid of NULL

Robert Vollman - Tue, 2009-02-24 10:17
"My query isn't returning the right rows" is a problem we all hear often. Upon investigation, I find that the most common cause of a miswritten query is a misunderstanding and/or mishandling of NULLs. Well, you know what? I've had it. Let's get rid of NULL. There, I said it. I've written articles explaining NULL before, a couple of times in fact*. So have plenty of other Oracle specialists, in…

Remote dependencies

Yasin Baskan - Mon, 2009-02-23 08:58
When changing PL/SQL code in a production system, the dependencies between objects can cause some programs to be invalidated. Making sure all programs are valid before opening the system to users is very important for application availability. Oracle automatically compiles invalid programs on their first execution, but a successful compilation may not be possible: the error may require code correction, or the compilation may cause library cache locks while applications are running.

Dependencies between PL/SQL programs residing in the same database are easy to handle. When you compile a program, the programs dependent on it are invalidated right away, and we can see the invalid programs and compile or correct them before opening the system to the users.

If you have database links and your PL/SQL programs depend on programs residing in other databases, you have a problem handling the invalidations. There is an initialization parameter named remote_dependencies_mode that controls this dependency management. If it is set to TIMESTAMP, the timestamp of the local program is compared to that of the remote program; if the remote program's timestamp is more recent, the local program is invalidated on the first run. If the parameter is set to SIGNATURE, the remote program's signature (number and types of parameters, subprogram names, etc.) is checked; if there has been a change, the local program is invalidated on the first run.
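To see which mode is in effect, and to switch it, the parameter can be queried and set at the session (or system) level, for example:

```sql
-- Check the current dependency mode (the default is TIMESTAMP)
SQL> show parameter remote_dependencies_mode

-- Switch to signature-based checking for this session
SQL> alter session set remote_dependencies_mode = SIGNATURE;
```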

The problem here is that if you change the remote program's signature or timestamp, you cannot see that the local program is affected, because the invalidation waits for the first execution. If you open the system to the users before correcting this, you may face a serious application problem leading to downtime.

Here is a simple test case to see the problem.




I start by creating a PL/SQL package in the database TEST.

SQL> create package test_pack as
2 procedure test_proc(a number);
3 end;
4 /

Package created.

SQL> create package body test_pack as
2 procedure test_proc(a number) is
3 begin
4 null;
5 end;
6 end;
7 /

Package body created.


On another database let's create a database link pointing to the remote database TEST, create a synonym for the remote package and create a local package calling the remote one.

SQL> create database link test.world connect to yas identified by yas using 'TEST';

Database link created.

SQL> select * from dual@test.world;

D
-
X

SQL> create synonym test_pack for test_pack@test.world;

Synonym created.

SQL> create package local_pack as
2 procedure local_test(a number);
3 end;
4 /

Package created.

SQL> r
1 create or replace package body local_pack as
2 procedure local_test(a number) is
3 begin
4 test_pack.test_proc(1);
5 end;
6* end;

Package body created.


If we look at the status of this local package we see that it is valid.


SQL> col object_name format a30
SQL> r
1* select status,object_name,object_type from user_objects where object_name='LOCAL_PACK'

STATUS OBJECT_NAME OBJECT_TYPE
------- ------------------------------ ------------------
VALID LOCAL_PACK PACKAGE
VALID LOCAL_PACK PACKAGE BODY


We can execute it without any problems.


SQL> exec LOCAL_PACK.local_test(1);

PL/SQL procedure successfully completed.


Now, let's change the remote package code including the package specification.


SQL> create or replace package test_pack as
2 procedure test_proc(b number,a number default 1);
3 end;
4 /

Package created.

SQL> create or replace package body test_pack as
2 procedure test_proc(b number,a number default 1) is
3 begin
4 null;
5 end;
6 end;
7 /

Package body created.


When we look at the local package we see its status as valid.


SQL> r
1* select status,object_name,object_type from user_objects where object_name='LOCAL_PACK'

STATUS OBJECT_NAME OBJECT_TYPE
------- ------------------------------ ------------------
VALID LOCAL_PACK PACKAGE
VALID LOCAL_PACK PACKAGE BODY


But when it is executed we get an error.


SQL> exec LOCAL_PACK.local_test(1);
BEGIN LOCAL_PACK.local_test(1); END;

*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04062: signature of package "YASBS.TEST_PACK" has been changed
ORA-06512: at "YASBS.LOCAL_PACK", line 4
ORA-06512: at line 1


SQL> select status,object_name,object_type from user_objects where object_name='LOCAL_PACK';

STATUS OBJECT_NAME OBJECT_TYPE
------- ------------------------------ ------------------
VALID LOCAL_PACK PACKAGE
INVALID LOCAL_PACK PACKAGE BODY

SQL> exec LOCAL_PACK.local_test(1);

PL/SQL procedure successfully completed.

SQL> select status,object_name,object_type from user_objects where object_name='LOCAL_PACK';

STATUS OBJECT_NAME OBJECT_TYPE
------- ------------------------------ ------------------
VALID LOCAL_PACK PACKAGE
VALID LOCAL_PACK PACKAGE BODY


The second execution compiles the package and returns the status to valid.

How can we know beforehand that the local program will be invalidated on the first execution? The only way I can think of is to check the dependencies across all databases involved. By collecting all rows from dba_dependencies from all databases we can see that when the remote program is changed the programs on other databases that use this remote program will be invalidated when they are executed. Then we can compile these programs and see if they compile without errors.
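A starting point for that check is the REFERENCED_LINK_NAME column of dba_dependencies, which is non-null for dependencies that go through a database link. A sketch (the object name is illustrative):

```sql
-- Run on each database that may hold callers of the changed remote program
select owner, name, type, referenced_name, referenced_link_name
from   dba_dependencies
where  referenced_link_name is not null
and    referenced_name = 'TEST_PACK';
```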

Database links can be very annoying at times; this case is just one example.

Archiving OTL Timecard

RameshKumar Shanmugam - Sun, 2009-02-22 16:18
The Timecard archive process in OTL helps to archive timecards that are no longer needed and to improve the performance of OTL. Once archived, the timecard summary information is available to users, but not the detailed view. Users will not be able to edit or modify any details of the timecard.

The following steps need to be performed to archive OTL timecards:

1) Setup following profile option
  • OTL: Archive Restore Chunk Size
  • OTL: Minimum Age of Data Set for Archiving
  • OTL: Max Errors in Validate Data Set
2) Run the 'Define Data Set' process - this is the first step in the overall process of archiving timecards. It identifies the date range of the timecards that need to be archived and moves the data set to temporary tables in preparation for archiving.
Note: Make sure not to select too much data at once, or the process may fail.
If the process fails, the data can be restored using the 'Undo Data Set' process.



3) The next step in the archiving process is to run the 'Validate Data Set' process - this process checks for errors on the timecards in the data set. It returns a validation warning message if it finds timecards with the status working, rejected, submitted, approved, or error.

Note: The validation process may encounter some errors during processing. To restrict the number of errors reported at one time, you can set the OTL: Max Errors in Validate Data Set profile option to a maximum number of errors. The process stops running when it reaches the number of errors you define.


4) The final step in timecard archiving is to run the 'Archive Data Set' process - the archive process moves the defined data set of validated timecard data from the active tables in the OTL application to the archive tables.


In my next blog I'll write up the details of restoring the archived data back to the OTL application.

Try this out!
Categories: APPS Blogs

Doubly dynamic SQL

Nigel Thomas - Sun, 2009-02-22 06:36
It is great to see a new post from Oracle WTF last week, after a quiet period, which reminded me to post this example of a dynamic search.

I won't post the whole thing, and I have disguised the column names to protect the guilty. The basic problem is that the developer didn't quite understand that if you are going to generate a dynamic query, you don't have to include all the possibilities in the final SQL.

Let's say the example is based on books published in a given year. First, to decide whether to do a LIKE or an equality, he did this:

' WHERE' ||
' (('||p_exact||' = 1 AND pdc.title = '||chr(39)||p_title||chr(39)||')' ||
' OR ('||p_exact||' = 0 AND pdc.title LIKE '||chr(39)||l_title||'%'||chr(39)||')) ' ||
' AND ('||p_year||' = 0 OR pdc.year = '||p_year||')' ||

So at runtime you get both predicates coming through. Suppose you wanted an exact search (p_exact=1), for p_title='GOLDFINGER'. We don't know the year of publication so we supply 0. The generated predicates are:

WHERE ((1 = 1 AND pdc.title = 'GOLDFINGER')
OR (1 = 0 AND pdc.title LIKE 'GOLDFINGER%'))
AND (0 = 0 or pdc.year = 0)

Wouldn't the logically equivalent:

WHERE (pdc.title = 'GOLDFINGER')

have been much easier? Add a few of these together and a nice indexed query plan soon descends into a pile of full table scans and humungous hash joins. Oh, and no use of bind variables, so in a busy OLTP application this could sabotage the SQL cache quite quickly.
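For contrast, the conventional approach appends only the predicates that apply and binds the values. A rough sketch (l_sql, l_cur and the parameter handling are illustrative, not the original code):

```sql
-- l_cur is a SYS_REFCURSOR; only the needed predicates are generated
l_sql := 'SELECT * FROM pdc WHERE 1 = 1';

IF p_exact = 1 THEN
   l_sql := l_sql || ' AND pdc.title = :title';
ELSE
   l_sql := l_sql || ' AND pdc.title LIKE :title || ''%''';
END IF;

IF p_year != 0 THEN
   l_sql := l_sql || ' AND pdc.year = :year';
   OPEN l_cur FOR l_sql USING p_title, p_year;
ELSE
   OPEN l_cur FOR l_sql USING p_title;
END IF;
```

The optimizer then sees only the predicates that matter, and the shared SQL area holds one cursor per predicate combination instead of one per literal value.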

My favourite part though is with the sort order. The user can choose to order by a number of different columns, either ascending or descending:

l_order := ' ORDER BY ' ||
' case '||chr(39)||p_sort||chr(39)||' when ''publisher asc'' then publisher end asc, ' ||
' case '||chr(39)||p_sort||chr(39)||' when ''publisher desc'' then publisher end desc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''book_type asc'' then book_type end asc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''book_type desc'' then book_type end desc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''title asc'' then title end asc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''title desc'' then title end desc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''year asc'' then year end asc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''year desc'' then year end desc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''book_id asc'' then book_id end asc,' ||
' case '||chr(39)||p_sort||chr(39)||' when ''book_id desc'' then book_id end desc';

Yes, you got it; given that the variable p_sort has been picked from an LOV, the whole piece of PL/SQL can be replaced by:

l_order := ' ORDER BY ' ||p_sort;

That looks better, doesn't it?
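One caveat: p_sort is still concatenated into the statement, so even with an LOV on the client it is worth rejecting unexpected values inside the procedure. A sketch:

```sql
-- Defensive check before concatenating (values mirror the LOV entries)
IF p_sort NOT IN ('publisher asc', 'publisher desc',
                  'book_type asc', 'book_type desc',
                  'title asc',     'title desc',
                  'year asc',      'year desc',
                  'book_id asc',   'book_id desc') THEN
   raise_application_error(-20001, 'Invalid sort option');
END IF;

l_order := ' ORDER BY ' || p_sort;
```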

Best Practices: SOA and Enterprise Applications

Peeyush Tugnawat - Sat, 2009-02-21 14:14

Oracle Fusion Middleware Best Practice Centers for Applications are great reference sources for information on Service Oriented Architecture for Enterprise Applications.

Following are the links to pages for each application:

Best Practice Center: Oracle E-Business Suite and Fusion Middleware

Best Practice Center: Oracle Siebel and Fusion Middleware

Best Practice Center: Oracle PeopleSoft and Fusion Middleware

Best Practice Center: Oracle JD Edwards and Fusion Middleware

I am also a contributor and so far have shared my experiences at Best Practice Center: Oracle E-Business Suite and Fusion Middleware. This blog is also listed there.

There is a wealth of information available on these centers, and it comes from experts in their fields. Highly recommended!

 



OVERLAPS

Robert Vollman - Fri, 2009-02-20 10:54
"What's wrong?" the guru asked as he sat down next to the company's newest database analyst. "I heard you were having some trouble with the room booking application." "Yep," said Chad. "I just can't seem to get it right." "Don't put it all on yourself," said the guru. "What can I do to help?" "Well, I've got it narrowed down to this one procedure, which is responsible for checking if the room is…

JRE Plug-in “Next-Generation” – to migrate or not?

Aviad Elbaz - Wed, 2009-02-18 04:39

It has been more than half a year since we've migrated from Oracle Jinitiator to Sun Java JRE Plug-in (Java 6 update 5) in our Oracle Applications (EBS) system, and I must say, I'm not satisfied yet.

For the first months we had been struggling with a lot of mouse focus bugs, which made our users very angry about this upgrade. Although we've applied some patches related to these bugs, we still have some with no resolution.
Upgrading to Developer 6i patchset 19 solved some of the bugs, but not all of them.

As part of an SR we had opened about the mouse focus issues, we were advised by Oracle to install the latest Java JRE (Java 6 update 12 these days) as a possible solution for the remaining bugs.

Starting with Java 6 update 10, Sun has introduced the new "next-generation Java Plug-in", which causes trouble with Oracle EBS. You can read more about this new architecture at the Sun Java site - "What is next-generation Java Plug-in".

Right after installing Java 6 update 11, I encountered a problem - when trying to open forms the screen freezes.


The browser window hangs inconsistently: I have no idea when it's going to open and when it's not. I've tried Java 6 update 12 and it's the same - sometimes it opens and sometimes it doesn't. No matter what I did (clear the Java cache on the client, clear the Apache cache, install the JRE in a different directory in case a previous update of version 6 was installed, uninstall previous versions of the Java Plug-in on the same PC, try Internet Explorer 6 and 7), the problem wasn't resolved.

There is an unpublished open bug for this problem: Bug 7875493 - "Application freezes intermittently when using JRE 6U10 and later". I've been told by Oracle support that there are some incompatibilities with the new next-generation architecture and that they are working with Sun on it.

Meanwhile there are two workarounds (the second was suggested by Oracle support but doesn't work for me):

1) Disable the "next generation Java Plug-in" option:
Go to Control Panel -> Java -> Select the "Advanced" tab -> expand the "Java Plug-in" -> uncheck the "Enable the next-generation Java Plug-in" option.
 

This workaround always works (at least for me...).

2) Set the swap file to system managed + Tune the heap size for java:
- Go to Control Panel -> System -> Select the "Advanced" tab -> click on Settings (in Performance frame) -> Select the "Advanced" tab -> Click on Change -> Select the "System managed size" option.

- Go to Control Panel -> Java -> Select the "Java" tab -> Click "View..." (in Java Applet Runtime Settings frame) -> update the "Java Runtime Parameters" field with: "-Xmx128m -Xms64m".

This workaround doesn't work for me.

For now, I've decided to stay with the "old" Java Plug-in 6 update 5 and not upgrade our users to the new next-generation Java Plug-in. I hope the following updates of the Java Plug-in will be better, or that Oracle will publish a patch to solve this problem.

I'll post an update as soon as I have more info.

Aviad

Categories: APPS Blogs

Oracle Open World 2001 in Berlin: The Truth (Finally)

Moans Nogood - Tue, 2009-02-17 12:09
It’s time that we admit it. We did horrible things at OOW in Berlin. We’ve not told anyone for all these years, but the pressure is building inside. So I’ve decided to come clean.

We had just started Miracle, so we were only about eight folks or so in total. So we decided to go to the conference in Berlin all of us. We rented two or three apartments and also invited our friends (customers) to stay with us.

We drove down there in a few cars and found out upon arrival that the apartments were empty except for the mattresses on the floor. Oh well, easier to find your way around.

I’m still not sure why Peter Gram or someone else decided to bring along our big office printer/scanner/copier, but the guys quickly set up the network, the printer and the laptops, and then we just sat around, worked on the laptops, drank beers and talked about all sorts of Oracle internals.

I went down to registration and got a badge, so that was good. Then someone (forget who) came up with the idea that we should simply copy my badge so the rest of the guys could get in for free.

It wasn’t because we didn’t have the money or anything. Oh no. It was just because it sounded stupid and a little risky. So that’s why you’ll find pictures here and there (including in my office) of the guys copying and modifying badges.

The biggest challenge was that the badges had an “Oracle-red” stripe at the bottom.

But Oracle Magazine had a special conference edition out which had a lot of “Oracle-red” on the front cover, so it was just a matter of using the scissors in the Swiss army knife.

It worked perfectly for the whole conference and we were very proud, of course.

It was also the conference where I was introduced to James Morle by Anjo Kolk, our old-time friend from Oracle. I had placed myself strategically in a café/bar between the two main halls in the conference center, which meant that everybody came walking by sooner or later. So I met lots of old friends that way. And a new friend named James Morle, who was in need of an assignment - and we had a customer in Germany who badly needed his skills, so he ended up working for Mobilcom for half a year or more.

So the next bad thing we did was to crash the Danish country dinner. Oracle Denmark might not have been too fond of us back then, because they thought we were too many who had left in one go. Nevertheless, we thought it was not exactly stylish of them not to invite us to the Danish country dinner – as the only Danish participants.

Our friend (and future customer) Ivan Bajon from Simcorp stayed with us in the apartments and he was invited to the country dinner. So we found out where it was, snooped around a little, and then simply climbed a rather high fence and gate-crashed the dinner.

That was fun. The Oracle folks there were visibly nervous when we suddenly stormed in, but what could they do in front of all the customers, who very well knew who we were? So we sat down at the tables and had a good evening with all the other Danes there.

We had lots of fun during those few days in Berlin, had many political debates and beers, and went home smiling but tired.

To my knowledge we’ve not faked badges or gate-crashed country dinners since.

There have been a few suggestions since then that the badges we copied were actually free to begin with, but that can't possibly be. I strongly object to that idea.

The Consultant

Oracle WTF - Tue, 2009-02-17 01:22

The Consultant has a Ph.D., vast experience of high-performance systems architecture, a black belt in karate and a reputation as a genius. He's been brought in by senior management at vast but necessary expense for a strategic rethink of the way data is shared between systems, while implementing SOA, improving performance and finding the Higgs Boson. Needless to say, he tends towards the view that database development is overrated. He's already sorted out the data warehouse. Overheard by our correspondent:

I’ve finished the design for the data warehouse. Although I say design, it’s pretty simple. That’s why it was so quick. All data warehouses are essentially the same in that they are a dimensional model. That means that you essentially have everything that is a fact, an immutable fact [waves arms expressively], in the fact table. Just the one, big, table. That’s why they’re so attractive as reporting solutions - everything is in the same place so it’s easy to understand and the reporting is easy to automate. So in that fact table you’ve got all trades, the cashflows, positions, accounting information, accounts, exceptions, counterparties. Anything that’s a fact goes in that table [does wide googly eyes expression with dramatic pause]. Then anything derived is called a dimension, like for instance P&L calculations, whether the account is on balance sheet, or off... they go in the dimension table. Basically all we have to do is just pump messages into that fact table from the bus and then recalculate the dimensions in the dimension table periodically, and that’s the technical job. Getting the facts in there and getting the calculations done.

Installing Required RPMs from DVD Before You Install Oracle Database 10g or 11g

Sergio's Blog - Tue, 2009-02-17 00:42

I came across a forum post in which one of my co-workers, Avi Miller, explained how to install Oracle Enterprise Linux from DVD so that all software requirements are met when you fire up the Oracle Database 10g or 11g installer.

There are several ways to deal with the software preinstallation requirements for the Oracle Database. And, while it's not very difficult to install the required RPMs, it can be somewhat awkward to do so in the correct dependency order.

To simplify this task, Oracle provides the oracle-validated RPM, discussed here, here, and here previously on this blog. The Oracle® Database Installation Guide 11g Release 1 (11.1) for Linux also describes how to install the oracle-validated RPM if you have access to Unbreakable Linux Network (ULN)
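For a system that is registered with ULN, the install itself is then a one-liner, something along these lines (a sketch -- the exact registration and channel setup vary):

```shell
# Assumes the machine has been registered with ULN (up2date --register)
up2date --install oracle-validated
```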

If you're installing Oracle Enterprise Linux to run Oracle Database 10g or 11g, following the steps Avi outlined will save you time, especially if you don't have access to ULN.

  1. Follow the installation process as normal until you get to the first software selection screen (it lists a series of tasks that you can add support for, and has two options at the bottom: "Customize Later" and "Customize Now").
  2. Select the "Customize Now" option and click Next




  3. Select "Base System" in the left-hand list and then "System Tools" in the right hand list. Click the checkbox next to "System Tools", then click the "Optional Packages" button.






  4. [screenshot: ins4.png]

  5. Scroll down and select the "oracle-validated" package and click "Close"


Avi continues: "You can now click "next" and continue the installation as normal. This also creates the oracle user/groups, sets up sysctl.conf, limits.conf, etc. It's fairly nifty." I followed these steps using the Oracle Enterprise Linux 5, Update 3 DVD I downloaded via edelivery.oracle.com, and it works like a breeze. Here's a final screen shot of the installer looking happy:

Categories: DBA Blogs

A difficult choice in difficult times

Lisa Dobson - Mon, 2009-02-16 05:32
Friday before last I made the difficult decision to leave SolStonePlus. I joined SolStone in April 2006, and whilst I thoroughly enjoyed the work I did, it has always been a juggling act to balance my personal life at home in Newcastle with working in the South. In order to find a more manageable work/life balance I had to choose to either move closer to work or work closer to home. Although not an…

Oracle Buys mValent

Habib Gohar - Mon, 2009-02-16 04:23
Oracle has acquired mValent, a leading provider of application configuration management solutions. With the acquisition of mValent, Oracle is expected to enable customers with the ability to collect, compare and reconcile deep configuration information of complex systems. The mValent solution ensures consistent configuration across IT environments and also provides agile support for ongoing environment and […]

A Comprehensive Database Security Model

Kenneth Downs - Sat, 2009-02-14 13:05

This week I am taking a bit of a departure. Normally I write about things I have already done, but this week I want to speculate a bit on a security model I am thinking of coding up. Basically I have been asking myself how to create a security model for database apps that never requires elevated privileges for code, but still allows for hosts sharing multiple applications, full table security including row level and column level security, and structural immunity to SQL injection.

The Functional Requirements

Let's consider a developer who will be hosting multiple database applications on a server, sometimes instances of the same application for different customers. The applications themselves will have different needs, but they all boil down to this:

  • Some applications will allow surfers to join the site and create accounts for themselves, while others will be private sites where an administrator must make user accounts.
  • Some applications will not contain sensitive data, and so the site owner wants to send forgotten passwords in email -- which means the passwords must be stored in plaintext. Other site owners will need heightened security that disallows storing of passwords in plaintext.
  • In both cases, administrators must of course be able to manage accounts themselves.
  • The system should be structurally immune to SQL injection.
  • It must be possible to have users with the same user id ("Sheilia", "John", etc.) on multiple applications who are actually totally different people.
  • The application code must never need to run at an elevated privilege level for any reason -- not even to create accounts on public sites where users can join up and conduct transactions.
  • It must be possible for the site owners or their agents to directly connect to the database at very least for querying and possibly to do database writes without going through our application.
  • Users with accounts on one app must never be able to sign on to another app on the same server.

These requirements represent the most flexible possible combination of demands that I have so far seen in real life. The question is, can they be met while still providing security? The model I'd like to speculate on today says yes.

Informed Paranoia Versus Frightened Ignorance

Even the most naive programmer knows that the internet is not a safe place, but all too often a lot of security advice you find is based on frightened ignorance and takes the form, "never do x, you don't know what might happen." If we are to create a strong security model, we have to do better than this.

Much better is to strive to be like a strong system architect, whose approach is based on informed paranoia. This hypothetical architect knows everybody is out to compromise his system, but he seeks a thorough knowledge of the inner workings of his tools so that he can engineer the vulnerabilities out as much as possible. He is not looking to write rules for the programmer that say "never do this", he is rather looking to make it impossible for the user or programmer to compromise the system.

Two Examples

Let us consider a server hosting two applications, which are called "social" and "finance".

The "social" application is a social networking site with minimal security needs. Most important is that the site owners want members of the general public to sign up, and they want to be able to email forgotten passwords (and we can't talk them out of it) -- so we have to store passwords in plaintext.

The "finance" application is a private site used by employees of a corporation around the world. The general public is absolutely not welcome. To make matters worse however, the corporation's IT department demands to be able to directly connect to the database and write to the database without going through the web app. This means the server will have an open port to the database. Sure it will be protected with SSL and passwords, but we must make sure that only users of "finance" can connect, and only to their own application.

Dispensing With Single Sign-On

There are two ways to handle connections to a database. One model is to give users real database accounts, the other is to use a single account to sign on to the database. Prior to the web coming along, there were proponents of both models in the client/server world, but amongst web developers the single sign-on method is so prevalent that I often wonder if they know there is any other way to do it.

Nevertheless, we must dispense with the single sign-on method at the start, regardless of how many people think that Moses carved it on the third tablet, because it just has too many problems:

  • Single Sign-on is the primary architectural flaw that makes SQL injection possible. As we will see later, using real database accounts makes your site (almost) completely immune to SQL injection.
  • Single Sign-on requires a connection at the maximum privilege level that any system user might have, where the code then decides what it will let a particular user do. This is a complete violation of the requirement that code always run at the lowest possible privilege level.
  • Single Sign-on totally prevents the requirement that authorized agents be allowed to connect to the database and directly read and write values.

So single sign-on just won't work with the requirements listed. This leads us to creating real accounts on the database server.

Real Accounts and Basic Security

When you use a real database account, your code connects to the database using the username and password provided by the user. Anything he is allowed to do your code will be allowed to do, and anything he is not allowed to do will throw an error if your code tries to do it.

This approach meets quite a few of our requirements nicely. A site owner's IT department can connect with the same accounts they use on the web interface -- they have the same privileges in both cases. Also, there is no need to ever have application code elevate its privilege level during normal operations, since no regular users should ever be doing that. This still leaves the issue of how to create accounts, but we will see that below.

A programmer who thinks of security in terms of what code can run will have a very hard time wrapping his head around using real database accounts for public users. The trick to understanding this approach is to forget about code for a minute and to think about tables. The basic fact of database application security is that all security resolves to table permissions. In other words, our security model is all about who can read or write to what tables, it is not about who can run which program.

If we grant public users real database accounts, and they connect with those accounts, the security must be handled within the database itself, and it comes down to:

  • Defining "groups" as collections of users who share permissions at the table level.
  • Deciding which groups are allowed select, insert, update, and delete privileges on which tables.
  • Granting and revoking those privileges on the server itself when the database is built.
  • At the very least, row-level security will be required, wherein a user can only see and manipulate certain rows in a table. This is how you keep users from using SQL Injection to mess with each other's order history or member profiles.
  • Column security is also very nice to finish off the picture, but we will not be talking about that today as it does not play into the requirements.
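In PostgreSQL, for instance, the first three steps might be sketched as follows. The `members` group and `profiles` table are hypothetical names chosen for illustration:

```sql
-- In PostgreSQL a "group" is simply a role without login rights.
CREATE ROLE members NOLOGIN;

-- Deny-by-default: strip anything PUBLIC may have inherited,
-- then grant only the privileges this group actually needs.
REVOKE ALL ON profiles FROM PUBLIC;
GRANT SELECT, UPDATE ON profiles TO members;

-- A real user account is then simply added to the group:
-- GRANT members TO social_johnsmith;
```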

Now we can spend a moment and see why this approach eliminates most SQL Injection vulnerabilities. We will imagine a table of important information called SUPERSECRETS. If somebody could slip in a SQL injection exploit and wipe out this table we'd all go to jail, so we absolutely cannot allow this. Naturally, most users would have no privileges on this table -- even though they are directly connected to the database they cannot even see that the table exists, let alone delete from it. So if our hypothetical black hat somehow slips in ";delete from supersecrets" and our code fails to trap for it, nothing happens. They have no privilege on that table. On the other side of things, consider the user who is privileged to delete from that table. If this user slips in a ";delete from supersecrets" he is only going to the trouble of SQL Injection to do something he is perfectly welcome to do anyway through the user interface. So much for SQL injection.

To repeat a point made above: row-level security is a must. If you grant members of a social site global UPDATE privileges on the PROFILES table, and you fail to prevent a SQL Injection, all hell could break loose. Much better is the ability to limit the user to seeing only his own row in the PROFILE table, so that once again you have created a structural immunity to SQL injection.
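One classic way to get this in PostgreSQL is to hide the base table behind a view that filters on the connected role; newer PostgreSQL versions also offer row security policies. This is only a sketch, assuming a hypothetical `profiles` table whose `user_id` column matches the database role name:

```sql
-- Row-level security via a filtering view: each user sees only
-- the row whose user_id matches the role they connected as.
CREATE VIEW my_profile AS
    SELECT * FROM profiles
     WHERE user_id = current_user;

-- Users touch the view, never the base table.
-- (Simple views are auto-updatable in newer PostgreSQL; older
-- versions need rules or triggers to make UPDATE work.)
REVOKE ALL ON profiles FROM PUBLIC;
GRANT SELECT, UPDATE ON my_profile TO members;
```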

Anonymous Access

Many public sites allow users to see all kinds of information when they are not logged on. The most obvious example would be an eCommerce site that needs read access to the ITEMS table, among others. Some type of anonymous access must be allowed by our hypothetical framework.

For our two examples, the "social" site might allow limited viewing of member profiles, while the "finance" application must show absolutely nothing to the general public.

If we want a general solution that fits both cases, we opt for a deny-by-default model and allow each application to optionally have an anonymous account.

First we consider deny-by-default. This means simply that our databases are always built so that no group has any permissions on any tables. The programmer of the "social" site now has to grant certain permissions to the anonymous account, while the programmer of the "finance" application does nothing - he already has a secure system.

But still the "finance" site is not quite so simple. An anonymous user account with no privileges can still log in, and that should make any informed paranoid architect nervous. We should extend the deny-by-default philosophy so the framework will not create an anonymous account unless requested. This way the programmer of the "finance" application still basically does nothing, while the programmer of the "social" must flip a flag to create the anonymous account.

Virtualizing Users

If we are using real database accounts, there is one small detail that has to be addressed. If the "social" site has a user "johnsmith" and the finance application has a user of the same name, but they are totally different people, we have to let both accounts exist but be totally separate.

The answer here is to alias the accounts. The database server would actually have accounts "finance_johnsmith" and "social_johnsmith". Our login process would simply take the username provided and append the code in front of it when authenticating on the server. 'nuf said on that.
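The aliasing itself is trivial string work. As a sketch, here it is as a PostgreSQL function (the function name is made up for illustration; the same one-liner could just as easily live in the login code of the web framework):

```sql
-- Prepend the application code to the supplied username before
-- authenticating against the server.
CREATE FUNCTION aliased_name(text, text)
RETURNS text AS $$
    SELECT $1 || '_' || $2;
$$ LANGUAGE sql IMMUTABLE;

-- SELECT aliased_name('finance', 'johnsmith');  -- finance_johnsmith
-- SELECT aliased_name('social',  'johnsmith');  -- social_johnsmith
```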

Allowing Public Users To Join

The "social" site allows anybody to join up and create an account. This means that somehow the web application must be able to create accounts on the database server. Yet it must do this without allowing the web code to elevate its privileges, and while preventing the disaster that would ensue if a user on the "social" site somehow got himself an account on the "finance" site.

Believe it or not, this is the easy part! Here is how it works for the "social" site:

  • Create a table of users. The primary key is the user_id which prevents duplication.
  • For the social site, there is a column called PASSWORD that stores the password in plaintext.
  • Allow the anonymous account to INSERT into this table! (Remember though that deny-by-default means that so far this account has no other privileges).
  • Put an INSERT trigger on the table that automatically creates an aliased user account, so that "johnsmith" becomes "social_johnsmith". The trigger also sets the password.
  • A DELETE trigger on the table would delete users if the row is deleted.
  • An UPDATE trigger on the table would update the password if the user UPDATES the table.
  • Row level security is an absolute must. Users must be able to SELECT and UPDATE the table, but only their own rows. If your database server or framework cannot support row-level security, it's all out the window.
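A minimal PostgreSQL sketch of the signup machinery described above. All names are hypothetical and error handling is omitted; note that the trigger function's owner needs CREATEROLE privilege, and SECURITY DEFINER is what lets the anonymous account fire it without elevating the web code:

```sql
CREATE TABLE users (
    user_id  text PRIMARY KEY,   -- primary key prevents duplicates
    password text NOT NULL       -- plaintext, per the site owners
);
GRANT INSERT ON users TO social_anonymous;

-- INSERT trigger: creates the aliased database account.
CREATE FUNCTION make_account() RETURNS trigger AS $$
BEGIN
    EXECUTE 'CREATE USER '
         || quote_ident('social_' || NEW.user_id)
         || ' PASSWORD ' || quote_literal(NEW.password);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER users_ins AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE make_account();

-- Matching UPDATE and DELETE triggers (not shown) would ALTER the
-- account's password and DROP the account, respectively.
```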

This gives us a system that almost gets us where we need to be: the general public can create accounts, the web application does not need to elevate its privileges, users can set and change their passwords, and no user can see or set anything for any other user. However, this leaves the issue of password recovery.

In order to recover passwords and email them to members of the "social" site, it is tempting to think that the anonymous account must be able to somehow read the users table, but that is no good because then we have a structural flaw where a successful SQL injection would expose user accounts. However, this also turns out to be easy. There are two options:

  • Write a stored procedure that the anonymous user is free to execute, which does not return the password but actually emails it directly from within the database server. This requires that your database server be able to send emails. (Postgres can, and I assume SQL Server can; I don't really know about MySQL.)
  • Create a table for password requests, allow inserts to it but nothing else. A trigger sends the email. In this approach you can track email recovery requests.
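The second option might be sketched like this in PostgreSQL. The table and trigger names are hypothetical, and `send_mail` stands in for whatever mailing mechanism the server actually provides:

```sql
-- Anonymous users may only INSERT a request; the trigger, running
-- with definer's rights, does the lookup and the mailing. The
-- password is never returned to the caller.
CREATE TABLE password_requests (
    user_id      text NOT NULL,
    requested_at timestamp NOT NULL DEFAULT now()
);
GRANT INSERT ON password_requests TO social_anonymous;

CREATE FUNCTION mail_password() RETURNS trigger AS $$
BEGIN
    -- Hypothetical mail call; look up the address and password
    -- here and hand them straight to the mailer.
    PERFORM send_mail(NEW.user_id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER pr_ins AFTER INSERT ON password_requests
    FOR EACH ROW EXECUTE PROCEDURE mail_password();
```

A side benefit of this table is that every recovery request is logged with a timestamp, which is useful for spotting abuse.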

For the "finance" application we cannot allow any of this to happen, so again we go to the deny-by-default idea. All of the behaviors above will not happen unless the programmer sets a flag to turn them on when the database is built.

This does leave the detail of how users of the "finance" application will reset their passwords. For details on how a secure app can still allow password resets, see my posting of Sept 7 2008 Secure Password Resets.

One More Detail on Public Users

We still have one more detail to handle for public users. Presumably a user, having joined up, has more privileges than the anonymous account. So the web application must be able to join them into a group without elevating its privileges. The solution here is the same as for creating the account: there will be a table that the anonymous user can make inserts into (but nothing else), and a trigger will join the user to whatever group is named.

Except for one more detail. We cannot let the user join whatever group they want, only the special group for members. This requirement can be met by defining the idea of a "freejoin" group and also a "solo" group. If the anonymous user inserts into a user-group table, and the requested group is flagged as allowing anybody to join, the trigger will allow it, but for any other group the trigger will reject the insert. The "solo" idea is similar, it means that if a user is in the "members" group, and that group is a "solo" group, they may not join any other groups. This further jails in members of the general public.
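The trigger's check might look like this, assuming hypothetical `groups` and `user_groups` tables where `groups` carries the "freejoin" flag:

```sql
-- BEFORE INSERT check on the user-group table: allow the insert
-- only when the requested group is flagged as open to self-joining.
CREATE FUNCTION check_join() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM groups
                    WHERE group_id = NEW.group_id
                      AND freejoin) THEN
        RAISE EXCEPTION 'group % is not open to self-joining',
                        NEW.group_id;
    END IF;
    -- A similar test here would reject the insert when the user is
    -- already in a group flagged as "solo".
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER ug_ins BEFORE INSERT ON user_groups
    FOR EACH ROW EXECUTE PROCEDURE check_join();
```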

Almost Done: User Administration

In the last two sections we saw the idea of a table of users and a cross-reference of users to groups. This turns out to solve another issue we will have: letting administrators manage groups. If we define a group called "user_administrators" and give them total power on these tables, and also give them CRUD screens for them, then we have a user administrator system. This works for both the "social" and the "finance" application.

The triggers on the table have to be slightly different for the two cases, but coding them up accordingly is a small exercise.

Cross-Database Access

Believe it or not, the system outlined above has met all of our requirements except one. So far we have a system that never requires the web server to have any elevated privileges within the database, allows members of the public to join some sites while barring them from others, is structurally immune from SQL injection, allows different people on different sites to have the same user id, and allows administrators of both sites to directly manage accounts. Moreover, we can handle both plaintext passwords and more serious reset-only situations.

This leaves only one very thorny issue: cross-database access. The specific database server I use most is PostgreSQL, and this server has a problem (for this scenario): out of the box, a database account can connect to any database. This does not mean the account has any privileges on the database, but we very seriously do not want this to happen at all. If a member of the "social" site can connect to the "finance" app, we have a potential vulnerability even if he has zero privileges in that database. We would be much happier if he could not connect at all.

In Postgres there is a solution to this, but I've grown to not like it. In Postgres you can specify that a user can only connect to a database if they are in a group that has the same name as the database. This is easy to set up, but it requires changing the default configuration of Postgres. However, for the sheer challenge of it I'd like to work out how to do it without requiring that change. So far I'm still puzzling this out. I'd also like to know that the approach would work at the very least on MS SQL Server and MySQL.
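One candidate that stays inside plain SQL, with no configuration-file changes, is the CONNECT privilege that PostgreSQL offers on databases themselves (the `finance_users` group below is a hypothetical name, and whether this fully satisfies the requirement is an open assumption, not a settled answer):

```sql
-- Deny-by-default at the connection level: strip the implicit
-- public CONNECT grant, then re-grant it only to the
-- application's own group.
REVOKE CONNECT ON DATABASE finance FROM PUBLIC;
GRANT  CONNECT ON DATABASE finance TO finance_users;

-- With this in place, social_johnsmith cannot even open a
-- connection to "finance", let alone see its tables.
```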

Conclusion

Most of what is in this week's essay is not that radical to any informed database veteran. But to web programmers who were unfortunate enough to grow up in the world of relational-databases-must-die nonsense, it is probably hard or impossible to imagine a system where users are connecting with real database accounts. The ironic thing is that the approach described here is far more secure than any single sign-on system, but it requires the programmer to shift thinking away from action-based code-centric models to what is really going on: table-based privileges. Once that hurdle is past, the rest of it comes easy.
