Feed aggregator

Why are some of the tables in my query missing from the plan?

Oracle Optimizer Team - Thu, 2008-06-26 17:35
We apologize for our brief hiatus from blogging. We've been busy working on improvements to the optimizer.

In 10gR2, we introduced a new transformation, table elimination (also called "join elimination"), which removes redundant tables from a query. A table is redundant if its columns are referenced only in join predicates, and it is guaranteed that those joins neither filter nor expand the resulting rows. There are several cases where Oracle will eliminate a redundant table. We will discuss each case in turn.

Primary Key-Foreign Key Table Elimination

Starting in 10gR2, the optimizer eliminates tables that are redundant due to primary key-foreign key constraints. Consider the following example tables:

create table jobs
(
job_id NUMBER PRIMARY KEY,

job_title VARCHAR2(35) NOT NULL,
min_salary NUMBER,
max_salary NUMBER
);

create table departments
(
department_id NUMBER PRIMARY KEY,
department_name VARCHAR2(50)
);
create table employees
(
employee_id NUMBER PRIMARY KEY,
employee_name VARCHAR2(50),
department_id NUMBER REFERENCES departments(department_id),
job_id NUMBER REFERENCES jobs(job_id)
);

and the query:

select e.employee_name
from employees e, departments d
where e.department_id = d.department_id;

In this query, the join to departments is redundant. The only column referenced in the query appears in the join predicate, and the primary key-foreign key constraint guarantees that there is at most one match in departments for each row in employees. Hence, the query is equivalent to:

select e.employee_name
from employees e
where e.department_id is not null;


The optimizer will generate this plan for the query:

------------------------------------------
| Id  | Operation          | Name      |
------------------------------------------
|   0 | SELECT STATEMENT   |           |
|*  1 |  TABLE ACCESS FULL | EMPLOYEES |
------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("E"."DEPARTMENT_ID" IS NOT NULL)

Note that the IS NOT NULL predicate is not necessary if the column has a NOT NULL constraint on it.
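If you want to check whether elimination kicked in on your own system, you can display the plan yourself. This is just a sketch using the standard EXPLAIN PLAN / DBMS_XPLAN approach; the exact output will vary by version and environment:

```sql
explain plan for
select e.employee_name
from employees e, departments d
where e.department_id = d.department_id;

-- DBMS_XPLAN.DISPLAY formats the contents of the plan table,
-- including the Predicate Information section quoted above
select * from table(dbms_xplan.display);
```

If DEPARTMENTS does not appear in the output, the table was eliminated.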

Starting in 11gR1, the optimizer will also eliminate tables that are semi-joined or anti-joined. Consider the following query:

select e.employee_id, e.employee_name
from employees e
where not exists (select 1
from jobs j
where j.job_id = e.job_id);


Since employees.job_id is a foreign key to jobs.job_id, any non-null value in employees.job_id must have a match in jobs. So only employees with null values for employees.job_id will appear in the result. Hence, this query is equivalent to:

select e.employee_id, e.employee_name
from employees e
where job_id is null;

and the optimizer can choose this plan:

------------------------------------------
| Id  | Operation          | Name      |
------------------------------------------
|   0 | SELECT STATEMENT   |           |
|*  1 |  TABLE ACCESS FULL | EMPLOYEES |
------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("E"."JOB_ID" IS NULL)

Suppose employees.job_id has a NOT NULL constraint:

alter table employees modify job_id not null;

In this case, the query could not possibly return any rows, and the optimizer could choose this plan:

--------------------------------------------
| Id  | Operation           | Name      |
--------------------------------------------
|   0 | SELECT STATEMENT    |           |
|*  1 |  FILTER             |           |
|   2 |   TABLE ACCESS FULL | EMPLOYEES |
--------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(NULL IS NOT NULL)

The "NULL IS NOT NULL" filter is a false constant predicate, that will prevent the table scan from even taking place.

Also in 11gR1, the optimization became available for ANSI-compliant joins. For this query:

select employee_name
from employees e inner join jobs j
on e.job_id = j.job_id;

the optimizer can eliminate JOBS and produce this plan:

------------------------------------------
| Id  | Operation          | Name      |
------------------------------------------
|   0 | SELECT STATEMENT   |           |
|   1 |  TABLE ACCESS FULL | EMPLOYEES |
------------------------------------------

Outer Join Table Elimination

In 11gR1, a new form of table elimination was introduced for outer joins, which does not require PK-FK constraints. For the example, we require a new table and an addition to EMPLOYEES:

create table projects
(
project_id NUMBER UNIQUE,
deadline DATE,
priority NUMBER
);

alter table employees add project_id number;

Now consider a query that outer joins employees and projects:

select e.employee_name, e.project_id
from employees e, projects p
where e.project_id = p.project_id (+);

The outer join guarantees that every row in employees will appear at least once in the result. The unique constraint on projects.project_id guarantees that every row in employees will match at most one row in projects. Together, these two properties guarantee that every row in employees will appear in the result exactly once. Since no other columns from projects are referenced, projects can be eliminated, and the optimizer can choose this plan:

------------------------------------------
| Id  | Operation          | Name      |
------------------------------------------
|   0 | SELECT STATEMENT   |           |
|   1 |  TABLE ACCESS FULL | EMPLOYEES |
------------------------------------------

Why Would I Ever Write Such a Query?

All of the example queries in this post are very simple, and one would be unlikely to write a query where the join is so obviously unnecessary. There are many real world scenarios where table elimination may be helpful, including machine-generated queries and elimination of tables in views. For example, a set of tables might be exposed as a view, which contains a join. The join may be necessary to retrieve all of the columns exposed by the view. But some users of the view may only access a subset of the columns, and in this case, the joined table can be eliminated.

For example, consider the view:

create view employee_directory_v as
select e.employee_name, d.department_name, j.job_title
from employees e, departments d, jobs j
where e.department_id = d.department_id
and e.job_id = j.job_id;

This view might be exposed to a simple employee directory application. To look up employee names by department, the application issues this query:

select employee_name
from employee_directory_v
where department_name = 'ACCOUNTING';

Since the job_title column is not referenced, jobs can be eliminated from the query, and the optimizer can choose this plan:

-----------------------------------------------
| Id  | Operation           | Name        |
-----------------------------------------------
|   0 | SELECT STATEMENT    |             |
|*  1 |  HASH JOIN          |             |
|   2 |   TABLE ACCESS FULL | EMPLOYEES   |
|*  3 |   TABLE ACCESS FULL | DEPARTMENTS |
-----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
3 - filter("D"."DEPARTMENT_NAME"='ACCOUNTING')

Known Limitations

There are currently a few limitations of table elimination:
  • Multi-column primary key-foreign key constraints are not supported.
  • Referring to the join key elsewhere in the query will prevent table elimination. For an inner join, the join keys on each side of the join are equivalent, but if the query contains other references to the join key from the table that could otherwise be eliminated, this prevents elimination. A workaround is to rewrite the query to refer to the join key from the other table (we realize this is not always possible).
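As a sketch of that workaround, using the example schema from this post (whether elimination actually occurs still depends on your version and constraints):

```sql
-- This query references d.department_id in the select list, so
-- DEPARTMENTS cannot be eliminated even though the join is lossless:
select e.employee_name, d.department_id
from employees e, departments d
where e.department_id = d.department_id;

-- Workaround: refer to the join key from EMPLOYEES instead. Under
-- the inner join the two join keys are equivalent, and now no column
-- of DEPARTMENTS is referenced outside the join predicate, so the
-- optimizer is free to eliminate the table:
select e.employee_name, e.department_id
from employees e, departments d
where e.department_id = d.department_id;
```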

Categories: DBA Blogs, Development

EJB 3 Stateless bean Web service in WebLogic Server

Debu Panda - Thu, 2008-06-26 05:39
I recently got some questions from customers about the procedure for deploying EJB 3 Session Bean Web service in WebLogic 10. In this blog, I will outline the steps.

First develop your EJB3 session bean web service as follows:

@WebService(name="PlaceBid",
serviceName="PlaceBidService", portName = "PlaceBidPort",
targetNamespace = "http://actionbazaar.com/xml")
@SOAPBinding(style = SOAPBinding.Style.DOCUMENT)
@PersistenceContext(unitName="actionBazaar",name="actionBazaar")
@Stateless
public class PlaceBidBean {

@WebMethod

public Long addBid(String userId, Long itemId, Double bidPrice) throws BidException {

….

}

}


Deployment

Unlike OC4J, WebLogic does not support parsing of JAX-WS annotations at deployment time, and if you try to deploy the compiled EJB 3 web service you will get a deployment-time exception. You have to run the EJB 3 web service through the annotation-parsing tool supplied with WebLogic (jwsc, i.e. java web services compile) to generate the artifacts, and package them in your ejb-jar module.

Here is an example ant task to create the web service:

<target name="build-ws-server"
        description="Target that builds the target Web Service">
  <jwsc srcdir="${src.ejb.dir}"
        destdir="${bld.service.dir}"
        classpath="${common.j2ee.class.path}"
        fork="true"
        keepGenerated="true"
        deprecation="${deprecation}"
        keepTempFiles="true"
        listfiles="true"
        debug="${debug}">
    <jws file="actionbazaar/buslogic/PlaceBidBean.java" type="JAXWS"
         explode="true"/>
  </jwsc>
</target>

<target name="package-ejb"
depends="compile-ejb-classes,build-ws-server">;

<mkdir dir="${bld.ejb.dir}/actionbazaar/buslogic/jaxws" />

<copy todir="${bld.ejb.dir}/actionbazaar/buslogic/jaxws">

<fileset dir="${bld.service.dir}/actionbazaar/buslogic/PlaceBidBean/actionbazaar/buslogic/jaxws" includes="*"/> </copy>
<echo message="-----> Create EJB jar file"/>
<jar jarfile="${bld.ear.dir}/${ejb.name}.jar">
<fileset dir="${bld.ejb.dir}" includes="**" />
</jar>
</target >

Make sure to package the generated artifacts in the EJB JAR, and then deploy the EJB JAR or EAR to WebLogic Server. Note that if your web service depends upon custom Java objects, the generated artifacts contain duplicates of Java classes you may already have. These duplicated classes are in a separate package structure and caused a ClassCastException for me, so I avoid packaging them.

Accessing EJB 3 Web service

After successful deployment, you can access the web service WSDL at a URL of the form:

http://hostname:port/ejb-name/web-service-name?wsdl

For example:

http://localhost:7001/PlaceBidBean/PlaceBidBeanService?wsdl



Hope this helps.

You can download the working examples from here. Chapter 15 contains the web services example.

ruby-plsql gem now supports JRuby and Oracle JDBC driver

Raimonds Simanovskis - Wed, 2008-06-25 16:00

Some time ago I created ruby-plsql gem which provides simple Ruby API for Oracle PL/SQL stored procedures.

Initially this gem supported just MRI with the ruby-oci8 library, which provides connectivity to Oracle databases, as this was my main development and production environment for Ruby & Oracle applications. But as JRuby is a fast-growing alternative Ruby deployment platform that can be integrated into the Oracle Fusion Middleware platform (e.g. Oracle Mix runs on JRuby on Oracle Fusion Middleware), I planned to support JRuby as well.

I started to work on JRuby support during the RailsConf JRuby hackfest, and initially it did not seem a very hard task to me. But as I did not know JDBC very well, it took me much more time than initially planned. Unfortunately, JDBC is also much less powerful than the ruby-oci8 library for constructing dynamic PL/SQL calls. In addition, I had to struggle with the Ruby and JDBC data type mappings, which differ from the Ruby and ruby-oci8 data type mappings.

But finally I have completed JRuby support and released ruby-plsql gem version 0.2.0. And the good news is that, from a usage perspective, ruby-plsql behaves identically on MRI and JRuby – at least my RSpec tests are telling me so.

To install this gem on JRuby, execute:

sudo jruby -S gem install ruby-plsql

or

sudo jgem install ruby-plsql

depending on how you have installed JRuby on your computer.

Source code of ruby-plsql is located on GitHub where you can find usage examples in RSpec tests.

Categories: Development

Zimbra Desktop on Solaris

Siva Doe - Tue, 2008-06-24 22:03

Not much to write about. Just this screenshot. This is on a snv_90 build desktop.

The ABCs of Workflow for Oracle E-Business Suite Release 11i and Release 12-Shining a Light on Oracle Workflow...Now Available For Sale

Solution Beacon - Tue, 2008-06-24 11:59
Have you always wanted to know more about Workflow, but found yourself overwhelmed and mystified? Do you think your E-Business Suite Workflow environment needs to be tuned, but aren’t sure where to start? Are you curious to know the future of Workflow and BPEL? Would you like to see an example of how to convert a Workflow into a BPEL process? We have your solution! Solution Beacon and OAUG have joined

Vote! The best session on APEX Authentication you'll ever see!

Carl Backstrom - Tue, 2008-06-24 11:37
I'm talking about Raj Mattamal's session, up for votes on mix.oracle.com. Raj is a good friend of mine and used to be on the APEX team until he decided to strike out on his own to use APEX to make money in the real world. He is very knowledgeable about all subjects APEX, and this one in particular.

Both Tyler and Dimitri have mentioned voting for Raj's session but I just wanted to throw my endorsement out for this session as well.

If you've seen Raj present before you will know that it is an event in itself and when you combine that with the amount of useful information that comes out of the session it's a winner. Just take a look at my review of his session from last Oracle World.

I have to admit I'm doing this for a selfish reason: I missed his session at ODTUG. I didn't think it was a big deal since I'd seen it before, until I was informed it was an entirely different session, so I was pretty disappointed to have missed it. Plus he might have some blackmail on me; I don't know and don't want to find out ;).

Vote Early, Vote Often.

How to execute TKPROF on trace files larger than 2GB ? --> Use pipe

Aviad Elbaz - Tue, 2008-06-24 05:54

Here is a nice trick for working with files larger than 2GB on Unix/Linux using a pipe.

First case - TKPROF

When trying to execute TKPROF on a trace file larger than 2 GB, I got this error:

[udump]$ ll test_ora_21769.trc

-rw-r-----  1 oratest dba 2736108204 Jun 23 11:04 test_ora_21769.trc

[udump]$ tkprof test_ora_21769.trc test_ora_21769.out

TKPROF: Release 9.2.0.6.0 - Production on Thu Jun 23 21:05:10 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

could not open trace file test_ora_21769.trc

In order to successfully execute TKPROF on this trace file, you can use the mkfifo command to create a named pipe, as follows:

  • Open a new unix/linux session (1st), change to the directory where the trace file exists, and execute:

[udump]$ mkfifo mytracepipe
[udump]$ tkprof mytracepipe test_ora_21769.out

TKPROF: Release 9.2.0.6.0 - Production on Thu Jun 23 21:07:35 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

  • Open another session (2nd), change to the same directory, and execute:

[udump]$ cat test_ora_21769.trc > mytracepipe

This way you'll successfully get the output file.
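The same named-pipe trick works for any tool that cannot open huge files directly: the reader consumes a stream, so nothing subject to the 2GB limit is ever written to disk. Here is a minimal generic sketch of the pattern (the file names are made up for illustration):

```shell
# Create a named pipe (FIFO); it appears in the filesystem but holds no data
mkfifo /tmp/mypipe

# Writer: feed data into the pipe in the background
printf 'line1\nline2\nline3\n' > /tmp/mypipe &

# Reader: consume the pipe as if it were an ordinary file;
# the data streams through without ever landing on disk
wc -l < /tmp/mypipe

# Clean up the pipe entry
rm /tmp/mypipe
```

In the TKPROF case above, `cat bigfile.trc > pipe` plays the writer role and TKPROF plays the reader.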

 

Second case - spool

A similar issue with spooling to a file larger than 2GB can be treated the same way.

$ mkfifo myspoolpipe.out

--> Create a new named pipe called 'myspoolpipe.out'

$ dd if=myspoolpipe.out of=aviad.out &

--> Whatever is read from 'myspoolpipe.out' is written to 'aviad.out'

$ sqlplus user/pwd@dbname

SQL*Plus: Release 9.2.0.6.0 - Production on Tue Jun 24 12:05:37 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

SQL> spool myspoolpipe.out

--> Spool to the pipe

SQL> select .....

SQL> spool off
SQL> 5225309+294082 records in
5367174+1 records out

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

[1]+  Done                    dd if=myspoolpipe.out of=aviad.out

$ ls -ltr

prw-r--r--  1 oratest dba          0 Jun 24 12:22 myspoolpipe.out
-rw-r--r--  1 oratest dba 2747993487 Jun 24 12:22 aviad.out

Related Notes:

Note 62427.1 - 2Gb or Not 2Gb - File limits in Oracle
Note 94486.1 - How to Create a SQL*Plus Spool File Larger Than 2 GB on UNIX

Aviad

Categories: APPS Blogs

ODTUG 2008 (week in review)

Carl Backstrom - Mon, 2008-06-23 13:39
Well, a lot of people have been giving day-by-day reports about this year's ODTUG (where do they get the time?). So I figured I'd just post a weekly roundup of the proceedings.

ODTUG

The ODTUG 2008 event itself was a lot of fun and very well organized.
From Tom Kyte's opening keynote, which was a very cool, non-technical look at why we always need to question how and why we do the things we do, because the way you used to do it might not be the right way anymore, and it's our job to always make sure things are done the right way.

To the closing-night get-together, complete with band, fortune tellers and beads, the event was just very interesting, with the right amount of fun to keep everybody looking forward to what came next. And the infrastructure of the event was put together so that getting to the sessions or events you needed took minimal effort or fuss. I recommend this event to anyone interested in any of the tools that Oracle provides.

APEX @ ODTUG

All I can say is "WOW"!! I knew people are using and interested in APEX but the amount of interest and usage is amazing. Almost every APEX session had a full room and many were standing room only. Many times I would just end up standing outside the door looking in just so I wasn't taking up a seat , that's the type of guy I am ;). I would say everybody at the event was surprised at the interest and energy of the APEX crowd including the APEX crowd itself.

During the sessions I was amazed at the things people have done with our product, from the people who just use the stock out-of-the-box features, to applications that don't look or act like APEX at all until you look at the URL in the browser.

Both of these scenarios are perfect examples of the awesome cross-section of APEX developers and uses: from the business user just trying to solve a business problem in the quickest, cheapest, most reliable way around, to the advanced developer using custom JavaScript and/or third-party libraries to provide APEX-based RIA. APEX runs the gamut. Trying to be everything to everyone is hard, and from everything I saw at ODTUG I would say that APEX is doing a very good job of it.

Carl @ ODTUG

Well, here I have been, and will continue to be, my own worst critic; if anyone else wants to jump in, just try to make it constructive. I'll start with where I felt things went badly, and then mention the good. I wasn't as happy with my presentation as I should have been, and it was 100% my own fault; I will do better next time.

The Bad.

Two things I learned about giving presentations:

1. Never rewrite your whole presentation on the day of the presentation. This seems such an obvious statement, but this was only my second presentation ever and I've made the same mistake twice now. I will not do it again.

2. Make sure that your mic is adjusted correctly before you start. Trying to adjust a mic in a very hot room five minutes into your presentation, while already a bit nervous, wastes too much time, makes the presenter even more nervous, and compounds the problem. Next time the mic will be right on the collar and everything will be perfect.

The Good

Rewriting my presentation was the right thing to do; I just should have done it earlier. The presentation I had, which I will eventually show, was very flashy and whizbang, and would have been useless to pretty much everybody, though it would have made me look great ;).

What I wanted to do, and did, was show people some nuts-and-bolts examples of how things can be done in APEX: not how to fix or build particular things, but to give people ideas of what can be done. Using an interactive report as the example, I went through the features in APEX that allowed us to build them.

It ended up being a short and sweet session (48 minutes), but the room was very hot and it was the end of the day, so I think the timing was near perfect in that respect. And enough people came up to me afterwards to ask questions and/or ask for the application that I think I did fairly well.

Thanks to everybody who attended; it does make one feel good to know that so many people are interested.

One thing I will do next time is follow Dietmar's suggestion, and how he did his session, which is to use screencasts of the functionality. Live demos are great, but a screencast will show the same thing without the hassle of having to jump back and forth between applications.

After a bit of cleanup I'll be putting both the slides and the application out for everybody to take a look at, plus sending them to ODTUG so they can host them on their site as well.

New Orleans

This was my first time in New Orleans and I had a really nice time. There were some very good restaurants and watering holes, and I only had time to sample a few of each. The French Quarter was very cool, with some of its old-style architecture. Bourbon Street was an experience in itself, and considering how crowded it was on just a random week, I couldn't imagine being there during Mardi Gras; it must be insane.

Being from Vegas, we have a joke: "Sure it's 120 f (50 c), but it's a dry heat!" If New Orleans taught me one thing, it's that this is not a joke! Vegas might be 120, but New Orleans at 82 felt a lot hotter. I will tell that joke again in the future, but there will be a cold shiver down my spine as I remember what a non-dry heat feels like ;).

Lakers @ ODTUG

Congratulations, Boston.

Game 6 just happened to be the same night as the APEX meetup, which made going to a place with a TV mandatory. I was the only Lakers fan in attendance at the APEX meetup, and I remember all the names and faces of you haters :D

Hand grenade

A horrible yet intriguing drink. The name should be enough to keep you away from it; you have been warned.

Overview

At events like this it's the people that make it worthwhile, both the people in attendance and the people giving the sessions, and this year's ODTUG was no exception. Some of the most interesting ideas, questions and comments come up outside the sessions, though the sessions are the catalyst. I learned quite a few new things about APEX and how people use it, along with some other Oracle technologies, and I can't wait to start putting this information into action.

If you want more detailed accounts, try searching the APEX Blog Aggregator or the Oracle News Aggregator; there are many postings with much more detail on specific sessions.

The Happiness Meter

Rob Baillie - Mon, 2008-06-23 04:01
As part of any iteration review / planning meeting there should be a section where everybody involved talks about how they felt the last iteration went: what they thought stood in the way, what they thought went particularly well, and suchlike.

We find that as the project goes on, and the team gets more and more used to each other, this tends to pretty much always dissolve into everyone going "alright I suppose", "yeah fine".

Obviously, this isn't ideal and will tend to mean that you only uncover problems in the project when they've got pretty serious and nerves are pretty frayed.

This is where "The Happiness Meter" comes in.

Instead of asking the team if they think things are going OK and having most people respond non-committally, ask people to put a value against how happy they are with the last iteration's progress. Any range of values is fine, just as long as it has enough levels in it to track subtle movements. I'd go with 1-10.

You don't need strict definitions for each level, it's enough to say '1 is completely unacceptable, 5 is kinda OK, 10 is absolute perfection'.

At some point in the meeting, everyone in the team declares their level of happiness. When I say everyone, I mean everyone: developers, customers, XP coaches, infrastructure guys, project managers, technical authors, absolutely everyone who is valuable enough to have at the iteration review meeting should get a say.

In order to ensure that everyone gives their own honest opinion, each person writes down their number and everyone presents it at the same time. The numbers are then recorded and a graph is drawn.

From the graph we should be able to see:
  1. The overall level of happiness at the progress of the project.

  2. Whether there are any splits / factions in the interpretation of the progress.

If the level of happiness is low, this should be investigated; if there are any splits, this should be investigated; and just as importantly - if there are any highs, this should be investigated. It's good to know why things go well so you can duplicate it over the next iteration.
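The summarising and split-spotting described above can be sketched in a few lines. This is a minimal illustration, not part of the original practice: the 2.5-point spread threshold for flagging a possible faction is my own assumption.

```python
# Minimal sketch: summarise an iteration's happiness scores (1-10)
# and flag a possible split worth discussing. The spread threshold
# of 2.5 is an illustrative assumption.

from statistics import mean, stdev

def summarise(scores):
    avg = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    split = spread > 2.5  # a wide spread suggests factions
    return round(avg, 1), round(spread, 1), split

print(summarise([7, 8, 7, 6, 8]))   # a broadly content team
print(summarise([9, 9, 2, 3, 9]))   # a possible customer/developer split
```

Plotting each iteration's summary over time gives the graph the post describes; the discussion that follows is still the point, not the numbers.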

Factions tend to indicate that one part of the team has more power than the rest and the project is skewed into their interests rather than those of the team as a whole.

You may want to split the graph into different teams (customer / developer) if you feel that is important, but I like to think of us all as one team on the same side...

All said and done, the graph isn't the important bit - the discussion that comes after the ballot is the crucial aspect. This should be a mechanism for getting people to talk openly about the progress of the project.

UPDATE: Someone at work suggested a new name that I thought I should share: The Happy-O-Meter.

Auxiliary Constructs Appeal

Oracle WTF - Sat, 2008-06-21 17:24

Will somebody give this guy some auxiliary constructs? He just needs to know what's the auxiliary constructs, and examples in the auxiliary constructs. So if you have any auxiliary constructs you don't need, now's the time to dig deep. The appeal starts here.

Ideas for improving innovation and creativity in an IS department

Rob Baillie - Sat, 2008-06-21 04:49
At our work we've set up a few 'action teams' to try to improve particular aspects of our working environment.

The team that I'm a member of is responsible for 'Innovation and Creativity'.

We're tasked with answering the question "How do we improve innovation and creativity in IS?" - How we can foster an environment that encourages innovation rather than stifles it.

As a bit of background, the company is a medium-sized one (2,500-plus employees) based mainly in the UK, but recently spreading throughout the world; the vast majority of its employees are not IS based. The IS department is about 100 strong and includes a development team of 25 people. It's an SME at the point where it's starting to break into the big time and recognises that it needs to refine its working practices a little in order to keep up with the pace of expansion.

We met early last week and have put together a proposal to be taken to the senior management tier. I get a feeling it will be implemented, since our team included the IS Director (you don't get any more senior in our department), but you never know what'll happen.

I figured it might be interesting to record my understanding of the plan as it stands now, and then take another look in 6 months' time to see what's happened to it...

We decided that in order to have an environment that fosters creativity and innovation you need:

Freedom:

Time for ideas to form, for you to explore them, and then to put them into practice.

Stimulus:

Outside influences that can help to spark those ideas off - this may be from outside the organisation, or through cross-pollination within it.

Courage:

The conviction to try things, to allow them to fail or succeed on their own merit - both on the part of the individual and the organisation as a whole.

Natural Selection:

The need to recognise success when it happens, to take it into the normal operation of the business and make it work in practice. Also, the need to recognise failure when it happens, and stop that from going into (or continuing to exist within) the team.

Recognition:

When we have a good idea, the people involved need to be celebrated. When we have a bad idea, the people involved DO NOT need to be ridiculed.

Refinement:

The initial ideas aren't always the ones that are successful; it's the 4th, 5th or 125th refinement of that idea that forms the breakthrough. We need to understand what we've tried, and recognise how and why each idea has failed or succeeded, so we can learn from that.



We put together some concrete ideas on how we're going to help put these in place - and bear in mind that this isn't just for the development team, this is for the whole of the IS department - development, project management, infrastructure, operations, service-desk, even the technology procurement...

Curriculum:

A position set up that will be responsible for defining / tracking a curriculum for each job role in the department.

Obviously this will be fed by those people that currently fulfil the roles, and will involve things ranging from ensuring the process documentation is up to scratch, through specifying reading lists (and organising the purchasing of the books for the staff) and suggesting / collecting / booking conferences, training courses and the like that might be of use.

This takes the burden of responsibility away from the staff and managers - all you need is the idea and someone else will organise it and ensure it's on the curriculum for everyone else to follow up.

IdeaSpace (TM ;-) ):

A forum for the discussion of ideas, and collection of any documentation produced on those ideas and their investigation. This will (hopefully) form a library of past investigations as well as a stimulus for future ones. Everyone in the department will be subscribed to it.

Lab days:

Every employee is entitled to 2 days a month outside of their normal job to explore some idea they might have. That time can be sandbagged to a point, although you can't take more than 4 days in one stint. Managers have to approve the time in the lab (so that it can be planned into existing projects) and can defer the time to some extent, but if requests are forthcoming they have to allow at least 5 days each rolling quarter so that the time can't be deferred indefinitely.

Whilst the exact format of the lab is yet to be decided, we're aiming to provide space away from the normal desks so that there is a clear separation between the day job and lab time. People will be encouraged to take time in the lab as a team as well as individually. Also, if we go into the lab for 3 days and find that an idea doesn't work, that idea should still be documented and the lab time regarded as a success (we learnt something).
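The lab-day banking rules above (two days accrue per month, no stint longer than four days) can be sketched as a tiny accrual tracker. This is a hedged illustration only: the class and method names are mine, not a real system, and the manager-deferral and rolling-quarter rules are left out for brevity.

```python
# Illustrative sketch of the lab-day rules: two days accrue per
# month and a single stint may not exceed four days. Names and
# structure here are assumptions, not a real HR system.

MONTHLY_ACCRUAL = 2
MAX_STINT = 4

class LabDayBank:
    def __init__(self):
        self.balance = 0

    def accrue_month(self):
        self.balance += MONTHLY_ACCRUAL

    def request(self, days):
        """Grant a stint of `days` lab days if the rules allow it."""
        if days > MAX_STINT or days > self.balance:
            return False
        self.balance -= days
        return True

bank = LabDayBank()
for _ in range(3):          # a quarter's worth of accrual
    bank.accrue_month()
print(bank.request(5))      # refused: over the four-day stint cap
print(bank.request(4))      # granted: leaves two days banked
```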

Dragon's Den:

Gotta admit, I'm not sure about some of the connotations of this - but the basic idea is sound. Coming out of time in the Lab should be a discussion with peers about the conclusion of the investigation in a Dragon's Den format. This allows the wider community to discuss the suitability of the idea for future investigations, or even immediate applicability. One output of this meeting may be the formalisation of conclusions in the IdeaSpace.

Press Releases:

The company is already pretty good at this, but when something changes for the better we will ensure that we celebrate those changes and, even for a day, put some people up on pedestals.

None of the above should be seen as a replacement for just trying things in our day to day job - but the idea is that these things should help stress to the department that change and progress are important aspects of what we do, and that we value it enough to provide a structure in which big ideas can be allowed to gestate. Cross-pollination and communication should just form part of our normal day job anyway, and we should ensure that our project teams are cohesive and communicate freely amongst and between themselves.

Also, an important factor in the success of the above has to be the format of the Dragon's Den - if it is in any way imposing or nerve-racking then the idea is doomed to failure. As soon as people feel under pressure to justify themselves then the freedom disappears.

I'm quite excited by the prospect of putting these ideas into practice, and I wonder exactly where we'll end up.

I'll keep you all posted.

Q.E.D.

Claudia Zeiler - Thu, 2008-06-19 17:36
User on test DB, "Response time is terrible."

DBA, "You are the only user on the DB, but you have a dozen sessions open. Can you close some sessions?"

User, "I can't see my sessions through the application. Bounce the database"

DBA, "I can see the sessions just fine. I'll kill your excess sessions."

User, "No, don't kill my sessions. Bounce the database. I'm bouncing the database."


We don't need no stinking controls around here.
Everyone can do everything.

He bounced the database.

User, "I bounced the database. My [one] session is running just fine."

Q.E.D. - bouncing the database improves performance.




==========================================================

An update to the post above:

The same user has informed me that I should always shut down the database with "shutdown abort". "It works much better."

I have always assumed that everyone else knows more than I do.

Maybe that isn't true. Maybe I do understand more than some people...

Want to Add a Responsibility? How about Oracle User Management?

Solution Beacon - Thu, 2008-06-19 14:06
When you think of adding a new responsibility to an existing E-Business Suite user, does System Administrator come to mind? Visions of going to the Define User form, tabbing down and finding the new responsibility to add? Did you know that you can add responsibilities to a user through Oracle User Management? Let’s step through how this can happen together. Below I have set up a brand new

Lessons Learned after a Hell Weekend

Claudia Zeiler - Mon, 2008-06-16 22:55
  1. A necessary part of any database project plan is a fall-back plan: what is planned if there is a failure at any particular step.
  2. Backups - this includes enough space allocated for additional backups as needed.
  3. Part of any upgrade script should be rollback scripts.


I should have known that I was in trouble when, on Friday afternoon, I was given a timeline described as 'optimistic' with no what-ifs foreseen.

tapiGen in the wild (new opensource)

Carl Backstrom - Thu, 2008-06-12 18:43
Dan McGhan has started a SourceForge project, tapiGen, which generates PL/SQL APIs for accessing many table-based database features.
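For readers who haven't met the table-API (TAPI) pattern that a generator like tapiGen automates, here is a hand-written sketch. tapiGen's actual output and naming conventions will differ; the package, procedure and sequence names below are illustrative assumptions only.

```sql
-- Hedged sketch of the TAPI pattern: one package per table wrapping
-- the DML, so callers never issue raw INSERTs. "employees_tapi" and
-- "employees_seq" are hypothetical names, not tapiGen output.
create or replace package employees_tapi as
  procedure ins (p_employee_name in  employees.employee_name%type,
                 p_department_id in  employees.department_id%type,
                 p_employee_id   out employees.employee_id%type);
end employees_tapi;
/
create or replace package body employees_tapi as
  procedure ins (p_employee_name in  employees.employee_name%type,
                 p_department_id in  employees.department_id%type,
                 p_employee_id   out employees.employee_id%type) is
  begin
    insert into employees (employee_id, employee_name, department_id)
    values (employees_seq.nextval, p_employee_name, p_department_id)
    returning employee_id into p_employee_id;
  end ins;
end employees_tapi;
/
```

Generating packages like this for every table is tedious by hand, which is exactly the gap a generator fills.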

You can read more about it here. Try it out and drop him a line on all his work, I'm sure he'd appreciate hearing from you.

If you like what he's done you should consider signing up to help out, or at least providing some feature requests, bug reports, feedback and such.

APEX 3.1.1 Released

Duncan Mein - Tue, 2008-06-10 10:19
Just upgraded from APEX 3.1 to 3.1.1 on an Oracle Enterprise Linux 4 Update 4 platform.

Install took 5:39 and terminated without error.

The patch can be downloaded from MetaLink (patch number 7032837).

All in all, a very simple upgrade, and now on to the task of regression testing our current 3.1 apps.
