Feed aggregator

REPOST: Pivot and Crosstab Queries

Robert Vollman - Thu, 2007-01-25 15:00
Here is another advanced concept that will come in useful when solving Oracle problems. Imagine you're trying to create a result set where the rows need to be columns, or vice versa. In essence, you need to "pivot" rows into columns, or vice versa. That is a very common requirement, and this is where you need to look at a pivot (or crosstab) query to get the job done. As always, when you want …
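
A minimal sketch of the technique - the sales table and its columns here are hypothetical - using the classic CASE/GROUP BY idiom to pivot one row per (product, quarter) into one column per quarter:

REM Hypothetical table: sales(product, quarter, amount)
select product,
       sum(case when quarter = 'Q1' then amount end) as q1,
       sum(case when quarter = 'Q2' then amount end) as q2,
       sum(case when quarter = 'Q3' then amount end) as q3,
       sum(case when quarter = 'Q4' then amount end) as q4
from   sales
group by product;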

OracleXml putxml limitations

Vidya Bala - Wed, 2007-01-24 10:29
OracleXml putxml limitations:

Example 1: XML file (test.xml):

<ROWSET>
  <ROW num="2">
    <ID>15</ID>
  </ROW>
</ROWSET>

Using the Java API for the XDK, the command below
java OracleXML putXML -user "vidya/vidya" -ignorecase -filename "test.xml" "emp"

will load one row into table emp.

Example 2: XML file with namespaces (test1.xml):

<ns:EMP>
  <ns:ITEM>
    <ns:ID>2</ns:ID>
  </ns:ITEM>
</ns:EMP>

Using the Java API for the XDK, the command below

java OracleXML putXML -user "vidya/vidya" -ignorecase -filename "test1.xml" "emp"

will fail if OracleXML cannot identify the row tag; in that case the command can be modified as below:

java OracleXML putXML -user "vidya/vidya" -ignorecase -rowtag "ITEM" -filename "test1.xml" "emp"

Note that the above will still error, as OracleXML does not seem to work on a file with namespaces. Probably the only way to get around this is by applying a stylesheet first.
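
One way to prototype that stylesheet inside the database is with XMLTRANSFORM. The sketch below (the namespace URI is an assumption, since the original file's declaration is not shown) rebuilds every element and attribute under its local name, yielding namespace-free XML that putXML can then load with -rowtag "ITEM":

REM Sketch only: the namespace URI is assumed.
select xmltransform(
         xmltype('<ns:EMP xmlns:ns="http://example.com/ns"><ns:ITEM><ns:ID>2</ns:ID></ns:ITEM></ns:EMP>'),
         xmltype('<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                    <xsl:template match="*">
                      <xsl:element name="{local-name()}">
                        <xsl:apply-templates select="@*|node()"/>
                      </xsl:element>
                    </xsl:template>
                    <xsl:template match="@*">
                      <xsl:attribute name="{local-name()}"><xsl:value-of select="."/></xsl:attribute>
                    </xsl:template>
                  </xsl:stylesheet>')) as stripped
from dual;
REM Returns <EMP><ITEM><ID>2</ID></ITEM></EMP>
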
Categories: Development

What hardware should I buy?

Edward Whalen - Wed, 2007-01-24 10:28

I recently had one of my blog readers ask me for advice on some new hardware that he was going to purchase for running Oracle on Windows. It is difficult to give specific advice on what to purchase, but I can provide a few general guidelines.

  1. Get something that is expandable. If you don't need 4 CPUs now, get a system capable of supporting 4 CPUs but populate only one or two. Make sure that you can add memory as necessary: start with 2 or 4 GB, but make sure that there are free slots in case you need to add more.
  2. If you will be running Oracle 10g, absolutely go 64-bit. Any recent Xeon or Opteron processor supports 64-bit Windows. The 64-bit version of Windows 2003 works great and is priced similar to the 32-bit version.
  3. Get a name brand. HP, IBM, Dell, etc. Get something that is supported by the manufacturer.
  4. If possible, separate the application tier from the database tier.
  5. Get sufficient disk drives. I'm not saying that you need to start with 8, but you need enough so that IO performance is not a problem. How many do you need? I can't tell you without knowing more about the database and application, etc.
  6. Absolutely use a RAID controller and disk mirroring (RAID 1). If you lose your data, you are out of business.
  7. In larger systems I recommend separating the log drives and data drives, since the loss of one of the two is a recoverable failure. The loss of both is catastrophic.
  8. If you don't already have one, get something to back up your database with. This can be tape, DVD, etc.
  9. Get trained, get some books, etc. I'm trying to convince Oracle Press to let me do an Oracle on Windows book.

I hope that this is helpful. If you have any comments or a suggestion for a future topic, drop me an email at ewhalen@perftuning.com.

Oracle Text 9i Bug searching XML Data

Vidya Bala - Wed, 2007-01-24 10:25
In my last post I discussed a 9i (9.2.0.6) bug with the section_group_type AUTO_SECTION_GROUP. This group type automatically creates a zone section for each start-tag/end-tag pair in an XML document. The section names derived from XML tags are case sensitive, as in XML.
Searches with auto_section_group work in 9.2.0.6, but not for attributes within a tag. For example, given

<Book title="A" author="B">

the attributes title and author cannot be searched using the auto_section_group section type in 9.2.0.6. The bug has been fixed in 9.2.0.8.
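
For reference, the failing attribute search looks like the following - a sketch assuming a CONTEXT index built with AUTO_SECTION_GROUP on a test table holding the XML (as in the post further down):

select data from test
where contains(data, 'A WITHIN Book@title') > 0;
REM 9.2.0.6: no rows selected; 9.2.0.8 and 10g: 1 row selected
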
Categories: Development

Oracle SOA Suite 10.1.3.1 review - Oracle sows the seeds for SOA

Clemens Utschig - Wed, 2007-01-24 02:47
Infoworld published a review of our SOA Suite here today.

Here is one quote

My experience with the Oracle SOA Suite revealed a top-notch toolset well-culled from a variety of sources without much sacrifice to aptitude or usability. When it comes to message routing and services orchestration, Oracle SOA Suite meets or exceeds most needs for governance, security, insight, and optimization at a price that’s hard to beat.
James R. Borck is senior contributing editor of the InfoWorld Test Center.
-- http://www.infoworld.com/article/07/01/22/04TCoraclesoa_2.html

Enjoy reading...

OOP 2007 Keynote - Pictures

Clemens Utschig - Wed, 2007-01-24 01:37
As already announced earlier, I had a keynote at OOP this year (on "managing successful SOA projects") here in Munich. Aside from all the really good speakers and a very geeky crowd - snow finally caught even me :D

What a blast: Monday evening, and a full room of interested people - something you don't see too often at conferences where the exhibition starts on Tuesday.

Every time I speak at a conference on this topic I learn something new, from questions after a session as well as from how the crowd reacts. This time the biggest take-away from OOP: the waterfall is dead, at least there :D

Here are 2 pictures from my keynote.



Presenting at ODTUG Kaleidoscope 2007 in Orlando

Clemens Utschig - Sat, 2007-01-20 16:15
As fellow blogger Wilfred van der Deijl wrote on his blog, I got an email too - an invitation to speak there. Last year I got the slots from my manager (thx Dave :D); this year they are my own - awesome.

I'll present on 3 topics: two for beginners, and one intermediate:
  • Managing successful SOA projects, a view beyond agile science
    The session will give an inside view into the methodologies applied to govern successful SOA projects. We will discover some of the common challenges by examining the different phases of a project and by splitting the project into different categories - each representing its own set of issues - and see how these can be successfully mastered by applying the right tools and the right skills. An insider's view - to make your SOA projects successful.

  • Oracle 11g — Oracle's Next Generation SOA Infrastructure
    which will give a sneak preview of 11g - the fully integrated, SCA-based SOA platform

  • Advanced Concepts of the BPEL Language
    which is one of the sessions from last year that was very well attended, and that I also presented at Oracle Open World
Last year the conference was really well attended, and Washington DC was great - although insanely hot and humid, which made every walk outside a little bit of a sauna bath.

I hope to see you there - this conference is definitely worth the trip, and hey, Daytona is not that bad :D

Heading for Europe - to support one of our key SOA Suite customers

Clemens Utschig - Sat, 2007-01-20 15:47
After a bad abdominal virus this week - one that seems to be plaguing the Bay Area, and that got me a sleepless night in the ER at UCSF Med - I am somewhat well again, and on my way to Europe.

I got the chance to give a keynote at OOP 2007 in Munich (Germany), speaking about managing successful SOA projects - and the importance of the human/organizational side in SOA projects.

If you happen to be at the OOP or around Munich, I think the entry to the demo pods / show floor is free - so come by and visit me - I'll be there to answer questions around SOA in general, demo the SOA Suite and meet customers.

Also during the week I'll be visiting one of our customers in Germany to make sure their large, full-fledged SOA implementation hits the target.
With this in mind, and with all the feedback we receive from the field in our OTN forums, I am very happy to see the industry and the market adopting our SOA Suite - the latest edition of which is a fully integrated platform spanning governance, development, and security, as well as BPEL and ESB.

Logical Reads and Orange Trees

Eric S. Emrick - Thu, 2007-01-18 23:36
My previous post was a riddle aimed to challenge us to really think about logical I/O (session logical reads). Usually we think of I/O in terms of OS block(s), memory pages, Oracle blocks, Oracle buffer cache buffers, etc. In Oracle, a logical I/O is neither a measure of the number of buffers visited, nor the number of distinct buffers visited. We could of course craft scenarios yielding these results, but these would be contrived special cases - like an episode of Law and Order only better. Instead, logical I/O is the number of buffer visits required to satisfy your SQL statement. There is clearly a distinction between the number of buffers visited and the number of buffer visits. The distinction lies in the target of the operation being measured: the visits not the buffers. As evidenced in the previous post we can issue a full table scan and perform far more logical I/O operations than there are blocks in the table that precede the high water mark. In this case I was visiting each buffer more than one time gathering up ARRAYSIZE rows per visit.

If I had to gather up 313 oranges from an orchard using a basket that could only hold 25 oranges, then it would take me at least 13 visits to one or more trees to complete the task. Don't count the trees. Count the visits.
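
You can watch this in SQL*Plus - a sketch reusing the mytable from the riddle below - by varying the fetch size and noting that the consistent gets fall while the table's block count stays put:

set autot traceonly statistics;
set arraysize 15
REM default-sized basket: many visits per buffer
select * from mytable;
set arraysize 500
REM bigger basket: far fewer buffer visits (consistent gets)
select * from mytable;
set autot off;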

Oracle Riddles: What's Missing From This Code?

Eric S. Emrick - Mon, 2007-01-15 18:56
The SQL script below has one line intentionally omitted. The missing statement had a material impact on the performance of the targeted query. I have put diagnostic bookends around the targeted query to show that no DML or DDL has been issued to alter the result. In short, the script inserts 32K rows into a test table. I issue a query requiring a full table scan, run a single statement, and rerun the same query - also a full table scan. While the second query returns the same number of rows, it performs far fewer logical I/O operations to achieve the same result set. Review the output from the script. Can you fill in the missing statement? Fictitious bonus points will be awarded to the Oracle scholar who can deduce the precise statement :)

/* Script blog.sql


spool blog.out
set feed on echo on;
select * from v$version;
drop table mytable;
create table mytable (col1 number) tablespace users;
insert into mytable values (3);
commit;
begin
for i in 1..15 loop
insert into mytable select * from mytable;
commit;
end loop;
end;
/
analyze table mytable compute statistics;
select count(*) from mytable;
select blocks from dba_tables where table_name = 'MYTABLE';
select blocks from dba_segments where segment_name = 'MYTABLE';
select index_name from user_indexes where table_name = 'MYTABLE';
set autot traceonly;
select * from mytable;
set autot off;
REM Bookends to show no DML or DDL statement has been executed.
select statistic#, value from v$mystat where statistic# in (4,134);
... missing statement
REM Bookends to show no DML or DDL statement has been executed.
select statistic#, value from v$mystat where statistic# in (4,134);
set autot traceonly;
select * from mytable;
set autot off;
select blocks from dba_tables where table_name = 'MYTABLE';
select blocks from dba_segments where segment_name = 'MYTABLE';
select index_name from user_indexes where table_name = 'MYTABLE';
select count(*) from mytable;
spool off;


End Script blog.sql */

/* Output

oracle@eemrick:SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Solaris: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
5 rows selected.
oracle@eemrick:SQL> drop table mytable;
Table dropped.
oracle@eemrick:SQL> create table mytable (col1 number) tablespace users;
Table created.
oracle@eemrick:SQL> insert into mytable values (3);
1 row created.
oracle@eemrick:SQL> commit;
Commit complete.
oracle@eemrick:SQL> begin
2 for i in 1..15 loop
3 insert into mytable select * from mytable;
4 commit;
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.
oracle@eemrick:SQL> analyze table mytable compute statistics;
Table analyzed.
oracle@eemrick:SQL> select count(*) from mytable;
COUNT(*)
----------
32768
1 row selected.
oracle@eemrick:SQL> select blocks from dba_tables where table_name =
'MYTABLE';
BLOCKS
----------
61
1 row selected.
oracle@eemrick:SQL> select blocks from dba_segments where segment_name =
'MYTABLE';
BLOCKS
----------
64
1 row selected.
oracle@eemrick:SQL> select index_name from user_indexes where table_name =
'MYTABLE';
no rows selected
oracle@eemrick:SQL> set autot traceonly;
oracle@eemrick:SQL> select * from mytable;
32768 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1229213413
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         | 32768 | 65536 |    26   (4)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| MYTABLE | 32768 | 65536 |    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2248 consistent gets
0 physical reads
0 redo size
668925 bytes sent via SQL*Net to client
24492 bytes received via SQL*Net from client
2186 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
32768 rows processed
oracle@eemrick:SQL> set autot off;
oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been
executed.
oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#
in (4,134);
STATISTIC# VALUE
---------- ----------
4 18 <-- Statistic #4 is user commits

134 461920 <-- Statistic #134 is redo size
2 rows selected.
oracle@eemrick:SQL> ... missing echo of statement
oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been
executed.
oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#
in (4,134);
STATISTIC# VALUE
---------- ----------
4 18
134 461920
2 rows selected.
oracle@eemrick:SQL> set autot traceonly;
oracle@eemrick:SQL> select * from mytable;
32768 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1229213413
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         | 32768 | 65536 |    26   (4)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| MYTABLE | 32768 | 65536 |    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
173 consistent gets
0 physical reads
0 redo size
282975 bytes sent via SQL*Net to client
1667 bytes received via SQL*Net from client
111 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
32768 rows processed
oracle@eemrick:SQL> set autot off;
oracle@eemrick:SQL> select blocks from dba_tables where table_name =
'MYTABLE';
BLOCKS
----------
61
1 row selected.
oracle@eemrick:SQL> select blocks from dba_segments where segment_name =
'MYTABLE';
BLOCKS
----------
64
1 row selected.
oracle@eemrick:SQL> select index_name from user_indexes where table_name =
'MYTABLE';
no rows selected
oracle@eemrick:SQL> select count(*) from mytable;
COUNT(*)
----------
32768
1 row selected.
oracle@eemrick:SQL> spool off;


End Output */

Clue: The missing statement is not "alter system set do_less_work = true;"

Second Life client running on Solaris x64 - contd

Siva Doe - Thu, 2007-01-11 20:38

As promised, here is the update.

For Mads and whoever else is interested in building the Second Life client on Solaris (x64), this is what I did.
Please do remember, these steps are just to get the client to build on Solaris. I haven't completely run the client yet (nothing beyond the login screen). So, buyer beware.
Basically, I followed the Linux instructions on the Second Life Twiki page.
Downloaded and built all the libraries mentioned, and copied the libraries and headers into a directory named 'i686-sunos5' instead of 'i686-linux'.
Modified SConstruct  (for scons building).

  • Look for 'linux' and introduce the code for 'sunos5' (replace -DLL_LINUX with -DLL_SOLARIS)
  • Remove the 'db-4.2' entry under libs line.
  • Replace 'yacc' with 'bison -y'; 'lex' with 'flex'; 'g++-3.4' with 'g++' (under /usr/sfw/bin); 'strip' with 'gstrip'

Whichever subdirectory contains 'files.linux.lst', make a copy of it called 'files.sunos5.lst'.
Then comes the code changes. Basically, search for files containing 'LL_LINUX' and add "|| LL_SOLARIS" or "&& ! LL_SOLARIS" as appropriate.
In llcommon/llpreprocessor.h:38, I added  "|| (defined(LL_SOLARIS) && !defined(__sparc))" to the line to set the ENDIAN correctly.
In llcommon/llsys.cpp, I added code to use the output of 'psrinfo -v', instead of reading from '/proc/cpuinfo', for SOLARIS. Similarly, used 'getpagesize() * sysconf(_SC_PHYS_PAGES)' to get the "Physical kb". I know there are better ways, but I just wanted to get the build completed.
In llmath/llmath.h, I was running into some problems regarding 'isfinite'. I replaced it with this:
#define llfinite(val) (val <= std::numeric_limits<double>::max())
The other significant work is in 'llvfs/llvfs.cpp' and 'newview/viewer.cpp'. Replaced the code for 'flock' with the appropriate 'fcntl' code.
In the files 'newview/lldrawpoolsky.h' and 'newview/llvosky.cpp', replace the variable 'sun' with 'Sun'; 'sun' is already defined on SunOS, of course.
In 'newview/viewer.cpp', rewrote the 'do_basic_glibc_backtrace()' to use 'printstacktrace()' instead.
Well, you can follow the above instructions, or send me a mail and I will send you the diff output. :-)

For runtime, you will have to set your LD_LIBRARY_PATH as mentioned in the Twiki page.
That's all I have for now. Will update later if anything changes.

Regards
Siva

PS: It is indeed a pity that I can't log in to a server named after me ;-) (userserver.siva.lindenlab.com)

Increasing the Longevity of Your CPU

Eric S. Emrick - Thu, 2007-01-11 18:42

One of my current assignments is to evaluate the potential to increase CPU headroom on a server running a large OLTP Oracle database. Of course, any project such as this is typically motivated by the desire to save money by foregoing a seemingly imminent hardware upgrade. Realizing more CPU headroom for your Oracle database server can be achieved by, but is not limited to, the following approaches:

1. Add more same-speed CPUs to your existing server.
2. Replace your existing CPUs with faster CPUs.
3. Replace your existing CPUs with a greater number of faster CPUs.
4. Commission a new server platform with more same-speed and/or faster CPUs.
5. Commission a new server platform with a greater number of slower CPUs.
6. Chronologically distribute the load on the system to avoid spikes in CPU.
7. Reduce the work required of the system to satisfy the business.


More often than not I suspect approaches 1-5 are chosen. I am of the opinion that 1) and 3) are more predictable when trying to evaluate the expected CPU headroom yield. Propositions 2), 4) and 5) can be a little less predictable. For example, if I double the speed of my current CPUs, will I yield the upgraded CPU cycles as headroom? That is, if I am running 10x500MHz and upgrade to 10x1GHz, will I now have an additional 5GHz as headroom? It has been my experience that upgrades such as these do not produce such predictable results, especially if your current box approximates 100% utilization. Certainly, moving to a new server with a greater number of same-speed and/or faster CPUs is a tricky proposition. New servers need to be tested against Production volume with great rigor. While at face value 500MHz would appear to be universally “portable” to any server, there are many other factors that can influence your CPU horsepower: memory architecture, amount of processor cache, processor crosstalk, etc. Options 1-5 can all be very costly and in some cases yield undesirable and unpredictable results.

If you have the luxury of distributing the load on your system to avoid spikes in CPU then that is a great first option. It could buy you more time to evaluate a longer-term solution. For example, shifting any batch jobs to off-peak OLTP hours might give you immediate relief.

What if we can simply “do less” to satisfy the needs of the business? This concept is not new to most Database Administrators and rings a rather cliché tone. After all, aren’t we brow-beaten by the dozens of books and countless articles that speak to database and SQL optimization? The “do less” principle is very sound, but it can be intractable. Reducing the work required of an application often requires management support and can run into political obstacles at every turn. Getting Application Developers and Database Administrators to work in lockstep can require a significant effort. If Management, Developers and Database Administrators buy into a synergistic endeavor the benefits can be amazing – and can save the company a large sum of money.

If you are lucky enough to be working on a project where the common goal of all parties is to reduce the CPU load on your system then I have learned a few things that I hope can help you.

Identify the Targets for Optimization

Identify those SQL statements that contribute the most to the CPU load on your database server. These are usually the statements that produce the most logical I/O on your database. Caution needs to be taken when trying to identify these statements. You shouldn’t focus solely on those statements that have the highest logical I/O (LIO) to execution ratio. Often you will find statements that are well optimized but are executed with extremely high frequency. Look for the aggregate LIO footprint of a SQL statement. Without Statspack or AWR this analysis might be very difficult. However, if you collect this diagnostic data you can use the LEAD analytical function to craft a nice SQL statement to identify the top CPU consuming statements on your system (join stats$sql_summary and stats$snapshot), as sketched below.
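
As a sketch of that statement - column names should be verified against your Statspack version, and the snapshot window restricted to the interval you care about - LEAD turns the cumulative buffer_gets into per-interval deltas that can then be ranked:

REM Per-statement, per-interval logical I/O and executions from Statspack.
REM NULL/negative deltas (last snapshot, cursor reloads) are filtered out.
select hash_value, snap_id, lio_delta, exec_delta
from  (select s.hash_value, sn.snap_id,
              lead(s.buffer_gets) over
                (partition by s.hash_value order by sn.snap_id) - s.buffer_gets as lio_delta,
              lead(s.executions) over
                (partition by s.hash_value order by sn.snap_id) - s.executions as exec_delta
       from   stats$sql_summary s, stats$snapshot sn
       where  sn.snap_id = s.snap_id
       and    sn.dbid = s.dbid
       and    sn.instance_number = s.instance_number)
where lio_delta > 0
order by lio_delta desc;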

Don’t limit your SQL statement identification to just those statements flagged by your analysis as top CPU consumers. Go another step and identify the most frequently executed statements. Some of the most frequently executed statements are the most optimized on your system, yet when executed by many programs concurrently they can still influence concurrency and thus CPU load. One approach I took recently identified the top 20 CPU consuming statements during a 12 hour window of each week day. I then ran the same analysis against the most frequently executed statements on the system. The two lists yielded only 31 distinct statements, as 9 were on both. The amazing thing is that, on average, these 31 statements contributed 58% of all logical reads on the system and 59% of all executions. Keep in mind that there were over 20 thousand distinct statements cached in the Shared Pool. It is rather amazing that such a small subset of the application footprint contributed so greatly to the aggregate load.

Ask The Right Questions

The identification phase is crucial as you want to optimize that which will yield the greatest benefit. Subsequent to the identification phase the Database Administrators and Developers can sit and discuss approaches to reduce the load incurred by these SQL statements. Here are some of the key points I have taken away during such collaborative efforts.

1. Is there a better execution plan for the statement? Optimization is often achieved by rewriting the query to get at a better execution plan. While I don’t like hinting code, hints can relieve pressure in a pinch.

2. Does the statement need to be executed? If you see SQL statements that seldom or never return rows (rows/exec approaches 0), there is a possibility they can be eliminated from your application.

3. Does the statement need to be executed so frequently? You might be surprised that Developers often have other application-side caching techniques that can dramatically reduce the frequency of a statement’s execution against the database. Or, the application might simply call the statement needlessly. It doesn’t hurt to ask!

4. Are the requirements of the business immutable? Sometimes you can work an optimization by simply redefining what is required. This is not the tail wagging the dog here. It is possible that the business would be completely happy with a proposed optimization. For example, can the query return just the first 100 rows found instead of all rows?

5. Do the rows returned to the application need to be sorted? Highly efficient SQL statements can easily have their CPU profile doubled by sorting the output.

6. Are all columns being projected by a query needed? If your application retrieves the entire row when it needs only a very small subset of the attributes, it is possible you could satisfy the query using index access alone.

7. Is the most suitable SQL statement being executed to meet the retrieval requirements of the application? Suitability is rather vague, but could apply to: the number of rows fetched, any misplaced aggregation, insufficient WHERE clause conditions, etc.

8. Are tables being joined needlessly? I have encountered statements that join a table and project some of its attributes even though, as the Developers confirmed, the retrieved data is never used. The inclusion of another table in such a manner can dramatically increase the logical I/O required. This is extremely difficult for a DBA to discern without intimate application code knowledge.

9. How well are your indexes clustered with your table(s)? Sometimes data reorganization techniques can greatly reduce the logical I/O required of a SQL statement. Sometimes IOTs prove to be very feasible solutions to poor performing queries.

10. Can I add a better index or compress/rebuild an existing index to reduce logical I/O? Better indexing and/or index compression could take a query that required 10 logical I/O operations down to 5 or 6. This might feel like a trivial optimization. But if this statement is executed 300 times each second, that could save your system 1,500 logical I/Os per second. Never discount the benefit of a 50% reduction in an already seemingly optimized statement (see the sketch after this list).

11. Can I reorganize a table to reduce logical I/O?
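
For item 10, a quick way to size up the benefit before committing to a rebuild - the index name here is hypothetical - is to validate the index and let Oracle predict the compression savings:

analyze index my_app_idx validate structure;
REM OPT_CMPR_COUNT is the suggested prefix length, OPT_CMPR_PCTSAVE the
REM predicted space savings; INDEX_STATS holds one row, for this session only.
select opt_cmpr_count, opt_cmpr_pctsave from index_stats;
REM Rebuild using the OPT_CMPR_COUNT reported above (2 here as an example).
alter index my_app_idx rebuild compress 2;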


I suspect most of us have read that 80% of optimization is application centric (I tend to feel that the percentage is higher). Usually the implication is that the SQL being generated and sent to the database is 80% of the target for optimization. More specifically, optimization requires the tuning of SQL 80% of the time. However, don’t limit your efforts to optimize your application to “tuning the SQL.” Sometimes a portion of your optimization will include “tuning the algorithms” used by the application. Needless execution and improper execution of SQL statements can be equally destructive. Hardware resources, in particular CPUs, can be very expensive to purchase and license for production Oracle databases. It is well worth the effort to at least investigate the possibilities of increasing CPU headroom by decreasing CPU utilization.

Update: An astute reader suggested I mention Cary Millsap's Optimizing Oracle Performance with regard to this topic. I highly recommend reading this book, as it weighs in heavily on Oracle optimization and Method R. Trust me: if you have optimization on the brain, don't miss this read.

9i bug using Oracle Text to search XML data

Vidya Bala - Thu, 2007-01-11 13:05
Using Oracle Text to search XML data: the XML below is inserted into the DATA column of the TEST table.

Create table test(id integer, DATA CLOB)


<Department name="CS">
  <Employee>
    Vidya Bala
  </Employee>
</Department>


Step1:
--------

Create an auto section group

begin
ctx_ddl.create_section_group('myautosectiongroup', 'AUTO_SECTION_GROUP');
end;
/

Step2:
-------
create index test_index on test(DATA)
indextype is ctxsys.context
parameters ('SECTION GROUP myautosectiongroup');

Step3:
---------
SELECT DATA FROM TEST
WHERE CONTAINS(DATA, 'Vidya WITHIN Employee') > 0;
1 Row Returned

SELECT DATA FROM TEST
WHERE CONTAINS(DATA, 'CS WITHIN Department@name') > 0;

0 rows returned (10g returns 1 row; 9i returns no rows - Support is working on a bug fix for the issue)
Categories: Development

Business Integration Journal changes its name - and the last BIJ contains my new article

Clemens Utschig - Wed, 2007-01-10 15:06
"Business Integration Journal (BIJ) is changing to Business Transformation & Innovation (BTIJ) to better reflect the current editorial content being published and new topics readers want covered.

Targeted at the intersection of business and IT, BTIJ carries on the “how-to” and “what-works” tradition of Business Integration Journal helping IT and business managers quickly adapt to changing business needs, continuously innovate successful new products and services, and consistently gain and maintain a competitive advantage.
"

In the last edition of the BIJ - as we know it - my article on SOA projects got published - and I am very proud of it.

"A World of Assumptions That Make an SOA Project the Perfect Storm—and What You Should Know About Them by Clemens Utschig-Utschig. This article shows that, while Service-Oriented Architecture (SOA) brings many advantages, it doesn’t alleviate the need for good planning, organization, and training."

The whole article can be found here
http://www.bijonline.com/index.cfm?section=article&aid=802

Second Life client running on Solaris x64

Siva Doe - Wed, 2007-01-10 14:31

Yes. I was able to build the recently open sourced Second Life client. It took me a couple of days to get to this stage. Getting it running took another half day; the reason being that ENDIAN was set to BIG for non-Linux boxes, and that had me stumped for a long time. Of course, being behind the SWAN firewall doesn't let the client connect to the server :-( Anyone know a way to achieve this?

Anyway, here is the obligatory screenshot. Will keep you posted on updates.

Siva

Remote automated install of Oracle 10g client

Stephen Booth - Wed, 2007-01-10 06:04
We have a situation where we need to rationalise the range of Oracle clients (i.e. the bit that sits between the app and the network stack) we have installed. We currently have versions from 7.x through to 10.2 installed across approximately 12,000 desktops (across various locations in an area of around 26 square miles) running various apps on Windows versions from NT4 to XP (mostly …

Auditing on FND_FLEX_VALUES: How to see Audit History

Jo Davis - Tue, 2007-01-09 21:23
To keep an audit trail of the mapping of segment values to external values (such as for extracts to other systems), and of changes to that mapping:
1) Define a DFF on FND_FLEX_VALUES to contain the mapping values
2) Enable auditing on FND_FLEX_VALUES
3) To query the audit data, start with this:
select *
from FND_FLEX_VALUES_AC1
where attribute1 is not null;
FND_FLEX_VALUES_AC1 is a view of the audit table

How to Migrate Personalizations from TEST to PROD on Release 11.5.10.2

Jo Davis - Tue, 2007-01-09 20:35
This applies to iProcurement (11.5.10 only), iExpenses 2nd Generation (i.e. above 11.5.7, with the white screens, not the blue ones), and anything else that uses the new self-service architecture (i.e. NOT iTime or anything else with those blue screens and no bouncing balls across the top... you know what I mean)

It's all covered in Metalink Note 370734.1, but suffice it to say:

- it can be done, you don't need to redo all the personalizations

- you can only use this on release 11.5.10.2

- you will need the appltest and applprod passwords and a passing familiarity with Unix (or a tame developer or DBA to help you)

Have a great day

Jo

A new SOA year has begun - what will the SOA Santa bring?

Clemens Utschig - Thu, 2007-01-04 16:00
Wow, the new SOA year is here - and many cool things are about to arrive. But before looking into the future, let's look back into 2006 - and what happened there.

My first year of living in San Francisco and working within a wonderful group - envisioning the future and creating our next generation products - went by, and I must admit it was far better than I thought it'd be.

First, and most important - we delivered SOA Suite 10.1.3.1, the first major integrated SOA platform. Not just integrated, but also with two new key components: the Oracle Enterprise Service Bus (ESB) and Rules. Since OOW 2006, where we presented the first publicly available release, our customers have been using it heavily to build their next generation SOA - and according to many of them, they enjoy doing so :D

Also, we signed an OEM agreement with IDS Scheer to build the Oracle BPA Suite on top of its Business Process Architect. For the first time, business processes will make it all the way down to being executable - and all the way back, feeding real-world data into simulation and continuous refinement.

Another highlight was the overwhelming number of SOA-related sessions at Oracle Open World, and the great feedback we received. It was a blast: more than 40,000 people turned the city red - Oracle red - for one week. Elton John played during the big Oracle party, and so on.

Open SOA - www.osoa.org - went live, bringing together the big players in the SOA market space to drive next generation, easy-to-use standards.

So, after all that, what's in store for 2007?

While we all work hard on delivering 11g - our next generation SOA infrastructure, based on Service Component Architecture (SCA) and supporting Service Data Objects (SDO) - we expect a patchset for 10.1.3.1 (which will offer a ton of new features, and little fixes here and there).

BPEL 2.0 will see the light of day - after more than 2 years, and with the last public review finished, some more polishing needs to happen, and then we are ready to go. With it, BPEL will gain a number of long awaited features, such as the new variable layout ($inputVariable) and scoped partnerlinks.

Evangelism on SOA will continue to drive the spirit of SOA and help customers adopt it - I really look forward to even more presence around the globe, and to many customers going live on SOA Suite.

Personally, for me it's moving on with my SOA best practices series on OTN, supporting people around the globe and on the OTN forums, doing evangelism on SOA & helping create 11g. More than enough.

10G SQL Access Advisor and SQL Tuning Advisor

Vidya Bala - Thu, 2006-12-28 15:45
I was able to take advantage of the holidays to complete a 9i to 10g cross-platform migration. While I was quite happy with the migration process (especially with how Data Pump has made this effort much easier), I was a little disappointed by what the SQL Access and SQL Tuning Advisors had to offer.

There were a bunch of queries that we had been meaning to tune for quite some time, so I thought I could create a tuning task with the 10g SQL Tuning Advisor and see if I could get some valuable recommendations.
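
For anyone who wants to try the same thing, creating and running a tuning task looks roughly like this - the SQL_ID and task name below are placeholders:

declare
  l_task varchar2(30);
begin
  l_task := dbms_sqltune.create_tuning_task(
              sql_id     => '9babjv8yq8ru3',           -- placeholder SQL_ID
              scope      => dbms_sqltune.scope_comprehensive,
              time_limit => 600,
              task_name  => 'tune_lkup_query');        -- placeholder name
  dbms_sqltune.execute_tuning_task(task_name => 'tune_lkup_query');
end;
/
set long 100000
select dbms_sqltune.report_tuning_task('tune_lkup_query') from dual;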

The recommendations I got were far from anything of significance (e.g., add an index to a small lookup table).

I couldn't help but wonder whether there is much success/help in the industry using the Oracle 10g SQL Access/Tuning Advisors.
Categories: Development
