Feed aggregator

What is EclipseLink?

Omar Tazi - Thu, 2007-03-08 18:57
Hopefully by now most of you know that Oracle has been actively contributing resources and IP to the Eclipse community. Oracle has been an active member of the Eclipse community since its inception and a leading participant in both the Eclipse Web Tools Platform (WTP) and the Technology project. Oracle currently leads the JavaServer Faces tooling, Dali JPA tools, and BPEL tools projects. Before diving into the announcement, I would like to personally thank all the developers (they know who they are) who spontaneously stopped by the Oracle booth at EclipseCon'07 to tell me how they thought Oracle is doing a much better job of working with the OSS community and how much their perception of Oracle had changed.

So what’s new?

- First, Oracle is now a board member of the Eclipse Foundation.
- Second, Oracle steps up its involvement from simple membership to “Strategic Developer” status. Based on the size of our latest donation (see below), the level of involvement required for this project, and Oracle’s interest in the success of the Eclipse platform, we decided to upgrade our status.
- Third, Oracle is donating its award-winning Java persistence framework, Oracle TopLink, to the open source community. What’s the big deal? Wasn’t TopLink already donated to the JCP and Project GlassFish, as well as Spring 2.0? That was TopLink Essentials (TLE), not TopLink. I will post another blog entry soon explaining the difference between TLE and TopLink. Basically, Oracle TopLink, which has been around for 13 years, is hands down the industry's most advanced persistence product, with object-to-relational, object-to-XML, and Enterprise Information System data access through all of the major standards, including the Java Persistence API, the Java Architecture for XML Binding, Service Data Objects, and the Java Connector Architecture. TopLink supports most databases, application servers, and development tools.
- Last but not least, based on this major contribution (TopLink source code and test cases), Oracle proposed an Eclipse project to deliver a comprehensive persistence platform. The project’s name is Eclipse Persistence Platform (EclipseLink). EclipseLink will be led by Oracle.

Can you provide more details about EclipseLink? (from the EclipseLink FAQ)

EclipseLink will deliver a number of components which together will constitute a solid framework supporting a number of persistence standards. Here is a list of some planned components:
- EclipseLink-ORM will provide an extensible Object-Relational Mapping (ORM) framework with support for the Java Persistence API (JPA). It will provide persistence access through JPA as well as extended persistence capabilities configured through custom annotations and XML. These extended persistence features include powerful caching (including clustered support), usage of advanced database-specific capabilities, and many performance tuning and management options.
- EclipseLink-OXM will provide an extensible Object-XML Mapping (OXM) framework with support for the Java Architecture for XML Binding (JAXB). It will provide serialization services through JAXB along with extended functionality to support meet-in-the-middle mapping, advanced mappings, and critical performance optimizations.
- EclipseLink-SDO will provide a Service Data Objects (SDO) implementation as well as the ability to represent any Java object as an SDO and leverage all of its XML binding and change tracking capabilities.
- EclipseLink-DAS will provide an SDO Data Access Service (DAS) that brings together SDO and JPA.
- EclipseLink-DBWS will provide a web services capability for developers to easily and efficiently expose their underlying relational database (stored procedures, packages, tables, and ad-hoc SQL) as web services. The metadata-driven configuration will provide flexibility as well as allow default XML binding for simplicity.
- EclipseLink-XR will deliver key infrastructure for situations where XML is required from a relational database. The metadata-driven mapping capabilities of EclipseLink-ORM and EclipseLink-OXM are both leveraged for the greatest flexibility. Using this approach to XML-relational access enables greater transformation optimizations as well as the ability to leverage the Eclipse Persistence Platform’s shared caching functionality.
- EclipseLink-EIS will provide support for mapping Java POJOs onto non-relational data stores using the Java Connector Architecture (JCA) API.

Oracle's love story with Eclipse seems to be getting stronger, so is JDeveloper dead?
I keep getting this question over and over, so before anybody posts it in the comments I will address it. At Oracle we believe in "Productivity with Choice". Oracle remains fully committed to JDeveloper as the IDE of choice for Java and service-oriented architecture development. That said, we are also committed to helping our customers who, for whatever reason, choose Eclipse for their development. So the answer is crystal clear: JDeveloper is stronger than ever, and Oracle will continue to invest in making it better.

These Eclipse-related announcements are yet more proof that Oracle continues to invest significant effort in initiating, leading, and contributing technology and resources to the OSS community. Stay tuned for more on Oracle and OSS!

OEM 10gR3

Herod T - Thu, 2007-03-08 09:28

For those of you that remember, about a year ago we installed OEM 10gR2 and got it running, and I was hopeful. Well, about the only thing OEM was used for was downtime reporting by a manager. There was the occasional email from the system when something came down, but not much more. No new agents were installed on new servers, nothing was kept up to date; basically, OEM 10gR2 was a dismal failure.


Well, we have decided to upgrade to OEM 10gR3 and take another kick at the can, to see if we can get OEM configured the way it should be and used the way it should be. Our big push in this came from our Oracle technical sales representative. He came by (for free), spent two days learning our environment, and then presented some very compelling reasons for using OEM in a seven-hour presentation. Every single one of the reasons was expensive, but he got the managers convinced.

We will be upgrading (or reinstalling) to R3 by the end of March. Our sales rep gave us a 60-day free trial of all of the OEM packs on all of our servers to get me hooked, as well as five full days of the technical sales rep's time here helping configure it and showing us the proper way to use it. For the estimated $900K bill (before discount) to have the OEM packs on every database, the non-Oracle database servers monitored as well, and our SQL Server databases plugged in too, Oracle is willing to spend some time.

We are also looking at purchasing Oracle Fusion Middleware as our SOA solution, so later this year Oracle is going to make some money on us.

I am now off to a presentation where I am going to try my darndest to convince management that we really and truly need to upgrade our Oracle 7 and Oracle 8 production databases.



DST... Ready?

Herod T - Thu, 2007-03-08 09:13

Well,

We appear to be ready for the big bad early DST. Yesterday we rolled every single one of our test servers forward and waited for the OS to do the switch. No databases came down (yay!). Unfortunately, some of our vendor-supplied patches for our large JSP-based application seem to have failed badly, as the application simply refused to allow data to be entered: "PO create date can not be past PO update date", or something like that.

Now, for the databases where test and production are on the same server - well, that is going to be a "fingers crossed" type of fix. There will still be a large number of IT people in on the morning of the 11th for the old 'just in case'.



Insert into multiple tables from a single query

Herod T - Sat, 2007-03-03 21:52

A friend who does not blog wrote this up for his co-workers; it is straightforward but useful. Enjoy.

A few days ago someone asked if it was possible in an Oracle DB to insert into multiple different tables from a single query. I said "Yes, of course"; they asked "So, how?"; I of course said "RTFM". Well, here it is, a little easier to read than in "The Fine Manual".

This works all the way back to Oracle 9i (which introduced multitable inserts), so feel free to test it out. But, since it does drop objects, please do it in a test location. Personally, I recommend everybody download and install Oracle XE ( http://www.oracle.com/technology/software/products/database/xe/htdocs/102xewinsoft.html ) on your PC; it gives you a nice safe place to work, play, and learn, and as an added bonus Oracle XE comes with Application Express (APEX) already installed and ready to go. Now that I've said that: I don't support PCs, so who knows what it will change in your PC's configuration. Do so at your own risk.


First, simply create some test tables and a sequence for later use in this example.

SQL> CREATE TABLE BASETABLE (BASEID NUMBER PRIMARY KEY,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL> CREATE TABLE DEST1 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.01

SQL> CREATE TABLE DEST2 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL> CREATE TABLE DEST3 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.06

SQL> CREATE TABLE DEST4 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL>

SQL>

SQL> CREATE SEQUENCE DESTID_SEQ;

Sequence created.

Elapsed: 00:00:00.00

SQL>

Insert some data into the base table for use later

SQL> INSERT INTO BASETABLE SELECT ROWNUM*-1,DBMS_RANDOM.STRING('A',30) FROM DUAL CONNECT BY LEVEL <=500;

500 rows created.

Elapsed: 00:00:00.09

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.00

Now the actual insert. You can see the WHEN and ELSE clauses of the INSERT statement; you can have as many of those as you want, each inserting a different combination of columns in its VALUES clause. In this case, I am using a sequence to satisfy the primary key of each DESTx table, followed by the two column names from the SELECT clause at the end.

SQL>

SQL> INSERT ALL

2 WHEN BASEID=-1 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

3 WHEN BASEID=-10 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

4 WHEN BASEID IN (-100,-200,-300,-400) THEN INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

5 ELSE INTO DEST4 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

6 SELECT BASEID,BASEDATA FROM BASETABLE ORDER BY BASEID DESC;

500 rows created.

Elapsed: 00:00:00.01

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.00


Now to show what happened. From the following query you can see that the BASEID of -1 was inserted, and that its row was the very first record in the insert, as shown by the sequence value of 1 in DESTID.

The following data was inserted based on the

WHEN BASEID=-1 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

line in the insert statement.

SQL> SELECT * FROM DEST1;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

1 -1 uzvIPoJevGslWNzcsEULVsOIHrWtkA

Elapsed: 00:00:00.00



From the following query you can see that the BASEID of -10 was inserted, and that it was the 10th row in the select query's result set. This was inserted based on the

WHEN BASEID=-10 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

line in the insert statement.

SQL> SELECT * FROM DEST2;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

10 -10 AzRwrjLpzvxtacxBOitYhGDGDuKmaU

Elapsed: 00:00:00.01


From the following query you can see that the BASEIDs of -100, -200, -300 and -400 were inserted. These were inserted based on the

WHEN BASEID IN (-100,-200,-300,-400) THEN INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

line in the insert statement.


SQL> SELECT * FROM DEST3;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

100 -100 uJixIEqFTeZEBDOCPYkJgyipInuTdt

200 -200 ikmTNgdjGTjkINEGbxEFifWAetPBMt

300 -300 gKcFyianMOtGzdJzVlkjqaLPiwBkic

400 -400 prucyUxTqhPhUTzarsJRyFQYlOUlWz

Elapsed: 00:00:00.01

From the following query you can see that the remainder of the records in BASETABLE were inserted into the DEST4 table. If you look, you can see that the BASEIDs of -1, -10, -100 and -200 are missing. You will have to trust me that -300 and -400 are missing from the result set as well, but I didn't want this running too long.


SQL> SELECT * FROM DEST4 ORDER BY DESTID;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

2 -2 fPNMkRbJAEoeaWejzrAigZjKqZVzUl

3 -3 NDmRQNKmPhAnzfuWhLQDnWIcRVpjLF

4 -4 DoNnVEskItQAfANavQVHdJWdOeZbAc

5 -5 SNacUWsrPCPyLwDBxEtndSsiiSTmPW

6 -6 gLxiVlWXsdcLPhDgLThISCutKBfuOj

7 -7 sZCNlljiTveZPIUgyEBPalpJPrMdck

8 -8 UOwvqNxyPXcpsxRmjsxLQGfEsHQOqO

9 -9 WDwQqUnMHjDautMrYYBMCcjIoNWMKg

11 -11 BOfKwqtFZWQuLVEHFhMRHrfBGyeTfQ

<SNIP>

99 -99 VjmavGgzdQroTHutlhcOQjiqlTiLHW

101 -101 cjuHxrklWRaQmRJZyVShliswLRCgBm

<SNIP>

199 -199 xvaXYHPkexmFOkXCDBOODqjEatyMwY

201 -201 fXwQaaSTWAEDrYDqnRHVxLqcQEkbCZ

<SNIP>

500 -500 eLqsjEKEzWTmQUTsEtHFcRVEkEiQZz

494 rows selected.

Elapsed: 00:00:01.06
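
A closely related variant worth knowing about (a sketch only, not part of the run above): INSERT FIRST. With INSERT ALL, a row that satisfies several WHEN clauses is inserted into every matching table; with INSERT FIRST, each row goes into at most one table, the one belonging to the first WHEN clause that matches. With overlapping conditions like the ones below, that distinction matters. If you want to run this, do it against freshly created copies of the tables, as the unique constraints from the run above would otherwise complain.

INSERT FIRST
WHEN BASEID >= -10 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
WHEN BASEID >= -100 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
ELSE INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
SELECT BASEID,BASEDATA FROM BASETABLE;

Here rows -1 through -10 land only in DEST1, even though they also satisfy the second condition; INSERT ALL would have put them into both DEST1 and DEST2.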

Now simply the cleanup.

SQL> DROP SEQUENCE DESTID_SEQ;

Sequence dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE BASETABLE;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST1;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST2;

Table dropped.

Elapsed: 00:00:00.04

SQL> DROP TABLE DEST3;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST4;

Table dropped.

Elapsed: 00:00:00.01

SQL>

SQL> SPOOL OFF


New Look

Herod T - Sat, 2007-03-03 21:09

I decided to finally allow Google to move my blog to the new, now no longer beta, Blogger.

It looks good. New "spot"; I decided on a different look.

If you care, let me know if you have any issues with it.

Thanks.

ORA-00821 Specified value of sga_target is too small, needs to be at least

Neil Jarvis - Thu, 2007-03-01 05:42
Have you resized your SGA_TARGET too small and found you can't now start your database?

If you are using a PFILE, then just edit it and set SGA_TARGET to a larger value. But what if you're using an SPFILE? One possibility is to create a PFILE from the SPFILE, edit the PFILE, and then either start the database using the PFILE explicitly, or remove the SPFILE and start the database as normal so that the new PFILE is picked up.
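
A minimal sketch of that route (the PFILE path here is just an example; adjust it for your environment):

CREATE PFILE='/tmp/initPROD1.ora' FROM SPFILE;
-- edit /tmp/initPROD1.ora and raise SGA_TARGET, then:
STARTUP PFILE='/tmp/initPROD1.ora';
-- optionally, recreate the SPFILE from the corrected PFILE:
CREATE SPFILE FROM PFILE='/tmp/initPROD1.ora';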

The problem arises when the SPFILE is in ASM; creating the PFILE from it can be a problem. One solution is to create a PFILE which calls the SPFILE in ASM, but after the call to the SPFILE add an extra line which overrides SGA_TARGET, as follows:

SPFILE='+DATA1/PROD1/spfilePROD1.ora'
SGA_TARGET=1024M

This PFILE can be placed in $OH/dbs; thus, the next time you start the database, this PFILE will be picked up. Alternatively, you could explicitly use the 'pfile=' parameter when starting the database, thus:

Startup pfile=$OH/dbs/initPROD1.ora

Using Oracle SQLDeveloper to access SQLServer

Dong Jiang - Tue, 2007-02-27 05:53

It is a pretty cool feature to use Oracle's SQLDeveloper 1.1 to access SQL Server.
The steps are:

  • Download jTDS (the open-source SQL Server JDBC driver) from here. Unzip it and extract jtds-1.2.jar, or whatever the latest version is.
  • Start Oracle's SQLDeveloper and go to Tools->Preferences->Database->Third Party JDBC Drivers. Click "Add Entry" and point to jtds-1.2.jar.
  • Create a new connection, choose the SQLServer tab, and type in the hostname, port, username and password. It appears that the initial connection name has to be the same as the database; then you can click the "Retrieve database" button. Once you have found the database, you can rename the connection.

Try it out.
Of course, certain things, like explain plan and autotrace, don't work.

Per comments below, please make sure jtds 1.2 is used. Apparently, 1.3 does not work.



Fun With Tom Kyte

Robert Vollman - Mon, 2007-02-26 13:54
As devoted readers may have noticed, my new job doesn't involve nearly as much work with Oracle. I stay sharp by reading Ask Tom, the very site that has provided me with 90% of the answers that I can't find in Oracle documentation or figure out on my own. Those of you who may find it nerdly to spend lunch hours reading Oracle Q&A are actually really missing out. It's far more entertaining than…

A Strange Production Problem!!!

Vidya Bala - Mon, 2007-02-26 12:11

I suddenly got a call that the front-end applications had frozen (those are the worst calls…). I logged on to the database server and was unable to log in to the database; at the same time, I got a call that the…

Network Appliance filer experienced a kernel panic or a low-level system-related lockup. The device then rebooted itself to correct the problem and proceeded normally through the startup process.

The database was a two-node RAC cluster, with both nodes accessing the NetApp device via NFS mount points. After the NetApp rebooted itself:

Node A looked fine: ORACM was up on the server, and I could log in to the database from Node A.
Node B: ORACM was down, and the instance on Node B was down.

Net result: the application was still unable to connect to either of the nodes using TAF.

Since the applications were down anyway, the decision was made to restart the Cluster Manager on both nodes and start both instances. This resumed operations fairly quickly (not too much time was spent on roll-forward and rollback operations, as we did not have any long-running transactions at the time of the abort).

An SR has been opened to discuss whether the above was the expected behavior.

With RAC I would have expected the following to happen:


Each Oracle instance registers with the local Cluster Manager. The Cluster Manager monitors the status of local Oracle instances and propagates this information to Cluster Managers on other nodes. If the Oracle instance fails on one of the nodes, the following events occur:
1. The Cluster Manager on the node with the failed Oracle instance informs the Watchdog daemon about the failure.
2. The Watchdog daemon requests the Watchdog timer to reset the failed node.
3. The Watchdog timer resets the node.
4. The Cluster Managers on the surviving nodes inform their local Oracle instances that the failed node is removed from the cluster.
5. Oracle instances in the surviving nodes start the Oracle9i Real Application Clusters reconfiguration procedure.

The nodes must reset if an Oracle instance fails. This ensures that:
- No physical I/O requests to the shared disks from the failed node occur after the Oracle instance fails.
- Surviving nodes can start the cluster reconfiguration procedure without corrupting the data on the shared disk.

In 9i, cluster reconfiguration is supposed to be fast, remastering resources only if necessary, and processes on Node A should be able to resume active work during reconfiguration, as their locks and resources need not be moved.

However, this was not the behavior we saw when one node totally crashed in our case. While RAC is great at helping you load-balance your requests, does it really help with disaster recovery?

Categories: Development

10g not available on all flavors of Vista

Dong Jiang - Fri, 2007-02-23 08:21

According to this "Statement of Direction", the current plan calls for 32-bit 10gR2 to be available only for the Vista Business, Ultimate, and Enterprise editions.
I guess Microsoft has put Oracle in a hard position by bringing out ridiculously many flavors. But Vista Home (Basic and Premium) may not be able to install 10g at all.
I am wondering about XE. I haven't tried it myself, but some have claimed to have installed XE on Vista Home.
PS: In response to APC's comment, I tried XE on Vista Home Basic and it works.


QEDWiki - introduction video

Rakesh Saha - Fri, 2007-02-23 01:59

Informatica: step by step to create a simple workflow and run it successfully

Vidya Bala - Thu, 2007-02-22 07:37
As a follow-up to my previous post, this post will cover:

a) How to create Repository user accounts and manage security in Informatica

b) How to create a mapping, session, and workflow, and successfully execute the workflow

How to create Repository user accounts and manage security in Informatica

1) Log in to the Repository Server Admin console.

2) Connect to the Repository Server.

3) Right-click and create a new repository.

4) Give the following:
repository name
db connect string
db username : pcenter1
db password : pcenter1
license key information

When you click Apply, the repository content will get created.

5) Once the repository is created, log in to Repository Manager.
You can log in to the repository either as:
a) the repository username/password provided in the above step, or
b) Administrator.
Go to Security > Manage Users and Privileges.

By default, two groups are created, "Administrator" and "Public", and two users are created, "Administrator" and "Repository User".
The Privileges tab lists all the privileges that are available. This security window can be used to manage security and privileges; refer to the help guide for further information on security and privileges.

Create a mapping, session, and workflow, and successfully execute the workflow

I will use the HR schema to demonstrate how you can create a mapping, session, and workflow. The HR schema has the following tables:

COUNTRIES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
LOCATIONS
REGIONS
COUNTRY_REGION is a table that has country_name and region_name.

To populate the COUNTRY_REGION table, you join the COUNTRIES table and the REGIONS table.
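
In plain SQL terms, the load we are about to build is equivalent to something like the following sketch (column names are from the standard HR schema):

SELECT c.country_name, r.region_name
FROM countries c
JOIN regions r ON r.region_id = c.region_id;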
To create a mapping:

Open Repository Manager, connect to the repository, and create a new folder within the repository using Repository Manager.
Connect to Designer.

Open the folder and you should see Sources, Targets, Cubes, Dimensions, etc.
From the Sources menu, import the source database objects you need: in this case you will import COUNTRIES and REGIONS.
Open Warehouse Designer and import the target: import COUNTRY_REGION from the Targets menu.
The Sources and Targets menus should be as above.

Open Mapping Designer.
Drop the sources into the Mapping Designer, and drop in the target as well. Include a Joiner transformation to join the COUNTRIES and REGIONS tables appropriately.
Name the mapping COUNTRY_REGION_MAPPING. While saving the mapping, make sure parsing completes with no errors; errors will be reported in the Output window of the Designer.
Once you have saved the mapping, you can open Workflow Manager to create a session and a workflow.
Tasks > Create creates a new session or task;
select the COUNTRY_REGION mapping and save the repository.
Click on Connections > Relational to create two new connections for your source and target databases.
Once the connections are created, click on the task and you should see the properties window open up;
click on the Mappings tab and verify the connections are set appropriately.
When you are ready to create the workflow, open Workflow Designer and drag and drop the mapping.
Name the workflow COUNTRY_REGION_WORKFLOW.
Save the repository and verify in the Output window that the workflow is valid.

Before you start running the workflow, make sure to register the PowerCenter Server.

Open Workflow Manager > Server > Server Registration; give all the PowerCenter Server registration properties and define your PMRootDir.

Click OK, then right-click the server and assign the workflow you want to run using the server. Once you have assigned the workflow to the server, you can start the workflow: right-click the workflow and click Start.


Workflow Monitor should start, indicating the status of the run.
Right-click the workflow and task and you should be able to view the workflow log and session log. From Workflow Manager, workflows can also be scheduled.
If you run into any issues running a workflow, feel free to post comments. The next two posts will cover a) versioning and b) debugging using Informatica.

Categories: Development

Informatica Eval Install

Vidya Bala - Mon, 2007-02-19 21:31
I checked out StatCounter on my blog after a long time to see what the response was. While I can see that my blog hit count is increasing slowly, the most frequently searched keyword that led to my blog was "Informatica eval install". In one of my previous posts I mentioned that I would follow up with install instructions for an Informatica eval install. So here it goes, finally.
Step 1:
Download the Informatica eval install software and license keys.
This was not an easy search for me, but I finally managed to find where the third-party software is. Below are the part numbers you need to download. Log on to edelivery.oracle.com.

Download B27745-01, Parts 1 through 4 (Siebel Business Applications). Extract the zip files and you should find the eval software for Informatica. The Informatica eval license keys can be found in the B27757-01 and B27756-01 documentation.
Once you have downloaded and extracted the zip files, run setup.exe and make sure to install the server components and the client tools.

As discussed in my previous post, Informatica has four components: the client tools, the Repository Server, the Informatica Server, and the repository.

At the end of the install you will have to configure the Repository Server and the Informatica Server.

Configure the Repository Server as below. The following information needs to be provided:
Server Port Number: 5001 (the default can be chosen).
Admin Password: enter an admin password for the Repository Server.
Minimum Port Number: The minimum port number the Repository Server can assign to the Repository Agent process. Default is 5002.
Maximum Port Number: The maximum port number the Repository Server can assign to the Repository Agent process. Default is 65535.
Configuration Directory: The name of the directory in which the Repository Server stores repository configuration information files. You can specify either a relative path or an absolute path. Default is Config.
Backup Directory: The name of the directory in which the Repository Server stores repository backup files. You can specify either a relative path or an absolute path. Default is Backup.
Plugin Directory: The name of the directory in which the Repository Server stores repository plugin files. You can specify either a relative path or an absolute path. Default is Plugin.
Severity Level: The level of error messages written to the Repository Server log. Specify one of the following:
Error. Writes ERROR code messages to the log.
Warning. Writes WARNING and ERROR code messages to the log.
Information. Writes INFO, WARNING, and ERROR code messages to the log.
Tracing. Writes TRACE, INFO, WARNING, and ERROR code messages to the log.
Debug. Writes DEBUG, TRACE, INFO, WARNING, and ERROR code messages to the log.
Output to Event Log: Enable this option if you want to write Repository Server messages to the Windows Event Log. This option is enabled by default.
Output to File: Enable this option if you want to write Repository Server log messages to a file. When you enable this option, enter a file name for the Repository Server log. This option is disabled by default. The default Repository Server log file name is pmrepserver.log. The Repository Server stores the Repository Server log file in the Repository Server installation directory.


Configure the Informatica Server (PowerCenter Server) as a service:
The information that needs to be provided is as below.
Server Tab:
Server Name: The name of the PowerCenter Server to register with the repository. This name must be unique to the repository. This name must also match the name you specify when you use the Workflow Manager to register the PowerCenter Server.
TCP/IP Host Address: The TCP/IP host address as an IP number (such as 123.456.789.1), or a local host name (such as RECDB), or a fully qualified name (such as RECDB.INVOICE.COM). If you leave this field blank, the PowerCenter Server uses the default local host address.
Max No. of Concurrent Sessions: The maximum number of sessions the PowerCenter Server runs at a time. Increase this value only if you have sufficient shared memory. Default is 10.
Shared Memory: The amount of shared memory available for use by the PowerCenter Server Load Manager process. For every 10 sessions in Max Sessions, you need at least 2,000,000 bytes reserved in Load Manager Shared Memory. Default is 2,000,000 bytes.
Error Severity Level for Log Files: The level of error messages written to the PowerCenter Server log. Specify one of the following message levels:
Error. Writes ERROR code messages to the log.
Warning. Writes WARNING and ERROR code messages to the log.
Information. Writes INFO, WARNING, and ERROR code messages to the log.
Tracing. Writes TRACE, INFO, WARNING, and ERROR code messages to the log.
Debug. Writes DEBUG, TRACE, INFO, WARNING, and ERROR code messages to the log.
Fail Session if Maximum Number of Concurrent Sessions Reached: Enable this option if you want the PowerCenter Server to fail the session if the number of sessions already running is equal to the value configured for Maximum Number of Concurrent Sessions. If you disable this option, the PowerCenter Server places the session in a ready queue until a session slot becomes available. This option is disabled by default.
Allow mapping/session debugging: If selected, you can run the Debugger. This option is enabled by default.
Time Stamp Workflow Log: Enable this option if you want to append a time stamp to messages written to the workflow log. This option is disabled by default.
Output to Event Log: Enable this option if you want to write PowerCenter Server messages to the Windows Event Log. This option is enabled by default.
Output to File: Enable this option if you want to write PowerCenter Server log messages to a file. When you enable this option, enter a file name for the PowerCenter Server log.


Repository Tab:
Repository Name :
The name of the repository to connect to. You create a repository in the Repository Server Administration Console.
Repository User: The account used to access the repository. When you first create a repository, the Repository User is the database username. You create other Repository Users in the Repository Manager.
Repository Password : The password for the Repository User. When you first create a repository, the password is the password for the database user.
Repository Server Host Name: The name of the machine hosting the Repository Server.
Repository Server Port Number: The port number the Repository Server uses to communicate with repository client applications.
Repository Server Timeout: The maximum number of seconds that the PowerCenter Server tries to establish a connection to the Repository Server. If the PowerCenter Server is unable to connect to the Repository Server in the time specified, the PowerCenter Server shuts down. Default is 60 seconds.

Licenses Tab:
Enter the license keys and then click Update.

Compatibility and Database Tab:
PMServer 3.X aggregate compatibility: If selected, the PowerCenter Server handles Aggregator transformations as it did in PowerMart 3.x. This overrides both Aggregate treat nulls as zero and Aggregate treat rows as insert.
If you select this option, the PowerCenter Server treats nulls as zeros in aggregate calculations and performs aggregate calculations before flagging records for insert, update, delete, or reject in Update Strategy expressions. If you do not select this option, the PowerCenter Server treats nulls as nulls and performs aggregate calculations based on the Update Strategy transformation.
PMServer 6.X Joiner source order compatibility: If selected, the PowerCenter Server processes master and detail pipelines sequentially as it did in versions prior to 7.0. The PowerCenter Server processes all data from the master pipeline before starting to process the detail pipeline. Also, if you enable this option, you cannot specify the Transaction level transformation scope for Joiner transformations. If you do not select this option, the PowerCenter Server processes the master and detail pipelines concurrently.
Aggregate Treat Nulls as Zero: If selected, the PowerCenter Server treats nulls as zero in Aggregator transformations. If you do not select this option, the PowerCenter Server treats nulls as nulls in aggregate calculations.
Aggregate Treat Rows as Insert : If selected, the PowerCenter Server performs aggregate calculations before flagging records for insert, update, delete, or reject in Update Strategy expressions. If you do not select this option, the PowerCenter Server performs aggregate calculations based on the Update Strategy transformation.
PMServer 4.0 date handling compatibility: If selected, the PowerCenter Server handles dates as in PowerCenter 1.0/PowerMart 4.0. Date handling significantly improved in PowerCenter 1.5 and PowerMart 4.5. If you need to revert to PowerCenter 1.0 or PowerMart 4.0 behavior, you can configure the PowerCenter Server to handle dates as in PowerCenter 1.0 and PowerMart 4.0.
Treat CHAR as CHAR on Read: If you have PowerCenter Connect for PeopleSoft, you can use this option for PeopleSoft sources on Oracle. You cannot, however, use it for PeopleSoft lookup tables on Oracle or PeopleSoft sources on Microsoft SQL Server.
Max LKP/SP DB Connections: Allows you to specify a maximum number of connections to a lookup or stored procedure database when you start a workflow. If the number of connections needed exceeds this value, session threads must share connections. This can result in a performance loss. If you do not specify a value, the PowerCenter Server allows an unlimited number of connections to the lookup or stored procedure database.
If the PowerCenter Server allows an unlimited number of connections, but the database user does not have permission for the number of connections required by the session, the session fails. A default value is not specified.
Max Sybase Connections: Allows you to specify a maximum number of connections to a Sybase database when you start a session. If the number of connections required by the session is greater than this value, the session fails. Default is 100.
Max MSSQL Connections: Allows you to specify a maximum number of connections to a Microsoft SQL Server database when you start a workflow. If the number of connections required by the workflow is greater than this value, the workflow fails. Default is 100.
Number of Deadlock Retries: Allows you to specify the number of times the PowerCenter Server retries a target write on a database deadlock. Default is 10.
Deadlock Sleep Before Retry (seconds): Allows you to specify the number of seconds before the PowerCenter Server retries a target write on database deadlock. Default is 0 and the PowerCenter Server retries the target write immediately.

Configuration Tab
Data Movement Mode: Choose ASCII or Unicode. The default data movement mode is ASCII, which passes 7-bit ASCII character data. To pass 8-bit ASCII and multibyte character data from sources to targets, use Unicode mode.
Validate Data Code Pages: If you enable this option, the PowerCenter Server enforces data code page compatibility. If you disable this option, the PowerCenter Server lifts restrictions for source and target data code page selection, stored procedure and lookup database code page selection, and session sort order selection. This option is only available when the PowerCenter Server runs in Unicode data movement mode. By default, this option is enabled.
Output Session Log In UTF8: If you enable this option, the PowerCenter Server writes to the session log using the UTF-8 character set. If you disable this option, the PowerCenter Server writes to the session log using the PowerCenter Server code page. This option is available when the PowerCenter Server runs in Unicode data movement mode. By default, this option is disabled.
Warn About Duplicate XML Rows: If you enable this option, the PowerCenter Server writes duplicate row warnings and duplicate rows for XML targets to the session log. By default, this option is enabled.
Create Indicator Files for Target Flat File Output: If you enable this option, the PowerCenter Server creates indicator files when you run a session with a flat file target.
Output Metadata for Flat File Target: If you enable this option, the PowerCenter Server writes column headers to flat file targets. It writes the target definition port names to the flat file target in the first line, starting with the # symbol. By default, this option is disabled.
Treat Database Partitioning As Pass Through: If you enable this option, the PowerCenter Server uses pass-through partitioning for non-DB2 targets when the partition type is Database Partitioning. Enable this option if you specify Database Partitioning for a non-DB2 target. Otherwise, the PowerCenter Server fails the session.
Export Session Log Lib Name: If you want the PowerCenter Server to write session log messages to an external library, enter the name of the library file.
Treat Null In Comparison Operators As: Determines how the PowerCenter Server evaluates null values in comparison operations. Enable one of the following options:
a)Null. The PowerCenter Server evaluates null values as null in comparison expressions. If either operand is null, the result is null. This is the default behavior.
b)High. The PowerCenter Server evaluates null values as greater than non-null values in comparison expressions. If both operands are null, the PowerCenter Server evaluates them as equal. When you choose High, comparison expressions never result in null.
c)Low. The PowerCenter Server evaluates null values as less than non-null values in comparison expressions. If both operands are null, the PowerCenter Server treats them as equal. When you choose Low, comparison expressions never result in null.
WriterWaitTimeOut: In target-based commit mode, the amount of time in seconds the writer remains idle before it issues a commit when the following conditions are true:
a)The PowerCenter Server has written data to the target.
b)The PowerCenter Server has not issued a commit.
The PowerCenter Server may commit to the target before or after the configured commit interval. Default is 60 seconds. If you configure the timeout to be 0 or a negative number, the PowerCenter Server defaults to 60 seconds.
Microsoft Exchange Profile: Microsoft Exchange profile used by the Service Start Account to send post-session email. The Service Start Account must be set up as a Domain account to use this feature.
Date Display Format: If specified, the PowerCenter Server validates the date display format and uses it in session log and server log entries. If the date display format is invalid, the PowerCenter Server uses the default date display format. The default date display format is DY MON DD HH24:MI:SS YYYY. When you specify a date display format, it displays in the test window. An invalid date display format is marked invalid.
Test Formatted Date: Read-only field that displays the current date using the format selected in the Date Display Format field.

JVM Options Tab:
You can configure JVM options if you run Java-based programs with PowerCenter Connect products, such as PowerCenter Connect for JMS or PowerCenter Connect for webMethods.

HTTP Proxy Tab:

Server Name: Name of the HTTP proxy server.
Server Port: Port number of the HTTP proxy server.
Username: Authenticated user name for the HTTP proxy server. This is required if the proxy server requires authentication.
Password: Password for the authenticated user. This is required if the proxy server requires authentication.
Domain: Domain for authentication.

Once you have completed configuring the Repository Server and the PowerCenter Server, log in to the Repository Server Admin Console and create a new repository.

Once the repository is created (to create the repository, a repository schema needs to be created in the database server), the repository owner information can be used to log in to Designer, Workflow Manager, and Workflow Monitor.

The next few posts will cover the following:
a) How to create Repository user accounts and manage security in Informatica
b) How to create a mapping, session, and workflow, and successfully execute the workflow
c) Version control in Informatica
d) How to debug mappings in Informatica

Categories: Development

Where should data be validated?

Stephen Booth - Fri, 2007-02-16 05:48
This just came up on the mailing list for my local Linux Users Group following last night's meeting (which I didn't attend). The original mail and my response are below:

> In the pub, there was an interesting conversation going on regarding
> validation of data in databases.
>
> Excuse the omissions, as I said, it was overheard

Someone brought up the point that in databaseX if, say, you have a…

Consolidation is the key driver for Migration from Microsoft Access to Oracle Application Express

Donal Daly - Thu, 2007-02-15 14:01
I have just been reviewing the survey results and feedback received from early adopters of the Application Migration Workshop, a new feature of Oracle Application Express that assists with the migration of Microsoft Access forms and reports to Oracle Application Express.

Looking at the data, consolidation is the key reason. In fact, it was twice as popular as the next most given reasons, company direction and performance.

So what does consolidation really mean in this context? Is it 1-to-1, i.e. one Access database to a unique schema in Oracle, or is it many-to-1, removing duplication and increasing shared access to a "single source of truth"? I think it is a mixture of both.

In talking with customers over the last year or so, some have told me that they have thousands of Access applications and refer to it as their "Access problem". Clearly it would be a stretch to think that they have thousands of unique applications, so there must be a lot of duplication and many redundant Access databases.

So how do you define an appropriate Access migration strategy? Well, I believe that firstly you need more information about your Access databases. This is really an assessment phase. In doing an assessment of a large number of Access databases, I would split it into two parts:
  1. Technical Assessment
  2. Business Assessment
Technical Assessment

In the technical assessment phase, you are trying to build up some key data about these Access databases. I refer to it as a technical assessment because hopefully you could have a utility to collect this information. Data to collect would include:
  • Name, Owner, Size
  • Date created, Date last modified
  • Version
  • # of Tables, Forms, Reports, Queries, Pages, Modules
  • Links
    • links to other mdb files
    • linked data e.g. link tables via ODBC to Oracle, SQL Server, ...
  • Location (server)
  • ...
Business Assessment (Business Level Discovery)

This requires human involvement and is critical for the subsequent analysis phase. Data points here would include:

  • Business Owner (by department, group)
  • Description (purpose)
  • Application Type (data entry, reporting, ...)
  • Priority (Business Critical to Redundant)
  • Action (migrate, ignore, duplicate, remove, ...)
  • # of users
I can see this being the basis for a nice Oracle Application Express application that would support the entry of the business-level data and the generation of useful reports and charts. This would then facilitate the next level of analysis and allow you to ask some interesting questions:
  • What is my spread of versions of Microsoft Access?
  • How many copies of project.mdb or tasklist.mdb do I have? :-)
  • What's my largest / average Access database size?
  • Which Access databases are most heavily used?
  • ...
Now I think that would be a very useful data collection utility and Oracle APEX application.
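
As a rough sketch of the kind of table such a collection application might sit on (the table and column names here are purely illustrative, not from any actual Oracle utility):

CREATE TABLE access_db_inventory (
  db_name        VARCHAR2(255),  -- name of the .mdb file
  owner          VARCHAR2(100),
  size_mb        NUMBER,
  access_version VARCHAR2(20),
  num_tables     NUMBER,
  num_forms      NUMBER,
  num_reports    NUMBER,
  business_owner VARCHAR2(100),  -- department or group
  priority       VARCHAR2(30),   -- 'Business Critical' through 'Redundant'
  action         VARCHAR2(30),   -- migrate, ignore, duplicate, remove, ...
  num_users      NUMBER
);

-- e.g. "How many copies of project.mdb do I have?"
SELECT db_name, COUNT(*)
FROM access_db_inventory
GROUP BY db_name
ORDER BY COUNT(*) DESC;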

Siebel Analytics Answers and Dashboards (a quick getting started guide based on HR schema)

Vidya Bala - Thu, 2007-02-15 05:04

The last post on Siebel Analytics discussed how to build the physical, business, and presentation layers using Siebel Analytics Administration.
This post will focus on using Answers and Dashboards.
Make sure Siebel Analytics Web is up.
For the purposes of this post, I demonstrate how you can build reports using "Answers" against the HR schema.

Physical Layer - Make sure to define the object relationships appropriately in the physical layer
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOB_HISTORY
JOBS
LOCATIONS
REGIONS

Business Layer - Make sure to define the object relationships appropriately in the business layer
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOB_HISTORY
JOBS
LOCATIONS
REGIONS

Dimensions:
Department Dimension (3 level Dimension)
-Department
-Region
-Country
Job Dimension
-Job Detail

Presentation Layer defined as follows:
Employee
First Name
Last Name
Email
Phone Number
Hire Date
Department
Department Name
City
Region Name
Country Name
Job
Job Title


Once the presentation layer is defined, you are ready to move on to Answers to build your Siebel Analytics reports.


If you have all the Siebel client tools on your desktop, Siebel Analytics Web will take you to the Analytics web page.


In the default installation, the default username is Administrator and there is no password.
Once you have logged in, you will see the following tabs:
Dashboards, Answers, Advanced Reports, Marketing, Delivers, Disconnected, Admin, My Account

Step 1:
Before proceeding to Answers to create reports, let us first create a shared folder named "hr" where we can save our reports. Click on Answers, and on the left-hand side you should see My Folder > Shared Folder > Manage Catalog.

Click on Manage Catalog > Shared Folder and you should be able to create a new folder.
Make sure to Refresh Display to verify that the changes have taken effect.

Step 2:
Now you are ready to create reports using Answers.
Click on Answers; Answers is basically your web interface for building reports.
On the right-hand side you should see Subject Areas.
Click on HRDetails.
You will now see in the left-hand sidebar all the data items that were defined in the presentation layer, available in Answers:
Employee
First Name
Last Name
Email
Phone Number
Hire Date
Department
Department Name
City
Region Name
Country Name
Job
Job Title

Example 1 - Report 1
To build a report with
First Name, Last Name, Department Name, Country Name, Region Name, City,
click on the attributes so that they appear under Columns.

Click on Results and you should see the results;
save your report to the "hr" shared folder.

The above report was HR Employee Demographic Details (a list of employees by name, department, city, and country).


Example 2 - Report 2

Pie chart to give an overview of the number of employees by department. Click on Create New Request.

Edit the column formula to display first name and last name as a single column called "Employee Name".

The next step would be to get a count of employees by department: on the Criteria tab, add Department Name and modify the "Employee Name" column formula to count(Employee."First Name" || ' ' || Employee."Last Name").
The results would be as above. Now to convert this to a pie chart (make sure to start the Siebel Analytics Java Host),
click on the pie chart.
Save the request as "Employees by Department".
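
For reference, what this request computes is essentially the following (a sketch in plain SQL against the underlying HR tables):

SELECT d.department_name, COUNT(e.employee_id) AS num_employees
FROM employees e
JOIN departments d ON d.department_id = e.department_id
GROUP BY d.department_name;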

Example 3: We now have two reports,

"Employees by Department" and
"HR Employee Demographic Details".

Let's now add the above two reports to a dashboard.

Click on the Admin Tab

Click on Manage Intelligence Dashboards.

Create a dashboard. Name the dashboard "HR Details";
you should see the "HR Details" dashboard appear near "My Dashboard".
Click to add content, and you should be able to add the two reports we created in the "hr" shared folder.
The output would be as below.

The layout can be changed by editing the properties of sections in the dashboard:

Page Options > Edit Dashboard.
Click on the Properties tab of a section if you want to edit the layout of sections. In the view below, we have arranged the reports horizontally.
Example 4: Drill down from the Pie Chart to the Report Table in the HR Details Dashboard.

a) To navigate from the dashboard to Answers easily, include the Modify link in the dashboard. Page Options > Edit Dashboard;
for each section select Properties > Report Links > Modify. Save and go back to the dashboard, and now you should see the Modify links.
b) To set the navigate link on the pie chart, modify the pie chart > Results > edit view of the pie chart.

Select Additional Charting Options.
On the Interaction tab, select Navigate, and give the navigation page: "HR Details" dashboard, Page 1.
c) Add an "is prompted" filter to the table report.

From the dashboard, click on the Modify link of the table. This will take you to the Criteria tab in Answers. Add a filter, set Department Name to "is prompted", and save the filter.
d) Create a dashboard prompt: click on Answers;
you should see an icon in the left menu to create a new dashboard prompt.
Set scope = Dashboard, and select the column name on which you want to set the prompt.
Save the dashboard prompt.
Now go to your Dashboard > Edit Page Options, drop the dashboard prompt in the section you would like it to appear in, and you should now see the prompt on your dashboard.

You can drill down by Department Name from the dashboard prompt drop-down.
e) Column sorting on Employee Demographic Details:
assume you also wanted the ability to sort by columns on the table report. Click on the Modify link on the table report, go to the Results tab in Answers, click on the edit view of the table, and then edit the table properties.

Check "enable column sorting on Dashboard".

At this point you have a fully functional dashboard with drill-down ability, both from the dashboard prompt and from the pie chart to the table.

The next posts will discuss:
1) XML Publisher and Answers/Dashboards

2) Admin options with Answers (managing security, iBots, etc.)

Categories: Development

Productivity with choice

Omar Tazi - Wed, 2007-02-14 16:39
This article, published in the March/April 2007 issue of Oracle Magazine, explains well our tooling strategy and why Oracle is committed to both JDeveloper and Eclipse to increase our customers' productivity no matter what development platform they end up using. Thanks to Rich Schwerin for putting it together:
http://www.oracle.com/technology/oramag/oracle/07-mar/o27opensource.html

Wikipedia and Oracle Data Mining

Marcos Campos - Mon, 2007-02-12 15:44
Wikipedia has a nice page on Oracle Data Mining (link). It provides a good overview of the features and history of the product. Here is a snippet of the text: "Oracle Data Mining (ODM) is a software product distributed as an option to Oracle Corporation's Relational Database Management System (RDBMS) Enterprise Edition (EE). This product supports a collection of data mining and data analysis…"
Categories: BI & Warehousing

New Oracle Statistical Functions Page

Marcos Campos - Mon, 2007-02-12 14:55
OTN has a new page (link) describing the statistical functions in the Oracle 10g Database. These functions are available in all versions of the database at no extra cost. Features include:
- Descriptive statistics
- Hypothesis testing
- Correlation analysis (parametric and nonparametric)
- Ranking functions
- Cross tabulations with chi-square statistics
- Linear regression
- ANOVA
- Test distribution fit
- Window…
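
To give a flavor, here is a minimal sketch; it assumes the EMPLOYEES table from the HR sample schema, but any table with numeric columns will do. The functions are plain SQL aggregates:

SELECT CORR(salary, commission_pct) AS corr_sal_comm,
       MEDIAN(salary) AS median_salary,
       STATS_MODE(department_id) AS most_common_dept
FROM employees;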
Categories: BI & Warehousing
