Feed aggregator

Improving data move on EXADATA IV

Mathias Magnusson - Wed, 2013-06-05 07:00

Reducing storage requirements

In the last post in this series I talked about how we sped up the move of data from operational to historical tables from around 16 hours down to just seconds. You find that post here.

The last area of concern was the amount of storage this took and would take in the future. As it was currently taking 1.5 TB it would be a fairly large chunk of the available storage and that raised concerns for capacity planning and for availability of space on the EXADATA for other systems we had plans to move there.

We set out to estimate the maximum disk utilisation this data would reach and to see what we could do to minimise the needed disk space. There were two considerations: minimise disk utilisation while not making query times any worse. Both were of course to be achieved without adding a large load to the system, especially not during business hours.

The first attempt was to simply compress one of the tables with traditional table compression. After running the test across the set of tables we worked with, we saw a compression ratio of 57%. Not bad, not bad at all. However, this was going to run on an EXADATA. One of the technologies that is EXADATA only (to be more technically correct, only available with Oracle branded storage) is HCC, Hybrid Columnar Compression. I will not explain here how it differs from normal compression, but as the name indicates it compresses data by column rather than by row as traditional compression does. In theory this can achieve even better results, and the EXADATA marketing says it is part of the magic sauce. Time to take it out for a spin.

After setting it up for our tables, with exactly the same content as in the normal compression test, we got a compression rate of 90%. That is, HCC reduced the needed storage by 90%. I tested the different compression options (query high and low as well as archive high and low) and ended up choosing query high. My reasoning was that the improvement of query high over query low was large enough to be well worth the extra processing power. Query high and archive low gave me identical results: the compression took the same time, produced the same size dataset, and queries ran in the same time. I could not tell them apart in any way. Archive high, however, is a different beast. It took about four times the processing power to compress, and querying took longer and used more resources too. As this is a dataset I expect the users to query more and more once they see it can be done in a matter of seconds, my choice was easy: query high was clearly the best for us.
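If you want to run a similar comparison yourself, a minimal sketch of the approach is to build direct path copies of the data with each HCC setting and compare the resulting segment sizes. The table names below are made up for illustration, they are not the actual tables from this system.

-- Illustrative object names only. In practice you would copy just a
-- representative subset of the data rather than the whole table.
CREATE TABLE log_test_qlow  COMPRESS FOR QUERY LOW    AS SELECT * FROM log_entries;
CREATE TABLE log_test_qhigh COMPRESS FOR QUERY HIGH   AS SELECT * FROM log_entries;
CREATE TABLE log_test_alow  COMPRESS FOR ARCHIVE LOW  AS SELECT * FROM log_entries;
CREATE TABLE log_test_ahigh COMPRESS FOR ARCHIVE HIGH AS SELECT * FROM log_entries;

-- Compare the resulting segment sizes.
SELECT segment_name, ROUND(bytes / 1024 / 1024) AS mb
FROM   user_segments
WHERE  segment_name LIKE 'LOG_TEST%'
ORDER  BY bytes;

Since CTAS is a direct path operation, the compression setting actually takes effect, so the sizes you see are representative of what a nightly direct path rewrite would give you.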

How do we implement it then? Setting a table to compress for query high and then running normal inserts against it does not achieve much. There are some savings, but they are marginal compared to what can be achieved. For HCC to kick in, we need direct path writes to occur. As this data is written once and never updated, we can compress everything once the processing day is over. So we set up a job to run thirty minutes past midnight that compresses the previous day's partition. This is just one extra line in the job that moves the partitions, described in the last post in this series.
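As a rough sketch of what that extra step can look like, the nightly job boils down to a partition move that rewrites yesterday's partition with direct path and applies HCC. The table and partition names here are invented for illustration, not the actual ones from this system.

-- Illustrative names only: rewrite yesterday's partition with direct path
-- so that query high compression is actually applied to the rows.
ALTER TABLE log_history
  MOVE PARTITION p_20130604
  COMPRESS FOR QUERY HIGH;
-- No index rebuild is needed here, since the historic tables in this
-- series are kept without indexes.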

The compression of one very active day takes less than two minutes. In fact, the whole job to move and compress has run in less than 15 seconds for each day's data since we took this solution live a while back. That is time well worth spending for the 90% saving in disk consumption we achieve.

It is worth noting that while HCC is an EXADATA feature not available in most Oracle databases, traditional compression is. Some forms of it require extra licensing, but it is there, so while you may not get the same ratio as described in this post, you can still get a big reduction in disk space consumption using the compression method available to you.
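For completeness, here is a hedged sketch of the non-HCC alternatives, again with made-up table names: basic compression only compresses data loaded via direct path and comes with Enterprise Edition, while OLTP compression also compresses conventional DML but requires the Advanced Compression option.

-- Illustrative names only.
-- Basic compression: applied to direct path loads, no extra license needed.
CREATE TABLE log_history_basic COMPRESS BASIC
  AS SELECT * FROM log_entries;

-- OLTP compression: also compresses conventional inserts and updates,
-- but requires the Advanced Compression option.
CREATE TABLE log_history_oltp COMPRESS FOR OLTP
  AS SELECT * FROM log_entries;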

With this, the last piece of the puzzle fell into place and there were no concerns left with the plan for fixing the issues the organisation had with managing this log data. The next post will summarise and wrap up what was achieved with the changes described in this series.


Salesforce.com Real Time integration with Oracle using Informatica PowerCenter 9.5

Kubilay Çilkara - Tue, 2013-06-04 17:03
In this post I will describe how you can integrate your Salesforce.com org with a relational database like Oracle in real time, or rather 'near' real time!

I often come across the requirement to quickly propagate changes from cloud platforms like Salesforce.com to on-premise data stores. You can do this with web services, but that is not middleware and it requires coding.

How about doing it with a data integration tool?

Informatica Corporation's Informatica PowerCenter can achieve this by using the CDC (Change Data Capture) feature of the PowerCenter Salesforce connector when Salesforce is the source in a mapping.

The configuration is simple. All you really have to set up is 2 properties in the Mapping Tab of a Session Task in Informatica Workflow Manager.

These are the properties:
  • Time Limit property to -1
  • Flush interval property to 60 seconds (minimum 60 seconds)
And here is what these two settings mean from the PowerCenter PowerExchange for Salesforce.com User Guide:

CDC Time Limit

Time period (in seconds) that the Integration Service reads changed Salesforce data. When you set the CDC Time Limit to a non-zero value, the Integration Service performs a full initial read of the source data and then captures changes to the Salesforce data for the time period you specify. Set the value to -1 to capture changed data for an infinite period of time. Default is 0. 

Flush Interval

Interval (in seconds) at which the Integration Service captures changed Salesforce data. Default is 300. If you set the CDC Time Limit to a non-zero value, the Integration Service captures changed data from the source every 300 seconds. Otherwise, the Integration Service ignores this value.


That's it, you don't have to configure anything else!

Once you set these properties in the mapping tab of a session and save and restart the task in the workflow, the task will run continuously, non-stop. The connector will poll the Salesforce org continuously and propagate any changes you make in Salesforce downstream to the on-premise database system, including INSERT, UPDATE and DELETE operations.

Enjoy!

More reading:

SFDC CDC implementation in Informatica PowerCenter




Categories: DBA Blogs

Webcast Series - What's New in EPM 11.1.2.3 and OBIEE 11.1.1.7

Look Smarter Than You Are - Tue, 2013-06-04 10:56
Today I'm giving the first presentation in a 9-week long series on all the new things in Oracle EPM Hyperion 11.1.2.3 and OBIEE 11.1.1.7.  The session today (and again on Thursday) is an overview of everything new in all the products.  It's 108 slides which goes to show you that there's a lot new in 11.1.2.3.  I won't make it through all 108 slides but I will cover the highlights.

I'm actually doing 4 of the 9 weeks (and maybe 5, if I can swing it).  Here's the complete lineup in case you're interested in joining:

  • June 4 & 6 - Overview
  • June 11 & 13 - HFM
  • June 18 & 20 - Financial Close Suite
  • July 9 & 11 - Essbase and OBIEE
  • July 16 & 18 - Planning
  • July 23 & 25 - Smart View and Financial Reporting
  • July 30 & Aug 1 - Data & Metadata Tools (FDM, DRM, etc.)
  • Aug 6 & 8 - Free Supporting Tools (LCM, Calc Mgr, etc.)
  • Aug 13 & 15 - Documentation

If you want to sign up, visit http://www.interrel.com/educations/webcasts.  There's no charge and I don't do marketing during the sessions (seriously, I generally forget to explain what company I work for).  It's a lot of information, but we do spread it out over 9 weeks, so it's not information overload.

And bonus: you get to hear my monotone muppet voice for an hour each week. #WorstBonusEver
Categories: BI & Warehousing

TROUG 2013 DW/BI SIG

H.Tonguç Yılmaz - Tue, 2013-06-04 04:21
Hello, our second Turkish Oracle Users Group (TROUG) BI/DW special interest group meeting will take place on 21 June at İTÜ Maslak. The draft plan is as follows: 09:00 – 09:30 Registration and opening; 09:30 – 10:15 Ersin İhsan Ünkar / Oracle Big Data Appliance & Oracle Big Data Connectors – Hadoop Introduction; 10:30 – 11:15 Ferhat Şengönül / Exadata TBD […]

Free Course on ADF Mobile

Bex Huff - Mon, 2013-06-03 15:30

Oracle came out with a clever new online course on Developing Applications with ADF Mobile. I really like the format: it's kind of like a presentation, but with video of the key points and code samples. There's also an easy-to-navigate table of contents on the side so you can jump to the topic of interest.

I like it... I hope the ADF team continues in this format. It's a lot better than a jumble of YouTube videos ;-)


Categories: Fusion Middleware

e_howdidIdeservethis

Oracle WTF - Sat, 2013-06-01 02:51

A friend has found himself supporting a stack of code written in this style:

DECLARE
   e_dupe_flag EXCEPTION;
   PRAGMA EXCEPTION_INIT(e_dupe_flag, -1);

BEGIN
   ...

EXCEPTION
   WHEN e_dupe_flag THEN
      RAISE e_duplicate_err;

  etc...

Because, as he says, coding is not hard enough.

This reminded me of one that was sent in a while ago:

others EXCEPTION;

"I didn't know you could do that" adds our correspondent.

Create a Couchbase cluster in less than a minute with Ansible

Tugdual Grall - Fri, 2013-05-31 15:07
TL;DR: Look at the Couchbase Ansible Playbook on my Github. Introduction: When I was looking for a more effective way to create my cluster, I asked some sysadmins which tools I should use to do it. The answer I got during OSDC was not Puppet, nor Chef, but Ansible. This article shows you how you can easily configure and create a Couchbase cluster deployed on many Linux boxes...

ipython-sql bind variables

Catherine Devlin - Thu, 2013-05-30 16:57

Thanks to Mike Wilson, ipython-sql now supports bind variables!


In [12]: name = 'Countess'

In [13]: %sql select description from character where charname = :name
Out[13]: [(u'mother to Bertram',)]

White Paper on Custom PDF Reports in APEX

Marc Sewtz - Thu, 2013-05-30 11:43

As I’ve previously blogged about and outlined in my video tutorials, it is now possible to create custom PDF reports with APEX 4.2.2 using the APEX Listener 2.0.2 and third-party tools like Altova Stylevision or Stylus Studio. To help our customers get started with this, we’ve just released a white paper that outlines the system requirements, explains the configuration steps and options, and then walks you through the creation of PDF reports using custom layouts step by step. The white paper is available on our Oracle Application Express OTN page:

Maintaining the security-worthiness of Java is Oracle’s priority

Oracle Security Team - Thu, 2013-05-30 09:57

Hi, my name is Nandini Ramani and I lead the software development team building the Java platform. My responsibilities span the entire Java platform and include platform security.

Over the past year, there have been several reports of security vulnerabilities in Java, primarily affecting Java running in Web browsers. This blog entry outlines the steps Oracle has taken to address issues with the security-worthiness of Java in web browsers and elsewhere following the acquisition of Sun Microsystems.

Whenever Oracle makes an acquisition, acquired product lines are required to conform to Oracle policies and procedures, including those comprising Oracle Software Security Assurance.  As a result, for example, the Java development organization had to adopt Oracle’s Security Fixing Policies, which among other things mandate that issues must be resolved in priority order and addressed within a certain period of time.

As a result of adopting these stricter procedures, as well as increasing investments in Java overall by Oracle, Java development significantly accelerated the production of security fixes.  Recently-released Critical Patch Updates for Java SE have contained a historically high number of security fixes.  In addition, Oracle decided to publish an additional security release in 2013. The April 2013 Critical Patch Update for Java SE will bring Java to four  security releases in 2013 as opposed to the three initially planned.  As a reminder, the February 2012 Critical Patch Update for Java SE provided 14 security fixes, the June 2012 release 14, the October 2012 release 30 (thus the total number of new security fixes provided through Critical Patch Updates for Java in 2012 was 58).  In contrast to these numbers, the February 2013 security releases provided 55 new security fixes, and the April 2013 Critical Patch Update for Java SE provided 42 new security fixes, bringing the total number of security fixes released through the Critical Patch Update for Java in the first half of 2013 to 97.

In addition to accelerating the release of security fixes for Java SE, Oracle’s additional investments have provided the organization with the ability to more quickly respond to reports of 0-days and other particularly severe vulnerabilities.  Java development has gained the ability to produce and test individual security fixes more quickly as evidenced by the quick releases of the most recent Java Security Alerts.  In other words, the procedural and technical changes implemented throughout Java development have enabled the organization to make improvements affecting both the Critical Patch Update program (scheduled release of a greater number of security fixes) and the Security Alert program (faster release of unscheduled security fixes in response to 0-days or particularly severe vulnerabilities).

Starting in October 2013, Java security fixes will be released under the Oracle Critical Patch Update schedule along with all other Oracle products.  In other words, Java will now issue four annual security releases.  Obviously, Oracle will retain the ability to issue emergency “out of band” security fixes through the Security Alert program.

The implementation of Oracle Software Security Assurance policies and practices by Java development is also intended to defend against the introduction of new vulnerabilities into the Java code base.  For example, the Java development team has expanded the use of automated security testing tools, facilitating regular coverage over large sections of Java platform code.  The Java team has engaged with Oracle’s primary source code analysis provider to enhance the ability of the tool to work in the Java environment.  The team has also developed sophisticated analysis tools to weed out certain types of vulnerabilities (e.g., fuzzing tools).

Oracle is also addressing the limitations of the existing Java in browser trust/privileges model. The company has made a number of product enhancements to improve default security and provide more end user control over security. In JDK 7 Update 2, Oracle added enhanced security warnings before executing applets with an old Java runtime. In JDK 7 Update 6, Oracle began dynamically updating information about security baselines – information used to determine if the current version of Java contains the latest security fixes available. In JDK 7 Update 10, Oracle introduced a security slider configuration option, and provided for automatic security expiration of older Java versions (to make sure that users run the most recent versions of Java with a more restricted trust model than in older versions). Further, with the release of JDK 7 Update 21, Oracle introduced the following changes:
  (1) The security model for signed applets was changed.  Previously, signing applets was only used to request increased application privileges.  With this update, signing applets establishes identity of the signer, but does not necessarily grant additional privileges.  As a result, it is now possible to run signed applets without allowing them to run outside the sandbox, and users can prevent the execution of any applets if they are not signed. 
  (2) The default plug-in security settings were changed to further discourage the execution of unsigned or self-signed applets.  This change is likely to impact most Java users, and Oracle urges organizations whose sites currently contain unsigned Java Applets to sign those Applets according to the documented recommendations.  Note, however, that users and administrators will be able to specifically opt out of this setting and choose a less secure deployment mode to allow for the execution of unsigned applets.  In the near future, by default, Java will no longer allow the execution of self-signed or unsigned code.
  (3) While Java provides the ability to check the validity of signed certificates through Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) calls before the execution of signed applets, the feature is not enabled by default because of a potential negative performance impact.  Oracle is making improvements to standardized revocation services to enable them by default in a future release.  In the interim, we have improved our static blacklisting to a dynamic blacklisting mechanism including daily updates for both blacklisted jar files and certificates.

Finally, while the security problems affecting Java in Internet browsers have generally not impacted Java running on servers, Oracle has found that the public coverage of the recently published vulnerabilities impacting Java in the browser has caused concern to organizations committed to Java applications running on servers.  As a result, Oracle is taking steps to address the security implications of the wide Java distribution model, by further dissociating client/browser use of Java (e.g., affecting home users) and server use (e.g., affecting enterprise deployments).  With Java 7 update 21, Oracle has introduced a new type of Java distribution: “Server JRE.” 

Oracle has removed plugins from the Server JRE distribution to reduce attack surface but also to reduce customer confusion when evaluating server exploitation risk factors.  In the future, Oracle will explore stronger measures to further reduce attack surface including the removal of certain libraries typically unnecessary for server operation.  Such significant measures cannot be implemented in current versions of Java since they would violate current Java specifications, but Oracle has been working with other members of the Java Community Process to enable such changes in future versions of Java.

In addition, Oracle wants to improve the manageability of Java in enterprise deployments.  Local Security Policy features will soon be added to Java and system administrators will gain additional control over security policy settings during Java installation and deployment of Java in their organization.  The policy feature will, for example, allow  system administrators to restrict execution of Java applets to those found on specific hosts (e.g., corporate server assets, partners, etc) and thus reduce the risk of malware infection resulting from desktops accessing unauthorized and malicious hosts. 

It is our belief that as a result of this ongoing security effort, we will decrease the exploitability and severity of potential Java vulnerabilities in the desktop environment and provide additional security protections for Java operating in the server environment.  Oracle’s effort has already enabled the Java development team to deliver security fixes more quickly, resulting in fewer outstanding security bugs in Java.

For more information:
More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html
Java security documentation is located at http://www.oracle.com/technetwork/java/javase/tech/index-jsp-136007.html
Release notes for JDK 7 releases are located at http://www.oracle.com/technetwork/java/javase/7u-relnotes-515228.html

Oracle Forms in the Cloud - the next big thing?

Gerd Volberg - Wed, 2013-05-29 08:26
The whole world speaks about working in the cloud. Why not Oracle Forms?

Why not...

Why...

Because: Oracle did it already...

Link to the webcast

In the webcast, Michael Ferrante and Roger Vidal explain how you can run your application in the cloud with WebLogic Server and what this means for the future of Oracle Forms.

Grant points to this webcast with the words: "Oracle Forms remains a key technology..."

Have fun listening to Michael and Roger
Gerd

Improving data move on EXADATA III

Mathias Magnusson - Wed, 2013-05-29 07:00

Moving to history tables

In the last post I talked about how we made the actual writing of all those log records much faster. To date it has been so fast that not a single problem report has been filed. You find that post here.

Once the data was written to the log tables, it had to be moved to the history tables. This was a process that took around 16 hours. It was never allowed to run for that long, as it had to be stopped before the business day started.

This move was done from an instance on the EXADATA to a database on an old server. Yes, I can hear you all think “AHA! It must be the slow database link.” That was the leading thought when I started looking at it. And yes, it sure was slow, but no, it was not having a big impact. The other area that had been tuned and tweaked over and over and over and … was the insert over the database link to the history tables. Sure enough, it was taking a long time. However, measuring it showed that it only accounted for 20% of the total time. Reducing it would save us over three hours. While that would be good, where did the rest of the time go?

It went to a place no one had suspected, tweaked, or even measured. What else is new?

It was the part of the process that was EXADATA only: the delete of the rows that had been moved. Huh? How could this be? Well, it turns out that deleting data based on an in-clause was not as fast as one would think (or at least want). The process selected a set of primary key values and put them into a temporary table; this table was then used to identify the rows to insert into the history table and to delete.
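To make the shape of the original process concrete, here is a simplified sketch of the pattern described above. All table, column and database link names are invented for illustration, not the real ones from this system.

-- Illustrative names only.
-- Collect the primary keys of the rows to move for this run.
INSERT INTO tmp_move_keys (log_id)
  SELECT log_id FROM log_operational WHERE log_time < TRUNC(SYSDATE);

-- Copy the rows to the history table over the database link ...
INSERT INTO log_history@old_server (log_id, log_time, payload)
  SELECT log_id, log_time, payload
  FROM   log_operational
  WHERE  log_id IN (SELECT log_id FROM tmp_move_keys);

-- ... and then delete them locally, which turned out to be the expensive part.
DELETE FROM log_operational
WHERE  log_id IN (SELECT log_id FROM tmp_move_keys);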

Yes, there are quite a few things in this process that one could attempt to optimise. However, no matter what, it would probably not end up fast enough. Even if it did, would it handle the projected growth of the business? And is there really no better way than essentially row-by-row processing?

Sure there is. Sometimes SQL is not the best or only tool at our disposal.

Everything doesn’t have to be done with SQL. ™

We had already removed the indexes, so the delete should now be faster. It was, but just barely fast enough. With that change alone we had squeezed into the seemingly unattainable window for this process. But business is growing, and within weeks we would be back to tuning and tweaking.

Back to the idea of not using SQL for everything. But first, let’s revisit the approach that led to success with the write speed. What assumptions are being made that we can question? Well… why are we writing this data over a database link to the slowest database we have in our production environment? It has always been that way, and yes, we were worried about the impact of doing this on the EXADATA: both the impact of allowing ad-hoc searches and the impact of storing all this data there. The storage concern is well founded, as the log data takes up close to 1.5 TB and the volume of logs written is increasing.

However, when we questioned this we all agreed that these were assumed problems and assumed solutions to those problems. Based on that, a PoC was produced to show what would happen if we kept the historic log data in the same database instance on the EXADATA.

With the historic tables in the same database, we get a whole new set of tools to use. I built a PoC showing how data can be moved from the operational tables (the ones the logs are written to) to the historic ones in under a second for a whole day's volume. To achieve this I range partitioned the table, with the partition key being the time the log record was inserted. The next part is to use a database feature called partition exchange.

When exchanging a partition, no data is actually moved. The partition with today's data is moved from table A to table B via a partition exchange, but this move only updates metadata in the database. That is to say, the only change is which table the partition belongs to. The rows in the partition remain in exactly the same physical location on disk as they were from the beginning. They are not changed, not read, and not touched in any way.

This is what makes such a move so fast. Even better, it is transactionally safe. If a query started while the partition belonged to table A, that query will still read the data even though the partition was moved to another table mid-query. Queries on table A that start after the move will of course not see the moved data at all.
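A rough sketch of the technique, with all object names invented for illustration: because an exchange always swaps one partition with one non-partitioned table, a day's data can be routed from the operational table to the historic one via a plain staging table.

-- Illustrative names only. Both log_operational and log_history are range
-- partitioned by log date; log_stage has the same columns but no partitions.
CREATE TABLE log_stage AS
  SELECT * FROM log_operational WHERE 1 = 0;

-- Step 1: swap yesterday's partition out of the operational table.
-- Only the data dictionary is updated; the rows stay where they are on disk.
ALTER TABLE log_operational
  EXCHANGE PARTITION p_20130528 WITH TABLE log_stage;

-- Step 2: swap the staging table into the matching, still empty, partition
-- of the historic table (that partition must already exist).
ALTER TABLE log_history
  EXCHANGE PARTITION p_20130528 WITH TABLE log_stage;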

Moving millions or billions of rows in under a second is something that cannot be done by actually copying them with SQL, no matter how much one tunes the statements. So again, SQL is not the only tool at your disposal.

With this we proved that the process can be fast enough. I have not discussed it here, but during this work we also showed that the ad-hoc searches were of no concern either. EXADATA smart scan handles the ad-hoc queries very well, and most of them get sub-second response times even with no indexes. This is for tables with over a billion rows. Smart scan is one part of it and storage indexes are another. I will not discuss those in these posts, but take my word for it: when the query times were presented, the concern was immediately forgotten.

In the next post in this series, I will discuss how we dealt with the concern over the amount of disk space we would use now and in the future if we let the historic data stay on the EXADATA.


Patching an Exadata Compute Node

Fuad Arshad - Tue, 2013-05-28 12:10
An Oracle Exadata Full Rack consists of 8 DB compute nodes. From 11.2.3.2.0 onwards, Oracle has shifted its Exadata patching strategy to use Yum as the method of patching the compute nodes. The Quarterly Full Stack patch does not include the DB compute node patches anymore, so those have to be applied separately.
So where does someone new to Exadata start when they need to patch to a newer release of the software?
For the compute nodes, start here:
Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades [ID 1473002.1]

This note walks you through either setting up a direct connection to ULN and building a repository, or using an ISO image that you can download to set up the repository. Best practice is to set up a repository external to the Exadata and then add the repo info on the Exadata compute nodes. Once the repository is created and updated, or the ISO is downloaded, you will need to create
/etc/yum.repos.d/Exadata-computenode.repo
[exadata_dbserver_11.2_x86_64_latest]
name=Oracle Exadata DB server 11.2 Linux $releasever - $basearch - latest
baseurl=http:///yum/unknown/EXADATA/dbserver/11.2/latest/x86_64/
gpgcheck=1
enabled=0
This needs to be added to all Exadata compute nodes. Then ensure all repositories are disabled to avoid any accidents:
sed -i 's/^[\t ]*enabled[\t ]*=[\t ]*1/enabled=0/g' /etc/yum.repos.d/*
Download and stage patch 13741363 in a software directory on each node. This contains the helper scripts needed; always make sure to get the updated versions. As root, disable and stop the CRS on the node you are patching and then perform a server backup:
$GRID_HOME/bin/crsctl disable crs
$GRID_HOME/bin/crsctl stop crs -f
/13741363//dbserver_backup.sh
This will create a backup, and output similar to the below will show up:
[INFO] Unmount snapshot partition /mnt_snap
[INFO] Remove snapshot partition /dev/VGExaDb/LVDbSys1Snap
Logical volume "LVDbSys1Snap" successfully removed
[INFO] Save partition table of /dev/sda in /mnt_spare/part_table_backup.txt
[INFO] Save lvm info in /mnt_spare/lvm_info.txt
[INFO] Unmount spare root partition /mnt_spare
[INFO] Backup of root /dev/VGExaDb/LVDbSys1 and boot partitions is done successfully
[INFO] Backup partition is /dev/VGExaDb/LVDbSys2
[INFO] /boot area back up named boot_backup.tbz (tar.bz2 format) is on the /dev/VGExaDb/LVDbSys2 partition.
[INFO] No other partitions were backed up. You may manually prepare back up for other partitions.
Once the backup is complete you can proceed with the update:
yum --enablerepo=exadata_dbserver_11.2_x86_64_latest repolist    # this is the official channel for all updates
yum --enablerepo=exadata_dbserver_11.2_x86_64_latest update
This will download the appropriate RPMs, update the compute node and reboot it. The process can take between 10 and 30 minutes. Once the node is up, the clusterware will not come up (it is still disabled). Validate the image using imageinfo:
[root@exa]# imageinfo
Kernel version: 2.6.32-400.21.1.el5uek #1 SMP Wed Feb 20 01:35:01 PST 2013 x86_64
Image version: 11.2.3.2.1.130302
Image activated: 2013-05-27 14:41:45 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
This confirms that the compute node has been upgraded to 11.2.3.2.1. Unlock CRS as root and relink the Oracle binaries:
$GRID_HOME/crs/install/rootcrs.pl -unlock
su - oracle
. oraenv
# select the Oracle database home when prompted
relink all
make -C $ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
su root
$GRID_HOME/crs/install/rootcrs.pl -patch
$GRID_HOME/bin/crsctl enable crs
This concludes a compute node patch application. Rinse and repeat for all compute nodes (8 in an X2-8).
Now, if you have read through all this, you will see how many manual steps are involved. Fortunately, Oracle just released a utility to automate all these tasks for you. Rene Kundersma of Oracle talks about this new utility, called dbnodeUpdate.sh, in his blog post here.
Andy Colvin has published his take on these scripts, along with a demo, on his blog here.

Six months as Technical Evangelist at Couchbase

Tugdual Grall - Tue, 2013-05-28 09:36
Already 6 months! Already 6 months that I have joined Couchbase as Technical Evangelist. This is a good opportunity to take some time to look back. So first of all what is a Developer/Technical Evangelist? Hmm it depends of each company/product, but let me tell you what it is for me, inside Couchbase. This is one of the most exciting job I ever had. And I think it is the best job you can haveTugdual Grallhttps://plus.google.com/103667961621022786141noreply@blogger.com0

Using Load Plan for managing your ETL task in BI Apps 11.1.1.7.1 (1)

Dylan Wan - Tue, 2013-05-28 02:24

One of the major changes introduced in BI Apps 11.1.1.7.1 is the way we manage the ETL task sequence and trim unnecessary tasks.

This functionality was accomplished earlier using DAC. The problem we frequently faced was that the DAC repository and the INFA repository are maintained as two separate repositories. We had to sync up the task names exactly in order to use DAC to manage the execution of the Informatica workflow tasks.

The Load Plan and the Load Plan Generator were designed to address this requirement.

Here is a good article that describes the story.

Load Plan Generator – An Inside Look
Categories: BI & Warehousing

Complete Gross Margin improvement framework

Arvind Jain - Tue, 2013-05-28 00:48

Posted above is a time-tested framework for significant improvement to your business unit's overall gross margin.

Simple but very powerful. If you can deploy these buckets wisely, then GM savings can be anywhere from thousands to millions, depending on the scale of your operations.

 

The Adventures of the Trickster developer - Aliases

Gary Myers - Sun, 2013-05-26 21:02
The Trickster is a mythological being who enjoys potentially dangerous counter-intuitive behaviour. You'll often find him deep within the source code of large systems.

The Alias Trap

Generally an alias in a query is there to make it easier to understand, either for the developer or for the database. However, the Trickster can reuse aliases within a query to make things more confusing.


desc scott.emp
       Name       Null?    Type
       ---------- -------- --------------------
1      EMPNO      NOT NULL NUMBER(4)
2      ENAME               VARCHAR2(10)
3      JOB                 VARCHAR2(9)
4      MGR                 NUMBER(4)
5      HIREDATE            DATE
6      SAL                 NUMBER(7,2)
7      COMM                NUMBER(7,2)
8      DEPTNO              NUMBER(2)

desc scott.salgrade
       Name      Null?    Type
       --------- -------- --------------------
1      GRADE              NUMBER
2      LOSAL              NUMBER
3      HISAL              NUMBER


desc scott.dept
       Name      Null?    Type
       --------- -------- --------------------
1      DEPTNO    NOT NULL NUMBER(2)
2      DNAME              VARCHAR2(14)
3      LOC                VARCHAR2(13)


The trickster can try queries such as

SELECT e.ename, e.grade
FROM scott.emp e 
       JOIN scott.salgrade e ON e.sal BETWEEN e.losal AND e.hisal;

and


SELECT x.ename, x.dname
from scott.emp x join scott.dept x using (deptno);



As long as every prefixed column name involved exists in only one of the tables sharing the alias, the database can work out what to do.

If you find this in a 'live' query, it is normally one with at least half a dozen tables where an extra join has been added without noticing that the alias is already in use. And you'll discover it when a column is added to one of those tables causing the database to throw up its hands in surrender. Sometimes you will find it in a dynamically constructed query, when it will fail seemingly at random. 
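To make that failure mode concrete, here is a sketch using the second query above. The added column is hypothetical, and the exact error you hit is typically ORA-00918.

-- Hypothetical change: someone adds a DNAME column to EMP.
ALTER TABLE scott.emp ADD (dname VARCHAR2(14));

-- The previously working query now fails, typically with
-- ORA-00918: column ambiguously defined,
-- because x.dname could come from either table sharing the alias x.
SELECT x.ename, x.dname
FROM   scott.emp x JOIN scott.dept x USING (deptno);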

Once discovered, it isn't a mentally difficult bug to resolve. But first you have to get past the mental roadblock of "But all the columns already have an alias pointing to the table they come from".

Path To Understand Oracle Fusion Application

Oracle e-Business Suite - Sun, 2013-05-26 07:45

Path To Fusion


Categories: APPS Blogs

Review of Learning IPython for Interactive Computing and Data Visualization

Catherine Devlin - Sat, 2013-05-25 21:27
Valuable but traditional. May 25, 2013, by Catherine Devlin. 'Learning IPython for Interactive Computing and Data Visualization': 4 stars (of 5)

Packt Publishing recently asked if I could review their new title, Learning IPython for Interactive Computing and Data Visualization. (I got the e-book free for doing the review, but they don't put any conditions on what I say about it.) I don't often do reviews like that, but I couldn't pass this one up because I'm so excited about the IPython Notebook.

It's a mini title, but it does contain a lot of information I was very pleased to see. First and foremost, this is the first book to focus on the IPython Notebook. That's huge. Also:

  • The installation section is thorough and goes well beyond the obvious, discussing options like using prepackaged all-in-one Python distributions like Anaconda.
  • Some of the improvements IPython can make to a programming workflow are nicely introduced, like the ease of debugging, source code inspection, and profiling with the appropriate magics.
  • The section on writing new IPython extensions is extremely valuable - it contains more complete examples than the official documentation does and would have saved me lots of time and excess code if I'd had it when I was writing ipython-sql.
  • There are introductions to all the classic uses that scientists doing numerical simulations value IPython for: convenience in array handling, Pandas integration, plotting, parallel computing, image processing, Cython for faster CPU-bound operations, etc. The book makes no claim to go deeply into any of these, but it gives introductory examples that at least give an idea of how the problems are approached and why IPython excels at them.

So what don't I like? Well, I wish for more. It's not fair to ask for more bulk in a small book that was brought to market swiftly, but I can wish for a more forward-looking, imaginative treatment. The IPython Notebook is ready to go far beyond IPython's traditional core usership in the SciPy community, but this book doesn't really make that pitch. It only touches lightly on how easily and beautifully IPython can replace shell scripting. It doesn't get much into the unexplored possibilities that IPython Notebook's rich display capabilities open up. (I'm thinking of IPython Blocks as a great example of things we can do with IPython Notebook that we never imagined at first glance). This book is a good introduction to IPython's uses as traditionally understood, but it's not the manifesto for the upcoming IPython Notebook Revolution.

The power of hybrid documentation/programs for learning and individual and group productivity is one more of IPython Notebook's emerging possibilities that this book only mentions in passing, and passes up a great chance to demonstrate. The sample code is downloadable as IPython Notebook .ipynb files, but the bare code is alone in the cells, with no use of Markdown cells to annotate or clarify. Perhaps this is just because Packt was afraid that more complete Notebook files would be pirated, but it's a shame.

Overall, this is a short book that achieves its modest goal: a technical introduction to IPython in its traditional uses. You should get it, because IPython Notebook is too important to sit around waiting for the ultimate book - you should be using the Notebook today. But save space on your bookshelf for future books, because there's much more to be said on the topic, some of which hasn't even been imagined yet.

