Feed aggregator

Supplemental Logging and Securefiles Causing DBWn to Block

Don Seiler - Thu, 2014-04-17 09:00
A few weeks back, we began making changes to prepare for using Oracle GoldenGate. One of the first steps required is to enable "minimal supplemental logging" at the database level. We did this during an evening maintenance window. However, by the time the morning workload picked up we started seeing a lot of sessions blocking, and the root blocker was one of the DB Writer (DBWn) processes.
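
For reference, the change itself is a one-liner at the database level, plus a quick sanity check (a sketch of the kind of change we made, not the full maintenance script):

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> SELECT supplemental_log_data_min FROM v$database;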

Looking at the blocked sessions, a query similar to this was a common theme:

UPDATE foo
SET foo_data = :data, foo_time = systimestamp
WHERE foo_id = :id

This statement was run by many sessions from our webservers as part of a page load process, at a very high call rate. We knew that the only change in the database was the minimal supplemental logging. Obviously we were preparing to turn it off, but we took some time to look into it first. It is important to note that, for the most part, the contention was only with sessions running this update statement.



Looking at the table involved, one unique feature was that the foo_data field is an encrypted CLOB, which Oracle refers to as a securefile in 11g.

SQL> desc foo
Name            Null?    Type
--------------- -------- ----------------------------
FOO_ID          NOT NULL VARCHAR2(64)
FOO_DATA                 CLOB ENCRYPT
FOO_TIME        NOT NULL TIMESTAMP(6)
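
For illustration only (the actual DDL isn't shown here, and the storage clauses are assumptions), a table like this would be declared with the LOB stored as an encrypted securefile along these lines:

CREATE TABLE foo (
  foo_id   VARCHAR2(64) NOT NULL,
  foo_data CLOB,
  foo_time TIMESTAMP(6) NOT NULL
)
LOB (foo_data) STORE AS SECUREFILE (ENCRYPT);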

We opened an SR with Oracle and they pointed us to this unpublished bug:


Bug 9351684 : SECUREFILE - CACHE NOLOGGING CAUSES HIGH WRITE COMPLETE WAITS

They described it for us as:

Confirmed as "not a bug" in this bug.

It was stated: "write complete waits in this case are unfortunately, expected and can not be avoided/tuned. Even for NOLOGGING case there is a short invalidation redo that must be generated, and for correct crash recovery, dbwr must wait for redo to be written to disk first before data blocks can be written."

Basically, they were telling us that this is working as intended and that there is no workaround if you're using securefiles. For us it was important that we move forward with GoldenGate, so we needed a solution that let us keep minimal supplemental logging on. Looking closer, we knew that this table was on an encrypted tablespace already, so we felt comfortable changing the table to use a regular "basicfile" CLOB:

SQL> desc foo
Name            Null?    Type
--------------- -------- ----------------------------
FOO_ID          NOT NULL VARCHAR2(64)
FOO_DATA                 CLOB
FOO_TIME        NOT NULL TIMESTAMP(6)
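
A conversion like this is typically done by moving the LOB segment; a rough sketch only (the tablespace name below is a placeholder for the existing encrypted tablespace, and the table is unavailable while it moves):

-- rebuild the LOB column as a basicfile
ALTER TABLE foo MOVE LOB (foo_data) STORE AS BASICFILE (TABLESPACE my_encrypted_ts);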


Since making this change, the problems have gone away. Obviously we were lucky in that we could change the table to not use securefiles. If you have a table that sees a lot of DML with securefiles, you're probably going to have a painful experience with supplemental logging. Beware!
Categories: DBA Blogs

Oracle Java Cloud Service - April 2014 Critical Patch Update

Oracle Security Team - Thu, 2014-04-17 08:59

Hi, this is Eric Maurice.

In addition to the release of the April 2014 Critical Patch Update, Oracle has also addressed the recently publicly disclosed issues in the Oracle Java Cloud Service.  Note that the combination of this announcement with the release of the April 2014 Critical Patch Update is not coincidental or the result of the unfortunate public disclosure of exploit code, but rather the result of the need to coordinate the release of related fixes for our on-premise customers. 

Shortly after issues were reported in the Oracle Java Cloud Service, Oracle determined that some of these issues were the result of certain security issues in Oracle products (though not Java SE), which are also licensed for traditional on-premise use.  As a result, Oracle addressed these issues in the Oracle Java Cloud Service, and scheduled the inclusion of related fixes in the following Critical Patch Updates upon completion of successful testing so as to avoid introducing regression issues in these products.

 

For more information:

The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html

More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html

OSP #3a: Build a Standard Cluster Platform

Jeremy Schneider - Thu, 2014-04-17 06:15

This is the fifth article in a series called Operationally Scalable Practices. The first article gives an introduction and the second article contains a general overview. In short, this series suggests a comprehensive and cogent blueprint to best position organizations and DBAs for growth.

We’ve looked in some depth at the process of defining a standard platform with an eye toward Oracle database use cases. Before moving on, it would be worthwhile to briefly touch on clustering.

Most organizations should hold off as long as possible before bringing clusters into their infrastructure. Clusters introduce a very significant new level of complexity. They will immediately drive some very expensive training and/or hiring demands – in addition to the already-expensive software licenses and maintenance fees. There will also be new development and engineering needed – perhaps even within application code itself – to support running your apps on clusters. In some industries, clusters have been very well marketed and many small-to-medium companies have made premature deployments. (Admittedly, my advice to hold off is partly a reaction to this.)

When Clustering is Right

Nonetheless there definitely comes a point where clustering is the right move. There are four basic goals that drive cluster adoption:

  1. Parallel or distributed processing
  2. Fault tolerance
  3. Incremental growth
  4. Pooled resources for better utilization

I want to point out immediately that RAC is just one of many ways to do clustering. Clustering can be done at many tiers (platform, database, application) and if you define it loosely then even an Oracle database can be clustered in a number of ways.

Distributed Processing

Stop for a moment and re-read the list of goals above. If you wanted to design a system to meet these goals, what technology would you use? I already suggested clusters – but that might not have been what came to your mind first. How about grid computing? I once worked with some researchers in Illinois who wrote programs to simulate protein folding and DNA sequencing. They used the Illinois BioGrid – composed of servers and clusters managed independently by three different universities across the state. How about cloud computing? The Obama Campaign in 2008 used EC2 to build their volunteer logistics and coordination platforms to dramatically scale up and down very rapidly on demand. According to the book In Search of Clusters by Gregory Pfister, these four reasons are the main drivers for clustering – but if they also apply to grids and clouds then what’s the difference? Doesn’t it all accomplish the same thing?

In fact the exact definition of “clustering” can be a little vague and there is a lot of overlap between clouds, grids, clusters – and simple groups of servers with strong & mature standards. In some cases these terms might be more interchangeable than you would expect. Nonetheless there are some general conventions. Here is what I have observed:

CLUSTER: Old term, most strongly implies shared hardware resources of some kind, tight coupling and physical proximity of servers, and treatment of the group as a single unit for execution of tasks. While some level of single system image is presented to clients, each server may be individually administered and strong standards are desirable but not always implied.

GRID: Medium-aged term, implies looser coupling of servers, geographic dispersion, and perhaps cross-organizational ownership and administration. There will not be grid-wide standards for node configuration; individual nodes may be independently administered. The grid may be composed of multiple clusters. Strong standards do exist at a high level for management of jobs and inter-node communication. Or, alternatively, the term “grid” may more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

CLOUD: New term, implies service-based abstraction, virtualization and automation. It is extremely standardized with a bias toward enforcement through automation rather than policy. Servers are generally single-organization however service consumers are often external. Related to the term “utility computing” or the “as a service” terms (Software/SaaS, Platform/PaaS, Database/DaaS, Infrastructure/IaaS). Or, alternatively, may (like “grid”) more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

Google Trends for Computers and Electronics Category

These days, the distributed processing field is a very exciting place because the technology is advancing rapidly on all fronts. Traditional relational databases are dealing with increasingly massive data volumes, and big data technology combined with pay-as-you-go cloud platforms and mature automation toolkits have given bootstrapped startups unforeseen access to extremely large-scale data processing.

Building for Distributed Processing

Your business probably does not have big data. But the business case for some level of distributed processing will probably find you eventually. As I pointed out before, the standards and driving principles at very large organizations can benefit your commodity servers right now and eliminate many growing pains down the road.

In the second half of this article I will take a look at how this specifically applies to clustered Oracle databases. But I’m curious, are your server build standards ready for distributed processing? Could they accommodate clustering, grids or clouds? What kinds of standards do you think are most important to be ready for distributed processing?

Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone

Pankaj Chandiramani - Thu, 2014-04-17 05:02

Since the demands from the business for IT services are non-stop, creating copies of production databases in order to develop, test and deploy new applications can be labor intensive and time consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.


Join us for an in-depth technical webcast and learn how Snap Clone, a capability of the Oracle Cloud Management Pack for Oracle Database, can fundamentally improve the efficiency and agility of administrators and QA engineers while saving CAPEX on storage. Benefits include:



  • Agile provisioning (~2 minutes to provision a 1 TB database)

  • Over 90% storage savings

  • Reduced administrative overhead from integrated lifecycle management


Register Now!


April 24 — 10:00 a.m. PT | 1:00 p.m. ET

May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET

May 22 — 10:00 a.m. PT | 1:00 p.m. ET





Categories: DBA Blogs

SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 2

Galo Balda's Blog - Wed, 2014-04-16 18:46

In this post I’m going to show how to synchronize the remote and local repositories after an existing file in the local repository gets modified. What I’ll do is modify the sp_test_git.pls file in our local repository and then push those changes to the remote repository (GitHub).

First, I proceed to open the sp_test_git.pls file using SQL Developer, add another dbms_output line to it and save it. The moment I save the file, the Pending Changes (Git) window gets updated to reflect the change and the icons in the toolbar get enabled.

modify_file

Now I can include a comment and then add the file to the staging area by clicking on the Add button located on the Pending Changes (Git) window. Notice how the status changes from “Modified Not Staged” to “Modified Staged”.

staged_file

What if I want to compare versions before doing a commit to the local repository? I just have to click on the Compare with Previous Version icon located on the Pending Changes (Git) window.

compare2

The panel on the left displays the version stored in the local repository and the panel on the right displays the version in the Staging Area.

The next step is to commit the changes to the local repository. For that I click on the Commit button located on the Pending Changes (Git) window and then I click on the OK button in the Commit window.

commit

Now the Branch Compare window displays information telling that remote and local are out of sync.

branch_compare2

So the final step is to sync up remote and local by pushing the changes to GitHub. For that I go to the main menu and click on Team -> Git -> Push to open the “Push to Git” wizard, where I enter the URL for the remote repository along with the user name and password to complete the operation. Now I go to GitHub to confirm the changes have been applied.

updated_github


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control

Categories: DBA Blogs

HGV Levy

Tony Andrews - Wed, 2014-04-16 05:45
The UK government has introduced a new service for foreign lorry drivers to pay a levy to use UK roads here: https://www.hgvlevy.service.gov.uk/ It was built by my current employer, Northgate Information Solutions. Guess what technology it runs on? We had a lot of interesting challenges when building this: compliance with UK Government styling and standards, and responsive design to work on ...

April 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-04-15 15:04

Hello, this is Eric Maurice again.

Oracle today released the April 2014 Critical Patch Update.  This Critical Patch Update provides fixes for 104 vulnerabilities across a number of product lines including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Supply Chain Product Suite, Oracle iLearning, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Java SE, Oracle and Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.  A number of the vulnerabilities fixed in this Critical Patch Update have high CVSS Base Score and are being highlighted in this blog entry.  Oracle recommends this Critical Patch Update be applied as soon as possible.

Out of the 104 vulnerabilities fixed in the April 2014 Critical Patch Update, 2 were for the Oracle Database.  The most severe of these database vulnerabilities received a CVSS Base Score of 8.5 for the Windows platform to denote a full compromise of the targeted system, although a successful exploitation of this bug requires authentication by the malicious attacker.  On other platforms (e.g., Linux, Solaris), the CVSS Base Score is 6.0, because a successful compromise would be limited to the Database and not extend to the underlying Operating System.  Note that Oracle reports this kind of vulnerability with the ‘Partial+’ value for Confidentiality, Integrity, and Availability impact (Partial+ is used when the exploit affects a wide range of resources, e.g. all database tables).  Oracle makes a strict application of the CVSS 2.0 standard, and as a result, the Partial+ does not result in an inflated CVSS Base Score (CVSS only provides for ‘None,’ ‘Partial,’ or ‘Complete’ to report the impact of a bug).  This custom value is intended to call customers’ attention to the potential impact of the specific vulnerability and enable them to manually increase this severity rating if appropriate.  For more information about Oracle’s use of CVSS, see http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html.

This Critical Patch Update also provides fixes for 20 Fusion Middleware vulnerabilities.  The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5, which applies to a vulnerability in Oracle WebLogic Server (CVE-2014-2470) that is remotely exploitable without authentication.  If successfully exploited, this vulnerability can result in a wide compromise of the targeted WebLogic Server (Partial+ rating for Confidentiality, Integrity, and Availability; see the previous discussion about the meaning of the ‘Partial+’ value reported by Oracle).

Also included in this Critical Patch Update were fixes for 37 Java SE vulnerabilities.  4 of these Java SE vulnerabilities received a CVSS Base Score of 10.0.  29 of these 37 vulnerabilities affected client-only deployments, while 6 affected client and server deployments of Java SE.  Rounding out this count were one vulnerability affecting the Javadoc tool and one affecting unpack200.  As a reminder, desktop users, including home users, can leverage the Java Autoupdate or visit Java.com to ensure that they are running the most recent version of Java.  Java SE security fixes delivered through the Critical Patch Update program are cumulative.  In other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes.  Oracle strongly recommends that Java users, particularly home users, keep up with Java releases and remove obsolete versions of Java SE, so as to protect themselves against malicious exploitation of Java vulnerabilities.

This Critical Patch Update also included fixes for 5 vulnerabilities affecting Oracle Linux and Virtualization products suite.  The most severe of these vulnerabilities received a CVSS Base Score of 9.3, and this vulnerability (CVE-2013-6462) affects certain versions of Oracle Global Secure Desktop. 

Due to the relative severity of a number of the vulnerabilities fixed in this Critical Patch Update, Oracle strongly recommends that customers apply this Critical Patch Update as soon as possible.  In addition, as previously discussed, Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update.  However, it is often the case that earlier versions of affected releases are affected by vulnerabilities fixed in recent Critical Patch Updates.  As a result, it is highly desirable that organizations running unsupported versions, for which security fixes are no longer available under Oracle Premier Support, update their systems to a currently-supported release so as to fully benefit from Oracle’s ongoing security assurance effort.

For more information:

The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html

More information about Oracle’s application of the CVSS scoring system is located at http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html

An Ovum white paper “Avoiding security risks with regular patching and support services” is located at http://www.oracle.com/us/corporate/analystreports/ovum-avoiding-security-risks-1949314.pdf

More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html

The details of the Common Vulnerability Scoring System (CVSS) are located at http://www.first.org/cvss/cvss-guide.html. 

Java desktop users can verify that they are running the most recent version of Java and remove older versions of Java by visiting http://java.com/en/download/installed.jsp.

 

 

Frequently Misused Metrics in Oracle

Steve Karam - Tue, 2014-04-15 14:43
Lying Businessman

Back in March of last year I wrote an article on the five frequently misused metrics in Oracle: These Aren’t the Metrics You’re Looking For.

To sum up, my five picks for the most misused metrics were:

Business Graph

  1. db file scattered read – Scattered reads aren’t always full table scans, and they’re certainly not always bad.
  2. Parse to Execute Ratio – This is not a metric that shows how often you’re hard parsing, no matter how many times you may have read otherwise.
  3. Buffer Hit Ratio – I want to love this metric, I really do. But it’s an advisory one at best, horribly misleading at worst.
  4. CPU % – You license Oracle by CPU. You should probably make sure you’re making the most of your processing power, not trying to reduce it.
  5. Cost – No, not money. Optimizer cost. Oracle’s optimizer might be cost based, but you are not. Tune for time and resources, not Oracle’s own internal numbers.

Version after version, day after day, these don’t change much.

Anyways, I wanted to report to those who aren’t aware that I created a slideshow based on that blog for RMOUG 2014 (which I sadly was not able to attend at the last moment). Have a look and let me know what you think!

Metric Abuse: Frequently Misused Metrics in Oracle

Have you ever committed metric abuse? Gone on a performance tuning snipe hunt? Spent time tuning something that, in the end, didn’t even really have an impact? I’d love to hear your horror stories.

Also while you’re at it, have a look at the Sin of Band-Aids, and what temporary tuning fixes can do to a once stable environment.

And lastly, keep watching #datachat on Twitter and keep an eye out for an update from Confio on today’s #datachat on Performance Tuning with host Kyle Hailey!

The post Frequently Misused Metrics in Oracle appeared first on Oracle Alchemist.

RMAN Redundancy is not a Viable Retention Policy

Don Seiler - Tue, 2014-04-15 14:07

Originally posted by me on the Pythian blog. This is an older post that I somehow forgot to post on my own blog, but another recent redundancy foul-up reminded me of it.

The story you are about to read is based on actual events. Names and paths have been changed to protect the innocent. I call this scenario “The Perfect Storm” because it took just the right combination of events and configurations. Sadly, this doesn’t make it an unlikely occurrence, so I’m posting it here in hopes that you’ll be able to save yourselves before it’s too late.

I have always had a preternatural dislike for using REDUNDANCY as a retention policy for Oracle RMAN, greatly preferring RECOVERY WINDOW instead, simply because REDUNDANCY doesn’t really guarantee anything valuable to me, whereas RECOVERY WINDOW guarantees that I’ll be able to do a point-in-time recovery to anytime within the past x days. Plus, I had already been burned once by a different client using REDUNDANCY. With the story I’m about to tell, this dislike has turned into violent hatred. I’m going to be light on the technical details, but I hope you’ll still feel the full pain.
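
That preference translates into a one-line RMAN configuration; a sketch, with the 14-day window below being just an example value rather than anything from this client:

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;

With a recovery window, backups only become obsolete once they are no longer needed to recover to any point inside that window, rather than simply being the third-oldest copy.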



First some table setting:
  • Standalone 10.2.0.2 instance (no RAC, no DataGuard/Standby)
  • RMAN retention policy set to REDUNDANCY 2
  • Backups stored in the Flash Recovery Area (FRA)
A few months ago, we had a datafile corruption on this relatively new instance (data had been migrated from an old server about a week prior). The on-call DBA followed up the page by checking for corruptions in the datafile with this command:

RMAN> backup check logical datafile '/path/to/foobar_data.dbf';

This, my friends, led to the major fall, though we did not know it for many hours. You see, the FRA was already almost full, and when that happens Oracle automatically deletes obsolete files from the FRA to free up space. That last backup command, while only intended to check for logical corruption, did actually perform a backup of the file, and rendered the earliest backup of the file obsolete since there were now two newer copies. That earliest file happened to be from the level 0 backup from which we would later want to restore.
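
As an aside, FRA space pressure of this kind is easy to spot with a quick check against the standard view (shown here only for illustration):

SQL> SELECT space_limit, space_used, space_reclaimable FROM v$recovery_file_dest;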

Of course, at first we didn’t know why the file was missing. Logs showed that it was on disk no less than two hours before the problem started. Later, scanning the alert log for the missing backup filename yielded this:

Deleted Oracle managed file /path/to/flash_recovery_area/FOO_DB/backupset/2008_12_01/o1_xxxx.bkp

Oracle deleted the one backup file that we needed!

Even worse, it wasn’t until this time on a Monday night that we realized that the level 0 taken the previous weekend had failed to push the backup files to tape because of a failure on the NetBackup server. The problem was reported as part of Monday morning’s routine log checks, but the missing files had not yet been pushed to tape.

In the end, we were able to drop and restore the tablespace to a previous point in time on a test instance from another backup file and exp/imp data back over. It was ugly, but it got things back online. Many DBAs better than myself gave their all on this mission.

To summarize, the ingredients:
  • Oracle RMAN
  • CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
  • Flash Recovery Area near full, obediently deleting obsolete files.
  • Tape backup failure
Add in an innocent backup command and . . . BOOM! Failure Surprise.

The two biggest points to take away are:

  • Tape backup failures are still serious backup failures and should be treated as such, even if you back up to disk first.
  • REDUNDANCY is not a viable retention policy. In my house, it is configuration non grata.
Categories: DBA Blogs

JasperReportsIntegration - Full path requirement and workaround

Dietmar Aust - Mon, 2014-04-14 17:38
I have just posted an answer to a question about what seems like a bug in the JasperReportsIntegration toolkit: you have to use absolute paths for referencing images or subreports, which is typically a bad thing.

Don't know exactly why it doesn't work, but there is a workaround for that: http://www.opal-consulting.de/site/jasperreportsintegration-full-path-requirement-and-workaround/

Hope that helps,
~Dietmar.

First blogpost at my own hosted wordpress instance

Dietmar Aust - Mon, 2014-04-14 16:50
I have been blogging at daust.blogspot.com for quite some years now ... and many people have preferred WordPress to Blogspot. I can now understand why :).

It is quite flexible and easy to use and there are tons of themes available ... really cool ones.

The decision to host my own WordPress instance was in the end motivated by something else: I have created two products and they needed a platform where they could be presented.

First I wanted to buy a new theme from ThemeForest and build an APEX theme for that ... but this is a lot of work.

I then decided to host my content using wordpress since I have already bought a new theme: http://www.kriesi.at/themedemo/?theme=enfold

And this one has a really easy setup procedure for wordpress and comes with a ton of effects and wizards, cool page designer, etc.

Hopefully this will get me motivated to post more frequently ... we will see ;).

Cheers,
~Dietmar.

Unique identifiers - but what do they identify

Gary Myers - Fri, 2014-04-11 23:39
Most of the readers of this blog will be developers, or DBAs, who got the rules of Normalisation drummed into them during some phase of their education or training. But often we get to work with people who don't have that grounding. This post is for them. Feel free to point them at it.

Through normalisation, the tendency is to start with a data set, and by a methodical process extract candidate keys and their dependent attributes. In many cases there isn't a genuine or usable candidate key and artificial / surrogate keys need to be generated. While your bank can generally work out who you are based on your name and address, either of those could change and so they assign you a more permanent customer or account number.

The difficulty comes when those identifiers take on a life of their own. 

Consider the phone number. When I dial my wife's phone number, out of all the phones in Australia (or the world), it is hers alone that will ring. Why that one?

In the dark ages, the phone number would indicate a particular exchange and a copper wire leading out of that exchange hard wired to a receiver (or a set of receivers in the case of Party Lines). Now all the routing is electronic, telephones can be mobile and the routing for calls to a particular number can be changed in an instant. A phone number no longer identifies a device, but a service, and a new collection of other identifiers has risen up to support the implementation of that service. An IMEI can identify a mobile handset and the IMSI indicates a SIM card from a network provider, and we can change the SIM card / IMSI that corresponds to a phone number, or swap SIM cards between handsets. Outside the cellular world, VOIP can shunt 'phone number' calls around innumerable devices using IP addresses.

Time is another factor. While I may 'own' a given phone number at a particular time, I may give that up and someone else might take it over. That may get represented by adding dates, or date ranges to the key, or it can be looked at as a sequence. For example, Elizabeth Taylor's husband may indicate one of seven men depending on context. The "fourth husband" or "her husband on 1st Jan 1960" would be Eddie Fisher.
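
To make the date-range idea concrete, here is a deliberately simplified, hypothetical example (table and column names invented for illustration): the phone number identifies a service assignment over a period of time, not a person or a handset.

CREATE TABLE phone_service_history (
  phone_number    VARCHAR2(15) NOT NULL,  -- the public identifier
  effective_from  DATE         NOT NULL,  -- when this assignment began
  effective_to    DATE,                   -- NULL for the current holder
  customer_id     NUMBER       NOT NULL,  -- surrogate key for the subscriber
  CONSTRAINT phone_service_history_pk
    PRIMARY KEY (phone_number, effective_from)
);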

Those without a data modelling background that includes normalisation may flinch at the proliferation of entities and tables in a relational environment. As developers and architects look at newer technologies some of the discipline of the relational model will be passed over. Ephemeral transactions can cluster the attributes together in XML or JSON formats with no need for consistency of data definitions beyond the period of processing. Data warehousing quickly discarded relational formats in favour of 'facts' and 'dimensions'. 

The burden of managing a continuous and connected set of data extending over a long period of time, during which the identifiers and attributes morph, is an ongoing challenge in database design.

HeartBleed and Oracle

Fuad Arshad - Fri, 2014-04-11 08:23
There are a lot of people asking about Heartbleed and how it has impacted the web.
Oracle has published  MOS Note 1645479.1 that talks about all the products impacted and if and when fixes will be available.
The following blog post is also a good reference about the vulnerability.  https://blogs.oracle.com/security/entry/heartbleed_cve_2014_0160_vulnerability



The Riley Family, Part III

Chet Justice - Thu, 2014-04-10 21:44


That's Mike and Lisa, hanging out at the hospital. Mike's in his awesome cookie monster pajamas and robe...must be nice, right? Oh wait, it's not. You probably remember why he's there, Stage 3 cancer. The joys.

In October, we helped to send the entire family to Game 5 of the World Series (Cards lost, thanks Red Sox for ruining their night).

In November I started a GoFundMe campaign, to date, with your help, we've raised $10,999. We've paid over 9 thousand dollars to the Riley family (another check to be cut shortly).

In December, Mike had surgery. Details can be found here. Shorter: things went fairly well, then they didn't. Mike spent 22 days in the hospital and lost 40 lbs. He missed Christmas and New Years at home with his family. But, as I've learned over the last 6 months, the Riley family really knows how to take things in stride.

About 6 weeks ago Mike started round 2 of chemo, he's halfway through that one now. He complains (daily, ugh) about numbness, dizziness, feeling cold (he lives in St. Louis, are you sure it's not the weather?), and priapism (that's a lie...I hope).

Mike being Mike though, barely a complaint (I'll let you figure out where I'm telling a lie).

Four weeks ago, a chilly (65) Saturday night, Mike and Lisa call. "Hey, I've got some news for you."

"Sweet," I think to myself. Gotta be good news.

"Lisa was just diagnosed with breast cancer."

WTF?

ARE YOU KIDDING ME? (Given Mike's gallows humor, it's possible).

"Nope. Stage 1. Surgery on April 2nd."

FFS

(Surgery was last week. It went well. No news on that front yet.)

Talking to the two of them that evening you would have no idea they BOTH have cancer. Actually, one of my favorite stories of the year...the hashtag for the Riley Family campaign was #fmcuta. Fuck Mike's Cancer (up the ass). I thought that was hilarious, but I didn't think the Rileys would appreciate it. They did. They loved it. I still remember Lisa's laugh when I first suggested it. They've dropped the latest bad news and Lisa is like, "Oh, wait until you hear this. I have a hashtag for you."

"What is it?" (I'm thinking something very...conservative. Not sure why, I should know better by now).

#tna

I think about that for about .06 seconds. Holy shit! Did you just say tna? Like "tits and ass?"

(sounds of Lisa howling in the background).

Awesome. See what I mean? Handling it in stride.

"We're going to need a bigger boat." All I can think about now is, "what can we do now?"

First, I raised the campaign goal to 50k. This might be ambitious, that's OK, cancer treatments are expensive enough for one person, and 10K (the original amount) was on the low side. So...50K.

Second, Scott Spendolini created a very cool APEX app, ostensibly called the Riley Support Group (website? gah). It's a calendar/scheduling app that allows friends and family to coordinate things like meals, young human (children) care and other things that most of us probably take for granted. Pretty cool stuff. For instance, Tim Gorman provides pizza on Monday nights (Dinner from Pizza Hut...1 - large hand-tossed cheese lovers, 1 - large thin-crispy pepperoni, 1 - 4xpepperoni rolls, 1 - cheesesticks).

Third. There is no third.

So many of you have donated your hard earned cash to the Riley family, they are incredibly humbled by, and grateful for, everyone's generosity. They aren't out of the woods yet. Donate more. Please. If you can't donate, see if there's something you can help out with (hit me up for details, Tim lives in CO, he's not really close). If you can't do either of those things, send them your prayers or your good thoughts. Any and all help will be greatly appreciated.
Categories: BI & Warehousing

‘Heartbleed’ (CVE-2014-0160) Vulnerability in OpenSSL

Oracle Security Team - Thu, 2014-04-10 13:44

Hi, this is Eric Maurice.

A vulnerability affecting certain versions of the OpenSSL libraries was recently publicly disclosed.  This vulnerability has received the nickname ‘Heartbleed’ and the CVE identifier CVE-2014-0160. 

Oracle is investigating the use of the affected OpenSSL libraries in Oracle products and solutions, and will provide mitigation instructions when available for these affected Oracle products. 

Oracle recommends that customers refer to the 'OpenSSL Security Bug - Heartbleed CVE-2014-0160' page on the Oracle Technology Network (OTN) for information about affected products, availability of fixes and other mitigation instructions.  This page will be periodically updated as Oracle continues its assessment of the situation.   Oracle customers can also open a support ticket with My Oracle Support if they have additional questions or concerns.

 

For More Information:

The CVE-2014-0160 page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html

The Heartbleed web site is located at http://heartbleed.com/.  This site is not affiliated with Oracle and provides a list of affected OpenSSL versions.

The My Oracle Support portal can be accessed by visiting https://support.oracle.com

 

_direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables in 11.2.0.3 (Part 2)

Mihajlo Tekic - Thu, 2014-04-10 01:30
This is continuation of my last post regarding direct path reads on partitioned tables in Oracle 11.2.0.3.

To recap, the behavior I observed is that direct path reads will be performed if the number of blocks for all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions, each having 100 blocks, and a query goes after two of the partitions, direct path reads will be performed if _small_table_threshold is lower than 200.

Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be scanned directly. So it is an all-or-nothing situation.

I also indicated that _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.

What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables the direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE, Oracle will use segment headers to determine how many blocks the segment has. (read Tanel Poder’s blogpost for more information)

Let’s see how queries that access table partitions (full scan) behave if the _direct_read_decision_statistics_driven parameter is set to FALSE in 11.2.0.3. My expectation was that it should be the same as if it were set to TRUE. I thought that once Oracle got the information about the number of blocks in each of the partitions, it would use the same calculation as if the parameter were set to TRUE. Let’s see.

But, before moving forward a small disclaimer: Do not perform these tests in production or any other important environment. Changing of undocumented parameters should be done under the guidance of Oracle Support. The information presented here is for demonstration purposes only.

I will use the same table, TEST_PART, that I used in my earlier post.

I started with flushing the buffer cache (to make sure none of the partitions has blocks in the cache).

I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from the PART_1 partition only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.

_small_table_threshold in my sandbox environment was set to 117.
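
(For reference, hidden parameter values like these can be checked as SYS with a query against the X$ views; this is a generic technique rather than anything specific to this test case:)

SELECT x.ksppinm  param_name,
       y.ksppstvl param_value
FROM   x$ksppi x, x$ksppcv y
WHERE  x.indx = y.indx
AND    x.ksppinm IN ('_small_table_threshold',
                     '_direct_read_decision_statistics_driven');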


SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> SELECT count(1) FROM test_part WHERE col1 in (1);

COUNT(1)
----------
4000


As expected, no direct path reads were performed (I used my sese.sql script, which scans v$sesstat for statistics that match a given keyword).


SQL> @sese direct

no rows selected
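
(sese.sql itself is essentially a query against v$sesstat joined to v$statname; a sketch of the idea, with the exact columns and formatting being approximate:)

-- sese.sql (sketch): non-zero session statistics whose name matches &1
select sn.sid, sn.statistic# id, 'STAT.' || n.name name, sn.value
from   v$sesstat sn, v$statname n
where  sn.statistic# = n.statistic#
and    lower(n.name) like '%' || lower('&1') || '%'
and    sn.value != 0
order  by sn.sid, n.statistic#;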


Now let’s see what happens with a query that accesses the first two partitions. Remember, if the _direct_read_decision_statistics_driven parameter is set to TRUE, this query would perform direct path reads because the number of blocks in both partitions, 130 (2x65), exceeds the _small_table_threshold value (117).


SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

no rows selected


No direct reads. Definitely different compared to when _direct_read_decision_statistics_driven was set to TRUE.

How about for a query that accesses three partitions:


SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

no rows selected


Still no direct path reads.

How about if we access all 7 partitions:


SQL> select count(1) from test_part where col1 in (1,2,3,4,5,6,7);

COUNT(1)
----------
28000

SQL> @sese direct

no rows selected


No direct path reads.

So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis. If the number of blocks in the partition is less than or equal to _small_table_threshold, the buffer cache will be used; otherwise, direct path reads will be performed.

What if some of the partitions were already cached in the buffer cache?

In the next test I’ll:
  • Flush the buffer cache again
  • Set _direct_read_decision_statistics_driven to FALSE
  • Run a query that accesses the first two partitions
  • Decrease the value for _small_table_threshold to 60
  • Run a query that accesses the first three partitions.
  • Check if direct path reads were performed and how many
With this test I’d like to see if Oracle will utilize the buffer cache when the segment data is cached and the number of blocks in the partition is greater than _small_table_threshold.


SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

no rows selected


At this point, PART_1 and PART_2 partitions should be entirely in the buffer cache. If you want, you could query X$KCBOQH to confirm this (from a different session logged in as SYS).


SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66


As expected, both partitions are in the buffer cache.

Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:


SQL> alter session set "_small_table_threshold"=60;

Session altered.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

alter session set events '10046 trace name context off';

SQL> @sese direct

SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
9 76 STAT.consistent gets direct 65
9 81 STAT.physical reads direct 65
9 380 STAT.table scans (direct read) 1


Here they are: 65 direct path reads and one table scan (direct read), which means one of the partitions was scanned using direct path reads. Which one? Yes, you are right, the one that is not in the buffer cache (PART_3 in this example).

If you query X$KCBOQH again you can see that only one block of PART_3 is in the cache. That is the segment header block.


SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 1 <===


This means that when _direct_read_decision_statistics_driven is set to FALSE, in 11.2.0.3, Oracle uses a totally different calculation compared to the one used when the parameter is set to TRUE (see my earlier post).

Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules as described in Alex Fatkulin’s blogpost. That is, if any of the following is true, Oracle will scan the data in the buffer cache, otherwise direct path reads will be performed: 
  •  the number of blocks in the segment is less than or equal to _small_table_threshold
  •  at least 50% of the segment data blocks are in the buffer cache
  •  at least 25% of the data blocks are dirty 
The conclusion so far is that in 11.2.0.3, you may observe different behavior for queries that access table partitions using FTS if you decide to change the _direct_read_decision_statistics_driven parameter.

I will stop here. I ran the same tests against 11.2.0.4 and 12.1.0.1 and noticed some differences in the behavior compared to the one I just wrote about (11.2.0.3). I will post these results in the next few days.

Stay tuned...



SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 1

Galo Balda's Blog - Wed, 2014-04-09 23:45

In my previous post, I showed how to clone a GitHub repository using SQL Developer. In this post I’m going to show how to synchronize the remote and local repositories after the remote gets modified.

Here I use GitHub to commit a file called sp_test_git.pls.  You can create files by clicking on the icon the red arrow is pointing to.

new_file

The content of the file is a PL/SQL procedure that prints a message.

file_content

At this point, the remote repository and the local repository are out of sync. The first thing that you may want to do before modifying any repository, is to make sure that you have the most current version of it so that it includes the changes made by other developers. Let’s synchronize remote and local.

Make sure you open the Versions window. Go to the main menu click on Team -> Versions.

versions

Open the Local branch and click on master, then go to the main menu and click on Team -> Git -> Fetch to open the “Fetch from Git” wizard. Fetching a repository copies changes from the remote repository into your local system, without modifying any of your current branches. Once you have fetched the changes, you can merge them into your branches or simply view them. We can see the changes in the Branch Compare window by going to the main menu and clicking on Team -> Git -> Branch Compare.

branch_compare

 Branch Compare is showing that sp_test_git.pls has been fetched from the remote master branch. We can right click on this entry and select compare to see the differences.

compare

The window on the left displays the content of the fetched file and the window on the right displays the content of the same file in the local repository. In this case the right window is empty because this is a brand new file that doesn’t exist locally. Let’s accept the changes and merge them into the local repository. We go to the Branch Compare window, right click on the entry, select merge and click on the “Ok” button.

merge

Now the changes should have been applied to the local repository.

local_update

We can go to the path where the local repository is located and confirm that sp_test_git.pls is there.

 

 


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control
Categories: DBA Blogs

Oracle Application Express 4.2.5 now available

Joel Kallman - Wed, 2014-04-09 14:50
Oracle Application Express 4.2.5 is now released and available for download.  If you wish to download the full release of Oracle Application Express 4.2.5, you can get it from the Downloads page on OTN.  If you have Oracle Application Express 4.2, 4.2.1, 4.2.2, 4.2.3 or 4.2.4 already installed, then you need to download the APEX 4.2.5 patch set from My Oracle Support.  Look up patch number 17966818.

As is stated in the patch set note that accompanies the Oracle Application Express 4.2.5 patch set:
  • If you have Oracle Application Express release 4.2, 4.2.1, 4.2.2, 4.2.3 or 4.2.4 installed, download the Oracle Application Express 4.2.5 patch set from My Oracle Support and apply it.  Remember - patch number 17966818.
  • If you have Oracle Application Express release 4.1.1 or earlier installed (including Oracle HTML DB release 1.5), download and install the entire Oracle Application Express 4.2.5 release from the Oracle Technology Network (OTN).
  • If you do not have Oracle Application Express installed, download and install the entire Oracle Application Express 4.2.5 release from the Oracle Technology Network (OTN).
As usual, there are a large number of issues corrected in the Application Express 4.2.5 patch set.  You can see the full list in the patch set note.

Some changes in the Oracle Application Express 4.2.5 patch set:
  1. A number of bug fixes and functionality additions to many of the Packaged Applications.
  2. One new packaged application - Live Poll.  This was the creation of Mike Hichwa.  Live Poll is intended for real-time, very brief polling (in contrast to a formal survey, which can be created and administered via Survey Builder).
  3. One new sample application - the Sample Geolocation Showcase, created by Oracle's Carsten Czarski, who did a masterful job in demonstrating how Oracle's spatial capabilities (via Oracle Locator) can be easily exploited in an Oracle Application Express application.  Try it for yourself today on apex.oracle.com!
  4. A handful of bug fixes in the underlying Application Express engine and APIs.

APEX 4.2.5 should be the end of the line for Oracle Application Express 4.2.x.

On Error Messages

Chen Shapira - Wed, 2014-04-09 14:01

Here’s a pet peeve of mine: customers who don’t read the error messages. The usual symptom is a belief that there is just one error, “Doesn’t work”, and that all forms of “doesn’t work” are the same. So if you tried something, got an error, you changed something and you are still getting an error, nothing changed.

I hope everyone who reads this blog understands why this behavior makes any troubleshooting nearly impossible. So I won’t bother to explain why I find this so annoying and so self-defeating. Instead, I’ll explain what we, as developers, can do to improve the situation a bit. (OMG, did I just refer to myself as a developer? I do write code that is then used by customers, so I may as well take responsibility for it.)

Here’s what I see as main reasons people don’t read error messages:

  1. The error message is so long that they don’t know where to start reading. Errors with multiple Java stack dumps are especially fun. Stack traces are useful only to people who look at the code, so while it’s important to get them (for support), in most cases your users don’t need to see all that very specific information.
  2. Many different errors lead to the same message. The error message simply doesn’t indicate what the error may be, because it can be one of many different things. I think Kerberos is the worst offender here, so many failures look identical. If this happens very often, you tune out the error message.
  3. The error is so technical and cryptic that it gives you no clue on where to start troubleshooting.  “Table not Found” is clear. “Call to localhost failed on local exception” is not.

I spend a lot of time explaining to my customers “When <app X> says <this> it means that <misconfiguration> happened and you should <solution>”.

To get users to read error messages, I think error messages should be:

  1. Short. Single line or less.
  2. Clear. As much as possible, explain what went wrong in terms your users should understand.
  3. Actionable. There should be one or two actions that the user should take to either resolve the issue or gather enough information to deduce what happened.

I think Oracle are doing a pretty good job of it. Every one of their errors has an ID number, a short description, an explanation and a proposed solution. See here for example: http://docs.oracle.com/cd/B28359_01/server.111/b28278/e2100.htm#ORA-02140

If we don’t make our errors short, clear and actionable – we shouldn’t be surprised when our users simply ignore them and then complain that our app is impossible to use (or worse – don’t complain, but also don’t use our app).

 

 

 


Categories: DBA Blogs
