Feed aggregator

ORCLAPEX NOVA Update - Columbus Brings It

Scott Spendolini - Mon, 2014-05-19 05:31

For the upcoming inaugural ORCLAPEX NOVA MeetUp on May 29th, not only will we have Mike Hichwa, Shakeeb Rahman and David Gale from the Reston-based Oracle APEX development team, but the entire Columbus, OH-based APEX team will be in attendance as well: both Joel Kallman and Jason Straub will be in town and have RSVP’ed to the MeetUp!

Outside of major conferences such as KScope or OpenWorld, no other public forum gathers this level of APEX expertise from the team that develops the product!  So what are you waiting for?  Join the rest of us who have already RSVP’ed to this event: it’s 100% free, and you’re sure to learn a bunch about APEX 5.0 and other exciting happenings in the Database Development world at Oracle.

Note: you have to be a member of MeetUp (free to join) and RSVP to the event (also free) in order to attend, as a list of attendees needs to be provided to Oracle the day before the event.

Interesting info-graphics on Data-center / DB-Manageability

Pankaj Chandiramani - Mon, 2014-05-19 05:21


Categories: DBA Blogs

Openstack with Oracle Linux and Oracle VM

Wim Coekaerts - Fri, 2014-05-16 13:48
The OpenStack Summit has been an exciting event. We announced the Oracle OpenStack Distribution with support for Oracle Linux and Oracle VM, and support included with Oracle Linux and Oracle VM Premier Support at no additional cost. The announcement was well received by our customers and partners. We’re pleased to continue the Oracle tradition of translating our enterprise experience into community contributions as we’ve done with Linux and Xen. Oracle is committed to ensuring choice for both our partners and customers.

A preview of the Oracle OpenStack distribution (Havana) is now available on oracle.com for Oracle Linux (controller + compute) and Oracle VM (compute). We will follow this up with the production (GA) release in the next several months, including an update to Icehouse and later Juno. (whitepaper)

An OpenStack distribution contains several components that can be grouped into two major buckets: (a) controller components, such as keystone, horizon, glance, cinder, ...; and (b) compute components, such as nova and neutron. We provide support for the controller components on top of Oracle Linux, as part of Oracle Linux Premier Support. We provide support for the compute components on top of either Oracle Linux or Oracle VM (as part of Premier Support for both products).

By adding the Oracle OpenStack Distribution to Oracle Linux and Oracle VM, we can provide integrated support for all components in the stack including applications, database, middleware, guest OS, host OS, virtualization, and OpenStack – plus servers and storage. Our experience attacking the world’s toughest enterprise workloads means we focus on OpenStack stability, availability, performance, debugging and diagnostics. Oracle OpenStack customers and partners can immediately benefit from advanced features like Ksplice and DTrace from Oracle Linux and the hardening, testing, performance and stability of Oracle VM.

If you have chosen an OpenStack distribution other than Oracle’s, rest assured. Oracle will not attempt to force you to choose our OpenStack distribution by withholding support; we will provide the same high quality Oracle Linux and Oracle VM support no matter which OpenStack distribution you choose.

Furthermore, Oracle will continue to collaborate with Oracle’s OpenStack partners validating with Oracle Linux and Oracle VM. Our goal remains the same: jointly deliver great solutions and support experience for our mutual customers. We also look forward to working with other vendors to certify networking, storage, hypervisor and other plugins into the Oracle OpenStack Distribution.

Finally, we plan to follow a development model similar to the approach we use with Linux and the Unbreakable Enterprise Kernel. Our development work is focused on contributing upstream to the OpenStack community and we will pick up new releases of OpenStack after testing and validation.

It is an exciting time for OpenStack developers and users. We are thrilled that Oracle and our customers are part of it!

UnifiedPush Server 0.10.3 released

Matthias Wessendorf - Fri, 2014-05-16 05:26

The AeroGear UnifiedPush Server 0.10.3 was released to bintray and OpenShift online!

The release contains a few improvements:

  • Improved exception handling for Google Cloud Messaging
  • Generated Code Snippets match the simplified API of our Apache Cordova Push Plugin (0.5.0)
  • Generated Code Snippets for SimplePush match our JavaScript 1.5 release

For feedback, please join our mailing list! We are happy to help.

 

Have fun!

 


A good use-case for Oracle Ksplice

Wim Coekaerts - Thu, 2014-05-15 12:34
One of the advantages of Oracle Ksplice is that you can stay on a given kernel version for a very long time. We provide security updates through our Ksplice technology for all the various kernels released, so there is no need for a reboot and no need to install a newer kernel version that typically also contains new drivers or even new features. Zero downtime, yet you are current. Ksplice updates are always based on critical bugfixes or security fixes, things you really want to apply. We do not use Ksplice to provide new driver updates or new features; it is purely focused on those patches that you really want to apply to your environment without downtime and without the risk of change.

The typical model for providing kernel errata (security/critical fixes) is to release a newer version of the latest kernel in a "dot dot" release. For instance, on Oracle Linux 6, if the current latest "Red Hat Compatible Kernel" is 2.6.32-431.1.2 and a security issue gets fixed, there will be a 2.6.32-431.3.1 (or so). The sysadmin then has to install the new kernel and reboot the server(s) in order for that fix to become active. These "dot dot" releases typically contain only security fixes or critical bugfixes, so while a reboot is annoying and can have a significant time impact, the actual updates are very specific.
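
To make the traditional cycle concrete, here is a sketch of what it looks like on an Oracle Linux 6 box (the kernel versions are the ones from the example above; exact package names and output will vary):

# rpm -q kernel
kernel-2.6.32-431.1.2.el6.x86_64
# yum -y update kernel
# reboot
# uname -r
2.6.32-431.3.1.el6.x86_64

Every security fix means another install-and-reboot cycle like this one.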

When updated versions of the OS are released (such as OL6 update 1, OL6 update 2, ...), however, the change in the kernel can be more significant. For instance, look at the lifecycle of Oracle Linux 6 with the RHCK versions: OL6 GA shipped with kernel 2.6.32-71, update 1 with 2.6.32-131, update 2 with 2.6.32-220, update 3 with 2.6.32-279, update 4 with 2.6.32-358, and update 5 with 2.6.32-431. Each of these kernels has pretty significant changes. Aside from carrying forward the security fixes and critical bugfixes, they typically also contain new device drivers and new features backported into older kernels. In fact, if you look at the changelog of the RHCKs, you will see features from kernels as current as 3.x backported into 2.6.32.

In this case, going from one version to another is a bigger deal for customers that have a very conservative upgrade policy. However, to be current with security updates, one typically has to go to a newer version in order to get the errata. Security fixes are not backported to all older versions by default; while some vendors have a support option where they will support one or two other kernel versions, it is relatively selective.

With Ksplice, however, we make the security/critical fix errata available for all the various kernels, not just one or two selected versions. So you can be on any of these kernels and, without the need for a reboot, have the fixes available. That's choice and flexibility. It reduces the risk of upgrading to newer kernels to get a fix, it reduces downtime to zero, and it increases the security of your servers.
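
By contrast, here is a minimal sketch of the rebootless flow, assuming the Ksplice Uptrack client is installed and the system is registered (uptrack-upgrade and uptrack-uname are the standard Uptrack tools; output abbreviated):

# uname -r
2.6.32-431.1.2.el6.x86_64
# uptrack-upgrade -y
# uptrack-uname -r
2.6.32-431.3.1.el6.x86_64

Note how uname -r still reports the installed kernel, while uptrack-uname -r reports the effective kernel, i.e. the fix level the running system is actually at.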

By the way, 2.6.32-71 was released on 03-Jan-2011. Since then there were 45 kernels released (RHCK) with vulnerability fixes and critical fixes, so if you wanted to remain current, that would have resulted in 44 reboots for each server since 2011 (3.5 years). With Oracle Ksplice, you could still be running that 2.6.32-71 kernel from January 2011, without any reboot, and be current with your CVEs. Imagine having hundreds, if not thousands, of servers... time saved, cost saved...

To give you a concrete example, here is a list of all the different kernel versions (RHCK) for Oracle Linux 6:

kernel-2.6.32-71
kernel-2.6.32-71.14.1
kernel-2.6.32-71.18.1
kernel-2.6.32-71.18.2
kernel-2.6.32-71.24.1
kernel-2.6.32-71.29.1
kernel-2.6.32-131.0.15
kernel-2.6.32-131.2.1
kernel-2.6.32-131.4.1
kernel-2.6.32-131.6.1
kernel-2.6.32-131.12.1
kernel-2.6.32-131.17.1
kernel-2.6.32-131.21.1
kernel-2.6.32-220.2.1
kernel-2.6.32-220.4.1
kernel-2.6.32-220.4.2
kernel-2.6.32-220.7.1
kernel-2.6.32-220.13.1
kernel-2.6.32-220.17.1
kernel-2.6.32-220.23.1
kernel-2.6.32-220
kernel-2.6.32-279.1.1
kernel-2.6.32-279.2.1
kernel-2.6.32-279.5.1
kernel-2.6.32-279.5.2
kernel-2.6.32-279.9.1
kernel-2.6.32-279.11.1
kernel-2.6.32-279.14.1
kernel-2.6.32-279.19.1
kernel-2.6.32-279.22.1
kernel-2.6.32-279
kernel-2.6.32-358.0.1
kernel-2.6.32-358.2.1
kernel-2.6.32-358.6.1
kernel-2.6.32-358.6.2
kernel-2.6.32-358.11.1
kernel-2.6.32-358.14.1
kernel-2.6.32-358.18.1
kernel-2.6.32-358.23.2
kernel-2.6.32-358
kernel-2.6.32-431.1.2
kernel-2.6.32-431.3.1
kernel-2.6.32-431.5.1
kernel-2.6.32-431.11.2
kernel-2.6.32-431.17.1
kernel-2.6.32-431

With Oracle Linux and Ksplice you could be running -any- of the above kernel versions in your production environments; when a security vulnerability gets fixed, we will make a fix available for all of the above.

Here is a list of the latest Ksplice update packages for Oracle Linux 6 with RHCK; as you can see, all the kernels are there:

uptrack-updates-2.6.32-131.0.15.el6.x86_64.20140331-0
uptrack-updates-2.6.32-131.12.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-131.17.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-131.21.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-131.2.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-131.4.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-131.6.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.13.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.17.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.2.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.23.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.4.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.4.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.7.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-220.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.11.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.1.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.14.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.19.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.2.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.22.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.5.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.5.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.9.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-279.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.0.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.11.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.14.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.18.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.2.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.23.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.6.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.6.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-358.el6.x86_64.20140331-0
uptrack-updates-2.6.32-431.11.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-431.1.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-431.3.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-431.5.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-431.el6.x86_64.20140331-0
uptrack-updates-2.6.32-71.14.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-71.18.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-71.18.2.el6.x86_64.20140331-0
uptrack-updates-2.6.32-71.24.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-71.29.1.el6.x86_64.20140331-0
uptrack-updates-2.6.32-71.el6.x86_64.20140331-0
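
Note that each package name above embeds the exact kernel release it applies to, and uname -r returns that same string on a running system. So, as a sketch (assuming the Ksplice channel on ULN is enabled as a yum repository on the server), pulling in the update package that matches the running kernel can be scripted:

# yum -y install uptrack-updates-`uname -r`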

Unbreakable Linux Network APIs example

Wim Coekaerts - Thu, 2014-05-15 12:24
I posted a short blog entry about the recently released ULN APIs the other day, with a sample of how to call the different APIs. Here is a concrete example that uses the API to find a package in a channel and download it.

$ ./ulnget.py kernel-headers.2.6.32-71.29 ol6_x86_64_latest
Searching for 'kernel-headers.2.6.32-71.29' in channel 'ol6_x86_64_latest'

Logging in...
Logged in...
Retrieving all packages...
Found kernel-headers.2.6.32-71.29.1.el6
Getting package details...
Downloading https://uln.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/
kernel-headers-2.6.32-71.29.1.el6.x86_64.rpm...

Logged out...

The code for the above is pasted below; this is just a very simplistic example...

#!/usr/bin/python
try:
    import os
    import sys
    import getpass
    import datetime
    import xmlrpclib
    import urllib2

except ImportError, e:
    raise ImportError (str(e) + ': Module not found')

if len(sys.argv) != 3:
   print "Usage : ulnget.py [packagename] [channelname]"
   exit(1) 

search = str(sys.argv[1])
channelLabel = str(sys.argv[2])

print "Searching for '%s' in channel '%s'" % (search, channelLabel)

SERVER_URL = 'https://linux-update.oracle.com/rpc/api'

USERNAME = 'username'
PASSWORD = 'password'

# channelLabel = 'ol6_x86_64_latest'

client = xmlrpclib.Server(SERVER_URL)

print ""

# login
print "Logging in..."
sessionKey = client.auth.login(USERNAME,PASSWORD)
if len(sessionKey) != 43:
   print "Invalid sessionKey of length %d : '%s'" % (len(sessionKey), sessionKey)
   exit(1)

print "Logged in..." 

print "Retrieving all packages..."
packageList = client.channel.software.listAllPackages(sessionKey, channelLabel)

for package in packageList:
   packageName = '%s.%s-%s' % (package['package_name'],package['package_version']
      ,package['package_release'])
   if search in packageName:
      print "Found %s" % packageName
      pid = package['package_id']
      print "Getting package details..."
      packageDetail = client.packages.getDetails(sessionKey, pid)
      url = packageDetail['download_urls'][0]
      req = urllib2.Request(url,headers={'X-ULN-API-User-Key': sessionKey})
      try:
          print "Downloading %s..." % url
          response = urllib2.urlopen(req)
          contents = response.read()
          # save the downloaded rpm in the current directory
          local_file = open(os.path.basename(url), 'wb')
          local_file.write(contents)
          local_file.close()
      except urllib2.HTTPError, e:
          print
          print "HTTP error code  : %d" %e.code
      except Exception, e:
          print
          print str(e)

print ""

retval = client.auth.logout(sessionKey)
if retval == 1:
  print "Logged out..."
else:
  print "Failed to log out..."

BLOBs in the Cloud with APEX and AWS S3

Scott Spendolini - Wed, 2014-05-14 15:09
Overview

Recently, I was working with one of our customers and ran into a rather unique requirement and an uncommon constraint. The customer - Storm Petrel - has designed a grant management system called Tempest, built to aid local municipalities when applying for FEMA grants after a natural disaster.  As one can imagine, there is a lot of old-fashioned paperwork when it comes to managing such a thing.

Thus, the requirement called for the ability to upload and store scanned documents.  No OCR or anything like that, but rather invoices and receipts so that a paper trail of the work done and associated billing activity can be preserved.  For APEX, this can be achieved without breaking a sweat, as the declarative BLOB feature can easily upload a file and store it in a BLOB column of a table, complete with filename and MIME type.

However, the tablespace storage costs from the hosting company for the anticipated volume of documents were considerable: so much so that the cost would have to be factored into the price of the solution for each customer, making it more expensive and obviously less attractive.

My initial thought was to use Amazon’s S3 storage solution, since the cost of storing 1 GB of data for a month is literally 3 cents.  Data transfer prices are also ridiculously inexpensive, and from what I have seen via marketing e-mails, the price of this and many of Amazon’s other AWS services has been on a downward trend for some time.

The next challenge was to figure out how to get APEX integrated with S3.  I have seen some of the AWS API documentation, and while there are ample examples for Java, .NET and PHP, there is nothing at all for PL/SQL.  Fortunately, someone else has already done the heavy lifting here: Morten Braten & Jeffrey Kemp.

Morten’s Alexandria PL/SQL Library is an amazing open-source suite of PL/SQL utilities which provide a number of different services, such as document generation, data integration and security.  Jeff Kemp has a presentation on SlideShare that best covers the breadth of this utility.  You can also read about the latest release - 1.7 - on Morten’s blog here.  You owe it to yourself to check out this library whether or not you have any interest in AWS S3!

In this latest release of the library, Jeff Kemp has added a number of enhancements to the S3 integration piece of the framework, making it quite capable of managing files on S3 via a set of easy to use PL/SQL APIs.  And these APIs can be easily & securely integrated into APEX and called from there.  He even created a brief presentation that describes the S3 APIs.

Configuring AWS Users and Groups

So let’s get down to it.  How does all of this work with APEX?  First of all, you will need to create an AWS account.  You can do this by navigating to http://aws.amazon.com/ and clicking on Sign Up.  The wizard will guide you through the account creation process and collect any relevant information that it needs.  Please note that you will need to provide a valid credit card in order to create an AWS account, as AWS services are not free, depending on which ones you choose to use.

Once the AWS account is created, the first thing that you should consider doing is creating a new user that will be used to manage the S3 service.  The credentials that you use when logging into AWS are similar to root, as you will be able to access and manage any of the many AWS services.  When deploying only S3, it’s best to create a user that can do only that.

To create a new user:

1) Click on the Users tab

2) Click Create New User

3) Enter the User Name(s) and click Create.  Be sure that Generate an access key for each User is checked.

Once you click Create, another popup region will be displayed.  Do not close this window!  Rather, click on Show User Security Credentials to display the Access Key ID and Secret Access Key ID.  Think of the Access Key ID as a username and the Secret Access Key ID as a password, and then treat them as such.

For ease of use, you may want to click Download Credentials and save your keys to your PC.

The next step is to create a Group that your new user will be associated with.  The Group in AWS is used to map a user or users to a set of permissions.  In this case, we will need to allow our user to have full access to S3, so we will have to ensure that the permissions allow for this.  In your environment, you may not want to grant as many privileges to a single user.

To create a new group:

1) Click on the Groups tab

2) Click on Create New Group

3) Enter the Group Name, such as S3-Admin, and click Continue

The next few steps may vary depending on which privileges you want to assign to this group.  The example will assume that all S3 privileges are to be assigned.

4) Select Policy Generator, and then click on the Select button.

5) Set the AWS Service drop down to Amazon S3.

6) Select All Actions (*) for the Actions drop down.

7) Enter arn:aws:s3:::* for the Amazon Resource Name (ARN) and click Add Statement.  This will allow access to any S3 resource.  Alternatively, to create a more restricted group, a bucket name could have been specified here, limiting the users in this group to only be able to manage that specific bucket.

8) Click Continue.

9) Optionally rename the Policy Name to something a little less cryptic and click Continue.

10) Click Create group to create the group.

Next, we’ll add our user to the newly created group.

1) Select the group that was just created by checking the associated checkbox.

2) Under the Users tab, click Add Users to Group.

3) Select the user that you want to add and then click Add Users.

The user should now be associated with the group.

Select the Permissions tab to verify that the appropriate policy is associated with the user.

At this point, the user management portion of AWS is complete.

Configuring AWS S3

The next step is to configure the S3 portion. To do this, navigate to the S3 Dashboard:

1) Click on the Services tab at the top of the page.

2) Select S3.

You should see the S3 dashboard now:

S3 uses “buckets” to organize files.  A bucket is just another word for a folder.  Each of these buckets has a number of different properties that can be configured, making the storage and security options quite extensible.  While there is a limit of 100 buckets per AWS account, buckets can contain folders, and when using the AWS APIs, it’s fairly easy to provide a layer of security based on a file’s location within a bucket.

Let’s start out by creating a bucket and setting up some of the options.

1) Click on Create Bucket.

2) Enter a Bucket Name and select the Region closest to your location and click Create.  One thing to note - the Bucket Name must be unique across ALL of AWS.  So don’t even try demo, test or anything like that.

3) Once your bucket is created, click on the Properties button.

I’m not going to go through all of the properties of a bucket in detail, as there are plenty of other places that already have that covered.  Fortunately, for our purposes, the default settings on the bucket should suffice.  It is worth taking a look at these settings, as many of them - such as Lifecycle and Versioning - can definitely come in handy and reduce your development and storage costs.

Next, let’s add our first file to the bucket.  To do this:

1) Click on the Bucket Name.

2) Click on the Upload button.

3) A dialog box will appear.  To add a file or files, click Add Files.

4) Using the File Upload window, select a file that you wish to upload.  Select it and click Open.

5) Click Start Upload to initiate the upload process.
Depending on your file size, the transfer will take anywhere from a second to several minutes.  Once it’s complete, your file should be visible in the left side of the dashboard.

6) Click on the recently uploaded file.

7) Click on the Properties button.

Notice that there is a link to the file displayed in the Properties window.  Click on that link.  You’re probably looking at something like this now:

That is because by default, all files uploaded to S3 will be secured.  You will need to call an AWS API to generate a special link in order to access them.  This is important for a couple of reasons.  First off, you clearly don’t want just anyone accessing your files on S3.  Second, even if securing files is not a major concern, keep in mind that S3 also charges for data transfer.  Thus, if you put a large public file on S3, and word gets out as to its location, charges can quickly add up as many people access that file.  Fortunately, securely accessing files on S3 from APEX is a breeze with the Alexandria PL/SQL libraries.  More on that shortly.

If you want to preview any file in S3, simply right-click on it and select Open or Download.  This is also how you rename and delete files in S3.  And only authorized AWS S3 users will be able to perform these tasks, as the S3 Dashboard requires a valid AWS account.

Installing AWS S3 PL/SQL Libraries

The next thing that needs to be installed is the Alexandria PL/SQL library - or rather just the Amazon S3 portion of it.  There is no need to install the rest of the components, especially if they are not going to be used.  For ease of use, these objects can be installed directly into your APEX parse-as schema.  However, they can also be installed into a centralized schema and then made available to other schemas that need to use them.

There are eight files that need to be installed, as well as a one-off command.

1) First, connect to your APEX parse-as schema and run the following script:
create type t_str_array as table of varchar2(4000)
/
2) Next, the HTTP UTIL & DEBUG packages need to be installed.  These allow the database to retrieve files from S3 and provide a debugging infrastructure.

To install these packages, run the following four scripts as your APEX parse-as schema:
/plsql-utils-v170/ora/http_util_pkg.pks
/plsql-utils-v170/ora/http_util_pkg.pkb
/plsql-utils-v170/ora/debug_pkg.pks
/plsql-utils-v170/ora/debug_pkg.pkb
Before running the S3 scripts, the package body of AMAZON_AWS_AUTH_PKG needs to be modified so that your AWS credentials are embedded in it.

3) Edit the file amazon_aws_auth_pkg.pkb in a text editor.

4) Near the top of the file are three global variable declarations: g_aws_id, g_aws_key and g_gmt_offset.  Set the values of these three variables to the Access Key ID, Secret Key ID and GMT offset.  These values were displayed and/or downloaded when you created your AWS user.  If you did not record these, you will have to create a new pair back in the User Management dashboard.
Here’s an example of what the changes will look like, with the values obfuscated:
g_aws_id     varchar2(20) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Access Key ID
g_aws_key varchar2(40) := 'XXXXXXXXXXXXXXXXXXX'; -- AWS Secret Key
g_gmt_offset number := 4; -- your timezone GMT adjustment (EST = 4, CST = 5, MST = 6, PST = 7) 
It is also possible to store these values in a more secure place, such as an encrypted column in a table and then fetch them as they are needed.

5) Once the changes to amazon_aws_auth_pkg.pkb are made, save the file.

6) Next, run the following four SQL scripts in the order below as your APEX parse-as schema:
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pks
/plsql-utils-v170/ora/amazon_aws_auth_pkg.pkb
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pks
/plsql-utils-v170/ora/amazon_aws_s3_pkg.pkb
If there are no errors, then we’re almost ready to test and see if we can view S3 via APEX!

IMPORTANT NOTE: The S3 packages in their current form do not offer support for SSL.  This is a big deal, since any request made to S3 will be done in the clear, putting the contents of your files at risk as they are transferred to and from S3.  There is a proposal on the Alexandria Issues Page that details this deficiency.

I have made some minor alterations to the AMAZON_AWS_S3_PKG package which accommodate using SSL and Oracle Wallet when calling S3. You can download it from here.  When using this version, there are three additional package variables that need to be altered:
 
g_orcl_wallet_path constant varchar2(255) := 'file:/path_to_dir_with_oracle_wallet';
g_orcl_wallet_pw constant varchar2(255) := 'Oracle Wallet Password';
g_aws_url_http constant varchar2(255) := 'https://'; -- Set to either http:// or https:// 
Additionally, Oracle Wallet will need to be configured with the certificate from s3.amazonaws.com.  Jeff Hunter has an excellent and easy-to-follow post here that will guide you through configuring Oracle Wallet.
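
For reference, the wallet setup typically boils down to creating an auto-login wallet and importing the s3.amazonaws.com certificate chain into it with orapki. A sketch, where the wallet path matches g_orcl_wallet_path above, and the password and exported certificate file name (s3_amazonaws_com.pem) are placeholders:

$ orapki wallet create -wallet /path_to_dir_with_oracle_wallet -pwd WalletPasswd123 -auto_login
$ orapki wallet add -wallet /path_to_dir_with_oracle_wallet -pwd WalletPasswd123 -trusted_cert -cert s3_amazonaws_com.pem
$ orapki wallet display -wallet /path_to_dir_with_oracle_wallet
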
Configuring the ACL

Starting with Oracle 11g, an Access Control List - or ACL - restricts which outbound network connections from the database are allowed.  By default, none are.  Thus, we will need to configure the ACL to allow our schema to access Amazon’s servers.
 
To create the ACL, run the following script as SYS, replacing the values below with your specific ones:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL
  (
    acl         => 'apex-s3.xml',
    description => 'ACL for APEX-S3 to access Amazon S3',
    principal   => 'APEX_S3',
    is_grant    => TRUE,
    privilege   => 'connect'
  );
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL
  (
    acl         => 'apex-s3.xml',
    host        => '*.amazonaws.com',
    lower_port  => 80,
    upper_port  => 80
  );
  COMMIT;
END;
/
IMPORTANT NOTE: When using SSL, the lower_port and upper_port values should both be set to 443.
Integration with APEX

At this point, the integration between Oracle and S3 should be complete.  We can run a simple test to verify this.  From the SQL Workshop, enter and execute the following SQL, replacing apex-s3-integration with the name of your bucket:
SELECT * FROM table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))

This query will return all files that are stored in the bucket apex-s3-integration, as shown below:


If you see the file that you previously uploaded, then everything is working as it should!
 
Now that the configuration is complete, the next step is to build the actual integration into APEX so that we can upload and download BLOBs to/from S3.  In this example, we will build a simple APEX form & report that allows the user to upload, download and delete content from an S3 bucket.  This example uses APEX 4.2.5, but it should work in previous releases, as it uses nothing too version-specific.

To create a simple APEX & S3 Integration application:
 
1) Create a new Database application.  Be sure to set Create Options to Include Application Home Page, and select any theme that you wish.  This example will use Theme 25.
 
2) Edit Page 1 of your application and create a new Interactive Report called “S3 Files”.  Use the following SQL as the source of the report, replacing apex-s3-integration with your bucket name:
SELECT
  key,
  size_bytes,
  last_modified,
  amazon_aws_s3_pkg.get_download_url
  (
    p_bucket_name => 'apex-s3-integration',
    p_key         => key,
    p_expiry_date => SYSDATE + 1
  ) download,
  key delete_doc
FROM
  table (amazon_aws_s3_pkg.get_object_tab(p_bucket_name => 'apex-s3-integration'))
The SQL calls the pipelined function get_object_tab from the S3 package and returns all files in the corresponding bucket.  Since the AWS credentials are embedded in the S3 package, they do not need to be entered here.
 
Since the bucket that we created is not open to the public, all access to it is restricted.  Only when a specific Access Key ID, Signature and Expiration Date are passed in will the file be accessible.  To do this, we need to call another API to generate a valid link with that information included.  The get_download_url API takes in three parameters - bucket_name, key and expiry date.  The first two are self-explanatory, whereas the third will determine how long the specific download link will be good for.  In our example, it is set to 1 day, but it can be set to any duration of time.  
 
Once the report is created, run your application and log in. The first page will display something similar to this:
 
Upon closer inspection of the Download link, the AWSAccessKeyId, Expires and Signature parameters can be seen.  These were automatically generated from the S3 API, and this link can be used to download the corresponding file.  
 
3) Edit the KEY column of the Interactive Report.  Enter #KEY# for the Link Text, set the Target to URL and enter #DOWNLOAD# for the URL and click Apply Changes.
 
Create a Download Link on the KEY Column
 
Next, since we no longer need to display the Download column in the report, it should be set to hidden.
 
4) Edit the Report Attributes of the Interactive Report.
 
5) Set Display Text As for the Download column to Hidden and click Apply Changes.
 
Set the Download column to Hidden
 
When running Page 1 now, the name of the file should be a link.  Clicking on the link should either display or download the file from S3, depending on what type of file was uploaded.
 
To delete an item, we will have to call the S3 API delete_object and pass in the corresponding bucket name and key of the file to be deleted.  We can handle this easily via a column link that calls a Dynamic Action.
 
Before we get started, we’ll need to create a hidden item on Page 1 that will store the file key.
 
6) Create a new Hidden Item on Page 1 called P1_KEY.
 
7) Make sure that the Value Protected attribute is set to Yes, take all the defaults on the next page, and create the item.
 
Next, let’s edit the DELETE_DOC column and set the link that will trigger a Dynamic Action which will in turn, delete the document from S3.
 
8) Edit the DELETE_DOC column.
 
9) Enter Delete for the Link Text and enter the following for the Link Attributes: id="#KEY#" class="deleteDoc".  Next, set the Target to URL, enter # in the URL field and click Apply Changes.
 
Set the Link Attributes of the DELETE_DOC Column
 
Next, let’s create the Dynamic Action.
 
10) Create a new Dynamic Action called Delete S3 Doc.  It should be set to fire when the Event is a Click, and when the jQuery Selector is equal to .deleteDoc
 
Definition of the Delete S3 Doc Dynamic Action
 
11) Next, the first True action should be set to Confirm, and the message “Are you sure that you want to delete this file?” should be entered into the Text region.
 
Add the Confirm True Action
 
12) Click Next and then click Create Dynamic Action.
 
At this point, if you click on the Delete link, a confirmation message should be displayed.  Clicking OK will have no impact yet, as additional True actions need to be added to the Dynamic Action first.  Let’s create those True actions now.
 
13) Expand the Delete S3 Doc Dynamic Action and right-click on True and select Create.
 
14) Set the Action to Set Value, uncheck Fire on Page Load, and enter the following for JavaScript Expression: this.triggeringElement.id;  Next, set the Selection Type to Item(s), enter P1_KEY for the Item(s) and click Create.
 
Create the Set Value True Action
 
15) Create another True action by clicking on Add True Action.
 
16) Set the Action to Execute PL/SQL Code and enter the following for PL/SQL Code, replacing apex-s3-integration with the name of your bucket:
amazon_aws_s3_pkg.delete_object 
  (
  p_bucket_name => 'apex-s3-integration',
  p_key => :P1_KEY
  );

Enter P1_KEY for Page Items to Submit and click Create.

Create Execute PL/SQL Code True Action

 
17) Create another True action by clicking on Add True Action.
 
18) Set the Action to Refresh, set the Selection Type to Region, and set the Region to S3 Files (20) and click Create.
 
Create a Refresh True Action
 
At this point, if you click on the Delete link for a file in S3, you should be prompted to confirm the deletion, and if you click OK, the file will be removed from S3 and the Interactive Report refreshed.  A quick check of your AWS S3 Dashboard should show that the file is, in fact, deleted.
 
The last step is to create a page that allows the user to upload files to S3.  This can easily be done with a File Upload APEX item and a simple call to the S3 API new_object.
 
19) Create a new blank page called Upload S3 File and set the Page Number to 2.  Re-use the existing tab set and tab and optionally add a breadcrumb to the page and set the parent to the Home page.
 
20) On Page 2, create a new HTML region called Upload S3 File.
 
Next, we’ll add a File Browse item to the page.  This item will use APEX’s internal table WWV_FLOW_FILES to temporarily store the BLOB file before uploading it to S3.
 
21) In the new region, create a File Browse item called P2_DOC and click Next.
 
22) Set the Label to Select File to Upload and the Template to Required (Horizontal - Right Aligned) and click Next.
 
23) Set Value Required to Yes and Storage Type to Table WWV_FLOW_FILES and click Next.
 
24) Click Create Item.
 
Next, we’ll add a button that will submit the page.
 
25) Create a Region Button in the Upload S3 File region.
 
26) Set the Button Name to UPLOAD, set the Label to Upload, set the Button Style to Template Based Button and set the Button Template to Button and click Next.
 
27) Set the Position to Region Template Position #CREATE# and click Create Button.
 
Next, the PL/SQL Process that will call the S3 API needs to be created.
 
28) Create a new Page Process in the Page Processing region.
 
29) Select PL/SQL and click Next.
 
30) Enter Upload File to S3 for the Name and click Next.
 
31) Enter the following for the region Enter PL/SQL Page Process, replacing apex-s3-integration with the name of your bucket and click Next.
FOR x IN (SELECT * FROM wwv_flow_files WHERE name = :P2_DOC)
LOOP
  -- Create the file in S3
  amazon_aws_s3_pkg.new_object
  (
    p_bucket_name  => 'apex-s3-integration',
    p_key          => x.filename,
    p_object       => x.blob_content,
    p_content_type => x.mime_type
  );
END LOOP;
-- Remove the doc from WWV_FLOW_FILES
DELETE FROM wwv_flow_files WHERE name = :P2_DOC;
The PL/SQL above will loop through the table WWV_FLOW_FILES for the document just uploaded and pass the filename, MIME type and file itself to the new_object S3 API, which in turn will upload the file to S3.  The last line will ensure that the file is immediately removed from the WWV_FLOW_FILES table.
 
32) Enter File Uploaded to S3 for the Success Message and click Next.
 
33) Set When Button Pressed to UPLOAD (Upload) and click Create Process.
 
One more thing needs to be added - a branch that returns to Page 1 after the file is uploaded.
 
34) In the Page Processing region, expand the After Processing node and right-click on Branches and select Create.
 
35) Enter Branch to Page 1 for the Name and click Next.
 
36) Enter 1 for Page, check Include Process Success Message and click Next.
 
37) Set When Button Pressed to UPLOAD (Upload) and click Create Branch.
 
Now, you should be able to upload any file to your S3 bucket from APEX!
 
One more small addition needs to be made to our example application.  We need a way to get to Page 2 from Page 1.  A simple region button should do the trick.
 
38) Edit Page 1 of your application.
 
39) Create a Region Button in the S3 Files region.
 
40) Set the Button Name to UPLOAD, set the Label to Upload, set the Button Style to Template Based Button and set the Button Template to Button and click Next.
 
41) Set the Position to Right of the Interactive Report Search Bar and click Next.
 
42) Set the Action to Redirect to Page in this Application, enter 2 for Page and click Create Button.
 
That’s it!  You should now have a working APEX application that has the ability to upload, download and delete files from Amazon’s S3 service.
 
Finished Product
 
Please leave any questions or report any typos in the comments, and I’ll get to them as soon as time permits.

Tackling the challenge of provisioning databases in an agile datacenter

Pankaj Chandiramani - Wed, 2014-05-14 02:03

One of the key tasks that a DBA performs repeatedly is provisioning of databases, which also happens to be one of the top 10 Database Challenges as per the IOUG Survey.

Most of the challenge comes in the form of either a lack of standardization or a long and error-prone process. This is where Enterprise Manager 12c can help, by making provisioning a standardized process using profiles and lock-downs, plus role and access separation: a lead DBA can lock certain properties of a database (like character set, Oracle Home location or SGA) so that junior DBAs can't change them during provisioning. The image below describes the solution:



In Short:



  • It's fast

  • It's easy

  • And you have complete control over the lifecycle of your dev and production resources.


I actually wanted to show step-by-step details on how to provision an 11.2.0.4 RAC using the provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:


Patch and Provision in EM12c: #5 Provision a Real Application Cluster Database


Other Resources : 


Official Doc : http://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_db_overview.htm#CJAJCIDA


Screen Watch : https://apex.oracle.com/pls/apex/f?p=44785:24:112210352584821::NO:24:P24_CONTENT_ID%2CP24_PREV_PAGE:5776%2C1


Others : http://www.oracle.com/technetwork/oem/lifecycle-mgmt-495331.html?ssSourceSiteId=ocomen



Categories: DBA Blogs

brew install sqlplus

Dominic Delmolino - Tue, 2014-05-13 19:59

Gee, that didn’t work.

For those of you wondering about the title of this post, I’m referring to the brew package manager for Mac OS — a nice utility for installing Unix-like packages on Mac OS similar to how yum / apt-get can be used on Linux.

I particularly like the way brew uses /usr/local and symlinks for clean installations of software without messing up the standard Mac paths.

Unfortunately, there isn’t a brew “formula” for installing sqlplus and the instant client libraries (and probably never will be due to licensing restrictions), but we can come close using ideas from Oracle ACE Ronald Rood and his blog post Oracle Client 11gR2 (11.2.0.3) for Apple Mac OS X (Intel).

Go there now and read up through “unzipping the files” — after that, return here and we’ll see how to simulate a brew installation.

organize the software

mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/bin
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/lib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/jdbc/lib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/rdbms/jlib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/sqlplus/admin

Change to the instantclient_11_2 directory where the files were extracted, and execute the following commands to place them into our newly created directories:

mv ojdbc* /usr/local/Oracle/product/instantclient/11.2.0.4.0/jdbc/lib/
mv x*.jar /usr/local/Oracle/product/instantclient/11.2.0.4.0/rdbms/jlib/
mv glogin.sql /usr/local/Oracle/product/instantclient/11.2.0.4.0/sqlplus/admin/
mv *dylib* /usr/local/Oracle/product/instantclient/11.2.0.4.0/lib/
mv *README /usr/local/Oracle/product/instantclient/11.2.0.4.0/
mv * /usr/local/Oracle/product/instantclient/11.2.0.4.0/bin/

While these commands place the files where we want them, we’ll need to do a few more things to make them usable. If you’re using brew already, /usr/local/bin will be in your PATH and you won’t need to add it. We’ll mimic what brew does and symlink sqlplus into /usr/local/bin.

cd /usr/local/bin
ln -s ../Oracle/product/instantclient/11.2.0.4.0/bin/sqlplus sqlplus

This will put sqlplus on our path, but we still need to set the environment variables for things like ORACLE_BASE, ORACLE_HOME and the DYLD_LIBRARY_PATH. Ronald sets them manually and then adds them to his .bash_profile, but I wanted to mimic some of the brew packages and have a .sh file to set variables from /usr/local/share.
To do so, I created another directory underneath /usr/local/Oracle to hold my .sh file:

cd /usr/local/Oracle/product/instantclient/11.2.0.4.0
mkdir -p share/instantclient
cd /usr/local/share
ln -s ../Oracle/product/instantclient/11.2.0.4.0/share/instantclient/ instantclient

Now I can create an instantclient.sh file and place it in /usr/local/Oracle/product/instantclient/11.2.0.4.0/share/instantclient/ with the content I want in my environment.

$ cat /usr/local/share/instantclient/instantclient.sh 
export ORACLE_BASE=/usr/local/Oracle
export ORACLE_HOME=$ORACLE_BASE/product/instantclient/11.2.0.4.0
export DYLD_LIBRARY_PATH=$ORACLE_HOME/lib
export TNS_ADMIN=$ORACLE_BASE/admin/network

Once I have this file in place, I can edit my .bash_profile file and add the following line:

source /usr/local/share/instantclient/instantclient.sh

Open up a new Terminal window and voila! A working sqlplus installation that mimics a brew package install!
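
A quick sanity check that the symlink and environment landed in the right place (the version banner below is abbreviated and will vary with your instant client download):

$ which sqlplus
/usr/local/bin/sqlplus
$ sqlplus /nolog

SQL*Plus: Release 11.2.0.4.0 Production

SQL> exit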

Unbreakable Linux Network APIs available

Wim Coekaerts - Tue, 2014-05-13 15:37
Aside from the uln-channel tool that we recently released, we are now also supporting a number of webservices on ULN. A handful of useful APIs are available. Below is a simple little Python example that works out of the box on Oracle Linux 6 (when you have an account on ULN) and a description of the currently available APIs. Note that the Python code is very simplistic... I know, no exception handling; that wasn't the point ;)...

Additionally, the ULN integration with Spacewalk uses these APIs as well. See here.

APIs:

client.auth.login(username,password) returns sessionKey 
client.errata.listCves(sessionKey, advisory) returns cveList
client.errata.applicableToChannels(sessionKey, advisory) returns channelList
client.channel.software.listLatestPackages(sessionKey, channelLabel) returns packageList
client.channel.software.listErrata(sessionKey, channelLabel) returns errataList
client.packages.listProvidingErrata(sessionKey, pid) returns errataList
client.channel.listSoftwareChannels(sessionKey) returns channelList
client.channel.software.listAllPackages(sessionKey, channelLabel) returns packageList
client.errata.listPackages(sessionKey, advisory) returns packageList
client.errata.getDetails(sessionKey, advisory) returns errataDetail
client.channel.software.getDetails(sessionKey, channelLabel) returns channelDetail
client.packages.getDetails(sessionKey, pid) returns packageDetail
client.auth.logout(sessionKey) returns retval

Sample output of the code:

$ ./sample.py
Login : client.auth.login(username,password) returns sessionKey
Logged in...

List CVEs for a particular advisory : client.errata.listCves(sessionKey, advisory) returns cveList
Example : CVEs for advisory 'ELSA-2013-1100' : ['CVE-2013-2231']

List channels applicable to advisory : client.errata.applicableToChannels(sessionKey, advisory) returns channelList
Example : Channels applicable to advisory 'ELSA-2013-1100' : [{'channel_name': 'Oracle Linux 6 Latest (i386)', 'channel_label': 'ol6_i386_latest', 'parent_channel_id': ' ', 'channel_id': 941}, {'channel_name': 'Oracle Linux 6 Latest (x86_64)', 'channel_label': 'ol6_x86_64_latest', 'parent_channel_id': ' ', 'channel_id': 944}, {'channel_name': 'Oracle Linux 6 Update 4 Patch (i386)', 'channel_label': 'ol6_u4_i386_patch', 'parent_channel_id': ' ', 'channel_id': 1642}, {'channel_name': 'Oracle Linux 6 Update 4 Patch (x86_64)', 'channel_label': 'ol6_u4_x86_64_patch', 'parent_channel_id': ' ', 'channel_id': 1644}]

List latest packages in a given channel : client.channel.software.listLatestPackages(sessionKey, channelLabel) returns packageList
Example : Packages for channel 'ol6_x86_64_latest' returns 6801 packages

List errata in a given channel : client.channel.software.listErrata(sessionKey, channelLabel) returns errataList
Example : Errata in channel 'ol6_x86_64_latest' returns 1403 errata

List errata for a given package : client.packages.listProvidingErrata(sessionKey, pid) returns errataList
Example :
[{'errata_update_date': '2011-06-08 00:00:00', 'errata_advisory_type': 'Security Advisory', 'errata_synopsis': 'subversion security update', 'errata_advisory': 'ELSA-2011-0862', 'errata_last_modified_date': '2011-06-08 00:00:00', 'errata_issue_date': '2011-06-08 00:00:00'}]

List software channels available : client.channel.listSoftwareChannels(sessionKey) returns channelList
Example : List of channels returns '253' channels

List all packages for a given channel : client.channel.software.listAllPackages(sessionKey, channelLabel) returns packageList
Example : All packages for channel 'ol6_x86_64_latest' returns 25310 packages

List packages for a given advisory : client.errata.listPackages(sessionKey, advisory) returns packageList
Example : Packages for advisory 'ELSA-2013-1100' returns 12 packages

Details for a specific advisory : client.errata.getDetails(sessionKey, advisory) returns errataDetail
Example :
{'errata_update_date': '7/22/13', 'errata_topic': ' ', 'errata_type': 'Security Advisory', 'errata_severity': 'Important', 'errata_notes': ' ', 'errata_synopsis': 'qemu-kvm security update', 'errata_references': ' ', 'errata_last_modified_date': '2013-07-22 00:00:00', 'errata_issue_date': '7/22/13', 'errata_description': '[qemu-kvm-0.12.1.2-2.355.el6_4.6]\n- kvm-qga-cast-to-int-for-DWORD-type.patch [bz#980758]\n- kvm-qga-remove-undefined-behavior-in-ga_install_service.patch [bz#980758]\n- kvm-qga-diagnostic-output-should-go-to-stderr.patch [bz#980758]\n- kvm-qa_install_service-nest-error-paths-more-idiomatically.patch [bz#980758]\n- kvm-qga-escape-cmdline-args-when-registering-win32-service.patch [bz#980758]\n- Resolves: bz#980758\n (qemu-kvm: CVE-2013-2231 qemu: qemu-ga win32 service unquoted search path [rhel-6.4.z])'}

Details for a given channel : client.channel.software.getDetails(sessionKey, channelLabel) returns channelDetail
Example :
{'channel_description': 'All packages released for Oracle Linux 6 (x86_64), including the very latest updated packages', 'channel_summary': 'Oracle Linux 6 Latest (x86_64)', 'channel_arch_name': 'x86_64', 'metadata_urls': {'group': [{'url': 'https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/repodata/comps.xml', 'checksum': '08ec74da7552f56814bc7f94d60e6d1c3d8d9ff9', 'checksum_type': 'sha', 'file_name': 'repodata/comps.xml'}], 'filelists': [{'url': 'https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/repodata/filelists.xml.gz', 'checksum': '2fb7fe60c7ee4dc948bbc083c18ab065384e990f', 'checksum_type': 'sha', 'file_name': 'repodata/filelists.xml.gz'}], 'updateinfo': [{'url': 'https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/repodata/updateinfo.xml.gz', 'checksum': '15b889640ad35067d99b15973bb71aa1dc33ab00', 'checksum_type': 'sha', 'file_name': 'repodata/updateinfo.xml.gz'}], 'primary': [{'url': 'https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/repodata/primary.xml.gz', 'checksum': '21f7115120c03a9dbaf25c6e1e9e3d6288bf664f', 'checksum_type': 'sha', 'file_name': 'repodata/primary.xml.gz'}], 'repomd': [{'url': 'https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/repodata/repomd.xml', 'file_name': 'repodata/repomd.xml'}], 'other': [{'url': 'https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/repodata/other.xml.gz', 'checksum': '30a176c8509677b588863bf21d7b196941e866af', 'checksum_type': 'sha', 'file_name': 'repodata/other.xml.gz'}]}}

Details for a given package : client.packages.getDetails(sessionKey, pid) returns packageDetail
Example :
{'package_size': 5855337, 'package_arch_label': 'i686', 'package_cookie': '1307566435', 'package_md5sum': 'e74525b5bbaa9e637fe818f3f5777c02', 'package_name': 'subversion', 'package_summary': 'A Modern Concurrent Version Control System', 'package_epoch': ' ', 'package_checksums': [{'md5': 'e74525b5bbaa9e637fe818f3f5777c02'}], 'package_payload_size': 5857988, 'package_version': '1.6.11', 'package_license': 'ASL 1.1', 'package_vendor': 'Oracle America', 'package_release': '2.el6_1.4', 'package_last_modified_date': '2011-06-08 15:53:55', 'package_description': 'Subversion is a concurrent version control system which enables one\nor more users to collaborate in developing and maintaining a\nhierarchy of files and directories while keeping a history of all\nchanges. Subversion only stores the differences between versions,\ninstead of every complete file. Subversion is intended to be a\ncompelling replacement for CVS.', 'package_id': 2814035, 'providing_channels': ['ol6_x86_64_latest'], 'package_build_host': 'ca-build44.us.oracle.com', 'package_build_date': '2011-06-08 15:53:55', 'download_urls': ['https://uln-qa.oracle.com/XMLRPC/GET-REQ/ol6_x86_64_latest/subversion-1.6.11-2.el6_1.4.src.rpm'], 'package_file': 'subversion-1.6.11-2.el6_1.4.src.rpm'}

Logout : client.auth.logout(sessionKey) returns retval
Logged out...

Sample code:

#!/usr/bin/env python
try:
    import os
    import sys
    import getpass
    import datetime
    import xmlrpclib

except ImportError, e:
    raise ImportError (str(e) + ': Module not found')

SERVER_URL = 'https://linux-update.oracle.com/rpc/api'

USERNAME = 'myusername@company.com'
PASSWORD = 'mypassword'

client = xmlrpclib.Server(SERVER_URL)


# login
print "Login : client.auth.login(username,password) returns sessionKey "
sessionKey = client.auth.login(USERNAME,PASSWORD)
if len(sessionKey) != 43:
   print "Invalid sessionKey of length %d : '%s'" % (len(sessionKey), sessionKey)
   exit(1)

print "Logged in..."

print ""
print ""
print ""


# list CVEs for an advisory
print "List CVEs for a particular advisory : client.errata.listCves(sessionKey, advisory)\
 returns cveList"
advisory = "ELSA-2013-1100"
cveList = client.errata.listCves(sessionKey, advisory)
print "Example : CVEs for advisory '%s' : %s" % (advisory, cveList)


print ""
print ""
print ""

# list channels for CVE
print "List channels applicable to advisory : \
client.errata.applicableToChannels(sessionKey, advisory) returns channelList"
channelList = client.errata.applicableToChannels(sessionKey, advisory)
print "Example : Channels applicable to advisory '%s' : %s" % (advisory, channelList)


print ""
print ""
print ""

# list latest packages in a channel
print "List latest packages in a given channel : \
client.channel.software.listLatestPackages(sessionKey, channelLabel) returns\
 packageList"
channelLabel= 'ol6_x86_64_latest'
packageList = client.channel.software.listLatestPackages(sessionKey, channelLabel)
print "Example : Packages for channel '%s' returns %d packages" %(channelLabel, 
 len(packageList))

print ""
print ""
print ""


# list errata in a channel
print "List errata in a given channel : \
client.channel.software.listErrata(sessionKey, channelLabel) returns errataList"
errataList = client.channel.software.listErrata(sessionKey, channelLabel)
print "Example : Errata in channel '%s' returns %d errata" %(channelLabel, len(errataList))

print ""
print ""
print ""

# list errata for a package with a specific id
print "List errata for a given package : client.packages.listProvidingErrata(sessionKey,
 pid) returns errataList"
pid = '2814035'
errataList = client.packages.listProvidingErrata(sessionKey, pid)
print "Example : \n%s\n" % errataList

print ""
print ""
print ""


# list software channels
print "List software channels available : client.channel.listSoftwareChannels(sessionKey)\
 returns channelList"
channelList = client.channel.listSoftwareChannels(sessionKey)
print "Example : List of channels returns '%d' channels" %(len(channelList))

print ""
print ""
print ""



# list all packages of a channel
print "List all packages for a given channel : \
client.channel.software.listAllPackages(sessionKey, channelLabel) returns packageList"
packageList = client.channel.software.listAllPackages(sessionKey, channelLabel)
print "Example : All packages for channel '%s' returns %d packages" %(channelLabel, 
len(packageList))

print ""
print ""
print ""


# list packages for an errata
print "List packages for a given advisory : client.errata.listPackages(sessionKey,
 advisory) returns packageList"
packageList = client.errata.listPackages(sessionKey, advisory)
print "Example : Packages for advisory '%s' returns %d packages" %(advisory, 
len(packageList))

print ""
print ""
print ""


# get errata details
print "Details for a specific advisory  : \
client.errata.getDetails(sessionKey, advisory) returns errataDetail"
errataDetail = client.errata.getDetails(sessionKey, advisory)
print "Example : \n%s\n" %errataDetail

print ""
print ""
print ""


# get channel details
print "Details for a given channel : \
client.channel.software.getDetails(sessionKey, channelLabel) returns channelDetail"
channelDetail = client.channel.software.getDetails(sessionKey, channelLabel)
print "Example : \n%s\n" % channelDetail

print ""
print ""
print ""


# get package details from package with an id
print "Details for a given package : client.packages.getDetails(sessionKey, pid) \
returns packageDetail"
packageDetail = client.packages.getDetails(sessionKey, pid)
print "Example : \n%s\n" % packageDetail

print ""
print ""
print ""


print "Logout : client.auth.logout(sessionKey) returns retval"
retval = client.auth.logout(sessionKey)
if retval == 1:
  print "Logged out..."
else:
  print "Failed to log out..."

Channel subscription from command-line support added to the Unbreakable Linux Network (ULN)

Wim Coekaerts - Tue, 2014-05-13 12:41
Until recently, to add channels to a server or to register a server as a yum-repository server, one had to log into ULN and do this manually. First, a server had to be tagged as a yum server, and then any channels to be included had to be added to that server. While this is an easy task, it does involve logging into the website and manually following a few steps, and it could not be automated.

We provided an updated rhn-setup RPM that adds a new tool called uln-channel, which allows users with ULN access to enable a server as a yum server and add/remove/list channels for that server. This allows for easy automation.

The latest version of the rhn-setup RPM is rhn-setup-1.0.0.1-16.0.9.el6.noarch. The uln-channel tool is currently only supported on Oracle Linux 6.

# uln-channel -h
Usage: uln-channel [options]

Options:
  -c CHANNEL, --channel=CHANNEL
                        name of channel you want to (un)subscribe
  -a, --add             subscribe to channel
  -r, --remove          unsubscribe from channel
  -l, --list            list channels
  -b, --base            show base channel of a system
  -L, --available-channels
                        list all available child channels
  -v, --verbose         verbose output
  -u USER, --user=USER  your user name
  -p PASSWORD, --password=PASSWORD
                        your password
  --enable-yum-server   enable yum server setting
  --disable-yum-server  disable yum server setting
  -h, --help            show this help message and exit

# uln-channel --list
Username: wim@company.com
Password:
ol6_i386_UEK_latest
ol6_i386_ksplice
ol6_i386_latest

# uln-channel --base
Username: wim@company.com
Password:
ol6_i386_ksplice
ol6_i386_latest
ol6_i386_UEK_latest

# uln-channel --enable-yum-server
Username: wim@company.com
Password:

# uln-channel --disable-yum-server
Username: wim@company.com
Password:


# uln-channel --available-channels
Username: wim@company.com
Password:
el3_i386_latest
el3_u8_i386_patch
el3_u8_x86_64_patch
el3_u9_i386_base
el3_u9_i386_patch
el3_u9_x86_64_base
el3_u9_x86_64_patch
el3_x86_64_latest
...
ol6_x86_64_Dtrace_BETA
ol6_x86_64_Dtrace_latest
ol6_x86_64_Dtrace_userspace_latest
ol6_x86_64_MySQL
ol6_x86_64_MySQL56
ol6_x86_64_UEKR3_latest
ol6_x86_64_UEK_BETA
ol6_x86_64_UEK_base
ol6_x86_64_UEK_latest
ol6_x86_64_addons
ol6_x86_64_gdm_multiseat
ol6_x86_64_ksplice
ol6_x86_64_latest
ol6_x86_64_mysql-ha-utils
ol6_x86_64_ofed_UEK
ol6_x86_64_oracle
ovm22_2.2.0_i386_base
ovm22_2.2.0_i386_patch
ovm22_2.2.1_i386_base
ovm22_2.2.1_i386_patch
ovm22_2.2.2_i386_base
ovm22_2.2.2_i386_patch
ovm22_2.2.3_i386_base
ovm22_2.2.3_i386_patch
ovm22_i386_latest
ovm22_i386_oracle
ovm2_2.1.0_i386_base
ovm2_2.1.0_i386_patch
ovm2_2.1.1_i386_base
ovm2_2.1.1_i386_patch
ovm2_2.1.2_i386_base
ovm2_2.1.2_i386_patch
ovm2_2.1.5_i386_base
ovm2_2.1.5_i386_patch
ovm2_i386_latest
ovm3_3.0.2_x86_64_base
ovm3_3.0.3_x86_64_base
ovm3_3.0.3_x86_64_patch
ovm3_3.0_x86_64_base
ovm3_3.0_x86_64_patch
ovm3_3.1.1_x86_64_base
ovm3_3.1.1_x86_64_patch
ovm3_3.2.1_x86_64_base
ovm3_3.2.1_x86_64_patch
ovm3_x86_64_latest

# uln-channel --add --channel=ol6_x86_64_oracle
Username: wim@company.com
Password:

# uln-channel --list
Username: wim@company.com
Password:
ol6_i386_UEK_latest
ol6_i386_ksplice
ol6_i386_latest
ol6_x86_64_oracle

OpenStack for Oracle Linux and Oracle VM

Wim Coekaerts - Tue, 2014-05-13 12:32
We just made an announcement today about support for OpenStack with Oracle Linux and Oracle VM. The press release can be found here.

America’s Cup Boat Installation Time Lapse Video

David Haimes - Tue, 2014-05-13 08:55

On Friday I realized the America’s Cup yacht was going to be installed at Oracle HQ over the weekend, so I went home, got my GoPro camera, and set it up to take a picture every 30 seconds.  For some reason it shut off on Saturday morning, when the helicopter brought the hull over the building, but I still think the footage came out pretty well.  Take a look and let me know what you think in the comments below.

(Pro Tip: It’s worth popping out the embedded video and going fullscreen to get the full effect)


Categories: APPS Blogs

archive_lag_target Works in SE

Don Seiler - Mon, 2014-05-12 15:34
TL;DR: The archive_lag_target parameter will force log archiving in Standard Edition.

Just a quick note here that I wanted to share since I didn't see anything directly confirming this when I was searching around.

I have an Oracle 11gR2 Standard Edition (SE) database for which I'm also maintaining a manual standby, since Oracle Data Guard is not available in SE. I created a metric extension in EM12c to alert me if the standby is more than 1 hour behind the primary. However, since this is a very low-activity database, the redo logs were not switching even once an hour. Obviously, I could include a command to force a log switch/archive in the script that I use to push archivelogs to the standby. However, we all know that with Data Guard on Enterprise Edition (EE), one would use the archive_lag_target initialization parameter to set the desired maximum standby lag. Oracle enforces this by forcing a log switch at least once every X seconds, where X is the archive_lag_target value. By default it is set to 0, which disables the feature.

I had assumed that archive_lag_target would only work in EE, but I decided to give it a try and was pleasantly surprised to see that it works as intended in SE. So I can set archive_lag_target=900 to guarantee that logs are switched and archived at least every 15 minutes (switches would be more frequent if database activity warranted them).
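For example, a minimal sketch of setting it (SCOPE=BOTH assumes the instance uses an spfile):

ALTER SYSTEM SET archive_lag_target=900 SCOPE=BOTH;

-- confirm the new value (SQL*Plus)
SHOW PARAMETER archive_lag_target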
Categories: DBA Blogs

Middle East North Africa (MENA) OTN Tour - May 26 - June 1

Syed Jaffar - Mon, 2014-05-12 00:53
The Middle East North Africa (MENA) OTN Tour is scheduled from May 26 through June 1, with stops in Tunisia, Saudi Arabia, and Dubai.

More details about the agenda and registration will be published shortly. Stay tuned.

America's Cup Winning Boat Arrives at Oracle HQ

David Haimes - Sun, 2014-05-11 22:50

UPDATE: Check out this post for a time lapse video of the boat being installed over the weekend.
https://davidhaimes.wordpress.com/2014/05/13/americas-cup-boat-installation-time-lapse-video/

On May 20th 2008, I arrived at work and was surprised to see an America's Cup yacht as I looked out from my office window, and I posted pictures to this blog.  This weekend, almost 6 years later to the day, we had a new boat arrive to live on the lake, and this time I was ready because preparations had been going on for a while.  I popped into the office on Saturday to see how it was looking and it was looking, well, it was looking very big.  These multi-hulled machines are amazing pieces of engineering, and seeing one so close is pretty cool.  Take a look below at the old boat and the new one viewed from the same office and you will get an idea of the difference in size.  More pictures to come soon…

[Photo: the old boat, viewed from the office window]

[Photo: teamUSAatHQ (the new boat at Oracle HQ)]

Categories: APPS Blogs

Announcing the ORCLAPEX NOVA Meetup Group

Scott Spendolini - Thu, 2014-05-08 12:41

Following in the footsteps of a few others, I’m happy to announce the formation and initial meeting of the ORCLAPEX NOVA (Northern Virginia) group!  

As Dan McGhan and Doug Gault have mentioned in their blogs, a bunch of us who are regular APEX users are trying to continue to grow the community by providing in-person meetings where we can meet other APEX developers and trade stories, tips and anything else.  Each of the groups is independently run by the local organizers, so the formats and topics will vary from group to group, but the core content will always be focused around Oracle APEX. Groups will also be vendor-neutral, meaning that the core purpose of the group is to provide education and facilitate the sharing of APEX-related ideas, not to market services or products.

Right now, there are a number of groups already formed across the world: 

I’m happy to announce that the first meeting of the ORCLAPEX NOVA group will be Thursday, May 29th, 2014 at Oracle’s Reston office in Reston, VA at 7:00 PM.  Details about the event can be found here.  We will start the group with a bang, as Mike Hichwa, VP of Database Development at Oracle, will be presenting APEX 5.0 New Features for the bulk of the meeting.  You can bet that we’ll get to see the latest and greatest features being prepared for the upcoming APEX 5.0 release.  Here’s the rest of the agenda:

 7:00 PM Pizza & Sodas; informal chats 

 7:15 PM Welcome - Scott Spendolini, Enkitec 

 7:30 PM APEX 5.0 - Mike Hichwa, Oracle Corporation 

 9:00 PM Wrap Up & Poll for Next MeetUp

IMPORTANT: In order to attend, you must create a MeetUp.com account, join the group and RSVP.  You will also have to use your real name, as it will be provided to Oracle Security prior to the event, and if you’re not listed, you may not be able to attend.  All communications and announcements will be facilitated via the MeetUp.com site as well.

Also, not all meetings need to be at the Oracle Reston facility; we’re using it because Mike & Shakeeb were able to secure the room for free, and it’s relatively central to Arlington, Fairfax and Loudoun Counties.  Part of what we’ll have to figure out is how many smaller, more local groups we may want to form (i.e. PW County, DC, MD, etc.) and whether or not we should try to keep them loosely associated.  One thought that I had would be for the smaller groups to meet more locally and frequently, and for all of the groups to seek out presenters for an “all hands” type meeting that we can move around the region.  All options are on the table at this point.

I look forward to meeting many of you in person on the 29th!

Oracle Application Express. Fast. Like a Veyron Super Sport.

Joel Kallman - Wed, 2014-05-07 14:18


A partner from the United Kingdom recently asked me for some statistics about apex.oracle.com, as I had authored something similar in a blog post back in 2009.  This gentleman was proposing a magazine article and sought some updated statistics.  Since I compiled this information for him, I reasoned it was worthwhile to also share this same information with the APEX community.

In the past 7 days on apex.oracle.com:

Total Page Views: 4,875,173
Distinct Applications Used: 5,842
Distinct Users: 9,048
Total Number of Workspaces: 20,974
Total Number of Applications: 77,478
New Workspaces Approved: 904

As most people know, apex.oracle.com is the customer evaluation instance, for anyone on the Internet to come and "kick the tires" of Oracle Application Express.

However, what I find even more interesting is the internal instance of Oracle Application Express (apex.oraclecorp.com), hosted inside of Oracle for anyone in the company to come along and build applications, requiring nothing but a browser.  It is run and managed by professionals in Product Development IT.  It's used by virtually every line of business in the company (e.g., HR, Product Development, QA, Sales, Marketing, Real Estate & Facilities, Manufacturing & Distribution, just to name a few).  Here, instead of merely kicking the tires, people build real applications that the business depends upon, even if some of them are opportunistic applications:

In the past 7 days on apex.oraclecorp.com:

Total Page Views: 2,389,593
Distinct Applications Used: 2,023
Distinct Users: 18,203
Total Number of Workspaces: 2,759
Total Number of Applications: 14,592

And lastly, we have an internal application which is really nothing more than a sophisticated mini data warehouse serving as an employee directory.  Most Oracle employees know it by the name of Aria People.  Tom and others had written this application in lovingly hand-crafted PL/SQL before I even joined Oracle, and we eventually rewrote it in APEX.  As you can imagine, it's used by virtually every employee in the company.  We average 1.4M - 1.5M page views per day.  In reviewing the last 100 days of activity, there was one day (18-MAR-2014) where this application did 3,132,573 page views from 45,767 distinct IP addresses.  The median page rendering time was 0.03 seconds.  In this same application, again looking back across the last 100 days, the busiest hour we had was on 11-MAR-2014, with 171,156 page views in a single hour, from 6,254 distinct IP addresses.  That averages out to 47.543 page views per second.

Oracle Application Express is as scalable as the Oracle Database.  And with some mad Oracle skills, you can scale to great heights.

G+ Public Hangout Fail

Catherine Devlin - Tue, 2014-05-06 22:09
tl;dr: Do not use public Google+ Hangouts under any circumstances, because people suck.

Before the PyCon 2014 CFP came due, PyLadies hosted several G+ hangouts for talk proposal brainstorming. Potential speakers could talk over and flesh out their ideas with each other, producing better talk proposals. More importantly, it was a nice psychological stepping stone on the way to filling out that big, scary CFP form all alone. I thought they went great.

I wanted to emulate them for Postgres Open and PyOhio, which both have CFPs open now. The PyLadies hangouts had used EventBrite to preregister attendees, and I unfortunately did not stop to consider that choice or the reasons behind it. Instead, I just scheduled hangouts, made them public, and sent out invitations with the hangout URLs, encouraging people to forward the invites onward. Why make participating any harder than it has to be?

The more worldly of you are already shaking your heads at my naiveté. It turns out that the world's exhibitionists have figured out how to automatically detect and join public hangouts. For several seconds I tried kicking out and banning them as they joined, but new ones kept arriving, faster than one per second. Then I hung up - which unfortunately did not terminate the hangout. It took me frantic minutes to find how to delete a hangout in progress. I dearly hope that no actual tech community members made it to the hangout during that time.

I had intended to create a place where new speakers, and women especially, would feel safe increasing their community participation. The absoluteness of my failure infuriates me.

Hey, Google: public G+ hangouts have been completely broken, not by technical failure, but by the degraded human condition. You need to remove them immediately. The option can only cause harm, as people accidentally expose themselves and others to sexual harassment.

In the future, a "public" hangout URL should actually take you to a page where you request entrance from the organizer by text message (which should get the same spam filtration that an email would). But fix that later. Take the public hangouts away now.

Everybody else, if you had heard about the hangouts and were planning to participate, THANK YOU - but I've cancelled the rest of them. You should present anyway, though! I'd love to be contacted directly to talk over your ideas for proposals.
