OTN TechBlog

Oracle Blogs

Podcast: Jfokus Panel: Building a New World Out of Bits

Tue, 2018-01-16 17:27

Our first program for 2018 brings together a panel of experts whose specialties cover a broad spectrum, including Big Data, security, open source, agile, domain driven design, Pattern-Oriented Software Architecture, Internet of Things, and more. The thread that connects these five people is that they are part of the small army of experts that will be presenting at the 2018 Jfokus Developers Conference, February 5-7, 2018 in Stockholm, Sweden.

This program was recorded on January 10, 2018.

The Panelists

(in alphabetical order)

Jesse Anderson

Jesse Anderson (@jessetanderson)
Data Engineer, Creative Engineer, Managing Director, Big Data Institute
Reno, Nevada

    Suggested Resources

Benjamin Cabe

Benjamin Cabé (@kartben)
IoT Program Manager, Evangelist, Eclipse Foundation
Toulouse, France

   Suggested Resources

  • Article: Monetizing IoT Data using IOTA
  • White Paper: The Three Software Stacks Required for IoT Architectures
    A collaboration of the Eclipse IoT Working Group
Kevlin Henney

Kevlin Henney (@KevlinHenney)
Consultant, programmer, speaker, trainer, writer, owner, Curbralan
Bristol, UK

   Suggested Resources

Siren Hofvander

Siren Hofvander (@SecurityPony)
Chief Security Officer with Min Doktor
Malmö, Sweden

Suggested Resources

Dan Bergh Johnsson

Dan Bergh Johnsson (@danbjson)
Agile aficionado, Domain Driven Design enthusiast, code quality craftsman, Omegapoint, Stockholm, Sweden

Suggested Resources

Additional Resources Coming Soon
  • Women in Technology
    With Heli Helskyaho, Michelle Malcher, Kellyn Pot'Vin-Gorman, and Laura Ramsey
  • DevOps: Can This Marriage be Saved
    With Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge
  • Combating Complexity
    With Adam Bien, Lucas Jellema, Chris Newcombe, and Chris Richardson
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

The Best Way to Get Help with Your Oracle Database Questions

Tue, 2018-01-16 12:34

One of the best things about the Oracle Developer Community is the easy access to expert help and ideas. To add to the expert content, Oracle is adding a new service for developers called Ask TOM Office Hours. Chris Saxon, Oracle SQL Developer Advocate and SQL expert, tells all about it:


Aaaaargh! Any more of this and I was ready to throw my computer out of the window. I was stuck. I was editing a video for The Magic of SQL, trying to create some blended split-screen effects. I was sure it was possible. I just didn’t know how. Searches turned up nothing. So I turned to forums for help.

But, instead of answers, all I was getting was requests for extra details. Three days in and I was still no closer to achieving the desired effect. So I gave up and called a colleague. After a couple of minutes chatting, they were able to point me to a solution.

Progress at last!

It’s a drawback that plagues technical forums. A simple request for help can turn into a prolonged back-and-forth exchange of information.

“Which version are you using?”

“What does your code look like?”

“Have you set the im_not_an_idiot parameter?”

They do want to help. But the problem is that it's tough to provide effective help without a full understanding of your issue. Respondents need to know what you’re trying to do, what you’ve tried and what you’re working with. So you settle in for a game of internet pong. Your question pings back and forth between you and your unknown “helper”. Until finally your query is answered. Or one of you gives up. All the while sucking up your valuable time.

Frustrating, isn’t it?

Wouldn’t it be great if, in addition to support and Q&A forums, you could have an actual, live conversation, working out all the details of your malady?

Where you could quickly get to the root of the issue or learn how to properly apply a new feature to your program?

Now you can!

Introducing Ask TOM Office Hours

These are scheduled, live Q&A sessions. Hosted by Oracle Database Product Managers, evangelists and even developers. The Oracle product experts. Ready to help you get the best out of Oracle technology.

And the best part: Ask TOM Office Hours sessions are 100% free!

Office Hours continues the pioneering tradition of Ask TOM. Launched in 2000 by Tom Kyte, the site now has a dedicated team who answer hundreds of questions each month. Together they’ve helped millions of developers understand and use Oracle Database.

Office Hours takes this service to the next level, giving you live, direct access to a horde of experts within Oracle. All dedicated to helping you get the most out of your Oracle investment. To take advantage of this new program, visit the Office Hours home page and find an expert who can help. Sign up for the session and, at the appointed hour, join the webinar. There you can put your questions to the host or listen to the Q&A of others, picking up tips and learning about new features.

Each session will have a specific focus, based on the presenter’s expertise. But you are welcome to ask other questions as well.

Stuck on a thorny SQL problem? Grill Chris Saxon or Connor McDonald of the Ask TOM team. 

Want to make the most of Oracle Database's amazing In-Memory feature? Andy Rivenes and Maria Colgan will take you through the key steps.

Started a new job and need to get up-to-speed on Multitenant? Patrick Wheeler will help you get going.

Struggling to get bulk collect working? Ask renowned PL/SQL expert, Steven Feuerstein.

Our experts live all over the globe. So even if you inhabit "Middleofnowhereland", you’re sure to find a timeslot that suits you.

You need to make the most of Oracle Database and its related technologies. It's our job to make it easy for you.

Ask TOM Office Hours: Dedicated to Customer Success

View the sessions and sign up now!

 

Announcing Offline Persistence Toolkit for JavaScript Client Applications

Mon, 2018-01-08 19:27

We are excited to announce the open source release on GitHub of the offline-persistence-toolkit for JavaScript client applications, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

The Offline Persistence Toolkit is a client-side JavaScript library that provides caching and offline support at the HTTP request layer. This support is transparent to the user and is done through the Fetch API and an XHR adapter. HTTP requests made while the client device is offline are captured for replay when connection to the server is restored. Additional capabilities include a persistent storage layer, synchronization manager, binary data support and various configuration APIs for customizing the default behavior.

Whilst the toolkit is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any JavaScript client application that requires persistent storage and/or offline data access.

The Offline Persistence Toolkit simplifies life for application developers by providing a response caching solution that works well across modern browsers and web views. The toolkit covers common caching cases with a minimal amount of application-specific coding, but provides flexibility to cover non-trivial cases as well. In addition to providing the ability to cache complete response payloads, the toolkit supports "shredding" of REST response payloads into objects that can be stored, queried and updated on the client while offline.
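To make the capture-and-replay idea concrete, here is a small, self-contained sketch of the pattern. This is illustrative only and is not the toolkit's actual API (the class and method names here are hypothetical): while the client is offline, requests are captured in a queue, and when connectivity returns they are replayed in order.

```javascript
// Illustrative sketch of offline capture-and-replay (NOT the toolkit's API).
class ReplayQueue {
  constructor(sendFn) {
    this.sendFn = sendFn;   // function that performs the real HTTP request
    this.online = true;
    this.pending = [];      // requests captured while offline
  }

  request(req) {
    if (this.online) {
      return this.sendFn(req);
    }
    // Offline: capture the request for later replay.
    this.pending.push(req);
    return Promise.resolve({ queued: true });
  }

  goOffline() { this.online = false; }

  goOnline() {
    this.online = true;
    // Replay captured requests in the order they were made.
    const toReplay = this.pending.splice(0);
    return Promise.all(toReplay.map((req) => this.sendFn(req)));
  }
}
```

The real toolkit does this transparently at the Fetch/XHR layer, and adds persistent storage, shredding, and synchronization on top; see its README for the actual APIs.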

The architecture diagram illustrates the major components of the toolkit and how an application interacts with it:

The Offline Persistence Toolkit is distributed as an npm package consisting of AMD modules.

To install the toolkit, enter the following command at a terminal prompt in your app’s top-level directory:

$ npm install @oracle/offline-persistence-toolkit

 

The toolkit makes heavy use of the Promise API. If you are targeting environments that do not support the Promise API, you will need to polyfill this feature. We recommend the es6-promise polyfill.
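In a Node-based setup, for example, you might guard the polyfill so that a native Promise is used where available. This is a minimal sketch; how you actually load the polyfill depends on your module setup (AMD, CommonJS, script tag):

```javascript
// Install the es6-promise polyfill only when the environment lacks a native Promise.
if (typeof Promise === 'undefined') {
  require('es6-promise').polyfill();
}

// Toolkit calls can now rely on Promise being defined:
Promise.resolve('persistence layer ready').then(function (msg) {
  console.log(msg); // prints "persistence layer ready"
});
```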

The toolkit does not have a dependency on a specific client-side storage solution, but does include a PouchDB adapter. If you plan to use PouchDB for your persistent store, you will need to install the following PouchDB packages:

$ npm install pouchdb pouchdb-find

 

For more information about how to make use of this toolkit in your Oracle JET application or any other JavaScript application, refer to the toolkit's README, which also provides details about why we developed this toolkit, how to include it into your app, some simple use cases and links to JS Doc and more advanced use cases.

You can also refer to the JET FixItFast sample app, which makes use of the toolkit. Browse the source code directly, or use the Oracle JET command-line interface to build and deploy the app to see how it works.

I hope you find this toolkit really useful. If you have any feedback, please submit issues on GitHub.

For more technical articles about the Offline Persistence Toolkit, Oracle JET and other products, you can also follow OracleDevs on Medium.com.

New Release of Node.js Module for Oracle Database: node-oracledb 2.0 is out

Thu, 2017-12-21 17:41

It's been perhaps the most requested feature, and it's been delivered! You can now get pre-built binaries with all the required dependencies to connect your Node.js applications to an Oracle Database instance. Version 2.0 is the first release to have pre-built binaries. Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use. These are provided for convenience and will make life a lot easier, particularly for Windows users.

With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure.

Binaries for Node 4, 6, 8 and 9 are also available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6).

Simply add oracledb to your package.json dependencies or manually install with:

 

$ npm install oracledb

 

Review the CHANGELOG for all changes. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0. To know more about this release, go check out the detailed announcement.

Related content

 

Podcast: Blockchain: Beyond Bitcoin

Wed, 2017-12-20 07:00

Blockchain originally gained attention thanks to its connection to Bitcoin. But blockchain has emerged from under the crypto-currency’s shadow to become a powerful trend in enterprise IT -- and something that should be on every developer's radar.  For this program we’ve assembled a panel of blockchain experts to discuss the technology's impact, examine some use cases, and offer suggestions for developers who want to learn more in order to take advantage of the opportunities blockchain represents.

 

This program was recorded on Thursday, November 9, 2017.

 

The Panelists

Listed alphabetically

Lonneke Dikmans

Lonneke Dikmans
Chief Product Officer, eProseed, Utrecht, NL
Oracle Developer Champion

John King

John King
Tech Enablement Specialist/Speaker/Trainer/Course Developer, King Training Resources, Scottsdale, AZ

Robert van Molken

Robert van Mölken
Senior Integration / Cloud Specialist, AMIS, Utrecht, NL
Oracle Developer Champion

Arturo Viveros

Arturo Viveros
SOA/Cloud Architect, Sysco AS, Oslo, NO
Oracle Developer Champion

 

Additional Resources Coming Soon
  • Combating Complexity
    Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss the creeping complexity in software development and strategies for heading off the "software apocalypse."
  • DevOps: Can This Marriage be Saved
    Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge discuss the state of DevOps, where organizations get it wrong, and what developers can do to thrive in a DevOps environment.
Subscribe

Never miss an episode! The Oracle Developer Podcast is available via...

Announcing Open Source Jenkins Plugin for Oracle Cloud Infrastructure

Wed, 2017-12-06 15:12

Jenkins is a continuous integration and continuous delivery application that you can use to build and test your software projects continuously. The Jenkins OCI Plugin is now available on GitHub; it allows users to access and manage Oracle Cloud Infrastructure resources from Jenkins. A Jenkins master instance with the Jenkins OCI Plugin can spin up slaves (instances) on demand within Oracle Cloud Infrastructure, and remove the slaves automatically once the job completes.

After installing the Jenkins OCI Plugin, you can add an OCI Cloud option and a Template with the desired Shape, Image, Domain, and so on. The Template will have a Label that you can use in your Jenkins Job. Multiple Templates are supported. The Template options include Labels, Domains, Credentials, Shapes, Images, Slave Limits, and Timeouts.

Below you will find instructions for building and installing the plugin, which is available on GitHub: github.com/oracle/jenkins-oci-plugin

Installing the Jenkins OCI Plugin

The following section covers compiling and installing the Jenkins OCI Plugin.

Plugins required:
  • credentials v2.1.14 or later
  • ssh-slaves v1.6 or later
  • ssh-credentials v1.13 or later
Compile and install OCI Java SDK:

Refer to OCI Java SDK issue 25. Tested with Maven versions 3.3.9 and 3.5.0.

Step 1 – Compile and install the OCI Java SDK

$ git clone https://github.com/oracle/oci-java-sdk
$ cd oci-java-sdk
$ mvn compile install

Step 2 – Compile the plugin hpi file

$ git clone https://github.com/oracle/jenkins-oci-plugin
$ cd jenkins-oci-plugin
$ mvn compile hpi:hpi

Step 3 – Install hpi

  • Option 1 – Manage Jenkins > Manage Plugins > Click the Advanced tab > Upload Plugin section, click Choose File > Click Upload
  • Option 2 – Copy the downloaded .hpi file into the JENKINS_HOME/plugins directory on the Jenkins master
Restart Jenkins and “OCI Plugin” will be visible in the Installed section of Manage Plugins.

For more information on configuring the Jenkins Plugin for OCI, please refer to the documentation on the GitHub project. And if you have any issues or questions, please feel free to contact the development team by opening an issue on the project's Issues tab.

Related content

Kubernetes, Serverless, and Federation – Oracle at KubeCon 2017

Wed, 2017-12-06 09:00

Today at the KubeCon + CloudNativeCon 2017 conference in Austin, TX, the Oracle Container Native Application Development team open sourced two new Kubernetes-related projects, which we are also demoing here at the show. First, we have open sourced an Fn Installer for Kubernetes. Fn is an open source serverless project announced this October at Oracle OpenWorld. This Helm Chart for Fn enables organizations to easily install and run Fn on any Kubernetes deployment, including on top of the new Oracle managed Kubernetes service, Oracle Container Engine (OCE).

Second, we have open sourced Global Multi-Cluster Management, a new set of distributed cluster management features for Kubernetes federation that intelligently manages highly distributed applications ("planet-scale," if you will) that are multi-region, hybrid, or even multi-cloud. In a federated world, many operational challenges emerge: imagine how you would manage and auto-scale global applications or deploy spot clusters on-demand. For more info, make sure to check out the Multi-Cluster Ops in a Hybrid World session by Kire Filipovski and Vitaliy Zinchenko on Thursday, December 7 at 3:50pm!

Pushing Ahead: Keep it Open, Integrated and Enterprise-Grade

Customers are seeking an open, cloud-neutral, and community-driven container-native technology stack that avoids cloud lock-in and allows them to run the same stack in the public cloud as they run locally. This was our vision when we launched the Container Native Application Development Platform at Oracle OpenWorld 2017 in October.

 

Since then, Oracle Container Engine was in the first wave of Certified Kubernetes platforms, announced in November 2017, helping developers and dev teams be confident that there is consistency and portability amongst products and implementations.

So, the community is now looking for the same assurances from their serverless technology choice: make it open and built in a consistent way to match the rest of their cloud native stack.  In other words, make it open and on top of Kubernetes.  And if the promise of an open-source based solution is to avoid cloud lock-in, the next logical request is to make it easy for DevOps teams to operate across clouds or in a hybrid mode.  This lines up with the three major “asks” we hear from customers, development teams and enterprises: their container native platform must be open, integrated, and enterprise-grade:

  • Open: Open on Open

Both the Fn project and Global Multi-Cluster Management are cloud neutral and open source. Doubling down on open, the Fn Helm Chart enables the open serverless project (Fn) to run on the leading open container orchestration platform (Kubernetes). (Sure beats closed on closed!) The Helm Chart deploys a fully functioning Fn cluster (github.com/fnproject/fn) on a Kubernetes cluster using the Helm package manager (helm.sh).

  • Integrated: Coherent and Connected

Delivering on the promise of an integrated platform, both the Fn Installer Helm Charts and Global Multi-Cluster Management are built to run on top of Kubernetes and thus integrate natively into Oracle's Container Native Platform. While having one of everything works in a Home Depot or Costco, it's no way to create an integrated, effortless application developer experience, especially at scale across hundreds if not thousands of developers in an organization. Both the Fn installer and Global Multi-Cluster Management will be available on top of OCE, our managed Kubernetes service.

  • Enterprise-Grade: HA, Secure, and Operationally Aware

With the ability to deploy Fn to an enterprise-grade Kubernetes service such as Oracle Container Engine you can run serverless on a highly-available and secure backend platform.  Furthermore, Global Multi-Cluster Management extends the enterprise platform to multiple clusters and clouds and delivers on the enterprise desire for better utilization and capacity management. 

Production operations for large distributed systems is hard enough in a single cloud or on-prem, but becomes even more complex with federated deployments – such as multiple clusters applied across multi-regions, hybrid (cloud/on-prem), and multi-cloud scenarios.  So, in these situations, DevOps teams need to deploy and auto-scale global applications or spot clusters on-demand and enable cloud migrations and hybrid scenarios.

With Great Power Comes Great Responsibility (and Complexity)

So, with the power of Kubernetes federation comes great responsibility and new complexity: how to apply application-aware decision logic to container native deployments. Thorny business and operational issues can include cost, regional affinity, performance, quality of service, and compliance. When DevOps teams are faced with managing multiple Kubernetes deployments, they can also struggle with multiple cluster profiles deployed on a mix of on-prem and public cloud environments. These basic DevOps questions are hard to answer:

  • How many clusters should we operate?
    • Do we need separate clusters for each environment?
    • How much capacity do we allocate for each cluster?
  • Who will manage the lifecycle of the clusters?
  • Which cloud is best suited for my application?
  • How do we avoid cloud lock-in?
  • How do we deploy applications to multiple clusters?

The three open source components that make up Global Multi-Cluster Management are: (1) Navarkos (which means Admiral in Greek) enables a Kubernetes federated deployment to automatically manage multi-cluster infrastructure and manage clusters in response to federated Kubernetes application deployments; (2) Cluster Manager provides lifecycle management for Kubernetes clusters using a Kubernetes federation backend; and (3) the Federated Ingress Controller is an alternative implementation of federated ingress using external DNS.

Global Multi-Cluster Management works with Kubernetes federation to solve these problems in several ways:

  • Creates Kubernetes clusters on demand and deploys apps to them (only when there is a need)
    • Clusters can be run on any public or private cloud platform
    • Runs the application matching supply and demand
  • Manages cluster consistency and cluster life-cycle
    • Ingress, nodes, network
  • Control multi-cloud application deployments
    • Control applications independently of cloud provider
  • Application-aware clusters
    • Clusters are offline when idle
    • Workloads are scaled automatically
    • Provides the basis to help decide where apps run based on factors that could include cost, regional affinity, performance, quality of service and compliance

Global Multi-Cluster Management ensures that Kubernetes clusters are created, sized, and destroyed only when there is a need for them, based on the requested application deployments. If there are no application deployments, then there are no clusters. As DevOps teams deploy applications to a federated environment, Global Multi-Cluster Management makes intelligent decisions about whether any clusters should be created, how many, and where. At any point in time the live clusters are in tune with the current demand for applications, and the Kubernetes infrastructure becomes more application and operationally aware.
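The demand-driven idea above, clusters existing only because application deployments require them, can be sketched in a few lines. This is a conceptual illustration only, not Navarkos code; the function and field names are hypothetical:

```javascript
// Conceptual sketch (not Navarkos code): derive cluster demand purely from the
// requested application deployments, so "no deployments" means "no clusters".
function planClusters(deployments, maxAppsPerCluster) {
  // Count requested application deployments per region.
  const byRegion = {};
  deployments.forEach(function (d) {
    byRegion[d.region] = (byRegion[d.region] || 0) + 1;
  });
  // Emit one plan entry per region with demand; cluster count scales with app count.
  return Object.keys(byRegion).map(function (region) {
    return {
      region: region,
      clusters: Math.ceil(byRegion[region] / maxAppsPerCluster)
    };
  });
}
```

Calling planClusters with an empty deployment list yields an empty plan, mirroring the "no deployments, no clusters" rule.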

See Us at Booth G8, Join our Sessions, & Learn More at KubeCon + CloudNativeCon 2017

Come see us at Booth G8 and meet our engineers and contributors! As an Austin native (speaking for myself and the rest of the old StackEngine team), we're excited to welcome you all (y'all) to Austin. Make sure to join in to "Keep Cloud Native Weird." And be fixin' to check out these sessions:

 

Announcing The New Open Source WebLogic Monitoring Exporter on GitHub

Mon, 2017-12-04 08:00

As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter. This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana.

We are also making the WebLogic Monitoring Exporter tool available as open source on GitHub, which will allow our community to contribute to this project and be part of enhancing it. 

The WebLogic Monitoring Exporter is implemented as a web application that is deployed to the WebLogic Server instances that are to be monitored. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics.  With a single HTTP query, and no special setup, it provides an easy way to select the metrics that are monitored for a managed server.

For detailed information about the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server.

Prometheus collects the metrics that have been scraped by the WebLogic Monitoring Exporter. By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain.
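As an illustration, a minimal Prometheus scrape job for the exporter might look like the following. The context root, port, and credentials here are assumptions based on a default deployment of the exporter web application; adjust them for your domain:

```yaml
# prometheus.yml (fragment); hypothetical values, adjust to your WebLogic domain
scrape_configs:
  - job_name: 'weblogic'
    # the exporter webapp is assumed to be deployed at the /wls-exporter context root
    metrics_path: '/wls-exporter/metrics'
    basic_auth:
      username: weblogic   # a WebLogic user with monitoring access
      password: welcome1   # example only; use a secret in practice
    static_configs:
      - targets: ['managed-server1:8001', 'managed-server2:8001']
```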

We can use Grafana to display these metrics in graphical form.  Connect Grafana to Prometheus, and create queries that take the metrics scraped by the WebLogic Monitoring Exporter and display them in dashboards.

For more information, see Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes.

Get Started!

Get started building and deploying the WebLogic Monitoring Exporter, set up Prometheus and Grafana, and monitor the metrics from the WebLogic Managed Servers in a domain/cluster running in Kubernetes.

  • Clone the source code for the WebLogic Monitoring Exporter from GitHub.
  • Build the WebLogic Monitoring Exporter following the steps in the README file.
  • Install both Prometheus and Grafana on the host where you are running Kubernetes.
  • Start a WebLogic on Kubernetes domain; find a sample in GitHub.
  • Deploy the WebLogic Monitoring Exporter to the cluster where the WebLogic Managed servers are running.
  • Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes, which steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards.

We welcome you to try this out. It's a good start to making the transition to open source monitoring tools.  We can work together to enhance it and take full advantage of its functionality in Docker/Kubernetes environments.

 

Updates to Oracle Cloud Infrastructure CLI

Fri, 2017-12-01 15:01

We’ve been hard at work the last few months making updates to our command line interface for Oracle Cloud Infrastructure, and wanted to take a minute to share some of the new functionality! The full list of new features and services can be found in our changelog on GitHub, and below are a few core features we wanted to call out specifically:

Defaults

We know how tedious it can be to type out the same values again and again while using the CLI, so we have added the ability to specify default values for parameters. The example below shows a sample oci_cli_rc file which sets two defaults: one at a global level which will be applied to all operations with a --compartment-id parameter, and one for only ‘os’ (object storage) commands which will be applied to all ‘os’ commands with a --namespace parameter.

Content of ~/.oci/oci_cli_rc:

[DEFAULT]
# globally scoped default for all operations with a --compartment-id parameter
compartment-id=ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2…
# default for --namespace scoped specifically to Object Storage commands
os.namespace=mynamespace

Example commands that no longer need explicit parameters:

oci compute instance list  # no --compartment-id needed
oci os bucket list         # no --compartment-id or --namespace needed

 

Command and parameter aliases

To help with specifying long command and parameter names, we have also added support for defining aliases. The example oci_cli_rc file below shows examples of defining aliases for commands and parameters:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_PARAM_ALIASES]
--ad=--availability-domain
-a=--availability-domain
--dn=--display-name

[OCI_CLI_COMMAND_ALIASES]
# This lets you use "ls" instead of "list" for any list command in the CLI (e.g. oci compute instance ls)
ls = list
# This lets you do "oci os object rm" rather than "oci os object delete"
rm = os.object.delete

Table output

JSON output is great for parsing but can be problematic when it comes to readability on the command line. To help with this, we have added a table output format, which can be triggered for any operation by supplying --output table. This also makes it easier to use common tools like grep and awk on the CLI output to grab specific records from a table. See the section on JMESPath below to learn how you can filter data to make your table output more concise.

Here is an example command and output:

oci iam region list --output table

+-----+----------------+
| key | name           |
+-----+----------------+
| FRA | eu-frankfurt-1 |
| IAD | us-ashburn-1   |
| PHX | us-phoenix-1   |
+-----+----------------+

JMESPath queries

Oftentimes a CLI operation will return more data than you are interested in. To help with filtering and querying data from CLI responses, we have added the --query option, which allows running arbitrary JMESPath (http://jmespath.org/) queries on the CLI output before the data is returned.

For example, if you want to list all of the instances in your compartment but only see the display-name and lifecycle-state, you can do so with the following query:

# using the oci_cli_rc file from above so we don't have to specify --compartment-id
oci compute instance list --query 'data[*].{"display-name":"display-name","lifecycle-state":"lifecycle-state"}'

This is especially convenient for use with table output so you can limit the output to a size that will fit in your terminal.
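To see what a data[*].{...} projection actually does to the response, here is the equivalent transformation written out in plain JavaScript. This is illustrative only; the OCIDs and names are made up, and the CLI applies the JMESPath query for you before printing:

```javascript
// A response shaped like the CLI's JSON output (made-up example values).
const response = {
  data: [
    { id: 'ocid1.instance.oc1..exampleaaa', 'display-name': 'web-1', 'lifecycle-state': 'RUNNING', shape: 'VM.Standard1.1' },
    { id: 'ocid1.instance.oc1..examplebbb', 'display-name': 'web-2', 'lifecycle-state': 'STOPPED', shape: 'VM.Standard1.1' }
  ]
};

// Equivalent of the JMESPath query data[*].{id: id, "display-name": "display-name"}:
// project each element of data down to just the two requested keys.
const projected = response.data.map(function (item) {
  return { id: item.id, 'display-name': item['display-name'] };
});

console.log(JSON.stringify(projected, null, 2));
```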

You can also define queries in your oci_cli_rc file and reference them by name so you don't have to type out complex queries. For example:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_CANNED_QUERIES]
get_id_and_display_name_from_list=data[*].{id: id, "display-name": "display-name"}

Example command:

oci compute instance list -c $C --query query://get_id_and_display_name_from_list

To help you get started with some of these features, we have added the command 'oci setup oci-cli-rc', which generates a sample oci_cli_rc file with examples of canned queries, defaults, and parameter/command aliases.

JSON Input made easier

We have made a number of improvements to how our CLI works with complex parameters that require JSON input:

Reading JSON parameters from a file:

For any parameter marked as a "COMPLEX TYPE" you can now specify the value to be read from a file using the "file://" prefix instead of needing to format a JSON string on the command line. For example:

oci iam policy create --statements file://statements.json

Generate JSON skeletons for a single parameter

To help with specifying JSON input from a file, we have added --generate-param-json-input to each command with complex parameters, to enable generating a JSON template for a given input parameter. For example, if you are not sure of the format for the oci iam policy create --statements parameter, you can issue the following command to generate a template:

oci iam policy create --generate-param-json-input statements

output:

[
  "string",
  "string"
]

You can then fill out this template and specify it as the input to a create policy call like so:

oci iam policy create --statements file://statements.json

Generate JSON skeletons for full command input

We also support generating a JSON skeleton for the full command input. A common workflow with this parameter is to dump the full JSON skeleton to a file, edit the file with the input values you want, and then execute the command using that file as input. Here is an example:

# command to emit the full JSON skeleton for a command to a file input.json
oci os preauth-request create --generate-full-command-json-input > input.json

# view the content of input.json and edit values
cat input.json
{
  "accessType": "ObjectRead|ObjectWrite|ObjectReadWrite|AnyObjectWrite",
  "bucketName": "string",
  "name": "string",
  "namespace": "string",
  "objectName": "string",
  "opcClientRequestId": "string",
  "timeExpires": "2017-01-01T00:00:00.000000+00:00"
}

# run the create pre-authenticated request command with the values specified in the file
oci os preauth-request create --from-json file://input.json

Windows auto-complete for PowerShell

We have now added tab completion for Windows PowerShell! Completion works on commands and parameters and can be enabled with the following command:

oci setup autocomplete

For more in-depth documentation on these features and more, check out our main CLI documentation page here.

Related content

Announcing Mobile Authentication Plugin for Apache Cordova, and More!

Fri, 2017-12-01 11:19

We are excited to announce the open source release on GitHub of the cordova-plugin-oracle-idm-auth plugin for Apache Cordova, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

This plugin provides a simple JavaScript API for performing complex authentication. It is powered by a native SDK developed by the Oracle Access Management Mobile & Social (OAMMS) team, has been tested and verified against Oracle Access Manager (OAM) and Oracle Identity Cloud Service (IDCS), and is compatible with other third-party authentication applications that support Basic Authentication, OAuth, Web SSO, or OpenID Connect.

Whilst the plugin is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any Cordova-based app targeting Android or iOS.

Most mobile authentication scenarios are complex, often requiring interaction with the native operating system for use cases such as:

  • Retrieving authentication tokens and cookies following successful authentication
  • Securely storing tokens and user credentials
  • Performing offline authentication and automatic login

Writing code to handle each of the required authentication scenarios, especially within hybrid mobile applications, is tedious and can be error-prone.

The cordova-plugin-oracle-idm-auth plugin significantly reduces the amount of code required to authenticate your users and handle the various error cases, abstracting the complex logic behind a set of simple JavaScript APIs so you can focus on your mobile app's functionality.

To add this plugin to your Oracle JET app:

$ ojet add plugin cordova-plugin-oracle-idm-auth

 

To know more about the Oracle JET CLI, visit the ojet-cli project.

To add this plugin to your plain Apache Cordova app:

$ cordova plugin add cordova-plugin-oracle-idm-auth

 

Although the plugin itself contains detailed documentation, stay tuned for more technical posts describing common usage scenarios.

The release of this plugin continues Oracle’s commitment to the open source Apache Cordova community, along with these previously released plugins:

Hope you enjoy, and if you have any feedback, please submit issues to our Cordova projects on GitHub.

For more technical articles, you can also follow OracleDevs on Medium.com.

Related content

 

Introducing Data Hub Cloud Service to Manage Apache Cassandra and More

Wed, 2017-11-22 11:00

Today we are introducing the general availability of the Oracle Data Hub Cloud Service. With Data Hub, developers are now able to initialize and run Apache Cassandra clusters on-demand without having to manage backups, patching and scaling for Cassandra clusters. Oracle Data Hub is a foundation for other databases like MongoDB, Postgres and more coming in the future. Read the full press release from OpenWorld 2017.

The Data Hub Cloud Service provides the following key benefits:

  • Dynamic Scalability – users have access to an API and a web console to scale up/down or out/in within minutes, sizing their clusters according to their needs.
  • Full Control – as development teams migrate from on-premises environments to the cloud, they retain full secure shell (SSH) access to the underlying virtual machines (VMs) hosting these database clusters, so they can log in and perform management tasks the same way they always have.

Developers may be looking for more than relational data management for their applications. MySQL and Oracle Database have been available on Oracle Cloud for quite some time. Today, application developers want the flexibility to choose the database technology according to the data models they use within their applications. This use-case-specific approach lets developers choose the Oracle Database Cloud Service when appropriate, and in other cases choose other database technologies such as MySQL, MongoDB, Redis, or Apache Cassandra.

In such a polyglot development environment, enterprise IT faces the key challenge of how to support, and lower the total cost of ownership (TCO) of managing, such open source database technologies within the organization. This is precisely the problem that the Oracle Data Hub Cloud Service addresses.

How to Use Data Hub Cloud Service

Using the Data Hub Cloud Service to provision, administer, or monitor an Apache Cassandra database cluster is simple. You can create a cluster with as many nodes as you like in two steps:

  • Step 1
    • Choose between Oracle Cloud Infrastructure and Oracle Cloud Infrastructure Classic regions
    • Choose between the latest (3.11) and stable (3.10) Apache Cassandra database versions
  • Step 2
    • Choose the cluster size, compute shape (processor cores) and the storage size. Don't worry about choosing the right value here. You can always dynamically resize when you need additional compute power or storage.
    • Provide the shell access information so that you have full control of your database clusters.

Flexibility to choose the Database Version

When you create the cluster, you have the flexibility to choose the Apache Cassandra version. Additionally, you can easily patch to the latest release as it becomes available for that version. Once you choose to apply a patch, the service applies it across your cluster in a rolling fashion to minimize downtime.

Dynamic Scaling

During provisioning, you have the flexibility to choose the cluster size, the compute shapes (compute core and memory), and the storage sizes for all the nodes within the cluster. This flexibility allows you to choose the compute and storage shapes that better meet your workload and performance requirements.
If you want to add nodes to your cluster (commonly referred to as scaling out) or additional storage to the nodes in the cluster, you can easily do so using the Data Hub Cloud Service API or console, so you don't have to worry about sizing your workload at provisioning time.

Full Control

You have full shell access to all the nodes within the cluster, giving you full control of the underlying database and its storage. You also have full flexibility to log in to these nodes and configure the database instances to meet your scalability and performance requirements.

Once you select Create, the service creates the compute instances, attaches block volumes to the nodes, and lays out the Apache Cassandra binaries on each node in the cluster. On the Oracle Cloud Infrastructure Classic platform, the service also automatically enables the network access rules so that you can begin using the CQL (Cassandra Query Language) tool to create your Cassandra database. On the Oracle Cloud Infrastructure platform, you have full control and flexibility to create the cluster within a specific subnet in a virtual cloud network (VCN).

Getting Started

This service is accessible via the Oracle My Services dashboard for users already on Universal Credits. If you're not already using Oracle Cloud, you can start with free cloud credits to explore the services. Please give this service a spin and share your feedback.

Additional Reference

Linuxgiving! The Things We do With and For Oracle Linux

Tue, 2017-11-21 17:00

By: Sergio Leunissen - VP, Operating Systems & Virtualization 

It is almost Thanksgiving, so you may be thinking about things that you're thankful for: good food, family, and friends. When it comes to making your work life better as an enterprise software developer, your list might include Docker, Kubernetes, VirtualBox, and GitHub. I'll bet Oracle Linux wasn't on your list, but here's why it should be...

As enterprises move to the Cloud and DevOps increases in importance, application development also has to move faster. Here’s where Oracle Linux comes in. Not only is Oracle Linux free to download and use, but it also comes pre-configured with access to our Oracle Linux yum server with tons of extra packages to address your development cravings, including:

If you're still craving something sweet, you can add less complexity to your list: with Oracle Linux you have the advantage of running the exact same OS and version in development as you do in production (on-premises or in the cloud).


And we're constantly working on ways to spice up your experience with Linux, from things as simple as "make it boot faster," to always-available diagnostics for network file system mounts, to ways large systems can efficiently parallelize tasks. These posts, from members of the Oracle Linux Kernel Development team, show how we are doing this:

Accelerating Linux Boot Time

Pasha Tatashin describes optimizations to the kernel to speed up booting Linux, especially on large systems with many cores and large memory sizes.

Tracing NFS: Beyond tcpdump

Chuck Lever describes how we are investigating new ways to trace NFS client operations under heavy load and on high-performance network fabrics, so that system administrators can better observe and troubleshoot this network file system.

ktask: A Generic Framework for Parallelizing CPU-Intensive Work

Daniel Jordan describes a framework that’s been submitted to the Linux community which makes better use of available system resources to perform large scale housekeeping tasks initiated by the kernel or through system calls.

On top of this, you can have your pumpkin, apple, or whatever pie you like and eat it too, since Oracle Linux Premier Support is included with your Oracle Cloud Infrastructure subscription. Yes, that includes Ksplice zero-downtime updates and much more at no additional cost.

Almost everyone's business runs on Linux now; it's at the core of today's cloud computing. There are still areas to improve, but if you look closely, Oracle Linux is the OS you'll want for app dev in your enterprise.

Podcast: What's Hot? Tech Trends That Made a Real Difference in 2017

Wed, 2017-11-15 05:00

Innovation never sleeps, and tech trends come at you from every angle. That's business as usual in the software developer's world. In 2017, microservices, containers, chatbots, blockchain, IoT, and other trends drew lots of attention and conversation. But what trends and technologies penetrated the hype to make a real difference?

In order to get a sense of what's happening on the street, we gathered a group of highly respected software developers, recognized leaders in the community, crammed them into a tiny hotel room in San Francisco (they were in town to present sessions at JavaOne and Oracle OpenWorld), tossed in a couple of microphones, and asked them to talk about the technologies that actually had an impact on their work over the past year. The resulting conversation is lively, wide-ranging, often funny, and insightful from start to finish. Listen for yourself.

The Panelists

(listed alphabetically)

Lonneke Dikmans Lonneke Dikmans
Chief Product Officer, eProseed
Oracle ACE Director
Developer Champion

 

Lucas Jellema
Chief Technical Officer, AMIS Services
Oracle ACE Director
Developer Champion

 

Frank Munz
Software Architect, Cloud Evangelist, Munz & More
Oracle ACE Director
Developer Champion

 

Pratik Patel
Chief Technical Officer, Triplingo
President, Atlanta Java Users Group
Java Champion
Code Champion

 

Chris Richardson
Founder, Chief Executive Officer, Eventuate Inc.
Java Champion
Code Champion

 

Additional Resources

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


An API First Approach to Microservices Development

Wed, 2017-11-08 17:18

Co-author: Claudio Caldato, Sr. Director Development

Introduction 

Over the last couple of years, our work on various microservices platforms in the cloud has brought us into close collaboration with many customers. As a result, we have developed a deep understanding of what developers struggle with when adopting microservices architectures, in addition to deep knowledge of distributed systems. A major motivation for joining Oracle (besides working with a great team of very smart people from startups, Amazon, and Microsoft) was the opportunity to build from scratch a platform, based on open source components, that truly addresses the developer. In this initial blog post on our new platform, we describe what drove its design and present an overview of the architecture.

What developers are looking for

Moving to microservices is not an easy transition for developers who have been building applications using more traditional methods. There are a lot of new concepts and details developers need to become familiar with and consider when they design a distributed application, which is what a microservices application is. Throw containers and orchestrators into the mix, and it becomes clear why many developers struggle to adapt to this new world.

Developers now need to think about their applications in terms of a distributed system with a lot of moving parts; as a result, challenges such as resiliency, idempotency and eventual consistency, just to name a few, are important aspects they now need to take into account. 

In addition, with the latest trends in microservices design and best practices, they also need to learn about containers and orchestrators to make their applications and services work. Modern cluster management and container orchestration solutions such as Kubernetes, Mesos/Marathon or Docker Swarm are improving over time, which simplifies things such as networking, service discovery, etc., but they are still an infrastructure play. The main goal of these tools and technologies is to handle the process of deploying and connecting services, and guarantee that they keep running in case of failures. These aspects are more connected with the infrastructure used to host the services than the actual services themselves. Developers need to have a solid understanding of how orchestrators work, and they need to take that into account when they build services. Programming model and infrastructure are entangled; there is no clear separation, and developers need to understand the underlying infrastructure to make their services work. 

One obvious thing that we have heard repeatedly from our customers and the open source community is that developers really want to focus on the development of the logic, not on the code necessary to handle the execution environment where the service will be deployed, but what does that really mean?  

It means that above all, developers want to focus on APIs (the only thing needed to connect to another service), develop their services in a reactive style, and sometimes just use ‘functions’ to perform simple operations, when deploying and managing more complex services involves too much overhead.  

There is also a strong preference among developers to have a platform built on an OSS stack to avoid vendor lock-in, and to enable hybrid scenarios where public cloud is used in conjunction with on-premise infrastructure.  

It was this copious feedback from customers and developers that served as our main motivation to create an API-first microservices platform, which is based on the following key requirements:

  • Developers can focus solely on writing code: API-first approach 
  • It combines the traditional REST-based programming model with a modern reactive event-driven model  
  • It consolidates traditional container-based microservices with a serverless/FaaS infrastructure, offering more flexibility so developers can pick the right tool for the job 
  • Easy onboarding of 'external' services so developers can leverage things such as cloud services, and can connect to legacy or 3rd party services easily 

We were asked many times how we would describe our platform, as it covers more than just microservices; so, in a humorous moment, we came up with the Grand Unified Theory of Container Native Development.

 

The Platform Approach 

So what does the platform look like, and what components are being used? Before we get into the details, let's look at our fundamental principles for building out this platform:

  • Opinionated and open: make it easy for developers to get productive right away, but also provide the option to go deep in the stack or even replace modules.
  • Cloud vendor agnostic: although the platform will work best on our New Application Development Stack, customers need to be able to install it on top of any cloud infrastructure.
  • Open source-based stack: we are strong believers in OSS; our stack is built entirely on popular OSS components and will itself be available as OSS.

The Platform Architecture 

Figure 1 shows the high level architecture of our platform and the functionality of each component. 

Let’s look at all the major components of the platform. We start with the API registry as it changes how developers think about, build, and consume microservices. 

API Registry: 

The API registry stores all the information about available APIs in the cluster. Developers can publish an API to make it easier for other developers to use their service. Developers can search for a particular service or function (if there is a serverless framework installed in the cluster). Developers can test an API against a mock service even though the real service is not ready or deployed yet. To connect to a microservice or function in the cluster, developers can generate a client library in various languages. The client library is integrated into the source code and used to call the service. It will always automatically discover the endpoint in the cluster at runtime so developers don’t have to deal with infrastructure details such as IP address or port number that may change over the lifecycle of the service.  In future versions, we plan to add the ability for developers to set security and routing policies directly in the API registry. 

Event Manager: 

The event manager allows services and functions to publish events that other services and functions can subscribe to. It is the key component enabling an event-driven programming model in which event providers publish events and consumers (either functions or microservices) consume them. With the event manager, developers can combine a traditional REST-based programming model with a reactive/event-driven model in a consolidated platform that offers a consistent experience in terms of workflow and tools.
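As a rough illustration of that publish/subscribe model, consider the toy in-process sketch below. This is illustrative only, not the platform's actual API; the EventManager class and all names in it are invented for this example.

```javascript
// Toy illustration of the event-driven model: providers publish events
// to a topic, and consumers (functions or services) subscribe to them.
class EventManager {
  constructor() {
    this.subscribers = new Map();   // topic -> [handler, ...]
  }
  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }
  publish(topic, event) {
    (this.subscribers.get(topic) || []).forEach(handler => handler(event));
  }
}

// An event provider publishes an order event; two consumers react to it.
const events = new EventManager();
events.subscribe('order.created', e => console.log('billing sees order', e.id));
events.subscribe('order.created', e => console.log('shipping sees order', e.id));
events.publish('order.created', { id: 42 });
```

In a real distributed platform the topics, delivery guarantees, and consumer registration would of course live outside the process, but the decoupling between publisher and consumers is the same idea.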

Service Broker: 

In our transition to working for a major cloud vendor, we have seen that many customers choose to use managed cloud services instead of running and operating their services themselves on a Kubernetes cluster. A popular example of this is Redis cache, offered as a managed service by almost all major cloud providers. As a result, it is very common that a microservice-based application not only consists of services developed by the development team but also of managed cloud services. Kubernetes has introduced a great new feature called service catalog which allows the consumption of external services within a Kubernetes cluster. We have extended our initial design to not only configure the access to external services, but also to register user services with the API registry, so that developers can easily consume them along with the managed services. 

In this way external services, such as the ones provided by the cloud vendor, can be consumed like any other service in the cluster with developers using the same workflow: identify the APIs they want to use, generate the client library, and use it to handle the actual communication with the service. 

Service Broker is also our way to help developers engaged in modernizing their existing infrastructure, for instance by enabling them to package their existing code in containers that can be deployed in the cluster. We are also considering solving for scenarios in which there are existing applications that cannot be modernized; in this case, the Service Broker can be used to ‘expose’ a proxy service that publishes a set of APIs in the API Registry, thereby making the consumption of the external/legacy system similar to using any other microservice in the cluster.  

Kubernetes and Istio: 

We chose Kubernetes as the basis for our platform, as it is emerging as the most popular container management platform for running microservices. Another important factor is that the community around Kubernetes is growing rapidly, and Kubernetes is supported by every major cloud vendor.

As mentioned before, one of our main goals is to reduce complexity for developers. Managing communication among multiple microservices can be a challenging task. For this reason, we added Istio as a service mesh to our platform. With Istio we get monitoring, diagnostics, complex routing, resiliency, and policies for free. This removes a big burden from developers, as they would otherwise need to implement those features themselves; with Istio, they are available at the platform level.
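To give a flavor of what "resiliency for free" means, the sketch below shows the kind of declarative routing policy Istio applies on behalf of a service. This uses the newer VirtualService schema (Istio's configuration API has changed across versions; its earliest releases used a RouteRule resource instead), and the service name, retry counts, and timeouts are placeholders:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
    - catalog
  http:
    - route:
        - destination:
            host: catalog
      retries:
        attempts: 3          # retry failed calls up to three times
        perTryTimeout: 2s    # per-attempt deadline
      timeout: 10s           # overall request deadline
```

The service's own code contains no retry or timeout logic; the mesh enforces the policy at the platform level.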

Monitoring 

Monitoring is an important component of a microservices platform. With potentially many moving parts, the system needs a way to monitor its behavior at runtime. For our microservices platform, we chose to offer an out-of-the-box monitoring solution which is, like the other components of our platform, based on consolidated and battle-tested technologies such as Prometheus, Zipkin/Jaeger, Grafana, and Vizceral.

In the spirit of pushing the API-first approach to monitoring as well, our monitoring solution offers developers the ability to see how microservices are connected to each other (via Vizceral), see the data flowing between them and, in the future, gain insight into which APIs have been used. Developers can then use distributed tracing information in Zipkin/Jaeger to investigate potential latency issues or improve the efficiency of their services. In the future, we plan to add integration with other services. For instance, we will add the ability to correlate requests between microservices with data structures inside the JVM, so developers can optimize across multiple microservices by following how data is processed for each request.

What’s Next? 

This is an initial overview of our new platform and some insight into our motivation and the design guidelines we used. We will follow with more blog posts that go deeper into the various aspects of the platform as we get closer to our initial OSS release in early 2018. Meanwhile, please take a look at our JavaOne session.

For more background on this topic, please see our other blog posts in the Getting Started with Microservices series. Part 1 discusses some of the main advantages of microservices, and touches on some areas to consider when working with them. Part 2 considers how containers fit into the microservices story. Part 3 looks at some basic patterns and best practices for implementing microservices. Part 4 examines the critical aspects of using DevOps principles and practices with containerized microservices. 

Related content

Introducing Dev Gym! Free Training on SQL and More

Wed, 2017-11-08 10:52

There are many ways to learn. For example, you can read a book or blog post, watch a video, or listen to a podcast. All good stuff, which is what you'd expect me to say since I am the author of ten books on the Oracle PL/SQL language, and offer scores of videos and articles on my YouTube channel and blog, respectively.

But there's one problem with those learning formats: they're passive. One way or another, you sit there, and ingest data through your eyes and ears. Nothing wrong with that, but we all know that when it comes to writing code, that sort of knowledge is entirely theoretical.

If you want to get stronger, you can't just read about weightlifting and running. 

You've got to hit the gym and lift some weights. You've got to put on your running shoes and pound the pavement. 

Or as Confucius said, around 450 BC:

Tell me and I will forget.
Show me and I may remember.
Involve me and I will understand.

It's the same with programming. Until you start writing code, and until you start reading and struggling to understand code, you haven't really learned anything.  To get good at programming, you need to engage in some active learning.

That's what the Oracle Dev Gym is all about. And it's absolutely, totally free. 

Learn from Quizzes

Multiple choice quizzes are the core learning mechanism on the Oracle Dev Gym. Our library of over 2,500 quizzes deepens your expertise by challenging you to read and understand code, a great complement to writing and running code.

The home page offers several featured quizzes, hand-picked by experts from the Dev Gym's library.

Looking for something in particular? Enter a keyword or two in the search bar and we'll show you what we've got on that topic.

After submitting your answer, you can explore the quiz's topic in more detail, with full verification code scripts, links to related resources and other quizzes, and discussion on the quiz.

You accumulate points for all the quizzes you answer, but your performance on these quizzes is not ranked. To play competitively against other developers, try our weekly Open Tournaments.

Check out this video on Dev Gym quizzes. 

Learn from Workouts

Quizzes are great, but when you know nothing about the topic of a quiz, they can leave you rather more confused than educated.

So to help you get started with concepts, we’ve created workouts. These contain resources to teach you about an aspect of programming, followed up by questions on the topic to test and reinforce your newly-gained knowledge.

A workout typically consists of a video or article followed by several quizzes. But a workout could also consist simply of a set of quizzes. Either way, go through the exercises of the workout and you will find yourself better able to tackle your real world programming challenges. Build your own custom workout, pick from available workouts, and set up daily workouts (single quiz workouts that expire each day).

Check out this video on Dev Gym workouts. 

Learn from Classes

Perhaps you’re looking for something more structured to help you learn. Then a Dev Gym class might be a perfect fit.

You can think of these as "mini-MOOCs." A MOOC is a massive open online course. The Oracle Learning Library offers a variety of MOOCs, and I strongly encourage you to try them out. Generally, you should expect a 3-5 hour per week commitment over several weeks.

Dev Gym classes are typically lighter weight. Each class module consists of a video or blog post, followed by several quizzes to reinforce what you've learned.

A great example of a Dev Gym class is Databases for Developers, a 12-week course by Chris Saxon, a member of the AskTOM answer team and all-around SQL wizard.

Check out this video on Dev Gym classes. 

Open Tournaments

Sometimes you just want to learn, and other times you want to test that knowledge against other developers. Let's face it: lots of humans like to compete, and we make it easy for you to do that with our weekly Open tournaments.

Each Saturday, we publish a brand-new quiz on SQL, PL/SQL, database design, or logic (this list will likely grow over time). You have until the following Friday to submit your answer. And if you don't want to compete but still want to tackle those brand-new quizzes, we let you opt out of ranking.

But for those of you who like to compete, you can check your rankings on the Leaderboard to see how you did the previous week, month, quarter, and year. And if you finish the year ranked in the top 50 in a particular technology, you are eligible to compete in the annual championship.

Note that we do not show the results of your submission for an Open tournament until that week is over. Since the quiz is competitive, we don't want to make it easy for players to share results with others who may not yet have taken the quiz. And since the quiz is competitive, we also have rules against cheating. Read Competition Integrity for a description of what constitutes cheating at the Oracle Dev Gym.

Work Out Those Oracle Muscles!

So...are you ready to start working out those Oracle muscles and stretch your Oracle skills?

Visit the Oracle Dev Gym. Take a quiz, step up to a workout, or explore our classes.

Oh, and did I mention? It's all free!

 

Podcast: Chatbot Development: First Steps and Lessons Learned - Part 2

Wed, 2017-10-18 12:42

The previous podcast featured a discussion of chatbot development with a panel of developers who were part of a program that provided early access to the Oracle Intelligent Bots platform available within the Mobile Cloud Service. In this podcast we continue the discussion of chatbot development with an entirely new panel of developers who also had the opportunity to work with that same Intelligent Bots beta release.

Panelists Mia Urman, Peter Crew, and Christoph Ruepprich compare notes on the particular challenges that defined their chatbot development experiences, and discuss what they did to meet those challenges. Listen!

The Panelists

Oracle ACE Director Mia Urman
Chief Executive Officer, AuraPlayer Limited, Brookline, Massachusetts.

Peter Crew
Director, SDS Group; Chief Technical Officer, MagiaCX Solutions, Perth, Australia

Oracle ACE Christoph Ruepprich
Infrastructure Senior Principal, Accenture Enkitec Group, Dallas, TX

Additional Resources

Subscribe to the Oracle Developer Community Podcast


A Simple Guide to Oracle Intelligent Bots Error Handling

Tue, 2017-10-03 03:50

Like any software development, building chatbots is rarely perfect the first time. In particular, areas such as the conversation flow or backend system integration, which are programmed, are more likely to be subject to bugs and errors. The assumption is that where there is room for failure, there should be a way to handle those failures; and in fact, there is. This blog post explains how to handle errors in Oracle Intelligent Bots.

Categories of Errors

There are three broad categories of errors that may occur in the context of a bot.

The first category is design-time errors in the dialog definition, for example a missing colon or invalid indentation. The good news is that the Intelligent Bots designer validates the dialog definition at design time and highlights which line it thinks is in error.

The second category relates to system components at runtime. Component properties can have their values assigned at runtime, for which bot designers use an expression such as ${myKeepTurnVar.value} that references a context variable defined in the dialog flow. If a component property attempts to read the variable value before it is set, this produces a failure at the component level.

The third category is a problem within a custom component, for example a failed connection to a backend service or failed input validation. Possibly the backend system returns an HTTP 404 because it can't find the requested data, or an HTTP 5xx because the backend system is down. Given their nature, these problems don't show up at design time, only at runtime.

Layers of Error Handling

So now that you know about the categories of errors bot designers and developers usually deal with, let's have a look at how these can be handled.

  • Implicit error handling is what Oracle Intelligent Bots does when no error handler is defined at all, which is the default. You wouldn't want to put a bot into production with only this level of error handling.
  • Component-level error handling allows conversation flow designers to catch errors as close as possible to their cause.
  • Global error handling is defined at the chatbot level. All errors that are not handled at the component level are passed to this error handler.
Component Level Error Handling

To handle errors, each component can have an error transition property set. The error transition references a state in the same dialog flow that the dialog engine navigates to in case of an error.

So the first thing to learn is that an error transition in Oracle Intelligent Bots doesn't handle errors itself but triggers navigation.

The Oracle BotML example below shows a definition of a System.Out component with a missing value for the "keepTurn" property. The state has an error transition defined that points to a state with the name "handleError".

welcomeState:
  component: "System.Output"
  properties:
    text: "Welcome ${profile.firstName} ${profile.lastName}"
    keepTurn:
  transitions:
    next: "showOptions"
    error: "handleError"

 

Note: The keepTurn property must have a value defined. The code above is not valid and will fail at runtime. At the time of writing, design-time validation does not catch a missing keepTurn value.

The BotML below shows what the "handleError" state may look like:

handleError:
  component: "System.Output"
  properties:
    text: "This is a problem caught by the component. The error state is the \"${system.errorState}\" state"
  transitions:
    return: "done"

In this example, the error handler simply displays a message containing the errorState, which is the name of the dialog state in which the error occurred. If you wanted to componentize the error handler, you could build a specific custom component and have the error handler state reference it, which would be a more elegant solution:

handleError:
  component: "my.errorhandler"
  properties:
    errorState: "${system.errorState}"
    user: "${profile.firstName} ${profile.lastName}"
    isDebug: "true"
  transitions:
    next: "start"

Custom components can be used within any state in a dialog flow. In the example above, the custom component has input properties defined for the error state, the user, and a flag indicating whether the component is used in development or in production. The latter could be used to determine the message printed to the user.

What makes a custom component special as an error handler is that it can log the problem, try to recover or, in serious cases, perform incident reporting so that an administrator becomes aware of a runtime problem.
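As a sketch of what such a component could look like, and assuming the metadata()/invoke() shape of the Oracle Bots Node.js custom component SDK (the component name and property names mirror the BotML above; the message texts are illustrative):

```javascript
// Hypothetical error-handler custom component. The metadata()/invoke()
// contract is assumed from the Oracle Bots Node.js custom component SDK.
const errorHandler = {
  metadata: () => ({
    name: 'my.errorhandler',
    properties: {
      errorState: { required: true, type: 'string' },
      user: { required: false, type: 'string' },
      isDebug: { required: false, type: 'boolean' }
    },
    supportedActions: []
  }),

  invoke: (conversation, done) => {
    const props = conversation.properties();
    // Log the problem so an administrator becomes aware of it; in a
    // production bot this could also raise an incident.
    console.error(`Error in state "${props.errorState}" for user ${props.user}`);
    // Show a technical message during development, a friendly one otherwise.
    const text = props.isDebug === 'true'
      ? `Debug: failure in state "${props.errorState}"`
      : "Sorry, something went wrong. Let's start over.";
    conversation.reply({ text });
    done();
  }
};

module.exports = errorHandler;
```

The component stays deliberately small: all it does is log and reply, so the error handler itself has little room to fail.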

The custom component could also dynamically determine the next state to visit by updating a context variable that is configured as the value for the "next" element in the "transitions" section.

For example:

handleError:
  component: "my.errorhandler"
  properties:
    errorState: "${system.errorState}"
    user: "${profile.firstName} ${profile.lastName}"
    isDebug: "true"
  transitions:
    next: "${variableName.value}"

Global Error Handling

So defining error handling at the component level is the first strategy. The second line of defence is a global error handler. There is a good reason for defining a bot-wide global error handler: it avoids falling back to the implicit error handler. The global error handler is defined using an "error" element in the bot header definition, as shown below.

metadata:
  platformVersion: 1.0
main: true
name: TheBotName
context:
  variables:
    iResult: "nlpresult"
error: "handleGlobalError"
states:
  …

 

Because this error handler is defined on the bot level, it behaves exactly like the component-level error handler in that it triggers navigation to a state defined as the error handler, "handleGlobalError" in this example.

So everything written in the previous section about the component-level error handler applies here as well. However, special caution should be taken when using custom components to handle global errors, as the global error setting replaces the implicit handler: an error in the custom component itself could then lead to an infinite loop.
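One way to stay safe is to keep the global handler to built-in components only. A minimal sketch (the message text is illustrative):

```yaml
# Global handler built only from System.Output, so it cannot itself fail
# the way a custom component could and risk an infinite loop.
handleGlobalError:
  component: "System.Output"
  properties:
    text: "Sorry, something went wrong. Let's start over."
  transitions:
    return: "done"
```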

Implicit Error Handler

This error handler is used if nothing else has been defined. The error message displayed by this error handler is

Oops I'm encountering a spot of trouble. Please try again later...

Hopefully you agree that this is not a message to display to users in a production bot. However, it is important that a generic implicit error handler like this exists, because bot design usually starts with the use case at hand, not with custom error handling.

Learn more

To learn more about Oracle Intelligent Bots and Chatbots, visit http://oracle.com/bots

 

Feature image courtesy of Sira Anamwong at FreeDigitalPhotos.net

Announcing Fn–An Open Source Serverless Functions Platform

Mon, 2017-10-02 17:00

We are very excited to announce our new open source, cloud agnostic, serverless platform–Fn.

The Fn project is a container native Apache 2.0 licensed serverless platform that you can run anywhere–any cloud or on-premise. It’s easy to use, supports every programming language, and is extensible and performant. 

We've focused on making it really easy to get started so you can try it out in just a few minutes and then use more advanced features as you grow into it. Check out our quickstart to get up and running and deploying your own function in a few minutes.
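To give a flavor of what a function looks like, the simplest Fn functions are containers that read the request body from stdin and write the response to stdout. A hedged Node.js sketch (the exact I/O contract depends on the function format you choose, so treat the wiring as an assumption):

```javascript
// Pure function logic, kept separate from the container I/O wiring so it
// is easy to test outside Fn.
const handler = (input) => {
  const name = (input || '').toString().trim() || 'world';
  return `Hello, ${name}!`;
};

// Assumed cold-container format: request body on stdin, response on stdout.
if (!process.stdin.isTTY) {
  let body = '';
  process.stdin.on('data', (chunk) => { body += chunk; });
  process.stdin.on('end', () => process.stdout.write(handler(body)));
}

module.exports = { handler };
```

Separating the handler from the wiring is also what makes "hot" deployment later a small change rather than a rewrite.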

History

The Fn Project is being developed by the same team that created IronFunctions. The team pioneered serverless technology and ran a hosted serverless platform for six years. After running billions of containers for thousands of customers, pre and post Docker, the team has learned a thing or two about running containers at scale, specifically in a functions-as-a-service style.

Now at Oracle, the team has taken this knowledge and experience and applied it to Fn.

Features

Fn has a bunch of great features for development and operations.

  • Easy-to-use command-line tool to develop, test, and deploy functions
  • One dependency: Docker
  • Hot functions for high performance applications
  • Lambda code compatibility - export your Lambda code and run it on Fn
  • FDKs (Function Development Kits) for many popular languages
  • Advanced Java FDK with JUnit test framework
  • Deploy Fn with your favorite orchestration tool such as Kubernetes, Mesosphere and Docker Swarm
  • Smart load balancer built specially for routing traffic to functions
  • Extensible and modular, enabling custom add-ons and integrations

The project homepage is fnproject.io, but all the action is on GitHub at github.com/fnproject/fn.

We welcome your feedback and contributions to help make Fn the best serverless platform out there. 

Related Content

 

Image credit: Cuito Cuanavale (Creative Commons Attribution License)

Cloud Foundry Arrives on Oracle Cloud with a Provider Interface and Service Brokers

Mon, 2017-10-02 17:00

As the adoption of Oracle Cloud grows, there is increasing demand to bring a variety of workloads to run on it. Due to the popularity of the Cloud Foundry application development platform, Oracle customers have, over the last year, requested the option of running Cloud Foundry on Oracle Cloud. Reasons include:

  • Cloud Foundry is a very popular application development platform and many Cloud Foundry developers are using Oracle Cloud for other interrelated projects

  • Oracle Cloud has a large ecosystem of Platform services that can be used to augment Cloud Foundry applications or, conversely, Cloud Foundry can be used to extend Oracle services in new ways.

  • Many Cloud Foundry users have significant Oracle workloads that they need to integrate with, and a growing number of Oracle customers are finding it easier to move those workloads to Oracle Cloud. Co-locating Cloud Foundry workloads near those Oracle workloads in the cloud enables them to easily interoperate and integrate.

So what has Oracle done to make this possible?

Cloud Foundry Running on Oracle Cloud

Over the last several months, Pivotal and Oracle engineering teams have  been collaborating to build out several pieces of an integrated solution to run Cloud Foundry on Oracle Cloud.

We started with the BOSH Cloud Provider Interface. This layer of the Cloud Foundry architecture abstracts away the infrastructure provider to the Cloud Foundry application developer. This allows Cloud Foundry to be installed on various cloud providers like AWS, Microsoft Azure, Google Cloud Platform and now Oracle Cloud Infrastructure.

The code for this was just pushed to our GitHub repositories and is being actively worked on by the Oracle team. At this stage it is not yet GA, so use it for proofs of concept. You can take a look at it here.

This work has been a great collaboration between Oracle and Pivotal. Over the next few months, our expectation is that this CPI will become regularly tested as part of the standard Cloud Foundry build processes and part of the collection of CPIs available for Cloud Foundry.

Oracle Cloud Service Brokers for Cloud Foundry

Beyond running Cloud Foundry on Oracle Cloud Infrastructure, one of the key technical requirements we’ve heard from developers is the desire to integrate with various Oracle Cloud Services – from Database to WebLogic Server to MySQL.

Cloud Foundry has a natural model for doing this through an interface called a Service Broker. Service brokers enable Cloud Foundry applications to easily interact with services on or off Cloud Foundry. Operations include provisioning and de-provisioning, binding and unbinding, updating instances and catalog management.  

The first service broker type is for our Oracle Cloud Platform PaaS services. In this model, by configuring one service broker – hosted on Oracle Cloud – we enable Cloud Foundry to interact with upwards of five different PaaS services, including Database Cloud Service, Java Cloud Service, MySQL Cloud Service, DataHub Cloud Service (Cassandra) and Event Hub Cloud Service (Kafka). This is an initial set of cloud services, and Oracle will evaluate others based on market demand. The diagram below illustrates this service broker approach.

The second service broker Oracle has developed is for Oracle Cloud Infrastructure capabilities – in particular the Oracle Cloud Infrastructure Oracle Database Cloud Service and Oracle Cloud Infrastructure Object Storage. These are service brokers that can be installed and configured in Cloud Foundry to give direct access to these Oracle Cloud Infrastructure services. The diagram below illustrates this model.

Deployment Approaches

All of this integration between Cloud Foundry and the Oracle Cloud naturally raises the question of which deployment topologies this solution will typically be used in. As an initial overview, our expectation is that there will be three types of topologies:

  1. All in Oracle Cloud. In this approach, both Cloud Foundry and the services it interacts with run in the Oracle Cloud – nothing runs on premises. The diagram below brings together the BOSH CPI and the two service brokers to illustrate this.

  2. Hybrid. In the second approach, Cloud Foundry runs off Oracle Cloud – either on premises or potentially on other cloud infrastructures – but integrates remotely with Oracle Cloud services via the service brokers. This approach is clearly constrained architecturally by issues such as network latency but, depending on the cloud services, may be a useful topology for some use cases. The diagram below illustrates this in action.

  3. All on premises. The third approach leverages a capability Oracle calls Oracle Cloud at Customer, which enables customers to run Oracle Cloud services in their own data center. This approach is particularly useful for customers who have data residency, regulatory, or even performance/latency concerns about running Cloud Foundry on premises and reaching out to public clouds. Oracle Cloud at Customer includes all the services available via the Oracle PaaS Service Broker running on Oracle Cloud Machine, as well as Oracle Exadata Cloud Machine – all running on premises. The diagram below illustrates this topology in action.

Overall there’s a lot of choice and opportunity here and these three different approaches are really meant to give ideas of how it could be done rather than being prescriptive.

What’s Next?

This work is the start of a journey to run Cloud Foundry workloads on and interacting with Oracle Cloud. Watch for more announcements as we move this work forward over the next few months!

For more information on the Oracle Cloud’s BOSH CPI and Oracle Cloud Infrastructure Service Brokers see this blog.  For Pivotal’s perspective on this work, see this blog.

Meet the New Application Development Stack - Managed Kubernetes, Serverless, Registry, CI/CD, Java

Mon, 2017-10-02 17:00
  • Oracle OpenWorld 2017, JavaOne, and Oracle Code Container Native Highlights
  • New Oracle Container Native App Dev Platform: Managed Kubernetes + CI/CD + Private Registry Service
  • Announcing Fn: an Open Source Functions as a Service Project (Serverless)
  • Latest from Java 9: Driving the Build, Deploy, and Operate Loop
The Container Native Challenge

Today, customers face a difficult decision when selecting a container-native application stack.  Either they choose from a mind-boggling menu of non-integrated, discrete and proprietary components from their favorite cloud provider – thus signing up for layers of DIY integration and administration – and slowly getting locked into that cloud vendor drip-by-drip.  Alternatively, many enterprises venture down a second path and select an opinionated application development stack – which looks “open” at first glance but in reality, consists of closed, forked, and proprietary components – well-integrated, yes, but far from open and cloud neutral.   More lock-in?  Absolutely.


Cloud Native Landscape (github.com/cncf/landscape)

So, what if you could combine an integrated developer experience with an open, cloud-neutral application stack built to avoid cloud lock-in?

Announcing Oracle’s Container Native Application Development Platform

The Container Native Application Development team today at Oracle OpenWorld 2017 announced the Oracle Container Native Application Development Platform – bringing together three new services – managed Kubernetes, CI/CD, and a private registry – in a frictionless, integrated developer experience. The goal? To provide a complete, enterprise-grade suite of cloud services to build, deploy, and operate container-native microservices and serverless applications. Developers, as they rapidly adopt container-native technologies for new cloud-native apps and for migrating traditional apps, are becoming increasingly concerned about being locked in by their cloud vendors and their application development platform vendors. Moreover, they are seeking the nirvana of the true hybrid cloud: using the same stack in the cloud – any cloud – as they run on premises.

Directly addressing this need, the Oracle Container Native Application Development Platform includes a new managed Kubernetes service – Oracle Container Engine – to create and manage Kubernetes clusters for secure, high-performance, high-availability container deployment. Second, a new private Oracle Container Registry Service for storing and sharing container images across multiple deployments. And finally, a new full container lifecycle management CI/CD service – Oracle Container Pipelines – based upon the Wercker acquisition, for continuous integration and delivery of microservice applications.

Why should you care? Because unlike other cloud providers and enterprise appdev stack vendors, the Container Native Application Development Platform provides an open, integrated container developer experience as a fully-managed, high-availability service on top of an enterprise-grade cloud (bare metal & secure).  A free community edition of Wercker and early adopter access to the full Oracle Container Native Application Development Platform are available at wercker.com.

Meet Fn: An Open Source Serverless Solution

And as if that weren't enough, today we open sourced Fn, a serverless developer platform project (fnproject.io). Developers using Oracle Cloud Platform, their laptop, or any cloud can now build and run applications by just writing code, without provisioning, scaling, or managing any servers – all of that is taken care of transparently by the cloud. This allows them to focus on delivering value and new services instead of managing servers, clusters, and infrastructure. As Fn is an open-source project, it can also be run locally on a developer's laptop and across multiple clouds, further reducing the risk of vendor lock-in.

Fn consists of three components: (1) the Fn Platform (Fn Server and CLI); (2) the Fn Java FDK (Function Development Kit), which brings a first-class function development experience to Java developers, including a comprehensive JUnit test harness; and (3) Fn Flow, for orchestrating functions directly in code. Fn Flow enables function orchestration for higher-level workflows – sequencing, chaining, fan-in/fan-out – directly and natively in the developer's code rather than through a console. We will have initial support for Java, with additional language bindings coming soon.

How is Fn different? It's open (cloud neutral with no lock-in), can run locally, is container native, and provides polyglot language support (including Java, Go, Ruby, Python, PHP, Rust, .NET Core, and Node.js, with AWS Lambda compatibility). We believe serverless will eventually lead to a new, more efficient cloud development and economic model. Think about it: virtualization disintermediated physical servers, containers are disintermediating virtualization, so how soon until serverless disintermediates containers? In the end, it's all about raising the abstraction level so that developers never think about servers, VMs, and other IaaS components, giving everybody better utilization by using fewer resources, with faster product delivery and increased agility. But it must follow the developer mandate: open, community-driven, and cloud-neutral. And that's why we introduced Fn.

Java 9: Driving the Build–Deploy–Operate Cloud Loop

DevOps and SRE patterns consistently look for automation and culture to create a repeatable application lifecycle of build, deploy, and operate. The latest Java SE 9 release, announced September 21, 2017 and highlighted at the JavaOne 2017 conference, includes more than 150 new features that help drive new Cloud Native development in this model.  Java SE 9 (JDK 9) is a production-ready implementation of the Java SE 9 Platform Specification, which was recently approved together with Java EE 8 in the Java Community Process (JCP).  Java continues to fuel cloud development in a big way - judging by the latest metrics.  The numbers are staggering: 

- #1 Developer Choice for the Cloud
- #1 Programming Language
- #1 Development Platform in the Cloud

Supported by these metrics:

- 12 Million Developers Run Java
- 21 Billion Cloud Connected Java Virtual Machines
- 38 Billion Active Java Virtual Machines
- 1 Billion Downloads Per Year

So, what’s new in Java 9?  Too much to list here, but a good way to summarize it is to look through a DevOps lens as the Java community continues to improve Java and its application in cloud native application development.  Highly effective DevOps teams are seeking to improve their Build-Deploy-Operate loop to build better code, deploy faster and more often, and recover faster from failures – and new Java 9 features are leading the way:

- Build Smarter
  -- JShell to easily explore APIs and try out language features
  -- Improved Javadoc to learn new APIs
  -- New & improved APIs including Process, StackWalker, VarHandle, Flow, CompletableFuture

- Deploy Faster
  -- New module system - Project Jigsaw
  -- Build lightweight Java apps quickly and easily
  -- Bundle just those parts of the JDK that you need
  -- Efficiently deploy apps to the cloud
  -- Modular Java runtime size makes Docker images smaller & Kubernetes orchestration more efficient
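The "bundle just what you need" point above can be sketched with a module descriptor: the app declares the platform modules it depends on, and a jlink-style tool can then assemble a trimmed runtime from that declaration (the module name and paths here are illustrative, not from any real project):

```java
// module-info.java: declare only the platform modules the app actually uses,
// so the assembled runtime image can omit everything else.
module com.example.app {
    requires java.sql;
    requires java.logging;
}

// A runtime image could then be assembled along these lines:
// $ jlink --module-path mods:$JAVA_HOME/jmods \
//         --add-modules com.example.app --output myruntime
```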

- Operate Securely
  -- More scalability and improved security
  -- Better performance management
  -- Java Flight Recorder released to OpenJDK for improved monitoring and diagnostics

To learn more and try out some new sample apps check out wercker.com/java.  And speaking of open source, agility, and velocity, Oracle is moving to a 6-month release cadence for Java SE 9, and will also be providing OpenJDK builds under the General Public License (GPL).  Cool - more open, more better, more often. Also, we will be contributing previously commercial features to OpenJDK such as Java Flight Recorder in Oracle JDK targeting alignment of Oracle JDK and OpenJDK.

Cloud Foundry on Oracle Cloud

For Cloud Foundry developers, we've released an Open Service Broker implementation that integrates Oracle Cloud Platform services with Cloud Foundry, so you can now build directly on the Cloud Foundry framework on Oracle Cloud. Also, we've open sourced the BOSH Cloud Provider Interface, so developers can deploy Cloud Foundry workloads directly on Oracle Cloud Infrastructure, a capability targeted for general availability later this year.

Beating the Open Source Drum

As a container native group, we're committed to the open source community, and these announcements showcase that commitment. The Oracle Container Native Application Development Platform is yet another step in our journey to deliver an open, cloud-neutral and frictionless experience for building cloud-native as well as conventional enterprise applications. Over the course of this spring and summer, Oracle has shown continued commitment to open-source standards by joining the Cloud Native Computing Foundation, dedicating engineering resources to the Kubernetes project, open sourcing several container utilities, and making its flagship databases and developer tools available in the Docker Store marketplace.

Check out Container Native Highlights at OpenWorld and JavaOne

Finally, check out all the Container Native activities at #OOW17, #JavaOne and #OracleCode, and learn more about all things containers, Java, cloud, and more - from build to deploy to operate. Learn from our engineers in technical sessions, get your hands dirty in a hands-on lab, and take a product tour in the DevOps Corner at the Dev Lounge!  

And make sure to stay connected:  

 
