OTN TechBlog


Keep Calm and Code On: Four Ways an Enterprise Blockchain Platform Can Improve Developer ...

Thu, 2018-07-12 01:45

A guest post by Sarabjeet (Jay) Chugh, Sr. Director Product Marketing, Oracle Cloud Platform

Situation

You just got a cool new Blockchain project for a client. As you head back to the office, you start to map out the project plan in your mind. Can you meet all of your client’s requirements in time? You're not alone in this dilemma.

You attend a blockchain conference the next day, get inspired by engaging talks, and meet fellow developers working on similar projects. A lunchtime chat with a new friend turns into a lengthy conversation about getting started with Blockchain.

Now you’re bursting with new ideas and ready to get started with your hot new Blockchain coding project. Right?

Well almost…

You go back to your desk and contemplate a plan of action to develop your smart contract or distributed application, thinking through the steps, including ideation, analysis, prototype, coding, and finally building the client-facing application.

Problem

It is then that reality sets in. You begin thinking beyond the proof of concept to the production phase, which will require additional capabilities that you will need to design for and build into your solution.

These concerns may delay or even prevent you from getting started with building the solution. Ask yourself questions such as:

  • Should I spend time trying to fulfill dependencies of open-source software such as Hyperledger Fabric on my own to start using it to code something meaningful?
  • Do I spend time building integrations of diverse systems of record with Blockchain?
  • Do I figure out how to assemble components such as identity management, compute infrastructure, storage, and management & monitoring systems around the Blockchain?
  • How do I integrate my familiar development tools & CI/CD platform without learning new tools?
  • And finally, is it the best use of your time to figure out scaling, security, disaster recovery, point-in-time recovery of the distributed ledger, and the “ilities” like reliability, availability, and scalability?

If the answer to one or more of these is a resounding no, you are not alone. Focusing on the above aspects, though important, will take time away from doing the actual work to meet your client’s needs in a timely manner, which can definitely be a source of frustration.

But do not despair.

Read on to learn how an enterprise Blockchain platform such as the one from Oracle can make your life simpler. Imagine productivity savings multiplied hundreds of thousands of times across critical enterprise blockchain applications and chaincode.

What is an Enterprise Blockchain Platform?

The very term “enterprise” typically signals a “large-company, expensive thing” in the hearts and minds of developers. Not so in this case: an enterprise platform may be more cost effective than spending your expensive developer hours to build, manage, and maintain blockchain infrastructure and its dependencies on your own.

As the chart below shows, the top two Blockchain technologies used in proofs of concept have been Ethereum and Hyperledger.


 

Ethereum has been a platform of choice amid the ICO hype for public blockchain use. However, it has relatively lower performance and is slower and less mature compared to Hyperledger Fabric. It also uses a less secure programming model based on a primitive language called Solidity, which is prone to re-entrancy attacks that have led to prominent hacks like the DAO attack, which lost $50M.

Hyperledger Fabric, on the other hand, wins out in terms of maturity, stability, and performance, and is a good choice for enterprise use cases involving permissioned blockchains. In addition, capabilities such as the ones listed in red in the chart have been added by vendors such as Oracle, making it simpler to adopt and use while retaining open-source compatibility.

Let’s look at how an enterprise Blockchain platform, such as the one Oracle has built on open-source Hyperledger Fabric, can help boost developer productivity.

How an Enterprise Blockchain Platform Drives Developer Productivity

Enterprise blockchain platforms provide four key benefits that drive greater developer productivity:

 
Performance at Scale

  • Faster consensus with Hyperledger Fabric
  • Faster world state DB - record level locking for concurrency and parallelization of updates to world state DB
  • Parallel execution across channels, smart contracts
  • Parallelized validation for commit

Operations Console with Web UI

  • Dynamic Configuration – Nodes, Channels
  • Chaincode Lifecycle – Install, Instantiate, Invoke, Upgrade
  • Adding Organizations
  • Monitoring dashboards
  • Ledger browser
  • Log access for troubleshooting

Resilience and Availability

  • Highly Available configuration with replicated VMs
  • Autonomous Monitoring & Recovery
  • Embedded backup of configuration changes and new blocks
  • Zero-downtime patching

Enterprise Development and Integration

  • Offline development support and tooling
  • DevOps CI/CD integration for chaincode deployment, and lifecycle management
  • SQL rich queries, which enable writing fewer lines of code, fewer lines to debug
  • REST API based integration with SaaS, custom apps, systems of record
  • Node.js, GO, Java client SDKs
  • Plug-and-Play integration adapters in Oracle’s Integration Cloud

Developers can experience orders-of-magnitude productivity gains with a pre-assembled, managed, enterprise-grade, and integrated blockchain platform, as compared to assembling it on their own.

Summary

Oracle offers a pre-assembled, open, enterprise-grade blockchain platform, which provides plug-and-play integrations with systems of record and applications, plus AI-driven self-driving, self-repairing, and self-securing capabilities to streamline operations and blockchain functionality. The platform is built on Oracle’s years of experience serving enterprises’ most stringent use cases and is backed by the expertise of partners trained in Oracle blockchain. It rids developers of the hassle of assembling and integrating the stack, or even worrying about performance, resilience, and manageability, which greatly improves productivity.

If you’d like to learn more, register to attend an upcoming webcast (July 16, 9 am PST/12 pm EST). And if you’re ready to dive right in, you can sign up for $300 of free credits good for up to 3500 hours of Oracle Autonomous Blockchain Cloud Service usage.

Build and Deploy Node.js Microservice on Docker using Oracle Developer Cloud

Thu, 2018-07-05 03:48

This is the first blog in a series that will help you understand how you can build a Node.js REST microservice Docker image and push it to Docker Hub using Oracle Developer Cloud Service. The next blog in the series will focus on deploying the container we build here to Oracle Kubernetes Engine on Oracle Cloud Infrastructure.

You can read an overview of the Docker functionality in this blog.

Technology Stack Used

Developer Cloud Service - DevOps Platform

Node.js Version 6 – For microservice development.

Docker – For Build

Docker Hub – Container repository

 

Setting up the Environment:

Setting up Docker Hub Account:

You should create an account on https://hub.docker.com/. Keep the credentials handy for use in the build configuration section of the blog.

Setting up Developer Cloud Git Repository:

Now log in to your Oracle Developer Cloud Service project and create a Git repository as shown below. You can give the Git repository a name of your choice; for the purpose of this blog, I am calling it NodeJSDocker. Copy the Git repository URL and keep it handy for future use.

Setting up Build VM in Developer Cloud:

Now we have to create a VM Template and VM with the Docker software bundle for the execution of the build.

Click on the user dropdown at the top right of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on “+New VM” button and enter the number of VM(s) you want to create and select the VM Template you just created, which would be “DockerTemplate” for our blog.

 

Pushing Scripts to Git Repository on Oracle Developer Cloud:

Command_prompt:> cd <path to the NodeJS folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Below screen shots are for your reference.

 

Below is the folder structure description for the code that I have in the Git Repository on Oracle Developer Cloud Service.

Code in the Git Repository:

You will need to push the below three files to the Developer Cloud hosted Git repository we created.

Main.js

This is the main Node.js code, which contains two simple routes: the first returns a greeting message, and the second, /add, adds two numbers. The application listens on port 80.

var express = require("express");
var bodyParser = require("body-parser");

var app = express();
app.use(bodyParser.urlencoded());
app.use(bodyParser.json());

var router = express.Router();

router.get('/', function(req, res) {
  res.json({"error" : false, "message" : "Hello Abhinav!"});
});

router.post('/add', function(req, res) {
  res.json({"error" : false, "message" : "success", "data" : req.body.num1 + req.body.num2});
});

app.use('/', router);

app.listen(80, function() {
  console.log("Listening at PORT 80");
});

Package.json

In this JSON code snippet we define the Node.js module dependencies. We also define the start file, which is Main.js for our project, and the name of the application.

{
  "name": "NodeJSMicro",
  "version": "0.0.1",
  "scripts": {
    "start": "node Main.js"
  },
  "dependencies": {
    "body-parser": "^1.13.2",
    "express": "^4.13.1"
  }
}
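
If you want to sanity-check the service locally before involving the build pipeline, here is a minimal sketch, assuming Node.js 6 and npm are installed and you are in the project folder (binding port 80 may require elevated privileges on your machine):

npm install   # download the dependencies declared in package.json
npm start     # run the "start" script, i.e. node Main.js listening on port 80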

Dockerfile

This file contains the commands to be executed to build the Docker container with the Node.js code. It starts from the Node.js version 6 Docker image, adds the two files Main.js and package.json cloned from the Git repository, runs npm install to download the dependencies defined in package.json, exposes port 80 for the Docker container, and finally starts the application, which listens on port 80.

 

FROM node:6
ADD Main.js ./
ADD package.json ./
RUN npm install
EXPOSE 80
CMD [ "npm", "start" ]

Build Configuration:

Click on the “+ New Job” button and, in the dialog which pops up, give the build job a name of your choice (for the purpose of this blog I have named it “NodeJSMicroDockerBuild”), then select the build VM template (DockerTemplate) we created earlier in the blog from the dropdown.

As part of the build configuration, add Git from the “Add Source Control” dropdown. Now select the repository we created earlier in the blog, NodeJSDocker, and the master branch to which we pushed the code. You may select the checkbox to configure an automatic build trigger on SCM commits.

Now from the Builders tab, select Docker Builder -> Docker Login. In the Docker login form you can leave the Registry host empty as we will be using Docker Hub which is the default Docker registry for Developer Cloud Docker Builder. You will have to provide the Docker Hub account username and password in the respective fields of the login form.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You can leave the Registry host empty as we are going to use Docker Hub which is the default registry. Now, you just need to give the Image name in the form that gets added and you are all done with the Build Job configuration. Click on Save to save the build job configuration.

Note: Image name should be in the format <Docker Hub user name>/<Image Name>

For this blog we can give the image name as - nodejsmicro

Then add Docker Push by selecting Docker Builder -> Docker Push from the Builders tab. Here you just need to mention the image name, the same as in the Docker Build form, to push the Docker image to the Docker registry, which in this case is Docker Hub.
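
Taken together, the Docker Login, Docker Build, and Docker Push steps behave much like the following local commands (a sketch only, assuming Docker is installed; jdoe is a placeholder Docker Hub username, so substitute your own):

docker login                        # authenticate against Docker Hub, the default registry
docker build -t jdoe/nodejsmicro .  # build the image from the Dockerfile in the repository root
docker push jdoe/nodejsmicro        # push the built image to your Docker Hub repository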

Once you execute the build, you will be able to see the build in the build queue.

Once the build gets executed, the Docker image that gets built is pushed to the Docker registry, which is Docker Hub for our blog. You can log in to your Docker Hub account to see the Docker repository that was created and the image pushed to it, as seen in the screenshot below.

Now you can pull this image anywhere, then create and run the container, and you will have your Node.js microservice code up and running.
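
For example, here is a minimal sketch of pulling and exercising the image on any Docker host (again, jdoe is a placeholder Docker Hub username):

docker pull jdoe/nodejsmicro
docker run -d -p 80:80 jdoe/nodejsmicro

# Test the two routes defined in Main.js
curl http://localhost/
curl -H "Content-Type: application/json" -d '{"num1": 2, "num2": 3}' http://localhost/add

The second call should return a JSON payload with "data" set to 5, confirming the /add route works end to end.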

 

You can go ahead and try many other Docker commands, both using the out-of-the-box Docker Builder functionality and, alternatively, using the Shell Builder to run your Docker commands.

In the next blog of the series, we will deploy this Node.js microservice container on a Kubernetes cluster in Oracle Kubernetes Engine.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

 

Arrgs. My Bot Doesn't Understand Me! Why Intent Resolutions Sometimes Appear to Be Misbehaving

Fri, 2018-06-22 10:17

Article by Grant Ronald, June 2018

One of the most common questions that gets asked when someone starts building a real bot is “Why am I getting strange intent resolutions?” For example, someone tests the bot with random key presses like “slkejfhlskjefhksljefh” and finds an 80% resolution for “CheckMyBalance”. The first reaction is to blame the intent resolution within the product. However, the reality is that you’ve not trained it to know any better. This short article gives a high-level conceptual explanation of how models do and don’t work.

READ THE FULL ARTICLE

Related Content

TechExchange - First Step in Training Your Bot

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

Fri, 2018-06-22 09:05

Article by Frank Nimphius, Marcelo Jabali - June 2018

Chatbot support for multiple languages is a worldwide requirement. Almost every country has the need for supporting foreign languages, be it to support immigrants, refugees, tourists, or even employees crossing borders on a daily basis for their jobs.

According to the Linguistic Society of America, as of 2009, 6,909 distinct languages had been classified, a number that has grown since then. Although no bot needs to support all languages, for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to support.

This article explores Oracle's approach to multi-language support in chatbots. It explains the tooling and practices for you to use and follow to build bots that understand and "speak" foreign languages.

Read the full article.

 

Related Content

TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components 

TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately

TechExchange - First Step in Training Your Bot

API Monetization: What Developers Need to Know

Tue, 2018-06-19 23:15

You’ve no doubt heard the term “API monetization,” but do you really understand what it means? More importantly, do you understand what API monetization means for developers?

“The general availability of information and services has really influenced the way APIs behave and the way APIs are built,” says Oracle ACE and Developer Champion Arturo Viveros, principal architect at Sysco AS in Norway. “The hyper-distributed nature of the systems we work with, with cloud computing and with blockchain, and all of these technologies, makes it very important. Everyone wants to have information in real time now, as opposed to before when we could afford to create APIs that could give you a snapshot of what happened a few hours ago, or a day ago.”

These days the baseline consumer expectation is 24/7/365 service. “So, as a developer, when you’re designing APIs that are going to be exposed as business assets or as products, you need to take into account characteristics like high availability, performance resiliency, and flexibility,” says Viveros. “That’s why all of these new technologies go into supporting APIs, like microservices and containers and serverless. It's so critical to learn to use them because they allow you to be flexible to deploy new versions or improved versions of APIs. They allow your APIs to have an improved life cycle and to move away from the whole monolithic paradigm, reduce time to market, and move forward at the speed that the organization and your user base and consumer base require.”

So yeah, there’s a bit of a learning curve. But hasn’t that always been the developer’s reality? And hasn’t there always been some kind of reward at the end of the learning curve?

“It’s an exciting time for developers,” says Luis Weir. He’s an Oracle ACE Director, a Developer Champion, and the CTO of the Oracle Delivery Unit with Capgemini in the UK. “API monetization is an opportunity to add direct tangible value to the business. APIs have become a source of revenue on their own,” says Weir. “This is quite exciting. I don't think this is something that we’ve seen before in the IT industry. Whatever APIs we had in the past were in support of a business product, they were not the business product. That's different, and I think developers have the opportunity now to be completely, directly involved in the creation and maintenance of these products.”

While developing APIs is certainly important, it’s no less important to take advantage of what is already out there. “Developers within an organization need to be thinking about what APIs might be available to complete functions that are not within their core competency,” says Robert Wunderlich, product strategy director for Cloud, API, and Integration at Oracle. “There are a lot of publicly available APIs that can be used for low or no cost or a reasonable cost.”

[For example, check out the API Showcase on the NYC Developer Portal ]

Luis Weir sees another important aspect of API monetization. “As a developer it's always exciting to see how your product is received. For example, when you create an open source GitHub project and then all of a sudden you see a lot of people forking your project and trying to trace pull requests to contribute to it, that's exciting because that means that you did something that added to your organization or to the community. That's rewarding as a developer. It’s far more rewarding to see an IT asset that's directly influencing the direction of the business.” API monetization provides that visibility.

Arturo Viveros, Luis Weir, and Robert Wunderlich explore API monetization in depth from a developer perspective in this month’s Oracle Developer Community Podcast. Check it out!

The Panelists

In alphabetical order

Arturo Viveros
Oracle ACE
Oracle Developer Champion
Principal Architect, Sysco AS

Luis Weir
Oracle ACE Director
Oracle Developer Champion
CTO, Oracle Delivery Unit, Capgemini UK

Robert Wunderlich
Product Strategy Director for Cloud, API, and Integration, Oracle

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

APIs to the Rescue in the Aftermath of 2017 Mexican Earthquake

Tue, 2018-06-19 13:38

After three weeks Hawaii's Kilauea volcano is still busy eating an island. Early in June Guatemala's Volcan De Fuego erupted and is still literally shaking the earth. And just this past weekend a 5.3 magnitude quake struck Osaka, Japan. Mother Earth knows how to get our attention. But in doing so she also triggers an impulse in some human beings to jump in and help in any way they can.

One great example of that kind of techie humanitarianism is the group of Mexican developers and DBAs who, in the immediate aftermath of the earthquake that hit Mexico in 2017, banded together in a collaborative effort to rapidly build a system to coordinate rescue and relief efforts.

Oracle ACE Rene Antunez was one of the volunteers in that effort. He shares the organizational and technical details in this video interview recorded at last week's ODTUG Kscope 2018 event in Orlando.

Given that natural disasters are likely to continue to happen, the open source project is ongoing, and is available on GitHub:

https://github.com/CodeandoMexico/terremoto-cdmx

Why not lend your skills to this worthwhile effort?

Have you been involved in similar humanitarian software development efforts? Post a comment below.

 

Announcing Oracle APEX 18.1

Fri, 2018-05-25 12:11

Oracle Application Express (APEX) 18.1 is now generally available! APEX enables you to develop, design and deploy beautiful, responsive, data-driven desktop and mobile applications using only a browser. This release of APEX is a dramatic leap forward in both the ease of integration with remote data sources, and the easy inclusion of robust, high-quality application features.

Keeping up with the rapidly changing industry, APEX now makes it easier than ever to build attractive and scalable applications which integrate data from anywhere - within your Oracle database, from a remote Oracle database, or from any REST Service, all with no coding.  And the new APEX 18.1 enables you to quickly add higher-level features which are common to many applications - delivering a rich and powerful end-user experience without writing a line of code.

"Over a half million developers are building Oracle Database applications today using  Oracle Application Express (APEX).  Oracle APEX is a low code, high productivity app dev tool which combines rich declarative UI components with SQL data access.  With the new 18.1 release, Oracle APEX can now integrate data from REST services with data from SQL queries.  This new functionality is eagerly awaited by the APEX developer community", said Andy Mendelsohn, Executive Vice President of Database Server Technologies at Oracle Corporation.

 

Some of the major improvements to Oracle Application Express 18.1 include:

Application Features


It has always been easy to add components to an APEX application - a chart, a form, a report.  But in APEX 18.1, you now have the ability to add higher-level application features to your app, including access control, feedback, activity reporting, email reporting, dynamic user interface selection, and more.  In addition to the existing reporting and data visualization components, you can now create an application with a "cards" report interface, a dashboard, and a timeline report.  The result?  An easily-created powerful and rich application, all without writing a single line of code.

REST Enabled SQL Support


Oracle REST Data Services (ORDS) REST-Enabled SQL Services enables the execution of SQL in remote Oracle Databases, over HTTP and REST.  You can POST SQL statements to the service, and the service then runs the SQL statements against the Oracle database and returns the result to the client in JSON format.

In APEX 18.1, you can build charts, reports, calendars, trees and even invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL Services.  No longer is a database link necessary to include data from remote database objects in your APEX application - it can all be done seamlessly via REST Enabled SQL.
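
As an illustration, a REST-Enabled SQL call boils down to a simple HTTP POST. The sketch below assumes a hypothetical ORDS host and a REST-enabled schema named demo with basic authentication; substitute your own host, schema, and credentials:

curl -i -X POST --user demo:demo \
  -H "Content-Type: application/sql" \
  -d "select sysdate from dual;" \
  https://example.com/ords/demo/_/sql

The service executes the statement in the remote database and returns the rows as JSON, which is what APEX consumes when you point a component at a REST Enabled SQL service.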

Web Source Modules


APEX now offers the ability to declaratively access data services from a variety of REST endpoints, including ordinary REST data feeds, REST Services from Oracle REST Data Services, and Oracle Cloud Applications REST Services.  In addition to supporting smart caching rules for remote REST data, APEX also offers the unique ability to directly manipulate the results of REST data sources using industry standard SQL.

REST Workshop


APEX includes a completely rearchitected REST Workshop to assist in the creation of REST services against your Oracle database objects.  The REST definitions are managed in a single repository, and the same definitions can be edited via the APEX REST Workshop, SQL Developer, or documented APIs.  Users can apply the data management skills they already possess, such as writing SQL and PL/SQL, to define RESTful API services for their database.  The new REST Workshop also includes the ability to generate Swagger documentation against your REST definitions, all with the click of a button.

Application Builder Improvements


In Oracle Application Express 18.1, wizards have been streamlined with smarter defaults and fewer steps, enabling developers to create components quicker than ever before.  There have also been a number of usability enhancements to Page Designer, including greater use of color and graphics on page elements, and "Sticky Filter" which is used to maintain a specific filter in the property editor.  These features are designed to enhance the overall developer experience and improve development productivity.  APEX Spotlight Search provides quick navigation and a unified search experience across the entire APEX interface.

Social Authentication


APEX 18.1 introduces a new native authentication scheme, Social Sign-In.  Developers can now easily create APEX applications which can use Oracle Identity Cloud Service, Google, Facebook, generic OpenID Connect and generic OAuth2 as the authentication method, all with no coding.

Charts


The data visualization engine of Oracle Application Express is powered by Oracle JET (JavaScript Extension Toolkit), a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles.  The charts in APEX are fully HTML5 capable and work on any modern browser, regardless of platform or screen size.  These charts provide numerous ways to visualize a data set, including bar, line, area, range, combination, scatter, bubble, polar, radar, pie, funnel, and stock charts.  APEX 18.1 features an upgraded Oracle JET 4.2 engine with updated charts and APIs.  There are also new chart types including Gantt, Box Plot and Pyramid, and better support for multi-series, sparse data sets.

Mobile UI


APEX 18.1 introduces many new UI components to assist in the creation of mobile applications.  Three new component types, ListView, Column Toggle and Reflow Report, can now be used natively with the Universal Theme and are commonly used in mobile applications.  Additional mobile-focused enhancements have been made to the APEX Universal Theme, namely mobile page headers and footers, which remain consistently displayed on mobile devices, and floating item label templates, which optimize the information presented on a mobile screen.  Lastly, APEX 18.1 also includes declarative support for touch-based dynamic actions (tap and double tap, press, swipe, and pan), supporting the creation of rich and functional mobile applications.

Font APEX


Font APEX is a collection of over 1,000 high-quality icons, many specifically created for use in business applications.  Font APEX in APEX 18.1 includes a new set of high-resolution 32 x 32 icons with much greater detail, and the correctly sized font will automatically be selected for you based upon where it is used in your APEX application.

Accessibility


APEX 18.1 includes a collection of tests in the APEX Advisor which can be used to identify common accessibility issues in an APEX application, including missing headers and titles, and more. This release also deprecates the accessibility modes, as a separate mode is no longer necessary to be accessible.

Upgrading


If you're an existing Oracle APEX customer, upgrading to APEX 18.1 is as simple as installing the latest version.  The APEX engine will automatically be upgraded and your existing applications will look and run exactly as they did in the earlier versions of APEX.  

 

"We believe that APEX-based PaaS solutions provide a complete platform for extending Oracle’s ERP Cloud. APEX 18.1 introduces two new features that make it a landmark release for our customers. REST Service Consumption gives us the ability to build APEX reports from REST services as if the data were in the local database. This makes embedding data from a REST service directly into an ERP Cloud page much simpler. REST enabled SQL allows us to incorporate data from any Cloud or on-premise Oracle database into our Applications. We can’t wait to introduce APEX 18.1 to our customers!", said Jon Dixon, co-founder of JMJ Cloud.

 

Additional Information


Application Express (APEX) is the low code rapid app dev platform which can run in any Oracle Database and is included with every Oracle Database Cloud Service.  APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices.  To learn more about Oracle Application Express, visit apex.oracle.com.  To learn more about Oracle Database Cloud, visit cloud.oracle.com/database

Oracle Cloud Infrastructure CLI on Developer Cloud

Thu, 2018-05-24 10:00

With our May 2018 release of Oracle Developer Cloud, we have integrated the Oracle Cloud Infrastructure command line interface (OCIcli from here on) into the build pipeline in Developer Cloud. This blog will help you understand how you can configure and execute OCIcli commands as part of the build pipeline, configured in a build job in Developer Cloud.

Configuring the Build VM Template for OCIcli

You will have to create a build VM with the OCIcli software bundle to be able to execute builds with OCIcli commands. Click on the user dropdown at the top right of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select OCIcli from the list of software bundles available for configuration and click on the + sign to add it to the template. You will also have to add the Python3.5 software bundle, which is a dependency for the OCIcli. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “OCIcli” for our blog.

Build Job Configuration

Configure the tenancy OCID as a build parameter using a String Parameter, and give it a name of your choice. I have named it "T" and provided a default value, as shown in the screenshot below.

In the Builders tab, select OCIcli Builder and then a Unix Shell Builder, in this sequence, from the Add Builder dropdown.

On adding the OCIcli Builder, you will see the form as below.

For the OCIcli Builder, you can get the parameter values from the OCI console. The screenshots below show where to find each of these form values in the OCI console. The red boxes highlight where you can find the tenancy OCID and the region for the “Tenancy” and “Region” fields, respectively, of the OCIcli builder form.

For the “User OCID” and “Fingerprint” you need to go to User Settings by clicking the username dropdown at the top right of the OCI console. Please refer to the screenshot below.

Please refer to the links below to understand the process of generating the private key and configuring the public key for the user in the OCI console.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

In the Unix Shell Builder you can try out the below command:

oci iam compartment list -c $T

This command will list all the compartments in the tenancy whose OCID is given in the variable ‘T’ that we configured in the Build Parameters tab as a String Parameter.

 

After the command executes, you can view the output in the console log, as shown below.

There are tons of other OCIcli commands that you can run as part of the build pipeline; see the sketch below for a couple more, and refer to this link for the full reference.
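
As a sketch of where to go next, here are two more commands you could drop into the same Unix Shell builder. The second assumes you add another String Parameter, "C", holding a compartment OCID (that parameter name is hypothetical):

# List the availability domains visible to the tenancy
oci iam availability-domain list -c $T

# List the compute instances in a given compartment
oci compute instance list -c $C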

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

Wed, 2018-05-23 02:00

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine now comes with the capability to define build pipelines visually. Read more about it in my previous blog.

In this blog we will delve deeper into some of the functionality of the Build Pipeline feature of the new CI engine in Oracle Developer Cloud.

Auto Start

Auto Start is an option given to the user while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new pipeline. It includes a checkbox which, when checked, ensures that the pipeline auto starts whenever one of its build jobs is executed externally; that execution then triggers the rest of the build jobs in the pipeline.

The screenshot below shows the pipeline for a Node.js application created in Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice, and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure.

Scenarios When Auto Start is enabled for the Pipeline:

Scenario 1:

If we run the build-microservice build job externally, it will subsequently trigger the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the execution of the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below; only the build jobs that executed are shown in green.

Scenario 2:

If we run the test-microservice build job externally, it will trigger the execution of the loadtest-microservice build job only. Please refer to the screenshot below; only the build jobs that executed are shown in green.

Scenario 3:

If we run the loadtest-microservice build job externally, no other build job in the pipeline is executed, across both build sequences.

Exclusive Build

This enables users to disallow the pipeline's build jobs from being built externally in parallel with the execution of the pipeline. It is an option given to the user while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new pipeline; checking the checkbox ensures that the build jobs in the pipeline cannot be built in parallel with the pipeline execution.

When you run the pipeline, you will see the build jobs queued for execution in the Build History. In this case you will see two build jobs queued: one is build-microservice and the other is WiremockInstall, as they are parallel sequences in the same pipeline.

Now if you try to run any of the build jobs in the pipeline, for example test-microservice, you will be given an error message, as shown in the screenshot below.

 

Pipeline Instances:

If you click the build pipeline name link in the Pipelines tab, you will be able to see the pipeline instances. A pipeline instance is a single execution of the pipeline.

The screenshot below shows the pipeline instances with the timestamp of when each was executed. Hovering on the status icon of a pipeline instance shows whether the pipeline was auto started by an external execution of a build job, or shows the success status if all the build jobs of the pipeline were built successfully. It also shows, in green, the build jobs that executed successfully for that particular pipeline instance; build jobs that did not get executed have a white background. You also get an option to cancel the pipeline while it is executing, and you may choose to delete the instance after execution.

 

Conditional Build:

The visual build pipeline editor in Oracle Developer Cloud supports conditional builds. You have to double-click the link connecting two build jobs and select one of the conditions given below:

Successful: To proceed to the next build job in the sequence if the previous one was a success.

Failed: To proceed to the next build job in the sequence if the previous one failed.

Test Failed: To proceed to the next build job in the sequence if the test failed in the previous build job in the pipeline.

 

Fork and Join:

Scenario 1: Fork

In this scenario, you have a build job like build-microservice on which three other build jobs depend: “DockerBuild”, which builds a deployable Docker image for the code; “terraformBuild”, which builds the instance on Oracle Cloud Infrastructure and deploys the code artifact; and “ArtifactoryUpload”, which uploads the generated artifact to Artifactory. You can fork the build jobs as shown below.

 

Scenario 2: Join

If you have a build job test-microservice that depends on two other build jobs, build-microservice, which builds and deploys the application, and WiremockConfigure, which configures the service stub, then you need to create a join in the pipeline, as shown in the screenshot below.

 

You can refer to the Build Pipeline documentation here.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Pizza, Beer, and Dev Expertise at Your Local Meet-up

Wed, 2018-05-16 06:30

Big developer conferences are great places to learn about new trends and technologies, attend technical sessions, and connect with colleagues. But by virtue of their size, their typical location in destination cities, and multi-day schedules, they can require a lot of planning, expense, and time away from work.

Meet-ups offer a fantastic alternative. They’re easily accessible local events, generally lasting a couple of hours. Meet-ups offer a more human scale and are far less crowded than big conferences, with a far more casual, informal atmosphere that can be much more conducive to learning through Q&A and hands-on activities.

One big meet-up advantage is that by virtue of their smaller scale they can be scheduled more frequently. For example, while Oracle ACE Associate Jon-Petter Hjulstad and his colleagues attend the annual Oracle User Group Norway (OUGN) Conference, they wanted to get together more often, three or four times a year. The result is a series of OUGN Integration meet-ups “where we can meet people who work on the same things.” As of this podcast two meet-ups have already taken place, with a third scheduled for the end of May.

Luis Weir, CTO at Capgemini in the UK and an Oracle ACE Director and Developer Champion, felt a similar motivation. “There's so many events going on and there's so many places where developers can go,” Luis says. But sometimes developers want a more relaxed, informal, more approachable atmosphere in which to exchange knowledge. Working with his colleague Phil Wilkins, senior consultant at Capgemini and an Oracle ACE, Luis set out to organize a series of meet-ups that offered more “cool.”

Phil’s goal in the effort was to organize smaller events that were “a little less formal, and a bit more convenient.” Bigger, longer events are more difficult to attend because they require more planning on the part of attendees. “It can take quite a bit of effort to organize your day if you’re going to be out for a whole day to attend a user group special interest group event,” Phil says. But local events scheduled in the evening require much less planning in order to attend. “It's great! You can get out and attend these things and you get to talk to people just as much as you would during a day-time event.”

For Oracle ACE Ruben Rodriguez Santiago, a Java, ADF, and cloud solution specialist with Avanttic in Spain, the need for meet-ups arose out of a dearth of events focused on Oracle technologies. And those that were available were limited to database and SaaS. “So for me this was a way to get moving and create events for developers,” Ruben says.

What steps did these meet-up organizers take? What insight have they gained along the way as they continue to organize and schedule meet-up events? You’ll learn all that and more in this podcast. Listen!

 

The Panelists

Jon-Petter Hjulstad
Department Manager, SYSCO AS

Ruben Rodriguez Santiago
Java, ADF, and Cloud Solution Specialist, Avanttic

Luis Weir
CTO, Oracle DU, Capgemini

Phil Wilkins
Senior Consultant, Capgemini

Additional Resources

Coming Soon
  • What Developers Need to Know About API Monetization
  • Best Practices for API Development

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

 

Build Oracle Cloud Infrastructure custom Images with Packer on Oracle Developer Cloud

Wed, 2018-05-09 15:55

In the April release of Oracle Developer Cloud Service we started supporting Docker and Terraform builds as part of the CI & CD pipeline. Terraform helps you provision Oracle Cloud Infrastructure instances as part of the build pipeline. But what if you want to provision an instance using a custom image instead of the base image? You need a tool like Packer to script your way into building images. With Docker build support, we can now build Packer-based images as part of the build pipeline in Oracle Developer Cloud. This blog will help you understand how you can use Docker and Packer together on Developer Cloud to create custom images on Oracle Cloud Infrastructure.

About Packer

HashiCorp Packer automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

You can read more about Packer on https://www.packer.io/

You can find the details of Packer support for Oracle Cloud Infrastructure here.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would build the image which can be used for provisioning.

Packer: Tool for creating custom images on the cloud. We will be doing this for Oracle Cloud Infrastructure, or OCI as it is popularly known. I will mostly use OCI from here on in this blog.

Packer Scripts

To execute the Packer scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload three files to the Git repository. You will need to first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for the script development; below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Packer script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Note: Ensure that the Git repository is created and you have the HTTPS URL for it.

Below is the folder structure description for the scripts that I have in the Git Repository on Oracle Developer Cloud Service.

Description of the files:

oci_api_key.pem – This file is required for OCI access. It contains the private API signing key.

Note: Please refer to the links below for details on the OCI API key. You will also need the corresponding public key to be configured for the user in the OCI console.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

build.json: This is the only configuration file that you need for Packer. This JSON file contains all the definitions needed for Packer to create an image on Oracle Cloud Infrastructure. I have truncated the OCIDs and fingerprint for security reasons.

 

{ "builders": [ { "user_ocid":"ocid1.user.oc1..aaaaaaaa", "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaay", "fingerprint":"29:b1:8b:e4:7a:92:ae", "key_file":"oci_api_key.pem", "availability_domain": "PILZ:PHX-AD-1", "region": "us-phoenix-1", "base_image_ocid": "ocid1.image.oc1.phx.aaaaaaaal", "compartment_ocid": "ocid1.compartment.oc1..aaaaaaaahd", "image_name": "RedisOCI", "shape": "VM.Standard1.1", "ssh_username": "ubuntu", "ssh_password": "welcome1", "subnet_ocid": "ocid1.subnet.oc1.phx.aaaaaaaa", "type": "oracle-oci" } ], "provisioners": [ { "type": "shell", "inline": [ "sleep 30", "sudo apt-get update", "sudo apt-get install -y redis-server" ] } ] }

You can give a value of your choice for image_name, and it is recommended but optional to provide ssh_password. I kept ssh_username as "ubuntu" because my base image OS was Ubuntu. Leave the type and shape as is. The base_image_ocid depends on the region; different regions have different OCIDs for the base images. Please refer to the link below to find the OCID for the image in your region.

https://docs.us-phoenix-1.oraclecloud.com/images/

Now login into your OCI console to retrieve some of the details needed for the build.json definitions.

Below screenshot shows where you can retrieve your tenancy_ocid from.

Below screenshot of OCI console shows where you will find the compartment_ocid.

Below screenshot of OCI console shows where you will find the user_ocid.

You can retrieve the region and availability_domain as shown below.

Now select the compartment, which is “packerTest” for this blog, then click on the networking tab and then the VCN you have created. Here you would see a subnet each for the availability_domains. Copy the ocid for the subnet with respect to the availability_domain you have chosen.

Dockerfile: This will install Packer in Docker and run the Packer command to create a custom image on OCI. It pulls the packer:full image, adds the build.json and oci_api_key.pem files to the Docker image, and then executes the packer build command.

 

FROM hashicorp/packer:full
ADD build.json ./
ADD oci_api_key.pem ./
RUN packer build build.json
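
Before committing, you can also check the template locally; a minimal sketch, assuming Packer is installed on your machine and you are in the folder containing build.json and oci_api_key.pem:

packer validate build.json   # syntax-check the template without creating anything
packer build build.json      # the same command the Docker build step runs for us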

 

Configuring the Build VM

With our latest release, you will have to create a build VM with the Docker software bundle, to be able to execute the build for Packer, as we are using Docker to install and run Packer.

Click on the user dropdown at the top right of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “DockerTemplate” for our blog.

 

Build Job Configuration

Click on the “+ New Job” button and, in the dialog which pops up, give the build job a name of your choice, then select the build VM template (DockerTemplate) we created earlier in the blog from the dropdown.

As part of the build configuration, add Git from the “Add Source Control” dropdown. Now select the repository and the branch you want to build from. You may select the checkbox to configure an automatic build trigger on SCM commits.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You just need to give the image name in the form that gets added, and you are done with the build job configuration. Now click on Save to save the build job configuration.

On execution of the build job, the image gets created in the defined compartment on OCI, as shown in the screenshot below.

So now you can easily automate custom image creation on Oracle Cloud Infrastructure using Packer as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Packing!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Infrastructure as Code using Terraform on Oracle Developer Cloud

Wed, 2018-05-09 14:04

With our April release, we have started supporting Terraform builds in Oracle Developer Cloud. This blog will help you understand how you can use Terraform in a build pipeline to provision Oracle Cloud Infrastructure as part of the build pipeline automation.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your Ci & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would provision the infrastructure for our usage.

Terraform: Tool for provisioning infrastructure on the cloud. We will be doing this for Oracle Cloud Infrastructure, or OCI as it is popularly known. I will use OCI from here on in this blog.

 

About Terraform

Terraform is a tool which helps you write, plan, and create your infrastructure safely and efficiently. Terraform can manage existing and popular service providers like Oracle, as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. It helps you build, manage, and version your infrastructure as code. To know more about Terraform, go to: https://www.terraform.io/

 

Terraform Scripts

To execute the Terraform scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. You will need to first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for the script development; below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Below is the folder structure description for the terraform scripts that I have in the Git Repository on Oracle Developer Cloud Service.

The terraform scripts are inside the exampleTerraform folder and the oci_api_key_public.pem and oci_api_key.pem are the OCI keys.

In the exampleTerraform folder we have all the “tf” extension files along with the env-vars file. You will see the definition of each file later in the blog.

In the “userdata” folder you will have the bootstrap shell script which will be executed when the VM first boots up on OCI.

Below is the description of each file in the folder and the snippet:

env-vars: This is the most important file; here we set all the environment variables that will be used by the Terraform scripts for accessing and provisioning the OCI instance.

### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"

### Region
export TF_VAR_region="us-phoenix-1"

### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"

### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)

Note: all the OCIDs above are truncated for security and brevity.

The screenshots of the OCI console below show where to locate these OCIDs:

tenancy_ocid and region

compartment_ocid:

user_ocid:

The SSH key variables point to the RSA key files for the SSH connection, and private_key_path points to the OCI API private key PEM file; all of these are in the Git repository.

variables.tf: In this file we initialize the Terraform variables and configure the instance image OCID. This is the OCID for a base image available out of the box on OCI; it varies based on the region where your OCI instance is provisioned. Use this link to learn more about the OCI base images. Here we also configure the path to the bootstrap file, which resides in the userdata folder and will be executed on boot of the OCI machine.

variable "tenancy_ocid" {} variable "user_ocid" {} variable "fingerprint" {} variable "private_key_path" {} variable "region" {} variable "compartment_ocid" {} variable "ssh_public_key" {} variable "ssh_private_key" {} # Choose an Availability Domain variable "AD" { default = "1" } variable "InstanceShape" { default = "VM.Standard1.2" } variable "InstanceImageOCID" { type = "map" default = { // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0" // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q" //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a" //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq" } } variable "DBSize" { default = "50" // size in GBs } variable "BootStrapFile" { default = "./userdata/bootstrap" }

compute.tf: The display name, compartment OCID, image, shape, and network parameters are configured here, as shown in the code snippet below.

 

resource "oci_core_instance" "TFInstance" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFInstance" image = "${var.InstanceImageOCID[var.region]}" shape = "${var.InstanceShape}" create_vnic_details { subnet_id = "${oci_core_subnet.ExampleSubnet.id}" display_name = "primaryvnic" assign_public_ip = true hostname_label = "tfexampleinstance" }, metadata { ssh_authorized_keys = "${var.ssh_public_key}" } timeouts { create = "60m" } }

network.tf: Here we have the Terraform script for creating VCN, Subnet, Internet Gateway and Route table. These are vital for the creation and access of the compute instance that we provision.

resource "oci_core_virtual_network" "ExampleVCN" { cidr_block = "10.1.0.0/16" compartment_id = "${var.compartment_ocid}" display_name = "TFExampleVCN" dns_label = "tfexamplevcn" } resource "oci_core_subnet" "ExampleSubnet" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" cidr_block = "10.1.20.0/24" display_name = "TFExampleSubnet" dns_label = "tfexamplesubnet" security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"] compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" route_table_id = "${oci_core_route_table.ExampleRT.id}" dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}" } resource "oci_core_internet_gateway" "ExampleIG" { compartment_id = "${var.compartment_ocid}" display_name = "TFExampleIG" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" } resource "oci_core_route_table" "ExampleRT" { compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" display_name = "TFExampleRouteTable" route_rules { cidr_block = "0.0.0.0/0" network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}" } }

block.tf: The script below defines the block volume and its iSCSI attachment for the compute instance being provisioned.

resource "oci_core_volume" "TFBlock0" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "TFBlock0"
  size_in_gbs         = "${var.DBSize}"
}

resource "oci_core_volume_attachment" "TFBlock0Attach" {
  attachment_type = "iscsi"
  compartment_id  = "${var.compartment_ocid}"
  instance_id     = "${oci_core_instance.TFInstance.id}"
  volume_id       = "${oci_core_volume.TFBlock0.id}"
}

provider.tf: In the provider script the OCI account details are set: tenancy, user, fingerprint, private key path, and region.

 

provider "oci" {
  tenancy_ocid         = "${var.tenancy_ocid}"
  user_ocid            = "${var.user_ocid}"
  fingerprint          = "${var.fingerprint}"
  private_key_path     = "${var.private_key_path}"
  region               = "${var.region}"
  disable_auto_retries = "true"
}

datasources.tf: Defines the data sources used in the configuration.

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

# Gets a list of vNIC attachments on the instance
data "oci_core_vnic_attachments" "InstanceVnics" {
  compartment_id      = "${var.compartment_ocid}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  instance_id         = "${oci_core_instance.TFInstance.id}"
}

# Gets the OCID of the first (default) vNIC
data "oci_core_vnic" "InstanceVnic" {
  vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}"
}

outputs.tf: Defines the outputs of the configuration: the private and public IP addresses of the provisioned instance.

# Output the private and public IPs of the instance
output "InstancePrivateIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"]
}

output "InstancePublicIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"]
}

remote-exec.tf: Uses a null_resource with a remote-exec provisioner and depends_on to execute commands on the instance once it is provisioned and the volume is attached.

resource "null_resource" "remote-exec" {
  depends_on = ["oci_core_instance.TFInstance", "oci_core_volume_attachment.TFBlock0Attach"]

  provisioner "remote-exec" {
    connection {
      agent       = false
      timeout     = "30m"
      host        = "${data.oci_core_vnic.InstanceVnic.public_ip_address}"
      user        = "opc" # default SSH user for Oracle Linux images on OCI
      private_key = "${var.ssh_private_key}"
    }

    inline = [
      "touch ~/IMadeAFile.Right.Here",
      "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}",
      "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic",
      "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc"
    ]
  }
}

Oracle Cloud Infrastructure - Configuration

The major configuration needed on OCI is security-related: uploading the API signing key that allows Terraform to authenticate and provision an instance.

Click your username at the top of the Oracle Cloud Infrastructure console and select User Settings from the drop-down menu.

Now click the “Add Public Key” button, paste the contents of the public key file (oci_api_key_public.pem) into the dialog, and click the Add button.

Note: Refer to the links below for details on generating and uploading the OCI API signing key.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

Configuring the Build VM

Click the user drop-down at the top right of the page and select “Organization” from the menu.

Click the VM Templates tab and then the “New Template” button. Give the template a name of your choice and select “Oracle Linux 7” as the platform.

Once the template is created, click the “Configure Software” button.

Select Terraform from the list of software bundles available for configuration and click the + sign to add it to the template.

Then click “Done” to complete the software configuration.

Click the Virtual Machines tab, then click the “+New VM” button, enter the number of VMs you want to create, and select the VM template you just created, which is “terraformTemplate” in this blog.

Build Job Configuration

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and branch that contain your Terraform scripts. You may select the checkbox to trigger a build automatically on SCM commits.

Select the Unix Shell Builder from the Add Builder dropdown, then add the build script. The script first configures the environment variables using env-vars, then copies oci_api_key.pem and oci_api_key_public.pem to the specified directory, and finally executes the Terraform commands to provision the OCI instance. The important commands are terraform init, terraform plan, and terraform apply.

terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform plan – The terraform plan command is used to create an execution plan. 

terraform apply – The terraform apply command is used to apply the changes required to reach the desired state of the configuration, i.e. the set of actions generated by the preceding terraform plan.

After execution, the build prints the IP addresses of the provisioned instance as output and then makes an SSH connection to the machine using the RSA keys supplied in the exampleTerraform folder.

Configure the Artifact Archiver to archive the terraform.tfstate file, which is generated as part of the build execution. You may set the compression to GZIP or NONE.

Post Build Job Execution

In the build log you will see the private and public IP addresses of the instance provisioned by the Terraform scripts, followed by the SSH connection attempt. If everything goes well, the build job completes successfully.

Now you can go to the Oracle Cloud Infrastructure console and see that the instance has been created for you, along with the network and block volume defined in the Terraform scripts.

So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Developer Cloud Service May Release Adds K8S, OCI, Code Editing and More

Tue, 2018-05-08 11:00

Just a month after the recent release of Oracle Developer Cloud Service - which added support for pipelines, Docker, and Terraform - we are happy to announce another update to the service that adds even more options to help you extend your DevOps and CI/CD processes to support additional use cases.

Here are some highlights of the new version:

Extended build server software

You can now create build jobs and pipelines that leverage:

  • Kubernetes - use the kubectl command line to manage your Docker containers
  • OCI command line - to automate provisioning and configuration of Oracle Compute
  • Java 9 - for your latest Java project deployments
  • Oracle Development Tools - Oracle Forms and Oracle JDeveloper 12.2.3 are now available to automate deployment of Forms and ADF apps

 

Build Server Software Options

SSH Connection in Build

You can now define SSH connection as part of your build configuration to allow you to securely connect and execute shell scripts on Oracle Cloud Services.

In Browser Code Editing and Versioning 

A new "pencil" icon lets you edit code in your private Git repositories hosted in Developer Cloud Service directly in your browser. Once you've edited the code, you can commit the changes directly to your branch, providing a commit message.

Code editing in the browser

PagerDuty Webhook

Continuing our principle of keeping the environment open, we added new webhook support that lets you send events to the popular PagerDuty solution.

Increased Reusability

We are making it easier to replicate things that already work for your team. For example, you can now create a new project based on an existing project you exported. You can copy an agile board over to a new one. And if you've created a useful issue search, you can share it with others on your team.

There are many other features that will improve your daily work; have a look at the What's New in DevCS document for more information.

Happy development!

A New Oracle Autonomous Visual Builder Cloud Service - Visual and Coding Combined

Mon, 2018-05-07 14:39

We are happy to announce the availability of Oracle Autonomous Visual Builder Cloud Service (VBCS) - Oracle's visual low-code development platform for JavaScript based applications with built-in autonomous capabilities.

Over the past couple of years, the visual development approach of VBCS has made it a very attractive solution to citizen developers who leveraged the no-code required nature of the platform to build their custom applications.

Many professional developers also expressed interest in the visual development experience they saw, but they were looking for additional capabilities.

Specifically, developers were asking for direct access to the code that the visual tools create, so they can change it and enhance it with their own custom code to achieve richer behaviors.

With the new VBCS version we are addressing these demands by adding direct access to manipulate code, while keeping the low-code characteristics of VBCS.

Visual and Code Based Development Combined

Just like in previous versions, constructing the UI is done through a visual WYSIWYG layout editor. Existing VBCS users will notice that they now have access to a much richer set of UI components in the component palette. In fact, they now have access to all of the components offered by Oracle JET (Oracle's open-source JavaScript Extension Toolkit). In addition, you can add more components to the palette using the Web Components standards-based Oracle JET Composite Component Architecture (CCA).

The thing to note about the visual editor is the new "Code" button at the top right; clicking it gives professional developers direct access to the HTML code that makes up the page layout. They'll be happy to discover that the code is pure HTML/JavaScript/CSS, which lets them leverage their existing expertise to further enhance and customize it. Developers can manipulate that code directly in the smart code editor, leveraging features such as code insight, syntax highlighting, doc access, and reformatting, right in their browser.

The visual development approach is not limited to page layouts; we extend it also to the way you define business logic. The flow of your logic is defined through our new action flow editor, with a collection of operations that you can define declaratively and the ability to invoke your own JavaScript code for unique functionality.

Now that developers have direct access to the code, we also added integration with Git, leveraging the private Git repositories provided through Oracle Developer Cloud Service (DevCS). Teams can now leverage the full set of Agile methodology capabilities of DevCS when working on VBCS applications, including issue tracking, version management, agile planning and code review processes.

Mobile and Web Development Unified

With the new version of VBCS we further integrated the development experience across both web browser-based and on-device mobile applications. 

In the same project you can create both types of applications, leveraging the same development approach, application architecture, UI components, and access to custom business objects and external REST services.

Once you are done developing your mobile application, we'll package it for you as an on-device mobile app that you can install, test, and run on your devices - leveraging the native look and feel provided by Oracle JET for the various mobile platforms.

Standard-Based Data Openness

With the new version you can now hook up VBCS to any REST data source with a few button clicks, leveraging a declarative approach to consuming external REST source in your application. VBCS is able to parse standard Swagger based service descriptors for easy consumption. Even if you don't have a detailed structure description for a service, the declarative dialog in VBCS makes it easy to define the access to any service, including security settings, header and URL parameters, and more. VBCS is smart enough to parse the structure returned from the service and create variables that will allow you to access the data in your UI with ease.

Let's not forget that VBCS also lets you define your own custom reusable business services. VBCS will create the database objects to store the information in these objects, and will provide you with a powerful secure set of REST services to allow you to access these objects from both your VBCS and external applications.

Visual Builder Cloud Service Goes Autonomous

Today’s Visual Builder Cloud Service release also has built-in autonomous capabilities to automate and eliminate repetitive tasks so you can instead focus on app design and development.

Configuring and provisioning your service is as easy as a single button click. All you need to do is tell us the name you want for your server, and with a click of a button everything is configured for you. You don't need to install and configure your underlying platform - the service automatically provisions a database, an app hosting server, and your full development platform for you.

One click install

The new autonomous VBCS eliminates any manual tasks for the maintenance of your development and deployment platforms. Once your service is provisioned we'll take care of things like patching, updates, and backups for you.

Furthermore, autonomous VBCS automatically maintains your mobile app publishing infrastructure. You just click a button and we'll publish your mobile app to iOS or Android packages, and host your web app on our scalable backend services that host your data and your applications.

But Wait, There is More

There are many other new features you'll find in the new version of Oracle Visual Builder Cloud Service. Whether you are a seasoned JavaScript expert looking to accelerate your delivery, a developer taking your first steps in the wild world of JavaScript development, or a citizen developer looking to build your business application - Visual Builder has something for you.

So take it for a spin - we are sure you are going to enjoy the experience.

For more information and to get your free trial visit us at http://cloud.oracle.com/visual-builder

 

 

Oracle Dev Moto Tour 2018

Mon, 2018-05-07 14:00
 "Four wheels move the body. Two wheels move the soul."
 
The 2018 Developers Motorcycle Tour starts its engines on May 8th, rolling through Japan and Europe to visit user groups, Java Day Tokyo, and Code events. Join Stephen Chin, Sebastian Daschner, and other community luminaries to catch up on the latest technologies and products - as well as bikes, food, sumo, football, or anything fun.
 
Streaming live from every location! Watch the sessions online at @OracleDevs and follow them for updates. For details about schedules, resources, videos, and more through May and June 2018, visit DevTours.
 
Japan Tour: May 2018
In May, the dev tour motorcycle team will travel to various events, including the Java Day Tokyo conference.  Meet Akihiro Nishikawa, Andres Almiray, David Buck, Edson Yanaga, Fernando Badapoulis, Ixchel Ruiz, Kirk Pepperdine, Matthew Gilliard, Sebastian Daschner, and Stephen Chin.
 
May 8, 2018 Kumamoto Kumamoto JUG
May 10, 2018 Fukuoka Fukuoka JUG
May 11, 2018 Okayama Okayama JUG
May 14, 2018 Osaka Osaka JUG
May 15, 2018 Nagoya Nagoya JUG
May 17, 2018 Tokyo Java Day Tokyo
May 18, 2018 Tokyo JOnsen
May 19, 2018 Tokyo JOnsen
May 20, 2018 Tokyo JOnsen
May 21, 2018 Sendai Sendai JUG
May 23, 2018 Sapporo JavaDo
May 26, 2018 Tokyo JJUG Event
 
The European Tour: June 2018
In June, the dev tour motorcycle team will travel to multiple European countries and cities to meet Java and Oracle developers. Depending on the city and the event, which will include the Code Berlin conference, you'll meet Fernando Badapoulis, Nikhil Nanivadekar, Sebastian Daschner, and Stephen Chin.
 
June 4, 2018 Zurich JUG Switzerland
June 5, 2018 Freiburg JUG Freiburg
June 6, 2018 Bodensee JUG Bodensee
June 7, 2018 Stuttgart JUG Stuttgart
June 11, 2018 Berlin JUG BB
June 12, 2018 Berlin Oracle Code Berlin
June 13, 2018 Hamburg JUG Hamburg
June 14, 2018 Hannover JUG Hannover
June 15, 2018 Münster JUG Münster
June 16, 2018 Köln / Cologne JUG Cologne
June 17, 2018 Munich JUG Munich
 

Oracle Adds New Support for Open Serverless Standards to Fn Project and Key Kubernetes Features ...

Wed, 2018-05-02 02:01

Open serverless project Fn adds support for broader serverless standardization with CNCF CloudEvents, serverless framework support, and OpenCensus for tracing and metrics.

Oracle Container Engine for Kubernetes tackles toughest real-world governance, scale, and management challenges facing K8s users today

Today at Kubecon + CloudNativeCon Europe 2018, Oracle announced new support for several open serverless standards on its open Fn Project and a set of critical new Oracle Container Engine for Kubernetes features addressing key real-world Kubernetes issues including governance, security, networking, storage, scale, and manageability.

Both the serverless and Kubernetes communities are at an important crossroads in their evolution, and to further its commitment to open serverless standards, Oracle announced that the Fn Project now supports standards-based projects CloudEvents and the Serverless Framework. Both projects are intended to create interoperable and community-driven alternatives to today’s proprietary serverless options.

Solving Real World Kubernetes Challenges

The New Stack, in partnership with the Cloud Native Computing Foundation (CNCF), recently published a report analyzing the top challenges facing Kubernetes users today. The report found that infrastructure-related issues – specifically security, storage, and networking – had risen to the top, impacting larger companies the most.

  

 

Source: The New Stack

In addition, when evaluating container orchestration, classic non-functional requirements came into play: scaling, manageability, agility, and security. Solving these types of issues will help the Kubernetes project move through the Gartner Hype Cycle “Trough of Disillusionment”, up the “Slope of Enlightenment” and onto the promised land of the “Plateau of Productivity.”

Source: The New Stack

Addressing Real-World Kubernetes Challenges

To address these top challenges facing Kubernetes users today, Oracle Container Engine for Kubernetes has integrated tightly with the best-in-class governance, security, networking, and scale of Oracle Cloud Infrastructure (OCI). These are summarized below:

  • Governance, compliance, & auditing: Identity and Access Management (IAM) for Kubernetes enables DevOps teams not only to control who has access to Kubernetes resources, but also to set policies describing what type of access they have and to which specific resources. This is a crucial element in managing complex organizations, with rules applied to logical groups of users and resources, making it simple to define and administer policies.

    • Governance: DevOps teams can set which users have access to which resources, compartments, tenancies, users, and groups for their Kubernetes clusters. Since different teams typically manage different resources through different stages of the development cycle – from development, test, staging, through production – role-based access control (RBAC) is crucial. Two levels of RBAC are provided: (1) at the OCI IaaS infrastructure resource level defining who can for example spin up a cluster, scale it, and/or use it, and (2) at a Kubernetes application level where fine-grained Kubernetes resource controls are provided.

  • Compliance: Container Engine for Kubernetes will support the Payment Card Industry Data Security Standard (PCI DSS), the globally applicable security standard that customers use for a wide range of sensitive workloads, including the storage, processing, and transmission of cardholder data. DevOps teams will be able to run Kubernetes applications on Oracle’s PCI-compliant Cloud Infrastructure Services.

  • Auditing (logging, monitoring): Cluster management auditing events have also been integrated into the OCI Audit Service for consistent and unified collection and visibility.

  • Scale: Oracle Container Engine is a highly available managed Kubernetes service. The Kubernetes masters are highly available (cross availability domains), managed, and secured. Worker clusters are self-healing, can span availability domains, and can be composed of node pools consisting of compute shapes from VMs to bare metal to GPUs.

    • GPUs, Bare Metal, VMs: Oracle Container Engine offers the industry’s first and broadest family of Kubernetes compute nodes, supporting small and virtualized environments, to very large and dedicated configurations. Users can scale up from basic web apps up to high performance compute models, with network block storage and local NVMe storage options.

    • Predictable, High IOPS: The Kubernetes node pools can use either VMs or Bare Metal compute with predictable IOPS block storage and dense I/O VMs. Local NVMe storage provides a range of compute and capacities with high IOPS.

    • Kubernetes on NVIDIA Tesla GPUs: Running Kubernetes clusters on bare metal GPUs gives container applications access to the highest performance possible. With no hypervisor overhead, DevOps teams have access to bare metal compute instances on Oracle Cloud Infrastructure with two NVIDIA Tesla P100 GPUs to run CUDA-based workloads, allowing for over 21 TFLOPS of single-precision performance per instance.

  • Networking: Oracle Container Engine is built on a state-of-the-art, non-blocking Clos network that is not over-subscribed and provides customers with a predictable, high-bandwidth, low latency network.

    • Load balancing: Load balancing is often one of the hardest features to configure and manage – Oracle has integrated seamlessly with OCI load balancing to allow container-level load balancing. Kubernetes load balancing checks for incoming traffic on the load balancer's IP address and distributes incoming traffic to a list of backend servers based on a load balancing policy and a health check policy. DevOps teams can define Load Balancing Policies that tell the load balancer how to distribute incoming traffic to the backend servers.

    • Virtual Cloud Network: Kubernetes user (worker) nodes are deployed inside a customer’s own VCN (virtual cloud network), allowing for secure management of IP addresses, subnets, route tables and gateways using the VCN.

  • Storage: Cracking the code on a simple way to manage Kubernetes storage continues to be a major concern for DevOps teams. There are two new IaaS Kubernetes storage integrations designed for Oracle Cloud Infrastructure that can help, unlocking OCI’s industry leading block storage performance (highest IOPS per GB of any standard cloud provider offering), cost, and predictability:

  • Simplified, Unified Management:

    • Bundled in Management: By bundling in commonly used Kubernetes utilities, Oracle Container Engine for Kubernetes makes for a familiar and seamless developer experience. This includes built-in support for Helm and Tiller (providing standard Kubernetes package management), the Kubernetes dashboard, and kube-dns.

    • Running Existing Applications with Kubernetes: Kubernetes supports an ever-growing set of workloads that are not necessarily net new greenfield apps. A Kubernetes Operator is “an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user.” Oracle has open-sourced and will soon generally release an Oracle WebLogic Server Kubernetes Operator which allows WebLogic users to manage WebLogic domains in a Kubernetes environment without forcing application rewrites, retesting and additional process and cost. WebLogic 12.2.1.3 has also been certified on Kubernetes, and the WebLogic Monitoring Exporter, which exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana, has been released and open sourced.

Fn Project: Open serverless initiatives are progressing within the CNCF and the Fn Project is actively engaged and supporting these emerging standards:

  • CloudEvents: The Fn Project has announced support for the Cloud Event standard effort. CloudEvents seeks to standardize event data and simplify event declaration and delivery among different applications, platforms, and providers. Until now, developers have lacked a common way of describing serverless events. This not only severely affects the portability of serverless apps but is also a significant drain on developer productivity.

  • Serverless Framework: Fn Project, an open source functions as a service and workflow framework, has contributed a FaaS provider to the Serverless Framework to further its mission of multi-cloud and on-premise serverless computing. The new provider allows users of the Serverless Framework to easily build and deploy container-native functions to any Fn Cluster while getting the unified developer experience they’re accustomed to. For Fn’s growing community, the integration provides an additional option for managing functions in a multi-cloud and multi-provider world.

    • “With a rapidly growing community around Fn, offering a first-class integration with the Serverless Framework will help bring our two great communities closer together, providing a ‘no lock-in’ model of serverless computing to companies of all sizes from startups to the largest enterprises,” says Chad Arimura, VP Software Development, Oracle.

  • OpenCensus: Fn is now using OpenCensus stats, trace, and view APIs across all Fn code. OpenCensus is a single distribution of libraries that automatically collects traces and metrics from your app, displays them locally, and sends them to any analysis tool. OpenCensus has made good decisions in defining their own data formats that allow developers to use any backends (explicitly not having to create their own data structures simply for collection). This allows Fn to easily stay up to date in the ops world without continuously having to make extensive code changes.

For more information, join Chad Arimura and Matt Stephenson Friday, May 4 for their talk at KubeCon on Operating a Global Scale FaaS on top of Kubernetes.

 

A Quick Look At What's New In Oracle JET v5.0.0

Thu, 2018-04-26 12:35

The newest release of Oracle JET was delivered to the community on April 16th, continuing the foundational concept of delivering a toolkit on a consistent and predictable release schedule that application developers can rely on. This is the 24th consecutive on-schedule release of Oracle JET.

 

oracle jet logo

This release is primarily a maintenance release, with updates to the underlying open source dependencies where needed, and quite a bit of housekeeping, with the removal of previously deprecated APIs. As always, the Release Notes provide the details, and it's highly recommended that you take some time to read through the sections that describe the removed APIs. Most have been under deprecation notice for well over a year; in some cases, APIs are being removed whose deprecation was announced almost four years ago. This will help keep things as clean and lightweight as possible going forward.

visual build call to action button example

One of the first things you'll probably notice is that the Home page now has an option to check out the new Visual Builder Cloud Service. For those who are more familiar and comfortable with a declarative approach to web development, Visual Builder provides a very comprehensive drag-and-drop approach to developing JET-based applications. If you find yourself in a position where you need to get down to the code while working in Visual Builder, the newest release now provides full code-level development as well. Just hit the Code button and you'll find yourself writing real JET code, with code completion, inline documentation, and more. It's the same code that you see in the Cookbook and other sample applications today.

 

New ways to Get Started

The Get Started page has also received a bit of a face lift.  As the JET community continues to grow, there are more developers looking at JET for the first time, and providing multiple ways to get that first experience is important.  You'll now find that you can Get Started by using Visual Builder as described above, or take a quick look at how JET code is structured with a quick sample available on jsFiddle.  Of course the Command Line Interface (ojet-cli) is still the primary method for getting things off the ground with JET.

get started page screenshot

 

Growth and Success

The JET Community continues to grow at a rapid pace and we are proud to have three new Oracle Partners/Customers added to the Success Stories page in this release.  We also added a new Oracle Product which is providing tremendous opportunities for Cloud Startups.  Visit the Success Stories page to learn more about:

 

If you have a JET application, or your company is using JET and you'd like to be included on the JET Success Stories page, please drop a note in the JET Community Forums.

 

A Single Source of Truth for Resource Paths

The Oracle JET Command Line Interface itself has added a few new features in this release. One of the most notable is the consolidation of resource path definitions into one configuration file. If you have tried adding third-party libraries to a JET application in the past, you found yourself adding the path to those libraries in up to three different files to make sure things worked in both development and a production build of the application. Everything is now in a single file called "path-mappings.json". Check out the Migration chapter of the Developers Guide for details on how to work with this new single source of truth for paths.

path mapping file structure example

 

 

Composite Component Architecture (CCA) continues to mature

Composite Component Architecture (CCA) continues to be a major focus of Oracle JET, and each release brings more enhancements to the metadata and structure of the overall architecture. The best place to keep track of what is happening in CCA development is Duncan Mills' blog series. The latest installment covers changes made in the JET v5.0.0 release.

 

 

Theming gets an update

Theming has always been a significant feature of JET, with the inclusion of SASS (.scss) files for the default Alta theme, themes for Android, iOS, and Windows platforms, as well as a Theme Builder application to help you build your own theme as needed. In JET v5.0.0 the method for defining the base color scheme has been revised. Take a look at the Theme Changes section of the Release Notes for details, as well as the Theme Builder example on the JET website.

 

New task types in oj-Gantt

The Gantt chart has been gaining features over the last few releases, and this release adds new task types such as Summary and Milestone. Continue to watch this component over future releases as it matures to meet more and more use cases.

 

 

As always, your comments and constructive feedback are welcome. If you have questions or comments, please engage with the Oracle JET Community in the Discussion Forums, or follow @OracleJET on Twitter.

On behalf of the entire JET development team, Happy Coding!!

 

Announcing the General Availability of MySQL 8.0

Thu, 2018-04-26 08:17

MySQL adds NoSQL and many new enhancements to the world’s most popular open source database:

  1. NoSQL Document Store gives developers the flexibility of developing traditional SQL relational applications and NoSQL, schema-free document database applications.  This eliminates the need for a separate NoSQL document database. 
  2. SQL Window functions, Common Table Expressions, NOWAIT and SKIP LOCKED, Descending Indexes, Grouping, Regular Expressions, Character Sets, Cost Model, and Histograms.
  3. JSON Extended syntax, new functions, improved sorting, and partial updates. With JSON table functions you can use the SQL machinery for JSON data.
  4. GIS Geography support. Spatial Reference Systems (SRS), as well as SRS aware spatial datatypes,  spatial indexes,  and spatial functions.
  5. Reliability DDL statements have become atomic and crash safe, meta-data is stored in a single, transactional data dictionary 
  6. Observability Performance Schema, Information Schema, Invisible Indexes,  Error Logging.
  7. Manageability Persistent Configuration Variables, Undo tablespace management, Restart command, and New DDL.
  8. High Availability InnoDB Cluster delivers an integrated, native, HA solution for your databases.
  9. Security OpenSSL improvements, new default authentication, SQL Roles, breaking up the super privilege, password strength, authorization.
  10. Performance Up to 2x faster than MySQL 5.7.
Developer Features

MySQL 8.0 delivers many new features requested by developers in areas such as SQL, JSON, and GIS. Developers also want to be able to store emojis, thus UTF8MB4 is now the default character set in 8.0.

NoSQL Document Store

MySQL Document Store gives developers maximum flexibility developing traditional SQL relational applications and NoSQL, schema-free document database applications.  This eliminates the need for a separate NoSQL document database.  The MySQL Document Store provides multi-document transaction support and full ACID compliance for schema-less JSON documents.

SQL

Window Functions

MySQL 8.0 delivers SQL window functions. Similar to grouped aggregate functions, window functions perform a calculation on a set of rows, e.g. COUNT or SUM. But where a grouped aggregate collapses this set of rows into a single row, a window function performs the aggregation for each row in the result set.

Window functions come in two flavors: SQL aggregate functions used as window functions and specialized window functions.
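
As a quick sketch of both flavors (the employees table and its columns here are hypothetical), the query below uses SUM() as a window aggregate and the specialized RANK() function:

SELECT name,
       department,
       salary,
       -- aggregate function used as a window function: rows are not collapsed
       SUM(salary) OVER (PARTITION BY department) AS dept_total,
       -- specialized window function
       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank
FROM employees;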

Common Table Expression

MySQL 8.0 delivers [Recursive] Common Table Expressions (CTEs). Non-recursive CTEs can be explained as “improved derived tables”, as they allow the derived table to be referenced more than once. A recursive CTE is a set of rows built iteratively: from an initial set of rows, a process derives new rows, which grow the set, and those new rows are fed into the process again, producing more rows, and so on, until the process produces no more rows.
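
A minimal, self-contained sketch of a recursive CTE that builds the numbers 1 through 10 from an initial row:

WITH RECURSIVE seq (n) AS (
  SELECT 1                    -- the initial set of rows
  UNION ALL
  SELECT n + 1 FROM seq       -- derive new rows from the previous ones
  WHERE n < 10                -- stop once no more rows are produced
)
SELECT n FROM seq;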


MySQL CTE and Window Functions in MySQL Workbench 8.0

NOWAIT and SKIP LOCKED

MySQL 8.0 delivers NOWAIT and SKIP LOCKED alternatives in the SQL locking clause. Normally, when a row is locked due to an UPDATE or a SELECT ... FOR UPDATE, any other transaction will have to wait to access that locked row. In some use cases there is a need to either return immediately if a row is locked or ignore locked rows. A locking clause using NOWAIT will never wait to acquire a row lock. Instead, the query will fail with an error. A locking clause using SKIP LOCKED will never wait to acquire a row lock on the listed tables. Instead, the locked rows are skipped and not read at all.
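
For example (the seats table here is hypothetical; both statements would be issued inside a transaction):

-- Fail immediately with an error instead of waiting for the row lock
SELECT * FROM seats WHERE seat_no = 42 FOR UPDATE NOWAIT;

-- Read only rows that are not locked, skipping the locked ones entirely
SELECT * FROM seats WHERE booked = 'NO' LIMIT 1 FOR UPDATE SKIP LOCKED;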

Descending Indexes

MySQL 8.0 delivers support for indexes in descending order. Values in such an index are arranged in descending order, and the index is scanned forward. Before 8.0, when a user created a descending index, MySQL actually created an ascending index and scanned it backwards. One benefit is that forward index scans are faster than backward index scans.
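
A small sketch (the events table is hypothetical):

-- The DESC keyword is now honored in the index definition
CREATE TABLE events (
  id         INT PRIMARY KEY,
  created_at DATETIME,
  KEY idx_created_desc (created_at DESC)  -- stored descending, scanned forward
);

-- This ORDER BY can be satisfied by a fast forward scan of the index
SELECT id FROM events ORDER BY created_at DESC LIMIT 10;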

GROUPING

MySQL 8.0 delivers GROUPING(), SQL feature T433. The GROUPING() function distinguishes super-aggregate rows from regular grouped rows. GROUP BY extensions such as ROLLUP produce super-aggregate rows where the set of all values is represented by NULL. Using the GROUPING() function, you can distinguish a NULL representing the set of all values in a super-aggregate row from a NULL in a regular row.
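
For instance (the orders table is hypothetical), GROUPING() lets you label the ROLLUP row explicitly:

SELECT IF(GROUPING(region), 'ALL REGIONS', region) AS region,
       SUM(amount) AS total
FROM orders
GROUP BY region WITH ROLLUP;  -- GROUPING(region) = 1 only on the super-aggregate row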

JSON

MySQL 8.0 adds new JSON functions and improves performance for sorting and grouping JSON values.

Extended Syntax for Ranges in JSON path expressions

MySQL 8.0 extends the syntax for ranges in JSON path expressions. For example, SELECT JSON_EXTRACT('[1, 2, 3, 4, 5]', '$[1 to 3]'); results in [2, 3, 4]. The new syntax is a subset of the SQL standard syntax, described in SQL:2016, 9.39 SQL/JSON path language: syntax and semantics.

JSON Table Functions

MySQL 8.0 adds JSON table functions, which enable the use of the SQL machinery for JSON data. JSON_TABLE() creates a relational view of JSON data: it maps the result of a JSON data evaluation into relational rows and columns. The user can query the result returned by the function as a regular relational table using SQL, e.g. join, project, and aggregate.
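
A minimal sketch of JSON_TABLE() turning an inline JSON array into rows:

SELECT t.name, t.qty
FROM JSON_TABLE(
  '[{"name":"apple","qty":5},{"name":"pear","qty":10}]',
  '$[*]' COLUMNS (
    name VARCHAR(20) PATH '$.name',
    qty  INT         PATH '$.qty'
  )
) AS t
WHERE t.qty > 5;  -- query the relational view like any other table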

JSON Aggregation Functions

MySQL 8.0 adds the aggregation functions JSON_ARRAYAGG(), to generate JSON arrays, and JSON_OBJECTAGG(), to generate JSON objects. This makes it possible to combine JSON documents in multiple rows into a JSON array or a JSON object.
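
For example (the fruit table with name and qty columns is hypothetical):

SELECT JSON_ARRAYAGG(name)       AS all_names,    -- e.g. ["apple", "pear"]
       JSON_OBJECTAGG(name, qty) AS name_qty_map  -- e.g. {"apple": 5, "pear": 10}
FROM fruit;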

JSON Merge Functions

The JSON_MERGE_PATCH() function implements the JSON Merge Patch semantics of JavaScript (and other scripting languages) specified by RFC 7396, i.e. it removes duplicates by giving precedence to the second document. For example, JSON_MERGE_PATCH('{"a":1,"b":2 }','{"a":3,"c":4 }') returns {"a":3,"b":2,"c":4}.

JSON Improved Sorting

MySQL 8.0 gives better performance for sorting/grouping JSON values by using variable-length sort keys. Preliminary benchmarks show from 1.2 to 18 times improvement in sorting, depending on the use case.

JSON Partial Update

MySQL 8.0 adds support for partial update of JSON documents through the JSON_REMOVE(), JSON_SET(), and JSON_REPLACE() functions. If only some parts of a JSON document are updated, we want to give information to the handler about what was changed, so that the storage engine and replication don't need to write the full document.
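
As a hedged sketch (the products table is hypothetical; partial updates in replication additionally depend on the binlog_row_value_options server setting):

-- Only the modified path needs to be written, not the whole document
UPDATE products
SET attrs = JSON_SET(attrs, '$.price', 9.99)
WHERE id = 1;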

GIS

MySQL 8.0 delivers geography support. This includes meta-data support for Spatial Reference System (SRS), as well as SRS aware spatial datatypes,  spatial indexes,  and spatial functions.

Character Sets

MySQL 8.0 makes UTF8MB4 the default character set. UTF8MB4 is the dominating character encoding for the web, and this move will make life easier for the vast majority of MySQL users.

Cost Model

Query Optimizer Takes Data Buffering into Account

MySQL 8.0 chooses query plans based on knowledge of whether data resides in memory or on disk. This happens automatically; as seen from the end user, there is no configuration involved. Historically, the MySQL cost model assumed data to reside on spinning disks. The cost constants associated with looking up data in memory and on disk are now different; thus, the optimizer will choose more optimal access methods for the two cases, based on knowledge of the location of the data.

Optimizer Histograms

MySQL 8.0 implements histogram statistics. With histograms, the user can create statistics on the data distribution for a column in a table, typically for non-indexed columns, which the query optimizer then uses to find the optimal query plan. The primary use case for histogram statistics is calculating the selectivity (filter effect) of predicates of the form “COLUMN operator CONSTANT”.
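
For example (the orders table and amount column are hypothetical), building a histogram is a single statement:

-- Collect a 32-bucket histogram on a non-indexed column so the optimizer
-- can estimate the selectivity of predicates like "amount < 100"
ANALYZE TABLE orders UPDATE HISTOGRAM ON amount WITH 32 BUCKETS;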

Reliability

Transactional Data Dictionary

MySQL 8.0 increases reliability by ensuring atomic, crash-safe DDL via the transactional data dictionary. With this, the user is guaranteed that any DDL statement will either be executed fully or not at all. This is particularly important in a replicated environment; otherwise there can be scenarios where masters and slaves (nodes) get out of sync, causing data drift.

Observability

Information Schema (speed up)

MySQL 8.0 reimplements the Information Schema. In the new implementation, the Information Schema tables are simple views on data dictionary tables stored in InnoDB. This is far more efficient than the old implementation, with up to a 100 times speedup.

Performance Schema (speed up)

MySQL 8.0 speeds up performance schema queries by adding more than 100 indexes on performance schema tables. 

Manageability

INVISIBLE Indexes

MySQL 8.0 adds the capability of toggling the visibility of an index (visible/invisible). An invisible index is not considered by the optimizer when it builds the query execution plan. However, the index is still maintained in the background, so it is cheap to make it visible again. The purpose of this is for a DBA / DevOps engineer to determine whether an index can be dropped. If you suspect an index of not being used, you first make it invisible, then monitor query performance, and finally remove the index if no query slowdown is experienced.
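
A short sketch of that workflow (the orders table and idx_amount index are hypothetical):

-- Hide the index from the optimizer; it is still maintained in the background
ALTER TABLE orders ALTER INDEX idx_amount INVISIBLE;

-- ... monitor query performance, then either restore it or drop it
ALTER TABLE orders ALTER INDEX idx_amount VISIBLE;
-- or: DROP INDEX idx_amount ON orders;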

High Availability

MySQL InnoDB Cluster delivers an integrated, native, HA solution for your databases. It tightly integrates MySQL Server with Group Replication, MySQL Router, and MySQL Shell, so you don’t have to rely on external tools, scripts or other components.

Security features

OpenSSL by Default in Community Edition

MySQL 8.0 is unifying on OpenSSL as the default TLS/SSL library for both MySQL Enterprise Edition and MySQL Community Edition. 

SQL roles

MySQL 8.0 implements SQL roles. A role is a named collection of privileges. The purpose is to simplify user access rights management. One can create roles, grant privileges to roles, grant roles to users, drop roles, and decide which roles are applicable during a session.
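
For example (the role, schema, and user names here are hypothetical):

CREATE ROLE 'app_read';
GRANT SELECT ON appdb.* TO 'app_read';       -- grant privileges to the role
GRANT 'app_read' TO 'alice'@'%';             -- grant the role to a user
SET DEFAULT ROLE 'app_read' TO 'alice'@'%';  -- activate it by default at login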

Performance

MySQL 8.0 is up to 2x faster than MySQL 5.7.  MySQL 8.0 comes with better performance for Read/Write workloads, IO bound workloads, and high contention “hot spot” workloads.

Scaling Read/Write Workloads

MySQL 8.0 scales well on read/write and heavy-write workloads. On intensive RW workloads, we observe better performance from as few as 4 concurrent users, and more than 2 times better performance on high loads compared to MySQL 5.7. We can say that while 5.7 significantly improved scalability for read-only workloads, 8.0 significantly improves scalability for read/write workloads. The effect is that MySQL improves hardware utilization (efficiency) on standard server-side hardware (such as systems with 2 CPU sockets). This improvement is due to a redesign of how InnoDB writes to the REDO log. In contrast to the historical implementation, where user threads were constantly fighting to log their data changes, in the new REDO log solution user threads are lock-free, REDO writing and flushing are managed by dedicated background threads, and the whole REDO processing becomes event-driven.

Utilizing IO Capacity (Fast Storage)

MySQL 8.0 allows users to use every storage device to its full power. For example, testing with Intel Optane flash devices we were able to deliver 1M Point-Select QPS in a fully IO-bound workload.

Better Performance upon High Contention Loads (“hot rows”)

MySQL 8.0 significantly improves performance for high-contention workloads. A high-contention workload occurs when multiple transactions are waiting for a lock on the same row in a table, causing queues of waiting transactions. Many real-world workloads are not smooth over, say, a day, but have bursts at certain hours. MySQL 8.0 deals much better with such bursts, in terms of transactions per second, mean latency, and 95th-percentile latency. The benefit to the end user is better hardware utilization (efficiency), because the system needs less spare capacity and can thus run at a higher average load.

MySQL 8.0 Enterprise Edition

For mission critical applications, MySQL Enterprise Edition provides the following additional capabilities:

  • MySQL Enterprise Backup for full, incremental and partial backups, Point-in-Time Recovery and backup compression.
  • MySQL Enterprise High Availability for integrated, native, HA with InnoDB Cluster.
  • MySQL Enterprise Transparent Data Encryption (TDE) for data-at-rest encryption.
  • MySQL Enterprise Encryption for encryption, key generation, digital signatures and other cryptographic features.
  • MySQL Enterprise Authentication for integration with existing security infrastructures including PAM and Windows Active Directory.
  • MySQL Enterprise Firewall for real-time protection against database specific attacks, such as an SQL Injection.
  • MySQL Enterprise Audit for adding policy-based auditing compliance to new and existing applications.
  • MySQL Enterprise Monitor for managing your database infrastructure.
  • Oracle Enterprise Manager for monitoring MySQL databases from existing OEM implementations.
MySQL Cloud Service

Oracle MySQL Cloud Service is built on MySQL Enterprise Edition and powered by Oracle Cloud, providing an enterprise-grade MySQL database service. It delivers best-in-class management tools, self-service provisioning, elastic scalability, and multi-layer security.

Resources

JavaOne Event Expands with More Tracks, Languages and Communities – and New Name

Thu, 2018-04-19 11:00

The JavaOne conference is expanding to create a new, bigger event that's inclusive of more languages, technologies, and developer communities. Expect more talks on Go, Rust, Python, JavaScript, and R, along with more of the great Java technical content that developers have come to expect. We're calling the new event Oracle Code One, October 22-25 at Moscone West in San Francisco.

Oracle Code One will include a Java technical keynote with the latest information on the Java platform from the architects of the Java team. It will also have the latest details on Java 11, advances in OpenJDK, and other core Java development. We are planning dedicated tracks for server-side Java EE technology, including Jakarta EE (now part of the Eclipse Foundation), Spring, and the latest advances in Java microservices and containers. There will also be a wealth of community content on client development, JVM languages, IDEs, test frameworks, and more.

As we expand, developers can also expect additional leading edge topics such as chatbots, microservices, AI, and blockchain. There will also be sessions around our modern open source developer technologies including Oracle JET, Project Fn and OpenJFX.

Finally, one of the things that will continue to make this conference so great is the breadth of community-run activities, such as Oracle Code4Kids workshops for young developers, IGNITE lightning talks run by local JUG leaders, and an array of technology demos and community projects showcased in the Developer Lounge. Expect a grand finale with the Developer Community Keynote to close out this week of fun, technology, and community.

Today, we are launching the call for papers for Oracle Code One and you can apply now to be part of any of the 11 tracks of content for Java developers, database developers, full stack developers, DevOps practitioners, and community members.  

I hope you are as excited about this expansion of JavaOne as I am and will join me at the inaugural year of Oracle Code One!

Please submit your abstracts here for consideration:
https://www.oracle.com/code-one/index.html

Beyond Chatbots: An AI Odyssey

Wed, 2018-04-18 06:00

This month the Oracle Developer Community Podcast looks beyond chatbots to explore artificial intelligence -- its current capabilities, staggering potential, and the challenges along the way.

One of the most surprising comments to emerge from this discussion reveals how a character from a 50-year-old feature film factors into one of the most pressing AI challenges.

According to podcast panelist Phil Gordon, CEO and founder of Chatbox.com, the HAL 9000 computer at the center of Stanley Kubrick’s 1968 science fiction classic “2001: A Space Odyssey” is very much on the minds of those now rushing to deploy AI-based solutions. “They have unrealistic expectations of how well AI is going to work and how much it’s going to solve out of the box.” (And apparently they're willing to overlook HAL's abysmal safety record.)

It's easy to see how an AI capable of carrying on a conversation while managing and maintaining all the systems on a complex interplanetary spaceship would be an attractive idea for those who would like to apply similar technology to keeping a modern business on course. But the reality of today’s AI is a bit more modest (if less likely to refuse to open the pod bay doors).

In the podcast, Lyudmil Pelov, a cloud solutions architect with Oracle’s A-Team, explains that unrealistic expectations about AI have been fed by recent articles that portray AI as far more human-like than is currently possible.

“Most people don't understand what's behind the scenes,” says Lyudmil. “They cannot understand that the reality of the technology is very different. We have these algorithms that can beat humans at Go, but that doesn't necessarily mean we can find the cure for the next disease.” Those leaps forward are possible. “From a practical perspective, however, someone has to apply those algorithms,” Lyudmil says.

For podcast panelist Brendan Tierney, an Oracle ACE Director and principal consultant with Oralytics, accessing relevant information from within the organization poses another AI challenge.  “When it comes to customer expectations, there's an idea that it's a magic solution, that it will automatically find and discover and save lots of money automatically. That's not necessarily true.”  But behind that magic is a lot of science.

“The general term associated with this is, ‘data science,’” Brendan explains. “The science to it is that there is a certain amount of experimental work that needs to be done. We need to find out what works best with your data. If you're using a particular technique or algorithm or whatever, it might work for one company, but it might not work best for you. You've got to get your head around the idea that we are in a process of discovery and learning and we need to work out what's best for your data in your organization and processes.”

For panelist Joris Schellekens, software engineer at iText, a key issue is that of traceability. “If the AI predicts something or if your system makes some kind of decision, where does that come from? Why does it decide to do that? This is important to be able to explain expectations correctly, but also in case of failure—why does it fail and why does it decide to do this instead of the correct thing?”

Of course, these issues are only a sampling of what is discussed by the experienced developers in this podcast. So plug in and gain insight that just might help you navigate your own AI odyssey.

The Panelists

Phil Gordon
CEO/founder of Chatbox.com

Twitter LinkedIn 

Lyudmil Pelov
Oracle A-Team Cloud Architect, Mobile, Cloud and Bot Technologies, Oracle

Twitter LinkedIn 

Joris Schellekens
Software Engineer, iText

Twitter LinkedIn

Brendan Tierney
Consultant, Architect, Author, Oralytics

Twitter LinkedIn 

Additional Resources

Coming Soon
  • The Making of a Meet-Up
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:
