Feed aggregator

Introduction to databases for {Power.Coders} with MySQL

Yann Neuhaus - Fri, 2018-06-22 15:04
    This week I took some days off to do something related to my job but a bit different: I gave a course on databases and SQL. But not for my usual customers, and not with the database I know best. So it is still in a domain that I know, but out of my comfort zone. And this is something we should do more often, because it brings a little apprehension and a big satisfaction.

    The little apprehension is because there were a lot of unknown parameters for me. I taught students from Power.Coders, a coding academy for refugees: 18 young people with very different backgrounds. Some already knew how to code. Some had done front-end work and website design in the past but had no idea what a server is. Others were doing all of this for the first time and had to learn what a program is. But there’s one thing that is common to everybody here: all are motivated to learn, understand, and acquire the knowledge and experience to start a career in IT. This is the good energy that makes everything possible.

    The big satisfaction is because, with everybody doing their best, things work and everyone gains confidence. It is out of the comfort zone that you give your best, and that holds for the students as well as for the teacher and coaches. Because I wanted the course to stay consistent with what they learned in the curriculum, I did the database examples and exercises on MySQL and from PHP code. I had never done that, so I had to study as well. The big advantage I have, from experience, is that I know where to search on the internet. One of the students told me: “don’t tell me to google for it, that’s like swimming in the ocean!”. When you are not used to it, ‘googling’ for solutions is not easy. Experienced people do not always consider that, and they answer in forums with a rude “RTFM” or “LMGTFY”. I’ve never felt obliged to answer on forums, but when I do, it is not to argue with the OP about his question, or about whether he did his own research first. If I choose to answer, my goal is to explain as clearly as possible. And I do not fear repeating myself, because the more I explain, the better I understand what I explain. I remember my first answers on the dba-village forum. Some questions were recurrent, and each time I tried to answer with a shorter and clearer explanation.

    Google Slides

    I’ve prepared slides and exercises for 3 days and here I share the content (however, there were a lot of whiteboard explanations, so the slides may not be sufficient). I made the pages with Google Sites and the presentation with Google Slides. Both of them were, again, new things for me, out of my .ppt comfort zone. It went very well. For a proper presenter experience, I installed the “Google Slides Auto Resize Speaker Notes” Google Chrome extension. One thing annoys me with Google Slides: readers cannot copy/paste from the slides. Please comment here if you have a solution. My workaround was to copy the code to the presenter’s notes and tell the students to open them (with the ‘s’ key) and copy from there. But I don’t like duplicating the code.

    – Day 1 on Data Structures:
    Data Modeling, YAML, XML, JSON, CSV and introduction to relation tables.

    Exercise: load some OpenFlight data into MySQL

    – Day 2 on Introduction to Databases:
    RDBMS, create and load tables, normalization

    Exercise: a PHP page to query a simple table

    – Day 3 on Introduction to SQL:
    SELECT, INSERT, DELETE, UPDATE, ACID and transactions

    Exercise: a PHP page to list Flights from a multi-criteria form

    In addition to the course, I also did some coaching for their PHP exercises. I discovered this language (which I do not like – meaningless error messages, improbable implicit conversions,…). But at least we were able to make some concepts clearer: what a web server is, sessions, cookies, access to the database… And the method is important. How to approach code that doesn’t work, where nothing is displayed: change the connection parameters to wrong ones to see if we reach that part of the code, add an explicit syntax error in the SQL statement to see if errors are correctly trapped, echo some variables to see if they are set. Before learning magic IDEs, we must lay down the basics that will help everywhere. The main message is: you are never stuck with an error. There is always a possibility to trace more. And when you have all the details, you can focus your google search better.


    Big thanks to SQLFiddle, where it is easy to do some SQL without installing anything. However, with 20 people on a small Wi-Fi network, using local resources is preferable. And we installed MAMP (see here how I discovered it and had to fix a bug at the same time). Big thanks to Chris Saxon’s ‘Database for Developers’ videos, which will help the students review all the concepts in an entertaining way. Thanks to w3schools for the easy learning content.

    Oh, and thanks to facebook sponsoring-privacy-intrusive-algorithms! Because this is how I heard about PowerCoders. For the first time in my life, I clicked on a sponsored link on social media. This was for the WeMakeIt crowdfunding project for this powercoders curriculum in Lausanne. I read about the project, watched the video, and that’s how I decided to participate in this project. You should watch this Christian Hirsig TED talk as well. At a time when everybody is talking about autonomous self-driving cars, his accomplishment was to move from completely powerless to being back in the driver’s seat…

    And of course thanks to the Powercoders organizers, students, teachers, coaches, mentors and the companies who propose internships to complete the curriculum (I was happy, and proud of my employer, when dbi services was in immediately).
    Teaching motivated people who want to learn as much as possible is a great experience, and not all days in professional life are like this. And explaining topics that are outside my comfort zone is a lot of work, but also a rewarding experience. In this world where technology goes faster and faster, showing the approach and the method to adapt to new topics gives a lot of self-confidence.


    This article Introduction to databases for {Power.Coders} with MySQL appeared first on Blog dbi services.

Arrgs. My Bot Doesn't Understand Me! Why Intent Resolutions Sometimes Appear to Be Misbehaving

OTN TechBlog - Fri, 2018-06-22 10:17

Article by Grant Ronald, June 2018

One of the most common questions asked when someone starts building a real bot is “Why am I getting strange intent resolutions?”. For example, someone tests the bot with random key presses like “slkejfhlskjefhksljefh” and finds an 80% resolution for “CheckMyBalance”. The first reaction is to blame the intent resolution within the product. However, the reality is that you’ve not trained it to know any better. This short article gives a high-level conceptual explanation of how models do and don’t work.


Related Content

TechExchange - First Step in Training Your Bot

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

OTN TechBlog - Fri, 2018-06-22 09:05

Article by Frank Nimphius, Marcelo Jabali - June 2018

Chatbot support for multiple languages is a worldwide requirement. Almost every country has the need for supporting foreign languages, be it to support immigrants, refugees, tourists, or even employees crossing borders on a daily basis for their jobs.

According to the Linguistic Society of America1, as of 2009, 6,909 distinct languages were classified, a number that has grown since then. Although no bot needs to support all languages, you can tell that for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to implement support for.

This article explores Oracle's approach to multi language support in chatbots. It explains the tooling and practices for you to use and follow to build bots that understand and "speak" foreign languages.

Read the full article.


Related Content

TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components 

TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately

TechExchange - First Step in Training Your Bot

converting TIMESTAMP(6) to TIMESTAMP(0)

Tom Kyte - Fri, 2018-06-22 08:26
Currently I have a column with datatype TIMESTAMP(6) but now I have a requirement to change it to TIMESTAMP(0). Because we cannot decrease the precision, ORA-30082: datetime/interval column to be modified must be empty to decrease fractional sec...
Categories: DBA Blogs

Mail Restrictions using UTL_SMTP

Tom Kyte - Fri, 2018-06-22 08:26
Hi Tom, I have a requirement to send email only to particular domain mail ids. But my mail server is a global mail server: we can send mail to any mail ids. Are there any options in Oracle to restrict mail sending globally? For example: My mail host is...
Categories: DBA Blogs

Expanded Oracle Accelerator Gives Texas Startups a Boost

Oracle Press Releases - Fri, 2018-06-22 06:00
Press Release
Expanded Oracle Accelerator Gives Texas Startups a Boost New Austin-based program offers enterprise customer network, mentoring, resources and cloud technology, as well as Capital Factory collaboration, to help startups grow and compete globally

Redwood Shores, Calif.—Jun 22, 2018

Oracle today announced the opening of the Oracle Startup Cloud Accelerator in Austin, Texas, the global program’s first U.S. location and part of the Oracle Global Startup Ecosystem. The new accelerator provides statewide startups with access to a network of more than 430,000 Oracle customers, technical and business mentors, state-of-the-art technology, co-working space at Capital Factory, introductions to partners, talent, and investors, and free Oracle Cloud credits. In addition to local expertise, the program offers an ever-expanding global community of startup peers and program alumni.

Austin Startup Cloud Accelerator

The Oracle Startup Cloud Accelerator, which is open to early-stage technology and technology-enabled startups, is accepting applications through August 7. Startups will begin the six-month program in early September.

Oracle’s Austin Startup Cloud Accelerator is run by JD Weinstein. Weinstein, a former Principal at WPP Ventures and previously a Venture Associate at Capital Factory, brings a deep understanding of the local startup ecosystem and scaling startups through enterprise relationships.

“Austin and the State of Texas are thriving centers of innovation, and we are proud to dive in and support the startup community with cutting edge resources, including enterprise customer channels, hands-on experience with Oracle technical and product teams, mentoring from top business leaders, executives, and investors, as well as connections to thousands of entrepreneurs and corporate partners through our collaboration with Capital Factory,” said JD Weinstein, head of Oracle Startup Ecosystem in Austin.

Capital Factory & Austin Network

Oracle is working with Capital Factory to provide connections to the organization’s expansive network of local entrepreneurs, prominent CEOs, venture capitalists, corporations, and government officials. Startups in Oracle’s accelerator will also receive access to Capital Factory’s Mentor Network, free co-working space, and will benefit from the reach of the organization’s social media and event communities. Members of Oracle’s broader Global Startup Ecosystem will also benefit from the relationship with Capital Factory.

“We are excited that Oracle has invested in Austin as the first U.S. location of its global accelerator,” said Joshua Baer, founder and executive director, Capital Factory. “The combination of our mentor network and Oracle’s cloud platform and customer connections will provide startups a major advantage in growing their business.”

The Startup Cloud Accelerator is supported by Oracle’s rapidly growing presence in Austin. The company recently opened a state-of-the-art campus on Lady Bird Lake. Oracle’s expanding employee base and the new facility will provide additional resources and support for startups in the accelerator program.

Commitment to Global Startups

“Rooted in its own entrepreneurial beginnings, Oracle has long believed that startups are at the heart of innovation,” said Reggie Bradford, senior vice president, Oracle Startup Ecosystem and Accelerator. “The Austin accelerator is key to our mission of creating a global ecosystem of co-development and co-innovation where everyone—the startups, customers, and Oracle—can win.”

The Oracle Global Startup Ecosystem offers residential and nonresidential startup programs, plus a burgeoning higher education program, that power cloud-based technology innovation. The residential Oracle Startup Cloud Accelerator has locations in Austin, Bangalore, Bristol, Delhi–NCR, Mumbai, Paris, São Paulo, Singapore and Tel Aviv. Oracle Scaleup Ecosystem is the nonresidential, virtual-style program available for growing companies around the globe. Interested startups, venture capital firms and other organizations, regardless of their location, can apply for Oracle Scaleup Ecosystem here.

Contact Info
Julia Allyn
Oracle Corporate Communications
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julia Allyn

  • +1.650.607.1338

MySQL 8.0 – Roles are finally there

Yann Neuhaus - Fri, 2018-06-22 05:29

Roles have existed in many RDBMS for a long time now. Starting from version 8.0, this functionality is finally available in MySQL.
The most important advantage is that you define a role that includes a “set of permissions” only once, then assign it to each user, instead of wasting time declaring the permissions individually.

In MySQL, a role can be created like a user, but without the “identified by” clause and without the ability to log in:

mysqld2-(root@localhost) [(none)]> CREATE ROLE 'r_sakila_read';
Query OK, 0 rows affected (0.03 sec)
mysqld2-(root@localhost) [(none)]> select user,host,authentication_string from mysql.user;
| user             | host      | authentication_string                                                  |
| r_sakila_read    | %         |                                                                        |
| multi_admin      | localhost | $A$005$E?D/>efE+Rt12omzr.78VnfR3kxj8KLG.aP84gdPMxW7A/7uG3D80B          |
| mysql.infoschema | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| mysql.session    | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| mysql.sys        | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| root             | localhost | {u]E/m)qyn3YRk2u.JKdxj9/6Krd8uqNtHRzKA38cG5qyC3ts5                     |

After that you can grant some privileges to this role, as you usually do for users:

mysqld2-(root@localhost) [(none)]> grant select on sakila.* to 'r_sakila_read';
Query OK, 0 rows affected (0.10 sec)
mysqld2-(root@localhost) [(none)]> show grants for r_sakila_read;
| Grants for r_sakila_read@%                        |
| GRANT USAGE ON *.* TO `r_sakila_read`@`%`         |
| GRANT SELECT ON `sakila`.* TO `r_sakila_read`@`%` |
2 rows in set (0.00 sec)

Now you can create your user:

mysqld2-(root@localhost) [(none)]> create user 'u_sakila1'@localhost identified by 'qwepoi123098';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

And yes, check your password policy because, starting from version 8.0, the new validate_password component replaces the old validate_password plugin; it is now enabled by default and you don’t have to install it anymore.

mysqld2-(root@localhost) [(none)]> show variables like 'validate_password_%';
| Variable_name                        | Value  |
| validate_password_check_user_name    | ON     |
| validate_password_dictionary_file    |        |
| validate_password_length             | 8      |
| validate_password_mixed_case_count   | 1      |
| validate_password_number_count       | 1      |
| validate_password_policy             | MEDIUM |
| validate_password_special_char_count | 1      |
7 rows in set (0.01 sec)
mysqld2-(root@localhost) [(none)]> create user 'u_sakila1'@localhost identified by 'QwePoi123098!';
Query OK, 0 rows affected (0.08 sec)

In my example the default level for checking passwords is MEDIUM, which means “length; numeric, lowercase/uppercase, and special characters” (I will talk more about the validate_password component in an upcoming blog). Let’s go back to roles…

Grant the created role to your created user (as you usually grant a privilege):

mysqld2-(root@localhost) [(none)]> grant 'r_sakila_read' to 'u_sakila1'@localhost;
Query OK, 0 rows affected (0.01 sec)
mysqld2-(root@localhost) [(none)]> flush privileges;
Query OK, 0 rows affected (0.02 sec)

At this point, if you check the privileges of your user through a USING clause, you will get information about the granted roles and also the privileges associated with each role:

mysqld2-(root@localhost) [(none)]> show grants for 'u_sakila1'@localhost using 'r_sakila_read';
| Grants for u_sakila1@localhost                        |
| GRANT USAGE ON *.* TO `u_sakila1`@`localhost`         |
| GRANT SELECT ON `sakila`.* TO `u_sakila1`@`localhost` |
| GRANT `r_sakila_read`@`%` TO `u_sakila1`@`localhost`  |
3 rows in set (0.00 sec)

Now if you try to connect with your user and select data from the database on which you have the read privilege, you will discover that something is still missing:

mysqld2-(root@localhost) [(none)]>  system mysql -u u_sakila1 -p
mysqld2-(u_sakila1@localhost) [(none)]> use sakila;
ERROR 1044 (42000): Access denied for user 'u_sakila1'@'localhost' to database 'sakila'
mysqld2-(u_sakila1@localhost) [(none)]> SELECT CURRENT_ROLE();
| CURRENT_ROLE() |
| NONE           |
1 row in set (0.00 sec)

This is because you have to define which roles will be active when the user authenticates. You can do that by adding the “DEFAULT ROLE role” clause during the user creation (starting from version 8.0.3), or even later through the following statement:

mysqld2-(root@localhost) [(none)]> set default role r_sakila_read to 'u_sakila1'@localhost;
Query OK, 0 rows affected (0.08 sec)

Otherwise, starting from version 8.0.2, you can directly let the server activate all roles granted to each user by default at login, by setting the activate_all_roles_on_login variable to ON:

mysqld2-(root@localhost) [(none)]> show variables like '%activate%';
| Variable_name               | Value |
| activate_all_roles_on_login | OFF   |
1 row in set (0.00 sec)
mysqld2-(root@localhost) [(none)]> set global activate_all_roles_on_login=ON;
Query OK, 0 rows affected (0.00 sec)
mysqld2-(root@localhost) [(none)]> show variables like '%activate%';
| Variable_name               | Value |
| activate_all_roles_on_login | ON    |
1 row in set (0.01 sec)

So if you check again, everything works correctly:

mysqld2-(root@localhost) [mysql]> select * from role_edges;
| FROM_HOST | FROM_USER     | TO_HOST   | TO_USER   | WITH_ADMIN_OPTION |
| %         | r_sakila_read | localhost | u_sakila1 | N                 |
4 rows in set (0.00 sec)
mysqld2-(root@localhost) [(none)]>  system mysql -u u_sakila1 -p
mysqld2-(u_sakila1@localhost) [(none)]> use sakila
mysqld2-(u_sakila1@localhost) [sakila]> connect
Connection id:    29
Current database: sakila
mysqld2-(u_sakila1@localhost) [sakila]> select CURRENT_ROLE();
| CURRENT_ROLE()      |
| `r_sakila_read`@`%` |
1 row in set (0.00 sec)

Enjoy your roles now! ;)


This article MySQL 8.0 – Roles are finally there appeared first on Blog dbi services.

What’s new in EDB EFM 3.1?

Yann Neuhaus - Fri, 2018-06-22 04:24

At the beginning of this month EnterpriseDB announced a new version of its Failover Manager. Version 2.1 introduced controlled switchover operations, version 3.0 brought support for PostgreSQL 10, and now: what’s new in version 3.1? It might seem this is just a bugfix release, but there is more, especially one enhancement I have been waiting for for a long time.

As you might remember: when you stopped EFM (before version 3.1), the .nodes file was always empty again. What we usually did was create a backup of that file so we could just copy it back, but this was somewhat annoying. The current version comes with a new property in the efm.properties file to handle that better:

# When set to true, EFM will not rewrite the .nodes file whenever new nodes
# join or leave the cluster. This can help starting a cluster in the cases
# where it is expected for member addresses to be mostly static, and combined
# with 'auto.allow.hosts' makes startup easier when learning failover manager.

When set to “true” the file will not be touched when you stop/restart EFM on a node:

root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address. 
root@:/etc/edb/efm/ [] systemctl stop efm-3.1.service
root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address. 
root@:/etc/edb/efm/ [] systemctl start efm-3.1.service
root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address. 

A small but really nice improvement. At least with our deployments the number of cluster nodes is rather static, so this helps a lot. While this is a new property, another property is gone:

root@:/etc/edb/efm/ [] grep efm.license efm.properties

This means you no longer need a license key to test EFM for more than 60 days, which is great as well. Another small improvement is that you can now see on which node the VIP is currently running:

root@:/etc/edb/efm/ [] /usr/edb/efm/bin/efm cluster-status efm
Cluster Status: efm

	Agent Type  Address              Agent  DB       VIP
	Master        UP     UP*
	Standby        UP     UP

Allowed node host list:

Membership coordinator:

Standby priority host list:

Promote Status:

	DB Type     Address              XLog Loc         Info
	Master        0/40006F0        
	Standby        0/40006F0        

	Standby database(s) in sync with master. It is safe to promote.

When it comes to the VIP there is another enhancement which is controlled by new property:

root@:/etc/edb/efm/ [] grep virtualIp.single efm.properties | tail -1

When this is set to “true”, EFM will use the same address for the VIP on the new master after a failover. This was the default behavior before EFM 3.1. When you want to use another VIP on a new master, you can now do that by switching this to “false” and providing a different VIP in the properties file on each node.

Those are the important changes for me. The full list is in the documentation.


This article What’s new in EDB EFM 3.1? appeared first on Blog dbi services.

utl_dbws causes ORA-29532 and bad_record_mac

Yann Neuhaus - Fri, 2018-06-22 03:27

After installing the OJVM patch set update APR-2017 on a database with PSU APR-2017 installed, the first call of the utl_dbws package was successful, but after a while utl_dbws calls always failed with ORA-29532 and bad_record_mac. All Java objects remained valid.
Also after trying the procedures described in MOS document 2314363.1, utl_dbws worked the first time, but after that it always failed.
We observed that a while after restarting the database, the m000 process ran and tried to recompile Java classes. When we waited until m000 finished, utl_dbws always succeeded.
The m000 process start was caused by the parameter JAVA_JIT_ENABLED being set to TRUE.

When setting JAVA_JIT_ENABLED to false, utl_dbws always worked fine. Probably locking of the Java classes by the application prevented them from being recompiled properly.


This article utl_dbws causes ORA-29532 and bad_record_mac appeared first on Blog dbi services.

Oracle WebLogic 12.2.1.x Configuration Guide for Oracle Utilities available

Anthony Shorten - Thu, 2018-06-21 19:06

A new configuration guide whitepaper is now available for use with Oracle Utilities Application Framework based products that support Oracle WebLogic 12.2.1.x and above. The whitepaper walks through the setup of the domain using the Fusion Domain Templates instead of the templates supplied with the product. In future releases of Oracle Utilities Application Framework, the product-specific domain templates will not be supplied, as the Fusion Domain Templates take a more prominent role in deploying Oracle Utilities products.

The whitepaper covers the following topics:

  • Setting up the Domain for Oracle Utilities products
  • Additional Web Services configuration
  • Configuration of Global Flush functionality in Oracle WebLogic 12.2.1.x
  • Frequently asked installation questions

The whitepaper is available as Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) from My Oracle Support.

JavaScript - Method to Call Backend Logic in Sequential Loop

Andrejus Baranovski - Thu, 2018-06-21 15:54
When we call a backend REST service from JavaScript, the call is executed asynchronously by default. This means it will not wait until the response from the backend is received, but will continue executing code. This is the expected and desired functionality in most cases. But there might be a requirement to call the backend in a synchronized way. Example: calling a backend service multiple times in a loop, where the next call must be invoked only after the previous call is complete. With the default async functionality, the loop will complete before the REST call from the first iteration returns.

Here is an example of calling a backend REST service (through the Oracle JET API, using jQuery in the background). The call is made 3 times, with the success callback printing a message. One more message is printed at the end of each loop iteration:

Three backend REST calls are executed in the loop:

The loop completes earlier than the REST call from the first iteration, as we can see from the log:

It might be valid and expected behaviour in most cases. But depending on the backend logic, maybe you would like to guarantee that no call from the second iteration is invoked until the call from the first iteration completes. This can be achieved by declaring an async function and using a Promise inside the loop. We should use the await new Promise syntax and resolve the promise in the success callback by calling next():

With the promise applied, the loop is executed sequentially – the next loop iteration is started only after the backend service success callback is invoked. You can see it from the log:
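Since the original screenshots are not reproduced here, the following is a minimal, self-contained sketch of the same pattern. The callApi function is a hypothetical stand-in for the real Oracle JET/jQuery REST call; only the await new Promise structure reflects the technique from the post:

```javascript
// callApi is a hypothetical stand-in for the real backend REST call:
// it invokes its success callback asynchronously, like the JET/jQuery call.
function callApi(i, onSuccess) {
  setTimeout(() => onSuccess('response ' + i), 10);
}

const log = [];

async function invokeSequentially() {
  for (let i = 1; i <= 3; i++) {
    // Each iteration waits here until the success callback resolves the promise
    await new Promise((resolve) => {
      callApi(i, (result) => {
        log.push(result);
        resolve(); // plays the role of next() described above
      });
    });
    log.push('iteration ' + i + ' complete');
  }
}

const done = invokeSequentially();
done.then(() => console.log(log.join('\n')));
```

Without the await, the three "iteration … complete" lines would all be logged before any response arrived; with it, each response is logged before its iteration finishes.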

Source code is available on my GitHub repository.

Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7

Wim Coekaerts - Thu, 2018-06-21 10:08

Yesterday we released the 5th version of our "UEK" package for Oracle Linux 7 (UEKR5). This kernel version is based on a 4.14.x mainline Linux kernel. One of the nice things is that 4.14 is an upstream Long Term Stable kernel version, maintained by gregkh.

UEKR5 is a 64-bit only kernel. We released it on x86(-64) and ARM64 (aarch64) and it is supported starting with Oracle Linux 7.

Updating to UEKR5 is easy - just add the UEKR5 yum repo and update. We have some release notes posted here and a more detailed blog here.

There is a lot of new stuff in UEKR5... we also put a few extra tools in the yum repo that let you make use of these newer features where tool updates are needed: xfsprogs, btrfs-progs, ixpdimm libraries, pmemsdk, updated DTrace utils, updated bcache, updated iproute, etc.

For those that don't remember, we launched the first version of our kernel for Oracle Linux back in 2010 when we launched the 8 socket Exadata system. We have been releasing a new Linux kernel for Oracle Linux on a regular basis ever since. Every Exadata system, in fact every Oracle Engineered system that runs Linux uses Oracle Linux and uses one of the versions of UEK inside. So for customers, it's the most tested kernel out there, you can run the exact same OS software stack as we run, on our biggest and fastest database servers, on-premises or in the cloud, and in fact, run the exact same OS software stack as we run inside Oracle Cloud in general. That's pretty unique compared to other vendors where the underlying stack is a black box. Not here.

10/2010 - 2.6.32 [UEK]   OL5/OL6
03/2012 - 2.6.39 [UEKR2] OL5/OL6
10/2013 - 3.8    [UEKR3] OL6/OL7
01/2016 - 4.1    [UEKR4] OL6/OL7
06/2018 - 4.14   [UEKR5] OL7

The source code for UEKR5 (as has been the case since day 0) is fully available publicly; the entire git repo is there with the changelog, and all the patches are there with all their changelog history - not just some tar file with patch files on top of tar files to obfuscate things for some reason. It's all just -right there-. In fact we recently even moved our kernel git repo to GitHub.

Have at it.


Demo: GraphQL with node-oracledb

Christopher Jones - Thu, 2018-06-21 09:18

Some of our node-oracledb users recently commented they have moved from REST to GraphQL so I thought I'd take a look at what it is all about.

I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples.

In brief, GraphQL:

  • Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements.

  • Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints.

Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically it asks for the id, the title and the content of that blog to be returned:

{ blog(id: 2) { id title content } }

The response from the server would contain the three request fields, for example:

{ "data": { "blog": { "id": 2, "title": "Blog Title 2", "content": "This is blog 2" } } }

Compare that result with this query that does not ask for the title:

{ blog(id: 2) { id content } }

With the same data, this would give:

{ "data": { "blog": { "id": 2, "content": "This is blog 2" } } }

So, unlike REST, we can choose what data needs to be transferred. This makes clients more flexible to develop.

Let's looks at some code. I came across this nice intro blog post today which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend.

The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types:

type Blog { id: Int!, title: String!, content: String! }

The exclamation mark means a field is required.

The part of the GraphQL Schema to query a blog post by id is specified in the root type 'Query':

type Query { blog(id: Int): Blog }

This defines a capability to query a single blog post and return the Blog type we defined above.

We may also want to get all blog posts, so we add a "blogs" field to the Query type:

type Query {
  blog(id: Int): Blog,
  blogs: [Blog]
}

The square brackets indicates a list of Blogs is returned.

A query to get all blogs would be like:

{
  blogs {
    id
    title
    content
  }
}

You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint, and that endpoint will determine how to handle each. There is no need for multiple endpoints.
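To make the single-endpoint idea concrete, here is a hypothetical client-side sketch (mine, not from the original post) of how any operation travels as a JSON POST body to the same endpoint:

```javascript
// Hypothetical sketch: every GraphQL operation, query or mutation,
// is wrapped in the same JSON envelope and POSTed to one endpoint.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: query, variables: variables || {} })
  };
}

// Both of these target the same URL; only the payload differs.
const oneBlog  = buildGraphQLRequest('{ blog(id: 2) { id content } }');
const allBlogs = buildGraphQLRequest('{ blogs { id title content } }');
// e.g. fetch('http://localhost:3000/graphql', oneBlog)
```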

Manipulating data requires some 'mutations', typically making up the CUD of CRUD:

input BlogEntry {
  title: String!,
  content: String!
}

type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}

To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database.

In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.

Combined, we represent the schema in Node.js like:

const typeDefs = `
type Blog {
  id: Int!,
  title: String!,
  content: String!
}
type Query {
  blogs: [Blog],
  blog(id: Int): Blog
}
input BlogEntry {
  title: String!,
  content: String!
}
type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}`;

This is the contract, defining the data types and available operations.

In the backend, I decided to use Oracle Database 12c's JSON features. It almost goes without saying that using JSON gives developers the power to modify and improve the schema during the life of an application:

CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON));

INSERT INTO blogtable VALUES
  ('{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}');
INSERT INTO blogtable VALUES
  ('{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}');
COMMIT;

CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id);
CREATE SEQUENCE blog_seq START WITH 3;

Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but the queries can be rewritten for older versions.)
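As a sketch of such a rewrite (my own, not from the original post), the id lookup could use the JSON_VALUE() function instead of the dotted notation:

```
SELECT b.blog
FROM blogtable b
WHERE JSON_VALUE(b.blog, '$.id' RETURNING NUMBER) = :id
```

This assumes a release with the SQL/JSON functions available; check the JSON support in your specific Oracle Database version before relying on it.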

The Node.js ecosystem has some powerful modules for GraphQL. The package.json is:

{
  "name": "graphql-oracle",
  "version": "1.0.0",
  "description": "Basic demo of GraphQL with Oracle DB",
  "main": "graphql_oracle.js",
  "keywords": [],
  "author": "christopher.jones@oracle.com",
  "license": "MIT",
  "dependencies": {
    "oracledb": "^2.3.0",
    "express": "^4.16.3",
    "express-graphql": "^0.6.12",
    "graphql": "^0.13.2",
    "graphql-tools": "^3.0.2"
  }
}

If you want to see the full graphql_oracle.js file it is here.

Digging into it, the application has some 'Resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created:

const resolvers = {
  Query: {
    blogs(root, args, context, info) {
      return getAllBlogsHelper();
    },
    blog(root, {id}, context, info) {
      return getOneBlogHelper(id);
    }
  },
  [ . . . ]
};

To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app:

const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers});

app.use('/graphql', graphql({
  graphiql: true,
  schema
}));

app.listen(port, function() {
  console.log('Listening on http://localhost:' + port + '/graphql');
})

On the Oracle side, we want to use a connection pool, so the first thing the app does is start one:

await oracledb.createPool(dbConfig);
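The dbConfig object itself isn't shown above; a minimal sketch might look like the following, where the credentials and connect string are placeholders for a real environment (the pool parameter names follow the node-oracledb API):

```javascript
// Hypothetical connection-pool settings; user, password and
// connectString must be replaced with real values.
const dbConfig = {
  user: 'demo',                      // placeholder schema user
  password: 'demo',                  // placeholder password
  connectString: 'localhost/orclpdb',
  poolMin: 1,                        // keep at least one connection open
  poolMax: 4,                        // cap concurrent DB sessions
  poolIncrement: 1
};
// await oracledb.createPool(dbConfig);
```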

The helper functions can get a connection from the pool. For example, the helper to get one blog is:

async function getOneBlogHelper(id) {
  let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id';
  let binds = [id];
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds);
  await conn.close();
  return JSON.parse(result.rows[0][0]);
}

The JSON.parse() call nicely converts the JSON string that is stored in the database into the JavaScript object to be returned.
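To see that conversion in isolation (a standalone sketch, independent of the database):

```javascript
// The database hands back the blog document as a plain JSON string;
// JSON.parse() turns it into the object the resolver returns.
const storedRow = '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}';
const blogObject = JSON.parse(storedRow);
console.log(blogObject.title);  // Blog Title 2
```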

Starting the app and loading the endpoint in a browser gives the GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right-hand pane gives the API documentation.

To insert a new blog, the createBlog mutation can be used:
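The original post showed this as a screenshot; as a sketch, a createBlog call entered in the GraphiQL pane might look like this (the title and content are made-up values, and the returned id depends on the database sequence):

```
mutation {
  createBlog(input: {title: "New Blog", content: "A new post"}) {
    id
    title
    content
  }
}
```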

If you want to play around more, I've put the full set of demo-quality files for you to hack on here. You may want to look at the GraphQL introductory videos, such as this comparison with REST.

To finish, GraphQL has the concept of real-time updates with subscriptions, something that ties in well with the Continuous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it.

Oracle Introduces New Java SE Subscription Offering for Broader Enterprise Java Support

Oracle Press Releases - Thu, 2018-06-21 09:00
Press Release
Oracle Introduces New Java SE Subscription Offering for Broader Enterprise Java Support Java SE Subscription Provides Licensing and Support for Java SE on Servers, Desktops, and Cloud Deployments

Redwood Shores Calif—Jun 21, 2018

In order to further support the millions of worldwide businesses running Java in production, Oracle today announced Java SE Subscription, a new subscription model that covers all Java SE licensing and support needs. Java SE Subscription removes enterprise boardroom concerns around timely, mission-critical software performance, stability and security updates. Java SE Subscription complements Oracle’s long-standing and continued free Java SE releases and stewardship of the OpenJDK ecosystem, where Oracle now produces open source OpenJDK binaries for developers and organizations that do not need commercial support or enterprise management tools.

Java SE Subscription provides commercial licensing, including commercial features and tools such as the Java Advanced Management Console to identify, manage and tune Java SE desktop use across the enterprise. It also includes Oracle Premier Support for current and previous Java SE versions. For further details, please visit the FAQ list at: http://www.oracle.com/technetwork/java/javaseproducts/overview/javasesubscriptionfaq-4891443.html

“Companies want full flexibility over when and how they update their production applications,” said Georges Saab, VP of the Java Platform Group at Oracle. “Oracle is the world’s leader in providing both open source and commercially supported Java SE innovation, stability, performance and security updates for the Java Platform. Our long-standing investment in Java SE ensures customers get predictable and timely updates.”

“The subscription model for updates and support has been long established in the Linux ecosystem. Meanwhile, people are increasingly used to paying for services rather than products,” said James Governor, analyst and co-founder of RedMonk. “It’s natural for Oracle to offer a monthly Java SE subscription to suit service-based procurement models for enterprise customers.”

"At Gluon we are strong believers in commercial support offerings around open source software, as it enables organizations to continue to produce software, and the developer community to ensure that they have access to the source code," said Johan Vos, Co-founder and CTO of Gluon. "Today's announcement from Oracle ensures that those in the Java community who need an additional level of support can receive it, and ensures that Java developers can still leverage the open-source software for creating their software. The Java SE Subscription model from Oracle is complementary to how companies like Gluon tailor their solutions around Java SE, Java EE and JavaFX on mobile, embedded and desktop."

To learn more about Java SE Subscription, please visit https://www.oracle.com/java/java-se-subscription.html. Java is the world’s most popular programming language, with over 12 million developers running Java. Java is also the #1 developer choice for cloud, with over 21 billion cloud-connected Java virtual machines.

Contact Info
Alex Shapiro
+1 415-608-5044
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Alex Shapiro

  • +1 415-608-5044

[Video] Oracle Database on Amazon AWS Overview

Online Apps DBA - Thu, 2018-06-21 08:25

Let’s Get Ready To Move Oracle Database To Amazon AWS [VLOG] Oracle Database on Amazon AWS Overview The AWS IaaS platform offers a scalable, secure, and highly available infrastructure for Oracle platform solutions such as Database, Middleware, and applications. Visit the link https://k21academy.com/clouddba30 and find out Is it Worth Moving Toward Amazon AWS? Comment […]

The post [Video] Oracle Database on Amazon AWS Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Kscope18: It's a Wrap!

Rittman Mead Consulting - Thu, 2018-06-21 08:23

As announced a few weeks back, I represented Rittman Mead at ODTUG's Kscope18, hosted in the magnificent Walt Disney World Dolphin Resort. It's always hard to be credible when telling people you are going to Disneyworld for work, but Kscope is a must-go event if you are in the Oracle landscape.


In the Sunday symposium, Oracle PMs shared hints about the products' latest capabilities and roadmaps, followed by three full days of presentations spanning from the traditional Database, EPM and BI tracks to new entries like Blockchain. On top of this came the opportunity to be introduced to a network of Oracle experts, including Oracle ACEs and Directors, PMs, and people willing to share their experience with Oracle (and other) tools.

Sunday Symposium and Presentations

I attended the Oracle Analytics (BI and Essbase) Sunday Symposium run by Gabby Rubin and Matt Milella from Oracle. It was interesting to see the OAC product enhancements and roadmap as well as the feature catch-up in the latest release of OBIEE on-premises (version

As expected, most of the push is towards OAC (Oracle Analytics Cloud): all new features will be developed there and eventually (but no assurance on this) ported to the on-premises version. This makes a lot of sense from Oracle's point of view, since it lets them produce new features quickly: the features need to be tested only against a single set of hardware and software rather than the multitude they support on-premises.

Most of the enhancements are expected in the Mode 2/Self-Service BI area covered by Oracle Analytics Cloud Standard, since (a) this is the overall trend of the BI industry and (b) the features required by traditional dashboard-style reporting are already well covered by OBIEE.
The following are just a few of the items you can expect in future versions:

  • Recommendations during the data preparation phase, like GeoLocation and Date enrichments
  • Data Flow enhancements, like incremental updates or parameterized data flows
  • New visualizations and, in general, more control over the settings of individual charts

In general, Oracle's idea is to provide a single tool that meets the needs of both Mode 1 and Mode 2 Analytics (Centralized and Self-Service) rather than focusing on solving one need at a time like other vendors do.

Special mention goes to Oracle Autonomous Analytics Cloud, released a few weeks ago, which differs from traditional OAC in that backups, patching and service monitoring are now managed automatically by Oracle, freeing the customer from those tasks.

During the main conference days (Monday to Wednesday) I attended a lot of very insightful presentations and the Oracle ACE Briefing, which gave me ideas for future blog posts, so stay tuned! As written previously, I had two sessions accepted for Kscope18: "Visualizing Streams" and "DevOps and OBIEE: Do it Before it's Too Late". In the following paragraphs I'll share details of both, with links to the slides.

Visualizing Streams

One of the latest trends in the data and analytics space is the transition from old-style batch-based reporting systems, which by design add a delay between event creation and its appearance in reports, to the concept of streaming: ingesting and delivering event information and analytics as soon as the event is created.


The session explains how the analytics space has changed in recent times, providing details on how to set up a modern analytical platform that includes streaming technologies like Apache Kafka, SQL-based enrichment tools like Confluent's KSQL, and connections to Self-Service BI tools like Oracle's Data Visualization via SQL-on-Hadoop technologies like Apache Drill. The slides of the session are available here.

DevOps and OBIEE: Do it Before it's Too Late

In the second session, slides here, I initially went through the motivations for applying DevOps principles to OBIEE: the self-service BI wave started as a response to the long time-to-delivery associated with old-school centralized reporting projects. Huge monolithic sets of requirements to deliver, no easy way to provide development isolation, and manual testing and code promotion were only a few of the blockers to fast delivery.


After an initial analysis of the default OBIEE development methods, the presentation explains how to apply DevOps principles to an OBIEE (or OAC) environment, specifically:

  • Code versioning techniques
  • Feature-driven environment creation
  • Automated promotion
  • Automated regression testing

It also provides details on how the Rittman Mead BI Developer Toolkit, partially described here, can act as an accelerator for adopting these practices in any custom OBIEE implementation and delivery process.

As mentioned before, the overall Kscope experience is great: plenty of technical presentations, roadmap information, networking opportunities, and also much fun! Looking forward to Kscope19 in Seattle!

Categories: BI & Warehousing

[Video] Oracle Identity and Access Management: Oracle Identity Federation

Online Apps DBA - Thu, 2018-06-21 08:18

OIF is an authentication process across domains. Interested To know What is Oracle Identity Federation (OIF) [VLOG] Oracle Identity and Access Management: Oracle Identity Federation Visit the link: http://k21academy.com/oam24 &  learn about different Protocols supported by Federation like: ✔ SAML V1 and V2: SAML (Security Assertion Markup Language) ✔ Liberty ✔ OpenID ✔ OAuth Comment down […]

The post [Video] Oracle Identity and Access Management: Oracle Identity Federation appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Intercollegiate Tennis Association and Oracle Announce Multi-Year Extension

Oracle Press Releases - Thu, 2018-06-21 07:00
Press Release
Intercollegiate Tennis Association and Oracle Announce Multi-Year Extension

TEMPE, Ariz. and Redwood Shores, Calif.—Jun 21, 2018

The Intercollegiate Tennis Association and Oracle are excited to announce a multi-year extension to their alliance, as Oracle continues to strengthen its ongoing commitment to collegiate tennis.

The Oracle ITA alliance includes Oracle’s ongoing sponsorship of the Oracle ITA Collegiate Tennis Rankings, the Oracle ITA Masters and Oracle ITA National Fall Championships, while adding title sponsorships to the ITA Summer Circuit (now branded as the Oracle ITA Summer Circuit Powered By UTR) and the Division I and Division III National Team Indoor Championships.

“Our partnership with ITA has been a great success to date, and we’re eager to keep expanding the game,” said Oracle CEO Mark Hurd. “We want to ensure that young players understand that collegiate tennis offers terrific opportunities to improve their games, play in great venues in a team environment, all while getting an education that will serve them well for the rest of their lives.”

ITA CEO Timothy Russell added, “The ITA is thrilled to be continuing our wonderful working relationship with Oracle; an incredibly innovative company with an astonishing forward-thinking CEO. Both parties are committed to positively shaping the future of college tennis. Oracle’s attention to creating events of high distinction, in which the best players in college want to participate and fans want to watch, either in person or from the comfort of their own home via television and live streaming, is elevating our game.”

The newly-christened Oracle ITA Summer Circuit Powered by UTR will serve as a model for level-based play in the nearly 50 tournaments contested during the Summer Circuit’s six-week duration. The Oracle ITA Summer Circuit Powered by UTR, which began in 1993, provides college tennis players, along with junior players, alumni and young aspiring professionals, the opportunity to compete in organized events during the summer months. For the third consecutive year, the Oracle ITA Summer Circuit Powered by UTR will feature nearly 50 tournaments across 23 different states, during a six-week stretch from late June to the end of July. The circuit will culminate at the ITA National Summer Championships, hosted by TCU from August 10-14, which will feature prize money for the first time.

“The ITA Summer Circuit is yet another great opportunity to influence the quality of American tennis and Oracle is excited to play a part in it,” said Hurd. “The summer circuit is the ideal opportunity for all players, from collegians to juniors, to play competitively year-round.”

Oracle will now have an expanded presence in the dual-match portion of the college tennis schedule by becoming the title sponsor of all four National Team Indoor Championships. Contested during the months of February and March, the Oracle ITA National Team Indoor Championships feature 16 of the nation’s top men’s and women’s teams from Division I, and eight highly-ranked men’s and women’s Division III teams vying for a national indoor title.

“We are excited that Oracle will serve as the title sponsor for the National Team Indoor Championships,” said Russell. “The National Team Indoor Championships feature elite fields and stand as a good season-opening barometer for how the dual-match season will play out.”

Serving as the culmination to the fall season, the Oracle ITA National Fall Championships will take place November 1-5, 2018, at the Surprise Tennis & Racquet Complex in Surprise, Arizona, which recently hosted the 2018 NCAA Division II National Championships and previously hosted the 2016 ITA Small College Championships.

The Oracle ITA National Fall Championships features 128 of the nation’s top collegiate singles players (64 men and 64 women) and 64 doubles teams (32 men’s teams and 32 women’s teams). In its second year, having replaced the ITA National Indoor Intercollegiate Championships, it is the lone event on the collegiate tennis calendar to feature competitors from all five divisions playing in the same tournament.

Created in 2015, the Oracle ITA Masters has established itself as one of the premier events of the collegiate tennis season. The Oracle ITA Masters features singles draws of 32 for men and women, and a mixed doubles event with a 32-draw. Players are chosen based upon conference representation, similar to the NCAA Tournament.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
Dan Johnson
ITA Marketing and Communications
About the ITA

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees women’s and men’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men's and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • 212-508-7935

Dan Johnson

  • 303-579-4878

