Andrejus Baranovski

Blog about Oracle technology

ADF BC REST Query and SQL Nesting Control Solution

Thu, 2018-08-16 15:04
I will talk about an expert mode View Object (one with hand-written SQL), created based on a SQL join - that's my use case for today's example. I will describe an issue related to the generated SQL statement and give a hint how to solve it. This is particularly useful if you want to expose a complex VO (SQL with joins and calculated totals) over an ADF BC REST service and then run queries against that REST resource.

Code is available on my GitHub repository.

Here is the SQL join and the expert mode VO (the one where you can modify the SQL by hand):


This VO is exposed through ADF BC REST. I will not go through those details here; you can find more info about it online. Once the application is running, the REST resource is accessible through GET. ADF BC REST syntax allows passing a query string along with the REST request; here I'm filtering by StreetAddress='ABC' (see the sketch below):
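For illustration, something like this could run that query from JavaScript - a rough sketch, where the host, context root and resource name are placeholders, not the actual endpoint from this sample:

// Sketch: query the ADF BC REST resource with the q parameter
const url = 'http://localhost:7101/restapp/rest/1.0/Employees'
  + '?q=' + encodeURIComponent("StreetAddress='ABC'");

fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log('Filtered rows:', data.items); })
  .catch(function (error) { console.error('REST call failed:', error); });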


On the backend this works OK by default and generates a nested query (this is the expected behaviour for expert mode VOs - all additional criteria clauses are added through SQL wrapping). While such a query executes just fine, this is not what we want in some use cases. If we calculate totals or average aggregated values in SQL, we don't want it to be wrapped:


To prevent SQL wrapping, we can call an ADF BC API method in the VO constructor:


While this probably works with regular ADF BC, it doesn't work with criteria coming from ADF BC REST. After query nesting is disabled, the SQL query is generated with two WHERE clauses:


My proposed solution - override the executeQueryForCollection method, do some parsing to change the second WHERE into AND, apply the changed query string and then call super:


This trick helps and the query is generated as we would expect - criteria added from the ADF BC REST query call are appended at the end of the WHERE clause:

Flow Navigation Menu Control in Oracle VBCS

Sun, 2018-08-12 14:33
Oracle VBCS allows us to build multiple flows within an application. This is great - it helps to split application logic into smaller modules. However, VBCS doesn't offer (in the current version) declarative support to build a menu structure for navigating between the flows. Luckily this requirement can be achieved in a few simple steps; please read John Ceccarelli's post - Adding a Navigation Bar to a VBCS Application. I thought to go through the instructions listed by John and test them out; today's post is based on this. In my next posts I will take a look at how to replace the navigation bar menu structure with something more advanced, for example a menu slider on the left.

I think VBCS has great potential as a declarative JavaScript development IDE. Many concepts are similar to other Oracle declarative development tools, e.g. Forms and Oracle ADF. VBCS runs Oracle JET - everything you build in VBCS is Oracle JET. Oracle takes care of upgrading the Oracle JET version in VBCS; I have applied a recent patch (with the click of a button) and the latest JET version is available within our VBCS environment:


Coming back to flows in VBCS - we can create as many flows as we want. Each flow can be based on one or multiple fragments (HTML/JS modules). Here I have created three flows, each with a single fragment:


We can select a flow and this will bring us to the flow diagram, where navigation between flow elements/fragments can be implemented:


Fragment - this is where the UI part is implemented:


So that's it about flows and fragments. For someone with an ADF background, this sounds very similar to task flows and fragments. Next we should see how to implement flow navigation, to be able to select a flow from the top menu. A VBCS application comes with a so-called shell page. This page is the top UI wrapper, which contains the application name, logged-in user info, etc. Here we can implement a top-level menu, which navigates through the application flows:


There must be a default flow, which is displayed once the application is loaded. The default flow is set in the settings of the shell page. Go to settings and choose the default flow - dashboard-flow in my case:


Next we need to add a JET component - a navigation list - to the shell page, to render the menu UI. You can do it by drag and drop, but it is easier to switch the shell page to source view and add the navigation list HTML portion manually (you can copy-paste it from the source code uploaded to GitHub, see the link at the end of this post) - the highlighted HTML will render the menu bar to navigate between flows:


Initially you will notice an error related to the JET navigation list not being recognised - we need to import it. Another error - the selection listener is not found - we will implement it.

To import the JET navigation list component, go to the source implementation of the shell page and add oj-navigation-list in the component imports section - this solves the issue with the unknown navigation list entry:


To execute an action in VBCS, we must create an Action Chain. Create an Action Chain within the shell page - navigateToPage:


We need an input parameter - the flow name we want to navigate to. Create a variable in the Action Chain - currentFlow:


Add an action of type Navigate to the Action Chain - this will trigger the navigation logic:


Go to the Action Chain source and add "page": "{{$variables.currentFlow}}" under actions. This forces navigation to the flow which is passed through the parameter (a sketch of the resulting structure follows):
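A hedged sketch of how the action chain metadata could look after this edit - the module path and overall structure are my assumptions, only the "page" parameter is taken from the step above:

{
  "variables": {
    "currentFlow": {
      "type": "string",
      "input": "fromCaller"
    }
  },
  "root": "navigateToFlow",
  "actions": {
    "navigateToFlow": {
      "module": "vb/action/builtin/navigateToPageAction",
      "parameters": {
        "page": "{{ $variables.currentFlow }}"
      }
    }
  }
}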


Finally we create a navigation list selection event (within the shell page); this event will trigger the action chain created above and pass the current flow ID. We must create a custom event and its name should match the event name defined in the JET navigation list HTML (see above):


Choose to create a custom event (it didn't work for me in Chrome, only in the Safari browser - a VBCS bug?) and provide the same name as in the navigation list component listener:


Choose our navigation Action Chain to be triggered from this event:


Just a reminder - the event is called from the navigation list selection:


The event passes the flow ID from the currently selected tab item:


At runtime, the dashboard flow is loaded by default:


We can switch to Jobs, etc.:


Download the exported VBCS app (runnable only in VBCS) from the GitHub repo.

Oracle Offline Persistence Toolkit - Controlling Online Replay

Thu, 2018-08-09 13:25
A few months ago I wrote a post about the Oracle Offline Persistence Toolkit, which integrates well with Oracle JET (the JavaScript toolkit from Oracle) - Oracle JET Offline Persistence Toolkit - Offline Update Handling. I'm back to this topic with the sample application upgraded to JET 5.1 and the offline toolkit upgraded to 1.1.5. In this post I will describe how to control online replay by filtering out some of the requests, so they are excluded from replay.

Source code is available on GitHub. Below I describe changes and functionality in the latest commit.

To test online replay, go offline and execute some actions in the sample app - change a few records and try to search by first name, also try the page navigation buttons. You will be able to save changes in offline mode, but if this is your first time loading the app and data from other pages wasn't fetched yet, page navigation will not bring any new results in offline mode (make sure to load more records while online and then go offline):


In the online replay manager, I'm filtering out GET requests intentionally. Once going online, I replay only PATCH requests. This is done mainly as a test, to learn how to control the replay process. PATCH requests are executed during replay:


Each GET request removed from the replay loop is printed in the log:


Replay implementation (I would recommend reading the Offline Persistence Toolkit usage doc for more info):


This code is executed after the transition to online status. Calling the getSyncLog method on the Sync Manager returns a list of requests pending replay - the promise resolves with an array of requests waiting for online replay. I have marked the function as async; this allows implementing a sequential loop, where each GET request is removed one by one, in order. This is needed because removeRequest from the Sync Manager executes in a promise, and a plain loop would complete too late - after we pass the replay execution phase. Read more about sequential loop implementation in JS when promises are used - JavaScript - Method to Call Backend Logic in Sequential Loop. Once all GET requests are removed, we execute the sync method, which forces all remaining requests in the queue to be replayed. A minimal sketch follows:
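A minimal sketch of this logic, assuming persistenceManager is the toolkit's persist/persistenceManager module configured elsewhere in the app:

// Replay only non-GET requests when back online
async function synchOfflineChanges() {
  var syncManager = persistenceManager.getSyncManager();
  var requests = await syncManager.getSyncLog();
  // Sequential loop - each removeRequest completes before the next starts
  for (var i = 0; i < requests.length; i++) {
    if (requests[i].request.method === 'GET') {
      console.log('Removing GET request from replay:', requests[i].request.url);
      await syncManager.removeRequest(requests[i].requestId);
    }
  }
  // Force replay of the remaining (PATCH) requests
  await syncManager.sync();
}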

Data Conflict Solution for ADF BC REST with Versioning

Mon, 2018-08-06 10:06
I would like to share a sample solution for data conflict processing in ADF BC REST using versioning. When multiple users are editing the same data row concurrently, it is important to inform the user before overriding changes already committed by another user. There are other approaches to implement data conflict control; you should evaluate whether the solution explained below is suitable for your use case before applying it.

Sample code can be obtained from GitHub repository.

I'm using a custom change indicator property to evaluate if client data is expired. The change indicator value is sent to the client together with the requested data. A PATCH request must include the current client-side change indicator value: if the change indicator matches the value in the backend, the PATCH is allowed; otherwise a new change indicator is returned to the client and the response is marked with a 409 Conflict status code. Based on this, the client can decide either to resubmit the PATCH request with the new change indicator and overwrite the current data in the DB, or to refresh the client-side data and try to submit the changes later. A client-side sketch of this flow is shown below.
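A hedged client-side sketch of this flow - the URL, attribute name and media type are assumptions for illustration; in this sample the change indicator travels as a row attribute:

// PATCH with change indicator, detect 409 Conflict
async function patchRow(url, payload, changeIndicator) {
  const response = await fetch(url, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/vnd.oracle.adf.resourceitem+json' },
    body: JSON.stringify(Object.assign({}, payload, { ChangeIndicator: changeIndicator }))
  });
  const body = await response.json();
  if (response.status === 409) {
    // Conflict - caller decides: overwrite with the new indicator or refresh first
    return { conflict: true, changeIndicator: body.ChangeIndicator };
  }
  return { conflict: false, changeIndicator: body.ChangeIndicator };
}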

In this example, PATCH was executed with a valid change indicator and the response status is 200 OK. A new change indicator value is returned to the client (it should be submitted with the next PATCH call for the current row):


To test a data change conflict, I go directly to the DB and change the same record. The change indicator is updated too:


The client doesn't know about the change indicator update (the data was changed by another user). The client includes the currently known change indicator value and executes PATCH. This results in a 409 Conflict status. The backend returns the latest change indicator value in the response:


The data wasn't updated - the PATCH request was stopped on the backend:


The client now knows the latest change indicator value and can submit again - this time successfully (no one else changed the data in the meantime):


Status 200 OK is returned, along with a new change indicator value. The data is changed in the DB as expected:


The backend implementation is not complex. You need a DB trigger, which gets a value from a DB sequence and assigns it to each changed row:


The ADF BC REST VO includes the change indicator attribute, marked with Refresh on Update support. This allows getting the latest value assigned by the DB trigger and returning it to the client:


In the doDML method we compare the change indicator attribute value currently stored in the DB with the one which comes from the client. If the values do not match (the client doesn't have the latest value), the update is not allowed:


When the update is not allowed, we must also change the HTTP response code to 409 Conflict. This allows executing an error callback on the client side and taking the required action to process the data conflict on the client. The HTTP response code is set from a custom ADF BC REST filter:

Text Classification with Deep Neural Network in TensorFlow - Simple Explanation

Mon, 2018-07-30 13:05
Text classification implementation with TensorFlow can be simple. One of the areas where text classification can be applied is chatbot text processing and intent resolution. In this post I will describe step by step how to build a TensorFlow model for text classification and how classification is done. Please refer to my previous post on a similar topic - Contextual Chatbot with TensorFlow, Node.js and Oracle JET - Steps How to Install and Get It Working. I would also recommend going through this great post about chatbot implementation - Contextual Chatbots with Tensorflow.

Complete source code is available in the GitHub repo (refer to the steps described in the blog referenced above).

Text classification implementation:

Step 1: Preparing Data
  • Tokenise patterns into an array of words
  • Lower-case and stem all words. Example: Pharmacy = pharm. This attempts to represent related words with one token
  • Create a list of classes - intents
  • Create a list of documents - combinations between the list of patterns and the list of intents
Python implementation:


Step 2: Preparing TensorFlow Input
  • [X: [0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, ...N], Y: [0, 0, 1, 0, 0, 0, ...M]]
  • [X: [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, ...N], Y: [0, 0, 0, 1, 0, 0, ...M]]
  • Array representing the pattern with 0/1. N = vocabulary size. 1 when the word position in the vocabulary matches a word from the pattern
  • Array representing the intent with 0/1. M = number of intents. 1 when the intent position in the list of intents/classes matches the current intent
Python implementation:


Step 3: Training Neural Network
  • Use tflearn - a deep learning library featuring a higher-level API for TensorFlow
  • Define X input shape - equal to the word vocabulary size
  • Define two layers with 8 hidden neurons - optimal for this text classification task (based on experiments)
  • Define Y input shape - equal to the number of intents
  • Apply regression to find the best equation parameters
  • Define the Deep Neural Network (DNN) model
  • Run model.fit to construct the classification model. Provide X/Y inputs, number of epochs and batch size
  • In each epoch, multiple operations are executed to find optimal model parameters to classify future input converted to a 0/1 array
  • Batch size
    • A smaller batch size requires less memory. This is especially important for datasets with a large vocabulary
    • Networks typically train faster with smaller batches, as weights and network parameters are updated after each propagation
    • The smaller the batch, the less accurate the estimate of the gradient can be
Python implementation:


Step 4: Initial Model Testing
  • Tokenise the input sentence - split it into an array of words
  • Create a bag of words (array with 0/1) for the input sentence - an array equal to the size of the vocabulary, with 1 for each word found in the input sentence
  • Run model.predict with the given bag-of-words array; this returns a probability for each intent
Python implementation:


Step 5: Reuse Trained Model
  • For better reusability, it is recommended to create a separate TensorFlow notebook to handle classification requests
  • We can reuse the previously created DNN model by loading it with TensorFlow pickle
Python implementation:


Step 6: Text Classification
  • Define a REST interface, so that the function is accessible outside TensorFlow (a JavaScript client sketch is shown below)
  • Convert the incoming sentence into a bag-of-words array and run model.predict
  • Consider results with probability higher than 0.25, to filter out noise
  • Return multiple identified intents (if any), together with the assigned probability
Python implementation:
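Separately from the Python side, here is a hedged sketch of how an external JavaScript client could call this REST interface - the endpoint path, port and response shape are assumptions:

fetch('http://localhost:5000/classify', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ sentence: 'Which pharmacy is open now?' })
})
  .then(function (response) { return response.json(); })
  .then(function (intents) {
    // Expect only intents above the 0.25 probability noise threshold
    console.log('Classified intents:', intents);
  });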

Oracle VBCS - Pay As You Go Cloud Model Experience Explained

Thu, 2018-07-19 14:03
If you are considering the VBCS cloud service from Oracle, maybe this post will be useful. I will share my experience with the Pay As You Go model.

Two payment models are available:

1. Pay As You Go - good when accessing VBCS from time to time. Can be terminated at any time
2. Monthly Flex - good when you need to run VBCS 24/7. Requires commitment and can't be terminated at any time

When you create an Oracle Cloud account, you initially get a 30-day free trial period. At the end of that period (or earlier), you can upgrade to a billable plan. To upgrade, go to account management and choose to upgrade the promotional offer - you will be given the choice to go with Pay As You Go or Monthly Flex:


As soon as you upgrade to Pay As You Go, you will start seeing the monthly usage amount in the dashboard. It also shows hourly usage of the VBCS instance, for which you will be billed:


Click on the monthly usage amount and you will see a detailed view per service. When the VBCS instance is stopped (in the case of Pay As You Go), you are billed only for hardware storage (Compute Classic) - a relatively small amount:


There are two options for creating a VBCS instance - either autonomous VBCS or customer-managed VBCS. To be able to stop/start the VBCS instance and avoid billing when the instance is not used (in the case of Pay As You Go), make sure to go with customer-managed VBCS. In this example, the VBCS instance was used for only 1 hour and then stopped; it can be started again at any time:


To manage the VBCS instance, navigate to the Oracle Cloud Stack UI. From here you can start/stop both DB and VBCS in a single action. It is not enough to stop VBCS - make sure to stop the DB too, if you are not using it:

ADF Postback Payload Size Optimization

Sun, 2018-07-15 01:45
Recently I came across a property called oracle.adf.view.rich.POSTBACK_PAYLOAD_TYPE. This property helps to optimize postback payload size. It is described in the ADF Faces configuration section - A.2.3.16 Postback Payload Size Optimization. An ADF partial request executes an HTTP POST with values from all fields included. When the postback property is set to dirty, only changed values are included in the HTTP POST. As a result, the server gets only changed attributes; potentially this can reduce server processing time and make the HTTP request smaller. This can be especially important for large forms with many fields.

Let's take a look at an example. After clicking any button in the form, go to the network monitor and study the Form Data section. You will see IDs and values for all fields included in the UI. By default, all fields are submitted with the HTTP request, even if they were not changed:


The postback optimization property can be set in web.xml. By default its value is full; change it to dirty (a sketch of the entry is shown below):
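The entry itself is a standard context parameter - a sketch of the web.xml change:

<!-- Submit only changed values on ADF postback -->
<context-param>
  <param-name>oracle.adf.view.rich.POSTBACK_PAYLOAD_TYPE</param-name>
  <param-value>dirty</param-value>
</context-param>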


With the value set to dirty, try to change at least one field and then press any button. Observe the Form Data section in the network monitor - only fields with changed values are submitted:


Try it in your project and see the difference.

Check my sample app for this use case on GitHub.

Contextual Chatbot with TensorFlow, Node.js and Oracle JET - Steps How to Install and Get It Working

Tue, 2018-07-10 12:15
A blog reader asked for a list of steps to guide through the install and run process for the chatbot solution with TensorFlow, Node.js and Oracle JET.

Resources:

1. Chatbot UI and context handling backend implementation - Machine Learning Applied - TensorFlow Chatbot UI with Oracle JET Custom Component

2. Classification implementation - Classification - Machine Learning Chatbot with TensorFlow

3. TensorFlow installation - TensorFlow - Getting Started with Docker Container and Jupyter Notebook

4. Source code - GitHub

Install and run steps:

1. Download source code from GitHub repository:


2. Install TensorFlow and configure Flask (TensorFlow Linear Regression Model Access with Custom REST API using Flask)

3. Upload intents.json file to TensorFlow root folder:


4. Upload both TensorFlow notebooks:


5. Open and execute (click Run for each section, step by step) the model notebook:


6. Repeat the training step a few times, to get minimum loss:


7. Open and execute the response notebook:


8. Make sure the REST interface is running; see the message below:


9. Test classification from an external REST client:


10. Go to the socketioserver folder and run (install Node.js first) the npm install express --save and npm install socket.io --save commands:


11. Run npm start to start the Node.js backend:


12. Go to the socketiojet folder and run (install Oracle JET first) ojet restore:


13. Run ojet serve to start the chatbot UI. Type questions at the chatbot prompt:

JavaScript - Method to Call Backend Logic in Sequential Loop

Thu, 2018-06-21 15:54
When we call a backend REST service from JavaScript, by default the call is executed asynchronously. This means it will not wait until the response from the backend is received, but will continue executing code. This is expected and desired functionality in most cases. But there might be a requirement where you want to call the backend in a synchronized way. Example - calling a backend service multiple times in a loop, where the next call must be invoked only after the previous call is complete. With the default async functionality, the loop will complete before the first REST call returns.

Here is an example of calling a backend REST service (through the Oracle JET API, using jQuery in the background). The call is made 3 times, with a success callback printing a message. One more message is printed at the end of each loop iteration:


Three backend REST calls are executed in the loop:


The loop completes earlier than the REST call from the first iteration; we can see it in the log:


This might be valid and expected behaviour in most cases. But depending on the backend logic, you might want to guarantee that no call from the second iteration is invoked until the first iteration's call is complete. This can be achieved by marking the function async and using a Promise inside the loop. We should use the await new Promise syntax and resolve it in the success callback by calling next() - see the sketch below:
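A minimal sketch of the pattern - callBackendService is a placeholder for the actual JET/jQuery call used in the sample:

async function invokeBackendSequentially(items) {
  for (const item of items) {
    await new Promise((next) => {
      callBackendService(item, {
        success: (result) => {
          console.log('Success callback for', item, result);
          next(); // resolve the promise - the loop moves to the next iteration
        }
      });
    });
    console.log('Loop iteration complete for', item);
  }
}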


With the promise applied, the loop executes sequentially - the next loop iteration starts only after the backend service success callback is invoked. You can see it in the log:


Source code is available on my GitHub repository.

Custom JavaScript Client Code in Oracle Visual Builder

Mon, 2018-06-18 16:17
Hey, this is my first post about VBCS - you should expect more posts about this topic in the future. Red Samurai decided to choose VBCS as our primary JavaScript development IDE in the cloud. We are going to use it for declarative JS development, similar to how we use JDeveloper for ADF.

I was going through the custom JS client code functionality in VBCS and thought it would be a good idea to describe how it works. There is good material available on the same topic from Oracle; I recommend going through it - Variables, Modules, and Functions, OH MY! Custom Client Code in Visual Builder.

I have created a simple UI with one input and one output field. The button calls a custom JS method, where the value from the input field is processed and returned to be displayed in the disabled field:


Below I describe how all the parts are wired together. Across different parts of VBCS there is a lot of resemblance to the way ADF development is done - this helps to reuse ADF skills for VBCS.

VBCS allows defining variables at 3 levels:

1. Page - page scope
2. Flow - flow scope
3. Application - application scope

In my example I decided to go with page-scope variables (defined in the page called main-start) - the first one is assigned to the input field and the second to the output:


There is a property inspector, which allows assigning expressions to UI fields. Below you can see the first variable assigned to the input field:


The second variable is assigned to the output field:


The button is assigned an action chain call - in VBCS we can call action chains. In ADF we would call an action listener and code the Java logic in the method; here the action chain gives more flexibility, as you will see below in the action chain implementation:


VBCS allows switching to code view to check the HTML structure built with JET components. This is useful when you want to adjust the generated code yourself or copy the layout to an external JET project:


There is a JS tab associated with each VBCS page. There we can find the JS file where custom code can be included. I have created a basic custom function, just for test purposes (a sketch of its shape follows):
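A sketch of what such a function can look like in the page JS tab - the function name and logic are test placeholders; the PageModule wrapper is what VBCS generates:

define([], function () {
  'use strict';
  var PageModule = function PageModule() {};

  // Hypothetical test function - processes the input field value
  PageModule.prototype.processValue = function (inputValue) {
    return 'Processed: ' + inputValue;
  };

  return PageModule;
});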


The VBCS JS code editor offers extensive auto-suggest functionality - a great help during development:


In case of syntax issues, errors are reported in the audit window:


There is a separate tab for action chains; I already have one - called from the button (see above):


Action chain editor view - along with the diagram, we have various components available. This looks slightly similar to the SOA/BPM extension in JDeveloper, doesn't it? In this action chain, first of all we call a custom action - the custom JS method defined above:


The input parameter for the JS call is assigned from the page variable (input component):


In the next step, assign variables logic is called - this helps to assign the function return value to the page variable which is mapped to the output UI field:


Function return value mapping with the page variable:


The application can be tested with a single click; our message is printed in the log:


I have exported the VBCS application and uploaded it to a GitHub repository. Once you export from VBCS, you can access and check the generated code. Here is the main page code:


In main-start-page.json we can see the metadata definition. For example, there we can find the button event mapping to the action chain:


VBCS looks very promising to me and I think this might be the future of declarative JS development.

CDN Support in Oracle JET

Sun, 2018-06-17 02:12
With the recent releases of Oracle JET, CDN support in your app can be enabled easily. By default, a JET app is set to download all JET toolkit scripts and static files from the same host where the application is hosted. You can track this easily through the network monitor - you should see files such as ojknockout.js fetched from the same host:


CDN can be enabled by changing the use property from local to cdn in path_mapping.json (a sketch of the relevant fragment follows) and restarting the app:
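A hedged sketch of the relevant path_mapping.json fragment - the cdns block below is an assumption based on what the JET tooling generates; only the use property change comes from this post:

{
  "use": "cdn",
  "cdns": {
    "jet": {
      "prefix": "https://static.oracle.com/cdn/jet/v5.1.0/default/js",
      "css": "https://static.oracle.com/cdn/jet/v5.1.0/default/css",
      "config": "bundles-config.js"
    }
  }
}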


After this change, you should see all JET toolkit content downloaded from the static.oracle.com host:


Benefit - you reduce the load on your host, from which only application-specific files are downloaded, while JET toolkit code is downloaded from the external Oracle host. The same is achievable on your own host, but JET toolkit content downloaded from the Oracle host is compressed out of the box (another benefit):

Machine Learning Applied - TensorFlow Chatbot UI with Oracle JET Custom Component

Mon, 2018-06-11 16:22
This post is based on my Oracle Code 2018 Shenzhen, Warsaw and Berlin talks. View the presentation on SlideShare:


In my previous post I outlined how to build the chatbot backend with TensorFlow - Classification - Machine Learning Chatbot with TensorFlow. Today's post is the next step - I will explain how to build a custom UI on top of the TensorFlow chatbot with Oracle JET.

You can download the complete source code (which includes the TensorFlow part, the backend for chatbot context processing and the JET custom component chatbot UI) from my GitHub repository.

Here is a snapshot of the solution architecture:


TensorFlow is used for the machine learning and text classification task. Flask allows communicating with TensorFlow through REST from outside. Contextual chatbot conversation processing is implemented in the Node.js backend; communication with the Oracle JET client is handled by Socket.io.

A key point in chatbot implementation is correct data structure construction for the machine training process. The more accurate the learning, the better the classification results achieved afterwards. Chatbot training data can come in the form of JSON. Training data quality can be measured by the overlap between intents and sample sentences - the more overlaps you have, the weaker the machine learning output and the less accurate the classification. Training data can contain information which is not used directly by TensorFlow - we can include intent context processing in the same structure, to be used by the context processing algorithm. A sample JSON structure for training data (a hedged sketch follows):
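A hedged sketch of such a structure - tags, patterns and context names are sample placeholders following the tutorial format referenced above:

{
  "intents": [
    {
      "tag": "opening_hours",
      "patterns": ["When are you open?", "What are your hours?"],
      "responses": ["We are open 9am to 5pm, Monday to Friday."],
      "context_set": ""
    },
    {
      "tag": "pharmacy_search",
      "patterns": ["Where is the nearest pharmacy?"],
      "responses": ["Which area are you in?"],
      "context_set": "search_location"
    }
  ]
}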


Accurate classification by TensorFlow is only one piece of the chatbot functionality. We need to maintain conversation context. This can be achieved in the Node.js backend with a custom algorithm. In my example, when context is not set, TensorFlow is called to classify the statement and produce intent probabilities. There might be multiple intents classified for the same sentence - TensorFlow will return multiple probabilities. It is up to you either to always choose the intent with the top probability or to ask the user to choose. Communication back to the client is handled through Socket.io by calling the socket.emit function:


If the context was already set, we don't call the classification function - we don't need it in this step. Rather, we check the intent context mapping for what the next step should be. Based on that information, we send a question or action back to the client, again through Socket.io by calling the socket.emit function:


The chatbot UI is implemented as a JET custom component (check how it works in the JET cookbook). This makes it easy to reuse the same component in various applications:


Here is an example where the chatbot UI is included in a consuming application. It comes with a custom listener, where any custom actions are executed. The custom listener allows moving any custom logic outside of the chatbot component, making it truly reusable:


An example of custom logic - based on the chatbot reply, we can load an application module, assign parameter values, etc.:


The chatbot UI implementation is based on a list, which renders bot and client messages using a template. The template detects whether a message belongs to the client or the bot and applies the required style - this helps to render a readable list. There is also an input area and control buttons:


The JS module executes logic which displays the bot message - by adding it to the list of messages - and generates an event to be handled by the custom logic listener. The message is sent from the client to the bot server by calling the Socket.io socket.emit function:


Here is the final result - the chatbot box implemented with Oracle JET:

Effective Way to Get Changed Rows in ADF BC API

Tue, 2018-05-29 14:32
Did you ever wonder how to get all changed rows in a transaction, without iterating through the entire row set? It turns out to be pretty simple with the ADF BC API method getAllEntityInstancesIterator, which is invoked on the Entity Definition attached to the current VO.

The method works well - it returns changed rows from different row set pages, not only from the current one. In my experiment, I changed a couple of rows on the first page:


And a couple of rows on the 5th page. I also removed a row and created one:


The method returns information about all changed rows, as well as deleted and new ones:


An example of getAllEntityInstancesIterator method usage in the VO Impl class. This method helps to get all changed rows in the current transaction - very handy:


Sample application source code is available on GitHub.

Oracle ADF BC REST - Performance Review and Tuning

Sun, 2018-05-27 00:12
I thought to check how well ADF BC REST scales and how fast it performs. For that reason, I implemented a sample ADF BC REST application and executed a JMeter stress load test against it. You can access the source code for the application and the JMeter script in my GitHub repository. The application is called Blog Visitor Counter for a reason - I'm using the same app to count blog visitors. This means each time you access a blog page, the ADF BC REST service is triggered in the background and logs a counter value with a timestamp (no personal data).

The application structure is straightforward - an ADF BC REST implementation:


When the REST service is accessed (a GET request is executed), it creates and commits a new row in the background (this is why I like ADF BC REST - you have a lot of power and flexibility in the backend), before returning the total logged rows count:


The new row is assigned a counter value from a DB sequence, as well as a timestamp. Both values are calculated in Groovy. Another bonus point for ADF BC REST - besides writing logic in Java, you can do scripting in Groovy, which makes the code simpler:


That's it - the ADF BC REST service is ready to run. You may wonder how I'm accessing it from the blog page. ADF BC REST services, like any other REST services, can be invoked through an HTTP request. In this particular case, I'm calling the GET operation through an Ajax call in JavaScript on the client side. This script is uploaded to the blogger HTML (a sketch follows):
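A rough sketch of such a client-side call - the service URL is a placeholder:

// Trigger the counter REST service from a blog page via Ajax GET
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://host:port/restapp/rest/1.0/Visitors', true);
xhr.onload = function () {
  if (xhr.status === 200) {
    // The service logs the visit and returns the total logged rows count
    console.log('Visitor counter response:', xhr.responseText);
  }
};
xhr.send();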


Performance

I'm using JMeter to execute the performance test. In the example below, the REST GET request is invoked in an infinite loop by 100 concurrent threads. This creates constant load and allows measuring how the ADF BC REST application performs under such load:


ADF BC REST scales well - with 100 concurrent threads it processes requests in 0.1-0.2 seconds. Compared to ADF UI request processing time, this is around 10 times faster. That is expected, because JSF and ADF Faces UI classes are not used during an ADF BC REST request. Performance test statistics for 100 threads - see Avg logged time in milliseconds:


Tuning

1. Referenced Pool Size and Application Module Pooling

ADF BC REST executes requests in stateless mode - REST by nature is stateless. I thought to check what this means for Application Module tuning parameters. I have observed that changing the Referenced Pool Size value doesn't influence application performance - it works the same with 0 or any other value. The Referenced Pool Size parameter is not important for the ADF BC REST runtime:


The application performs well under load; there are no passivations/activations logged, even when Referenced Pool Size is set to zero.


However, I found that it is still important to keep Enable Application Module Pooling = ON. If you switch it OFF, passivation will start to appear, which consumes processing power and is highly discouraged. So, keep Enable Application Module Pooling = ON.

2. Disconnect Application Module Upon Release

It is important to set Disconnect Application Module Upon Release = ON (read more about it - ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). This ensures there will always be near zero DB connections left open:


Otherwise, if we keep Disconnect Application Module Upon Release = OFF:


DB connections will not be released promptly:


This summarises important points related to ADF BC REST tuning.

Microservice Approach for Web Development - Micro Frontends

Thu, 2018-05-17 12:16
This post is based on my Oracle Code 2018 Warsaw talk. View the presentation on SlideShare:


Wondering what the term micro frontends means? Check the micro frontends description here. Simply speaking, a micro frontend must implement business logic from top to bottom (database, middleware and UI) in an isolated environment; it should be reusable and pluggable into the main application UI shell. There must be no shared variables between micro frontends. The advantage - distributed teams can work on separate micro frontends, which improves large and modular system development. There is a runtime advantage too - if one of the frontends stops working, the main application should continue to work.

I have implemented a micro frontends architecture with Oracle JET. Source code is available in a GitHub repository. There are three applications - two with micro frontends and one master UI shell. Both micro frontends are implemented as JET Composite Components. The first is hosted on WebLogic and calls an ADF BC REST service in the backend. The second is hosted on Node.js and returns static data. The first micro frontend implements a listener, which allows handling actions from the outside.


When a JET application is accessed in your browser, a bunch of HTML, JS and CSS files is downloaded from the server. The core idea with micro frontends - instead of loading the HTML, JS and CSS for a micro frontend from the same host as the master app, load it from a different host. The JET Composite Component rendered inside the master application will be downloaded from a different host. Not only downloaded - all backend calls should go to that host too, not to the master host. JET Composite Component integration into the master application architecture:


This is how it works in practice. Each of these charts is a separate JET Composite Component, loaded as a micro frontend from a different host into the master application. We can see this in the network monitor - the loader.js scripts for both micro frontends are downloaded from different hosts:


Runtime advantage - if one or multiple micro frontends are down, the application continues to run:


A JET Composite Component runs on the client, even though it is hosted in its own micro frontend. This gives the possibility to subscribe in the master app to events happening in the component and route the event to another micro frontend. In this example, once an item is selected in the jobs chart, the employees chart (another micro frontend) is filtered:


Technical implementation

The main application must be configured to support remote module loading for the JET Composite Component. Read more about it in Duncan Mills' blog post - JET Custom Components XII - Revisiting the Loader Script. In short, you should add Xhr config in the JET application main.js (a sketch follows):
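A sketch of that configuration, based on the referenced post - treat the exact shape as an assumption for your JET version:

// main.js - let the RequireJS text plugin load composite resources
// from a remote host via XHR (CORS must be enabled on that host)
requirejs.config({
  config: {
    text: {
      useXhr: function (url, protocol, hostname, port) {
        return true;
      }
    }
  }
});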


The server where the micro frontend is hosted must set the Access-Control-Allow-Origin header - see the Express sketch below.
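On the Node.js side this can be done per response - a minimal Express sketch, where the wildcard origin is for illustration only and should be locked down in real use:

const express = require('express');
const app = express();

app.use(function (req, res, next) {
  res.header('Access-Control-Allow-Origin', '*'); // or the master app host
  next();
});

app.use(express.static('composites')); // serves loader.js and related files
app.listen(3000);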

The main module where both micro frontends are integrated uses the JET module component. Each micro frontend in the master UI shell is wrapped into a JET module. This allows the main application to function even when a micro frontend in a module stops:


The JET module is initialized from a variable, which returns the module name:


The Jobs module contains the Jobs micro frontend - a JET Composite Component. It is hosted on WebLogic and calls ADF BC REST in the backend. The component is assigned a listener:


The most important part is in the JS script. Here, instead of referencing the JET Composite Component locally, we load it from a remote host. This allows developing and hosting the micro frontend JET Composite Component on its own:


The listener refers to the c2 element and calls the method. Element c2 in the main app relates to the second micro frontend:


This component is loaded from another host, from Node.js:


Important hint - for a JET Composite Component to load from a remote host, make sure to add .js to the JET Composite Component script reference, as highlighted (refer to the source code):

Comparing Intent Classification in TensorFlow and Oracle Chatbot

Tue, 2018-04-03 10:46
I have created a sample set of intents with phrases (five phrases per intent, ten intents). I'm using this data set to train and build classification models with TensorFlow and Oracle Chatbot machine learning. Once the models are trained, I classify identical sample phrases with both TensorFlow and Oracle Chatbot and compare the results. I'm using Oracle Chatbot with both the Linguistic and Machine Learning models.

Summary:

1. Overall, the TensorFlow model performs better. The main reason for this - I was training the TensorFlow model multiple times, until good learning output (minimized learning loss) was produced.

2. Oracle Chatbot doesn't return information about learning loss after training; this makes it hard to decide if training was efficient or not. As a consequence, worse classification results can be related to slightly less efficient training, simply because you don't get information about training efficiency

3. Classification results score: 93% TensorFlow, 87% Oracle Chatbot Linguistic model, 67% Oracle Chatbot Machine Learning. TensorFlow is better, but the Oracle Chatbot Linguistic model is very close. The Oracle Chatbot Machine Learning model can be improved - see point 2

Results table (click on it to see it maximized):


TensorFlow

The list of intents for TensorFlow is provided in a JSON file. The same intents are used to train the model in Oracle Chatbot:


The TensorFlow classification model is created by training a 2-layer neural network. Once training is completed, it prints out the total loss for the training. This allows repeating the training until a model with optimal loss (as close as possible to 0) is produced - 0.00924 in this case:


The TensorFlow classification result is good - it failed to classify only one sentence, "How you work?". This sentence is not directly related to any intent, although I should mention the Oracle Chatbot Linguistic model is able to classify it. TensorFlow offers the correct classification intent as the second option, coming very close to the correct answer:



Oracle Chatbot

Oracle Chatbot provides a UI to enter intents and sample phrases - the same set of intents with phrases is used as for TensorFlow:


Oracle Chatbot offers two training models - linguistic and machine learning based.


Once the model is trained, there is no feedback about training loss. We can enter a phrase and check the intent classification result. Below is a sample Linguistic model classification failure - it fails to classify one of the intents, where the sentence topic is not perfectly clear; however, the same intent is classified well by the Oracle Chatbot Machine Learning model:


The Oracle Chatbot Machine Learning model fails on another intent, where we want to check for a hospital (hospital search) to monitor blood pressure. I'm sure that if it were possible to review training quality loss (maybe in the next release?), we could decide to re-train the model and get results close to TensorFlow. Classification with the Oracle Chatbot Machine Learning model:

Socket.IO Integration with Oracle JET

Thu, 2018-03-29 08:42
Socket.IO is a JavaScript library for realtime web applications. It comes in two parts - a client-side library that runs in the browser and a server-side library for Node.js. In this post I will walk you through a complete integration scenario with Oracle JET.

Here you can see it in action. The Send Event button in JET sends a message through Socket.IO to the Node.js server side. The message is handled on the server side and a response is sent back to the client (displayed in the browser console):


The server-side part with Socket.IO is implemented in a Node.js application and runs on Express. To create a Node.js application (which is just one JSON file in the beginning), run the command:

npm init

To add Express and Socket.IO, run the commands:

npm install express --save
npm install socket.io --save

To start the Node.js application on Express, run the command:

npm start

Double-check package.json - it should contain references to Express and Socket.IO:


Here is the server-side code for Socket.IO (I created the server.js file manually). When a connection is established with the client, a message is printed. The socket.on method listens for incoming messages; the socket.emit method transmits a message to the client. In both cases we can use a JSON structure for the payload variable. There is a cheatsheet for socket.emit - Socket.IO - Emit cheatsheet. Socket.IO server side (a minimal sketch follows):
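A minimal sketch of such a server.js - event names and payload shape are placeholders:

const app = require('express')();
const http = require('http').Server(app);
const io = require('socket.io')(http);

io.on('connection', (socket) => {
  console.log('Client connected');
  // Listen for incoming messages from the JET client
  socket.on('clientEvent', (payload) => {
    console.log('Received:', payload);
    // Transmit a response back to the client
    socket.emit('serverEvent', { reply: 'Message received: ' + payload.text });
  });
});

http.listen(3000, () => console.log('Listening on *:3000'));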


The Socket.IO client side can be installed into a JET application with NPM. There is a separate section in the Oracle JET documentation where you can read step-by-step instructions about third-party library installation into Oracle JET - Adding Third-Party Tools or Libraries to Your Oracle JET Application. I would recommend manually including the Socket.IO dependency entry in the JET package.json:


Then run the command below to fetch the Socket.IO library into the JET application node modules. Next, continue with the instructions from the Oracle JET guide and check my sample code:

npm update

To establish a socket connection - import Socket.IO into the JET module and use io.connect. Connect to the endpoint where Express is running with the server-side Socket.IO listener. The client side uses the same socket.on and socket.emit API methods as the server side (a sketch follows):
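A hedged client-side sketch within a JET module - the module path and event names mirror the placeholders from the server sketch and depend on your RequireJS path mapping:

define(['socket.io'], function (io) {
  var socket = io.connect('http://localhost:3000');

  // Same socket.on / socket.emit API as on the server side
  socket.on('serverEvent', function (payload) {
    console.log('Response from server:', payload.reply);
  });

  return {
    sendEvent: function (text) {
      socket.emit('clientEvent', { text: text });
    }
  };
});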


Download sample code from my GitHub repository.

ADF on Docker - Java Memory Limit Tuning for JVM

Wed, 2018-03-28 03:46
It might look like a challenge to run Java in a Docker environment - by default Java is not aware of Docker memory limits. Check this article for example - Java inside docker: What you must know to not FAIL. I was able to run WebLogic and ADF (Essential WebLogic Tuning to Run on Docker and Avoid OOM) on Docker previously without Java memory issues, using JAVA_OPTIONS=-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC. However, after a Docker upgrade to the latest version, these settings didn't help anymore. I didn't want to hardcode a memory setting with -Xmx.

Java started to consume all available memory in Docker and eventually was killed. You can see this in the chart below - memory is growing, the process is killed and after restart memory grows again:


To solve this behaviour, I applied settings from the Java Platform Group, Product Management Blog - Java SE support for Docker CPU and memory limits. I replaced the previously set JAVA_OPTIONS=-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC with JAVA_OPTIONS=-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+UseG1GC (a hedged example of passing this to the container follows).
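Depending on how your image consumes JAVA_OPTIONS, the setting can be passed when starting the container - a hedged example, with image name and memory limit as placeholders:

docker run -d -m 2g \
  -e JAVA_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+UseG1GC" \
  mycompany/adf-weblogic:latest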

JAVA_OPTIONS=-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+UseG1GC did the job - the JVM stays sharply within the Docker memory limits:


This chart shows Java memory behaviour before and after the settings were applied. From March 27th, Java memory is a straight line with JAVA_OPTIONS=-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+UseG1GC:

Oracle JET Offline Persistence Toolkit - Offline Update Handling

Tue, 2018-03-27 09:42
The Oracle JET Offline Persistence Toolkit supports offline update, create and delete operations. In this post I will describe the update use case. Read my previous post related to the offline toolkit, where I explain how to handle REST pagination, querying and shredding - REST Paging Support by Oracle Offline Persistence in JET.

This gif shows a scenario where we go into offline mode and then change data in multiple rows. The data update happens offline and each PATCH request is tracked by the offline persistence toolkit:


As soon as we go online (the Offline checkbox value is changed in Chrome Developer Tools), requests executed while offline are replayed automatically against the backend server:


Let's see how the update flow is implemented in JET in this particular case. Once data is changed, we call the submitUpdate function. This function in turn calls the JET Model API function save. This triggers a PATCH call to the backend to update the data. If we are offline, the JET offline persistence toolkit transparently records the PATCH request to be able to replay it later when online. No specific code changes are needed by the developer to support offline logic during the REST call (a sketch follows):
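A hedged sketch of such a submitUpdate function - the attribute names are placeholders, and patch: true is what makes the JET Model issue a PATCH for the changed attributes:

function submitUpdate(model, changes) {
  model.save(changes, {
    patch: true, // send only the changed attributes as a PATCH request
    success: function (updatedModel) {
      console.log('Update saved (or recorded for offline replay)');
    },
    error: function () {
      console.log('Update failed');
    }
  });
}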


Once we go online, a listener is invoked and it calls our function synchOfflineChanges. This function triggers request replay to the backend. This means we can control when requests are replayed. Besides this, we can handle each request which failed to be replayed - this is important when a data conflict happens during an update in the backend:


The online handler is registered with window.addEventListener in the same module where the persistence manager is defined (see the sketch below):
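The registration itself is the plain DOM API - a minimal sketch:

// Replay recorded requests as soon as the browser goes online
window.addEventListener('online', function () {
  synchOfflineChanges(); // defined where the persistence manager lives
});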


Offline Persistence Toolkit 1.1.1 supports extensive logging. You can update to version 1.1.1 by running the npm install @oracle/offline-persistence-toolkit command:


To enable the persistence toolkit logger, add the persist/impl/logger module to your target module and call logger.option('level', logger.LEVEL_LOG):


The logger prints useful information about the offline update - this helps to debug offline functionality:


Download the sample application from the GitHub repository.

ADF Declarative Component Example

Thu, 2018-03-22 12:48
ADF Declarative Component support is a popular ADF framework feature, but in this post I would like to explain it from a slightly different angle. I will show how to pass ADF binding and Java bean objects into a component through properties - in cases when the component must show data from ADF bindings, such an approach offers robustness and simplifies component development.

This is the component implemented in the sample app - the choice list renders data from an ADF LOV and the button calls a Java bean method to print the selected LOV item value (retrieved from ADF bindings):


JDeveloper provides a wizard to create the initial structure for a declarative component:


This is the ADF declarative component - it is rendered from our own tag. There are two properties. The list binding property is assigned an LOV binding object instance and the bean property a Java bean instance defined in backing bean scope. In this way, we pass objects directly into the component:


The LOV binding is defined in the page definition file of the target page, where the component is consumed:


The bean is defined in the same project as the page which consumes the ADF declarative component. We need to define the component property type to match the bean type; for that reason, we must create a class interface in the component library and implement it in the target project:


The component and main projects can be in the same JDEV application; we can use JDEV working sets to navigate between projects (when running the main project, we don't want to run the component project - the component project is deployed and reused through an ADF JAR library):


The bean interface is defined inside the component:


The property for the list binding is defined with the JUCtrlListBinding type - this allows passing the binding instance directly to the component. The same applies to the bean instance, using the interface to define the bean instance type, which will be assigned from the page where the component is used:


The declarative component is based on a combination of ADF Faces components:


Download the sample application from the GitHub repository.
