Anthony Shorten


Converting your XAI Services to IWS using scripting

Tue, 2017-10-10 17:14

With the deprecation announcement surrounding XML Application Integration (XAI), it is possible to convert to Inbound Web Services (IWS) manually or by using a simple script. This article outlines the process of building a script to bulk transfer the definitions from XAI to IWS.

Ideally, it is recommended that you migrate each XAI Inbound Service to Inbound Web Services manually, so that you can take the opportunity to rationalize your services and reduce your maintenance costs. However, if you simply want to transfer over to the new facility in bulk, this can be done via a service script that migrates the information.

This can be done using a number of techniques:

  • You can drive the migration via a query portal that can be called via a Business Service from a BPA or batch process.
  • You can use the Plug-In Batch to pump the services through a script to save time.

In this article I will outline the latter example to illustrate the migration as well as highlight how to build a Plug In Batch process using configuration alone.

Note: Code and design in this article are provided for illustrative purposes and only cover the basic functionality needed for the article. Variations on this design are possible through the flexibility of the extension facilities of the product. These are not examined in any detail except to illustrate the basic process.

Note: The names of the objects in this article are just examples. Alternative values can be used, if desired.

Design

The design for this is as follows:

  • Build a Service script that will take the XAI Inbound Service identifier to migrate and perform the following
    • Read the XAI Inbound Service definition to load the variables for the migration
    • Check that the XAI Inbound Service is valid to be migrated. This means it must be owned by Customer Modification and use the Business Adaptor XAI Adapter.
    • Transfer the XAI Inbound Service definition to the relevant fields in the Inbound Web Service and add the service. Optionally, activate the service ready for deployment. The deployment activity itself should not be part of the script, as it is usually not a per-service activity.
    • By default the following is transferred:
      • The Web Service name would be the Service Name on the XAI Inbound Service, not the identifier, as the identifier is randomly generated.
      • Common attributes are transferred across from the existing definition
      • A single operation, with the same name as the Inbound Web Service, is created as a minimalist migration option.
  • Build a Plug In Batch definition to include the following:
    • The Select Record algorithm will identify the list of services to migrate. It should be noted that only services that are owned by the Customer Modification (CM) owner should be migrated as ownership should be respected.
    • The script for the above will be used in the Process Record algorithm.

The following diagram illustrates the overall process:

Plug In Development Process

The design of the Plug In Batch will only work for Oracle Utilities Application Framework V4.3.0.4.0 and above but the Service Script used for the conversion can be used with any implementation of Oracle Utilities Application Framework V4.2.0.2.0 and above. On older versions you can hook the script into another script such as BPA or drive it from a query zone.

Note: This process should ONLY be used to migrate XAI Inbound Services that are Customer Modifications. Services owned by the product itself should not be migrated to respect record ownership rules.

XAI Inbound Service Conversion Service Script

The first part of the process is to build a service script that establishes an Inbound Web Service for an XML Application Integration Inbound Service. To build the script the following process should be used:

  • Create Business Objects - Create a Business Object, using Business Object maintenance, based upon XAI SERVICE (XAI Inbound Service) and F1-IWSSVC (Inbound Web Service) to be used as Data Areas in your script. You can leave the schemas as generated, with all the elements defined, or remove the elements you do not need (as this is only a transient piece of functionality). I will assume that the schema will be as per the default generation using the Schema generator in the Dashboard. Remember to allocate the Application Service for security purposes (I used F1-DFLTS as that is provided in the base metadata). The settings for the Business Objects are summarized as follows:

    Setting              | XAI Inbound Service BO Values                 | IWS Service BO Values
    Business Object      | CMXAIService                                  | CMIWSService
    Description          | XAI Service Conversion BO                     | IWS Service Conversion BO
    Detailed Description | Conversion BO for XML Application Integration | Conversion BO for Inbound Web Services
    Maintenance Object   | XAI SERVICE                                   | F1-IWSSVC
    Application Service  | F1-DFLTS                                      | F1-DFLTS
    Instance Control     | Allow New Instances                           | Allow New Instances
  • Build Script - Build a Service Script with the following attributes:

    Setting               | Value
    Script                | CMConvertXAI
    Description           | Convert an XAI Service to IWS Service
    Detailed Description  | Script that converts the passed-in XAI Service Id into an Inbound Web Service:
                          |   - Reads the XAI Inbound Service definition
                          |   - Copies the relevant attributes to the Inbound Web Service
                          |   - Adds the Inbound Web Service
    Script Type           | Service Script
    Application Service   | F1-DFLTAPS
    Script Engine Version | 3.0
    Data Area             | CMIWSService (Data Area Name: IWSService)
    Data Area             | CMXAIService (Data Area Name: XAIService)
    Schema                | (this is the input value and some temporary variables, shown below)

<schema>
  <xaiInboundService mdField="XAI_IN_SVC_ID"/>
  <operations type="group">
    <iwsName/>  
    <operationName/>  
    <requestSchema/>  
    <responseSchema/>  
    <requestXSL/>  
    <responseXSL/>  
    <schemaName/>  
    <schemaType/>  
    <transactionType/>  
    <searchType/>
  </operations>
</schema>

The Data Area section looks like this:

  • Add the following code to your script (this is in individual edit-data steps):

Note: The code below is very basic and there are optimizations that can be done to make it smaller and more efficient. This is just some sample code to illustrate the process.

10: edit data
     // Jump out if the inbound service Id is blank
     if ("string(parm/xaiInboundService) = $BLANK")
       terminate;
     end-if;
end-edit;
20: edit data
     // populate the key value from the input parameter
     move "parm/xaiInboundService" to "XAIService/xaiServiceId";
     // invoke the XAI Service BO to read the service definition
     invokeBO 'CMXAIService' using "XAIService" for read;
     // Check that the Service Name is populated at a minimum
     if ("XAIService/xaiInServiceName = $BLANK")
       terminate;
     end-if;
     // Check that the Service type is correct
     if ("XAIService/xaiAdapter != 'BusinessAdaptor'")
       terminate;
     end-if;
     // Check that the owner flag is CM
     if ("XAIService/customizationOwner != 'CM'")
       terminate;
     end-if;
end-edit;
30: edit data
     // Copy the key attributes from XAI to IWS
     move "XAIService/xaiInServiceName" to "IWSService/iwsName";
     move "XAIService/description" to "IWSService/description";
     move "XAIService/longDescription" to "IWSService/longDescription";
     move "XAIService/isTracing" to "IWSService/isTracing";
     move "XAIService/postError" to "IWSService/postError";
     move "XAIService/shouldDebug" to "IWSService/shouldDebug";
     move "XAIService/xaiInServiceName" to "IWSService/defaultOperation";
     // Assume the service will be Active (this can be altered)
     // For example, set this to false to allow for manual checking of the
     // setting. That way you can confirm the service is set correctly and then
     // manually set Active to true in the user interface.
     move 'true' to "IWSService/isActive";
     // Process the list for the operation to the temporary variables in the schema
     move "XAIService/xaiInServiceName" to "parm/operations/iwsName";
     move "XAIService/xaiInServiceName" to "parm/operations/operationName";
     move "XAIService/requestSchema" to "parm/operations/requestSchema";
     move "XAIService/responseSchema" to "parm/operations/responseSchema";
     move "XAIService/inputXSL" to "parm/operations/requestXSL";
     move "XAIService/responseXSL" to "parm/operations/responseXSL";
     move "XAIService/schemaName" to "parm/operations/schemaName";
     move "XAIService/schemaType" to "parm/operations/schemaType";
     // move "XAIService/transactionType" to "parm/operations/transactionType";
     move "XAI/searchType" to "parm/operations/searchType";
     // Add the parameters to the operation list object
     move "parm/operations" to "IWSService/+iwsServiceOperation";
end-edit;
40: edit data
     // Invoke BO for Add
     invokeBO 'CMIWSService' using "IWSService" for add;
end-edit;

Note: The code example above does not add annotations to the Inbound Web Service to attach policies for true backward compatibility. It is assumed that policies are set globally rather than on individual services. If you want to add annotation logic to the script, it is recommended to add an annotations group to the script's internal data area and add the annotation list logic to the script.

One thing to point out for XAI: to use the same payload for an XAI service in Inbound Web Services, a single operation must exist with the same name as the Service Name. This is the design pattern for a one-to-one conversion. It is possible to vary from that if you manually convert from XAI to IWS, as you can reduce the number of services in IWS by using multiple operations. Refer to Migrating from XAI to IWS (Doc Id: 1644914.1) and Web Services Best Practices (Doc Id: 2214375.1) from My Oracle Support for a discussion of the various techniques available. The attribute mapping looks like this:

Mapping of objects

The Service Script has now been completed. All that is needed is to pass the XAI Inbound Service identifier (not the name) in the parm/xaiInboundService structure.
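For example, a calling BPA or service script step might look like the following sketch (ConvertXAI is a Data Area based on the CMConvertXAI script, and parm/serviceId is a hypothetical input element holding the identifier):

// Pass the XAI Inbound Service identifier to the conversion script
move "parm/serviceId" to "ConvertXAI/xaiInboundService";
// Invoke the conversion service script
invokeSS 'CMConvertXAI' using "ConvertXAI";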

Building The Plug In Batch Control

In past releases, the only way to build a batch process controlled via a Batch Control was to use the Oracle Utilities SDK using Java. It is now possible to define what is termed a Plug-In based Batch Control, which allows you to use ConfigTools and some configuration to build your batch process. The fundamental principle is that batch is basically selecting a set of records to process and then passing those records into something that processes them. In our case, we will provide an SQL statement to select the subset of XAI services to convert and pass each one to the service script we built in the previous step.

Select Records Algorithm

The first part of the Plug In Batch process is to define the Select Records algorithm that defines the parameters for the Batch process, the commit strategy and the SQL used to pump the records into the process. The first step is to create a script to be used for the Algorithm Type of Select Records to define the parameters and the commit strategy. For this example I created a script with the following parameters:

Setting              | Value
Script               | CMXAISEL
Description          | XAI Select Record Script - Parameters
Detailed Description | This script is the driver for the Select Records algorithm for the XAI to IWS conversion
Script Type          | Plug-In Script
Algorithm Entity     | Batch Control - Select Records
Script Version       | 3.0

The script step is as follows:

10: edit data
 // Set strategy and key field
 // Strategy values are dictated by BATCH_STRATEGY_FLG lookup
 //  Set JOBS strategy as this is a single threaded process
 //  I could use THDS strategy but then would have to put in logic for
 // restart in the SQL. The current SQL has that logic already implied.
 move 'JOBS' to "parm/hard/batchStrategy";
 move 'XAI_IN_SVC_ID' to "parm/hard/keyField";
end-edit;

Note: I have NO parameters for this job. If you wish to add processing for parameters, take a look at some examples of this algorithm type to see the processing necessary for bind variables.

The next step is to create an algorithm type. This will be used by the algorithm itself to define the process. Typically, an algorithm type is the definition of the physical aspects of the algorithm and its parameters. For the select algorithm the following algorithm type was created:

Setting              | Value
Algorithm Type       | CMXAISEL
Description          | XAI Selection Algorithm
Detailed Description | This algorithm type is a generic wrapper to set the job parameters
Algorithm Entity     | Batch Control - Select Records
Program Type         | Plug-In Script
Plug-In Script       | CMXAISEL
Parameter            | SQL (Sequence 1 - Required) - This is the SQL to pass into the process

The last step is to create the Algorithm to be used in the Batch Control. This will use the Algorithm Type created earlier. Create the algorithm definition as follows:

Setting        | Value
Algorithm Code | CMXAISEL
Description    | XAI Conversion Selection
Algorithm Type | CMXAISEL
Effective Date | Any valid date in the past is acceptable
SQL Parameter  | (see below)

SELECT xai_in_svc_id FROM ci_xai_in_svc
 WHERE xai_adapter_id = 'BusinessAdaptor'
   AND xai_in_svc_name NOT IN (SELECT in_svc_name FROM f1_iws_svc)
   AND owner_flg = 'CM'

You might notice the SQL used in the driver. It passes the XAI_IN_SVC_ID values for XAI Inbound Services that use the Business Adaptor, are not already converted (for restart purposes) and are owned by Customer Modification.

Process Records Algorithm

The next step is to link the script created earlier to the Process Records algorithm. As with the Select Records algorithm, a script, an algorithm type and algorithm entries need to be created.

The first part of the process is to build a Plug-In Script to pass the data from the Select Records Algorithm to the Service Script that does the conversion. The parameters are as follows:

Setting              | Recommended Value
Script               | CMXAIProcess
Description          | Process XAI Records in Batch
Detailed Description | This script reads the parameters from the Select Records algorithm and passes them to the XAI conversion script
Script Type          | Plug-In Script
Algorithm Entity     | Batch Control - Process Record
Script Version       | 3.0
Data Area            | Service Script - CMConvertXAI (Data Area Name: ConvertXAI)

The script step is as follows:

if ("parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value != $BLANK")
    move "parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value" to "ConvertXAI/xaiInboundService";
    invokeSS 'CMConvertXAI' using "ConvertXAI" ;
end-if;

The script above basically takes the parameters passed to the algorithm and then passes them to the Service Script for processing.

The next step is to define this script as an Algorithm Type:

Setting              | Value
Algorithm Type       | CMXAIPROC
Description          | XAI Conversion Algorithm
Detailed Description | This algorithm type links the algorithm to the service script to drive the process
Algorithm Entity     | Batch Control - Process Record
Program Type         | Plug-In Script
Plug-In Script       | CMXAIProcess

The last step in the algorithm process is to create the Algorithm entry itself:

Setting        | Value
Algorithm Code | CMXAIPROCESS
Description    | XAI Conversion Process Record
Algorithm Type | CMXAIPROC

Plug In Batch Control Configuration

The last part of the process is to bring all the configuration into a single place, the Batch Control. This will pull in the algorithms into a configuration ready for use.

Setting                     | Value
Batch Control               | CMXAICNV
Description                 | Convert XAI Services to IWS
Detailed Description        | This batch control converts the XAI Inbound Services to Inbound Web Services to aid in the mass migration of the meta data to the new facility. This batch job only converts the following:
                            |   - XAI Services that are owned by Customer Modification, to respect record ownership
                            |   - XAI Services that use the Business Adaptor XAI Adapter (other types are automatically converted in IWS)
                            |   - XAI Services that are not already defined as Inbound Web Services
Application Service         | F1-DFLTAPS
Batch Control Type          | Not Timed
Batch Category              | Adhoc
Algorithm - Select Records  | CMXAISEL
Algorithm - Process Records | CMXAIPROCESS

The Plug-in batch process is now defined.

Summary

The conversion process can be summarized as follows:

  • A Service Script is required to transfer the data from the XAI Inbound Service to the Inbound Web Service definition. Only services that are owned by Customer Modification, have not already been migrated and use the Business Adaptor XAI Adapter are converted. The script sets the same parameters as the XAI Service for backward compatibility and creates a SINGLE operation Web Service with the same payload as the original.
  • The Select Records algorithm defines the subset of records to process. It consists of a script that defines the job properties, an algorithm type that registers the script with the framework, and an algorithm, holding the SQL to use, that is linked to the Batch Control.
  • The Process Records algorithm defines the processing of the records from Select Records and links in the Service Script from the first step. As with any algorithm, the code is built (in this case a Plug-In Script that passes the data to the Service Script), an algorithm type defines the script, and an algorithm definition is created to link to the Batch Control.
  • The last step is to create the Batch Control that links the Select Records and Process Records algorithms.

Single Submitter Support in Oracle Scheduler Integration

Tue, 2017-08-29 17:47

The Oracle Scheduler integration was released for Oracle Utilities Application Framework to provide an interface to the DBMS_SCHEDULER package in the Oracle Database. 

By default, when submitting a multi-threaded job where the thread_limit is set to a number greater than 1 and the thread_number on the submission is set to zero (to spawn threads), the interface would submit each thread individually, one after the other. For a large number of threads, this may lead to a high level of lock contention on the Batch Control table. To resolve this issue we have enhanced the interface with a new feature that reduces the lock contention using a single submitter.

To use this facility you can either use a new command line override:

OUAF_BATCH.Submit_Job(
...
single_submitter => true,
...
)

Or it can be set via the Set_Option facility (globally or on individual jobs). For example, for a global scope:

OUAF_BATCH.Set_Option(scope => 'GLOBAL', name => 'single_submitter', value => true);

The default for this facility is set to false (for backward compatibility). If the value is set to true, you cannot restart an individual thread until all running threads have ended.
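For example, to set the option on an individual job rather than globally, the scope can name the batch control (a sketch only; MYBATCH is a placeholder batch code, assuming the job-level scope is expressed as the batch control code):

-- Enable the single submitter for one job only (MYBATCH is a placeholder)
OUAF_BATCH.Set_Option(scope => 'MYBATCH', name => 'single_submitter', value => true);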

This patch is available from My Oracle Support for a number of releases:

Release   | Patch
4.2.0.3.0 | 24299479
4.3.0.1.0 | 26440254
4.3.0.2.0 | 26452535
4.3.0.3.0 | 26452546
4.3.0.4.0 | 26452556


Team based To Do Management

Thu, 2017-08-17 19:39

One of the interesting discussions I have with customers and partners about the To Do functionality in the Oracle Utilities Application Framework based products is team management. Most partners and customers think that the To Do functionality is limited to one role per To Do type. This is because most examples they see in training or in demonstrations show one role per To Do type. There is "more than meets the eye" to the functionality.

The To Do functionality can be configured in different ways to implement different allocation mechanisms. Let me discuss an alternative configuration that may appeal to some implementations.

  • Create a To Do Role for each organizational team in your organization. These do not have to be whole parts of your organization; they can simply be groups of people with similar skills or work responsibilities. You decide the number of groups and their composition. I will use the word "team" rather than To Do Role in the rest of this article to emphasize the alternative view.
  • By using teams you might actually reduce your maintenance costs, as you will probably have fewer teams to manage than To Do types. Remember, at the moment people think that you can only have one team per To Do Type.
  • Allocate people to those teams. You have full flexibility here. A person can be a member of any team you wish and of course they can be members of multiple teams (even overlapping ones - more about this later).
  • Allocate the teams to the To Do Types they will be working on. Now that you have teams, you can allocate multiple teams per To Do type. Remember one of the teams should be allocated as the Default so that your algorithms, batch jobs, etc. have a default to allocate.

Now your implementation will be using teams of people rather than using one role per To Do Type. This means you can allocate to teams (or individuals) and supervisors can manage teams.

Remember the use of a capability in the product is not restricted to what is shown in demonstrations. Think outside the box.

High Availability Designs

Mon, 2017-08-14 19:58

One of the most common tasks in any implementation of an Oracle Utilities Application Framework product is the design of a high availability environment to ensure business continuity and availability.

The Oracle Utilities Application Framework is designed to allow implementations to use a wide variety of high availability and business continuity solutions available in the market. As the product is housed in Oracle WebLogic and Oracle Database then we can utilize the high availability features of those products.

If you are considering designing a high availability architecture here are a few guidelines:

  • Consider the Oracle Maximum Availability Architecture, which has guidelines for designing high availability and business continuity solutions for a variety of scenarios.
  • Design for your business requirements and hardware platform. Solutions can vary from low-cost designs with minimal hardware to highly configured, complex hardware/software solutions.
  • Do not discount solutions built into your hardware platform. Redundancy and high availability features of hardware can be part of the solution that you propose for an implementation. These are typically already in place so offer a cost effective component of any solution.
  • Design for your budget. I have seen implementations where they design a complex high availability solution only to get "sticker shock" when the price is discussed. I usually temper costs of a solution against the estimated business loss from an availability issue or a business continuity issue. It is very similar to discussions around insurance you might have personally.
  • Customers of Oracle Utilities Application Framework based product have used both hardware and/or software based availability and business continuity solutions. This includes hardware at the load balancing level, such as routers, to implement high availability.
  • Oracle typically recommends clustering as one of the techniques to consider in your solutions. Oracle Utilities Application Framework supports clustering for Oracle WebLogic, Oracle Coherence and Oracle Database. We support clusters within user channels (online, web services and batch) and across those channels as well.
  • Oracle typically recommends Real Application Clustering (including One Node implementations) as part of an availability solution. Oracle Utilities Application Framework supports RAC and includes support for newer implementations of that technology through features such as Oracle Notification Service (ONS).
  • One of the most common business continuity solutions customers have chosen is to use Oracle Data Guard or Oracle Active Data Guard to keep a backup database in synchronization with the prime database. Customers wanting to use the backup database for reporting tend to choose Oracle Active Data Guard as their preferred solution.
  • Batch can be clustered using Oracle Coherence (with flexibility in the architecture) and in Oracle Cloud SaaS implementations, we support batch clustering via Oracle WebLogic clustering. For customers interested in batch architecture refer to Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

The following references for MAA may help you design your solution:

Updated Integration Solutions Whitepaper - Augment your solutions

Wed, 2017-08-09 19:58

Whilst Oracle Utilities Application Framework is flexible and supports a wide range of solutions in the marketplace, there are some requirements that are actually best served using other Oracle technology integrated for your implementation. A whitepaper outlining a summary of the most common technology integrations has been updated to the latest release.

This whitepaper outlines the most common Oracle technology integrations that have been used by Oracle and its partners to implement complete solutions in the marketplace. It is designed to help customers and partners make judgements on the technology available and how to integrate this technology with your Oracle Utilities Application Framework based product.

The whitepaper is Integration Reference Solutions (Doc Id: 1506855.1) and is available from My Oracle Support.

Integration Architecture

The updates include the latest information as well as helpful links to other documentation to help plan and design your integrated solutions.

Updated Whats New whitepaper - 4.3.0.4.0

Sun, 2017-08-06 17:40

The Whats New in FW4 whitepaper has been updated for the latest service pack release. This whitepaper summarizes the major technology and functional changes implemented in the Oracle Utilities Application Framework from V2.2 through the latest service pack. It is primarily of interest to customers upgrading from those earlier versions who want to understand what has changed and what is new in the framework since that early release.

The whitepaper is only a summary of selected enhancements and it is still recommended to review the release notes of each release if you are interested in the details of everything that has changed. This whitepaper does not cover the changes to any of the products that use the Oracle Utilities Application Framework; refer to the release notes of the individual products for details of new functionality.

The whitepaper is available from Whats New in FW4 (Doc Id: 1177265.1) from My Oracle Support.

Securing Your JNDI Resources for Other Groups

Thu, 2017-08-03 16:25

As with other applications, the Oracle Utilities Application Framework respects the settings within the Oracle WebLogic domain, including any default settings. One of the default settings for the domain is access to the JNDI resources within the domain. By default, Oracle WebLogic grants access to Everyone that is defined in the security realm definition of the domain. Whilst this is generally acceptable in the vast majority of domains that are set up (remember you tend to set up a lot of non-production copies in any implementation of the products), it may not be appropriate for production domains. There is a simple setup to correct that.

  • Create a group to designate the specific users, outside the application users, to whom you want to give access to the JNDI resources. Allocate the user identities to that group in your security repository. If you use the internal LDAP of Oracle WebLogic then you can add them using the console. If you want to designate different groups of people, create different groups.
    • Remember you have groups already for other users, Administrators and the product group. For this documentation we will use the Administrators and cisusers groups. You can vary the values according to your site setup. These will be reused for the setup.
  • Create a Global Role which refers to the above group. If you created multiple then specify each group in the role.
  • On the product server(s) or cluster, select the View JNDI Tree option on the Configuration --> General tab. For example:

View JNDI Tree

  • On the root node of the server definition in the tree remove the Everyone from the node using the Remove button. The Administrators should be the only group that has access at the root level. Do NOT remove Administrators as this will corrupt your access to the domain. The following is an example of the recommended settings:

Root Node Access

  • All child nodes in the JNDI inherit the root node setup. Now for the product to work you need to add cisusers to the following JNDI objects:
    • The servicebean must be accessible for cisusers. This will be under the context value set for your domain.
    • The Data Sources (OUAF_DS in my example) must be accessible to cisusers.
    • The JMX nodes should be accessible to cisusers if you are using JMX monitoring (directly or via OEM).
    • If using the internal JMS processing, whether that is JMS Senders or MDBs, then you must allow cisusers access to the JMS resources in the domain.
  • Add your custom group to the relevant JNDI objects they need to have access to.
  • Set the Enable Remote JDBC Connection Property to false. This can be done using the JAVA_OPTIONS setting in the setDomainEnv[.sh] script shipped with Oracle WebLogic in the bin directory of your domain home (Add -Dweblogic.jdbc.remoteEnabled=false to JAVA_OPTIONS). Check that the variable WLS_JDBC_REMOTE_ENABLED is not set incorrectly.
  • If you are using SSL, you need to set the RMI JDBC Security to Secure to ensure Administrators use SSL as well for connections. For example:

RMI JDBC Security

The domain is now more secure.


Calling Batch Level Of Service

Wed, 2017-07-26 18:38

As a followup to my Batch Level Of Service article, I want to illustrate how to call your new algorithm from other scripts and as part of query zones.

In the base product we ship a Business Service, F1-BatchLevelOfService, that allows a script or query zone to call the Batch Level Of Service algorithm attached to a Batch Control, if it exists, to return the level of service. I should point out that if a Batch Level Of Service algorithm is not configured on the Batch Control, this call will return the Disabled state.

The schema for this service is shown below (use the View Schema feature on your version, as the schema may differ in later versions):

Level of Service Schema

To use this service you need to populate the batchControlId input parameter when calling the service for the service to return the message and levelOfService.

Now, how do you call this from other objects?

  • Service Scripts - Include the F1-BatchLevelOfService service as a Data Area attached to the script and use invokeBS to call the business service. For example:

move "parm/batchControlId" to "F1-BatchLevelOfService/input/batchControlId";
invokeBS 'F1-BatchLevelOfService' using "F1-BatchLevelOfService";

  • Query Portal - Use the source=bs tag in your column with a call to the F1-BatchLevelOfService service passing the column that contains the Batch Control Id. For example:

source=BS bs='F1-BatchLevelOfService' input=[input/batchControlId=C1] output=output/levelOfService

Additionally you can use F1-ReturnMessage to format the message which is returned as well.
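For example, an additional column could expand the returned message information via F1-ReturnMessage in the same way (a sketch only; the element names, and the columns C2 and C3 holding the message category and number, are assumptions, so confirm them with View Schema on your version):

source=BS bs='F1-ReturnMessage' input=[input/messageCategory=C2 input/messageNumber=C3] output=output/expandedMessage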

Here is an example of the columns used in a query portal:

Example Use of Batch Level Of Service

Building a Batch Level of Service Algorithm

Fri, 2017-07-21 00:45

One of the features of the Oracle Utilities Application Framework is the Batch Level of Service. This is an optional feature where the Oracle Utilities Application Framework can assess the current execution metrics against some target metrics and return whether the batch job met its targets or failed in meeting targets (including the reason).

This facility is optional and requires some configuration on the Batch Control using a Batch Level Of Service algorithm. This algorithm takes the BATCH_CD as an input and performs the necessary processing to check the level of service (any way you wish).

The algorithm is passed a Batch Code (batchControlId) and passes back the following:

  • The Level Of Service, levelOfService,  (as expressed by the system lookup F1_BATCH_LEVEL_SERVICE_FLG):
    • DISA (Disabled) - The Batch Level Of Service is disabled as the algorithm is not configured on the Batch Control record. This is the default.
    • NORM (Normal) - The execution of the batch job is within the service level you are checking.
    • ERRO (Error) - The execution of the batch job exceeds the service level you are checking.
    • WARN (Warning) - This can be used to detect that the job is close to the service level (if you require this functionality).
  • The reason for the Level Of Service, expressed as a message (via Message Category, Message Number and Message Parameters). This allows you to customize the information passed to express why the target was within limits or exceeded.

So it is possible to use any metric in your algorithm to measure the target performance of your batch controls. This information will be displayed on the Batch Control or via the F1-BatchLevelOfService Business Service (for query portals).

Now, I will illustrate the process for building a Batch Level Of Service with an example algorithm. This sample will just take a target value and assess the latest completed execution. The requirements for the sample algorithm are as follows:

  • A target will be set on the parameters of the algorithm which is the target value in seconds. Seconds was chosen as that is the lowest common denominator for all types of jobs.
  • The algorithm will determine the latest batch number or batch rerun number (to support reruns) for completed jobs only. We have an internal business service, F1-BatchRunStatistics, that returns the relevant statistics when given the batch code, batch number and batch rerun number.
  • The duration returned will be compared to the target and the relevant levelOfService set with the appropriate message.

Here is the process I used to build my algorithm:

  • I created three custom messages that would hold the reason for the NORM, ERRO and WARN states. I do not use the last state in my algorithm, though I might revisit that in a future set of articles. For example:

Messages for Batch Level Of Service

  • You might notice that in the message for when the target is exceeded, I include the target as part of the message (to tell you how far you are away from the target). The first parameter will be the target and the second will be the value returned from the product.
  • The next step is to define the Business Service that will return the batch identifiers of the execution I want to evaluate for the statistic. In this case I want to find the latest run number for a given batch code. Now, there are various ways of doing this but I will build a business service to bring back the right value. In this case I will do the following:
    • I will build a query zone with the following configuration to return the batch run number and batch rerun number:
Parameter               | Setting
Zone                    | CMBHZZ
Description             | Return Batch Last Run Number and Rerun Number
Zone Type               | F1-DE-SINGLE
Application Service     | F1-DFLTS
Width                   | Full
Hidden Filter 1         | label=BATCH_CD
Initial Display Columns | C1 C2 C3
SQL Statement           | select b1.batch_cd, max(b1.batch_nbr), max(b2.batch_rerun_nbr)
                        |   from ci_batch_inst b1, ci_batch_inst b2
                        |  where b1.batch_cd = :H1
                        |    and b1.batch_cd = b2.batch_cd
                        |    and b1.batch_nbr = b2.batch_nbr
                        |  group by b1.batch_cd
Column 1                | source=SQLCOL sqlcol=1 label=BATCH_CD
Column 2                | source=SQLCOL sqlcol=2 label=BATCH_NBR
Column 3                | source=SQLCOL sqlcol=3 label=BATCH_RERUN_NBR
  • I will convert this to a Business Service using the FWLZDEXP with the following schema:

Business Service Schema

  • I need to create a Data Area to hold my input variables. I could do this inline but I might want to reuse the Data Area for other algorithms in the future. For example:

Data Area
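As a sketch, the Data Area schema could be as simple as the following (the element names are illustrative; the mdField values mirror the CI_BATCH_INST columns used in the zone SQL):

<schema>
  <batchControlId mdField="BATCH_CD"/>
  <batchNumber mdField="BATCH_NBR"/>
  <batchRerunNumber mdField="BATCH_RERUN_NBR"/>
</schema>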

  • I now have all the components to build my algorithm via a Plug-In Script. I create a Batch Level Of Service script with the following settings:
Script Basics
  • I attach the following Data Areas. These are the data areas used by the various calls in the script:

Data Areas

  • The script code looks something like this:

Script

Note: The code shown above is for illustrative processes. It is not a supported part of the product, just an example.
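To give a feel for the shape of that logic, here is a minimal sketch of the steps the script performs (the business service name CMBatchLastRun, the schema element names and the plug-in spot elements are assumptions for illustration; check the actual schemas on your version):

10: edit data
     // Find the latest run and rerun number for the completed executions
     // (CMBatchLastRun is the hypothetical business service built over CMBHZZ)
     move "parm/hard/batchControlId" to "CMBatchLastRun/batchControlId";
     invokeBS 'CMBatchLastRun' using "CMBatchLastRun";
end-edit;
20: edit data
     // Retrieve the statistics for that execution
     move "parm/hard/batchControlId" to "BatchRunStats/batchControlId";
     move "CMBatchLastRun/batchNumber" to "BatchRunStats/batchNumber";
     move "CMBatchLastRun/batchRerunNumber" to "BatchRunStats/batchRerunNumber";
     invokeBS 'F1-BatchRunStatistics' using "BatchRunStats";
end-edit;
30: edit data
     // Compare the elapsed seconds to the Target Value parameter and set
     // the level of service with the matching custom message
     if ("BatchRunStats/elapsedSeconds > parm/soft[1]/value")
       move 'ERRO' to "parm/hard/levelOfService";
     else
       move 'NORM' to "parm/hard/levelOfService";
     end-if;
end-edit;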

  • I now create the Algorithm Type that will define the algorithm parameters and the interface for the Algorithm entries. Notice the only parameter is the Target Value:

Sample Algorithm Type

  • Now I create the Algorithm entries to set the target value. For example:

Example Algorithm

  • I can create many different algorithm entries to reuse across the batch controls. For example:

Example Algorithms

  • The final step is to add it to the Batch Controls ready to be used. As I wrote the script as a Plug-In Script there is no deployment needed as it auto deploys. For example, on the Batch Control, I can add the algorithm:

Example Algorithm configuration on batch control

  • Now the Batch Level Of Service will be invoked whenever I open the Batch Control. For example:

Example Normal outcome from algorithm

Example outcome of Error

This example is just one use case to illustrate the use of Batch Level Of Service. This article is the first in a new series of articles that will use this as a basis for a new set of custom portals to help plan and optimize your batch experience.

Design Guidelines

Thu, 2017-07-06 23:24

The Oracle Utilities Application Framework is both flexible and powerful in terms of the extensibility of the products that use it. As the famous saying goes though, "With Great Power comes Great Responsibility". Flexibility does not mean that you have carte blanche in terms of design when it comes to using the facilities of the product. Each object in the product has been specifically designed for a specific purpose, and extensions built on those objects must also respect those purposes.

Let me give some advice that may help guide your design work when building extensions:

  • Look at the base - The most important piece of advice I give partners and customers is to look at the base product facilities first. I am amazed how many times I see an enhancement that has been implemented by a partner only to find that the base product already did that. This is particularly important when upgrading to a newer version. We spend a lot of time adding new features and updating existing ones (and sometimes replacing older features with newer ones), so what you implemented as enhancements in previous versions may now be part of the base product. It is a good idea to revert back to the base to reduce your maintenance costs.
  • Respect the objects - We have three types of objects in the product: Configuration, Master and Transaction.
    • The configuration objects are designed to hold meta data and configuration that influence the behavior of the product. They are cached in a L2 Cache that is designed for performance and are generally static data that is used as reference and guidance for the other objects. They tend to be low volume and are the domain of your Administrators or Power Users (rather than end users). A simple rule here is that they tend to exist on the Admin menu of the product.
    • The master objects are medium volume, with low growth, and define the key identifier or root data used by the product. For example, Accounts, Meters, Assets, Crews, etc.
    • The transaction objects are high volume and high growth and are added by processes in the product or interfaces and directly reference master objects. For example, bills, payments, meter reads, work activities, tasks etc.. These objects tend to also support Information Lifecycle Management.
    • Now you need to respect each of them. For example, do not load transaction data into a configuration object. Each type of object has its own place, its own resource profile and its own behaviors.
  • Avoid overuse of the CLOB field - The CLOB field was introduced across most objects in the product and is a great way of extending the product. Just understand that while CLOB fields are powerful, they are not unlimited. They are limited in size for performance reasons and they are not a replacement for other facilities like characteristics or even building custom tables. Remember they are stored as XML and have limited maintenance and search capabilities compared with other methods.
  • Avoid long term issues - This one is hard to explain so let me try. When you design something, think about the other issues that may arise from your design. For example, lots of implementers forget about volume increases over time and run into issues such as long-term storage. Remember data in certain objects has different lifecycles and needs to be managed accordingly. Factor that into your design. Too many times I see extensions that forget this rule, and then the customer calls support for advice only to hear they need to redesign to cater for the issue.

I have been in the industry over 30 years and made a lot of those mistakes myself early in my career, so I know how easy they are to make. Just learn and make sure you do not repeat your mistakes over time. One more piece of advice: talk about your designs with a few people (of various ages as well) to see if they make sense. Do not take this as criticism, as a lot of great designers bounce ideas off others to see if they make sense. Doing that as part of any design process helps make the design more robust; otherwise the design can look rushed and, from the outside, like lazy design. I have seen great designs and bad designs, but it is possible to transform a requirement into a great design with some forethought.

Updates to Oracle Utilities Testing solution

Tue, 2017-06-27 18:49

We are pleased to announce the availability of new content for the Oracle Functional Testing Advanced Pack for Oracle Utilities. This pack allows customers of supported Oracle Utilities products to adopt automated testing quickly and easily by providing the testing components used by Product Development for use in the Oracle Application Testing Suite.

We have released, as patches available from My Oracle Support, the following content patches:

  • Oracle Utilities Customer Care And Billing v2.6.0.0.0 (available as patch 26075747).
  • Oracle Utilities Customer To Meter v2.6.0.0.0 (available as patch 26075823).
  • Oracle Utilities Meter Data Management/ Oracle Utilities Smart Grid Gateway v2.2.0.1 (available as patch 26075799).

This means the current release of the pack, v5.0.1.0, supports the following products and versions:

  • Oracle Utilities Customer Care And Billing 2.4.0.3, 2.5.0.1, 2.5.0.2 & 2.6.0.0
  • Oracle Utilities Mobile Workforce Management 2.2.0.3, 2.3.0.0 & 2.3.0.1
  • Oracle Real Time Scheduler 2.2.0.3, 2.3.0.0 & 2.3.0.1
  • Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2, 4.3.0.3 & 4.3.0.4
  • Oracle Utilities Meter Data Management 2.1.0.3, 2.2.0.0 & 2.2.0.1
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3, 2.2.0.0 & 2.2.0.1      
  • Oracle Utilities Work And Asset Management 2.1.1, & 2.2.0
  • Oracle Utilities Operational Device Management 2.1.1 & 2.2.0
  • Oracle Utilities Customer To Meter 2.6.0.0

The pack continues to support the ability to build flows for these products, including flows across multiple products and packaged integrations, and supports all channels of access including online, web services and batch. We also support mobile testing for the Oracle Utilities Mobile Workforce Management and Oracle Real Time Scheduler products running on Android and iOS devices.

The pack also includes sanity flows, used by the Oracle Utilities cloud deployments, that test that the installation of the products is complete and operational.

The VERSION column - An unsung treasure

Wed, 2017-06-21 20:58

If you use an Oracle Utilities Application Framework based product you will notice the column VERSION exists on all objects in the product. There is a very important reason that this column exists on the tables.

One of the common scenarios in an online system is the problem called the lost update problem. Let me explain. Say we have two users (there can be more), User A and User B.

  • User A reads Object A to edit it.
  • User B reads Object A as well to edit it at the same time.
  • User B saves the Object changes first.
  • User A saves the Object changes.

Now, without protection, the changes that User B made would be overwritten by User A's changes. We have lost User B's changes. This is the lost update problem in a nutshell.

Now using the VERSION column changes the above scenario:

  • When User A and User B read the object, the current value of VERSION is noted.
  • Whenever the object is updated, the value of VERSION is checked. If it is the same as the value of VERSION when the record was read, then VERSION is incremented as part of the update.
  • If the value of VERSION does not match, the product will issue a "Concurrency Error" and ask the user to retry the transaction (after reloading the changed object).

In our scenario, User A would receive the message, as the value of VERSION has been incremented, and therefore differs, since the record was read by that user.

VERSION is a standard column on all objects in the system and applies no matter what channel (online, web services or batch) updates the object.
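Conceptually, the protected update behaves like the following SQL (a simplified sketch with a hypothetical table; the framework generates the actual statements):

-- :versionWhenRead is the VERSION value noted when the object was read
UPDATE cm_sample_object
   SET some_column = :newValue,
       version = version + 1
 WHERE object_id = :objectId
   AND version = :versionWhenRead;
-- If zero rows are updated, another user changed the object first and
-- the product raises the concurrency error.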

Hidden gems in OUAF 4.3.0.4.0

Thu, 2017-06-01 21:45

Oracle Utilities Application Framework V4.3.0.4.0 has just been released with a few products and you will find a few hidden gems in the installation which provide a couple of useful features for those upgrading.

Here is a summary of some of those features:

  • You will notice that the product now requires the Oracle Java Required Files (JRF). These files are additional libraries Oracle uses in its products to standardize diagnostics and administration. The JRF is provided as a profile you apply to your Oracle WebLogic domain to provide additional facilities and features. To install JRF it is recommended to download the Fusion Middleware Infrastructure release of Oracle WebLogic, as it includes all the files necessary to apply the template. These libraries are used by various components in the product, and with each release we will implement more and more of the advanced functionality they provide.
  • One of the biggest gems is that JRF implements an additional console in the form of Fusion Middleware Control. Customers familiar with Oracle SOA Suite will recognize this console. It is a companion console and has some additional features around Web Services management and other administration features (including recording for replays) for common tasks. Here is an example of the console running with one of our products:

Oracle Fusion Middleware Control

  • The JRF includes a prebuilt diagnostics framework (FMWDFW) setup for use with WLDF. The WebLogic Diagnostics Framework (WLDF) is a framework where you configure rules for detecting issues in your domain. When an issue arises, WLDF automatically collects the relevant information into a Diagnostics Package which can be sent to Oracle Support for diagnosis. This collects any relevant information (including flight recordings, if you enable that) and creates a zip file full of diagnostic information to help solve the issue. The prebuilt setup can be used with OUAF products and can be altered to detect additional issues if necessary. At present it helps detect the following:
    • Deadlocks
    • Heapspace (memory issues)
    • Stuck Threads (it can be configured to detect hogging threads as well)
    • UncheckedException - These are general errors

The JRF is a collection of useful libraries and utilities that are now enabled with Oracle Utilities Application Framework to help you be more efficient and also detect issues for you to manage.

Scripting, Groovy and Java for extending the product

Sun, 2017-05-28 23:55

In a recent release of the Oracle Utilities Application Framework, we introduced Groovy as an alternative development technology for server-side extensions on our products. This means we now have three technologies that can be used to extend our products:

  • XPath/Xquery based scripting engine known as scripting
  • Java
  • Groovy

Now, the issue becomes: which technology do I use for my extensions? Here are a few guidelines to help you:

  • In terms of performance, there is not much difference between the technologies as, at the end of the day, they all result in byte code that is executed by the product. The product runtime does not discriminate the technology at that level. There is a slight advantage of Java/Groovy over Scripting for extremely large volumes.
  • If you are doing complex algorithmic or operating system level interaction it is recommended to use either Groovy or Java instead of scripting. While scripting can satisfy the most common of extensions, it may not be as efficient as Java/Groovy.
  • If you are intending to move to the Oracle Utilities SaaS offerings, you cannot use Java for any extensions. This is because Java tends to be low level and you cannot deploy your own JAR/WAR/EAR files in a SaaS environment. If you use Oracle PaaS then you have full access, so you can use Java in those cases.
  • Groovy was adopted as a language as it is the foundation of the Oracle Cloud offerings in general for extensions. The Groovy implementation across the Oracle Cloud is whitelisted so that it is restricted to accessing classes that do not have direct access to operating system resources. In this case we supply Groovy libraries to provide a contained integration with these resources.
  • One of the major considerations is total cost of ownership. Typically, if you use a mixture of languages in your implementation, the cost of maintaining those extensions tends to be higher than if you had chosen a single language. This is true for any product that has multiple ways of extension: while flexibility is a great asset, it can come with additional costs. I usually recommend that you pick one of the technologies and stick with it for your extensions unless, for some reason, you need to use a mixture.
  • In terms of best practices, a lot of implementation partners tend to use scripting for the vast majority of their extensions and only use Groovy/Java when scripting is not applicable for some reason.
  • One of the big advantages of scripting and Groovy is that the code assets are actually contained in the database, and migration is all handled by either Bundling (for small migrations) or the Configuration Migration Assistant (CMA). The use of Java for extensions typically requires a manual synchronization of data as well as code.

From a vendor perspective, it does not matter which technology you choose to use. Personally, I would use scripting and then only use Groovy as necessary; it is easier to manage and you do not have physical JAR/WAR/EAR files to manage, which makes code/data synchronization much less of an issue in a complex migration strategy. It also means you can move to the cloud a lot more easily in the future.

High and Maximum Availability Architectures

Thu, 2017-05-25 17:51

One of the most common questions I get from partners is what are the best practices that Oracle recommends for implementing high availability and also business continuity. Oracle has a set of flexible architectures and capabilities to support a wide range of high availability and business continuity solutions available in the marketplace.

The Oracle Utilities Application Framework supports Oracle WebLogic, Oracle Database and related products, with features inherited from the architecture or native facilities that allow these capabilities to be implemented. In summary, the Oracle Utilities Application Framework supports the following:

  • Oracle WebLogic clustering and high availability architectures are supported natively, including the load balancing facilities, whether they be hardware or software based. This support extends to the individual channels supported by the Framework and to individual J2EE resources such as JMS, Data Sources, MDBs, etc.
  • Oracle Coherence high availability clustering is available natively for the batch architecture. We now also support using Oracle WebLogic to cluster and manage our batch architecture (though it is exclusively used in our Oracle Cloud implementations at the moment).
  • The high availability and business continuity features of the Oracle Database are also supported. For example, it is possible to implement Oracle Notification Service support within the architecture to implement Fast Connection Failure etc.

Oracle publishes a set of guidelines for Oracle WebLogic, Oracle Coherence and Oracle Database that can be used with Oracle Utilities Application Framework to implement high availability and business continuity solutions. Refer to the following references for this information:

REST Support clarifications

Tue, 2017-05-23 19:10

In the Oracle Utilities Application Framework V4.3.0.3.0 release, the support for REST has been enabled for use as a complementary interface method adding to the SOAP support we already have in the product.

The REST support in the Oracle Utilities Application Framework was originally developed to support our new generation of the mobile connection platform used for the Oracle Utilities Mobile Workforce Management platform, and was initially limited to that product. Subsequently, we have decided to open up the support for general use.

As the REST support was originally designed for that purpose, the current release of REST is limited to specific aspects of the protocol, but it is at a sufficient level to be used for general purpose functions. It is designed to be an alternative to SOAP integration for customers who want to use a mixture of SOAP and REST in their integration architectures.

In the initial release, the REST support has been implemented as part of the online channel to take advantage of the Oracle WebLogic facilities and share the protocol and security setup of that channel. In a future release, we have plans to incorporate enhanced REST features in a separate channel dedicated to integration.

For more information about the REST platform support, including the limitations of this initial release, refer to the Web Services Best Practices whitepaper (Doc Id: 2214375.1) from My Oracle Support.

Multiple Policy Support (4.3.0.4.0)

Wed, 2017-05-17 23:28

One of the features of the latest Oracle Utilities Application Framework (V4.3.0.4.0) is the support for multiple WS-Policy compliant policies on Inbound Web Services. There are a number of ways to achieve this:

  • Annotations - It is now possible to specify multiple inline policies (standard ones and custom ones), with order of precedence supported via a Sequence. It is also now possible to delegate security within Annotations to Oracle Web Services Manager. This means it is now possible to mix inline with external policies. For example:

Multiple Policies as Annotations

  • Oracle WebLogic - It is possible to attach the policies supported by Oracle WebLogic to the individually deployed Web Services on the container level. This supports multiple policies (order of precedence is designated by the order they appear in the Web Service) on the individual Web Service.
  • Oracle Web Services Manager - It is possible to attach additional policies using the container (Web Services Manager includes the Oracle WebLogic supported policies, additional advanced policies and access controls) and like Oracle WebLogic, the order of precedence for multiple policies is the order they are attached to the individual Web Service. For example:

OWSM Policy Example

Now, why have multiple policies in the first place? Well, you do not have to use multiple policies, but there are a few use cases where it makes sense:

  • Some WS-Policies are for transport security and some are for message security only. Using a combination allows you to specify both using different policies. I should point out that most WS-Policies contain a transport and message combination, which reduces the need for multiple policies in the container.
  • You can create WS-Policy compliant custom policies, as long as they are supported by Oracle WebLogic or Oracle Web Services Manager, and those can have separate transport or message security definitions.
  • You should reuse web services as much as possible. You can choose not to expose the WS-Policy in your service but then use different policies for different interface systems. This might sound illogical but you may have different levels of security depending on the source of the call. In this case you would tell your sources the different policies they must adhere to.

Multiple policies are an optional feature but can be used to support a wide range of different interface styles.

SOA Suite Security with Inbound Web Services

Wed, 2017-05-17 19:06

With the introduction of Inbound Web Services, the integration between these services and Oracle SOA Suite now has a few more options in terms of security:

  • It is possible to specify, on the SOA composite, the WS-Policy used to secure the transport and message sent to the product web service. The product supports more than one WS-Policy per service, and any composite must conform to one of those policies.
  • As with older versions of the product and SOA Suite, you can specify the csf-key within the domain itself. This key holds the credentials of the interface in metadata, which avoids hardcoding the credentials in each call. It also means you can manage credentials from the console independently of the composite. In the latest releases it is possible to specify the csf-map as well (in past releases you had to use oracle.wsm.security as the map).

The process to perform the configuration is as follows:

  • Using Oracle Fusion Middleware Control, select the Oracle SOA Suite domain (usually soa_domain) and add the credentials (and map) to the domain. The credentials can be shared across composites, or you can choose to set up multiple credentials (one for each interface, for example). In the example below, the map is the default oracle.wsm.security map and the key is ouaf.key (just for the example); a WLST equivalent is sketched after the screenshot:

Example Key and Map
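
For those who prefer scripting over the console, the same credential can be created with the OPSS createCred command in WLST. This is a minimal sketch; the connection details and credential values are hypothetical placeholders matching the example map and key above.

```python
# WLST sketch (run via wlst.sh): create the csf credential used by the
# composite. Connection details and credential values are hypothetical;
# the map/key pair matches the example above.
connect('weblogic', 'welcome1', 't3://soa-admin-host:7001')

createCred(map='oracle.wsm.security', key='ouaf.key',
           user='ouaf_integration', password='changeme',
           desc='Credentials for OUAF Inbound Web Service calls')

# Confirm the credential is stored (the password is not displayed).
listCred(map='oracle.wsm.security', key='ouaf.key')
```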

  • Next, the credentials and the WS-Policies need to be specified on the composite within Oracle SOA Suite. This can be done within SOA Composer or Oracle JDeveloper. Below is an Oracle JDeveloper example, where you link the WS-Policies using Configure SOA WS Policies at the project level for each external reference. For example:

Configure SOA WS Policies

  • You then select the policy you want to use for the call. Remember, you only use one of the policies configured on the Inbound Web Service. If you have a custom policy, it must be deployed to Oracle SOA Suite and to your Oracle JDeveloper instance to be valid for your composite. A list of policies is displayed and you select one. For example:

Example Policy Selection

  • Edit the policy to specify additional information. For example:

Editing Policy

  • At this point, specify which csf-map and csf-key to use for the call in the Override Value. In the example below the csf-key is specified (a scripted alternative is sketched after the screenshot). For example:

Example Key specification
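
As a scripted alternative to setting the Override Value in Oracle JDeveloper, the csf-key can also be overridden with the OWSM setWSMPolicyOverride command within a WLST session. The sketch below assumes a 12c OWSM session; the composite path, reference name and policy URI are hypothetical placeholders, and the exact subject syntax for your composite should be confirmed against the Oracle Web Services Manager documentation.

```python
# WLST sketch (run via wlst.sh): override the csf-key on a client policy
# attached to a SOA composite reference. All names are hypothetical and
# the subject syntax should be verified for your release.
connect('weblogic', 'welcome1', 't3://soa-admin-host:7001')

beginWSMSession()
# Select the web service client subject for the composite reference.
selectWSMPolicySubject('/weblogic/soa_domain/soa-infra',
                       'default/MyOUAFComposite[1.0]',
                       'WS-CLIENT({OUAFService})')
# Point the attached username token client policy at the csf-key above.
setWSMPolicyOverride('oracle/wss_username_token_over_ssl_client_policy',
                     'csf-key', 'ouaf.key')
commitWSMSession()
```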

The security has now been set up for the composite. You have indicated the credentials (which can be managed from the console), and the policy attached to the composite ensures that your security specification is implemented.

Depending on the WS-Policy you choose, there may be additional transport and message protection settings you will need to specify (for example, if you use policy-specific encryption outside the transport layer, you may need to specify the encryption parameters for the message). For full details of the Oracle SOA Suite facilities, refer to the Oracle SOA Suite documentation.

Testing, the Oracle difference

Mon, 2017-05-15 00:43

Recently I attended the customer forums in London to discuss the future of our product lines and to outline the work we have done over the last year. One of the questions that came up was a discussion of the major advantages of using the Oracle Functional Testing Advanced Pack for Oracle Utilities, which is part of the Oracle Testing solution.

In the industry, functional testing is a major part of an initial implementation and of each subsequent upgrade of a product. To meet deadlines, implementations commonly decide to reduce the scope of testing, which increases the overall risk.

One way of addressing this is to adopt automated testing. While this sounds logical it can have hidden costs:

  • Traditional tools use user-interface-based scripting, which basically records the screen and the interactions with it. Earlier in my career I used to call this screen scraping. I am sure it is more sophisticated than that, but effectively it uses the screen recording, including the data entered, as a rerunnable test.
  • Typically, the data entered during the recording is embedded in the resulting script. This means that if you wanted to reuse the script you would probably need to record it again, or have a programming resource change the script. Effectively, you need a specialist script programmer to maintain the testing assets for you.
  • If the user experience changes, even due to a patch, the script may or may not work as originally intended; it may return inconsistent results, or you will need to re-record the asset. This is more likely when you upgrade, as new, more modern user experiences are introduced over time.
  • Testing assets are really programmable objects that are typically maintained by a programmer rather than a testing resource. Whilst these scripting languages are made easier and easier to use, they are still programming.

Now, whilst it is possible to use the Oracle Application Testing Suite in the traditional sense as outlined above, when it is coupled with the Oracle Functional Testing Advanced Pack for Oracle Utilities the approach is quite different and addresses the issues seen in traditional automated testing:

  • Oracle Functional Testing Advanced Pack for Oracle Utilities includes a full set of reusable components that are the SAME components used by the QA teams at Oracle on a day-to-day basis. The fact that they are used daily by product QA reduces the risk that they will fail to execute against the product versions.
  • The solution is based upon the Oracle Application Testing Suite, which is used by hundreds of Oracle customers across many Oracle products such as E-Business Suite, PeopleSoft, Fusion and JD Edwards. Oracle Utilities is just one of the latest product families to use the Oracle Application Testing Suite. In fact, some of those products have licensed packs of their own that can be used in conjunction with the Oracle Utilities pack.
  • The components cover the main functionality of the product they are supplied for. The only components we do not provide are those covering administration objects, which are typically not cost effective to automate in an implementation due to their very low usage after implementation.
  • The supplied components are customization aware: algorithms, change handlers, etc. are handled by the component automatically.
  • The Oracle Functional Testing Advanced Pack for Oracle Utilities supplies a number of utilities to allow partners and implementations to add custom components to the solution for any customization not handled by the base components (this should be relatively rare).
  • The process of using the pack with the Oracle Application Testing Suite is more assembly (orchestration) than programming. Oracle Flow Builder, which is included in the solution, is a simple browser-based tool that allows business processes to be modeled by dragging and dropping components in the order that represents the business process. This allows flows to be built by a less specialized person rather than a programmer.
  • A testing flow becomes a test script through a generator. The resulting script does not need to be altered or maintained by a developer after it is generated.
  • Data for the flow is kept independent of the flow itself, which encourages reuse. For example, it is possible to attach different data representing different scenarios to a single flow, and flows can also contain multiple scenarios if desired. This extends even after the flow is expressed as a test script: the physical data is separated out so it can be replaced at runtime rather than design time.
  • The whole solution is designed for reuse, so the number of assets you need is far smaller than with traditional methods. This reduces costs and risk.
  • It is possible to reuse your flows across product versions. For example, you can test multiple releases of a product, reducing your upgrade risk, by aligning the same flows to different versions of the supplied components.

The testing solution from Oracle Utilities is far more cost effective than traditional methods, with the supplied content allowing implementations to quickly adopt automated testing at lower implementation risk. Customers who have used the solution have found that they tested more, reduced their testing costs and increased the accuracy of their solutions.

Oracle Utilities Work And Asset Management V2.2.0.0.0 Released

Thu, 2017-05-11 16:14

Oracle Utilities Work And Asset Management (WAM) V2.2.0.0.0 has been released and is available from Oracle Delivery Cloud. This version is also based upon Oracle Utilities Application Framework V4.3.0.4.0 (also known as 4.3 SP4).

Included in this release are usability enhancements, an update to the Esri GIS Integration, Preventive Maintenance Event processing, and Construction Work Management.  

With these new additions, we are now able to support the full asset lifecycle, from design and construction through to retirement, opening up the gas and electric distribution market. Construction Work Management adds the final piece to the asset lifecycle process.

  • Asset Performance Management - The Asset Performance Management features have been enhanced to offer new ways to calculate Asset Health Index scores and to set up Preventive Maintenance triggers based on the Asset Health Index. We also offer integration points that allow third party predictive maintenance products to influence the Asset Health Index.
  • Compatible Units - Compatible Units are job standards that can be used to provide consistency and assistance when creating work designs. Compatible Units can be created for either internal resources or for contractors.
  • Construction Work Design - Work Designs are critical to utility distribution companies. The work design process leverages compatible units to quickly scope and estimate the costs of work. You are able to create multiple versions of a design to compare construction options such as overhead versus underground work. You can also create design versions to compare contractor work. When you pick a design to execute, you can easily transition the work design into a work package without having to create new work orders from scratch.
  • Construction Work Orders - Construction work orders differ from regular work orders because they create new assets rather than maintain existing assets. A construction work order also manages Construction Work in Progress (CWIP) accounting to ensure the work in progress is accounted for correctly. The closeout process allows you to create new WAM assets that start their lifecycle in WAM, and also creates the fixed asset property unit data to feed the corporate accounting system.
  • "As Built" Reconciliation - One of the big challenges for organizations is the reconciliation of the work design against the actual construction. The actual construction work often diverges from the estimate due to the wide variety of variables that occur on a project. WAM v2.2 offers a full reconciliation process that allows you to revise the values of assets, move costs between construction and maintenance accounts, review and adjust property unit valuation, and perform mass asset valuations.
  • PM Event Processing - You can now package a group of work templates into a PM Event and trigger that event as a group rather than one work template at a time. This can be used for outage work or any repetitive work that requires multiple work orders to be created.
  • Esri GIS Integration - The user experience of the Esri GIS Integration has been completely revised to provide a more intuitive experience. Esri map viewer components are directly integrated into the Work and Asset Management product. Customers can publish any map component as an Esri Web Map and enroll that Web Map into WAM. This includes feature layer maps as well as any thematic maps or metrics that customers choose to publish.

Esri Integration