Amis Blog

Friends of Oracle and Java

Development and Runtime Experiences with a Canonical Data Model Part II: XML Namespace Standards

Wed, 2017-03-29 12:21

This blog is about XML namespace standards, primarily for use in a Canonical Data Model (CDM), but it is also interesting for anyone who has to define XML data by creating XML Schema files (XSD). This blogpost is the second part of a trilogy about my experiences in using and developing a CDM. The first blogpost is about naming & structure standards and the third blogpost is about dependency management & interface tailoring.

XML Namespace Standards

A very important part of an XML model is its namespace. With a namespace you bind an XML model to a specific domain; the namespace can represent a company, a business domain, a system, a service or even a single component or layer within a service. For a CDM this means that choices have to be made: do you use one namespace or several, how do you deal with newer versions of the CDM, etc.

Two approaches: one generic namespace vs component specific namespaces
Basically I’ve come across two approaches for defining a namespace in a CDM. Both can be a good approach, but you have to choose one based on your specific project characteristics.

  1. The first approach is to use one generic, fixed namespace for the entire CDM. This may also be the ’empty’ namespace, which looks like there is no namespace at all. This approach of one generic fixed namespace is useful when you have a central CDM that is available at run time and all services refer to this central CDM. When you go for this approach, use one namespace only, so do not use different namespaces within the CDM.
    For maintenance and to keep the CDM manageable, it can be useful to split up the CDM into multiple definition files (XSDs), each one representing a different group (domain) of entities. My advice is to still use the same namespace in all of these definition files. The reason is that the CDM will change over time and you may want to move entities from one group to another or split up a group. If each group had its own namespace, you would get a backward-compatibility problem, because an element that moves from one group to another would change its namespace.
    When at a certain moment you have a large amount of changes which also impact the running software, you can create a new version of the CDM. Examples of such situations are connecting a new external system or replacing an important system by another one. When you have multiple versions of the CDM, each version must have its own namespace, with the version number as part of the name of the namespace. New functionality can then be developed with the new version of the CDM. When it uses existing functionality (e.g. calling an existing service) it has to transform the data from the new version of the CDM to the old version (and vice versa).

  2. The second approach is that each software component (e.g. a SOAP web service) has its own specific namespace. This specific namespace is used as the namespace for a copy of the CDM, and the software component uses that copy: you can consider it the component’s own copy of the CDM. A central runtime CDM is not needed any more, which means that the software components have no runtime dependencies on the CDM! The result is that the software components can be deployed and run independently of the current version of the CDM. This is the most important advantage!
    The way to achieve this is to have a central CDM without a namespace (or with a dummy namespace like ‘xxx’), which is only available as an offline library at design time. So there is no runtime CDM to reference at all!
    Developers create a hard-coded copy of the CDM for the software component they are building and apply a namespace to that copy. The name of this namespace is specific to the software component and typically includes the name (and version) of the software component itself. Because the software component is the ‘owner’ of this copy, the parts (entities) of the CDM which are not used by the software component can be removed from the copy. A sketch of both namespace approaches is shown right below this list.
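A minimal sketch of the difference between the two approaches (the namespace URIs and the Customer entity are made up for illustration, not taken from an actual CDM):

<!-- Approach 1: one generic, fixed namespace for the whole CDM;
     a version number only appears in the namespace when a new CDM version is created -->
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://example.com/cdm/v1">
   <!-- all CDM definition files (XSDs) share this same target namespace -->
   <complexType name="TCustomer">
      <sequence>
         <element name="Id" type="string"/>
         <element name="Name" type="string"/>
      </sequence>
   </complexType>
</schema>

<!-- Approach 2: a copy of (the needed part of) the CDM, owned by a single software component -->
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://example.com/CustomerService/v1">
   <!-- the component-specific namespace includes the component name (and version);
        CDM entities that this component does not use are removed from the copy -->
   <complexType name="TCustomer">
      <sequence>
         <element name="Id" type="string"/>
         <element name="Name" type="string"/>
      </sequence>
   </complexType>
</schema>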

In part III, my last blogpost, about runtime dependencies and interface tailoring, I will advise when to use the first and when to use the second approach. First some words about XML patterns and their usage in these two namespace approaches.

XML Patterns
XML patterns are design patterns applicable to the design of XML. Because the design of XML is defined by XML Schema (XSD) files, these XML patterns actually are XML Schema (XSD) patterns. These design patterns describe a specific way of modeling XML. Different ways of modeling can result in the same XML, but may differ in terms of maintenance, flexibility, ease of extension, etc.
As far as I know, there are four XML patterns: “Russian Doll”, “Salami Slice”, “Venetian Blind” and “Garden of Eden”. I’m not going to describe these patterns, because that has already been done by others. For a good description of the first three, see http://www.xfront.com/GlobalVersusLocal.html, and http://www.oracle.com/technetwork/java/design-patterns-142138.html gives a brief summary of all four. I advise you to read and understand them when you want to set up an XML type CDM.

I’ve described two approaches of using a CDM above: a central, run-time-referenced CDM and a design-time-only CDM. So the question is: which XML design pattern matches best with each approach?

When you’re going for the first approach, a central run-time-referenced CDM, no translations are necessary when passing (a part of) an XML payload from one service to another. This is easier compared with the second approach, where each service has a different namespace. Because no translations are necessary and the services need to reference parts of entities as well as entire entity elements, it’s advisable to use the “Salami Slice” or the “Garden of Eden” pattern. They both have all elements defined globally, so it’s easy to reuse them. With the “Garden of Eden” pattern, types are defined globally as well and are thus reusable, providing more flexibility and freedom to designers and developers. The downside is that you end up with a very scattered and verbose CDM.
To solve this disadvantage, you can go for the “Venetian Blind” pattern, set the schema attribute “elementFormDefault” to “unqualified” and not include any element definitions in the root of the schemas (XSDs) which make up the CDM. This means there are only XML type definitions in the root of the schema(s), so the CDM is defined by types. The software components, e.g. a web service, do have their own namespace. In this way the software components define a namespace (through their XSD or WSDL) for the root element of the payload (in the SOAP body), while all the child elements below this root remain ‘namespace-less’.
This makes the life of a developer easier, as there is no namespace and thus no prefixes are needed in the payload messages. Not having to deal with namespaces in all the transformation, validation and processing software that works with those messages makes programming code (e.g. XSLT) less complicated and thus less error prone.
This leads to my advice that:

The “Venetian Blind” pattern, with the schema attribute “elementFormDefault” set to “unqualified” and no elements in the root of the schemas, is the best XML pattern for the approach of using a central run-time-referenced CDM.
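A small sketch of what this can look like (the namespace URIs and the TAddress type are made up for illustration). The CDM schema contains only type definitions and has elementFormDefault set to "unqualified"; the service schema defines the root element of the payload in its own namespace:

<!-- CDM schema: only type definitions in the root, no element declarations -->
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://example.com/cdm"
        elementFormDefault="unqualified">
   <complexType name="TAddress">
      <sequence>
         <element name="Street" type="string"/>
         <element name="City" type="string"/>
      </sequence>
   </complexType>
</schema>

<!-- Service schema: defines the (qualified) root element of the payload -->
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        xmlns:cdm="http://example.com/cdm"
        targetNamespace="http://example.com/AddressService">
   <import namespace="http://example.com/cdm" schemaLocation="CDM.xsd"/>
   <element name="getAddressResponse" type="cdm:TAddress"/>
</schema>

In a resulting payload only the root element carries the service namespace, while the child elements are namespace-less, so no prefixes are needed below the root:

<svc:getAddressResponse xmlns:svc="http://example.com/AddressService">
   <Street>Main Street</Street>
   <City>Amsterdam</City>
</svc:getAddressResponse>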

When you’re going for the second option, no runtime CDM but only a design time CDM, you shouldn’t use a model which results in payloads (or parts of payloads) of different services having exactly the same namespace. So you cannot use the “Venetian Blind” pattern with “elementFormDefault” set to “unqualified” which I have just explained. You can still use the “Salami Slice” or “Garden of Eden” pattern, but the disadvantages of a large, scattered and verbose CDM remain.
The reason that you cannot have the same namespace for the payload of services with this approach is that the services have their own copy (‘version’) of the CDM. When (parts of) payloads of different services contain the same element with the same namespace (or the empty namespace), the XML structures of both are considered to be exactly equal, while that need not be the case! When they are not the same, you have a problem when services need to call each other and payloads are passed around. They can already be different at design time, and then it’s quite obvious.
Much more dangerous is that they can even become different later in time without being noticed! To explain this, assume that at a certain time two software components were developed using the same CDM version, so the XML structure was the same. But what if one of them changes later and these changes are considered backwards compatible (resulting in a new minor version)? The design time CDM has changed, so the newer version of this service uses this newer CDM version. The other service did not change and now receives a payload from the changed service with elements of a newer version of the CDM. Hopefully this unchanged service can handle this new CDM format correctly, but it might not! Another problem is that it might break its own contract (WSDL) when it copies the new CDM entities (or part of them) to its response to the caller, thus breaking its own contract while the service itself has not changed! Keep in mind that its WSDL still uses the old CDM definitions of the entities in the payload.
Graphically explained:
Breach of Service Contract
Service B calls Service A and retrieves (a part of) the payload entity X from Service A. Service B uses this entity and returns it to its consumers as (part of) its payload. This is all nice and correct according to its service contract (WSDL).
Later in time, Service A is updated to version 1.1 and the newer version of the CDM is used in this updated version. In the newer CDM version, entity X has also been updated, to X’. Now this X’ entity is passed from Service A to Service B. Service B returns this new entity X’ to its consumers, while they expect the original X entity. So Service B returns an invalid response and breaks its own contract!
You can imagine what happens when there is a chain of services and there probably are more consumers of Service A. Such an update can spread out through the entire integration layer (SOA environment) like ripples on water!
You don’t want to update all the services in the chains affected by such a little update.
I’m aware a service should not do this. Theoretically a service is fully responsible for always complying with its own contract (WSDL), but this is very difficult to implement when developing lots of services. When there is a mapping in a service, this is quite clear, but all mappings would have to be checked. Moreover, an XML entity is often used as a variable (e.g. in BPEL) in some processing code and can be passed on to a caller unnoticed.
The only solution is to avoid passing complete entities (container elements): when passing data through, all data fields (data elements) have to be mapped individually (in a so-called transformation) for all incoming and outgoing data of the service.
The problem is that you cannot enforce software to do this, so this must become a rule, a standard, for software developers.
Everyone who has been in software development for some years knows this is not going to work. There will always be a software developer (now, or maybe in the future during maintenance) who does not know or understand this standard.
The best way to prevent this problem is to give each service its own namespace, so entities (container elements) cannot be copied and passed through in their entirety, and developers have to map the data elements individually, as sketched below.
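A sketch of what such an individual field mapping could look like in XSLT (the service namespaces and the elements of entity X are made up for illustration). Because entity X in Service A’s namespace is a different element than entity X in Service B’s namespace, a developer cannot simply copy the container element; each data element is mapped explicitly:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:a="http://example.com/ServiceA/v1.0"
    xmlns:b="http://example.com/ServiceB/v1.0">
   <xsl:template match="a:X">
      <b:X>
         <!-- every data element is mapped individually; an element added to
              Service A's CDM copy is not passed through to Service B unnoticed -->
         <b:Id><xsl:value-of select="a:Id"/></b:Id>
         <b:Name><xsl:value-of select="a:Name"/></b:Name>
      </b:X>
   </xsl:template>
</xsl:stylesheet>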

This is why I advise, for the approach of a design-time-only CDM, to also use the “Venetian Blind” pattern, but now with the schema attribute “elementFormDefault” set to “qualified”. This results in a CDM in which:

  • it is easy to copy the elements that are needed, including child elements and necessary types, from the design time CDM to the runtime constituents of the software component being developed. Do not forget to apply the component specific target namespace to this copy.
  • it is possible to reuse type definitions within the CDM itself, preventing multiple definitions of the same entity.

In my next blogpost, part III about runtime dependencies and interface tailoring, I explain why in most cases you should go for a design time CDM and not a central runtime CDM.


Development and Runtime Experiences with a Canonical Data Model Part III: Dependency Management & Interface Tailoring

Wed, 2017-03-29 12:20
Introduction

This blogpost is part III, the last part of a trilogy about how to create and use a Canonical Data Model (CDM). The first blogpost contains part I, in which I share my experiences in developing a CDM and provide you with lots of standards and guidelines for creating a CDM. The second part is all about XML Namespace Standards. This part is about the usage of a CDM in the integration layer: how to use it in a runtime environment and what the consequences are for the development of the services which make up the integration layer.

Dependency Management & Interface Tailoring

When you’ve decided to use a CDM, it’s quite tempting to put the XSD files that make up the CDM in a central place in the runtime environment that all the services can reference. In this way there is only one model, one ‘truth’, for all the services. However, there are a few problems you run into quite fast when using such a central runtime CDM.

Dependency Management

Backwards compatibility
The first challenge is to maintain backwards compatibility. This means that when there is a change in the CDM, this change is implemented in such a way that the CDM supports both the ‘old’ data format, according to the CDM before the change, and the new data format with the change. When you’re in the development stage of the project, the CDM will change quite frequently, in large projects even on a daily basis. When these changes are backwards compatible, the services which have already been developed and are considered finished do not need to change (unless of course the change also involves a functional change of a finished service). Otherwise, when these changes are not backwards compatible, all software components, so all services, which have been finished have to be investigated to determine whether they are hit by the change. Since all services use the same set of central XSD definitions, many will be hit by a change in these definitions.
If you’re lucky you have nice unit tests or other code analysis tools you can use to detect this. You may ask yourself whether these tests and/or tools will cover 100% of the hits. When services are hit, they have to be modified, tested and released again. To reduce maintenance and rework of all finished services, there will be pressure to maintain backwards compatibility as much as possible.
Maintaining backwards compatibility in practice means:

  • that all elements that are added to the CDM have to be optional;
  • that you can increase the maximum occurrence of an element, but never reduce it;
  • that you can make mandatory elements optional, but not vice versa;
  • and that structure changes are much more difficult.

For example, when a data element has to be split up into multiple elements. Let’s take a product id element of type string and split it up into a container element that is able to contain multiple product identifications for the same product. The identification container element will have child elements for product id, product id type and an optional owner id for the ‘owner’ of the identification (e.g. a customer may have his own product identification). One way of applying this change and still maintaining backwards compatibility is by using an XML choice construction:

<complexType name="tProduct">
  <sequence>
    <choice minOccurs="0" maxOccurs="1">
      <element name="Id" type="string" />
      <element name="Identifications">
        <complexType>
          <sequence>
            <element name="Identification" minOccurs="0" maxOccurs="unbounded">
              <complexType>
                <sequence>
                  <element name="Id" type="string" />
                  <element name="IdType" type="string" />
                  <element name="IdOwner" type="string" minOccurs="0"/>
                </sequence>
              </complexType>
            </element>
          </sequence>
        </complexType>
      </element>
    </choice>
    <element name="Name" type="string" />
    ...
  </sequence>
</complexType>

There are other ways to implement this change and remain backwards compatible, but they will all result in a redundant and verbose data model. As you can imagine, this soon results in a very ugly CDM, which is hard to read and understand.

Hidden functional bugs
There is another danger. When keeping backwards compatibility in this way, the services which were finished don’t break technically and still run. But they might break functionally! This kind of break is even more dangerous because it may not be visible immediately and it can take quite a long time before this hidden functional bug is discovered. Perhaps the service already runs in a production environment and executes with unnoticed functional bugs!
Take the example above and consider that a service has already been developed which does something with orders. Besides order handling, it also sends the product ids in an order to a CRM system, but only for the product ids in the range 1000-2000. The check in the service on the product id being in the range 1000-2000 will be based upon the original product id field. But what happens if the CDM is changed as described in the previous paragraph, so the original product id field is part of a choice and thus becomes optional? This unchanged service now might handle orders that contain products with the newer data definition for a product, in which the new “Identification” element is used instead of the old “Id” element. If you’re lucky, the check on the range fails with a runtime exception! Lucky, because you’re immediately notified of this functional flaw. It will probably be detected quite early in a test environment when it concerns common functionality. But what if it is rare functionality? Then the danger is that it might not be detected and you end up with a runtime exception in a production environment. That is not what you want, but at least it is detected!
The real problem is that there is a realistic chance that the check doesn’t throw an exception and doesn’t log an error or warning. It might conclude that the product id is not in the range 1000-2000, because the product id field is not there, while the product identification is in that range! It just uses the new way of modeling the product identification with the new “Identification” element. This results in a service that has a functional bug while it seems to run correctly!

Backward compatibility in time
Sometimes you have no choice and you have to make changes which are not backwards compatible. This causes another problem: you’re not backwards compatible in time. You might be developing newer versions of services. But what if in production there is a problem with one of these new services using the new CDM and you want to go back to a previous version of that service? You have to go back to the old version of the CDM as well, because the old version of the service is not compatible with the new CDM. But that also means that none of the newer services can run, because they depend on the new CDM. So you have to revert to the old versions of all of the new services using the new CDM!

The root cause of these problems is that all software components (services) depend on the central runtime CDM!
So this central runtime CDM introduces dependencies between all (versions of) components. This heavily conflicts with one of the base principles of SOA: loosely coupled, independent services.

 

Interface Tailoring

There is another problem with a central CDM, which has more to do with programming concepts, but also impacts the usage of services, resulting in a slower development cycle. The interface of a service, which is described in its contract (WSDL), should reflect the functionality of the service. However, if you’re using a central CDM, that CDM is used by all the services. This means that the entities in the CDM contain all the data elements which are needed in the contracts of all the services. So basically a CDM entity consists of a ‘merge’ of all these data elements. The result is that the entities will be quite large, detailed and extensive. The services use these CDM entities in their contracts, while functionally only a (small) part of the elements is used in a single service.

This makes the interface of a service very unclear, ambiguous and meaningless.

Another side effect is that it makes no sense to validate (request) messages, because all elements will be optional.

Take for example a simple service that returns the street and city based upon the postal code and house number (this is a very common functionality in The Netherlands). The interface would be nice, clear and almost self-describing when the service contract dictates that the input (request) only contains a postal code and house number and the output (response) only contains the street name and the city. But with a central CDM, the input will be an entity of type address, as will the output. With some bad luck, the address entity also contains all kinds of elements for foreign addresses, post office boxes, etc. I’ve seen exactly this example in a real project with an address entity containing more than 30 child elements, while the service only needed four of them: two elements, postal code and house number, as input, and two elements, street and city, as output. You might consider solving this by defining these elements separately as input and output and not using the entity element, but that’s not the idea of a central CDM! Note that this is just a small example. I’ve seen this problem in a project with lawsuit entities. You can imagine how large such an entity can become, with hundreds of elements. Individual services only used some of the elements of the lawsuit entity, but these elements were scattered across the entire entity. So it does not help either to split up the type definition of a lawsuit entity into several sub types. In that project almost all the services needed one or more lawsuit entities, resulting in interface contracts (WSDLs) which were all very generic and didn’t make sense. You needed the (up-to-date) documentation of the service in order to know which elements you had to use in the input and which elements were returned as output, because the definitions of the request and response messages were not useful, as they contained complete entities.

Solution

The solution to both of the problems described above is not to use a central runtime CDM, but only a design time CDM.
This design time CDM has no namespace (or a dummy one). When a service is developed, a hard copy of (a part of) the CDM at that moment is made to a (source) location specific to that service. Then a service-specific namespace is applied to this local copy of the (service-specific) CDM.
And now you can shape this local copy of the CDM to your needs! Tailor it by removing elements that the service contract (WSDL) doesn’t need. You can also apply more restrictions to the remaining elements by making optional elements mandatory, reducing the maximum occurrences of an element and even creating data value restrictions for an element (e.g. setting a maximum string length). By doing this, you can tailor the interface in such a way that it reflects the functionality of the service!
You can even have two different versions of an entity in this copy of the CDM, for example one to use in the input message and one in the output message.
Let’s demonstrate this with the example above: an address with only postal code and house number for the input message and an address with street and city for the output message. The design time CDM contains the full address entity, while the local, tailored copy of the service CDM contains two tailored address entities. These can be used by the service XSD which contains the message definitions of the request and response payloads:

The source code of the CDM XSD, the tailored service CDM XSD and the service XSD:

<schema targetNamespace="DUMMY_NAMESPACE"
            xmlns="http://www.w3.org/2001/XMLSchema" 
            version="1.0">

   <complexType name="TAddress">
      <sequence>
         <element name="Department" type="string" minOccurs="0"/>
         <element name="Street" type="string" minOccurs="0"/>
         <element name="Number" type="string" minOccurs="0"/>
         <element name="PostalCode" type="string" minOccurs="0"/>
         <element name="City" type="string" minOccurs="0"/>
         <element name="County" type="string" minOccurs="0"/>
         <element name="State" type="string" minOccurs="0"/>
         <element name="Country" type="string" minOccurs="0"/>
      </sequence>
   </complexType>
   
</schema>
<schema targetNamespace="http://nl.amis.AddressServiceCDM"
            xmlns="http://www.w3.org/2001/XMLSchema" 
            version="1.0">

   <complexType name="TAddressInput">
      <sequence>
         <element name="Number" type="string" minOccurs="0"/>
         <element name="PostalCode" type="string" minOccurs="1"/>
      </sequence>
   </complexType>

   <complexType name="TAddressOutput">
      <sequence>
         <element name="Street" type="string" minOccurs="1"/>
         <element name="City" type="string" minOccurs="1"/>
      </sequence>
   </complexType>
   
</schema>
<schema targetNamespace="http://nl.amis.AddressService"
        xmlns="http://www.w3.org/2001/XMLSchema" 
        xmlns:cdm="http://nl.amis.AddressServiceCDM" 
        version="1.0">

   <import namespace="http://nl.amis.AddressServiceCDM" schemaLocation="AddressServiceCDM.xsd"/>

   <element name="getAddressRequest">
	   <complexType>
		  <sequence>
			 <element name="Address" type="cdm:TAddressInput" minOccurs="1"/>
		  </sequence>
	   </complexType>
   </element>

   <element name="getAddressResponse">
	   <complexType>
		  <sequence>
			 <element name="Address" type="cdm:TAddressOutput" minOccurs="1"/>
		  </sequence>
	   </complexType>
   </element>
   
</schema>

When you’re finished tailoring, you can still deploy these service interfaces (WSDL) containing the shaped data definitions (XSDs) to a central runtime location. However, each service must have its own place within this runtime location to store its tailored data definitions (XSDs). When you do this, you can also store the service interface (abstract WSDL) in there as well. In this way there is only one copy of a service interface, which is used by the implementing service as well as by consuming services.
I’ve worked in a project with SOAP services where the conventions dictated that the filename of a WSDL is the same as the name of the service. The message payloads were not defined in this WSDL, but were included from an external XSD file. This XSD also had the same filename as the service name. This service XSD defined the payload of the messages, but it did not contain CDM entities or CDM type definitions. Those were included from another XSD with the fixed name CDM.xsd. This local, service-specific CDM.xsd contained the tailored (stripped and restricted) copy of the central design time CDM, but had the same target namespace as the service.wsdl and the service.xsd:
Service Files
This approach also gave the opportunity to add operation-specific elements to the message definitions in the service.xsd. These operation-specific elements were not part of the central CDM and did not belong there due to their nature (operation-specific). They were rarely needed, but when they were, they did not pollute the CDM, because you don’t need to somehow add them to the CDM. Think of switches and options on operations which act on functionality, e.g. a boolean type element “includeProductDescription” in the request message for operation “getOrder”. A sketch of such a service.xsd is shown below.
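A sketch of what such a service.xsd could look like, following the conventions described above (the OrderService names and the TOrder type are made up for illustration; TOrder is assumed to be defined in the included, tailored CDM.xsd, which has the same target namespace as this schema):

<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://nl.amis.OrderService"
        xmlns:tns="http://nl.amis.OrderService"
        elementFormDefault="qualified">

   <!-- tailored, service specific copy of the design time CDM (same target namespace) -->
   <include schemaLocation="CDM.xsd"/>

   <element name="getOrderRequest">
      <complexType>
         <sequence>
            <element name="OrderId" type="string"/>
            <!-- operation specific switch; not part of the (design time) CDM -->
            <element name="includeProductDescription" type="boolean" minOccurs="0"/>
         </sequence>
      </complexType>
   </element>

   <element name="getOrderResponse">
      <complexType>
         <sequence>
            <element name="Order" type="tns:TOrder"/>
         </sequence>
      </complexType>
   </element>

</schema>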

Note: the services in the project all did use a little generic XML of which the definitions (XSDs) were stored in a central runtime location. However, these data definitions concern technical data fields and are therefore not part of the CDM. Examples are header fields that are used for security, a generic response entity containing messages (error, warning, info) and optional paging information elements in case a response contains a collection. You need a central type definition when you are using generic functionality (e.g. from a software library) in all services and consuming software.

Conclusion
With this approach of a design time CDM and tailored interfaces:

  • There are no runtime dependencies on the CDM and thus no dependencies between (versions of) services.
  • Contract breaches and hidden functional bugs are prevented (because of the different namespaces, services have to copy each data element individually when passing an entity, or part of an entity, to their output).
  • Service interfaces reflect the service functionality.
  • Operation-specific parameters can be added without polluting the CDM.
  • And – most important – the CDM can change without limitations and as often as you want!

The result is that, in time, the CDM will grow into a nice, clean and mature model that reflects the business data model of the organization – while not impeding and even promoting the agility of service development. And that is exactly what you want from a CDM!

 

When to use a central run time CDM

A final remark about a central runtime CDM: there are situations where this can be a good solution, namely for smaller integration projects and in the case where all the systems and data sources which are to be connected to the integration layer are already in place, so they are not being developed. They probably already run in production for a while.
This means that the data and the data format which have to be passed through the integration layer and are used in the services are already fixed. You could state that the CDM is already there, although it still has to be described and documented in a data model. It’s likely that it’s also a project with a ‘one go’ to production, instead of frequent delivery cycles.
When after a while one system is replaced by another system, or the integration layer is extended by connecting one or more systems, and this means that the CDM has to be changed, you can add versioning to the CDM. Create a copy of the existing CDM, give it a new version (e.g. with a version number in the namespace) and make the changes in the CDM which are needed. This is also a good opportunity to clean up the CDM by removing unwanted legacy kept for backwards compatibility. Use this newer version of the CDM for all new development and maintenance of services.
Again, only use this central runtime CDM for smaller projects and when there is a ‘one go’ to production (e.g. replacement of one system). As soon as the project becomes larger and/or integration of systems keeps going on, switch over to the design time CDM approach.
You can easily switch over by starting to develop the new services with the design time CDM approach and keeping the ‘old’ services running with the central runtime CDM. As soon as there is a change in an ‘old’ service, refactor it to the new approach of the design time CDM. In time there will be no more services using the runtime CDM, so the runtime CDM can be removed.

After reading this blogpost, together with the previous two blogposts which make up the trilogy about my experiences with a Canonical Data Model, you should have a good understanding of how to set up a CDM and use it in your projects. Hopefully it helps you in making valuable decisions about creating and using a CDM, and your projects will benefit from it.


Single-Sign-On to Oracle ERP Cloud

Tue, 2017-03-21 10:04

More and more enterprises are using Single-Sign-On (SSO) for their on-premise applications today, but what if they want to use SSO for their cloud applications as well?

This blog post is addressing this topic for Single-Sign-On to Oracle ERP Cloud in a hybrid environment.

First of all, let’s focus on SSO on-premise and introduce some terminology.

A user (aka principal) wants to have access to a particular service. This service can be found at the Service Provider (SP). The provided services are secured, so the user needs to authenticate first. The Identity Provider (IdP) is able to validate (assert) the authenticity of the user by asking, for instance, for the username and password (or by using other methods).

So for authentication we always have three different roles: User, Service Provider (SP) and Identity Provider (IdP), as shown below.

For Single-Sign-On we should have a centralized IdP and we should have a standardized way to assert the authentication information.

In an on-premise landscape there is plenty of choice for an IdP. Some commonly used ones are Microsoft Active Directory (AD) (closed source), Oracle Identity & Access Management (closed source) and Shibboleth (open source). For now we assume we are using AD.

Kerberos

The most used standard for doing SSO is Kerberos. In that case a Kerberos ticket is asserted by the IdP, which is then used towards all the Service Providers to be able to log in.

This Kerberos method is suited for an on-premise landscape and is also suited if the connection to a private cloud runs via a VPN (the two are effectively part of the internal network and everything should work fine for the cloud as well). But if we want to integrate a public cloud such as Oracle Fusion Cloud, things get messy.

Arguably the reason Kerberos isn’t used over the public Internet doesn’t have to do with the security of the protocol itself, but rather that it’s an authentication model that doesn’t fit the needs of most “public Internet” applications. Kerberos is a heavy protocol and cannot be used in scenarios where users want to connect to services from unknown clients, as in a typical Internet or public cloud computing scenario, where the authentication provider typically has no knowledge of the user’s client system.

The main standards to be able to facilitate SSO for the internet are:

  • OpenID
  • OAuth
  • Security Assertion Markup Language (SAML)
SAML 2.0
Oracle Fusion Cloud is based on SAML 2.0 so let’s go on with this standard for now.
Conceptually the SAML handshake looks like Kerberos; you can also see the different roles for User, SP and IdP and the assertion of a SSO ticket.
Identity Federation
But how can we integrate Kerberos with SAML?
Now a different concept comes in: Identity Federation. This means linking and using the identity of a user across several security domains (on-premise and public cloud). In simpler terms, an SP does not necessarily need to obtain and store the user credentials in order to authenticate them. Instead, the SP can use an IdP that is already storing this. In our case the IdP is on-premise (Active Directory for example) and the SP is the Oracle Fusion Cloud application.
Now there are two things to be done:
  • The on-premise Kerberos ticket should be translated to SAML, because we want SSO.
  • There is a need for trust between IdP and SP. Only trusted security domains can access the SP. Trust configuration should be done on both sides (on-premise and cloud).
Translation of the Kerberos ticket is performed by a Security Token Service (STS). This is the broker that sits between an SP and the user. An STS is an issuer of security tokens; “issuer” is often a synonym of an STS. STSs can have different roles: as IdP when they authenticate users, or as Federation Provider (FP) when they sit in the middle of a trust chain and act as a “relying party” for other IdPs.
In our case the STS translates Kerberos to SAML; Microsoft Active Directory Federation Services (ADFS) and Oracle Identity Federation Server (part of the Oracle Identity Governance Suite) are examples of products that can do this.
So the picture now looks like this:
Trust
But how is the Trust achieved?
Trust is just metadata about the SP and the IdP. So the metadata from the IdP should be uploaded into Oracle ERP Cloud and vice versa. When you create metadata for the IdP, the IdP entity is added to a circle of trust. A circle of trust is used to group SPs and IdPs in a secure, trusted environment. Other remote provider entities can be added to the circle of trust.
Metadata is defined in XML. An SP uses the metadata to know how to communicate with the IdP and vice versa. Metadata defines things like which services are available, addresses and certificates:
  • Location of its SSO service.
  • An ID identifying the provider.
  • Signature of the metadata and public keys for verifying and encrypting further communication.
  • Information about if the IdP wants the communication signed or encrypted.
There is no protocol for how the exchange is done, but there is no secret information in the metadata, so the XML can be freely distributed by mail or published in clear text on the Internet.
It is, however, highly recommended that the metadata is protected from unauthorized modification, as such a modification could be a good start of a Man-In-The-Middle attack.
The integrity of the metadata could be protected using, for example, digital signatures or by transporting the metadata over a secure channel.
Metadata can contain lots of other information. For a full description have a look at the SAML specifications: http://saml.xml.org/saml-specifications. A trimmed-down example is shown below.
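A heavily trimmed sketch of what SAML 2.0 SP metadata can look like (the entity ID, endpoint location and certificate value are placeholders; the real SP metadata is generated by the SP itself):

<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://erp-cloud.example.com/sp">
   <md:SPSSODescriptor AuthnRequestsSigned="true"
                       WantAssertionsSigned="true"
                       protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
      <!-- public key the IdP uses to verify messages signed by the SP -->
      <md:KeyDescriptor use="signing">
         <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
            <ds:X509Data>
               <ds:X509Certificate>MIIC...base64...</ds:X509Certificate>
            </ds:X509Data>
         </ds:KeyInfo>
      </md:KeyDescriptor>
      <!-- location the user is redirected to with the SAML assertion -->
      <md:AssertionConsumerService
            Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
            Location="https://erp-cloud.example.com/saml/acs"
            index="0"/>
   </md:SPSSODescriptor>
</md:EntityDescriptor>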
Oracle ERP Cloud Security
Application Security in Oracle ERP Cloud consists of two parts:
  1. Oracle Identity Management (OIM) running in the cloud (Oracle Identity Federation is part of this).
  2. Authorization Policy Manager (APM).
Oracle Identity Management is responsible for user account management. Authorization Policy Manager is responsible for the fine-grained SaaS role mapping (aka entitlements).
See this blog post from Oracle on how this works: http://www.ateam-oracle.com/introduction-to-fusion-applications-roles-concepts/
Remark: the application security in Oracle ERP Cloud will change with R12 and will benefit from the following new capabilities:
  • The separation between OIM and APM is no longer there: a new, simplified Security Console will contain both.
  • Configuration of SSO integration (with IdP) is simplified and can be performed from a single screen.
  • REST API’s based on SCIM (System for Cross-Domain Identity Management) 2.0 are available for Identity Synchronization with IdP.
Another remark: Oracle Identity Cloud Service was released in Q1 2017. Integration with Oracle ERP Cloud is not yet possible because the Identity Federation functionality is not implemented yet. The release date isn’t clear, so we have to deal with the functionality presented above.
Configuring SSO for Oracle ERP Cloud
For SSO the following aspects should be taken into account:
  • Users and Entitlements
  • Initial upload of identities and users
  • Identity and user Synchronization
  • Exchange of SP and IdP metadata
Users and Entitlements

Before going into this I must explain the difference between users and employees.

  • When talking about users we mean the user login account. As explained before these accounts are the domain of IAM.
  • Users have access rights based on Role Based Access Controls (RBAC). Also IAM is handling this.
  • Users have entitlements to use particular ERP functionality. This is handled in APM.
  • When talking about employees we mean the business employee with his or her associated business job. This is the domain of Oracle HCM Cloud (even when you don’t have a full-use HCM license). An employee can access Oracle ERP Cloud when he or she has a user account in IAM and the proper entitlements in APM.
Initial user upload

To establish SSO between the customer’s on-premises environment and the Oracle ERP Cloud environment, the customer must specify which identity attribute (aka GUID) – user name or email address – will be unique across all users in the customer’s organization. The SAML token should pass this attribute so the SP can determine which user is asserted (remember the first picture in this blog post); see the sketch below.
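To illustrate, a trimmed sketch of a SAML assertion in which the email address is the agreed unique attribute (the issuer, timestamp and email address are made up; a real assertion also contains conditions, an authentication statement and a signature):

<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_abc123" Version="2.0" IssueInstant="2017-03-21T10:04:00Z">
   <saml:Issuer>http://adfs.example.com/adfs/services/trust</saml:Issuer>
   <saml:Subject>
      <!-- the agreed unique identity attribute (GUID), here the email address -->
      <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
         jane.doe@example.com
      </saml:NameID>
   </saml:Subject>
</saml:Assertion>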

But before this can work, the SP should have all users loaded. This is an initial step in the Oracle ERP Cloud on-boarding process.
Currently (Oracle ERP Cloud R11) the following options are available:
  • If running Oracle HCM Public Cloud, you may need to use HR2HR Integration
  • If running a non-HCM Public Cloud, use Spreadsheet Upload [Document Note 1454722.1], or if you are running CRM Public Cloud, use the CRM upload utility for HCM employees. You can also enter the employees manually.

Do the following tasks to load the initial user data into Oracle ERP Cloud:

  1. Extract user data from your local LDAP directory service to a local file by using the tools provided by your LDAP directory service vendor.
  2. Convert the data in the file into a format that is delivered and supported by Oracle ERP Cloud.
  3. Load the user data into Oracle ERP Cloud by using one of the supported data loading methods.

Data loaders in Oracle ERP Cloud import data in CSV format. Therefore, you must convert the user data extracted from your local LDAP directory into CSV format. Ensure that the mandatory attributes are present and non-empty.

From Oracle ERP Cloud R12 onwards, the initial load can also be performed by using the SCIM 2.0 REST APIs. For details see: https://docs.oracle.com/cd/E52734_01/oim/OMDEV/scim.htm#OMDEV5526

Identity and user Synchronization

The IdP should always hold the truth about the users and business roles, so there should be something in place to push them to Oracle ERP Cloud.

For R12 the SCIM REST API’s are the best way to do that. For R11 it’s a lot more complicated as explained below.

Now the concept of employee and job comes in again. As explained earlier in this blog post this is the domain of Oracle HCM Cloud (which is also part of Oracle ERP Cloud).

Oracle HCM Cloud has REST APIs for reading and pushing Employee and Job data:

  • GET /hcmCoreApi/resources/11.12.1.0/emps
  • POST /hcmCoreApi/resources/11.12.1.0/emps
  • PATCH /hcmCoreApi/resources/11.12.1.0/emps/{empsUniqID}
  • GET /hcmCoreSetupApi/resources/11.12.1.0/jobs
  • GET /hcmCoreSetupApi/resources/11.12.1.0/jobs/{jobsUniqID}

For more details about these (which are also available in R11), see: https://docs.oracle.com/cloud/latest/globalcs_gs/FARWS/Global_HR_REST_APIs_and_Atom_Feeds_R12.html

But how can we provision IAM/APM? For that, Oracle HCM Cloud has standard provisioning jobs:

  • Send Pending LDAP Requests: Sends bulk requests and future-dated requests that are now active to OIM. The response to each request from OIM to Oracle Fusion HCM indicates transaction status (for example, Completed).
  • Retrieve Latest LDAP Changes: Requests updates from OIM that may not have arrived automatically because of a failure or error, for example.

For details see: http://docs.oracle.com/cloud/farel8/common/OCHUS/F1210304AN1EB1F.htm

Now the problem could arise that an administrator has changed user permissions in ERP Cloud (HCM or IAM/APM) which are not reflected in the IdP (which should always reflect the truth), so these are out of sync.

To solve this, the IdP should first read all employee and job data from Oracle HCM Cloud and, based on that, create the delta with its own administration. This delta is pushed to Oracle HCM Cloud so all manual changes are removed. This synchronization job should be performed at least every day.

The whole solution for Identity and user synchronization for R11 could look like this:

 

Exchange metadata for SSO
In R11 of Oracle ERP Cloud the exchange of SAML metadata for SSO is a manual process. In R12 there is a screen to do this, so for R12 you can skip the rest of this blog.
For R11, generation of the SP metadata.xml (to set up the federation on the IdP side) and upload of your IdP metadata.xml into the SP are performed by the Oracle Cloud Operations team. To start the integration process you should create a Service Request and provide the following information:
  • Which type of Federation Server is used on-premise.
  • Which SaaS application you want to integrate.
  • How many users will be enabled.
  • URL’s for IdP production and IdP non-production.
  • Technical contacts.

The following should also be taken into account (on both sides):

  • The Assertion Consumer Service URL of the SP, where the user will be redirected from the IdP with SAML Assertion.
  • The Signing Certificate corresponding to the private key used by the SP to sign the SAML Messages.
  • The Encryption Certificate corresponding to the private key used by the SP to decrypt the SAML Assertion, if SAML encryption is to be used.
  • The Logout service endpoint.
The Oracle Cloud Operations team delivers a document describing how to configure the on-premises IdP (Microsoft Active Directory Federation Services (ADFS) 2.0 or Oracle Identity Federation Server 11g).
Be aware that the Oracle Cloud Operations team needs at least two weeks to do the configuration in Oracle SSO Cloud.
For detailed information about this see Oracle Support Document: Co-Existence and SSO: The SSO Enablement Process for Public Cloud Customers (Doc ID 1477245.1).


Oracle SOA Suite: Find that composite instance!

Mon, 2017-03-20 06:14

When executing BPM or BPEL processes, they are usually executed in the context of a specific entity. Sometimes you want to find instances involved with a specific entity. There are different ways to make this easy. You can for example use composite instance titles or sensors and set them to a unique identifier for your entity. If they have not been used, you can check the audit trail. However, manually checking the audit trail, especially if there are many instances, can be cumbersome. Also if different teams use different standards or standards have evolved over time, there might not be a single way to look for your entity identifier in composite instances. You want to automate this.

It is of course possible to write Java or WLST code and use the API to gather all relevant information. It would however require fetching large amounts of data from the SOAINFRA database to analyse. Fetching all that data into WLST or Java and combining it, would not be fast. I’ve created a database package / query which performs this feat directly on the 11g SOAINFRA database (and most likely with little alteration on 12c).

How does it work

The checks which are performed in order (the first result found is returned):

  • Check the composite instance title
  • Check the sensor values
  • Check the composite audit trail
  • Check the composite audit details
  • Check the BPM audit trail
  • Check the Mediator audit trail
  • Do the above checks for every composite sharing the same ECID.

It first looks for instance titles conforming to a specific syntax (with a regular expression); next it looks for sensor values of sensors with a specific name. After that it starts to look in the audit trail and, if even that fails, it looks in the audit details, where messages are stored when they become larger than a set value (look for Audit Trail threshold). Next the BPM- and Mediator-specific audit tables are looked at and, as a last resort, it uses the ECID to find other composite instances in the same flow which might provide the required information, and it performs the same checks as mentioned above on those composite instances. Using this method I could find a corresponding entity identifier for almost any composite instance in my environment. The package/query has been tested on 11g but not on 12c. You should of course check to see if it fits your personal requirements. The code is mostly easy to read, save for the audit parsing details. For parsing the audit trail and details tables, I’ve used the following blog. The data is saved in a file which can be imported in Excel and the query can be scheduled on Linux with a provided sh script.

Getting the script to work for your case

You can download the script here. Several minor changes are required to make the script suitable for a specific use case.

  • In the example script getcomposites_run.sql the identification regular expression AA\d\d\.\d+ is used. You should of course replace this with a regular expression reflecting the format of your entity identification.
  • In the example script getcomposites_run.sql sensors which have AAIDENTIFICATION in the name will be looked at. This should be changed to reflect the names used by your sensors.
  • The getcomposites.sh contains a connect string: connect soainfra_username/soainfra_password. You should change this to your credentials.
  • The getcomposites.sh script can be scheduled. In the example script, it is scheduled to run at 12:30:00. If you do not need it, you can remove the scheduling. It can come in handy when you want to run it outside of office hours because the script most likely will impact performance.
  • The selection in getcomposites_run.sql only looks at running composites. Depending on your use case, you might want to change this to take all composites into consideration.
  • The script has not been updated for 12c. If you happen to create a 12c version of this script (I think not much would have to be changed), please inform me so I can add it to the GitHub repository.
Considerations
  • The script contains some repetition of code. This could be improved.
  • If you have much data in your SOAINFRA tables, the query will be slow. It could take hours. During this period, performance might be adversely affected.
  • That I had to create a script like this (first try this, then this, then this, etc) indicates that I encountered a situation in which there was not a single way to link composite instances to a specific identifier. If your project uses strict standards and these standards are enforced, a script like this would not be needed. For example, you set your composite instance title to reflect your main entity identifier or use specific sensors. In such a case, you do not need to fall back to parsing audit data.


Apache Kafka on the Oracle Cloud: My First experiences with Oracle Event Hub Cloud Service

Sat, 2017-03-18 06:46

Oracle recently made their ‘Kafka on Cloud’ service available: the Event Hub Cloud Service, offered as part of the Big Data Compute Cloud Service. In this article, I will briefly show the steps I went through to get up and running with this service. To be frank, it was not entirely intuitive – so this may be handy the next time I have to do it, or perhaps when you are struggling. One obvious point of confusion is the overloaded use of the term ‘service’ – which is applied, among other things, to Kafka Topics.

The steps are:

  1. Initialize the Oracle BigData Compute CS | Oracle Event Hub Cloud Service – Platform – and expose the REST Proxy
  2. Create Network Access Rule to enable access to the Event Hub Platform instance from the public internet
  3. Create an Event Hub service instance on the Event Hub Platform – corresponding to a Kafka Topic
  4. Inspect Event Hub using PSM
  5. Interact with Event Hub through CURL
  6. Create application (e.g. NodeJS) to access the Event Hub Service (aka Kafka Topic): produce message, create consumer, consume messages (through the REST proxy) – discussed in a subsequent blog article
Initialize the Oracle BigData Compute CS | Oracle Event Hub Cloud Service – Platform – and expose the REST Proxy

 

From the Dashboard, open the BigData Compute Cloud Service. Open the dropdown menu. Select option Oracle Event Hub Cloud Service – Platform.

 


 

Provide the name of the service – the Kafka Event Hub Cluster: soaringEventsHub. Click Next.

Specify Basic deployment type (for simple explorations). Upload the SSH Public Key.

For a trial, select just a single node (I started out with 5 nodes and it cost me a huge pile of resources).

Enable REST Access – to be able to communicate with the Event Hub through the REST API. Specify the username and password for accessing the REST proxy (in hindsight, I probably should have picked a different username than admin).

 

Click Next. The service will be created – with a generous memory and storage allocation.

When the service creation is complete, details are shown, including the public IP for the Kafka cluster and the ZooKeeper instance, as well as the REST Proxy details.

Create Network Access Rule to enable access to the Event Hub Platform instance from the public internet

To be perfectly honest – I am not sure anymore that this step is really required. I had trouble accessing the Event Hub service from my laptop – and I am still not able to connect to the Kafka broker from a regular Kafka client – so I tried many different things, including exposing the public IP address on the Event Hub (Kafka) node. Whether it helped or made any difference, I am not entirely sure. I will share the steps I took, and you figure out whether you need them. (I am still hoping that accessing the Event Hub from applications running inside the same identity domain on Oracle PaaS will be easier).

I opened the Access Rules for the Event Hub service, created a new access rule and enabled it – next to the already existing access rule that opens up the REST Proxy for the Event Hub service.

 

Create Event Hub Service on the Event Hub Service Platform aka Create a Kafka Topic

Open the Oracle Event Hub Cloud Service console (not the Oracle Event Hub Cloud Service – Platform!)

 


Click on Create Service. This corresponds to creating a Kafka Topic. The naming is awfully confusing here: creating an instance of the Event Hub Service on top of an instance of the Event Hub Platform Service actually means creating a Kafka message topic. Now that is much clearer.

 


 

Specify the name for the service – which will be part of the name of the Kafka Topic. Specify the Number of Partitions and the default Retention Period for messages published to the topic. Indicate on which instance of the Event Hub Cloud Service Platform – created in the first step in this article – the topic should be created.

Click Next.

Click Create.

The Topic is created with a name that is composed from the name of the identity domain and the name specified for the Event Hub Service instance: partnercloud17-SoaringEventBus.

 


The popup details for the “service” (aka Topic) indicate the main configuration details as well as the REST endpoint for this Topic. Messages can be produced at this endpoint.

The console also shows the log of all Service activity (again, ‘Service’ here is at the same level as a Kafka Topic).

The Service Console now lists the created instances.

 

 

Inspect Event Hub using PSM

The PSM (PaaS Service Manager) command line interface for the Oracle Cloud (read here on how to install it and get going) can be used to inspect the state of the Event Hub Service. It cannot be used to produce and consume messages though.

Using PSM for Event Hub is described in the documentation:  http://docs.oracle.com/en/cloud/paas/java-cloud/pscli/oehcs-commands.html

To list all services (topics):

psm oehcs services

To show the details for a single service:

psm oehcs service --service-name SoaringEventBus

To create a new Topic:

psm oehcs create-service

To update the retention time for a topic:

psm oehcs update-service

 

Interact with Event Hub through CURL

    One way to produce messages to the topic and/or consume message from a topic is by using cURL.

    It took me some time to figure out the correct syntax for each of these operations. It seems not abundantly well documented/explained at present. Anyways, here it goes. (note: the -k option tells cURL to accept an unknown certificate)

    * To produce a message to the topic on Event Hub, use:

curl -X POST -k -u username:password -H "Content-Type: application/vnd.kafka.json.v1+json" -d "{ \"records\": [ { \"key\": \"myKey\", \"value\": \"mySpecialValue\"}] }" https://public_ip_REST-PROXY:1080/restproxy/topics/NAME_OF_EVENT_HUB_SERVICE

    * Create a consumer group: in order to consume messages, you first need to create a consumer group with a consumer instance. Subsequently, you can consume messages through the consumer:

curl -X POST -k -u username:password -H "Content-Type: application/vnd.kafka.json.v1+json" -d "{ \"name\": \"soaring-eventbus-consumer\", \"format\": \"json\" }" https://public_ip_REST-PROXY:1080/restproxy/consumers/soaring-eventbus-consumer-group

* To consume messages – leveraging the consumer group soaring-eventbus-consumer-group and the consumer instance soaring-eventbus-consumer:
curl -X POST -k -u username:password -H "Accept: application/vnd.kafka.json.v1+json" https://public_ip_REST-PROXY:1080/restproxy/consumers/soaring-eventbus-consumer-group/instances/soaring-eventbus-consumer/topics/partnercloud17-SoaringEventBus

     

Here are three commands: create a consumer group, produce a message and consume (all) available messages – all from the same topic:

image
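The same REST call can also be made from application code. Below is a minimal Java sketch (just an illustration, not part of the original setup) that posts a record to the Event Hub REST Proxy using only JDK classes; the endpoint, topic name and credentials are the placeholders from the cURL example above, and the certificate of the REST Proxy is assumed to be trusted by the JVM (the equivalent of cURL's -k option would require an extra truststore entry or a custom trust manager).

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EventHubRestProducer {
    public static void main(String[] args) throws Exception {
        // Placeholders: the public IP of the REST Proxy and the full topic name
        String endpoint =
            "https://public_ip_REST-PROXY:1080/restproxy/topics/partnercloud17-SoaringEventBus";
        String credentials = Base64.getEncoder()
            .encodeToString("username:password".getBytes(StandardCharsets.UTF_8));
        String payload =
            "{ \"records\": [ { \"key\": \"myKey\", \"value\": \"mySpecialValue\" } ] }";

        HttpURLConnection conn =
            (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v1+json");
        conn.setDoOutput(true);
        // Write the JSON body with the record(s) to produce
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}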

    The post Apache Kafka on the Oracle Cloud: My First experiences with Oracle Event Hub Cloud Service appeared first on AMIS Oracle and Java Blog.

    Introducing NoSQL and MongoDB to Relational Database professionals

    Wed, 2017-03-15 06:25

Most enterprises have a lot of variety in the data they deal with. Some data is highly structured and other is very unstructured, some data is bound by strict integrity rules and quality constraints and other is free of any restrictions, some data is “hot” – currently very much in demand – and other data can be stone cold. Some data needs to be extremely accurate, down to a prescribed number of fractional digits, and other is only approximate. Some is highly confidential and other publicly accessible. Some is around in small quantities and other in huge volumes.

Over the years many IT professionals and companies have come to the realization that all this differentiation in data justifies or even mandates a differentiation in how the data is stored and processed. It does not make sense to treat the hottest transactional data in the same way as the archived records from 30 years ago. Yet many organizations have been doing exactly that: storing it all in the enterprise relational database. It works, keeps all data accessible for those rare instances where that really old data is required and most importantly: keeps all data accessible in the same way – through straightforward SQL queries.

    On March 14th, we organized a SIG session at AMIS around NoSQL in general and MongoDB in particular. We presented on the history of NoSQL, how it complements relational databases and a pure SQL approach and what types of NoSQL databases are available. Subsequently we focused on MongoDB, introducing the product and its architecture and discussing how to interact with MongoDB from JavaScript/NodeJS and from Java.

    The slides presented by the speakers – Pom Bleeksma and Lucas Jellema – are shown here (from SlideShare):

     

    The handson workshop is completely available from GitHub: https://github.com/lucasjellema/sig-nosql-mongodb.

    image

An additional slide deck was discussed – to demonstrate 30 queries side by side, against MongoDB vs Oracle Database SQL. This slide deck includes the MongoDB operations listed below (a small Java sketch follows the list):

• Filter & Sort (find, sort)

• Aggregation ($group, $project, $match, $sort)

• Lookup & Outer Join ($lookup, $arrayElemAt)

• Facet Search ($facet, $bucket, $sortByCount)

• Update (findAndModify, forEach, save, update, upsert, $set, $unset)

• Date and Time operations

• Materialized View ($out)

• Nested documents/tables ($unwind, $reduce)

• Geospatial (ensureIndex, 2dsphere, $near, $geoNear)

• Text Search (createIndex, text, $text, $search)

• Stored Procedures (db.system.js.save, $where)
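To give a flavour of what a couple of these operations look like in code, here is a small, purely illustrative sketch using the MongoDB Java driver; the connection details, database, collection and field names are all hypothetical.

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import java.util.Arrays;

import static com.mongodb.client.model.Accumulators.avg;
import static com.mongodb.client.model.Aggregates.group;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Sorts.descending;

public class MongoQuickLook {
    public static void main(String[] args) {
        // Hypothetical connection and collection, for illustration only
        MongoClient client = new MongoClient("localhost", 27017);
        MongoDatabase db = client.getDatabase("sig");
        MongoCollection<Document> emp = db.getCollection("employees");

        // Filter & Sort: all clerks, most recently hired first
        for (Document doc : emp.find(eq("job", "CLERK")).sort(descending("hiredate"))) {
            System.out.println(doc.toJson());
        }

        // Aggregation: average salary per department ($group)
        for (Document doc : emp.aggregate(Arrays.asList(
                group("$deptno", avg("avgSalary", "$sal"))))) {
            System.out.println(doc.toJson());
        }

        client.close();
    }
}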

     

    The post Introducing NoSQL and MongoDB to Relational Database professionals appeared first on AMIS Oracle and Java Blog.

    Oracle Public Cloud – Invoking ICS endpoints from SOA CS – configure SSL certificate and basic authentication

    Wed, 2017-03-15 06:16

    As part of the Soaring through the Clouds demo of 17 Oracle Public Cloud services, I had to integrate SOA CS with both ACCS (Application Container Cloud) and ICS (Integration Cloud Service).

    image

    Calls from Service Bus and SOA Composites running in SOA Suite 12c on SOA CS to endpoints on ACCS (Node.js Express applications) and ICS (REST connector endpoint) were required in this demo. These calls are over SSL (to https endpoints) and for ICS also require basic authentication (at present, ICS endpoints cannot be invoked anonymously).

    This article shows the steps for taking care of these two aspects:

    • ensure that the JVM under SOA Suite on SOA CS knows and trusts the SSL certificate for ACCS or ICS
    • ensure that the call from SOA CS to ICS carries basic authentication details

    The starting point is a SOA Composite that corresponds with the preceding figure – with external references to DBaaS (through Database Adapter), ICS (to call an integration that talks to Twitter) and ACCS (to invoke a REST API on NodeJS that calls out to the Spotify API):

    image

    Configure SSL Certificate on JVM under SOA Suite on SOA CS

I have tried to deploy the SOA composite (successful) and invoke the TweetServiceSOAP endpoint (that invokes ICS) (not successful). The first error I ran into is:

env:Server
javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
oracle.sysman.emInternalSDK.webservices.util.SoapTestException: Client received SOAP Fault from server : javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    image

This may sound a little cryptic, but it is actually quite simple: the endpoint for the ICS service I am trying to invoke is: https://ics4emeapartner-partnercloud17.integration.us2.oraclecloud.com/integration/flowapi/rest/ACEDEM_RESTME_… The essential part is right at the beginning: https. The communication with the endpoint is secure, over SSL. This requires the certificate of the ICS server to be trusted by SOA CS (in particular by the JVM under WebLogic running SOA Suite on the SOA CS instance). For this to happen, the certificate needs to be configured with the JVM as a trusted certificate.

    With WebLogic 12c it has become a lot easier to register certificates with the server – going through the Enterprise Manager Fusion Middleware Control. These are the steps:

    1. Paste the endpoint for the ICS service in the browser’s location bar and try to access it; this will not result in a meaningful response. It will however initiate an SSL connection between browser and server, as you can tell from the padlock icon displayed to the left of the location bar

    image

    2. Click on the padlock icon, to open the details for the SSL certificate

image

    Open the Security tab and click on View Certificate

image

    3. Open the Details tab and Export the Certificate

image

    Save the certificate to a file:

image
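As a purely optional alternative to exporting the certificate through the browser, the certificate can also be retrieved programmatically. The following small JDK-only Java sketch (an illustration, not one of the original steps) connects to the ICS host, takes the first certificate of the chain presented by the server and saves it as a PEM file that can subsequently be imported; the host name is a placeholder, and it assumes the server certificate is accepted by the default JDK truststore, which is the case for the public, CA-signed Oracle Cloud certificates.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.cert.Certificate;
import java.util.Base64;

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ExportServerCertificate {
    public static void main(String[] args) throws Exception {
        // Placeholder host; pass the ICS (or ACCS) host name as the first argument
        String host = args.length > 0 ? args[0] : "ics-host.example.com";
        int port = 443;

        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
            socket.startHandshake();
            // The first certificate in the chain is the server (leaf) certificate
            Certificate[] chain = socket.getSession().getPeerCertificates();
            String pem = "-----BEGIN CERTIFICATE-----\n" +
                Base64.getMimeEncoder(64, "\n".getBytes(StandardCharsets.US_ASCII))
                      .encodeToString(chain[0].getEncoded()) +
                "\n-----END CERTIFICATE-----\n";
            Files.write(Paths.get(host + ".cer"), pem.getBytes(StandardCharsets.US_ASCII));
            System.out.println("Saved " + host + ".cer");
        }
    }
}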

    4. Open the Enterprise Manager Fusion Middleware Control for the WebLogic Domain under the SOA CS instance. Navigate to Security | Keystore:

    image

    5. Select Stripe system | trust and click on the Manage button

    image

    6. Click on Import to import a new certificate:

    image

    Select Trusted Certificate as the Certificate Type. Provide an alias to identify the certificate.

    Click browse and select the file that was saved when exporting the certificate in step 3:

    image

    Click OK.

    The Certificate is imported and added to the keystore:

    image

    7. Restart the WebLogic Domain (admin server and all managed servers)

Unfortunately, for the new certificate to become truly available, a restart is (still) required. (Or at least, that is my understanding; perhaps you can try without, because it seems like a very heavy step.)

    This blog by Adam DesJardin from our REAL partner AVIO Consulting provided much of the answer: http://www.avioconsulting.com/blog/soa-suite-12c-and-opss-keystore-service

     

    Add basic authentication to the call from SOA CS to ICS

    When I again tested my call to the TweetServiceSOAP endpoint (that invokes ICS), I was again not successful. This time, a different exception occurred:

env:Server
Authorization Required
oracle.sysman.emInternalSDK.webservices.util.SoapTestException: Client received SOAP Fault from server : Authorization Required

    This is not really a surprise: all calls to ICS endpoints require basic authentication (because at present, ICS endpoints cannot be invoked anonymously). These are the steps to make this successful:

1. Create an Oracle Public Cloud user account (johndoe) with one permission: calling ICS services.

Now we need to add a credential for johndoe to a credential map in the credential store in WebLogic, and refer to that credential in an OWSM Security Policy that we add to the Reference in the SOA Composite that makes the call to ICS.

    2. Open the Enterprise Manager Fusion Middleware Control for the WebLogic Domain under the SOA CS instance. Navigate to Security | Credentials:

    image

    3. If the map oracle.wsm.security does not yet exist, click on Create Map. Enter the name oracle.wsm.security in the Map Name field and click on OK.

    image

    4. Select the map oracle.wsm.security and click on Create Key

    image

    Set the Key for this credential; the key is used to refer to the credential in the security policy. Here I use ICSJohnDoe.

    image

Set the Type to Password and set the username and password to the correct values for the ICS user. Click on OK to create.

    image

    5. Add a security policy to the Reference in the SOA Composite.

    In JDeveloper open the SOA Composite. Right click on the Reference. Select Configure SOA WS Policies from the context menu.

    image

    Click on the plus icon in the category Security. Select oracle/http_basic_auth_over_ssl_client_policy.

    image

    Set the value of property csf-key to the Key value defined for the credential in step 4, in my case ICSJohnDoe.

    Click on OK.

    6. Redeploy the SOA Composite to SOA CS.

     

    This time when I invoke the Web Service, my Tweet gets published:

    image

    The flow trace for the SOA Composite:

    image

    Resources

    A-Team Article – add certificate to JCS and invoke JCS from ICS – http://www.ateam-oracle.com/configuring-https-between-integration-cloud-service-and-java-cloud-service/

      The post Oracle Public Cloud – Invoking ICS endpoints from SOA CS – configure SSL certificate and basic authentication appeared first on AMIS Oracle and Java Blog.

      Change UUIDs in VirtualBox

      Sun, 2017-03-12 00:31

If you are anything like me you will have multiple VirtualBox machines running on your system. Sometimes you might want to run a copy of a VirtualBox machine for different purposes, like running an Oracle 11 DevDays instance as a test environment but also running the same vbox for customer testing. If you copy the vbox and try to run it in the manager, you’ll be presented with an error that a harddisk with the same UUID already exists. Here’s how I solved it.

First of all, make a backup copy of the VirtualBox machine you want to change. While this is running, you can download the PortableApps UUID-GUID generator or, if you are not running Windows, a similar program. You can also use an online GUID generator.

      After the backup has completed you can start changing the UUIDs for the VirtualBox. Open the <virtualboxname>.vbox file in a text editor. There are a couple of UUIDs that need to be changed:

      First look for the <Machine> tag (2nd tag in the xml file). One of the attributes is uuid={some_uuid}. You can change this to your new uuid. This is where the generator comes in, just generate a new uuid and paste that here.

Next you need to change the uuids for the harddisks. This is a little more tricky. Find the tag <Harddisk> and look for the uuid attribute. This uuid is used multiple times in the xml file, also in the StorageControllers section. The easiest way to keep these in sync is to do a search-and-replace over the entire file: search for the current uuid, replace with a freshly generated uuid. Before you change the next one, you also need to change the uuid in the harddisk file itself. You do this by running the command line utility VBoxManage.
The command is like this:
<path_to_virtualbox>VBoxManage internalcommands sethduuid <filepath> <uuid>

      Repeat this process for all the harddisks that are defined. This way you can have multiple instances of the same VirtualBox in your VirtualBox Manager.

      You may want to change other settings like MAC Addresses for your network cards, but you can do this using the VBox interface.

      The post Change UUIDs in VirtualBox appeared first on AMIS Oracle and Java Blog.

      Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX

      Thu, 2017-03-09 03:34

At a public sector organization in the Netherlands there was the need to make an inventory of the deployed OSB services in order to find out the dependencies on certain external web services (which were on a list to become deprecated).

      For this, in particular the endpoints of business services were of interest.

Besides that, the dependencies between services and also the Message Flow per proxy service were of interest, in particular Operational Branch, Route, Java Callout and Service Callout actions.

Therefore an OSBServiceExplorer tool was developed to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. For now, this tool was merely used to quickly return the information needed, but in the future it can be the basis for a more comprehensive one.

      This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX.

If you are interested in general information about using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: “Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX”, via url: https://technology.amis.nl/2017/02/28/oracle-service-bus-disable-enable-a-proxy-service-via-weblogic-server-mbeans-with-jmx/

      Remark: Some names in the examples in this article are in Dutch, but don’t let this scare you off.

      MBeans

      For ease of use, a ms-dos batch file was created, using MBeans, to explore services (proxy and business). The WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

      On a server, the ms-dos batch file “OSBServiceExplorer.bat” is called.

      The content of the ms-dos batch file “OSBServiceExplorer.bat” is:
java.exe -classpath "OSBServiceExplorer.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

      In the ms-dos batch file via java.exe a class named OSBServiceExplorer is being called. The main method of this class expects the following parameters:

• HOSTNAME – Host name of the AdminServer
• PORT – Port of the AdminServer
• USERNAME – Username
• PASSWORD – Password

      In the sample code shown at the end of this article, the use of the following MBeans can be seen:

DomainRuntimeServiceMBean: Provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

This library (wlfullclient.jar) is not by default provided in a WebLogic install and must be built. The simple way of how to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

ServerRuntimeMBean: Provides methods for retrieving runtime information about a server instance and for transitioning a server from one state to another. [https://docs.oracle.com/cd/E11035_01/wls100/javadocs_mhome/weblogic/management/runtime/ServerRuntimeMBean.html]

ALSBConfigurationMBean: Provides various API to query, export and import resources, obtain validation errors, get and set environment values, and in general manage resources in an ALSB domain. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ALSBConfigurationMBean.html]

      Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

      Service findService(String name,
                          String type,
                          String location)
      

      This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.
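To illustrate this, the fragment below sketches how a proxy for the DomainRuntimeServiceMBean can be obtained and how findService is then used to get hold of the ALSBConfigurationMBean. It assumes an already opened MBeanServerConnection named connection (see the initConnection method in the full code at the end of this article); here the standard javax.management.MBeanServerInvocationHandler is used, whereas the tool itself uses the WebLogic-specific weblogic.management.jmx.MBeanServerInvocationHandler for the same purpose.

ObjectName domainRuntimeServiceObjectName =
    new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);

// Create a local proxy for the DomainRuntimeServiceMBean
DomainRuntimeServiceMBean domainRuntimeServiceMBean =
    (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(
        connection, domainRuntimeServiceObjectName,
        DomainRuntimeServiceMBean.class, false);

// Use findService to locate the ALSBConfigurationMBean in the primary MBeanServer
ALSBConfigurationMBean alsbConfigurationMBean =
    (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(
        ALSBConfigurationMBean.NAME, ALSBConfigurationMBean.TYPE, null);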

      In the sample code shown at the end of this article, certain java fields are used. For reading purposes the field values are shown in the following table:

• DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME – weblogic.management.mbeanservers.domainruntime
• DomainRuntimeServiceMBean.OBJECT_NAME – com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
• ALSBConfigurationMBean.NAME – ALSBConfiguration
• ALSBConfigurationMBean.TYPE – com.bea.wli.sb.management.configuration.ALSBConfigurationMBean
• Ref.DOMAIN – <Reference to the domain>

Because of the use of com.bea.wli.config.Ref.class, the following library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

      A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.

      A special Ref DOMAIN refers to the whole domain.
      [https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class, the following library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

      When running the code the following error was thrown:

      java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
      	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
      	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
      	at $Proxy0.findService(Unknown Source)
      	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.<init>(OSBServiceExplorer.java:174)
      	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.main(OSBServiceExplorer.java:445)
      Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
      	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
      	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
      	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
      	... 4 more
      Process exited.
      
      

      So because of the use of com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class the following library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.

      Runtime information (name and state) of the server instances

      The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

      First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain are written to file.

      Example content fragment of the text file:

      Found server runtimes:
      - Server name: AdminServer. Server state: RUNNING
      - Server name: ManagedServer1. Server state: RUNNING
      - Server name: ManagedServer2. Server state: RUNNING

      See the code fragment below:

      fileWriter.write("Found server runtimes:\n");
      int length = (int)serverRuntimes.length;
      for (int i = 0; i < length; i++) {
          ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];
      
          String name = serverRuntimeMBean.getName();
          String state = serverRuntimeMBean.getState();
          fileWriter.write("- Server name: " + name + ". Server state: " +
                           state + "\n");
      }
      fileWriter.write("" + "\n");
      List of Ref objects (projects, folders, or resources)

      Next, a list of Ref objects is written to file, including the total number of objects in the list.

      Example content fragment of the text file:

      Found total of 1132 refs, including the following proxy and business services: 
      …
      - ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
      …
      - ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
      …
      - BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS

      See the code fragment below:

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);
      
      
      fileWriter.write("Found total of " + refs.size() + " refs, including the following proxy and business services:\n");
      
      for (Ref ref : refs) {
          String typeId = ref.getTypeId();
      
          if (typeId.equalsIgnoreCase("ProxyService")) {
      
              fileWriter.write("- ProxyService: " + ref.getFullName() +
                               "\n");
          } else if (typeId.equalsIgnoreCase("BusinessService")) {
              fileWriter.write("- BusinessService: " + ref.getFullName() +
                               "\n");
          } else {
              //fileWriter.write(ref.getFullName());
          }
      }
      
      fileWriter.write("" + "\n");

      As mentioned before, a Ref object uniquely represents a resource, project or folder. A Ref object has two components:

      • typeId that indicates whether it is a project, folder, or a resource
      • array of names of non-zero length.

      For a resource the array of names start with the project name, followed by folder names, and end with the resource name.
      For a project, the Ref object simply contains one name component, that is, the project name.
      A Ref object for a folder contains the project name followed by the names of the folders which it is nested under.

      [https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

      Below is an example of a Ref object that represents a folder (via JDeveloper Debug):

      Below is an example of a Ref object that represents a resource (via JDeveloper Debug):

      ResourceConfigurationMBean

      In order to be able to determine the actual endpoints of the proxy services and business services, the ResourceConfigurationMBean is used. When connected, the Service Bus MBeans are located under com.oracle.osb. [https://technology.amis.nl/2014/10/20/oracle-service-bus-obtaining-list-exposed-soap-http-endpoints/]

      When we look at the java code, as a next step, the names of a set of MBeans specified by pattern matching are put in a list and looped through.

      Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the queryNames method.

Set<ObjectName> queryNames(ObjectName name,
                     QueryExp query)
                     throws IOException
      

      Gets the names of MBeans controlled by the MBean server. This method enables any of the following to be obtained: The names of all MBeans, the names of a set of MBeans specified by pattern matching on the ObjectName and/or a Query expression, a specific MBean name (equivalent to testing whether an MBean is registered). When the object name is null or no domain and key properties are specified, all objects are selected (and filtered if a query is specified). It returns the set of ObjectNames for the MBeans selected.
      [https://docs.oracle.com/javase/7/docs/api/javax/management/MBeanServerConnection.html]

      See the code fragment below:

      String domain = "com.oracle.osb";
      String objectNamePattern =
          domain + ":" + "Type=ResourceConfigurationMBean,*";
      
Set<ObjectName> osbResourceConfigurations =
          connection.queryNames(new ObjectName(objectNamePattern), null);
      
      fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
      for (ObjectName osbResourceConfiguration :
           osbResourceConfigurations) {
      …
          String canonicalName =
              osbResourceConfiguration.getCanonicalName();
          fileWriter.write("- Resource: " + canonicalName + "\n");
      …
      }

      The pattern used is: com.oracle.osb:Type=ResourceConfigurationMBean,*

      Example content fragment of the text file:

      ResourceConfiguration list of proxy and business services:
      …
      - Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
      …

      Below is an example of an ObjectName object (via JDeveloper Debug), found via the queryNames method:

Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened. Here the previously mentioned ResourceConfigurationMBeans can be found.


      [Via MBean Browser]

The information on the right is as follows (if we navigate to a particular ResourceConfigurationMBean, for example …$UpdateZaak_Lk01_PS):


      [Via MBean Browser]

      Here we can see that the attributes Configuration and Metadata are available:

      • Configuration

      [Via MBean Browser]

      The Configuration is made available in java by the following code fragment:

      CompositeDataSupport configuration = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Configuration");
      • Metadata

      [Via MBean Browser]

      The Metadata is made available in java by the following code fragment:

      CompositeDataSupport metadata = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Metadata");
      Diving into attribute Configuration of the ResourceConfigurationMBean

      For each found proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

      See the code fragment below:

      String canonicalName =
          osbResourceConfiguration.getCanonicalName();
      …
      String servicetype =
          (String)configuration.get("service-type");
      CompositeDataSupport transportconfiguration =
          (CompositeDataSupport)configuration.get("transport-configuration");
      String transporttype =
          (String)transportconfiguration.get("transport-type");
      …
      fileWriter.write("  Configuration of " + canonicalName +
                       ":" + " service-type=" + servicetype +
                       ", transport-type=" + transporttype +
                       ", url=" + url + "\n");

      Proxy service configuration:

      Below is an example of a proxy service configuration (content fragment of the text file):

        Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

The proxy services, which define the exposed endpoints, can be recognized by the ProxyService$ prefix.


      [Via MBean Browser]

      For getting the endpoint, see the code fragment below:

      String url = (String)transportconfiguration.get("url");

      Business service configuration:

      Below is an example of a business service configuration (content fragment of the text file):

        Configuration of com.oracle.osb:Location=AdminServer,Name=BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://xyz/eBus/FolderService.svc

The business services, which define the invoked endpoints, can be recognized by the BusinessService$ prefix.


      [Via MBean Browser]

      For getting the endpoint, see the code fragment below:

      CompositeData[] urlconfiguration =
          (CompositeData[])transportconfiguration.get("url-configuration");
      String url = (String)urlconfiguration[0].get("url");

      So, via the url key found in the business service configuration, the endpoint of a business service can be found (for example: http://xyz/eBus/FolderService.svc). So in that way the dependencies (proxy and/or business services) with certain external web services (having a certain endpoint), could be found.

      Proxy service pipeline, element hierarchy

      For a proxy service the elements (nodes) of the pipeline are investigated.

      See the code fragment below:

      CompositeDataSupport pipeline =
          (CompositeDataSupport)configuration.get("pipeline");
      TabularDataSupport nodes =
          (TabularDataSupport)pipeline.get("nodes");


      [Via MBean Browser]

      Below is an example of a nodes object (via JDeveloper Debug):

      If we take a look at the dataMap object, we can see nodes of different types.

      Below is an example of a node of type Stage (via JDeveloper Debug):

      Below is an example of a node of type Action and label ifThenElse (via JDeveloper Debug):

      Below is an example of a node of type Action and label wsCallout (via JDeveloper Debug):

      For the examples above the Message Flow part of the UpdateZaak_Lk01_PS proxy service looks like:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .proxy file (in this case: UpdateZaak_Lk01_PS.proxy) for the _ActionId- identification, mentioned as value for the name key.

      <con:stage name="EditFolderZaakStage">
              <con:context>
                …
              </con:context>
              <con:actions>
                <con3:ifThenElse>
                  <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7c84</con2:id>
                  <con3:case>
                    <con3:condition>
                      …
                    </con3:condition>
                    <con3:actions>
                      <con3:wsCallout>
                        <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f</con2:id>
                        …

      The first node in the dataMap object (via JDeveloper Debug) looks like:

The dataMap object is of type HashMap. A HashMap maintains key and value pairs and is often denoted as HashMap<Key, Value> or HashMap<K, V>. HashMap implements the Map interface.

      As can be seen, the key is of type Object and the value of type CompositeData.

      In order to know what kind of information is delivered via the CompositeData object, the rowType object can be used.

      See the code fragment below:

      TabularType tabularType = nodes.getTabularType();
      CompositeType rowType = tabularType.getRowType();

      Below is an example of a rowType object (via JDeveloper Debug):

      From this it is now clear that the CompositeData object for a ProxyServicePipelineElementType contains:

• Index 0 – children: Children of this node
• Index 1 – label: Label
• Index 2 – name: Name of the node
• Index 3 – node-id: Id of this node, unique within the graph
• Index 4 – type: Pipeline element type

      In the code fragment below, an iterator is used to loop through the dataMap object.

      Iterator keyIter = nodes.keySet().iterator();
      
      for (int j = 0; keyIter.hasNext(); ++j) {
      
          Object[] key = ((Collection)keyIter.next()).toArray();
      
          CompositeData compositeData = nodes.get(key);
      
          …
      }

      The key object for the first node in the dataMap object (via JDeveloper Debug) looks like:

      The value of this key object is 25, which also is shown as the value for the node-id of the compositeData object, which for the first node in the dataMap object (via JDeveloper Debug) looks like:

      It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Think for example of a “Stage” having an “If Then” action which in turn contains several “Assign” actions. A proxy service Message Flow can of course contain all kinds of elements (see the Design Palette).

      Below is (for another proxy service) an example content fragment of the text file, that reflects the hierarchy:

           Index#76:
             level    = 1
             label    = branch-node
             name     = CheckOperationOperationalBranch
             node-id  = 62
             type     = OperationalBranchNode
             children = [42,46,50,61]
               level    = 2
               node-id  = 42
               children = [41]
                 level    = 3
                 label    = route-node
                 name     = creeerZaak_Lk01RouteNode
                 node-id  = 41
                 type     = RouteNode
                 children = [40]
                   level    = 4
                   node-id  = 40
                   children = [39]
                     level    = 5
                     label    = route
                     name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b99
                     node-id  = 39
                     type     = Action
                     children = []
               level    = 2
               node-id  = 46
               children = [45]
                 level    = 3
                 label    = route-node
                 name     = updateZaak_Lk01RouteNode
                 node-id  = 45
                 type     = RouteNode
                 children = [44]
                   level    = 4
                   node-id  = 44
                   children = [43]
                     level    = 5
                     label    = route
                     name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b77
                     node-id  = 43
                     type     = Action
                     children = []
               …
      

Because of the interest in only certain kinds of nodes (Route, Java Callout, Service Callout, etc.), some kind of filtering is needed. For this the label and type keys are used.

      See the code fragment below:

      String label = (String)compositeData.get("label");
      String type = (String)compositeData.get("type");
      
      if (type.equals("Action") &&
          (label.contains("wsCallout") ||
           label.contains("javaCallout") ||
           label.contains("route"))) {
      
          fileWriter.write("    Index#" + j + ":\n");
          printCompositeData(nodes, key, 1);
      } else if (type.equals("OperationalBranchNode") ||
                 type.equals("RouteNode"))
      {
          fileWriter.write("    Index#" + j + ":\n");
          printCompositeData(nodes, key, 1);
      }

      Example content fragment of the text file:

          Index#72:
             level    = 1
             label    = wsCallout
             name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
             node-id  = 71
             type     = Action
             children = [66,70]
          Index#98:
             level    = 1
             label    = wsCallout
             name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
             node-id  = 54
             type     = Action
             children = [48,53]
          Index#106:
             level    = 1
             label    = wsCallout
             name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
             node-id  = 35
             type     = Action
             children = [30,34]
      

      When we take a closer look at the node of type Action and label wsCallout with index 106, this can also be found in the MBean Browser:


      [Via MBean Browser]

The children node-ids are 30 (a node of type Sequence and name requestTransform, also having children) and 34 (a node of type Sequence and name responseTransform, also having children).

      Diving into attribute Metadata of the ResourceConfigurationMBean

      For each found proxy service the metadata information (dependencies and dependents) is written to file.

      See the code fragment below:

      fileWriter.write("  Metadata of " + canonicalName + "\n");
      
      String[] dependencies =
          (String[])metadata.get("dependencies");
      fileWriter.write("    dependencies:\n");
      int size;
      size = dependencies.length;
      for (int i = 0; i < size; i++) {
          String dependency = dependencies[i];
          if (!dependency.contains("Xquery")) {
              fileWriter.write("      - " + dependency + "\n");
          }
      }
      fileWriter.write("" + "\n");
      
      String[] dependents = (String[])metadata.get("dependents");
      fileWriter.write("    dependents:\n");
      size = dependents.length;
      for (int i = 0; i < size; i++) {
          String dependent = dependents[i];
          fileWriter.write("      - " + dependent + "\n");
      }
      fileWriter.write("" + "\n");

      Example content fragment of the text file:

        Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
          dependencies:
            - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
            - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
            - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
            - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS
      
          dependents:
            - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
            - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS
      

      As can be seen in the MBean Browser, the metadata for a particular proxy service shows the dependencies on other resources (like business services and XML Schemas) and other services that are dependent on the proxy service.


      [Via MBean Browser]

      By looking at the results in the text file "OSBServiceExplorer.txt", the dependencies between services (proxy and business) and also the dependencies with certain external web services (with a particular endpoint) could be extracted.

      Example content of the text file:

      Found server runtimes:
      - Server name: AdminServer. Server state: RUNNING
      - Server name: ManagedServer1. Server state: RUNNING
      - Server name: ManagedServer2. Server state: RUNNING
      
      Found total of 1132 refs, including the following proxy and business services: 
      …
      - ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
      …
      - ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
      …
      - BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS
      …
      
      ResourceConfiguration list of proxy and business services:
      …
      - Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
        Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local
      
          Index#72:
             level    = 1
             label    = wsCallout
             name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
             node-id  = 71
             type     = Action
             children = [66,70]
          Index#98:
             level    = 1
             label    = wsCallout
             name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
             node-id  = 54
             type     = Action
             children = [48,53]
          Index#106:
             level    = 1
             label    = wsCallout
             name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
             node-id  = 35
             type     = Action
             children = [30,34]
      
        Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
          dependencies:
            - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
            - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
            - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
            - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS
      
          dependents:
            - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
            - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS
      …

      The java code:

      package nl.xyz.osbservice.osbserviceexplorer;
      
      
      import com.bea.wli.config.Ref;
      import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;
      
      import java.io.FileWriter;
      import java.io.IOException;
      
      import java.net.MalformedURLException;
      
      import java.util.Collection;
      import java.util.HashMap;
      import java.util.Hashtable;
      import java.util.Iterator;
      import java.util.Properties;
      import java.util.Set;
      
      import javax.management.MBeanServerConnection;
      import javax.management.MalformedObjectNameException;
      import javax.management.ObjectName;
      import javax.management.openmbean.CompositeData;
      import javax.management.openmbean.CompositeDataSupport;
      import javax.management.openmbean.CompositeType;
      import javax.management.openmbean.TabularDataSupport;
      import javax.management.openmbean.TabularType;
      import javax.management.remote.JMXConnector;
      import javax.management.remote.JMXConnectorFactory;
      import javax.management.remote.JMXServiceURL;
      
      import javax.naming.Context;
      
      import weblogic.management.jmx.MBeanServerInvocationHandler;
      import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
      import weblogic.management.runtime.ServerRuntimeMBean;
      
      
      public class OSBServiceExplorer {
          private static MBeanServerConnection connection;
          private static JMXConnector connector;
          private static FileWriter fileWriter;
      
          /**
           * Indent a string
           * @param indent - The number of indentations to add before a string 
           * @return String - The indented string
           */
          private static String getIndentString(int indent) {
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < indent; i++) {
                  sb.append("  ");
              }
              return sb.toString();
          }
      
      
          /**
           * Print composite data (write to file)
           * @param nodes - The list of nodes
           * @param key - The list of keys
           * @param level - The level in the hierarchy of nodes
           */
          private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                          int level) {
              try {
                  CompositeData compositeData = nodes.get(key);
      
                  fileWriter.write(getIndentString(level) + "     level    = " +
                                   level + "\n");
      
                  String label = (String)compositeData.get("label");
                  String name = (String)compositeData.get("name");
                  String nodeid = (String)compositeData.get("node-id");
                  String type = (String)compositeData.get("type");
                  String[] childeren = (String[])compositeData.get("children");
                  if (level == 1 ||
                      (label.contains("route-node") || label.contains("route"))) {
                      fileWriter.write(getIndentString(level) + "     label    = " +
                                       label + "\n");
      
                      fileWriter.write(getIndentString(level) + "     name     = " +
                                       name + "\n");
      
                      fileWriter.write(getIndentString(level) + "     node-id  = " +
                                       nodeid + "\n");
      
                      fileWriter.write(getIndentString(level) + "     type     = " +
                                       type + "\n");
      
                      fileWriter.write(getIndentString(level) + "     children = [");
      
                      int size = childeren.length;
      
                      for (int i = 0; i < size; i++) {
                          fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                      fileWriter.write(getIndentString(level) + "     node-id  = " +
                                       nodeid + "\n");
      
                      fileWriter.write(getIndentString(level) + "     children = [");
      
                      int size = childeren.length;
      
                      for (int i = 0; i < size; i++) {
                          fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

            if ((level == 1 && type.equals("OperationalBranchNode")) || level > 1) {
                      level++;
      
                      int size = childeren.length;
      
                      for (int i = 0; i < size; i++) {
                          key[0] = childeren[i];
                          printCompositeData(nodes, key, level);
                      }
                  }
      
              } catch (Exception ex) {
                  ex.printStackTrace();
              }
          }
      
          public OSBServiceExplorer(HashMap props) {
              super();
      
      
              try {
      
                  Properties properties = new Properties();
                  properties.putAll(props);
      
                  initConnection(properties.getProperty("HOSTNAME"),
                                 properties.getProperty("PORT"),
                                 properties.getProperty("USERNAME"),
                                 properties.getProperty("PASSWORD"));
      
      
                  DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                      (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);
      
                  ServerRuntimeMBean[] serverRuntimes =
                      domainRuntimeServiceMBean.getServerRuntimes();
      
                  fileWriter = new FileWriter("OSBServiceExplorer.txt", false);
      
      
                  fileWriter.write("Found server runtimes:\n");
                  int length = (int)serverRuntimes.length;
                  for (int i = 0; i < length; i++) {
                      ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];
      
                      String name = serverRuntimeMBean.getName();
                      String state = serverRuntimeMBean.getState();
                      fileWriter.write("- Server name: " + name +
                                       ". Server state: " + state + "\n");
                  }
                  fileWriter.write("" + "\n");
      
                  // Create an mbean instance to perform configuration operations in the created session.
                  //
                  // There is a separate instance of ALSBConfigurationMBean for each session.
                  // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
                  // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
                  // This mbean instance is then used to perform configuration operations in that session.
                  // The mbean instance is destroyed when the corresponding session is activated or discarded.
                  ALSBConfigurationMBean alsbConfigurationMBean =
                      (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                                    ALSBConfigurationMBean.TYPE,
                                                                                    null);
      
                  Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);
      
      
                  fileWriter.write("Found total of " + refs.size() +
                                   " refs, including the following proxy and business services:\n");
      
                  for (Ref ref : refs) {
                      String typeId = ref.getTypeId();
      
                      if (typeId.equalsIgnoreCase("ProxyService")) {
      
                          fileWriter.write("- ProxyService: " + ref.getFullName() +
                                           "\n");
                      } else if (typeId.equalsIgnoreCase("BusinessService")) {
                          fileWriter.write("- BusinessService: " +
                                           ref.getFullName() + "\n");
                      } else {
                          //fileWriter.write(ref.getFullName());
                      }
                  }
      
                  fileWriter.write("" + "\n");
      
                  String domain = "com.oracle.osb";
                  String objectNamePattern =
                      domain + ":" + "Type=ResourceConfigurationMBean,*";
      
                  Set<ObjectName> osbResourceConfigurations =
                      connection.queryNames(new ObjectName(objectNamePattern), null);
      
                  fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
                  for (ObjectName osbResourceConfiguration :
                       osbResourceConfigurations) {
      
                      CompositeDataSupport configuration =
                          (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                        "Configuration");
      
                      CompositeDataSupport metadata =
                          (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                        "Metadata");
      
                      String canonicalName =
                          osbResourceConfiguration.getCanonicalName();
                      fileWriter.write("- Resource: " + canonicalName + "\n");
                      if (canonicalName.contains("ProxyService")) {
                          String servicetype =
                              (String)configuration.get("service-type");
                          CompositeDataSupport transportconfiguration =
                              (CompositeDataSupport)configuration.get("transport-configuration");
                          String transporttype =
                              (String)transportconfiguration.get("transport-type");
                          String url = (String)transportconfiguration.get("url");
                          
                          fileWriter.write("  Configuration of " + canonicalName +
                                           ":" + " service-type=" + servicetype +
                                           ", transport-type=" + transporttype +
                                           ", url=" + url + "\n");
                      } else if (canonicalName.contains("BusinessService")) {
                          String servicetype =
                              (String)configuration.get("service-type");
                          CompositeDataSupport transportconfiguration =
                              (CompositeDataSupport)configuration.get("transport-configuration");
                          String transporttype =
                              (String)transportconfiguration.get("transport-type");
                          CompositeData[] urlconfiguration =
                              (CompositeData[])transportconfiguration.get("url-configuration");
                          String url = (String)urlconfiguration[0].get("url");
      
                          fileWriter.write("  Configuration of " + canonicalName +
                                           ":" + " service-type=" + servicetype +
                                           ", transport-type=" + transporttype +
                                           ", url=" + url + "\n");
                      }
      
                      if (canonicalName.contains("ProxyService")) {
      
                          fileWriter.write("" + "\n");
      
                          CompositeDataSupport pipeline =
                              (CompositeDataSupport)configuration.get("pipeline");
                          TabularDataSupport nodes =
                              (TabularDataSupport)pipeline.get("nodes");
      
                          TabularType tabularType = nodes.getTabularType();
                          CompositeType rowType = tabularType.getRowType();
      
                          Iterator keyIter = nodes.keySet().iterator();
      
                          for (int j = 0; keyIter.hasNext(); ++j) {
      
                              Object[] key = ((Collection)keyIter.next()).toArray();
      
                              CompositeData compositeData = nodes.get(key);
      
                              String label = (String)compositeData.get("label");
                              String type = (String)compositeData.get("type");
                              if (type.equals("Action") &&
                                  (label.contains("wsCallout") ||
                                   label.contains("javaCallout") ||
                                   label.contains("route"))) {
      
                                  fileWriter.write("    Index#" + j + ":\n");
                                  printCompositeData(nodes, key, 1);
                              } else if (type.equals("OperationalBranchNode") ||
                                         type.equals("RouteNode")) {
      
                                  fileWriter.write("    Index#" + j + ":\n");
                                  printCompositeData(nodes, key, 1);
                              }
                          }
      
                          fileWriter.write("" + "\n");
                          fileWriter.write("  Metadata of " + canonicalName + "\n");
      
                          String[] dependencies =
                              (String[])metadata.get("dependencies");
                          fileWriter.write("    dependencies:\n");
                          int size;
                          size = dependencies.length;
                          for (int i = 0; i < size; i++) {
                              String dependency = dependencies[i];
                              if (!dependency.contains("Xquery")) {
                                  fileWriter.write("      - " + dependency + "\n");
                              }
                          }
                          fileWriter.write("" + "\n");
      
                          String[] dependents = (String[])metadata.get("dependents");
                          fileWriter.write("    dependents:\n");
                          size = dependents.length;
                          for (int i = 0; i < size; i++) {
                              String dependent = dependents[i];
                              fileWriter.write("      - " + dependent + "\n");
                          }
                          fileWriter.write("" + "\n");
      
                      }
      
                  }
                  fileWriter.close();
      
                  System.out.println("Succesfully completed");
      
              } catch (Exception ex) {
                  ex.printStackTrace();
              } finally {
                  if (connector != null)
                      try {
                          connector.close();
                      } catch (Exception e) {
                          e.printStackTrace();
                      }
              }
          }
      
      
          /*
             * Initialize connection to the Domain Runtime MBean Server.
             */
      
          public static void initConnection(String hostname, String portString,
                                            String username,
                                            String password) throws IOException,
                                                                    MalformedURLException {
      
              String protocol = "t3";
              Integer portInteger = Integer.valueOf(portString);
              int port = portInteger.intValue();
              String jndiroot = "/jndi/";
              String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;
      
              JMXServiceURL serviceURL =
                  new JMXServiceURL(protocol, hostname, port, jndiroot +
                                    mbeanserver);
      
              Hashtable hashtable = new Hashtable();
              hashtable.put(Context.SECURITY_PRINCIPAL, username);
              hashtable.put(Context.SECURITY_CREDENTIALS, password);
              hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                            "weblogic.management.remote");
              hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));
      
              connector = JMXConnectorFactory.connect(serviceURL, hashtable);
              connection = connector.getMBeanServerConnection();
          }
      
      
          private static Ref constructRef(String refType, String serviceURI) {
              Ref ref = null;
              String[] uriData = serviceURI.split("/");
              ref = new Ref(refType, uriData);
              return ref;
          }
      
      
          /**
           * Finds the specified MBean object
           *
           * @param connection - A connection to the MBeanServer.
           * @return Object - The MBean or null if the MBean was not found.
           */
          public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
              try {
                  ObjectName objectName =
                      new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
                  return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                                  objectName);
              } catch (MalformedObjectNameException e) {
                  e.printStackTrace();
                  return null;
              }
          }
      
      
          public static void main(String[] args) {
              try {
                if (args.length < 4) {
                    System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");
      
                  } else {
                      HashMap<String, String> map = new HashMap<String, String>();
      
                      map.put("HOSTNAME", args[0]);
                      map.put("PORT", args[1]);
                      map.put("USERNAME", args[2]);
                      map.put("PASSWORD", args[3]);
                      OSBServiceExplorer osbServiceExplorer =
                          new OSBServiceExplorer(map);
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
      
          }
      }
      
      

      The post Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

      Getting ADF Data in a Jet Component (2)

      Wed, 2017-03-08 06:47

In my previous blog I explained how to get ADF Data in a Jet Component. This was done by iterating through a ViewObject and rendering a component per element in the View Object. When you want to use the DVT's from Oracle JET, this won't do the trick, because you need the entire data set to be present at once in your component. This blog will show you how to do that without using Rest Services.

      My colleague Lucas Jellema made a JSONProviderBean, which makes data from data bindings available as nested JSON object in client side JavaScript.1

      Using this bean we can use the iterator binding of our View Object in our component page fragment.

      
      
      <div>
       <amis-chart chart-data="#{jsonProviderBean[bindings.EmployeesVO.DCIteratorBinding]}"/>
       </div>
      
      
      

      This will pass the JSON as a string to our component.

          {
              "properties": {
                  "chartData": {
                      "type": "string"
                  }
              }
               
          }
      

In our component viewmodel we can now parse this string into a JSON object. The “values” object of this JSON object contains the data we need for our barchart, but it is not in a form the barchart can understand. Therefore you need to write a function to get the data you need and put it into a format that the barchart does understand.

    define(['ojs/ojcore', 'knockout'],   // module dependencies assumed here: ko is needed for ko.observableArray below
    function (oj, ko) {

    function AmisChartComponentModel(context) {
              var self = this;
          
              context.props.then(function (propertymap) {
                  self.properties = propertymap;
                  var dataAsJson = JSON.parse(propertymap.chartData);
                  var barData = self.createBarSeries(dataAsJson.values);
                  /* set chart data */
                  self.barSeriesValue = ko.observableArray(barData);
              })
              
              //function to transform the data.
              self.createBarSeries = function (jsonDataArray) {
                  var data = [];
                  jsonDataArray.forEach(function (item, index, arr) {
                      data.push( {
                          "name" : item.FirstName, "items" : [item.Salary]
                      });
                  })
                  return data;
              }    
              
          }
          return AmisChartComponentModel;
      
      });
      

We now have our entire employee data set available for the barchart. In this case I made a chart of Salary per employee. We can do all the fancy interactions with the component that we normally can as well, for example stacking the data or switching from a horizontal to a vertical graph.

         

      Sources
      1. https://github.com/lucasjellema/adf-binding-to-json
      2. https://technology.amis.nl/2017/03/07/getting-adf-data-jet-component/
      3. http://andrejusb.blogspot.nl/2015/12/improved-jet-rendering-in-adf.html
      4. https://blogs.oracle.com/groundside/entry/jet_composite_components_i_backgrounder (and the other blogs)
      5. Source of this demo: Github
      Versions used

      JDeveloper 12.1.3,
      OracleJet V2.2.0

      Disclaimer

      The information is based on my personal research. At the moment, Oracle does not support or encourage integrating ADF and Jet. Oracle is working on JET Composite Components in ADF.

      The post Getting ADF Data in a Jet Component (2) appeared first on AMIS Oracle and Java Blog.

      Getting ADF Data in a Jet Component (1)

      Tue, 2017-03-07 09:33

Oracle JET has been around for a while, and at this moment we are investigating what it would take to integrate JET with our existing ADF Application. In the current ADF application we want to make a dashboard in JET, however we still need to know which project to get the data for. Therefore I am researching how to get data from our ADF application into our JET part. In this blog I will show you a quick and easy way to get your ADF BC data into your JET components without using REST services.

      I used the blog of Andrejus1 to set up JET within my ADF Application.

      Add the JET libraries to the public_html folder of the ViewController project.

      (Final) Structure of the project:

      Make a jsf page and use af:resources to get to the css and requireJS and add the main.js

      <?xml version='1.0' encoding='UTF-8'?>
      <!DOCTYPE html>
      <f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich" xmlns:dvt="http://xmlns.oracle.com/dss/adf/faces" xmlns:ui="http://java.sun.com/jsf/facelets">
          <af:document title="main.jsf" id="d1">
              <af:messages id="m1"/>
              <af:resource type="css" source="jet/css/alta/2.2.0/web/alta.min.css"/>
              <af:resource type="javascript" source="jet/js/libs/require/require.js"/>
              <img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" data-wp-preserve="%3Cscript%3E%0A%20%20%20%20%20%20%20%20%20%20require.config(%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20baseUrl%20%3A%20%22jet%2Fjs%22%0A%20%20%20%20%20%20%20%20%20%20%7D)%3B%0A%0A%20%20%20%20%20%20%20%20%20%20require(%5B%22main%22%5D)%3B%0A%20%20%20%20%20%20%20%20%3C%2Fscript%3E" data-mce-resize="false" data-mce-placeholder="1" class="mce-object" width="20" height="20" alt="&lt;script&gt;" title="&lt;script&gt;" />
              <af:form id="f1">
              
              </af:form>
          </af:document>
      </f:view>
      
      

Then I added my composite component folder to the js folder of JET. My component is named amis-person and will show the name of the person in capital letters and the email address within a blue box. You can read more about composite components in the blog series of Duncan2

Put the metadata directly in the loader.js instead of in a separate .json file, otherwise it will not work! When you do it via the .json file and you console.log the metadata in the function, you will see it does not print out the metadata from the .json file.

      
      define(['ojs/ojcore',
             './amis-person',
              'text!./amis-person.html',
              'css!./amis-person',
              'ojs/ojcomposite'],
        function (oj, ComponentModel, view, css) {
              'use strict';
               var metadata = '{ "properties": { "amisPersonName": { "type": "string"}, "amisPersonEmail": { "type": "string"}} }';
             oj.Composite.register('amis-person',
            {
      
            metadata: { inline: JSON.parse(metadata) },
            viewModel: { inline: ComponentModel },
            view: { inline: view },
            css: { inline: css }
             });
         });
      

      Import the component in main.js to make it available.

      require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'jet-composites/amis-person/loader'],
      function (oj, ko, $){
      function ViewModel() {
    var self = this;
      }
          ko.applyBindings(new ViewModel(), document.body);
      })
      
      

      Create a page fragment where you will put the html to show your component, in this case it is just the composite component.

      <?xml version='1.0' encoding='UTF-8'?>
        <ui:composition xmlns:ui="http://java.sun.com/jsf/facelets">
          <amis-person amis-person-name="NAME" amis-person-email="EMAIL" />
        </ui:composition>
      

      In the jsf page, create an iterator for the viewmodel and put the page fragment within the iterator

       <af:iterator id="iterator" value="#{bindings.EmployeesVO.collectionModel}" var="item">
          <ui:include src="/fragments/amis-person-fragment.jsff"/>
       </af:iterator>
      

      Change the bindings in the page fragment to match the output of the iterator

       <amis-person amis-person-name="#{item.bindings.FirstName.inputValue}" amis-person-email="#{item.bindings.Email.inputValue}" />
      

      That’s it, you are done. When I now run the project I see the data from the Employee Viewmodel in the Jet Component I made:

       

      Sources
      1. http://andrejusb.blogspot.nl/2015/12/improved-jet-rendering-in-adf.html
      2. https://blogs.oracle.com/groundside/entry/jet_composite_components_i_backgrounder (and the other blogs)
      3. ADFJetDemo Application or Github
      Versions used

      JDeveloper 12.1.3,
      OracleJet V2.2.0

      Disclaimer

      The information is based on my personal research. At the moment, Oracle does not support or encourage integrating ADF and Jet. Oracle is working on JET Composite Components in ADF.

There is also a second part, on how to do this with DVT's.

      The post Getting ADF Data in a Jet Component (1) appeared first on AMIS Oracle and Java Blog.

      Getting started with Oracle PaaS Service Manager Command Line Interface (PSM)

      Mon, 2017-03-06 23:41

Oracle PaaS Service Manager (PSM) provides a command line interface (CLI) with which you can manage the lifecycle of various services in Oracle Public Cloud. This opens the door for scripting (recurring) tasks – from (re)deployment of applications on ACCS to provisioning new environments. PSM makes performing admin operations on the Oracle Public Cloud a lot easier and more efficient, compared to using the GUI.

      Note that the CLI is a thin wrapper over PaaS REST APIs that invokes these APIs to support common PaaS features.

      The steps for installing and configuring PSM are simple enough – and take about 6 minutes. I will briefly walk you through them. They are also documented just fine.  Before I continue, I want to thank Abhijit Ramchandra Jere of Oracle for graciously helping me out with PSM.

      1. Install Python (3.3+) and cURL

      PSM is a Python based tool. To set it up and run it, you need to have Python set up on your machine.
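A quick sanity check of these prerequisites (assuming python, pip and curl are on the PATH) could look like this:

python --version    # should report 3.3 or higher
pip --version
curl --version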

      2. Download PSM

      The psmcli.zip can be downloaded from Cloud UI (as described here) or it can be fetched through cURL from the REST API (as described here):

curl -I -X GET -u "lucas.jellema:password" -H "X-ID-TENANT-NAME: cloud17" -H "Accept: application/json" https://psm.us.oraclecloud.com/paas/api/v1.1/instancemgmt/cloud17/services/cs/instances

      3. Install PSM as a Python Package

      With a simple statement, PSM is installed from the downloaded zip file (see here)

      pip install -U psmcli.zip

      image

      This installs PSM into the Python Scripts directory: image

      Verify the result using

      pip show psmcli

      image

      On Linux:

      image

       

      4. Configure PSM for the identity domain and the connection to the cloud

      Run the setup for PSM and configure it for use with your identity domain (see docs). Note: this step assumes that the Python scripts directory that contains PSM is in the PATH environment variable.

      psm setup

      image

      I am not sure if and how you can use PSM on your machine for multiple identity domains or user accounts. I have access to several Oracle Public Cloud identity domains – even in different data centers. I have now setup PSM for one of them. If I can do a setup for a second identity domain and then somehow be able to switch between the two is not yet clear to me.
      EDIT: switching to a different identity domain is simply done by running psm setup again. I need to provide the identity domain, region and credentials to make the switch. Note: psm remembers the set up across restart of the operating system.
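For example, switching is just a matter of re-running the setup:

psm setup
# psm asks for the identity domain, region and credentials again (as noted above)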

      5. Start using PSM for inspecting and manipulating PaaS Services

      PSM can be used with many PaaS Services – not yet all – for inspecting their health, stopping and (re)starting, scaling and performing many administrative activities. See docs for all of them.

      Some examples:

      List all applications on Application Container Cloud:

      psm accs apps

      image

      List log details for a specific application on ACCS:

psm accs log -n|--app-name name -i|--instance-name name

      psm accs log -n Artist-Enricher-API -i web.1

      and the list of log files is presented

      image

       

      6. Update PSM

To get rid of the slightly annoying message about there being a later version of PSM available – and to get hold of the latest version, you simply type:

      psm update

      and wait for maybe 15 seconds.

      image

       

      Issues:

I ran into an issue, caused as it turned out by having multiple Python versions on my machine. PSM got installed as a Python package with version 3.5 and I was trying to run PSM with Python 3.6 as the first version in my PATH environment variable. Clearly, that failed.

      The error I ran into: ModuleNotFoundError: No module named ‘opaascli’

      image

      The solution: I removed all but one Python version (3.5 because with 3.6 the installation did not go well because of missing pip) and then installed with that one version.

      Resources

      Documentation on PSM: http://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html

      Documentation on Oracle PaaS REST APIs: https://apicatalog.oraclecloud.com/ui/

      The post Getting started with Oracle PaaS Service Manager Command Line Interface (PSM) appeared first on AMIS Oracle and Java Blog.

      Runtime version of Node on Oracle Application Container Cloud (0.12 vs. 6.9.1) – easy single click upgrade

      Wed, 2017-03-01 23:32

Yesterday, I created a new application on Oracle Application Container Cloud. The application type is Node (fka Node.js), so the container type I had created was Node – rather than PHP or Java SE, the other two options currently available to me. I was a little dismayed to learn that the runtime Node version that my container was created with was (still) 0.12.17. I had assumed that by now ACCS would have moved to a more recent version of Node.

Today, after a little closer inspection, I realized that upgrading the runtime [version of Node] is actually very simple to do on ACCS. Go to the Administration tab in the application overview.

      image

      Two later versions for the Node runtime are listed – 4.6.1 and 6.9.1. Now we are talking! I can simply click on the Update button to have the runtime upgraded. I can then choose between the fast update, with some brief downtime, or the rolling upgrade that will not affect the service of my application – and take longer to complete.

      image

      I click on Restart. The UI informs me of the current action:

      image

      And I can track the in progress activity:

      image

The overall upgrade took several minutes to complete – somewhat longer still than I had expected. However, it took no more effort than clicking a button. And it did not impact my consumers. All in all, pretty smooth. And now I am on v6.9.1, which is pretty up to date.

I am not sure whether during the initial creation of the application I had the option to start out with this recent version of Node, rather than the fairly old v0.12 that was provisioned initially. If so, I missed it completely and it should be made more obvious. If I did not get the choice, then I believe that is a missed opportunity that Oracle may want to address in this cloud service.

      The post Runtime version of Node on Oracle Application Container Cloud (0.12 vs. 6.9.1) – easy single click upgrade appeared first on AMIS Oracle and Java Blog.

      Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX

      Tue, 2017-02-28 04:22

At a public sector organization in the Netherlands an OSB proxy service was (via JMS) reading messages from a WebLogic queue. These messages were then sent to a back-end system. Every evening during a certain time period the back-end system was down. Therefore, and also in case of planned maintenance, there was a requirement to be able to stop and start sending messages from the queue to the back-end system. Hence, a script was needed to disable/enable the OSB proxy service (deployed on OSB 11.1.1.7).

      This article will explain how the OSB proxy service can be disabled/enabled via WebLogic Server MBeans with JMX.

      A managed bean (MBean) is a Java object that represents a Java Management Extensions (JMX) manageable resource in a distributed environment, such as an application, a service, a component, or a device.

First a high-level overview of the MBeans is given. For further information see “Fusion Middleware Developing Custom Management Utilities With JMX for Oracle WebLogic Server”, via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13728/toc.htm

      Next the structure and use of the System MBean Browser in the Oracle Enterprise Manager Fusion Middleware Control is discussed.

      Finally the code to disable/enable the OSB proxy service is shown.

      To disable/enable an OSB proxy service, also WebLogic Scripting Tool (WLST) can be used, but in this case (also because of my java developer skills) JMX was used. For more information have a look for example at AMIS TECHNOLOGY BLOG: “Oracle Service Bus: enable / disable proxy service with WLST”, via url: https://technology.amis.nl/2011/01/10/oracle-service-bus-enable-disable-proxy-service-with-wlst/

      The Java Management Extensions (JMX) technology is a standard part of the Java Platform, Standard Edition (Java SE platform). The JMX technology was added to the platform in the Java 2 Platform, Standard Edition (J2SE) 5.0 release.

      The JMX technology provides a simple, standard way of managing resources such as applications, devices, and services. Because the JMX technology is dynamic, you can use it to monitor and manage resources as they are created, installed and implemented. You can also use the JMX technology to monitor and manage the Java Virtual Machine (Java VM).

      For another example of using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: “Doing performance measurements of an OSB Proxy Service by programmatically extracting performance metrics via the ServiceDomainMBean and presenting them as an image via a PowerPoint VBA module”, via url: https://technology.amis.nl/2016/01/30/performance-measurements-of-an-osb-proxy-service-by-using-the-servicedomainmbean/

      Basic Organization of a WebLogic Server Domain

      As you probably already know a WebLogic Server administration domain is a collection of one or more servers and the applications and resources that are configured to run on the servers. Each domain must include a special server instance that is designated as the Administration Server. The simplest domain contains a single server instance that acts as both Administration Server and host for applications and resources. This domain configuration is commonly used in development environments. Domains for production environments usually contain multiple server instances (Managed Servers) running independently or in groups called clusters. In such environments, the Administration Server does not host production applications.

      Separate MBean Types for Monitoring and Configuring

      All WebLogic Server MBeans can be organized into one of the following general types based on whether the MBean monitors or configures servers and resources:

      • Runtime MBeans contain information about the run-time state of a server and its resources. They generally contain only data about the current state of a server or resource, and they do not persist this data. When you shut down a server instance, all run-time statistics and metrics from the run-time MBeans are destroyed.
      • Configuration MBeans contain information about the configuration of servers and resources. They represent the information that is stored in the domain’s XML configuration documents.
      • Configuration MBeans for system modules contain information about the configuration of services such as JDBC data sources and JMS topics that have been targeted at the system level. Instead of targeting these services at the system level, you can include services as modules within an application. These application-level resources share the life cycle and scope of the parent application. However, WebLogic Server does not provide MBeans for application modules.
      MBean Servers

      At the core of any JMX agent is the MBean server, which acts as a container for MBeans.

      The JVM for an Administration Server maintains three MBean servers provided by Oracle and optionally maintains the platform MBean server, which is provided by the JDK itself. The JVM for a Managed Server maintains only one Oracle MBean server and the optional platform MBean server.

For each MBean server, what it creates, registers, and provides access to:

• Domain Runtime MBean Server: MBeans for domain-wide services. This MBean server also acts as a single point of access for MBeans that reside on Managed Servers. Only the Administration Server hosts an instance of this MBean server.

• Runtime MBean Server: MBeans that expose monitoring, run-time control, and the active configuration of a specific WebLogic Server instance. In release 11.1.1.7, the WebLogic Server Runtime MBean Server is configured by default to be the platform MBean server. Each server in the domain hosts an instance of this MBean server.

• Edit MBean Server: Pending configuration MBeans and operations that control the configuration of a WebLogic Server domain. It exposes a ConfigurationManagerMBean for locking, saving, and activating changes. Only the Administration Server hosts an instance of this MBean server.

• The JVM’s platform MBean server: MBeans provided by the JDK that contain monitoring information for the JVM itself. You can register custom MBeans in this MBean server. In release 11.1.1.7, WebLogic Server uses the JVM’s platform MBean server to contain the WebLogic run-time MBeans by default.

Service MBeans

Within each MBean server, WebLogic Server registers a service MBean under a simple object name. The attributes and operations in this MBean serve as your entry point into the WebLogic Server MBean hierarchies and enable JMX clients to navigate to all WebLogic Server MBeans in an MBean server after supplying only a single object name.

For each MBean server, its service MBean and JMX object name:

• The Domain Runtime MBean Server: DomainRuntimeServiceMBean. Provides access to MBeans for domain-wide services such as application deployment, JMS servers, and JDBC data sources. It also is a single point for accessing the hierarchies of all run-time MBeans and all active configuration MBeans for all servers in the domain. JMX object name: com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean

• Runtime MBean Servers: RuntimeServiceMBean. Provides access to run-time MBeans and active configuration MBeans for the current server. JMX object name: com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean

• The Edit MBean Server: EditServiceMBean. Provides the entry point for managing the configuration of the current WebLogic Server domain. JMX object name: com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean

Choosing an MBean Server

      If your client monitors run-time MBeans for multiple servers, or if your client runs in a separate JVM, Oracle recommends that you connect to the Domain Runtime MBean Server on the Administration Server instead of connecting separately to each Runtime MBean Server on each server instance in the domain.

      The trade off for directing all JMX requests through the Domain Runtime MBean Server is a slight degradation in performance due to network latency and increased memory usage. However, for most network topologies and performance requirements, the simplified code maintenance and enhanced security that the Domain Runtime MBean Server enables is preferable.

      System MBean Browser

      Oracle Enterprise Manager Fusion Middleware Control provides the System MBean Browser for managing MBeans that perform specific monitoring and configuration tasks.

      Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened.

      Here the previously mentioned types of MBean’s can be seen: Runtime MBeans and Configuration MBeans:

      When navigating to “Configuration MBeans | com.bea”, the previously mentioned EditServiceMBean can be found:

      When navigating to “Runtime MBeans | com.bea | Domain: <a domain>”, the previously mentioned DomainRuntimeServiceMBean can be found:

      Also the later on in this article mentioned MBeans can be found:

      For example for the ProxyServiceConfigurationMbean, the available operations can be found:

      When navigating to “Runtime MBeans | com.bea”, within each Server the previously mentioned RuntimeServiceMBean can be found.

       

      Code to disable/enable the OSB proxy service

The requirement to be able to stop and start sending messages to the back-end system from the queue was implemented by disabling/enabling the OSB proxy service JMSConsumerStuFZKNMessageService_PS.

Shortly before the back-end system goes down, dequeuing of the queue should be disabled.
      Right after the back-end system goes up again, dequeuing of the queue should be enabled.

      The state of the OSB Proxy service can be seen in the Oracle Service Bus Administration 11g Console (for example via the Project Explorer) in the tab “Operational Settings” of the proxy service.

For ease of use, two MS-DOS batch files were created, each using MBeans, to change the state of a service (proxy service or business service). As stated before, the WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

      • Disable_JMSConsumerStuFZKNMessageService_PS.bat

      On the server where the back-end system resides, the ms-dos batch file “Disable_JMSConsumerStuFZKNMessageService_PS.bat” is called.

      The content of the batch file is:

java.exe -classpath "OSBServiceState.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbservicestate.OSBServiceState "xyz" "7001" "weblogic" "xyz" "ProxyService" "JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS" "Disable"

      • Enable_JMSConsumerStuFZKNMessageService_PS.bat

      On the server where the back-end system resides, the ms-dos batch file “Enable_JMSConsumerStuFZKNMessageService_PS.bat” is called.

      The content of the batch file is:

java.exe -classpath "OSBServiceState.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbservicestate.OSBServiceState "xyz" "7001" "weblogic" "xyz" "ProxyService" "JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS" "Enable"

In both MS-DOS batch files a class named OSBServiceState is called via java.exe. The main method of this class expects the following parameters:

Parameter name   Description
--------------   -----------
HOSTNAME         Host name of the AdminServer
PORT             Port of the AdminServer
USERNAME         Username
PASSWORD         Password
SERVICETYPE      Type of resource. Possible values are:
                 • ProxyService
                 • BusinessService
SERVICEURI       Identifier of the resource. The name begins with the project name, followed by folder names and ending with the resource name.
ACTION           The action to be carried out. Possible values are:
                 • Enable
                 • Disable

Every change is carried out in its own session (via the SessionManagementMBean), which is automatically activated with description: OSBServiceState_script_<systemdatetime>

      This can be seen via the Change Center | View Changes of the Oracle Service Bus Administration 11g Console:

      The response from “Disable_JMSConsumerStuFZKNMessageService_PS.bat” is:

Disabling service JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS has been successfully completed

      In the Oracle Service Bus Administration 11g Console this change can be found as a Task:

      The result of changing the state of the OSB Proxy service can be checked in the Oracle Service Bus Administration 11g Console.

      The same applies when using “Enable_JMSConsumerStuFZKNMessageService_PS.bat”.

      In the sample code below the use of the following MBeans can be seen:

DomainRuntimeServiceMBean
Provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

This library is not by default provided in a WebLogic install and must be built. The simple way of how to do this is described in
“Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.
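For reference, building that library (wlfullclient.jar, which is also used further below) with the JarBuilder tool boils down to the following. This is a sketch based on the documentation referenced above, with WL_HOME standing for <Middleware Home Directory>/wlserver_10.3:

cd WL_HOME/server/lib
java -jar wljarbuilder.jar
# creates wlfullclient.jar in the current directory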

SessionManagementMBean
Provides API to create, activate or discard sessions. [http://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/SessionManagementMBean.html]

ProxyServiceConfigurationMBean
Provides API to enable/disable services and enable/disable monitoring for a proxy service. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ProxyServiceConfigurationMBean.html]

BusinessServiceConfigurationMBean
Provides API for managing business services. [https://docs.oracle.com/cd/E13171_01/alsb/docs25/javadoc/com/bea/wli/sb/management/configuration/BusinessServiceConfigurationMBean.html]

      Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

      Service findService(String name,
                          String type,
                          String location)
      

      This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.
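For example, once a proxy to the DomainRuntimeServiceMBean has been obtained (see findDomainRuntimeServiceMBean in the listing further below), the SessionManagementMBean can be looked up as follows. This is just a fragment for illustration; the same calls appear in the full listing:

SessionManagementMBean sessionManagementMBean =
    (SessionManagementMBean)domainRuntimeServiceMBean.findService(SessionManagementMBean.NAME,
                                                                  SessionManagementMBean.TYPE,
                                                                  null);
// Passing null as location means the primary MBeanServer, here the Domain Runtime MBean Server.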

      In the code example below certain java fields are used. For reading purposes the field values are shown in the following table:

Field                                            Field value
DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME  weblogic.management.mbeanservers.domainruntime
DomainRuntimeServiceMBean.OBJECT_NAME            com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
SessionManagementMBean.NAME                      SessionManagement
SessionManagementMBean.TYPE                      com.bea.wli.sb.management.configuration.SessionManagementMBean
ProxyServiceConfigurationMBean.NAME              ProxyServiceConfiguration
ProxyServiceConfigurationMBean.TYPE              com.bea.wli.sb.management.configuration.ProxyServiceConfigurationMBean
BusinessServiceConfigurationMBean.NAME           BusinessServiceConfiguration
BusinessServiceConfigurationMBean.TYPE           com.bea.wli.sb.management.configuration.BusinessServiceConfigurationMBean

      Because of the use of com.bea.wli.config.Ref.class , the following library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

      Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class , the following library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

      When running the code the following error was thrown:

      java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedSessionManagementMBean
      	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
      	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
      	at $Proxy0.findService(Unknown Source)
      	at nl.xyz.osbservice.osbservicestate.OSBServiceState.<init>(OSBServiceState.java:66)
      	at nl.xyz.osbservice.osbservicestate.OSBServiceState.main(OSBServiceState.java:217)
      Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedSessionManagementMBean
      	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
      	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
      	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
      	... 4 more
      Process exited.
      

      So because of the use of com.bea.wli.sb.management.configuration.DelegatedSessionManagementMBean.class the following library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.
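Putting the required libraries together, compiling the class could look like this (a sketch using the same classpath as in the batch files above; the actual jar locations depend on your environment):

javac -classpath "com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl/xyz/osbservice/osbservicestate/OSBServiceState.java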

      package nl.xyz.osbservice.osbservicestate;
      
      
      import com.bea.wli.config.Ref;
      import com.bea.wli.sb.management.configuration.BusinessServiceConfigurationMBean;
      import com.bea.wli.sb.management.configuration.ProxyServiceConfigurationMBean;
      import com.bea.wli.sb.management.configuration.SessionManagementMBean;
      
      import java.io.IOException;
      
      import java.net.MalformedURLException;
      
      import java.util.HashMap;
      import java.util.Hashtable;
      import java.util.Properties;
      
      import javax.management.MBeanServerConnection;
      import javax.management.MalformedObjectNameException;
      import javax.management.ObjectName;
      import javax.management.remote.JMXConnector;
      import javax.management.remote.JMXConnectorFactory;
      import javax.management.remote.JMXServiceURL;
      
      import javax.naming.Context;
      
      import weblogic.management.jmx.MBeanServerInvocationHandler;
      import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
      
      
      public class OSBServiceState {
          private static MBeanServerConnection connection;
          private static JMXConnector connector;
      
          public OSBServiceState(HashMap props) {
              super();
              SessionManagementMBean sessionManagementMBean = null;
              String sessionName =
                  "OSBServiceState_script_" + System.currentTimeMillis();
              String servicetype;
              String serviceURI;
              String action;
              String description = "";
      
      
              try {
      
                  Properties properties = new Properties();
                  properties.putAll(props);
      
                  initConnection(properties.getProperty("HOSTNAME"),
                                 properties.getProperty("PORT"),
                                 properties.getProperty("USERNAME"),
                                 properties.getProperty("PASSWORD"));
      
                  servicetype = properties.getProperty("SERVICETYPE");
                  serviceURI = properties.getProperty("SERVICEURI");
                  action = properties.getProperty("ACTION");
      
                  DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                      (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);
      
                  // Create a session via SessionManagementMBean.
                  sessionManagementMBean =
                          (SessionManagementMBean)domainRuntimeServiceMBean.findService(SessionManagementMBean.NAME,
                                                                                        SessionManagementMBean.TYPE,
                                                                                        null);
                  sessionManagementMBean.createSession(sessionName);
      
                  if (servicetype.equalsIgnoreCase("ProxyService")) {
      
                      // A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.
                      // A Ref object has two components: A typeId that indicates whether it is a project, folder, or a resource, and an array of names of non-zero length.
                      // For a resource the array of names start with the project name, followed by folder names, and end with the resource name.
                      // For a project, the Ref object simply contains one name component, that is, the project name.
                      // A Ref object for a folder contains the project name followed by the names of the folders which it is nested under.
                      Ref ref = constructRef("ProxyService", serviceURI);
      
                      ProxyServiceConfigurationMBean proxyServiceConfigurationMBean =
                          (ProxyServiceConfigurationMBean)domainRuntimeServiceMBean.findService(ProxyServiceConfigurationMBean.NAME +
                                                                                                "." +
                                                                                                sessionName,
                                                                                                ProxyServiceConfigurationMBean.TYPE,
                                                                                                null);
                      if (action.equalsIgnoreCase("Enable")) {
                          proxyServiceConfigurationMBean.enableService(ref);
                          description = "Enabled the service: " + serviceURI;
                          System.out.print("Enabling service " + serviceURI);
                      } else if (action.equalsIgnoreCase("Disable")) {
                          proxyServiceConfigurationMBean.disableService(ref);
                          description = "Disabled the service: " + serviceURI;
                          System.out.print("Disabling service " + serviceURI);
                      } else {
                          System.out.println("Unsupported value for ACTION");
                      }
                  } else if (servicetype.equals("BusinessService")) {
                      Ref ref = constructRef("BusinessService", serviceURI);
      
                      BusinessServiceConfigurationMBean businessServiceConfigurationMBean =
                          (BusinessServiceConfigurationMBean)domainRuntimeServiceMBean.findService(BusinessServiceConfigurationMBean.NAME +
                                                                                                   "." +
                                                                                                   sessionName,
                                                                                                   BusinessServiceConfigurationMBean.TYPE,
                                                                                                   null);
                      if (action.equalsIgnoreCase("Enable")) {
                          businessServiceConfigurationMBean.enableService(ref);
                          description = "Enabled the service: " + serviceURI;
                          System.out.print("Enabling service " + serviceURI);
                      } else if (action.equalsIgnoreCase("Disable")) {
                          businessServiceConfigurationMBean.disableService(ref);
                          description = "Disabled the service: " + serviceURI;
                          System.out.print("Disabling service " + serviceURI);
                      } else {
                          System.out.println("Unsupported value for ACTION");
                      }
                  }
                  sessionManagementMBean.activateSession(sessionName, description);
                  System.out.println(" has been succesfully completed");
              } catch (Exception ex) {
                  if (sessionManagementMBean != null) {
                      try {
                         sessionManagementMBean.discardSession(sessionName);
                          System.out.println(" resulted in an error.");
                      } catch (Exception e) {
                          System.out.println("Unable to discard session: " +
                                             sessionName);
                      }
                  }
      
                  ex.printStackTrace();
              } finally {
                  if (connector != null)
                      try {
                          connector.close();
                      } catch (Exception e) {
                          e.printStackTrace();
                      }
              }
          }
      
      
          /*
             * Initialize connection to the Domain Runtime MBean Server.
             */
      
          public static void initConnection(String hostname, String portString,
                                            String username,
                                            String password) throws IOException,
                                                                    MalformedURLException {
      
              String protocol = "t3";
              Integer portInteger = Integer.valueOf(portString);
              int port = portInteger.intValue();
              String jndiroot = "/jndi/";
              String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;
      
              JMXServiceURL serviceURL =
                  new JMXServiceURL(protocol, hostname, port, jndiroot +
                                    mbeanserver);
      
              Hashtable hashtable = new Hashtable();
              hashtable.put(Context.SECURITY_PRINCIPAL, username);
              hashtable.put(Context.SECURITY_CREDENTIALS, password);
              hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                            "weblogic.management.remote");
              hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));
      
              connector = JMXConnectorFactory.connect(serviceURL, hashtable);
              connection = connector.getMBeanServerConnection();
          }
      
      
          private static Ref constructRef(String refType, String serviceURI) {
              Ref ref = null;
              String[] uriData = serviceURI.split("/");
              ref = new Ref(refType, uriData);
              return ref;
          }
      
      
          /**
           * Finds the specified MBean object
           *
           * @param connection - A connection to the MBeanServer.
           * @return Object - The MBean or null if the MBean was not found.
           */
          public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
              try {
                  ObjectName objectName =
                      new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
                  return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                                  objectName);
              } catch (MalformedObjectNameException e) {
                  e.printStackTrace();
                  return null;
              }
          }
      
      
          public static void main(String[] args) {
              try {
                if (args.length < 7) {
                    System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD, SERVICETYPE, SERVICEURI, ACTION.");
      
                  } else {
                      HashMap<String, String> map = new HashMap<String, String>();
      
                      map.put("HOSTNAME", args[0]);
                      map.put("PORT", args[1]);
                      map.put("USERNAME", args[2]);
                      map.put("PASSWORD", args[3]);
                      map.put("SERVICETYPE", args[4]);
                      map.put("SERVICEURI", args[5]);
                      map.put("ACTION", args[6]);
                      OSBServiceState osbServiceState = new OSBServiceState(map);
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
      
          }
      }
      
      

      The post Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

      Dump Oracle data into a delimited ascii file with PL/SQL

      Fri, 2017-02-24 08:30

      This is how I dump data from an Oracle Database (tested on 8i,9i,10g,11g,12c) to a delimited ascii file:

      SQL*Plus: Release 12.1.0.2.0 Production on Fri Feb 24 13:55:47 2017
      Copyright (c) 1982, 2014, Oracle.  All rights reserved.
      
      Connected to:
      Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
      
      SQL> set timing on
      SQL> select Dump_Delimited('select * from all_objects', 'all_objects.csv') nr_rows from dual;
      
         NR_ROWS
      ----------
           97116
      
      Elapsed: 00:00:11.87
      SQL> ! cat /u01/etl/report/all_objects_readme.txt
      
      
        *********************************************************************  
        Record Layout of file /u01/etl/report/all_objects.csv
        *********************************************************************  
      
      
        Column                          Sequence  MaxLength  Datatype  
        ------------------------------  --------  ---------  ----------  
      
        OWNER                           1         128        VARCHAR2                 
        OBJECT_NAME                     2         128        VARCHAR2                 
        SUBOBJECT_NAME                  3         128        VARCHAR2                 
        OBJECT_ID                       4         24         NUMBER                   
        DATA_OBJECT_ID                  5         24         NUMBER                   
        OBJECT_TYPE                     6         23         VARCHAR2                 
        CREATED                         7         20         DATE                     
        LAST_DDL_TIME                   8         20         DATE                     
        TIMESTAMP                       9         19         VARCHAR2                 
        STATUS                          10        7          VARCHAR2                 
        TEMPORARY                       11        1          VARCHAR2                 
        GENERATED                       12        1          VARCHAR2                 
        SECONDARY                       13        1          VARCHAR2                 
        NAMESPACE                       14        24         NUMBER                   
        EDITION_NAME                    15        128        VARCHAR2                 
        SHARING                         16        13         VARCHAR2                 
        EDITIONABLE                     17        1          VARCHAR2                 
        ORACLE_MAINTAINED               18        1          VARCHAR2                 
      
      
        ----------------------------------  
        Generated:     24-02-2017 13:56:50
        Generated by:  ETL
        Columns Count: 18
        Records Count: 97116
        Delimiter: ][
        Row Delimiter: ]
        ----------------------------------  
      
      SQL> 
      

Next to the query and the generated filename, the Dump_Delimited function takes another 6 parameters, each one with a default value. Check out the PL/SQL, and BTW… the basics for this code come from Tom Kyte.
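For example, a call that overrides some of those defaults (say a semicolon delimited file without the record layout readme) could look like this; it is a sketch, and the parameter meanings follow from the function specification below:

SELECT Dump_Delimited( 'select owner, object_name, object_type from all_objects'
                     , 'all_objects_semicolon.csv'
                     , ';'    -- column delimiter
                     , ''     -- no extra row delimiter
                     , NULL   -- comment
                     , 0      -- skip the record layout (readme) file
                     ) nr_rows
FROM dual;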

      SET DEFINE OFF;
      CREATE OR REPLACE DIRECTORY ETL_UNLOAD_DIR AS '/u01/etl/report';
      GRANT READ, WRITE ON DIRECTORY ETL_UNLOAD_DIR TO ETL;
      
      CREATE OR REPLACE FUNCTION Dump_Delimited
         ( P_query                IN VARCHAR2
         , P_filename             IN VARCHAR2
         , P_column_delimiter     IN VARCHAR2    := ']['
         , P_row_delimiter        IN VARCHAR2    := ']'
         , P_comment              IN VARCHAR2    := NULL
         , P_write_rec_layout     IN PLS_INTEGER := 1
         , P_dir                  IN VARCHAR2    := 'ETL_UNLOAD_DIR'
         , P_nr_is_pos_integer    IN PLS_INTEGER := 0 )
      RETURN PLS_INTEGER
       IS
          filehandle             UTL_FILE.FILE_TYPE;
          filehandle_rc          UTL_FILE.FILE_TYPE;
      
          v_user_name            VARCHAR2(100);
          v_file_name_full       VARCHAR2(200);
          v_dir                  VARCHAR2(200);
          v_total_length         PLS_INTEGER := 0;
          v_startpos             PLS_INTEGER := 0;
          v_datatype             VARCHAR2(30);
          v_delimiter            VARCHAR2(10):= P_column_delimiter;
          v_rowdelimiter         VARCHAR2(10):= P_row_delimiter;
      
          v_cursorid             PLS_INTEGER := DBMS_SQL.OPEN_CURSOR;
          v_columnvalue          VARCHAR2(4000);
          v_ignore               PLS_INTEGER;
          v_colcount             PLS_INTEGER := 0;
          v_newline              VARCHAR2(32676);
          v_desc_cols_table      DBMS_SQL.DESC_TAB;
          v_dateformat           NLS_SESSION_PARAMETERS.VALUE%TYPE;
          v_stat                 VARCHAR2(1000);
          counter                PLS_INTEGER := 0;
      BEGIN
      
          SELECT directory_path
            INTO v_dir 
          FROM DBA_DIRECTORIES
          WHERE directory_name = P_dir;
          v_file_name_full  := v_dir||'/'||P_filename;
      
          SELECT VALUE
            INTO v_dateformat
          FROM NLS_SESSION_PARAMETERS
          WHERE parameter = 'NLS_DATE_FORMAT';
      
          /* Use a date format that includes the time. */
          v_stat := 'alter session set nls_date_format=''dd-mm-yyyy hh24:mi:ss'' ';
          EXECUTE IMMEDIATE v_stat;
      
          filehandle := UTL_FILE.FOPEN( P_dir, P_filename, 'w', 32000 );
      
          /* Parse the input query so we can describe it. */
          DBMS_SQL.PARSE(  v_cursorid,  P_query, dbms_sql.native );
      
          /* Now, describe the outputs of the query. */
          DBMS_SQL.DESCRIBE_COLUMNS( v_cursorid, v_colcount, v_desc_cols_table );
      
          /* For each column, we need to define it, to tell the database
           * what we will fetch into. In this case, all data is going
           * to be fetched into a single varchar2(4000) variable.
           *
           * We will also adjust the max width of each column. 
           */
      IF P_write_rec_layout = 1 THEN
      
         filehandle_rc := UTL_FILE.FOPEN(P_dir, SUBSTR(P_filename,1, INSTR(P_filename,'.',-1)-1)||'_readme.txt', 'w');
      
      --Start Header
          v_newline := CHR(10)||CHR(10)||'  *********************************************************************  ';
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Record Layout of file '||v_file_name_full;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
          v_newline := '  *********************************************************************  '||CHR(10)||CHR(10);
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
          v_newline := '  Column                          Sequence  MaxLength  Datatype  ';
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
          v_newline := '  ------------------------------  --------  ---------  ----------  '||CHR(10);
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
      --End Header
      
      --Start Body
          FOR i IN 1 .. v_colcount
          LOOP
             DBMS_SQL.DEFINE_COLUMN( v_cursorid, i, v_columnvalue, 4000 );
             SELECT DECODE( v_desc_cols_table(i).col_type,  2, DECODE(v_desc_cols_table(i).col_precision,0,v_desc_cols_table(i).col_max_len,v_desc_cols_table(i).col_precision)+DECODE(P_nr_is_pos_integer,1,0,2)
                                                         , 12, 20, v_desc_cols_table(i).col_max_len )
               INTO v_desc_cols_table(i).col_max_len
             FROM dual;
      
             SELECT DECODE( TO_CHAR(v_desc_cols_table(i).col_type), '1'  , 'VARCHAR2'
                                                                  , '2'  , 'NUMBER'
                                                                  , '8'  , 'LONG'
                                                                  , '11' , 'ROWID'
                                                                  , '12' , 'DATE'
                                                                  , '96' , 'CHAR'
                                                                  , '108', 'USER_DEFINED_TYPE', TO_CHAR(v_desc_cols_table(i).col_type) )
               INTO v_datatype
             FROM DUAL;
      
             v_newline := RPAD('  '||v_desc_cols_table(i).col_name,34)||RPAD(i,10)||RPAD(v_desc_cols_table(i).col_max_len,11)||RPAD(v_datatype,25);
          UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
          END LOOP;
      --End Body
      
      ELSE
      
          FOR i IN 1 .. v_colcount LOOP
             DBMS_SQL.DEFINE_COLUMN( v_cursorid, i, v_columnvalue, 4000 );
             SELECT DECODE( v_desc_cols_table(i).col_type,  2, DECODE(v_desc_cols_table(i).col_precision,0,v_desc_cols_table(i).col_max_len,v_desc_cols_table(i).col_precision)+DECODE(P_nr_is_pos_integer,1,0,2)
                                                         , 12, 20, v_desc_cols_table(i).col_max_len )
               INTO v_desc_cols_table(i).col_max_len
             FROM dual;
           END LOOP;
      
      END IF;
      
          v_ignore := DBMS_SQL.EXECUTE(v_cursorid);
      
           WHILE ( DBMS_SQL.FETCH_ROWS(v_cursorid) > 0 )
           LOOP
              /* Build up a big output line. This is more efficient than
               * calling UTL_FILE.PUT inside the loop.
               */
              v_newline := NULL;
              FOR i IN 1 .. v_colcount LOOP
                  DBMS_SQL.COLUMN_VALUE( v_cursorid, i, v_columnvalue );
                  if i = 1 then
                    v_newline := v_newline||v_columnvalue;
                  else
                    v_newline := v_newline||v_delimiter||v_columnvalue;
                  end if;              
              END LOOP;
      
              /* Now print out that line and increment a counter. */
              UTL_FILE.PUT_LINE( filehandle, v_newline||v_rowdelimiter );
              counter := counter+1;
          END LOOP;
      
      IF P_write_rec_layout = 1 THEN
      
      --Start Footer
          v_newline := CHR(10)||CHR(10)||'  ----------------------------------  ';
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Generated:     '||SYSDATE;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Generated by:  '||USER;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Columns Count: '||v_colcount;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Records Count: '||counter;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Delimiter: '||v_delimiter;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
             v_newline := '  Row Delimiter: '||v_rowdelimiter;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
          v_newline := '  ----------------------------------  '||CHR(10)||CHR(10);
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
      --End Footer
      
      --Start Comment
          v_newline := '  '||P_comment;
            UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
      --End Comment
      
      UTL_FILE.FCLOSE(filehandle_rc);
      
      END IF;
      
          /* Free up resources. */
          DBMS_SQL.CLOSE_CURSOR(v_cursorid);
          UTL_FILE.FCLOSE( filehandle );
      
          /* Reset the date format ... and return. */
          v_stat := 'alter session set nls_date_format=''' || v_dateformat || ''' ';
          EXECUTE IMMEDIATE v_stat;
      
          RETURN counter;
      EXCEPTION
          WHEN OTHERS THEN
              DBMS_SQL.CLOSE_CURSOR( v_cursorid );
              UTL_FILE.FCLOSE_ALL;   -- release any file handles still open when the error occurred
              EXECUTE IMMEDIATE 'alter session set nls_date_format=''' || v_dateformat || ''' ';
              RETURN counter;
      
      END Dump_Delimited;
      /
      
      SHOW ERRORS;
      

      The post Dump Oracle data into a delimited ascii file with PL/SQL appeared first on AMIS Oracle and Java Blog.

      DIY Parallelization with Oracle DBMS_DATAPUMP

      Thu, 2017-02-23 12:17

      Oracle dbms_datapump provides a parallel option for exports and imports, but some objects cannot be processed in this mode. In a migration project from AIX 11gR2 to an ODA X5-2 (OL 5.9) running 12c, which included an initial load for Golden Gate, I had to deal with one of those objects: a 600G table with LOB fields, stored in the database as BasicFiles (traditional LOB storage).

      By applying some DIY parallelization I was able to bring the export time back from 14 hours to 35 minutes.
      Instrumental in this solution are the handy "detach" feature of the dbms_datapump package and the use of dbms_rowid to "split" the table data into same-sized chunks. The first allowed me to define and start datapump jobs without having to wait until each one finished; the second ensures that all jobs end within a short time of each other.
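
      To illustrate the rowid-based split, the query below (against a hypothetical table MY_BIG_TABLE) counts the rows that fall into chunk 0 of 32; each datapump job in the package further down applies the same predicate with its own chunk number, so together the 32 jobs cover the whole table exactly once.

      -- rows whose data block number maps to chunk 0 (of 32)
      SELECT COUNT(*)
      FROM   my_big_table
      WHERE  MOD( DBMS_ROWID.ROWID_BLOCK_NUMBER( rowid ), 32 ) = 0;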

      The following PL/SQL exports tables in 32 chunks with 32 concurrent datapump jobs. Feel free to adjust this "dop" as well as the schema and table names. Just one parameter is provided: it makes the export procedure as a whole wait for the end of all exports, so some other action (e.g. a file transfer) may start automatically afterwards.

      CREATE OR REPLACE PACKAGE Datapump_Parallel_Exp_Pck                                                                                                                                                                 
        IS                                                                                                                                                                                                  
          g_parallel   CONSTANT NUMBER       := 32;                                                                                                                               
          g_dmp_dir    CONSTANT VARCHAR2(25) := 'DATA_PUMP_DIR';                                                                                                                          
                                                                                                                                                                                                              
      ------------------------------------------------------------------------------------------------- 
      PROCEDURE Exec_Export
         ( P_wait IN PLS_INTEGER := 0 );                                                                                                                                                                   
                                                                                                                                                                                                              
      --------------------------------------------------------------------------------------------------                                                                                                      
      END Datapump_Parallel_Exp_Pck;
      /
      
      SHOW ERRORS;
      
      
      CREATE OR REPLACE PACKAGE BODY Datapump_Parallel_Exp_Pck                                                                                                                                                            
        IS                                                                                                                                                                                                    
                                                                                                                                                                                                              
      -------------------------------------------------------------------------------------------------                                                                                                       
      PROCEDURE Sleep                                                                                                                                                                                         
        (P_milliseconds IN NUMBER)
       AS LANGUAGE JAVA                                                                                                                                                                                       
          NAME 'java.lang.Thread.sleep(int)';                                                                                                                                                                 
                                                                                                                                                                                                              
      -------------------------------------------------------------------------------------------------                                                                                                       
      FUNCTION Get_Current_Scn                                                                                                                                                                                
        RETURN NUMBER                                                                                                                                                                                         
          IS                                                                                                                                                                                                  
          v_ret NUMBER := 0;                                                                                                                                                                                  
      BEGIN                                                                                                                                                                                                   
                                                                                                                                                                                                              
        SELECT current_scn                                                                                                                                                                                    
          INTO v_ret                                                                                                                                                                                          
        FROM v$database;                                                                                                                                                                                      
                                                                                                                                                                                                              
        RETURN v_ret;                                                                                                                                                                                         
                                                                                                                                                                                                              
        EXCEPTION                                                                                                                                                                                             
          WHEN OTHERS THEN                                                                                                                                                                                    
         RAISE_APPLICATION_ERROR( -20010, SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                              
      END Get_Current_Scn;                                                                                                                                                                                    
                                                                                                                                                                                                              
      -------------------------------------------------------------------------------------------------                                                                                                       
      PROCEDURE Exp_Tables_Parallel                                                                                                                                                                   
        ( P_scn  IN NUMBER                                                                                                                                                                                    
        , P_dmp OUT VARCHAR2 )                                                                                                                                                                                
       IS                                                                                                                                                                                                     
         h1                  NUMBER(10);                                                                                                                                                                      
         v_dop               NUMBER := g_parallel;                                                                                                                                                            
         v_curr_scn          NUMBER := P_scn;                                                                                                                                                                 
         v_job_name_org      VARCHAR2(30)  := 'PX_'||TO_CHAR(sysdate,'YYYYMMDDHH24MISS');    -- PX: Parallel Execution                                                                                     
         v_job_name          VARCHAR2(30)  := v_job_name_org;                                                                                                                                                 
         v_dmp_file_name_org VARCHAR2(100) := lower(v_job_name||'.dmp');                                                                                                                                      
         v_dmp_file_name     VARCHAR2(100) := v_dmp_file_name_org;                                                                                                                                            
         v_log_file_name_org VARCHAR2(100) := lower(v_job_name||'.log');                                                                                                                                      
         v_log_file_name     VARCHAR2(100) := v_log_file_name_org;                                                                                                                                            
                                                                                                                                                                                                              
      BEGIN                                                                                                                                                                                                   
                                                                                                                                                                                                              
      -- drop master table for "orphaned job" if it exists                                                                                                                                                       
         for i in ( select 'DROP TABLE '||owner_name||'.'||job_name||' PURGE' stat                                                                                                                            
                    from dba_datapump_jobs                                                                                                                                                                    
                    where owner_name = USER                                                                                                                                                                   
                      and instr(v_job_name, upper(job_name) ) > 0                                                                                                                                             
                      and state = 'NOT RUNNING'                                                                                                                                                               
                      and attached_sessions = 0 )                                                                                                                                                             
         loop                                                                                                                                                                                                 
           execute immediate i.stat;                                                                                                                                                                          
         end loop;                                                                                                                                                                                            
                                                                                                                                                                                                              
      -- set out parameter                                                                                                                                                                                    
        P_dmp := v_dmp_file_name;                                                                                                                                                                             
                                                                                                                                                                                                              
      -- start jobs in parallel                                                                                                                                                                               
        DBMS_OUTPUT.PUT_LINE('**** START SETTING DATAPUMP PARALLEL_TABLE_EXPORT JOBS ****' );                                                                                                                 
        for counter in 0 .. v_dop-1                                                                                                                                                                           
        loop                                                                                                                                                                                                  
          v_job_name      := v_job_name_org||'_'||lpad(counter+1,3,0);                                                                                                                                        
          v_dmp_file_name := v_dmp_file_name_org||'_'||lpad(counter+1,3,0);                                                                                                                                   
          v_log_file_name := v_log_file_name_org||'_'||lpad(counter+1,3,0);                                                                                                                                   
                                                                                                                                                                                                              
          h1 := dbms_datapump.open                                                                                                                                                                            
            ( operation => 'EXPORT'                                                                                                                                                                           
            , job_mode  => 'SCHEMA'                                                                                                                                                                           
            , job_name  => v_job_name                                                                                                                                                                         
            , version   => 'LATEST');                                                                                                                                                                         
         DBMS_OUTPUT.PUT_LINE( 'Successfully opened job: '||v_job_name);                                                                                                                                     
                                                                                                                                                                                                              
           dbms_datapump.set_parallel(handle  => h1, degree => 1);                                                                                                                                            
           dbms_datapump.set_parameter(handle => h1, name  => 'KEEP_MASTER', value => 0);                                                                                                                     
           dbms_datapump.set_parameter(handle => h1, name  => 'ESTIMATE', value => 'BLOCKS');                                                                                                                 
           dbms_datapump.set_parameter(handle => h1, name  => 'INCLUDE_METADATA', value => 0);                                                                                                                
           dbms_datapump.set_parameter(handle => h1, name  => 'METRICS', value => 1);                                                                                                                         
           dbms_datapump.set_parameter(handle => h1, name  => 'FLASHBACK_SCN', value => v_curr_scn);                                                                                                          
         DBMS_OUTPUT.PUT_LINE('Successfully set job parameters for job '||v_job_name);                                                                                                                        
                                                                                                                                                                                                              
      -- export just these schemas                                                                                                                                                                            
           dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_LIST', value => ' ''<SCHEMA01>'',''<SCHEMA02>'',''<SCHEMA03>'' ');                                                                                       
         DBMS_OUTPUT.PUT_LINE('Successfully set schemas for job '||v_job_name);                                                                                                                               
      -- export tables only                                                                                                                                                                                   
           dbms_datapump.metadata_filter(handle => h1, name => 'INCLUDE_PATH_EXPR', value => q'[='TABLE']' );                                                                                                 
         DBMS_OUTPUT.PUT_LINE('Successfully set table export for job '||v_job_name);                                                                                                                          
      -- export just these tables                                                                                                                                                                            
           dbms_datapump.metadata_filter(handle => h1, name => 'NAME_LIST', value => ' ''<TABLE01>'',''<TABLE02>'',''<TABLE03>'',''<TABLE03>'',''<TABLE04>'' ', object_path => 'TABLE');                                                                                                                                                                                                     
         DBMS_OUTPUT.PUT_LINE('Successfully set tables for job '||v_job_name);                                                                                                                                
      -- export just a 1/v_dop part of the data                                                                                                                                                             
           dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => 'WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID), '||v_dop||')='||counter);                                                    
         DBMS_OUTPUT.PUT_LINE('Successfully set data filter for job '||v_job_name);                                                                                                                          
                                                                                                                                                                                                              
           dbms_datapump.add_file                                                                                                                                                                             
             ( handle => h1                                                                                                                                                                                   
             , filename => v_dmp_file_name                                                                                                                                                                    
             , directory => g_dmp_dir                                                                                                                                                               
             , filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE                                                                                                                                              
             , reusefile => 1 );                                                                                                                                                                              
         DBMS_OUTPUT.PUT_LINE('Successfully add dmp file: '||v_dmp_file_name);                                                                                                                               
                                                                                                                                                                                                              
           dbms_datapump.add_file                                                                                                                                                                             
             ( handle => h1                                                                                                                                                                                   
             , filename => v_log_file_name                                                                                                                                                                    
             , directory => g_dmp_dir                                                                                                                                                               
             , filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);                                                                                                                                             
         DBMS_OUTPUT.PUT_LINE('Successfully add log file: '||v_log_file_name );                                                                                                                              
                                                                                                                                                                                                              
           dbms_datapump.log_entry(handle => h1, message => 'Job '||(counter+1)||'/'||v_dop||' starting at '||to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss')||' as of scn: '||v_curr_scn );                       
           dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);                                                                                                                         
         DBMS_OUTPUT.PUT_LINE('Successfully started job '||(counter+1)||'/'||v_dop||' at '||to_char(sysdate,'dd-mon-yyyy hh24:mi:ss') ||' as of scn: '||v_curr_scn );                                        
                                                                                                                                                                                                              
           dbms_datapump.detach(handle => h1);                                                                                                                                                                
         DBMS_OUTPUT.PUT_LINE('Successfully detached from job' );                                                                                                                                            
                                                                                                                                                                                                              
        end loop;                                                                                                                                                                                             
        DBMS_OUTPUT.PUT_LINE('**** END SETTING DATAPUMP PARALLEL_TABLE_EXPORT JOBS ****' );                                                                                                                   
                                                                                                                                                                                                              
      EXCEPTION                                                                                                                                                                                               
        WHEN OTHERS THEN                                                                                                                                                                                      
          dbms_datapump.detach(handle => h1);                                                                                                                                                                 
          DBMS_OUTPUT.PUT_LINE('Successfully detached from job' );                                                                                                                                            
          DBMS_OUTPUT.PUT_LINE('Error: '||SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                              
          DBMS_OUTPUT.PUT_LINE('**** END SETTING DATAPUMP PARALLEL_TABLE_EXPORT JOBS ****' );                                                                                                                 
          RAISE_APPLICATION_ERROR( -20010, SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                             
      END Exp_Tables_Parallel;                                                                                                                                                                        
                                                                                                         
      -------------------------------------------------------------------------------------------------                                                                                                       
      PROCEDURE Exec_Export
         ( P_wait IN PLS_INTEGER := 0 )                                                                                                                                                                   
        IS                                                                                                                                                                                                    
        v_scn         NUMBER;                                                                                                                                                                                     
        v_dmp         VARCHAR2(200);
        export_done   PLS_INTEGER := 0;                                                                                                                                                                              
                                                                                                                                                                                                              
      BEGIN                                                                                                                                                                                                   
                                                                                                                                                                                                              
      -- get current scn                                                                                                                                                                                      
        v_scn := Get_Current_Scn;                                                                                                                                                                             
                                                                                                                                                                                                              
      -- start parallel export processes + detach                                                                                                                                                             
        Exp_Tables_Parallel( v_scn, v_dmp );                                                                                                                                                       
      
        if P_wait = 1 then
      -- wait till all parallel export processes are finished 
      -- check every 5 minutes                                                                                                                                                                                    
          export_done := 0;
          loop                                                                                                           
            for i in ( select 1                                                                                                                                                                               
                       from ( select count(*) cnt                                                                                                                                                             
                              from user_tables                                                                                                                                                                
                              where instr(table_name,upper(replace(v_dmp,'.dmp'))) > 0 )                                                                                                                   
                       where cnt = 0 )                                                                                                                                                                        
            loop                                                                                                                                                                                              
              export_done := 1;                                                                                                                                                      
            end loop;
          
            if export_done = 1 then
              exit;
            end if;
            Sleep(300000);
          end loop; 
        end if;
                                                                                                                                                                                                              
      EXCEPTION                                                                                                                                                                                               
        WHEN OTHERS THEN                                                                                                                                                                                      
          DBMS_OUTPUT.PUT_LINE('Error: '||SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                              
          RAISE_APPLICATION_ERROR( -20010, SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                             
      END Exec_Export;                                                                                                                                                                                
                                                                                                                                                                                                              
      --------------------------------------------------------------------------------------------------------                                                                                                
      END Datapump_Parallel_Exp_Pck;
      /
      
      SHOW ERRORS;
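      Once the package compiles (and the schema and table placeholders have been replaced with real names), a minimal sketch of kicking off the export could look like this; with P_wait => 1 the call only returns after all export jobs have completed.

      SET SERVEROUTPUT ON
      BEGIN
         -- start 32 detached datapump export jobs and poll every 5 minutes until they are all done
         Datapump_Parallel_Exp_Pck.Exec_Export( P_wait => 1 );
      END;
      /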
      

      The post DIY Parallelization with Oracle DBMS_DATAPUMP appeared first on AMIS Oracle and Java Blog.

      AMIS Tools Showroom – The Sequel – Donderdag 16 maart 2017

      Mon, 2017-02-20 23:25

      Thursday 16 March

      17:00-21:00

      AMIS, Nieuwegein

      Registration via: bu.om@amis.nl

      On Thursday 16 March the second AMIS Tools Showroom session takes place. The first session was held on 13 December: in it, 16 AMIS colleagues showed all kinds of handy tools and utilities in short and very short presentations and demonstrations. The emphasis in that session was on tools for monitoring, communication and collaboration.

      In this second session we go looking for yet another collection of tools. This invitation therefore covers two aspects:

      · Would you like to be there on 16 March to see tools presented by your peers?

      · Do you have a tool you would like to present during this session? Think for example of tools for web conferencing & video streaming, screen cams, text editing, chat, image editing, data visualisation, document sharing, voice recognition. And other tools, apps and plugins that you find useful in your work and would like to show to your peers, in a short presentation (5-15 min), preferably with a demo.

      Please indicate via the following form which tools are interesting to you and which tool you would like to present: https://docs.google.com/forms/d/e/1FAIpQLSdNPwUACXxWaZGfs911UraVFQp5aWqeJVEx0xrSRFQTcYnYXA/viewform .
      Based on the results of this survey we can put together the agenda for this session.

      The post AMIS Tools Showroom – The Sequel – Donderdag 16 maart 2017 appeared first on AMIS Oracle and Java Blog.

      Node.js application writing to MongoDB – Kafka Streams findings read from Kafka Topic written to MongoDB from Node

      Mon, 2017-02-20 08:35

      MongoDB is a popular, lightweight, highly scalable, very fast and easy to use NoSQL document database. Written in C++, working with JSON documents (stored in the binary format BSON), and processing JavaScript commands using the V8 engine, MongoDB easily ties into many different languages and platforms, one of which is Node.JS. In this article, I describe first of all how a very simple interaction between Node.JS and MongoDB can be implemented.

       


      Then I do something a little more challenging: the Node.JS application consumes messages from an Apache Kafka topic and writes these messages to a MongoDB database collection, to make the results available for many clients to read and query. Finally I will show a little analytical query against the MongoDB collection, to retrieve some information we would not have been able to get from the plain Kafka Topic (although with Kafka Streams it just may be possible as well).

      You will see the MongoDB driver for Node.JS in action, as well as the kafka-node driver for Apache Kafka from Node.JS. All resources are in the GitHub Repo: https://github.com/lucasjellema/kafka-streams-running-topN.

      Prerequisites

      Node.JS is installed, as is MongoDB.

      Run the MongoDB server. On Windows, the command is mongod, optionally followed by the --dbpath parameter to specify the directory in which the data files are to be stored:

      mongod --dbpath c:\node\nodetest1\data\

      For the part where messages are consumed from a Kafka Topic, a running Apache Kafka Cluster is  available – as described in more detail in several previous articles such as https://technology.amis.nl/2017/02/13/kafka-streams-and-nodejs-consuming-and-periodically-reporting-in-node-js-on-the-results-from-a-kafka-streams-streaming-analytics-application/.

       

      Getting Started

      Start a new Node application, using npm init.

      Into this application, install the npm packages kafka-node and mongodb:

      npm install mongodb --save

      npm install kafka-node --save

      This installs the two Node modules with their dependencies and adds them to the package.json

       

      First Node Program – for Creating and Updating Two Static Documents

      This simple Node.JS program uses the mongodb driver for Node, connects to a MongoDB server running locally and to a database called test. It then tries to update two documents in the top3 collection in the test database; if a document does not yet exist (based on the key, which is the continent property) it is created. When the application is done running, two documents exist (and have their lastModified property set if they were updated).

      var MongoClient = require('mongodb').MongoClient;
      var assert = require('assert');
      
      // connect string for mongodb server running locally, connecting to a database called test
      var url = 'mongodb://127.0.0.1:27017/test';
      
      MongoClient.connect(url, function(err, db) {
        assert.equal(null, err);
        console.log("Connected correctly to server.");
         var doc = {
              "continent" : "Europe",
               "nrs" : [ {"name":"Belgium"}, {"name":"Luxemburg"}]
            };
         var doc2 = {
              "continent" : "Asia",
               "nrs" : [ {"name":"China"}, {"name":"India"}]
            };
        insertDocument(db,doc, function() {
          console.log("returned from processing doc "+doc.continent);  
          insertDocument(db,doc2, function() {
            console.log("returned from processing doc "+doc2.continent);          
            db.close();
            console.log("Connection to database is closed. Two documents should exist, either just created or updated. ");
            console.log("From the MongoDB shell: db.top3.find() should list the documents. ");
          });
        });
      });
      
      var insertDocument = function(db, doc, callback) {
         // first try to update; if a document could be updated, we're done 
         console.log("Processing doc for "+doc.continent);
         updateTop3ForContinent( db, doc, function (results) {      
             if (!results || results.result.n == 0) {
                // the document was not updated so presumably it does not exist; let's insert it  
                db.collection('top3').insertOne( 
                      doc
                    , function(err, result) {
                         assert.equal(err, null);
                         callback();
                      }
                    );   
             }//if
             else {
               callback();
             }
       }); //updateTop3ForContinent
      }; //insertDocument
      
      var updateTop3ForContinent = function(db, top3 , callback) {
         db.collection('top3').updateOne(
            { "continent" : top3.continent },
            {
              $set: { "nrs": top3.nrs },
              $currentDate: { "lastModified": true }
            }, function(err, results) {
            //console.log(results);
            callback(results);
         });
      };
      
      

      The console output from the Node application:

      (screenshot)

      The output on the MongoDB Shell:

      (screenshot)

      Note: I have used db.top3.find() three times: before running the Node application, after it has run once and after it has run a second time. Note that after the second time, the lastModified property was added.

      Second Node Program – Consume messages from Kafka Topic and Update MongoDB accordingly

      This application registers as Kafka Consumer on the Topic Top3CountrySizePerContinent. Each message that is produced to that topic is consumed by the Node application and handled by function handleCountryMessage. This function parses the JSON message received from Kafka, adds a property continent derived from the key of the Kafka message, and calls the insertDocument function. This function attempts to update a record in the MongoDB collection top3 that has the same continent property value as the document passed in as parameter. If the update succeeds, the handling of the Kafka message is complete and the MongoDB collection  contains the most recent standings produced by the Kafka Streams application. If the update fails, presumably that happens because there is no record yet for the current continent. In that case, a new document is inserted for the continent.


      /*
      This program connects to MongoDB (using the mongodb module )
      This program consumes Kafka messages from topic Top3CountrySizePerContinent to which the Running Top3 (size of countries by continent) is produced.
      
      This program records each latest update of the top 3 largest countries for a continent in MongoDB. If a document does not yet exist for a continent (based on the key which is the continent property) it is inserted.
      
      The program ensures that the MongoDB /test/top3 collection contains the latest Top 3 for each continent at any point in time.
      
      */
      
      var MongoClient = require('mongodb').MongoClient;
      var assert = require('assert');
      
      var kafka = require('kafka-node')
      var Consumer = kafka.Consumer
      var client = new kafka.Client("ubuntu:2181/")
      var countriesTopic = "Top3CountrySizePerContinent";
      
      
      // connect string for mongodb server running locally, connecting to a database called test
      var url = 'mongodb://127.0.0.1:27017/test';
      var mongodb;
      
      MongoClient.connect(url, function(err, db) {
        assert.equal(null, err);
        console.log("Connected correctly to MongoDB server.");
        mongodb = db;
      });
      
      var insertDocument = function(db, doc, callback) {
         // first try to update; if a document could be updated, we're done 
         updateTop3ForContinent( db, doc, function (results) {      
             if (!results || results.result.n == 0) {
                // the document was not updated so presumably it does not exist; let's insert it  
                db.collection('top3').insertOne( 
                      doc
                    , function(err, result) {
                         assert.equal(err, null);
                         console.log("Inserted doc for "+doc.continent);
                         callback();
                      }
                    );   
             }//if
             else {
               console.log("Updated doc for "+doc.continent);
               callback();
             }
       }); //updateTop3ForContinent
      }; //insertDocument
      
      var updateTop3ForContinent = function(db, top3 , callback) {
         db.collection('top3').updateOne(
            { "continent" : top3.continent },
            {
              $set: { "nrs": top3.nrs },
              $currentDate: { "lastModified": true }
            }, function(err, results) {
            //console.log(results);
            callback(results);
         });
      };
      
      // Configure Kafka Consumer for the Kafka Top3 Topic and handle each Kafka message (by calling handleCountryMessage)
      var consumer = new Consumer(
        client,
        [],
        {fromOffset: true}
      );
      
      consumer.on('message', function (message) {
        handleCountryMessage(message);
      });
      
      consumer.addTopics([
        { topic: countriesTopic, partition: 0, offset: 0}
      ], () => console.log("topic "+countriesTopic+" added to consumer for listening"));
      
      function handleCountryMessage(countryMessage) {
          var top3 = JSON.parse(countryMessage.value);
          var continent = new Buffer(countryMessage.key).toString('ascii');
          top3.continent = continent;
          // insert or update the top3 in the MongoDB server
          insertDocument(mongodb,top3, function() {
            console.log("Top3 recorded in MongoDB for "+top3.continent);  
          });
      
      }// handleCountryMessage
      
      

      Running the application produces the following output.

      Producing Countries:

      (screenshot)

      Producing Streaming Analysis – Running Top 3 per Continent:

      (screenshot)

      Processing Kafka Messages:

      (screenshot)

      Resulting MongoDB collection:

      (screenshot)

      And after a little while, here is the latest situation for Europe and Asia in the MongoDB collection:

      (screenshot)

      Resulting from processing the latest Kafka Stream result messages:

      (screenshot)

       

       

      Querying the MongoDB Collection

      The current set of top3 documents – one for each continent – stored in MongoDB can be queried, using MongoDB find and aggregation facilities.

      One query we can perform is to retrieve the top 5 largest countries in the world. Here is the query that gives us that insight. First it creates a single record per country (using $unwind to flatten the nrs array in each top3 document). The countries are then sorted by size (descending) and the first 5 of the sort result are retained. These five are then projected into a nicer looking output document that contains only the continent, country and area fields.

      db.top3.aggregate([ {$project: {nrs:1}},{$unwind:'$nrs'}, {$sort: {"nrs.size":-1}}, {$limit:5}, {$project: {continent:'$nrs.continent', country:'$nrs.name', area:'$nrs.size' }}])

      db.top3.aggregate([ 
         {$project: {nrs:1}}
        ,{$unwind:'$nrs'}
        , {$sort: {"nrs.size":-1}}
        , {$limit:5}
        , {$project: {continent:'$nrs.continent', country:'$nrs.name', area:'$nrs.size' }}
      ])
      

      (screenshot)

      (And because no continent has its number 3 country in the top 4 of this list, we can be sure that this top 5 is the actual top 5 of the world)

       

      Resources

      A very good read – although a little out of date – is this tutorial on 1st and 2nd steps with Node and Mongodb: http://cwbuecheler.com/web/tutorials/2013/node-express-mongo/ 

      MongoDB Driver for Node.js in the official MongoDB documentation: https://docs.mongodb.com/getting-started/node/client/ 

      Kafka Connect for MongoDB – YouTube intro – https://www.youtube.com/watch?v=AF9WyW4npwY 

      Combining MongoDB and Apache Kafka – with a Java application talking and listening to both: https://www.mongodb.com/blog/post/mongodb-and-data-streaming-implementing-a-mongodb-kafka-consumer 

      Tutorials Point MongoDB tutorials – https://www.tutorialspoint.com/mongodb/mongodb_sort_record.htm 

      Data Aggregation with Node.JS driver for MongoDB – https://docs.mongodb.com/getting-started/node/aggregation/

      The post Node.js application writing to MongoDB – Kafka Streams findings read from Kafka Topic written to MongoDB from Node appeared first on AMIS Oracle and Java Blog.

      Public Cloud consequences with an Oracle environment

      Sun, 2017-02-19 10:23

      The title suggests a negative view of using a Public Cloud. Well, it isn't. I'm convinced the Cloud is the next BIG thing, with huge advantages for businesses. But companies should be aware of what they choose. A lot of providers, including Oracle, are pushing us to the cloud, whether Public, Private or Hybrid, and make us believe that a public cloud will be an inevitable extension of our on-premises environment. Moving your WebLogic and database environment, including data, from on-premises to the public cloud and back is no problem, or will be no problem in the future. But what kind of hassle do you have to cope with, technically and business-wise?

      The list of implications and consequences in this post is not exhaustive, it intends to give you an idea of the technical and non-technical consequences of moving to and from the cloud.

      Public Cloud

The title of this blogpost is about the 'Public Cloud'. What I mean here are the Oracle-authorized Public Clouds: Amazon, Azure and of course the Oracle Cloud. More specifically: the PaaS and the IaaS environments. What were the big differences again, and why am I not talking about consequences with a Private Cloud?

I like to think that a Private Cloud is a better version of the average on-premises environment, with at least some of these characteristics of a Public Cloud:

      – Self-serviced

      – Shared services

      – Standardized

      – Virtual

      – Metering, automatic cost allocation and chargeback

So an on-premises environment may be a Private Cloud, but most of the time it's not the same. One characteristic – not mentioned yet – is very important: you (your IT department) should be in control of the environment regarding versions, patches, Service Level Agreements, etc.

       

A Public Cloud has at least the following characteristics next to the ones mentioned above:

      – Shared resources with other companies

      – Available through the internet

      And: no department of your company is in full control of versions, patches, Service Level Agreements etc.

       

So the scope of this article is the Public Cloud.

      Version differences

When deploying Oracle software in the Public Cloud, you are depending on the versions and patches offered by the cloud provider. E.g. Oracle announced it will deploy the most recent patches and releases first in the Oracle Public Cloud, and only afterwards will these releases and patches become available for on-premises environments.

So when you have a development and test environment in the Public Cloud, and production on-premises or in a private cloud, you must be fully aware of the version differences regarding your life cycle management.


      License differences

When deploying your software in an IaaS environment it's possible to 'Bring Your Own License'. But in the Oracle Cloud it's also possible to be charged through a 'metered' or 'unmetered' subscription.

Databases in the Oracle Public Cloud are fully installed with options, and it's possible and sometimes inevitable to use them. E.g. an extra tablespace you create in the Oracle Cloud is encrypted by default. So when moving this database to on-premises, you must own the Security Option license of the Oracle Database Enterprise Edition to be compliant.

And when moving a Pluggable Database from the (Oracle) cloud to on-premises, into a Container Database that already holds one Pluggable Database, you are running into the Multitenancy Option.

      The next picture is a test of creating two tablespaces in the Oracle Cloud, one default, and one with TDE. The result is two tablespaces with TDE.

(screenshot: both tablespaces turn out to be encrypted)

      Centralized administration

It would be an ideal world if you were able to manage your databases in the Private and Public Cloud from one central management point. When using the Oracle management solution, this should be Oracle Enterprise Manager Cloud Control. It's relatively simple to deploy agents on-premises and in the Oracle Cloud (PaaS and IaaS). But unfortunately it's not supported to deploy databases or middleware in Azure or Amazon AWS with Oracle Enterprise Manager. You will get the following message:

       

(screenshot of the error message)

      It’s technically possible to overcome this, but not supported at the moment. So with Oracle Enterprise Manager you basically have two options: Oracle Public Cloud or the Cloud@Customer option.

      Standards

Deployment of Oracle PaaS software is automated in the cloud according to the standards dictated by the provider. It would be very convenient if these standards were the same as for the on-premises software, but that's not always the case. At least, not in a PaaS environment in the Public Cloud.

In an IaaS environment you've generally got almost full control over the deployments. Almost? Yeah, you will still be relying on the version and configuration of the Operating System. To also overcome this, you have to choose the bare metal option in the Public Cloud.

       

      Azure / Amazon

Although the clouds of Microsoft Azure and Amazon are, for some, way ahead of the Oracle Cloud, the fact is that for Oracle the Oracle Cloud prevails.

As already said, it's not possible to manage the Oracle software in the Azure / Amazon public cloud with an on-premises Oracle Enterprise Manager. Amazon does support an agent for on-premises Enterprise Manager, but not the one you actually want to make your life easier.

You're depending on the breadcrumbs Oracle is willing to share with the other Cloud providers.

       

      SLA

The business would like to have a Service Level Agreement with the IT department, but the IT department could be relying on the SLA of the Public Cloud provider. And that could be a mismatch. For example, in my tests a while ago I had a trial agreement with Microsoft Azure. All was working fine, but suddenly Azure had a dispute with Oracle (I think) and I got the following message:

(screenshot of the message)

      You don’t want this in a real environment.

      Security

There is and there has been a distrust of security in the Public Cloud. I believe the security in the mentioned Public Clouds is generally better than, or at least equal to, what the average company wishes for.

Nevertheless, you may have to cope with different security baselines and changing security roles in your company in a so-called 'hybrid infrastructure'.

       

      Roles – Cloud Administrator

As already mentioned, there will be new roles (and more likely new jobs) to manage the Oracle software in the Public Cloud. Managing subscriptions, interoperability, compliance, etc. New processes, new management, new challenges.

       

      Data Sovereignty

For a lot of companies it's important to know where the data geographically actually resides. This data is subject to the laws of the country in which it is located. What is the roadmap of these Public Cloud providers?

       

      Latency

Separating the database and middleware will cause latency when one of them is in the Public Cloud. Oracle has two solutions for that:

– Cloud@Customer: bring the Public Cloud to on-premises with a separate machine, managed by Oracle.

– Connect the Internet backbone to your company. Equinix now provides access through a direct connect or the Equinix Cloud Exchange to Oracle Cloud in five (5) metros in the US, Europe, and Australia. Enterprise customers with a presence in these Equinix IBX data centers can leverage Oracle's FastConnect, giving them high-performance private access to Oracle Cloud.

       

       

      Resources:

      Private vs hybrid: http://www.stratoscale.com/blog/cloud/private-cloud-vs-public-hybrid-or-multi/?utm_source=twitter&utm_medium=social&utm_campaign=blog_recycle

      Time to embrace the cloud: https://software.dell.com/community/b/en/posts/dbas-embrace-the-cloud

      The post Public Cloud consequences with an Oracle environment appeared first on AMIS Oracle and Java Blog.

      Node.js application using SSE (Server Sent Events) to push updates (read from Kafka Topic) to simple HTML client application

      Sun, 2017-02-19 06:33

This article describes a simple Node.js application that uses Server Sent Events (SSE) technology to push updates to a simple HTML client, served through the Express framework. The updates originate from messages consumed from a Kafka Topic. Although the approach outlined in this article stands on its own, and does not even depend on Apache Kafka, it also forms the next step in a series of articles that describe a Kafka Streams application that processes messages on a Kafka Topic – deriving running Top N ranks – and produces them to another Kafka Topic. The Node.js application in this current article consumes the Top N messages and pushes them to the HTML client.

      The simple story told by this article is:

(diagram)

      And the complete picture – including the prequel discussed in https://technology.amis.nl/2017/02/12/apache-kafka-streams-running-top-n-grouped-by-dimension-from-and-to-kafka-topic/ – looks like this:

(diagram)

       

Sources are found on GitHub: https://github.com/lucasjellema/kafka-streams-running-topN/tree/master/kafka-node-express-topN-sse.

      Topics discussed in this article

      Browser, HTML & JavaScript

      • Programmatically add HTML elements
      • Add row to an HTML table and cells to a table row
      • Set Id attribute on HTML elements
      • Loop over all elements in an array using for .. of
      • Subscribe to a SSE server
      • Process an incoming SSE message (onMessage, process JSON)
      • Formatting (large) numeric values in JavaScript strings
      • Concatenating Strings in JavaScript

      Node & Express (server side JavaScript)

      • Consume message from Kafka Topic
      • Serve static HTML documents using Express
      • Expose API through Express that allows SSE clients to register for server sent events
      • Push JSON messages to all SSE clients
• Execute a function periodically, based on an interval, using a Node timer (setInterval)

      Browser – Client Side – HTML & JavaScript

The client side of the implementation is a simple HTML document (index.html) with embedded JavaScript. In a real application, the JavaScript should ideally be imported from a separate JavaScript library. In the <script> tag in the <head> of the document is the JavaScript statement that registers the browser as an SSE subscriber:

var source = new EventSource("../topn/updates");

      The SSE server is located at a path /topn/updates relative to the path where the index.html document was loaded (http://host:port/index.html – downloaded from the public sub directory in the Node application where static resources are located and served from). Requests to this URL path are handled through the Express framework in the Node application.

      On this EventSource object, a message handler is created – with the function to be invoked whenever an SSE event is received on this source:

      source.onmessage = function(event) { … }

The content of the function is fairly straightforward: the JSON payload from the event is parsed. It contains the name of a continent and an array of the current top 3 countries by size in that continent. Based on this information, the function locates the continent's row in the table with top3 records (if it does not yet exist, the row is created). The top3 in the SSE event is written to the innerHTML property of the second table cell in the continent's table row.

       

      
      <!DOCTYPE html>
      <html>
        <head>
          <title>Continent and Country Overview</title>
          <meta charset="UTF-8">
          <meta name="viewport" content="width=device-width, initial-scale=1.0">
          <script>
      	  // assume that API service is published on same server that is the server of this HTML file
          // send event to SSE stream 
          /* "{\"nrs\":[{\"code\":\"FR\",\"name\":\"France\",\"population\":66836154,\"size\":643801,\"continent\":\"Europe\"},{\"code\":\"DE\",\"name\":\"Germany\",\"population\":80722792,\"size\":357022,\"continent\":\"Europe\"},{\"code\":\"FI\",\"name\":\"Finland\",\"population\":5498211,\"size\":338145,\"continent\":\"Europe\"},null]}"
      update all Sse Client with message {"nrs":[{"code":"FR","name":"France","population":66836154,"size":643801,"continent":"Europe"},{"code":"DE","name":"Germany","population":80722792,"size":357022,"continent":"Europe"},{"code":"FI","name":"Finland","population":5498211,"size":338145,"continent":"Europe"},null]}
      */
            var source = new EventSource("../topn/updates");
            source.onmessage = function(event) {
              var top3 = JSON.parse(event.data);
              if (top3.continent) {
              var nrs = top3.nrs;
               var trID = "continent_"+top3.continent;
               // find row in table with id equal to continent
               var tr = document.getElementById(trID);
               // if not found, then add a row
               if (!tr) {
           // table does not yet have a row for the continent, then add it 
                 // find table with continents
                 var tbl = document.getElementById("continentsTbl");
                 // Create an empty <tr> element and add it to the 1st position of the table:
                 tr = tbl.insertRow(1);
                 tr.setAttribute("id", trID, 0);
                 // Insert new cells (<td> elements) at the 1st and 2nd position of the "new" <tr> element:
                 var cell1 = tr.insertCell(0);
                 cell1.setAttribute("id",trID+"Continent",0);
                 var cell2 = tr.insertCell(1);
                 cell2.setAttribute("id",trID+"Top3",0);
                 // Add some text to the new cells:
                 cell1.innerHTML = top3.continent;
               }// tr not found
               var top3Cell = document.getElementById(trID+"Top3");
               var list = "<ol>";
               for (country of top3.nrs) {
                  if (country) {
                      list= list.concat( "<li>",country.name," (size ",country.size.toLocaleString(),")","</li>");
                  }
               }//for
               list= list+ "</ol>";
               top3Cell.innerHTML = list;    
              }// if continent    
            };//onMessage
          </script>    
        </head>
        <body>
          <div id="loading">
            <h2>Please wait...</h2>
          </div>
          <div id="result">
            <table id="continentsTbl">
              <tr><td>Continent</td><td>Top 3 Countries by Size</td></tr>
            </table>
          </div>
        </body>
      </html>
      

       

      Node Application – Server Side – JavaScript using Express framework

The server side in this article consists of a simple Node application that leverages the Express module as well as the kafka-node module. A simple, generic SSE library is used – in the file sse.js. It exports the Connection object – which represents the SSE channel to a single client – and the Topic object that manages a collection of Connections (for all SSE consumers of a specific subject). When the connection underlying a Connection ends (on close), the Connection is removed from the collection.

      "use strict";
      
      console.log("loading sse.js");
      
      // ... with this middleware:
      function sseMiddleware(req, res, next) {
          console.log(" sseMiddleware is activated with "+ req+" res: "+res);
          res.sseConnection = new Connection(res);
          console.log(" res has now connection  res: "+res.sseConnection );
          next();
      }
      exports.sseMiddleware = sseMiddleware;
      /**
       * A Connection is a simple SSE manager for 1 client.
       */
      var Connection = (function () {
          function Connection(res) {
                console.log(" sseMiddleware construct connection for response ");
        
              this.res = res;
          }
          Connection.prototype.setup = function () {
              console.log("set up SSE stream for response");
              this.res.writeHead(200, {
                  'Content-Type': 'text/event-stream',
                  'Cache-Control': 'no-cache',
                  'Connection': 'keep-alive'
              });
          };
          Connection.prototype.send = function (data) {
              console.log("send event to SSE stream "+JSON.stringify(data));
              this.res.write("data: " + JSON.stringify(data) + "\n\n");
          };
          return Connection;
      }());
      
      exports.Connection = Connection;
      /**
       * A Topic handles a bundle of connections with cleanup after lost connection.
       */
      var Topic = (function () {
          function Topic() {
                console.log(" constructor for Topic");
        
              this.connections = [];
          }
          Topic.prototype.add = function (conn) {
              var connections = this.connections;
              connections.push(conn);
              console.log('New client connected, now: ', connections.length);
              conn.res.on('close', function () {
                  var i = connections.indexOf(conn);
                  if (i >= 0) {
                      connections.splice(i, 1);
                  }
                  console.log('Client disconnected, now: ', connections.length);
              });
          };
          Topic.prototype.forEach = function (cb) {
              this.connections.forEach(cb);
          };
          return Topic;
      }());
      exports.Topic = Topic;
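For reference, what Connection.prototype.send writes to the response is the standard Server Sent Events framing: a data: line containing the (JSON stringified) payload, terminated by an empty line. An illustrative, made-up event on the wire would look like this:

data: {"continent":"Europe","nrs":[{"name":"France","size":643801}]}

The Content-Type: text/event-stream header set in Connection.prototype.setup is what tells the browser's EventSource to keep the connection open and interpret these frames as events.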
      
      

The main application – in the file topNreport.js – does a few things:

      • it serves static HTML resources in the public subdirectory (which only contains the index.html document)
      • it implements the /topn/updates API where clients can register for SSE updates (that are collected in the sseClients Topic)
• it consumes messages from the Kafka Topic Top3CountrySizePerContinent and pushes each received message as an SSE event to all SSE clients
      • it schedules a function for periodic execution (once every 10 seconds at the moment); whenever the function executes, it sends a heartbeat event to all SSE clients

       

      /*
      This program serves a static HTML file (through the Express framework on top of Node). The browser that loads this HTML document registers itself as an SSE client with this program.
      
      This program consumes Kafka messages from topic Top3CountrySizePerContinent to which the Running Top3 (size of countries by continent) is produced.
      
      This program reports to all its SSE clients the latest update (or potentially a periodice top 3 largest countries per continent (with a configurable interval))
      
       
      */
      
      var express = require('express')
        , http = require('http')
        , sseMW = require('./sse');
      
      var kafka = require('kafka-node')
      var Consumer = kafka.Consumer
      var client = new kafka.Client("ubuntu:2181/")
      var countriesTopic = "Top3CountrySizePerContinent";
      
      var app = express();
      var server = http.createServer(app);
      
      var PORT = process.env.PORT || 3000;
      var APP_VERSION = '0.9';
      
      server.listen(PORT, function () {
        console.log('Server running, version '+APP_VERSION+', Express is listening... at '+PORT+" ");
      });
      
       // Realtime updates
      var sseClients = new sseMW.Topic();
      
      
      app.use(express.static(__dirname + '/public'))
      app.get('/about', function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/html'});
          res.write("Version "+APP_VERSION+". No Data Requested, so none is returned");
          res.write("Supported URLs:");
          res.write("/public , /public/index.html ");
          res.write("incoming headers" + JSON.stringify(req.headers)); 
          res.end();
      });
      //configure sseMW.sseMiddleware as function to get a stab at incoming requests, in this case by adding a Connection property to the request
      app.use(sseMW.sseMiddleware)
      
      // initial registration of SSE Client Connection 
      app.get('/topn/updates', function(req,res){
          var sseConnection = res.sseConnection;
          sseConnection.setup();
          sseClients.add(sseConnection);
      } );
      
      
      var m;
      //send message to all registered SSE clients
      updateSseClients = function(message) {
          var msg = message;
          this.m=message;
          sseClients.forEach( 
            function(sseConnection) {
              sseConnection.send(this.m); 
            }
            , this // this second argument to forEach is the thisArg (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach) 
          ); //forEach
      }// updateSseClients
      
      // send a heartbeat signal to all SSE clients, once every interval seconds (or every 3 seconds if no interval is specified)
      initHeartbeat = function(interval) {
          setInterval(function()  {
              var msg = {"label":"The latest", "time":new Date()}; 
              updateSseClients( JSON.stringify(msg));
            }//interval function
          , interval?interval*1000:3000
          ); // setInterval 
      }//initHeartbeat
      
      // initialize heartbeat at 10 second interval
      initHeartbeat(10); 
      
      
      // Configure Kafka Consumer for Kafka Top3 Topic and handle Kafka message (by calling updateSseClients)
      var consumer = new Consumer(
        client,
        [],
        {fromOffset: true}
      );
      
      consumer.on('message', function (message) {
        handleCountryMessage(message);
      });
      
      consumer.addTopics([
        { topic: countriesTopic, partition: 0, offset: 0}
      ], () => console.log("topic "+countriesTopic+" added to consumer for listening"));
      
      function handleCountryMessage(countryMessage) {
          var top3 = JSON.parse(countryMessage.value);
          var continent = new Buffer(countryMessage.key).toString('ascii');
          top3.continent = continent;
          updateSseClients( top3);
      }// handleCountryMessage
      

      Running the application

In order to run the application, the Node application that publishes the basic country records to a Kafka Topic is started: (screenshot)

The Kafka Streams Java application that derives the Top 3 per continent and produces it to a Kafka Topic is started: (screenshot)

And the Node application that consumes from the Top3 Topic and pushes SSE events to the browser clients is run: (screenshot)

After a little wait, the browser displays: (screenshot)

based on output from the Kafka Streams application: (screenshot)

When all messages have been processed from the countries2.csv input file, this is what the browser shows: (screenshot)

This is the result of all the individual top3 messages pushed as SSE events from the Node application to the browser client. The screenshot shows only one browser client; however, many browsers could have connected to the same Node server and have received the same SSE events simultaneously.

      The post Node.js application using SSE (Server Sent Events) to push updates (read from Kafka Topic) to simple HTML client application appeared first on AMIS Oracle and Java Blog.
