Feed aggregator

Oracle Database on OpenShift

Yann Neuhaus - Tue, 2018-08-14 15:39
By Franck Pachot

In a previous post I described the setup of MiniShift on my laptop in order to run OpenShift for test purposes. I even pulled the Oracle Database image from the Docker Store. But the goal is to import it into OpenShift to deploy it from the Image Stream.

I start MiniShift on my laptop, specifying a larger disk (default is 20GB)

C:\Users\Franck>minishift start --disk-size 40g
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 40 GB
-- Starting Minishift VM .................................................................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
Importing 'openshift/origin:v3.9.0' ............. OK
Importing 'openshift/origin-docker-registry:v3.9.0' ... OK
Importing 'openshift/origin-haproxy-router:v3.9.0' ...... OK
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........................................................
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.99.105 as the host IP
Using 192.168.99.105 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
https://192.168.99.105:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

MiniShift starts a VirtualBox VM and gets an IP address from the VirtualBox DHCP – here 192.168.99.105.
I can access the console at https://192.168.99.105:8443 and log in as developer or admin, but for the moment I’m continuing on the command line.
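
For reference, the same logins can be done from the command line against the server address above (a quick sketch; this assumes the oc client is on the PATH, which is configured later in this post with minishift oc-env, and the password prompt is omitted):

C:\Users\Franck>oc login https://192.168.99.105:8443 -u developer
C:\Users\Franck>oc login -u system:admin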

At any moment I can log in to the VM running OpenShift with the minishift command. Here I check the size of the disks:

C:\Users\Franck>minishift ssh
 
[docker@minishift ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/live-rw 9.8G 697M 9.0G 8% /
devtmpfs 974M 0 974M 0% /dev
tmpfs 1000M 0 1000M 0% /dev/shm
tmpfs 1000M 18M 983M 2% /run
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
/dev/sr0 344M 344M 0 100% /run/initramfs/live
/dev/sda1 39G 1.8G 37G 5% /mnt/sda1
tmpfs 200M 0 200M 0% /run/user/0
tmpfs 200M 0 200M 0% /run/user/1000

Build the Docker image

The goal is to run in OpenShift a container from an image that has been built somewhere else. In this example I will not build one but use one provided on the Docker Store: the Oracle Database ‘slim’ image. For this example, I’ll use the docker engine running in the minishift VM, just because it is there.

I have DockerTools installed on my laptop and just want to set the environment to connect to the docker server on the minishift VM. I can get the environment from minishift:

C:\Users\Franck>minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.105:2376
SET DOCKER_CERT_PATH=C:\Users\Franck\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Here is how to directly set the environment from it:

C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Now my docker commands will connect to this docker server. Here is the related information; minishift is already running several containers there for its own usage:

C:\Users\Franck>docker info
Containers: 9
Running: 7
Paused: 0
Stopped: 2
Images: 6
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log:
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: e9c345b3f906d5dc5e8100b05ce37073a811c74a (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 3.10.0-862.6.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953GiB
Name: minishift
ID: U7IQ:TE3X:HSGK:3ES2:IO6G:A7VI:3KUU:YMBC:3ZIR:QYUL:EQUL:VFMS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pachot
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
172.30.0.0/16
127.0.0.0/8
Live Restore Enabled: false

As I’ll use the Oracle Database image for this example, I need to log in to the Docker Store to prove that I accept the licensing conditions:

C:\Users\Franck>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

I pull the image. It takes some time, because ‘slim’ still means 2GB when it comes to Oracle Database.

C:\Users\Franck>docker pull store/oracle/database-enterprise:12.2.0.1-slim
Trying to pull repository docker.io/store/oracle/database-enterprise ...
12.2.0.1-slim: Pulling from docker.io/store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for docker.io/store/oracle/database-enterprise:12.2.0.1-slim

Here is the image:

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

My minishift VM disk has increased by 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Push the image to OpenShift registry

OpenShift has its own integrated container registry, from which Docker images are visible to the Image Stream.
Here is the address of the registry:

C:\Users\Franck>minishift openshift registry
172.30.1.1:5000

I’ll run some OpenShift commands, and the path to the ‘oc’ client in the minishift cache can be set with:

C:\Users\Franck>minishift oc-env
SET PATH=C:\Users\Franck\.minishift\cache\oc\v3.9.0\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
 
C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

I am still connected as developer to OpenShift:

C:\Users\Franck>oc whoami
developer

and I get the login token:

C:\Users\Franck>oc whoami -t
lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc

I use this token to login to the OpenShift registry with docker in order to be able to push the image:

C:\Users\Franck>docker login -u developer -p lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc 172.30.1.1:5000
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I create a new project to import the image to:

C:\Users\Franck>oc new-project oracle --display-name=Oracle
Now using project "oracle" on server "https://192.168.99.105:8443".
 
You can add applications to this project with the 'new-app' command. For example, try:
 
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 
to build a new example application in Ruby.

This can also be done from the GUI. Here is the project on the right:
[Screenshot: CaptureOpenShiftProject]

I tag the image with the name of the registry (172.30.1.1:5000) and the name of the project (oracle) and add an image name, so that the full name is: 172.30.1.1:5000/oracle/ora122slim

C:\Users\Franck>docker tag store/oracle/database-enterprise:12.2.0.1-slim 172.30.1.1:5000/oracle/ora122slim

We can see this tagged image

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
172.30.1.1:5000/oracle/ora122slim latest 27c9559d36ec 12 months ago 2.08GB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

Note that it is the same IMAGE ID and doesn’t take more space:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Then I’m finally ready to push the image to the OpenShift docker registry:

C:\Users\Franck>docker push 172.30.1.1:5000/oracle/ora122slim
The push refers to a repository [172.30.1.1:5000/oracle/ora122slim]
066e811424fb: Pushed
99d7f2451a1a: Pushed
a2c532d8cc36: Pushed
49c80855196a: Pushed
40c24f62a02f: Pushed
latest: digest: sha256:25b0ec7cc3987f86b1e754fc214e7f06761c57bc11910d4be87b0d42ee12d254 size: 1372

This is a copy, and takes an additional 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 5.4G 33G 14% /mnt/sda1

Deploy the image

Finally, I can deploy the image as it is visible in the GUI:
[Screenshot: CaptureOpenShiftImport]

I choose to deploy from the command line:

C:\Users\Franck>oc new-app --image-stream=ora122slim --name=ora122slimdeployment
--> Found image 27c9559 (12 months old) in image stream "oracle/ora122slim" under tag "latest" for "ora122slim"
 
* This image will be deployed in deployment config "ora122slimdeployment"
* Ports 1521/tcp, 5500/tcp will be load balanced by service "ora122slimdeployment"
* Other containers can access this service through the hostname "ora122slimdeployment"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/ora122slimdeployment --add ...'

--> Creating resources ...
imagestreamtag "ora122slimdeployment:latest" created
deploymentconfig "ora122slimdeployment" created
service "ora122slimdeployment" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/ora122slimdeployment'
Run 'oc status' to view your app.
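
The output above notes that the image declares volumes and defaults to non-persistent, host-local storage. A minimal sketch of adding a persistent volume later, following the 'oc volume ... --add' hint above (the volume name, claim size and mount path are assumptions for illustration, not values taken from this deployment):

C:\Users\Franck>oc volume dc/ora122slimdeployment --add --name=oradata --type=pvc --claim-size=20Gi --mount-path=/ORCL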

[Screenshot: CaptureOpenShiftDeploy]

I expose the service:

C:\Users\Franck>oc expose service ora122slimdeployment
route "ora122slimdeployment" exposed

/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

Here is one little thing to change. From the POD terminal, I can see the following error:
[Screenshot: CaptureOpenShiftCrash]

The same can be read from command line:

C:\Users\Franck>oc status
In project Oracle (oracle) on server https://192.168.99.105:8443
 
http://ora122slimdeployment-oracle.192.168.99.105.nip.io to pod port 1521-tcp (svc/ora122slimdeployment)
dc/ora122slimdeployment deploys istag/ora122slim:latest
deployment #1 deployed 7 minutes ago - 0/1 pods (warning: 6 restarts)
 
Errors:
* pod/ora122slimdeployment-1-86prl is crash-looping
 
1 error, 2 infos identified, use 'oc status -v' to see details.
 
C:\Users\Franck>oc logs ora122slimdeployment-1-86prl -c ora122slimdeployment
/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

This is because, by default and for security reasons, OpenShift runs the container with a random user id. But the files are executable only by the oracle user:

sh-4.2$ ls -l /home/oracle/setup/dockerInit.sh
-rwxr-xr--. 1 oracle oinstall 2165 Aug 17 2017 /home/oracle/setup/dockerInit.sh
sh-4.2$

The solution is quite simple: allow the container to run with its own user id:

C:\Users\Franck>minishift addon apply anyuid
-- Applying addon 'anyuid':.
Add-on 'anyuid' changed the default security context constraints to allow pods to run as any user.
Per default OpenShift runs containers using an arbitrarily assigned user ID.
Refer to https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints and
https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines for more information.
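
The minishift add-on is just a convenience for this lab. On a regular OpenShift cluster, the equivalent (a sketch of the usual approach, to be adapted to your security policy) is to grant the anyuid security context constraint, as cluster admin, to the service account used by the deployment:

C:\Users\Franck>oc login -u system:admin
C:\Users\Franck>oc adm policy add-scc-to-user anyuid -z default -n oracle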

Then the restart of the pod goes further:
[Screenshot: CaptureOpenShiftOracle]

This Oracle Database image from the Docker Store is not really an image of an installed Oracle Database, but just a tar of the Oracle Home and database files that have to be untarred.

Now, in addition to the image size I have an additional 2GB layer for the container:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 11G 28G 28% /mnt/sda1
 
C:\Users\Franck>docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 7 6 3.568GB 1.261GB (35%)
Containers 17 9 1.895GB 58.87kB (0%)
Local Volumes 0 0 0B 0B
Build Cache 0B 0B

Of course there is more to customize. The minishift VM should have more memory, and so should the container for Oracle Database. We probably also want to add an external volume, and expose ports outside of the minishift VM.
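
As a sketch of those customizations (the memory value is an example and takes effect when the minishift VM is re-created, and the pod name is the one seen earlier in this post, which will change after a redeployment):

C:\Users\Franck>minishift config set memory 8GB
C:\Users\Franck>oc port-forward ora122slimdeployment-1-86prl 1521:1521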

 

The article Oracle Database on OpenShift appeared first on Blog dbi services.

Federal Court Rules that Oracle Is Entitled to a Permanent Injunction Against Rimini Street and Awards Attorneys' Fees in Copyright Suit

Oracle Press Releases - Tue, 2018-08-14 15:09
Press Release
Federal Court Rules that Oracle Is Entitled to a Permanent Injunction Against Rimini Street and Awards Attorneys' Fees in Copyright Suit

Redwood Shores, Calif.—Aug 14, 2018

Today, for the second time, a Federal Court in Nevada granted Oracle's motion for a permanent injunction against Rimini Street for years of infringement of Oracle’s copyrights. In an opinion notable for its strong language condemning Rimini Street’s actions, the Court made clear that since its inception, Rimini’s business “was built entirely on its infringement of Oracle’s copyrighted software.” The Court also highlighted Rimini's “conscious disregard” for Oracle's copyrights and Rimini's “significant litigation misconduct” in granting Oracle's motion for its attorneys' fees to be paid.

“As the Court's Order today makes clear, Rimini Street's business has been built entirely on unlawful conduct, and Rimini's executives have repeatedly lied to cover up their company's illegal acts. Rimini, which admits that it is the subject of an ongoing federal criminal investigation, has proven itself to be disreputable, and it seems very clear that it will not stop its unlawful conduct until a Court orders it to stop. Oracle is grateful that today the Federal Court in Nevada did just that,” said Dorian Daley, Oracle's Executive Vice President and General Counsel.

The Court noted that it was Rimini's brazen misconduct that enabled it to "rapidly build" its infringing business, while at the same time irreparably damaging Oracle because Rimini's very business model “eroded the bonds and trust that Oracle has with its customers.” It also stressed that for over five years of litigation, “literally up until trial”, Rimini Street denied the allegations of infringement. At trial, however, Rimini CEO Seth Ravin, who was also a defendant, changed his story and admitted for the first time that Rimini Street did in fact engage in all the infringing activities that Oracle had identified.

Finally, the Court declared that over $28M in attorneys’ fees should be awarded to Oracle because of Rimini Street’s significant litigation misconduct in this action. Rimini’s comments to the market that this award would have to be returned to Rimini have proven to be false and misleading, like so many of its actions and assurances to customers and others.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

ODA database been stuck in deleting status

Yann Neuhaus - Tue, 2018-08-14 15:03

Facing an internal inconsistency in the ODA derby database is very painful (see https://blog.dbi-services.com/oda-lite-what-is-this-odacli-repository/ for more information about the derby database). I recently faced a case where a database deletion failed and the database then remained in “Deleting” status. Connecting directly to the internal derby database and doing some cleaning yourself is very risky and should be performed at your own risk. So, in most cases, a database inconsistency issue ends with an Oracle Support ticket to get help with the cleaning. Before doing so, I wanted to look closer at the issue, and I was very happy to fix it myself. I wanted to share my experience here.

Issue description

As explained in the introduction, the database deletion failed and the database remained in “Deleting” status.

[root@prod1 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
ea49c5a8-8747-4459-bb99-cd71c8c87d58     testtst1   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Deleting     80a2e501-31d8-4a5d-83db-e04dad34a7fa

Looking at the job activity log, we can see that the deletion is failing while trying to delete the FileSystem.

[root@prod1 ~]# odacli describe-job -i 50a8c1c2-686e-455e-878f-eaa537295c9f

Job details
----------------------------------------------------------------
                     ID:  50a8c1c2-686e-455e-878f-eaa537295c9f
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Failure
                Created:  July 25, 2018 9:40:17 AM CEST
                Message:  DCS-10011:Input parameter 'ACFS Device for delete' cannot be NULL.

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:17 AM CEST       Success
Database Deletion                        July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:18 AM CEST       Success
Unregister Db From Cluster               July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Kill Pmon Process                        July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Database Files Deletion                  July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Deleting FileSystem                      July 25, 2018 9:40:21 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure

I decided to have a look at why it failed on the file system deletion step, and I was very surprised to see that there was no data volume for this database anymore. This can be seen in the volinfo command output below. I am not sure what happened, but it is weird: why fail, and stop processing further, when what you want to delete no longer exists?

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/
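
Before recreating anything, a way to cross-check which ACFS file systems are still registered in clusterware (a sketch using the grid home path shown later in this post; output omitted):

[root@prod1 ~]# /u01/app/12.2.0.1/grid/bin/srvctl status filesystem
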
Solution

So why not try to give the ODA what it expects to see? I therefore created the ACFS volume with the exact expected name, and I was very happy to see that this solved the problem. There was no other relation key than the name of the volume. Let’s look at the steps I performed in detail.

Let’s create the database expected data volume.

ASMCMD> volcreate -G DATA -s 10G DATTESTTST1

ASMCMD> volinfo -G DATA -a
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265 
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

         Volume Name: DATTESTTST1
         Volume Device: /dev/asm/dattesttst1-265
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

Let’s create the file system for the newly created volume.

grid@prod1:/home/grid/ [+ASM1] mkfs.acfs /dev/asm/dattesttst1-265
mkfs.acfs: version                   = 12.2.0.1.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/dattesttst1-265
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Let’s check the expected mount points needed for the corresponding database.

[root@prod1 ~]# odacli describe-dbstorage -i 31d852f7-bdd0-40f5-9224-2ca139a2c3db
DBStorage details
----------------------------------------------------------------
                     ID: 31d852f7-bdd0-40f5-9224-2ca139a2c3db
                DB Name: testtst1
          DBUnique Name: testtst1_RZ1
         DB Resource ID: ea49c5a8-8747-4459-bb99-cd71c8c87d58
           Storage Type: Acfs
          DATA Location: /u02/app/oracle/oradata/testtst1_RZ1
          RECO Location: /u03/app/oracle/fast_recovery_area/
          REDO Location: /u03/app/oracle/redo/
   FLASH Cache Location:
                  State: ResourceState(status=Configured)
                Created: July 18, 2018 10:28:39 AM CEST
            UpdatedTime: July 18, 2018 10:29:01 AM CEST

Then I add and start the appropriate file system:

[root@prod1 testtst1_RZ1]# cd /u01/app/12.2.0.1/grid/bin/
[root@prod1 bin]# ./srvctl add filesystem -volume DATTESTTST1 -diskgroup DATA -path /u02/app/oracle/oradata/testtst1_RZ1 -fstype ACFS -autostart ALWAYS -mountowner oracle
[root@prod1 bin]# ./srvctl start filesystem -device /dev/asm/dattesttst1-265

Let’s check the mounted file system.

[root@prod1 bin]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
/dev/asm/dattesttst1-265
                       10G  265M  9.8G   3% /u02/app/oracle/oradata/testtst1_RZ1

Let’s now try to delete the database again. Option -fd is mandatory to force deletion.

[root@prod1 bin]# odacli delete-database -i ea49c5a8-8747-4459-bb99-cd71c8c87d58 -fd
{
  "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
  "status" : "Running",
  "message" : null,
  "reports" : [ {
    "taskId" : "TaskZJsonRpcExt_471",
    "taskName" : "Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion",
    "taskResult" : "",
    "startTime" : "July 25, 2018 10:04:24 AM CEST",
    "endTime" : "July 25, 2018 10:04:24 AM CEST",
    "status" : "Success",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_469",
    "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "July 25, 2018 10:04:24 AM CEST"
  } ],
  "createTimestamp" : "July 25, 2018 10:04:23 AM CEST",
  "resourceList" : [ ],
  "description" : "Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58",
  "updatedTime" : "July 25, 2018 10:04:23 AM CEST"
}

The database deletion is now successful.

[root@prod1 bin]# odacli describe-job -i 976c8689-a69d-4e0d-a5e0-e40a30a77d29

Job details
----------------------------------------------------------------
                     ID:  976c8689-a69d-4e0d-a5e0-e40a30a77d29
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Success
                Created:  July 25, 2018 10:04:23 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Deletion                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Unregister Db From Cluster               July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Kill Pmon Process                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Files Deletion                  July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:25 AM CEST      Success
Deleting Volume                          July 25, 2018 10:04:30 AM CEST      July 25, 2018 10:04:32 AM CEST      Success

Let’s check the volume and file system to make sure they have been removed.

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

grid@prod1:/home/grid/ [+ASM1] df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
grid@prod1:/home/grid/ [+ASM1]

Listing the databases now shows that the only database has been deleted.

[root@prod1 bin]# odacli list-databases
DCS-10032:Resource database is not found.

To complete the test and make sure everything is OK, I created a new database, expecting it to be successful.

[root@prod1 bin]# odacli describe-job -i cf896c7f-0675-4980-a63f-a8a2b09b1352

Job details
----------------------------------------------------------------
                     ID:  cf896c7f-0675-4980-a63f-a8a2b09b1352
            Description:  Database service creation with db name: testtst2
                 Status:  Success
                Created:  July 25, 2018 10:12:24 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:25 AM CEST      Success
Creating volume dattesttst2              July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:36 AM CEST      Success
Creating ACFS filesystem for DATA        July 25, 2018 10:12:36 AM CEST      July 25, 2018 10:12:44 AM CEST      Success
Database Service creation                July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
Database Creation                        July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Change permission for xdb wallet files   July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Place SnapshotCtrlFile in sharedLoc      July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:37 AM CEST      Success
Running DataPatch                        July 25, 2018 10:18:34 AM CEST      July 25, 2018 10:18:47 AM CEST      Success
updating the Database version            July 25, 2018 10:18:47 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
create Users tablespace                  July 25, 2018 10:18:49 AM CEST      July 25, 2018 10:18:51 AM CEST      Success



[root@prod1 bin]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
e0e8163d-dcaa-4692-85c5-24fb9fe17291     testtst2   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Configured   80a2e501-31d8-4a5d-83db-e04dad34a7fa

 

 

 

The article ODA database been stuck in deleting status appeared first on Blog dbi services.

Intel Processor L1TF vulnerabilities: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

Oracle Security Team - Tue, 2018-08-14 12:00

Today, Intel disclosed a new set of speculative execution side-channel processor vulnerabilities affecting their processors.    These L1 Terminal Fault (L1TF) vulnerabilities affect a number of Intel processors, and they have received three CVE identifiers:

  • CVE-2018-3615 impacts Intel Software Guard Extensions (SGX) and has a CVSS Base Score of 7.9.

  • CVE-2018-3620 impacts operating systems and System Management Mode (SMM) running on Intel processors and has a CVSS Base Score of 7.1.

  • CVE-2018-3646 impacts virtualization software and Virtual Machine Monitors (VMM) running on Intel processors and has a CVSS Base Score of 7.1

These vulnerabilities derive from a flaw in Intel processors, in which operations performed by a processor while using speculative execution can result in a compromise of the confidentiality of data between threads executing on a physical CPU core. 

As with other variants of speculative execution side-channel issues (i.e., Spectre and Meltdown), successful exploitation of L1TF vulnerabilities require the attacker to have the ability to run malicious code on the targeted systems.  Therefore, L1TF vulnerabilities are not directly exploitable against servers which do not allow the execution of untrusted code. 

While Oracle has not yet received reports of successful exploitation of this speculative execution side-channel issue “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues. 

The technical steps Intel recommends to mitigate L1TF vulnerabilities on affected systems include:

  • Ensuring that affected Intel processors are running the latest Intel processor microcode. Intel reports that the microcode update  it has released for the Spectre 3a (CVE-2018-3640) and Spectre 4 (CVE-2018-3639) vulnerabilities also contains the microcode instructions which can be used to mitigate the L1TF vulnerabilities. Updated microcode by itself is not sufficient to protect against L1TF.

  • Applying the necessary OS and virtualization software patches against affected systems. To be effective, OS patches will require the presence of the updated Intel processor microcode.  This is because updated microcode by itself is not sufficient to protect against L1TF.  Corresponding OS and virtualization software updates are also required to mitigate the L1TF vulnerabilities present in Intel processors.

  • Disabling Intel Hyper-Threading technology in some situations. Disabling HT alone is not sufficient for mitigating L1TF vulnerabilities. Disabling HT will result in significant performance degradation.
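
On Linux systems with a kernel that exposes the speculative-execution sysfs interface (an assumption; older unpatched kernels do not have these files), the mitigation status currently reported by the kernel can be checked with:

# L1TF status only
cat /sys/devices/system/cpu/vulnerabilities/l1tf
# all reported speculative-execution issues
grep . /sys/devices/system/cpu/vulnerabilities/*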

In response to the various L1TF Intel processor vulnerabilities:

Oracle Hardware

  • Oracle recommends that administrators of x86-based Systems carefully assess the L1TF threat for their systems and implement the appropriate security mitigations. Oracle will provide specific guidance for Oracle Engineered Systems.

  • Oracle has determined that Oracle SPARC servers are not affected by the L1TF vulnerabilities.

  • Oracle has determined that Oracle Intel x86 Servers are not impacted by vulnerability CVE-2018-3615 because the processors in use with these systems do not make use of Intel Software Guard Extensions (SGX).

Oracle Operating Systems (Linux and Solaris) and Virtualization

  • Oracle has released security patches for Oracle Linux 7, Oracle Linux 6 and Oracle VM Server for X86 products.  In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues. 

  • Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems.

  • Oracle has determined that Oracle Solaris on x86 is not affected by vulnerabilities CVE-2018-3615 and CVE-2018-3620 regardless of the underlying Intel processor on these systems.  It is however affected by vulnerability CVE-2018-3646 when using Kernel Zones. The necessary patches will be provided at a later date.

  • Oracle Solaris on SPARC is not affected by the L1TF vulnerabilities.

Oracle Cloud

  • The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing the necessary mitigations to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services.  

  • Oracle’s first priority is to mitigate the risk of tenant-to-tenant attacks.

  • Oracle will notify and coordinate with the affected customers for any required maintenance activities as additional mitigating controls continue to be implemented.

  • Oracle has determined that a number of Oracle's cloud services are not affected by the L1TF vulnerabilities.  They include Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads.  No further action is required by customers of these services as both were found to require no additional mitigating controls based on service design and are not affected by the L1TF vulnerabilities (CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646).   

  • Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run.  By design, the bare metal instances are isolated from other customer instances on the OCI network whether they be virtual machines or bare metal.  However, for customers running their own virtualization stack on bare metal instances, the L1TF vulnerability could allow a virtual machine to access privileged information from the underlying hypervisor or other VMs on the same bare metal instance.  These customers should review the Intel recommendations about vulnerabilities CVE-2018-3615, CVE-2018-3620, CVE-2018-3646 and make changes to their configurations as they deem appropriate.

Note that many industry experts anticipate that new techniques leveraging these processor flaws will continue to be disclosed for the foreseeable future.  Future speculative side-channel processor vulnerabilities are likely to continue to impact primarily operating systems and virtualization platforms, as addressing them will likely require software update and microcode update.  Oracle therefore recommends that customers remain on current security release levels, including firmware, and applicable microcode updates (delivered as Firmware or OS patches), as well as software upgrades. 

 

For more information:
 

JRE 1.8.0_181 Certified with Oracle EBS 12.1 and 12.2

Steven Chan - Tue, 2018-08-14 11:29


Java Runtime Environment 1.8.0_181 (a.k.a. JRE 8u181-b13) and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows clients.

Java Web Start is available

This JRE release may be run with either the Java plug-in or Java Web Start.

Java Web Start is certified with EBS 12.1 and 12.2 for Windows clients.  

Considerations if you're also running JRE 1.6 or 1.7

JRE 1.7 and JRE 1.6 updates included an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available for those Java releases. It is expected that Java deployment technology will not be packaged in later Java 6 or 7 updates.

JRE 1.7.0_161 (and later 1.7 updates) and 1.6.0_171 (and later 1.6 updates) can still run Java content.  They cannot launch Java.

End-users who only have JRE 1.7 or JRE 1.6 -- and not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 or 1.6 for compatibility with other third-party Java applications must also install the JRE 1.8.0_152 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_152 or later JRE 1.8 updates are installed on a Windows desktop, it can be used to launch JRE 1.7 and JRE 1.6. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.
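
After applying a new JRE update to a desktop, a quick way to confirm which release is actually active is to check the version from a command prompt (a sketch; the output shown is illustrative and varies by build and platform):

C:\> java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)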

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in October 2017.
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

  • You can continue to do so until the end of Java SE 8 Deployment Technology Premier Support in March 2019
  • You can obtain JRE 8 updates from the Java SE download site or from My Oracle Support. See:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that there are different impacts of enabling JRE Auto-Update depending on your current JRE release installed, despite the availability of ongoing support for JRE 6 and 7 for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January Critical patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), 10.10 (Yosemite), and 10.11 (El Capitan) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer causes a delay of around 20 seconds before the applet starts to load (the Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are advised to update to this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs

JRE 1.7.0_191 Certified with Oracle E-Business Suite 12.1 and 12.2

Steven Chan - Tue, 2018-08-14 11:24


Java Runtime Environment 1.7.0_191 (a.k.a. JRE 7u191-b09) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

What's new in this update?

This update includes an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available as of this Java release. It is expected that Java deployment technology will not be packaged in later Java 7 updates.

JRE 1.7.0_161  and later JRE 1.7 updates can still run Java content.  These releases cannot launch Java.

End-users who only have JRE 1.7.0_161 and later JRE 1.7 updates -- but not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 for compatibility with other third-party Java applications must also install the October 2017 CPU release JRE 1.8.0_151 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_151 or a later JRE 1.8 update is installed on a Windows desktop, it can be used to launch JRE 1.7.0_161 and later updates on the JRE 1.7 codeline. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

How can EBS customers obtain Java 7?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Java Auto-Update Mechanism

With the release of the January 2015 Critical patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

What do Mac users need?

Mac users running Mac OS X 10.7 (Lion), 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE ("Deployment Technology") is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 (excluding Deployment Technology) is covered by Extended Support until December 2018.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 (excluding Deployment Technology) by December 2018. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

Known Issues

When using Internet Explorer, JRE 1.7.0_01 had a delay of around 20 seconds before the applet started to load. This issue is fixed in JRE 1.7.0_95.

References

Related Articles
Categories: APPS Blogs

JRE 1.6.0_201 Certified with Oracle E-Business Suite

Steven Chan - Tue, 2018-08-14 11:20

The latest Java Runtime Environment 1.6.0_201 (a.k.a. JRE 6u201-b07) and later updates on the JRE 6 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

What's new in this update?

This update includes an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available as of the Java 1.6.0_171 release. It is expected that Java deployment technology will not be packaged in later Java 6 updates.

JRE 1.6.0_171 and later JRE 1.6 updates can still run Java content.  These releases cannot launch Java.

End-users who only have JRE 1.6.0_171 and later JRE 1.6 updates -- but not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.6 for compatibility with other third-party Java applications must also install the October 2017 PSU release JRE 1.8.0_152 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_152 or a later JRE 1.8 update is installed on a Windows desktop, it can be used to launch JRE 1.6.0_171 and later updates on the JRE 1.6 codeline. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Deploying JRE documentation for your EBS release for details.

How can EBS customers obtain Java 6 updates?

Java 6 is now available only via My Oracle Support for E-Business Suite users.  You can find links to this release, including Release Notes, documentation, and the actual Java downloads here:

Both JDK and JRE packages are contained in a single combined download after 6u45.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

Mac users running Mac OS X 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE ("Deployment Technology") is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 (excluding Deployment Technology) is covered by Extended Support until December 2018.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by December 2018. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is the latest Java release certified with E-Business Suite 12 servers.  See:

References

Related Articles
Categories: APPS Blogs

Upgrade EM 13.2 to EM 13.3

Yann Neuhaus - Tue, 2018-08-14 10:05

As the latest Enterprise Manager Cloud Control 13.3 has been out for a few days, I decided to test the upgrade procedure from Enterprise Manager Cloud Control 13.2.

There are a few prerequisites to follow:

First, copy the emkey:

oracle@localhost:/home/oracle/ [oms13c] emctl config emkey 
-copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=em13c)(PORT=1521)))(CONNECT_DATA=
(SERVICE_NAME=EMREP13C)))"' -repos_user sysman -repos_pwd manager1 
-emkey_file /home/oracle/oms13c/sysman/config/emkey.ora
Oracle Enterprise Manager Cloud Control 13c Release 2  
Copyright (c) 1996, 2016 Oracle Corporation.  All rights reserved.
Enter Admin User's Password : 
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running 
"emctl config emkey -remove_from_repos".

Check that the parameter in the repository database “_allow_insert_with_update_check” is TRUE:

SQL> show parameter _allow

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
_allow_insert_with_update_check      boolean	 TRUE
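
If the parameter is not set, a minimal sketch of how you could set it, assuming a restart of the repository database is acceptable (the exact scope required may differ, so check the EM upgrade documentation):

SQL> alter system set "_allow_insert_with_update_check"=TRUE scope=spfile;
SQL> shutdown immediate
SQL> startup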

Just before running the upgrade procedure, you have to stop the OMS with the command emctl stop oms -all, and stop the agent with the classical emctl stop agent command, as shown below.
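
For reference, this is what the two commands look like on the OMS host (prompt and paths are illustrative):

oracle@localhost:/home/oracle/ [oms13c] emctl stop oms -all
oracle@localhost:/home/oracle/ [oms13c] emctl stop agent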

I would also recommend running a full RMAN backup of the repository database before starting.
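
A minimal sketch of such a backup, assuming the repository database is EMREP13C and the environment is set accordingly (adapt this to your own backup strategy):

oracle@localhost:/home/oracle/ [EMREP13C] rman target /
RMAN> backup database plus archivelog;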

Then once you have unzipped the binaries you have downloaded, you simply run the command:

oracle@localhost:/home/oracle/software/ [oms13c] ./em13300_linux64.bin 
0%...........................................................................100%
Launcher log file is /tmp/OraInstall2018-08-13_10-45-07AM/
launcher2018-08-13_10-45-07AM.log.
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2591.940 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 5567 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed 
Preparing to launch the Oracle Universal Installer from 
/tmp/OraInstall2018-08-13_10-45-07AM
====Prereq Config Location main=== 
/tmp/OraInstall2018-08-13_10-45-07AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2018-08-13_10-45-07AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2018-08-13_10-45-07AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_WIQ10z/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/home/oracle/software
EMFileLoc:/tmp/OraInstall2018-08-13_10-45-07AM/oui/em/
ScratchPathValue :/tmp/OraInstall2018-08-13_10-45-07AM

Picture1

 

Picture2

I skipped the Updates

Picture3

The checks are successful.

Picture4

We upgrade an existing Enterprise Manager system and enter the existing Middleware home.

Picture5

We enter the new Middleware home.

Picture6

We enter the sys and sysman passwords.

Picture7

We can select additional plug-ins

Picture8

We enter the Weblogic password

Picture9

We do not share the location for Oracle BI Publisher, but we enable BI Publisher.

Picture11

We choose the default configuration ports.

Picture12

Picture13

At this point you can go and drink some coffee, because the upgrade procedure takes a long time…

Picture14

Just before the end of the upgrade process, you have to run the allroot.sh script:

[root@localhost oms133]# ./allroot.sh

Starting to execute allroot.sh ......... 

Starting to execute /u00/app/oracle/oms133/root.sh ......
/etc exist
/u00/app/oracle/oms133
Finished execution of  /u00/app/oracle/oms133/root.sh ......

Picture15

The upgrade is successful :=)

Picture16

But the upgrade is not finished yet: you have to restart and upgrade the management agent and delete the old OMS installation.

To upgrade the agent, you select Upgrade Agent from the tools menu:

Picture17

But I had a problem with my agent in version 13.2. The agent was in a non-upgradable state, and Oracle recommended running emctl control agent runCollection <target>:oracle_home oracle_home_config, but the command did not work, returning: EMD runCollection error: no target collection

So I decided to delete the agent and to install a new agent manually, following the classical GUI method.

The agent in version 13.3 is now up and running:

Picture18

As in previous Enterprise Manager versions, the deinstallation is very easy. You only have to check that no old processes are still running:

oracle@localhost:/home/oracle/oms13c/oui/bin/ [oms13c] ps -ef | grep /home/oracle/oms13c
oracle    9565 15114  0 11:51 pts/0    00:00:00 grep --color=auto /home/oracle/oms13c

Then we simply delete the old OMS HOME:

oracle@localhost:/home/oracle/ [oms13c] rm -rf oms13c

There are not many new features in Enterprise Manager 13.3. They concern the framework and infrastructure, Middleware management, Cloud management, and Database management. You can have a look at those new features:

https://docs.oracle.com/cd/cloud-control-13.3/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Even though the upgrade procedure took a long time, I did not encounter any blocking errors. The procedure is quite the same as before.

Furthermore, Enterprise Manager 13.3 supports monitoring and management of Oracle Database 18c:

Picture19

 

The article Upgrade EM 13.2 to EM 13.3 first appeared on Blog dbi services.

Start-up Uses Oracle Cloud to Launch AI-Powered Hub for Social Networks

Oracle Press Releases - Tue, 2018-08-14 10:00
Press Release
Start-up Uses Oracle Cloud to Launch AI-Powered Hub for Social Networks Virtual Artifacts’ new Hibe hub connects diverse mobile apps, enabling consumers to stick with their social platform of choice

Redwood Shores, Calif.—Aug 14, 2018

Oracle today announced Virtual Artifacts has launched its mobile application network, Hibe, with Oracle Cloud. The company has developed Hibe as a new social network for mobile applications that lets consumers communicate with each other from their social platform of choice. To prepare for rapid growth, Virtual Artifacts invested in Oracle Cloud, including Oracle Autonomous Database, Oracle Cloud Platform, and Oracle Cloud Infrastructure.

Hibe helps different mobile applications to communicate with each other seamlessly, without fear of data or privacy spillage. Because it is built on an AI-driven proprietary privacy engine, users are able to easily connect, interact, and transact together, each from their own favorite communication tools. The new network removes the inconvenience of having to switch between applications by keeping communications synched and in one place.

"With 4.7 billion users on 6 million mobile apps, our ability to grow our database exponentially and globally was critical,” said Stéphane Lamoureux, COO, Virtual Artifacts. “We wanted a cloud provider that shares our values on privacy, is agile, and can work closely with us to build specific solutions. Oracle was the only vendor that could meet all of our requirements. With Oracle Autonomous Database, we no longer need to worry about manual, time-intensive tasks, such as database patching, upgrading, and tuning.”

With Oracle Autonomous Database, the industry’s first self-driving, self-securing, and self-repairing database, Virtual Artifacts can avoid the complexities of database configuration, tuning, and administration and instead focus on innovation. Oracle Autonomous Data Warehouse Cloud provides an easy-to-set-up and use data store for Hibe data, enabling Virtual Artifacts to better understand customer behavior.

The Hibe platform uses a number of Oracle Cloud Platform and Infrastructure services to support its current operations and anticipated growth. Using Oracle Mobile Cloud’s AI-driven intelligent bot on Hibe, Virtual Artifacts will be able to answer developers’ questions and make it easy for app providers to access the platform. The integration of Oracle’s mobile applications and chatbot technology on Hibe will also enable customers to directly engage with consumers, making it possible to quickly understand key audiences and improve the mobile experience.

Additionally, Virtual Artifacts will work with Oracle to develop the Hibe Marketplace, including an advertising engine to match items with interested consumers connected to Hibe through their respective mobile applications. The Hibe platform will also host and distribute content. By using Oracle’s blockchain platform, Virtual Artifacts and other content contributors on the platform will be able to track usage, distribution, and copyright attribution to help ensure it is being used and credited properly.

“The launch of the Hibe platform showcases the capabilities of Oracle’s Autonomous Cloud,” said Christopher Chelliah, group vice president and chief architect, Oracle Asia Pacific. “We essentially become the back end globally, leaving Virtual Artifacts to concentrate on the Hibe platform. This means that as a native cloud startup, we are supporting Virtual Artifacts from zero-to-production, and beyond. The combination of the strength of Oracle Cloud and Virtual Artifacts’ innovative privacy engine has the potential to revolutionize the app-to-app ecosystem.”

Contact Info
Dan Muñoz
Oracle
650.506.2904
dan.munoz@oracle.com
Jesse Caputo
Oracle
650.506.5967
jesse.caputo@oracle.com
About Virtual Artifacts

Based in Montreal, Canada, Virtual Artifacts is committed to delivering privacy-centric technologies that bridge the gap between online and real-life interactions and fundamentally transform how individuals, brands, and organizations meet, connect, communicate and transact online. These proprietary technologies both protect individual privacy and disrupt many online industries, including ecommerce, advertising, shipping, mobile and the manner in which brands manage key relationships with customers. Over the last 10 years, the company has developed decision-making AI engines and a centralized blockchain-like data structure to host, contextualize, distribute and ensure confidentiality of online activities powered by the company's technologies. For more information about Virtual Artifacts, please visit us at www.virtualartifacts.com and www.hibe.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Dan Muñoz

  • 650.506.2904

Jesse Caputo

  • 650.506.5967

Licensable targets and Management Packs with EM13c

Yann Neuhaus - Tue, 2018-08-14 09:42

When you add a new target in Enterprise Manager 13c, the management packs are enabled by default. This could be a problem in the case of an LMS control, and to avoid any issue you have to manually disable those management packs.

If, like me, you have recently moved your database infrastructure to a new one and have to add a hundred targets, you will have to click some hundreds of times on the management packs page in EM13c.

As you can see I added a new Oracle Database target and all the management packs are enabled:

mgmt1

 

There is a possibility to define the way EM13c manages licensable targets. On the Management Packs page, you select Auto Licensing and the packs you want to disable; the next time you add a new target (Oracle database, host or WebLogic), only the packs you have left enabled will be set for the targets you decide to add:

mgmt2

I decided to keep enabled only the Database Tuning Pack and Database Diagnostics Pack.

mgmt3

And now the management packs are correct when I add a new database target:

mgmt4

I have missed this feature for a long time; it would have saved me a lot of time spent deactivating the packs in Enterprise Manager Cloud Control :=)

The article Licensable targets and Management Packs with EM13c first appeared on Blog dbi services.

Autonomous Data Warehouse (ADW) and Autonomous Transaction Processing (ATP)

Tim Hall - Tue, 2018-08-14 08:39

A few days ago Oracle announced the general availability of the Autonomous Transaction Processing service on Oracle Cloud. This is the next member of the Autonomous Database family of products.

I’ve already written about provisioning the Autonomous Data Warehouse service and now I’ve used the Autonomous Transaction Processing service also.

I also wrote about loading data into the ADW service using the DBMS_CLOUD package and Data Pump.

The methods described here can also be used with the ATP service. I’ve added a few extra notes to these articles to explain a couple of minor differences.

If you’ve got access to Oracle Cloud you should give them a try. I really hope this is the DBaaS++ service I’ve been waiting for from Oracle.

Cheers

Tim…


[VIDEO] Oracle Cloud Infrastructure: Compute CPU & Memory

Online Apps DBA - Tue, 2018-08-14 07:23

Want to get an overview of the Compute Service in Oracle Cloud Infrastructure (OCI), or of the Compute Shapes available on OCI? Don't worry, we've got it all covered for you here: https://k21academy.com/iaas18. If you like it, don't forget to share it with your other Cloud friends!

The post [VIDEO] Oracle Cloud Infrastructure: Compute CPU & Memory appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Recognized as a Leader in the 2018 Gartner Magic Quadrant for Web Content Management

Oracle Press Releases - Tue, 2018-08-14 07:00
Press Release
Oracle Recognized as a Leader in the 2018 Gartner Magic Quadrant for Web Content Management For the second consecutive year, Oracle positioned as a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Aug 14, 2018

Oracle today announced that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Web Content Management” report1 for the second consecutive year. The recognition is further validation of Oracle’s continued leadership in this category and unique approach to helping enterprises streamline content management and build immersive digital experiences for users.

“As the world increasingly adopts internet and mobile-first strategies, the ability to create a cohesive, seamless experience across an enterprise’s properties is critical,” said Amit Zavery, executive vice president, Oracle Cloud Platform. “Customers recognize the importance of their content and the benefits of Oracle Content and Experience Cloud, including its expansive features and ability to easily manage content and deliver exceptional user experiences across all platforms.”

According to Gartner, “Leaders should drive market transformation. They have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of digital business. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can:

  • Demonstrate enterprise deployments
  • Offer integration with other business applications and content repositories
  • Support multiple vertical and horizontal contexts”

Oracle Content and Experience Cloud is the modern platform for driving omni-channel content and workflow management as well as rich experiences to optimize customer journeys, and partner and employee engagement. It is the only solution of its kind available with out-of-the-box integrations with Oracle on-premises and SaaS solutions. With Oracle Content and Experience Cloud, organizations can enable content collaboration, deliver consistent experiences across online properties, and manage media-centric content storage all within one central content hub. In addition, Oracle’s capabilities extend beyond the typical role of content management, providing low-code development tools for building digital experiences that leverage a service catalog of data connections.

Access a complimentary copy of Gartner’s 2018 “Magic Quadrant for Web Content Management” here.

1. Gartner, “Magic Quadrant for Web Content Management,” by Mick MacComascaigh, Jim Murphy, July 30, 2018

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Jesse Caputo
Oracle
+1.650.506.5697
jesse.caputo@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Jesse Caputo

  • +1.650.506.5697

Oracle Helps Organizations Manage International Trade More Efficiently

Oracle Press Releases - Tue, 2018-08-14 06:30
Press Release
Oracle Helps Organizations Manage International Trade More Efficiently New releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud help customers reduce costs, streamline trade regulatory compliance, and accelerate customer fulfillment

Redwood Shores, Calif.—Aug 14, 2018

To help customers succeed in an increasingly complex global economy, Oracle today introduced new releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud. The new releases enable customers who seek to reduce costs, streamline compliance with global trade regulations, and accelerate customer fulfillment, to manage these significant needs on a single, integrated platform.

The latest releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud provide real-time, data-driven insights into shipment routes and automated event handling. They also offer expanded regulatory support for fast, accurate screening and customs declarations. For example, Nahdi Medical Company has been able to improve truck utilization by five to ten percent since implementing the latest release of Oracle Transportation Management.

“We now have greater visibility into shipment demands and delivery status, which means we have the flexibility to quickly re-do plans as new requirements arise,” said Sayed Al-Sayed, supply chain applications manager, Nahdi Medical Company. “The accuracy of Oracle Transportation Management Cloud’s recommended plans and route optimizations has enabled us to increase truck utilization rates, which has allowed us to cut time on shipments, while also saving money.”

Enhancements to Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud include:

  • Enhanced routing: Enables customers to make better decisions when routing shipments by accounting for factors such as historic traffic patterns, hazardous materials, and tolls when planning shipments

  • Expanded regulatory support: Enables customers to better meet their global trade needs in today’s dynamically changing, regulatory environment by supporting expanded regulatory content, allowing more accurate screening, and providing enhancements for summary customs declarations

  • Advanced planning: Enables customers to better map out and improve transportation planning in end-to-end, outbound order fulfillment by providing enhancements to sample integrated flows with Oracle Order Management Cloud and Oracle Inventory Management Cloud

  • Driver-oriented features: Enables customers to improve the handling of assignments for shift-based drivers by providing enhanced support for planning and execution of private and dedicated fleets, including driver-oriented workflow in the OTM Mobile App

  • IoT Fleet Monitoring integration: Enables customers to have real-time visibility into a shipment’s location through automatic event handling

  • UX and collaboration: Enables customers to have a simpler and more configurable user interface, including integration with Oracle Content and Experience Cloud, which streamlines collaboration with peers

  • Graphical diagnostics tool: Enables customers to easily research and resolve questions in real time that may arise from the shipment planning processes by providing them with a visual diagnostic tool, which highlights areas that warrant further investigation

“The global trade and logistics landscape is dramatically changing and organizations need to be able to accommodate a complex web of regulations and customer expectations for each shipment in order to be competitive,” said Derek Gittoes, vice president, SCM Product Strategy, Oracle. “The combination of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud gives our customers an innovative platform to reduce complexity and effectively improve the efficiency of their global trade compliance and transport operations.”

The enhancements further extend Oracle’s leadership position in the transportation management category. Oracle Transportation Management Cloud provides a single platform for transportation orchestration across the entire supply chain. Oracle Global Trade Management Cloud offers unparalleled visibility and control over orders and shipments to reduce costs, automate processes, streamline compliance, and improve service delivery.

Contact Info
Vanessa Johnson
Oracle
+1650.607.1692
vanessa.n.johnson@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle. 

Talk to a Press Contact

Vanessa Johnson

  • +1650.607.1692

[VLOG]Cloud Service Model: SaaS | PaaS | IaaS

Online Apps DBA - Tue, 2018-08-14 02:10

Add a catalyst to your Cloud journey and find answers to all your questions, such as what Cloud computing is and what its characteristics are. Visit https://k21academy.com/cloud02 to gain in-depth knowledge about the Cloud service models: SaaS, PaaS and IaaS.

The post [VLOG]Cloud Service Model: SaaS | PaaS | IaaS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Using the managed PostgreSQL service in Azure

Yann Neuhaus - Tue, 2018-08-14 00:59

In the last post we had a look at how you can bring up a customized PostgreSQL instance in the Azure cloud. Now I want to check what you can do with the managed service. From a managed service I expect that I can bring up a PostgreSQL instance quickly and easily and that I can add replicas on demand. Let's see what is there and how you can use it.

Of course we need to log in again:

dwe@dwe:~$ cd /var/tmp
dwe@dwe:/var/tmp$ az login

The az command for working with PostgreSQL is simply “postgres”:

dwe@dwe:~$ az postgres --help

Group
    az postgres : Manage Azure Database for PostgreSQL servers.

Subgroups:
    db          : Manage PostgreSQL databases on a server.
    server      : Manage PostgreSQL servers.
    server-logs : Manage server logs.

It does not look like we can do much, but you never know, so let's bring up an instance. Again, we need a resource group first:

dwe@dwe:~$ az group create --name PGTEST --location "westeurope"
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST",
  "location": "westeurope",
  "managedBy": null,
  "name": "PGTEST",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

Let's try to bring up an instance with a little storage (512 MB), SSL enabled and the standard postgres user:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 512 --admin-user postgres --admin-password xxxxx --location westeurope
Deployment failed. Correlation ID: e3cd6d04-3557-4c2a-b70f-7c11a61c395d. Server name 'PG1' cannot be empty or null. It can only be made up of lowercase letters 'a'-'z', the numbers 0-9 and the hyphen. The hyphen may not lead or trail in the name.

OK, it seems upper-case letters are not allowed, so try again:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 512 --admin-user postgres --admin-password postgres --location westeurope
Deployment failed. Correlation ID: e50ca5d6-0e38-48b8-8015-786233c0d103. The storage size of 512 MB does not meet the minimum required storage of 5120 MB.

Ok, we need a minimum of 5120 MB of storage, again:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password postgres --location westeurope
Deployment failed. Correlation ID: 470975ce-1ee1-4531-8703-55947772fb51. Password validation failed. The password does not meet policy requirements because it is not complex enough.

This one is good as it at least denies the postgres/postgres combination. Again with a better password:

dwe@dwe:~$ az postgres server create --name mymanagedpg1 --resource-group PGTEST --sku-name B_Gen4_2 --ssl-enforcement Enabled --storage-size 5120 --admin-user postgres --admin-password "xxx" --location westeurope
{
  "administratorLogin": "postgres",
  "earliestRestoreDate": "2018-08-13T12:30:10.763000+00:00",
  "fullyQualifiedDomainName": "mymanagedpg1.postgres.database.azure.com",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1",
  "location": "westeurope",
  "name": "mymanagedpg1",
  "resourceGroup": "PGTEST",
  "sku": {
    "capacity": 2,
    "family": "Gen4",
    "name": "B_Gen4_2",
    "size": null,
    "tier": "Basic"
  },
  "sslEnforcement": "Enabled",
  "storageProfile": {
    "backupRetentionDays": 7,
    "geoRedundantBackup": "Disabled",
    "storageMb": 5120
  },
  "tags": null,
  "type": "Microsoft.DBforPostgreSQL/servers",
  "userVisibleState": "Ready",
  "version": "9.6"
}

Better. What I am not happy with is that the default seems to be PostgreSQL 9.6. PostgreSQL 10 has been out for around a year now and should definitely be the default. In the portal it looks like this, and there you can also find the information required for connecting to the instance:
Selection_004

So let's try to connect:

dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1
psql: FATAL:  no pg_hba.conf entry for host "x.x.x.xx", user "postgres", database "postgres@mymanagedpg1", SSL on
FATAL:  SSL connection is required. Please specify SSL options and retry.

How do we manage that with the managed PostgreSQL service? There is actually no az command to modify pg_hba.conf; what we need to do instead is create a firewall rule:

dwe@dwe:~$ az postgres server firewall-rule create -g PGTEST -s mymanagedpg1 -n allowall --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255
{
  "endIpAddress": "255.255.255.255",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/firewallRules/allowall",
  "name": "allowall",
  "resourceGroup": "PGTEST",
  "startIpAddress": "0.0.0.0",
  "type": "Microsoft.DBforPostgreSQL/servers/firewallRules"
}

Of course you should not open up to the whole world as I am doing here; a more restrictive rule is sketched after the connection test below. When the rule is in place, connections do work:

dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 postgres
Password for user postgres@mymanagedpg1: 
psql (9.5.13, server 9.6.9)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> 
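
As noted above, allowing 0.0.0.0 to 255.255.255.255 is only acceptable for a throw-away demo. A safer rule would be restricted to the public IP of your own client; a minimal sketch, with 203.0.113.10 as a purely illustrative address:

dwe@dwe:~$ az postgres server firewall-rule create -g PGTEST -s mymanagedpg1 -n myclient --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10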

There is an additional database called “azure_maintenance” and we are not allowed to connect there:

postgres=> \l
                                                               List of databases
       Name        |      Owner      | Encoding |          Collate           |           Ctype            |          Access privileges          
-------------------+-----------------+----------+----------------------------+----------------------------+-------------------------------------
 azure_maintenance | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | azure_superuser=CTc/azure_superuser
 postgres          | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | 
 template0         | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | =c/azure_superuser                 +
                   |                 |          |                            |                            | azure_superuser=CTc/azure_superuser
 template1         | azure_superuser | UTF8     | English_United States.1252 | English_United States.1252 | =c/azure_superuser                 +
                   |                 |          |                            |                            | azure_superuser=CTc/azure_superuser
(4 rows)
postgres=> \c azure_maintenance
FATAL:  permission denied for database "azure_maintenance"
DETAIL:  User does not have CONNECT privilege.
Previous connection kept

The minor release is one behind, but as the latest minor release only came out this week, that seems fine:

postgres=> select version();
                           version                           
-------------------------------------------------------------
 PostgreSQL 9.6.9, compiled by Visual C++ build 1800, 64-bit
(1 row)

postgres=> 

I would probably not compile PostgreSQL with “Visual C++” but given that we use a Microsoft product, surprise, we are running on Windows:

postgres=> select name,setting from pg_settings where name = 'archive_azure_location';
          name          |          setting          
------------------------+---------------------------
 archive_azure_location | c:\BackupShareDir\Archive
(1 row)

… and the PostgreSQL source code was modified as this parameter does not exist in the community version.

Access to the server logs is quite easy:

dwe@dwe:~$ az postgres server-logs list --resource-group PGTEST --server-name mymanagedpg1
[
  {
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/logFiles/postgresql-2018-08-13_122334.log",
    "lastModifiedTime": "2018-08-13T12:59:26+00:00",
    "logFileType": "text",
    "name": "postgresql-2018-08-13_122334.log",
    "resourceGroup": "PGTEST",
    "sizeInKb": 6,
    "type": "Microsoft.DBforPostgreSQL/servers/logFiles",
    "url": "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_122334.log?sv=2015-04-05&sr=f&sig=ojGG2km5NFrfQ8dJ0btz8bhmwNMe0F7oq0iTRum%2FjJ4%3D&se=2018-08-13T14%3A06%3A16Z&sp=r"
  },
  {
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/logFiles/postgresql-2018-08-13_130000.log",
    "lastModifiedTime": "2018-08-13T13:00:00+00:00",
    "logFileType": "text",
    "name": "postgresql-2018-08-13_130000.log",
    "resourceGroup": "PGTEST",
    "sizeInKb": 0,
    "type": "Microsoft.DBforPostgreSQL/servers/logFiles",
    "url": "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_130000.log?sv=2015-04-05&sr=f&sig=k8avZ62KyLN8RW0ZcIigyPZa40EKNBJvNvneViHjyeI%3D&se=2018-08-13T14%3A06%3A16Z&sp=r"
  }
]

We can just download the logs and have a look at them:

dwe@dwe:~$ wget "https://wasd2prodweu1afse118.file.core.windows.net/74484e5541e04b5a8556eac6a9eb37c8/pg_log/postgresql-2018-08-13_122334.log?sv=2015-04-05&sr=f&sig=Mzy2dQ%2BgRPY8lfkUAP5X%2FkXSxoxWSwrphy7BphaTjLk%3D&se=2018-08-13T14%3A07%3A29Z&sp=r" 
dwe@dwe:~$ more postgresql-2018-08-13_122334.log\?sv\=2015-04-05\&sr\=f\&sig\=Mzy2dQ%2BgRPY8lfkUAP5X%2FkXSxoxWSwrphy7BphaTjLk%3D\&se\=2018-08-13T14%3A07%3A29Z\&sp\=r
2018-08-13 12:23:34 UTC-5b717845.6c-LOG:  could not bind IPv6 socket: A socket operation was attempted to an unreachable host.
	
2018-08-13 12:23:34 UTC-5b717845.6c-HINT:  Is another postmaster already running on port 20686? If not, wait a few seconds and retry.
2018-08-13 12:23:34 UTC-5b717846.78-LOG:  database system was shut down at 2018-08-13 12:23:32 UTC
2018-08-13 12:23:35 UTC-5b717846.78-LOG:  database startup complete in 1 seconds, startup began 2 seconds after last stop
...

The PostgreSQL configuration is accessible quite easily:

dwe@dwe:~$ az postgres server configuration list --resource-group PGTEST --server-name mymanagedpg1 | head -20
[
  {
    "allowedValues": "on,off",
    "dataType": "Boolean",
    "defaultValue": "on",
    "description": "Enable input of NULL elements in arrays.",
    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/array_nulls",
    "name": "array_nulls",
    "resourceGroup": "PGTEST",
    "source": "system-default",
    "type": "Microsoft.DBforPostgreSQL/servers/configurations",
    "value": "on"
  },
  {
    "allowedValues": "safe_encoding,on,off",
    "dataType": "Enumeration",
    "defaultValue": "safe_encoding",
    "description": "Sets whether \"\\'\" is allowed in string literals.",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/backslash_quote",
    "name": "backslash_quote",

Setting a parameter is easy as well:

dwe@dwe:~$ az postgres server configuration set --name work_mem --value=32 --resource-group PGTEST --server-name mymanagedpg1
Deployment failed. Correlation ID: 634fd473-0c28-43a7-946e-ecbb26faf961. The value '32' for configuration 'work_mem' is not valid. The allowed values are '4096-2097151'.
dwe@dwe:~$ az postgres server configuration set --name work_mem --value=4096 --resource-group PGTEST --server-name mymanagedpg1
{
  "allowedValues": "4096-2097151",
  "dataType": "Integer",
  "defaultValue": "4096",
  "description": "Sets the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files.",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.DBforPostgreSQL/servers/mymanagedpg1/configurations/work_mem",
  "name": "work_mem",
  "resourceGroup": "PGTEST",
  "source": "system-default",
  "type": "Microsoft.DBforPostgreSQL/servers/configurations",
  "value": "4096"
}

The interesting point is what happens when we change a parameter that requires a restart:

dwe@dwe:~$ az postgres server configuration set --name shared_buffers --value=4096 --resource-group PGTEST --server-name mymanagedpg1
Deployment failed. Correlation ID: d849b302-1c41-4b13-a2d5-6b24f144be89. The configuration 'shared_buffers' does not exist for PostgreSQL server version 9.6.
dwe@dwe:~$ psql -h mymanagedpg1.postgres.database.azure.com -U postgres@mymanagedpg1 postgres
Password for user postgres@mymanagedpg1: 
psql (9.5.13, server 9.6.9)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> show shared_buffers ;
 shared_buffers 
----------------
 512MB
(1 row)
postgres=> 

So memory configuration depends on the pricing model; more information here. If you want to scale up or down, “you can independently change the vCores, the hardware generation, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period”.
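
Scaling itself is not shown here, but assuming the az postgres server update subcommand with the --sku-name option is available in your CLI version (check az postgres server update --help first), it could look roughly like this:

dwe@dwe:~$ az postgres server update --resource-group PGTEST --name mymanagedpg1 --sku-name B_Gen4_4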

Some final thoughts: bringing an instance up is quick and simple. The default PostgreSQL version is 9.6.x, which is not a good choice in my opinion; version 10 has already received its fifth minor release and is the stable, most recent version. Scaling up and down is a matter of changing basics such as cores, memory, storage and the pricing model. For many workloads this is probably fine; if you want more control, you would do better to provision VMs and then manage PostgreSQL yourself. High availability is not implemented by adding replicas but by creating a new node, attaching the storage to that node and then bringing it up. This might be sufficient, or it might not, depending on your requirements.

In a future post we will build our own PostgreSQL HA solution on Azure.

 

The article Using the managed PostgreSQL service in Azure first appeared on Blog dbi services.

Tracking Down the Big Bang: CERN and Oracle Extend Research and Development Partnership

Oracle Press Releases - Mon, 2018-08-13 12:31
Press Release
Tracking Down the Big Bang: CERN and Oracle Extend Research and Development Partnership CERN uses Oracle cloud technologies to support its research infrastructure, used to investigate the fundamental structure of our universe and phenomena such as dark matter and dark energy

Geneva—Aug 13, 2018

The European Organization for Nuclear Research (CERN) is extending its partnership with Oracle for another three years. This partnership is through a research-and-development program set up by the laboratory, called CERN openlab; it provides a unique research framework in which scientists and leading IT companies can work together. One of the aims of the partnership with Oracle is to develop a high-performance cloud infrastructure capable of storing and analyzing vast quantities of controls data, such as that generated by the gigantic research infrastructures used at the laboratory to probe the origin of the universe. Oracle can also make use of insights gained from the program to provide its customers with extremely powerful and future-proof cloud technologies.

CERN openlab has provided a unique setting for cooperation between science and industry since 2001. In this program, CERN cooperates with leading IT companies on the joint development of high-performance technologies for basic research in physics. Oracle has been a partner in the program since 2003 and started another three-year project cycle in 2018. As one of the largest members, the cloud provider is involved in four current CERN openlab projects. In addition, every year 40 students from all over the world get an opportunity to work on current projects during a nine-week summer school program.

“CERN openlab is a win-win project for everyone involved,” said Eric Grancher, Leader of CERN’s Database Services Group. “It gives our collaborators a way to get valuable feedback by testing their solutions in one of the most complex and demanding technology environments. And we at CERN can assess the potential that new technologies have for future applications during the early phases of their development. In addition, CERN openlab provides a neutral scientific environment where businesses can engage in dialogue.”

“We are pleased to extend our partnership with Oracle for another three years,” said Eva Dafonte Perez, Deputy Leader of CERN’s Database Services Group. “In addition to our 15-year partnership through CERN openlab, we have been working with Oracle since 1982. We will continue to need high-performance and, above all, quickly scalable solutions in the future in order to store and analyze the growing amount of data recorded by our instrumentation. Oracle offers flexibility because its solutions are available both on-premises and in the cloud.”

Based in Geneva, Switzerland, CERN is dedicated to basic research in physics. CERN uses its Large Hadron Collider (LHC), the world’s largest particle accelerator, to investigate the fundamental structure of the universe. In the LHC, subatomic particles are accelerated and caused to collide, simulating the conditions just a fraction of a second after the Big Bang. The LHC experiments currently produce approximately 50 petabytes of data annually, a volume corresponding to roughly 2,000 years of HD video content.

However, our current understanding of physics only explains the visible matter that makes up about 5% of the Universe’s total energy. The LHC is therefore to be made even more powerful, generating even more particle collisions and boosting efforts to investigate phenomena such as dark matter and dark energy. CERN also needs to have correspondingly powerful IT infrastructure in place; the laboratory’s cooperation with Oracle plays a key role in ensuring this.

“CERN’s research goals are extremely exciting, with technologies developed at the laboratory having had significant impact on our everyday lives. For example, technologies developed at CERN have already helped improve the treatment of certain types of cancer. So we are very pleased to renew our partnership in CERN openlab and hope to work together to develop even more powerful technologies that will advance both science and industry,” said David Ebert, Director-Government, Education, Healthcare Industry Solutions EMEA, Oracle.

Contact Info
Harald Gessner
Head of Corporate Communications Europe North, Oracle
+49 89 1430 1215
harald.gessner@oracle.com
Claudia Wittwer/Volker Schmidt
Burson Marsteller
+49 89 17 959 1884
Oracle-Germany@bm.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Harald Gessner

  • +49 89 1430 1215

Claudia Wittwer/Volker Schmidt

  • +49 89 17 959 1884

August 2018 Update to Integrated SOA Gateway

Steven Chan - Mon, 2018-08-13 11:19

Oracle E-Business Suite Integrated SOA Gateway (ISG) provides an infrastructure to provide, consume, and administer Oracle E-Business Suite web services. It service-enables public integration interfaces registered in Oracle Integration Repository. These interfaces can be deployed as SOAP and/or REST-based web services. ISG also provides a service invocation framework to consume external SOAP based web services.

I am pleased to announce the availability of a new consolidated patch for Integrated SOA Gateway in EBS R12.2 and EBS R12.1.3. This patch contains a new feature: REST support for Open Interface Tables. It also includes over four dozen fixes for stability and performance, and is highly recommended for all users.

You can download the latest ISG consolidated patch here:

Oracle E-Business Suite Integrated SOA Gateway R12.2:
Patch 27949145:R12.OWF.C - ISG Consolidated Patch for Release 12.2 (18_3_3)

This is a cumulative patch update that includes all previously released ISG updates for EBS R12.2.

Oracle E-Business Suite Integrated SOA Gateway R12.1.3:
Patch 27941624:R12.OWF.B - ISG Consolidated Patch for Release 12.1 (18_3_3)

This is a cumulative patch update that includes all previously released ISG updates for EBS R12.1.3.

These patches include critical bug fixes for Integrated SOA Gateway, along with major updates to the REST service enablement of Open Interface Tables and Views. If you are using Open Interface Tables and Views as REST-based web services with a previous ISG consolidated patch, you must apply this new consolidated patch.
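
The patch readme is authoritative, but for EBS 12.2 applying an online patch of this kind typically looks something like the sketch below (assuming a standard adop online patching cycle; for 12.1.3, adpatch is used instead):

$ adop phase=apply patches=27949145
$ adop phase=finalize
$ adop phase=cutover
$ adop phase=cleanup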
 
References

Related Articles

Categories: APPS Blogs

Bioscience Companies Deliver Life-Changing Products to Patients Faster with Help from NetSuite

Oracle Press Releases - Mon, 2018-08-13 08:00
Press Release
Bioscience Companies Deliver Life-Changing Products to Patients Faster with Help from NetSuite Selecta Biosciences and BioMonde Select NetSuite to Simplify Business Processes

San Mateo,Calif.—Aug 13, 2018

To accelerate the commercialization of life-changing products, bioscience companies are selecting Oracle NetSuite. Two cutting-edge bioscience companies, Selecta Biosciences and BioMonde, are leveraging NetSuite to bring life-changing products to market faster and allow them to focus on the continued treatment and prevention of serious diseases.

Selecta Biosciences Seeks to Counter Unwanted Immune Responses

Selecta Biosciences, a clinical-stage biopharmaceutical company, is focused on unlocking the full potential of biologic therapies by mitigating unwanted immune responses. Selecta Bioscience’s current pipeline includes enzyme, oncology and gene therapy product candidates enabled by its proprietary Synthetic Vaccine Particles (SVP™) technology.

After working with inefficient, on-premise software, Selecta Biosciences selected NetSuite to better manage financial, accounting, and project management processes with one unified system. With NetSuite, Selecta Biosciences has harmonized data from accounting, timekeeping and project management, which allows the business to synchronize project planning with cash flow and the timing of crucial product events.

“Implementing NetSuite was a great system investment,” said Chip Michaud, FP&A Director, Selecta Biosciences. “By significantly reducing the amount of time to handle backend processes, we are able to better focus on developing our product candidates.”

BioMonde Revisits Ancient Treatment

BioMonde, a global bioscience company based in the UK, specializes in larval debridement therapy for the treatment of patients with chronic and non-healing wounds. Larval Therapy has proven efficacy and is known as a natural leader in wound debridement.

To help manage complex business processes, regulatory mandates, and global manufacturing and distribution challenges, BioMonde selected NetSuite OneWorld. With its data-driven approach, NetSuite OneWorld enables BioMonde to quickly respond to patients’ needs by providing a 360-degree view of financial, customer, product and order data that allows BioMonde to focus on expanding its global footprint and bringing its innovative treatment to more suffering patients.

“To reach our potential, we quickly needed to become a world-wide operation,” said Mike Johnson, CFO, BioMonde. “NetSuite OneWorld enabled us to expand and now we’re able to slice and dice not just financial data, but all the data sales people have captured. With this data, we have a clear view of where we are and where we need to be at any given moment.”

“The bioscience industry faces unique challenges, including complex healthcare regulations and commercialization requirements, which means it often takes years and millions of dollars to turn ideas into reality,” said Paul Farrell, VP Industry Product Marketing & Management, Oracle NetSuite. “To navigate this complexity and devote their attention to making life-changing breakthroughs for patients, bioscience companies are increasingly selecting NetSuite to manage their business processes. NetSuite gives them a unified, cloud-based platform to streamline business complexities and pressures around everything from regulation to increased competition.”

Learn more about NetSuite’s offering for bioscience companies.

Contact Info
Danielle Tarp
Oracle NetSuite Corporate Communications
650-506-2904
danielle.tarp@oracle.com
About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials/Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 40,000 organizations and subsidiaries in 199 countries and territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Danielle Tarp

  • 650-506-2904

ORACLE_HOME with symbolic link and postupgrade_fixups

Yann Neuhaus - Mon, 2018-08-13 07:26

Here is a quick post you may find when googling the following error from postupgrade_fixups.sql after an upgrade:

ERROR - Cannot open the preupgrade_messages.properties file from the directory object preupgrade_dir
DECLARE
*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_PREUP", line 3300
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_PREUP", line 3260
ORA-06512: at "SYS.DBMS_PREUP", line 9739
ORA-06512: at line 11


Before upgrading a database with dbupgrade, you run, on the current version of your database, the preupgrade.jar from the new version (and probably download the latest one from MOS). This generates a script to run before the upgrade, and one to run after the upgrade. Those scripts are generated under $ORACLE_BASE/cfgtoollogs/<database>/preupgrade, where you find something like this:

drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 ..
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 oracle
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 upgrade
-rw-r--r--. 1 oracle oinstall 14846 Aug 11 20:19 dbms_registry_extended.sql
-rw-r--r--. 1 oracle oinstall 7963 Aug 11 20:19 preupgrade_driver.sql
-rw-r--r--. 1 oracle oinstall 422048 Aug 11 20:19 preupgrade_package.sql
-rw-r--r--. 1 oracle oinstall 14383 Aug 11 20:19 parameters.properties
-rw-r--r--. 1 oracle oinstall 83854 Aug 11 20:19 preupgrade_messages.properties
-rw-r--r--. 1 oracle oinstall 50172 Aug 11 20:19 components.properties
-rw-r--r--. 1 oracle oinstall 2 Aug 11 20:19 checksBuffer.tmp
-rw-r--r--. 1 oracle oinstall 6492 Aug 11 20:20 preupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 7575 Aug 11 20:20 postupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 5587 Aug 11 20:20 preupgrade.log

Everything is straightforward.

oracle@vmreforatun01:/u00/app/oracle/product/ [DB2] java -jar /u00/app/oracle/product/18EE/rdbms/admin/preupgrade.jar
...
==================
PREUPGRADE SUMMARY
==================
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade.log
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Execute fixup scripts as indicated below:
 
Before upgrade log into the database and execute the preupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
 
After the upgrade:
 
Log into the database and execute the postupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Preupgrade complete: 2018-08-11T19:37:29
oracle@vmreforatun01:/u00/app/oracle/product/ [DB2]

For a database we have in a lab for our workshops, which I upgraded to 18c, I ran the postupgrade fixup script after the upgrade but got the error mentioned above about a UTL_FILE invalid file operation in the preupgrade_dir. I looked at the script. The postupgrade_fixups.sql script creates a directory on $ORACLE_HOME/rdbms/admin and calls preupgrade_package.sql, which reads preupgrade_messages.properties.

This is a bit confusing because there’s also the same file in the cfgtoollogs preupgrade subdirectory but my directory looks good:

SQL> select directory_name,directory_path from dba_directories where directory_name='PREUPGRADE_DIR';
 
DIRECTORY_NAME
--------------------------------------------------------------------------------------------------------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
PREUPGRADE_DIR
/u00/app/oracle/product/18SE/rdbms/admin

So, as the “ORA-29283: invalid file operation” is not very detailed, I traced all the system calls on files (strace -fye trace=file) when running sqlplus and got this:

[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 723389136}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/preupgrade_messages.properties", {st_mode=S_IFREG|0644, st_size=83854, ...}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] lstat("/u00", {st_mode=S_IFLNK|0777, st_size=11, ...}) = 0
[pid 29974] readlink("/u00", "/oracle/u00", 4095) = 11
[pid 29974] lstat("/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms/admin", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 724514469}) = 0

Then I realized that the ORACLE_HOME is under a symbolic link. For whatever reason, in this environment, ORACLE_BASE is physically /oracle/u00/app/oracle but there’s a /u00 link to /oracle/u00, and this short one was used to set the environment variables. UTL_FILE, since 11g and for security reasons, does not accept directories which use a symbolic link. And we can see in the strace output above that it was detected (readlink).
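
A quick way to check whether a path goes through a symbolic link is to compare it with its fully resolved form. A small sketch, with the output shown as it would look in this environment:

oracle@vmreforatun01:/u00/app/oracle/product/ [DB2] readlink -f /u00/app/oracle/product/18SE/rdbms/admin
/oracle/u00/app/oracle/product/18SE/rdbms/admin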

So, the solution can be a quick workaround here, changing the postupgrade_fixups.sql to set the physical path instead of the one read from ORACLE_HOME by dbms_system.get_env.

However, if you can restart the instance, it is better to set ORACLE_HOME to the physical path. Symbolic links for the ORACLE_HOME may be misleading. Remember that the ORACLE_HOME text string is part of the instance identification, combined with ORACLE_SID. So, having different values, even when they resolve to the same path, will bring a lot of problems. Do not forget to change it everywhere (shell environment, listener.ora) so that you are sure that nobody will use a different one when starting the database.
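
A minimal sketch of what that could look like in the shell profile, assuming the physical path from this environment (adapt the names to your setup, and remember to update listener.ora as well):

export ORACLE_SID=DB2
export ORACLE_HOME=/oracle/u00/app/oracle/product/18SE   # physical path, no symbolic link
export PATH=$ORACLE_HOME/bin:$PATH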

 

The article ORACLE_HOME with symbolic link and postupgrade_fixups first appeared on Blog dbi services.
