Feed aggregator

Bringing up your customized PostgreSQL instance on Azure

Yann Neuhaus - Mon, 2018-08-13 06:53

The Azure cloud is becoming more and more popular, so I gave it a try and started simple. The goal was to provision a VM, compile and install PostgreSQL, and then connect to the instance. There is also a fully managed PostgreSQL service, but I wanted to do it on my own just to get a feeling for the command line tools. Here is how I’ve done it.

Obviously you need to log in, which is just a matter of this:

dwe@dwe:~$ cd /var/tmp
dwe@dwe:/var/tmp$ az login

To do anything in Azure you’ll need to create a resource group, which is like a container holding your resources. As a resource group needs to be created in a specific location, the next step is to get a list of the available locations:

dwe@dwe:/var/tmp$ az account list-locations
[
  {
    "displayName": "East Asia",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/locations/eastasia",
    "latitude": "22.267",
    "longitude": "114.188",
    "name": "eastasia",
    "subscriptionId": null
  },
  {
    "displayName": "Southeast Asia",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/locations/southeastasia",
    "latitude": "1.283",
    "longitude": "103.833",
    "name": "southeastasia",
    "subscriptionId": null
  },
...

Once you have selected a location the resource group can be created:

dwe@dwe:/var/tmp$ az group create --name PGTEST --location "westeurope"
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST",
  "location": "westeurope",
  "managedBy": null,
  "name": "PGTEST",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

All you need to do to create a CentOS VM is run this simple command:

dwe@dwe:/var/tmp$ az vm create -n MyPg -g PGTEST --image centos --data-disk-sizes-gb 10 --size Standard_DS2_v2 --generate-ssh-keys
{
  "fqdns": "",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg",
  "location": "westeurope",
  "macAddress": "xx-xx-xx-xx-xx-xx",
  "powerState": "VM running",
  "privateIpAddress": "x.x.x.x",
  "publicIpAddress": "x.x.x.x",
  "resourceGroup": "PGTEST",
  "zones": ""
}

While the VM is being created you can watch the resources appearing in the portal.

As soon as the VM is ready, connecting via ssh is possible (the SSH keys have been added automatically, no password required):

dwe@dwe:/var/tmp$ ssh x.x.x.x
The authenticity of host 'xx.xx.x.x (xx.xx.x.x)' can't be established.
ECDSA key fingerprint is SHA256:YzNOzg30JH0A3U1R+6WzuJEd3+7N4GmwpSVkznhuTuE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xx.xx.x.x' (ECDSA) to the list of known hosts.
[dwe@MyPg ~]$ ls -la /etc/yum.repos.d/
total 44
drwxr-xr-x.  2 root root  209 Sep 25  2017 .
drwxr-xr-x. 86 root root 8192 Aug  2 08:05 ..
-rw-r--r--.  1 root root 1706 Sep 25  2017 CentOS-Base.repo
-rw-r--r--.  1 root root 1309 Nov 29  2016 CentOS-CR.repo
-rw-r--r--.  1 root root  649 Nov 29  2016 CentOS-Debuginfo.repo
-rw-r--r--.  1 root root  314 Nov 29  2016 CentOS-fasttrack.repo
-rw-r--r--.  1 root root  630 Nov 29  2016 CentOS-Media.repo
-rw-r--r--.  1 root root 1331 Nov 29  2016 CentOS-Sources.repo
-rw-r--r--.  1 root root 2893 Nov 29  2016 CentOS-Vault.repo
-rw-r--r--.  1 root root  282 Sep 25  2017 OpenLogic.repo
[dwe@MyPg ~]$ sudo su -
[root@MyPg ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core) 
[root@MyPg ~]# 

Of course we want to update all the operating system packages to the latest release before moving on. Be careful here to really exclude WALinuxAgent: otherwise the agent will be upgraded (and restarted) as well, and the script execution will fail as you lose connectivity:

dwe@dwe:/var/tmp$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum update -y --exclude=WALinuxAgent"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "yum update -y --exclude=WALinuxAgent"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

To compile PostgreSQL we need some packages, so let’s install them (not all of them are required for compiling PostgreSQL, but this is what we usually install):

dwe@dwe:/var/tmp$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget screen unzip sysstat xorg-x11-xauth systemd-devel bash-completion"}'

{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget screen unzip sysstat xorg-x11-xauth systemd-devel bash-completion"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Preparation work for the user, group and directories:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app; chown postgres:postgres /u01/app; mkdir -p /u02/pgdata; chown postgres:postgres /u02/pgdata"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app; chown postgres:postgres /u01/app; mkdir -p /u02/pgdata; chown postgres:postgres /u02/pgdata"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

For the next steps we will just copy over this script and then execute it:

dwe@dwe:~$ cat installPG.sh 
#!/bin/bash
cd /u01/app; wget https://ftp.postgresql.org/pub/source/v10.5/postgresql-10.5.tar.bz2
tar -axf postgresql-10.5.tar.bz2
rm -f postgresql-10.5.tar.bz2
cd postgresql-10.5
PGHOME=/u01/app/postgres/product/10/db_5/
SEGSIZE=2
BLOCKSIZE=8
WALSEGSIZE=16
./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-systemd
make -j 4 all
make install
cd contrib
make -j 4 install

dwe@dwe:~$ scp installPG.sh x.x.x.x:/var/tmp/
installPG.sh                                                                                                100% 1111     1.1KB/s   00:00    

Of course you could also add the yum commands to the same script, but I wanted to show both ways: using the CustomScript feature and copying over a script for execution. Let’s execute the script:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"chmod +x /var/tmp/installPG.sh; sudo su - postgres -c /var/tmp/installPG.sh"}'

  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "chmod +x /var/tmp/installPG.sh; sudo su - postgres -c /var/tmp/installPG.sh"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Binaries ready. Initialize the cluster:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/initdb -D /u02/pgdata/PG1\""}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/initdb -D /u02/pgdata/PG1\""
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Startup:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1 start\""}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx030698d5-42d6-41a1-8740-355649c409e7/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1 start\""
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

… and the instance is up and running:

dwe@dwe:~$ ssh x.x.x.x
Last login: Mon Aug 13 10:43:53 2018 from ip-37-201-6-36.hsi13.unitymediagroup.de
[dwe@MyPg ~]$ sudo su - postgres
Last login: Mon Aug 13 11:33:52 UTC 2018 on pts/0
[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/psql -c 'select version()'
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)
[postgres@MyPg ~]$ 

When you want to access this instance from outside Azure you will need to open the port:

dwe@dwe:~$ az vm open-port --resource-group PGTEST --name MyPg --port 5432

Once you have configured PostgreSQL to accept connections:

[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/psql
psql (10.5)
Type "help" for help.

postgres=# alter system set listen_addresses = '*';
ALTER SYSTEM
postgres=# alter user postgres password 'secret';
ALTER ROLE
postgres=# show port ;
 port 
------
 5432
(1 row)

postgres=# \q
[postgres@MyPg ~]$ echo "host    all             all             37.201.6.36/32   md5" >> /u02/pgdata/PG1/pg_hba.conf 
[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1/ restart

… you can access the instance from outside Azure:

dwe@dwe:~$ psql -h 137.117.157.183 -U postgres
Password for user postgres: 
psql (9.5.13, server 10.5)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# 

Put all that into a well-written script and you can have your customized PostgreSQL instance ready in Azure in a couple of minutes. Now that I have a feeling for how this works in general, I’ll look into the managed PostgreSQL service in another post.
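
As a starting point for such a script, here is a minimal sketch that simply chains the az commands used above (resource names and sizes are the ones from this post; it stops after the OS update because the remaining steps all follow the same "az vm extension set" pattern shown earlier, and it has no error handling):

#!/bin/bash
# Sketch only: chains the first steps from this post -- adapt names, sizes and add error handling
set -euo pipefail

az group create --name PGTEST --location "westeurope"

# Capture the public IP from the VM creation output so it can be reused for scp/ssh later
PUBIP=$(az vm create -n MyPg -g PGTEST --image centos --data-disk-sizes-gb 10 \
        --size Standard_DS2_v2 --generate-ssh-keys --query publicIpAddress -o tsv)
echo "VM reachable at ${PUBIP}"

az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript \
   --vm-name MyPg --resource-group PGTEST \
   --settings '{"commandToExecute":"yum update -y --exclude=WALinuxAgent"}'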

 

The article Bringing up your customized PostgreSQL instance on Azure appeared first on Blog dbi services.

Scripts for Batch-Processing using the Data Dictionary in #Exasol

The Oracle Instructor - Mon, 2018-08-13 05:59

If you want to batch-process over a number of objects in Exasol, scripts that work with the Data Dictionary might do the trick. Let’s say I want to drop all of my tables that start with the letter P:

 

SQL_EXA> open schema fred;
EXA: open schema fred;

Rows affected: 0

SQL_EXA> create or replace script droptables
          as
          res=query([[select * from exa_user_tables where table_name like 'P%']])
          for i=1, #res
          do
           query([[drop table ::t]],{t=res[i].TABLE_NAME})
          end
        /
EXA: create or replace script droptables...

Rows affected: 0

SQL_EXA> select table_name from exa_user_tables;
EXA: select table_name from exa_user_tables;

TABLE_NAME
--------------------------------------------------
P9_TAB
P8_TAB

2 rows in resultset.

Amazingly enough, DROP TABLE is an operation that can be rolled back in Exasol. So, as a safeguard, I can disable autocommit here.

SQL_EXA> set autocommit off;
SQL_EXA> execute script droptables;
EXA: execute script droptables;

Rows affected: 0

SQL_EXA> select table_name from exa_user_tables;
EXA: select table_name from exa_user_tables;

TABLE_NAME
--------------------------------------------------

0 rows in resultset.

SQL_EXA> rollback;
EXA: rollback;

Rows affected: 0

SQL_EXA> select table_name from exa_user_tables;
EXA: select table_name from exa_user_tables;

TABLE_NAME
---------------------------------------------------
P9_TAB
P8_TAB

2 rows in resultset.
Categories: DBA Blogs

Flow Navigation Menu Control in Oracle VBCS

Andrejus Baranovski - Sun, 2018-08-12 14:33
Oracle VBCS allows us to build multiple flows within the application. This is great - this helps to split application logic into different smaller modules. Although VBCS doesn't offer (in the current version) declarative support to build menu structure to navigate between the flows. Luckily this requirement can be achieved in few simple steps, please read John Ceccarelli post - Adding a Navigation Bar to a VBCS Application. I thought to go through instructions listed by John and test it out, today post is based on this. In my next posts I will take a look how to replace navigation bar menu structure with something more advanced, for example - menu slider on the left.

I think VBCS has great potential as a declarative JavaScript development IDE. Many concepts are similar to other Oracle declarative development tools, e.g. Forms and Oracle ADF. VBCS runs Oracle JET - everything you build in VBCS is Oracle JET. Oracle takes care of upgrading the Oracle JET version in VBCS; I applied a recent patch (with a click of a button) and the latest JET version is now available within our VBCS environment:


Coming back to flows in VBCS: we can create as many flows as we want. Each flow can be based on one or multiple fragments (HTML/JS modules). Here I have created three flows, each with a single fragment:


We can select a flow and this will bring us to the flow diagram, where we can implement navigation between flow elements/fragments:


Fragment - this is where the UI part is done:


So that's about flows and fragments. For someone with an ADF background, this sounds very similar to task flows and fragments. Next we should see how to implement flow navigation, to be able to select a flow from the top menu. A VBCS application comes with a so-called shell page. This page is the top UI wrapper, which contains the application name, logged-in user info, etc. Here we can implement a top-level menu, which will navigate through the application flows:


There must be a default flow, which is displayed once the application is loaded. The default flow is set in the settings of the shell page. Go to settings and choose the default flow, dashboard-flow in my case:


Next we need to add a JET component - a navigation list - to the shell page, to render the menu UI. You can do it by drag and drop, but it is easier to switch the shell page to source view and add the navigation list HTML portion manually (you can copy/paste it from the source code uploaded to GitHub, see the link at the end of this post) - the highlighted HTML will render a menu bar to navigate between flows:


Initially you will notice an error related to the JET navigation list not being recognised - we need to import it. Another error - the selection listener is not found - we will implement it.

To import the JET navigation list component, go to the source implementation of the shell page and add oj-navigation-list in the component imports section - this will solve the issue with the unknown navigation list entry:


To execute an action in VBCS, we must create an Action Chain. Create an Action Chain within the shell page - navigateToPage:


We need an input parameter - the flow name we want to navigate to. Create a variable in the Action Chain - currentFlow:


Add an action of type Navigate to the Action Chain, this will trigger the navigation logic:


Go to the Action Chain source and add "page": "{{$variables.currentFlow}}" under actions. This will force navigation to the flow whose name is passed through the parameter:


Finally we create a navigation list selection event (within the shell page); this event will trigger the action chain created above and pass the current flow ID. We must create a custom event and its name should match the event name defined in the JET navigation list HTML (see above):


Choose to create a custom event (it didn't work for me in Chrome, only in the Safari browser. A VBCS bug?) and provide the same name as in the navigation list component listener:


Choose our navigation Action Chain to be triggered from this event:


Just a reminder - the event is called from the navigation list selection:


The event passes the flow ID from the currently selected tab item:


At runtime, the dashboard flow is loaded by default:


We can switch to Jobs, etc.:


Download the exported VBCS app (runnable only in VBCS) from the GitHub repo.

Documentum – Silent Install – D2

Yann Neuhaus - Sun, 2018-08-12 07:50

In previous blogs, we installed, in silent mode, the Documentum binaries, a docbroker (+licence(s) if needed) as well as several repositories. In this one, we will see how to install D2 for a predefined list of docbases/repositories (on the Content Server side) and you will see that, here, the process is quite different.

D2 has supported silent installation for quite some time now and it is pretty easy to do. At the end of the D2 GUI Installer, there is a screen asking whether you want to generate a silent properties (response) file containing the information that has been set in the D2 GUI Installer. This is therefore one way to start working with the silent installation - or you can just read this blog ;).

So, let’s start with the preparation of a template file. I will use a lot of placeholders in the template and replace the values with sed commands, just as a quick look at how you can script a silent installation with a template configuration file and some properties prepared beforehand.

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/D2_template.xml
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/D2_template.xml
<?xml version="1.0" encoding="UTF-8"?>
<AutomatedInstallation langpack="eng">
  <com.izforge.izpack.panels.HTMLHelloPanel id="welcome"/>
  <com.izforge.izpack.panels.UserInputPanel id="SelectInstallOrMergeConfig">
    <userInput>
      <entry key="InstallD2" value="true"/>
      <entry key="MergeConfigs" value="false"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.HTMLInfoPanel id="readme"/>
  <com.izforge.izpack.panels.PacksPanel id="UNKNOWN (com.izforge.izpack.panels.PacksPanel)">
    <pack index="0" name="Installer files" selected="true"/>
    <pack index="1" name="D2" selected="###WAR_REQUIRED###"/>
    <pack index="2" name="D2-Config" selected="###WAR_REQUIRED###"/>
    <pack index="3" name="D2-API for Content Server/JMS" selected="true"/>
    <pack index="4" name="D2-API for BPM" selected="###BPM_REQUIRED###"/>
    <pack index="5" name="DAR" selected="###DAR_REQUIRED###"/>
  </com.izforge.izpack.panels.PacksPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.0">
    <userInput>
      <entry key="jboss5XCompliant" value="false"/>
      <entry key="webappsDir" value="###DOCUMENTUM###/D2-Install/war"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.2">
    <userInput>
      <entry key="pluginInstaller" value="###PLUGIN_LIST###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.3">
    <userInput>
      <entry key="csDir" value="###DOCUMENTUM###/D2-Install/D2-API"/>
      <entry key="bpmDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/bpm.ear"/>
      <entry key="jmsDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/ServerApps.ear"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.4">
    <userInput>
      <entry key="installationDir" value="###DOCUMENTUM###/D2-Install/DAR"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.5">
    <userInput>
      <entry key="dfsDir" value="/tmp/###DFS_SDK_PACKAGE###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.7">
    <userInput>
      <entry key="COMMON.USER_ACCOUNT" value="###INSTALL_OWNER###"/>
      <entry key="install.owner.password" value="###INSTALL_OWNER_PASSWD###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.8">
    <userInput>
      <entry key="SERVER.REPOSITORIES.NAMES" value="###DOCBASE_LIST###"/>
      <entry key="setReturnRepeatingValue" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.9">
    <userInput>
      <entry key="securityRadioSelection" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPD2ConfigOrClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseUsetheSameDFC">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableDFCSessionPool">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDFCKeyStoreInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetD2ConfigLanguage">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableD2BOCS">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetHideDomainforConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTemporaryMaxFiles">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="10">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="11">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfoForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="12">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="13">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="14">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="15">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="16">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="17">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="18">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="19">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="20">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="21">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="22">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTransferMode">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="24">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="25">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableAuditing">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPchooseWebAppServer">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskWebappsDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskNewWarDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.InstallPanel id="UNKNOWN (com.izforge.izpack.panels.InstallPanel)"/>
  <com.izforge.izpack.panels.XInfoPanel id="UNKNOWN (com.izforge.izpack.panels.XInfoPanel)"/>
  <com.izforge.izpack.panels.FinishPanel id="UNKNOWN (com.izforge.izpack.panels.FinishPanel)"/>
</AutomatedInstallation>

[dmadmin@content_server_01 ~]$

 

As you probably understood by looking at the above file, I’m using “/tmp/” for the input elements needed by D2, like the DFS package, the D2 installer or the D2+Pack Plugins, and I’m using “$DOCUMENTUM/D2-Install” as the output folder into which D2 generates its files.
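
Since several of the paths referenced in the template must already exist when the installer runs, a small preparation sketch might help (the folder names are taken from the template above; the exact package names and versions under /tmp are just the ones used in this example and must match what you downloaded):

# Output folders referenced by the template -- the D2 installer expects them to exist
mkdir -p $DOCUMENTUM/D2-Install/war \
         $DOCUMENTUM/D2-Install/D2-API \
         $DOCUMENTUM/D2-Install/DAR \
         $DOCUMENTUM/D2-Install/tmp

# Input packages: the D2 installer, the D2+Pack Plugins and the DFS SDK are expected to be
# extracted under /tmp (e.g. /tmp/D2_4.7.0_P18, /tmp/D2_pluspack_4.7.0.P18/Plugins and
# /tmp/emc-dfs-sdk-7.3), matching the paths used in the sed commands and the java command below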

Once you have the template ready, you can replace the placeholders as follows (this is just an example of configuration based on the other silent blogs I wrote so far):

[dmadmin@content_server_01 ~]$ export d2_install_file=/tmp/dctm_install/D2.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cp /tmp/dctm_install/D2_template.xml ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###WAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###BPM_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###DAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DOCUMENTUM###,$DOCUMENTUM," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###PLUGIN_LIST###,/tmp/D2_pluspack_4.7.0.P18/Plugins/C2-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/D2-Bin-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/O2-Install-4.7.0.jar;," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###JMS_HOME###,$DOCUMENTUM_SHARED/wildfly9.0.1," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DFS_SDK_PACKAGE###,emc-dfs-sdk-7.3," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "  ----> Please enter the Install Owner's password: " dm_pw; echo; echo
  ----> Please enter the Install Owner's password: <TYPE HERE THE PASSWORD>
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER###,dmadmin," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER_PASSWD###,${dm_pw}," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s/###DOCBASE_LIST###/Docbase1/" ${d2_install_file}
[dmadmin@content_server_01 ~]$

 
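
Before launching the installer, it can be worth verifying that no placeholder was left unreplaced in the generated response file; a quick sketch:

# Any remaining ###...### marker means that one of the sed commands above was missed
if grep -n "###" ${d2_install_file}; then
  echo "WARNING: unreplaced placeholders remain in ${d2_install_file}"
else
  echo "All placeholders have been replaced"
fi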

A short description of these properties as well as some notes on the values used above:

  • langpack: The language you are usually using for running the installers… English is fine if you use this template
  • entry key=”InstallD2″: Whether or not you want to install D2
  • entry key=”MergeConfigs”: Whether or not you want to merge the actual configuration/installation with the new one. I’m always restarting a D2 installation from scratch (removing the D2 hidden files for that) so I always set this to false
  • pack index=”0″ name=”Installer files”: Always set this to true to install D2 on a CS
  • pack index=”1″ name=”D2″: Whether or not you want to generate the D2 WAR file. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • pack index=”2″ name=”D2-Config”: Same as above but for the D2-Config WAR file
  • pack index=”3″ name=”D2-API for Content Server/JMS”: Whether or not you want the D2 Installer to put the D2 specific libraries into the JMS lib folder (path defined in: entry key=”jmsDir”). Even if you set this to true, you will still need to manually put a lot of D2 libs into the JMS lib folder because D2 only puts a few of them while many more are required to run D2 properly (see the documentation for the full list)
  • pack index=”4″ name=”D2-API for BPM”: Same as above but for the BPM this time (path defined in: entry key=”bpmDir”)
  • pack index=”5″ name=”DAR”: Whether or not you want to generate the DARs. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • entry key=”jboss5XCompliant”: I guess this is for the JBoss 5 support so if you are on Dctm 7.x, leave this as false
  • entry key=”webappsDir”: The path the D2 Installer will put the generated WAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/war” so this folder MUST exist before running the installer in silent
  • entry key=”pluginInstaller”: This one is a little bit trickier… It’s a semicolon-separated list of all D2+Pack Plugins you would like to install in addition to D2. In the above, I’m using the C2, D2-Bin as well as O2 plugins. The D2+Pack package must obviously be extracted BEFORE running the installer in silent mode and all the paths MUST exist (you will need to extract the plugin jar from each plugin zip file). I opened a few bug & enhancement requests for these so if you are facing an issue, let me know, I might be able to help you
  • entry key=”csDir”: The path the D2 Installer will put the generated libraries into. In this example, I set it to “$DOCUMENTUM/D2-Install/D2-API” so this folder MUST exist before running the installer in silent
  • entry key=”bpmDir”: The path the D2 Installer will put a few of the D2 libraries into for the BPM (it’s not all needed JARs and this parameter is obviously not needed if you set ###BPM_REQUIRED### to false)
  • entry key=”jmsDir”: Same as above but for the JMS this time
  • entry key=”installationDir”: The path the D2 Installer will put the generated DAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/DAR” so this folder MUST exist before running the installer in silent
  • entry key=”dfsDir”: The path where the DFS SDK can be found. The DFS SDK package MUST be extracted in this folder before running the installer in silent
  • entry key=”COMMON.USER_ACCOUNT”: The name of the Documentum Installation Owner
  • entry key=”install.owner.password”: The password of the Documentum Installation Owner. I used a “read -s” command above so it doesn’t appear on the command line, but it will be put in clear text in the xml file (a small cleanup sketch is shown after the installation command below)…
  • entry key=”SERVER.REPOSITORIES.NAMES”: A comma separated list of all docbases/repositories (without spaces) that need to be configured for D2. The DARs will be installed automatically on these docbases/repositories and if you want to do it properly, it mustn’t contain the GR. You could potentially add the GR in this parameter but all D2 DARs would be installed into the GR and this isn’t needed… Only the “D2-DAR.dar” and “Collaboration_Services.dar” are needed to be installed on the GR so I only add normal docbases/repositories in this parameter and once D2 is installed, I manually deploy these two DARs into the GR (I wrote a blog about deploying DARs easily to a docbase a few years ago if you are interested). So, here I have a value of “Docbase1″ but if you had two, you could set it to “Docbase1,Docbase2″
  • entry key=”setReturnRepeatingValue”: Whether or not you want the repeating values. A value of true should set the “return_top_results_row_based=false” in the server.ini
  • entry key=”securityRadioSelection”: A value of true means that D2 has to apply Security Rules to content BEFORE applying AutoLink and a value of false means that D2 can only do it AFTER
  • That’s the end of this file because I’m using D2 4.7 and in D2 4.7, there is no Lockbox anymore! If you are using previous D2 versions, you will need to put additional parameters for the D2 Lockbox generation, location, password, aso…

 

Once the properties file is ready, you can install D2 using the following command:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -DTRACE=true -DDEBUG=true -Djava.io.tmpdir=$DOCUMENTUM/D2-Install/tmp -jar /tmp/D2_4.7.0_P18/D2-Installer-4.7.0.jar ${d2_install_file}

 
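
As mentioned for the install.owner.password property above, the Installation Owner’s password ends up in clear text in the generated XML file, so once the installation is done you might want to scrub it. A minimal sketch (assuming the response file is no longer needed as-is):

# Blank out the clear-text password in the response file once the silent installation is done
sed -i 's,<entry key="install.owner.password" value="[^"]*"/>,<entry key="install.owner.password" value=""/>,' ${d2_install_file}
# ...or simply remove the response file altogether if you do not need to keep it
# rm -f ${d2_install_file}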

You now know how to install D2 on a Content Server using the silent installation provided by D2. As you saw above, it is quite different from the silent installation of the other Documentum components, but it works, so… Maybe at some point in the future, D2 will switch to the same kind of properties file as Documentum.

 

The article Documentum – Silent Install – D2 appeared first on Blog dbi services.

Partitioning -- 3a : Indexes on a Partitioned Table

Hemant K Chitale - Sun, 2018-08-12 06:35
Building on the case study of the Range Partitioned Table from the previous Blog Post, here are some Indexes.

SQL> select index_name, tablespace_name, partitioned, uniqueness
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 /

INDEX_NAME TABLESPACE_NAME PAR UNIQUENES
------------------------------ ------------------------------ --- ---------
SYS_C0017514 HEMANT NO UNIQUE

SQL>
SQL> select column_name, column_position
2 from user_ind_columns
3 where index_name = 'SYS_C0017514'
4 /

COLUMN_NAME COLUMN_POSITION
------------------------------ ---------------
SALE_ID 1

SQL>


We have an Index automatically built for the Primary Key constraint (note that the Index was created in the user's DEFAULT Tablespace).  This Index is a Global, Non-Partitioned Index.   Therefore, any Partition Maintenance operation (for a non-empty Partition) on the table may set the Index UNUSABLE unless the UPDATE INDEXES clause is used.

SQL> select status from user_indexes
2 where index_name = 'SYS_C0017514'
3 /

STATUS
--------
VALID

SQL> alter table sales_data drop partition P_2016;

Table altered.

SQL> select status from user_indexes
2 where index_name = 'SYS_C0017514'
3 /

STATUS
--------
VALID

SQL> insert into sales_data
2 values (1001,to_date('01-FEB-2017','DD-MON-RR'),'ABC1001',1,10,1000);

1 row created.

SQL> commit;

Commit complete.

SQL> alter table sales_data drop partition P_2017;

Table altered.

SQL> select status from user_indexes
2 where index_name = 'SYS_C0017514'
3 /

STATUS
--------
UNUSABLE

SQL>
SQL> alter index SYS_C0017514 rebuild;

Index altered.

SQL> select status from user_indexes
2 where index_name = 'SYS_C0017514'
3 /

STATUS
--------
VALID

SQL>


When I dropped the *empty* P_2016 Partition, the Index remained valid.  However, when I dropped the *non-empty*  P_2017 Partition (even if it has a single row and irrespective of whether Table/Partition statistics have been gathered) without the UPDATE INDEXES clause, the Index became UNUSABLE.  An UNUSABLE Unique Index will not allow fresh inserts (into *any* Partition of the table).

Next, I attempt to create a Local Partitioned Index.  Such an Index has a Partition corresponding to each Table Partition.

SQL> create index sales_data_lcl_ndx_1
2 on sales_data (sale_date, invoice_number) local
3 /

Index created.

SQL> select partitioned
2 from user_indexes
3 where index_name = 'SALES_DATA_LCL_NDX_1'
4 /

PAR
---
YES

SQL> select partitioned, tablespace_name, status
2 from user_indexes
3 where index_name = 'SALES_DATA_LCL_NDX_1'
4 /

PAR TABLESPACE_NAME STATUS
--- ------------------------------ --------
YES N/A

SQL>
SQL> select partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name = 'SALES_DATA_LCL_NDX_1'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME STATUS
------------------------------ ------------------------------ --------
P_2018 TBS_YEAR_2018 USABLE
P_2019 TBS_YEAR_2019 USABLE
P_2020 TBS_YEAR_2020 USABLE
P_MAXVALUE USERS USABLE

SQL>


A Local Partitioned Index is created with the LOCAL keyword in the CREATE INDEX statement.
For a Partitioned Index, the TABLESPACE_NAME and STATUS attributes carry no meaning at the Index level --- these have values for each Partition.

Note how the Index Partitions were created with the same Partition Name and Tablespace Name as the Table Partitions. Similarly, any Partition Maintenance operations (DROP, MERGE, SPLIT) at the Table level will automatically be applied to the Index, dropping/creating the corresponding Index Partition(s).

In my next post, I will look at two other types of Index definitions on a Partitioned Table  (Global Partitioned Index and Bitmap Index).




Categories: DBA Blogs

No communication skills? Tech is not for you!

Tim Hall - Sun, 2018-08-12 03:06

Sometimes the tech world drives me to despair. A quick look around Stack Exchange and forums and you can see most people have terrible written communication skills. I have a long history of trying to encourage people to improve their communication skills because it really matters.

This is something I have had to work on myself, and still do. If you don’t put some effort into developing your communication skills you will always remain a second-class member of staff.

I’ve got to the point now where I’m becoming hard nosed about it. If you’ve not already recognised this in yourself and started to try and do something about it, why should an employer waste their time with you?

If you really don’t know where to start, you might want to look through this series of posts I wrote a while ago.

You might think it’s all about silent geniuses, but the tech industry is really about communication. If you can’t communicate efficiently with colleagues and the users in the business area you are working in, there is no point you being there.

Please, please, please make the effort. Once you do you will never look back!

Cheers

Tim…

No communication skills? Tech is not for you! was first posted on August 12, 2018 at 9:06 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

18c runInstaller -silent

Yann Neuhaus - Sun, 2018-08-12 02:34

You find two different ‘runInstaller’ scripts under an Oracle Home: the old one, the Oracle Universal Installer, in $ORACLE_HOME/oui/bin, and the new one, in $ORACLE_HOME directly. They have the same name but are completely different. The old one was used to install an Oracle Home from the installation media, but in 18c you don’t use it; it has been used by Oracle to build the Oracle Home image. You now download and unzip your Oracle Home directly, and only have to configure it and re-link the binaries. This is done by the new runInstaller, which is at the root of the Oracle Home. Actually, it is just a shell script that runs the Perl dbSetup.pl to set up the Oracle Database software. In my opinion, it would be better to call it dbSetup.sh rather than rename it to runInstaller, especially given that the equivalent for Grid Infrastructure has been called GridSetup.sh since 12cR2. The Perl script finally runs the Java GUI. It can also be run from the command line, aka silent mode, which is the goal of this post. The command line arguments are similar to, but not the same as, the old runInstaller’s.
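
In practice this means the Oracle Home is just unzipped into place before the new runInstaller is called. A minimal sketch (the zip file name and the target path are examples, adjust them to your download and naming convention):

# Create the future Oracle Home and unzip the 18c home image into it (paths/names are examples)
export ORACLE_HOME=/u00/app/oracle/product/18EE
mkdir -p $ORACLE_HOME
cd $ORACLE_HOME
unzip -q /tmp/LINUX.X64_180000_db_home.zip
# The configuration and re-link are then done with $ORACLE_HOME/runInstaller, as shown below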

Prerequisites

You may want to run the prerequisites only to check if your system is ready for the installation. Here is how to do so in command line:

$ $ORACLE_HOME/runInstaller -silent -executePrereqs -responseFile $ORACLE_HOME/inventory/response/db_install.rsp
 
Launching Oracle Database Setup Wizard...
 
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

From there we can check the log about the tests that have failed, such as in the following example:

INFO: [Aug 11, 2018 6:08:21 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:IGNORABLE
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] Run Level: This is a prerequisite condition to test whether the system is running with proper run level.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] OS Kernel Version: This is a prerequisite condition to test whether the system kernel version is at least "2.6.39-400.211.1".
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED

Software Install

You can pass all parameters on the command line (‘runInstaller -silent -help’ to see all possibilities), but in all cases you need a response file, so I put everything I need in the response file. There’s no mention of the ORACLE_HOME because you already unzipped it at the right place. The most important parameter is the edition, which seems to accept [EE, SEONE, SE2, HP, XP, PE]. I didn’t try it, but Standard Edition One is for versions <= 12.1.0.1 by the way.

cd $ORACLE_HOME
 
cat > db18EE.rsp <<END
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u00/app/oraInventory
ORACLE_BASE=/u00/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=dba
END

There is no need for the oracle.install.db.config variables because I’ll install the software only without creating a database.

Here is how to run the dbSetup. You can use ‘-ignorePrereqFailure’ to ignore the prerequisites if you want to install to a host where some prerequisites fail:

./runInstaller -silent -noconfig -ignorePrereqFailure -responseFile ./db18EE.rsp

 
The log of the installation goes into the oraInventory/logs and, as usual, you have to run the root.sh

As a root user, execute the following script(s):
1. /u00/app/oracle/product/18SE/root.sh
 
Execute /u00/app/oracle/product/18SE/root.sh on the following nodes:
[vmreforatun01]

This new runInstaller can also apply one-off patches with -applyOneOffs, mentioning the patch locations. You can also build a customized Oracle Home image with -createGoldImage -destinationLocation and even mention some files or paths to exclude to make it smaller: -exclFiles.
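
For illustration, here is a sketch of those two options (the patch and destination paths are hypothetical; check ‘runInstaller -silent -help’ for the exact syntax of your version):

# Install the software and apply a one-off patch in the same pass (patch location is hypothetical)
./runInstaller -silent -noconfig -ignorePrereqFailure -responseFile ./db18EE.rsp \
               -applyOneOffs /u00/app/patches/12345678

# Build a customized, smaller gold image of this Oracle Home (paths are hypothetical)
./runInstaller -silent -createGoldImage -destinationLocation /u00/app/images \
               -exclFiles $ORACLE_HOME/demo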

 

The article 18c runInstaller -silent appeared first on Blog dbi services.

Documentum – Silent Install – Docbases/Repositories

Yann Neuhaus - Sat, 2018-08-11 23:31

In previous blogs, we installed, in silent mode, the Documentum binaries as well as a docbroker (+licence(s) if needed). In this one, we will see how to install docbases/repositories and by that, I mean either a Global Registry (GR) repository or a normal repository.

As you all know, one repository needs to be a GR and I would always recommend setting up a GR that isn’t used by the end-users (no real documents). That’s why I will split this blog into two parts: the installation of a GR and then the installation of a normal repository that will be used by end-users. So, let’s get to it.

 

1. Documentum Global Registry repository installation

The properties file for a GR installation is as follows (it’s a big one):

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_GR.properties
### Silent installation response file for a Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.create.new
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.DOCBASE_ID=1010101
SERVER.DOCBASE_DESCRIPTION=Global Registry repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_GR_DOCBASE_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=gr_docbase
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=gr_docbase
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=gr_docbase

SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION=USE_THIS_REPOSITORY
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$

 

In the above commands, I didn’t put the SERVER.DOCUMENTUM_DATA and SERVER.DOCUMENTUM_SHARE into the file directly but I used sed commands to update the file later because I didn’t want to direct you to use a certain path for your installation like /app or /opt or /var or whatever… This choice is yours, so I just used sub-folders of $DOCUMENTUM and used this environment variable to set both parameters so you can choose which path you want for the Data and Share folder (the above is the default but you can set what you want).

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbase/repository, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. That’s the purpose of this properties file so it will be true
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. Same as the licence, it will be false
  • SERVER.DOCBASE_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.use.existing.aek.lockbox: Whether to use an existing aek or create a new one. Possible values are “common.create.new” or “common.use.existing”. In this case, it is the first docbase/repository created so we are creating a new one. In case of migration/upgrade, you might want to use an existing one (after upgrading it) …
  • common.aek.passphrase.password: The password to be used for the AEK
  • common.aek.key.name: The name of the AEK key to be used. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm to be used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you want to use a Lockbox to protect the AEK key. If set to true, a lockbox will be created and the AEK key will be stored in it
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox to be used. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password to be used for the Lockbox
  • SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS: Whether or not the “SERVER.DOCUMENTUM_DATA” and “SERVER.DOCUMENTUM_SHARE” are using a SAN or NAS path
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository to be created (dm_docbase_config.object_name)
  • SERVER.DOCBASE_ID: The ID of the docbase/repository to be created
  • SERVER.DOCBASE_DESCRIPTION: The description of the docbase/repository to be created (dm_docbase_config.title)
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to be use for the [DOCBROKER_PROJECTION_TARGET] on the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to be use for the [DOCBROKER_PROJECTION_TARGET] on the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.TEST_DOCBROKER: Whether or not you want to test the docbroker/connection broker connection during the installation. I would recommend always setting this to true to be sure the docbase/repository is installed properly… If a docbroker/connection broker isn’t available, the installation will not be complete (DAR installation for example) but you will not see any error unless you manually check the installation log…
  • SERVER.CONNECT_MODE: The connection mode of the docbase/repository to be used (dm_server_config.secure_connect_mode), it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured:
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository
      • SERVER.KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.TRUST_STORE_FILE_NAME: The name of the p7b file that contains the SSL Certificate needed to trust the targets (from a docbase point of view)
      • SERVER.CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets (from a DFC/client point of view)
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • SERVER.USE_EXISTING_DATABASE_ACCOUNT: Whether you want to use an existing DB account or create a new one. I don’t like it when an installer requests full access to a DB, so I usually prepare the DB user upfront with only the bare minimal set of permissions required and then use this account for the application (the Documentum docbase/repository in this case)
  • SERVER.INDEXSPACE_NAME: The name of the tablespace to be used to store the indexes (to be set if using existing DB User)
  • SERVER.DATABASE_CONNECTION: The name of the Database to connect to. This needs to be available on the tnsnames.ora if using Oracle, aso…
  • SERVER.DATABASE_ADMIN_NAME: The name of the Database admin account to be used. There is no reason to put anything else than the same as the schema owner’s account here… If you configured the correct permissions, you don’t need a DB admin account at all
  • SERVER.SECURE.DATABASE_ADMIN_PASSWORD: The password of the above-mentioned account
  • SERVER.DOCBASE_OWNER_NAME: The name of the schema owner’s account to be used for runtime
  • SERVER.SECURE.DOCBASE_OWNER_PASSWORD: The password of the schema owner’s account
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used. To be set only when using Oracle…
  • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION: If this docbase/repository should be a Global Registry, then set this to “USE_THIS_REPOSITORY”, otherwise do not set the parameter. If the GR is on a remote host, you need to set this to “SPECIFY_DIFFERENT_REPOSITORY” and then use a few additional parameters to specify the name of the GR repo and the host it is currently running on
  • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account to be created. This is usually something like “dm_bof_registry”
  • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password to be used for the BOF Registry account
  • SERVER.ENABLE_XHIVE: Whether or not you want to enable the XML Store feature. As I mentioned in the blog about the licences, this is one of the licences you might want to enable during the docbase/repository configuration. If you want to enable the XHIVE, you will need to specify a few additional parameters like the XDB user/password, host and port, aso…
  • SERVER.CONFIGURATOR.DISTRIBUTED_ENV: Whether or not you want to enable/configure the DMS. If you set this to true, you will need to add a few more parameters like the DMS Action to be performed, the webserver port, host, password, aso…
  • SERVER.ENABLE_RKM: Whether or not you want to enable/configure the RKM. If you set this to true, you will need to add a few more parameters like the host/port on which the keys will be stored, the certificates and password, aso…
  • START_METHOD_SERVER: Whether or not you want the JMS to be restarted once the docbase/repository has been created. Since we usually create at least 2 docbases/repositories, we can leave it stopped there
  • MORE_DOCBASE: Never change this value; as far as I know, it should always remain false
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always leave it as false… I know the name of this parameter is strange, but that’s the name coming from the templates… If you look around on the internet, you might find “SERVER.CONTINUE.MORE.COMPONENT” instead… Which one works and which one doesn’t is still a mystery to me. I use the first one, but since I always set it to false it has no impact for me, and I have never seen any errors in the log files.

 

Once the properties file is ready, you can install the Global Registry repository using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_GR.properties

 

Contrary to the previous installations, this one will take some time (around 20 minutes) because it needs to create the docbase/repository, install the DARs, aso… Unfortunately, there is no feedback on the progress, so you just have to wait, and if something goes wrong you won’t even notice since no errors are shown… Therefore, check the logs to be sure!
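One simple way to do that is to scan the installation logs for errors right after the script returns. The paths below are assumptions based on a typical setup (the configuration program usually logs under $DM_HOME/install/logs and each docbase writes to $DOCUMENTUM/dba/log/<docbase_name>.log), so adjust them to your environment:

[dmadmin@content_server_01 ~]$ grep -iE "error|fail|exception" $DM_HOME/install/logs/*.log
[dmadmin@content_server_01 ~]$ grep -iE "error|fail|exception" $DOCUMENTUM/dba/log/<docbase_name>.log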

 

2. Other repository installation

Once you have a Global Registry repository installed, you can install the repository that will be used by the end-users (which isn’t a GR then). The properties file for an additional repository is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_Other.properties
### Silent installation response file for a Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.use.existing
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.DOCBASE_ID=1010102
SERVER.DOCBASE_DESCRIPTION=Docbase1 repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_DOCBASE1_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=docbase1
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=docbase1
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=docbase1

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=true
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$
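As a small sanity check (not part of the official procedure), you can confirm that the two placeholders were replaced with your actual $DOCUMENTUM paths:

[dmadmin@content_server_01 ~]$ grep -E "^SERVER.DOCUMENTUM_(DATA|SHARE)=" /tmp/dctm_install/CS_Docbase_Other.properties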

 

I won’t list all these parameters again, only the ones that changed, leaving aside the docbase/repository name/id/description and the DB accounts/tablespaces since these are pretty obvious:

  • Updated parameter value:
    • common.use.existing.aek.lockbox: As mentioned above, since the AEK key has now been created (as part of the GR installation), this needs to be set to “common.use.existing” instead
  • Removed parameters (these will all be taken from the dfc.properties that was updated as part of the GR installation):
    • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD

 

Once the properties file is ready, you can install the additional repository in the same way:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_Other.properties

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Primary” Content Server using the silent installation provided by Documentum. In a later blog, I will talk about specificities related to a “Remote” Content Server for a High Availability environment.

 

The article Documentum – Silent Install – Docbases/Repositories appeared first on Blog dbi services.

Ubuntu / Linux: WLAN connections fails with "denied authentication (status 17)"

Dietrich Schroff - Sat, 2018-08-11 12:42
During the holidays I was not able to connect my laptop to the WLAN.
The /var/log/syslog showed the following:
Jul 15 13:45:20 pc wpa_supplicant[935]: wlp3s0: SME: Trying to authenticate with 14:9d:09:fe:a4:b3 (SSID='HUAWEI BTV' freq=2437 MHz)
Jul 15 13:45:20 pc kernel: [ 6972.826888] wlp3s0: authenticate with 14:9d:09:fe:a4:b3
Jul 15 13:45:20 pc NetworkManager[797]:   [1531655120.4746] device (wlp3s0): supplicant interface state: disconnected -> authenticating
Jul 15 13:45:20 pc kernel: [ 6972.829942] wlp3s0: send auth to 14:9d:09:fe:a4:b3 (try 1/3)
Jul 15 13:45:20 pc kernel: [ 6972.833433] wlp3s0: 14:9d:09:fe:a4:b3 denied authentication (status 17)
Jul 15 13:45:20 pc wpa_supplicant[935]: wlp3s0: CTRL-EVENT-SSID-TEMP-DISABLED id=0 ssid="HUAWEI BTV" auth_failures=1 duration=10 reason=CONN_FAILED
Jul 15 13:45:20 pc NetworkManager[797]:   [1531655120.5031] device (wlp3s0): supplicant interface state: authenticating -> disconnected
Jul 15 13:45:20 pc org.freedesktop.Notifications[1248]: ** (notify-osd:1448): WARNING **: dnd_is_idle_inhibited(): got error "The name org.gnome.SessionManager was not provided by any .service files"
Hmmm.
STATUS 17: What the hell does this error code tell me?

The explanation was quite easy: in 802.11 terms, status code 17 means the access point cannot handle any additional associated stations, i.e. the AP is full. The WLAN was provided by a tablet via tethering, and Android limits the number of devices in such a tethering WLAN to 5.
After removing one of the other devices everything worked fine...
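If you run into the same symptom, a quick way to confirm it on the client side (assuming the same log format as above) is to count how often the access point rejected the authentication:

grep -c "denied authentication (status 17)" /var/log/syslog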


The Oracle Active Session History (ASH) - a real treasure trove

Karl Reitschuster - Sat, 2018-08-11 11:02
When Oracle introduced the Active Session History (ASH), a new gate of tuning and monitoring options was opened. Yes, we had statspack, extended tracing and so on, but all these tools were missing a time(line) dimension. For example, statspack reports and later AWR reports are flattened aggregations of numbers that have to be interpreted carefully, as details are lost and you only ever look at an aggregated view of the performance metrics.

Security Alert CVE-2018-3110 Released

Oracle Security Team - Fri, 2018-08-10 15:02

Oracle just released Security Alert CVE-2018-3110.  This vulnerability affects the Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows.  It has received a CVSS Base Score of 9.9, and it is not remotely exploitable without authentication.  Vulnerability CVE-2018-3110 also affects Oracle Database version 12.1.0.2 on Windows as well as Oracle Database on Linux and Unix; however, patches for those versions and platforms were included in the July 2018 Critical Patch Update.

Due to the nature of this vulnerability, Oracle recommends that customers apply these patches as soon as possible.  This means that:

  • Customers running Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows should apply the patches provided by the Security Alert.
  • Customers running version 12.1.0.2 on Windows or any version of the database on Linux or Unix should apply the July 2018 Critical Patch Update if they have not already done so. 

For More Information:
• The Advisory for Security Alert CVE-2018-3110 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2018-3110-5032149.html
• The Advisory for the July 2018 Critical Patch Update is located at http://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT

Yann Neuhaus - Fri, 2018-08-10 09:47

When you have a Data Guard configuration, you want the application to connect to the right server, where the primary is, without taking too much time. The default TCP timeout is 1 minute, which is too long. When you don’t want to configure a virtual IP address (VIP) you can simply list all the addresses in the client connection string. But then you need to reduce the timeout. A short timeout of 1 to 5 seconds will be ok most of the time, but in case of network issues you want to give a chance to retry with a longer timeout. This post is about the connection string parameters that define this. Of course, all of it is documented, but the goal of this post is also to show how to quickly test it, because a reliable understanding of how it works relies on both documentation and tests.

Here is a simple client failover configuration where the connection tries 10.10.10.10 and, if it fails, tries 10.10.10.11

DEFAULT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The problem with that is when the 10.10.10.10 is down then the 10.10.10.11 will be tried only after 60 seconds, the default TCP timeout. You can completely avoid waiting for the timeout by using a virtual IP that will always be up, started on the failed-over server. But you can also reduce the TCP timeout to a few seconds.

Here is a tnsping with the above tnsnames.ora entry and when both servers are down:


$ time tnsping DEFAULT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 2m0.051s
user 0m0.005s
sys 0m0.011s

That’s 2 minutes because there is a 1 minute timeout for each address.

TRANSPORT_CONNECT_TIMEOUT

Now, just add TRANSPORT_CONNECT_TIMEOUT to the connection string description to reduce the timeout to 4 seconds:


DESCRIPTION=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The total time to get the answer from both addresses is 8 seconds, 4 seconds for each:

$ time tnsping DESCRIPTION
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m8.023s
user 0m0.010s
sys 0m0.006s

RETRY_COUNT

If you lower the timeout, you may give a chance to retry a few times with RETRY_COUNT. Here, RETRY_COUNT=2 will give 3 attempts (1 + 2 retries) to the address list.
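The tnsnames.ora entry used for this test is not shown in the capture, but based on the description resolved by tnsping below it presumably looks like this:

RETRY_COUNT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(RETRY_COUNT=2)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

With both servers still down, the result is: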

$ time tnsping RETRY_COUNT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:49:34
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (RETRY_COUNT=2) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m24.049s
user 0m0.011s
sys 0m0.010s

This has tried 10.10.10.10 and then 10.10.10.11 for 4 seconds each, and then retried the whole address list 2 more times, which in total takes 3 attempts x 2 addresses x 4 seconds = 24 seconds

DESCRIPTION_LIST

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT apply at the DESCRIPTION level. You may want to give several attempts with an increasing timeout. For example: try each address for one second to get a quick connection to the primary, wherever it is, when the network is in good health. Then give two attempts with a 5-second timeout for bad network times. And then one final attempt to each with the default timeout to be sure that the servers are down.

You can use a DESCRIPTION_LIST for this:

INCREASING=
(DESCRIPTION_LIST=
(LOAD_BALANCE=off)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=5)
(RETRY_COUNT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
)

Rather than just timing the total attempts, I’ll strace each connection:

$ strace -tT tnsping INCREASING 2>&1 | grep -C1 --color=auto -E 'poll.*|inet_addr[()".0-9]*'
 
16:15:49 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000008>
16:15:49 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000087>
16:15:49 times(NULL) = 434920117 <0.000011>
16:15:49 mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efce31bc000 <0.000013>
16:15:49 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.001435>
16:15:50 close(4) = 0 <0.000256>
--
16:15:50 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000060>
16:15:50 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000495>
16:15:50 times(NULL) = 434920218 <0.000062>
16:15:50 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.000768>
16:15:51 close(4) = 0 <0.000050>
--
16:15:51 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000015>
16:15:51 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000060>
16:15:51 times(NULL) = 434920318 <0.000010>
16:15:51 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.005563>
16:15:56 close(4) = 0 <0.000027>
--
16:15:56 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000012>
16:15:56 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000081>
16:15:56 times(NULL) = 434920819 <0.000015>
16:15:56 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.006265>
16:16:01 close(4) = 0 <0.000192>
--
16:16:01 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000079>
16:16:01 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000486>
16:16:01 times(NULL) = 434921320 <0.000087>
16:16:01 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.004660>
16:16:06 close(4) = 0 <0.000611>
--
16:16:06 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000114>
16:16:06 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000536>
16:16:06 times(NULL) = 434921822 <0.000097>
16:16:06 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.008128>
16:16:11 close(4) = 0 <0.000135>
--
16:16:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000137>
16:16:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000584>
16:16:11 times(NULL) = 434922323 <0.000079>
16:16:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.053782>
16:17:11 close(4) = 0 <0.000166>
--
16:17:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000195>
16:17:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000549>
16:17:11 times(NULL) = 434928329 <0.000488>
16:17:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.007246>
16:18:11 close(4) = 0 <0.000043>

With ‘-T’, strace shows the duration of each poll() system call between brackets after the return code. You can see here the 1-second timeout attempts to each address, then two rounds of attempts with a 5-second timeout, and finally the attempts with the 60-second default.

Note that I have added (LOAD_BALANCE=off) here because the default is ON in a DESCRIPTION_LIST, but here I want the descriptions to be tried in the order I specified them.

 

The article TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT appeared first on Blog dbi services.

Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information

A supplier of equipment, vessels, and services for offshore dredging and wet-mining markets, based in Europe with over 3,000 employees and 39 global locations, was struggling to get the most out of their enterprise business applications.

Business Problem

In 2012, the company started a transformation initiative, and as part of the project, they replaced most of their enterprise business applications.  The company had over 10 different business applications and wanted to provide employees with access to information through a single web experience or portal view. For example, the field engineers may need information for a ship’s parts from the PLM system (TeamCenter), as well as customer-specific cost information for parts from the company’s ERP system (IFS Applications). It was critical to the business that employees could quickly navigate, search, and view information regardless of where it is stored in the content management system. The company’s business is built from ships dredging, laying cable, etc., so the sooner field engineers are able to find information on servicing a broken part, the sooner the company is able to drive revenue.

Integrating Oracle WebCenter

The company chose Oracle WebCenter Portal because it had the best capabilities to integrate their various business systems, as well as its ability to scale. WebCenter enabled them to build a data integration portal that provided a single pane of glass to all enterprise information. Unfortunately, this single pane of glass did not perform as well as expected. The integrations, menu navigation, and the ability to render part drawings in the portal were all developed using Oracle Application Development Framework (Oracle ADF).  Oracle ADF is great for serving up content to WebCenter Portal using taskflows, but it requires a specialized development skill set. The company had limited Oracle ADF development resources, so each time a change or update was requested for the portal it took them weeks and sometimes months to implement the enhancement. Additionally, every change to the portal required a restart and these took in excess of forty minutes.

Platform Goals

The company wanted to shorten the time-to-market for portal changes, as well as reduce its dependency on and the overall development and design limitations with Oracle ADF. They wanted to modernize their portal and leverage a more designer-friendly, front-end development framework. They contacted Fishbowl Solutions after searching for Oracle WebCenter Portal partners and finding out about their single page application approach (SPA) to front-end portal development.

Fishbowl Solutions’ SPA for Oracle WebCenter Portal is a framework that overhauls the Oracle ADF UI with Oracle JET (JavaScript Extension Toolkit) or another front-end design technology such as Angular or React. The SPA framework includes components (taskflows) that act as progressive web applications and can be dropped onto pages from the portal resource catalog, meaning that no Oracle ADF development is necessary. Fishbowl’s SPA also enables portal components to be rendered on the client side with a single page load. This decreases the amount of processing being done on the portal application server, as well as how many times the page has to reload. This all leads to an improved experience for the user, as well as the ability for design and development teams to view changes or updates to the portal almost instantaneously.

Outcome

Fishbowl Solutions helped the company implement its SPA framework in under two weeks. Since the implementation, they have observed more return visits to the portal, as well as fewer support issues. They are also no longer constrained by the 40-minute portal restart after changes to the portal, and overall portal downtime has been significantly reduced. Lastly, Fishbowl’s SPA framework provided them with a go-forward design and development approach for portal projects, which will enable them to continue to evolve their portal to best serve their employees and customers alike.

The post Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Dear Oracle Customers & Partners, Thank YOU!

Chris Warticki - Fri, 2018-08-10 08:00

Dear Oracle Customers & Partners,

After a deeply rewarding 20-year career here at Oracle, this serves as my final blog post.  I wanted to take a moment and thank all of you for reading, subscribing and sharing the information provided here about Oracle Support.

My 20-year career began serving customers in our Network Support group, then quickly transitioned to managing the Unix Install and RDBMS Support teams under Server Technologies. After 7 years in the Support delivery organization, I began working for the Customer Support Education team and the Proactive Support team, and finished within the Global Customer Management organization.

I've been privileged to have traveled the world evangelizing the evolution, best practices and value of Oracle Support throughout the years.  I've been blessed to meet many of you at conferences, events, user groups and 1on1 (both online and in-person).  I've thoroughly enjoyed every minute of it.

We've been through a lot together.  From the early days of telephone, call-center support to web enabled, online support. From TAR to SR.  From MetaLink to MOS.  We survived FlashMOS. We survived Y2K.  Through it all, you know that my mantra has always been FOR THE CUSTOMER.

As for the future of this blog - stay tuned.  As for my personal next chapter, you can follow me on Twitter and Linked-In.

Thank You all. Thank you for your time.  Thank you for your feedback. Thank you for your business.

Sincerely,

Chris Warticki



 

OPS$Oracle user after Wallet Creation in Oracle 12c

Pakistan's First Oracle Blog - Fri, 2018-08-10 00:37

----- In Oracle 12.1.0.2, I created the wallet using the commands below:

TEST$ orapki wallet create -wallet "/u01/app/oracle/admin/TEST/wallet" -pwd ****  -auto_login_local
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.


TEST$ mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys ********
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1

----- But when I logged into the database with the sys user, show user displayed OPS$ORACLE instead of SYS:

TEST$ sqlplus /@TEST2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 9 13:09:38 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Aug 09 2018 03:18:20 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> sho user
USER is "OPS$ORACLE"
SQL>

----- So I made the following changes and it worked fine:

Put the following entry in the sqlnet.ora file:

SQLNET.WALLET_OVERRIDE = TRUE
The SQLNET.WALLET_OVERRIDE entry allows this method to override any existing OS authentication configuration.

and used mkstore to add the credential to the wallet:

TEST$  mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys
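Note that for the wallet to be picked up at connect time, the sqlnet.ora also has to point to it and the TEST2 alias must resolve through the tnsnames.ora. The WALLET_LOCATION entry below is not shown in the original output but is the usual companion of SQLNET.WALLET_OVERRIDE (the directory matches the wallet created above):

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/TEST/wallet)
    )
  )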

Categories: DBA Blogs

ATP vs ADW – the Autonomous Database lockdown profiles

Yann Neuhaus - Thu, 2018-08-09 14:49

The Oracle database has always distinguished two types of workloads: transactional (OLTP) and datawarehouse (VLDB, DWH, DSS, BI, analytics). There is the same idea in the managed Oracle Cloud with two autonomous database services.

To show how old this distinction is, here is how the two workloads were defined in the Oracle7 Tuning book:

[Screenshot: OLTP vs. DSS definitions from the Oracle7 Tuning book]

The definition has not changed a lot. But the technology behind DSS/DWH has improved. Now, with In-Memory Column Store, Smart Scan, Result Cache we can even see that indexes, materialized views, star transformation, hints,.. are disabled in the Autonomous Datawarehouse cloud service.

The differences between the two autonomous cloud services, ATP (Autonomous Transaction Processing) for OLTP and ADW (Autonomous Datawarehouse) for analytics, have been described by Maria Colgan after Larry Ellison’s announcement:
https://sqlmaria.com/2018/08/07/how-does-autonomous-transaction-processing-differ-from-the-autonomous-data-warehouse/

PDBaaS

Those autonomous services are PDB as a Service. They are using the consolidation and isolation features of 12cR2 multitenant. And we can even see that the ATP and ADW services can run within the same CDB and same instance. They are different PDBs and they differ only by their resource management plans and PDB lockdown profiles. So let’s see the differences from this point of view: ATP lockdown profile is called ‘OLTP’ and ADW lockdown profile is called ‘DWCS’.

Options

The only difference is about partitioning, which is enabled for ATP and disabled for ADW:

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION';
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
OLTP OPTION PARTITIONING ENABLE ALL 284
 
SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION' order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------- ----- ------
DWCS OPTION PARTITIONING DISABLE ALL 73

Features

All disabled features are the same:
Disabled for all users: COMMON_SCHEMA_ACCESS, CONNECTIONS, CTX_LOGGING, NETWORK_ACCESS, OS_ACCESS, SYSTEM_DATA
Disabled for local users only: USER_PASSWORD_VERIFIERS
Enabled for all users: COMMON_USER_CONNECT, TRACE_VIEW_ACCESS, UTL_FILE

Those last ones are good news. We can query V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS and V$DIAG_TRACE_FILE_CONTENTS to show some diagnostics. Unfortunately, I’ve seen no allowed way to enable SQL Trace.
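For example, here is a quick look at the exposed trace file contents (just a sketch; TRACE_FILENAME and PAYLOAD are the columns of interest, verify the view definition on your service):

SQL> select trace_filename, payload from v$diag_trace_file_contents where rownum <= 10;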

DDL Statements

In the ADW service, the local user cannot create indexes and materialized views; they are supposed to be created autonomously when required (probably by a common user):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where regexp_like(rule,'(INDEX|MATERIALIZED)') order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
DWCS STATEMENT ALTER INDEX DISABLE LOCAL 73
DWCS STATEMENT ALTER INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEX DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT DROP INDEX DISABLE LOCAL 73
DWCS STATEMENT DROP INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW LOG DISABLE LOCAL 73

The ATP service enables all those for all users.

System parameters

Both services disable all ALTER SYSTEM statements except KILL SESSION. They allow all ALTER SYSTEM SET for the common users but allow only a few parameters to be changed by local users.

Basically, the ADW service sets the following:


_default_pct_free=1
_ldr_io_size=33554432
_ldr_io_size2=33554432
_max_io_size=33554432
_optimizer_allow_all_access_paths=FALSE
_optimizer_answering_query_using_stats=TRUE
optimizer_ignore_hints=TRUE
optimizer_ignore_parallel_hints=TRUE
parallel_degree_policy=AUTO
parallel_min_degree=CPU
result_cache_max_result=1
result_cache_mode=FORCE

The ATP service keeps the defaults.

Basically, all the features for OLTP have been there for years in the Oracle Database. Under the ‘autonomous’ umbrella we can see some well-known features:

  • Automatic provisioning: pluggable databases
  • Automatic scaling: PDB resource manager
  • Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor, Storage Indexes
  • Automatic security: Rolling patches, Encryption, Database Vault
  • Automatic Fault Tolerant Failover: RAC, Data Guard
  • Automatic Backup and Recovery: RMAN, Flashback

All these proven features are enhanced to work together in a consolidated multitenant environment. Provisioning is as fast as a CREATE PDB. The ATP and ADW services enable and configure the right set of features for each workload.

 

The article ATP vs ADW – the Autonomous Database lockdown profiles appeared first on Blog dbi services.

Oracle Offline Persistence Toolkit - Controlling Online Replay

Andrejus Baranovski - Thu, 2018-08-09 13:25
A few months ago I had a post about the Oracle Offline Persistence Toolkit, which integrates well with Oracle JET (the JavaScript toolkit from Oracle) - Oracle JET Offline Persistence Toolkit - Offline Update Handling. I'm back to this topic with the sample application upgraded to JET 5.1 and the offline toolkit upgraded to 1.1.5. In this post I will describe how to control online replay by filtering out some of the requests, to be excluded from replay.

Source code is available on GitHub. Below I describe changes and functionality in the latest commit.

To test online replay, go offline and execute some actions in the sample app - change a few records and try to search by first name, also try to use the page navigation buttons. You will be able to save changes in offline mode, but if this is your first time loading the app and data from other pages wasn't fetched yet, then page navigation will not bring any new results in offline mode (make sure to load more records while online and then go offline):


In the online replay manager, I'm filtering out GET requests intentionally. Once back online, I replay only PATCH requests. This is done mainly as a test, to learn how to control the replay process. PATCH requests are executed during replay:


Printing out in the log, each GET request which was removed from replay loop:


Replay implementation (I would recommend to read Offline Persistence Toolkit usage doc for more info):


This code is executed after the transition to online status. Calling the getSyncLog method on the Sync Manager returns a list of requests pending replay: the promise resolves with an array of requests waiting for online replay. I have marked the function as async; this allows me to implement a sequential loop where each GET request is removed one by one, in order. This is needed because removeRequest from the Sync Manager runs in a promise, and a regular loop would complete too late, after the replay phase has already started. Read more about sequential loop implementation in JS when promises are used in JavaScript - Method to Call Backend Logic in Sequential Loop. Once all GET requests are removed, we execute the sync method, which forces all remaining requests in the queue to be replayed.

Oracle Ksplice for Oracle Linux in Oracle Cloud

Wim Coekaerts - Thu, 2018-08-09 11:38

My favorite topic.. Ksplice! Just a friendly reminder that every Oracle Linux instance in Oracle Cloud comes with Oracle Ksplice installed/enabled by default at no additional cost beyond basic compute.

When you run an OL instance, the uptrack tools are on the base image. (uptrack-upgrade, uptrack-uname, etc..). The config file (/etc/uptrack/uptrack.conf) contains an access-key that enables any cloud instance to talk to our Ksplice service without registration. So as soon as you log into your system you can run # uptrack-upgrade or # uptrack-show .

uptrack doesn't run automatically by default. You are expected to manually type # uptrack-upgrade. What this does is the following: it goes to our service, looks at which Ksplice patches are available for your running kernel and asks if you want to install them. If you add -y then it will just go ahead and install whatever is available without prompting you.

uptrack-show lists the patches that are already applied on your running kernel/system.

uptrack-uname shows the 'effective' kernel version. What this means is which kernel version you are effectively updated to with relevant CVEs and critical issues.

Here's a concrete example of my OCI instance:

 

# uname -a
Linux devel 4.1.12-124.14.5.el7uek.x86_64 #2 SMP Fri May 4 15:26:53 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

My instance runs UEK R4 (4.1.12-124.14.5) that's the actual RPM that's installed and the actual kernel that I booted the instance with.

 

# uptrack-uname -a
Linux devel 4.1.12-124.15.1.el7uek.x86_64 #2 SMP Tue May 8 16:27:00 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

I already ran uptrack-upgrade before so a number of patches are already applied and installed up to the same level as 4.1.12-124.15.1. So instead of installing the 4.1.12-124.15.1 kernel-uek RPM and rebooting, when I ran uptrack-upgrade a while back, it got me right to that level without affecting my availability one bit.

I did not enable auto-install, so since I ran that command a while back I have not done it again. A good number of (some serious) CVEs have been fixed and released since, so it's time to update... but I so hate reboots! Luckily... no need.

What's already installed? Let's see...

 

# uptrack-show
Installed updates:
[1zkgpvff] KAISER/KPTI enablement for Ksplice.
[1ozdguag] Improve the interface to freeze tasks.
[nw9iml90] CVE-2017-15129: Use-after-free in network namespace when getting namespace ids.
[i9x5u5uf] CVE-2018-5332: Out-of-bounds write when sending messages through Reliable Datagram Sockets.
[dwwke2ym] CVE-2017-7294: Denial-of-service when creating surface using DRM driver for VMware Virtual GPU.
[cxke2gao] CVE-2017-15299: Denial-of-service in uninstantiated key configuration.
[nwtwa8b3] CVE-2017-16994: Information leak when using mincore system call.
[hfehp9m0] CVE-2017-17449: Missing permission check in netlink monitoring.
[7x9spq2j] CVE-2017-17448: Unprivileged access to netlink namespace creation.
[lvyij5z2] NULL pointer dereference when rebuilding caches in Reliable Datagram Sockets protocol.
[s31vmh6q] CVE-2017-17741: Denial-of-service in kvm_mmio tracepoint.
[3x6jix1s] Denial-of-service of KVM L1 nested hypervisor when exiting L2 guest.
[d22dawa6] Improved CPU feature detection on microcode updates.
[fszq2l5k] CVE-2018-3639: Speculative Store Bypass information leak.
[58rtgwo2] Device Mapper encrypted target Support big-endian plain64 IV.
[oita8o1p] CVE-2017-16939: Denial-of-service in IPSEC transform policy netlink dump.
[qenhqrfo] CVE-2017-1000410: Information leak in Bluetooth L2CAP messages.
[965vypan] CVE-2018-10323: NULL pointer dereference when converting extents-format to B+tree in XFS filesystem.
[drgt70ax] CVE-2018-8781: Integer overflow when mapping memory in USB Display Link video driver.
[fa0wqzlw] CVE-2018-10675: Use-after-free in get_mempolicy due to incorrect reference counting.
[bghp5z31] Denial-of-service in NFS dentry invalidation.
[7n6p7i4h] CVE-2017-18203: Denial-of-service during device mapper destruction.
[okbvjnaf] CVE-2018-6927: Integer overflow when re queuing a futex.
[pzuay984] CVE-2018-5750: Information leak when registering ACPI Smart Battery System driver.
[j5pxwei9] CVE-2018-5333: NULL pointer dereference when freeing resources in Reliable Datagram Sockets driver.
Effective kernel version is 4.1.12-124.15.1.el7uek

so the above patches were installed last time. Quite a few! All applied, without affecting availability.

Ok, what else is available... a whole bunch, best apply them!

 

# uptrack-upgrade
The following steps will be taken:
Install [f9c8g2hm] CVE-2018-3665: Information leak in floating point registers.
Install [eeqhvdh8] Repeated IBRS/IBPB noise in kernel log on Xen Dom0 or old microcode.
Install [s3g55ums] DMA memory exhaustion in Xen software IO TLB.
Install [nne9ju4x] CVE-2018-10087: Denial-of-service when using wait() syscall with a too big pid.
Install [3xsxgabo] CVE-2017-18017: Use-after-free when processing TCP packets in netfliter TCPMSS target.
Install [rt4hra3j] CVE-2018-5803: Denial-of-service when receiving forged packet over SCTP socket.
Install [2ycvrhs6] Improved fix to CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Install [rjklau8v] Incorrect sequence numbers in RDS/TCP.
Install [qc163oh5] CVE-2018-10124: Denial-of-service when using kill() syscall with a too big pid.
Install [5g4kpl3f] Denial-of-service when removing USB3 device.
Install [lhr4t7eg] CVE-2017-7616: Information leak when setting memory policy.
Install [mpc40pom] CVE-2017-11600: Denial-of-service in IP transformation configuration.
Install [s77tq4wi] CVE-2018-1130: Denial-of-service in DCCP message send.
Install [fli7048b] Incorrect failover group parsing in RDS/IP.
Install [lu9ofhmo] Kernel crash in OCFS2 Distributed Lock Manager lock resource initialization.
Install [dbhfmo13] Fail-over delay in Reliable Datagram Sockets.
Install [7ag5j1qq] Device mapper path setup failure on queue limit change.
Install [8l28npgh] Performance loss with incorrect IBRS usage when retpoline enabled.
Install [sbq777bi] Improved fix to Performance loss with incorrect IBRS usage when retpoline enabled.
Install [ls429any] Denial-of-service in RDS user copying error.
Install [u79kngd9] Denial of service in RDS TCP socket shutdown.
Go ahead [y/N]? y
Installing [f9c8g2hm] CVE-2018-3665: Information leak in floating point registers.
Installing [eeqhvdh8] Repeated IBRS/IBPB noise in kernel log on Xen Dom0 or old microcode.
Installing [s3g55ums] DMA memory exhaustion in Xen software IO TLB.
Installing [nne9ju4x] CVE-2018-10087: Denial-of-service when using wait() syscall with a too big pid.
Installing [3xsxgabo] CVE-2017-18017: Use-after-free when processing TCP packets in netfliter TCPMSS target.
Installing [rt4hra3j] CVE-2018-5803: Denial-of-service when receiving forged packet over SCTP socket.
Installing [2ycvrhs6] Improved fix to CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Installing [rjklau8v] Incorrect sequence numbers in RDS/TCP.
Installing [qc163oh5] CVE-2018-10124: Denial-of-service when using kill() syscall with a too big pid.
Installing [5g4kpl3f] Denial-of-service when removing USB3 device.
Installing [lhr4t7eg] CVE-2017-7616: Information leak when setting memory policy.
Installing [mpc40pom] CVE-2017-11600: Denial-of-service in IP transformation configuration.
Installing [s77tq4wi] CVE-2018-1130: Denial-of-service in DCCP message send.
Installing [fli7048b] Incorrect failover group parsing in RDS/IP.
Installing [lu9ofhmo] Kernel crash in OCFS2 Distributed Lock Manager lock resource initialization.
Installing [dbhfmo13] Fail-over delay in Reliable Datagram Sockets.
Installing [7ag5j1qq] Device mapper path setup failure on queue limit change.
Installing [8l28npgh] Performance loss with incorrect IBRS usage when retpoline enabled.
Installing [sbq777bi] Improved fix to Performance loss with incorrect IBRS usage when retpoline enabled.
Installing [ls429any] Denial-of-service in RDS user copying error.
Installing [u79kngd9] Denial of service in RDS TCP socket shutdown.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.17.2.el7uek
Done!

I now have a total of 46 Ksplice updates applied on this running kernel.

 

# uptrack-uname -a
Linux devel 4.1.12-124.17.2.el7uek.x86_64 #2 SMP Tue Jul 17 20:28:07 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

current to the 'latest' UEKR4 version in terms of CVEs

Note that we don't provide driver 'updates' and the like in these patches, only critical fixes and security fixes. So the kernel is not -identical- to 4.1.12-124.17.2 in every sense. But as far as the bad things that could happen are concerned, your current system certainly is at that level!

Since I don't want to forget running the update, I am going to just enable Ksplice to run through a cron job. Just edit /etc/uptrack/uptrack.conf and change autoinstall = no to autoinstall = yes.

A few other things:

When Ksplice patches are installed and you do end up doing a reboot, the installed patches will be automatically applied again right at boot time if you reboot into the same original kernel. Note - it will not automatically go look for new patches.

If you want it to also go and check for new updates at boot, you can uncomment the #upgrade_on_reboot = yes line; this will make that happen.
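If you prefer not to edit the file by hand, the two settings mentioned above can be flipped with sed. This is just a sketch; back up the file first and check the exact lines in your /etc/uptrack/uptrack.conf, which may differ between image versions:

# cp /etc/uptrack/uptrack.conf /etc/uptrack/uptrack.conf.bak
# sed -i 's/^autoinstall = no/autoinstall = yes/' /etc/uptrack/uptrack.conf
# sed -i 's/^#[[:space:]]*upgrade_on_reboot = yes/upgrade_on_reboot = yes/' /etc/uptrack/uptrack.conf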

I removed all installed Ksplice updates (online, using # uptrack-remove --all) and now will time reapplying all 46:

 

# time uptrack-upgrade -y
...
real 0m11.705s
user 0m4.273s
sys 0m4.807s

So 11.7 seconds to apply all 46. Each patch gets applied one after the other; there is no system halt for that long at all. For each individual patch the system only halts for a few microseconds (not noticeable), and the short pause before continuing to the next patch is just the uptrack tool, not your server instance.

So enable autoinstall, enable upgrade_on_reboot=yes, and you have an Oracle Linux system that you can just leave running while automatically staying current with CVEs/critical fixes, without having to worry... Autonomous Oracle Linux patching. Pretty cool!

Some vendors are trying to offer 'live patching', but those offerings don't come even close. It validates the importance of this technology and feature set, but it's not anywhere near a viable alternative.

Have fun!

 

SQL JSON ORA-40459 Exception

Tom Kyte - Thu, 2018-08-09 10:46
I am trying to use JSON functions like JSON_OBJECT and JSON_ARRAYAGG to generate a JSON string through SQL. We have a lot of columns in our tables that have a lot more data than 4000 bytes, which I am trying to parse through to generate the JSON string. Ev...
Categories: DBA Blogs

Merge Two Rows Into One Row With 2 Columns

Tom Kyte - Thu, 2018-08-09 10:46
Is there a way to simply merge, via a single pivot or analytical query, the results from two rows into one? We have a legacy table that is essentially storing key value pairs in columnar format. As a test case, here is an example: create tabl...
Categories: DBA Blogs
