Yann Neuhaus

dbi services technical blog

MongoDB OPS Manager

Wed, 2018-09-19 08:03

MongoDB OPS Manager (MMS) is a tool for administering and managing MongoDB deployments, particularly large clusters. MongoDB Inc. qualified it as “the best way to manage your MongoDB data center”. OPS Manager also allows you to deploy a complete MongoDB cluster on multiple nodes and in several topologies. As you know, at dbi services, the MongoDB installation is based on our best practices, especially the MFA (MongoDB Flexible Architecture), more information here.

Is OPS Manager compatible with our installation best practices and our MongoDB DMK? For this reason, I would like to post a guide for installing and configuring OPS Manager (MMS) based on the dbi services best practices.

In this installation guide, we’ll use the latest version of OPS Manager, release 4.0.2. We’ll install OPS Manager on a single instance, which is recommended only for tests and proofs of concept.

Testing Environment

We’ll use a Docker container provisioned in the Swiss public cloud Hidora. Below is the configuration of the container:

  • CentOS 7
  • Add a Public IP
  • Endpoints configuration for: MongoDB port 27017, FTP port 21, SSH port 22, OPS Manager interface port 8080

Hidora_Endpoints_MongoDB

MongoDB Installation

Once your container has been provisioned, you can start the installation of MongoDB. It’s important to know that OPS Manager needs a MongoDB database in order to store the application information. That’s why we need to install and start a MongoDB database first.

For more details about the MongoDB Installation, you can refer to a previous blog.

[root@node32605-env-4486959]# mkdir -p /u00/app/mongodb/{local,admin,product}

[root@node32605-env-4486959]# mkdir -p /u01/mongodbdata/
[root@node32605-env-4486959]# mkdir -p /u01/mongodbdata/{appdb,bckpdb}
[root@node32605-env-4486959]# mkdir -p /u02/mongodblog/
[root@node32605-env-4486959]# mkdir -p /u02/mongodblog/{applog,bckplog}
[root@node32605-env-4486959]# mkdir -p /u99/mongodbbackup/ 

[root@node32605-env-4486959]# chown -R mongodb:mongodb /u00/app/mongodb/ /u01/mongodbdata/ /u99/mongodbbackup/

Let’s now download the latest MongoDB and OPS Manager releases from the MongoDB Download Center.

[root@node32605-env-4486959 opt]# wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.2.tgz
[root@node32605-env-4486959 opt]# wget https://downloads.mongodb.com/on-prem-mms/tar/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz

Based on the MFA, move the software inside the /product folder.

[root@node32605-env-4486959 opt] mv mongodb-linux-x86_64-rhel70-4.0.2.tgz mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz /u00/app/mongodb/product/

Permissions and Extraction:

[root@node32605-env-4486959 product]# chown -R mongodb:mongodb /u00/app/mongodb/product/*
[root@node32605-env-4486959 product]# su - mongodb
[mongodb@node32605-env-4486959 product]$ tar -xzf mongodb-linux-x86_64-rhel70-4.0.2.tgz
[mongodb@node32605-env-4486959 product]$ tar -xzf mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz

Run mongo databases for OPS Manager and Backup:

[mongodb@node32605-env-4486959 bin]$ ./mongod --port 27017 --dbpath /u01/mongodbdata/appdb/ --logpath /u02/mongodblog/applog/mongodb.log --wiredTigerCacheSizeGB 1 --fork
[mongodb@node32605-env-4486959 bin]$ ./mongod --port 27018 --dbpath /u01/mongodbdata/bckpdb/ --logpath /u02/mongodblog/bckplog/mongodb.log --wiredTigerCacheSizeGB 1 --fork
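
Before moving on, you can quickly check that both instances respond, for example with the mongo shell shipped in the same bin directory (a simple sanity check; the ping command just returns ok: 1):

[mongodb@node32605-env-4486959 bin]$ ./mongo --port 27017 --quiet --eval 'db.runCommand({ping: 1})'
[mongodb@node32605-env-4486959 bin]$ ./mongo --port 27018 --quiet --eval 'db.runCommand({ping: 1})'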

Once the 2 databases have been successfully started, we can configure and start the OPS Manager application.

First, we need to configure the URL used to access OPS Manager.

[mongodb@node32605-env-4486959 ~]$ cd /u00/app/mongodb/product/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64/conf

Edit the conf-mms.properties file and add the following lines:

mongo.mongoUri=mongodb://127.0.0.1:27017/?maxPoolSize=150
mongo.ssl=false
mms.centralUrl=http://xxx.xxx.xx.xx:8080

Replace xxx.xxx.xx.xx with your public IP or DNS name.

[mongodb@node32605-env-4486959 ~]$ cd /u00/app/mongodb/product/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64/bin
[mongodb@node32605-env-4486959 bin]$ ./mongodb-mms start
OPS Manager configuration

Access the OPS Manager application through the following URL:

http://public_ip:8080

MongoDB_UI

 

You need to register on first access.

MongoDB_Register

 

Once your account has been created, configure the OPS Manager access URL.

MongoDB_URL

Then configure your email settings.

MongoDB_EmailSettings

Click on Continue and configure the User Authentication, Backup Snapshots and Proxy settings.

Finish with the OPS Manager version management configuration.

MongoDB_Version_Management

 

Congratulations, you have finished the installation. You can now start using OPS Manager and deploy a MongoDB cluster.

MongoDB_OPSManager

 

 

 


Java9 new features

Tue, 2018-09-18 01:52

java9

Java 9 is on its way now. In this blog I’ll talk about the new features I found interesting, performance, and so on.

Configure Eclipse for Java9

Prior to Eclipse Oxygen 4.7.1a, you’ll have to configure Eclipse a little bit to make it run your Java 9 projects.

In eclipse.ini, add the following after --launcher.appendVmargs:

-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe

 

Still in eclipse.ini add:

--add-modules=ALL-SYSTEM

 

You should have something like this:

--launcher.appendVmargs
-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms40m
-Xmx512m
--add-modules=ALL-SYSTEM
New Features

Modules

Like a lot of other languages, and in order to obfuscate the code a little more, Java is going to use modules. It simply means that you’ll be able to make your code require a specific library. This is quite helpful for small-memory devices that do not need the whole JVM to be loaded. You can find a list of available modules here.

When creating a module, you’ll generate a file called module-info.java which will be like:

module test.java9 {
	requires com.dbiservices.example.engines;
	exports com.dbiservices.example.car;
}

Here my module requires the “engines” module and exports the “car” package. This allows loading only the classes related to our business and not some side libraries; it helps managing memory more efficiently but also requires some understanding of the module system. In addition, it creates a real dependency system between jars and prevents using public classes that were not supposed to be exposed through the API. It also prevents some strange behavior when you have duplicate entries, like several versions of a jar in the classpath. All non-exported packages are encapsulated by default.
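
For completeness, the required module would itself have to export the packages it shares. A minimal sketch of its descriptor, using the hypothetical names from the example above, could be:

module com.dbiservices.example.engines {
	exports com.dbiservices.example.engines;
}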

 JShell

Java 9 now provides JShell: like in other languages, you can now execute Java code through an interactive shell prompt. Simply start jshell from the bin folder of the JDK:

jshell

This kind of tool can greatly improve productivity for small tests: you don’t have to create small testing classes anymore. It is very useful for testing regular expressions, for example.
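
A quick session could look like this:

jshell> int x = 40 + 2
x ==> 42

jshell> "dbi services".toUpperCase()
$2 ==> "DBI SERVICES"

jshell> /exit
|  Goodbye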

New HTTP API

The old HTTP API is finally being upgraded. It now supports WebSockets and the HTTP/2 protocol out of the box. For the moment the API is placed in an incubator module, which means it can still change a little, but you can start playing with it like the following:

import java.io.IOException;
import java.net.URI;
import jdk.incubator.http.*;

public class Run {

  public static void main(String[] args) throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create("http://www.google.com"))
                          .header("User-Agent", "Java")
                          .GET()
                          .build();
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandler.asString());
    System.out.println(resp.statusCode()); // the body is available via resp.body()
  }
}

You’ll have to setup module-info.java accordingly:

module test.java9 {
	requires jdk.incubator.httpclient;
}
 Private interface methods

Since Java 8, an interface can contain behavior instead of only method signatures. If you have several default methods doing almost the same thing, you would usually refactor them into a private helper method, but default methods in Java 8 can’t be private. In Java 9 you can add private helper methods to interfaces, which solves this issue:

public interface CarContract {

	void normalMethod();
	default void defaultMethod() {doSomething();}
	default void secondDefaultMethod() {doSomething();}
	
	private void doSomething(){System.out.println("Something");}
}

The private method “doSomething()” is hidden from the consumers of the interface.

 Unified JVM Logging

Java 9 adds a handy feature to debug the JVM thanks to unified logging. You can now enable logging for different tags like gc, compiler, threads and so on, and you configure it with the command line parameter -Xlog. Here’s an example of the configuration for the gc tag, using debug level without decoration:

-Xlog:gc=debug:file=log/gc.log:none

And the result:

ConcGCThreads: 2
ParallelGCThreads: 8
Initialize mark stack with 4096 chunks, maximum 16384
Using G1
GC(0) Pause Young (G1 Evacuation Pause) 24M->4M(254M) 5.969ms
GC(1) Pause Young (G1 Evacuation Pause) 59M->20M(254M) 21.708ms
GC(2) Pause Young (G1 Evacuation Pause) 50M->31M(254M) 20.461ms
GC(3) Pause Young (G1 Evacuation Pause) 84M->48M(254M) 30.398ms
GC(4) Pause Young (G1 Evacuation Pause) 111M->70M(321M) 31.902ms

We can even combine tags:

-Xlog:gc+heap=debug:file=log/heap.log:none

Which results in this:

Heap region size: 1M
Minimum heap 8388608  Initial heap 266338304  Maximum heap 4248829952
GC(0) Heap before GC invocations=0 (full 0):
GC(0)  garbage-first heap   total 260096K, used 24576K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 24 young (24576K), 0 survivors (0K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(0) Eden regions: 24->0(151)
GC(0) Survivor regions: 0->1(3)
GC(0) Old regions: 0->0
GC(0) Humongous regions: 0->0
GC(0) Heap after GC invocations=1 (full 0):
GC(0)  garbage-first heap   total 260096K, used 985K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 1 young (1024K), 1 survivors (1024K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Heap before GC invocations=1 (full 0):
GC(1)  garbage-first heap   total 260096K, used 155609K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(1)   region size 1024K, 152 young (155648K), 1 survivors (1024K)
GC(1)  Metaspace       used 6066K, capacity 6196K, committed 6272K, reserved 1056768K
GC(1)   class space    used 548K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Eden regions: 151->0(149)
GC(1) Survivor regions: 1->3(19)
...
...

There are other new features not detailed here, but you can find a list here.

 


Configure AFD with Grid Infrastructure software (SIHA & CRS) from the very beginning.

Tue, 2018-09-18 01:04

Introduction :

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

In this blog I will explain how to set up the Grid Infrastructure software with AFD on a SIHA or CRS architecture.

Case1. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with SIHA, a Single Instance High Availability installation (formerly Oracle Restart)

Issue :

If we want to use the AFD driver from the very beginning, we should use Oracle AFD to prepare the disks for the ASM instance.
The issue comes from the fact that AFD only becomes available after the installation (it cannot be configured before the installation)!

Solution :

Step1. Install GI stack in software only mode

setup_soft_only

Step2. Run root.sh when prompted, without any other action (do not execute the generated script rootUpgrade.sh)

Step3. Run roothas.pl to setup your HAS stack

[root] /u01/app/grid/product/12.2.0/grid/perl/bin/perl -I /u01/app/grid/product/12.2.0/grid/perl/lib -I /u01/app/grid/product/12.2.0/grid/crs/install /u01/app/grid/product/12.2.0/grid/crs/install/roothas.pl

Step4. As root user proceed to configure AFD

 /u01/app/grid/product/12.2.0/grid/bin/crsctl stop has -f
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
/u01/app/grid/product/12.2.0/grid/bin/crsctl start has

Step5. Set up the AFD discovery string for the new devices, as grid user

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_dsset '/dev/sd*'

Step6. Label new disk as root

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1

Step7. As grid user, launch ASMCA to create your ASM instance, based on the disk group created on the newly labeled disk, DISK1

disk_AFD


Step8. Display the AFD driver within the HAS stack.

check_res

 

Case2. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with CRS: Cluster Ready Services

Issue :

By installing in software-only mode, you will just copy and relink the binaries.
No wrapper scripts (such as crsctl or clsecho) are created.
The issue is that AFD needs the wrapper scripts and not the binaries (crsctl.bin).

Solution :

Step1.Do it on all nodes.

Install Grid Infrastructure on all nodes of the future cluster in the “Software-only Installation” mode.

setup_soft_only

Step2. Do it on all nodes.

After the installation the wrapper scripts are not present. You can copy them from any other installation (a SIHA one works too) or use a cloned home.
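
For example, copying the two wrapper scripts from an existing installation could look like this (the host name is an assumption, adapt the paths to your environment):

scp siha-host:/u01/app/grid/product/12.2.0/grid/bin/crsctl  /u01/app/grid/product/12.2.0/grid/bin/
scp siha-host:/u01/app/grid/product/12.2.0/grid/bin/clsecho /u01/app/grid/product/12.2.0/grid/bin/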

After getting the two scripts, modify the variables inside them to be aligned with the current system used for the installation:

ORA_CRS_HOME=/u01/app/grid/product/12.2.0/grid  --should be changed
MY_HOST=dbi1                                    --should be changed
ORACLE_USER=grid
ORACLE_HOME=$ORA_CRS_HOME
ORACLE_BASE=/u01/app/oracle
CRF_HOME=/u01/app/grid/product/12.2.0/grid      --should be changed

Step3. Do it on all nodes

Configure AFD :

[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4. Do it only on the first node.

Scan & label the new disks using AFD.

/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK2 /dev/sdc1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK3 /dev/sdd1
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK1                       ENABLED   /dev/sdb1
DISK2                       ENABLED   /dev/sdc1
DISK3                       ENABLED   /dev/sdd1

Step5. Do it on the other nodes.

Scan and display the disks on the other nodes of the future cluster. No need to label them again.

[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK1                       ENABLED   /dev/sdb1
DISK2                       ENABLED   /dev/sdc1
DISK3                       ENABLED   /dev/sdd1

Step6. Do it on 1st node

Run the script config.sh as the oracle/grid user:

/u01/app/grid/product/12.2.0/grid/crs/config/config.sh

config_luster

Step7. Do it on 1st node

Set up the connectivity between all the future nodes of the cluster and follow the wizard.

conn_all_nodes

Step8. Do it on 1st node

You will be asked to create an ASM disk group.

Normally, without the previous steps, this would not be possible: with no udev, no ASMLib and no AFD configured, there are no labeled disks available for that step.

create_asm_DG

But…….

Step9. Do it on 1st node

Change the discovery path to ‘AFD:*’ and you should retrieve the disks labeled in the previous step.

afd_path

Step10. Do it on 1st node

Provide the AFD labeled disks to create the ASM disk group for the OCR files. Uncheck “Configure Oracle ASM Filter Driver”.

CREATE_ASM_DG_2

Step11. Do it on 1st node

Finalize the configuration as per documentation.

 

Additionally, an easier way to install/configure the ASM Filter Driver can be found here:
https://blog.dbi-services.com/oracle-18c-cluster-with-oracle-asm-filter-driver/

Summary: Using the scenarios described above, we can configure the Grid Infrastructure stack with AFD on a SIHA or CRS architecture.

 


User Session lost using ADF Application

Mon, 2018-09-17 11:45

In one of my missions, I was involved in a new Fusion Middleware 12c (12.2.1.2) installation with an ADF application and an Oracle Reports server instance deployment.
This infrastructure is protected using an Oracle Access Manager Single Sign-On server.
In production, the complete environment is fronted by a WAF server terminating the https.
On TEST, the complete environment is fronted by an SSL reverse proxy terminating the https.

In the chosen architecture, all Single Sign-On requests go directly through the reverse proxy to the OAM servers.
The application requests and the reports requests are routed through an HTTP server with the WebGate installed.

Below is an extract of the SSL part of the reverse Proxy configuration:
# SSL Virtual Host
<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
ErrorLog logs/ssl_errors.log
TransferLog logs/ssl_access.log
HostNameLookups off
ProxyPreserveHost On
ProxyPassReverse /oam http://appserver.example.com:14100/oam
ProxyPass /oam http://appserver.example.com:14100/oam
ProxyPassReverse /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /reports http://appserver.example.com:7778/reports
ProxyPassReverse /reports http://appserver.example.com:7778/reports
ProxyPass /myApplication http://appserver.example.com:7778/myApplication
ProxyPassReverse /myApplication http://appserver.example.com:7778/myApplication
# SSL configuration
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/myStite_com.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/mySite_com.key
</VirtualHost>

HTTP Server Virtual hosts:
# Local requests
Listen 7778
<VirtualHost *:7778>
ServerName http://appserver.example.com:7778
# Rewrite included for OAM logout redirection
RewriteRule ^/oam/(.*)$ http://appserver.example.com:14100/oam/$1
RewriteRule ^/myCustom-sso-web/(.*)$ http://appserver.example.com:14100/myCustom-sso-web/$1
</VirtualHost>

<VirtualHost *:7778>
ServerName https://mySite.com:443
</VirtualHost>

The ADF application and the Reports server mappings are done using custom configuration files (adf.conf, reports.conf) included in the HTTP Server configuration:
#adf.conf
#----------
<Location /myApplication>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9001,appserver1.example.com:9003
WLProxySSLPassThrough ON
</Location>

# Force caching for image files
<FilesMatch "\.(jpg|jpeg|png|gif|swf)$">
Header unset Surrogate-Control
Header unset Pragma
Header unset Cache-Control
Header unset Last-Modified
Header unset Expires
Header set Cache-Control "max-age=86400, public"
Header set Surrogate-Control "max-age=86400"
</FilesMatch>

#reports.conf
#-------------
<Location /reports>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9004,appserver1.example.com:9004
DynamicServerList OFF
WLProxySSLPassThrough ON
</Location>

After configuring the ADF application and the Reports Server to be protected through the WebGate, the users could connect and work without any issue during the first 30 minutes.
Then they lost their sessions. We first thought it was related to the session or inactivity timeouts.
We increased the values of those timeouts without success.
We checked the logs and found out that the issue was related to the OAM and WebGate cookies.

The OAM Server gets and sets a cookie named OAM_ID.
Each WebGate gets and sets a cookie named OAMAuthnCookie_ + the host name and port.

The contents of the cookies are:

Authenticated User Identity (User DN)
Authentication Level
IP Address
SessionID (Reference to Server side session – OAM11g Only)
Session Validity (Start Time, Refresh Time)
Session InActivity Timeouts (Global Inactivity, Max Inactivity)
Validation Hash

The validity of a WebGate-handled user session is 30 minutes by default; after that, the WebGate checks the OAM cookies.
Those cookies are secure cookies and were lost because they were not forwarded by the WAF or the reverse proxy, which terminate the https.

We needed to change the SSL reverse proxy configuration to send the correct information to the WebLogic Server and the HTTP Server about SSL ending at the reverse proxy level.
This has been done by adding two HTTP headers to the requests before sending them to the Oracle Access Manager or Fusion Middleware HTTP Server.

# For the WebLogic Server to be informed about SSL ending at reverse proxy level
RequestHeader set WL-Proxy-SSL true
# For the Oracle HTTP Server to take the secure cookies into account
RequestHeader set X-Forwarded-Proto "https"
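
On the TEST reverse proxy, these two directives go into the SSL virtual host shown earlier (this assumes the Apache mod_headers module is loaded); a sketch:

<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
# inform the backends that SSL is terminated at the reverse proxy
RequestHeader set WL-Proxy-SSL true
RequestHeader set X-Forwarded-Proto "https"
# ... ProxyPass and SSL directives as shown above ...
</VirtualHost>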

The WAF needed to be configured to add the same HTTP headers in the production environment.

After those changes, the issue was solved.

 


EDB containers for OpenShift 2.3 – PEM integration

Mon, 2018-09-17 11:19

A few days ago EnterpriseDB announced the availability of version 2.3 of the EDB containers for OpenShift. The main new feature in this release is the integration of PEM (Postgres Enterprise Manager), so in this post we’ll look at how we can bring up a PEM server in OpenShift. If you did not follow the last posts about EDB containers in OpenShift here is the summary:

The first step you need to do is to download the updated container images. You’ll notice that there are two new containers which have not been available before the 2.3 release:

  • edb-pemserver: Obviously this is the PEM server
  • admintool: a utility container for supporting database upgrades and launching PEM agents on the database containers

For downloading the latest release of the EDB container images for OpenShift, the procedure is the following:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker login containers.enterprisedb.com

docker pull containers.enterprisedb.com/edb/edb-as:v10
docker tag containers.enterprisedb.com/edb/edb-as:v10 localhost:5000/edb/edb-as:v10
docker push localhost:5000/edb/edb-as:v10

docker pull containers.enterprisedb.com/edb/edb-pgpool:v3.6
docker tag containers.enterprisedb.com/edb/edb-pgpool:v3.6 localhost:5000/edb/edb-pgpool:v3.6
docker push localhost:5000/edb/edb-pgpool:v3.6

docker pull containers.enterprisedb.com/edb/edb-pemserver:v7.3
docker tag containers.enterprisedb.com/edb/edb-pemserver:v7.3 localhost:5000/edb/edb-pemserver:v7.3
docker push localhost:5000/edb/edb-pemserver:v7.3

docker pull containers.enterprisedb.com/edb/edb-admintool
docker tag containers.enterprisedb.com/edb/edb-admintool localhost:5000/edb/edb-admintool
docker push localhost:5000/edb/edb-admintool

docker pull containers.enterprisedb.com/edb/edb-bart:v2.1
docker tag containers.enterprisedb.com/edb/edb-bart:v2.1 localhost:5000/edb/edb-bart:v2.1
docker push localhost:5000/edb/edb-bart:v2.1

In my case I have quite a few EDB containers available now (…and I could go ahead and delete the old ones, of course):

docker@minishift:~$ docker images | grep edb
containers.enterprisedb.com/edb/edb-as          v10                 1d118c96529b        45 hours ago        1.804 GB
localhost:5000/edb/edb-as                       v10                 1d118c96529b        45 hours ago        1.804 GB
containers.enterprisedb.com/edb/edb-admintool   latest              07fda249cf5c        10 days ago         531.6 MB
localhost:5000/edb/edb-admintool                latest              07fda249cf5c        10 days ago         531.6 MB
containers.enterprisedb.com/edb/edb-pemserver   v7.3                78954c316ca9        10 days ago         1.592 GB
localhost:5000/edb/edb-pemserver                v7.3                78954c316ca9        10 days ago         1.592 GB
containers.enterprisedb.com/edb/edb-bart        v2.1                e2410ed4cf9b        10 days ago         571 MB
localhost:5000/edb/edb-bart                     v2.1                e2410ed4cf9b        10 days ago         571 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.6                e8c600ab993a        10 days ago         561.1 MB
localhost:5000/edb/edb-pgpool                   v3.6                e8c600ab993a        10 days ago         561.1 MB
containers.enterprisedb.com/edb/edb-as                              00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-as                                           00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-pgpool                   v3.5                e7efdb0ae1be        4 months ago        564.1 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.5                e7efdb0ae1be        4 months ago        564.1 MB
localhost:5000/edb/edb-as                       v10.3               90b79757b2f7        4 months ago        842.7 MB
containers.enterprisedb.com/edb/edb-bart        v2.0                48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     2.0                 48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     v2.0                48ee2c01db92        4 months ago        590.6 MB

The only bits I changed in the yaml file that describes my EDB AS deployment compared to the previous posts are two lines: the edb-pgpool image reference now points to the v3.6 tag and the edb-as image reference to the v10 tag:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.6
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

As the template starts with one replica I scaled that to three so finally the setup we start with for PEM is this (one master and two replicas, which is the minimum you need for automated failover anyway):

dwe@dwe:~$ oc get pods -o wide -L role
edb-as10-0-1-4ptdr   1/1       Running   0          7m        172.17.0.5   localhost   standbydb
edb-as10-0-1-8mw7m   1/1       Running   0          5m        172.17.0.6   localhost   standbydb
edb-as10-0-1-krzpp   1/1       Running   0          8m        172.17.0.9   localhost   masterdb
edb-pgpool-1-665mp   1/1       Running   0          8m        172.17.0.8   localhost   queryrouter
edb-pgpool-1-mhgnq   1/1       Running   0          8m        172.17.0.7   localhost   queryrouter

Nothing special happened so far except that we downloaded the new container images, pushed them to the local registry and adjusted the deployment yaml to reference the latest version of the containers. What we want to do now is to create the PEM repository container so that we can add the database to PEM, which will give us monitoring and alerting. As PEM requires persistent storage as well, we need a new storage definition:

Selection_016
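
If you prefer to create the claim from the command line instead of the web console, a minimal sketch could look like this (the claim name and size are assumptions matching the output below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edb-pem-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

Saved to a file, this can be applied with “oc create -f”.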

You can of course also get the storage definition using the “oc” command:

dwe@dwe:~$ oc get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
edb-bart-claim      Bound     pv0091    100Gi      RWO,ROX,RWX                   16h
edb-pem-claim       Bound     pv0056    100Gi      RWO,ROX,RWX                   50s
edb-storage-claim   Bound     pv0037    100Gi      RWO,ROX,RWX                   16h

The yaml file for the PEM server is this one (notice that the container image referenced is coming from the local registry):

apiVersion: v1
kind: Template
metadata:
   name: edb-pemserver
   annotations:
    description: "Standard EDB Postgres Enterprise Manager Server 7.3 Deployment Config"
    tags: "pemserver"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-webservice 
    labels:
      name: ${DATABASE_NAME}-webservice
  spec:
    selector:
      role: pemserver 
    ports:
    - name: https
      port: 30443
      nodePort: 30443
      protocol: TCP
      targetPort: 8443
    - name: http
      port: 30080
      nodePort: 30080
      protocol: TCP
      targetPort: 8080
    type: NodePort
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: edb-pemserver
  spec:
    replicas: 1
    selector:
      app: pemserver 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: pemserver 
          cluster: ${DATABASE_NAME} 
      spec:
        containers:
        - name: pem-db
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER
            value: ${DATABASE_USER}
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: PGPORT
            value: ${PGPORT}
          - name: RESTORE_FILE
            value: ${RESTORE_FILE}
          - name: ENABLE_HA_MODE
            value: "No"
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: Always
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        - name: pem-webclient 
          image: localhost:5000/edb/edb-pemserver:v7.3
          imagePullPolicy: Always 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: PGPORT
            value: ${PGPORT}
          - name: CIDR_ADDR
            value: ${CIDR_ADDR}
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          - name: DEBUG_MODE
            value: ${DEBUG_MODE}
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: httpd-shm
            mountPath: /run/httpd
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: httpd-shm 
          emptyDir:
            medium: Memory 
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'pem'
  required: true
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: '5444'
  required: true
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: 'edb-data-pv'
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: 'edb-data-pvc'
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: CIDR_ADDR 
  displayName: CIDR address block for PEM 
  description: CIDR address block for PEM (leave '0.0.0.0/0' for default) 
  value: '0.0.0.0/0' 
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Again, don’t process the template right now, just save it as a template:
Selection_001
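
If you prefer the command line over the console, saving the yaml as a template works as well (the file name is an assumption):

dwe@dwe:~$ oc create -f edb-pemserver-template.yaml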

Once we have that available we can start to deploy the PEM server from the catalog:
Selection_002

Selection_003

Of course we need to reference the storage definition we created above:
Selection_004

Leave everything else at its defaults and create the deployment:
Selection_005

A few minutes later you should have PEM ready:
Selection_011

For connecting to PEM with your browser have a look at the service definition to get the port:
Selection_012

Once you have that you can connect to PEM:
Selection_013
Selection_014

In the next post we’ll look at how we can add our existing database deployment to our just created PEM server so we can monitor the instances and configure alerting.

 


We are proud to announce:

Fri, 2018-09-14 09:42

Selection_015

(no words required for this post, the image says it all)

 


Masking Data With PostgreSQL

Thu, 2018-09-13 10:01

I was searching for a tool to anonymize data in a PostgreSQL database and I have tested the postgresql_anonymizer extension.
postgresql_anonymizer is a set of SQL functions that remove personally identifiable values from a PostgreSQL table and replace them with random-but-plausible values. The goal is to avoid any identification from the data record while remaining suitable for testing, data analysis and data processing.
In this blog I am showing how this extension can be used. I am using a PostgreSQL 10 database.
The first step is to install the extension. In my case I did it with the pgxn client:

[postgres@pgserver2 ~]$ pgxn install postgresql_anonymizer --pg_config /u01/app/postgres/product/10/db_1/bin/pg_config
INFO: best version: postgresql_anonymizer 0.0.3
INFO: saving /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: unpacking: /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: building extension
gmake: Nothing to be done for `all'.
INFO: installing extension
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension/anon'
/usr/bin/install -c -m 644 .//anon.control '/u01/app/postgres/product/10/db_1/share/extension/'
/usr/bin/install -c -m 644 .//anon/anon--0.0.3.sql  '/u01/app/postgres/product/10/db_1/share/extension/anon/'
[postgres@pgserver2 ~]$

We can then verify that under /u01/app/postgres/product/10/db_1/share/extension we have a file anon.control and a directory named anon

[postgres@pgserver2 extension]$ ls -ltra anon*
-rw-r--r--. 1 postgres postgres 167 Sep 13 10:54 anon.control

anon:
total 18552
drwxrwxr-x. 3 postgres postgres    12288 Sep 13 10:54 ..
drwxrwxr-x. 2 postgres postgres       28 Sep 13 10:54 .
-rw-r--r--. 1 postgres postgres 18980156 Sep 13 10:54 anon--0.0.3.sql

Let’s create a database named prod and the required extensions; tsm_system_rows should be delivered by the contrib packages.

prod=# \c prod
You are now connected to database "prod" as user "postgres".
prod=#
prod=# CREATE EXTENSION tsm_system_rows;;
CREATE EXTENSION
prod=#

prod=# CREATE EXTENSION anon;
CREATE EXTENSION
prod=#


prod=# \dx
                                  List of installed extensions
      Name       | Version |   Schema   |                        Description
-----------------+---------+------------+-------------------------------------------------------------
 anon            | 0.0.3   | anon       | Data anonymization tools
 plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
 tsm_system_rows | 1.0     | public     | TABLESAMPLE method which accepts number of rows as a limit
(3 rows)

prod=#

The extension creates the following functions in the anon schema. These functions can be used to mask data.

prod=# set search_path=anon;
SET
prod=# \df
                                                   List of functions
 Schema |           Name           |     Result data type     |                      Argument data types                               |  Type
--------+--------------------------+--------------------------+-------------------------------------------------------------------------+--------
 anon   | random_city              | text                     |                                                                         | normal
 anon   | random_city_in_country   | text                     | country_name text                                                       | normal
 anon   | random_company           | text                     |                                                                         | normal
 anon   | random_country           | text                     |                                                                         | normal
 anon   | random_date              | timestamp with time zone |                                                                         | normal
 anon   | random_date_between      | timestamp with time zone | date_start timestamp with time zone, date_end timestamp with time zone | normal
 anon   | random_email             | text                     |                                                                         | normal
 anon   | random_first_name        | text                     |                                                                         | normal
 anon   | random_iban              | text                     |                                                                         | normal
 anon   | random_int_between       | integer                  | int_start integer, int_stop integer                                     | normal
 anon   | random_last_name         | text                     |                                                                         | normal
 anon   | random_phone             | text                     | phone_prefix text DEFAULT '0'::text                                     | normal
 anon   | random_region            | text                     |                                                                         | normal
 anon   | random_region_in_country | text                     | country_name text                                                       | normal
 anon   | random_siren             | text                     |                                                                         | normal
 anon   | random_siret             | text                     |                                                                         | normal
 anon   | random_string            | text                     | l integer                                                               | normal
 anon   | random_zip               | text                     |                                                                         | normal
(18 rows)

prod=#

Now in the database prod let’s create a table with some data.

prod=# \d customers
                      Table "public.customers"
   Column   |         Type          | Collation | Nullable | Default
------------+-----------------------+-----------+----------+---------
 first_name | character varying(30) |           |          |
 last_name  | character varying(30) |           |          |
 email_add  | character varying(30) |           |          |
 country    | character varying(60) |           |          |
 iban       | character varying(60) |           |          |
 amount     | integer               |           |          |

prod=#

prod=# table customers;
 first_name | last_name |        email_add        |   country    |            iban            |   amount
------------+-----------+-------------------------+--------------+----------------------------+------------
 Michel     | Delco     | michel.delco@yaaa.fr    | FRANCE       | FR763000600001123456890189 |    5000000
 Denise     | Blanchot  | denise.blanchot@yaaa.de | GERMANY      | DE91100000000123456789     | 1000000000
 Farid      | Dim       | farid.dim@yaaa.sa       | Saudi Arabia | SA4420000001234567891234   |    2500000
(3 rows)

prod=#

Let’s say that I want some people to have access to all the data of this table, but I don’t want them to see the real email, the real country and the real IBAN of the customers.
One solution is to create a view that returns anonymized data for these columns, replacing them with random-but-plausible values:

prod=# create view Customers_anon as select first_name as Firstname ,last_name  as Lastnmame,anon.random_email() as Email ,anon.random_country() as Country, anon.random_iban() as Iban ,amount as Amount from customers;
CREATE VIEW

And then grant the access privilege to the concerned people.
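
A minimal sketch of such a grant could be (the role name is an assumption):

prod=# create role report_user login password 'secret';
CREATE ROLE
prod=# grant select on customers_anon to report_user;
GRANT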

prod=# select * from customers_anon ;
 firstname | lastnmame |             email             | country |            iban            |   amount
-----------+-----------+-------------------------------+---------+----------------------------+------------
 Michel    | Delco     | wlothean0@springer.com        | Spain   |  AD1111112222C3C3C3C3C3C3  |    5000000
 Denise    | Blanchot  | emoraloy@dropbox.com          | Myanmar |  AD1111112222C3C3C3C3C3C3  | 1000000000
 Farid     | Dim       | vbritlandkt@deliciousdays.com | India   |  AD1111112222C3C3C3C3C3C3  |    2500000
(3 rows)

prod=#
 


Redhat Forum 2018 – everything is OpenShift

Wed, 2018-09-12 11:31

I had an exciting and informative day at the Redhat Forum Zurich.

After starting with a short welcome in the packed movie theater at Zurich Sihlcity, we had the great pleasure of listening to Jim Whitehurst. With humor, he talked about the success of Redhat during the last 25 years.

IMG_20180911_094607

The partner and success stories of Vorwerk / Microsoft / Accenture / Avaloq / Swisscom and SAP impressively showed the potential and the usage of OpenShift.
firefox_2018-09-12_18-11-44

After the lunch break, which was great for networking and talking to some of the partners, the breakout sessions started.
The range of sessions showed the importance of OpenShift for agile businesses.

Here is a short summary of three sessions:
Philippe Bürgisser (acceleris) showed, using a practical example, his sticking points of bringing OpenShift into production.
PuzzleIT, adcubum and Helsana gave amazing insights into their journey to move adcubum syrius to APPUiO.
RedHat and acceleris explained how Cloud/OpenShift simplifies and improves development cycles.
firefox_2018-09-12_17-50-40

During the closing note, Redhat took up the cudgels for women in IT and their importance, a surprising and appreciated aspect – (Red)Hats off!
Thank you for that great event! Almost 900 participants this year can’t be wrong.
firefox_2018-09-12_18-05-56

 


[Oracle 18c new feature] Quick win to improve queries with Memoptimized Rowstore

Tue, 2018-09-11 09:18

With its 18th release, Oracle comes with many improvements. Some of them are obvious and some of them more discrete.
This is the case for the new buffer area (memory area) called the Memoptimize pool. This new area, part of the SGA, is used to store the data and metadata of standard Oracle tables (heap-organized tables) to significantly improve the performance of queries that filter on primary keys.

This new MEMOPTIMIZE POOL memory area is split into 2 parts:

  • Memoptimize buffer area: 75% of the space, reserved to store table buffers the same way as they are stored in the so-called buffer cache
  • Hash index: 25% of the space, reserved to store the hash index of the primary keys of the tables in the Memoptimize buffer area

To manage this space a new parameter, MEMOPTIMIZE_POOL_SIZE, is available; unfortunately it is not dynamic. This parameter cannot be changed at run time and is not managed by the database automatic memory management. It takes its space from SGA_TARGET, so be careful when sizing it.

Before this new memory structure, a client who wanted to query a standard table with a filter on its PK (e.g. where COL_PK = X) had to wait on I/Os coming from disk to memory until the X value was reached in the index, and then on I/Os again from disk to memory to fetch the table block containing the row where COL_PK = X. This mechanism of course consumes I/Os, and also CPU cycles because it involves other processes of the instance that need to perform some tasks.

Now, thanks to this new memory space, when a client runs the exact same query with COL_PK = X, it can directly hash the value and walk through the hash index to find the row location in the Memoptimize buffer area. The result is then directly picked up by the client process. This results in less CPU consumption and fewer disk I/Os in most cases, at the cost of memory space.

When to use it?

It is only useful for queries on a table with an equality filter on the PK. You can weigh the need against the size of the requested table and the frequency of such queries.

4 steps activation

  1. Check that the COMPATIBLE parameter is set to 18.0.0 or higher
  2. Set the parameter MEMOPTIMIZE_POOL_SIZE to the desired value (restart required)
  3. Alter (or create) target table with the “MEMOPTIMIZE FOR READ” clause
  4. Then execute the procedure “DBMS_MEMOPTIMIZE.POPULATE( )” to populate the MEMOPTIMIZE POOL with the target table

How to remove a table from the MEMOPTIMIZE POOL ?

With the procedure DROP_OBJECT() from the DBMS_MEMOPTIMIZE package.

You can disable access to this new MEMOPTIMIZE POOL for a table by using the “NO MEMOPTIMIZE FOR READ” clause.
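
Putting the activation and removal steps together, a minimal SQL sketch could look like this (the schema, table and column names are assumptions; the table needs an enabled primary key):

-- the pool size is static: set it in the spfile and restart the instance
ALTER SYSTEM SET memoptimize_pool_size=1G SCOPE=SPFILE;

-- mark the table as eligible for the Memoptimized Rowstore
ALTER TABLE app.customers MEMOPTIMIZE FOR READ;

-- populate the Memoptimize pool with the table
EXEC DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'APP', table_name => 'CUSTOMERS');

-- equality filters on the primary key can now be served from the pool
SELECT cust_name FROM app.customers WHERE cust_id = 42;

-- remove the table from the pool and disable the feature for it
EXEC DBMS_MEMOPTIMIZE.DROP_OBJECT(schema_name => 'APP', table_name => 'CUSTOMERS');
ALTER TABLE app.customers NO MEMOPTIMIZE FOR READ;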

 

I hope this helps; please do not hesitate to contact us should you want more details.

Nicolas

 


PII search using HCI

Tue, 2018-09-11 05:04

In a previous blog, we described how to install Hitachi Content Intelligence, the solution from Hitachi Vantara for data indexing and search. In this blog post, we will see how we can use Hitachi Content Intelligence to perform a basic search for personally identifiable information (PII).

Data Connections

HCI allows you to connect to multiple data sources using the default data connectors. The first step is to create a data connection. By default, multiple data connectors are available:

HCI_data_connectors

For our example, we will simply use the Local File System as the data repository. Note that the directory must be within the HCI install directory.

Below the data connection configuration for our PII demo.

HCI_Data_Connection

Click on Test after adding the information and click on Create.

A new data connection will appear in your dashboard.

Processing Pipelines
After creating the data connection, we will build a processing pipeline for our PII example.
Click on Processing Pipelines > Create a Pipeline. Enter a name for your pipeline (optionally a description) and click on Create.
Click on Add Stages, and create your desired pipeline. For PII search we will use the following pipeline.
HCI_PII_Pipeline
After building your pipeline, you can test it by clicking on the Test Pipeline button at the top right of your page.
Index Collections
We should now create an index collection to specify how we want to index our data set.
First, click on Create Index inside the Index Collections button. Create an HCI Index and use the schemaless option.
HCI_Index
Content Classes
Then you should create your content classes to extract the desired information from your data set. For our PII example, we will create 3 content classes: one for American Express credit cards, one for Visa credit cards and one for Social Security Numbers.
HCI_content_classes
For American Express credit cards, you should add the following pattern.
HCI_AMEX
Pattern for Visa credit card.
HCI_VISA
Pattern for Social Security Number.
HCI_SSN
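The exact expressions depend on your data, but as an illustration, commonly used simplified patterns for these three content classes look like the following (not necessarily the exact ones shown in the screenshots):
American Express : 3[47][0-9]{13}
Visa             : 4[0-9]{12}([0-9]{3})?
SSN              : [0-9]{3}-[0-9]{2}-[0-9]{4}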
Start your workflow
When all steps are completed, you can start your workflow and wait until it finishes.
HCI_Workflow_Start
HCI Search
Use the HCI Search application to visualize the results.
https://<hci_instance_ip>:8888
Select your index name in the search field, and navigate through the results.
You can also display the results in charts and graphics
HCI_Search_Graph
This demo is also available on the Hitachi Vantara Community website: https://community.hitachivantara.com/thread/13804-pii-workflows
 


PDB lockdown with Oracle 18.3.0.0

Mon, 2018-09-10 11:01

The PDB lockdown feature offers you the possibility to restrict operations and functionality available from within a PDB, and might be very useful from a security perspective.

Some new features have been added in Oracle 18.3.0.0:

  • You have the possibility to create PDB lockdown profiles in the application root, as in the CDB root. This allows a more precise access control for the applications associated with the application container.
  • You can create a PDB lockdown profile from another PDB lockdown profile.
  • Three default PDB lockdown profiles have been added: PRIVATE_DBAAS, SAAS and PUBLIC_DBAAS
  • v$lockdown_rules is a new view that displays the contents of a PDB lockdown profile.

Let’s make some tests:

First, we create a lockdown profile from the CDB (as we did with Oracle 12.2):

SQL> create lockdown profile psi;

Lockdown Profile created.

We alter the lockdown profile to disable every ALTER SYSTEM SET statement on the PDB side, except for setting open_cursors:

SQL> alter lockdown profile PSI disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

Then we enable the lockdown profile:

SQL> alter system set PDB_LOCKDOWN=PSI;

System altered.

We can check the pdb_lockdown parameter value from the CDB side:

SQL> show parameter pdb_lockdown

NAME				     TYPE	 VALUE
------------------------------------ ----------- -------
pdb_lockdown			     string	 PSI

What happens on the PDB side?

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set optimizer_mode='FIRST_ROWS_10';
alter system set optimizer_mode='FIRST_ROWS_10'
*
ERROR at line 1:
ORA-01031: insufficient privileges

This is a good feature, allowing a greater degree of separation between the different PDBs of the same instance.

We can create a lockdown profile disabling partitioned tables creation:

SQL> connect / as sysdba
Connected.
SQL> create lockdown profile psi;

Lockdown Profile created.

SQL> alter lockdown profile psi disable option=('Partitioning');

Lockdown Profile altered.

SQL> alter system set pdb_lockdown ='PSI';

System altered.

On the CDB side, we can create partitioned tables:

SQL> create table emp (name varchar2(10)) partition by hash(name);

Table created.

On the PDB side we cannot create partitioned tables:

SQL> alter session set container = pdb;

Session altered.

SQL> show parameter pdb_lockdown

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
pdb_lockdown                         string      APP
SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

We now have the possibility to create a lockdown profile from another one.

Remember that we have the PDB lockdown profile app disabling partitioned table creation; we can create a new app_hr lockdown profile from the app lockdown profile and add new restrictions to the app_hr one:

SQL> create lockdown profile app_hr from app;

Lockdown Profile created.

With the app_hr lockdown profile, it will no longer be possible to run alter system flush shared_pool:

SQL> alter lockdown profile app_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

We can query the dba_lockdown_profiles view:

SQL> SELECT profile_name, rule_type, rule, status 
     FROM   dba_lockdown_profiles order by 1;

PROFILE_NAME   RULE_TYPE   RULE           STATUS
-------------- ----------- -------------- --------
APP            OPTION      PARTITIONING   DISABLE
APP_HR         STATEMENT   ALTER SYSTEM   DISABLE
APP_HR         OPTION      PARTITIONING   DISABLE

SQL> alter system set pdb_lockdown=app_hr;

System altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system flush shared_pool ;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges
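
This is also a good opportunity to look at the new v$lockdown_rules view: queried from inside the PDB, it should list the rules enforced by the active profile (a minimal sketch; we assume columns similar to DBA_LOCKDOWN_PROFILES such as RULE_TYPE, RULE and STATUS):

SQL> select rule_type, rule, status from v$lockdown_rules;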

If we reset pdb_lockdown to app, we can now flush the shared pool:

SQL> alter system set pdb_lockdown=app;

System altered.

SQL> alter system flush shared_pool ;

System altered.

We can now create lockdown profiles in the application root, so let's create an application container:

SQL> CREATE PLUGGABLE DATABASE apppsi 
AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY manager
file_name_convert=('/home/oracle/oradata/DB18', 
'/home/oracle/oradata/DB18/apppsi');  

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO
	 4 APPPSI			  MOUNTED

We open the application PDB:

SQL> alter pluggable database apppsi open;

Pluggable database altered.

We connect to the application container :

SQL> alter session set container=apppsi;

Session altered.

We have the possibility to create a lockdown profile:

SQL> create lockdown profile apppsi;

Lockdown Profile created.

And to disable some features:

SQL> alter lockdown profile apppsi disable option=('Partitioning');

Lockdown Profile altered.

But there is a problem if we try to enable the profile:

SQL> alter system set pdb_lockdown=apppsi;
alter system set pdb_lockdown=apppsi
*
ERROR at line 1:
ORA-65208: Lockdown profile APPPSI does not exist.

And, surprisingly, we cannot create a partitioned table:

SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

Let’s do some more tests: we alter the lockdown profile like this:

SQL> alter lockdown profile apppsi disable statement=('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

In fact, we cannot use sys to test lockdown profiles in the application root; we have to use an application user with privileges such as CREATE LOCKDOWN PROFILE or ALTER LOCKDOWN PROFILE in the application container. So, after creating an appuser in the application root:

SQL> connect appuser/appuser@apppsi

SQL> create lockdown profile appuser_hr;

Lockdown Profile created.

SQL> alter lockdown profile appuser_hr disable option=('Partitioning');

Lockdown Profile altered.

And now it works fine:

SQL> alter system set pdb_lockdown=appuser_hr;

System altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

And can we now re-enable the partitioning option for the appuser_hr profile in the application root?

SQL> alter lockdown profile appuser_hr enable option = ('Partitioning');

Lockdown Profile altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

It does not work as expected: the lockdown profile has been updated, but, as before, we still cannot create a partitioned table.

Let's do another test with the statement option: we alter the lockdown profile in order to disable all alter system set statements except for open_cursors:

SQL> alter lockdown profile appuser_hr disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter system set open_cursors=500;

System altered.

This is a normal behavior.

Now we alter the lockdown profile in order to disable alter system flush shared_pool:

SQL> alter lockdown profile appuser_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

That’s fine :=)

Now we enable the statement:

SQL> alter lockdown profile appuser_hr enable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

And again this is not possible …

Let’s try in the CDB root:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app disable statement =('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

The behavior is correct; let's try to enable it:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app enable statement=('ALTER SYSTEM') 
clause ALL;

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';

System altered.

This is correct again; it seems that re-enabling rules does not work correctly in the application root…

In conclusion, the new lockdown profile features are powerful and will be very useful for security reasons. They allow DBAs to define, with a finer granularity, restrictions that limit users' rights to only what they need to access. But we have to be careful: with PDB lockdown profiles we can easily build a very complicated database administration setup.

The article PDB lockdown with Oracle 18.3.0.0 appeared first on Blog dbi services.

Connecting to Azure SQL Managed Instance from on-premise network

Mon, 2018-09-10 06:26

A couple of weeks ago, I wrote about my first immersion into SQL Server managed instances (SQL MIs), a new deployment model of Azure SQL Database which provides near 100% compatibility with the latest SQL Server on-premises database engine. In the previous blog post, to test a connection to this new service, I installed an Azure virtual machine, including SQL Server Management Studio, on the same VNET (172.16.0.0/16). For testing purposes, we don't need more, but in a real production scenario, chances are your Azure SQL MI would be part of your on-premise network with a more complex Azure network topology including VNETs, Express Route or a VPN S2S as well. Implementing such an infrastructure will likely not be your concern if you are a database administrator, but you need to be aware of the underlying connectivity components and how to diagnose possible issues or how to interact with your network team, in order to avoid being under pressure and feeling the wrath of your application users too quickly :)

So, I decided to implement this kind of infrastructure in my lab environment, but if, like me, you're not a network guru, you will likely face some difficulties configuring some components, especially when it comes to the VPN S2S. In addition, you have to understand several new notions about Azure networking before hoping to see your infrastructure work correctly. As an old sysadmin, I admit it was a great opportunity to turn my network learning into a concrete use case. Let's first set the initial context. Here is the lab environment I've been using for a while for different purposes, such as internal testing and event presentations. It addresses a lot of testing scenarios, including multi-subnet architectures with SQL Server FCIs and SQL Server availability groups.

blog 142 - 1 - lab environment

Obviously, some static routes are already set up to allow network traffic between my on-premise subnets. As you guessed, the game consisted of extending this on-premise network to my SQL MI network on Azure. As a reminder, SQL MI is not reachable from a public endpoint and you may connect only from an internal network (either directly from Azure or from your on-premise network). As said previously, one of my biggest challenges was to configure my remote access server (RRAS) as a VPN server to communicate with my SQL MI Azure network. Fortunately, there are plenty of pointers on the internet that may help you achieve this task. This blog post is a good walk-through by the way. In my context, you will note I also had to apply special settings to my home router in order to allow IPsec Passthrough, as well as to add my RRAS server internal IP (192.168.0.101) to the DMZ. I also used the IKEv2 VPN protocol and a pre-shared key for authentication between my gateways on-premise and on Azure. The VPN S2S configuration is environment specific, and this is probably why doing a presentation at a customer or at events is so difficult, especially if you're outside of your own network.

Anyway, let’s talk about the Azure side configuration. My Azure network topology is composed of two distinct VNETs as follows:

blog 142 - 2 - VNET configuration

The connection between my on-premise and my Azure networks is defined as shown below:

$vpnconnection = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg 
$vpnconnection.Name
$vpnconnection.VirtualNetworkGateway1.Id
$vpnconnection.LocalNetworkGateway2.Id 

dbi-vpn-connection
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The first VNET (172.17.x.x) is used as the hub virtual network and owns my gateway. The second one (172.16.x.x) is the SQL MI VNET:

Get-AzureRmVirtualNetwork | Where-Object { $_.Name -like '*-vnet' } | % {

    Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $_ | Select Name, AddressPrefix
} 

Name          AddressPrefix  
----          -------------  
default       172.17.0.0/24  
GatewaySubnet 172.17.1.0/24  
vm-net        172.16.128.0/17
sql-mi-subnet 172.16.0.0/24

 

My Azure gateway subnet (GatewaySubnet) is part of the VPN connectivity, with the related gateway connections:

$gatewaycfg = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg -Name dbi-vpn-connection 
$gatewaycfg.VirtualNetworkGateway1.Id
$gatewaycfg.LocalNetworkGateway2.Id 

/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The dbi-local-network-gw local gateway includes the following address prefixes, which correspond to my local lab environment network:

$gatewaylocal = Get-AzureRMLocalNetworkGateway -ResourceGroupName dbi-onpremises-rg -Name dbi-local-network-gw 
$gatewaylocal.LocalNetworkAddressSpace.AddressPrefixes 

192.168.0.0/16
192.168.5.0/24
192.168.40.0/24

 

Note that I've chosen a static configuration, but my guess is that I could turn to the BGP protocol instead to make things more dynamic. I will talk quickly about using BGP and routing issues at the end of the write-up. But at this stage, some configuration steps are still missing before I can hope to reach my SQL MI instance from my lab environment network. Indeed, although my VPN connection status was ok, I was only able to reach my dbi-onpremises-vnet VNET and I needed a way to connect to the sql-mi-vnet VNET. So, I had to turn on both virtual network peering and the gateway transit mechanism. By the way, when two VNETs are peered, Azure automatically routes traffic between them.

blog 142 - 3 - VNET configuration peering

Here is the peering configuration I applied to my dbi-onpremises-vnet VNET (first VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet 
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : dbi-onpremises-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : True
UseRemoteGateways         : False

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/sql-mi-rg/providers/Microsoft.Network/virtualNetworks/sql-mi-vnet

 

And here is the peering configuration of my sql-mi-vnet VNET (second VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : sql-mi-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : False
UseRemoteGateways         : True

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworks/dbi-onpremises-vnet

 

Note that, to allow traffic coming from my on-premise network to go through my first VNET (dbi-onpremises-vnet) and reach the second one (sql-mi-vnet), I need to enable some configuration settings such as Allow Gateway Transit, Allow Forwarded Traffic and Use Remote Gateways on the concerned networks.

At this stage, I still faced a weird issue: I was able to connect to a virtual machine installed on the same VNET as my SQL MI, but had no luck with the SQL instance. In addition, the psping command output confirmed my connectivity issue, which made me think of a routing problem.

blog 142 - 5 - psping command output

Routes from my on-premise network seemed to be well configured, as shown below. The VPN is a dial-up internet connection in my case.

blog 142 - 6 - local route

I also got confirmation from the Microsoft support team that my on-premise network packets were correctly sent through my Azure VPN gateway (particular thanks to Filipe Bárrios, support engineer, Azure Networking). In fact, I got stuck for a couple of days without figuring out exactly what was happening. Furthermore, checking effective routes did not seem to be a viable option in my case because there is no explicit network interface with SQL MI. Please feel free to comment if I am wrong on this point. Fortunately, I found a PowerShell script provided by Jovan Popovic (MSFT) which seems to have put me on the right track:

blog 142 - 4 - VNET PGP configuration

Referring to the Microsoft documentation, it seems BGP route propagation could be very helpful in my case.

Support transit routing between your on-premises networks and multiple Azure VNets

BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they are directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.
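
As an illustration (this is not the script mentioned above, just a sketch assuming a recent AzureRM.Network module that exposes the DisableBgpRoutePropagation property, and a hypothetical route table name), enabling BGP route propagation on the SQL MI route table could look roughly like this:

# Hypothetical route table name; propagation is enabled when DisableBgpRoutePropagation is $false
$rt = Get-AzureRmRouteTable -ResourceGroupName sql-mi-rg -Name sql-mi-routetable
$rt.DisableBgpRoutePropagation = $false
Set-AzureRmRouteTable -RouteTable $rt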

After enabling the corresponding option in the SQL MI route table and opening the SQL MI ports in my firewall, the connection was finally successful.

blog 142 - 8 - sql mi route bgp

blog 142 - 7 - psping output 2

Hope it helps!

See you

 

 

The article Connecting to Azure SQL Managed Instance from on-premise network appeared first on Blog dbi services.

Oracle 18c: Cluster With Oracle ASM Filter Driver

Sat, 2018-09-08 17:32

During the installation of Oracle Grid Infrastructure, you can optionally enable automated installation and configuration of Oracle ASM Filter Driver for your system with the Configure ASM Filter Driver check box on the Create ASM Disk Group wizard page. When you enable the Configure ASM Filter Driver box, an automated process for Oracle ASMFD is launched during Oracle Grid Infrastructure installation.

If Oracle ASMLIB exists on your Linux system, then deinstall Oracle ASMLIB before installing Oracle Grid Infrastructure, so that you can choose to install and configure Oracle ASMFD during an Oracle Grid Infrastructure installation.
In this blog I install a 2-node Oracle 18c cluster using Oracle ASMFD. Below are the disks we will use.

[root@rac18ca ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:09 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:09 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:09 /dev/sdf
[root@rac18ca ~]#

[root@rac18cb ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:46 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:46 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:46 /dev/sdf
[root@rac18cb ~]#

We suppose that all prerequisites are done (public IP, private IP, SCAN, shared disks, …). Also, we will not show all the screenshots.
The first step is to unzip the Oracle software into the ORACLE_HOME for the Grid Infrastructure.

unzip -d /u01/app/grid/18.0.0.0 LINUX.X64_180000_grid_home.zip

Then we have to use the ASMCMD afd_label command to provision the disk devices for use with Oracle ASM Filter Driver, as follows.

[root@rac18ca ~]# export ORACLE_HOME=/u01/app/oracle/18.0.0.0/grid
[root@rac18ca ~]# export ORACLE_BASE=/tmp                                       
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label VOTOCR /dev/sde --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DATA /dev/sdd --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DIVERS /dev/sdf --init
[root@rac18ca ~]#

And then we can use the ASMCMD afd_lslbl command to verify that the devices have been marked for use with Oracle ASMFD.

[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sde
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
VOTOCR                                /dev/sde
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdd
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA                                  /dev/sdd
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdf
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DIVERS                                /dev/sdf
[root@rac18ca network-scripts]#

Now that the disks are initialized for ASMFD, we can start the installation.

[oracle@rac18ca grid]$ ./gridSetup.sh

We will not show all the pictures.

imag1

imag2

imag3

imag4

imag5

imag6

imag7

And in the next window, we can choose the disks for the OCR and Voting files. We will also check Configure Oracle ASM Filter Driver.

imag8

And then we continue the installation. We will have to run the orainstRoot.sh and root.sh scripts. Not all of these steps are shown here.
At the end of the installation, we can verify the status of the cluster:

[oracle@rac18cb ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[oracle@rac18ca ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_VOTOCR.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.net1.network
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.ons
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac18ca                  Started,STABLE
      2        ONLINE  ONLINE       rac18cb                  Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18ca.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18cb.vip
      1        ONLINE  ONLINE       rac18cb                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
--------------------------------------------------------------------------------
[oracle@rac18ca ~]$

We can also check that ASMFD is enabled.

[oracle@rac18ca ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA                        ENABLED   /dev/sdd
DIVERS                      ENABLED   /dev/sdf
VOTOCR                      ENABLED   /dev/sde
[oracle@rac18ca ~]$


[oracle@rac18ca ~]$ asmcmd dsget
parameter:/dev/sd*, AFD:*
profile:/dev/sd*,AFD:*
[oracle@rac18ca ~]$

[oracle@rac18ca ~]$ asmcmd lsdsk
Path
AFD:DATA
AFD:DIVERS
AFD:VOTOCR
[oracle@rac18ca ~]$
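
The state of the filter driver itself can also be checked with the afd_state command (a quick sketch, output not shown here):

[oracle@rac18ca ~]$ asmcmd afd_state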

Conclusion
In this blog, we have seen how to install a cluster using ASMFD.

 

The article Oracle 18c: Cluster With Oracle ASM Filter Driver appeared first on Blog dbi services.

Documentum – Silent Install – xPlore IndexAgent

Sat, 2018-09-08 08:00

In previous blogs, we installed, in silent mode, the Documentum binaries (CS), a docbroker (+licence(s) if needed), several repositories (here and here), D2 and finally the xPlore binaries & Dsearch. This blog will be the last one of this series related to silent installation for Documentum, and it will be about how to install an xPlore IndexAgent on the existing docbase/repository created previously.

So let’s start, as always, with the preparation of the properties file:

[xplore@full_text_server_01 ~]$ vi /tmp/xplore_install/FT_IA_Installation.properties
[xplore@full_text_server_01 ~]$ cat /tmp/xplore_install/FT_IA_Installation.properties
### Silent installation response file for Indexagent
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
indexagent.configMode.create=1
indexagent.configMode.upgrade=0
indexagent.configMode.delete=0
indexagent.configMode.create.migration=0

### Other configurations
indexagent.ess.host=full_text_server_01.dbi-services.com
indexagent.ess.port=9300

indexagent.name=Indexagent_Docbase1
indexagent.FQDN=full_text_server_01.dbi-services.com
indexagent.instance.port=9200
indexagent.instance.password=ind3x4g3ntAdm1nP4ssw0rd

indexagent.docbase.name=Docbase1
indexagent.docbase.user=dmadmin
indexagent.docbase.password=dm4dm1nP4ssw0rd

indexagent.connectionBroker.host=content_server_01.dbi-services.com
indexagent.connectionBroker.port=1489

indexagent.globalRegistryRepository.name=gr_docbase
indexagent.globalRegistryRepository.user=dm_bof_registry
indexagent.globalRegistryRepository.password=dm_b0f_reg1s7ryP4ssw0rd

indexagent.storage.name=default
indexagent.local_content_area=/opt/xPlore/wildfly9.0.1/server/DctmServer_Indexagent_Docbase1/data/Indexagent_Docbase1/export

common.installOwner.password=ind3x4g3ntAdm1nP4ssw0rd

[xplore@full_text_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether or not the below mentioned java is a 32 or 64 bits
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • indexagent.configMode.create: Whether or not you want to install an IndexAgent (binary value)
  • indexagent.configMode.upgrade: Whether or not you want to upgrade an IndexAgent (binary value)
  • indexagent.configMode.delete: Whether or not you want to delete an IndexAgent (binary value)
  • indexagent.configMode.create.migration: This isn't used anymore in recent installer versions, but I still don't know what its usage was before… In any case, set this to 0 ;)
  • indexagent.ess.host: The Fully Qualified Domain Name of the primary Dsearch this new IndexAgent will be linked to
  • indexagent.ess.port: The port that the primary Dsearch is using
  • indexagent.name: The name of the IndexAgent to be installed. The default name is usually Indexagent_<docbase_name>
  • indexagent.FQDN: The Fully Qualified Domain Name of the current host the IndexAgent is being installed on
  • indexagent.instance.port: The port that the IndexAgent is/will be using (HTTP)
  • indexagent.instance.password: The password to be used for the new IndexAgent JBoss admin
  • indexagent.docbase.name: The name of the docbase/repository that this IndexAgent is being installed for
  • indexagent.docbase.user: The name of an account on the target docbase/repository to be used to configure the objects (updating the dm_server_config, dm_ftindex_agent_config, aso…) and that has the needed permissions for that
  • indexagent.docbase.password: The password of the above-mentioned account
  • indexagent.connectionBroker.host: The Fully Qualified Domain Name of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.connectionBroker.port: The port of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.globalRegistryRepository.name: The name of the GR repository
  • indexagent.globalRegistryRepository.user: The name of the BOF Registry account created on the CS inside the GR repository. This is usually something like “dm_bof_registry”
  • indexagent.globalRegistryRepository.password: The password used by the BOF Registry account
  • indexagent.storage.name: The name of the storage location to be created. The default one is “default”. If you intend to create new collections, you might want to give it a more meaningful name
  • indexagent.local_content_area: The path to be used to store the content temporarily on the file system. The value I used above is the default one but you can put it wherever you want. If you are using a multi-node, this path needs to be accessible from all nodes of the multi-node so you can put it under the “ess.data_dir” folder for example
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on linux, I always set a dummy password and there is no issue

 

Once the properties file is ready, make sure that the Dsearch this IndexAgent is linked to is currently running (http(s)://<indexagent.ess.host>:<indexagent.ess.port>/dsearchadmin), and that the Global Registry repository (gr_docbase) as well as the target repository (Docbase1) are running. Then you can install the Documentum IndexAgent in silent mode using the following command:

[xplore@full_text_server_01 ~]$ /opt/xPlore/setup/indexagent/iaConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f /tmp/xplore_install/FT_IA_Installation.properties

 

This now concludes the series about Documentum silent installation. There are other components that support silent installation, like the Process Engine for example, but they usually require only a few parameters (or even none), which is why I'm not including them here.

 

The article Documentum – Silent Install – xPlore IndexAgent appeared first on Blog dbi services.

SCAN listener does not know about service

Wed, 2018-09-05 06:59

When trying to connect to a database via the SCAN listener in a RAC environment with sqlplus, an ORA-12514 error is thrown, although tnsping can resolve the connect string and connecting to the same database over the node listener with sqlplus succeeds.

One possible reason could be that the remote_listener parameter of the database you are connecting to is not set to the SCAN listener of the RAC cluster.

So try to set remote_listener to SCAN_LISTENER_HOST:SCAN_LISTENER_PORT (e.g. host scan_host, port 1521):

alter system set remote_listener = 'scan_host:1521' scope=both;
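
After changing the parameter, a quick check (a small sketch from sqlplus) is to force the instance to register with the listeners and to verify what it currently points to:

alter system register;

show parameter remote_listener
show parameter service_names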

 

The article SCAN listener does not know about service appeared first on Blog dbi services.

Documentum – Silent Install – xPlore binaries & Dsearch

Sat, 2018-09-01 08:00

In previous blogs, we installed, in silent mode, the Documentum binaries (CS), a docbroker (+licence(s) if needed), several repositories (here and here) and finally D2. I believe I only have 2 blogs left and they are both related to xPlore. In this one, we will see how to install the xPlore binaries as well as how to configure a first instance (a Dsearch here) on top of it.

Just like other Documentum components, you can find some silent installation files or at least a template for the xPlore part. On the Full Text side, it is actually easier to find these silent files because they are included directly into the tar installation package so you will be able to find the following files as soon as you extract the package (xPlore 1.6):

  • installXplore.properties: Contains the template to install the FT binaries
  • configXplore.properties: Contains the template to install a FT Dsearch (primary, secondary) or a CPS only
  • configIA.properties: Contains the template to install a FT IndexAgent

 

In addition to that, and contrary to most Documentum components, you can actually find documentation about most of the xPlore silent parameters, so if you have questions, you can check the documentation.

 

1. Documentum xPlore binaries installation

The properties file for the xPlore binaries installation is really simple:

[xplore@full_text_server_01 ~]$ cd /tmp/xplore_install/
[xplore@full_text_server_01 xplore_install]$ tar -xvf xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ chmod 750 setup.bin
[xplore@full_text_server_01 xplore_install]$ rm xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ ls *.properties
configIA.properties  configXplore.properties  installXplore.properties
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ vi FT_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Installation.properties
### Silent installation response file for FT binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
SMTP_HOST=localhost
ADMINISTRATOR_EMAIL_ADDRESS=xplore@full_text_server_01.dbi-services.com

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you want to install xPlore on. This will be the base folder under which the binaries will be installed. I put here /opt/xPlore but you can use whatever you want
  • SMTP_HOST: The host to target for the SMTP (emails)
  • ADMINISTRATOR_EMAIL_ADDRESS: The email address to be used for the watchdog. If you do not specify the SMTP_HOST and ADMINISTRATOR_EMAIL_ADDRESS properties, the watchdog configuration will end-up with a non-fatal error, meaning that the binaries installation will still be working without issue but you will have to add these manually for the watchdog if you want to use it. If you don’t want to use it, you can go ahead without, the Dsearch and IndexAgents will work properly without but obviously you are loosing the features that the watchdog brings

 

Once the properties file is ready, you can install the Documentum xPlore binaries in silent using the following command:

[xplore@full_text_server_01 xplore_install]$ ./setup.bin -f FT_Installation.properties

 

2. Documentum xPlore Dsearch installation

I will use the word "Dsearch" a lot below, but this section can actually be used to install any type of instance: Primary Dsearch, Secondary Dsearch or even a CPS only. Once you have the binaries installed, you can install a first Dsearch (usually named PrimaryDsearch or PrimaryEss) that will be used for the Full Text indexing. The properties file for this component is as follows:

[xplore@full_text_server_01 xplore_install]$ vi FT_Dsearch_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Dsearch_Installation.properties
### Silent installation response file for Dsearch
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
ess.configMode.primary=1
ess.configMode.secondary=0
ess.configMode.upgrade=0
ess.configMode.delete=0
ess.configMode.cpsonly=0

### Other configurations
ess.primary=true
ess.sparenode=0

ess.data_dir=/opt/xPlore/data
ess.config_dir=/opt/xPlore/config

ess.primary_host=full_text_server_01.dbi-services.com
ess.primary_port=9300
ess.xdb-primary-listener-host=full_text_server_01.dbi-services.com
ess.xdb-primary-listener-port=9330
ess.transaction_log_dir=/opt/xPlore/config/wal/primary

ess.name=PrimaryDsearch
ess.FQDN=full_text_server_01.dbi-services.com

ess.instance.password=ds34rchAdm1nP4ssw0rd
ess.instance.port=9300

ess.ess.active=true
ess.cps.active=false
ess.essAdmin.active=true

ess.xdb-listener-port=9330
ess.admin-rmi-port=9331
ess.cps-daemon-port=9321
ess.cps-daemon-local-port=9322

common.installOwner.password=ds34rchAdm1nP4ssw0rd
admin.username=admin
admin.password=ds34rchAdm1nP4ssw0rd

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether or not the system supports a 64 bits architecture
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • ess.configMode.primary: Whether or not you want to install a Primary Dsearch (binary value)
  • ess.configMode.secondary: Whether or not you want to install a Secondary Dsearch (binary value)
  • ess.configMode.upgrade: Whether or not you want to upgrade an instance (binary value)
  • ess.configMode.delete: Whether or not you want to delete an instance (binary value)
  • ess.configMode.cpsonly: Whether or not you want to install a CPS only and not a Primary/Secondary Dsearch (binary value)
  • ess.primary: Whether or not this instance is a primary instance (set this to true if installing a primary instance)
  • ess.sparenode: Whether or not the secondary instance is to be used as a spare node. This should be set to 1 only if "ess.configMode.secondary=1" and you want it to be a spare node only
  • ess.data_dir: The path to be used to contain the instance data. For a single-node, this is usually /opt/xPlore/data and for a multi-node, it needs to be a shared folder between the different nodes of the multi-node
  • ess.config_dir: Same as “ess.data_dir” but for the config folder
  • ess.primary_host: The Fully Qualified Domain Name of the primary Dsearch this new instance will be linked to. Here we are installing a Primary Dsearch so it is the local host
  • ess.primary_port: The port that the primary Dsearch is/will be using
  • ess.xdb-primary-listener-host: The Fully Qualified Domain Name of the host where the xDB has been installed on for the primary Dsearch. This is usually the same value as “ess.primary_host”
  • ess.xdb-primary-listener-port: The port that the xDB is/will be using for the primary Dsearch. This is usually the value of “ess.primary_port” + 30
  • ess.transaction_log_dir: The path to be used to store the xDB transaction logs. This is usually under the “ess.config_dir” folder (E.g.: /opt/xPlore/config/wal/primary)
  • ess.name: The name of the instance to be installed. For a primary Dsearch, it is usually something like PrimaryDsearch
  • ess.FQDN: The Fully Qualified Domain Name of the current host the instance is being installed on
  • ess.instance.password: The password to be used for the new instance (xDB Administrator & superuser). Using the GUI installer, you can only set 1 password and it will be used for everything (JBoss admin, xDB Administrator, xDB superuser). In silent, you can separate them a little bit, if you want to
  • ess.instance.port: The port of the instance to be installed. For a primary Dsearch, it is usually 9300
  • ess.ess.active: Whether or not you want to enable/deploy the Dsearch (set this to true if installing a primary or secondary instance)
  • ess.cps.active: Whether or not you want to enable/deploy the CPS (already included in the Dsearch so set this to true only if installing a CPS Only)
  • ess.essAdmin.active: Whether or not you want to enable/deploy the Dsearch Admin
  • ess.xdb-listener-port: The port to be used by the xDB for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 30
  • ess.admin-rmi-port: The port to be used by the RMI for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 31
  • ess.cps-daemon-port: I’m not sure what this is used for because the correct port for the CPS daemon0 (on a primary Dsearch) is the next parameter but I know that the default value for this is usually “ess.instance.port” + 21. It is possible that this parameter is only used in case the new instance is a CPS Only because this port (instance port + 21) is used on a CPS Only host as Daemon0 so it would make sense… To be confirmed!
  • ess.cps-daemon-local-port: The port to be used by the CPS daemon0 for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 22. You need a few ports available after this one in case you are going to have several CPS daemons (9322, 9323, 9324, …)
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on linux, I always set a dummy password and there is no issue
  • admin.username: The name of the JBoss instance admin account to be created
  • admin.password: The password of the above-mentioned account

 

Once the properties file is ready, you can install the Documentum xPlore instance in silent using the following command:

[xplore@full_text_server_01 xplore_install]$ /opt/xPlore/setup/dsearch/dsearchConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f FT_Dsearch_Installation.properties
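
Once the configuration completes, a quick sanity check could be to verify that the Dsearch Admin answers on the instance port (a hedged sketch; adapt the protocol and port if you changed the defaults):

[xplore@full_text_server_01 xplore_install]$ curl -sk -o /dev/null -w "%{http_code}\n" http://full_text_server_01.dbi-services.com:9300/dsearchadmin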

 

You now know how to install the Full Text binaries and a first instance on top of it using the silent installation provided by Documentum.

 

The article Documentum – Silent Install – xPlore binaries & Dsearch appeared first on Blog dbi services.

Documentum – Silent Install – Remote Docbases/Repositories (HA)

Sat, 2018-09-01 03:00

In previous blogs, we installed, in silent mode, the Documentum binaries, a docbroker (+licence(s) if needed), several repositories and finally D2. In this one, we will see how to install remote docbases/repositories in order to have a High Availability environment with the docbases/repositories that we already installed in silent mode.

As mentioned in the first blog of this series, there is a utility under "$DM_HOME/install/silent/silenttool" that can be used to generate a skeleton for a CFS/Remote CS, but some parameters are still missing, so I will describe them in this blog.

In this blog, I will also configure the Global Registry (GR) repository in HA so that you have it available even if the first node fails… This is particularly important if, like me, you prefer to set the GR as the crypto repository (so it is the repository used for encryption/decryption).

 

1. Documentum Remote Global Registry repository installation

The properties file for a Remote GR installation is as follows (it assumes that you already have the binaries and a docbroker installed on this Remote CS):

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_GR.properties
### Silent installation response file for a Remote Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.PRIMARY_SERVER_CONFIG_NAME=gr_docbase
CFS_SERVER_CONFIG_NAME=content_server_02_gr_docbase
SERVER.DOCBASE_SERVICE_NAME=gr_docbase
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED=true
SERVER.PROJECTED_DOCBROKER_HOST_OTHER=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT_OTHER=1489
SERVER.GLOBAL_REGISTRY_REPOSITORY=gr_docbase
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$

 

Just like in the previous blog, I will let you set the DATA and SHARE folders as you wish.

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.COMPONENT_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.aek.passphrase.password: The password used for the AEK on the Primary CS
  • common.aek.key.name: The name of the AEK key used on the Primary CS. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you used a Lockbox to protect the AEK key on the Primary CS. If set to true, the lockbox will be downloaded from the Primary CS, that’s why you don’t need the “common.use.existing.aek.lockbox” property
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox used on the Primary CS. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password used for the Lockbox on the Primary CS
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository created on the Primary CS (dm_docbase_config.object_name)
  • SERVER.PRIMARY_SERVER_CONFIG_NAME: The name of the dm_server_config object created on the Primary CS
  • CFS_SERVER_CONFIG_NAME: The name of dm_server_config object to be created for this Remote CS
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used
  • SERVER.REPOSITORY_USERNAME: The name of the Installation Owner. I believe it can be any superuser account but I didn’t test it
  • SERVER.SECURE.REPOSITORY_PASSWORD: The password of the above account
  • SERVER.REPOSITORY_USER_DOMAIN: The domain of the above account. If using an inline user like the Installation Owner, you should keep it empty
  • SERVER.REPOSITORY_USERNAME_WITH_DOMAIN: Same value as the REPOSITORY_USERNAME if the USER_DOMAIN is kept empty
  • SERVER.REPOSITORY_HOSTNAME: The Fully Qualified Domain Name of the Primary CS
  • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository (it goes with the SERVER.CONNECT_MODE). If you set this to true, you will have to add the usual additional parameters, just like for the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_HOST: The Fully Qualified Domain Name of the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_PORT: The port used by the docbroker/connection broker on the Primary CS
  • SERVER.PROJECTED_CONNECTION_BROKER_HOST: The hostname to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to by default
  • SERVER.PROJECTED_CONNECTION_BROKER_PORT: The port to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to by default
  • SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED: Whether or not you want to validate the GR on the Primary CS. I always set this to true for the first docbase/repository installed on the Remote CS (in other words: for the GR installation). If you set this to true, you will have to provide some additional parameters:
    • SERVER.PROJECTED_DOCBROKER_HOST_OTHER: The Fully Qualified Domain Name of the docbroker/connection broker that the GR on the Primary CS projects to so this is usually the Primary CS…
    • SERVER.PROJECTED_DOCBROKER_PORT_OTHER: The port of the docbroker/connection broker that the GR on the Primary CS projects to
    • SERVER.GLOBAL_REGISTRY_REPOSITORY: The name of the GR repository
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account created on the Primary CS inside the GR repository. This is usually something like “dm_bof_registry”
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password used by the BOF Registry account

 

Once the properties file is ready, first make sure the gr_docbase is running on the “Primary” CS (content_server_01) and then start the CFS installer using the following commands:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap gr_docbase
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase gr_docbase has 1 server:
--------------------------------------------
  server name         :  gr_docbase
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  12345
  last ckpt time      :  6/12/2018 14:23:35
  next ckpt time      :  6/12/2018 14:28:35
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010101
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_GR.properties

 

Don’t forget to check the logs once done to make sure it went without issue!

 

2. Other Remote repository installation

Once you have the Remote Global Registry repository installed, you can install the Remote repository that will be used by the end-users (which therefore isn't a GR). The properties file for an additional remote repository is as follows:

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_Other.properties
### Silent installation response file for a Remote Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.PRIMARY_SERVER_CONFIG_NAME=Docbase1
CFS_SERVER_CONFIG_NAME=content_server_02_Docbase1
SERVER.DOCBASE_SERVICE_NAME=Docbase1
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$

 

I won’t list all these parameters again because, as you can see above, they are exactly the same except for the docbase/repository name. Only the last section regarding the GR validation isn’t needed anymore. Once the properties file is ready, you can install the additional remote repository in the same way:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap Docbase1
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase Docbase1 has 1 server:
--------------------------------------------
  server name         :  Docbase1
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  23456
  last ckpt time      :  6/12/2018 14:46:42
  next ckpt time      :  6/12/2018 14:51:42
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010102
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_Other.properties

 

At this point, you will have the dm_server_config object created for the second docbase/repository (a quick check with idql is sketched below), but that’s pretty much all you get… For a correct/working HA solution, you will still need to configure the jobs for HA support (is_restartable, method_verb, …), possibly change the checkpoint_interval, configure the projections, trust the needed DFC clients (JMS applications), and so on.
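As a sketch (not part of the installer output above), the dm_server_config objects of the repository could be listed with idql to confirm that the CFS one has been created:

[dmadmin@content_server_02 ~]$ idql Docbase1 -Udmadmin -P<password>
1> select object_name, r_host_name from dm_server_config;
2> go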

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Remote” Content Server (CFS) using the silent installation provided by Documentum.

 

Cet article Documentum – Silent Install – Remote Docbases/Repositories (HA) est apparu en premier sur Blog dbi services.

SQL Server Tips: How many different datetime are in my column and what the delta?

Fri, 2018-08-31 08:27

A few months ago, a customer asked me to find out how many identical date & time values a column contains and what the delta is between these dates & times.
The column is based on the function CURRENT_TIMESTAMP and is used as a key.
I know, it’s not good to use it as a key, but some developers just don’t write their SQL correctly (no need to comment on that!)…

This usage produces a lot of duplicate keys, and the customer wants to know how many there are and the delta between each date & time.

To perform this task, I create a little example with a temporary table containing a single datetime column:

CREATE TABLE [#tmp_time_count] (dt datetime not null)

I insert CURRENT_TIMESTAMP into the table a thousand times to have data to play with:

INSERT INTO [#tmp_time_count] SELECT CURRENT_TIMESTAMP
GO 1000

To see how many different datetime values I have, I just need to use DISTINCT inside a COUNT:

SELECT COUNT(DISTINCT dt) as [number_of_time_diff] from [#tmp_time_count]

datetime_diff_01
In my test, I find 36 different times for 1000 rows.
The question now is how many rows share the same date & time and what the delta is between them…
To get this information, I tried a lot of things but finally wrote this query, with a LEFT JOIN on the same table and a DATEPART on the datetime column.

SELECT DISTINCT [current].dt as [Date&Time],
       DATEPART(MILLISECOND, ISNULL([next].dt, 0) - [current].dt) as [time_diff]
FROM [#tmp_time_count] as [current]
LEFT JOIN [#tmp_time_count] as [next]
       ON [next].dt = (SELECT MIN(dt) FROM [#tmp_time_count] WHERE dt > [current].dt)

datetime_diff_02
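As an alternative sketch (not the query used at the customer, and assuming SQL Server 2012 or later), a window function such as LEAD can return both the number of rows sharing each timestamp and the gap to the next one, without the self-join:

SELECT dt AS [Date&Time],
       COUNT(*) AS [rows_with_same_dt],  -- how many rows share this exact datetime
       DATEDIFF(MILLISECOND, dt, LEAD(dt) OVER (ORDER BY dt)) AS [ms_to_next_dt]  -- NULL for the last value
FROM [#tmp_time_count]
GROUP BY dt
ORDER BY dt;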
Don’t forget to drop the table at the end…

DROP TABLE [#tmp_time_count];

Et voila! I hope this little query can help you in a similar situation…

 

Cet article SQL Server Tips: How many different datetime are in my column and what the delta? est apparu en premier sur Blog dbi services.

Dbvisit 8 Standby Daemon on Windows

Wed, 2018-08-29 13:55

In this previous blog, we installed Dbvisit Standby for Windows on both servers. We suppose that the database is created and that the standby is configured (see this blog). The steps are the same as on Linux, except that the commands are launched in a traditional DOS terminal or a PowerShell. We will not describe these steps here (see here for more details).
By default, Dbvisit will neither send nor apply the archived log files. On Windows we can use the Windows Task Scheduler or, since Dbvisit 8, the Daemon.
To use the Windows Task Scheduler, we just recall that the command to be scheduled is (a possible schtasks command is sketched after the list below):

dbvctl.exe -d PROD

    • dbvctl.exe is located in the Dbvisit install directory
    • PROD is the name of the database
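A minimal sketch of such a scheduled task, assuming the default install base C:\Program Files\Dbvisit with the standby CLI in its standby subdirectory (adjust the path, the interval and the credentials to your environment), could be:

schtasks /Create /TN "Dbvisit send-apply PROD" /SC MINUTE /MO 5 ^
         /TR "\"C:\Program Files\Dbvisit\standby\dbvctl.exe\" -d PROD" ^
         /RU dbvisit /RP *

The same kind of task would be needed on both the primary and the standby server (to send and to apply the archived logs respectively).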

Since Dbvisit 8, it is no longer necessary to use the Windows Task Scheduler to send and apply archived logs. We can use the new background process option, where Dbvisit Standby runs in the background for each DDC (database), and manage it via the Central Console.
On Windows, the background process runs as a Windows Service: one service is created on the primary and on the standby server for each database. Below is an example of how to create the service for the database PROD.
img1
And choose DATABASE ACTIONS TAB
img2
Choose Daemon Actions
img3
When we then select a host, we see the status of the daemon on this host. We create the Daemon using the INSTALL button.
img4
We provide the credentials of the user dbvisit. This user will be the owner of the service which will be created for the Daemon. And let’s submit.
img5
Everything is OK. If we click again on DAEMON ACTIONS and select the host winserver1 we will have
img6
And we can start the Daemon.
img7
The same steps have to be done on both servers.
We should then have a Windows service configured with automatic startup (a quick PowerShell check is sketched below).
img8
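To double-check from PowerShell (the exact service name is an assumption here; its display name should simply contain “Dbvisit”), something like this could be used:

Get-Service | Where-Object { $_.DisplayName -like '*Dbvisit*' } |
    Select-Object Name, DisplayName, Status, StartType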
And now Dbvisit will send and apply archive logs as soon as they are generated.

 

Cet article Dbvisit 8 Standby Daemon on Windows est apparu en premier sur Blog dbi services.

Install Dbvisit 8 on Windows

Wed, 2018-08-29 13:52

In a previous blog we described how to install Dbvisit Standby on a Linux box. In this article I am going to describe the installation on a Windows machine. We are using Dbvisit 8 and Windows Server 2016. The names of my servers are winserver1 and winserver2.
The first thing you will have to do is to download the Dbvisit Standby package here. A trial key will be sent to you. Before starting the installation, we create a user named dbvisit (feel free to change the name) with the following properties (a possible command-line equivalent is sketched after the screenshots):
img1
The user dbvisit also needs the privilege to log on as a service (User Rights Assignment in the Local Security Policy).
img2
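As a sketch, the user itself could also be created from an elevated command prompt; the “Log on as a service” right still has to be granted separately, as shown above:

REM create the dbvisit user (prompts for a password); properties and rights as per the screenshots above
net user dbvisit * /add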
The installation is very easy: just log in with a privileged user and run the executable. Below is the installation on the server winserver2. Note that Dbvisit Standby needs to be installed on both servers and that we also turned off the Windows User Account Control (UAC).

Dbvnet, Dbvagent and Standby CLI components install

img3
Click on Next
img4
Click on I Agree (anyway, we don’t have a choice)
img5
Choose the components to install. Note that the central console is not installed at this stage; we will install it later. It is recommended to install the console on a separate server (this can be a VM, on Windows or Linux).
img6
Click on Next
img7
Here we give the user we created at the beginning
img8
We provide the password
img9
And then the installation starts
img10
And the final step is to answer some configuration questions:


-----------------------------------------------------------
About to configure DBVISIT DBVNET
-----------------------------------------------------------

>>> Please specify the Dbvnet Passphrase to be used for secure connections.

The passphrase provided must be the same in both the local and remote
Dbvnet installations. It is used to establish a secure (encrypted)
Dbvnet connections

Enter a custom value:
> XXXXXXXXXXXXXXXXX

>>> Please specify the Local host name to be used by Dbvnet on this server.

Dbvnet will be listening on the local IP Address on this server which
resolve to the host name specified here.
If using a cluster or virtual IP make sure the host name or alias
specified here resolve to the IP address local to where dbvnet is
installed. The host name should resolve to IPv4 address, if not
you can use an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
>

>>> Please specify the Local Dbvnet PORT to be used.

Dbvnet will be listening on the specified port for incoming connections
from remote dbvnet connections. Please make sure that this port is not
already in use or blocked by any firewall. You may choose any value
between 1024 and 65535, however the default of 7890 is recommended.

Enter a custom value or press ENTER to accept default [7890]:
>

>>> Please specify the Remote host name to be used by Dbvnet.

By default Dbvnet will use this remote hostname for any remote
connections. Dbvnet must be installed and configured on the specified
remote host. If using a cluster or virtual IP make sure the host name
or alias specified here resolve to the IP address local to where dbvnet
is installed.
If you are unsure about the remote host name during installation, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not
you can use an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
> winserver1

>>> Please specify the Remote Dbvnet PORT to be used.

Dbvnet will connect to the remote server on this specified port.
On the remote host Dbvnet will be listening on the specified port for
incoming connections. Please make sure that this port is not already in
use or blocked by any firewall. You may choose any value between 1024
and 65535, however the default of 7890 is recommended.

Enter a custom value or press ENTER to accept default [7890]:
>

-----------------------------------------------------------
Summary of the Dbvisit DBVNET configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVNET_PASSPHRASE XXXXXXXXXXXX
DBVNET_LOCAL_HOST winserver2
DBVNET_LOCAL_PORT 7890
DBVNET_REMOTE_HOST winserver1
DBVNET_REMOTE_PORT 7890

Press ENTER to continue

-----------------------------------------------------------
About to configure DBVISIT DBVAGENT
-----------------------------------------------------------

>>> Please specify the host name to be used for the Dbvisit Agent.

The Dbvisit Agent (Dbvagent) will be listening on this local address.
If you are using the Dbvserver (GUI) - connections from the GUI will be
established to the Dbvisit Agent. The Dbvisit Agent address must be
visible from the Dbvserver (GUI) installation.
If using a cluster or virtual IP make sure the host name or alias
specified here resolve to the IP address local to where dbvnet is
installed.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
>

>>> Please specify the listening PORT number for Dbvagent.

The Dbvisit Agent (Dbvagent) will listening on the specified port for
incoming requests from the GUI (Dbvserver). Please make sure that this
port is not already in use or blocked by any firewall. You may choose
any value between 1024 and 65535, however the default of 7891 is
recommended.

Enter a custom value or press ENTER to accept default [7891]:
>

>>> Please specify passphrase for Dbvagent

Each Dbvisit Agent must have a passpharse specified. This passphrase
does not have to match between all the servers. It will be used to
establish a secure connection between the GUI (Dbvserver) and the
Dbvisit Agent.

Enter a custom value:
> XXXXXXXXXXXXXXXXXXXX

-----------------------------------------------------------
Summary of the Dbvisit DBVAGENT configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVAGENT_LOCAL_HOST winserver2
DBVAGENT_LOCAL_PORT 7891
DBVAGENT_PASSPHRASE XXXXXXXXXXXXXXXXXXX

Press ENTER to continue

No need to configure standby, skipped.
Copied file C:\Program Files\Dbvisit\dbvnet\conf\dbvnetd.conf to C:\Program Files\Dbvisit\dbvnet\conf\dbvnetd.conf.201808260218
DBVNET config file updated
Copied file C:\Program Files\Dbvisit\dbvagent\conf\dbvagent.conf to C:\Program Files\Dbvisit\dbvagent\conf\dbvagent.conf.201808260218
DBVAGENT config file updated

-----------------------------------------------------------
Component Installed Version
-----------------------------------------------------------
standby 8.0.22_36_gb602000a
dbvnet 8.0.22_36_gb602000a
dbvagent 8.0.22_36_gb602000a
dbvserver not installed

-----------------------------------------------------------

-----------------------------------------------------------
About to start service DBVISIT DBVNET
-----------------------------------------------------------
Successfully started service DBVISIT DBVNET

-----------------------------------------------------------
About to start service DBVISIT DBVAGENT
-----------------------------------------------------------
Successfully started service DBVISIT DBVAGENT

>>> Installation completed
Install log C:\Users\dbvisit\AppData\Local\Temp\dbvisit_install.log.201808260214.

Press ENTER to continue

We can then finish the installation (a quick connectivity check is sketched below).
img11
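As a sketch (assuming the default ports chosen above and that the installation has also been done on winserver1), the Dbvnet and Dbvagent listeners can be quickly verified from PowerShell:

Test-NetConnection -ComputerName winserver1 -Port 7890   # Dbvnet on the remote server
Test-NetConnection -ComputerName winserver2 -Port 7891   # Dbvagent on the local server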
Dbvserver console install

The installation of the Dbvserver (central console) is done in the same way. We will not show the pictures, only the questions we have to answer. In our case it is installed on the primary server winserver1.

-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------

Installing dbvserver...

-----------------------------------------------------------
About to configure DBVISIT DBVSERVER
-----------------------------------------------------------

>>> Please specify the host name to be used for Dbvserver

The Dbvisit Web Server (Dbvserver) will be listening on this local
address. If using a cluster or virtual IP make sure the host name or
alias specified here resolve to the IP address local to where Dbvserver
is installed.
If you are unsure about the remote host name during installation, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver1]:
>
>>> Please specify the listening port number for Dbvserver on the local server

You may choose any value between 1024 and 65535. The default recommended
value is 4433.

Note: if you can not access this port after the installation has
finished, then please double-check your server firewall settings
to ensure the selected port is open.
Enter a custom value or press ENTER to accept default [4433]:
>
>>> Please specify the host name (or IPv4 address) to be used for Dbvserver public interface

In most cases this will be the same as the listener address, if not sure
use the same value as the listener address.

The Dbvisit Web Server (Dbvserver) will be listening on the local
listener address. The public address can be set to an external IP
example a firewall address in case the Central Console (Dbvserver)
and agents (Primary and Standby Database servers) have a firewall
inbetween them. The public interface address will be passed to
the agents during communication for sending information back.
If you are unsure about the public host address, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.
Enter a custom value or press ENTER to accept default [winserver1]:
>

-----------------------------------------------------------
Summary of the Dbvisit DBVSERVER configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVSERVER_LOCAL_HOST winserver1
DBVSERVER_LOCAL_PORT 4433
DBVSERVER_PUBLIC_HOST winserver1

Press ENTER to continue


Once the console is installed, we can log in using the following URL:

https://winserver1:4433

with the default credentials admin/admin (note that you have to change them once logged in).

Conclusion: In this blog we have shown how to install Dbvisit on a Windows server. In a coming blog we will see how to create a standby database on Windows and how to schedule log shipping and log apply.

 

Cet article Install Dbvisit 8 on Windows est apparu en premier sur Blog dbi services.
