Yann Neuhaus

dbi services technical blog

Disabling database trap with dataguard broker

Thu, 2018-06-07 09:31

When you connect to the Data Guard broker locally (with ORACLE_SID set) and then disable that very database in the broker configuration, the broker session becomes unusable, as shown below:

Log in to the broker via database ila:


oracle [ILA@p03] /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs : dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration;

Configuration - ila

Protection Mode: MaxAvailability
Databases:
ILAP - Primary database
ila - Physical standby database
ilal - Logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Disable database ila in broker:


DGMGRL> disable database ila;
Disabled.

Now neither enabling the database nor showing the configuration is possible anymore:


DGMGRL> enable database ila;
ORA-16541: database is not enabled

Configuration details cannot be determined by DGMGRL
DGMGRL> show configuration;
ORA-16541: database is not enabled

Configuration details cannot be determined by DGMGRL
DGMGRL> exit

Solution: you have to connect to the broker via a database which is still enabled.
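
For example, connecting through the primary allows the standby to be re-enabled. A minimal sketch (the password placeholder and the use of the ILAP service are assumptions to be adapted to your environment):

dgmgrl sys/<password>@ILAP
DGMGRL> enable database ila;
DGMGRL> show configuration;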

 

The post Disabling database trap with dataguard broker appeared first on Blog dbi services.

Dealing with upgrade scenarios for SQL Server on Docker and Swarm

Wed, 2018-06-06 03:49

This blog post comes from an interesting experience with a customer about upgrading SQL Server in a Docker environment. Let's quickly set the context: a production environment that includes a standalone Docker engine on top of an openSUSE Linux distribution, running some SQL Server 2017 Linux containers. The deal was to update those SQL Server instances from 2017 CU1 to 2017 CU7.


The point is that we had applied the same kind of updates earlier that morning, but those concerned virtual environments with SQL Server 2017 on Windows Server 2016. As you have already guessed, we cannot proceed the same way with SQL Server containers. The good news is that the procedure is fully documented by Microsoft, but let's focus on my customer's question: can we achieve rolling upgrades with SQL Server containers? A rolling upgrade in this context means keeping the existing system running while we upgrade each component. In the context of this customer the quick answer is no, because they run standalone instances. We might hope to get as close as possible to existing rolling upgrade scenarios with SQL Server HA capabilities, but those are still pretty limited on Docker and didn't make sense in this specific context.

In addition, my customer spins up SQL Server containers by running plain docker run commands. In this case, we had no choice but to re-create the concerned containers from the new image. So basically, according to the Microsoft documentation, the game consists of the following main steps:

  • Download the latest SQL Server image from the Microsoft Docker Hub.
  • Ensure we are using persistent volumes with the SQL Server containers.
  • Take backups of the user databases (to keep them safe).
  • Remove the concerned container.
  • Re-create the container with the same definition but the upgraded base image.

The aforementioned steps will lead to some SQL Server instance unavailability.
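
A minimal sketch of these steps with plain docker commands follows; the container name, volume paths, image tag and sa password come from the lab example below or are purely illustrative, not from the customer environment:

# 1. backup the user databases to a persistent volume (sqlcmd path as in the official image)
docker exec prod_db_1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Password1 \
  -Q "BACKUP DATABASE [dbi_tools] TO DISK = '/u98/dbi_tools.bak'"
# 2. pull the upgraded base image
docker pull microsoft/mssql-server-linux:2017-CU7
# 3. remove the current container (the data stays on the mapped volumes)
docker stop prod_db_1 && docker rm prod_db_1
# 4. re-create it with the same definition but the new image
docker run -d --name prod_db_1 -p 1433:1433 \
  -v /u00/db2:/u00 -v /u01/db2:/u01 -v /u02/db2:/u02 -v /u03/db2:/u03 -v /u98/db2:/u98 \
  -e ACCEPT_EULA=Y -e MSSQL_SA_PASSWORD=Password1 \
  microsoft/mssql-server-linux:2017-CU7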

Let's simulate this scenario in my lab environment with a custom image (the principle remains the same as for my customer).

[clustadmin@docker1 PROD]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                    PORTS                    NAMES
aa6b4411e4e4        dbi/dbi_linux_sql2017:CU4   "./entrypoint.sh 'ta…"   22 minutes ago      Up 22 minutes (healthy)   0.0.0.0:1433->1433/tcp   prod_db_1
153b1cc0bbe0        registry:2                  "/entrypoint.sh /etc…"   6 weeks ago         Up About an hour          5000/tcp                 registry.1.pevhlfmsailjx889ktats1fnh

 

The first container is my SQL Server instance based on the custom image dbi/dbi_linux_sql2017:CU4. My environment also includes one user database, dbi_tools, and some related external objects such as SQL Server jobs and additional logins (dbi and test). A simplified representation of my customer's scenario:

[Screenshot: mssql-cli – SQL Server version]

[Screenshot: mssql-cli – SQL Server logins]

[Screenshot: mssql-cli – user database dbi_tools]

So, the challenge here is to upgrade the current container running SQL Server 2017 CU4 to the latest SQL Server 2017 CU7. The first step consists in upgrading the dbi/dbi_linux_sql2017:CU4 image. Note that I will use docker-compose in the next part of my demo, but we could achieve exactly the same goal differently. So, let's modify the FROM line inside the Dockerfile as follows:

# set the base image (2017 version)
# > CU4 includes SQL Server agent now
FROM microsoft/mssql-server-linux:2017-CU7

 

Then let's run docker-compose build with the following docker-compose file in order to generate a fresh SQL Server image (CU7). The interesting part of my docker-compose file:

version: '3.1'
services:
  db: 
    build: .
    image: dbi/dbi_linux_sql2017:CU7
…
[clustadmin@docker1 PROD]$ docker-compose build

 

Let’s take a look at the existing docker images:

[clustadmin@docker1 PROD]$ docker images
REPOSITORY                                   TAG                    IMAGE ID            CREATED             SIZE
dbi/dbi_linux_sql2017                        CU7                    0b4d23626dae        44 minutes ago      1.43GB
dbi/dbi_linux_sql2017                        CU4                    0db4694645ec        About an hour ago   1.42GB
…

 

My new image has been created successfully. We may also notice 2 images now: the current one (with the CU4 tag) and the new one (with the CU7 tag).

Obviously persistent volumes are an important part of the customer architecture, so I also simulated some volume mappings inside my docker-compose file.

version: '3.1'
services:
  db:
    build: .
    image: dbi/dbi_linux_sql2017:CU7
    #container_name: dbi_linux_sql2017_cu4
    ports:
      - "1433:1433"
    volumes:
      - /u00/db2:/u00
      - /u01/db2:/u01
      - /u02/db2:/u02
      - /u03/db2:/u03
      - /u98/db2:/u98
    environment:
      - MSSQL_SA_PASSWORD=Password1
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
      - MSSQL_USER=dbi
      - MSSQL_USER_PASSWORD=Password1
      - TZ=Europe/Berlin
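
To double-check that the data really lives on the host and will survive a container re-creation, the mounts of the running container can be inspected, for example:

docker inspect -f '{{ json .Mounts }}' prod_db_1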

 

Let’s move forward to the next step that consists in removing the current SQL Server 2017 CU4 container (prod_db_1):

[clustadmin@docker1 PROD]$ docker stop prod_db_1 && docker rm prod_db_1
prod_db_1
prod_db_1

 

And finally let’s spin up a new container based on the new image (SQL Server 2017 CU7)

[clustadmin@docker1 PROD]$ docker-compose up -d

 

Just out of curiosity, a quick look at the docker logs output reveals some records related to the upgrade process:

2018-06-04 22:45:43.79 spid7s      Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install.
2018-06-04 22:45:43.80 spid7s
2018-06-04 22:45:43.80 spid7s      -----------------------------------------
2018-06-04 22:45:43.80 spid7s      Execution of PRE_SQLAGENT100.SQL complete
2018-06-04 22:45:43.80 spid7s      -----------------------------------------
2018-06-04 22:45:43.81 spid7s      DMF pre-upgrade steps...
2018-06-04 22:45:44.09 spid7s      DC pre-upgrade steps...
2018-06-04 22:45:44.09 spid7s      Check if Data collector config table exists...
…
2018-06-04 22:45:59.39 spid7s      ------------------------------------
2018-06-04 22:45:59.39 spid7s      Execution of InstDac.SQL complete
2018-06-04 22:45:59.39 spid7s      ------------------------------------
2018-06-04 22:45:59.40 spid7s      -----------------------------------------
2018-06-04 22:45:59.40 spid7s      Starting execution of EXTENSIBILITY.SQL
2018-06-04 22:45:59.40 spid7s      -----------------------------------------
2018-06-04 22:45:59.40 spid7s      -----------------------------------------
2018-06-04 22:45:59.40 spid7s      Finished execution of EXTENSIBILITY.SQL
2018-06-04 22:45:59.41 spid7s      -----------------------------------------
2018-06-04 22:45:59.44 spid7s      Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.44 spid7s      Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.45 spid7s      Configuration option 'Agent XPs' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.45 spid7s      Configuration option 'Agent XPs' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.53 spid7s      Dropping view [dbo].[sysutility_ucp_configuration]
2018-06-04 22:45:59.53 spid7s      Creating view [dbo].[sysutility_ucp_configuration]...
2018-06-04 22:45:59.54 spid7s      Dropping view [dbo].[sysutility_ucp_policy_configuration]
2018-06-04 22:45:59.54 spid7s      Creating view [dbo].[sysutility_ucp_policy_configuration]...
2018-06-04 22:45:59.55 spid7s      Dropping [dbo].[fn_sysutility_get_is_instance_ucp]
….

 

The container has restarted correctly with the new base image, as shown below:

[clustadmin@docker1 PROD]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                             PORTS                    NAMES
a17800f70fff        dbi/dbi_linux_sql2017:CU7   "./entrypoint.sh 'ta…"   4 minutes ago       Up 33 seconds (health: starting)   0.0.0.0:1433->1433/tcp   prod_db_1
153b1cc0bbe0        registry:2                  "/entrypoint.sh /etc…"   6 weeks ago         Up 2 hours                         5000/tcp                 registry.1.pevhlfmsailjx889ktats1fnh

 

Let's check the new SQL Server version and whether all my objects are still present:

[Screenshot: mssql-cli – SQL Server version after the upgrade]

The upgrade seems to have completed successfully and all objects that existed before the upgrade are still there:

[Screenshot: mssql-cli – existing SQL Server objects]

 

Great job! But let's go beyond this procedure with the following question: could we have done better here? From a process perspective, the answer is probably yes, but we have to rely on the more sophisticated features provided by Swarm mode (or other orchestrators such as K8s), like service deployments, which make the upgrade procedure drastically easier. But don't get me wrong here: even in Swarm mode or other orchestrators we are still not able to guarantee zero downtime, but we may perform the upgrade faster and get very close to that target.
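
As an illustration of what the orchestrator brings, with a service already deployed in Swarm mode the whole re-create cycle can be triggered with a single command, and the update behaviour (parallelism, delay, start-first or stop-first order) is configurable. A sketch, assuming a service named dbi_db as in the demo below:

docker service update \
  --image dbi/dbi_linux_sql2017:CU7 \
  --update-order start-first \
  dbi_db

Note that start-first briefly runs the old and the new task side by side, which only works if the published ports and resources allow it, and in any case the SQL Server instance itself still sees a short interruption.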

Previously I used docker-compose to spin up my SQL Server container. Now let's use its counterpart, docker stack deploy, on a Docker Swarm environment.

[clustadmin@docker1 PROD]$ docker info | grep -i swarm
Swarm: active

[clustadmin@docker1 PROD]$ docker node ls
ID                            HOSTNAME                    STATUS              AVAILABILITY        MANAGER STATUS
hzwjpb9rtstwfex3zsbdnn5yo *   docker1.dbi-services.test   Ready               Active              Leader
q09k7pqe940qvv4c1jprzk2yv     docker2.dbi-services.test   Ready               Active
c9burq3qn4iwwbk28wrpikqra     docker3.dbi-services.test   Ready               Active

 

I already prepared a stack deployment that includes a task related to my SQL Server instance (2017 CU4):

[clustadmin@docker1 PROD]$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                       PORTS
2bmomzq0inu8        dbi_db              replicated          1/1                 dbi/dbi_linux_sql2017:CU4   *:1433->1433/tcp
[clustadmin@docker1 PROD]$ docker service ps dbi_db
ID                  NAME                IMAGE                       NODE                        DESIRED STATE       CURRENT STATE           ERROR               PORTS
rbtbkcz0cy8o        dbi_db.1            dbi/dbi_linux_sql2017:CU4   docker1.dbi-services.test   Running

 

A quick connection to the concerned SQL Server instance confirms we run on SQL Server 2017 CU4:

[Screenshot: mssql-cli – SQL Server version of the Swarm service (CU4)]

Now let's go ahead and perform the same upgrade as previously (to 2017 CU7). In this case the game consists in updating the corresponding docker-compose file with the new image as follows (only the relevant part of the docker-compose file is shown):

version: '3.1'
services:
  db: 
    build: .
    image: dbi/dbi_linux_sql2017:CU7
…

 

… and then I just have to give the new definition of my docker-compose file as input to my stack deployment as follows:

[clustadmin@docker1 PROD]$ docker stack deploy -c docker-compose.yml dbi
Ignoring unsupported options: build

Updating service dbi_db (id: 2bmomzq0inu8q0mwkfff8apm7)
…

 

The system then performs all the steps we previously ran manually in the first test, including stopping the old task (container), scheduling the task's update with the new image and finally starting the updated container, as shown below:

[clustadmin@docker1 PROD]$ docker service ps dbi_db
ID                  NAME                IMAGE                                  NODE                        DESIRED STATE       CURRENT STATE             ERROR               PORTS
4zey68lh1gin        dbi_db.1            127.0.0.1:5000/dbi_linux_sql2017:CU7   docker1.dbi-services.test   Running             Starting 40 seconds ago
rbtbkcz0cy8o         \_ dbi_db.1        dbi/dbi_linux_sql2017:CU4              docker1.dbi-services.test   Shutdown            Shutdown 40 seconds ago

 

A quick check of my new SQL Server version:

[Screenshot: mssql-cli – SQL Server version of the Swarm service after the update (CU7)]

That’s it!
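
Another nice side effect of going through the orchestrator: if the new image misbehaves, going back to the previous one is just as simple (a sketch, same service name as above):

docker service update --rollback dbi_db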

In this blog post, I hope I managed to get you interested in using Swarm mode for such cases. Next time I will talk about SQL Server upgrade scenarios on K8s, which are a little bit different.

Stay tuned!

The post Dealing with upgrade scenarios for SQL Server on Docker and Swarm appeared first on Blog dbi services.

How to compile PostgreSQL 11 with support for JIT compilation on RHEL/CentOS 7

Wed, 2018-06-06 01:30

As you might already know, PostgreSQL 11 will bring support for just-in-time (JIT) compilation. Compiling PostgreSQL 11 with JIT support on RedHat/CentOS 7 requires a little hack (more on the reason below). In this post we'll look at how you can do it, at least for testing. For production it is of course not recommended, as hacking the Makefile is nothing you want to do; at least I would not do it. Let's go.

As mentioned I am on CentOS 7:

postgres@pgbox:$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core) 

What you need to get support for JIT is LLVM. When you check the CentOS repositories, llvm is there:

postgres@pgbox:$ yum search llvm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch
 * extras: mirror.spreitzer.ch
 * updates: mirror.spreitzer.ch
===================================================== N/S matched: llvm =====================================================
llvm-devel.x86_64 : Libraries and header files for LLVM
llvm-doc.noarch : Documentation for LLVM
llvm-libs.x86_64 : LLVM shared libraries
llvm-ocaml.x86_64 : OCaml binding for LLVM
llvm-ocaml-devel.x86_64 : Development files for llvm-ocaml
llvm-ocaml-doc.noarch : Documentation for LLVM's OCaml binding
llvm-private.i686 : llvm engine for Mesa
llvm-private.x86_64 : llvm engine for Mesa
llvm-private-devel.i686 : Libraries and header files for LLVM
llvm-private-devel.x86_64 : Libraries and header files for LLVM
llvm-static.x86_64 : LLVM static libraries
mesa-private-llvm.i686 : llvm engine for Mesa
mesa-private-llvm.x86_64 : llvm engine for Mesa
mesa-private-llvm-devel.i686 : Libraries and header files for LLVM
mesa-private-llvm-devel.x86_64 : Libraries and header files for LLVM
clang.x86_64 : A C language family front-end for LLVM
llvm.x86_64 : The Low Level Virtual Machine

  Name and summary matches only, use "search all" for everything.

The issue is that the PostgreSQL documentation clearly states that LLVM needs to be at least version 3.9, while in the CentOS repository you'll find this:

postgres@pgbox:$ yum info llvm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror1.hs-esslingen.de
 * extras: mirror.fra10.de.leaseweb.net
 * updates: mirror.netcologne.de
Available Packages
Name        : llvm
Arch        : x86_64
Version     : 3.4.2
Release     : 8.el7
Size        : 1.3 M
Repo        : extras/7/x86_64
Summary     : The Low Level Virtual Machine
URL         : http://llvm.org/
License     : NCSA
Description : LLVM is a compiler infrastructure designed for compile-time,
            : link-time, runtime, and idle-time optimization of programs from
            : arbitrary programming languages.  The compiler infrastructure includes
            : mirror sets of programming tools as well as libraries with equivalent
            : functionality.

What to do? You need to add the EPEL repository, where you can find llvm 3.9 and 5.0:

postgres@pgbox:$ wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
postgres@pgbox:$ sudo yum localinstall epel-release-latest-7.noarch.rpm
postgres@pgbox:$ sudo yum install llvm5.0 llvm5.0-devel clang

Having that we should be ready for configuration:

postgres@pgbox:$ PGHOME=/u01/app/postgres/product/11/db_0
postgres@pgbox:$ SEGSIZE=1
postgres@pgbox:$ BLOCKSIZE=8
postgres@pgbox:$ WALSEGSIZE=16
postgres@pgbox:$ ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-llvm LLVM_CONFIG='/usr/lib64/llvm3.9/bin/llvm-config' \
            --with-systemd 

That succeeds, so let's compile:

postgres@pgbox:$ make -j 4 all

… and you will run into this issue:

/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2  -I../../src/include  -D_GNU_SOURCE  -flto=thin -emit-llvm -c -o localtime.bc localtime.c
clang: error: unknown argument: '-flto=thin'
make[2]: *** [localtime.bc] Error 1
make[2]: Leaving directory `/home/postgres/postgresql/src/timezone'
make[1]: *** [all-timezone-recurse] Error 2
make[1]: Leaving directory `/home/postgres/postgresql/src'
make: *** [all-src-recurse] Error 2

There is a thread on the pgsql-hackers mailing list which describes the issue. Apparently this version of the clang compiler is too old to understand the -flto=thin argument. What you can do is adjust the corresponding line in src/Makefile.global.in (the first line below is the original one, the second the modified one without the flag):

postgres@pgbox:$ vi src/Makefile.global.in
COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c
COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -emit-llvm -c
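
If you prefer not to edit the file by hand, the same change can be scripted; this is just a convenience sketch that strips the flag wherever it appears in that file and keeps a backup copy:

sed -i.orig 's/ -flto=thin//g' src/Makefile.global.in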

Doing all the stuff again afterwards:

postgres@pgbox:$ make clean
postgres@pgbox:$ ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-llvm LLVM_CONFIG='/usr/lib64/llvm3.9/bin/llvm-config' \
            --with-systemd 
postgres@pgbox:$ make -j 4 all

… led to the following issue (at least for me):

make[2]: g++: Command not found
make[2]: *** [llvmjit_error.o] Error 127
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory `/home/postgres/postgresql/src/backend/jit/llvm'
make[1]: *** [all-backend/jit/llvm-recurse] Error 2
make[1]: Leaving directory `/home/postgres/postgresql/src'
make: *** [all-src-recurse] Error 2

This should be easy to fix:

postgres@pgbox:$ sudo yum install -y gcc-c++
postgres@pgbox:$ which g++
/bin/g++

Again:

postgres@pgbox:$ make -j 4 install
postgres@pgbox:$ cd contrib
postgres@pgbox:$ make -j 4 install

… and this time it succeeds (note that I did not run the regression tests, so maybe something will still go wrong there).
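
To quickly verify that the resulting binaries were really built with LLVM support, you can check the configure flags recorded by pg_config and look for the JIT provider library and bitcode files the build installs (a rough check, using the PGHOME from above):

${PGHOME}/bin/pg_config --configure | grep -o 'with-llvm'
ls ${PGHOME}/lib | grep -i llvm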

JIT is enabled by default and controlled by these parameters:

postgres=# select name,setting from pg_settings where name like 'jit%';
          name           | setting 
-------------------------+---------
 jit                     | on
 jit_above_cost          | 100000
 jit_debugging_support   | off
 jit_dump_bitcode        | off
 jit_expressions         | on
 jit_inline_above_cost   | 500000
 jit_optimize_above_cost | 500000
 jit_profiling_support   | off
 jit_provider            | llvmjit
 jit_tuple_deforming     | on
(10 rows)

To test that it really kicks in you can do something like this:

postgres=#create table ttt (a int, b text, c date );
postgres=#insert into ttt (a,b,c)
           select aa.*, md5(aa::text), now()
             from generate_series(1,1000000) aa;
postgres=#set jit_above_cost=5;
postgres=#set jit_optimize_above_cost=5;
postgres=#set jit_inline_above_cost=5;
postgres=#explain select sum(a) from ttt;

… which should lead to a plan like this:

                                      QUERY PLAN                                       
---------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=15554.55..15554.56 rows=1 width=8)
   ->  Gather  (cost=15554.33..15554.54 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=14554.33..14554.34 rows=1 width=8)
               ->  Parallel Seq Scan on ttt  (cost=0.00..13512.67 rows=416667 width=4)
 JIT:
   Functions: 8
   Inlining: true
   Optimization: true
(9 rows)
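
To see not only that JIT kicks in but also how much time is spent on it, you can run the same statement with EXPLAIN (ANALYZE), which adds timing information to the JIT section of the plan (the exact layout of that section may differ between development snapshots). For example, from the shell:

psql -d postgres -c "set jit_above_cost=5; set jit_optimize_above_cost=5; set jit_inline_above_cost=5; explain (analyze) select sum(a) from ttt;"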

Hope that helps.

 

The post How to compile PostgreSQL 11 with support for JIT compilation on RHEL/CentOS 7 appeared first on Blog dbi services.

PostgreSQL 11: Instant add column with a non null default value

Tue, 2018-06-05 06:53

As I am currently preparing my session for the Swiss PGDay, which is about some of the new features of PostgreSQL 11, I thought this one is worth a blog post as well. Up to PostgreSQL 10, when you add a column with a non null default value to a table, the whole table needs to be rewritten. With PostgreSQL 11 this is no longer the case and adding a column in such a way is almost instant. Let's check.

We start by creating a test table in PostgreSQL 10:

postgres=# select version();
                                       version                                                           
--------------------------------------------------------------------------------------------------------
 PostgreSQL 10.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# create table test ( a int, b text, c timestamp );
CREATE TABLE
postgres=# insert into test (a,b,c) 
           select aa.*, md5(aa::text), now() 
             from generate_series ( 1, 1000000 ) aa;
INSERT 0 1000000

This gave us 1'000'000 rows, and what I want to do is check the number of sequential scans against the table before and after the alter table.

postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        0
postgres=# alter table test add column d text default 'a';
ALTER TABLE
Time: 1252.188 ms (00:01.252)
postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        1

As you can see, a sequential scan happened when the alter table was performed and it took more than a second to complete. Let's do the same in PostgreSQL 11.

Creating the table:

postgres=# select version();
                                                            version                                                            
-------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)
postgres=# create table test ( a int, b text, c timestamp );
CREATE TABLE
postgres=# insert into test (a,b,c) select aa.*, md5(aa::text), now() from generate_series ( 1, 1000000 ) aa ;
INSERT 0 1000000

Doing the same test again:

postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        0
(1 row)

postgres=# alter table test add column d text default 'a';
ALTER TABLE
Time: 5.064 ms
postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        0
(1 row)

No sequential scan at all and it only took 5 ms for the alter table to complete. This is quite a huge improvement. The question is how does that work in the background? Actually the idea is quite simple. The catalog table pg_attribute got two new columns called “attmissingval” and “atthasmissing”:

postgres=# \d pg_attribute
              Table "pg_catalog.pg_attribute"
    Column     |   Type    | Collation | Nullable | Default 
---------------+-----------+-----------+----------+---------
 attrelid      | oid       |           | not null | 
 attname       | name      |           | not null | 
 atttypid      | oid       |           | not null | 
 attstattarget | integer   |           | not null | 
 attlen        | smallint  |           | not null | 
 attnum        | smallint  |           | not null | 
 attndims      | integer   |           | not null | 
 attcacheoff   | integer   |           | not null | 
 atttypmod     | integer   |           | not null | 
 attbyval      | boolean   |           | not null | 
 attstorage    | "char"    |           | not null | 
 attalign      | "char"    |           | not null | 
 attnotnull    | boolean   |           | not null | 
 atthasdef     | boolean   |           | not null | 
 atthasmissing | boolean   |           | not null | 
 attidentity   | "char"    |           | not null | 
 attisdropped  | boolean   |           | not null | 
 attislocal    | boolean   |           | not null | 
 attinhcount   | integer   |           | not null | 
 attcollation  | oid       |           | not null | 
 attacl        | aclitem[] |           |          | 
 attoptions    | text[]    |           |          | 
 attfdwoptions | text[]    |           |          | 
 attmissingval | anyarray  |           |          | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

As soon as a new column with a non null default value is added to a table, these columns get populated. We can see that when we check our current table: the column we added has them set in pg_attribute:

postgres=# select attmissingval
                , atthasmissing 
             from pg_attribute 
            where attrelid = 'test'::regclass 
              and attname = 'd';

 attmissingval | atthasmissing 
---------------+---------------
 {a}           | t

(1 row)

We know that all the rows in that table should have the new default value, but we also know that the table was not rewritten. So whenever you select from the table and a row is missing the default, it is populated from pg_attribute:

postgres=# select d from test where a = 1;
 d 
---
 a

For new rows the default will be there anyway and as soon as the table is rewritten the flag is cleared:

postgres=# vacuum full test;
VACUUM
postgres=# select attmissingval
                , atthasmissing 
             from pg_attribute 
            where attrelid = 'test'::regclass 
              and attname = 'd';

 attmissingval | atthasmissing 
---------------+---------------
               | f
(1 row)
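
Another simple way to convince yourself that no table rewrite happens is to compare the on-disk size of the table before and after adding such a column (a quick sketch from the shell, assuming the same database and table as above):

psql -d postgres -c "select pg_size_pretty(pg_relation_size('test'));"
psql -d postgres -c "alter table test add column e text default 'xx';"
psql -d postgres -c "select pg_size_pretty(pg_relation_size('test'));"

Both size outputs should be identical, as only pg_attribute was touched.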

Nice feature.

 

The post PostgreSQL 11: Instant add column with a non null default value appeared first on Blog dbi services.

How to flashback databases in dataguard with broker

Tue, 2018-06-05 06:15

Last week I had to do some tests with Data Guard. To make restores easier, restore points were required.

The following configuration is given:

show configuration;
Configuration - ila
Protection Mode: MaxAvailability
 Databases:
 ila - Primary database
 ilal - Logical standby database
 ILAP - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
 SUCCESS

How to set restore points here?

Two things to consider:
Redo apply to a physical standby is incompatible with setting a restore point, so it must be switched off for a while.
The SCN of the restore points on the standby databases must be lower than the one on the primary database, otherwise redo apply does not continue after the flashback (a quick check is sketched below).
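
A quick way to compare them is to query v$restore_point on each of the three databases with a local sysdba connection, for example:

echo "select name, scn from v\$restore_point;" | sqlplus -s / as sysdba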

Setting of restore points

First, redo apply on the standby databases has to be stopped:

edit database "ilal" set state=APPLY-OFF;
edit database "ILAP" set state=APPLY-OFF;

Second, restore points on both standby databases are set:

create restore point RZ_C GUARANTEE FLASHBACK DATABASE;

 

Third, restore point on primary database is set:

create restore point RZ_C GUARANTEE FLASHBACK DATABASE;

Last, redo apply is switched on again:

edit database "ILAP" set state=APPLY-ON;
edit database "ilal" set state=APPLY-ON;

Now dataguard is running as before.

Let’s flashback the databases:

First switch off redo apply:

edit database "ILAP" set state=APPLY-OFF;
edit database "ilal" set state=APPLY-OFF;

Second start all three databases in mount mode:

shutdown immediate;
startup mount;

Third flashback primary database:

flashback database to restore point RZ_C;
alter database open read only;

Check whether flashbacked database looks fine.

If yes, open it read write:

shutdown immediate;
startup mount;
alter database open resetlogs;

Do the same on logical standby database:

flashback database to restore point RZ_C;
alter database open read only;

Check whether flashbacked database looks fine.

If yes, open it:

shutdown immediate;
startup mount;
alter database open resetlogs;

Last do flashback on physical standby:

flashback database to restore point RZ_C;

Now start redo apply again:

edit database "ILAP" set state=APPLY-ON;
edit database "ilal" set state=APPLY-ON;

After a few minutes, dataguard will run as before:

show configuration;
Configuration - ila
Protection Mode: MaxAvailability
 Databases:
 ila - Primary database
 ilal - Logical standby database
 ILAP - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
 SUCCESS
 

The post How to flashback databases in dataguard with broker appeared first on Blog dbi services.

PostgreSQL 11: Procedures are coming

Sat, 2018-06-02 05:38

Reading about the new features of the future PostgreSQL 11 version, I see that procedures will be implemented. Why is this so important? Until now only functions are available in PostgreSQL. Of course, one can say that a function is just a procedure that returns a value, and that's true, but you cannot manage transactions in a function. To better understand, let's do a quick demonstration.
Note that I am using a development snapshot of PostgreSQL 11.

[postgres@pg_essentials_p1 bin]$ ./psql
Null display is "NULL".
psql (11devel)
Type "help" for help.

(postgres@[local]:5432) [postgres] > select version();
                                                  version
--------------------------------------------------------------------------------
---------------------------
 PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623
 (Red Hat 4.8.5-4), 64-bit
(1 row)

(postgres@[local]:5432) [postgres] >

For the demonstration I have a table emp

(postgres@[local]:5432) [test] > table emp;
 id | name
----+------
  1 | toto
  2 | Edge
(2 rows)

(postgres@[local]:5432) [test] >

And let's say I want to insert data into my table using the following function:

CREATE or replace FUNCTION fun_insert_emp(id_emp int,  emp_name varchar(20))
 RETURNS  void AS $$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
commit;
END;
$$
LANGUAGE PLPGSQL;

We can describe the function like this

(postgres@[local]:5432) [test] >  \sf  fun_insert_emp                           
CREATE OR REPLACE FUNCTION public.fun_insert_emp(id_emp integer, emp_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
commit;
END;
$function$
(postgres@[local]:5432) [test] >

To call a function we use a SELECT, as for any built-in function. So let's try to insert a new employee with the function:

(postgres@[local]:5432) [test] > select fun_insert_emp(3,'New Emp');
ERROR:  invalid transaction termination
CONTEXT:  PL/pgSQL function fun_insert_emp(integer,character varying) line 4 at COMMIT
(postgres@[local]:5432) [test] >

It seems that the word COMMIT is causing trouble. OK, let's recreate the function without the COMMIT:

(postgres@[local]:5432) [test] >  \sf  fun_insert_emp                           
CREATE OR REPLACE FUNCTION public.fun_insert_emp(id_emp integer, emp_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
END;
$function$
(postgres@[local]:5432) [test] >

And let's call the function again. We can see that the row was inserted:

(postgres@[local]:5432) [test] > select fun_insert_emp(3,'New Emp');             
fun_insert_emp
----------------

(1 row)

(postgres@[local]:5432) [test] > table emp;
 id |  name
----+---------
  1 | toto
  2 | Edge
  3 | New Emp
(3 rows)

(postgres@[local]:5432) [test] >

But the main problem we saw is that a function in PostgreSQL does not support any transaction control statements (COMMIT, ROLLBACK). This means that in a function I cannot use statements that allow me to commit or roll back based on some conditions.
But PostgreSQL 11 will support procedures. Let's do the demonstration again with a procedure.
Let's first create the procedure:

(postgres@[local]:5432) [test] > create or replace procedure proc_insert_emp (id_emp int,  emp_name varchar(20))
test-# as $$
test$# Begin
test$# insert into emp (id,name) values (id_emp,emp_name);
test$# commit;
test$# end ;
test$# $$
test-# LANGUAGE PLPGSQL;
CREATE PROCEDURE
(postgres@[local]:5432) [test] >

And let’s insert a new row in table emp using proc_insert_emp

(postgres@[local]:5432) [test] > call  proc_insert_emp(4,'Brice');
CALL

(postgres@[local]:5432) [test] > table emp;
 id |  name
----+---------
  1 | toto
  2 | Edge
  3 | New Emp
  4 | Brice
(4 rows)

(postgres@[local]:5432) [test] >

We can see that the row was inserted. But the main difference is the support of transaction control inside the procedure, and this will definitely change life for developers.
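
To illustrate what transaction control inside a procedure makes possible, here is a small sketch that is not part of the original demo: the procedure decides itself whether to keep or cancel its insert. It reuses the emp table from above and is run with psql against the test database:

psql -d test <<'SQL'
create or replace procedure proc_insert_checked(id_emp int, emp_name varchar(20))
as $$
begin
  insert into emp (id,name) values (id_emp, emp_name);
  if emp_name is null or emp_name = '' then
    rollback;   -- cancel the insert when no name was given
  else
    commit;     -- keep it otherwise
  end if;
end;
$$
language plpgsql;
call proc_insert_checked(5, '');        -- rolled back, row 5 never shows up
call proc_insert_checked(6, 'Alice');   -- committed
table emp;
SQL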

 

The post PostgreSQL 11: Procedures are coming appeared first on Blog dbi services.

Installing MAMP to play with PHP, MySQL and OpenFlights

Sat, 2018-06-02 02:30

You may wonder what I'm doing with those technologies, which are completely different from what I usually do. I'll give the details in a future blog post, but the idea is to give a short introduction to databases to students at https://powercoders.org/, a coding academy for refugees in Switzerland. They install MAMP (My Apache – MySQL – PHP) during their curriculum for website development, and then I'll use this environment to teach data modeling and SQL basics. Thus, I have to look at MAMP, PHP and MySQL for the first time… And I decided to load the OpenFlights open data to play with.
That explains the title.

So MAMP is like LAMP (Linux+Apache+PHP+MySQL) but with an M for MacOS, and also for Windows (W being an upside-down M, after all). Let's install it on my laptop. I download it from https://www.mamp.info/en/downloads/, run the .exe, everything is straightforward, and the installer notifies me that the installation will be completed after a reboot.

What? Reboot? Seriously, we are in 2018, that’s Windows 10, I refuse to reboot to install a simple Apache server!

This bundle is very easy to use: a simple window to start and stop the servers (Apache and MySQL), a setup menu to configure them (but I keep the defaults), and a link to the start page. All of that is installed under C:\MAMP (you can change it, I just kept the default settings). The first time you start the servers, a Windows Firewall prompt is raised and you have to accept it:

[Screenshot: Windows Firewall prompt when starting the MAMP servers]

With all defaults (Apache on port 80) my web server pages are on http://localhost (serving the files in C:\MAMP\htdocs) and administration page is at http://localhost/MAMP/
The MySQL administration page (phpMyAdmin) is at http://localhost/MAMP/index.php?page=phpmyadmin. It seems that, at least by default, I don’t need a password to go to the admin pages.

display_errors

I'll write some PHP and, because it's the first time in my life, I will have some errors. With the default configuration, Apache just raises an Error 500, which does not help me a lot for debugging. This configuration is OK for production because displaying errors may give clues to hackers. But I'm here to play and I want to see the error messages and line numbers.

I have to set display_errors=on for that. The current setting is displayed at http://localhost/MAMP/index.php?language=English&page=phpinfo#module_core, I can change it in C:\MAMP\conf\php7.2.1\php.ini, and after a restart of the Apache server I can see full error messages:

Warning: mysqli_real_connect(): (28000/1045): Access denied for user 'root'@'localhost' (using password: YES) in C:\MAMP\htdocs\index.php on line 123

Little fix

But now that I display the errors, I get this annoying message each time I try to do something in phpMyAdmin (which runs as PHP in the same Apache server):

MAMP "count(): Parameter must be an array or an object that implements Countable"


So this product, which is free but also has a 'PRO' version probably running the same code, is delivered with bad code, raising errors that were simply ignored. Don't tell me that it is just a warning: you will see below that a closing parenthesis is in the wrong place, and raising only a warning for that is quite bad.
My common sense tells me that we should set display_errors=on and test a few screens before releasing software. But that step has probably been skipped. Fortunately, the message is clear: line 615 of C:\MAMP\bin\phpMyAdmin\libraries\sql.lib.php.

The message is about count() not getting the correct parameter type. Line 615 shows count($analyzed_sql_results['select_expr'] == 1 ), which is probably not correct because it counts a boolean expression. I've changed it to (count($analyzed_sql_results['select_expr']) == 1 ), as I suppose they want to count and then compare to one.

Well, I’ve never written one line of PHP and I already hate it for its error handling weakness.

Load some data

I want to initialize the database with some data and I’ll use the OpenFlights database. I’ve downloaded and unzipped https://github.com/jpatokal/openflights/blob/master/sql/master.zip
I go to the unzipped directory and run MySQL:

cd /d D:\Downloads\openflights-master

Another little thing to fix here: the sql\create.sql and sql\load-data.sql files contain some lines starting with "\! echo", but this \! command (to run a system command) exists on Linux and not on the Windows port of MySQL. We have to remove them before running the SQL scripts. I'm used to Oracle, where I can port my code and scripts from one platform to the other, and I was a bit surprised by this.

Ready to connect:

C:\MAMP\bin\mysql\bin\mysql test --port=3306 --host=localhost --user root --password
Enter password:

The MySQL connection parameters are displayed on http://localhost/MAMP/ including the password (root)


source sql\create.sql
 
mysql> source sql\create.sql
Query OK, 0 rows affected (0.00 sec)
 
Connection id: 314
Current database: flightdb2
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.00 sec)
...

This has created the flightdb2 database, the openflights user, and 15 tables.

Now, if you are still in the unzipped directory, you can load the data with the source sql\load-data.sql script, which loads from the data\*.dat files:

mysql> source sql\load-data.sql
Query OK, 6162 rows affected, 4 warnings (0.04 sec)
Records: 6162 Deleted: 0 Skipped: 0 Warnings: 4
 
Query OK, 7184 rows affected, 7184 warnings (0.12 sec)
Records: 7184 Deleted: 0 Skipped: 0 Warnings: 7184
 
Query OK, 67663 rows affected (0.53 sec)
Records: 67663 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 260 rows affected (0.01 sec)
Records: 260 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 12 rows affected (0.01 sec)
Records: 12 Deleted: 0 Skipped: 0 Warnings: 0
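
Before moving on to PHP, a quick check from the same command prompt confirms that the data is there (the openflights user comes from create.sql and, as in the PHP snippet below, its password is empty in this default setup):

C:\MAMP\bin\mysql\bin\mysql flightdb2 --host=localhost --port=3306 --user=openflights -e "select count(*) from airports;"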

Query from PHP

So, for my first lines of PHP I’ve added the following to C:\MAMP\htdocs\index.php:

<?php
$user = 'openflights'; $password = '';
$db = 'flightdb2'; $host = 'localhost'; $port = 3306;

$conn = mysqli_init();
if (!$conn) {
    die("mysqli_init failed");
}
if (!$success = mysqli_real_connect( $conn, $host, $user, $password, $db, $port)) {
    die("😠 Connection Error: " . mysqli_connect_error());
}
echo "😎 Connected to database <b>$db</b> as user <b>$user</b>.";
?>

<p>
Here are the Airports:
<table border=1>
<tr><th>IATA</th><th>Name</th></tr>

<?php
$result = $conn->query("select iata,name from airports where country='Greenland' order by 2");
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["iata"] . "</td><td>" . $row["name"] . "</td></tr>";
    }
} else {
    echo "0 results";
}
mysqli_close($conn);
?>
</table>

[Screenshot: resulting page listing the Greenland airports]

Here, I call mysqli_init(), set the credentials and call mysqli_real_connect() to get the connection handle. Then I run my query and display the result as an HTML table. Nothing difficult here. The main challenge is probably to keep the code maintainable.

In my opinion, and despite the small issues encountered, MAMP is a simple way to set up a development environment on Windows. Everything is there to introduce SQL and databases to developers, and to show how to call them from a programming language.

 

The post Installing MAMP to play with PHP, MySQL and OpenFlights appeared first on Blog dbi services.

SQLcl connect target depends on previous connection

Fri, 2018-06-01 05:36

I thought it was a bug for two reasons: first, because I don't like that my connect destination depends on the context, and second, because it is a different behavior than in SQL*Plus. But finally, it is the correct behavior: 100% compatibility with SQL*Plus is expected only with SET CLASSIC=ON. As I was surprised, and I think it can be dangerous, here is a blog post about it. Basically, be very careful if you are connected through a service name and you then want to connect locally.

Basically, if you attended my demo about 'From Transportable Tablespaces to Pluggable Databases', where I switch between different instances, you may have seen that I had to run 'connect / as sysdba' two times because the first one failed with invalid username/password:

SQL> connect scott/tiger@//localhost/PDB1
Connected.
...
SQL> connect / as sysdba
USER =
URL = jdbc:oracle:oci8:@//localhost/PDB1
Error Message = ORA-01017: invalid username/password; logon denied
 
Warning: You are no longer connected to ORACLE.
 
SQL> connect / as sysdba
Connected.

This is not a big deal, but it means that SQLcl tries to connect to //localhost/PDB1 when I wanted to connect locally to the instance defined by my ORACLE_SID environment variable. Here, expecting a bequeath connection, I didn't provide a password, so I cannot connect to the PDB. But imagine that I use a password, and the password is the same in the two databases… I would have been connected to the wrong database. Just imagine this:

SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- checking something on PROD
SQL> connect sys/password@PROD as sysdba
Connected.
...
-- back to TEST (or at least I think so)
SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- do terrible things
SQL> drop table VERY_IMPORTANT_TABLE;
Table VERY_IMPORTANT_TABLE dropped.
-- now look where I am:
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@PROD AS SYSDBA
CONNECTION_IDENTIFIER:
PROD

Actually, what happens is that when SQLcl is already connected with a connection string (i.e. not locally using bequeath), the next connect command will use the same connection string. This means that:

connect user/password

is actually equivalent to

connect user/password@&_CONNECT_IDENTIFIER

SQL*Plus

This behavior has been introduced in SQLcl but this is not how SQL*Plus works:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect / as sysdba
Connected.
SQL> define _CONNECT_IDENTIFIER
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)

Disconnect

The first solution to avoid this in SQLcl is to always disconnect before you want to connect to a different service:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> disc
 
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> connect sys/oracle as sysdba
Connected.
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@ AS SYSDBA
CONNECTION_IDENTIFIER:
CDB$ROOT

This is why the second attempt was OK in my case: the first one failed with an invalid password, which left me disconnected.

TWO_TASK

The second solution is to set an impossible TWO_TASK (or LOCAL in Windows) so that local connections are impossible:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect sys/oracle as sysdba
USER = sys
URL = jdbc:oracle:oci8:@NoWhere
Error Message = ORA-12154: TNS:could not resolve the connect identifier specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere
Error Message = IO Error: Unknown host specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere:1521/NoWhere
Error Message = IO Error: Unknown host specified
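
For reference, TWO_TASK is just an environment variable set before starting SQLcl; 'NoWhere' is only a dummy, non-resolvable name (a sketch):

export TWO_TASK=NoWhere
sql sys/oracle@//localhost/PDB1 as sysdba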

CLASSIC=ON

The third solution is to run SQLcl in SQL*Plus 100% compatible mode:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> set classic on
SQL> show classic
SQL*Plus mode: ON
SQL> connect / as sysdba
Connected.

Here we have the same behavior as SQL*Plus: no use of current connection string.

SET CLASSIC ON is usually about the output (error messages, autotrace statistics, and a few enhancements made to SQLcl). And the online help still says that it is about output:

SQL> help set classic
SET CLASSIC
SET CLASSIC [ ON | OFF ] Set classic SQL*Plus Settings on
This will allow scripts which expect traditional output to be honored.

However, it seems that this CLASSIC mode is also very important for connection.

Test and show _CONNECT_IDENTIFIER

If you show the connect identifier in the prompt, this may prevent errors:

SQL> set sqlprompt "_connect_identifier> "
//localhost/CDB2> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
//localhost/CDB1> connect sys/oracle as sysdba
Connected.

Always check which database

By the way, when I prepare a script that can do some damage when not run at the correct place, I usually add a test on the DBID at the top of it:

CDB1> whenever sqlerror exit failure;
CDB1> select 0/decode(dbid,'944121612',1,0) from v$database;
 
Error starting at line : 1 in command -
select 0/decode(dbid,'944121612',1,0) from v$database
Error report -
ORA-01476: divisor is equal to zero

Different passwords

Of course, you should have different passwords on production and test databases. However, I prefer to have passwords in a wallet (secure external password store), so that you always have the correct credentials, as they are recorded per service name.
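
A rough sketch of such a setup with mkstore (wallet location, alias and credentials are illustrative; sqlnet.ora must also point to the wallet with SQLNET.WALLET_OVERRIDE=TRUE):

mkstore -wrl /home/oracle/wallet -create
mkstore -wrl /home/oracle/wallet -createCredential PROD system manager
# afterwards, a passwordless connect uses the stored credential for that alias:
sql /@PROD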

 

The post SQLcl connect target depends on previous connection appeared first on Blog dbi services.

Patching ODA lite to 12.2.1.3.0

Thu, 2018-05-31 09:57

Here is the latest patch for your ODA, and it seems that the Oracle documentation for this patch is not exactly the procedure you'll have to follow to successfully patch your appliance. I recently updated X6-2M and X6-2L appliances to this latest release and here is how to do it. In this example I was patching from 12.1.2.12.0; no intermediate patch was needed.

1) Download the patch

It seems odd, but sometimes finding the corresponding patch is not so easy! With the patch number it's more convenient: for 12.2.1.3.0 with the dcs stack the number is 27648057. This patch will update all the components: dcs (odacli), operating system, BIOS/firmwares, ILOM, GI and dbhomes.

Copy the patch in a temporary folder on the server, for example /opt/patch. You’ll need to be root to apply the patch.

2) Check the actual versions and free up space on disk

I recommend checking the currently installed versions first. It's easy:

odacli describe-component

System Version
---------------
12.1.2.12.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.1.2.12.0           up-to-date
GI                                        12.1.0.2.170814       up-to-date
DB {
[ OraDB12102_home1 ]                      12.1.0.2.170814       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.170814       up-to-date
}
ILOM                                      3.2.7.26.a.r112632    up-to-date
BIOS                                      39050100              up-to-date
OS                                        6.8                   up-to-date
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5                       KPYABR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

Don't worry about the Available Version column: it's only meaningful after you register a new patch. Your ODA doesn't check for the latest patch online.

Regarding free space, check that the /, /u01 and /opt filesystems have enough free GB for the process: at least 10GB, I think.
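
A quick way to verify it:

df -h / /u01 /opt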

3) Prepare the patch files

It's slightly different from previous versions: only the first zipfile has to be uncompressed and registered. The two other files can be registered directly without unzipping them.

cd /opt/patch
unzip p27648057_122130_Linux-x86-64_1of3.zip

odacli update-repository -f /opt/patch/oda-sm-12.2.1.3.0-180504-server1of3.zip
odacli update-repository -f /opt/patch/p27648057_122130_Linux-x86-64_2of3.zip
odacli update-repository -f /opt/patch/p27648057_122130_Linux-x86-64_3of3.zip

4) Update the dcs-agent

First you’ll have to update the dcs-agent:

odacli update-dcsagent -v 12.2.1.3.0
{
"jobId" : "150b3486-cfb2-4b32-b751-0ed89ce3d7be",
"status" : "Created",
"message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
"reports" : [ ],
"createTimestamp" : "May 22, 2018 10:31:50 AM CEST",
"resourceList" : [ ],
"description" : "DcsAgent patching",
"updatedTime" : "May 22, 2018 10:31:50 AM CEST"
}

As for every kind of operation you do with odacli, you receive a jobId you can monitor:

odacli describe-job -i "150b3486-cfb2-4b32-b751-0ed89ce3d7be"

Job details
----------------------------------------------------------------
                     ID:  150b3486-cfb2-4b32-b751-0ed89ce3d7be
            Description:  DcsAgent patching
                 Status:  Success
                Created:  May 22, 2018 10:31:50 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
dcsagent rpm verification                May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:50 AM CEST       Success
Patch location validation                May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:50 AM CEST       Success
dcs-agent upgrade                        May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:51 AM CEST       Success

This update takes only a minute.

Check the versions again and you will see a new component reported as 18c, which is quite weird:

odacli describe-component
System Version
---------------
12.2.1.3.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.1.2.12.0           12.2.1.3.0
GI                                        12.1.0.2.170814       12.2.0.1.180116
DB {
[ OraDB12102_home1 ]                      12.1.0.2.170814       12.1.0.2.180116
[ OraDB11204_home1 ]                      11.2.0.4.170814       11.2.0.4.180116
}
DCSAGENT                                  18.1.3.0.0            up-to-date
ILOM                                      3.2.7.26.a.r112632    up-to-date
BIOS                                      39050100              39090000
OS                                        6.8                   6.9
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYABR3Q              kpyagr3q
}
FIRMWAREDISK                              0R3Q                  up-to-date

5) Update the server

Updating the server will actually update not only the operating system but also the ILOM, the BIOS, the firmware of the internal disks and the GI. For the OS, Oracle Linux will go from 6.8 to 6.9, and the update process assumes that no additional packages have been installed. If you installed additional packages, please remove them, as they can prevent the patch from applying correctly. Even devel packages should be removed: the upgrade of the normal packages they are linked to will not work because of the dependencies. You can easily remove the installed packages with rpm -e, for example:

rpm -e openssl-devel-1.0.1e-48.el6_8.4.x86_64
rpm -e krb5-devel-1.10.3-57.el6.x86_64
rpm -e zlib-devel-1.2.3-29.el6.x86_64
rpm -e keyutils-libs-devel-1.4-5.0.1.el6.x86_64
rpm -e libcom_err-devel-1.42.8-1.0.2.el6.x86_64
rpm -e libselinux-devel-2.0.94-7.el6.x86_64
rpm -e libsepol-devel-2.0.41-4.el6.x86_64

Now you can safely run the patching:

odacli update-server -v 12.2.1.3.0

This update is the longest one (between 30 minutes and 1 hour), and during it you will see your server rebooting. As usual, check the status of the job with describe-job:

odacli describe-job -i "27d2195f-f16b-44d8-84e0-6af6e48ccad7"

At the end of the process, describe-job will look like that:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
dcs-controller upgrade                   May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
Patch location validation                May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
dcs-cli upgrade                          May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
Creating repositories using yum          May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:10 AM CEST       Success
Applying HMP Patches                     May 23, 2018 11:07:10 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
Patch location validation                May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
oda-hw-mgmt upgrade                      May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
Creating repositories using yum          May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:09 AM CEST       Success
Applying OS Patches                      May 23, 2018 11:09:09 AM CEST       May 23, 2018 11:11:04 AM CEST       Success
OSS Patching                             May 23, 2018 11:11:04 AM CEST       May 23, 2018 11:11:04 AM CEST       Success
Applying Firmware Disk Patches           May 23, 2018 11:11:04 AM CEST       May 23, 2018 11:11:08 AM CEST       Success
Applying Firmware Expander Patches       May 23, 2018 11:11:08 AM CEST       May 23, 2018 11:11:13 AM CEST       Success
Applying Firmware Controller Patches     May 23, 2018 11:11:13 AM CEST       May 23, 2018 11:11:16 AM CEST       Success
Checking Ilom patch Version              May 23, 2018 11:11:17 AM CEST       May 23, 2018 11:11:19 AM CEST       Success
Patch location validation                May 23, 2018 11:11:19 AM CEST       May 23, 2018 11:11:20 AM CEST       Success
Apply Ilom patch                         May 23, 2018 11:11:21 AM CEST       May 23, 2018 11:11:22 AM CEST       Success
Copying Flash Bios to Temp location      May 23, 2018 11:11:22 AM CEST       May 23, 2018 11:11:22 AM CEST       Success
Starting the clusterware                 May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:11:23 AM CEST       Success
Creating GI home directories             May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:11:23 AM CEST       Success
Cloning Gi home                          May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:13:53 AM CEST       Success
Configuring GI                           May 23, 2018 11:13:53 AM CEST       May 23, 2018 11:14:06 AM CEST       Success
Running GI upgrade root scripts          May 23, 2018 11:14:06 AM CEST       May 23, 2018 11:29:04 AM CEST       Success
Resetting DG compatibility               May 23, 2018 11:29:04 AM CEST       May 23, 2018 11:29:09 AM CEST       Success
Running GI config assistants             May 23, 2018 11:29:09 AM CEST       May 23, 2018 11:30:10 AM CEST       Success
restart oakd                             May 23, 2018 11:30:13 AM CEST       May 23, 2018 11:30:23 AM CEST       Success
Updating GiHome version                  May 23, 2018 11:30:23 AM CEST       May 23, 2018 11:30:25 AM CEST       Success
preRebootNode Actions                    May 23, 2018 11:30:39 AM CEST       May 23, 2018 11:31:25 AM CEST       Success
Reboot Ilom                              May 23, 2018 11:31:25 AM CEST       May 23, 2018 11:31:25 AM CEST       Success

Don’t forget that if there is a problem during the patching process (you forgot to remove an additional rpm for example), you can relaunch the patching and it will skip the already patched components. But you will lose a lot of time! Please control the component versions after the reboot with describe-component.

6) Patch the dbhomes

You now need to patch the dbhomes separately. First of all list them:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
86b6a068-55a8-4171-9f94-48b86f135065     OraDB12102_home1     12.1.0.2.170814 (26680878, 26609798)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
c6e7d556-5785-41d8-a120-ed3ef756848a     OraDB11204_home1     11.2.0.4.170814 (26609929, 26609445)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

Then patch them one by one, starting here with the 11.2.0.4 home (the second one in the list above):

odacli update-dbhome -v 12.2.1.3 -i c6e7d556-5785-41d8-a120-ed3ef756848a

It will also apply the datapatch on all the primary databases connected to this home. If you don’t have any database linked to the dbhome, patching is fast:

odacli describe-job -i "9defa52d-2997-42ea-9bd3-aafdbf0a4dc5"
Job details
----------------------------------------------------------------
                     ID:  9defa52d-2997-42ea-9bd3-aafdbf0a4dc5
            Description:  DB Home Patching: Home Id is c6e7d556-5785-41d8-a120-ed3ef756848a
                 Status:  Success
                Created:  May 23, 2018 11:43:55 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           May 23, 2018 11:43:56 AM CEST       May 23, 2018 11:43:58 AM CEST       Success
Patch location validation                May 23, 2018 11:43:58 AM CEST       May 23, 2018 11:43:59 AM CEST       Success
Opatch updation                          May 23, 2018 11:44:19 AM CEST       May 23, 2018 11:44:20 AM CEST       Success
Patch conflict check                     May 23, 2018 11:44:20 AM CEST       May 23, 2018 11:44:27 AM CEST       Success
db upgrade                               May 23, 2018 11:44:27 AM CEST       May 23, 2018 11:46:01 AM CEST       Success

About 2 minutes for an empty dbhome.

And for a dbhome linked to databases:

odacli update-dbhome -v 12.2.1.3 -i 86b6a068-55a8-4171-9f94-48b86f135065
{
"jobId" : "8083dc43-61fe-49da-8081-43b4e5257e95",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "May 23, 2018 12:46:06 PM CEST",
"resourceList" : [ ],
"description" : "DB Home Patching: Home Id is 86b6a068-55a8-4171-9f94-48b86f135065",
"updatedTime" : "May 23, 2018 12:46:06 PM CEST"
}

odacli describe-job -i "e871f741-7dd6-49c6-9b4a-af0d68e647e7"

Job details
----------------------------------------------------------------
ID:  e871f741-7dd6-49c6-9b4a-af0d68e647e7
Description:  DB Home Patching: Home Id is 86b6a068-55a8-4171-9f94-48b86f135065
Status:  Success
Created:  May 23, 2018 1:37:00 PM CEST
Message:  WARNING::Failed to run datapatch on db DB01TST##WARNING::Failed to run datapatch on db DB03TST

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           May 23, 2018 1:37:21 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Patch location validation                May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Opatch updation                          May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Patch conflict check                     May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
db upgrade                               May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:57 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:37:57 PM CEST        May 23, 2018 1:38:26 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:38:26 PM CEST        May 23, 2018 1:38:51 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:38:51 PM CEST        May 23, 2018 1:39:49 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:39:49 PM CEST        May 23, 2018 1:40:28 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:40:28 PM CEST        May 23, 2018 1:40:53 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:40:53 PM CEST        May 23, 2018 1:41:19 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:41:19 PM CEST        May 23, 2018 1:41:44 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:41:44 PM CEST        May 23, 2018 1:42:14 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:42:14 PM CEST        May 23, 2018 1:42:48 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:42:48 PM CEST        May 23, 2018 1:43:22 PM CEST        Success

It’s longer, but still less than 10 minutes in this example. You can see the number of databases here (one SqlPatch upgrade line per database). The job is reported as successful, but actually not completely successful because 2 warnings are raised. And be careful because the message field in the job details is limited: check /opt/oracle/dcs/log/dcs-agent.log for the extended warning messages.
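
For example, a quick way to pull the warning lines out of the agent log (a simple grep; the exact message wording may differ on your system):

grep -i "warn" /opt/oracle/dcs/log/dcs-agent.log | tail -50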

Always check the DB Version after updating all the dbhomes; this is the new version of the binaries:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
86b6a068-55a8-4171-9f94-48b86f135065     OraDB12102_home1     12.1.0.2.180116 (26925218, 26925263)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
c6e7d556-5785-41d8-a120-ed3ef756848a     OraDB11204_home1     11.2.0.4.180116 (26609929, 26925576)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

If the update job is fully successful, jump to step 8.

7) Update the databases where datapatch failed

For all the databases where the datapatch could not be applied, you’ll have to do it manually. I discovered that if your database is not using the AMERICAN_AMERICA language and territory (FRENCH_FRANCE in my case), odacli will not be able to apply the datapatch on it. Let’s check the datapatch status on the database, set the NLS_LANG environment variable to AMERICAN_AMERICA and apply the patch:

su - oracle
. oraenv <<< DB01TST
cd $ORACLE_HOME/OPatch
sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 26609798
Action : APPLY
Action Time : 24-SEP-2017 23:50:14
Description : DATABASE BUNDLE PATCH 12.1.0.2.170814
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_G100316_CD
BROOT_2017Sep24_23_48_22.log
Status : SUCCESS

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:40:27
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_40_13.log
Status : FAILED

PL/SQL procedure successfully completed.

exit
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
./datapatch -verbose
sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 26609798
Action : APPLY
Action Time : 24-SEP-2017 23:50:14
Description : DATABASE BUNDLE PATCH 12.1.0.2.170814
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_G100316_CD
BROOT_2017Sep24_23_48_22.log
Status : SUCCESS

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:40:27
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_40_13.log
Status : FAILED

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:53:43
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_53_01.log
Status : SUCCESS

exit

Repeat this operation on all the faulty databases.
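
If several databases are concerned, a small loop saves some typing. This is only a sketch run as the oracle user, assuming the faulty SIDs (here the two reported in the job warning) are listed by hand and all live in the patched home:

for sid in DB01TST DB03TST; do
  (
    . oraenv <<< $sid
    export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    cd $ORACLE_HOME/OPatch
    ./datapatch -verbose
  )
done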

8) Patching the storage

Patching the storage is actually patching the NVMe disks. This is not always mandatory as your disks can already have the latest version. So, if needed, apply the patch:

odacli update-storage -v 12.2.1.3.0
{
"jobId" : "4a221df4-8c85-4f34-aa7f-e014cdb751f7",
"status" : "Created",
"message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
"reports" : [ ],
"createTimestamp" : "May 29, 2018 13:27:41 PM CEST",
"resourceList" : [ ],
"description" : "Storage Firmware Patching",
"updatedTime" : "May 29, 2018 13:27:41 PM CEST"
}

odacli describe-job -i "4a221df4-8c85-4f34-aa7f-e014cdb751f7"

Job details
----------------------------------------------------------------
ID:  4a221df4-8c85-4f34-aa7f-e014cdb751f7
Description:  Storage Firmware Patching
Status:  Success
Created:  May 29, 2018 1:27:41 PM CEST
Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           May 29, 2018 1:27:41 PM CEST        May 29, 2018 1:27:45 PM CEST        Success
Applying Firmware Expander Patches       May 29, 2018 1:27:45 PM CEST        May 29, 2018 1:27:52 PM CEST        Success
Applying Firmware Controller Patches     May 29, 2018 1:27:52 PM CEST        May 29, 2018 1:28:51 PM CEST        Success
preRebootNode Actions                    May 29, 2018 1:28:51 PM CEST        May 29, 2018 1:28:51 PM CEST        Success
Reboot Ilom                              May 29, 2018 1:28:51 PM CEST        May 29, 2018 1:28:51 PM CEST        Success

Beware of the reboot of the server a few minutes after the patching is finished! And control the component versions after the reboot.

9) Optional: deploy the latest db clone files

If you will never deploy a new dbhome, this step is not necessary. If you will, or simply if you don’t know yet, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level, for example:

cd /opt/patch
unzip p23494992_122130_Linux-x86-64.zip
Archive:  p23494992_122130_Linux-x86-64.zip
 extracting: odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip
  inflating: README.txt

odacli update-image --image-files odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip
Unpacking will take some time,  Please wait...
Unzipping odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip

10) Control the version of the components

Now the patching is done. It took about 2 hours, not so bad for all these updates. Finally, do a last check of the components to see if everything is fine:

odacli describe-component
System Version
---------------
12.2.1.3.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.3.0            up-to-date
GI                                        12.2.0.1.180116       up-to-date
DB {
[ OraDB12102_home1 ]                      12.1.0.2.180116       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180116       up-to-date
}
DCSAGENT                                  18.1.3.0.0            up-to-date
ILOM                                      4.0.0.24.r121140      up-to-date
BIOS                                      39090000              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYAGR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

Everything should be up-to-date now, until the next patch :-)

 

Cet article Patching ODA lite to 12.2.1.3.0 est apparu en premier sur Blog dbi services.

Introducing SQL Server on Kubernetes

Thu, 2018-05-31 06:50

After spending some time with Docker Swarm, let's introduce SQL Server on Kubernetes (aka K8s). Why another container orchestrator? Well, because Microsoft puts a strong focus on Kubernetes in their documentation and their events, and because K8s is probably one of the most popular orchestration tools in the IT industry. By the way, I like to refer to the Portworx Annual Container Adoption Survey to get a picture of the container trend over the years, and we may notice there is no clear winner yet among orchestration tools between Swarm and K8s. In 2017, another interesting point was the persistent storage challenge, which topped the list of adoption challenges. I'm looking forward to seeing the next report on this point because, as you probably guessed, database containers rely mainly on it.

Anyway, as an IT services company, it seems justified to add K8s to our to-do list of container orchestrators :)

blog 136 - 000 - K8s - banner

First of all, let's say this blog post doesn't aim to compare Docker Swarm and K8s. Each platform has pros and cons and you can read a lot about that on the internet. I will rather share some thoughts about deploying our dbi services docker image on this platform. Indeed, since last year we have mainly worked on our SQL Server docker image based on Docker and Docker Swarm architectures, and it may be interesting to see if we can go the same way with K8s.

But before deploying our custom image, we need to install a K8s infrastructure. From an installation perspective, a K8s cluster is likely harder to set up than Docker Swarm. This time, rather than using my own lab environment, I will rely on both the Azure container and Azure container registry services to provision an operational K8s service. I just want to focus here on deploying my image and getting some feedback about interacting with K8s. The Microsoft procedure is well documented, so there is no real added value in duplicating the installation steps. Because we operate on Azure, I will use a lot of az cli and kubectl commands to deploy and manage my K8s service. Here is some important information concerning my infrastructure:

I first installed and configured a private registry through the Azure container registry service in order to push my custom docker image for SQL Server 2017 on Linux. Obviously, this step may be optional depending on your context. My custom image is named dbi_linux_sql2017.

[dab@DBI-LT-DAB:#]> az acr repository list --name dbik8registry --output table
Result
------------------
dbi_linux_sql2017
mssql-server-linux

 

Then I installed my K8s service with 2 nodes. This is likely not a recommended scenario for production, but it fits my needs for the moment. I will probably scale out my architecture for future tests.
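
For the record, here is roughly how such an environment can be provisioned with az cli. This is only a sketch: the resource group, names and region are the ones used in this lab, and the sizing options are kept to a minimum:

az group create --name k8s-rg --location westeurope
az acr create --resource-group k8s-rg --name dbik8registry --sku Basic
az aks create --resource-group k8s-rg --name dbik8scluster --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group k8s-rg --name dbik8scluster

Once the credentials are merged into the local kubeconfig, the cluster can be checked with kubectl: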

[dab@DBI-LT-DAB:#]> kubectl cluster-info
Kubernetes master is running at https://dbik8sclus-k8s-rg-913528-...
Heapster is running at https://dbik8sclus-k8s-rg-913528-...
KubeDNS is running at https://dbik8sclus-k8s-rg-913528-...
kubernetes-dashboard is running at https://dbik8sclus-k8s-rg-913528-...
…
[dab@DBI-LT-DAB:#]> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://dbik8sclus-k8s-rg-913528-3eb7146d.hcp.westeurope.azmk8s.io:443
  name: dbik8scluster
contexts:
- context:
    cluster: dbik8scluster
    user: clusterUser_k8s-rg_dbik8scluster
  name: dbik8scluster
current-context: dbik8scluster
…

[dab@DBI-LT-DAB:#]> kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-78763348-0   Ready     agent     6h        v1.9.6
aks-nodepool1-78763348-1   Ready     agent     6h        v1.9.6

 

From an Azure perspective, my K8s cluster is composed of several resources in a dedicated resource group with virtual machines, disks, network interfaces, availability sets and a K8s load balancer reachable from a public IP address.

blog 136 - 00 - K8s - Azure config

Finally, I granted my K8s cluster sufficient permissions to access my private Docker registry (Reader role).

[dab@DBI-LT-DAB:#]>$CLIENT_ID=(az aks show --resource-group k8s-rg --name dbik8scluster --query "servicePrincipalProfile.clientId" --output tsv)
[dab@DBI-LT-DAB:#]>$ACR_ID=$(az acr show --name dbik8registry --resource-group k8s-rg --query "id" --output tsv)

[dab@DBI-LT-DAB:#]>az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID

[dab@DBI-LT-DAB:#]> kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-s94vc   kubernetes.io/service-account-token   3         6h

 

Similarly to Docker Swarm, we may rely on secrets to protect the SQL Server sa password. So, let's take advantage of them!

[dab@DBI-LT-DAB:#]> kubectl create secret generic mssql --from-literal=SA_PASSWORD="xxxxx"
[dab@DBI-LT-DAB:#]> kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-s94vc   kubernetes.io/service-account-token   3         6h
mssql                 Opaque                                1         6h

 

At this stage, before deploying my SQL Server application, let's introduce some important K8s concepts we should be familiar with as database administrators.

  • Pod

Referring to the K8s documentation, a pod is a logical concept that represents one or more application containers with some shared resources such as storage, networking (including a unique cluster IP address) and metadata about each container image such as the image version, exposed ports, etc.

Containers in the same pod are always co-located and co-scheduled, and run in a shared context on the same node. Compared to K8s, Docker Swarm doesn't offer such capabilities because, as far as I know, the tasks of a service are by default spread across the cluster and there is no really easy way to achieve the same concept as a K8s pod.

To simplify, a K8s pod is a group of containers that are deployed together on the same host. In my SQL Server deployment with only one container, "pod" may be replaced by "container" here, but in a real production scenario SQL Server will likely be only one part of a K8s pod.

blog 136 - 0 - K8s - POD

 

We may confirm what was said previously by using the K8s commands related to pods. Here is the status of the pod related to my SQL Server deployment.

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
mssql-deployment-5845f974c6-xx9jv   1/1       Running   0          3h

[dab@DBI-LT-DAB:#]> kubectl describe pod mssql-deployment-5845f974c6-xx9jv
Name:           mssql-deployment-5845f974c6-xx9jv
Namespace:      default
Node:           aks-nodepool1-78763348-0/10.240.0.4
Start Time:     Wed, 30 May 2018 19:16:46 +0200
Labels:         app=mssql
                pod-template-hash=1401953072
Annotations:    <none>
Status:         Running
IP:             10.244.1.13
Controlled By:  ReplicaSet/mssql-deployment-5845f974c6
Containers:
  mssql:
    Container ID:   docker://b71ba9ac3c9fa324d8ff9ffa8ec24015a676a940f4d2b64cbb85b9de8ce1e227
    Image:          dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
    Image ID:       docker-pullable://dbik8registry.azurecr.io/dbi_linux_sql2017@sha256:5b9035c51ae2fd4c665f957da2ab89432b255db0d60d5cf63d3405b22a36ebc1
    Port:           1433/TCP
    State:          Running
      Started:      Wed, 30 May 2018 19:17:22 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:        Y
      MSSQL_SA_PASSWORD:  xxxxxx
      DMK:                Y
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s94vc (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
….

 

  • Replica set

A replica set is a structure that enables you to easily create multiple pods and then make sure that this number of pods always exists. If a pod crashes, the replica set replaces it. It also provides scaling capabilities, as we also get from Docker Swarm.
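
Scaling the number of pods is then a one-liner, for example as sketched below. Note that with a single ReadWriteOnce Azure disk behind the SQL Server files, running more than one replica of this particular deployment would not make sense:

kubectl scale deployment mssql-deployment --replicas=2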

  • Service

From the K8s documentation, a service is an abstraction which defines a logical set of pods and a policy by which to access them, sometimes called a micro-service. The set of pods targeted by a service is (usually) determined by a label selector. While there are some differences under the hood, we find the same concepts in Docker Swarm from a deployment perspective.

  •  Virtual IP and service proxies

Referring again to the K8s documentation, every node in a Kubernetes cluster runs a kube-proxy that is responsible for implementing a form of virtual IP for services. This includes the ingress network, which is also part of the Docker Swarm architecture with its overlay networks and routing mesh capabilities.

In my case, as described previously, I used an external load balancer with an EXTERNAL-IP configured to access my SQL Server container from the internet (xx.xxx.xxx.xx is my masked public IP, as you already guessed).

[dab@DBI-LT-DAB:#]> kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes         ClusterIP      10.0.0.1       <none>          443/TCP          7h
mssql-deployment   LoadBalancer   10.0.200.170   xx.xxx.xxx.xx   1433:30980/TCP   4h

 

From an Azure perspective, the above output corresponds to my Kubernetes load balancer and public IP address resources.

blog 136 - 1 - K8s - Load Balancer

blog 136 - 12 - K8s - Public IP

Once again, my intention was not to compare Docker Swarm and K8s at all, but just to highlight the fact that if you're already comfortable with Docker Swarm, the move to K8s is not as brutal as we may suppose from a high-level point of view.

Ok, let's now start the deployment phase. As said previously, my private container registry already contains my custom SQL Server image. I just had to tag my image on my local machine and push it to the registry, as I would do with any other remote Docker registry.
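
Something along these lines (a sketch; az acr login takes care of the docker authentication against the private registry):

az acr login --name dbik8registry
docker tag dbi/dbi_linux_sql2017:CU4 dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
docker push dbik8registry.azurecr.io/dbi_linux_sql2017:CU4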

[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
dbik8registry.azurecr.io/dbi_linux_sql2017    CU4                 3c6bafb33a5c        17 hours ago        1.42GB
dbi/dbi_linux_sql2017                         CU4                 3c6bafb33a5c        17 hours ago        1.42GB

[dab@DBI-LT-DAB:#]> az acr repository list --name dbik8registry --output table
Result
------------------
dbi_linux_sql2017
mssql-server-linux

[dab@DBI-LT-DAB:#]> az acr repository show-tags --name dbik8registry --repository mssql-server-linux --output table
Result
--------
2017-CU4

 

In addition, I used persistent storage based on an Azure managed disk in order to guarantee persistence for my SQL Server database files.
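
For completeness, the storage class and the persistent volume claim may be created with a manifest similar to the sketch below. The names match the describe output that follows, but the disk parameters and the requested size are assumptions:

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: azure-disk
EOF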

[dab@DBI-LT-DAB:#]> kubectl describe pvc mssql-data
Name:          mssql-data
Namespace:     default
StorageClass:  azure-disk
Status:        Bound
Volume:        pvc-32a42393-6402-11e8-885d-f2170a386bd7
…

 

Concerning the image itself, we use some custom parameters both to create a dedicated user for the applications that will run on top of the SQL Server instance and to enable the installation of the DMK maintenance module for SQL Server at container startup. We have other customizations, but for this blog post this will be sufficient to check what we want to test.

Here is my deployment file. Compared to a Docker Swarm deployment file, I would say the manifest is more complex with K8s (that's at least my feeling).

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        - name: DMK
          value: "Y"
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

 

Let’s deploy and let’s spin up our SQL Server application

[dab@DBI-LT-DAB:#]> kubectl apply -f T:\dbi_dbaas_azure\sqlk8sazuredeployment.yaml
deployment "mssql-deployment" created
service "mssql-deployment" created

 

The pod and the services are created. Let's take a look at some information about them. The deployment and the pod are ok. The last command shows the internal IP used to connect to the SQL Server pod, as well as an external / public IP address that corresponds to the ingress load balancer used to connect from outside the Azure internal network. We also get a picture of the exposed ports.

[dab@DBI-LT-DAB:#]> kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mssql-deployment   1         1         1            1           7m

[dab@DBI-LT-DAB:#]> kubectl get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
mssql-deployment-8c67fdccc-pbg6d   1/1       Running   0          12h       10.244.1.16   aks-nodepool1-78763348-0

[dab@DBI-LT-DAB:#]> kubectl get replicasets
NAME                         DESIRED   CURRENT   READY     AGE
mssql-deployment-8c67fdccc   1         1         1         12h

[dab@DBI-LT-DAB:#]> kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes         ClusterIP      10.0.0.1       <none>          443/TCP          9h
mssql-deployment   LoadBalancer   10.0.134.101   xxx.xxx.xxx.xxx   1433:31569/TCP   7m

[dab@DBI-LT-DAB:#]> kubectl describe service mssql-deployment
Name:                     mssql-deployment
Namespace:                default
Labels:                   <none>
…
Selector:                 app=mssql
Type:                     LoadBalancer
IP:                       10.0.134.101
LoadBalancer Ingress:     xxx.xxx.xxx.xxx
Port:                     <unset>  1433/TCP
TargetPort:               1433/TCP
NodePort:                 <unset>  31569/TCP
Endpoints:                10.244.1.16:1433
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

 

Let’s now try to connect to our fresh new SQL Server instance from my remote laptop:

blog 136 - 2 - K8s - Container

Great job! My container includes all my custom stuff, such as the dbi_tools database and the dedicated maintenance jobs related to our DMK maintenance tool. We may also notice the dbi user created during the container startup.
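
The same quick check can be done from the command line with the sqlcmd utility, if installed (a hedged sketch; replace the masked public IP and the sa password with your own values):

sqlcmd -S xxx.xxx.xxx.xxx,1433 -U sa -P 'xxxxx' -Q "SELECT name FROM sys.databases"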

Just out of curiosity, let's have a look at the pod log (or container log, since there is only one container in the pod in my case). The log includes the SQL Server startup log, and I kept only some interesting samples here that identify the custom actions we implemented during the container startup.

[dab@DBI-LT-DAB:#]> kubectl get po -a
NAME                               READY     STATUS    RESTARTS   AGE
mssql-deployment-8c67fdccc-pk6sm   1/1       Running   0          21m

[dab@DBI-LT-DAB:#]> kubectl logs mssql-deployment-8c67fdccc-pk6sm
…
======= 2018-05-30 21:04:59 Creating /u00 folder hierarchy ========
cat: /config.log: No such file or directory
======= 2018-05-30 21:04:59 Creating /u01 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u02 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u03 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u98 folder hierarchy ========
======= 2018-05-30 21:04:59 Linking binaries and configuration files to new FHS ========
======= 2018-05-30 21:04:59 Creating MSFA OK =======
….
2018-05-30 21:05:13.85 spid22s     The default language (LCID 1033) has been set for engine and full-text services.
======= 2018-05-30 21:05:29 MSSQL SERVER STARTED ========
======= 2018-05-30 21:05:29 Configuring tempdb database files placement =======
…
2018-05-30 21:06:05.16 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 1500. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'max server memory (MB)' changed from 2147483647 to 1500. Run the RECONFIGURE statement to install.
======= 2018-05-30 21:06:05 Configuring max server memory OK =======
======= 2018-05-30 21:06:05 Creating login dbi =======
======= 2018-05-30 21:06:05 Creating login dbi OK =======
======= 2018-05-30 21:06:05 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
2018-05-30 21:06:12.47 spid51      Setting database option MULTI_USER to ON for database 'dbi_tools'.
Update complete.
Changed database context to 'dbi_tools'.

(1 rows affected)

(1 rows affected)
======= 2018-05-30 21:06:12 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======
2018-05-30 21:10:09.20 spid51      Using 'dbghelp.dll' version '4.0.5'
2018-05-30 21:10:19.76 spid51      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-05-30 21:10:19.87 spid51      Using 'xplog70.dll' version '2017.140.3022' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.

 

To finish this blog post up properly, let’s simulate a pod failure to check the K8s behavior with our SQL Server container.

[dab@DBI-LT-DAB:#]> kubectl delete pod mssql-deployment-8c67fdccc-pk6sm
pod "mssql-deployment-8c67fdccc-pk6sm" deleted

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                               READY     STATUS        RESTARTS   AGE
mssql-deployment-8c67fdccc-jrdgg   1/1       Running       0          5s
mssql-deployment-8c67fdccc-pk6sm   1/1       Terminating   0          26m

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
mssql-deployment-8c67fdccc-jrdgg   1/1       Running   0          2m

 

As expected, the replica set is doing its job by re-creating the pod to recover my SQL Server instance and by reconnecting it to the persistent storage. We can check that we can still connect to the instance through the load balancer IP address without running into any corruption issue.

blog 136 - 3 - K8s - Container 2

To conclude, I would say that moving our custom SQL Server docker image was not as hard as I expected. Obviously, there are some differences between the two orchestrators, but from an application point of view it doesn't make a big difference. From an administration perspective, I agree the story is probably not the same :)

What about K8s from a development perspective? You may say that you don't own such an Azure environment, but the good news is that you can use Minikube, a single-node version of Kubernetes mainly designed for local development. I will probably blog about it in the future. Stay tuned!
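
In the meantime, getting a local playground up is essentially a matter of (a hedged sketch; the VM driver to use depends on your workstation):

minikube start
kubectl config use-context minikube
kubectl get nodes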

 

 

Cet article Introducing SQL Server on Kubernetes est apparu en premier sur Blog dbi services.

An awk filter to truncate or wrap around tabular output

Thu, 2018-05-31 04:47

In my previous blog "idql and its column output", see link here, I provided a small awk filter to reflow the idql output's columns by wrapping their content around. Later, I came to think that it could be useful to be able to truncate the columns instead, in order to have an even more compact output. A shorter, denser table can certainly help in some cases, such as when a quick overview of a query's result is enough to get an idea of its correctness or its performance.
Of course, in case of truncation, a user-definable trailing string, defaulting to the ellipsis, is appended to show that a truncation has been applied; this is to avoid presenting incomplete and therefore incorrect data to an unsuspecting or distracted reader.
So, here is the script:

# Usage:
#    gawk [-v maxw=nn] [-v truncate=0|1] [-v ellipsis=...] [-v wrap=1|0] -f compact_wwa.awk file
# or:
#    cmd | gawk [-v maxw=nn] [-v truncate=0|1] [-v ellipsis=...] [-v wrap=1|0] -f compact_wwa.awk
# where:
#     missing numeric parameters assume the value 0;
#     maxw is the maximum column width and defaults to 32;
#         characters outside this limit are either wrapped around in their own column or truncated, depending on the parameters truncate and wrap;
#     wrapping is the default and has priority over truncating;
#     truncate=0 wrap=0 --> wrapping around;
#     truncate=0 wrap=1 --> wrapping around;
#     truncate=1 wrap=0 --> truncate;
#     truncate=1 wrap=1 --> wrapping around;
#     truncating is therefore only done when explicitly and unambiguously requested; this is to preserve data integrity whenever possible;
#     ellipsis string is only considered when truncating and defaults to '...'; thus, there is always one to warn of truncation;
# example:
#     gawk -v maxw=50 -f compact_wwa.awk tmp_file  | less -S
#     C. Cervini, dbi-services;
BEGIN {
   if (!maxw) maxw = 32
   if (!truncate)
      if (!wrap)
         wrap = 1
      else;
   else if (!wrap);
   else
      truncate = 0
   if (!ellipsis)
      ellipsis = "..."
   while (getline && !match($0, /^([0-9]+> )+/));
   header = substr($0, RLENGTH + 1)
   getline
   nbFields = NF
   fs[0] = 0; fw[0] = -1 # just so that fs[1] = 1, see below;
   headerLine = ""; sepLine = ""
   for (i = 1; i <= NF; i++) {
      fs[i] = fs[i - 1] + fw[i - 1] + 2
      fw[i] = length($i)
      Min = min(fw[i], maxw)
      if (1 == truncate)
         Min = max(Min, length(ellipsis))
      sepLine = sepLine sprintf("%s  ", substr($0, fs[i], Min))
   }
   printWithWA(header)
   printf("%s\n", sepLine)
}
{
   if (match($0, /^\([0-9]+ rows? affected\)/)) {
      print
      exit
   }
   printWithWA($0)
}
function printWithWA(S) {
   do {
      left_over = ""
      for (i = 1; i <= nbFields; i++) {
         Min = min(fw[i], maxw)
         if (1 == truncate)
            Min = max(Min, length(ellipsis))
         columnS = substr(S, fs[i], Min)
         if (1 == truncate) {
            restColumn = substr(S, fs[i] + Min, fw[i] - Min); gsub(/ +$/, "", restColumn)
            if (restColumn)
               columnS = substr(columnS, 1, length(columnS) - length(ellipsis)) ellipsis
         }
         printf("%s  ", columnS)
         restS = substr(S, fs[i] + Min, fw[i] - Min)
         if (length(restS) > 0) {
            left_over = left_over sprintf("%-*s  ", fw[i], restS)
         }
         else
            left_over = left_over sprintf("%*s  ", fw[i], "")
      }
      printf "\n"
      gsub(/ +$/, "", left_over)
      S = left_over
   } while (left_over && wrap)
}
function min(x, y) {
   return(x <= y ? x : y)
}
function max(x, y) {
   return(x >= y ? x : y)
}

Example of execution

idql dmtest -Udmadmin -Pdmadmin -w500 <<EoQ | gawk -v maxw=25 -v truncate=1 -v ellipsis="(...)" -f compact_wwa.awk | less -S
select r_object_id, user_name, user_os_name, user_address, user_group_name, user_privileges, owner_def_permit, world_def_permit, group_def_permit, default_folder, user_db_name, description,
acl_domain, acl_name, user_os_domain, home_docbase, user_state, client_capability, globally_managed, user_delegation, workflow_disabled, alias_set_id, user_source, user_ldap_dn, user_xprivileges,
failed_auth_attempt, user_admin, user_global_unique_id, user_login_name, user_login_domain, user_initials, USER_PASSWORD, user_web_page, first_failed_auth_utc_time, last_login_utc_time,
deactivated_utc_time, deactivated_ip_addr, root_log_dir
from
dm_user
go
exit
EoQ

Result:
blog10
blog11
Here the ellipsis string has been set to “(…)”.
One could wonder why there are 2 apparently contradictory options to specify the wrapping, truncate and wrap. Wouldn't one be enough, since truncate = not wrap? That would also remove the 7 lines of validation and prioritization at the beginning. One of the reasons is comfort, as it is often easier to think in positive rather than in negative logic. The other is that both options are not exactly opposites: the opposite of truncate can be "do nothing" as well as "wrap around", and vice-versa. It is better to be explicit rather than implicit, so as to avoid any ambiguity.
What if a do-nothing filter is needed while still calling compact_wwa.awk (have I mentioned that wwa stands for "with wrap-around"?), e.g. from another script that passes the command-line options to it? Just specify some large value for maxw and that's it. As its name suggests, compact_wwa.awk only attempts to shrink columns wider than maxw; narrower columns are not modified.
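For example, following the usage shown in the script's header, a pass-through invocation could simply look like this:

cmd | gawk -v maxw=100000 -f compact_wwa.awk | less -S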

Possible improvements

There are still lots of ways to enhance this script. One of them, for instance, is to provide a maxw per column. Here, the parameter maxw is applied to all the columns, but it would be nice to compress (whether by truncating or by wrapping around) only the uninteresting columns, so as to leave more space for the relevant ones.
Another way would be to make the script work both ways, enlarging columns as well as shrinking them, which could sometimes simplify their parsing since they’ll all have the same width. Maybe the next time…

Wrapping up

I think this script can prove useful to those who need to work with idql and have to deal with its limits. Countless times in the past, I had to open a DQL extraction’s output in UltraEdit to manually shrink all those columns. Too bad that I didn’t have this little utility before, it could have spared me tons of time doing all this tedious work.

 

Cet article An awk filter to truncate or wrap around tabular output est apparu en premier sur Blog dbi services.

Adding a Documentum Extension to gawk, part II

Thu, 2018-05-31 04:41

This is part II of the article “Adding a Documentum Extension to gawk”. You can find Part I here
Before we can test the extension, we need a test program and some helper functions for comfort. Let’s prepare them.
o  Move back to the dmgawk directory and edit DctmAPI.awk, the wrapper functions;

$ cd ../../..
$ pwd
/home/dmadmin/gawk
$ vi DctmAPI.awk

o  Cut and paste the lines below:

@load "dctm"

# provides high-level function to do Documentum stuff;
# set environment variable $DOCUMENTUM to where the dmcl.ini file is and $LD_LIBRARY_PATH to the location of libdmcl40.so;
# this library file is made available to gawk scripts by @include-ing in it;
# the environment variable AWKPATH can be set to the path containing the included awk sources;
# Cesare Cervini
# dbi-services.com
# 5/2018

dmLogLevel = 1

function dmShow(mesg) {
# displays the message msg if allowed
   if (dmLogLevel > 0)
      print mesg
}

function dmConnect(docbase, user_name, password) {
# connects to given docbase as user_name/password;
# returns a session id if OK, an empty string otherwise
   dmShow("in connect(), docbase = " docbase ", user_name = " user_name ", password = " password)
   session = dmAPIGet("connect," docbase "," user_name "," password)
   if (!session) {
      print "unsuccessful connection to docbase " docbase " as user " user_name
      return "" 
   }
   else {
      dmShow("successful session " session)
      dmShow(dmAPIGet("getmessage," session))
   }
   dmShow("exiting connect()")
   return session
}

function dmExecute(session, dql_stmt) {
# execute non-SELECT DQL statements;
# returns 1 if OK, 0 otherwise;
   dmShow("in dmExecute(), dql_stmt=" dql_stmt)
   query_id = dmAPIGet("query," session "," dql_stmt)
   if (!query_id) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }
   if (!dmAPIExec("close," session "," query_id)) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }
   dmAPIGet("getmessage," session)
   return 1
}

function dmSelect(session, dql_stmt, attribute_names) {
# execute the DQL SELECT statement passed in dql_stmt and outputs the result to stdout;
# attributes_names is a list of attributes to extract from the result set;
# return 1 if OK, 0 otherwise;
   dmShow("in dmSelect(), dql_stmt=" dql_stmt)
   query_id = dmAPIGet("query," session "," dql_stmt)
   if (!query_id) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }

   s = ""
   nb_attrs = split(attribute_names, attributes_tab, " ")
   for (attr = 1; attr <= nb_attrs; attr++)
      s = s "[" attributes_tab[attr] "]\t"
   print s
   resp_cntr = 0
   while (dmAPIExec("next," session "," query_id) > 0) {
      s = ""
      for (attr = 1; attr <= nb_attrs; attr++) {
         value = dmAPIGet("get," session "," query_id "," attributes_tab[attr])
         if ("r_object_id" == attributes_tab[attr] && !value) {
            dmShow(dmAPIGet("getmessage," session))
            return 0
         }
         s= s "[" (value ? value : "NULL") "]\t"
      }
      resp_cntr += 1
      dmShow(sprintf("%d: %s", resp_cntr, s))
   }
   dmShow(sprintf("%d rows iterated", resp_cntr))

   if (!dmAPIExec("close," session "," query_id)) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }

   return 1
}

function dmDisconnect(session) {
# closes the given session;
# returns 1 if no error, 0 otherwise;
   dmShow("in dmDisconnect()")
   status = dmAPIExec("disconnect," session)
   if (!status)
      dmShow("Exception in dmDisconnect():")
   else
      dmShow("exiting disconnect()")
   return status
}

o  Ditto for the test program tdctm.awk;
$ vi tdctm.awk

# test program for DctmAPI.awk and the interface dctm.c;
# Cesare Cervini
# dbi-services.com
# 5/2018

@include "DctmAPI.awk"

BEGIN {
   dmLogLevel = 1

   status = dmAPIInit()
   printf("dmAPIInit(): %d\n", status)
   if (status)
      print("dmAPIInit() was successful")
   else
      print("dmAPIInit() was not successful")
 
   printf "\n"
   session = dmConnect("dmtest", "dmadmin" , "dmadmin")
   printf("dmConnect: session=%s\n", session)
   if (!session) {
      print("no session opened, exiting ...")
      exit(1)
   }

   printf "\n"
   dump = dmAPIGet("dump," session ",0900c35080008107")
   print("object 0900c35080008107 dumped:\n" dump)

   printf "\n"
   stmt = "update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'"
   status = dmExecute(session, stmt)
   if (status)
      print("dmExecute [" stmt "] was successful")
   else
      print("dmExecute [" stmt "] was not successful")

   printf "\n"
   stmt = "select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document"
   status = dmSelect(session, stmt, "r_object_id object_name owner_name acl_domain acl_name")
   if (status)
      print("dmSelect [" stmt "] was successful")
   else
      print("dmSelect [" stmt "] was not successful")

   printf "\n"
   stmt = "select count(*) from dm_document"
   status = dmSelect(session, stmt,  "count(*)")
   if (status)
      print("dmSelect [" stmt "] was successful")
   else
      print("dmSelect [" stmt "] was not successful")

   printf "\n"
   status = dmDisconnect(session)
   if (status)
      print("successfully disconnected")
   else
      print("error while  disconnecting")

   printf "\n"
   status = dmAPIDeInit()
   if (status)
      print("successfully deInited")
   else
      print("error while  deInited")

   exit(0)
}

o  Let’s test now !
first, set the needed variables for the Documentum environment, if still not done;
$ export DOCUMENTUM=/home/dmadmin/documentum
$ export LD_LIBRARY_PATH=$DOCUMENTUM/product/7.3/bin

o  Then, we need to tell the new gawk where to find the dynamic extension and the awk wrapper;
the environment variables are AWKLIBPATH and AWKPATH respectively;
let’s define those variables on-the-fly for a change:

$ AWKLIBPATH=gawk-4.2.1/extension/.libs AWKPATH=./ gawk-4.2.1/gawk -f ./tdctm.awk

o  Produced output:

dmAPIInit(): 1
dmAPIInit() was successful
 
in connect(), docbase = dmtest, user_name = dmadmin, password = dmadmin
successful session s0
[DM_SESSION_I_SESSION_START]info: "Session 0100c3508000d6d6 started for user dmadmin."
exiting connect()
dmConnect: session=s0
 
object 0900c35080008107 dumped:
USER ATTRIBUTES
 
object_name : 4/10/2018 20:09:28 dm_DBWarning
title : Result of dm_method(dm_DBWarning) with status code (0)
subject : Result of dm_method(dm_DBWarning) with command line: ./dmbasic -f../install/admin/mthd4.ebs -eDBWarning -- -docbase_name dmtest.dmtest -user_name dmadmin -job_id 0800c3508000035f -method_....
authors []:
keywords []:
resolution_label :
owner_name : dmadmin
owner_permit : 7
group_name : docu
group_permit : 5
world_permit : 3
log_entry :
acl_domain : dmadmin
acl_name : dm_4500c35080000101
language_code : FR
 
SYSTEM ATTRIBUTES
 
r_object_type : dm_document
r_creation_date : 4/10/2018 20:09:41
r_modify_date : 5/13/2018 22:09:06
r_modifier : dmadmin
r_access_date : nulldate
r_composite_id []:
r_composite_label []:
r_component_label []:
r_order_no []:
...
 
i_is_replica : F
i_vstamp : 200
 
in dmExecute(), dql_stmt=update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'
dmExecute [update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'] was successful
 
in dmSelect(), dql_stmt=select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document
[r_object_id]	[object_name]	[owner_name]	[acl_domain]	[acl_name]
1: [0900c350800001d0]	[Default Signature Page Template]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
2: [6700c35080000100]	[CSEC Plugin]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
3: [6700c35080000101]	[Snaplock Connector]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
4: [0900c350800001ff]	[Blank Word 2007 / 2010 Document]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
5: [0900c35080000200]	[Blank Word 2007 / 2010 Template]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
6: [0900c35080000201]	[Blank Word 2007 / 2010 Macro-enabled Document]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
7: [0900c35080000202]	[Blank Word 2007 / 2010 Macro-enabled Template]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
8: [0900c35080000203]	[Blank Excel 2007 / 2010 Workbook]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
9: [0900c35080000204]	[Blank Excel 2007 / 2010 Template]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
10: [0900c350800001da]	[11/21/2017 16:31:10 dm_PostUpgradeAction]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
...
885: [0900c35080005509]	[11/30/2017 20:11:05 dm_DataDictionaryPublisher]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
886: [0900c3508000611d]	[12/11/2017 19:40:05 dm_DataDictionaryPublisher]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
887: [0900c35080006123]	[12/11/2017 20:10:08 dm_DataDictionaryPublisher]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
888: [0900c3508000612d]	[12/11/2017 20:30:07 dm_UpdateStats]	[dmadmin]	[dmadmin]	[dm_4500c35080000101]
888 rows iterated
dmSelect [select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document] was successful
 
in dmSelect(), dql_stmt=select count(*) from dm_document
[count(*)]
1: [888]
1 rows iterated
dmSelect [select count(*) from dm_document] was successful
 
in dmDisconnect()
exiting disconnect()
successfully disconnected
 
successfully deInited

That’s it, enjoy your new dmgawk!

Next steps

Now that the extension has proved to be usable, it could be interesting to deploy it system-wide. This can be done with the generated Makefile at the root directory of the package, as follows:

$ sudo make install

The loadable extensions are stored in /usr/local/lib/gawk:

$ ll /usr/local/lib/gawk
total 504
-rwxr-xr-x 1 root root 124064 May 13 16:38 filefuncs.so
...
-rwxr-xr-x 1 root root 26920 May 13 16:38 time.so
-rwxr-xr-x 1 root root 31072 May 13 16:38 dctm.so

As this is the standard place to look for them, there is no need to specify AWKLIBPATH anymore.
There is also a standard place for awk utility scripts, /usr/local/share/awk. If DctmAPI.awk is installed there, it becomes available to anybody from anywhere on the system and setting AWKPATH is not needed any more. In addition, if /usr/local/bin is at the right place in $PATH, the newly extended gawk is accessible directly, along with its extensions and its ancillary scripts, e.g.:
$ gawk -f dmgawk/tdctm.awk
You may also want to define an alias dmawk pointing to the extended gawk:
$ alias dmawk=/usr/local/bin/gawk
or you may want to name it dmgawk so you still can access the original dmawk:
$ alias dmgawk=/usr/local/bin/gawk
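
Once aliased, ad-hoc administration queries become one-liners; here is a sketch reusing the wrapper functions of DctmAPI.awk with the test credentials used earlier:

$ dmgawk -i DctmAPI.awk 'BEGIN {
   if (!dmAPIInit()) exit(1)
   session = dmConnect("dmtest", "dmadmin", "dmadmin")
   if (!session) exit(1)
   dmSelect(session, "select count(*) from dm_document", "count(*)")
   dmDisconnect(session)
   dmAPIDeInit()
}'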

I’m not finished yet!

I started this mini-project on a 32-bit Ubuntu VM with the Documentum v5.3 libdmcl40.so library. Once it proved working, I replayed all the steps on a 64-bit Ubuntu VM with the Documentum v7.3 binaries. There, as already noticed in the past, the library libdmcl40.so kept crashing the executable that loads it at run-time, gawk here, so I successfully resorted to libdmcl.so, the one that calls java code behind the scenes through JNI. It is a bit slower at start up but at least does work as expected. It is very likely that the ancient libdmcl40.so is going to be retired in some future release of the content server, so be prepared to switch.
You may remember that I edited the automake Makefile.am to have the extension compiled, and that I had to relink said extension manually with the libdmcl.so library. A better way would be to provide it with its own makefile to automate those steps. This way, no original gawk source file would be altered.

Conclusion, finally

This test program and wrapper are actually the gawk counterparts of the Documentum extension for python I wrote earlier for this blog (check it here). With these 2 extensions, an administrator no longer has to be stuck with a limited and, even worse, stagnant tool. The awk language is easy and well suited to the kind of work administrators do with Documentum. Its gawk implementation is powerful, open, very well maintained and, as shown, easily extensible, so there is no reason not to use it.
For licensing reasons, the interface cannot be distributed by Documentum but anybody can install it privately without restriction.

 

Cet article Adding a Documentum Extension to gawk, part II est apparu en premier sur Blog dbi services.

Adding a Documentum Extension to gawk, part I

Thu, 2018-05-31 04:41

Recently, I was searching my NAS for some files which, to end this intolerable suspense, I did not find, but on the other hand I did stumble across a couple of interesting dmawk scripts I wrote for one customer more than 20 years ago. Some statements looked a bit odd, e.g. access to elements of multi-dimensional arrays such as "a[i1][i2]", or "delete A" to empty an array (instead of the well-known awk idiom split("", A)). After a while, I understood what was going on here: the dmawk that came with content server v3.x was formerly based on the GNU dialect of awk named gawk. It was already a more powerful interpreter than the standard AT&T awk available on the Unix I was using, namely HP-UX, and nowadays it has become even better, as you can see for yourself by checking the official manual here.

At that time, it could already be extended, which Documentum took advantage of by turning gawk into a DMCL client, dmawk. However, it was quite a tedious task because it required hacking deeply into the source code. When, years later, I was playing with this and trying to add Oracle connectivity to gawk (and turn it into an Oracle OCI-based client, oragawk ;-), I remember for instance one step that required the symbol table to be edited in order to add the new functions, and possibly their implementation code inserted in the bison file; finally, the whole gawk source had to be recompiled and relinked. Tedious yet bearable, as it didn't prevent passionate people from forking custom versions with useful new functionalities such as xml processing in xgawk.

Over the years, starting with v4.1.1, gawk has evolved a new, much easier mechanism for adding extensions (aka plugins). It is named dynamic extension (see documentation here); it lets one load shared libraries at run-time and invoke their functions from within a gawk script through a minimum interface to be provided; the other way around, i.e. callbacks, is also possible for the brave, or really motivated, ones. Several powerful extensions have been developed through this system such as: json serialization from/to an associative array (useful e.g. to prepare parameters to pass to javascript functions such as the ones in HighChart or flot libraries), arbitrary-precision integer and floating point support, postgresql database access, etc. (for a list of current extensions, see here). If you have a recent gawk compiled with the MPFR extension, try for example the command below:
gawk -M -v prec=16000 'BEGIN{PREC = prec; printf("a big fp number:\n%f\n\n", 3.0**10**4); n = 3**10**4; print "a large integer number:\n" n; print "it has " length(n) " digits"; exit}'
Impressive, isn’t ?
To know if your local gawk has the option, ask it so:
gawk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
Copyright (C) 1989, 1991-2015 Free Software Foundation.

This one has it.

So why didn’t Documentum keep up with gawk, going for the standard, lesser awk instead? Maybe because of a licensing issue? Gawk is protected by the GNU GPL (see its full text in the gawk manual above), which mandates that not only its modified source code but also all extension code statically or dynamically linked with it be provided in source code form and included in the delivered package, and maybe this was disturbing at the time. Now that open source is omnipresent, from O/Ses to RDBMS, from network software to web development frameworks, from ERP to CRM software, it is so common that even a restrictive license such as the GPL does not shock anyone today. Linux itself, or GNU/Linux as some like to call it, contains a massive amount of GPL-ed software, Linux per se being limited to the kernel, and is under the GPL itself. Thus, with the new extension mechanism, the GPL requires publishing the source code not only of the interface to be linked with gawk but also of the proprietary shared library libdmcl40.so (see the definition of Corresponding Source) that is loaded at run-time; but wait: since libdmcl40.so tightly uses other proprietary Documentum libraries, their source code would also need to be published, and so on, transitively. Obviously, this licensing is by design not favorable to closed source, which is exactly the raison d’être of the FSF (aka the Free Software Foundation). If Documentum chose not to go the GPL way at that time with the simpler EDMS, it is very unlikely that it will in the future, unless it drastically changes its business model and becomes, say, the Red Hat of document management!

Fortunately, there is no reason to hold one’s breath until that day, for I propose here a simple, straightforward interface between gawk and the Documentum libdmcl40.so/libdmcl.so run-time libraries which does not violate the GPL, as nothing closed source is distributed. Its usage is exactly the same as documented in the API reference manual. I’ll show you hereafter how to prepare and use it. Since this article will be quite long, I’ll split it in two parts. Part I, which you are reading, presents the interface and explains how to compile it as a gawk extension. Part II here will deal with a wrapper around that interface providing a few helper functions and a test program to show how to use the extension.

Installation

o  Check the currently installed version of gawk;

$ gawk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
Copyright (C) 1989, 1991-2015 Free Software Foundation.

o  This is an old release; let’s take this opportunity to upgrade to the current latest, the 4.2.1;
    prepare a working directory; the name “dmgawk” fits it well;
$ mkdir ~/dmgawk; cd ~/dmgawk

o  Let’s get the latest source code of gawk, it’s the release 4.2.1;
$ wget http://ftp.gnu.org/gnu/gawk/gawk-4.2.1.tar.gz
$ tar -xpvzf gawk-4.2.1.tar.gz
$ cd gawk-4.2.1

o  compile it;
$ ./configure
$ make

o  Check the new version;
$ ./gawk --version
GNU Awk 4.2.1, API: 2.0
Copyright (C) 1989, 1991-2018 Free Software Foundation.

o  Note that the MPFR and GMP extensions were not linked in; that’s because those libraries were not present on my system;
    to have them, just download, compile and install them, and then run gawk’s ./configure and make again;
    fine so far; we’ll extend this local version; whether it will or can be installed system-wide is up to you;
    move to the extension directory;
    edit the automake Makefile.am and add the highlighted references below to the new interface, let’s call it dctm.c;
$ cd extension
$ vi Makefile.am
pkgextension_LTLIBRARIES = \
filefuncs.la \
...
time.la \
dctm.la
...
time_la_SOURCES = time.c
time_la_LDFLAGS = $(MY_MODULE_FLAGS)
time_la_LIBADD = $(MY_LIBS)
 
dctm_la_SOURCES = dctm.c
dctm_la_LDFLAGS = $(MY_MODULE_FLAGS)
dctm_la_LIBADD = $(MY_LIBS)


o  save and quit; that’s all for the make file;
    let’s edit the interface dctm.c now and insert the code below;
$ vi dctm.c

/*
 * dctm.c - Builtin functions that provide an interface to Documentum dmapp.h;
 * see dmapp.h for description of functions; 
 *
 * Cesare Cervini
 * dbi-services.com
 * 5/2018
 * go to .libs and link dctm.o with the Documentum library: gcc -o dctm.so -shared dctm.o path-to-the-shared-library/libdmcl40.so;
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <sys/types.h>
#include <sys/stat.h>

#include "gawkapi.h"

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

/* make it point to the Documentum dmapp.h on your system */
#include "/home/dmadmin/documentum/share/sdk/include/dmapp.h"

static const gawk_api_t *api;	/* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "dctm extension: version 1.0";
static awk_bool_t (*init_func)(void) = NULL;

int plugin_is_GPL_compatible;

/*  do_dmAPIInit */
static awk_value_t *
do_dmAPIInit(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   unsigned int ret = 0;

   assert(result != NULL);

   ret = dmAPIInit();
   ret &= 0xff;

   return make_number(ret, result);
}

/*  do_dmAPIDeInit */
static awk_value_t *
do_dmAPIDeInit(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   unsigned int ret = 0;

   assert(result != NULL);

   ret = dmAPIDeInit();
   ret &= 0xff;

   return make_number(ret, result);
}

/*  do_dmAPIExec */
static awk_value_t *
do_dmAPIExec(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str;
   unsigned int ret = 0;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str)) {
      ret = dmAPIExec(str.str_value.str);
      ret &= 0xff;
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPIExec: called with inappropriate argument(s)"));

   return make_number(ret, result);
}

/*  do_dmAPIGet */
static awk_value_t *
do_dmAPIGet(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str;
   char *got_value = NULL;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str)) {
      got_value = dmAPIGet(str.str_value.str);
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPIGet: called with inappropriate argument(s)"));

   make_const_string(got_value == NULL ? "" : got_value, got_value == NULL ? 0 : strlen(got_value), result);
   return result;
}

/*  do_dmAPISet */
static awk_value_t *
do_dmAPISet(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str1;
   awk_value_t str2;
   unsigned int ret = 0;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str1) && get_argument(1, AWK_STRING, & str2)) {
      ret = dmAPISet(str1.str_value.str, str2.str_value.str);
      ret &= 0xff;
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPISet: called with inappropriate argument(s)"));

   return make_number(ret, result);
}

/* these are the exported functions along with their min and max arities; */
static awk_ext_func_t func_table[] = {
	{ "dmAPIInit",   do_dmAPIInit, 0, 0, awk_false, NULL },
	{ "dmAPIDeInit", do_dmAPIDeInit, 0, 0, awk_false, NULL },
	{ "dmAPIExec",   do_dmAPIExec, 1, 1, awk_false, NULL },
	{ "dmAPIGet",    do_dmAPIGet, 1, 1, awk_false, NULL },
	{ "dmAPISet",    do_dmAPISet, 2, 2, awk_false, NULL },
};

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, dctm, "")

o  Again, run configure and build the extensions;
$ pwd
/home/dmadmin/dmgawk/gawk-4.2.1/extension
$ ./configure
$ make

o  The extensions’ object files and shared libraries are actually contained in the hidden .libs directory below extension; so move there and check the generated dctm.so library;

$ cd .libs
$ ldd dctm.so
linux-vdso.so.1 => (0x00007ffc1d7e5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3452930000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3452efd000)

o  The library still has no reference to libdmcl.so; let’s link it in and check again;
    find that Documentum library on your system; on mine, it’s in /home/dmadmin/documentum/product/7.3/bin;
$ gcc -o dctm.so -shared dctm.o /home/dmadmin/documentum/product/7.3/bin/libdmcl.so
$ ldd dctm.so
linux-vdso.so.1 => (0x00007fff55dbf000)
/home/dmadmin/documentum/product/7.3/bin/libdmcl.so (0x00007fece91bb000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fece8df1000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fece8bb9000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fece899c000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fece861a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fece8311000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fece80fb000)
/lib64/ld-linux-x86-64.so.2 (0x00007fece95ca000)

Good, so far dctm.so has been linked with libdmcl.so and is thus able to access the dmAPI*() functions. gawk lets us load dynamic extensions in two ways, with a quick example right after the list:
1. the statement @load “name_of_extension” inserted in the client awk script;
2. the -l | --load “name_of_extension” command-line option
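As a quick smoke test, and assuming the freshly built gawk, the dctm.so we just linked above and a properly set up Documentum client environment, something like this should work; AWKLIBPATH simply tells gawk where to look for extensions:

$ cd ~/dmgawk/gawk-4.2.1
$ AWKLIBPATH=$PWD/extension/.libs ./gawk -l dctm 'BEGIN {print "dmAPIInit() returned " dmAPIInit()}'

A non-zero value means the DMCL layer initialized correctly; the same effect is obtained by putting @load "dctm" at the top of a script file instead of passing -l on the command line.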
Please, turn now to part II here for the rest of this article.

 

Cet article Adding a Documentum Extension to gawk, part I est apparu en premier sur Blog dbi services.

OpenShift on my Windows 10 laptop with MiniShift

Thu, 2018-05-31 01:23

If you want to play with OpenShift on your laptop, you can, in a Virtual Machine. I have VirtualBox installed on my laptop. I’ll install Minishift here, which will create the VM to run OpenShift with just a few simple commands. On Linux you can refer to Daniel’s post. Here is the Windows version. Oh, and Daniel did that to run Postgres but my goal is to run an Oracle container of course. Or MySQL maybe.

I’ve downloaded minishift-1.18.0-windows-amd64.zip and unzipped it in D:\Downloads\minishift-1.18.0-windows-amd64 where I have minishift.exe

Minishift

I configure it to use VirtualBox:

minishift config set vm-driver virtualbox

It is installed in my Windows profile:

C:\Users\fpa\.minishift

Be careful, minishift does not know that we have multiple drives in Windows. When I was running minishift.exe from the D: drive, it was not able to find the virtual machine’s files that were on C:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.9.0'
40.81 MiB / 40.81 MiB [===================================================================================] 100.00% 0s-- Downloading OpenShift v3.9.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM ..... FAIL E0529 17:08:31.056327 1448 start.go:391] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.

Then, I changed to the C: drive

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>C:
 
C:\Users\fpa>dir \Users\fpa\.minishift\cache\iso\b2d\v1.3.0\
Volume in drive C is OS
Volume Serial Number is 26AE-33F7
 
Directory of C:\Users\fpa\.minishift\cache\iso\b2d\v1.3.0
 
29-May-18 15:22 .
29-May-18 15:22 ..
29-May-18 15:22 41,943,040 minishift-b2d.iso
1 File(s) 41,943,040 bytes
2 Dir(s) 30,652,370,944 bytes free

And I run minishift from there:

C:\Users\fpa>D:minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM .............................. OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
Importing 'openshift/origin:v3.9.0' . CACHE MISS
Importing 'openshift/origin-docker-registry:v3.9.0' . CACHE MISS
Importing 'openshift/origin-haproxy-router:v3.9.0' . CACHE MISS
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ...................................... OK
-- Starting OpenShift cluster ........................
Using Docker shared volumes for OpenShift volumes
Using public hostname IP 192.168.99.102 as the host IP
Using 192.168.99.102 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
  https://192.168.99.102:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin
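As a side note, instead of changing drives, it might also work to point minishift explicitly at its profile directory through the MINISHIFT_HOME environment variable; this is an untested assumption on my side, but it would look like this:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>SET MINISHIFT_HOME=C:\Users\fpa\.minishift
D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>minishift start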

Docker

That’s all. I have a new VM in VirtualBox with its main files in C:/Users/fpa/.minishift

C:/Users/fpa/.minishift/machines/minishift/boot2docker.iso
C:/Users/fpa/.minishift/machines/minishift/disk.vmdk

The VM boots on the Boot2Docker ISO, which is the way to run Docker on Windows without enabling Hyper-V. The first network interface is NAT for internet access. The second one has a DHCP IP from 192.168.99.1

You can control the VM with minishift (start, stop, configure, ssh,…):

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift ssh
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802

We have everything running in containers here:

docker@minishift:/mnt/sda1$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77c0ef5a80d7 50c7bffa0653 "/usr/bin/openshift-r" 8 minutes ago Up 8 minutes k8s_router_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
10fb6a2a6b70 9b472363b07a "/bin/sh -c '/usr/bin" 8 minutes ago Up 8 minutes k8s_registry_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
2f6b90fb0bb4 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
3c720d166989 fae77002371b "/usr/bin/origin-web-" 8 minutes ago Up 8 minutes k8s_webconsole_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
bb5870fc0b7e openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
95663bf29487 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
5fc022dbe112 openshift/origin:v3.9.0 "/usr/bin/openshift s" 8 minutes ago Up 8 minutes origin

But we should not have to connect to this machine.

The minishift executable can be used to control anything. As I have docker client installed on my laptop (the Docker Toolbox) I can get the environment variables:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.102:2376
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

and see, from Windows, the docker images that are in the VM:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>where docker
C:\Program Files\Docker Toolbox\docker.exe
 
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.102:2376
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
 
D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 fae77002371b 12 days ago 495.1 MB
openshift/origin-docker-registry v3.9.0 9b472363b07a 12 days ago 464.9 MB
openshift/origin-haproxy-router v3.9.0 50c7bffa0653 12 days ago 1.283 GB
openshift/origin-deployer v3.9.0 e4de3cb64af9 12 days ago 1.261 GB
openshift/origin v3.9.0 83ec0170e887 12 days ago 1.261 GB
openshift/origin-pod v3.9.0 b6d2be1df9c0 12 days ago 220.1 MB

While I’m there, I can run whatever I want as a docker container. Let’s try with Oracle.

I need to login to the Docker store (where I have accepted the license conditions)

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pachot
Password:
Login Succeeded

Let’s pull the Oracle ‘slim’ image:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker pull store/oracle/database-enterprise:12.2.0.1-slim
12.2.0.1: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Downloading [> ] 1.076 MB/83.31 MB
9d3556e8e792: Downloading [> ] 535.3 kB/151 MB
fc60a1a28025: Download complete
0c32e4ed872e: Download complete
b465d9b6e399: Waiting

And run it:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker run -it --rm --name orcl store/oracle/database-enterprise:12.2.0.1-slim
Setup Oracle Database
Oracle Database 12.2.0.1 Setup
Wed May 30 20:16:56 UTC 2018
 
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
 
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log
untarDB.sh is done at 38 sec
...

You may find that funny, but the Oracle images in the Docker store contain only a tarball of the Oracle Home and a pre-created database. It just takes the time to untar those and start the instance, and after 2 minutes I have my database ready. Everything is untarred to the volume, including the software.

Here are the exposed ports and where they are redirected to:

C:\Users\fpa>SET DOCKER_TLS_VERIFY=1
C:\Users\fpa>SET DOCKER_HOST=tcp://192.168.99.102:2376
C:\Users\fpa>SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
C:\Users\fpa>docker port orcl
1521/tcp -> 0.0.0.0:32771
5500/tcp -> 0.0.0.0:32770

Then it is easy to connect with SQL*Net using the credentials provided (see the setup instructions):

C:\Users\fpa>sqlcl sys/Oradoc_db1@//192.168.99.102:32771/orclpdb1.localdomain as sysdba
SQLcl: Release 18.1.1 Production on Wed May 30 22:41:10 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> select instance_name,host_name from v$instance;
 
INSTANCE_NAME HOST_NAME
------------- ------------
ORCLCDB 254f96463883

OpenShift

So, that’s an easy way to run Oracle on Docker when you have VirtualBox. One download and 5 commands and I’m ready to connect. But that’s not the goal. Here we have OpenShift to manage multiple Docker containers.

According to the ‘minishift start’ output I have a Web server on https://192.168.99.102:8443 (user: system, password: admin)
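If you don’t want to copy the URL by hand, minishift can also print it or open the web console directly (a small convenience, assuming the console sub-command of this minishift release):

C:\Users\fpa>D:minishift console --url
C:\Users\fpa>D:minishift console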

It already contains a lot of databases:
CaptureMiniShift001

They are really easy to use. In two clicks I’ve run a MySQL container:
CaptureMiniShift002

If you don’t like the GUI, there’s the command line interface of OpenShift within the minishift cache:

C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe login https://192.168.99.102:8443
Authentication required for https://192.168.99.102:8443 (openshift)
Username: system
Password:
Login successful.
 
You have one project on this server: "test"
 
Using project "test".
 
C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe status
In project test on server https://192.168.99.102:8443
 
svc/mysql - 172.30.224.246:3306
dc/mysql deploys openshift/mysql:5.7
deployment #1 deployed 7 minutes ago - 1 pod

Now that I have OpenShift installed and running, I’ll be able to run Oracle and manage containers from there. That’s for the next post on this subject.

 

Cet article OpenShift on my Windows 10 laptop with MiniShift est apparu en premier sur Blog dbi services.

EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers

Tue, 2018-05-29 13:42

In the last post we had a look at the basic configuration of EDB EFM and confirmed that we can do manual switchovers for maintenance operations. Being able to do a switchover and switchback is fine, but what really matters are automatic failovers. Without an automatic failover this setup would be somewhat useless as a) EDB EFM is supposed to do exactly that and b) without it we would need to implement something on our own. So let's have a look if that works.

For this little test I scaled the PostgreSQL instances to three, meaning one master and two replica instances. Three is the minimum required as EFM needs at least three agents so that at least two of them can decide what to do. In a traditional setup this is usually done by making a non-database host the EDB EFM witness host. In this OpenShift/MiniShift setup we do not have a witness node, so the minimum number of database hosts (where EFM runs) is three. This is how it currently looks:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-dzcj6   0/1       Running   2          3h        172.17.0.9    localhost   standbydb
edb-as10-0-1-jmfjk   0/1       Running   2          4h        172.17.0.3    localhost   masterdb
edb-as10-0-1-rlzdb   0/1       Running   2          3h        172.17.0.6    localhost   standbydb
edb-pgpool-1-jtwg2   0/1       Running   2          4h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   0/1       Running   2          4h        172.17.0.5    localhost   queryrouter

What I will do is to kill the master pod (edb-as10-0-1-jmfjk) to see what happens to my current setup. Before I do that, let's open a second session into one of the standby pods and check the current EFM status:

dwe@dwe:~$ oc rsh edb-as10-0-1-dzcj6
sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      172.17.0.3           UP     UP        
	Standby     172.17.0.6           UP     UP        
	Standby     172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.3

Standby priority host list:
	172.17.0.9 172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.3           0/F000060        
	Standby     172.17.0.9           0/F000060        
	Standby     172.17.0.6           0/F000060        

	Standby database(s) in sync with master. It is safe to promote.

Looks fine. So, from the first session, let's kill the master pod using the oc command line utility. What I expect at the very least is that one of the standby instances gets promoted and the remaining standby is reconfigured to connect to the new master.

dwe@dwe:~$ oc delete pod edb-as10-0-1-jmfjk
pod "edb-as10-0-1-jmfjk" deleted
dwe@dwe:~$ oc get pods
NAME                 READY     STATUS        RESTARTS   AGE
edb-as10-0-1-dzcj6   1/1       Running       2          3h
edb-as10-0-1-jmfjk   1/1       Terminating   2          4h
edb-as10-0-1-rlzdb   1/1       Running       2          4h
edb-as10-0-1-snnxc   0/1       Running       0          2s
edb-bart-1-s2fgj     1/1       Running       2          4h
edb-pgpool-1-jtwg2   1/1       Running       2          4h
edb-pgpool-1-pjxzs   1/1       Running       2          4h

The master pod is terminating. What does the second session in the standby pod tell us when we ask EFM for the cluster status?

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.6           UP     UP        
	Promoting   172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000250        
	Standby     172.17.0.6           0/F000140        

	One or more standby databases are not in sync with the master database.

The master is gone and one of the standby instances gets promoted. Let's wait a few seconds and check again:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Idle        172.17.0.11          UP     UNKNOWN   
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000288        
	Standby     172.17.0.6           0/F000288        

	Standby database(s) in sync with master. It is safe to promote.

Idle Node Status (idle nodes ignored in XLog location comparisons):

	Address              XLog Loc         Info
	--------------------------------------------------------------
	172.17.0.11          0/F000288        DB is in recovery.

The new master is ready and the remaining standby is reconfigured to connect to the new master. Even better, a few seconds later the old master is back as a new standby:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.11          UP     UP        
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6 172.17.0.11

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000288        
	Standby     172.17.0.6           0/F000288        
	Standby     172.17.0.11          0/F000288        

	Standby database(s) in sync with master. It is safe to promote.

Works fine and is what you must have in a good setup. When we kill one of the standby pods, the expected result is that the pod restarts and the instance comes back as a standby. Let's check:

20:32:47 dwe@dwe:~$ oc delete pod edb-as10-0-1-hf27d
pod "edb-as10-0-1-hf27d" deleted
dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS        RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-8p5rd   0/1       Running       0          3s        172.17.0.6    localhost   
edb-as10-0-1-dzcj6   1/1       Running       2          4h        172.17.0.9    localhost   masterdb
edb-as10-0-1-hf27d   0/1       Terminating   1          12m       172.17.0.3    localhost   
edb-as10-0-1-snnxc   1/1       Running       0          20m       172.17.0.11   localhost   standbydb
edb-pgpool-1-jtwg2   1/1       Running       2          5h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   1/1       Running       2          5h        172.17.0.5    localhost   queryrouter

A few moments later we have the previous status:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-8p5rd   1/1       Running   0          4m        172.17.0.6    localhost   standbydb
edb-as10-0-1-dzcj6   1/1       Running   2          4h        172.17.0.9    localhost   masterdb
edb-as10-0-1-snnxc   1/1       Running   0          24m       172.17.0.11   localhost   standbydb
edb-pgpool-1-jtwg2   1/1       Running   2          5h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   1/1       Running   2          5h        172.17.0.5    localhost   queryrouter

Fine as well. To complete this little series about EDB containers in MiniShift/OpenShift we will have a look at how we can add an EDB BART container to the setup, because backup and recovery is still missing :)

 

Cet article EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers est apparu en premier sur Blog dbi services.

SQL Saturday Paris 2018 – Pré-conférence SQL Server Linux, Docker et Azure

Mon, 2018-05-28 14:41

The next SQL Saturday in Paris is coming soon and the agenda has been published by the @GUSS on Twitter.

blog 135 - 0 - SQL Sat Paris 2018

This year, a clever mix of French-speaking and international speakers, a plus for the attendees…

A quick look at the agenda – if we focus on the SQL Server engine part – also shows that a good portion of the red stream is about Docker and automation. Interesting :)

Of course, sessions about performance and architecture will always be popular and you will still be able to attend them :) And I'm certainly not going to complain about that :)

 

Pre-conference: SQL Server beyond Windows – installation, deployment and management

sqlsatparis2018

As you already know, Microsoft has opened up to open source since SQL Server 2017 and has also strengthened its Azure offering, notably with managed instances. As a consequence, the ecosystem around SQL Server has grown considerably. This year I will have the opportunity to share some experiences on this topic during a pre-conference taking place on Friday, June 6, 2017.

Together we will cover topics well known to any data administrator, such as installation, configuration, monitoring, performance and high availability in environments other than Windows.

To register, it's over here.

Looking forward to seeing you there!

 

Cet article SQL Saturday Paris 2018 – Pré-conférence SQL Server Linux, Docker et Azure est apparu en premier sur Blog dbi services.

EDB Failover Manager in EDB containers in Minishift/OpenShift

Mon, 2018-05-28 12:47

In the last three posts we deployed an EDB database container and two pgpool instances, scaled that up to include a read-only replica and finally customized the PostgreSQL instance with ConfigMaps. In this post we will look at how the EDB Failover Manager is configured in the database containers.

These are the pods currently running in my environment, two pgpool containers and two PostgreSQL containers:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-gk8dt   1/1       Running   1          9d
edb-as10-0-1-n5z4w   1/1       Running   0          3m
edb-pgpool-1-h5psk   1/1       Running   1          9d
edb-pgpool-1-tq95s   1/1       Running   1          9d

The first one (edb-as10-0-1-gk8dt) is the primary instance and EFM should be running there as well:

dwe@dwe:~$ oc rsh edb-as10-0-1-gk8dt
sh-4.2$ psql -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery 
-------------------
 f
(1 row)

sh-4.2$ ps -ef | grep efm
edbuser    202     1  0 08:45 ?        00:00:04 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64/jre/bin/java -cp /usr/edb/efm-3.0/lib/EFM-3.0.0.jar -Xmx32m com.enterprisedb.efm.main.ServiceCommand __int_start /etc/edb/efm-3.0/edb.properties
sh-4.2$ 

Looking at the configuration there are some interesting points:

sh-4.2$ cat /etc/edb/efm-3.0/edb.properties | egrep -v "^$|^#"
db.user=edbuser
db.password.encrypted=ca78865e0f85d15edc6c51b2e5c0a58f
db.port=5444
db.database=edb
db.service.owner=edbuser
db.service.name=
db.bin=/usr/edb/as10/bin
db.recovery.conf.dir=/edbvolume/edb/edb-as10-0-1-gk8dt/pgdata
jdbc.sslmode=disable
user.email=none@none.com
script.notification=
bind.address=172.17.0.6:5430
admin.port=5431
is.witness=false
local.period=10
local.timeout=60
local.timeout.final=10
remote.timeout=10
node.timeout=10
stop.isolated.master=false
pingServerIp=8.8.8.8
pingServerCommand=/bin/ping -q -c3 -w5
auto.allow.hosts=false
db.reuse.connection.count=0
auto.failover=true
auto.reconfigure=true
promotable=true
minimum.standbys=0
recovery.check.period=2
auto.resume.period=0
virtualIp=
virtualIp.interface=
virtualIp.prefix=
script.fence=
script.post.promotion=/var/efm/post_promotion_steps.sh %f
script.resumed=
script.db.failure=/var/efm/stopEFM
script.master.isolated=
script.remote.pre.promotion=
script.remote.post.promotion=
script.custom.monitor=
custom.monitor.interval=
custom.monitor.timeout=
custom.monitor.safe.mode=
sudo.command=sudo
sudo.user.command=sudo -u %u
log.dir=/var/log/efm-3.0
jgroups.loglevel=
efm.loglevel=
jvm.options=-Xmx32m
kubernetes.port.range=1
kubernetes.namespace=myproject
kubernetes.pod.labels=cluster=edb
kubernetes.master.host=172.30.0.1
kubernetes.master.httpsPort=443
create.database.master=/var/lib/edb/bin/createmasterdb.sh
create.database.standby=/var/lib/edb/bin/createstandbydb.sh
kubernetes.is.init.master=true

The last 8 lines are not there when you do a manual EFM installation, so this is something specific to the container deployment. Apparently it is EFM that creates the master and the replica instance(s). The rest is more or less the default setup. The cluster status should be fine then:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      172.17.0.6           UP     UP        
	Standby     172.17.0.8           UP     UP        

Allowed node host list:
	172.17.0.6

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.8

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.6           0/5000140        
	Standby     172.17.0.8           0/5000140        

	Standby database(s) in sync with master. It is safe to promote.

We should be able to do a switchover:

sh-4.2$ /usr/edb/efm-3.0/bin/efm promote edb -switchover    
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.
sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.8           UP     UP        

Allowed node host list:
	172.17.0.6

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.8           0/60001A8        
	Standby     172.17.0.6           0/60001A8        

	Standby database(s) in sync with master. It is safe to promote.

Seems it worked, so the instances should have switched roles and the current instance must be in recovery:

sh-4.2$ psql -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery 
-------------------
 t
(1 row)

Fine. This works as expected. So much for the first look at EFM inside the containers. It is not the same setup you’ll find when you install EFM on your own, and EFM is doing more here than it usually does. A lot of stuff happens in the scripts provided by EDB here:

sh-4.2$ ls -la /var/lib/edb/bin/
total 72
drwxrwx---  2 enterprisedb root  4096 May 11 20:40 .
drwxrwx--- 24 enterprisedb root  4096 May 28 18:03 ..
-rwxrwx---  1 enterprisedb root  1907 Feb 17 17:14 cleanup.sh
-rwxrwx---  1 enterprisedb root  4219 May 10 22:11 createmasterdb.sh
-rwxrwx---  1 enterprisedb root  2582 May 11 03:30 createstandbydb.sh
-rwxrwx---  1 enterprisedb root  1491 May 10 22:12 dbcommon.sh
-rwxrwx---  1 enterprisedb root 10187 May 10 22:28 dbfunctions.sh
-rwxrwx---  1 enterprisedb root   621 May 10 22:15 dbsettings.sh
-rwxrwx---  1 enterprisedb root  5778 Apr 26 22:55 helperfunctions.sh
-rwxrwx---  1 enterprisedb root    33 Feb 18 03:43 killPgAgent
-rwxrwx---  1 enterprisedb root  5431 May 10 22:29 launcher.sh
-rwxrwx---  1 enterprisedb root   179 May 10 22:12 startPgAgent
-rwxrwx---  1 enterprisedb root   504 May 11 12:32 startPgPool

These scripts are referenced in the EFM configuration in several places and contain all the logic for initializing the cluster, starting it up, stopping and restarting the cluster, setting up replication and so on. To understand what really is going on one needs to understand the scripts (which is out of scope of this post).
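A quick way to see which of the properties shown above hook into these scripts is to filter the configuration for them (the property names are taken from the listing above):

sh-4.2$ grep -E "^script\.|^create\.database" /etc/edb/efm-3.0/edb.properties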

 

Cet article EDB Failover Manager in EDB containers in Minishift/OpenShift est apparu en premier sur Blog dbi services.

Which Bitnami service to choose in the Oracle Cloud Infrastructure?

Fri, 2018-05-25 15:40

In the Oracle Cloud PaaS you have a marketplace where you can choose your service. Some are packaged by Bitnami and available on multiple OSes. My first idea is that in PaaS you do not care about the OS. But Oracle Cloud has this very nice feature where you still have full access to the OS, as root, even in PaaS. Then you choose the Linux distribution of your preference. Except if performance differs: they run on different Linux kernels. Is the Oracle Linux Unbreakable Kernel more efficient?

We need to compare with relevant and reliable measures. And for that I’ve created a Postgres service and used Kevin Closson’s SLOB method, as I’m a lucky beta tester for pgio. I’ll post later about the creation of the Postgres service in the Oracle Cloud.

Cached IO

First, I’ve run cached IO to measure CPU performance.

I’ve run with the following settings in pgio.conf:

UPDATE_PCT=0
RUN_TIME=300
NUM_SCHEMAS=4
NUM_THREADS=1
WORK_UNIT=255
SCALE=1G

This is 4 schemas with 1GB of data each. This fits in my 30GB host. Actually, here is a vmstat sample during the run showing 5GB cached and large free memory:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
4 0 0 15508168 12444 5495652 0 0 0 0 1530 1631 28 22 50 0 0
4 0 0 15508292 12452 5495648 0 0 0 4 1506 1564 27 23 50 0 0

There’s no I/O to the block device here, which is my goal.

I’ve run runit.sh 10 times and here is a summary of the main results with IOPS:


$ for i in allopc@144.21.82.244.out allopc@144.21.82.255.out allopc@144.21.82.252.out ; do grep os-release $i | grep PRETTY | tail -1 ; grep x86 $i | tail -1 ; grep -E "runit.out.*DBNAME" $i | tail -10 | nl ; done
 
Fri May 25 11:56:11 UTC 2018 /etc/os-release : PRETTY_NAME="Ubuntu 16.04.4 LTS"
Linux ubuntu 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 11:11:05 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >393232< CACHE_HITS/s >15711<
2 Fri May 25 11:16:06 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398179< CACHE_HITS/s >16000<
3 Fri May 25 11:21:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >402080< CACHE_HITS/s >16019<
4 Fri May 25 11:26:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >404682< CACHE_HITS/s >16086<
5 Fri May 25 11:31:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >408524< CACHE_HITS/s >16327<
6 Fri May 25 11:36:08 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >409209< CACHE_HITS/s >16390<
7 Fri May 25 11:41:08 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >403647< CACHE_HITS/s >16327<
8 Fri May 25 11:46:09 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397440< CACHE_HITS/s >15894<
9 Fri May 25 11:51:09 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397273< CACHE_HITS/s >15956<
10 Fri May 25 11:56:11 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >396906< CACHE_HITS/s >15904<
 
Fri May 25 11:54:56 UTC 2018 /etc/os-release : PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
Linux debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
1 Fri May 25 11:09:53 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395672< CACHE_HITS/s >15882<
2 Fri May 25 11:14:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >401400< CACHE_HITS/s >16188<
3 Fri May 25 11:19:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >391551< CACHE_HITS/s >15764<
4 Fri May 25 11:24:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >393827< CACHE_HITS/s >15802<
5 Fri May 25 11:29:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >404462< CACHE_HITS/s >16198<
6 Fri May 25 11:34:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >392712< CACHE_HITS/s >15732<
7 Fri May 25 11:39:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >399389< CACHE_HITS/s >16063<
8 Fri May 25 11:44:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >390283< CACHE_HITS/s >15567<
9 Fri May 25 11:49:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >391426< CACHE_HITS/s >15771<
10 Fri May 25 11:54:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >392760< CACHE_HITS/s >15874<
 
Fri May 25 11:53:58 UTC 2018 /etc/os-release : PRETTY_NAME="Oracle Linux Server 7.5"
Linux b5e501 4.1.12-124.15.2.el7uek.x86_64 #2 SMP Tue May 22 11:52:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 11:08:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395815< CACHE_HITS/s >15759<
2 Fri May 25 11:13:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395346< CACHE_HITS/s >16009<
3 Fri May 25 11:18:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395340< CACHE_HITS/s >15898<
4 Fri May 25 11:23:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >402556< CACHE_HITS/s >16200<
5 Fri May 25 11:28:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397847< CACHE_HITS/s >16039<
6 Fri May 25 11:33:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398848< CACHE_HITS/s >16027<
7 Fri May 25 11:38:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398817< CACHE_HITS/s >16089<
8 Fri May 25 11:43:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398878< CACHE_HITS/s >15961<
9 Fri May 25 11:48:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395885< CACHE_HITS/s >15606<
10 Fri May 25 11:53:58 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395965< CACHE_HITS/s >15893<

I’ve put this data in Excel to show the picture where, basically, all 3 environments perform exactly the same:

CapturePGLIOPS

Physical IO

For the second test, I wanted to test physical IOPS. But not to measure disk performance, which is the same for my 3 services. I want to do physical I/O only to see if there’s a difference in context switches when doing non-blocking I/O – which we did not see in the previous test because those were filesystem cache hits. So I kept the small scale of 4 sessions with 1GB each, so that there’s a good chance it remains in the storage cache, but I reduced the memory in order to have less than 4GB of filesystem cache.

pgio comes with a utility (pgio_reduce_free_memory.sh) to allocate enough huge pages to limit the filesystem cache:


$ sudo bash pgio/pgio_reduce_free_memory.sh 2
 
Taking action to reduce free memory down to 2GB available.
total used free shared buff/cache available
Mem: 30886100 124676 30698952 19088 62472 30469900
Swap: 0 0 0
 
Attempting to allocate 13966 huge pages
MemAvailable: 1869148 kB
HugePages_Total: 13966

This is perfect: 13966 huge pages, that’s 27 GB in my 30GB VM that cannot be used by the filesystem, so that my 4x1GB will need I/O calls to the disk.
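To double-check the allocation, and to give the unused huge pages back to the system once the tests are done, something like this should do (assuming the helper script simply raises vm.nr_hugepages, which is what its output suggests):

$ grep Huge /proc/meminfo
$ sudo sysctl -w vm.nr_hugepages=0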

Here is a vmstat extract to confirm that the filesystem cache is less than 2GB

procs -----------memory---------- ---swap-- -----io---- ---system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 4 0 154416 2344 2015204 0 0 93025 0 32425 24295 2 3 51 44 1
0 4 0 153816 2428 2015848 0 0 94923 21 32400 24414 2 3 51 44 1

runit.sh calls vmstat with a 3-second delay so you can see a higher number of blocks/s and context switches.

Here are the interesting lines from the runit.sh output:

$ for i in allopc@144.21.89.85.out allopc@144.21.89.53.out allopc@144.21.89.26.out ; do grep os-release $i | grep PRETTY | tail -1 ; grep x86 $i | tail -1 ; grep -E "runit.out.*DBNAME" $i | tail -10 | nl ; done
 
Fri May 25 14:58:47 UTC 2018 /etc/os-release : PRETTY_NAME="Ubuntu 16.04.4 LTS"
Linux ubuntu 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 14:13:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >1214< CACHE_HITS/s >55<
2 Fri May 25 14:18:42 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >4545< CACHE_HITS/s >195<
3 Fri May 25 14:23:43 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >17053< CACHE_HITS/s >682<
4 Fri May 25 14:28:43 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18865< CACHE_HITS/s >801<
5 Fri May 25 14:33:44 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18933< CACHE_HITS/s >794<
6 Fri May 25 14:38:44 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18832< CACHE_HITS/s >777<
7 Fri May 25 14:43:45 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18826< CACHE_HITS/s >757<
8 Fri May 25 14:48:46 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >19229< CACHE_HITS/s >819<
9 Fri May 25 14:53:46 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19408< CACHE_HITS/s >835<
10 Fri May 25 14:58:47 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19524< CACHE_HITS/s >799<
 
Fri May 25 14:58:41 UTC 2018 /etc/os-release : PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
Linux debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
1 Fri May 25 14:13:35 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >1727< CACHE_HITS/s >82<
2 Fri May 25 14:18:36 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >10743< CACHE_HITS/s >534<
3 Fri May 25 14:23:37 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18660< CACHE_HITS/s >763<
4 Fri May 25 14:28:37 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18496< CACHE_HITS/s >811<
5 Fri May 25 14:33:38 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18537< CACHE_HITS/s >757<
6 Fri May 25 14:38:38 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18647< CACHE_HITS/s >774<
7 Fri May 25 14:43:39 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18838< CACHE_HITS/s >775<
8 Fri May 25 14:48:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18686< CACHE_HITS/s >786<
9 Fri May 25 14:53:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18742< CACHE_HITS/s >782<
10 Fri May 25 14:58:41 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >18634< CACHE_HITS/s >793<
 
Fri May 25 14:57:25 UTC 2018 /etc/os-release : PRETTY_NAME="Oracle Linux Server 7.5"
Linux b5e501 4.1.12-124.15.2.el7uek.x86_64 #2 SMP Tue May 22 11:52:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 14:12:20 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >17025< CACHE_HITS/s >721<
2 Fri May 25 14:17:21 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19331< CACHE_HITS/s >792<
3 Fri May 25 14:22:21 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19271< CACHE_HITS/s >770<
4 Fri May 25 14:27:22 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19324< CACHE_HITS/s >802<
5 Fri May 25 14:32:22 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18943< CACHE_HITS/s >802<
6 Fri May 25 14:37:23 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19202< CACHE_HITS/s >818<
7 Fri May 25 14:42:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18999< CACHE_HITS/s >803<
8 Fri May 25 14:47:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19459< CACHE_HITS/s >823<
9 Fri May 25 14:52:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19138< CACHE_HITS/s >836<
10 Fri May 25 14:57:25 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18958< CACHE_HITS/s >781<

And I’ve graphed them:
CapturePGPIOPS

As I hit the storage here, I needed a few runs to warm up the storage cache and get the I/O I wanted: low latency from the storage cache, but involving context switches on the server. And basically, all 3 alternatives (Ubuntu with Linux 4.4, Debian with Linux 4.9 and OEL with the Unbreakable Kernel 4.1) behave the same. From these tests, I can say that performance is not the major criterion for choosing one of the PaaS alternatives. Just choose the distribution you like.

About pgio, it is great to get performance measures that are reliable and focused on what I want to test. I had no problems running it on all 3 platforms. I just needed apt-get / yum to install sysstat and bc, which are not there by default.

 

Cet article Which Bitnami service to choose in the Oracle Cloud Infrastructure? est apparu en premier sur Blog dbi services.

OpenSSH is now available on Windows 10

Fri, 2018-05-25 06:26

A couple of days ago, the Windows 10 April 2018 update was installed on my laptop. So what, you may say? Well, surprisingly this update provides some interesting “hidden” features and one of them concerns OpenSSH, which is now available on demand in Windows 10 (and likely available soon on Windows Server).

blog 134 - 0 - openSSH - Win10

This is obviously good news because so far I used either PuTTY or directly a bash environment on my Windows 10 laptop, available since the Anniversary Update on August 2, 2016. I know that some of my colleagues use Cygwin as well. A quick example of using the bash environment from my Win10 Pro laptop:

C:\Users\dab>bash
Performing one-time upgrade of the Windows Subsystem for Linux file system for this distribution...

mikedavem@DBI-LT-DAB:/mnt/c/Users/dab$ cat /proc/version
Linux version 4.4.0-17134-Microsoft (Microsoft@Microsoft.com) (gcc version 5.4.0 (GCC) ) #48-Microsoft Fri Apr 27 18:06:00 PST 2018

mikedavem@DBI-LT-DAB:/mnt/c/Users/dab$ ssh
usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file] [-L address]
           [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
           [-Q query_option] [-R address] [-S ctl_path] [-W host:port]
           [-w local_tun[:remote_tun]] [user@]hostname [command]

 

After applying the corresponding update, the OpenSSH client is already installed and available to use.

[dab@DBI-LT-DAB:#]> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'


Name  : OpenSSH.Client~~~~0.0.1.0
State : Installed

Name  : OpenSSH.Server~~~~0.0.1.0
State : NotPresent

 

If you also want to install the server, you just have to go through the Add-WindowsCapability cmdlet as follows:

[dab@DBI-LT-DAB:#]> Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0


Path          :
Online        : True
RestartNeeded : False
[dab@DBI-LT-DAB:#]> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'


Name  : OpenSSH.Client~~~~0.0.1.0
State : Installed

Name  : OpenSSH.Server~~~~0.0.1.0
State : Installed
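The server part is not used in the rest of this post but, for the record, to actually accept incoming connections the sshd service would also have to be started (and ideally set to start automatically); a minimal sketch:

[dab@DBI-LT-DAB:#]> Start-Service sshd
[dab@DBI-LT-DAB:#]> Set-Service -Name sshd -StartupType 'Automatic'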

 

From now on, I can use the ssh command directly from both my PowerShell and my command line environment as follows:

C:\
[dab@DBI-LT-DAB:#]> ssh
usage: ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-J [user@]host[:port]] [-L address] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port] [-Q query_option] [-R address]
           [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]]
           destination [command]

 

We will also be able to access a Linux server using either password-based or key-based authentication. Let’s try the first one (password-based authentication) against my Linux Docker private registry:

[dab@DBI-LT-DAB:#]> ssh dab@xxx.xxx.xxx.xxx
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
ECDSA key fingerprint is SHA256:7HwUjHowFNEJ3ILErsmBmgr8sqxossLV+fFt71YsBtA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xxx.xxx.xxx.xxx' (ECDSA) to the list of known hosts.
dab@xxx.xxx.xxx.xxx's password:

 

Not a big surprise here! It works as expected (assuming your SSH server is configured to accept password authentication)! Let’s now try the second method (key-based authentication). In fact, I already have a .ssh folder from a previous request to connect to our GitLab environment. For the demo, let’s use the same public/private key pair.

C:\Users\dab
[dab@DBI-LT-DAB:#]> dir .ssh


    Directory: C:\Users\dab\.ssh


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       09.05.2018     11:25           3243 id_rsa
-a----       09.05.2018     11:25            757 id_rsa.pub
-a----       25.05.2018     10:24            380 known_hosts
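
If you don’t have a key pair yet, the OpenSSH client also ships ssh-keygen to generate one. A minimal sketch (the target path below is just the default location, adjust it if needed):

# Generate a new 4096-bit RSA key pair under the default .ssh folder
ssh-keygen -t rsa -b 4096 -f "$env:USERPROFILE\.ssh\id_rsa"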

 

The next step consists of copying my public key (id_rsa.pub) to the .ssh folder of the remote Linux server as the authorized_keys file.

C:\Users\dab\.ssh
[dab@DBI-LT-DAB:#]> scp .\id_rsa.pub dab@xxx.xxx.xxx.xxx:/home/dab/.ssh/authorized_keys
id_rsa.pub
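
Keep in mind that scp used this way overwrites any existing authorized_keys file and that sshd is picky about the permissions of the .ssh folder. If keys are already present on the server, appending the new one and tightening the permissions is safer. A sketch, reusing the same placeholders:

# Append the public key instead of overwriting, then set the permissions sshd expects
Get-Content .\id_rsa.pub | ssh dab@xxx.xxx.xxx.xxx "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"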

 

To avoid retyping the passphrase for each connection, let’s start the ssh-agent service on my Windows 10 machine.

C:\Users\dab\.ssh
[dab@DBI-LT-DAB:#]> Start-Service -Name ssh-agent
C:\Users\dab\.ssh
[dab@DBI-LT-DAB:#]> Get-Service ssh-agent

Status   Name               DisplayName
------   ----               -----------
Running  ssh-agent          OpenSSH Authentication Agent
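
Note that on some builds the ssh-agent service ships with its startup type set to Disabled, in which case Start-Service throws an error until you enable it first. Something along these lines should help:

# Enable the agent service if it is disabled by default, then start it
Set-Service -Name ssh-agent -StartupType 'Automatic'
Start-Service -Name ssh-agent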

 

Then I just have to add the private key to this agent …

C:\Users\dab\.ssh
[dab@DBI-LT-DAB:#]> ssh-add.exe id_rsa
Identity added: id_rsa (id_rsa)

 

… and finally to try a connection to my Linux Server as follows:

C:\Users\dab\.ssh
[dab@DBI-LT-DAB:#]> ssh dab@xxx.xxx.xxx.xxx
Last login: Fri May 25 09:43:16 2018 from gateway

 

It works like a charm! I’m now connected to my Linux server as the dab user and I can get a picture of my docker containers. Note that the bash prompt has changed here (server name) even if it is pretty similar to my PowerShell prompt. Indeed, I customized my PowerShell profile to be similar to a bash shell in appearance and in some behaviors as well :)

[dab@localhost ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
a60f3412b864        registry:2          "/entrypoint.sh /e..."   9 months ago        Up 37 minutes       0.0.0.0:5000->5000/tcp   registry

 

See you!

Cet article OpenSSH is now available on Windows 10 est apparu en premier sur Blog dbi services.

Windocks – a different way to use SQL Server on Docker

Thu, 2018-05-24 13:14

I have spent some time writing blog posts about SQL Server on Docker over the last few months and you likely noticed they mainly concerned SQL Server on Linux. But what about SQL Server on Docker and Windows? If you take a look at the Docker images provided by Microsoft, only the 2017 version is available and it is only intended for development and testing use. Versions prior to 2017 do not seem to be supported so far (but I don’t have Microsoft’s plans on this topic in mind) and if you want to use containers with older SQL Server versions you have to start from scratch by yourself. This is not a hard task but I have in mind a discussion with one of my customers about the maintenance of SQL Server images prior to 2017: he told me he didn’t allocate resources to create and maintain such images by himself.

In addition, I recently presented, at different events including our dbi services internal event for customers, the internal project we are managing around our DMK maintenance module for SQL Server. To cut a long story short, this tool aims to provide smart maintenance capabilities for customer databases including backups, indexes, statistics etc …

Over time, the code grew and we had to support different versions of SQL Server, from 2008 to SQL Server 2017. At the same time, the number of scenarios we have to manage and test regarding the new features shipped with service packs and cumulative updates increased drastically. So we decided to move to Docker containers for several obvious reasons:

  • Containers are easy to provision for unit testing
  • Lower footprint on our local machine
  • We can easily share SQL Server images between members of our “development” team in a private registry
  • We will be able to integrate containers into a potential “future” CI pipeline

In our context, each developer has a local Docker engine installed on a Windows 10 Pro laptop but for medium/large companies, internal rules may prevent installing such a product on each individual developer laptop. Therefore, they will likely prefer to provide a dedicated environment for developers that meets the internal rules and keeps the focus on flexibility.

In such a shared infrastructure, a main concern will probably be disk space because of the potential size of the testing database(s) and the number of containers provisioned at the same time. Let’s say that several developers would like to provision one SQL Server instance each, with a testing database attached to it that comes from production through a sanitized database pipeline. The total size of this database is big enough to warn about a potential disk space issue because each instantiated container will include its own image size (roughly 10GB with SQL Server containers on Windows) + the size of the testing database – let’s say more than 50GB, and likely more in usual cases.

If you’re in one of the aforementioned scenarios, Windocks may be a good alternative.

During this month I had the opportunity to test the product in the context of our internal project and after some experimentation I admit I was pleasantly surprised by some built-in capabilities such as:

  • Windows authentication mode supported
  • SQL Server database cloning capabilities
  • The ability to use both Docker CLI and Web UI (even if I’m in favor of Docker CLI)
  • The ability to refresh a cloned database image from a differential backup

Before digging further into the above features, let’s take a few seconds to look at the Windocks architecture design.

blog 133 - 1 - windocks - architecture

We may notice some differences here. With a traditional Docker architecture, containers run on a shared operating system kernel (either Windows or Linux) whereas Windocks is a strictly application-construct-oriented solution that requires installing a SQL Server instance on the host as the base instance for SQL Server containers. This is a fundamental change that provides some interesting advantages:

  • We don’t have to rebuild the containers to get OS & Framework patches through base image updates because containers rely only on the SQL Server base instance
  • We don’t break compatibility with Windows storage functionalities such as VSS and the SQL Writer if you rely on them with third-party backup tools
  • We benefit from the underlying security and configuration of the SQL Server base instance, meaning we may use Windows authentication for instance

Let’s talk now about one interesting feature shipped with Windocks: cloned databases. Here is a picture (from Windocks) that shows a high-level workflow of using SQL Server containers with cloned databases.

blog 133 - 2 - windocks - cloned db architecture

The starting point is a full database backup or a snapshot, from which Windocks generates the corresponding VHD parent image. Each SQL Server container generated afterwards uses a writable clone of this parent image, reducing drastically the disk footprint of each container (at least when the SQL Server container is generated). This is a common workflow that customers may implement for unit testing or for a CI/CD pipeline to refresh development environments. The production database may be big in size and in this context the interest of using cloned databases becomes obvious. Another way would be to rely on mounted snapshot-based volumes from the storage provider but at this stage I didn’t test it. Maybe next time!

To give a better picture of what is possible with cloned databases, let me show you one way to implement them in the context of our DMK maintenance tool project. The development workflow of the project is as follows:

blog 133 - 3 - windocks - dev workflow

We use SSDT and GitLab to manage our DMK maintenance tool sources and we perform unit testing by provisioning one or several SQL Server containers depending on the version we want to validate at that moment. Furthermore, we developed testing scenarios based on the tSQLt framework that we run after provisioning the SQL Server containers. With such an architecture, we initially have to maintain images of the different SQL Server versions and each time we want to create a container we have to attach one copy of the customized AdventureWorks_dbi database. Let’s say we want to work with 4 containers (SQL Server 2016) at a time. We must ensure we have sufficient disk space for 4 copies of this database (5 x 4 = 20GB) + the space required for each container (10GB x 4 = 40GB) = 60GB. The dbi_tools database is intended to stay very small (512MB up to 1GB) compared to the other testing components, which is why I didn’t include it in the math.

Let’s now apply Windocks to the above context; the new architecture becomes as follows:

blog 133 - 4 - windocks - dev workflow

In this second scenario, we may include both the AdventureWorks_dbi and dbi_tools databases (including the tSQLt framework objects) in a VHD parent image. For the demo, I used a SQL Server 2016 instance installed on the host that will be controlled by the Windocks service during the container generation.

As I said previously, we may rely on the configuration and the security of the base instance. Thus, I configured my security based on both Windows authentication (dbi-services\clustadmin domain account) for SQL Server sysadmins and SQL authentication for development purposes (windock user).

In terms of configuration, the tSQLt framework requires enabling CLR at the server level, so I changed the configuration directly on the SQL Server base instance to allow all my containers to inherit this configuration change.
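
The change itself boils down to a classic sp_configure call on the base instance (a minimal sketch):

-- Enable CLR at the instance level so the containers inherit it for tSQLt
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

The query below, run against the base instance, then lists the logins my containers will inherit: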

SELECT 
	[name],
	principal_id,
	[type_desc]
FROM 
	sys.server_principals 
WHERE 
	[type] IN ('U', 'S')
	AND [name] NOT LIKE '##MS_%##'
	AND [name] NOT LIKE 'NT SERVICE%'

 

blog 133 - 41- windocks - base instance security

Here is the content of my dockerfile.

FROM mssql-2016
SETUPCLONING FULL AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.bak
SETUPCLONING FULL dbi_tools \\Win2012sql16ls\windock\dbi_tools.bak

 

You may notice some new commands here:

  • FROM mssql-2016 indicates we will use the SQL Server 2016 instance as base image.
  • SETUPCLONING FULL indicates that the VHD parent image will be generated including both the AdventureWorks_dbi and dbi_tools databases, from full backups located on a remote network path

Let’s go ahead and generate the corresponding SQL Server cloned docker image, tagging it with the -t flag:

docker build -t 2016withdmk C:\DMK\WithClone\BasedImage\

 

blog 133 - 5- windocks - cloned database image

The process may take some time to generate the VHD parent image depending on the database sizes and the backup location (local fast disk, network share bandwidth etc.).

blog 133 - 6- windocks - cloned db size

As expected, because the VHD parent image is a full byte copy of the data, its size is basically the sum of the AdventureWorks_dbi and dbi_tools database sizes.

Compared to a traditional approach, the provisioning of the associated containers will be faster irrespective of the database size. Let’s create 2 containers from the image generated previously with the following docker commands:

PS C:\DMK\WithClone> docker run --name sql20161 -d 2016withdmk
PS C:\DMK\WithClone> docker run --name sql20162 -d 2016withdmk

 

Note that by default, Windocks will choose a port within a range configured in the node.conf file (START_PORT / PORTS_PER_USER) unless you override the default behavior using the -p parameter.
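
If you prefer deterministic ports, a run similar to the following should do the trick. This one is hypothetical and not executed here; I assume -p simply takes the SQL Server port to expose, so check the Windocks documentation for the exact syntax:

PS C:\DMK\WithClone> docker run --name sql20163 -p 1450 -d 2016withdmk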

blog 133 - 7- windocks - create containers

Let’s get a picture of the existing containers. My 2 containers have been generated correctly from the 2016withdmk base image.

PS C:\DMK\WithClone> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
89344d99758b        2016withdmk         ""                  5 minutes ago       Started             1433/
4dfd1a198626        2016withdmk         ""                  4 minutes ago       Started             1434/

 

Let’s take another look at the storage side:

blog 133 - 8 - windocks - cloned databases diff

The interesting point here is that the two differential disks created during the creation of my 2 previous containers are very small in size (~70MB per container).

Let’s try to connect from SSMS to the new fresh containers:

blog 133 - 9 - windocks - ssms connection

It works and did you notice I was connected with my domain account? :)
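
For those who prefer the command line, the same check can be done with sqlcmd against the port assigned to the container. A sketch, assuming the 1434 port shown above and replacing localhost with your Windocks host if you connect remotely:

# Connect with Windows authentication to the second container and display the instance name
sqlcmd -S localhost,1434 -E -Q "SELECT @@SERVERNAME, @@VERSION;"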

The last topic I wanted to share with you is the Windocks capability to update a base image from differential backups. In a real context, you may need to refresh your environment frequently with recent data for developers and, depending on your full backup size, it might be advantageous to deal with differential backups.

Let’s consider this process with my internal project environment. During the development process we already had to extend the schema of our testing database to cover the features shipped with new versions, service packs or cumulative updates over time. Let’s say we have to add a new dbo.bigTransactionHistory_cci table to test columnstore index scenarios. We first add the concerned table, then we perform a differential backup and finally we update the 2016withdmk base image with it. Obviously, in the context of my demo, the database size is likely not big enough to take full advantage of this feature but I trust you to draw a comparison with a more realistic scenario.
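
For illustration, the preparation step could look like the following T-SQL. This is a hypothetical sketch: the column definitions are made up for the demo, only the differential backup written to the share referenced by the dockerfile really matters here.

USE AdventureWorks_dbi;
GO

-- Hypothetical table used to test columnstore index maintenance scenarios
CREATE TABLE dbo.bigTransactionHistory_cci
(
    TransactionID   BIGINT   NOT NULL,
    ProductID       INT      NOT NULL,
    TransactionDate DATETIME NOT NULL,
    Quantity        INT      NOT NULL,
    ActualCost      MONEY    NOT NULL
);
GO

CREATE CLUSTERED COLUMNSTORE INDEX cci_bigTransactionHistory_cci
ON dbo.bigTransactionHistory_cci;
GO

-- Differential backup that will feed the SETUPCLONING DIFF command
BACKUP DATABASE AdventureWorks_dbi
TO DISK = '\\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF'
WITH DIFFERENTIAL, INIT;
GO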

The image I want to update is named 2016withdmk. Note that the mssql-xxx images are in fact images of the SQL Server base instances installed on my host server.

PS C:\DMK\WithClone> docker images
REPOSITORY            TAG                 IMAGE ID            CREATED                  VIRTUAL SIZE
2016withdmk           none                b3a249ba-2cf        Less than a second ago   0 B
agitated_heisenberg   none                bbd0ce26-4bb        Less than a second ago   0 B
dotnet-4.5            none                dotnet-4.5          3 years ago              0 B
windows               none                windows             3 years ago              0 B
mssql-2008r2          none                mssql-2008r2        3 years ago              0 B
mssql-2016            none                mssql-2016          3 years ago              0 B
mssql-2014            none                mssql-2014          3 years ago              0 B

 

The content of my new dockerfile to update the 2016withdmk base image is as follows.

FROM 2016withdmk

SETUPCLONING DIFF AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF
SETUPCLONING DIFF dbi_tools \\Win2012sql16ls\windock\dbi_tools.DIFF

 

I used the differential backups of the AdventureWorks_dbi and dbi_tools databases with the SETUPCLONING DIFF command.

Let’s start updating the 2016withdmk base image with the following docker command (I tagged my new image with a 1.2 suffix):

docker build -t 2016withdmk1.2 C:\DMK\WithClone\DiffImage\

 

blog 133 - 10 - windocks - cloned database diff backup

Although using a differential backup may allow for substantial productivity gains, keep in mind that updating a VHD parent image requires creating an additional VHD parent image that becomes another full byte copy of the environment, as shown below:

blog 133 - 11- windocks - cloned db size diff

After running the same docker run command shown before, let’s get a picture of the containers running on my server. The container id 789ce49562d0 is the fresh container with updated data (2016withdmk1.2 image).

PS C:\DMK\WithClone> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
89344d99758b        2016withdmk         ""                  About an hour ago   Started             1433/
4dfd1a198626        2016withdmk         ""                  About an hour ago   Started             1434/
789ce49562d0        2016withdmk1.2      ""                  2 minutes ago       Started             1436/

 

I can retrieve the new dbo.bigTransactionHistory_cci table after connecting to the concerned SQL Server instance. I’m now able to update and execute tSQLt scenarios to test columnstore index maintenance!

blog 133 - 12- windocks - ssms connection 2
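
Running the test suite itself then comes down to the usual tSQLt calls from the dbi_tools database. For instance (the test class name is made up for the example):

-- Run every deployed tSQLt test class, or target a single class
EXEC tSQLt.RunAll;
EXEC tSQLt.Run 'ColumnstoreMaintenanceTests';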

 

In this blog post we’ve just scratched the surface of the capabilities and possibilities provided by Windocks. There are other interesting features such as encrypted passwords in the dockerfile (as we may use with Docker Swarm for instance), the support of TDE databases or, lastly, the ability to manage SQL Server on Linux. I will probably blog about them in the near future. Stay tuned!

Cet article Windocks – a different way to use SQL Server on Docker est apparu en premier sur Blog dbi services.
