Yann Neuhaus

dbi services technical blog

Dbvisit 8 Standby Daemon on Windows

Wed, 2018-08-29 13:55

In a previous blog we installed Dbvisit Standby for Windows on both servers. We assume that the database is created and that the standby is configured (see this blog). The steps are the same as on Linux, except that the commands are run in a traditional DOS terminal or in PowerShell, so we will not describe them again (see here for more details).
By default, Dbvisit will neither send nor apply the archived log files. On Windows we can use the Windows Task Scheduler or, since Dbvisit 8, the Daemon.
If you use the Windows Task Scheduler, the command to schedule is:

 dbvctl.exe -d PROD 

- dbvctl.exe is located in the Dbvisit install directory
- PROD is the name of the database
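As a hedged sketch of such a scheduled task (the install path, interval and run-as account are assumptions, not taken from this post), it could be created from an elevated prompt with schtasks:

REM Hypothetical example: ship/apply logs for PROD every 5 minutes
REM (install path, interval and run-as account are assumptions)
schtasks /create /tn "Dbvisit PROD" /sc minute /mo 5 /ru dbvisit /rp * /tr "\"C:\Program Files\Dbvisit\standby\dbvctl.exe\" -d PROD"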

Since Dbvisit 8, it is no longer necessary to use the Windows Task Scheduler to send and apply archived logs. We can use the new background process option, which runs Dbvisit Standby for each DDC (database) in the background, and manage it via the Central Console.
On Windows, the background process runs as a Windows Service, created on the primary and on the standby server for each database. Below is an example of how to create the service for the database PROD.
img1
Choose the DATABASE ACTIONS tab
img2
Choose Daemon Actions
img3
Then, when we select a host, we see the status of the daemon on that host. We create the Daemon using the INSTALL button.
img4
We provide the credentials of the dbvisit user. This user will be the owner of the service that will be created for the Daemon. Let's submit.
img5
Everything is OK. If we click again on DAEMON ACTIONS and select the host winserver1, we now see:
img6
And we can start the Daemon.
img7
The same steps have to be done on both servers.
We should then have a Windows service configured with automatic startup.
img8
And now Dbvisit will send and apply archive logs as soon as they are generated.
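To verify the result from PowerShell, a quick hedged check (the service name pattern is an assumption):

# Hypothetical check: list the Dbvisit daemon services and their startup type
Get-Service -Name "Dbvisit*" | Select-Object Name, Status, StartType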

 


Install Dbvisit 8 on Windows

Wed, 2018-08-29 13:52

In a previous blog we described how to install Dbvisit Standby on a Linux box. In this article I am going to describe the installation on a Windows machine. We are using Dbvisit 8 and Windows Server 2016. The names of my servers are winserver1 and winserver2.
The first thing you have to do is to download the Dbvisit Standby package here. A trial key will be sent to you. Before starting the installation we create a user named dbvisit (feel free to change the name) with the following properties:
img1
The dbvisit user also needs the privilege to log on as a service.
img2
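For reference, the account itself can be created from an elevated prompt; a minimal sketch (the password is a placeholder, and the "Log on as a service" right is still assigned via the Local Security Policy as shown above):

REM Hypothetical example: create the dbvisit user with a non-expiring password
net user dbvisit MyStrongPwd1! /add /expires:never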
The installation is very easy: just log in with a privileged user and run the executable. Below is the installation on the server winserver2. Note that Dbvisit Standby needs to be installed on both servers, and that we also turned off the Windows User Account Control (UAC).

Dbvnet, Dbvagent and Standby CLI components install

img3
Click on Next
img4
Click on I Agree (we don't have a choice anyway)
img5
Choose the components to install. Note that for now the central console is not installed; we will install it later. It is recommended to install the console on a separate server (this can be a Windows or Linux VM).
img6
Click on Next
img7
Here we give the user we created at the beginning
img8
We provide the password
img9
And then the installation starts
img10
The final step is to answer some configuration questions:


-----------------------------------------------------------
About to configure DBVISIT DBVNET
-----------------------------------------------------------

>>> Please specify the Dbvnet Passphrase to be used for secure connections.

The passphrase provided must be the same in both the local and remote
Dbvnet installations. It is used to establish a secure (encrypted)
Dbvnet connections

Enter a custom value:
> XXXXXXXXXXXXXXXXX

>>> Please specify the Local host name to be used by Dbvnet on this server.

Dbvnet will be listening on the local IP Address on this server which
resolve to the host name specified here.
If using a cluster or virtual IP make sure the host name or alias
specified here resolve to the IP address local to where dbvnet is
installed. The host name should resolve to IPv4 address, if not
you can use an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
>

>>> Please specify the Local Dbvnet PORT to be used.

Dbvnet will be listening on the specified port for incoming connections
from remote dbvnet connections. Please make sure that this port is not
already in use or blocked by any firewall. You may choose any value
between 1024 and 65535, however the default of 7890 is recommended.

Enter a custom value or press ENTER to accept default [7890]:
>

>>> Please specify the Remote host name to be used by Dbvnet.

By default Dbvnet will use this remote hostname for any remote
connections. Dbvnet must be installed and configured on the specified
remote host. If using a cluster or virtual IP make sure the host name
or alias specified here resolve to the IP address local to where dbvnet
is installed.
If you are unsure about the remote host name during installation, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not
you can use an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
> winserver1

>>> Please specify the Remote Dbvnet PORT to be used.

Dbvnet will connect to the remote server on this specified port.
On the remote host Dbvnet will be listening on the specified port for
incoming connections. Please make sure that this port is not already in
use or blocked by any firewall. You may choose any value between 1024
and 65535, however the default of 7890 is recommended.

Enter a custom value or press ENTER to accept default [7890]:
>

-----------------------------------------------------------
Summary of the Dbvisit DBVNET configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVNET_PASSPHRASE XXXXXXXXXXXX
DBVNET_LOCAL_HOST winserver2
DBVNET_LOCAL_PORT 7890
DBVNET_REMOTE_HOST winserver1
DBVNET_REMOTE_PORT 7890

Press ENTER to continue

-----------------------------------------------------------
About to configure DBVISIT DBVAGENT
-----------------------------------------------------------

>>> Please specify the host name to be used for the Dbvisit Agent.

The Dbvisit Agent (Dbvagent) will be listening on this local address.
If you are using the Dbvserver (GUI) - connections from the GUI will be
established to the Dbvisit Agent. The Dbvisit Agent address must be
visible from the Dbvserver (GUI) installation.
If using a cluster or virtual IP make sure the host name or alias
specified here resolve to the IP address local to where dbvnet is
installed.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
>

>>> Please specify the listening PORT number for Dbvagent.

The Dbvisit Agent (Dbvagent) will listening on the specified port for
incoming requests from the GUI (Dbvserver). Please make sure that this
port is not already in use or blocked by any firewall. You may choose
any value between 1024 and 65535, however the default of 7891 is
recommended.

Enter a custom value or press ENTER to accept default [7891]:
>

>>> Please specify passphrase for Dbvagent

Each Dbvisit Agent must have a passpharse specified. This passphrase
does not have to match between all the servers. It will be used to
establish a secure connection between the GUI (Dbvserver) and the
Dbvisit Agent.

Enter a custom value:
> XXXXXXXXXXXXXXXXXXXX

-----------------------------------------------------------
Summary of the Dbvisit DBVAGENT configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVAGENT_LOCAL_HOST winserver2
DBVAGENT_LOCAL_PORT 7891
DBVAGENT_PASSPHRASE XXXXXXXXXXXXXXXXXXX

Press ENTER to continue

No need to configure standby, skipped.
Copied file C:\Program Files\Dbvisit\dbvnet\conf\dbvnetd.conf to C:\Program Files\Dbvisit\dbvnet\conf\dbvnetd.conf.201808260218
DBVNET config file updated
Copied file C:\Program Files\Dbvisit\dbvagent\conf\dbvagent.conf to C:\Program Files\Dbvisit\dbvagent\conf\dbvagent.conf.201808260218
DBVAGENT config file updated

-----------------------------------------------------------
Component Installed Version
-----------------------------------------------------------
standby 8.0.22_36_gb602000a
dbvnet 8.0.22_36_gb602000a
dbvagent 8.0.22_36_gb602000a
dbvserver not installed

-----------------------------------------------------------

-----------------------------------------------------------
About to start service DBVISIT DBVNET
-----------------------------------------------------------
Successfully started service DBVISIT DBVNET

-----------------------------------------------------------
About to start service DBVISIT DBVAGENT
-----------------------------------------------------------
Successfully started service DBVISIT DBVAGENT

>>> Installation completed
Install log C:\Users\dbvisit\AppData\Local\Temp\dbvisit_install.log.201808260214.

Press ENTER to continue

We can then finish the installation.
img11
Dbvserver console install

The installation of dbvserver (the central console) is done in the same way. We will not show screenshots, only the questions we have to answer. In our case it is installed on the primary server winserver1.

-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------

Installing dbvserver...

-----------------------------------------------------------
About to configure DBVISIT DBVSERVER
-----------------------------------------------------------

>>> Please specify the host name to be used for Dbvserver

The Dbvisit Web Server (Dbvserver) will be listening on this local
address. If using a cluster or virtual IP make sure the host name or
alias specified here resolve to the IP address local to where Dbvserver
is installed.
If you are unsure about the remote host name during installation, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver1]:
>
>>> Please specify the listening port number for Dbvserver on the local server

You may choose any value between 1024 and 65535. The default recommended
value is 4433.

Note: if you can not access this port after the installation has
finished, then please double-check your server firewall settings
to ensure the selected port is open.
Enter a custom value or press ENTER to accept default [4433]:
>
>>> Please specify the host name (or IPv4 address) to be used for Dbvserver public interface

In most cases this will be the same as the listener address, if not sure
use the same value as the listener address.

The Dbvisit Web Server (Dbvserver) will be listening on the local
listener address. The public address can be set to an external IP
example a firewall address in case the Central Console (Dbvserver)
and agents (Primary and Standby Database servers) have a firewall
inbetween them. The public interface address will be passed to
the agents during communication for sending information back.
If you are unsure about the public host address, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.
Enter a custom value or press ENTER to accept default [winserver1]:
>

-----------------------------------------------------------
Summary of the Dbvisit DBVSERVER configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVSERVER_LOCAL_HOST winserver1
DBVSERVER_LOCAL_PORT 4433
DBVSERVER_PUBLIC_HOST winserver1

Press ENTER to continue


Once the console is installed, we can log in using the following URL

https://winserver1:4433

with the default credentials admin/admin (note that you have to change them once logged in).

Conclusion: In this blog we have shown how to install Dbvisit on a Windows server. In an upcoming blog we will see how to create a standby database on a Windows server and how to schedule log shipping and log apply.

 


SQL Server Tips: Drop a database-user attached to a service…

Wed, 2018-08-29 08:19

A few weeks ago, I had a little issue when I tried to drop a database user without login.

Unfortunately, I made a little mistake at the beginning…
Every morning I receive a report, built with the useful command sp_validatelogins, that checks whether all AD logins (computers, groups, users) registered on the SQL Server instances still exist in the AD.
This report indicated that a computer named dbi\server_name$ was no longer in the AD.
I dropped the login without a problem and without verifying the binding with database users (this was my mistake…). :-?

The day after, I received another alert: an orphan database user on the SCOM database OperationsManager12.

My reaction was to connect to the instance and drop the user, as I usually do when I get this alert.
error_drop_server01
As you can see, I receive the error message:
Msg 15284, Level 16, State 1, Line 15
The database principal has granted or denied permissions to objects in the database and cannot be dropped.

I googled the error and found some explanations: the user has granted permissions on Service Broker services, and I used this query to find them:

select * from sys.database_permissions where grantor_principal_id = user_id('dbi\server_name$')

error_drop_server02
The user is linked to the service with ID 65536. I now search for the service with this ID.
error_drop_server03
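The lookup behind this screenshot is presumably a query against sys.services; a hedged sketch:

select name, service_id, principal_id
from sys.services
where service_id = 65536;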
With the name of the service, I can revoke the SEND permission granted by this user.
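Based on the service name that appears later in this post, the attempted command has this form:

REVOKE SEND ON SERVICE::Service_mid10_39_40_55_pid4288_adid2_r479087710 FROM [dbi\server_name$]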
error_drop_server04
And I receive this error:
Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.
With Google's help, I retried with the EXECUTE AS command, using the server name as the user:

EXECUTE AS USER= 'dbi\server_name$'
REVOKE SEND ON SERVICE::Service_mid10_39_40_55_pid4288_adid2_r479087710 FROM [dbi\server_name$]
REVERT

error_drop_server05
As expected, I received a new error:
Msg 15404, Level 16, State 11, Line 37
Could not obtain information about Windows NT group/user ‘BISAD\WBNSS55$’, error code 0x534

The login no longer exists, so this error is expected.
And now, what to do?
The only workaround I found is to drop the service, drop the user and recreate the service with dbo as owner (before dropping, script out the CREATE SERVICE statement first, as sketched below):
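A hedged sketch for scripting that CREATE SERVICE statement from the catalog views (any contract bindings, like the one visible in the CREATE SERVICE statement below, must be added manually):

SELECT 'CREATE SERVICE [' + s.name + '] ON QUEUE ['
       + SCHEMA_NAME(q.schema_id) + '].[' + q.name + '];'
FROM sys.services AS s
JOIN sys.service_queues AS q ON s.service_queue_id = q.object_id
WHERE s.name = 'Service_mid10_39_40_55_pid4288_adid2_r479087710';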

DROP SERVICE [Service_mid10_39_40_55_pid4288_adid2_r479087710]
GO

error_drop_server06

USE [OperationsManager12]
GO
DROP USER [dbi\server_name$]
GO

error_drop_server07

CREATE SERVICE [Service_mid10_39_40_55_pid4288_adid2_r479087710]  ON QUEUE [dbo].[Queue_mid10_39_40_55_pid4288_adid2_r479087710] ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification])
GO

error_drop_server08
Et voilà! It was a little bit tricky to find a solution, but this one works! 8-)

 


COMMIT

Wed, 2018-08-29 02:00
By Franck Pachot

COMMIT is the SQL statement that ends a transaction, with two goals: persistence (changes are durable) and sharing (changes are visible to others). That's a weird title and introduction for the 499th blog post I've written on the dbi-services blog. 499 posts in nearly 5 years, roughly two blog posts per week. This activity was mainly motivated by the will to persist and share what I learn every day.

Persistence is primarily for myself: writing a test case with a little explanation is a good way to remember an issue encountered, and Google helps to get back to it when the problem is encountered again later. Sharing is partly for others: I learn a lot from what others share (blogs, forums, articles, mailing lists,…) and it makes sense to also share what I learn. But in addition to that, publishing an idea is also a good way to validate it. If something is partially wrong or badly explained, or just benefits from exchanging ideas, then I'll get feedback through comments, tweets, or e-mails.

This high throughput of things I learn every day comes from multiple sources. In a consulting company, going from one customer to another means different platforms, versions, editions, different requirements, different approaches. Our added value is our experience. From all the problems seen in all those environments, we have built knowledge, best practices and tools (this is the idea of DMK) to bring reliable and efficient solutions to customer projects. But dbi services also invests a lot in research and training, in order to build this knowledge proactively, before encountering the problems at customers. A lot of blog posts were motivated by lab problems only (beta testing, learning new features, setting up a proof of concept before proposing it to a customer), which were then encountered later at customers, with faster solutions because they had been investigated before. dbi services also provides workshops for all technologies, and preparing the training exercises, as well as giving the workshops, was also a great source of blog posts.

I must say that dbi services is an amazing company in this area. Five years ago, I blogged in French on developpez.com and answered forums such as dba-village.com, and wrote a few articles for SOUG. But as soon as I started at dbi services, I passed the OCM, I presented for the first time in public, at DOAG, and then at many local and international conferences. I attended my first Oracle Open World. I became ACE and later ACE Director. The blogging activity is one aspect only. What the dbi services Technology Organization produces is amazing, for the benefit of the customers and the consultants.

You may have heard that I’m going to work in the database team at CERN, which means quiescing my consulting and blogging activity here. For sure I’ll continue to share, but probably differently. Maybe on the Databases at CERN blog, and probably posting on Medium. Blogs will be also replicated to http://www.oaktable.net/ of course. Anyway, it is easy to find me on LinkedIn or Twitter. For sure I’ll be at conferences and probably not only Oracle ones.

I encourage you to continue to follow the dbi services blog, as I'll do. Many colleagues are already sharing on all technologies, and new ones are coming. Even if my goal was the opposite, I'm aware that publishing so often may have discouraged other authors from doing so. I'm now releasing some bandwidth to them. The dbi services blog is in 9th position in the Top-100 Oracle blogs and 27th position in the Top-60 Database blogs, with 6 blog posts a week on average. And there are a lot of non-database topics covered as well. So stay tuned on https://blog.dbi-services.com/.

 


RMAN PITR recover table Oracle 12c

Tue, 2018-08-28 10:18

At one client's site, I had to restore a table that someone had partially deleted one week before. Before Oracle 12c, we had to duplicate the target database to another server, and then export and import the data back into the target database. Depending on the database size, this could take a lot of time, and as nobody knew when the delete happened, it was more practical to use the RMAN recover table command in order to get multiple versions of the table content.

First, as a precaution, we save the application table:

SQL> create table appuser.employe_save as select * from appuser.employe;

Table created.

My backups are configured on sbt_tape with DD Boost, so I thought I only had to run a command like this:

run {
ALLOCATE CHANNEL C1 DEVICE TYPE SBT_TAPE PARMS 'BLKSIZE=1048576, 
SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, 
SBT_PARMS=(CONFIG_FILE=/opt/dpsapps/dbappagent/config/oracle_ddbda_proddb.cfg)' 
FORMAT '%d_%U' ;
recover table appuser.employe
until time "to_date('16-AUG-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
auxiliary destination '/tmp/proddb/aux';
}

But I got this error message:

RMAN-03002: failure of recover command at 08/23/2018 10:50:04
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06101: no channel to restore a backup or copy of the control file

The problem is documented in bug 17089942:

Table recovery fails when channels are allocated manually within a run block. The solution consists of defining the channel device type in the RMAN configuration:

rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Aug 23 13:52:39 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRODDB (DBID=271333692)
connected to recovery catalog database

RMAN> configure channel device type sbt_tape parms 'BLKSIZE=1048576, 
SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, 
SBT_PARMS=(CONFIG_FILE=/opt/dpsapps/dbappagent/config/oracle_ddbda_proddb.cfg)';

starting full resync of recovery catalog
full resync complete
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'BLKSIZE=1048576, 
SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, 
SBT_PARMS=(CONFIG_FILE=/opt/dpsapps/dbappagent/config/oracle_ddbda_proddb.cfg)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Then, connected with RMAN, we can run the following recover command to restore the employe table under the new name employe_16082018:

RMAN> run {
 recover table appuser.employe
until time "to_date('16-AUG-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
auxiliary destination '/tmp/proddb/aux'
remap table appuser.employe:employe_16082018;
}

What happens? Oracle creates a pseudo database under /tmp/proddb/aux with SYSTEM, SYSAUX, TEMP, UNDO and data tablespaces, restores the appuser.employe table as of the specified date and renames it to the specified new name. Finally, Oracle deletes the pseudo database.

RMAN> run {
2> recover table appuser.employe
3> until time "to_date('16-AUG-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
4> auxiliary destination '/tmp/PRODDB/aux'
5> remap table appuser.employe:employe_16082018;
6> }

Starting recover at 23-AUG-2018 14:03:05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=765 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=2562 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: database app agent Oracle v4.5.0.0

Creating automatic instance, with SID='ecvh'

initialization parameters used for automatic instance:
db_name=PRODDB
db_unique_name=ecvh_pitr_PRODDB
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u00/app/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/tmp/PRODDB/aux
log_archive_dest_1='location=/tmp/PRODDB/aux'
#No auxiliary parameter file used

…..

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_ecvh_cesy" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_ecvh_cesy":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "APPUSER"."EMPLOYE_16082018"           7.137 MB   16173 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_ecvh_cesy" successfully completed at Thu Aug 23 14:10:28 2018 elapsed 0 00:00:10
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_temp_fqx8w89p_.tmp deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/onlinelog/o1_mf_3_fqx90jyn_.log deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/onlinelog/o1_mf_2_fqx90hyd_.log deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/onlinelog/o1_mf_1_fqx90gwo_.log deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/datafile/o1_mf_affac_1_fqx8xybx_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_sysaux_fqx8p1p7_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_undotbs1_fqx8nskn_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_system_fqx8olyx_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/controlfile/o1_mf_fqx8nb57_.ctl deleted
auxiliary instance file tspitr_ecvh_63884.dmp deleted
Finished recover at 23-AUG-2018 14:10:29

The recovery was quite fast, so I was able to run multiple recoveries with different target times, which allowed me to determine when the delete happened:

SQL> select table_name from all_tables where owner = 'APPUSER' and table_name like 'EMPLOYE%'

TABLE_NAME
--------------------------------------------------------------------------------
EMPLOYE
EMPLOYE_16082018
EMPLOYE_22072018
EMPLOYE_SAVE

SQL> select count(*) from appuser.employe_22072018;

  COUNT(*)
----------
     16141

SQL> r
  1* select count(*) from appuser.employe_16082018

  COUNT(*)
----------
     16173

SQL> select count(*) from appuser.employe;

  COUNT(*)
----------
     16226

I had already tested this recovery feature on a test database in my own virtual machine. Running the recover command on a production database allowed me to discover the Oracle bug that occurs when the backups are on tape. Finally, DD Boost with RMAN is so fast that you should not hesitate to restore tables with Oracle 12c, even with a huge volume of data.

 


Deploy WebLogic docker images using Docker Toolbox and Virtual Box on Windows

Tue, 2018-08-28 02:16

I was interested in running Docker on my Windows machine and found Docker Toolbox for Windows, which configures itself with the already installed VirtualBox at installation time.

Once installed, you can start the Docker QuickStart shell, preconfigured for a Docker command-line environment. At startup it starts a VM named default and is then ready to work with Docker.
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/

docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com

Start interactive shell
$
The “docker-machine env” command displays the machine environment that has been created:
$ docker-machine env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="C:\Users\PBR\.docker\machine\machines\default"
export DOCKER_MACHINE_NAME="default"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"
# Run this command to configure your shell:
# eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env)

Here is how to directly set the environment from it:

$ eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env)

Once the environment is set, it can be displayed as follows:


$ docker info
Containers: 9
Running: 0
Paused: 0
Stopped: 9
Images: 2
Server Version: 18.06.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 34
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.93-boot2docker
Operating System: Boot2Docker 18.06.0-ce (TCL 8.2.1); HEAD : 1f40eb2 - Thu Jul 19 18:48:09 UTC 2018
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.955GiB
Name: default
ID: AV7B:Z7GA:ZWLU:SNMY:ALYL:WTCT:2X2F:NHPY:2TRP:VK27:JY3L:PHJO
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pbrand
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

I will use a Docker image provided by Oracle on the Docker Store: the Oracle WebLogic 12.2.1.3 image. First I need to sign in to the Docker Store:

docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pbrand
Password:
Login Succeeded

Then I can pull the Oracle WebLogic 12.2.1.3 image


docker pull store/oracle/weblogic:12.2.1.3
12.2.1.3: Pulling from store/oracle/weblogic
9fd8609e6e4d: Pull complete
eac7b4a33e34: Pull complete
b6f7d13c859b: Pull complete
e0ca246b2272: Pull complete
7ba4d6bfba43: Pull complete
5e3b8c4731f0: Pull complete
97623ceb6339: Pull complete
Digest: sha256:4c7ce451c093329784a2808a55cd4fc4f1e93d8444b8492a24148d283936add9
Status: Downloaded newer image for store/oracle/weblogic:12.2.1.3

Display all images in my Docker:


$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
store/oracle/weblogic 12.2.1.3 c6bb22ff0ea8 2 weeks ago 1.14GB

In the Docker repository, for the Oracle WebLogic 12.2.1.3 image, it is written that the administrator user should be provided using a domain.properties file with the format below, passed on the command line when starting the Docker image.
The format of the domain.properties file is key/value pairs:

username=myadminusername
password=myadminpassword

The command line suggested is the following:


$ docker run -d -p 7001:7001 -p 9002:9002 -v $PWD/domain.properties:/u01/oracle/properties/domain.properties store/oracle/weblogic:12.2.1.3

This run command is fine on Linux but doesn't suit a Windows environment: the domain.properties file was created on the C: drive on Windows, and the volume mapping can't use environment variables like PWD.
In my case, the Docker run command to use is the following:

$ docker run -d --name wls12213 -p 7001:7001 -p 9002:9002 -v //c/Users/PBR/docker_weblogic/domain.properties:/u01/oracle/properties/domain.properties store/oracle/weblogic:12.2.1.3
670fc3bd2c8131b71ecc6a182181d1f03a4832a4c0e8d9d530e325e759afe151

With the -d option, only the container ID is displayed, no logs.
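To check that the container is actually running before reading the logs, docker ps filtered on the container name does the job:

$ docker ps --filter name=wls12213 --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"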

Checking the logs using the docker logs command:

$ docker logs wls12213

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

domain_name : [base_domain] admin_listen_port : [7001] domain_path : [/u01/oracle/user_projects/domains/base_domain] production_mode : [prod] admin name : [AdminServer] administration_port_enabled : [true] administration_port : [9002]

I noticed from the logs that the administration channel is enabled and listening on HTTPS port 9002. The URL to browse to the WebLogic Administration Console is then:
https://192.168.99.100:9002/console
wls12213_console_servers1

 


Deploying SQL Server on MiniShift / RedHat OpenShift

Fri, 2018-08-24 06:19

Currently we are beginning to see customers adopting containerization for SQL Server databases (mainly driven by CI/CD and DevOps trends). A lot of them use RedHat OpenShift as their container management platform. For my part, I didn't want to set up a complete OpenShift infrastructure in my lab just to test my SQL Server pod deployment on such an infrastructure. I installed MiniShift instead, which comes with a one-node OpenShift cluster and perfectly meets my requirements.

 

blog 143 -  0 - banner

 

I'll be running MiniShift on my Windows 10 laptop and I will use Hyper-V as the hypervisor for MiniShift. I used the following MiniShift configuration settings. Just remember that the SQL Server memory requirement is 2GB, so I increased the default value to 6GB to be more comfortable running my SQL Server pod. I also moved the MiniShift default folders to another disk.

[dab@DBI-LT-DAB:#]> minishift config set vm-driver hyperv
[dab@DBI-LT-DAB:#]> minishift config set hyperv-virtual-switch Internet
[dab@DBI-LT-DAB:#]> minishift config set memory 6GB
$env:MINISHIFT_HOME="T:\minishift\"

 

Let’s start MiniShift:

[dab@DBI-LT-DAB:#]> minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'hyperv' is supported on this platform ... OK
-- Checking if Hyper-V driver is installed ... OK
-- Checking if Hyper-V driver is configured to use a Virtual Switch ...
   'Internet' ... OK
-- Checking if user is a member of the Hyper-V Administrators group ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'hyperv' hypervisor ...
-- Starting Minishift VM ................................................ OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 10% used OK
-- OpenShift cluster will be configured with ...
   Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........
Deleted existing OpenShift container
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.0.17 as the host IP
Using 192.168.0.17 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.

The server is accessible via web console at:

https://192.168.0.17:8443

…

 

I also needed to configure the docker and oc environments to access them from my PowerShell console.

& minishift docker-env | Invoke-Expression
& minishift oc-env | Invoke-Expression

 

Configuration done. Let's create my first project:

[dab@DBI-LT-DAB:#]> oc new-project mssqlserver --description="mssqlserver deployment on Minishift" --display-name="mssqlserver project"
Now using project "mssqlserver" on server "https://192.168.0.17:8443".

 

Let’s get a list of existing projects:

[dab@DBI-LT-DAB:#]> oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':

  * mssqlserver - mssqlserver project
    myproject - My Project

Using project "mssqlserver" on server "https://192.168.0.17:8443".

 

I will need to use an OpenShift private registry for my tests:

[dab@DBI-LT-DAB:#]> minishift openshift registry
172.30.1.1:5000

 

My OpenShift registry contains the following images by default:

[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
openshift/origin-web-console       v3.9.0              aa12a2fc57f7        8 weeks ago         495MB
openshift/origin-docker-registry   v3.9.0              8e6f7a854d66        8 weeks ago         465MB
openshift/origin-haproxy-router    v3.9.0              448cc9658480        8 weeks ago         1.28GB
openshift/origin-deployer          v3.9.0              39ee47797d2e        8 weeks ago         1.26GB
openshift/origin                   v3.9.0              4ba9c8c8f42a        8 weeks ago         1.26GB
openshift/origin-pod               v3.9.0              6e08365fbba9        8 weeks ago         223MB

 

For my tests, I picked my custom dbi services image for SQL Server used for our DMK maintenance tool. The next steps consisted of building, tagging and uploading the corresponding image to my OpenShift integrated registry. Image tagging follows the [registry_ip]:[port]/[project]/[image]:[tag] pattern:

[dab@DBI-LT-DAB:#]> docker tag dbi/dbi_linux_sql2017:2017-CU4 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
172.30.1.1:5000/mssqlserver/dbi_linux_sql2017   2017-CU4            d37ecabe87e8        22 minutes ago      1.42GB
dbi/dbi_linux_sql2017                     2017-CU4            d37ecabe87e8        22 minutes ago      1.42GB

[dab@DBI-LT-DAB:#]> docker push 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
The push refers to a repository [172.30.1.1:5000/mssqlserver/dbi_linux_sql2017]
2e3c7826613e: Pushed
66ccaff0cef8: Pushed
…

 

My custom image is now available as an image stream on OpenShift.

Let's first try to deploy my SQL Server pod in the mssqlserver project through the web console. The task is easy: choose deployment from an image, then search for the corresponding image available as an image stream in the OpenShift integrated registry. In my case the deployment was OK after configuring some environment variable values.
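The same deployment could presumably also be scripted from the CLI; a hedged sketch with oc new-app (the environment variable values mirror those used in the deployment configuration shown later in this post):

oc new-app --image-stream=mssqlserver/dbi_linux_sql2017:2017-CU4 --name=dbi-linux-sql2017 -e ACCEPT_EULA=Y -e DMK=Y -e MSSQL_SA_PASSWORD=Password1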

blog 143 - 1 - mssql pod

blog 143 - 3 - mssql deployment variables

From the web console you have access to the pod logs. In my case, they correspond to the SQL Server error log during the startup phase. My custom image includes creating a custom dbi_tools database as well as installing the tSQLt framework.

blog 143 - 4 - mssql logs

The final step consists of exposing the SQL Server pod to the outside world (it is not exposed by default):

[dab@DBI-LT-DAB:#]> oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
dbi-linux-sql2017-1-9vvfw   1/1       Running   0          1h

C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc port-forward dbi-linux-sql2017-1-9vvfw 1433:1433

 

Let’s try a connection from mssql-cli tool:

[dab@DBI-LT-DAB:$]> mssql-cli -S 127.0.0.1 -U sa -P Password1
Version: 0.15.0
Mail: sqlcli@microsoft.com
Home: http://github.com/dbcli/mssql-cli
master> select *
....... from sys.dm_os_host_info;
+-----------------+---------------------+----------------+---------------------------+------------+-----------------------+
| host_platform   | host_distribution   | host_release   | host_service_pack_level   | host_sku   | os_language_version   |
|-----------------+---------------------+----------------+---------------------------+------------+-----------------------|
| Linux           | Ubuntu              | 16.04          |                           | NULL       | 0                     |
+-----------------+---------------------+----------------+---------------------------+------------+-----------------------+
(1 row affected)
Time: 0.405s
master>

 

Done!

This is my first deployment, but we can do better here. Indeed, in my previous scenario, I didn't set up a persistent volume to host my database files, nor did I use OpenShift secrets to protect my credentials. Let's do it!

Let's first create a persistent volume. The developer user doesn't have permission to manage volumes on the cluster, so let's switch to the system user:

[dab@DBI-LT-DAB:#]> oc login -u system:admin
Logged into "https://192.168.0.17:8443" as "system:admin" using existing credentials.

 

OpenShift runs on top of K8s, which is object-oriented. Objects can also be deployed from deployment files, and this is currently my favorite approach for many reasons. I configured both PersistentVolume and PersistentVolumeClaim objects in a deployment file as follows. Note that the hostPath value corresponds to the local path in the MiniShift cluster set up in a previous step.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-sql
  labels:
    name: pv-data-sql
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  hostPath:
    path: /mnt/sda1/var/lib/minishift/openshift.local.pv/pv0069
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-data-sql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: slow
  selector:
    matchLabels:
      name: pv-data-sql

 

Let’s deploy both my persistent volume and persistent volume claim …

[dab@DBI-LT-DAB:#]> oc create -f .\docker_openshift_storage.yaml
persistentvolume "pv-data-sql" created
persistentvolumeclaim "pv-claim-data-sql" created

 

… and get the status of my persistent volume deployment:

[dab@DBI-LT-DAB:#]> oc get pvc
NAME                STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim-data-sql   Bound     pv-data-sql   5Gi        RWO            hostpath       1m
[dab@DBI-LT-DAB:#]> oc get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON    AGE
pv-data-sql   5Gi        RWO            Retain           Bound       mssqlserver/pv-claim-data-sql   hostpath                 1m
…

 

The “Bound” status indicates that everything seems to be OK.

Let's continue and add an OpenShift secret from a deployment file:

apiVersion: v1
kind: Secret
metadata:
  name: mssql-env
stringData:
  MSSQL_SA_PASSWORD: Password1

[dab@DBI-LT-DAB:#]> oc create -f .\docker_openshift_mssql_secret.yaml
secret "mssql-env" created
C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc get secret
NAME                       TYPE                                  DATA      AGE
…
mssql-env                  Opaque                                1         1h

 

At this step there are different ways to deploy a pod; I finally used a deployment configuration file as follows:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: mssql
  name: dbi-linux-sql2017
  namespace: mssqlserver
spec:
  replicas: 1
  selector:
    app: mssql
    deploymentconfig: dbi-linux-sql2017
  strategy:
    type: Rolling
  template:
    metadata:
      labels:
        app: mssql
        deploymentconfig: dbi-linux-sql2017
    spec:
      containers:
        - env:
            - name: ACCEPT_EULA
              value: 'Y'
            - name: DMK
              value: 'Y'
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: MSSQL_SA_PASSWORD
                  name: mssql-env
          envFrom:
            - secretRef:
                name: mssql-env
          image: 
            172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
          imagePullPolicy: Always
          name: dbi-linux-sql2017
          ports:
            - containerPort: 1433
              protocol: TCP
          volumeMounts:
            - mountPath: /var/opt/mssql/
              name: volume-x1d5y
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
        - name: volume-x1d5y
          persistentVolumeClaim:
            claimName: pv-claim-data-sql
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - dbi-linux-sql2017
        from:
          kind: ImageStreamTag
          name: 'dbi_linux_sql2017:2017-CU4'
          namespace: mssqlserver
      type: ImageChange

 

… To deploy my SQL Server pod:

[dab@DBI-LT-DAB:#]> oc create -f .\deployment-config-mssql.yml
deploymentconfig "dbi-linux-sql2017" created

 

Once again, I exposed the corresponding service port to connect from my laptop, and the connection to my SQL Server pod was successful again. Note that the pod is different from the first time: updating my configuration led K8s to spin up another container.

[dab@DBI-LT-DAB:#]> oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
dbi-linux-sql2017-1-9ddfbx   1/1       Running   0          1h

C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc port-forward dbi-linux-sql2017-9ddfbx 1433:1433
Forwarding from 127.0.0.1:1433 -> 1433

 

Finally, let's take a look at the MiniShift cluster storage layer to get a picture of my SQL Server database files, including data, log and secrets under /var/opt/mssql:

[dab@DBI-LT-DAB:#]> minishift ssh

[root@minishift ~]# ll /mnt/sda1/var/lib/minishift/openshift.local.pv/pv0069/
total 0
drwxr-xr-x. 2 root root 202 Aug 23 15:07 data
drwxr-xr-x. 2 root root 232 Aug 23 15:18 log
drwxr-xr-x. 2 root root  25 Aug 23 15:06 secrets

 

I went quickly over some topics in this write-up that probably deserve more detail, and there are others left to investigate. I will have other opportunities to share my thoughts on them in the context of SQL Server database scenarios. Stay tuned!


Oracle 18c DataGuard with Nologging Mode

Thu, 2018-08-23 15:04

With Oracle 18c, database nologging has been extended with two new modes: Standby Nologging for Load Performance and Standby Nologging for Data Availability. These modes provide better support for use in an Oracle Active Data Guard environment without significantly increasing the amount of redo generated. The documentation gives the following definitions:

FORCE LOGGING mode prevents any load operation from being performed in a nonlogged manner. This can slow down the load process because the loaded data must be copied into the redo logs.

STANDBY NOLOGGING FOR DATA AVAILABILITY mode causes the load operation to send the loaded data to each standby through its own connection to the standby. The commit is delayed until all the standbys have applied the data as part of running managed recovery in an Active Data Guard environment.

STANDBY NOLOGGING FOR LOAD PERFORMANCE is similar to the previous mode except that the loading process can stop sending the data to the standbys if the network cannot keep up with the speed at which data is being loaded to the primary. In this mode it is possible that the standbys may have missing data, but each standby automatically fetches the data from the primary as a normal part of running managed recovery in an Active Data Guard environment.
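For reference, each of these modes is selected on the primary with the corresponding ALTER DATABASE statement (one mode at a time):

SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE SET STANDBY NOLOGGING FOR DATA AVAILABILITY;
SQL> ALTER DATABASE SET STANDBY NOLOGGING FOR LOAD PERFORMANCE;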

In this blog I test the STANDBY NOLOGGING FOR LOAD PERFORMANCE mode, using two virtual machines.
This mode is enabled in the primary database using the following command:

SQL> ALTER DATABASE SET STANDBY NOLOGGING FOR LOAD PERFORMANCE;

Database altered.

SQL> select db_unique_name,force_logging from v$database;

DB_UNIQUE_NAME                 FORCE_LOGGING
------------------------------ ---------------------------------------
CONT18C_SITE                   STANDBY NOLOGGING FOR LOAD PERFORMANCE

We then build a Data Guard environment (steps not shown). Below is the Data Guard configuration:

DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 3 seconds ago)

DGMGRL>

Now that the Data Guard is built, let's perform some nologging operations on the primary and see whether they are replicated on the standby.

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> create tablespace TBS_NOLOG datafile '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf' size 5M nologging;

Tablespace created.
SQL> create table testnlog nologging tablespace TBS_NOLOG as select * from hr.EMPLOYEES;

Table created.

SQL>

On the standby, when we query the table, we get the following errors:

SQL> select db_unique_name,force_logging,open_mode from v$database;

DB_UNIQUE_NAME  FORCE_LOGGING                           OPEN_MODE
--------------- --------------------------------------- --------------------
CONT18C_SITE1   STANDBY NOLOGGING FOR LOAD PERFORMANCE  READ ONLY WITH APPLY

SQL> show con_name

CON_NAME
------------------------------
PDB1


SQL> select count(*) from testnlog;
select count(*) from testnlog
       *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 13, block # 163)
ORA-01110: data file 13: '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

Why did the replication not work? In fact, in this licensing document https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-0F9EB85D-4610-4EDF-89C2-4916A0E7AC87

we can verify that the feature "Oracle Data Guard: Automatic Correction of Non-logged Blocks at a Data Guard Standby Database" is only available with Oracle Database Enterprise Edition on Engineered Systems (EE-ES) and in the Cloud.

So let's simulate an Exadata using the hidden parameter "_exadata_feature_on" on both servers:

SQL> alter system set "_exadata_feature_on"=true scope=spfile;

System altered.

After restarting both databases, let's try again.
On the primary database:

SQL> drop table testnlog;

Table dropped.

SQL> create table testnlog nologging tablespace TBS_NOLOG as select * from hr.EMPLOYEES;

Table created.

SQL>

On the standby, we can see that the nologged data is now replicated:

SQL> select db_unique_name,force_logging,open_mode from v$database;

DB_UNIQUE_NAME  FORCE_LOGGING                           OPEN_MODE
--------------- --------------------------------------- --------------------
CONT18C_SITE1   STANDBY NOLOGGING FOR LOAD PERFORMANCE  READ ONLY WITH APPLY

SQL>

SQL> select count(*) from testnlog;

  COUNT(*)
----------
       107

SQL>

Conclusion:
With Oracle 18c, we can now run load operations in a standby nologging mode and still have the data available on the standby. But this feature is only supported on the following editions:

- Oracle Database Enterprise Edition on Engineered Systems (On-Premises, EE-ES): Oracle Database Enterprise Edition software installed on an on-premises engineered system (Oracle Exadata Database Machine or Oracle Database Appliance).
- Oracle Database Cloud Service Enterprise Edition (Cloud, DBCS EE): includes Oracle Database Enterprise Edition software.
- Oracle Database Cloud Service Enterprise Edition – High Performance (Cloud, DBCS EE-HP): includes Oracle Database Enterprise Edition software plus many Oracle Database options and Oracle management packs.
- Oracle Database Cloud Service Enterprise Edition – Extreme Performance (Cloud, DBCS EE-EP): includes Oracle Database Enterprise Edition software plus all Oracle Database options and Oracle management packs that are appropriate for use in Oracle Database Cloud Service.
- Oracle Database Exadata Cloud Service (Cloud, ExaCS): includes Oracle Database Enterprise Edition software plus all Oracle Database options and Oracle management packs that are appropriate for use in Oracle Database Exadata Cloud Service.

The licensing policies for ExaCS also apply to Oracle Database Exadata Cloud at Customer.

 

 


RMAN catalog upgrade, why, when and how

Thu, 2018-08-23 05:54

One of our customers had initially created an RMAN catalog on an Oracle database release 12.1.0.2.0 and was now intending to register new Oracle 12.2.0.1.0 databases.

Registering the databases fails with the following errors:

PL/SQL package RCAT.DBMS_RCVCAT version 12.01.00.02 in RCVCAT database is too old
RMAN-06429: RCVCAT database is not compatible with this version of RMAN

The problem comes from the catalog version, which needs to be at least equal to or higher than the version of the database to register. I then wondered whether this means the version of the catalog database itself or the version of the catalog schema.

Fortunately, in most cases a catalog database upgrade is not needed and a catalog schema upgrade is enough.

The MOS note RMAN Compatibility Matrix (Doc ID 73431.1) provides the compatibility matrix below.

Target/Auxiliary Database   RMAN Executable                     Catalog Database      Catalog Schema
-------------------------   ---------------------------------   -------------------   ------------------
8.1.7.4                     8.1.7.4                             >=8.1.7 <12c          8.1.7.4
8.1.7.4                     8.1.7.4                             >=8.1.7 <12c          >=9.0.1.4
9.0.1                       9.0.1                               >=8.1.7 <12c          >= RMAN executable
9.2.0                       >=9.0.1.3 and <= target database    >=8.1.7 <12c          >= RMAN executable
10.1.0.5                    >=10.1.0.5 and <= target database   >=10.1.0.5            >= RMAN executable
10.2.0                      >=10.1.0.5 and <= target database   >=10.1.0.5            >= RMAN executable
11.1.0                      >=10.1.0.5 and <= target database   >=10.2.0.3 (note 1)   >= RMAN executable
11.2.0                      >=10.1.0.5 and <= target database   >=10.2.0.3 (note 1)   >= RMAN executable
>=12.1.0.x                  = target database executable        >=10.2.0.3            >= RMAN executable
18.1                        = target database executable        >=10.2.0.3            >= RMAN executable

So, in our case, we connect from the database to be registered (12.2.0.1.0) in order to:

  1. Check the release of the catalog
    dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] sqlplus <catalog_user>/<pwd>@<catalog_TNS>
    
    SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 16 14:50:08 2018
    
    Connected to:
    
    Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
    
    SQL> select * from rcver;
    
    VERSION
    
    ------------
    
    12.01.00.02
  2. Upgrade the catalog version
    dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] rman target / catalog <catalog_user>/<pwd>@<catalog_TNS>
    
    Recovery Manager: Release 12.2.0.1.0 - Production on Thu Aug 16 14:52:15 2018
    
    connected to target database: ARGOSP (DBID=469810750)
    
    connected to recovery catalog database
    
    PL/SQL package <catalog_user>.DBMS_RCVCAT version 12.01.00.02. in RCVCAT database is too old
    
    RMAN> upgrade catalog;
    
    recovery catalog owner is <catalog_user>
    
    enter UPGRADE CATALOG command again to confirm catalog upgrade
    
    RMAN> upgrade catalog;
    
    recovery catalog upgraded to version 12.02.00.01
    
    DBMS_RCVMAN package upgraded to version 12.02.00.01
    
    DBMS_RCVCAT package upgraded to version 12.02.00.01.
  3. Check the release of the catalog
dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] sqlplus <catalog_user>/<pwd>@<catalog_TNS>

SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 16 14:50:08 2018

Connected to:

Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> select * from rcver;

VERSION

------------

12.02.00.01

 

Database registration will then be successful:

RMAN> register database;

database registered in recovery catalog

starting full resync of recovery catalog

full resync complete
 


The size of Oracle Home: from 9GB to 600MB – What about SQL Server?

Wed, 2018-08-22 01:00

Holidays are over, so let's start working with a fun write-up. I was reading interesting work by my colleagues Franck Pachot and Daniel Westermann about reducing the size of Oracle / PostgreSQL binaries to the minimum in the context of Docker images. I was curious what can be done on the SQL Server side in a Linux environment, but keep in mind that some operations performed in this post are probably not supported in production.

Firstly, remember that the SQL Server installation is package-oriented on Linux, so we may choose to install only the packages that include the SQL Server engine and the basic client tools such as SQLCMD. In other words, we just need to install the mssql-server-* (engine) and mssql-tools-* (CLI tools) packages and their dependencies, leading to the following result in my case (SQL Server 2017 RTM-CU9-GDR and CentOS 7):
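For reference, a minimal installation on CentOS/RHEL boils down to the following (assuming the Microsoft yum repositories are already registered on the server):

# Minimal package set: engine plus command-line tools
$ sudo yum install -y mssql-server
$ sudo ACCEPT_EULA=Y yum install -y mssql-tools unixODBC-devel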

$ du -sh /opt/mssql /u01/sqlserverdata/mssqlserver /var/opt/mssql/ /usr/share/doc/mssql-server/
891M    /opt/mssql
52M     /u01/sqlserverdata/mssqlserver
12M     /var/opt/mssql/
848K    /usr/share/doc/mssql-server/

 

A total size of 956MB.
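
For reference, getting to this minimal footprint is just a matter of registering the Microsoft repositories and installing those two packages plus their dependencies. A sketch for CentOS 7 and SQL Server 2017 could look like the following (repository URLs and package names are the ones documented by Microsoft for this release and may differ for other versions):

$ sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
$ sudo curl -o /etc/yum.repos.d/msprod.repo https://packages.microsoft.com/config/rhel/7/prod.repo
$ sudo yum install -y mssql-server
$ sudo ACCEPT_EULA=Y yum install -y mssql-tools unixODBC-devel
$ sudo /opt/mssql/bin/mssql-conf setup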

In my context the following folders contain folders and files related to my SQL Server instance:

  • /opt/mssql hierarchy folder contains all the SQL Server engine binaries and libraries
  • /u01/sqlserverdata/mssqlserver contains all database system files (master, msdb, model, tempdb)
  • /var/opt/mssql contains SQL Server and SQL Agent log files, default extended event session files, default trace files and the machine.key
  • /usr/share/doc/mssql-server/ contains some license text files

 

Firstly, we can remove all files from /usr/share/doc/mssql-server/ for a disk space reduction of 848KB. Another easy way to continue reducing the disk space consumed by SQL Server in /var/opt/mssql/log is to disable telemetry …

$ sudo /opt/mssql/bin/mssql-conf set telemetry.customerfeedback false

 

… extended events and the default trace as well …

DROP EVENT SESSION [system_health] ON SERVER 
GO
DROP EVENT SESSION [telemetry_xevents] ON SERVER 
GO
DROP EVENT SESSION [AlwaysOn_health] ON SERVER 
GO

EXEC sp_configure 'default trace enabled', 0;
GO
RECONFIGURE;
GO

 

… and drop their related files on disk then.

$ sudo rm -f /var/opt/mssql/log/*.trc
$ sudo rm -f /var/opt/mssql/log/*.xel
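
A quick sanity check that the event sessions and the default trace are really disabled could be the following queries (assuming the standard sys.server_event_sessions and sys.configurations catalog views); the first should come back empty and the second should show a value_in_use of 0:

SELECT name FROM sys.server_event_sessions;
GO
SELECT name, value_in_use FROM sys.configurations WHERE name = 'default trace enabled';
GO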

 

~ another 4 MB saved … not really a good job, is it? :)

$ du -sh /opt/mssql /u01/sqlserverdata/mssqlserver /var/opt/mssql/
891M    /opt/mssql
52M     /u01/sqlserverdata/mssqlserver
8.8M    /var/opt/mssql/
0       /usr/share/doc/mssql-server/

 

Taking a look at the other folders in the /var/opt/mssql hierarchy seems to confirm we can't do more there. Let's try to clean up other MSSQL-related folders.

We may now try to reduce the size of the system databases, but these files will probably grow over time, and moving them to persistent volume storage with containers is an easy task, so we may exclude them from the equation.
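
For the record, redirecting where SQL Server creates database files is a one-liner with mssql-conf; a hedged sketch (the filelocation.defaultdatadir setting only applies to newly created databases, relocating the existing system databases requires additional settings and a service restart):

$ sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /u01/sqlserverdata/mssqlserver
$ sudo systemctl restart mssql-server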

$ sudo du -sh /u01/sqlserverdata/mssqlserver/*
4.0M    /u01/sqlserverdata/mssqlserver/master.mdf
8.0M    /u01/sqlserverdata/mssqlserver/modellog.ldf
8.0M    /u01/sqlserverdata/mssqlserver/model.mdf
15M     /u01/sqlserverdata/mssqlserver/msdbdata.mdf
768K    /u01/sqlserverdata/mssqlserver/msdblog.ldf
8.0M    /u01/sqlserverdata/mssqlserver/tempdb.mdf
8.0M    /u01/sqlserverdata/mssqlserver/templog.ldf

 

Another 52MB. Most of the work could be done in the /opt/mssql hierarchy folder with both SQL Server binaries and libraries.

$ sudo du -sh /opt/mssql/* | sort -r
862M    /opt/mssql/lib
29M     /opt/mssql/bin

 

The above output indicates that the size of the SQL Server binaries is rather small compared to the related libraries. Like Franck and Daniel, I attempted to execute a strip command against the SQL Server binaries and libraries, but /!\ keep in mind this operation is probably not supported by Microsoft /!\

$ sudo strip /opt/mssql/bin/*
strip:/opt/mssql/bin/compress-dump.sh: File format not recognized
strip:/opt/mssql/bin/crash-support-functions.sh: File format not recognized
strip:/opt/mssql/bin/generate-sql-dump.sh: File format not recognized
strip:/opt/mssql/bin/handle-crash.sh: File format not recognized
strip:/opt/mssql/bin/mssql-conf: File format not recognized
$ sudo strip /opt/mssql/lib/*
strip: Warning: '/opt/mssql/lib/loc' is not an ordinary file
strip: Warning: '/opt/mssql/lib/mssql-conf' is not an ordinary file
strip:/opt/mssql/lib/secforwarderxplat.sfp: File format not recognized
strip:/opt/mssql/lib/sqlagent.sfp: File format not recognized
strip:/opt/mssql/lib/sqlservr.sfp: File format not recognized
strip:/opt/mssql/lib/system.certificates.sfp: File format not recognized
strip:/opt/mssql/lib/system.common.sfp: File format not recognized
strip:/opt/mssql/lib/system.netfx.sfp: File format not recognized
strip:/opt/mssql/lib/system.sfp: File format not recognized

 

The output below was beyond my expectation, but looking further, the size reduction concerns only paldumper in /opt/mssql/bin/ (27MB to 1.6MB); the other executables remain unchanged. A few other Linux dynamic shared libraries were also reduced, such as libc++abi.so.1 (from 2.5MB to 384KB), libunwind.so.8 (from 368KB to 48KB) or libsqlvdi.so (from 348KB to 48KB).

$ sudo du -sh /opt/mssql/* | sort -r
857M    /opt/mssql/lib
2.9M    /opt/mssql/bin

 

Another 31MB. That said, restarting my SQL Server instance remained successful and I was able to connect to it. It would probably be wise to test the generation of a dump as well, but for now let's consider that stripping does not interfere with the correct execution of the SQL Server engine…

$ sudo systemctl restart mssql-server

$ sqlcmd -S localhost -U sa -P XXXXX -Q"SELECT @@VERSION"
                                                                                                                                                                                                                                        
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Microsoft SQL Server 2017 (RTM-CU9-GDR) (KB4293805) - 14.0.3035.2 (X64)
        Jul  6 2018 18:24:36
        Copyright (C) 2017 Microsoft Corporation
        Developer Edition (64-bit) on Linux (CentOS Linux 7 (Core))

 

However, the biggest remaining part is about SQL Server files with sfp extension as shown below.

$ sudo du -sh /opt/mssql/lib/* | sort -hr
370M    /opt/mssql/lib/sqlservr.sfp
304M    /opt/mssql/lib/system.netfx.sfp
158M    /opt/mssql/lib/system.common.sfp
15M     /opt/mssql/lib/sqlagent.sfp
8.3M    /opt/mssql/lib/system.sfp
1.6M    /opt/mssql/lib/system.certificates.sfp
652K    /opt/mssql/lib/libc++.so.1
384K    /opt/mssql/lib/mssql-conf
304K    /opt/mssql/lib/libc++abi.so.1
48K     /opt/mssql/lib/secforwarderxplat.sfp
48K     /opt/mssql/lib/libunwind.so.8
48K     /opt/mssql/lib/libsqlvdi.so
44K     /opt/mssql/lib/loc

 

Files with the sfp extension are not supported by the strip command (see above), and it turns out these files are packed archives that are part of the "SQL Platform Abstraction Layer" (or SQLPAL), which hosts the operating system in a library (aka Drawbridge Library OS). These files are used to load SQL Server and its dependent programs and DLLs to run under SQLPAL.

We may dig into these packed archives by using the sfpack tool (available on GitHub); they include a lot of what we may recognize as Windows files. Here are some of them:

  • .exe files
$ egrep *.exe sqlservr.sfp.txt | head -n 10
/opt/mssql/lib/sqlservr/Content/sqlservr.exe.lnx.hiv
/opt/mssql/lib/sqlservr/Content/sqlservr.exe.win.hiv
/opt/mssql/lib/sqlservr/Content/binn/sqlcmd.exe
/opt/mssql/lib/sqlservr/Content/binn/useraccounttool.exe
/opt/mssql/lib/sqlservr/Content/binn/sqlceip.exe.config
/opt/mssql/lib/sqlservr/Content/binn/SqlDumper.exe
/opt/mssql/lib/sqlservr/Content/binn/reg.exe
/opt/mssql/lib/sqlservr/Content/binn/rep.exe
/opt/mssql/lib/sqlservr/Content/binn/OSQL.exe
/opt/mssql/lib/sqlservr/Content/binn/sqlservr.exe.config

 

  • .dll library files
$ egrep *.dll sqlservr.sfp.txt | head -n 10
/opt/mssql/lib/sqlservr/Content/binn/sqlsatellite.dll
/opt/mssql/lib/sqlservr/Content/binn/SQLOS.dll
/opt/mssql/lib/sqlservr/Content/binn/pidgenx.dll
/opt/mssql/lib/sqlservr/Content/binn/cpprest120_2_8.dll
/opt/mssql/lib/sqlservr/Content/binn/hadrres.dll
/opt/mssql/lib/sqlservr/Content/binn/msvcr100.dll
/opt/mssql/lib/sqlservr/Content/binn/autoadmin.dll
/opt/mssql/lib/sqlservr/Content/binn/sqlboot.dll
/opt/mssql/lib/sqlservr/Content/binn/sqlaccess.dll
/opt/mssql/lib/sqlservr/Content/binn/qds.dll

 

According to the GitHub documentation, dbpatch files are binary patches against their respective siblings of the same name (binaries like ntdll.dll, win32k.sys etc.), with the syscall instructions removed …

$ grep dbpatch   system.sfp.txt
/opt/mssql/lib/system/Content/Windows/System32/DkDll_2.dll.dbpatch
/opt/mssql/lib/system/Content/Windows/System32/DkDll.dll.dbpatch
/opt/mssql/lib/system/Content/Windows/System32/sqlpal.dll.dbpatch
/opt/mssql/lib/system/Content/Windows/System32/DkDll_4.dll.dbpatch
/opt/mssql/lib/system/Content/Windows/System32/DkDll_3.dll.dbpatch
/opt/mssql/lib/system/Content/Windows/System32/DkDll_1.dll.dbpatch

$ grep dbpatch system.common.sfp.txt
/opt/mssql/lib/system.common/Content/Windows/System32/win32kfull.sys.dbpatch
/opt/mssql/lib/system.common/Content/Windows/System32/ntdll.dll.dbpatch
/opt/mssql/lib/system.common/Content/Windows/System32/win32u.dll.dbpatch
/opt/mssql/lib/system.common/Content/Windows/System32/win32kbase.sys.dbpatch
/opt/mssql/lib/system.common/Content/Windows/System32/Drivers/msrpc.sys.dbpatch
/opt/mssql/lib/system.common/Content/Windows/System32/Drivers/ksecdd.sys.dbpatch
/opt/mssql/lib/system.common/Content/Windows/System32/Drivers/cng.sys.dbpatch

 

Well, that would definitely lead to another interesting write-up, but for now I believe this fun work is over because reducing the size of these packed archives is out of our control (at least in a supported way, as far as I know). So, the minimum final size I may get would be 869.7MB. I didn't manage to significantly reduce the size by myself, but my feeling is that there is some room for optimization in the SQLPAL layer by Microsoft. Let's see in the future!

$ sudo du -sh /opt/mssql /u01/sqlserverdata/mssqlserver /var/opt/mssql/
860M    /opt/mssql
9.7M    /var/opt/mssql/

 

See you!


Cet article The size of Oracle Home: from 9GB to 600MB – What about SQL Server? est apparu en premier sur Blog dbi services.

Restoring a database without having any controlfile backup

Tue, 2018-08-21 09:00

It should never happen but sometimes it happens. You just lost your datafiles as well as your fast recovery area (probably because most of the time these areas are on the same disks despite the recommendations).

Normal restore operations with RMAN are quite easy and secure as long as you have backupsets for the database, the archivelogs, and the spfile/controlfile:

Step 1 – restore the spfile and start the instance
Step 2 – restore the controlfile and mount the database
Step 3 – restore the database (meaning the datafiles)
Step 4 – recover the database as far as possible (by applying archivelogs)
Step 5 – open the database in (no)resetlogs

If you cannot go through step 2 because you don’t have any controlfile backup, you can’t go further with RMAN, that’s it. But there is another way to get a controlfile back to work.

Not having the spfile is annoying, but it's just a set of instance parameters, nothing really important for your data. You can possibly recreate a pfile (you will probably convert it to an spfile later) by picking up the non-default parameters in the alert_SID.log: they are listed just after the last start of the instance. Or you can create a very basic pfile with very few parameters: at least the db_name and the control_files location, and for this example I also need the compatible parameter and a temporary fast recovery area for an easy restore of the archivelogs.

vi /u01/oradata/DBTEST1/initDBTEST1.ora
*.db_name='DBTEST1'
control_files='/u01/oradata/DBTEST1/control01.dbf'
compatible=12.1.0.2
db_recovery_file_dest='/u01/oradata/fast_recovery_area/'
db_recovery_file_dest_size=10G

 

Fortunately you remember where you put the backup and you found this:

oracle@vmoratest1:/oracle/backup/ [DBTEST1] ls -lrt
total 189828
-rw-r-----. 1 oracle oinstall   4333568 Aug 20 14:47 DB_34tb1lfb_1_1
-rw-r-----. 1 oracle oinstall     98304 Aug 20 14:47 DB_36tb1lfd_1_1
-rw-r-----. 1 oracle oinstall  54304768 Aug 20 14:47 DB_33tb1lfb_1_1
-rw-r-----. 1 oracle oinstall 121438208 Aug 20 14:47 DB_32tb1lfb_1_1
-rw-r-----. 1 oracle oinstall     92672 Aug 20 14:49 ARC_3atb1lj7_1_1
-rw-r-----. 1 oracle oinstall   1730560 Aug 20 14:49 ARC_39tb1lj7_1_1
-rw-r-----. 1 oracle oinstall   5758464 Aug 20 14:49 ARC_38tb1lj7_1_1
-rw-r-----. 1 oracle oinstall   6619648 Aug 20 14:49 ARC_37tb1lj7_1_1

 

First of all, start the instance.

sqlplus / as sysdba
SQL> startup nomount pfile='/u01/oradata/DBTEST1/initDBTEST1.ora';

 

After trying to restore the controlfile from backuppieces inside the backup directory, you found that no backup has a controlfile:

rman target /
RMAN> restore controlfile from '/oracle/backup/DB_36tb1lfb_1_1';

Starting restore at 20-AUG-2018 15:53:08
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/20/2018 15:53:08
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

RMAN> restore controlfile from '/oracle/backup/DB_32tb1lfb_1_1';

Starting restore at 20-AUG-2018 15:53:21
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/20/2018 15:53:21
ORA-19697: standby control file not found in backup set

RMAN> restore controlfile from '/oracle/backup/ARC_3atb1lj7_1_1';

Starting restore at 20-AUG-2018 15:53:56
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/20/2018 15:53:56
ORA-19870: error while restoring backup piece /oracle/backup/ARC_3atb1lj7_1_1
ORA-19626: backup set type is archived log - can not be processed by this conversation

...

 

Having an instance started is always better than nothing. And through this instance you can have access to many things without actually having a real database. For example, you can use the dbms_backup_restore package: this package is able to restore datafiles without having any controlfile. Very useful for us now. You can easily restore a datafile from a backuppiece but you have to provide the datafile number. A few lines of PL/SQL code can help you to restore all the datafiles from all the available backuppieces.

cd /u01/oradata/DBTEST1/
vi resto.sql

set serveroutput on
declare
        v_dev           varchar2(30) ;
        v_rest_ok       boolean;
        v_df_num        number := 1;
        v_df_max        number := 30;
        v_bck_piece     varchar2(256) := '&1';
        v_rest_folder   varchar2(226) := '/u01/oradata/DBTEST1/';
        v_rest_df       varchar2(256);
begin
       v_dev := dbms_backup_restore.deviceallocate;
       while v_df_num <= v_df_max loop
                v_rest_df := v_rest_folder||'DF_'||lpad(v_df_num,4,'0');
                dbms_backup_restore.restoreSetDatafile;
                dbms_backup_restore.restoreDataFileTo(dfnumber=>v_df_num,toname=>v_rest_df);
                BEGIN
                        dbms_backup_restore.restoreBackupPiece(done=>v_rest_ok,handle=>v_bck_piece);
                EXCEPTION
                        WHEN OTHERS
                        THEN
                                v_rest_ok := FALSE;
                                -- dbms_output.put_line('Datafile '||v_df_num||' is not in this piece');
                END;
                if v_rest_ok THEN
                        dbms_output.put_line('Datafile '||v_df_num||' is restored : '||v_rest_df);
                end if;
                v_df_num := v_df_num + 1;
        end loop;
        dbms_backup_restore.deviceDeallocate;
end;
/
exit;

 

Let’s iterate this anonymous PL/SQL block for each backuppiece in your backup folder:

for a in `find /oracle/backup/ -name DB*`; do sqlplus -s / as sysdba @resto $a; done;

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_32tb1lfb_1_1';
Datafile 1 is restored : /u01/oradata/DBTEST1/DF_0001.dbf
Datafile 4 is restored : /u01/oradata/DBTEST1/DF_0004.dbf
Datafile 9 is restored : /u01/oradata/DBTEST1/DF_0009.dbf

PL/SQL procedure successfully completed.

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_33tb1lfb_1_1';
Datafile 2 is restored : /u01/oradata/DBTEST1/DF_0002.dbf
Datafile 7 is restored : /u01/oradata/DBTEST1/DF_0007.dbf
Datafile 8 is restored : /u01/oradata/DBTEST1/DF_0008.dbf

PL/SQL procedure successfully completed.

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_36tb1lfd_1_1';

PL/SQL procedure successfully completed.

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_34tb1lfb_1_1';
Datafile 3 is restored : /u01/oradata/DBTEST1/DF_0003.dbf
Datafile 5 is restored : /u01/oradata/DBTEST1/DF_0005.dbf
Datafile 6 is restored : /u01/oradata/DBTEST1/DF_0006.dbf

PL/SQL procedure successfully completed.

 

Well done! 9 datafiles were restored. Now look at your folder, you'll find the 9 datafiles, actually your whole database if your backup is reliable:

ls -lrt /u01/oradata/DBTEST1/

total 2017372
-rw-r--r--. 1 oracle oinstall      1035 Aug 20 23:05 resto.sql
-rw-r--r--. 1 oracle oinstall        91 Aug 20 23:12 initDBTEST1.ora
-rw-r-----. 1 oracle oinstall 734011392 Aug 20 23:15 DF_0001.dbf
-rw-r-----. 1 oracle oinstall   5251072 Aug 20 23:15 DF_0004.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0009.dbf
-rw-r-----. 1 oracle oinstall 576724992 Aug 20 23:15 DF_0002.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0007.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0008.dbf
-rw-r-----. 1 oracle oinstall 487596032 Aug 20 23:15 DF_0003.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0005.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0006.dbf

 

You can now manually create the controlfile with these datafiles (you just have to remember the characterset of your database):

sqlplus / as sysdba

CREATE CONTROLFILE REUSE DATABASE "DBTEST1" RESETLOGS  ARCHIVELOG
      MAXLOGFILES 16
      MAXLOGMEMBERS 3
      MAXDATAFILES 100
      MAXINSTANCES 8
      MAXLOGHISTORY 2073
LOGFILE
    GROUP 1 '/u01/oradata/DBTEST1/redo01.rdo'  SIZE 100M BLOCKSIZE 512,
    GROUP 2 '/u01/oradata/DBTEST1/redo02.rdo'  SIZE 100M BLOCKSIZE 512,
    GROUP 3 '/u01/oradata/DBTEST1/redo03.rdo'  SIZE 100M BLOCKSIZE 512
DATAFILE
    '/u01/oradata/DBTEST1/DF_0001.dbf',
    '/u01/oradata/DBTEST1/DF_0002.dbf',
    '/u01/oradata/DBTEST1/DF_0003.dbf',
    '/u01/oradata/DBTEST1/DF_0004.dbf',
    '/u01/oradata/DBTEST1/DF_0005.dbf',
    '/u01/oradata/DBTEST1/DF_0006.dbf',
    '/u01/oradata/DBTEST1/DF_0007.dbf',
    '/u01/oradata/DBTEST1/DF_0008.dbf',
    '/u01/oradata/DBTEST1/DF_0009.dbf'
CHARACTER SET AL32UTF8 ;


Control file created.

 

ls -lrt /u01/oradata/DBTEST1/

total 2029804
-rw-r--r--. 1 oracle oinstall      1035 Aug 20 23:05 resto.sql
-rw-r--r--. 1 oracle oinstall        91 Aug 20 23:12 initDBTEST1.ora
-rw-r-----. 1 oracle oinstall 734011392 Aug 20 23:15 DF_0001.dbf
-rw-r-----. 1 oracle oinstall   5251072 Aug 20 23:15 DF_0004.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0009.dbf
-rw-r-----. 1 oracle oinstall 576724992 Aug 20 23:15 DF_0002.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0007.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0008.dbf
-rw-r-----. 1 oracle oinstall 487596032 Aug 20 23:15 DF_0003.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0005.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0006.dbf
-rw-r-----. 1 oracle oinstall  12730368 Aug 20 23:24 control01.dbf

 

What a relief to see pfile, controlfile and datafiles all together again!

Work is not finished yet because the datafiles are probably inconsistent. There is no need to mount the database as it's already mounted (the CREATE CONTROLFILE took care of that), and it's now possible to catalog all your backuppieces so that RMAN knows about them:

rman target /
catalog start with '/oracle/backup/';

using target database control file instead of recovery catalog
searching for all files that match the pattern /oracle/backup/

List of Files Unknown to the Database
=====================================
File Name: /oracle/backup/DB_32tb1lfb_1_1
File Name: /oracle/backup/ARC_37tb1lj7_1_1
File Name: /oracle/backup/ARC_39tb1lj7_1_1
File Name: /oracle/backup/DB_33tb1lfb_1_1
File Name: /oracle/backup/DB_36tb1lfd_1_1
File Name: /oracle/backup/ARC_3atb1lj7_1_1
File Name: /oracle/backup/DB_34tb1lfb_1_1
File Name: /oracle/backup/ARC_38tb1lj7_1_1

Do you really want to catalog the above files (enter YES or NO)? YES

cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oracle/backup/DB_32tb1lfb_1_1
File Name: /oracle/backup/ARC_37tb1lj7_1_1
File Name: /oracle/backup/ARC_39tb1lj7_1_1
File Name: /oracle/backup/DB_33tb1lfb_1_1
File Name: /oracle/backup/DB_36tb1lfd_1_1
File Name: /oracle/backup/ARC_3atb1lj7_1_1
File Name: /oracle/backup/DB_34tb1lfb_1_1
File Name: /oracle/backup/ARC_38tb1lj7_1_1

 

You now need to restore the archivelogs:

RMAN> restore archivelog all;

Starting restore at 20-AUG-2018 23:43:47
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=23 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=44
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_39tb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_39tb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=45
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_38tb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_38tb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=46
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_37tb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_37tb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=47
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_3atb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_3atb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-AUG-2018 23:43:52

 

Now it’s probably possible to recover the database:

sqlplus / as sysdba

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1386561 generated at 08/20/2018 14:47:07 needed for thread 1
ORA-00289: suggestion :
/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_47_fqplx2l
n_.arc
ORA-00280: change 1386561 for thread 1 is in sequence #47


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 1386635 generated at 08/20/2018 14:49:10 needed for thread 1
ORA-00289: suggestion :
/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_48_%u_.arc
ORA-00280: change 1386635 for thread 1 is in sequence #48
ORA-00278: log file
'/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_47_fqplx2
ln_.arc' no longer needed for this recovery


ORA-00308: cannot open archived log
'/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_48_%u_.ar
c'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

 

The last error is normal because Oracle doesn't know that sequence 48 never existed.

Now all the archivelogs are applied, fingers crossed for the last operation that is supposed to bring the database back to life:

SQL> alter database open resetlogs;

Database altered.

SQL> select instance_name, status from v$instance;

INSTANCE_NAME     STATUS
---------------- ------------
DBTEST1      OPEN

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/DBTEST1/DF_0001.dbf
/u01/oradata/DBTEST1/DF_0002.dbf
/u01/oradata/DBTEST1/DF_0003.dbf
/u01/oradata/DBTEST1/DF_0004.dbf
/u01/oradata/DBTEST1/DF_0005.dbf
/u01/oradata/DBTEST1/DF_0007.dbf
/u01/oradata/DBTEST1/DF_0006.dbf
/u01/oradata/DBTEST1/DF_0009.dbf
/u01/oradata/DBTEST1/DF_0008.dbf

9 rows selected.

Yes, everything is OK!!! Apart from generic names for your datafiles, a single controlfile, no spfile, default-configured redologs and probably no temporary tablespace. But the database is up and running, and you feel like a hero. Or you just managed to keep your job ;-)
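
If you want to tidy up after this minimal recovery, a few hedged follow-up commands could look like this (the tablespace name TEMP and the tempfile size are assumptions, adjust to your environment):

sqlplus / as sysdba

-- add a tempfile back to the temporary tablespace (name assumed to be TEMP)
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/DBTEST1/temp01.dbf' SIZE 500M AUTOEXTEND ON;

-- recreate a server parameter file from the minimal pfile
CREATE SPFILE FROM PFILE='/u01/oradata/DBTEST1/initDBTEST1.ora';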

 

Cet article Restoring a database without having any controlfile backup est apparu en premier sur Blog dbi services.

When we do a pg_dump and right afterwards truncate a table which is in the dump, what happens?

Tue, 2018-08-21 01:16

Being at customers is always the best way to learn. Today, while discussing that pg_dump will always produce a consistent dump because it uses the "repeatable read" isolation level, this question came up: what happens when we dump a database and, while the dump is running, we truncate a table in that database? Does that block? Well, the answer is in the documentation: pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers).

What is not in the documentation is that pg_dump uses the “repeatable read” isolation level, but it is documented in the source code:

postgres@pgbox:/home/postgres/postgresql-10.4/ [PG10] vi src/bin/pg_dump/pg_dump.c
...
 *      Note that pg_dump runs in a transaction-snapshot mode transaction,
 *      so it sees a consistent snapshot of the database including system
 *      catalogs. However, it relies in part on various specialized backend
 *      functions like pg_get_indexdef(), and those things tend to look at
 *      the currently committed state.  So it is possible to get 'cache
 *      lookup failed' error if someone performs DDL changes while a dump is
 *      happening. The window for this sort of thing is from the acquisition
 *      of the transaction snapshot to getSchemaData() (when pg_dump acquires
 *      AccessShareLock on every table it intends to dump). It isn't very large,
 *      but it can happen.
...

For the moment let's ignore the rest of that paragraph and focus on the original question. For that let's create some sample data we can dump:

postgres=# create database dump;
CREATE DATABASE
postgres=# \c dump
You are now connected to database "dump" as user "postgres".
dump=# create table t_dump as 
       select a.*, md5(a::text) 
         from generate_series ( 1, 3000000 ) a;
SELECT 3000000

As we need two sessions for this demo we increase the time it takes for the dump by compressing at the highest level:

postgres@pgbox:/home/postgres/ [PG10] pg_dump --compress=9 dump > test.dump

In a second session, while the dump is running, we truncate the table:

dump=# truncate table t_dump;
TRUNCATE TABLE
Time: 9411.574 ms (00:09.412)

And surprise: yes, the pg_dump operation is blocking the truncate (you can see that from the time it took; usually a truncate is instant). So the documentation is not quite accurate. Before going further, does the same happen when we modify the table while the dump is running? Same test as above for the dump, and in the second session:

dump=# alter table t_dump add c text;
ALTER TABLE
Time: 11093.535 ms (00:11.094)

Same here, blocking (otherwise the addition of a column would have been instant). So when you do a DDL against a table while a dump is running, that DDL has to wait until the dump completes.
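
If you cannot avoid DDL during a dump window, one option is to protect the DDL session with lock_timeout so that it fails fast instead of queueing behind the dump (a sketch; c2 is just an illustrative column name):

dump=# set lock_timeout = '5s';
SET
dump=# alter table t_dump add c2 text;
-- if the dump still holds its AccessShareLock this statement will abort with
-- "canceling statement due to lock timeout" instead of waiting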

Coming back to the remaining sentences of the paragraph from the source code. pg_dump acquires an AccessShareLock while it is running and we can verify this in the second session while the dump is running:

dump=# select database, relation::regclass, mode from pg_locks where relation = 't_dump'::regclass;
 database | relation |      mode       
----------+----------+-----------------
    33985 | t_dump   | AccessShareLock
(1 row)

This does not lock the table for reading or writing but it does lock the table for DDLs. We can confirm that as well when we do a select and an insert in the second session while the dump is running in the first session:

dump=# insert into t_dump (a,md5,c) values (-1,'aaa','bbb');
INSERT 0 1
Time: 8.131 ms
dump=# select * from t_dump where a = -1;
 a  | md5 |  c  
----+-----+-----
 -1 | aaa | bbb
(1 row)

No issues here. When we manually lock the table in “AccessShareLock” in the first session we will not be able to alter it in the second session.
Session 1:

dump=# begin;
BEGIN
dump=# lock table t_dump IN ACCESS SHARE MODE;
LOCK TABLE
dump=# 

… and in the second session try some DDL:

dump=# alter table t_dump alter COLUMN c set default 'a';
-- blocks

Creating an index on that table while it is locked in that mode works:

dump=# create index i1 on t_dump (c);
CREATE INDEX

… whereas dropping an index while the table is locked in that mode will block as well:

dump=# drop index i1;
-- blocks

So the final advice: Plan to do your dumps when there is no DDL activity.
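
If you want to see whether something is currently queued behind a dump, a hedged way to check (PostgreSQL 9.6 and later) is to look for sessions waiting on a lock in pg_stat_activity:

dump=# select pid, state, wait_event_type, wait_event, query
       from pg_stat_activity
       where wait_event_type = 'Lock';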

 

Cet article When we do a pg_dump and right afterwards truncate a table which is in the dump, what happens? est apparu en premier sur Blog dbi services.

A tribute to Natural Join

Mon, 2018-08-20 10:23
By Franck Pachot

.
I know that a lot of people are against the ANSI join syntax in Oracle, and this goes even further when talking about NATURAL JOIN. But I like them and use them quite often.

Why is Natural Join bad?

Natural join is bad because it relies on column names, and, at the time of writing the query, you don’t know which columns will be added or removed later. Here is an example on the SCOTT schema, joining on DEPTNO which has the same name in DEPT and EMP:

SQL> select * from EMP natural join DEPT where DNAME='SALES';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM DNAME LOC
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- -------------- -------------
30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 SALES CHICAGO
30 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 SALES CHICAGO
30 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 SALES CHICAGO
30 7900 JAMES CLERK 7698 03-DEC-81 950 SALES CHICAGO
30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 SALES CHICAGO
30 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 SALES CHICAGO

The DEPT table has a ‘LOC’ column for the location of the department. But the data model may evolve and you may add a location for each employee. And we may also call it LOC:

SQL> alter table EMP add (LOC varchar2(10));
Table altered.

But now our Natural Join adds this column to the join predicate and the result is wrong because it shows only rows where the department location is the same as the employee location:

SQL> select * from EMP natural join DEPT where DNAME='SALES';
 
no rows selected

Projection

In my opinion, the problem is not the Natural Join. Column names have a meaning for their tables. But the tables have different roles in our queries. As soon as a table or view participates in our query, we should redefine the column names. If we don't, the result is completely wrong, as here:

SQL> select * from EMP join DEPT using(DEPTNO) where DNAME='SALES';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM LOC DNAME LOC
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- -------------- -------------
30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 SALES CHICAGO
30 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 SALES CHICAGO
30 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 SALES CHICAGO
30 7900 JAMES CLERK 7698 03-DEC-81 950 SALES CHICAGO
30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 SALES CHICAGO
30 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 SALES CHICAGO

Look: the result has two columns with the same name. This is completely wrong for a relational database and I don’t even understand why this parses without raising an error.

The projection is the most important relational operation, often overlooked as if it was just a rename for aesthetic purpose. You need to name the columns of your result set. They are the metadata for the interface between SQL and the host language. ‘select *’ is a shortcut when running an interactive query, to get a glance at the result rows. But a SQL query result is not complete without proper column names. And in most cases, at least when you query more than one table, the name of the query result columns should be different than the name of the underlying table columns. A department may have a location. And an employee may have a location. But the location of the employee department is something completely different than the employee location.

Then, as you need to name each column anyway, why not doing it as soon as possible? Do it for each table involved in the query, so that you are sure that all column names are correct within the query. As soon as you introduce an new table in the FROM clause, you should actually name the columns according to their role in the query. Let’s take an example with an airline data model. Each airport is linked to a city. This can be a CITY column in the AIRPORTS table. But as soon as you join FLIGHTS with AIRPORTS, this table has a different role. You join on destination airport or source airport. Then you alias the AIRPORTS table in the FROM clause, such as DST_AIRPORTS or SRC_AIRPORTS. Within the query, you can reference the columns with the table alias, such as DST_AIRPORTS.CITY or SRC_AIRPORTS.CITY but this cannot be exposed as-is in the query result. You must name them in the SELECT clause with something like SELECT DST_AIRPORTS.CITY as DST_ARP_CITY , SRC_AIRPORTS.CITY as SRC_ARP_CITY.

Then, as I’ll need to rename them anyway, I prefer to do it as soon as I join to a new table in the FROM clause. Instead of joining to AIRPORTS DST_AIRPORTS I can join to (SELECT IATA DST_ARP_IATA, CITY DST_ARP_CITY FROM AIRPORTS) and all column names will relate to the role without table aliases and without further renaming. And when I do that correctly, I can use natural join without risk.

Projection in the FROM clause

Let’s take an example. Here is a query in DEPT where I explicitly mention that LOC is the department location. This is implicit when the column name belongs to the DEPT table. But it will not be implicit anymore once I join this table to another table. Here is the view ready to be included in any query:


SQL> select DEPTNO,DNAME DEPT_DNAME,LOC DEPT_LOC from DEPT where DNAME='SALES';
 
DEPTNO DEPT_DNAME DEPT_LOC
---------- -------------- -------------
30 SALES CHICAGO

Now, I can join this to the EMP table. I prefix all columns from EMP with “EMP_” and all columns from DEPT with “EMP_DEPT_” because they belong to DEPT when in the role of employee department:

SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_EMPNO
from
(select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP)
natural join
(select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT)
where EMP_DEPT_DNAME='SALES';
 
EMP_EMPNO EMP_ENAME EMP_DEPT_DNAME EMP_DEPT_LOC EMP_LOC EMP_MGR_EMPNO
---------- ---------- -------------- ------------- ---------- -------------
7521 WARD SALES CHICAGO 7698
7844 TURNER SALES CHICAGO 7698
7499 ALLEN SALES CHICAGO 7698
7900 JAMES SALES CHICAGO 7698
7698 BLAKE SALES CHICAGO 7839
7654 MARTIN SALES CHICAGO 7698

As you can see, when the names clearly indicate each column with its role in the join, and how they correlate with the other tables, there is no need to mention any join predicate. I used Natural Join because the join is on EMP_DEPTNO and I'm sure that it will always be the one and only column with the same name. By query design.

And the column names in the result are correct, explicitly mentioning what is an Employee attribute or an Employee department attribute. That can be easy to parse and put in an object graph in the host language. You can see there that the MGR column of EMP was named EMP_MGR_EMPNO because this is actually what it is: the EMPNO of the employee manager. It is a foreign key to the EMP table.

And then, adding more information about the manager is easy: join with EMP again but with the proper projection of columns: EMPNO will be EMP_MGR_EMPNO when in the role of the employee manager, ENAME will be EMP_MGR_ENAME, DEPTNO will be EMP_MGR_DEPTNO, and so on:


SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_DEPTNO,EMP_MGR_ENAME
from
(select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP)
natural join
(select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT)
natural join
(select DEPTNO EMP_MGR_DEPTNO,EMPNO EMP_MGR_EMPNO,ENAME EMP_MGR_ENAME from EMP)
where EMP_DEPT_DNAME='SALES';
 
EMP_EMPNO EMP_ENAME EMP_DEPT_DNAME EMP_DEPT_LOC EMP_LOC EMP_MGR_DEPTNO EMP_MGR_ENAME
---------- ---------- -------------- ------------- ---------- -------------- -------------
7900 JAMES SALES CHICAGO 30 BLAKE
7499 ALLEN SALES CHICAGO 30 BLAKE
7654 MARTIN SALES CHICAGO 30 BLAKE
7844 TURNER SALES CHICAGO 30 BLAKE
7521 WARD SALES CHICAGO 30 BLAKE
7698 BLAKE SALES CHICAGO 10 KING

No need to review the whole query when adding a new table. No need to solve the new ‘column ambiguously defined’. We don’t even need to alias the tables here.

Want to add the department name of the manager? That’s easy: join to DEPT with the right column projection (all prefixed by EMP_MGR_DEPT as the new columns are all about the employee manager’s department):

SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_DEPTNO,EMP_MGR_ENAME,EMP_MGR_DEPT_DNAME
from
(select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP)
natural join
(select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT)
natural join
(select DEPTNO EMP_MGR_DEPTNO,EMPNO EMP_MGR_EMPNO,ENAME EMP_MGR_ENAME from EMP)
natural join
(select DEPTNO EMP_MGR_DEPTNO,DNAME EMP_MGR_DEPT_DNAME,LOC EMP_MGR_DEPT_LOC from DEPT)
where EMP_DEPT_DNAME='SALES';
 
EMP_EMPNO EMP_ENAME EMP_DEPT_DNAME EMP_DEPT_LOC EMP_LOC EMP_MGR_DEPTNO EMP_MGR_EN EMP_MGR_DEPT_D
---------- ---------- -------------- ------------- ---------- -------------- ---------- --------------
7698 BLAKE SALES CHICAGO 10 KING ACCOUNTING
7900 JAMES SALES CHICAGO 30 BLAKE SALES
7499 ALLEN SALES CHICAGO 30 BLAKE SALES
7654 MARTIN SALES CHICAGO 30 BLAKE SALES
7844 TURNER SALES CHICAGO 30 BLAKE SALES
7521 WARD SALES CHICAGO 30 BLAKE SALES

This can be even easier when you generate SQL queries. When adding a new table to join to, you just prefix all columns with their role. Check foreign keys so that the naming is consistent with the referenced tables. Then when parsing the result, the naming convention can help to break on the object hierarchy.
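
One way to industrialize this when queries are generated is to keep those projections as named views, so that the role-based renaming is written once and the generated queries simply natural join the views. A sketch on the SCOTT schema:

SQL> create or replace view EMP_V as
select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP;
SQL> create or replace view EMP_DEPT_V as
select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT;
SQL> select EMP_ENAME,EMP_DEPT_DNAME from EMP_V natural join EMP_DEPT_V where EMP_DEPT_DNAME='SALES';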

Additional notes

I mentioned that aliasing the subquery is not mandatory because I do not have to prefix the column names. However, when looking at the predicates section of the execution plan, the columns may be prefixed with an internal alias:

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("from$_subquery$_006"."EMP_MGR_DEPTNO"="from$_subquery$_009"."EMP_MGR_DEPTNO")
2 - access("from$_subquery$_001"."EMP_MGR_EMPNO"="from$_subquery$_006"."EMP_MGR_EMPNO" AND "from$_subquery$_001"."EMP_DEPTNO"="from$_subquery$_003"."EMP_DEPTNO")

Then it is a good idea to add aliases, such as EMP, EMP_DEPT, EMP_MGR and EMP_MGR_DEPT in the query above, so that the predicates become:

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("EMP_MGR"."EMP_MGR_DEPTNO"="EMP_MGR_DEPT"."EMP_MGR_DEPTNO")
2 - access("EMP"."EMP_MGR_EMPNO"="EMP_MGR"."EMP_MGR_EMPNO" AND "EMP"."EMP_DEPTNO"="EMP_DEPT"."EMP_DEPTNO")
5 - filter("DNAME"='SALES')

I also like to add a QB_NAME hint so that I can reference easily those subqueries if I have to add some hints there. Finally, this is what I can generate for this query:


SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_DEPTNO,EMP_MGR_ENAME,EMP_MGR_DEPT_DNAME
from
(select /*+qb_name(EMP)*/ DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP) EMP
natural join
(select /*+qb_name(EMP_DEPT)*/ DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT) EMP_DEPT
natural join
(select /*+qb_name(EMP_MGR)*/ DEPTNO EMP_MGR_DEPTNO,EMPNO EMP_MGR_EMPNO,ENAME EMP_MGR_ENAME from EMP) EMP_MGR
natural join
(select /*+qb_name(EMP_MGR_DEPT)*/ DEPTNO EMP_MGR_DEPTNO,DNAME EMP_MGR_DEPT_DNAME,LOC EMP_MGR_DEPT_LOC from DEPT) EMP_MGR_DEPT
where EMP_DEPT_DNAME='SALES';

So what?

My goal here is not to recommend to always use natural joins. This depends on the context (ad-hoc queries, embedded ones in existing code with naming standards,…) and whether you can control exactly the column names. There are also a few bugs with ANSI joins, and natural join is not widely used, so maybe not tested a lot. But when I hear that Natural Join is bad, I want to explain the why/how/when. And one of the good sides of it is that it forces us to do the projection/rename as soon as possible and this makes the query easier to read/maintain/evolve. Of course, using natural join in that way requires that all tables are added to the FROM clause through a subquery which carefully names all columns in the SELECT clause so that the correlation with the other tables is clearly defined.

 

Cet article A tribute to Natural Join est apparu en premier sur Blog dbi services.

Running PLSQL as SYSDBA through DEFINER-rights?

Mon, 2018-08-20 10:07

Recently I got an interesting request: the customer wanted to allow the application installation routine to create a guaranteed restore point without giving it all the privileges required to do so. So the idea was to encapsulate creating and dropping a guaranteed restore point in a PLSQL package and grant the application owner the permission to execute the package. The problem with that approach is that SYSDBA-privileges are required to create a guaranteed restore point, and the question came up whether it is actually possible to have a PLSQL package created with DEFINER-rights, where the DEFINER has the SYSDBA-privilege. Actually that is not possible, because you have to be connected "AS SYSDBA" to have the SYSDBA-privilege. A package created by a user who connected as SYSDBA does not inherit the SYSDBA-privilege, as the following example shows:


sqlplus / as sysdba
 
create user dbadmin identified by dbadmin;
grant sysdba to dbadmin;
create user appluser identified by appluser;
grant create session to appluser;
connect dbadmin/dbadmin as sysdba
 
create or replace package grp_handling as
procedure create_grp;
procedure drop_grp;
end;
/
 
create or replace package body grp_handling as
procedure create_grp
as
begin
begin
execute immediate 'drop restore point before_appl_installation';
exception
when others then null;
end;
execute immediate 'create restore point before_appl_installation guarantee flashback database';
end;
procedure drop_grp
as
begin
execute immediate 'drop restore point before_appl_installation';
end;
end;
/
 
exec grp_handling.create_grp;
 
PL/SQL procedure successfully completed.
 
select name from v$restore_point;
 
NAME
----------------------------------------------
BEFORE_APPL_INSTALLATION
 
select owner,object_type from dba_objects where object_name='GRP_HANDLING';
 
OWNER OBJECT_TYPE
------------------------------ -----------------------
SYS PACKAGE
SYS PACKAGE BODY
 
select user from dual;
 
USER
------------------------------
SYS
 
grant execute on grp_handling to appluser;
 
connect appluser/appluser
exec sys.grp_handling.create_grp;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.GRP_HANDLING", line 10
ORA-06512: at line 1

So, first of all, a user who connects "AS SYSDBA" actually connects as SYS. Secondly, the SYSDBA-privilege is not inherited as a DEFINER-right in PLSQL-objects.

So how do we resolve the issue and create a guaranteed restore point from the appluser session then?
I suggested wrapping a dbms_scheduler external callout in a package as follows:

1.) Create the bash-Skript /home/oracle/GRP/cre_grp.bash as OS-User oracle


#!/bin/bash
. oraenv <<EOF
prem122
EOF
 
sqlplus / as sysdba <<EOF
begin
execute immediate 'drop restore point before_appl_installation';
exception
when others then null;
end;
/
create restore point before_appl_installation guarantee flashback database;
exit
EOF

-> Add execute permissions for the user: chmod u+x /home/oracle/GRP/cre_grp.bash

2.) Create credential and job


connect dbadmin as sysdba
BEGIN
DBMS_CREDENTIAL.create_credential(
credential_name => 'oracle_122',
username => 'oracle',
password => '<passwd OS-user oracle>'
);
END;
/
 
BEGIN
dbms_scheduler.CREATE_job
(
job_name => 'CRE_GUARANTEED_RP',
job_type => 'EXECUTABLE',
job_action => '/home/oracle/GRP/cre_grp.bash',
enabled => true,
auto_drop => false,
credential_name => 'oracle_122'
);
END;
/

3.) Create a procedure to run the Job


connect dbadmin as sysdba
create procedure run_my_GRP_job
as
begin
dbms_scheduler.run_job('CRE_GUARANTEED_RP');
end;
/
 
grant execute on run_my_GRP_job to appluser;

4.) Now the appluser can run the job:


connect appluser
exec sys.run_my_GRP_job;
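
To verify that the guaranteed restore point was actually created by the external job, a quick check as a privileged user could be:

sqlplus / as sysdba
select name, guarantee_flashback_database from v$restore_point;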

Conclusion: you cannot provide SYSDBA-privileges through DEFINER-rights in PLSQL. In case you have to run PLSQL "AS SYSDBA", you have to connect "AS SYSDBA". Running SYSDBA commands as a non-SYSDBA user is possible with a workaround, such as a procedure which runs an external job.

 

Cet article Running PLSQL as SYSDBA through DEFINER-rights? est apparu en premier sur Blog dbi services.

The size of Oracle Home: from 9GB to 600MB – What about PostgreSQL?

Fri, 2018-08-17 13:20

A recent blog post from Franck and a tweet around that topic are the inspiration for this blog post, thanks Jan for requesting :). In short, it is about how small you can get the binaries. Is that important? At least when it comes to Docker images it might be important, as you usually try to make the image as small as possible. Well, comparing PostgreSQL and Oracle in that field is unfair, as Oracle comes with a lot of stuff by default which PostgreSQL just does not ship (e.g. Apex, SQL Developer, …), so please treat this more as a fun post.

The way we usually compile PostgreSQL is this (not in /var/tmp in real life):

postgres@pgbox:/home/postgres/ [pg103] cd /var/tmp/
postgres@pgbox:/var/tmp/ [pg103] wget https://ftp.postgresql.org/pub/source/v10.5/postgresql-10.5.tar.bz2
postgres@pgbox:/var/tmp/ [pg103] tar -axf postgresql-10.5.tar.bz2
postgres@pgbox:/var/tmp/ [pg103] cd postgresql-10.5/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] PGHOME=/var/tmp/pg105
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] SEGSIZE=2
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] BLOCKSIZE=8
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] WALSEGSIZE=16
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-perl \
                                                             --with-python \
                                                             --with-openssl \
                                                             --with-pam \
                                                             --with-ldap \
                                                             --with-libxml \
                                                             --with-libxslt \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] cd contrib
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 install

When we do this against the PostgreSQL 10.5 source code the result is this (without the documentation, of course, but containing all the extensions):

postgres@pgbox:/var/tmp/postgresql-10.5/contrib/ [pg103] du -sh /var/tmp/pg105/
28M	/var/tmp/pg105/

Can we get that even smaller? Let’s try to skip the extensions:

postgres@pgbox:/var/tmp/postgresql-10.5/contrib/ [pg103] cd ..
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make clean
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-perl \
                                                             --with-python \
                                                             --with-openssl \
                                                             --with-pam \
                                                             --with-ldap \
                                                             --with-libxml \
                                                             --with-libxslt \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] rm -rf /var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install

What do we have now?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/

We saved another 3MB. Can we do more? Let's try to skip all the "--with" flags that enable Perl and so on for the configure command:

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make clean
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] rm -rf /var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install

Do we see a change?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/

No, that does not change anything. Franck stripped the Oracle binaries and libraries, so let's try to do the same (although I am not sure right now if that is supported):

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] strip /var/tmp/pg105/bin/*
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] strip /var/tmp/pg105/lib/*
strip: Warning: '/var/tmp/pg105/lib/pkgconfig' is not an ordinary file
strip: Warning: '/var/tmp/pg105/lib/postgresql' is not an ordinary file
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
24M	/var/tmp/pg105/

So, another 1MB less. Can we still initialize and start PostgreSQL?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/initdb -D /var/tmp/testpg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /var/tmp/testpg ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg -l logfile start

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ start
waiting for server to start....2018-08-17 18:57:50.329 CEST [8528] LOG:  listening on IPv6 address "::1", port 5432
2018-08-17 18:57:50.329 CEST [8528] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-08-17 18:57:50.334 CEST [8528] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-08-17 18:57:50.354 CEST [8529] LOG:  database system was shut down at 2018-08-17 18:57:31 CEST
2018-08-17 18:57:50.358 CEST [8528] LOG:  database system is ready to accept connections
 done
server started

Looks good and we are able to connect:

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Time: 1.428 ms

What else can we do? When you do not need the utilities on the server you could just remove them (as said, this is a fun post, don’t do this):

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] cd /var/tmp/pg105/bin
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm clusterdb createdb createuser dropdb dropuser pg_archivecleanup pg_basebackup pg_dump pg_dumpall pg_isready pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb

We could probably even remove pgbench and psql, but I will need these two to show that the server is still working. What do we have now?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
21M	/var/tmp/pg105/

Another 3MB less. Can we still restart and connect?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ stop
waiting for server to shut down....2018-08-17 19:08:49.588 CEST [9144] LOG:  received fast shutdown request
2018-08-17 19:08:49.593 CEST [9144] LOG:  aborting any active transactions
2018-08-17 19:08:49.597 CEST [9144] LOG:  worker process: logical replication launcher (PID 9151) exited with exit code 1
2018-08-17 19:08:49.598 CEST [9146] LOG:  shutting down
2018-08-17 19:08:49.625 CEST [9144] LOG:  database system is shut down
 done
server stopped
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ start
waiting for server to start....2018-08-17 19:08:51.949 CEST [9368] LOG:  listening on IPv6 address "::1", port 9999
2018-08-17 19:08:51.949 CEST [9368] LOG:  listening on IPv4 address "127.0.0.1", port 9999
2018-08-17 19:08:51.953 CEST [9368] LOG:  listening on Unix socket "/tmp/.s.PGSQL.9999"
2018-08-17 19:08:51.966 CEST [9369] LOG:  database system was shut down at 2018-08-17 19:08:49 CEST
2018-08-17 19:08:51.969 CEST [9368] LOG:  database system is ready to accept connections
 done
server started
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Time: 2.043 ms

Looks good. Now let's do the final step and remove the rest which is not required for the server, but before that we do an initdb as we cannot do that afterwards:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ stop
waiting for server to shut down....2018-08-17 19:10:31.693 CEST [9368] LOG:  received fast shutdown request
2018-08-17 19:10:31.696 CEST [9368] LOG:  aborting any active transactions
2018-08-17 19:10:31.696 CEST [9368] LOG:  worker process: logical replication launcher (PID 9375) exited with exit code 1
2018-08-17 19:10:31.697 CEST [9370] LOG:  shutting down
2018-08-17 19:10:31.712 CEST [9368] LOG:  database system is shut down
 done
server stopped
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm -rf /var/tmp/testpg/
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/initdb -D /var/tmp/testpg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /var/tmp/testpg ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg -l logfile start

So, remove the rest:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm pg_config pg_controldata psql pgbench initdb ecpg pgbench pg_ctl
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
20M	/var/tmp/pg105/

We are down to 20MB but we can still start the instance:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/postgres -D /var/tmp/testpg/ &
[1] 9486
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] 2018-08-17 19:13:54.917 CEST [9486] LOG:  listening on IPv6 address "::1", port 9999
2018-08-17 19:13:54.917 CEST [9486] LOG:  listening on IPv4 address "127.0.0.1", port 9999
2018-08-17 19:13:54.924 CEST [9486] LOG:  listening on Unix socket "/tmp/.s.PGSQL.9999"
2018-08-17 19:13:54.955 CEST [9487] LOG:  database system was shut down at 2018-08-17 19:10:56 CEST
2018-08-17 19:13:54.960 CEST [9486] LOG:  database system is ready to accept connections

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] ps -ef | grep postgres
root      1061   941  0 18:26 ?        00:00:00 sshd: postgres [priv]
postgres  1064  1061  0 18:26 ?        00:00:02 sshd: postgres@pts/0
postgres  1065  1064  0 18:26 pts/0    00:00:01 -bash
postgres  9486  1065  0 19:13 pts/0    00:00:00 /var/tmp/pg105/bin/postgres -D /var/tmp/testpg/
postgres  9488  9486  0 19:13 ?        00:00:00 postgres: checkpointer process   
postgres  9489  9486  0 19:13 ?        00:00:00 postgres: writer process   
postgres  9490  9486  0 19:13 ?        00:00:00 postgres: wal writer process   
postgres  9491  9486  0 19:13 ?        00:00:00 postgres: autovacuum launcher process   
postgres  9492  9486  0 19:13 ?        00:00:00 postgres: stats collector process   
postgres  9493  9486  0 19:13 ?        00:00:00 postgres: bgworker: logical replication launcher  
postgres  9496  1065  0 19:14 pts/0    00:00:00 ps -ef
postgres  9497  1065  0 19:14 pts/0    00:00:00 grep --color=auto postgres

Using another psql on that box we can confirm that we can connect:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /u01/app/postgres/product/10/db_4/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Still too much? What else can we do? What is consuming space?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/*
6.6M	/var/tmp/pg105/bin
5.9M	/var/tmp/pg105/include
4.1M	/var/tmp/pg105/lib
2.9M	/var/tmp/pg105/share

We cannot do more in the “bin” directory, there is nothing left to delete:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] ls -l /var/tmp/pg105/bin
total 6660
-rwxr-xr-x. 1 postgres postgres 6817480 Aug 17 18:56 postgres
lrwxrwxrwx. 1 postgres postgres       8 Aug 17 18:54 postmaster -> postgres

Everything else will probably only save us a few bytes, such as the sample files:

postgres@pgbox:/var/tmp/pg105/ [pg103] find . -name *sample*
./share/postgresql/tsearch_data/synonym_sample.syn
./share/postgresql/tsearch_data/thesaurus_sample.ths
./share/postgresql/tsearch_data/hunspell_sample.affix
./share/postgresql/tsearch_data/ispell_sample.affix
./share/postgresql/tsearch_data/ispell_sample.dict
./share/postgresql/tsearch_data/hunspell_sample_long.affix
./share/postgresql/tsearch_data/hunspell_sample_long.dict
./share/postgresql/tsearch_data/hunspell_sample_num.affix
./share/postgresql/tsearch_data/hunspell_sample_num.dict
./share/postgresql/pg_hba.conf.sample
./share/postgresql/pg_ident.conf.sample
./share/postgresql/postgresql.conf.sample
./share/postgresql/recovery.conf.sample
./share/postgresql/pg_service.conf.sample
./share/postgresql/psqlrc.sample
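If you really wanted those last few bytes as well, removing the sample files would presumably look like the lines below (same caveat as everything else in this post: this is for fun only, do not do this on a real installation):

postgres@pgbox:/var/tmp/pg105/ [pg103] find ./share/postgresql -name '*sample*' -delete
postgres@pgbox:/var/tmp/pg105/ [pg103] du -sh /var/tmp/pg105/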

So how much space do we consume for the PostgreSQL installation and the files which make up the instance?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
20M	/var/tmp/pg105/
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/testpg/
41M	/var/tmp/testpg/

… 61MB. When we add the WAL file Jan mentioned in his tweet we come to 77MB. Not much.

The final question is whether PostgreSQL is still working. Let’s use pgbench from another installation on the same server against this:

postgres@pgbox:/var/tmp/pg105/ [pg103] /u01/app/postgres/product/10/db_3/bin/pgbench -i -s 10 postgres
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 1000000 tuples (10%) done (elapsed 0.08 s, remaining 0.75 s)
200000 of 1000000 tuples (20%) done (elapsed 0.24 s, remaining 0.95 s)
300000 of 1000000 tuples (30%) done (elapsed 0.42 s, remaining 0.98 s)
400000 of 1000000 tuples (40%) done (elapsed 0.49 s, remaining 0.74 s)
500000 of 1000000 tuples (50%) done (elapsed 0.70 s, remaining 0.70 s)
600000 of 1000000 tuples (60%) done (elapsed 0.88 s, remaining 0.58 s)
700000 of 1000000 tuples (70%) done (elapsed 0.95 s, remaining 0.41 s)
800000 of 1000000 tuples (80%) done (elapsed 1.14 s, remaining 0.29 s)
900000 of 1000000 tuples (90%) done (elapsed 1.32 s, remaining 0.15 s)
1000000 of 1000000 tuples (100%) done (elapsed 1.41 s, remaining 0.00 s)
vacuum...
set primary keys...
done.
postgres@pgbox:/var/tmp/pg105/ [pg103] /u01/app/postgres/product/10/db_3/bin/pgbench -s 10 postgres
scale option ignored, using count from pgbench_branches table (10)
starting vacuum...end.
transaction type: 
scaling factor: 10
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
latency average = 4.436 ms
tps = 225.435296 (including connections establishing)
tps = 285.860401 (excluding connections establishing)

Looks good. So you can come down to 20MB for the PostgreSQL installation and another 41MB for the files you need to start the instance. You could even drop the postgres database to save another 7MB. But remember: please don’t do that, you are still fine with around 30MB :)
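(If you really insisted on dropping the postgres database, it would presumably be done from one of the other installations on the box, since psql was removed from this one – something like the line below. Again: don’t.)

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /u01/app/postgres/product/10/db_4/bin/psql -c "drop database postgres" template1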

 

Cet article The size of Oracle Home: from 9GB to 600MB – What about PostgreSQL? est apparu en premier sur Blog dbi services.

The size of Oracle Home: from 9GB to 600MB

Thu, 2018-08-16 14:43

This is research only and totally unsupported. When building docker images to run Oracle Database in a container, we try to get the smallest image possible. One way is to remove some subdirectories that we know will not be used. For example, the patch history is not used anymore once we have the required version. The dbca templates can be removed as soon as we have created the database… In this post I take the opposite approach: run some workload on a normal Oracle Home, and keep only the files that were used.

I have Oracle Database 18c installed in /u00/app/oracle/product/18EE and it takes 9GB on my host:

[oracle@vmreforatun01 ~]$ du --human-readable --max-depth=1 $ORACLE_HOME | sort -h | tail -10
 
352M /u00/app/oracle/product/18EE/jdk
383M /u00/app/oracle/product/18EE/javavm
423M /u00/app/oracle/product/18EE/inventory
437M /u00/app/oracle/product/18EE/assistants
605M /u00/app/oracle/product/18EE/md
630M /u00/app/oracle/product/18EE/bin
673M /u00/app/oracle/product/18EE/apex
1.4G /u00/app/oracle/product/18EE/.patch_storage
2.3G /u00/app/oracle/product/18EE/lib
9.4G /u00/app/oracle/product/18EE

Gigabytes of libraries (most of them used only to link the executables), hundreds of megabytes of binaries, templates for new databases, applied patches, old object files, options, tools, command line and graphical interfaces,… Do we need all that?

For a full installation in production, yes for sure. The more we have, the better it is. When you have to connect at 2 a.m. because you are on-call and a critical alert wakes you up, you will appreciate having all the tools on the server. Especially if you connect through a few security obstacles such as remote VPN, desktop, Wallix, tunnels to finally get a high-latency tty with no copy-paste possibilities. With a full Oracle Home, you can face any issue. You have efficient command line interfaces (sqlplus and hopefully sqlcl) or graphical ones (SQL Developer, asmca,…). For severe problems, you can even re-link, apply or roll back patches, quickly create a new database to import something into it,…

But what if you just want to provide a small container where a database is running, and no additional administration support? Where you will never re-install the software, apply patches, re-create the database, troubleshoot weird issues. Just have users connect through the listener port and never log to the container. Then, most of these 9.4 GB are useless.

But how to know which files are useful or not?

If you can rely on Linux ‘access time’ then you may look at the files accessed during the last days – after any installation or database creation is done:

[oracle@vmreforatun01 ~]$ find $ORACLE_HOME -atime -1 -exec stat -L -c "%x %y %z %F %n" {} \; | sort

But this is not reliable. Access time depends on the file type, filesystem, mount options,… and is usually bypassed as much as possible because writing something just to log that you read something is not a very good idea.

Here, I’ll trace all system calls related to file names (strace -e trace=file). I’ll trace them from the start of the database, so I run strace on dbstart with the -f argument to trace across forks. This way, I’ll trace the listener, the instance processes and any user process created through the listener.

I pipe the output to an awk script which extracts the file names (which are enclosed in double quotes in the strace output). Basically, the awk just sets the field separator with -F'"' and prints the $2 token for each line. There are many single and double quotes here because of shell interpretation.

[oracle@vmreforatun01 ~]$ dbshut $ORACLE_HOME ; strace -fe trace=file -o "|awk -F'"'"'"' '"'{print $2}'"'" sh -xc "dbstart $ORACLE_HOME >&2" | grep "^$ORACLE_HOME" | sort -u > /tmp/files.txt &
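As a side note, if the listener and the instance were already running, a similar trace could presumably be obtained by attaching strace to the existing processes instead of starting them under strace. A rough sketch of the idea (not what was done here, the process name patterns are assumptions):

[oracle@vmreforatun01 ~]$ for p in $(pgrep -f tnslsnr) $(pgrep -f 'ora_.*_DB1') ; do strace -f -p $p -e trace=file -o /tmp/strace_$p.txt & done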

Then I run some activity. I did this on our Oracle Tuning training workshop lab, when reviewing all the exercises after upgrading the lab VM to 18c. This runs some usual application SQL (we use Swingbench) and some monitoring. The idea is to run through all the features that you want to be available on the container you will build.

When I’m done, I dbshut (remember this is for a lab only – strace is not for production) and then the strace output gets deduplicated (sort -u) and written to files.txt in /tmp.

This file contains all files referenced by system calls. Surprisingly, there is one that is not captured here, the ldap messages file, but if I do not take it then the remote connections will fail with:

ORA-07445: exception encountered: core dump [gslumcCalloc()+41] [SIGSEGV] [ADDR:0x21520] [PC:0x60F92D9] [Address not mapped to object] []

I got it through a very empirical approach and will try to understand it later. For the moment, I just add it to the list:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/ldap/mesg/ldapus.msb >> /tmp/files.txt

I also add adrci and dbshut scripts as they are small and may be useful:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/bin/adrci $ORACLE_HOME/bin/dbshut >> /tmp/files.txt

From this list, I keep those which are not directories, and tar all regular files and symbolic links into /tmp/smalloh.tar:

[oracle@vmreforatun01 ~]$ stat -c "%F %n" $(cat /tmp/files.txt) | awk '!/^directory/{print $3}' | tar -cvf /tmp/smalloh.tar --dereference --files-from=-

This is a 600M tar:

[oracle@vmreforatun01 ~]$ du -h /tmp/smalloh.tar
 
598M /tmp/smalloh.tar

Then I can remove my Oracle Home

[oracle@vmreforatun01 ~]$ cd $ORACLE_HOME/..
[oracle@vmreforatun01 product]$ rm -rf 18EE
[oracle@vmreforatun01 product]$ mkdir 18EE

and extract the files from my tar:

[oracle@vmreforatun01 /]$ tar -xf /tmp/smalloh.tar

I forgot that there are some setuid executables, so I must extract as root to restore them:

[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwxr-x--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle
[oracle@vmreforatun01 /]$ su
Password:
[root@vmreforatun01 /]# tar -xf /tmp/smalloh.tar
[root@vmreforatun01 /]# exit
[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle

That’s a 600MB Oracle Home then. You can reduce it further by stripping the binaries:

[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
599M /u00/app/oracle/product/18EE
[oracle@vmreforatun01 18EE]$ strip $ORACLE_HOME/bin/* $ORACLE_HOME/lib/*
[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
570M /u00/app/oracle/product/18EE

but for only 30MB I really prefer to keep all the symbols. As I’m doing something completely unsupported, I may have to do some troubleshooting.

Now I’m ready to start the database and the listener:

[oracle@vmreforatun01 18EE]$ dbstart $ORACLE_HOME
Processing Database instance "DB1": log file /u00/app/oracle/product/18EE/rdbms/log/startup.log

and I run some Swingbench workload to check that everything is fine:

[oracle@vmreforatun01 18EE]$ /home/oracle/swingbench/bin/charbench -cs //localhost:1521/APP -u soe -p soe -uc 10 -min 5 -max 20 -a -v
Author : Dominic Giles
Version : 2.5.0.932
 
Results will be written to results.xml.
 
Time Users TPM TPS
 
6:35:15 PM 0 0 0
...
6:35:44 PM 10 12 9
6:35:45 PM 10 16 4
6:35:46 PM 10 21 5
6:35:47 PM 10 31 10

The only errors in alert.log are about checking the patches at install:

QPI: OPATCH_INST_DIR not present:/u00/app/oracle/product/18EE/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 767
ORA-06512: at "SYS.DBMS_QOPATCH", line 547
ORA-06512: at "SYS.DBMS_QOPATCH", line 2124

Most of those 600MB are in the server executable (bin/oracle) and client shared library (lib/libclntsh.so):

[oracle@vmreforatun01 ~]$ size -td /u00/app/oracle/product/18EE/bin/* /u00/app/oracle/product/18EE/lib/* | sort -n
 
text data bss dec hex filename
2423 780 48 3251 cb3 /u00/app/oracle/product/18EE/lib/libofs.so
4684 644 48 5376 1500 /u00/app/oracle/product/18EE/lib/libskgxn2.so
5301 732 48 6081 17c1 /u00/app/oracle/product/18EE/lib/libodm18.so
10806 2304 1144 14254 37ae /u00/app/oracle/product/18EE/bin/sqlplus
13993 2800 1136 17929 4609 /u00/app/oracle/product/18EE/bin/adrci
46456 3008 160 49624 c1d8 /u00/app/oracle/product/18EE/lib/libnque18.so
74314 4824 1248 80386 13a02 /u00/app/oracle/product/18EE/bin/oradism
86396 23968 1144 111508 1b394 /u00/app/oracle/product/18EE/bin/lsnrctl
115523 2196 48 117767 1cc07 /u00/app/oracle/product/18EE/lib/libocrutl18.so
144591 3032 160 147783 24147 /u00/app/oracle/product/18EE/lib/libdbcfg18.so
216972 2564 48 219584 359c0 /u00/app/oracle/product/18EE/lib/libclsra18.so
270692 13008 160 283860 454d4 /u00/app/oracle/product/18EE/lib/libskjcx18.so
321701 5024 352 327077 4fda5 /u00/app/oracle/product/18EE/lib/libons.so
373988 7096 9536 390620 5f5dc /u00/app/oracle/product/18EE/lib/libmql1.so
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabaseconfig
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabasehome
878351 36800 1144 916295 dfb47 /u00/app/oracle/product/18EE/bin/tnslsnr
928382 108920 512 1037814 fd5f6 /u00/app/oracle/product/18EE/lib/libcell18.so
940122 56176 2376 998674 f3d12 /u00/app/oracle/product/18EE/lib/libsqlplus.so
1118019 16156 48 1134223 114e8f /u00/app/oracle/product/18EE/lib/libocr18.so
1128954 5936 160 1135050 1151ca /u00/app/oracle/product/18EE/lib/libskgxp18.so
1376814 18548 48 1395410 154ad2 /u00/app/oracle/product/18EE/lib/libocrb18.so
1685576 130464 160 1816200 1bb688 /u00/app/oracle/product/18EE/lib/libasmclntsh18.so
2517125 16496 15584 2549205 26e5d5 /u00/app/oracle/product/18EE/lib/libipc1.so
3916867 86504 111912 4115283 3ecb53 /u00/app/oracle/product/18EE/lib/libclntshcore.so.18.1
4160241 26320 69264 4255825 40f051 /u00/app/oracle/product/18EE/lib/libmkl_rt.so
5120001 459984 7784 5587769 554339 /u00/app/oracle/product/18EE/lib/libnnz18.so
10822468 302312 21752 11146532 aa1524 /u00/app/oracle/product/18EE/lib/libhasgen18.so
11747579 135320 160 11883059 b55233 /u00/app/oracle/product/18EE/lib/libshpkavx218.so
61758209 2520896 134808 64413913 3d6e0d9 /u00/app/oracle/product/18EE/lib/libclntsh.so.18.1
376147897 3067672 602776 379818345 16a39169 /u00/app/oracle/product/18EE/bin/oracle
487369241 7106932 1203944 495680117 1d8b7a75 (TOTALS)

Of course, this is probably not sufficient, especially if you want to run APEX, OJVM, Oracle Text. The method is there: run a workload that covers everything you need, and build the Oracle Home from the files used there. I used strace here, but auditd can also be a good idea. Ideally, this job will be done one day by Oracle itself in a supported way, so that we can build a core container for Oracle Database and add features as Dockerfile layers. This had been done to release Oracle XE 11g, which is only 300MB. However, Oracle XE 18c, announced for October, will probably be larger as it includes nearly all options.

 

Cet article The size of Oracle Home: from 9GB to 600MB est apparu en premier sur Blog dbi services.

Easily manage dual backup destination with RMAN

Wed, 2018-08-15 10:40

Backup on disk with RMAN is great. It’s fast, you can set as many channels as your platform can handle for faster backups. And you can restore as fast as you can read and write files on disk with these multiple channels. As long as you’re using Enterprise Edition, that is, because Standard Edition is limited to a single channel.

Disk space is very often limited and you’ll probably have to find another solution if you want to keep backups longer. You can think about tapes, or you can connect RMAN to a global backup tool, but that requires additional libraries that are not free, and it definitely adds complexity.

The other solution is to have a dual disk destination for the backups. The first one will be the main destination for your daily backups, the other one will be dedicated to long-term backups, maybe on slower disks but with more free space available. This second destination can then be backed up with another tool without using any library.

For the demonstration, assume you have 2 filesystems, /backup is dedicated to latest daily backups and /lt_backup is for long-term backups.

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

First of all, take a backup on the first destination:

RMAN> backup as compressed backupset database format '/oracle/backup/%U';

 

This is a small database and the backup is done with the default single channel, so there are only two backupsets, one for the datafiles and the other for the controlfile and the spfile:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:27 backup/2btaj0mt_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:27 backup/2ctaj0nm_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

It’s quite easy to move the backup to the long term destination with RMAN:

RMAN> backup backupset all format '/oracle/lt_backup/%U' delete input;

 

BACKUP BACKUPSET with DELETE INPUT is basically the same as a system mv or move. But it does not require recataloging the backup files, as RMAN does this automatically.

Now our backup is located in the second destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:28 lt_backup/2btaj0mt_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:28 lt_backup/2ctaj0nm_1_2

 

You can see here that the backup filename has changed: the last number increased. Oracle knows that this is the second copy of these backupsets (even though the first ones don’t exist anymore).

Like with a mv command, you can put your backup back in the previous destination:

RMAN> backup backupset all format '/oracle/backup/%U' delete input;

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

All the backupsets are now back in the first destination only, and you can see another increase in the filename’s copy number. And the RMAN catalog is up to date.

Now let’s make the first folder the default destination for the backups, and go for compressed backupsets as the default behavior:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/backup/%U';

 

Now you only need a 2-word command to back up the database:

RMAN> backup database;

 

New backup is in first destination as expected:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Suppose you want to move the oldest backups, those done before 1.30AM:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+1.5/24' format '/oracle/lt_backup/%U' delete input;

 

Everything is working as expected: the latest backup is still in the first destination, and the oldest one is in the lt_backup filesystem, with another increase of the number ending the filename:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:38 lt_backup/2btaj0mt_1_4
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:38 lt_backup/2ctaj0nm_1_4

 

Now that the tests are OK, let’s simulate a real world example. First, tidy up all the backups:

RMAN> delete noprompt backupset;

 

Let’s take a new backup.

RMAN> backup database;

 

Backup is in default destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Let’s take another backup later:

RMAN> backup database;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Now let’s move the oldest backup to the other folder:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2

 

Storing only the oldest backups in the long-term destination is not so clever: imagine you lose your first backup destination. It would be great to have the latest backup in both destinations. You can do that with a BACKUP BACKUPSET COMPLETED AFTER and no DELETE INPUT, which is basically the same as a cp or copy command:

RMAN> backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

323M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:03 lt_backup/2htaj2m4_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:03 lt_backup/2itaj2mt_1_2

 

That’s it, you now have a first destination for the newest backups, and a second one for all the backups. And you just have to schedule these 2 BACKUP BACKUPSET commands after the daily backup of your database.

Note that backups will stay in both destinations until they reach the retention limit you defined for your database. The DELETE OBSOLETE will purge the backupsets wherever they are and delete all the known copies.
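To give an idea, the daily job to schedule could look like the following sketch (the cutoff time, the paths and the final DELETE OBSOLETE are assumptions to adapt to your own retention policy):

rman target / <<EOF
backup database;
backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;
backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';
delete noprompt obsolete;
EOF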

 

Cet article Easily manage dual backup destination with RMAN est apparu en premier sur Blog dbi services.

Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE

Wed, 2018-08-15 10:08

With Oracle Database 18c, we can now refresh a standby database over the network using one RMAN command, RECOVER STANDBY DATABASE.

The RECOVER STANDBY DATABASE command restarts the standby instance, refreshes the control file from the primary database, and automatically renames data files, temp files, and online logs. It restores new data files that were added to the primary database and recovers the standby database up to the current time.
When you use the RECOVER STANDBY DATABASE command to refresh a standby database, you specify either a FROM SERVICE clause or a NOREDO clause. The FROM SERVICE clause specifies the name of a primary service. The NOREDO clause specifies that backups should be used for the refresh, which allows a standby to be rolled forward to a specific time or SCN.
The MRP must be manually stopped on the standby before any attempt is made to sync with the primary database.
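If the broker is not used, stopping the MRP is done with the usual SQL command on the standby (in this post I will use the broker APPLY-OFF state instead, which does the same):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;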

In this blog I am doing some tests of standby refresh using the Recover Standby Database command.

Starting from a working Data Guard configuration, let’s set the property StandbyFileManagement to MANUAL:

DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

DGMGRL>

DGMGRL> edit database 'CONT18C_SITE' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'CONT18C_SITE1' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> show  database 'CONT18C_SITE' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL> show  database 'CONT18C_SITE1' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL>

Then I create a new tablespace and a new table in the primary:

SQL> create tablespace TBS_2 datafile '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf' size 5M ;

Tablespace created.

SQL> create table test (id number) tablespace TBS_2;

Table created.

SQL> insert into test values (1);

1 row created.

SQL> insert into test values (2);

1 row created.

SQL> commit;

Commit complete.

SQL>

As expected, the changes are not replicated, as shown in the standby alert logfile and in the broker configuration.

(3):File #14 added to control file as 'UNNAMED00014' because
(3):the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
(3):The file should be manually created to continue.
MRP0 (PID:6307): MRP0: Background Media Recovery terminated with error 1274
2018-08-15T13:31:08.343276+02:00
Errors in file /u01/app/oracle/diag/rdbms/cont18c_site1/CONT18C/trace/CONT18C_mrp0_6307.trc:
ORA-01274: cannot add data file that was originally created as '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf'
MRP0 (PID:6307): Managed Standby Recovery not using Real Time Apply
Recovery interrupted!

Using the broker

DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          4 minutes 33 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    CONT18C

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold

Database Status:
ERROR

DGMGRL>

Now let’s try to sync the standby database using the RECOVER command. First let’s stop the recovery process.

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-OFF';
Succeeded.
DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          26 minutes 28 seconds (computed 0 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    CONT18C

Database Status:
SUCCESS

DGMGRL>

Next, let’s connect with RMAN to the standby as the target and run the command.
Note that if we try to run the command while connected to the primary as the target, we get the following error:

RMAN> RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/15/2018 14:00:15
RMAN-05146: must be connected to standby database to issue RECOVER STANDBY DATABASE

RMAN>

So, from the standby as the target (note that the outputs are truncated):

[oracle@primaserver admin]$ rman target sys/root@cont18c_site1

Recovery Manager: Release 18.0.0.0.0 - Production on Wed Aug 15 14:03:55 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONT18C (DBID=4292751651)

RMAN>  RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
using target database control file instead of recovery catalog
Executing: alter database flashback off
Oracle instance started

Total System Global Area     956299440 bytes

Fixed Size                     8902832 bytes
Variable Size                348127232 bytes
Database Buffers             595591168 bytes
Redo Buffers                   3678208 bytes

contents of Memory Script:
{
   restore standby controlfile from service  'CONT18c_SITE';
   alter database mount standby database;
}
executing Memory Script

Starting restore at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
output file name=/u01/app/oracle/oradata/CONT18C/control01.ctl
output file name=/u01/app/oracle/oradata/CONT18C/control02.ctl
Finished restore at 15-AUG-18

released channel: ORA_DISK_1
Statement processed

contents of Memory Script:
{
set newname for datafile  14 to
 "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   restore from service  'CONT18c_SITE' datafile
    14;
   catalog datafilecopy  "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   switch datafile all;
}
executing Memory Script

executing command: SET NEWNAME

Starting restore at 15-AUG-18
Starting implicit crosscheck backup at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Crosschecked 5 objects
Finished implicit crosscheck backup at 15-AUG-18

Starting implicit crosscheck copy at 15-AUG-18
using channel ORA_DISK_1
Finished implicit crosscheck copy at 15-AUG-18

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_47_fq7q5ls5_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_48_fq7qn5s3_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_49_fq7r0715_.arc
File Name: 
…
…

contents of Memory Script:
{
  recover database from service  'CONT18c_SITE';
}
executing Memory Script

Starting recover at 15-AUG-18
using channel ORA_DISK_1
skipping datafile 5; already restored to SCN 1550044
skipping datafile 6; already restored to SCN 1550044
skipping datafile 8; already restored to SCN 1550044
skipping datafile 14; already restored to SCN 2112213
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00001: /u01/app/oracle/oradata/CONT18C/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00003: 
…
…
destination for restore of datafile 00012: /u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00013: /u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 15-AUG-18
flashback needs to be reenabled on standby open
Finished recover at 15-AUG-18

RMAN>

And we can verify that the configuration is now in sync:

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-ON';
Succeeded.
DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

After opening the standby in read-only mode, we can verify that everything is now fine:

SQL> alter session set container=pdb1;

Session altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
TBS_NOLOG
TBS_2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/CONT18C/PDB1/system01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf

6 rows selected.

SQL> select * from test;

        ID
----------
         1
         2

SQL>
 

Cet article Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE est apparu en premier sur Blog dbi services.

Oracle Database on OpenShift

Tue, 2018-08-14 15:39
By Franck Pachot

In a previous post I described the setup of MiniShift on my laptop in order to run OpenShift for test purpose. I even pulled the Oracle Database image from the Docker Store. But the goal is to import it into OpenShift to deploy it from the Image Stream.

I start MiniShift on my laptop, specifying a larger disk (default is 20GB)

C:\Users\Franck>minishift start --disk-size 40g
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 40 GB
-- Starting Minishift VM .................................................................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
Importing 'openshift/origin:v3.9.0' ............. OK
Importing 'openshift/origin-docker-registry:v3.9.0' ... OK
Importing 'openshift/origin-haproxy-router:v3.9.0' ...... OK
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........................................................
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.99.105 as the host IP
Using 192.168.99.105 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
https://192.168.99.105:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

MiniShift starts a VirtualBox VM and gets an IP address from the VirtualBox DHCP – here 192.168.99.105.
I can access the console at https://192.168.99.105:8443 and log in as developer or admin, but for the moment I’m continuing on the command line.

At any moment I can log in to the VM running OpenShift with the minishift command. Here, checking the size of the disks:

C:\Users\Franck>minishift ssh
 
[docker@minishift ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/live-rw 9.8G 697M 9.0G 8% /
devtmpfs 974M 0 974M 0% /dev
tmpfs 1000M 0 1000M 0% /dev/shm
tmpfs 1000M 18M 983M 2% /run
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
/dev/sr0 344M 344M 0 100% /run/initramfs/live
/dev/sda1 39G 1.8G 37G 5% /mnt/sda1
tmpfs 200M 0 200M 0% /run/user/0
tmpfs 200M 0 200M 0% /run/user/1000

Build the Docker image

The goal is to run in OpenShift a container from an image that has been built somewhere else. In this example I’ll not build one but use one provided on the Docker Store: the Oracle Database ‘slim’ image. To pull and tag it, I’ll use the docker engine of the minishift VM, just because it is there.

I have DockerTools installed on my laptop and just want to set the environment to connect to the docker server on the minishift VM. I can get the environment from minishift:

C:\Users\Franck>minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.105:2376
SET DOCKER_CERT_PATH=C:\Users\Franck\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Here is how to directly set the environment from it:

C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Now my docker commands will connect to this docker server. Here is the related info; minishift is already running several containers there for its own usage:

C:\Users\Franck>docker info
Containers: 9
Running: 7
Paused: 0
Stopped: 2
Images: 6
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log:
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: e9c345b3f906d5dc5e8100b05ce37073a811c74a (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 3.10.0-862.6.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953GiB
Name: minishift
ID: U7IQ:TE3X:HSGK:3ES2:IO6G:A7VI:3KUU:YMBC:3ZIR:QYUL:EQUL:VFMS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pachot
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
172.30.0.0/16
127.0.0.0/8
Live Restore Enabled: false

As I’ll use the Oracle Database image for this example, I need to log in to the Docker Store to prove that I accept the licensing conditions:

C:\Users\Franck>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

I pull the image, which takes some time because ‘slim’ still means 2GB with Oracle Database.

C:\Users\Franck>docker pull store/oracle/database-enterprise:12.2.0.1-slim
Trying to pull repository docker.io/store/oracle/database-enterprise ...
12.2.0.1-slim: Pulling from docker.io/store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for docker.io/store/oracle/database-enterprise:12.2.0.1-slim

Here is the image:

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

My minishift VM disk has increased by 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Push the image to OpenShift registry

OpenShift has its integrated container registry, from which the Docker images are visible to the Image Stream.
Here is the address of the registry:

C:\Users\Franck>minishift openshift registry
172.30.1.1:5000

I’ll run some OpenShift commands and the path to the minishift cache for ‘oc’ can be set with:

C:\Users\Franck>minishift oc-env
SET PATH=C:\Users\Franck\.minishift\cache\oc\v3.9.0\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
 
C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

I am still connected as developer to OpenShift:

C:\Users\Franck>oc whoami
developer

and I get the login token:

C:\Users\Franck>oc whoami -t
lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc

I use this token to log in to the OpenShift registry with docker in order to be able to push the image:

C:\Users\Franck>docker login -u developer -p lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc 172.30.1.1:5000
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I create a new project to import the image to:

C:\Users\Franck>oc new-project oracle --display-name=Oracle
Now using project "oracle" on server "https://192.168.99.105:8443".
 
You can add applications to this project with the 'new-app' command. For example, try:
 
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 
to build a new example application in Ruby.

This can also be done from the GUI. Here is the project on the right:
CaptureOpenShiftProject

I tag the image with the name of the registry (172.30.1.1:5000) and the name of the project (oracle) and add an image name, so that the full name is: 172.30.1.1:5000/oracle/ora122slim

C:\Users\Franck>docker tag store/oracle/database-enterprise:12.2.0.1-slim 172.30.1.1:5000/oracle/ora122slim

We can see this tagged image

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
172.30.1.1:5000/oracle/ora122slim latest 27c9559d36ec 12 months ago 2.08GB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

Note that it is the same IMAGE ID and doesn’t take more space:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Then I’m finally ready to push the image to the OpenShift docker registry:

C:\Users\Franck>docker push 172.30.1.1:5000/oracle/ora122slim
The push refers to a repository [172.30.1.1:5000/oracle/ora122slim]
066e811424fb: Pushed
99d7f2451a1a: Pushed
a2c532d8cc36: Pushed
49c80855196a: Pushed
40c24f62a02f: Pushed
latest: digest: sha256:25b0ec7cc3987f86b1e754fc214e7f06761c57bc11910d4be87b0d42ee12d254 size: 1372

This is a copy, and takes an additional 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 5.4G 33G 14% /mnt/sda1

Deploy the image

Finally, I can deploy the image as it is visible in the GUI:
CaptureOpenShiftImport

I choose to deploy from the command line:

C:\Users\Franck>oc new-app --image-stream=ora122slim --name=ora122slimdeployment
--> Found image 27c9559 (12 months old) in image stream "oracle/ora122slim" under tag "latest" for "ora122slim"
 
* This image will be deployed in deployment config "ora122slimdeployment"
* Ports 1521/tcp, 5500/tcp will be load balanced by service "ora122slimdeployment"
* Other containers can access this service through the hostname "ora122slimdeployment"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/ora122slimdeployment --add ...'

--> Creating resources ...
imagestreamtag "ora122slimdeployment:latest" created
deploymentconfig "ora122slimdeployment" created
service "ora122slimdeployment" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/ora122slimdeployment'
Run 'oc status' to view your app.

CaptureOpenShiftDeploy

I expose the service:

C:\Users\Franck>oc expose service ora122slimdeployment
route "ora122slimdeployment" exposed

/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

Here is one little thing to change. From the POD terminal, I can see the following error:
CaptureOpenShiftCrash

The same can be read from command line:

C:\Users\Franck>oc status
In project Oracle (oracle) on server https://192.168.99.105:8443
 
http://ora122slimdeployment-oracle.192.168.99.105.nip.io to pod port 1521-tcp (svc/ora122slimdeployment)
dc/ora122slimdeployment deploys istag/ora122slim:latest
deployment #1 deployed 7 minutes ago - 0/1 pods (warning: 6 restarts)
 
Errors:
* pod/ora122slimdeployment-1-86prl is crash-looping
 
1 error, 2 infos identified, use 'oc status -v' to see details.
 
C:\Users\Franck>oc logs ora122slimdeployment-1-86prl -c ora122slimdeployment
/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

This is because, by default, for security reasons, OpenShift runs the container with a random user id. But the files are executable only by the oracle user:

sh-4.2$ ls -l /home/oracle/setup/dockerInit.sh
-rwxr-xr--. 1 oracle oinstall 2165 Aug 17 2017 /home/oracle/setup/dockerInit.sh
sh-4.2$

The solution is quite simple: allow the container to run with its own user id:

C:\Users\Franck>minishift addon apply anyuid
-- Applying addon 'anyuid':.
Add-on 'anyuid' changed the default security context constraints to allow pods to run as any user.
Per default OpenShift runs containers using an arbitrarily assigned user ID.
Refer to https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints and
https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines for more information.
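On a full OpenShift cluster, without the minishift addon, the equivalent would presumably be to grant the anyuid security context constraint to the service account of the project, as a cluster admin, with something like:

C:\Users\Franck>oc adm policy add-scc-to-user anyuid -z default -n oracle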

Then the restart of the POD will go further:
CaptureOpenShiftOracle

This Oracle Database from the Docker Store is not really an image of an installed Oracle Database, but just a tar of Oracle Home and Database files that have to be untarred.
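This initialization takes a while. To follow it, one can tail the logs of the latest deployment, for example (a quick way to watch it, not taken from the original test):

C:\Users\Franck>oc logs -f dc/ora122slimdeployment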

Now, in addition to the image size I have an additional 2GB layer for the container:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 11G 28G 28% /mnt/sda1
 
C:\Users\Franck>docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 7 6 3.568GB 1.261GB (35%)
Containers 17 9 1.895GB 58.87kB (0%)
Local Volumes 0 0 0B 0B
Build Cache 0B 0B

Of course there is more to customize. The minishift VM should have more memory and the container for Oracle Database as well. We probably want to add an external volume, and export ports outside of the minishift VM.
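A hedged sketch of what that could look like (memory size, volume name, claim size and mount path are assumptions to adapt and test; replace <pod-name> with the name of the running pod):

C:\Users\Franck>minishift start --memory 8GB
C:\Users\Franck>oc set volume dc/ora122slimdeployment --add --name=oradata -t pvc --claim-size=20Gi --mount-path=/ORCL
C:\Users\Franck>oc port-forward <pod-name> 1521:1521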

 

Cet article Oracle Database on OpenShift est apparu en premier sur Blog dbi services.

ODA database stuck in deleting status

Tue, 2018-08-14 15:03

Facing an internal inconsistency in the ODA derby database is very painful (see https://blog.dbi-services.com/oda-lite-what-is-this-odacli-repository/ for more info about the derby database). I have recently faced a case where the database deletion was failing and the database then remained in “Deleting” status. Connecting directly to the internal derby database and doing some cleaning yourself is very risky and should be performed at your own risk. So, in most cases, a database inconsistency issue ends with an Oracle Support ticket to get their help for the cleaning. Before doing so, I wanted to look closer at the issue and was very happy to fix it myself. I wanted to share my experience here.

Issue description

As explained in the introduction, the database deletion failed and the database remained in “Deleting” status.

[root@prod1 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
ea49c5a8-8747-4459-bb99-cd71c8c87d58     testtst1   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Deleting     80a2e501-31d8-4a5d-83db-e04dad34a7fa

Looking at the job activity log, we can see that the deletion is failing while trying to delete the FileSystem.

[root@prod1 ~]# odacli describe-job -i 50a8c1c2-686e-455e-878f-eaa537295c9f

Job details
----------------------------------------------------------------
                     ID:  50a8c1c2-686e-455e-878f-eaa537295c9f
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Failure
                Created:  July 25, 2018 9:40:17 AM CEST
                Message:  DCS-10011:Input parameter 'ACFS Device for delete' cannot be NULL.

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
database Service deletion for ea49c5a8-8747-4459-bb99-cd71c8c87d58 July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 9:40:17 AM CEST       July 25, 2018 9:40:17 AM CEST       Success
Database Deletion                        July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:18 AM CEST       Success
Unregister Db From Cluster               July 25, 2018 9:40:18 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Kill Pmon Process                        July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Database Files Deletion                  July 25, 2018 9:40:19 AM CEST       July 25, 2018 9:40:19 AM CEST       Success
Deleting FileSystem                      July 25, 2018 9:40:21 AM CEST       July 25, 2018 9:40:22 AM CEST       Failure

I had a look at why the file system deletion step could have failed, and I was very surprised to see that there was no data volume for this database anymore, as shown in the volinfo output below. I am not sure what happened, but it is weird: why fail, and stop processing, when the object to delete no longer exists?

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

Solution

So why not try to give the ODA what it expects to see? I created the ACFS volume with the exact expected name, and I was very happy to see that this solved the problem: the volume name is the only relation key. Let’s look at the steps I performed in detail.

Let’s create the data volume the database expects.

ASMCMD> volcreate -G DATA -s 10G DATTESTTST1

ASMCMD> volinfo -G DATA -a
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265 
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

         Volume Name: DATTESTTST1
         Volume Device: /dev/asm/dattesttst1-265
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

Let’s create the file system for the newly created volume.

grid@prod1:/home/grid/ [+ASM1] mkfs.acfs /dev/asm/dattesttst1-265
mkfs.acfs: version                   = 12.2.0.1.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/dattesttst1-265
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Let’s check the mount points expected by the corresponding database.

[root@prod1 ~]# odacli describe-dbstorage -i 31d852f7-bdd0-40f5-9224-2ca139a2c3db
DBStorage details
----------------------------------------------------------------
                     ID: 31d852f7-bdd0-40f5-9224-2ca139a2c3db
                DB Name: testtst1
          DBUnique Name: testtst1_RZ1
         DB Resource ID: ea49c5a8-8747-4459-bb99-cd71c8c87d58
           Storage Type: Acfs
          DATA Location: /u02/app/oracle/oradata/testtst1_RZ1
          RECO Location: /u03/app/oracle/fast_recovery_area/
          REDO Location: /u03/app/oracle/redo/
   FLASH Cache Location:
                  State: ResourceState(status=Configured)
                Created: July 18, 2018 10:28:39 AM CEST
            UpdatedTime: July 18, 2018 10:29:01 AM CEST

We can then add and start the appropriate file system resource.

[root@prod1 testtst1_RZ1]# cd /u01/app/12.2.0.1/grid/bin/
[root@prod1 bin]# ./srvctl add filesystem -volume DATTESTTST1 -diskgroup DATA -path /u02/app/oracle/oradata/testtst1_RZ1 -fstype ACFS -autostart ALWAYS -mountowner oracle
[root@prod1 bin]# ./srvctl start filesystem -device /dev/asm/dattesttst1-265
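
Before retrying the deletion, we can quickly verify the state of the new cluster resource (same device name as above); it should report the ACFS file system as mounted:

[root@prod1 bin]# ./srvctl status filesystem -device /dev/asm/dattesttst1-265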

Let’s check the mounted file system.

[root@prod1 bin]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
/dev/asm/dattesttst1-265
                       10G  265M  9.8G   3% /u02/app/oracle/oradata/testtst1_RZ1

Let’s now try to delete the database again. The -fd option is mandatory to force the deletion.

[root@prod1 bin]# odacli delete-database -i ea49c5a8-8747-4459-bb99-cd71c8c87d58 -fd
{
  "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
  "status" : "Running",
  "message" : null,
  "reports" : [ {
    "taskId" : "TaskZJsonRpcExt_471",
    "taskName" : "Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion",
    "taskResult" : "",
    "startTime" : "July 25, 2018 10:04:24 AM CEST",
    "endTime" : "July 25, 2018 10:04:24 AM CEST",
    "status" : "Success",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_469",
    "jobId" : "976c8689-a69d-4e0d-a5e0-e40a30a77d29",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "July 25, 2018 10:04:24 AM CEST"
  } ],
  "createTimestamp" : "July 25, 2018 10:04:23 AM CEST",
  "resourceList" : [ ],
  "description" : "Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58",
  "updatedTime" : "July 25, 2018 10:04:23 AM CEST"
}

The database deletion is now successful.

[root@prod1 bin]# odacli describe-job -i 976c8689-a69d-4e0d-a5e0-e40a30a77d29

Job details
----------------------------------------------------------------
                     ID:  976c8689-a69d-4e0d-a5e0-e40a30a77d29
            Description:  Database service deletion with db name: testtst1 with id : ea49c5a8-8747-4459-bb99-cd71c8c87d58
                 Status:  Success
                Created:  July 25, 2018 10:04:23 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate db ea49c5a8-8747-4459-bb99-cd71c8c87d58 for deletion July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Deletion                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Unregister Db From Cluster               July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Kill Pmon Process                        July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:24 AM CEST      Success
Database Files Deletion                  July 25, 2018 10:04:24 AM CEST      July 25, 2018 10:04:25 AM CEST      Success
Deleting Volume                          July 25, 2018 10:04:30 AM CEST      July 25, 2018 10:04:32 AM CEST      Success

Let’s check the volume and file system to make sure they have been removed.

ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: COMMONSTORE
         Volume Device: /dev/asm/commonstore-265
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /opt/oracle/dcs/commonstore

Diskgroup Name: RECO

         Volume Name: RECO
         Volume Device: /dev/asm/reco-403
         State: ENABLED
         Size (MB): 304128
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /u03/app/oracle/

grid@prod1:/home/grid/ [+ASM1] df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
                       30G   24G  4.1G  86% /
tmpfs                 189G  1.3G  187G   1% /dev/shm
/dev/md0              477M   40M  412M   9% /boot
/dev/sda1             500M  320K  500M   1% /boot/efi
/dev/mapper/VolGroupSys-LogVolOpt
                       59G   13G   44G  22% /opt
/dev/mapper/VolGroupSys-LogVolU01
                       99G   25G   69G  27% /u01
/dev/asm/commonstore-265
                      5.0G  319M  4.7G   7% /opt/oracle/dcs/commonstore
/dev/asm/reco-403     297G   14G  284G   5% /u03/app/oracle
grid@prod1:/home/grid/ [+ASM1]

Listing the databases shows that the single database has now been deleted.

[root@prod1 bin]# odacli list-databases
DCS-10032:Resource database is not found.

To complete the test and make sure everything is OK, I created a new database, expecting it to succeed.
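
The creation itself is a single odacli call. As a minimal sketch of what it can look like (the exact flags depend on the odacli version; -n gives the database name and -dh reuses the DB home ID seen in the first listing):

[root@prod1 bin]# odacli create-database -n testtst2 -dh 80a2e501-31d8-4a5d-83db-e04dad34a7fa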

[root@prod1 bin]# odacli describe-job -i cf896c7f-0675-4980-a63f-a8a2b09b1352

Job details
----------------------------------------------------------------
                     ID:  cf896c7f-0675-4980-a63f-a8a2b09b1352
            Description:  Database service creation with db name: testtst2
                 Status:  Success
                Created:  July 25, 2018 10:12:24 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:25 AM CEST      Success
Creating volume dattesttst2              July 25, 2018 10:12:25 AM CEST      July 25, 2018 10:12:36 AM CEST      Success
Creating ACFS filesystem for DATA        July 25, 2018 10:12:36 AM CEST      July 25, 2018 10:12:44 AM CEST      Success
Database Service creation                July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
Database Creation                        July 25, 2018 10:12:44 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Change permission for xdb wallet files   July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:36 AM CEST      Success
Place SnapshotCtrlFile in sharedLoc      July 25, 2018 10:17:36 AM CEST      July 25, 2018 10:17:37 AM CEST      Success
Running DataPatch                        July 25, 2018 10:18:34 AM CEST      July 25, 2018 10:18:47 AM CEST      Success
updating the Database version            July 25, 2018 10:18:47 AM CEST      July 25, 2018 10:18:49 AM CEST      Success
create Users tablespace                  July 25, 2018 10:18:49 AM CEST      July 25, 2018 10:18:51 AM CEST      Success



[root@prod1 bin]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
e0e8163d-dcaa-4692-85c5-24fb9fe17291     testtst2   Si       12.1.0.2             false      OLTP     Odb1s    ACFS       Configured   80a2e501-31d8-4a5d-83db-e04dad34a7fa

Cet article ODA database stuck in deleting status est apparu en premier sur Blog dbi services.
