Feed aggregator

[VLOG]: Oracle Cloud Infrastructure (OCI): New Features Aug 2018

Online Apps DBA - Tue, 2018-08-28 03:03

Do you know about the Oracle Cloud Infrastructure new features? Visit https://k21academy.com/oci14 to know more about how these features are useful to you. […]

The post [VLOG]: Oracle Cloud Infrastructure (OCI): New Features Aug 2018 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Deploy WebLogic docker images using Docker Toolbox and Virtual Box on Windows

Yann Neuhaus - Tue, 2018-08-28 02:16

I was interested in running Docker on my Windows machine and found Docker Toolbox for Windows, which configures itself with the already-installed VirtualBox at installation time.

Once installed, you can start the Docker QuickStart shell, preconfigured for a Docker command-line environment. At startup it boots a VM named default and is then ready to work with Docker.
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/

docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com

Start interactive shell
$
The “docker-machine env” command displays the machine environment that has been created:
$ docker-machine env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="C:\Users\PBR\.docker\machine\machines\default"
export DOCKER_MACHINE_NAME="default"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"
# Run this command to configure your shell:
# eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env)

Here is how to directly set the environment from it:

$ eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env)

Once the environment is set, it can be displayed as follows:


$ docker info
Containers: 9
Running: 0
Paused: 0
Stopped: 9
Images: 2
Server Version: 18.06.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 34
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.93-boot2docker
Operating System: Boot2Docker 18.06.0-ce (TCL 8.2.1); HEAD : 1f40eb2 - Thu Jul 19 18:48:09 UTC 2018
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.955GiB
Name: default
ID: AV7B:Z7GA:ZWLU:SNMY:ALYL:WTCT:2X2F:NHPY:2TRP:VK27:JY3L:PHJO
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pbrand
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

I will use a Docker image provided by Oracle on the Docker Store: the Oracle WebLogic 12.2.1.3 image. First, I need to sign in to the Docker Store:

docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pbrand
Password:
Login Succeeded

Then I can pull the Oracle WebLogic 12.2.1.3 image:


docker pull store/oracle/weblogic:12.2.1.3
12.2.1.3: Pulling from store/oracle/weblogic
9fd8609e6e4d: Pull complete
eac7b4a33e34: Pull complete
b6f7d13c859b: Pull complete
e0ca246b2272: Pull complete
7ba4d6bfba43: Pull complete
5e3b8c4731f0: Pull complete
97623ceb6339: Pull complete
Digest: sha256:4c7ce451c093329784a2808a55cd4fc4f1e93d8444b8492a24148d283936add9
Status: Downloaded newer image for store/oracle/weblogic:12.2.1.3

Display all the images now present in my Docker environment:


$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
store/oracle/weblogic 12.2.1.3 c6bb22ff0ea8 2 weeks ago 1.14GB

In the Docker repository, for the Oracle WebLogic 12.2.1.3 image, it is stated that the Administrator user should be provided through a domain.properties file, in the format below, passed on the command line used to start the Docker image.
The format of the domain.properties file is key=value pairs:

username=myadminusername
password=myadminpassword
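For example, the file can be created from the Docker Quickstart (MSYS) shell, where the C: drive appears as /c; the username and password below are placeholders, to be replaced with your own values:

$ mkdir -p /c/Users/PBR/docker_weblogic
$ cat > /c/Users/PBR/docker_weblogic/domain.properties <<EOF
username=weblogic
password=welcome1
EOF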

The command line suggested is the following:


$ docker run -d -p 7001:7001 -p 9002:9002 -v $PWD/domain.properties:/u01/oracle/properties/domain.properties store/oracle/weblogic:12.2.1.3

This run command is fine on Linux but doesn’t suit a Windows environment: on Windows, the created domain.properties file is on the C: drive, and the mapping can’t use environment variables like PWD.
In my case, the Docker run command to run is the following:

$ docker run -d --name wls12213 -p 7001:7001 -p 9002:9002 -v //c/Users/PBR/docker_weblogic/domain.properties:/u01/oracle/properties/domain.properties store/oracle/weblogic:12.2.1.3
670fc3bd2c8131b71ecc6a182181d1f03a4832a4c0e8d9d530e325e759afe151

With the -d (detached) option it displays only the container ID, no logs.

Checking the logs using the docker logs command:

$ docker logs wls12213

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

domain_name : [base_domain]
admin_listen_port : [7001]
domain_path : [/u01/oracle/user_projects/domains/base_domain]
production_mode : [prod]
admin name : [AdminServer]
administration_port_enabled : [true]
administration_port : [9002]

I noticed from the logs that the Administration channel is enabled and listening on HTTPS Port 9002. The URL to browse to the WebLogic Administration Console is then:
https://192.168.99.100:9002/console
[Screenshot: WebLogic Administration Console, Servers page]
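A quick way to check that the administration port answers, assuming the default self-signed certificate (hence the -k flag):

$ curl -k -I https://192.168.99.100:9002/console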


The post Deploy WebLogic docker images using Docker Toolbox and Virtual Box on Windows appeared first on Blog dbi services.

Tuning Between clause

Tom Kyte - Mon, 2018-08-27 18:46
I am trying to tune a query which contains a BETWEEN clause in Oracle 11g. I have a table employee(id number, join_dt date, end_dt date) which has 10 million records, and it has an index on (join_dt, end_dt). First run: dbms_stats.gather_table_stats(owne...
Categories: DBA Blogs

FOPEN to sub folders

Tom Kyte - Mon, 2018-08-27 18:46
Hello, I am trying to find a way to write a file into a sub folder of an Oracle Directory. I can write into the base of the Oracle directory but not into the sub folders. To keep it simple, this is what we have that currently works, after that i...
Categories: DBA Blogs

ORA-00600: internal error code, arguments: [156057], [], [], [], [], [], [], [], [], [], [],

Tom Kyte - Mon, 2018-08-27 18:46
Hi Tom, our database is Oracle 11.2.0.3. My customer hit the error "ORA-00600: internal error code, arguments: [156057], [], [], [], [], [], [], [], [], [], []," when he ran 'select * from UPL_SECTOR'. UPL_SECTOR is a table he created by himself...
Categories: DBA Blogs

Move historical data between databases

Tom Kyte - Mon, 2018-08-27 18:46
Hello Tom, I would like to see how you could optimize moving records (historical, by date) from a table in a production database to a table in another, historical database in an automatic way. Could you support me with Oracle Partitioning? It could be used ex...
Categories: DBA Blogs

Integrate Oracle E-Business Suite (EBS) R12 with OAM/OID/OUD 12c (12.2.1.3.0) High level Steps

Online Apps DBA - Mon, 2018-08-27 03:31

Do you want to learn how to integrate Oracle E-Business Suite Release R12 (12.2 & 12.1) with Oracle Identity & Access Management 12c Release 2 Patchset 3 (12.2.1.3.0)? Visit https://k21academy.com/ebsoam25 to get the answer. […]

The post Integrate Oracle E-Business Suite (EBS) R12 with OAM/OID/OUD 12c (12.2.1.3.0) High level Steps appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[BLOG] Oracle EBS Cloud Admin Tool | OCI – C | OCI | Cloud at Customer (Part -1)

Online Apps DBA - Sun, 2018-08-26 22:00

Do you know what the EBS Cloud Admin tool is? Visit https://k21academy.com/ebscloud25 to learn more about this centralized tool, which is used for managing multiple EBS environments on Oracle Cloud. […]

The post [BLOG] Oracle EBS Cloud Admin Tool | OCI – C | OCI | Cloud at Customer (Part -1) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partitioning -- 3c : Unique Index[es] on Partitioned Table

Hemant K Chitale - Sun, 2018-08-26 03:49
Let's explore what sort of Unique Indexes you can create on a Partitioned Table.

There are three types of partitioning for Indexes:

a. Global (Non-Partitioned)

b. Global Partitioned

c. Local Partitioned

Can a Unique Index be created using any of these types?

Let me start with another table, SALES_DATA_2  which has the same structure and Partition Key as SALES_DATA, except that it doesn't have the Primary Key definition that builds the Unique Index.

SQL> l
1 CREATE TABLE SALES_DATA_2
2 ( SALE_ID NUMBER,
3 SALE_DATE DATE,
4 INVOICE_NUMBER VARCHAR2(21),
5 CUSTOMER_ID NUMBER,
6 PRODUCT_ID NUMBER,
7 SALE_VALUE NUMBER
8 )
9 TABLESPACE HEMANT
10 PARTITION BY RANGE (SALE_DATE)
11 (PARTITION P_2018 VALUES LESS THAN (TO_DATE(' 2019-01-01','YYYY-MM-DD'))
12 TABLESPACE TBS_YEAR_2018 ,
13 PARTITION P_2019 VALUES LESS THAN (TO_DATE(' 2020-01-01','YYYY-MM-DD'))
14 TABLESPACE TBS_YEAR_2019 ,
15 PARTITION P_2020 VALUES LESS THAN (TO_DATE(' 2021-01-01','YYYY-MM-DD'))
16 TABLESPACE TBS_YEAR_2020 ,
17 PARTITION P_MAXVALUE VALUES LESS THAN (MAXVALUE)
18* TABLESPACE HEMANT )
SQL> /

Table created.

SQL>


Next, I try a Global (Non-Partitioned) Unique Index on SALE_ID.  Note that the "GLOBAL" Keyword is optional if it is Non-Partitioned.

SQL> create unique index sales_2_uk
2 on sales_data_2 (sale_id) global
3 tablespace hemant
4 /

Index created.

SQL>
SQL> select partitioned, status
2 from user_indexes
3 where index_name = upper('sales_2_uk')
4 /

PAR STATUS
--- --------
NO VALID

SQL> drop index sales_2_uk;

Index dropped.

SQL>


Effectively, this Global Index is the same as the Primary Key index on SALES_DATA that I built earlier.

Next, I try a Unique Global Partitioned Index on the same column.

SQL> create unique index sales_2_uk
2 on sales_data_2 (sale_id) global
3 partition by range (sale_id)
4 (partition p_1mill values less than (1000001) tablespace new_indexes,
5 partition p_2mill values less than (2000001) tablespace new_indexes,
6 partition p_3mill values less than (3000001) tablespace new_indexes,
7 partition p_maxval values less than (maxvalue) tablespace new_indexes)
8 /

Index created.

SQL>
SQL> select uniqueness, partitioned, status
2 from user_indexes
3 where index_name = upper('sales_2_uk')
4 /

UNIQUENES PAR STATUS
--------- --- --------
UNIQUE YES N/A

SQL>
SQL> l
1 select column_position, column_name
2 from user_part_key_columns
3 where name = upper('sales_2_uk')
4* order by column_position
SQL> /

COLUMN_POSITION COLUMN_NAME
--------------- ----------------
1 SALE_ID

SQL>
SQL> select partition_name, status
2 from user_ind_partitions
3 where index_name = upper('sales_2_uk')
4 order by partition_position
5 /

PARTITION_NAME STATUS
------------------------------ --------
P_1MILL USABLE
P_2MILL USABLE
P_3MILL USABLE
P_MAXVAL USABLE

SQL>


So, that is a valid Unique Global Partitioned Index.

The next attempt is a Unique Local Partitioned Index -- i.e. partitioned by the same key as the Table.

SQL> create unique index sales_2_uk
2 on sales_data_2 (sale_id) local
3 /
on sales_data_2 (sale_id) local
*
ERROR at line 2:
ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE
index


SQL> !oerr ora 14039
14039, 00000, "partitioning columns must form a subset of key columns of a UNIQUE index"
// *Cause: User attempted to create a UNIQUE partitioned index whose
// partitioning columns do not form a subset of its key columns
// which is illegal
// *Action: If the user, indeed, desired to create an index whose
// partitioning columns do not form a subset of its key columns,
// it must be created as non-UNIQUE; otherwise, correct the
// list of key and/or partitioning columns to ensure that the index'
// partitioning columns form a subset of its key columns

SQL>
SQL> create unique index sales_2_uk
2 on sales_data_2 (sale_id, sale_date) local
3 /

Index created.

SQL>
SQL> select uniqueness, partitioned, status
2 from user_indexes
3 where index_name = upper('sales_2_uk')
4 /

UNIQUENES PAR STATUS
--------- --- --------
UNIQUE YES N/A

SQL> select column_position, column_name
2 from user_part_key_columns
3 where name = upper('sales_2_uk')
4 order by column_position
5 /

COLUMN_POSITION COLUMN_NAME
--------------- ----------------
1 SALE_DATE

SQL> select column_position, column_name
2 from user_ind_columns
3 where index_name = upper('sales_2_uk')
4 order by column_position
5 /

COLUMN_POSITION COLUMN_NAME
--------------- ----------------
1 SALE_ID
2 SALE_DATE

SQL>
SQL> select partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name = upper('sales_2_uk')
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME STATUS
------------------------------ ------------------------------ --------
P_2018 TBS_YEAR_2018 USABLE
P_2019 TBS_YEAR_2019 USABLE
P_2020 TBS_YEAR_2020 USABLE
P_MAXVALUE HEMANT USABLE

SQL>


So, a Unique Local Partitioned Index must include the Table Partition Key as a subset of the Index Key columns.  This is something you must consider when Partitioning the Table and Index both.
(Also, note how USER_PART_KEY_COLUMNS doesn't show SALE_ID as a Partition Key.  This is in 11.2.0.4)



Categories: DBA Blogs

ODC Latin America Tour : It’s a Wrap!

Tim Hall - Sat, 2018-08-25 19:43

The ODC Latin America Tour (Northern Leg) is now over for me. I still can’t really believe I get invited to these tours and actually do them.

I’m simultaneously excited and terrified by these tours. I have to admit I hate the travelling, but I love meeting people around the world who share a mutual interest. Give me an opportunity to geek out and I’m all over it.

After the year I’ve had so far (see here) I was more nervous about this tour than any previous one. My nightmare seemed to be coming true when I needed medical attention on the plane in Quito, but after that glitch it went really well, and I’m glad I didn’t chicken out!

Thanks to all the individual user groups for inviting me and making me welcome in your country. Thanks to all the attendees for coming along and supporting the events. Meeting all of you is the best bit of doing this. Thanks as always to the Oracle ACE Program and Oracle Developer Champion program for making this possible for me, without ever expecting anything from me other than contributing to the community.

The posts that I put out related to this tour are listed here.

Cheers

Tim…

ODC Latin America Tour : It’s a Wrap! was first posted on August 26, 2018 at 1:43 am.

systemd: systemd-notify not working for non-root-users

Dietrich Schroff - Sat, 2018-08-25 12:53
Sometimes you have to write your own startup scripts. Recent Linux distributions require systemd scripts. This is not really a problem, except when you have to fulfill the following requirements:
  • Run the service as a non-root-user
  • The service has a startup phase and you want to start the next startup scripts after this startup phase
So the systemd script has to look like this:
# cat /lib/systemd/system/TEST.service
[Unit]
Description=MyTestSystemdConfiguration

[Service]
User=schroff
Type=notify
ExecStart=/home/schroff/bin/test.sh
NotifyAccess=all

The service startup script has to look like this:
$ cat /home/schroff/bin/test.sh
#!/bin/bash

echo Starting service
sleep 10
#Starting your services
echo Services started

/bin/systemd-notify --ready
echo Notify done

while true; do
  sleep 600
done
# keep this script running as long as your service runs

In the startup phase you will get the following:
schroff@zerberus:~/bin$ systemctl status TEST.service
● TEST.service - MyTestSystemdConfiguration
   Loaded: loaded (/lib/systemd/system/TEST.service; static; vendor preset: enabled)
   Active: activating (start) since 19:39:27 CET; 7s ago
 Main PID: 17390 (test.sh)
    Tasks: 2 (limit: 4915)
   Memory: 532.0K
      CPU: 7ms
   CGroup: /system.slice/TEST.service
           ├─17390 /bin/bash /home/schroff/bin/test.sh
           └─17395 sleep 10

19:39:27 zerberus systemd[1]: Starting MyTestSystemdConfiguration...
19:39:27 zerberus test.sh[17390]: Starting service

And after the startup phase, this is the output (if there were no errors):
schroff@zerberus:~/bin$ systemctl status TEST.service
● TEST.service - MyTestSystemdConfiguration
   Loaded: loaded (/lib/systemd/system/TEST.service; static; vendor preset: enabled)
   Active: active (running) since 19:38:38 CET; 3s ago
 Main PID: 17242 (test.sh)
    Tasks: 2 (limit: 4915)
   Memory: 932.0K
      CPU: 9ms
   CGroup: /system.slice/TEST.service
           ├─17242 /bin/bash /home/schroff/bin/test.sh
           └─17259 sleep 600

19:38:28 zerberus systemd[1]: Starting MyTestSystemdConfiguration...
19:38:28 zerberus test.sh[17242]: Starting service
19:38:38 zerberus test.sh[17242]: Services started
19:38:38 zerberus systemd[1]: Started MyTestSystemdConfiguration.
19:38:38 zerberus test.sh[17242]: Notify done

But sometimes you will get:
# systemctl restart TEST.service
Job for TEST.service failed because a timeout was exceeded.
See "systemctl  status TEST.service" and "journalctl  -xe" for details.19:44:46 zerberus systemd[1]: TEST.service: Start operation timed out. Terminating.
19:44:46 zerberus systemd[1]: Failed to start MyTestSystemdConfiguration.
-- Subject: Unit TEST.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit TEST.service has failed.
--
-- The result is failed.
19:44:46 zerberus systemd[1]: TEST.service: Unit entered failed state.
19:44:46 zerberus systemd[1]: TEST.service: Failed with result 'timeout'.
Note that this will happen after 600s (the default). You can change this with the TimeoutSec parameter (systemd configuration; see the manpage systemd.service).

But changing this parameter will not help, because the systemd status will never enter the state "active (running)".

The problem is that systemd-notify doesn't work here, since the notifying process lives too briefly (Redhat Bugzilla).


A workaround is described in that bug entry. Instead of

systemd-notify --ready

use

python -c "import systemd.daemon, time; systemd.daemon.notify('READY=1'); time.sleep(5)"
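Putting it together, a minimal variant of the test.sh above using this workaround; a sketch assuming the Python systemd bindings are installed (e.g. the python-systemd package):

#!/bin/bash
# test.sh - notify systemd via the python bindings instead of systemd-notify

echo Starting service
sleep 10
#Starting your services
echo Services started

# the notifying process stays alive for ~5s, long enough for systemd to read the message
python -c "import systemd.daemon, time; systemd.daemon.notify('READY=1'); time.sleep(5)"
echo Notify done

# keep this script running as long as your service runs
while true; do
  sleep 600
done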

How Backup Spfile to Pfile Saved My Arse

Michael Dinh - Sat, 2018-08-25 11:43

Typically, when I perform a backup review, I always suggest adding the following:

run {
allocate channel c1 device type disk;
SQL "alter database backup controlfile to trace as ''/tmp/ctl_@_trace.sql'' reuse resetlogs";
SQL "create pfile=''/tmp/init@.ora'' from spfile";
release channel c1;
}

Example:

RMAN> run {
allocate channel c1 device type disk;
2> 3> SQL "alter database backup controlfile to trace as ''/tmp/ctl_@_trace.sql'' reuse resetlogs";
4> SQL "create pfile=''/tmp/init@.ora'' from spfile";
5> release channel c1;
6> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=31 instance=hawk1 device type=DISK

sql statement: alter database backup controlfile to trace as ''/tmp/ctl_@_trace.sql'' reuse resetlogs

sql statement: create pfile=''/tmp/init@.ora'' from spfile

released channel: c1

RMAN> exit

[oracle@racnode-dc1-1 tmp]$ ll /tmp/init*
-rw-r--r-- 1 oracle dba 1978 Aug 25 21:11 /tmp/inithawk1.ora
[oracle@racnode-dc1-1 tmp]$ ll /tmp/ctl*
-rw-r--r-- 1 oracle dba 7318 Aug 25 21:11 /tmp/ctl_hawk1_trace.sql
[oracle@racnode-dc1-1 tmp]$ sysresv|tail -1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 tmp]$

When on RAC, do not create the pfile in its default destination, e.g. SQL “create pfile from spfile”;

ORIGINAL LOCATION:
====================================================================================================
$ asmcmd ls -l +DATA/SOXPA/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 17 10:00:00  Y    spfile.321.984394591
PARAMETERFILE  UNPROT  COARSE   AUG 17 10:00:00  N    spfilehawka.ora => +DATA/HAWKA/PARAMETERFILE/spfile.321.984394591

BIG OOPS: NOTHING THERE!
====================================================================================================
$ asmcmd ls -l +DATA/HAWKA/PARAMETERFILE

ERROR: Created wrong spfile.
====================================================================================================
$ asmcmd ls -l +DATA/WH02A/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 23 19:00:00  Y    spfile.380.984672023
PARAMETERFILE  UNPROT  COARSE   AUG 23 19:00:00  N    spfilewh02a.ora => +DATA/WH02A/PARAMETERFILE/spfile.380.984672023
PARAMETERFILE  UNPROT  COARSE   AUG 24 14:00:00  N    spfilehawka.ora => +DATA/WH01A/PARAMETERFILE/spfile.321.985008497

Create copy of pfile.
====================================================================================================
$ ll *.good
-rw-r--r--. 1 oracle oinstall 2207 Aug 24 14:57 inithawka4.ora.good

Check controlfile location for pfile.
====================================================================================================
$ cat inithawka4.ora.good 
*.control_files='+DATA/hawka/controlfile/current.272.984393927','+FRA/hawka/controlfile/current.307.984393927'#Restore Controlfile

Check database and create new spfile from pfile.
====================================================================================================
HOST04:(SYS@hawka4):PRIMARY> show parameter spfile;

NAME                           TYPE        VALUE
------------------------------ ----------- ----------------------------------------------------------------------------------------------------
spfile                         string      +DATA/hawka/parameterfile/spfilehawka.ora

HOST04:(SYS@hawka4):PRIMARY> show parameter control_file 

NAME                           TYPE        VALUE
------------------------------ ----------- ----------------------------------------------------------------------------------------------------
control_file_record_keep_time  integer     7
control_files                  string      +DATA/hawka/controlfile/current.272.984393927, +FRA/hawka/controlfile/current.307.984393927

HOST04:(SYS@hawka4):PRIMARY> create spfile='+DATA/HAWKA/PARAMETERFILE/spfilehawka4.ora' from pfile='/u01/app/oracle/db/11.2.0.4/dbs/inithawka4.ora.good';

File created.

HOST04:(SYS@hawka4):PRIMARY> exit

rmalias and mkalias for NEW SPFILE.
====================================================================================================
$ asmcmd ls -l +DATA/HAWKA/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka4.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077

$ asmcmd
ASMCMD> cd +DATA/HAWKA/PARAMETERFILE
ASMCMD> ls -lt
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka4.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077

ASMCMD> rmalias spfilehawka4.ora

ASMCMD> mkalias +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077 spfilehawka.ora

ASMCMD> ls -lt
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077
ASMCMD> exit

$ asmcmd ls -l +DATA/HAWKA/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077
$ 

Verify pfile can be created from spfile.
====================================================================================================
HOST04:(SYS@hawka4):PRIMARY> create pfile='/tmp/init@.ora' from spfile;

File created.

HOST04:(SYS@hawka4):PRIMARY> exit
====================================================================================================
oracle@p2dbccx04:hawka4:/home/oracle
$ ll /tmp/inithawka4.ora
-rw-r--r--. 1 oracle asmadmin 2207 Aug 24 15:24 /tmp/inithawka4.ora

[BLOG] Big Data Hadoop: Introduction to Apache Spark

Online Apps DBA - Sat, 2018-08-25 08:00

[BLOG] Big Data Hadoop: Introduction to Apache Spark. Visit https://k21academy.com/hadoop19 and find all the answers for: 1) What is Apache Spark 2) Its Features 3) Apache Spark Components & Architecture […]

The post [BLOG] Big Data Hadoop: Introduction to Apache Spark appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Spreadsheet Upload

Tom Kyte - Fri, 2018-08-24 17:26
Hi there, is there a way to upload spreadsheet data into our existing application? If possible please send your answers. Regards, Aravindan Prem
Categories: DBA Blogs

JET Line Chart - Step Handling

Tom Kyte - Fri, 2018-08-24 17:26
I have a problem generating the vertical lines in a line chart. E.g. take this query: with nums as (select rownum as rnum from dual connect by rownum < 300) select rnum/9 as x, sin(2*rnum/30) as y from nums. In the X-axis, my tick mar...
Categories: DBA Blogs

Deploying SQL Server on MiniShift / RedHat OpenShift

Yann Neuhaus - Fri, 2018-08-24 06:19

Currently we are beginning to see customers adopting containerization for SQL Server databases (mainly driven by CI/CD and DevOps trends). A lot of them are using RedHat OpenShift as their container management platform. For my part, I didn’t want to set up a complete OpenShift infrastructure in my lab just to test my SQL Server pod deployment on such an infrastructure. I installed MiniShift instead, which comes with a one-node OpenShift cluster and perfectly meets my requirements.


I’ll be running MiniShift on my Windows 10 laptop, using Hyper-V as the hypervisor. I used the following MiniShift configuration settings. Just remember that the SQL Server memory requirement is 2GB, so I increased the default value to 6GB to be more comfortable running my SQL Server pod. I also moved the MiniShift default folder location to another disk.

[dab@DBI-LT-DAB:#]> minishift config set vm-driver hyperv
[dab@DBI-LT-DAB:#]> minishift config set hyperv-virtual-switch Internet
[dab@DBI-LT-DAB:#]> minishift config set memory 6GB
$env:MINISHIFT_HOME="T:\minishift\"


Let’s start MiniShift:

[dab@DBI-LT-DAB:#]> minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'hyperv' is supported on this platform ... OK
-- Checking if Hyper-V driver is installed ... OK
-- Checking if Hyper-V driver is configured to use a Virtual Switch ...
   'Internet' ... OK
-- Checking if user is a member of the Hyper-V Administrators group ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'hyperv' hypervisor ...
-- Starting Minishift VM ................................................ OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 10% used OK
-- OpenShift cluster will be configured with ...
   Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........
Deleted existing OpenShift container
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.0.17 as the host IP
Using 192.168.0.17 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.

The server is accessible via web console at:

https://192.168.0.17:8443

…


I also needed to configure the docker and oc environments to get access to them from my PowerShell console.

& minishift docker-env | Invoke-Expression
& minishift oc-env | Invoke-Expression
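Note that minishift start normally leaves the shell logged in as the developer user; if not, you can log in explicitly. With MiniShift's default allow-all identity provider, any password is accepted (an assumption based on the default configuration):

[dab@DBI-LT-DAB:#]> oc login -u developer -p developer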


Configuration done. Let’s create my first project then:

[dab@DBI-LT-DAB:#]> oc new-project mssqlserver --description="mssqlserver deployment on Minishift" --display-name="mssqlserver project"
Now using project "mssqlserver" on server "https://192.168.0.17:8443".


Let’s get a list of existing projects:

[dab@DBI-LT-DAB:#]> oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':

  * mssqlserver - mssqlserver project
    myproject - My Project

Using project "mssqlserver" on server "https://192.168.0.17:8443".


I will need to use an OpenShift private registry for my tests:

[dab@DBI-LT-DAB:#]> minishift openshift registry
172.30.1.1:5000


My OpenShift registry contains the following images by default:

[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
openshift/origin-web-console       v3.9.0              aa12a2fc57f7        8 weeks ago         495MB
openshift/origin-docker-registry   v3.9.0              8e6f7a854d66        8 weeks ago         465MB
openshift/origin-haproxy-router    v3.9.0              448cc9658480        8 weeks ago         1.28GB
openshift/origin-deployer          v3.9.0              39ee47797d2e        8 weeks ago         1.26GB
openshift/origin                   v3.9.0              4ba9c8c8f42a        8 weeks ago         1.26GB
openshift/origin-pod               v3.9.0              6e08365fbba9        8 weeks ago         223MB


For my tests, I picked my custom dbi services image for SQL Server, used for our DMK maintenance tool. The next steps consisted of building, tagging and uploading the corresponding image to my OpenShift integrated registry. Image tagging was done with the [registry_ip]:[port]/[project]/[image]:[tag] pattern:

[dab@DBI-LT-DAB:#]> docker tag dbi/dbi_linux_sql2017:2017-CU4 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
172.30.1.1:5000/mssqlserver/dbi_linux_sql2017   2017-CU4            d37ecabe87e8        22 minutes ago      1.42GB
dbi/dbi_linux_sql2017                     2017-CU4            d37ecabe87e8        22 minutes ago      1.42GB

[dab@DBI-LT-DAB:#]> docker push 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
The push refers to a repository [172.30.1.1:5000/mssqlserver/dbi_linux_sql2017]
2e3c7826613e: Pushed
66ccaff0cef8: Pushed
…


My custom image is now available as an image stream on OpenShift.
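As a quick check, the image stream can also be listed from the CLI:

[dab@DBI-LT-DAB:#]> oc get is -n mssqlserver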

Let’s first try to deploy my SQL Server pod from the mssqlserver project through the web console. The task is easy: you just have to choose deployment from an image and then search for the corresponding image available as an image stream in your OpenShift integrated registry. In my case, deployment was OK after configuring some environment variable values.

[Screenshots: deploying the mssql pod from the image stream and configuring the deployment environment variables]

From the web console you have access to the pod logs. In my case, they correspond to the SQL Server error log during the startup phase. My custom image includes creating a custom dbi_tools database as well as installing the tSQLt framework.

[Screenshot: SQL Server startup logs in the web console]

The final step consists of exposing the SQL Server pod to the outside world (it is not exposed by default):

[dab@DBI-LT-DAB:#]> oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
dbi-linux-sql2017-1-9vvfw   1/1       Running   0          1h

C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc port-forward dbi-linux-sql2017-1-9vvfw 1433:1433

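oc port-forward is handy for a quick test from the laptop but only lives as long as the command runs. For longer-lived access, a NodePort service could expose port 1433; a minimal sketch, assuming the deployment config is named dbi-linux-sql2017 as above (the service name mssql-nodeport is chosen here, not from the post):

[dab@DBI-LT-DAB:#]> oc expose dc dbi-linux-sql2017 --port=1433 --type=NodePort --name=mssql-nodeport
[dab@DBI-LT-DAB:#]> oc get svc mssql-nodeport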

Let’s try a connection from mssql-cli tool:

[dab@DBI-LT-DAB:$]> mssql-cli -S 127.0.0.1 -U sa -P Password1
Version: 0.15.0
Mail: sqlcli@microsoft.com
Home: http://github.com/dbcli/mssql-cli
master> select *
....... from sys.dm_os_host_info;
+-----------------+---------------------+----------------+---------------------------+------------+-----------------------+
| host_platform   | host_distribution   | host_release   | host_service_pack_level   | host_sku   | os_language_version   |
|-----------------+---------------------+----------------+---------------------------+------------+-----------------------|
| Linux           | Ubuntu              | 16.04          |                           | NULL       | 0                     |
+-----------------+---------------------+----------------+---------------------------+------------+-----------------------+
(1 row affected)
Time: 0.405s
master>


Done!

This is my first deployment, but we can do better here. Indeed, in my previous scenario I didn’t set up a persistent volume to host my database files, nor did I use OpenShift secrets to protect my credential information. Let’s do it!

Let’s first create a persistent volume. The developer user doesn’t have permission to manage volumes on the cluster, so let’s switch to the system user:

[dab@DBI-LT-DAB:#]> oc login -u system:admin
Logged into "https://192.168.0.17:8443" as "system:admin" using existing credentials.


OpenShift runs on top of K8s, which is object-oriented. Objects can be deployed from deployment files as well, and this is definitely my favorite path currently, for many reasons. I configured both PersistentVolume and PersistentVolumeClaim objects in a deployment file as follows. Note that the hostPath value corresponds to a local path in the MiniShift cluster I set up in a previous step.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-sql
  labels:
    name: pv-data-sql
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  hostPath:
    path: /mnt/sda1/var/lib/minishift/openshift.local.pv/pv0069
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-data-sql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: slow
  selector:
    matchLabels:
      name: pv-data-sql


Let’s deploy both my persistent volume and persistent volume claim …

[dab@DBI-LT-DAB:#]> oc create -f .\docker_openshift_storage.yaml
persistentvolume "pv-data-sql" created
persistentvolumeclaim "pv-claim-data-sql" created


… and get the status of my persistent volume deployment:

[dab@DBI-LT-DAB:#]> oc get pvc
NAME                STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim-data-sql   Bound     pv-data-sql   5Gi        RWO            hostpath       1m
[dab@DBI-LT-DAB:#]> oc get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON    AGE
pv-data-sql   5Gi        RWO            Retain           Bound       mssqlserver/pv-claim-data-sql   hostpath                 1m
…


The “Bound” status indicates that everything seems to be OK.
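To double-check the binding (capacity, access mode and the bound volume), oc describe gives the details:

[dab@DBI-LT-DAB:#]> oc describe pvc pv-claim-data-sql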

Let’s continue and add an OpenShift secret from my deployment file:

apiVersion: v1
kind: Secret
metadata:
  name: mssql-env
stringData:
  MSSQL_SA_PASSWORD: Password1

[dab@DBI-LT-DAB:#]> oc create -f .\docker_openshift_mssql_secret.yaml
secret "mssql-env" created
C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc get secret
NAME                       TYPE                                  DATA      AGE
…
mssql-env                  Opaque                                1         1h


At this step there are different ways to deploy a pod, so I finally used a deployment configuration file, as follows:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: mssql
  name: dbi-linux-sql2017
  namespace: mssqlserver
spec:
  replicas: 1
  selector:
    app: mssql
    deploymentconfig: dbi-linux-sql2017
  strategy:
    type: Rolling
  template:
    metadata:
      labels:
        app: mssql
        deploymentconfig: dbi-linux-sql2017
    spec:
      containers:
        - env:
            - name: ACCEPT_EULA
              value: 'Y'
            - name: DMK
              value: 'Y'
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: MSSQL_SA_PASSWORD
                  name: mssql-env
          envFrom:
            - secretRef:
                name: mssql-env
          image: 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
          imagePullPolicy: Always
          name: dbi-linux-sql2017
          ports:
            - containerPort: 1433
              protocol: TCP
          volumeMounts:
            - mountPath: /var/opt/mssql/
              name: volume-x1d5y
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
        - name: volume-x1d5y
          persistentVolumeClaim:
            claimName: pv-claim-data-sql
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - dbi-linux-sql2017
        from:
          kind: ImageStreamTag
          name: 'dbi_linux_sql2017:2017-CU4'
          namespace: mssqlserver
      type: ImageChange


… To deploy my SQL Server pod:

[dab@DBI-LT-DAB:#]> oc create -f .\deployment-config-mssql.yml
deploymentconfig "dbi-linux-sql2017" created


Once again, I exposed the corresponding service port to connect from my laptop, and the connection to my SQL Server pod was successful again. Note that the pod is different from the first time: updating my configuration led K8s to spin up another container in this case.

[dab@DBI-LT-DAB:#]> oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
dbi-linux-sql2017-1-9ddfbx   1/1       Running   0          1h

C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc port-forward dbi-linux-sql2017-1-9ddfbx 1433:1433
Forwarding from 127.0.0.1:1433 -> 1433


Finally, let’s take a look at the MiniShift cluster storage layer to get a picture of my SQL Server database files (data, log and secrets) under /var/opt/mssql:

[dab@DBI-LT-DAB:#]> minishift ssh

[root@minishift ~]# ll /mnt/sda1/var/lib/minishift/openshift.local.pv/pv0069/
total 0
drwxr-xr-x. 2 root root 202 Aug 23 15:07 data
drwxr-xr-x. 2 root root 232 Aug 23 15:18 log
drwxr-xr-x. 2 root root  25 Aug 23 15:06 secrets


I went quickly over some topics in this write-up that probably deserve more detail, and there are other ones to investigate. I will get other opportunities to share my thoughts on them in the context of SQL Server database scenarios. Stay tuned!


The post Deploying SQL Server on MiniShift / RedHat OpenShift appeared first on Blog dbi services.

Error Logging

Jonathan Lewis - Fri, 2018-08-24 05:19

Error logging is a topic that I’ve mentioned a couple of times in the past, most recently as a follow-up in a discussion of the choices for copying a large volume of data from one table to another, but originally in an addendum about a little surprise you may get when you use extended strings (max_string_size = EXTENDED).

If you use the default call to dbms_errlog.create_error_log() to create an error logging table then Oracle will create a table with a few columns of its own plus every column (name) that you have in your original table – but it will create your columns as varchar2(4000), or nvarchar2(2000), or raw(2000) – unless you’ve set the max_string_size to extended. Here’s a simple demo script with results from two different systems, one with the default setting, the other with the extended setting (note that there’s a little inconsistency in handling raw() columns).


rem
rem     Script:         log_errors_min.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:
rem

create table t1 (
        v1      varchar2(10),
        n1      number(2,0),
        d1      date,
        nv1     nvarchar2(10),
        r1      raw(10)
);


execute dbms_errlog.create_error_log('t1')

desc err$_t1


max_string_size = STANDARD
--------------------------
 Name			       Null?	Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$			NUMBER
 ORA_ERR_MESG$				VARCHAR2(2000)
 ORA_ERR_ROWID$ 			ROWID
 ORA_ERR_OPTYP$ 			VARCHAR2(2)
 ORA_ERR_TAG$				VARCHAR2(2000)
 V1					VARCHAR2(4000)
 N1					VARCHAR2(4000)
 D1					VARCHAR2(4000)
 NV1					NVARCHAR2(2000)
 R1					RAW(2000)


max_string_size = EXTENDED
--------------------------
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$                        NUMBER
 ORA_ERR_MESG$                          VARCHAR2(2000)
 ORA_ERR_ROWID$                         ROWID
 ORA_ERR_OPTYP$                         VARCHAR2(2)
 ORA_ERR_TAG$                           VARCHAR2(2000)
 V1                                     VARCHAR2(32767)
 N1                                     VARCHAR2(32767)
 D1                                     VARCHAR2(32767)
 NV1                                    NVARCHAR2(16383)
 R1                                     RAW(32767)

Every single “original” column that appears in this table will be a LOB, with an inline LOB locator of 30 or more bytes. (At least, that’s the 12.1.0.2 implementation, I haven’t checked for 12.2 or 18.3).

If this is going to be a problem (e.g. you have a table defined with 500 columns but only use 120 of them) you can create a minimalist error logging table. Provided you create it with the ora_err% columns suitably defined you can add only those columns you’re really interested in (or feel threatened by), and you don’t have to declare them at extreme lengths. e.g.


create table err$_special (
        ora_err_number$         number,
        ora_err_mesg$           varchar2(2000),
        ora_err_rowid$          rowid,
        ora_err_optyp$          varchar2(2),
        ora_err_tag$            varchar2(2000),
        n1                      varchar2(128)
)
;

insert into t1 values(1,'abc','02-jan-1984',sys_op_c2c('abc'),hextoraw('0xFF')) 
log errors into err$_special
reject limit unlimited
;

execute print_table('select * from err$_special')


ORA_ERR_NUMBER$               : 1722
ORA_ERR_MESG$                 : ORA-01722: invalid number

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
N1                            : abc


If you try to create an error logging table that doesn’t include the 5 critical columns you’ll see Oracle error ORA-38900: missing mandatory column “ORA_ERR_{something}” of error log table “{your logging table name}” when you try to log errors into it, and the 5 critical columns have to be the first 5 columns (in any order) in the table or you’ll get Oracle error ORA-38901: column “ORA_ERR_{something}$” of table “{your logging table name}” when you try to log errors into it.

Purpose of Mockups and different elements

Nilesh Jethwa - Thu, 2018-08-23 23:07

The Elements and Purpose of Mockups So you’re done with the tedious process of defining and creating the structure of your website in that design part called wireframing. The stakeholders are quite impressed. Now, you’re on to the next step … Continue reading →

Hat Tip To: MockupTiger Wireframes

RMAN: Synchronize standby database using production archivelog backupset

Michael Dinh - Thu, 2018-08-23 22:07

If you have not read RMAN: Synchronize standby database using production archivelog, then please do so.

# Primary archivelog is on local (vs shared) storage.
# Primary RMAN archivelog backupset resides on a folder shared with the standby.
# Full backup is performed once per day and includes archivelogs with format arch_DB02_`date '+%Y%m%d'`
# MANAGED REAL TIME APPLY is running.
PRI: /shared/prod/DB02/rman/
SBY: /shared/backup/arch/DB02a/

#!/bin/sh -e
# Michael Dinh: Aug 21, 2018
# RMAN sync standby using production archivelog backupset
#
. ~/working/dinh/dinh.env
. ~/working/dinh/DB02a.env
sysresv|tail -1
set -x
# List production archivelog backupset for current day
ls -l /shared/prod/DB02/rman/arch_DB02_`date '+%Y%m%d'`*
# Copy production archivelog backupset for current day to standby
cp -ufv /shared/prod/DB02/rman/arch_DB02_`date '+%Y%m%d'`* /shared/backup/arch/DB02a
rman msglog /tmp/rman_sync_standby.log > /dev/null << EOF
set echo on;
connect target;
show all;
# Catalog production archivelog backupset from standby
catalog start with '/shared/backup/arch/DB02a' noprompt;
# Restore production archivelog backupset to standby
restore archivelog from time 'trunc(sysdate)-1';
exit
EOF
sleep 15m
# Verify Media Recovery Log from alert log
tail -20 $ORACLE_BASE/diag/rdbms/$ORACLE_UNQNAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log
exit
$ crontab -l
00 12 * * * /home/oracle/working/dinh/rman_sync_standby.sh > /tmp/rman_sync_standby.sh.out 2>&1

$ ll /tmp/rman*
-rw-r--r--. 1 oracle oinstall 7225 Aug 22 12:01 /tmp/rman_sync_standby.log
-rw-r--r--. 1 oracle oinstall 4318 Aug 22 12:16 /tmp/rman_sync_standby.sh.out

+ tail -20 /u01/app/oracle/diag/rdbms/DB02a/DB02a2/trace/alert_DB02a2.log
ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  
ORA-1153 signalled during: ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  ...
Tue Aug 21 15:41:27 2018
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Aug 21 15:54:30 2018
db_recovery_file_dest_size of 204800 MB is 21.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 22 12:01:21 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31636.1275.984830461
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31637.1276.984830461
Wed Aug 22 12:01:46 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31638.1278.984830487
Wed Aug 22 12:01:58 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31639.1277.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31640.1279.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31641.1280.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31642.1281.984830489
Media Recovery Waiting for thread 1 sequence 31643
+ exit

# Manual recovery: WAIT_FOR_LOG and BLOCK#=0 and never increment.
SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where 1=1
  4  and status not in ('CLOSING','IDLE','CONNECTED')
  5  order by status desc, thread#, sequence#
  6*

                        CLIENT                                               DELAY
     PID  INST  THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
-------- ----- -------- ---------- -------- ------------ --------- -------- ------
   94734     2        1 N/A        MRP0     WAIT_FOR_LOG     31643        0      0

SQL>
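When MRP sits in WAIT_FOR_LOG, a quick sanity check on the standby is to look for a sequence gap; a minimal sketch, assuming a local sysdba connection (v$archive_gap returns no rows when there is no gap):

$ sqlplus -s "/ as sysdba" <<EOF
-- any row returned here means the standby is missing that sequence range
select thread#, low_sequence#, high_sequence# from v\$archive_gap;
exit
EOF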
$ cat /tmp/rman_sync_standby.log 

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Aug 22 12:00:58 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> 
echo set on

RMAN> connect target;
connected to target database: DB02 (DBID=1816794213, not open)

RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name DB02A are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 7;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/u01/app/oracle/product/11.2.0/dbhome_1/lib/libddobk.so, ENV=(STORAGE_UNIT=dd-u99,BACKUP_HOST=dd860.ccx.carecentrix.com,ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/db/11.2.0.4/dbs/snapcf_DB02a2.f'; # default

RMAN> catalog start with '/shared/backup/arch/DB02a' noprompt;
searching for all files that match the pattern /shared/backup/arch/DB02a

List of Files Unknown to the Database
=====================================
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1

RMAN> restore archivelog from time 'trunc(sysdate)-1';
Starting restore at 22-AUG-2018 12:01:00
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=285 instance=DB02a2 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=3 instance=DB02a2 device type=DISK

archived log for thread 1 with sequence 31630 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31630.496.984755257
archived log for thread 1 with sequence 31631 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31631.497.984755273
archived log for thread 1 with sequence 31632 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31632.498.984755273
archived log for thread 1 with sequence 31633 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31633.499.984755275
archived log for thread 1 with sequence 31634 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31634.500.984755275
archived log for thread 1 with sequence 31635 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31635.501.984755275
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=31636
channel ORA_DISK_1: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31637
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
channel ORA_DISK_1: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1 tag=TAG20180822T110121
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=31638
channel ORA_DISK_1: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1 tag=TAG20180822T110121
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31639
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31640
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31641
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31642
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1 tag=TAG20180822T110121
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:17
Finished restore at 22-AUG-2018 12:01:44

RMAN> exit
$ cat /tmp/rman_sync_standby.sh.out 
ORACLE_SID = [oracle] ? The Oracle base has been set to /u01/app/oracle
Oracle Instance alive for sid "DB02a2"
CURRENT_INSTANCE=DB02a2
ORACLE_UNQNAME=DB02a
OTHER_INSTANCE=DB02a3,DB02a4
ORACLE_SID=DB02a2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/db/11.2.0.4
NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
Oracle Instance alive for sid "DB02a2"
++ date +%Y%m%d
+ ls -l /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1
-rw-r-----. 1 oracle dba 1900124160 Aug 22 11:02 /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1
-rw-r-----. 1 oracle dba 1938098176 Aug 22 11:02 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1
-rw-r-----. 1 oracle dba 1370842112 Aug 22 11:01 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1
-rw-r-----. 1 oracle dba   11870720 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1
-rw-r-----. 1 oracle dba       3584 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1
-rw-r-----. 1 oracle dba       3072 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1
++ date +%Y%m%d
+ cp -ufv /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1 /shared/backup/arch/DB02a
‘/shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1’
+ rman msglog /tmp/rman_sync_standby.log
+ sleep 15m
+ tail -20 /u01/app/oracle/diag/rdbms/DB02a/DB02a2/trace/alert_DB02a2.log
ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  
ORA-1153 signalled during: ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  ...
Tue Aug 21 15:41:27 2018
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Aug 21 15:54:30 2018
db_recovery_file_dest_size of 204800 MB is 21.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 22 12:01:21 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31636.1275.984830461
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31637.1276.984830461
Wed Aug 22 12:01:46 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31638.1278.984830487
Wed Aug 22 12:01:58 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31639.1277.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31640.1279.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31641.1280.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31642.1281.984830489
Media Recovery Waiting for thread 1 sequence 31643
+ exit
