Syed Jaffar

The topics discussed on this blog are my own findings and views, and do not necessarily match those of others. I strongly recommend that you test any advice given on this blog before implementing it.

Oracle 18c Autonomous Health Framework (AHF) - Part 1

Thu, 2018-05-31 10:03
Recently I presented at the eProseed AnualTech conference in Luxembourg, where I was asked to cover a topic related to Oracle 18c.

Obviously, I didn't want to repeat what has already been said about the Autonomous Database; many experts have covered it in depth. Instead, I decided to pick a topic that genuinely helps DBAs, administrators, and ultimately the organization. I was fascinated by the Oracle 18c Autonomous Health Framework concepts and decided to present on this topic.

Working in a complex and huge Oracle environment, I know where most of our energy and time is spent as DBAs and system administrators: we constantly focus on avoiding run-time availability and performance issues. In a complex, critical environment, every other day brings a new challenge, and you must stay on your toes throughout business hours.

As DBAs, we must ensure database availability and, at the same time, ensure the database delivers consistent performance 24x7. If you get stuck with latch contention, an instance crash, a node crash, or someone changing binary permissions/ownership, you can spend hours and hours finding and fixing the root cause.

With the 18c Autonomous Health Framework, it becomes much easier to avoid, and automatically fix, run-time availability and performance issues. Eight components make up the framework. Some of them were already present in 12.2, but in 18c they are all configured automatically and run 24x7 in daemon mode. Three of the components also have machine-learning capabilities to fix issues automatically.
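As a quick taste, here is a hedged sketch of how you might check that some of these daemons are running on a Grid Infrastructure node; the exact commands available depend on your version and installation:

$ chactl status                 # Cluster Health Advisor daemon status
$ tfactl print status           # Trace File Analyzer daemon status
$ oclumon version               # Cluster Health Monitor (ships with GI)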

I will start discussing the eight components in the next posts of this series. Stay tuned for more on this topic.



utlrp weird behavior with INVALID objects in an Oracle EBS database

Wed, 2018-05-30 07:05
In one of my recent database migration and upgrade activities, I migrated an Oracle EBS database from 11.2.0.4 to a new DB host and upgraded it to 12.1.0.2. After the migration and post-upgrade steps, utlrp.sql was run to recompile roughly 100k invalid objects in the database.

Weirdly, during the utlrp execution, when the INVALID object count dropped to around 200, it started to increase again, ultimately reaching the 100k mark once more. utlrp was effectively stuck in a loop and never exited.

As a workaround, I manually compiled all the invalid objects and then ran utlrp, which completed successfully. I have yet to work out the theory behind this behavior.
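For reference, a minimal sketch of the manual approach: generate and run ALTER ... COMPILE statements for the invalid objects (the object types listed here are illustrative; package bodies, for example, need COMPILE BODY):

SQL> SELECT 'ALTER ' || object_type || ' ' || owner || '.' || object_name || ' COMPILE;'
     FROM dba_objects
     WHERE status = 'INVALID'
       AND object_type IN ('PACKAGE', 'PROCEDURE', 'FUNCTION', 'VIEW', 'TRIGGER');

Spool the output to a script, execute it, and then re-check the remaining count with SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';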

Stay tuned for more updates on this post.

Traditional Export/Import Fail With ORA-01031 in DB Vault Environment Starting Release 11.2.0.3

Wed, 2018-05-30 06:27
Oracle Database Vault was configured on an 11.2.0.3 RAC database, and the appropriate permissions were granted to allow regular duties such as Data Pump export/import.
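To confirm that Database Vault is actually enabled on an instance, a quick check against v$option works across releases:

SQL> SELECT value FROM v$option WHERE parameter = 'Oracle Database Vault';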

However, when a classic/traditional import (of a dump file from a v9 database) was performed, the import on the 11.2.0.3 database failed with the following errors:


Import: Release 11.2.0.3.0 - Production on Tue May 29 16:56:23 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


IMP-00058: ORACLE error 1031 encountered
ORA-01031: insufficient privileges
Username:

Data Pump jobs were executing without any issues; the only problem was with traditional export/import. According to the Oracle notes below, this is expected behavior in 11.2.0.3 with DB Vault in place.


Classic Export/Import Fail With ORA-01031 in Database Vault Environment Starting Release 11.2.0.3 (Doc ID 1419687.1)
Bug 13615338 - EXP/IMP disabled in Database Vault environment (Doc ID 13615338.8)

According to the notes,

Export and Import tools are no longer recommended to be used, especially in Database Vault environments.
When it is not possible to use data pump (for instance when the export has been created from a pre-10g database),
the patch 13615338 must be installed to allow the usage of classic export/import utilities.



Another option would be to upgrade the database to a release higher than 11.2.0.3.


I will apply the patch and update this post on whether it fixes the issue.






ORA-12545 TNS: Host or Object Does Not Exist, while using SCAN

Sat, 2018-03-31 11:30
As part of an SSL configuration between the WebLogic tier and the DB tier, after applying the sqlnet.ora settings and restarting the SCAN listeners, we encountered an ORA-12545 error on a 3-node RAC database when connecting through the SCAN.

We verified the listener status and found all the services in READY state. Connecting through the VIP worked perfectly, so the only issue was connecting through the SCAN. We were fortunate to find a solution after a very quick search over the net: MOS Doc 364855.1 explains the issue and provides the solution.

As part of the original configuration, LOCAL_LISTENER had been set to a VIP hostname. After modifying the LOCAL_LISTENER string to point to the VIP IP address instead, we were able to connect through the SCAN.
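For illustration, a hedged sketch of the change on one instance (the IP address, port, and SID are hypothetical placeholders for your own VIP values):

SQL> ALTER SYSTEM SET local_listener =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.101)(PORT=1521))' SCOPE=BOTH SID='ORCL1';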

Logically, the VIP hostname should have worked, but we found the same issue reported in many references.



References:
Client Connection to RAC Intermittently Fails-ORA-12545 TNS: Host or Object Does not Exist (Doc ID 364855.1)
https://ardentperf.com/2007/04/02/local_listener-and-ora-12545/
http://spotdba.blogspot.com/2012/08/error-ora-12545-while-trying-to-connect.html

Backup2Cloud - a backup/restore automation utility

Wed, 2018-03-28 10:38
For cost optimization and flexibility, it's a good idea to consider the Oracle Database Backup Cloud Service. It can complement an existing backup strategy by providing an off-site storage location in the cloud.

The bck2cloud utility is easy to install and use, and it secures the data. It simplifies RMAN backup/restore operations when using the Oracle Database Backup Cloud Service. Once you subscribe to the service, you simply download the module from OTN and install it on the database host. Once the module is successfully installed, you configure the RMAN settings. With the utility, all data transfers occur strictly between the database instance and the Oracle backup cloud account controlled by the customer.
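For context, a hedged sketch of the underlying RMAN channel configuration the cloud backup module relies on; the library and wallet paths below are assumptions, so substitute the locations from your own module install:

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS
      'SBT_LIBRARY=/u01/app/oracle/OPC/lib/libopc.so, ENV=(OPC_PFILE=/u01/app/oracle/OPC/config/opc_pfile.ora)';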

The following operations are supported by the utility:

  • RMAN Operations
  • Cloud Storage operations
  • Password Encryption operation
  • Install Oracle Public Cloud (OPC) Module operation



Bck2Cloud simplifies and performs many RMAN operations:
  • Backup : all levels/types of backups (full, incremental, spfile, controlfile, etc.)
  • BckReport : provides information about all backups
  • BValidate : validates backups
  • Crosscheck
  • DelBck
  • ListBck
  • ListInc
  • Recovery
  • Restore
  • RPreview
  • Rvalidate


For more details and examples, read the Oracle White paper mentioned below:


References:
https://cloud.oracle.com/opc/paas/ebooks/Bck2Cloud.pdf

Benefits of the Oracle Backup to Cloud service

Sun, 2018-03-25 08:13
Organizations treat their data as an asset. In this regard, their top priority is data availability and protection from all sorts of disasters. We have seen more and more organizations investing huge amounts of money to achieve continuous data availability and to safeguard the data in every respect. Keeping data backups off-site has also become one of the critical factors of a data protection policy. Traditionally, many organizations follow the model below (image taken from an Oracle white paper).
 




Oracle Database Backup Cloud Service offers very flexible and strong protection from disasters:
  1. Backups are always available over the internet for immediate recovery needs
  2. Data is transparently replicated across multiple nodes in the same geographic region
  3. Provides end-to-end security
  4. Supports all Oracle editions, DB versions >= 10.2.0.5, and compression
  5. Supports all RMAN operations
  6. Low-cost solution
  7. Leverages MAA configuration and best practices
  8. Guaranteed backup and recovery operations
High-level OCB architecture:




On-premises end-to-end flow example:



 Cloud end-to-end flow example:





Conclusion:

The Oracle Database Backup Cloud Service is a low-cost solution, built on MAA configuration, for protecting Oracle databases.

I will discuss the 1-Button automated backup tool in the next post.

White papers:

Oracle database backup cloud service
Best Practices for on-premise and cloud database
https://cloud.oracle.com/opc/paas/ebooks/Bck2Cloud.pdf

Backup2Cloud
1-Button Cloud backup/restore automation utility
https://cloud.oracle.com/opc/paas/ebooks/Bck2Cloud.pdf

Tricky Oracle data guard situation

Sat, 2018-03-24 14:43
I had an interesting situation configuring a Data Guard setup for a 7 TB Oracle 12c database at one of our clients. After long rounds of discussion, and considering the constraints and limitations, we agreed to use the traditional Data Guard setup: back up the database, replicate the backups to the DR site, create a standby controlfile, restore the controlfile on the standby host, restore and recover the database, and configure synchronization.

The restore failed due to the unavailability of the datafile locations (the same as production) on the DR host: the application tablespace datafiles on PROD did not use the OMF format and were stored under ASM with a .dbf extension. Because of this, only the default tablespace datafiles (which did use OMF format in ASM) were restored. A PROD-like directory structure was created (using ALTER DISKGROUP ADD DIRECTORY) and all the datafiles were then restored successfully. However, the PROD-like directory (+DATA/PROD/DATAFILE) contained only soft links; the datafiles were actually created under the standby database SID directory. The entire process took more than 24 hours, so a subsequent incremental backup was taken to fill the gap and bring the data in sync using the roll-forward procedure.

As part of the roll-forward, a new standby controlfile was created on PROD and restored on the DR database. When the CATALOG START WITH command was issued, only a few datafiles (system-related) could be cataloged; the soft-linked datafiles could not. This blocked the roll-forward recovery. We tried every possible option, and nothing materialized.

We then cleaned up the DR database datafiles for a fresh restore, set the db_file_name_convert parameter, and started the traditional DR configuration procedure over. This time, all the datafiles were restored under the standby SID directory, and soft links for the non-OMF datafiles were created in the same directory, unlike in the first attempt. Once the restore was done, we successfully performed the roll-forward procedure to bring the PROD and DR databases in sync.
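For illustration, a hedged sketch of the parameter setting on the standby (the diskgroup paths are hypothetical; note that this is a static parameter, so it must go into the spfile and requires an instance restart):

SQL> ALTER SYSTEM SET db_file_name_convert = '+DATA/PROD/', '+DATA/PRODDR/' SCOPE=SPFILE;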


Conclusion:
If you have non-OMF files with a .dbf extension under ASM, make sure you use db_file_name_convert to avoid this mess.

Well, if you have an alternative, you are most welcome to suggest it.





Exadata X7 installation: some Whitney+ issues

Sun, 2018-03-18 05:45
One of our DB teams in Europe highlighted the following issues while installing and configuring VMs on an Exadata X7.



"Today we started X7-2 Exadata installation. But we faced some issues. First problem is, when installing the cluster, resources are being up then suddenly, being down and there are some internal bugs for this problem:


Bug 27275655 - Running tcpdump on bondeth0 kills the Whitney LOM interface
Bug 27523644 - EXADATA X7-2 - CLIENT NETWORK STOPPED WORKING AFTER RUNNING STEP 11
Bug 27195117 - Finisar FTLX8574D3BCV-SU transceiver does not link up on X7-2 LOM
Bug 27130090: X7-2 18.1.1.0.0 - ARP TABLE OF VM NOT DYNAMICALLY POPULATED
 

These internal bugs required re-imaging the whole stack with image 18.1.4.0.0. Also, there is another problem with PXE (used for re-imaging)."

Watch out for those bugs when you are installing an Exadata X7.

  

ORA-01033 when Connecting to the DB Via Scan Listener

Sun, 2018-03-18 04:51
I had a very interesting and tricky scenario a few days ago, something I had never encountered in my DBA career. Hence, I thought of sharing the details of the story here today.

Challenge

During mid-day, an application team reported that they were suddenly getting an ORA-01033 when connecting to a 3-node RAC database (11.2.0.3). Quick basic validation revealed that the issue occurred only when connecting through the SCAN IPs; the VIP and direct (physical IP) connections had no issues. So it was clear the issue was with the SCAN IPs. We verified all the configuration settings, the network, and the firewall to ensure there were no issues accessing the SCAN IPs. To our surprise, everything was perfectly okay, which really puzzled us. We also suspected the Data Guard configuration of this database, but that wasn't the case either.

Findings


After a quick search over the internet, we came across MOS Doc: ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1).

The issue was that one of the team members was restoring a database backup on a new host. The tricky part: the new host was part of the same network/subnet as the 3-node RAC database and could access the SCAN IPs too. The instance being restored had registered itself with the SCAN, so whenever a new connection request came through the SCAN, it could be referred to an instance in MOUNT state (still restoring); hence the ORA-01033 error.

Fix 

After reviewing the note, the restore was immediately stopped, and things got back to normal. Nullifying the remote_listener parameter on the restoring instance, to de-register it from the SCAN, would also have worked in this case.
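For illustration, a hedged sketch of that alternative fix, run on the instance being restored (the scope shown is illustrative; remote_listener is dynamically modifiable):

SQL> ALTER SYSTEM SET remote_listener = '' SCOPE=MEMORY;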

This issue can be prevented by configuring Class of Secure Transport (COST).

The probability of hitting this issue is very low, but I felt it was interesting and worth sharing.

References

ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1)
NOTE:1340831.1 - Using Class of Secure Transport (COST) to Restrict Instance Registration in Oracle RAC
NOTE:1453883.1 - Using Class of Secure Transport (COST) to Restrict Instance Registration 


ORA-19527: physical standby redo log must be renamed

Thu, 2018-03-08 13:59
Data Guard was configured a few days back at one of our customer sites. An ORA-19527 error was continuously reported in the standby database alert.log (the database name has been changed):



Errors in file /u01/app/oracle/diag/rdbms/orcl/ORCL1/trace/ORCL1_rfs_3504.trc:
ORA-19527: physical standby redo log must be renamed
ORA-00312: online log 9 thread 1: '+DATAC1/ORCL/ONLINELOG/group_9.459.960055023'


Though this didn't interrupt the Data Guard replication, the customer wanted to get rid of the error message. Starting with 10g, this is expected behavior, introduced to improve switchover and failover times: when MRP is started, it attempts to clear the online redo log files immediately rather than at the time of role transition.

This is also expected behavior when the log_file_name_convert parameter is not set on the standby database. In our case, however, log_file_name_convert seemed unnecessary, as PRIMARY and STANDBY have the same directory structure (the same disk groups for DATA and RECO).

The workaround to get rid of the message, when there is no difference in the directory structure, is simply to set dummy values for the parameter, as shown in the example below (note that log_file_name_convert is a static parameter, so it requires SCOPE=SPFILE and an instance restart):

SQL> ALTER SYSTEM SET log_file_name_convert = 'dummy', 'dummy' SCOPE=SPFILE;

After the parameter was set, the ORA message was no longer seen in the alert.log



References:
ORA-19527 reported in Standby Database when starting Managed Recovery (Doc ID 352879.1)
ORA-19527: Physical Standby Redo Log Must Be Renamed...during switchover (Doc ID 2194825.1)

Oracle ERP DB Exadata migration + AC-50480: Internal error occurred + ORA-12154

Sun, 2018-02-18 06:24
I was recently involved in a project to migrate an Oracle ERP database to an Exadata machine. The ERP database was a non-RAC Oracle 12cR1 database running on RHEL with Oracle E-Business Suite 12.1.3.

The migration involved moving only the ERP database from traditional storage/server technology to the Exadata machine, and upgrading from a non-RAC database to RAC; the application remained on the same host. Though I had completed similar migration projects before, this time I faced a tough challenge resolving autoconfig issues. Here are the details of the issue and how I resolved it.

Downtime was not an issue, so I opted for an RMAN backup and restore approach. In a nutshell, the following was done:

  • Prepared the Oracle home for ERP on Exadata and applied all recommended patches to the home.
  • Performed all mandatory steps required for an ERP Oracle home.
  • After a graceful shutdown of the application tier, the ERP database was stopped and started in MOUNT state to take an RMAN full backup (this is the approach I used, though many other approaches are possible).
  • Copied the files to the target (Exadata) system and completed the recovery procedure.
  • Through a manual approach, converted the non-RAC database to RAC mode and completed the post-migration steps. It is mandatory to run the catbundle script after the database migration to avoid issues such as a blank login page and problems changing passwords. Some of you might differ with my advice, but I have faced this at multiple clients, so I made it a standard practice right after migration.
  • Autoconfig on the database nodes was successful. However, when autoconfig was executed on the application tier, it completed with warnings. The autoconfig log had the following errors:
Updating s_tnsmode to 'generateTNS'
UpdateContext exited with status: 0
AC-50480: Internal error occurred: java.lang.Exception: Error while generating listener.ora.
Error generating tnsnames.ora from the database, temporary tnsnames.ora will be generated using templates
Instantiating Tools tnsnames.ora
Tools tnsnames.ora instantiated
Web tnsnames.ora instantiated


The NetServiceHandler.log reported the following error:

SQLPLUS Executable : /oracle/PROD/apps/tech_st/10.1.2/bin/sqlplus


SQL*Plus: Release 10.1.0.5.0 - Production on Thu Feb 15 21:27:02 2018

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter value for 1: Enter value for 2: ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

Unable to generate listener.ora from database
Using default listener.ora file


After googling for the solution, the common workaround was the following:
  1. Clean up the entries from FND_NODES using EXEC FND_CONC_CLONE.SETUP_CLEAN;
  2. Run autoconfig on the DB tier and then on the App tier
Unfortunately, this didn't help in our case.

When I looked at the hostnames in FND_NODES for the database, the hostname was registered against the management hostname (Exadata has both management and public IPs). Though the client had opened the ports for the public hostname, they hadn't opened the firewall for the management network. Even with multiple IPs (management and public), the DB server can take only one hostname. So, if you are on Exadata, you need to check which hostname is registered on the server, regardless of the different IPs.
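For illustration, a hedged sketch of how you might review the registered hosts (the column names are from FND_NODES as I recall them; verify against your EBS release):

SQL> SELECT node_name, host, domain, support_db FROM apps.fnd_nodes;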
After opening the listener ports in the firewall for the management IP, autoconfig on the App tier completed successfully and we managed to connect to the application.

The bottom line: when you run ERP databases on Exadata, watch out for the hostnames and for which IPs the ports are opened from the application server.


Oracle Cloud Infrastructure services and third-party backup tools (CloudBerry)

Mon, 2018-02-12 02:46
As part of its Cloud Infrastructure services, Oracle offers the following persistent, I/O-intensive block storage and high-throughput storage options, manageable through the console and the CLI:

  • Oracle Cloud Infrastructure Block Volumes
  • Oracle Cloud Infrastructure Object Storage
  • Oracle Cloud Infrastructure Archive Storage
  • Oracle Cloud Infrastructure File Storage
  • Oracle Cloud Infrastructure Data Transfer Service
To explore the details, features, pricing, and more, visit the URL below:
 https://cloud.oracle.com/en_US/infrastructure/storage

One of our customers was struggling to delete backup data from the Oracle Cloud to avoid charges. Oracle Support suggested using the CloudBerry tools for easier management. Below is an excerpt from the CloudBerry website:

"With CloudBerry Backup you can use Oracle Storage Cloud Service as a cost-effective, remote backup solution for your enterprise data and applications. By backing up your data and applications to Oracle Storage Cloud Service, you can avoid large capital and operating expenditures in acquiring and maintaining storage hardware. By automating your backup routine to run at scheduled intervals, you can further reduce the operating cost of running your backup process. In the event of a disaster at your site, the data is safe in a remote location, and you can restore it quickly to your production systems"

CloudBerry offers the following tools for Oracle:
  • CloudBerry Explorer
  • CloudBerry Backup
  • CloudBerry Managed Backup
  • CloudBerry Drive
 Visit their website to explore more about these tools:

https://www.cloudberrylab.com/solutions/oracle-cloud

We then used the CloudBerry Backup tool (15-day free trial) to manage the backup files in Oracle cloud storage and remove them from the cloud.

You can download the 15-day free trial and give it a try.




Oracle Cloud - Database Services

Thu, 2018-01-18 03:05
This blog post highlights some of the essentials of the Oracle Cloud Database service offerings and their advantages. It also discusses Database deployment (DBaaS) benefits and the tools that can be used to automate backup/recovery operations and maintenance tasks.

No doubt, most software houses are now pushing clients toward cloud services. Though the cloud provides several benefits, one should really understand the benefits, threats, and offerings of the different cloud vendors. Here, I am going to discuss the Oracle Cloud Database Services offering.


Database service
  • Oracle Database Exadata cloud at customer (Full Oracle Databases hosted on an Oracle Exadata Database Machine inside the customer's DC)
  • DB Service on Bare Metal (dedicated database instances with full administrative control)
  • Exadata Express Service
  • Oracle Database Exadata Cloud Service (Full Oracle Databases hosted on an Oracle Exadata Database Machine inside the Oracle Cloud)
  • Database Schema service (A dedicated schema with a complete development and deployment platform managed by Oracle)
The service provides:

  • Rapid provisioning, ready to use in minutes
  • Grows as your business grows
  • Tight security to protect the data
  • Off-loads your day-to-day maintenance work

Database deployment (earlier known as DBaaS) is a compute environment which provides:
  • A Linux VM
  • Oracle software
  • A pre-created database
  • Cloud tools for automated and on-demand backup and recovery, automated patching and upgrades, a web monitoring tool, etc.




Patching a deployment database:
  • Use the cloud UI from the Oracle Cloud DB Service console, or use the command-line utility dbaascli
Backup and recovery of a deployment database (see the sketch after this list):
  • bkup_api for the automated backup service
  • dbaascli for the automated recovery service
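For a flavor of the tooling, a hedged sketch of typical invocations; the exact syntax varies with the cloud tooling version, so treat these as illustrative:

# on-demand backup through the automated backup service
$ /var/opt/oracle/bkup_api/bkup_api bkup_start

# recover the database to the latest backup
$ dbaascli orec --args -latest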
For more information about the features, overview, and pricing of Oracle Cloud Database services, visit: https://cloud.oracle.com/en_US/database

I will be blogging about each topic separately in the coming posts.

Stay tuned.


DBMCLI Utility

Wed, 2018-01-03 03:45
Exadata comes with two types of servers: cell (storage) and compute (DB). It's important for Database Machine Administrators (DMAs) to manage and configure both for various purposes. For cell server administration and settings, we use the cellcli utility. With X6 and higher, the DBMCLI utility can be used to administer and configure server settings on the compute nodes.

The DBMCLI utility is the command-line administration tool for configuring database servers and managing the server environment. Like cellcli, DBMCLI runs on each compute server, enabling you to configure an individual database server. DBMCLI can be used to start and stop the server, manage server configuration information, and enable or disable servers. The utility comes preconfigured when the Oracle Exadata Database Machine is shipped.

Note: If you enabled Capacity on Demand (CoD) during the Exadata implementation, you can use this utility to increase the CPU core count on demand.

To invoke the utility at the $ prompt, just execute the dbmcli command:

$ dbmcli [port_number] [-n] [-m] [-xml] [-v | -vv | -vvv] [-x] [-e command]
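For illustration, a hedged sketch of a short DBMCLI session (the attributes follow the documented DBSERVER object; the core count shown is a hypothetical value for a CoD increase):

$ dbmcli
DBMCLI> LIST DBSERVER DETAIL
DBMCLI> LIST DBSERVER ATTRIBUTES coreCount
DBMCLI> ALTER DBSERVER pendingCoreCount=16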