DBA Blogs

RMAN Backup script

DBA Scripts and Articles - Wed, 2017-11-08 03:54

This is a sample backup script I use; it already includes quite a few options. Feel free to make any modifications you want, and if you add some good enhancements, let me know so I can publish them here for everybody to benefit from.
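The script itself was swallowed by a syntax-highlighter shortcode during syndication and cannot be recovered here. As a rough idea of the shape such a script takes, here is a minimal sketch of a full backup with archived logs; the channel, FORMAT path, and cleanup policy are my assumptions, not the original script's options:

RMAN> run {
  # disk channel with an assumed backup destination
  allocate channel c1 device type disk format '/backup/%d_%T_%U.bkp';
  # back up the database and its archived logs, deleting logs once backed up
  backup database plus archivelog delete input;
  # purge backups that fall outside the configured retention policy
  delete noprompt obsolete;
  release channel c1;
}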

The post RMAN Backup script appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Error handling behavior changes depending on PLSQL_OPTIMIZE_LEVEL

Tom Kyte - Tue, 2017-11-07 21:26
We faced a case in our application where error messages disappear depending on PLSQL_OPTIMIZE_LEVEL. I have isolated the problem in a simple script. Run this script and you will see that at the first execution of the procedure "test_error_proc#" the error i...
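For anyone who wants to experiment along the same lines, the optimization level can be set per object and verified from the dictionary (the procedure name is taken from the question):

alter procedure test_error_proc# compile plsql_optimize_level = 1;

select plsql_optimize_level
from   user_plsql_object_settings
where  name = 'TEST_ERROR_PROC#';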
Categories: DBA Blogs

SQL Query based on performance

Tom Kyte - Tue, 2017-11-07 21:26
Hi Tom, there is a table, say t_tab, with columns a, b and c. Data in the table is huge (more than a million rows). You run the following three statements: 1. select * from t_tab 2. select a,b,c from t_tab 3. select b,c,a from t_tab Will there be a diffe...
Categories: DBA Blogs

Fuzzy Matching in SQL

Tom Kyte - Tue, 2017-11-07 21:26
Is there any SQL construct that does fuzzy matching? As an example, if I have the values Monroe, Monroe Twp, Monroe Township, and "Monroe Twp,NJ", I would like to consider them as one value.
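One common building block for this (a sketch against a hypothetical CITIES table, not necessarily the full answer) is the UTL_MATCH package; SOUNDEX is a cruder phonetic alternative:

select c1.name, c2.name,
       utl_match.jaro_winkler_similarity(c1.name, c2.name) as similarity
from   cities c1
join   cities c2 on c1.name < c2.name
where  utl_match.jaro_winkler_similarity(c1.name, c2.name) >= 85;

JARO_WINKLER_SIMILARITY returns an integer from 0 (no match) to 100 (identical), so the threshold controls how fuzzy the match is allowed to be.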
Categories: DBA Blogs

Subquery with SELECT statement works in 12c but not in 11g

Tom Kyte - Tue, 2017-11-07 21:26
Hi, I am trying to run a SELECT query which has subqueries. It runs well in a 12c environment but it is throwing an error in 11g. Could you please help me with this? Thanks, Kumar
Categories: DBA Blogs

Index creation on an empty column of a large table

Tom Kyte - Tue, 2017-11-07 21:26
Quite often we face a situation where we have a large table with hundreds of millions of records (sometimes even billions of records), and we might need to add a column to that table and then add an index on that new column. We have absolute control over...
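A common pattern for this, sketched here with invented names: ONLINE keeps DML flowing during the build and PARALLEL speeds it up, after which the index is usually set back to NOPARALLEL:

alter table big_tab add (new_col number);

create index big_tab_new_col_ix on big_tab (new_col)
  online parallel 8;

alter index big_tab_new_col_ix noparallel;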
Categories: DBA Blogs

ORA-29284: file read error for a few lines

Tom Kyte - Tue, 2017-11-07 21:26
Hi Experts, thanks for taking the time out to read my question. I am receiving a flat file from a third party, with lines of different lengths. The first two characters of each line represent what data that line will hav...
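For context, a defensive UTL_FILE read loop looks roughly like this (the directory object and file name are assumptions); opening with the maximum line size of 32767 avoids one common source of read errors:

declare
  f   utl_file.file_type;
  buf varchar2(32767);
begin
  f := utl_file.fopen('DATA_DIR', 'feed.txt', 'r', 32767);
  loop
    begin
      utl_file.get_line(f, buf);
    exception
      when no_data_found then exit;  -- normal end of file
    end;
    -- the first two characters identify the record type, per the question
    dbms_output.put_line(substr(buf, 1, 2) || ' -> ' || buf);
  end loop;
  utl_file.fclose(f);
end;
/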
Categories: DBA Blogs

Using CONNECT BY without a parent_id relationship

Tom Kyte - Tue, 2017-11-07 21:26
Hi, I have information about fathers, mothers and children, but there is no relationship between the rows via a parent_id, as follows: drop table tbl_family; create table tbl_family ( father nvarchar2(50), mother nvarchar2(50), ...
Categories: DBA Blogs

Database migration from AIX to Linux

Tom Kyte - Tue, 2017-11-07 21:26
Hello Tom, we are planning to migrate a database from AIX to Linux. Because of the different endianness we can't build a standby. Our production databases have 30-40 TB of data, and some tables alone have 1-5 TB of data. What is the best way...
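For a cross-endian move like AIX (big endian) to Linux (little endian), the usual answer at this size is cross-platform transportable tablespaces; a rough sketch of the source-side datafile conversion, with the tablespace name and staging path assumed:

RMAN> convert tablespace users
        to platform 'Linux x86 64-bit'
        format '/stage/%U';

The tablespace metadata still has to be exported with Data Pump in transportable mode and plugged in on the Linux side.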
Categories: DBA Blogs

Replication of multiple source databases to a single read-only database

Tom Kyte - Tue, 2017-11-07 21:26
Dears, hope you are fine. I have a distributed database across about 20 branches; each branch database has the same schema structure. We need a centralized report that reads from only four tables. Currently we take a dump file from each branch and impo...
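One classic alternative to the dump-and-import cycle (a sketch, with the db link and table names invented) is a read-only materialized view per branch, refreshed on a schedule:

create materialized view orders_branch01
  build immediate
  refresh force on demand
  as select * from orders@branch01;

exec dbms_mview.refresh('ORDERS_BRANCH01');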
Categories: DBA Blogs

Guaranteed Way to Pass AWS Cloud Architect Certification Exam

Pakistan's First Oracle Blog - Tue, 2017-11-07 06:00
Today, and for some time to come, one of the hottest IT certifications to hold is the AWS Cloud Architect certification. There are various reasons for that:



  • If you pass it, it really means you know the material properly.
  • AWS is the cloud platform of choice the world over, and it's not going anywhere.
  • There is literally a mad rush out there as companies scramble to shift or extend their infrastructure to the cloud to stay relevant and to cut costs.
  • There is a huge shortage of professionals with theoretical and hands-on knowledge of the cloud, and this shortage is growing alarmingly.
So it's not surprising that sysadmins, developers, DBAs, and other IT professionals yearn for cloud credentials, and there is no better way to earn them than getting AWS certified.

So is there any guaranteed way to pass the AWS Cloud Architect certification exam?

I say yes, and here is the way:

Read the AWS documentation about the following AWS services. Read about these services, then read them again and again, and then again. Learn them like you know your own name. Get a free account and play with these services. When you feel comfortable enough with them and can recite them to anyone inside out, go ahead and sit the exam; you will pass it for sure. Read and learn all services under these sections:


  • Compute
  • Storage
  • Database 
  • Network & Content Delivery
  • Messaging
  • Identity and Access Management
Also make sure to read the FAQs of all the above services, and read and remember what AWS Kinesis, WAF, Data Pipeline, EMR, and WorkSpaces are. No details are necessary for these; just know what they stand for and what they do.

Best of Luck.
Categories: DBA Blogs

Passed the AWS Certified Solutions Architect - Associate Exam

Pakistan's First Oracle Blog - Tue, 2017-11-07 05:11
Well, it was quite an enriching experience to go through the AWS certification exam, and I am humbled to say that I passed it. It was the first time I had taken any AWS exam, and I must say the quality was high; it was challenging and interesting.

I will be writing soon about how I prepared and what my tips are for passing this exam.

Good night for now.
Categories: DBA Blogs

Introduction to Oracle REST Data Services

DBA Scripts and Articles - Tue, 2017-11-07 03:06

Presentation: First, let's talk about REST APIs. REST APIs are very popular nowadays because they are easily accessible from any client able to make HTTP(S) requests. The beauty of this is that you can access different systems through these REST APIs independently of how each system is built on the backend. For example I can … Continue reading Introduction to Oracle REST Data Services
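As a taste of what the article builds toward, here is a minimal sketch of REST-enabling a schema and a table with the ORDS PL/SQL package; the URL alias and table name are assumptions:

begin
  -- map the current schema into the /ords/hr/ URL space (alias assumed)
  ords.enable_schema(p_enabled => true, p_url_mapping_pattern => 'hr');
  -- expose one table through the AutoREST interface
  ords.enable_object(p_enabled => true, p_object => 'EMPLOYEES', p_object_type => 'TABLE');
  commit;
end;
/

After this, a client can issue plain HTTP calls such as GET http://host:port/ords/hr/employees/ and receive JSON back.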

The post Introduction to Oracle REST Data Services appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Performance comparison: delete and insert into a backup table vs. the Oracle 12c archival feature

Tom Kyte - Tue, 2017-11-07 03:06
Hi, which of the below-mentioned approaches gives better performance? The UPDATE vs. the INSERT/DELETE both take almost equal time ... Is this correct or am I doing something wrong? I know about the limitations of the Oracle archival feature as well ...
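For readers unfamiliar with the 12c feature the question refers to, In-Database Archiving marks rows as archived instead of physically moving them; a minimal sketch against a hypothetical ORDERS table:

alter table orders row archival;

-- a 'soft delete': archived rows stay in the table but vanish from normal queries
update orders
set    ora_archive_state = '1'
where  order_date < date '2015-01-01';

-- make archived rows visible again in the current session
alter session set row archival visibility = all;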
Categories: DBA Blogs

Cost of the query is too high: it is taking around 50 hours to run

Tom Kyte - Tue, 2017-11-07 03:06
When I am running the below query, it takes around 50 hours in the dev environment. The same query completes within a minute in another environment, and the data is the same in both environments. Any thoughts? delete FROM uet_extract_clinical uec W...
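Given identical data, the first things to compare across the two environments are the actual execution plans and the freshness of the optimizer statistics; a sketch (the &sql_id substitution variable is a placeholder):

select * from table(dbms_xplan.display_cursor('&sql_id', null));

exec dbms_stats.gather_table_stats(user, 'UET_EXTRACT_CLINICAL');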
Categories: DBA Blogs

SQL to identify duplicates of data groups.

Tom Kyte - Tue, 2017-11-07 03:06
Hi, I have a peculiar requirement to identify duplicates of data groups. Here is a sample data set: PI BU PR AC ---------- ---------- -------------------- ---------- 1001 100 PR1 ...
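One way to frame "duplicate groups" (a sketch, assuming the goal is to find PI values whose complete set of BU/PR/AC rows matches another PI's set) is to build a signature per group with LISTAGG and count how often each signature repeats:

select grp_signature, count(*) as matching_groups
from  (select pi,
              listagg(bu || '~' || pr || '~' || ac, '|')
                within group (order by bu, pr, ac) as grp_signature
       from   t  -- table name assumed
       group  by pi)
group  by grp_signature
having count(*) > 1;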
Categories: DBA Blogs

UNIQUE LOCAL (Partitioned) Index

Hemant K Chitale - Mon, 2017-11-06 21:44
It is easy to create a default GLOBAL Index that is defined as a Unique Index on a column (or composite of columns) containing unique values.

But what if you have a Partitioned Table and want to create a LOCAL (i.e. equi-partitioned with the table) Index as a Unique Index? Are there any constraints?

Let me demonstrate a Partitioned table listing Users by Region Code.

SQL> create table users
2 (region_code varchar2(3),
3 username varchar2(30),
4 account_status varchar2(32),
5 created date,
6 profile varchar2(128))
7 partition by range (region_code)
8 (partition a_m values less than ('N'),
9 partition n_r values less than ('S'),
10 partition s_z values less than (MAXVALUE))
11 /

Table created.

SQL>
SQL> insert into users
2 select substr(username,1,3), username, account_status, created, profile
3 from dba_users
4 /

39 rows created.

SQL>
SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','USERS');

PL/SQL procedure successfully completed.

SQL>
SQL> col partition_name format a30
SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'USERS'
4 order by partition_position
5 /

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
A_M                                    18
N_R                                     9
S_Z                                    12

SQL>


I now verify that I can create a GLOBAL (non-partitioned) Unique Index on USERNAME.

SQL> create unique index users_username_u1 on users(username) global;

Index created.

SQL> drop index users_username_u1;

Index dropped.

SQL> create unique index users_username_u1 on users(username);

Index created.

SQL> drop index users_username_u1;

Index dropped.

SQL>


I now verify that I can create a Non-Unique LOCAL Index, equi-partitioned with the table and therefore partitioned by REGION_CODE. Being equi-partitioned, the default behaviour is that the Index Partition Names inherit from the Table Partition Names.

SQL> create index users_username_l1 on users(username) local;

Index created.

SQL> select partition_name, num_rows
2 from user_ind_partitions
3 where index_name = 'USERS_USERNAME_L1'
4 order by partition_position
5 /

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
A_M                                    18
N_R                                     9
S_Z                                    12

SQL>
SQL> drop index users_username_l1;

Index dropped.

SQL>


I've proven (with the GLOBAL Index) that USERNAME is Unique across the whole table. Can I create a Unique LOCAL Index on this column?

SQL> create unique index users_username_u_l1 on users(username) local;
create unique index users_username_u_l1 on users(username) local
*
ERROR at line 1:
ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index


SQL>


Note the Error Message: the Partition Key must be a subset of the Unique Index columns. Let me try adding the Partition Key. (In my example table the Partition Key is a single column; it could be a composite of multiple columns, in which case all the columns of the Partition Key must form a subset of the Unique Index.)

SQL> create unique index users_rc_un_u_l1 on users(region_code, username) local;

Index created.

SQL> select partition_name, num_rows
2 from user_ind_partitions
3 where index_name = 'USERS_RC_UN_U_L1'
4 order by partition_position
5 /

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
A_M                                    18
N_R                                     9
S_Z                                    12

SQL> drop index users_rc_un_u_l1;

Index dropped.

SQL> create unique index users_un_rc_u_l1 on users(username, region_code) local;

Index created.

SQL> select partition_name, num_rows
2 from user_ind_partitions
3 where index_name = 'USERS_UN_RC_U_L1'
4 order by partition_position
5 /

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
A_M                                    18
N_R                                     9
S_Z                                    12

SQL>

It doesn't matter whether I put the Partition Key (REGION_CODE) before or after the target column(s) (USERNAME) when I create the LOCAL Unique Index. What is necessary is that the Partition Key be a subset of the Unique Index definition.

Categories: DBA Blogs

SQL Profile to fix slow inserts

Bobby Durrett's DBA Blog - Mon, 2017-11-06 14:27

I was on call Sunday and got paged about a job that normally runs for 10 to 15 minutes but had run for 5 hours already. I used the coe_xfr_sql_profile.sql script from SQLT to speed up an insert statement that was part of the job. We restarted the job and it completed in a few minutes.

I have written a number of posts about the use of coe_xfr_sql_profile.sql. Sunday’s issue was most like the post on using hints with coe_xfr_sql_profile.sql.

The first thing I did was look at the execution history of the problem insert statement which had sql_id = ‘ba9w9cjy87hd8’. I used my sqlstat.sql script.

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
ba9w9cjy87hd8      3341942963 05-NOV-17 07.00.03.091 AM                2         1302824.53        1162165      572.8985                  0                  2.271                61.1125             6077761               1021                    8.5
ba9w9cjy87hd8      3341942963 05-NOV-17 10.00.23.889 AM                3          846842.04     844443.333       108.995                  0                      0                      0          4475478.33         196.333333             5.66666667

Even though this job runs every weekend and we keep 6 weeks of history in the AWR, ba9w9cjy87hd8 only showed up yesterday. I checked the SQL and it did not have any constants; it uses bind variables, so the same sql_id should apply every week. My guess is that it normally runs so fast that it did not get recorded in the AWR on previous weeks, but this week it chose a terrible plan. I see this a lot with SQL statements that have bind variables: I think a bad execution plan gets chosen based on some abnormal bind variable value and then sticks in memory.

The other thing I noticed while looking at the session for this job is that it was not using any parallel processes. I looked at the plan and there were no parallel steps. This system uses a lot of parallel query so I suspected that a parallel plan was the normal plan and the bad plan somehow came out with serial execution.

I checked our test databases to see if they had any execution history for this SQL, and they did not. So I was stuck trying to get a better plan and then using a SQL profile to force that plan on the insert statement. I ended up running the explain plan statement without giving the bind variable a value, and it came out with a parallel plan. I used the outline hint from that plan to force the insert statement to run the parallel plan against a test database. I ran the insert with a rollback statement so it wouldn't commit the new rows. Also, I picked a bind variable value from the production database's AWR to run with my test query.
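For reference, a sketch of that technique with invented object names: EXPLAIN PLAN the statement, then pull the plan with the ADVANCED format, which prints the complete hint set under "Outline Data":

explain plan for
  insert /*+ append */ into sales_hist
  select * from sales_stage;

select * from table(dbms_xplan.display(format => 'ADVANCED'));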

Next I decided to run my test insert statement against production with the rollback statement. The first time I tried the statement it hung on a TM enqueue. I think this was because the insert was using an append hint and the running job was holding a TM lock. After we killed the production job I could run my test insert with the rollback statement.

I had to get the parallel plan into memory or into the AWR so that coe_xfr_sql_profile.sql would pick it up based on the plan hash value. It felt weird running an insert statement for a production table but I tested it on a test database first and made sure it would be rolled back.

I ran this select after the insert and rollback to get the plan for the insert in production:

select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

Part of the output of the select included the plan hash value of the plan that my test insert used:

Plan hash value: 3809184161

Finally, I forced the good plan, which had hash value 3809184161, onto the sql_id ba9w9cjy87hd8 using coe_xfr_sql_profile.sql. Then the job ran in a few minutes.
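For reference, the script takes the sql_id and the desired plan hash value and generates a second script that actually creates the profile; the invocation looks roughly like this:

SQL> @coe_xfr_sql_profile.sql ba9w9cjy87hd8 3809184161
SQL> @coe_xfr_sql_profile_ba9w9cjy87hd8_3809184161.sql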

Bobby

Categories: DBA Blogs

Need to convert Rows to Columns

Tom Kyte - Mon, 2017-11-06 08:46
How do I convert rows to columns? I have audit data as below: create table c_t (t_id varchar2(50), change_field varchar2(50), new_value varchar2(50), old_value varchar2(50), a_dt date); insert into c_t values('T1','Status','WIP','Pendin...
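The usual tool for this since 11g is the PIVOT clause; a minimal sketch against the table from the question, with the IN-list value taken from the visible part of the sample insert (a real solution would list every Change_Field value of interest):

select *
from  (select t_id, change_field, new_value from c_t)
pivot (max(new_value) for change_field in ('Status' as status));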
Categories: DBA Blogs

ORA-12899: value too large when inserting over DB link with different character sets

Tom Kyte - Mon, 2017-11-06 08:46
Hi Tom, we are using two databases with two different character sets: 11.2.0.1.0 - 64bit = AR8MSWIN1256 and 12.1.0.2.0 - 64bit = AL32UTF8. We are trying to insert data from the 11.2 database into the 12.1 database through a DBLINK, but it gives 'ORA-12899: ...
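The usual culprit: AL32UTF8 stores many AR8MSWIN1256 characters in two or more bytes, so a value that fits in VARCHAR2(50) bytes on the source can exceed 50 bytes on the target. One common fix (the table and column names here are assumptions) is character-length semantics on the receiving column:

alter table target_tab modify (customer_name varchar2(50 char));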
Categories: DBA Blogs
