Optimal Chunk size for update of 50M record table (Oracle 11g RAC, Linux OEL 6)
Optimal Chunk size for update of 50M record table [message #672106] Wed, 03 October 2018 04:56
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
Hi all,

- Large table - 50M records
- Of which I need to update 40M records.
- Production environment, while working
- Cannot prevent access to the table - transactional data is frequently read & updated ( but not the records I am updating )
- Cannot set the table or the operation to NOLOGGING , and cannot rename it or take it offline for any amount of time

I know these are the worst possible conditions, but I have been challenged to do this numerous times by different customers.


So I tried to update the table in one statement - but even when setting UNDO_RETENTION and the undo tablespace's RETENTION GUARANTEE to provide more resources for the operation - I got problems with the UNDO tablespaces.

Which led me to try updating the table in chunks - every 10k records, or every 50k records, etc.
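
Just to illustrate what I mean by "chunks", the pattern is roughly the following (the column name and value are made up; it is only a sketch):

DECLARE
  l_rows PLS_INTEGER;
BEGIN
  LOOP
    UPDATE mytab t
       SET t.some_col = 'NEW_VALUE'      -- hypothetical column and value
     WHERE t.some_col <> 'NEW_VALUE'     -- hypothetical driving predicate: skip rows already done
       AND ROWNUM <= 50000;              -- the chunk size in question
    l_rows := SQL%ROWCOUNT;
    COMMIT;                              -- each commit frees that chunk's undo for reuse
    EXIT WHEN l_rows = 0;
  END LOOP;
END;
/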

Which leads me to the final question I am asking you DBA experts:

How can I calculate the optimal chunk size for an update on a large table in the given scenario ?



Which parameters do I need to take into account, and which considerations should I make ?
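
The kind of calculation I have in mind is something along these lines (the numbers are purely illustrative, and I am not sure the reasoning is even valid):

-- 1) On a test system, measure the undo generated by a sample chunk:
SELECT s.value
  FROM v$mystat s, v$statname n
 WHERE n.statistic# = s.statistic#
   AND n.name = 'undo change vector size';      -- note the value

UPDATE mytab SET some_col = 'NEW_VALUE'         -- hypothetical sample update
 WHERE ROWNUM <= 100000;

-- re-run the v$mystat query; (delta / 100000) = undo bytes per row
ROLLBACK;

-- 2) chunk_rows ~= (undo space I am willing to occupy) / (undo bytes per row)
--    e.g. a 2 GB budget / ~400 bytes per row  ->  ~5 million rows per chunk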

Thanks in advance,
Andrey

Re: Optimal Chunk size for update of 50M record table [message #672107 is a reply to message #672106] Wed, 03 October 2018 06:48
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
What is the problem with updating the 40 million rows (I wish you would not say "record" when you mean "row") with one UPDATE statement?
Re: Optimal Chunk size for update of 50M record table [message #672108 is a reply to message #672107] Wed, 03 October 2018 06:59
Bill B
Messages: 1971
Registered: December 2004
Senior Member
What types of errors are you getting? Is it the dreaded ORA-01555? If you are only updating 40 million rows, your UNDO table space is severely undersized. How large is it? A little more information would help.
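
Something along these lines would show it (any equivalent query is fine):

SELECT tablespace_name,
       ROUND(SUM(bytes)    / 1024 / 1024 / 1024, 1) AS current_gb,
       ROUND(SUM(maxbytes) / 1024 / 1024 / 1024, 1) AS autoextend_max_gb
  FROM dba_data_files
 WHERE tablespace_name IN (SELECT tablespace_name
                             FROM dba_tablespaces
                            WHERE contents = 'UNDO')
 GROUP BY tablespace_name;
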
Re: Optimal Chunk size for update of 50M record table [message #672110 is a reply to message #672107] Wed, 03 October 2018 07:06
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Wed, 03 October 2018 14:48
What is the problem with updating the 40 million rows (I wish you would not say "record" when you mean "row") with one UPDATE statement?
The undo tablespaces get filled up, and all other activity in the system slows down - from bad to impossibly slow to work with.
This is why I break the update down into chunks.
Re: Optimal Chunk size for update of 50M record table [message #672111 is a reply to message #672110] Wed, 03 October 2018 07:12
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
For the performance issue, you will have to dig a little deeper. If it is caused by other sessions having to do a large number of undo reads then, yes, smaller transactions may help. But if it is simply the workload of doing the update, then you will have problems whether it is one large transaction or many small ones. How long does the update take?
Re: Optimal Chunk size for update of 50M record table [message #672112 is a reply to message #672108] Wed, 03 October 2018 07:13
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
Bill B wrote on Wed, 03 October 2018 14:59
What types of errors are you getting? Is it the dreaded ORA-01555? If you are only updating 40 million rows, your UNDO table space is severely undersized. How large is it? A little more information would help.

I may have in other cases 500M records.
So maybe in this or that case it is undersized; in other cases it can be large but still not big enough, and doing the action all at once may become critical for the system.

This is why I am asking how to adjust the chunk size by calculation, and not by experience and some skilled guessing - which I am sure you are well capable of - but as I explained, the question is different.

Sometimes the error is ORA-01555, sometimes it is not raised, but the users complain that they have trouble working as they did before,
and when I check - UNDO space does indeed fill up
Re: Optimal Chunk size for update of 50M record table [message #672114 is a reply to message #672111] Wed, 03 October 2018 07:27
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Wed, 03 October 2018 14:48
Quote:
What is the problem with updating the 40 million rows (I wish you would not say "record" when you mean "row") with one UPDATE statement?


It happened (ORA-01555) in the past on other updates, along with performance issues felt by users, which made the customer demand that I implement a chunk-based solution.
Which got me to ask the question in this post.

In general it takes between a few hours and a couple of tens of hours or so (and tables may vary between 50M and 500M records, of which 80% get updated).

[Updated on: Wed, 03 October 2018 07:29]

Re: Optimal Chunk size for update of 50M record table [message #672115 is a reply to message #672114] Wed, 03 October 2018 07:30
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
I can't handle this use of "records". It just gets on my nerves. Good bye.
Re: Optimal Chunk size for update of 50M record table [message #672116 is a reply to message #672115] Wed, 03 October 2018 07:34
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Wed, 03 October 2018 15:30
I can't handle this use of "records". It just gets on my nerves. Good bye.
Sorry, I didn't do that on purpose.

I will use "rows" from now on in my phrasing.
I would very much appreciate it if you could explain why "records" is the wrong phrasing.

Thanks.
Re: Optimal Chunk size for update of 50M record table [message #672117 is a reply to message #672114] Wed, 03 October 2018 07:41
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Andrey_R wrote on Wed, 03 October 2018 05:27
John Watson wrote on Wed, 03 October 2018 14:48
Quote:
What is the problem with updating the 40 million rows (I wish you would not say "record" when you mean "row") with one UPDATE statement?

It happened (ORA-01555) in the past on other updates, along with performance issues felt by users, which made the customer demand that I implement a chunk-based solution.
Which got me to ask the question in this post.

In general it takes between a few hours and a couple of tens of hours or so (and tables may vary between 50M and 500M records, of which 80% get updated).

You have made two unsubstantiated claims.
1) UNDO is "full"
2) UNDO is the direct cause of the application slowdown

Please provide SQL & results that show above is true & correct.

Please consider that any or every SQL which does not access the unnamed "busy" table should be totally unaffected by it.
Please explain how SQL statements going against any other table in the application are slowed down by UNDO & the busy table.
Re: Optimal Chunk size for update of 50M record table [message #672119 is a reply to message #672117] Wed, 03 October 2018 07:47
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
I will prepare a testcase and come back to you guys.

Thank you for your efforts so far, much appreciated.

Also, I would genuinely be happy to know why "records" is wrong and "rows" is more correct.

Thanks again.
Re: Optimal Chunk size for update of 50M record table [message #672120 is a reply to message #672117] Wed, 03 October 2018 07:50
Bill B
Messages: 1971
Registered: December 2004
Senior Member
What is it about the data that makes it a candidate for updating? Is it something that you can build a partition on? Because then you can build a separate table with all your updated rows and simply exchange the partition with the table. It is really fast. See the following description of the method.

https://www.akadia.com/services/ora_exchange_partition.html
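
The shape of it is roughly this (it only applies if the table is, or can be made, partitioned; the partition and staging-table names are just placeholders):

-- Build a staging table that already contains the rows in their updated form
CREATE TABLE mytab_stage AS
  SELECT /* columns, with the new values applied */ *
    FROM mytab PARTITION (p_current);            -- hypothetical partition

-- Create matching indexes on mytab_stage, then swap it in
ALTER TABLE mytab
  EXCHANGE PARTITION p_current WITH TABLE mytab_stage
  INCLUDING INDEXES WITHOUT VALIDATION;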

[Updated on: Wed, 03 October 2018 08:26]

Re: Optimal Chunk size for update of 50M record table [message #672129 is a reply to message #672119] Wed, 03 October 2018 11:13
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>got problems with the UNDO tablespaces.
PROBLEMS?
What Problem?
I don't see any problem.
Please use COPY & PASTE to show us what you do & how Oracle responds.

Please explain how SQL statements going against any other table in the application are slowed down by UNDO & the busy table.
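
For example, V$UNDOSTAT would substantiate (or refute) both claims - the error and steal counters are the interesting columns:

SELECT begin_time, end_time, undoblks, maxquerylen,
       ssolderrcnt,      -- ORA-01555 errors in the 10-minute interval
       nospaceerrcnt,    -- "out of undo space" errors
       unxpstealcnt      -- unexpired extents stolen = real undo pressure
  FROM v$undostat
 ORDER BY begin_time;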

Re: Optimal Chunk size for update of 50M record table [message #672814 is a reply to message #672129] Sun, 28 October 2018 03:59
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
Hi all,
I'm back with an actual example.


This is the query to perform:

 DELETE from MYTAB
 where COL1 in ('A'
                        ,'B'
                        ,'C')
   and COL2 < sysdate - 365;
   


size of this table and indexes:


SQL>
SQL> select segment_type, count(*), sum(bytes / 1024 / 1024 / 1024) gb
  2    from user_segments
  3   where (segment_name = 'MYTAB' and segment_type = 'TABLE')
  4      or (segment_name in
  5         (select index_name from user_indexes where table_name = 'MYTAB') and
  6         segment_type = 'INDEX')
  7   group by segment_type;

SEGMENT_TYPE                           COUNT(*)         GB
------------------------------------ ---------- ----------
TABLE                                         1 18.7900391
INDEX                                        27 103.360352

Elapsed: 00:00:00.09
SQL>
SQL>







number of rows in this table:

SQL> select count(*) from MYTAB;

  COUNT(*)
----------
  83313085

Elapsed: 02:05:36.18



number of rows to be deleted by my query:

SQL> select /*+ index ( t COL1_date )*/ count(*)
  2  from MYTAB t
  3  where
  4    t.COL1 in ('A'
  5                          ,'B'
  6                          ,'C')
  7     and t.COL2 < sysdate - 365;

  COUNT(*)
----------
  36537567

Elapsed: 01:07:11.88
SQL>
SQL>



I attempted it (before I decided to post this reply, so I didn't capture it in SQL*Plus) and got an "ORA-01652 Unable to extend temp segment by in tablespace"


So I added a data file to each undo tablespace, autoextensible up to 32GB (maxsize unlimited).
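
(The statement I used was along these lines - the file name here is only illustrative:)

ALTER TABLESPACE undotbs1
  ADD DATAFILE '/u01/oradata/MYDB/undotbs1_02.dbf'   -- illustrative path
  SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;   -- UNLIMITED = 32GB for an 8K-block smallfile
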
I have undo this size now:


SQL> select tablespace_name,count(*) files, sum(maxbytes / 1024 / 1024 / 1024) gb
  2    from dba_data_files
  3   where tablespace_name in
  4         (select tablespace_name from dba_tablespaces where contents = 'UNDO')
  5   group by tablespace_name;

TABLESPACE_NAME                                                   FILES         GB
------------------------------------------------------------ ---------- ----------
UNDOTBS1                                                              2 63.9999695
UNDOTBS2                                                              2 63.9999695



A very basic check on statistics:


SQL> select num_rows, last_analyzed
  2    from user_tables
  3   where table_name = 'MYTAB';

  NUM_ROWS LAST_ANALYZE
---------- ------------
  83313085 26-OCT-18

Elapsed: 00:00:00.07
 




same for indexes:

Elapsed: 00:00:00.07
SQL> select num_rows, last_analyzed
  2     from user_indexes
  3    where table_name = 'MYTAB';

  NUM_ROWS LAST_ANALYZE
---------- ------------
  83313085 26-OCT-18
  80560843 01-SEP-18
  80549530 01-SEP-18
  80565416 01-SEP-18
  80563435 01-SEP-18
  80562500 01-SEP-18
  80558785 01-SEP-18
  80552906 01-SEP-18
  80551698 01-SEP-18
  80547597 01-SEP-18
  80545607 01-SEP-18
  80544329 01-SEP-18
  80543717 01-SEP-18
  80542771 01-SEP-18
  80542204 01-SEP-18
  80541103 01-SEP-18
  80540321 01-SEP-18
  80538074 01-SEP-18
  71588492 19-MAR-18
  71587225 19-MAR-18
  71586527 19-MAR-18
  71585113 18-MAR-18
  71584012 18-MAR-18
  71582121 18-MAR-18
  77232409 02-JUL-18
  77229465 02-JUL-18
  80568420 02-SEP-18

27 rows selected.
  



checked the SID for my session, in order to be able to monitor it:

   SQL>    select userenv('sid') from dual;

USERENV('SID')
--------------
           825





So I have an 83M-row table with 27 indexes, of which I want to delete 36M rows (43%).

The table + indexes size sums up to 122GB.
My overall undo size is 128GB (64GB + 64GB).




What's the best strategy... or should I say,
what's the approach for deciding on the best strategy for this DML operation ?


*Please note that I cannot take it offline or change its structure (partitioning) in any way. No downtime allowed.
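
One direction I am considering is DBMS_PARALLEL_EXECUTE, roughly like the sketch below (the owner name, chunk size and parallel level are placeholders) - each chunk commits as its own transaction, so the undo footprint stays small. Is that the right approach here?

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'PURGE_MYTAB');

  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
      task_name   => 'PURGE_MYTAB',
      table_owner => 'MYUSER',            -- placeholder owner
      table_name  => 'MYTAB',
      by_row      => FALSE,
      chunk_size  => 10000);              -- blocks per chunk

  DBMS_PARALLEL_EXECUTE.run_task(
      task_name      => 'PURGE_MYTAB',
      sql_stmt       => 'DELETE FROM mytab
                          WHERE col1 IN (''A'',''B'',''C'')
                            AND col2 < SYSDATE - 365
                            AND rowid BETWEEN :start_id AND :end_id',
      language_flag  => DBMS_SQL.native,
      parallel_level => 4);               -- placeholder degree

  -- in real life, check the task/chunk status before dropping the task
  DBMS_PARALLEL_EXECUTE.drop_task(task_name => 'PURGE_MYTAB');
END;
/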

*To Moderator: I think I made a mistake with placing this whole thread in the wrong topic.
If it can be moved to Performance Tuning perhaps it would be more correct... Thank you and sorry for the mistake



TIA.

[Updated on: Sun, 28 October 2018 04:04]

Re: Optimal Chunk size for update of 50M record table [message #672815 is a reply to message #672814] Sun, 28 October 2018 08:17
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
> "ORA-01652 Unable to extend temp segment by in tablespace"
Please NOTE that as written above, it is 100% ambiguous regarding which tablespace has a problem.
It could be tablespace holding the table data.
It could be tablespace holding the index data.
It could be REDO tablespace.

I remain unconvinced that UNDO is root cause.

I do NOT understand how any of the most recent post has anything to do with the original ORA-01555 Snapshot Too Old error.
Re: Optimal Chunk size for update of 50M record table [message #672816 is a reply to message #672815] Sun, 28 October 2018 08:45
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Sun, 28 October 2018 15:17
> "ORA-01652 Unable to extend temp segment by in tablespace"
Please NOTE that as written above, it is 100% ambiguous regarding which tablespace has a problem.
It could be tablespace holding the table data.
It could be tablespace holding the index data.
It could be REDO tablespace.

I remain unconvinced that UNDO is root cause.

I do NOT understand how any of the most recent post has anything to do with the original ORA-01555 Snapshot Too Old error.

Ok. So let's return to the original question.
I have a delete of 36M records from an 83M-record table which cannot be taken offline.

Is the correct strategy to simply try to perform the delete statement in one go and see what happens ?
Re: Optimal Chunk size for update of 50M record table [message #672817 is a reply to message #672816] Sun, 28 October 2018 08:53
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>Is the correct strategy to simply try to perform the delete statement in one go and see what happens ?

yes, that is what Oracle was designed to do.
BTW, tables have ROWS; not records.
IMO, 36M rows is tiny, tiny, tiny.
Re: Optimal Chunk size for update of 50M record table [message #672819 is a reply to message #672112] Sun, 28 October 2018 11:46
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Andrey_R wrote on Wed, 03 October 2018 05:13
Bill B wrote on Wed, 03 October 2018 14:59
What types of errors are you getting? Is it the dreaded ORA-01555? If you are only updating 40 million rows, your UNDO table space is severely undersized. How large is it? A little more information would help.
I may have in other cases 500M records.
So maybe in this or that case it is undersized; in other cases it can be large but still not big enough, and doing the action all at once may become critical for the system.

This is why I am asking how to adjust the chunk size by calculation, and not by experience and some skilled guessing - which I am sure you are well capable of, but as I explained, the question is different.

Sometimes the error is ORA-01555, sometimes it is not raised, but the users complain that they have trouble working as they did before,
and when I check - UNDO space does indeed fill up
I have NEVER seen an UPDATE statement reporting ORA-01555.
It has always been a long-running SELECT that terminates with ORA-01555 when some process does DML against the same table as the SELECT and the DML is then COMMITTED.
Re: Optimal Chunk size for update of 50M record table [message #672820 is a reply to message #672819] Sun, 28 October 2018 12:07
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Sun, 28 October 2018 18:46
Andrey_R wrote on Wed, 03 October 2018 05:13
Bill B wrote on Wed, 03 October 2018 14:59
What types of errors are you getting? Is it the dreaded ORA-01555? If you are only updating 40 million rows, your UNDO table space is severely undersized. How large is it? A little more information would help.
I may have in other cases 500M records.
So maybe in this or that case it is undersized; in other cases it can be large but still not big enough, and doing the action all at once may become critical for the system.

This is why I am asking how to adjust the chunk size by calculation, and not by experience and some skilled guessing - which I am sure you are well capable of, but as I explained, the question is different.

Sometimes the error is ORA-01555, sometimes it is not raised, but the users complain that they have trouble working as they did before,
and when I check - UNDO space does indeed fill up
I have NEVER seen an UPDATE statement reporting ORA-01555.
It has always been a long-running SELECT that terminates with ORA-01555 when some process does DML against the same table as the SELECT and the DML is then COMMITTED.
Ok look.
You are trying to find a way in which ALL the details in this thread will be correct ( the ORA-01555 did not appear in this current test at all ).
But we started from an initial sort of "general question", the nature of which the (justifiably) respectful members did not like, so they asked for actual details.

So I apologized and returned with actual info which I posted in the 1st post this morning.
I have stated absolutely correct details this time, however, now you are reminding us of the inaccuracies posted beforehand...

So.. if it makes everyone happy I can just create a new topic with only EVIDENT information in the scope of my recent actions which I can show in SQL*Plus.
I truly apologize for all the confusion and mistakes.

[Updated on: Sun, 28 October 2018 12:07]

Re: Optimal Chunk size for update of 50M record table [message #672822 is a reply to message #672820] Sun, 28 October 2018 12:40
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
post SQL & results that show a problem which requires a solution
Re: Optimal Chunk size for update of 50M record table [message #672868 is a reply to message #672822] Tue, 30 October 2018 04:03
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Sun, 28 October 2018 19:40
post SQL & results that show a problem which requires a solution


SQL> set lines 900 pages 20000
SQL> col sql_id for a15
SQL> col sql_text for a90
SQL> col last_load_time for a20
SQL> col c1 for a50
SQL> set num=11
SP2-0268: numwidth option not a valid number
SQL>
SQL> select
  2   t.used_urec,
  3         v.sql_id,
  4         sql_text,
  5         executions,
  6         v.ELAPSED_TIME / 1000000 seconds,
  7         last_load_time,
  8         child_number,'select * from table (dbms_xplan.display_cursor ('''||v.sql_id||''','||child_number||'));' as c1
  9    from gv$sql v, gv$session s, gv$transaction t
 10   where
 11  v.inst_id (+)= s.inst_id
 12    and v.sql_id (+)= s.sql_id
 13    and s.sid=352
 14    and s.inst_id = 2
 15    and t.INST_ID(+) = s.INST_ID
 16    and t.SES_ADDR(+) = s.SADDR /*the transaction id*/
 17   order by      executions desc;

 USED_UREC SQL_ID          SQL_TEXT                                                                                   EXECUTIONS    SECONDS LAST_LOAD_TIME       CHILD_NUMBER C1
---------- --------------- ------------------------------------------------------------------------------------------ ---------- ---------- -------------------- ------------ --------------------------------------------------
  37213407 7x61wdubvycnk   DELETE from MYTAB  where COL1 in ('A'                         ,'B'              1 20972.6294 2018-10-29/23:29:59             0 select * from table (dbms_xplan.display_cursor ('7
                                                ,'C')    and COL2 < sysdate - 365                                                                            x61wdubvycnk',0));


It has been running for about 14 hours already; however, it stopped updating gv$session after less than 6 hours.
From gv$transaction I can tell that it is not standing still, as the USED_UREC value is rising all the time.


I can also see some UNDO utilization by executing these queries.
For the overall picture:

SQL> col owner for a20
SQL> col tablespace_name for a15
SQL> define num=10
SQL> col status for a15
SQL> select u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status, round(sum(u.bytes)/1024/1024,2) "MB"
  2    from dba_undo_extents u, dba_data_files f
  3    where u.file_id = f.file_id
  4    group by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status
  5    order by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status;

OWNER                TABLESPACE_NAME COMMIT_JTIME COMMIT_WTIME                             STATUS                  MB
-------------------- --------------- ------------ ---------------------------------------- --------------- ----------
SYS                  UNDOTBS1                                                              EXPIRED                  7
SYS                  UNDOTBS1                                                              UNEXPIRED            16.56
SYS                  UNDOTBS2                                                              ACTIVE            65438.06
SYS                  UNDOTBS2                                                              EXPIRED               2.25
SYS                  UNDOTBS2                                                              UNEXPIRED            76.81

Elapsed: 00:00:00.15
SQL>

And for sessions:

SQL> define num=10
SQL> col program for a15
SQL> select ss.inst_id, ss.sid, s.serial#,  s.program
  2                    , round(ss.value/1024/1024,2) MB
  3    from gv$sesstat ss, v$statname sn, gv$session s
  4    where sn.statistic# = ss.statistic#
  5                  and s.inst_id = ss.inst_id and s.sid = ss.sid
  6                  and sn.name like 'undo change vector size'
  7                  and s.sid=352
  8    order by ss.value desc;

   INST_ID        SID    SERIAL# PROGRAM                 MB
---------- ---------- ---------- --------------- ----------
         2        352      36565 sqlplus.exe       54542.39

Elapsed: 00:00:00.03
SQL>


1. How can I know the progress of the operation (for example, "50% done" or so), so that I know when to expect it to finish? If it were divided into chunks, we could at least know how many "chunks" have been processed..
2. Is it normal for such an operation to run for so long ? How can I know/measure this ?
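
For (1), the only rough idea I have so far is USED_UREC - something like this, on the assumption (which I cannot verify) that each deleted row produces roughly one undo record for the table plus one per maintained index, i.e. about 28 per row here:

SELECT t.used_urec,
       ROUND(100 * t.used_urec / (36537567 * 28), 1) AS est_pct_done
  FROM gv$transaction t, gv$session s
 WHERE t.inst_id  = s.inst_id
   AND t.ses_addr = s.saddr
   AND s.inst_id  = 2
   AND s.sid      = 352;
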
Re: Optimal Chunk size for update of 50M record table [message #672872 is a reply to message #672868] Tue, 30 October 2018 04:25
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
This does look as though you have filled your undo tablespace, which is not unlikely given the amount of data you are deleting. So make it bigger! There is no point in having two undo tablespaces, so drop the one you aren't using (it is just a waste of space) and add another couple of files to the one you are using.
As for the performance, better check the execution plan: it may be doing something silly like using an index to select the rows.
Re: Optimal Chunk size for update of 50M record table [message #672873 is a reply to message #672819] Tue, 30 October 2018 04:30
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
BlackSwan wrote on Sun, 28 October 2018 16:46
Andrey_R wrote on Wed, 03 October 2018 05:13
Bill B wrote on Wed, 03 October 2018 14:59
What types of errors are you getting? Is it the dreaded ORA-01555? If you are only updating 40 million rows, your UNDO table space is severely undersized. How large is it? A little more information would help.
I may have in other cases 500M records.
So maybe in this or that case it is undersized; in other cases it can be large but still not big enough, and doing the action all at once may become critical for the system.

This is why I am asking how to adjust the chunk size by calculation, and not by experience and some skilled guessing - which I am sure you are well capable of, but as I explained, the question is different.

Sometimes the error is ORA-01555, sometimes it is not raised, but the users complain that they have trouble working as they did before,
and when I check - UNDO space does indeed fill up
I have NEVER seen an UPDATE statement reporting ORA-01555.
It has always been a long-running SELECT that terminates with ORA-01555 when some process does DML against the same table as the SELECT and the DML is then COMMITTED.
There's no reason why an update couldn't throw an ORA-01555 - it does need to read the data to update it.
Re: Optimal Chunk size for update of 50M record table [message #672891 is a reply to message #672873] Wed, 31 October 2018 07:32
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Tue, 30 October 2018 11:25
This does look as though you have filled your undo tablespace, which is not unlikely given the amount of data you are deleting. So make it bigger! There is no point in having two undo tablespaces, so drop the one you aren't using (it is just a waste of space) and add another couple of files to the one you are using.
As for the performance, better check the execution plan: it may be doing something silly like using an index to select the rows.
Ok I have done the following:

-- Added 2 autoextensible datafiles with maxsize unlimited to each of the tablespaces ( one for each instance. Storage is enough )

SQL>
SQL>
 select instance_name from v$instance;
SQL>
INSTANCE_NAME
--------------------------------
mysid1

SQL>
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs1 SID='mysid1';SQL>

System altered.



and


SQL> SQL> select instance_name from v$instance;

INSTANCE_NAME
--------------------------------
mysid2

SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2 SID='mysid2';

System altered.

SQL>





-- Made sure each of the 2 instances has its own dedicated undo tablespace, so that in any case it will have more space than the table + its indexes




Now my UNDO tablespaces look like this:




SQL>
select tablespace_name,count(*) files, sum(maxbytes / 1024 / 1024 / 1024) gb
  from dba_data_files
 where tablespace_name in
       (select tablespace_name from dba_tablespaces where contents = 'UNDO')
 group by tablespace_name;SQL>   2    3    4    5

TABLESPACE_NAME      FILES         GB
--------------- ---------- ----------
UNDOTBS1                 4 127.999939
UNDOTBS2                 4 127.999939

SQL>





I've checked the session that I'm about to run the command from:



SQL> select userenv('sid'), userenv('instance') from dual;

USERENV('SID') USERENV('INSTANCE')
-------------- -------------------
           120                   2






Now I am running my delete, making sure it is doing a FTS by hinting it to use parallelism:







SQL>
SQL>


 DELETE /*+  FULL(MYTAB) PARALLEL(MYTAB, 3) */  from MYTAB
 where COL1 in ( 'A'
                        ,'B'
                        ,'C')
   and COL2 < sysdate - 365;
   SQL> SQL> SQL>   2    3    4    5

   
   






and then made sure from another session on that instance that it is doing a FTS:




SQL> SQL> set lines 900 pages 20000
SQL> col sql_id for a15
SQL> col sql_text for a90
SQL> col last_load_time for a20
SQL> col c1 for a50
SQL> define num=11
SQL> --
SQL> select
  2   t.used_urec,
  3         v.sql_id,
  4         sql_text,
  5         executions,
  6         v.ELAPSED_TIME / 1000000 seconds,
  7         last_load_time,
  8         child_number,'select * from table (dbms_xplan.display_cursor ('''||v.sql_id||''','||child_number||'));' as c1
  9             from gv$sql v, gv$session s, gv$transaction t
 10     where
v.inst_id (+)= s.inst_id
 11   12    and v.sql_id (+)= s.sql_id
 13    and s.sid=120
 14    and s.inst_id = 2
 15    and t.INST_ID(+) = s.INST_ID
 16    and t.SES_ADDR(+) = s.SADDR /*the transaction id*/
 17   order by      executions desc;

 USED_UREC SQL_ID          SQL_TEXT                                                                                   EXECUTIONS    SECONDS LAST_LOAD_TIME       CHILD_NUMBER C1
---------- --------------- ------------------------------------------------------------------------------------------ ---------- ---------- -------------------- ------------ --------------------------------------------------
    183564 aqnjuf34qqkzb    DELETE /*+ FULL(MYTAB) PARALLEL(MYTAB, 3) */  from MYTAB  where COL1          0  91.012054 2018-10-31/20:25:24             0 select * from table (dbms_xplan.display_cursor ('a
                            in ('A'                         ,'B'                         ,'C')    and                                                         qnjuf34qqkzb',0));
                            COL2 < sysdate - 365


SQL>




So I checked the plan:


SQL> select * from table (dbms_xplan.display_cursor ('aqnjuf34qqkzb',0));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  aqnjuf34qqkzb, child number 0
-------------------------------------
 DELETE /*+ FULL(MYTAB) PARALLEL(MYTAB, 3) */  from
MYTAB  where COL1 in ('A'
,'B'                         ,'C')    and COL2 <
sysdate - 365

Plan hash value: 2383207100

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |            |       |       |   247K(100)|          |        |      |            |
|   1 |  DELETE               | MYTAB |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR      |            |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)| :TQ10000   |    34M|  5824M|   247K  (1)| 00:49:26 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR |            |    34M|  5824M|   247K  (1)| 00:49:26 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| MYTAB |    34M|  5824M|   247K  (1)| 00:49:26 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter((INTERNAL_FUNCTION("COL1") AND "COL2"<SYSDATE@!-365))

Note
-----
   - dynamic sampling used for this statement (level=6)


30 rows selected.







Will monitor it as it progresses and update... thx

[Updated on: Wed, 31 October 2018 12:18] by Moderator

Re: Optimal Chunk size for update of 50M record table [message #672892 is a reply to message #672891] Wed, 31 October 2018 08:50
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
When I check undo utilization I see that UNDOTBS2 has ACTIVE+EXPIRED storage that sums up to approx. 64GB,
while I already made sure I have 128GB... how can this be ? Am I reading this wrong somehow ?

SQL>
col owner for a20
SQL> SQL> col tablespace_name for a15
SQL> define num=10
SQL> col status for a15
SQL> select u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status, round(sum(u.bytes)/1024/1024,2) "MB"
  2    from dba_undo_extents u, dba_data_files f
  3    where u.file_id = f.file_id
  4    group by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status
  5    order by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status;

OWNER                TABLESPACE_NAME COMMIT_JTIME COMMIT_WTIME                             STATUS                  MB
-------------------- --------------- ------------ ---------------------------------------- --------------- ----------
SYS                  UNDOTBS1                                                              EXPIRED               6.06
SYS                  UNDOTBS1                                                              UNEXPIRED            12.44
SYS                  UNDOTBS2                                                              ACTIVE             4446.69
SYS                  UNDOTBS2                                                              EXPIRED           60487.69
SYS                  UNDOTBS2                                                              UNEXPIRED            541.5



Also, is it correct to assume by this query & output that 8% of the deletion has been completed?

SQL> set lines 900 pages 20000
SQL> col message for a80
SQL> col pct_completed for a20
SQL> select time_remaining,to_char(round((sofar / totalwork) * 100,12))||'%' pct_completed,
  2         round((sysdate - g.start_time) * 1440) as minutes_since_start,
  3         round((sysdate - last_update_time) * 1440) minutes_since_last_update,
  4         message
  5    from gv$session_longops g
  6    where sofar!=totalwork
  7    AND totalwork != 0;

TIME_REMAINING PCT_COMPLETED        MINUTES_SINCE_START MINUTES_SINCE_LAST_UPDATE MESSAGE
-------------- -------------------- ------------------- ------------------------- --------------------------------------------------------------------------------
         52876 8.541228569838%                       82                         0 Rowid Range Scan:  MYUSER.MYTAB: 19943 out of 233491 Blocks done
         52366 8.617250011849%                       82                         0 Rowid Range Scan:  MYUSER.MYTAB: 19999 out of 232081 Blocks done
         52205 8.64314029645%                        82                         0 Rowid Range Scan:  MYUSER.MYTAB: 20059 out of 232080 Blocks done

SQL>

What other checks can I do that will give me good indications of how much left to go , and how to monitor the process?

TIA

[Updated on: Wed, 31 October 2018 08:56]

Re: Optimal Chunk size for update of 50M record table [message #672893 is a reply to message #672892] Wed, 31 October 2018 08:56
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>What other checks can I do that will give me good indications of how much left to go , and how to monitor the process?
query V$SESSION_LONGOPS

BTW - UNDO for a DELETE is the largest when compared to INSERT & UPDATE, since UNDO must hold the complete rows to be re-INSERTed in order to undo the DELETE.
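
You can see this for yourself - check the session statistics below before and after a test DELETE versus an equivalent UPDATE (on a test table, rolled back afterwards):

SELECT n.name, s.value
  FROM v$mystat s, v$statname n
 WHERE n.statistic# = s.statistic#
   AND n.name IN ('undo change vector size', 'redo size');
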
Re: Optimal Chunk size for update of 50M record table [message #672894 is a reply to message #672893] Wed, 31 October 2018 08:57
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Wed, 31 October 2018 15:56
>What other checks can I do that will give me good indications of how much left to go , and how to monitor the process?
query V$SESSION_LONGOPS

BTW - UNDO for a DELETE is the largest when compared to INSERT & UPDATE, since UNDO must hold the complete rows to be re-INSERTed in order to undo the DELETE.

But I did - I showed it in the message you responded to right above your last one...
Re: Optimal Chunk size for update of 50M record table [message #672900 is a reply to message #672894] Wed, 31 October 2018 13:31
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
It's been 6 hours and my undo tablespace utilization is showing as 18GB:

SQL> select u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status, round(sum(u.bytes)/1024/1024,2) "MB"
  2    from dba_undo_extents u, dba_data_files f
  3    where u.file_id = f.file_id
  4    group by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status
  5    order by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status;


OWNER                TABLESPACE_NAME COMMIT_JTIME COMMIT_WTIME                             STATUS                  MB
-------------------- --------------- ------------ ---------------------------------------- --------------- ----------
SYS                  UNDOTBS1                                                              EXPIRED               4.69
SYS                  UNDOTBS1                                                              UNEXPIRED            16.81
SYS                  UNDOTBS2                                                              ACTIVE            17950.69
SYS                  UNDOTBS2                                                              EXPIRED           47208.38
SYS                  UNDOTBS2                                                              UNEXPIRED           572.81

SQL> SQL>


And when I check V$session_longops I get a different action documented, therefore another ETA..




SQL> SQL> set lines 900 pages 20000
SQL> col message for a80
SQL> col pct_completed for a20
SQL> select time_remaining,to_char(round((sofar / totalwork) * 100,12))||'%' pct_completed,
  2         round((sysdate - g.start_time) * 1440) as minutes_since_start,
  3         round((sysdate - last_update_time) * 1440) minutes_since_last_update,
  4         message
  5    from gv$session_longops g
  6    where sofar!=totalwork
  7    AND totalwork != 0;

TIME_REMAINING PCT_COMPLETED        MINUTES_SINCE_START MINUTES_SINCE_LAST_UPDATE MESSAGE
-------------- -------------------- ------------------- ------------------------- --------------------------------------------------------------------------------
         82536 5.770074080025%                       84                         0 Rowid Range Scan:  MYUSER.MYTAB: 17237 out of 298731 Blocks done
         73674 7.11329195637%                        94                         0 Rowid Range Scan:  MYUSER.MYTAB: 16584 out of 233141 Blocks done
         72710 6.373965445231%                       83                         0 Rowid Range Scan:  MYUSER.MYTAB: 14956 out of 234642 Blocks done




So I'm a bit lost ...
With regard to performance & measuring progress, if I understood correctly, I was told to simply do the delete, to be sure it does a FTS,
and to have sufficient UNDO space for it.

I also got an indication to look in GV$SESSION_LONGOPS.

I've done all of these but still cannot get a clear view of where my statement is in its progress, how long is left until it finishes, and whether it is executing efficiently...

What am I missing here ?

TIA
Re: Optimal Chunk size for update of 50M record table [message #672901 is a reply to message #672891] Wed, 31 October 2018 13:37
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Your delete is not running in parallel. I think you have probably forgotten to

ALTER SESSION ENABLE PARALLEL DML;
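
The full sequence would be something along these lines (keeping the DOP of 3 from your hint; be aware that parallel DML takes an exclusive lock on the table while it runs, which may conflict with your requirement not to block other users):

ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ PARALLEL(t 3) */ FROM mytab t
 WHERE t.col1 IN ('A','B','C')
   AND t.col2 < SYSDATE - 365;

-- before committing, verify the DML itself was parallelised:
-- look at the "DML Parallelized" row in the output
SELECT statistic, last_query FROM v$pq_sesstat;
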
Re: Optimal Chunk size for update of 50M record table [message #672904 is a reply to message #672901] Wed, 31 October 2018 14:45
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Wed, 31 October 2018 20:37
Your delete is not running in parallel. I think you have probably forgotten to

ALTER SESSION ENABLE PARALLEL DML;

It very much looks like it is, though.

It does have 3 processes doing "Rowid Range Scan"; I specified a parallelism level of 3, and when I tried it without the parallel hint it showed only one process in gv$session_longops..
I have tried it a few times before and saw that with the hint it acts differently than without it (as I can see from gv$session_longops + the query plan)... are you sure the hint is not enough for running in parallel ?
Re: Optimal Chunk size for update of 50M record table [message #672906 is a reply to message #672904] Wed, 31 October 2018 15:46
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Quote:
are you sure the hint is not enough for running in parallel ?
Yes, I am. And if you read the docs you will be sure too. Look at the plan: the parallel to serial conversion happens before the delete.
Re: Optimal Chunk size for update of 50M record table [message #672928 is a reply to message #672906] Thu, 01 November 2018 04:31
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Wed, 31 October 2018 22:46
Quote:
are you sure the hint is not enough for running in parallel ?
Yes, I am. And if you read the docs you will be sure too. Look at the plan: the parallel to serial conversion happens before the delete.
Thanks for that. I got confused by the indication that the rows to delete are being searched for in parallel, which made me think the delete itself is happening the same way.

Nevertheless, it has been running for 21 hours already, and my undo utilization looks the same as it did after 6 hours... approx 56GB:

select u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status, round(sum(u.bytes)/1024/1024,2) "MB"
SQL>   2    from dba_undo_extents u, dba_data_files f
  3    where u.file_id = f.file_id
  4    group by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status
  5    order by u.owner, f.tablespace_name, u.commit_jtime, u.commit_wtime, u.status;

OWNER                TABLESPACE_NAME COMMIT_JTIME COMMIT_WTIME                             STATUS                  MB
-------------------- --------------- ------------ ---------------------------------------- --------------- ----------
SYS                  UNDOTBS1                                                              EXPIRED                  3
SYS                  UNDOTBS1                                                              UNEXPIRED            27.56
SYS                  UNDOTBS2                                                              ACTIVE            56410.69
SYS                  UNDOTBS2                                                              EXPIRED            2272.56
SYS                  UNDOTBS2                                                              UNEXPIRED           225.75

SQL>


When I check gv$session_longops I see it is still working:


SQL> SQL> set lines 900 pages 20000
SQL> col message for a80
SQL> col pct_completed for a20
SQL> select sid,time_remaining,to_char(round((sofar / totalwork) * 100,12))||'%' pct_completed,
  2         round((sysdate - g.start_time) * 1440) as minutes_since_start,
  3         round((sysdate - last_update_time) * 1440) minutes_since_last_update,
  4         message
  5    from gv$session_longops g
  6    where sofar!=totalwork
  7    AND totalwork != 0;

       SID TIME_REMAINING PCT_COMPLETED        MINUTES_SINCE_START MINUTES_SINCE_LAST_UPDATE MESSAGE
---------- -------------- -------------------- ------------------- ------------------------- --------------------------------------------------------------------------------
       352          71675 9.10187376904%                       120                         0 Rowid Range Scan:  MYUSER.MYTAB: 20149 out of 221372 Blocks done
         6         100554 .986644979026%                        17                         0 Rowid Range Scan:  MYUSER.MYTAB: 2305 out of 233620 Blocks done
       471          40202 31.496556313138%                     308                         0 Rowid Range Scan:  MYUSER.MYTAB: 72895 out of 231438 Blocks done

SQL>


So I don't understand what is happening, I have no idea how to actually measure progress, and a delete over an 83M-row table (100+GB together with its indexes) doing a full scan with sufficient undo seems to be taking too long..

What else can be done to understand the weak link in the process ?


[Updated on: Thu, 01 November 2018 04:32]

Re: Optimal Chunk size for update of 50M record table [message #672930 is a reply to message #672928] Thu, 01 November 2018 04:53
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Perhaps the delete is hanging for some reason:

select event,seconds_in_wait from v$session where sid in (....);
Re: Optimal Chunk size for update of 50M record table [message #672936 is a reply to message #672930] Thu, 01 November 2018 06:54
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Thu, 01 November 2018 11:53
Perhaps the delete is hanging for some reason:

select event,seconds_in_wait from v$session where sid in (....);

SQL> set lines 900 pages 20000
col sql_id for a15
SQL> SQL> col sql_text for a55
SQL> col last_load_time for a20
SQL> col c1 for a35
SQL> define num=11
SQL> col event for a23
SQL> --
SQL> select
  2   event,seconds_in_wait,t.used_urec,
  3         v.sql_id,
  4         trim(replace (sql_text, ' ',' ')) sql_text,
  5         --executions,
  6         v.ELAPSED_TIME / 1000000 seconds,
  7         last_load_time,
  8         child_number,'select * from table (dbms_xplan.display_cursor ('''||v.sql_id||''','||child_number||'));' as c1
  from gv$sql v, gv$session s, gv$transaction t
  9   10   where
 11  v.inst_id (+)= s.inst_id
 12    and v.sql_id (+)= s.sql_id
 13    and s.sid=120
 14    and s.inst_id = 2
 15    and t.INST_ID(+) = s.INST_ID
 16    and t.SES_ADDR(+) = s.SADDR /*the transaction id*/
 17   order by      executions desc;

EVENT                   SECONDS_IN_WAIT  USED_UREC SQL_ID          SQL_TEXT                                                   SECONDS LAST_LOAD_TIME       CHILD_NUMBER C1
----------------------- --------------- ---------- --------------- ------------------------------------------------------- ---------- -------------------- ------------ -----------------------------------
db file sequential read               0  513614523 aqnjuf34qqkzb   DELETE /*+ FULL(MYTAB) PARALLEL(MYTAB, 3) */  84300.4686 2018-10-31/20:25:24             0 select * from table (dbms_xplan.dis
                                                                    from MYTAB  where COL1 in ('A'                                                     play_cursor ('aqnjuf34qqkzb',0));
                                                                                     ,'B'                         ,'C'
                                                                   )    and COL2 < sysdate - 365


Elapsed: 00:00:00.04
SQL>

This means it is working on the indexes, but I assume it is deleting from them, not using them to locate the rows to delete, right ?
Can I verify that somehow ?

used_urec is changing, so it looks like it is doing *something*;
however, it has now been 24 hours and this looks too slow..

Also, no one is working on this system...

Re: Optimal Chunk size for update of 50M record table [message #672937 is a reply to message #672936] Thu, 01 November 2018 07:05
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
I give you a simple query and you come back with something ludicrously complicated, and you haven't given it the SIDs you extracted from v$session_longops. Just keep it simple, you do have a habit of running complicated diagnostic queries which really don't help.

Reads of undo segments are always sequential reads through cache, which may be what you are seeing.
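
If you want to see where the session's time has actually gone, a simple cumulative view per wait event is enough:

SELECT event, total_waits, ROUND(time_waited_micro / 1e6) AS seconds_waited
  FROM v$session_event
 WHERE sid = 120                    -- the session running the DELETE
 ORDER BY time_waited_micro DESC;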

Re: Optimal Chunk size for update of 50M record table [message #672938 is a reply to message #672937] Thu, 01 November 2018 07:17
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
John Watson wrote on Thu, 01 November 2018 14:05
I give you a simple query and you come back with something ludicrously complicated, and you haven't given it the SIDs you extracted from v$session_longops. Just keep it simple, you do have a habit of running complicated diagnostic queries which really don't help.

Reads of undo segments are always sequential reads through cache, which may be what you are seeing.


SQL>
SQL> select event,seconds_in_wait from gv$session where (sid in (6,471) and inst_id=2);

EVENT                                              SECONDS_IN_WAIT
-------------------------------------------------- ---------------
PX Deq Credit: send blkd                                         0
PX Deq Credit: send blkd                                         0

Elapsed: 00:00:00.00
SQL>

But nothing is running.. what can be causing those blocks ?

I asked for a parallelism level of 3 because I pre-checked that I have 8 CPUs on the Linux machine (top command, then press 1 to show the CPUs...)

Re: Optimal Chunk size for update of 50M record table [message #672945 is a reply to message #672938] Thu, 01 November 2018 07:45
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>But nothing is running.. what can be causing those blocks ?
BLOCKS?
What Blocks?

Please post SQL & result that show what the DELETE is doing or waiting on now since nobody here can query your database.
Re: Optimal Chunk size for update of 50M record table [message #672948 is a reply to message #672945] Thu, 01 November 2018 07:54
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Thu, 01 November 2018 14:45
>But nothing is running.. what can be causing those blocks ?
BLOCKS?
What Blocks?


"blkd" implies the word "block" ( not like "building blocks" but more like "to block" , and also the context is retrieving column "SECONDS IN WAIT" so it sounds as if it fits the context. )

BlackSwan wrote on Thu, 01 November 2018 14:45
Please post SQL & result that show what the DELETE is doing or waiting on now since nobody here can query your database.
I have posted SQLs on data from GV$SESSION, GV$SESSION_LONGOPS, and as many other dynamic views as I can think of...
What SQLs do you suggest running to analyze what the delete is doing, or what do you mean by that ?
Not sure I understand... TIA
Re: Optimal Chunk size for update of 50M record table [message #672951 is a reply to message #672948] Thu, 01 November 2018 07:59
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Is DELETE session doing I/O now?

SELECT * FROM V$SESS_IO WHERE SID = <sessionID_of_DELETE>;
issue above repeatedly to see if any I/O count is increasing.

Is DELETE session WAITING?
What is result of SQL below?
SELECT Decode(request, 0, 'Holder: ', 
                       'waiter: ') 
       ||vl.sid sess, 
       status, 
       id1, 
       id2, 
       lmode, 
       request, 
       vl.TYPE 
FROM   v$lock vl, 
       v$session vs 
WHERE  ( id1, id2, vl.TYPE ) IN (SELECT id1, 
                                        id2, 
                                        TYPE 
                                 FROM   v$lock 
                                 WHERE  request > 0) 
       AND vl.sid = vs.sid 
ORDER  BY id1, 
          request; 
Re: Optimal Chunk size for update of 50M record table [message #672954 is a reply to message #672951] Thu, 01 November 2018 08:09
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Thu, 01 November 2018 14:59
Is DELETE session doing I/O now?

SELECT * FROM V$SESS_IO WHERE SID = <sessionID_of_DELETE>;
issue above repeatedly to see if any I/O count is increasing.

Is DELETE session WAITING?
What is result of SQL below?
SELECT Decode(request, 0, 'Holder: ', 
                       'waiter: ') 
       ||vl.sid sess, 
       status, 
       id1, 
       id2, 
       lmode, 
       request, 
       vl.TYPE 
FROM   v$lock vl, 
       v$session vs 
WHERE  ( id1, id2, vl.TYPE ) IN (SELECT id1, 
                                        id2, 
                                        TYPE 
                                 FROM   v$lock 
                                 WHERE  request > 0) 
       AND vl.sid = vs.sid 
ORDER  BY id1, 
          request; 

Physical reads and block changes are increasing (executed every 5s or so):


SQL> /

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES CONSISTENT_CHANGES OPTIMIZED_PHYSICAL_READS
---------- ---------- --------------- -------------- ------------- ------------------ ------------------------
       120 2113323725           41789       13573586    1100469215                  0                        0

Elapsed: 00:00:00.00
SQL> /

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES CONSISTENT_CHANGES OPTIMIZED_PHYSICAL_READS
---------- ---------- --------------- -------------- ------------- ------------------ ------------------------
       120 2113376577           41789       13573967    1100496584                  0                        0

Elapsed: 00:00:00.00
SQL> /

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES CONSISTENT_CHANGES OPTIMIZED_PHYSICAL_READS
---------- ---------- --------------- -------------- ------------- ------------------ ------------------------
       120 2113529415           41789       13574584    1100575798                  0                        0

Elapsed: 00:00:00.00
SQL> /

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES CONSISTENT_CHANGES OPTIMIZED_PHYSICAL_READS
---------- ---------- --------------- -------------- ------------- ------------------ ------------------------
       120 2113618547           41789       13574866    1100622011                  0                        0

Elapsed: 00:00:00.00
SQL>
SQL>
SQL>
SQL> /

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES CONSISTENT_CHANGES OPTIMIZED_PHYSICAL_READS
---------- ---------- --------------- -------------- ------------- ------------------ ------------------------
       120 2113805485           41789       13575584    1100718918                  0                        0

Elapsed: 00:00:00.00
SQL> /

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES CONSISTENT_CHANGES OPTIMIZED_PHYSICAL_READS
---------- ---------- --------------- -------------- ------------- ------------------ ------------------------
       120 2114350364           41801       13577245    1101001215                  0                        0


Don't see waits:

SQL> SQL>
SQL>

SQL> SQL>
SQL> SELECT Decode(request, 0, 'Holder: ',
                       'waiter: ')
  2    3         ||vl.sid sess,
  4         status,
  5         id1,
  6         id2,
  7         lmode,
  8         request,
  9         vl.TYPE
 10  FROM   v$lock vl,
 11         v$session vs
 12  WHERE  ( id1, id2, vl.TYPE ) IN (SELECT id1,
 13                                          id2,
 14                                          TYPE
 15                                   FROM   v$lock
 16                                   WHERE  request > 0)
 17         AND vl.sid = vs.sid
 18  ORDER  BY id1,
 19            request;

no rows selected

Elapsed: 00:00:01.20
SQL> /

no rows selected

Elapsed: 00:00:01.20
SQL> /

no rows selected

Elapsed: 00:00:01.23
SQL>