Feed aggregator

Using Oratcptest

Yann Neuhaus - Thu, 2018-08-09 10:32

Recently I did some tests with Oratcptest. This is a command-line tool which can be used to measure network bandwidth and network latency between a client and a server.
In this blog I am going to show how to use this tool in a Data Guard environment.
I am using a Data Guard environment with the following configuration.


Configuration - PROD

  Protection Mode: MaxAvailability
  Members:
  PROD_SITE1 - Primary database
    PROD_SITE2 - Physical standby database
    PROD_SITE3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

DGMGRL> show database 'PROD_SITE1' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE2' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE3' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL>

The installation of Oratcptest is very simple. We just have to download the oratcptest.jar file from the Oracle support site onto each server. Note that JRE 6 or higher is required. In my case JRE 8 is installed on all servers.

[oracle@primaserver oratcptest]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
[oracle@primaserver oratcptest]$

[oracle@standserver1 oratcptest]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver1 oratcptest]$

[oracle@standserver2 ~]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver2 ~]$

We can invoke the help command to see all the options available for oratcptest:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar -help

Now we can assess the network bandwidth for our Data Guard configuration. Note that I am using simple virtual machines, but the steps will be the same on real production servers.
We first have to determine the highest redo generation rate of the database. The following Oracle query can be used:

select thread#,
       sequence#,
       blocks*block_size/1024/1024 MB,
       (next_time-first_time)*86400 sec,
       blocks*block_size/1024/1024/((next_time-first_time)*86400) "MB/s"
from   v$archived_log
where  (next_time-first_time)*86400 > 0
and    first_time between to_date('2018/08/09 08:00:00','YYYY/MM/DD HH24:MI:SS')
                      and to_date('2018/08/09 11:00:00','YYYY/MM/DD HH24:MI:SS')
and    dest_id=2
order by first_time;

   THREAD#  SEQUENCE#         MB        SEC       MB/s
---------- ---------- ---------- ---------- ----------
         1        124 .003417969          9 .000379774
         1        125 .271972656        184 .001478112
         1        126 .001953125          5 .000390625
         1        127 11.3662109        915 .012422088
         1        128 10.8466797       6353 .001707332

We can see that the highest value is .012422088 MB/s. The goal is to check whether our network bandwidth can sustain this rate.
As we are using SYNC mode, the primary database will wait for a confirmation from the standby databases that they have written the change to disk before informing the application of the commit success.
For SYNC transport we then have to collect the average redo write size, which is calculated using the following formula:

Average=redo size / redo writes

These metrics can be obtained using an AWR report. In our case the value is

Average=15924844/4015=3967
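
If no AWR report is at hand, a rough instance-level estimate of the average redo write size can also be derived from v$sysstat (a minimal sketch; these are cumulative values since instance startup, so an AWR interval is more accurate):

select round(
         (select value from v$sysstat where name = 'redo size') /
         (select value from v$sysstat where name = 'redo writes')
       ) as avg_redo_write_bytes
from dual;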

Now we are going to simulate SYNC writes over the network using Oratcptest. Note that I need the location of my standby redo logs:

SQL> select member from v$logfile where type='STANDBY';

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/stredo01.log
/u01/app/oracle/oradata/PROD/stredo02.log
/u01/app/oracle/oradata/PROD/stredo03.log
/u01/app/oracle/oradata/PROD/stredo04.log

From the standby server I can run the following command:

[oracle@standserver1 oratcptest]$ java -jar oratcptest.jar -server -port=5555 -file=/u01/app/oracle/oradata/PROD/myoratcp.tmp
OraTcpTest server started.

From the primary server

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555  -write  -mode=sync -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = YES
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:26:16) The server is ready.
                    Throughput             Latency
(14:26:18)      1.124 Mbytes/s            3.375 ms   (disk-write 2.537 ms)
(14:26:20)      0.813 Mbytes/s            4.668 ms   (disk-write 3.775 ms)
(14:26:22)      1.094 Mbytes/s            3.467 ms   (disk-write 2.773 ms)
(14:26:24)      1.004 Mbytes/s            3.778 ms   (disk-write 2.991 ms)
(14:26:26)      0.560 Mbytes/s            6.779 ms   (disk-write 5.623 ms)
(14:26:26) Test finished.
               Socket send buffer = 11700 bytes
                  Avg. throughput = 0.920 Mbytes/s
                     Avg. latency = 4.126 ms (disk-write 3.280 ms)

[oracle@primaserver oratcptest]$

We can see that the average throughput is 0.920 MB/s, which is more than sufficient to handle our highest peak rate of .012422088 MB/s. We can also note the latency, which includes the time to send the message to the server host, the optional disk write at the server host, and the acknowledgment back to the client process.
If we are using ASYNC mode, the test looks like this:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555    -mode=async -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = ASYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:58:03) The server is ready.
                    Throughput
(14:58:05)     13.897 Mbytes/s
(14:58:09)      5.193 Mbytes/s
(14:58:11)     40.007 Mbytes/s
(14:58:13)     21.475 Mbytes/s
(14:58:13) Test finished.
               Socket send buffer = 210600 bytes
                  Avg. throughput = 16.901 Mbytes/s

[oracle@primaserver oratcptest]$

Conclusion
In this blog we have talked about Oratcptest, a simple tool which can help to assess our network bandwidth. Oratcptest can be useful before performing RMAN backups or file copies, or before setting up a Data Guard environment.
Reference : Oracle Doc ID 2064368.1

 

The article Using Oratcptest appeared first on Blog dbi services.

Partitioning -- 2 : Simple Range Partitioning -- by DATE

Hemant K Chitale - Thu, 2018-08-09 03:35
Range Partitioning allows you to separate a logical table into a number of distinct physical segments, each segment holding data that maps to a range of values.
(I encourage you to read the Introduction in the first post in this series)

The simplest and most common implementation is Range Partitioning by a DATE column.

SQL> l
1 create table sales_data
2 (sale_id number primary key,
3 sale_date date,
4 invoice_number varchar2(21),
5 customer_id number,
6 product_id number,
7 sale_value number)
8 partition by range (sale_date)
9 (partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY'))
10 tablespace TBS_YEAR_2015,
11 partition P_2016 values less than (to_date('01-JAN-2017','DD-MON-YYYY'))
12 tablespace TBS_YEAR_2016,
13 partition P_2017 values less than (to_date('01-JAN-2018','DD-MON-YYYY'))
14 tablespace TBS_YEAR_2017,
15 partition P_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY'))
16 tablespace TBS_YEAR_2018,
17 partition P_2019 values less than (to_date('01-JAN-2020','DD-MON-YYYY'))
18 tablespace TBS_YEAR_2019,
19 partition P_MAXVALUE values less than (MAXVALUE)
20 tablespace USERS
21* )
SQL> /

Table created.

SQL>


Here, I have created each Partition in a separate tablespace. Note that the Partition Key (SALE_DATE) does not have to be the same as the Primary Key (SALE_ID).

I have also created a MAXVALUE Partition. Some DBAs/Developers may mistakenly assume this to be a *default* partition. Range Partitioning, unlike List Partitioning, does not have the concept of a "default" partition. This is simply the Partition for incoming rows that have a Partition Key value higher than the last (highest) defined Partition Key Upper Bound (31-Dec-2019 23:59:59 in this case).
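
As a quick illustration (hypothetical sample rows, not part of the original post), rows are routed to a partition based on SALE_DATE, and any date on or beyond the highest defined upper bound lands in P_MAXVALUE:

SQL> insert into sales_data values (1, to_date('15-JUN-2016','DD-MON-YYYY'), 'INV-0001', 101, 501, 1200);
SQL> insert into sales_data values (2, to_date('03-MAR-2025','DD-MON-YYYY'), 'INV-0002', 102, 502, 800);

SQL> select count(*) from sales_data partition (P_2016);      -- returns 1
SQL> select count(*) from sales_data partition (P_MAXVALUE);  -- returns 1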

I can look up the data dictionary for these partitions in this manner :

SQL> select partition_name, tablespace_name   
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2015 TBS_YEAR_2015
P_2016 TBS_YEAR_2016
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_MAXVALUE USERS

6 rows selected.

SQL>


Partitions are ordered by Partition *Position*, not by Name.

How do I add a new partition for data for the year 2020 ?  By "splitting" the MAXVALUE partition.

SQL> alter table sales_data                 
2 split partition P_MAXVALUE
3 at (to_date('01-JAN-2021','DD-MON-YYYY'))
4 into
5 (partition P_2020 tablespace TBS_YEAR_2020, partition P_MAXVALUE)
6 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE
------------------------------ ---------------------------------------------
P_2015 TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD
P_2016 TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD
P_2017 TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD
P_2018 TO_DATE(' 2019-01-01 00:00:00', 'SYYYY-MM-DD
P_2019 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD
P_2020 TO_DATE(' 2021-01-01 00:00:00', 'SYYYY-MM-DD
P_MAXVALUE MAXVALUE

7 rows selected.

SQL>
SQL> l
1 select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4* order by partition_position
SQL> /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2015 TBS_YEAR_2015
P_2016 TBS_YEAR_2016
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

7 rows selected.

SQL>


Note that, irrespective of the date format I specify in the CREATE or SPLIT commands, Oracle presents the Upper Bound Date (HIGH_VALUE) in its own format, using a Gregorian Calendar.

How do I remove an older partition ?

SQL> alter table sales_data
2 drop partition P_2015
3 /

Table altered.

SQL>


A DROP command is very simple.

In my next post, I will add Indexes to this table.



Categories: DBA Blogs

Partitioning -- 1 : Introduction

Hemant K Chitale - Thu, 2018-08-09 02:54
I am beginning a new series of Blog Posts on Partitioning in Oracle.  I plan to cover 11g and 12c.   I might add posts on changes in 18c  (which is really 12.2.0.2 currently)

First, this is my presentation at AIOUG Sangam 11
and this is the corresponding article.

This series of posts will have new examples, from the simple to the complex, not present in the above presentation / article.
Categories: DBA Blogs

Facebook : My Recent Experience

Tim Hall - Thu, 2018-08-09 02:25

Here’s a little story of what has happened to me recently on Facebook.

First a little history lesson. For a long time I had an extremely small list of friends on Facebook. I would only accept friend requests from people I really knew, like IRL friends and a few work colleagues. That was it. No Oracle people were allowed… The wife has a rule that only people she would let stay in her house are friends on Facebook. Nobody is allowed in my house, so my definition had to be a little different than that.

Some time ago I changed my stance on Facebook friends and started to accept other people, mostly assigning them to the “Restricted” list, and so it went on for some time.

Recently I tweeted that I was getting a lot of friend requests and wondered what was going on. I figured I have a lot of readers, so it’s natural people would reach out, and I didn’t think too much about it. After a while I started to see some really odd things happening, so I did a little digging and found some rather “interesting” people in my friend list. I don’t really want to say more about it than that.

The long and short of it was I decided to remove several thousand friends and I’ve returned to something close to my original policy. I’m sorry if you are a decent person and feel offended that I have unfriended you, but if I don’t really know you, that’s the way it is.

By the way, Facebook used to let you mass delete friends, but that is no longer possible. What’s more, if you delete a lot of them at once they lock certain features of your account. I had to write to Facebook to explain what I was doing and why before they would let me unfriend people again. I know it’s an automatic check for suspicious behaviour, but it would be nice if they spent more effort checking what people are saying and doing on their platform…

Cheers

Tim…


MERGE JOIN CARTESIAN: a join method or a join type?

Yann Neuhaus - Wed, 2018-08-08 17:00

I’ll present about join methods at POUG and DOAG. I’ll show how the different join methods work in order to better understand them. The idea is to show Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian on the same query. I’ll run a simple join between DEPT and EMP with the USE_NL, USE_HASH, USE_MERGE and USE_MERGE_CARTESIAN hints. I’ll show the execution plan, with SQL Monitoring in text mode. And I’ll put some gdb breakpoints on the ‘qer’ (query execution rowsource) functions to run the plan operations step by step. Then I’ll do the same on a different query in order to show in detail the 12c adaptive plans.

But wait, I listed Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian… but is Merge Join Cartesian really a join method? I mean, my query is not a cartesian join. I have all the join predicates here. But for sure you can also do an inner join by starting with a cartesian join and then filtering on the join predicate. That is physically doing what the old join syntax of Oracle does logically: not putting any predicates in the from clause and adding the join predicates in the where clause to filter over it.
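
For illustration (a small sketch on the SCOTT schema, not taken from the original post), the old join syntax expresses the same inner join as a from-clause product filtered in the where clause:

select d.dname, e.ename
from   dept d, emp e
where  d.deptno = e.deptno;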

If I look at the 12.2 documentation, it is a Join Method:
CaptureJoinMethods122

For the definition, a Join Method is how the join will be executed. It is not a decision of the SQL developer because SQL is declarative: you declare the result you want, and the optimizer will decide how to do it. And this is why hints are in comments: they are not part of the declarative syntax. Forcing how to do it is not part of SQL.

Just after listing the join methods, the documentation lists the join types, which are part of the SQL because they declare the join result you expect. Inner join to get all matching rows. Semi join to stop at the first matching row. Anti join to get all rows which do not match. Outer join to get the rows which do not match in addition to those which match. The syntax is INNER JOIN, OUTER JOIN, EXISTS or IN, NOT EXISTS or NOT IN. Join type is not ‘how’ but ‘what’.
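
As a quick reminder of those syntaxes, here are simple sketches on the SCOTT schema (my own examples, not from the documentation):

-- inner join: all matching rows
select d.dname, e.ename from dept d join emp e on e.deptno = d.deptno;
-- semi join: one row per department as soon as a match is found
select d.dname from dept d where exists (select 1 from emp e where e.deptno = d.deptno);
-- anti join: departments without any employee
select d.dname from dept d where not exists (select 1 from emp e where e.deptno = d.deptno);
-- outer join: all departments, matched or not
select d.dname, e.ename from dept d left outer join emp e on e.deptno = d.deptno;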

Ok, so back to the join method. Let’s force it on my inner join between DEPT and EMP:

11g


SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
10 RESEARCH DALLAS 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 RESEARCH DALLAS 7839 KING PRESIDENT 17-NOV-81 5000
10 RESEARCH DALLAS 7934 MILLER CLERK 7782 23-JAN-82 1300
10 SALES CHICAGO 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 SALES CHICAGO 7839 KING PRESIDENT 17-NOV-81 5000
10 SALES CHICAGO 7934 MILLER CLERK 7782 23-JAN-82 1300
10 OPERATIONS BOSTON 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 OPERATIONS BOSTON 7839 KING PRESIDENT 17-NOV-81 5000
10 OPERATIONS BOSTON 7934 MILLER CLERK 7782 23-JAN-82 1300
 
12 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 2034389985
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 7 | | | |
| 1 | MERGE JOIN CARTESIAN| | 1 | 14 | 12 |00:00:00.01 | 7 | | | |
| 2 | TABLE ACCESS FULL | DEPT | 1 | 4 | 4 |00:00:00.01 | 4 | | | |
| 3 | BUFFER SORT | | 4 | 4 | 12 |00:00:00.01 | 3 | 2048 | 2048 | 2048 (0)|
|* 4 | TABLE ACCESS FULL | EMP | 1 | 4 | 3 |00:00:00.01 | 3 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - filter("DEPT"."DEPTNO"="EMP"."DEPTNO")
 

Ok, then I declared my result with an inner join query, and I forced the join method with a hint to show that it is possible. But look at the result. 12 rows? Only DEPTNO 10, when the SCOTT schema has employees in departments 10, 20 and 30? And only 3 employees here, repeated once for each of the 4 department names? That’s a wrong result.

NEVER FORCE A CARTESIAN JOIN WITH USE_MERGE_CARTESIAN!

That’s a very old bug: Bug 17064391 “Wrong result with USE_MERGE_CARTESIAN hint”, finally fixed in 12c (12.2, and backported in a 12.1 PSU).

Then how is it fixed?

18c

With the fix, the hint is just ignored and a SORT MERGE JOIN is used here:

SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
20 RESEARCH DALLAS 7566 JONES MANAGER 7839 02-APR-81 2975
20 RESEARCH DALLAS 7902 FORD ANALYST 7566 03-DEC-81 3000
20 RESEARCH DALLAS 7876 ADAMS CLERK 7788 23-MAY-87 1100
20 RESEARCH DALLAS 7369 SMITH CLERK 7902 17-DEC-80 800
20 RESEARCH DALLAS 7788 SCOTT ANALYST 7566 19-APR-87 3000
30 SALES CHICAGO 7521 WARD SALESMAN 7698 22-FEB-81 1250 500
30 SALES CHICAGO 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30 SALES CHICAGO 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300
30 SALES CHICAGO 7900 JAMES CLERK 7698 03-DEC-81 950
30 SALES CHICAGO 7698 BLAKE MANAGER 7839 01-MAY-81 2850
30 SALES CHICAGO 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
 
14 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 1407029907
 
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 12 | 12 | | | |
| 1 | MERGE JOIN | | 1 | 14 | 14 |00:00:00.01 | 12 | 12 | | | |
| 2 | SORT JOIN | | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 3 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | | | |
|* 4 | SORT JOIN | | 4 | 14 | 14 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | 6 | | | |
--------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
filter("DEPT"."DEPTNO"="EMP"."DEPTNO")

So here the result is good, thanks to the fix, and we clearly see how it is fixed: the USE_MERGE_CARTESIAN hint has been ignored.

And the funny thing is that when you look at the 18c documentation, the Merge Join Cartesian is not a join method anymore but a join type:
CaptureJoinMethods183

Exactly the same paragraph, but now in join types (the ‘what’) rather than in join methods (the ‘how’).

What or How?

Actually, in my opinion, it is both. When you explicitly want a cartesian join, that’s a join type, described by CROSS JOIN in the ANSI join syntax or by the lack of join predicates in the old syntax. This is ‘what’. But you may also encounter a MERGE JOIN CARTESIAN for a non-cartesian join just because the optimizer decides it is more efficient. When you have very few rows on both sides, it may be faster to start with a cartesian product of the small rowsources. This can be part of star transformation, where fact rows are joined back to the cartesian product of the filtered dimensions in order to project the dimension attributes. This is ‘how’ it will be executed. We also see it when the optimizer underestimates the cardinalities, and the cartesian join is then followed by a long nested loop.

When?

So, let’s look at the documentation “When the Optimizer Considers Cartesian Joins”:
CaptureJoinMethods183-when

  • No join condition exists: that’s when a cartesian product is what we want
  • The ORDERED hint specifies a table before its join table is specified: that’s when it is the only join method possible with the specified join order (see the sketch after this list)
  • A Cartesian join is an efficient method: in that case it is used as a method, even if it is now documented under join types.
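
For the second case, here is a small sketch on the SCOTT schema (my own example, not from the documentation): with the ORDERED hint, DEPT and SALGRADE are joined first although there is no join predicate between them, so the optimizer must start with a cartesian product before EMP can be joined:

select /*+ ordered */ d.dname, s.grade, e.ename
from   dept d, salgrade s, emp e
where  e.deptno = d.deptno
and    e.sal between s.losal and s.hisal;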

In conclusion, a cartesian join is a join type. It can also be used as a join method when the optimizer decides to. But you cannot decide it yourself by hinting since 12c, and trying to do so in previous versions is a very bad idea and can return wrong results.

So, for this one I’ll explicitly run a CROSS JOIN:
CaptureUSEMERGECARTESIAN
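
For reference, the query shown in the screenshot is essentially of this form (a sketch assuming the same SCOTT schema; the exact statement may differ slightly):

select * from DEPT cross join EMP;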

The query is on top. The SQL Monitor output is in the middle, showing that we are currently busy reading rows from EMP. The bottom shows the ‘qer’ functions backtrace: the fetch call is propagated from opifch2 for the SELECT STATEMENT, through the MERGE JOIN CARTESIAN (qerjo), the BUFFER SORT (qerso), to the TABLE ACCESS (qertb).

So basically, the goal of this full-demo presentation is to show how to read the execution plan by understanding how it is executed. This qertbFetch on the inner table EMP is executed only for the first row coming from the outer table DEPT. As the rows are returned to a buffer, the further iterations will fetch only from this buffer and will not go further than qersoFetchSimple. The qersoProcessULS (‘process underlying row source’ – see Frits Hoogland’s annotations) is run only once. This is the big difference with Nested Loops, where the inner loop on the underlying rowsource is run for each outer loop iteration: those two loops are nested – thus the name. But the function for the join part is the same for Nested Loops, Sort Merge Join and Merge Join Cartesian: qerjo. Only the underlying operations differentiate the join methods.

Last comment: we don’t see any function which really sorts the rows in this buffer (as we will see for the Sort Merge Join method) because there is no sorting, despite the name of the BUFFER SORT operation. More info on Jonathan Lewis’ blog.

 

The article MERGE JOIN CARTESIAN: a join method or a join type? appeared first on Blog dbi services.

Design question around automatic data change notifications

Tom Kyte - Wed, 2018-08-08 16:26
Hello Oracle experts. Thank you in advance for going through my question. In our organisation, we are planning to implement a solution to automatically push the data that is changed in our databases to consumers. They are not a high traffic system...
Categories: DBA Blogs

Four Options for Creating Mindbreeze Search Interfaces

A well-designed search interface is a critical component of an engaging search experience. Mindbreeze provides a nice combination of both pre-built search apps and tools for customization. This post explores the following approaches to building a Mindbreeze search interface:

  • The Mindbreeze Default Search Client
  • The Mindbreeze Search App Designer
  • Custom Mindbreeze Web Applications
  • The Mindbreeze REST API
Option 1: The Mindbreeze Default Search Client Flexibility: Low | Development Effort: None

Mindbreeze includes a built-in search client which offers a feature-rich, mobile-friendly search interface out of the box. Built-in controls exist to configure filter facets, define suggestion sources, and enable or disable export. Features are enabled and disabled via the Client Service configuration interface within the Mindbreeze Management Center. The metadata displayed within the default client is determined by the value of the “visible” property set in the Category Descriptor for the respective data sources. Some of the Mindbreeze features exposed through the default client are not available via a designer-built search app (discussed in Option 2). These include saved searches, result groupings (i.e. summarize-by), the sort-by picker, sources filters, and tabs. Organizations that wish to use these features without much effort would be wise to consider the Mindbreeze Default Search Client.

In order to integrate the built-in client with a website or application, users are typically redirected from the primary website to the Mindbreeze client when performing a search. The default client is served directly from the search appliance and the query term can be passed in the URL from the website’s search box to the Mindbreeze client. Alternatively, the built-in client can be embedded directly into a website using an iframe.

What is a Category Descriptor?

Mindbreeze uses an XML file called the Category Descriptor (categorydescriptor.xml) to control various aspects of both indexing and serving for each data source category (e.g. Web, SharePoint, Google Drive, etc.). Each category plugin includes a default Category Descriptor which can be extended or modified to meet your needs. Common modifications include adding localized display labels for metadata field names, boosting the overall impact of a metadata field on relevancy, and changing which fields are visible within the default search client.

Option 2: The Mindbreeze Search App Designer Flexibility: Moderate | Development Effort: None to Moderate

The Mindbreeze Search App Designer provides a drag-and-drop interface for creating modular, mobile-friendly, search applications. Some of the most popular modules include filters, maps, charts, and galleries. Many of these features are not enabled on the aforementioned default Client, so a search app is the easiest way to use them. This drag-and-drop configuration allows for layout adjustments, widget selection, and basic configurations without coding or technical knowledge. To further customize search apps, users can modify the mustache templates that control the rendering of each search widget within the search app. Common modifications include conditionally adjusting visible metadata, removing actions, or adding custom callouts or icons for certain result types. 

A key feature is the ability to export the code needed to embed a search app into a website or application from the Search Apps page in the Mindbreeze Management Center. That code can then be placed directly in a div or iframe on the target website eliminating the need to redirect users to the appliance. Custom CSS files may be used to style the results to match the rest of the website. Although you can add a search box directly to a search app, webpages usually have their own search box in the header. You can utilize query terms from an existing search box by passing them as a URL parameter where they will be picked up by the embedded search app.

Did you know? This website uses a search app for Mindbreeze-powered website search. For a deep-dive look at that integration, check out our blog post on How We Integrated this Website with Mindbreeze InSpire.

Option 3: Custom Mindbreeze Web Applications Flexibility: High | Development Effort: Low to Moderate

The default client mentioned in Option 1 can also be copied to create a new custom version of a Mindbreeze Web Application. The most common alteration is to add a reference to a custom CSS file which modifies the look and feel of the search results without changing the underlying data or DOM structure. This modification is easy and low risk. It is also very easy to isolate issues related to such a change, as you can always attempt to reproduce an issue using the default client without your custom CSS.

More substantial modifications to the application’s index.html or JavaScript files can also be made to significantly customize and alter the behavior of the search experience. Examples include adding custom business logic to manipulate search constraints or applying dynamic boosting to alter relevancy at search time. Other Mindbreeze UI elements can also be added to customized web apps using Mindbreeze HTML building blocks; this includes many of the elements exposed through the search app Designer such as graphs, maps, and timelines. While these types of alterations require deeper technical knowledge than simply adding custom CSS, they are still often less effort than building a custom UI from scratch (as described in Option 4). These changes may require refactoring to be compatible with future versions or integrate new features over time, so this should be considered when implementing your results page.

Option 4: The Mindbreeze REST API Flexibility: High | Development Effort: Moderate to High

For customers seeking a more customized integration, the Mindbreeze REST API allows search results to be returned as JSON, giving you full control over their presentation. Custom search pages also allow for dynamic alterations to the query, constraints, or other parameters based on custom business logic. Filters, spelling suggestions, preview URLs, and other Mindbreeze features are all available in the JSON response, but it is up to the front-end developers to determine which features to render on the page, how to arrange them, and what styling to use. This approach allows for the most control and tightest integration with the containing site, but it is also the most effort. That said, the fact that custom search pages generally require the greatest effort does not mean that selecting this option will always result in a lengthy deployment. In fact, one of our clients used the Mindbreeze API to power their custom search page and went from racking to go-live in 37 days.

Mindbreeze offers an excellent combination of built-in features with tools for extending capabilities when necessary. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us.

The post Four Options for Creating Mindbreeze Search Interfaces appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Backing up and restoring EDB containers in MiniShift/OpenShift

Yann Neuhaus - Wed, 2018-08-08 09:38

The last blogs in the series are already some days old: Setting up MiniShift, Deploying EDB containers in MiniShift/OpenShift, Customizing PostgreSQL parameters in EDB containers in MiniShift/OpenShift, Scaling the EDB containers in MiniShift/OpenShift, EDB Failover Manager in EDB containers in Minishift/OpenShift and EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers. What is missing is how you can backup and restore instances running in this container deployment and that is the topic of this post.

What you usually use to back up and restore EDB Postgres is BART, and the container world is no exception to that. Let’s see how that works.

My current deployment looks like this:
Selection_013

Two pgpool containers are serving three database containers which you can also check on the command line:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-b8lvj   1/1       Running   0          3m        172.17.0.9   localhost   masterdb
edb-as10-0-1-gj76h   1/1       Running   0          1m        172.17.0.5   localhost   standbydb
edb-as10-0-1-sb5lt   1/1       Running   0          2m        172.17.0.4   localhost   standbydb
edb-pgpool-1-qzk5v   1/1       Running   0          3m        172.17.0.7   localhost   queryrouter
edb-pgpool-1-rvtl6   1/1       Running   0          3m        172.17.0.6   localhost   queryrouter

What we want to do is to back up the database instances, or at least one of them. What you need to prepare before deploying the BART container is shared storage between the database containers and the BART container. This is especially important for the restore case, as the restore procedure needs to access the backup which is hosted in the BART container. Notice that this storage configuration has the “Read-Write-Many” attributes:
Selection_030

When I initially deployed the database containers I provided exactly this storage claim and volume as parameters, so I have it available in the database containers:
Selection_031

This means in any of the database containers I will be able to see the backup volume:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/
total 12
drwxrwx---  3 root    root 4096 Aug  6 11:49 .
drwxr-xr-x 86 root    root 4096 Aug  8 14:03 ..
drwxrwxr-x  4 edbuser root 4096 Aug  6 11:49 edb-bart-1-89k7s
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/
total 16
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 .
drwxrwx--- 3 root    root 4096 Aug  6 11:49 ..
drwxrwxr-x 2 edbuser root 4096 Aug  6 11:49 bart_log
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 pgbackup
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/
total 12
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 .
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 edb
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/edb/
total 16
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 .
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:52 1533556356576
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:49 archived_wals

The same storage configuration then needs to be provided to the BART deployment. Here is the yaml file for the BART deployment:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
              
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Once that is imported we can deploy the BART container:
Selection_032
Notice that this is actually the same storage configuration as was used to set up the database containers.
Selection_033

What I didn’t tell you is that you need to do another step before that. As the BART container is supposed to back up all the instances in a project, we need to pass the BART configuration file to the container via a configMap. In this setup I only have one instance, so the configMap looks like this:
Selection_034

Here you would add all the instances you need to back up per project. Once the BART container is ready:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-b8lvj   1/1       Running   0          17m
edb-as10-0-1-gj76h   1/1       Running   0          14m
edb-as10-0-1-sb5lt   1/1       Running   0          16m
edb-bart-1-7cgfv     1/1       Running   0          19s
edb-pgpool-1-qzk5v   1/1       Running   0          17m
edb-pgpool-1-rvtl6   1/1       Running   0          17m

… you can connect to it and perform a manual backup:

dwe@dwe:~$ oc rsh edb-bart-1-7cgfv
sh-4.2$ bart backup -s edb
INFO:  creating backup for server 'edb'
INFO:  backup identifier: '1533738106320'
65043/65043 kB (100%), 1/1 tablespace

INFO:  backup completed successfully
INFO:  backup checksum: 16fba63925ac3e77d474a36496c2a902 of base.tar
INFO:  
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1533738106320
BACKUP NAME: none
BACKUP PARENT: none
BACKUP LOCATION: /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320
BACKUP SIZE: 63.52 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: UTC
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
 ChkSum                             File      
 16fba63925ac3e77d474a36496c2a902   base.tar  

TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000008
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-08-08 14:21:46 UTC
STOP TIME: 2018-08-08 14:21:47 UTC
TOTAL DURATION: 1 sec(s)

This backup is now available on the BART container but in addition it is accessible in the database container:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/
total 65060
drwxr-xr-x 2 edbuser root     4096 Aug  8 14:21 .
drwxr-xr-x 4 edbuser root     4096 Aug  8 14:21 ..
-rwxr-xr-x 1 edbuser root      664 Aug  8 14:21 backupinfo
-rwxr-xr-x 1 edbuser root 66605568 Aug  8 14:21 base.tar

In case you’d need to restore that you would deploy a new database configuration specifying this backup as the “Restore file”:
Selection_035

One downside with the current versions of the containers: you cannot do point-in-time recovery. Only restores from full backups are supported for now. This will change in the next release, though.

Have fun with the containers …

 

The article Backing up and restoring EDB containers in MiniShift/OpenShift appeared first on Blog dbi services.

MobaXTerm 10.9

Tim Hall - Wed, 2018-08-08 07:06

Once again I’m late to the party. About a week ago MobaXTerm 10.9 was released.

The downloads and changelog are in the usual places.

This is a great tool!

Cheers

Tim…


MASH Continues European Expansion with Oracle Food and Beverage Cloud

Oracle Press Releases - Wed, 2018-08-08 07:00
Press Release
MASH Continues European Expansion with Oracle Food and Beverage Cloud Oracle Simphony Cloud Enables Danish Steak House Chain to Reduce IT Complexity, Streamline Operations and Drive Strategic Expansion at Scale

Redwood Shores, Calif.—Aug 8, 2018

Modern American Steak House (MASH), a high-end Danish restaurant chain, has chosen Oracle Simphony Cloud to power its steak house restaurants across Europe. With 13 restaurant locations across Denmark, Germany and the UK, and plans for additional international growth, MASH needed a technology platform that could support the strategic expansion of its business at scale. In addition, the restaurant needed a solution that would help reduce the complexity of its IT infrastructure and streamline both front and back-office operations across all of its restaurant locations.

“Our biggest challenge was scalability—since opening MASH in 2009, our expansion into other countries proved to be a very difficult and complex undertaking due to obstacles around IT infrastructure, regulations, taxes, language and currency, so finding a technology solution that would help us maneuver around these obstacles and enable us to grow at scale was a top priority,” said Mirek Nørkjær, Assistant COO, Copenhagen Concepts, the parent company of MASH restaurants. “Implementing Oracle Simphony Cloud has enabled us to minimize IT complexity through one centralized system, streamline front and back-office operations with real-time data and localization capabilities, and support our vision for expansion, all at scale.”

Oracle Simphony Cloud provides MASH with one centralized system to manage operations across all 13 of its restaurants, reducing the cost and complexity of IT, ensuring high performance scalability and allowing for more business efficiency and agility. In addition, Oracle Simphony Cloud enables MASH to maintain brand standards across its restaurants globally by enforcing brand, menu and employee management standards and allowing for localization. It also ensures menu and pricing consistency, within a country, a region, or a single location, and offers multilanguage and currency support.

“Both large restaurant chains and independent restaurants need to be able to remove obstacles in their IT infrastructure that prevent them from creating a positive foundation for growth,” said Chris Adams, vice president strategy and solutions management, Oracle Food and Beverage. “MASH demonstrates the value and potential of Oracle Simphony Cloud to transform IT processes, streamline operations and drive growth at scale.”

For more information learn how MASH delivers prime cuts and premium service.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL

Yann Neuhaus - Wed, 2018-08-08 06:52

We installed Fusion Middleware Reports and Forms 12.2.1.2.0 on a Linux server.
These are single-node Reports and Forms WebLogic Servers.
After disabling the Non-SSL Listen Port, we got the error below in the WebLogic Server log files while starting the Managed Servers with only the SSL ports enabled.

####<Aug 19, 2017 9:22:44 PM GMT+00:00> <Notice> <Security> <host01.example.com> <WLS_Reports> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320441764877> <BEA-090082> <Security initializing using security realm myrealm.>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Critical> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765180> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: Servers do not have a common channel to communicate over
java.lang.AssertionError: Servers do not have a common channel to communicate over
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.createConfiguration(ConfiguredServersMonitorImpl.java:124)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:55)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:28)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl$Factory.<clinit>(ConfiguredServersMonitorImpl.java:39)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.getInstance(ConfiguredServersMonitorImpl.java:44)
at weblogic.cluster.messaging.internal.server.UnicastFragmentSocket.<init>(UnicastFragmentSocket.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at weblogic.cluster.FragmentSocketWrapper.getUnicastFragmentSocket(FragmentSocketWrapper.java:76)
at weblogic.cluster.FragmentSocketWrapper.<init>(FragmentSocketWrapper.java:64)
at weblogic.cluster.FragmentSocketWrapper.getInstance(FragmentSocketWrapper.java:47)
at weblogic.cluster.MulticastManager.<init>(MulticastManager.java:158)
at weblogic.cluster.MulticastManager.initialize(MulticastManager.java:111)
at weblogic.cluster.ClusterService.startService(ClusterService.java:196)
at weblogic.server.ServiceActivator.start(ServiceActivator.java:96)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765413> <BEA-000365> <Server state changed to FAILED>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Error> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765414> <BEA-000383> <A critical service failed. The server will shut itself down>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765426> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Info> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765447> <BEA-000236> <Stopping execute threads.>

Even if no cluster is used, the default Reports and Forms domain configuration creates WebLogic Clusters.

To solve the “Servers do not have a common channel to communicate over” error, perform the following steps for each cluster as an administrator:

  1. Login into the Administration Console
  2. Expand the Environment tab and navigate to Clusters
  3. In production mode, you need to select “Lock and edit”
  4. Go to the General tab and select the ‘Replication’ subtab
  5. Under Replication, find Replication Channel and check the box beside “Secure Replication Enabled” (to set it to true)
  6. Activate the changes
  7. Restart the managed server(s).

Or in WLST:

Start wlst.sh(*)  and connect to the WebLogic Domain

connect('AdminUser','password','t3s://host01.example.com:7002')

startEdit()

cd('/Clusters/cluster_reports')
cmo.setClusterType('none')
cmo.setPersistSessionsOnShutdown(false)
cmo.setReplicationChannel('ReplicationChannel')
cmo.setSecureReplicationEnabled(true)

activate()

(*) wlst.sh is located in the Middleware software home $M_HOME/oracle_common/common/bin directory

 

The article New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL appeared first on Blog dbi services.

Yamaha's PSR-E463 - A truly awesome Keyboard to have

Senthil Rajendran - Wed, 2018-08-08 01:33
Though I am a beginner on this instrument, I would still recommend this keyboard to others.

Yamaha's PSR-E463 makes a perfect first keyboard for students, a solid educational tool for classrooms, and a fun, lightweight travel instrument and composition tool for professionals. Each of the E463's 61 full-sized, organ-style keys provides good resistance with a quick response that's suited for playing dynamic string pads to quick brass lines and synth leads. And speaking of instruments: the E463 comes loaded with 758 rich stereo voices, which means students and recording artists will never be short on inspiration. And for purists, a one-touch Live Grand Piano function bypasses all advanced features to provide a lush, distraction-free grand piano sound with 48-note polyphony. Its intelligent backing tracks, a USB port for thumb drives and MIDI connectivity, and the included 9-lesson Yamaha Education Suite round out the PSR-E463's value-packed features.

PLS-00172: string literal too long

Tom Kyte - Tue, 2018-08-07 22:06
Hi Tom, When I try to insert huge data into a CLOB column, I get "PLS-00172: string literal too long". I tried searching the web and MetaLink, but to no avail. Please let me know if a workaround is available for this insert. drop table t9; create tabl...
Categories: DBA Blogs
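
The excerpt is truncated, but PLS-00172 is raised because a PL/SQL string literal cannot exceed 32767 bytes. A minimal sketch of the usual workaround, assuming a table t9 with a CLOB column c (names taken from the truncated example, so they are assumptions), is to build the CLOB in chunks instead of one huge literal:

-- Hypothetical illustration: table and column names are assumptions.
create table t9 (c clob);

declare
  l_clob  clob;
  l_chunk varchar2(32767);
begin
  dbms_lob.createtemporary(l_clob, true);
  -- Append the text in pieces smaller than 32767 bytes each,
  -- instead of a single literal that raises PLS-00172.
  l_chunk := rpad('x', 32000, 'x');            -- first piece of the data
  dbms_lob.writeappend(l_clob, length(l_chunk), l_chunk);
  l_chunk := rpad('y', 32000, 'y');            -- next piece of the data
  dbms_lob.writeappend(l_clob, length(l_chunk), l_chunk);
  insert into t9 (c) values (l_clob);
  commit;
  dbms_lob.freetemporary(l_clob);
end;
/

Binding the value from the application, instead of embedding it as a literal in the PL/SQL source, avoids the limit in the same way.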

Time periods determined from input user dates and table defined periods

Tom Kyte - Tue, 2018-08-07 22:06
Hi, I have a procedure that takes 2 parameters (start and end date) and using some additional data from a "settings" table and some sales transactions, populates an intermediary table. The settings table was initially as 1 interval / 1 set of d...
Categories: DBA Blogs

Query a collection of JSON data

Tom Kyte - Tue, 2018-08-07 22:06
I need to query a collection of JSON data for specific data values it contains. In this example, I'm trying to count the number of JSON objects where the LNAME attribute = 'Mallard'. Based on the documentation at: https://docs.oracle.com/en/d...
Categories: DBA Blogs
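
Without the full question it is hard to know the exact table layout, but a common pattern (the names here are assumptions, not from the original) is to filter with the SQL/JSON functions and count the matches:

-- Hypothetical table json_docs with a column doc holding one JSON object per row.
select count(*)
from   json_docs
where  json_value(doc, '$.LNAME') = 'Mallard';

-- Equivalent using JSON_EXISTS with a path filter:
select count(*)
from   json_docs
where  json_exists(doc, '$?(@.LNAME == "Mallard")');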

Patches available for Internet Explorer 11 performance

Anthony Shorten - Tue, 2018-08-07 21:45

A number of Oracle Utilities Customer Care and Billing customers have reported performance issues with Internet Explorer 11 in particular situations. After analysis, it was ascertained that the issue lies within Internet Explorer itself. The article Known UI Performance Issues on Internet Explorer 11 (Doc ID 2430962.1) is available from My Oracle Support with an explanation of the issues and advice on the patches recommended to minimize the issue for affected versions.

It is highly recommended to read the article and install the patches to minimize any issues with Internet Explorer 11.

Larry Ellison Announces Availability of Oracle Autonomous Transaction Processing

Oracle Press Releases - Tue, 2018-08-07 16:50
Press Release
Larry Ellison Announces Availability of Oracle Autonomous Transaction Processing
New self-driving cloud database uses groundbreaking machine learning and automation to deliver unprecedented cost savings, security, availability, and productivity

Redwood Shores, Calif.—Aug 7, 2018

Oracle Executive Chairman and CTO Larry Ellison today marked a major milestone in the company’s autonomous strategy with the availability of the latest Oracle Autonomous Database Cloud Service, Oracle Autonomous Transaction Processing. Leveraging innovative machine learning and automation capabilities, Oracle Autonomous Transaction Processing delivers unprecedented cost savings, security, availability, and productivity. Oracle’s new self-driving database cloud service is built to run the world’s most demanding finance, retail, manufacturing, and government applications, supporting a complex mix of high-performance transaction processing, reporting, batch, and analytic workloads. Oracle’s Autonomous Database portfolio provides organizations with the most complete and advanced set of database capabilities on the market today.

“Oracle is by far the best database in the world and it just got a lot better because now it’s autonomous,” said Ellison. “This delivers a much more reliable, much more secure system—a system that protects against data theft, a system that is up 99.995 percent of the time, and a system that makes organizations and their developers dramatically more productive.”

Traditionally, creating a database management system required experts to custom build and manually maintain a complex hardware and software stack. Oracle Autonomous Database revolutionizes data management by using machine learning to provide a self-driving, self-securing, and self-repairing database service with cloud economies of scale and elasticity. The service enables users to instantly create new autonomous databases and easily convert existing databases, dramatically reducing costs and time to market.

Complementing the existing Oracle Autonomous Data Warehouse service, Oracle Autonomous Transaction Processing can support a complex mix of high-performance transactions, reporting, batch, IoT, and machine learning in a single database, allowing for simpler application development and deployment, and enabling real-time analytics, personalization, and fraud detection on live transactional data. Users can:

  • Cut costs: Complete automation of database and infrastructure operations cuts administrative costs up to 80 percent. The efficiency of a self-optimizing database together with elastic pay-per-use cuts runtime costs up to 90 percent. 

  • Reduce risk: Automatic application of the latest security updates with no downtime eliminates cyber-attack vulnerabilities. Protection from all types of failures, including system failures, regional outages, and user errors delivers 99.995 percent availability, or less than 2.5 minutes of downtime a month, including maintenance. Database Vault prevents administrators from snooping on user data.

  • Accelerate innovation: Eliminating database maintenance allows database administrators to focus on getting more value from data. Developers become more agile by instantly creating and effortlessly using databases that require no manual tuning. Integrated machine learning algorithms enable the development of applications that perform real-time predictions, such as personalized shopping and fraud detection. The simplicity of upgrading existing databases to the autonomous cloud enables IT to transform to a modern, agile cloud model quickly and easily.

Oracle Autonomous Database builds on 40 years of experience supporting the majority of the world’s most demanding applications. It is easy for customers to adopt because it uses the same proven Oracle Database as on-premises, with the same functionality and interfaces. Oracle Autonomous Database supports the most mission-critical workloads because it is based on highly-sophisticated Oracle Database technologies that are unrivaled by any other product, including high-performance Exadata infrastructure, Real Application Clusters for transparent scale-out and fault tolerance, Active Data Guard disaster recovery, and Online Data Evolution.

“The toughest job a DBA has is that of tuning and maintaining a mission-critical transactional database,” said Carl Olofson, research vice president for IDC’s Data Management Software research. “In addition to examining statistics and applying tuning adjustments, the DBA must also apply patches, including security patches, on a very frequent basis, which is both an error-prone and operationally-disruptive activity. With Oracle Autonomous Transaction Processing, Oracle has eliminated these problematic tasks, allowing the DBA to concentrate on the higher value activity of enabling more business-responsive applications, and helping the enterprise to ensure that data will not be compromised due to known vulnerabilities.”

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data. The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

Contact Info
Dan Muñoz
Oracle
+1.650.506.2904
dan.munoz@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Dan Muñoz

  • +1.650.506.2904

Nicole Maloney

  • +1.650.506.0806

Drop Tank Fuels Customer Loyalty Programs with Oracle Autonomous Database

Oracle Press Releases - Tue, 2018-08-07 16:45
Press Release
Drop Tank Fuels Customer Loyalty Programs with Oracle Autonomous Database
Loyalty technology provider puts the brakes on mundane database maintenance while achieving better performance and scalability for its customer loyalty offerings

Redwood Shores, Calif.—Aug 7, 2018

Drop Tank, a leading loyalty technology and rewards company, is using Oracle Autonomous Data Warehouse to fuel its loyalty solutions for gas station operators. By removing the time-consuming process of provisioning, securing, monitoring and tuning its databases, Drop Tank is able to automate data management with high-availability, performance and security features, enabling IT staff to innovate in more business-critical areas that drive growth and create powerful partnerships with gas station operators.

Partnering with major fuel providers such as Marathon Petroleum, as well as gas station and convenience store brands, Drop Tank deploys loyalty systems that not only drop the price of fuel for members, but also help capture and unlock customer insights through a wide range of data services. As the demand to expand its business and add more sites to its loyalty network grew, Drop Tank needed a more efficient, secure way to capture, analyze, and manage data collection for more than 3,500 retail locations, all with different systems running their operations. With Oracle Cloud Infrastructure as its underlying platform and Oracle Autonomous Data Warehouse to store data and provide analytics, Drop Tank gathers insights that help gas station operators and consumer packaged goods companies identify trends and make more intelligent decisions on future promotions and sales strategies. 

“As our loyalty programs began to evolve and reach more consumers, we needed a more efficient way to seamlessly manage and scale to enable more partners to leverage our network,” said Timothy Miller, vice president of technology, Drop Tank. “By choosing Oracle Autonomous Data Warehouse, we can automatically set load marks and scale, making it easy to support new campaigns without having to worry about time consuming exercises like indexing, patching, and tuning.”

Oracle Autonomous Data Warehouse enables Drop Tank to easily scale to address new business needs, such as the introduction of a new rewards campaign within just hours of notice. Drop Tank has been able to grow its business to support 30x more retail locations over the past four years and manage its business-critical workloads from any participating location. With Oracle Autonomous Data Warehouse, Drop Tank expects to handle a 50x increase in sales transactions through the end of next year.

“Today’s market dynamics require companies to constantly look for innovative new ways to meet customer demands,” said Andrew Mendelsohn, executive vice president, Oracle Database. “By bringing the capabilities of the Oracle Autonomous Database to companies of all sizes, the complexities of configuration, administration, and tuning are eliminated and the result is a solution that is quick to deploy, low cost, high performance, and securely-featured.”

Using Oracle Autonomous Data Warehouse’s self-driving, self-securing, and self-repairing capabilities, Drop Tank has greatly reduced the overall time needed from DBAs as compared to more cumbersome competitive database offerings. With the Oracle Cloud Platform fully integrated, Oracle SOA Cloud is used to rapidly provision and link directly to gas station host systems to capture fuel and consumer goods data, eliminating the need for developers to write additional software.

Oracle Autonomous Database provides the industry’s first self-driving, self-securing, and self-repairing database cloud service. It’s built on unique Oracle data management technologies, including Oracle Exadata Database Machine, Oracle Database 18c, Oracle Real Application Clusters, and Oracle Multitenant, plus algorithms using artificial intelligence to make it autonomous.

Contact Info
Dan Muñoz
Oracle
+1.650.506.2904
dan.munoz@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Drop Tank

Drop Tank specializes in gas station loyalty technology and loyalty solutions, with thousands of retail implementations across the United States. Drop Tank partners with fuel and c-store brands to deploy consistent loyalty functionality across stations which may have inconsistent retail technology. Drop Tank’s solutions unlock loyalty and data services for a wide range of participants in the convenience store industry. For more information, visit www.drop-tank.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Dan Muñoz

  • +1.650.506.2904

Nicole Maloney

  • +1.650.506.0806

How much free space can be reclaimed from a segment?

Yann Neuhaus - Tue, 2018-08-07 14:06

Do you have the feeling that your table takes more blocks than it should? Here are the queries I use to quickly check the free space. The idea is to call DBMS_SPACE.SPACE_USAGE and infer the minimum free space from the percentages. For example, a block in FS3 (defined as having 50 to 75% free space) is supposed to have at least 50% free space. Of course it can have more, but you don't know. As an illustration, if FS3 reports 1,000 blocks of 8KB, you can count at least half of those 8MB, so about 4MB, as free.

Here is some PL/SQL to do so:

set serveroutput on
declare
 unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number;
 fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
 for i in (
   select * from (
     select * from dba_segments where segment_subtype='ASSM' and segment_type in (
       'TABLE','TABLE PARTITION','TABLE SUBPARTITION','INDEX','INDEX PARTITION','INDEX SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
     ) order by bytes desc
   ) where 10>=rownum
 )
 loop
   begin
     dbms_space.space_usage(i.owner,i.segment_name,i.segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>i.partition_name);
     dbms_output.put_line(to_char((unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75)/1024/1024/1024,'999G999D999')||' GB free in '||i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'"');
   exception
     when others then dbms_output.put_line(i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'": '||sqlerrm);
   end;
 end loop;
end;
/

The output looks like:

.001 GB free in INDEX "DEMO"."ACCOUNT_PK" partition ""
.001 GB free in TABLE "APEX_040200"."WWV_FLOW_PAGE_PLUGS" partition ""
.009 GB free in TABLE "SCOTT"."DEMO" partition ""
.000 GB free in TABLE "APEX_040200"."WWV_FLOW_STEP_ITEMS" partition ""
.003 GB free in INDEX "SYS"."WRH$_SYSMETRIC_HISTORY_INDEX" partition ""
.000 GB free in TABLE "MDSYS"."SDO_CS_SRS" partition ""
.002 GB free in INDEX "SYS"."I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST" partition ""
.006 GB free in TABLE "SYS"."WRH$_SYSMETRIC_HISTORY" partition ""
.002 GB free in TABLE "SYS"."WRH$_SQL_PLAN" partition ""

If you are on 12c, an inline function in the query might come in handy:

with function freebytes(segment_owner varchar2, segment_name varchar2, segment_type varchar2,partition_name varchar2) return number as
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
dbms_space.space_usage(segment_owner,segment_name,segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>partition_name);
return unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75;
end;
select round(freebytes(owner,segment_name,segment_type,partition_name)/1024/1024/1024,3) free_GB,segment_type,owner,segment_name,partition_name
from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','INDEX','INDEX PARTITION','INDEX SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc fetch first 10 rows only
/

The result looks like:

FREE_GB SEGMENT_TYPE OWNER       SEGMENT_NAME                   PARTITION_NAME
------- ------------ ----------- ------------------------------ --------------
      0 TABLE        DEMO        ACCOUNTS
  0.001 INDEX        DEMO        ACCOUNT_PK
  0.001 TABLE        APEX_040200 WWV_FLOW_PAGE_PLUGS
  0.009 TABLE        SCOTT       DEMO
  0.003 INDEX        SYS         WRH$_SYSMETRIC_HISTORY_INDEX
      0 TABLE        APEX_040200 WWV_FLOW_STEP_ITEMS
  0.002 INDEX        SYS         I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST
      0 TABLE        MDSYS       SDO_CS_SRS
  0.006 TABLE        SYS         WRH$_SYSMETRIC_HISTORY
  0.002 TABLE        SYS         WRH$_SQL_PLAN

Future evolutions of these scripts will be published on GitHub:
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_plsql.sql
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_sql.sql

Note that having free space does not mean that you have to shrink or reorganize the segment. Try to understand what happened to your data before, and whether this space will be reused soon.
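
If, after that analysis, you do decide to reclaim the space from an ASSM table, a minimal sketch of the classic approach is an online shrink (the table name below is just an example, not from the post):

-- Hypothetical table name; row movement must be enabled before an online shrink.
alter table demo.accounts enable row movement;
alter table demo.accounts shrink space cascade;  -- cascade also shrinks dependent indexes
alter table demo.accounts disable row movement;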


This article How much free space can be reclaimed from a segment? appeared first on Blog dbi services.
