Feed aggregator

Smartphones, IoT and Connected Cars Fueling Rise in LTE Network Traffic

Oracle Press Releases - Wed, 2017-11-08 07:00
Press Release
Smartphones, IoT and Connected Cars Fueling Rise in LTE Network Traffic Annual Oracle Index Provides Communications Professionals a Road Map to Better Plan For and Manage Global Growth in LTE Diameter Signaling

AFRICACOM 2017, SOUTH AFRICA and REDWOOD SHORES, Calif.—Nov 8, 2017

Oracle today announced the “Oracle Communications LTE Diameter Signaling Index, Sixth Edition,” highlighting the continued explosive growth in LTE Diameter signaling traffic. The report demonstrates that Diameter signaling, fueled heavily by the proliferation of smartphones and the rise of Internet of Things (IoT) enabled devices, shows no sign of slowing and is expected to generate 595 million messages per second (MPS) by 2021. Other key developments impacting LTE network traffic include LTE broadcast, VoLTE, and signaling associated with the policy management required to support more sophisticated data plans and applications. Connected cars also continue to show strong network traffic momentum, with 9.4 MPS and a compound annual growth rate (CAGR) of 30 percent.

The report was designed as a tool for communications service providers’ (CSPs’) network engineers and executives to plan for expected increases in signaling capacity over the next five years. Download the Full Report and Infographic.

Growth in Diameter is anticipated to continue even as 5G implementations begin. While Diameter will not be the main signaling protocol of 5G, it will remain an important part of 5G networks.

For example, according to Statista.com, smartphones will represent 76 percent of wireless connections by 2021. As consumers maintain their appetite for “always on” connections to increasingly sophisticated and data-intensive applications such as gaming and video, LTE network traffic will continue to skyrocket. Likewise, devices that reach beyond the traditional mobile handset, such as IoT sensors used in everything from tracking available parking spots and moisture in crops to lost pets or workers on a job site, will have a significant impact on Diameter signaling growth.

“Diameter signaling traffic continues to grow significantly with little end in sight. While smartphones continue to be the traffic leader, applications such as connected cars and IoT promise a significant impact on network traffic in years to come,” said Greg Collins, Founder and Principal Analyst, Exact Ventures. “Diameter signaling controllers continue to be vital network elements, which help operators secure their network borders and efficiently and effectively route signaling traffic. As such, it is critical for CSPs to understand what’s driving traffic and where, enabling them to avoid signaling traffic issues that can cause network outages, which reduce customer satisfaction and increase customer churn.”

 “With consumer expectations at an all-time high, it’s more critical than ever that CSPs innovate and plan for continued Diameter signaling growth in order to stay relevant,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “The cloud continues to offer one of the clearest avenues for CSPs to accelerate and achieve these goals.”

Oracle helps CSPs create a more scalable and reliable Diameter signaling infrastructure with Oracle Communications Diameter Signaling Router and Oracle Communications Policy Management.

LTE Diameter Signaling Traffic by Region
  • Latin America and the Caribbean continues to show accelerated growth in Diameter networks. The region will generate 52 million MPS by 2021, a CAGR of 34 percent. Brazil is the largest contributor of Diameter signaling, followed by Mexico. 

  • The Middle East will reach 27.9 million MPS of Diameter signaling by 2021, a CAGR of 23 percent. Turkey and Iran are the largest generators in the region.

  • Africa continues to show strong growth in Diameter signaling. The region will generate 20 million MPS by 2021, a CAGR of 63 percent. Egypt is the top generator, followed by Nigeria and South Africa.

  • Central and Southern Asia will account for 10 percent of the world’s Diameter signaling, reaching 62.4 million MPS, a CAGR of 38 percent, by 2021. Policy management is the largest generator of Diameter signaling in the region, generating 10.4 percent of the world’s policy-related Diameter signaling. Pakistan is the largest generator, followed by India.

  • Oceania, Eastern and South-Eastern Asia generates nearly half of the world’s Diameter signaling (45 percent). By 2021, the region will generate 265 million MPS of Diameter signaling, a CAGR of 18 percent. The region is also responsible for 44 percent of the world’s policy Diameter signaling, generating 131 million MPS by 2021. China alone will generate 83.5 million MPS of policy-generated Diameter signaling by 2021. Indonesia is also showing strong growth.

  • North America leads the world in LTE penetration as service providers move aggressively to sunset 2G and 3G services in favor of 4G/5G. The region will show moderate growth in the coming year, generating 59.3 million MPS by 2021, a CAGR of 12 percent.

  • Eastern and Western Europe. Eastern Europe will generate 48 million MPS of Diameter signaling by 2021, a CAGR of 46 percent. Russia is the strongest generator in the region, followed by Poland. In comparison, Western Europe will generate 61 million MPS, a CAGR of 23 percent. The United Kingdom generates the most Diameter signaling in the region today, but Germany will surpass the UK, generating 6.4 million MPS of Diameter signaling by 2021.


Usage and Citation

Oracle permits the media, financial and industry analysts, service providers, regulators, and other third parties to cite this research with the following attribution:
Source: “Oracle Communications LTE Diameter Signaling Index, Sixth Edition.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.415.856.5145

Displaying the contents of a PostgreSQL data file with pg_filedump

Yann Neuhaus - Wed, 2017-11-08 04:34

Did you ever wonder what exactly is in a PostgreSQL data file? Usually you don’t care, I agree. But there might be situations where knowing how to look at one can be a great help. Maybe your file is corrupted and you want to recover as much data as possible? Maybe you just want to do some research. There is a utility called pg_filedump which makes this pretty easy. Let’s go …

Before you try to install pg_filedump you’ll need to make sure that all the header files are there in your PostgreSQL installation. Once you have that, the installation is as simple as:

postgres@pgbox:/home/postgres/ [PG10] tar -axf pg_filedump-REL_10_0-c0e4028.tar.gz 
postgres@pgbox:/home/postgres/ [PG10] cd pg_filedump-REL_10_0-c0e4028
postgres@pgbox:/home/postgres/pg_filedump-REL_10_0-c0e4028/ [PG10] make
postgres@pgbox:/home/postgres/pg_filedump-REL_10_0-c0e4028/ [PG10] make install
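
If make fails because the server header files are missing, you can check where they are expected (a quick sketch, assuming pg_config is in your PATH; the resulting path depends on your installation):

postgres@pgbox:/home/postgres/ [PG10] pg_config --includedir-server          # directory pg_filedump compiles against
postgres@pgbox:/home/postgres/ [PG10] ls $(pg_config --includedir-server)/postgres.h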

If everything went fine the utility should be there:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -h

Version 10.0 (for PostgreSQL 10.x)
Copyright (c) 2002-2010 Red Hat, Inc.
Copyright (c) 2011-2017, PostgreSQL Global Development Group

Usage: pg_filedump [-abcdfhikxy] [-R startblock [endblock]] [-D attrlist] [-S blocksize] [-s segsize] [-n segnumber] file

Display formatted contents of a PostgreSQL heap/index/control file
Defaults are: relative addressing, range of the entire file, block
               size as listed on block 0 in the file

The following options are valid for heap and index files:
  -a  Display absolute addresses when formatting (Block header
      information is always block relative)
  -b  Display binary block images within a range (Option will turn
      off all formatting options)
  -d  Display formatted block content dump (Option will turn off
      all other formatting options)
  -D  Decode tuples using given comma separated list of types
      Supported types:
        bigint bigserial bool char charN date float float4 float8 int
        json macaddr name oid real serial smallint smallserial text
        time timestamp timetz uuid varchar varcharN xid xml
      ~ ignores all attributes left in a tuple
  -f  Display formatted block content dump along with interpretation
  -h  Display this information
  -i  Display interpreted item details
  -k  Verify block checksums
  -R  Display specific block ranges within the file (Blocks are
      indexed from 0)
        [startblock]: block to start at
        [endblock]: block to end at
      A startblock without an endblock will format the single block
  -s  Force segment size to [segsize]
  -n  Force segment number to [segnumber]
  -S  Force block size to [blocksize]
  -x  Force interpreted formatting of block items as index items
  -y  Force interpreted formatting of block items as heap items

The following options are valid for control files:
  -c  Interpret the file listed as a control file
  -f  Display formatted content dump along with interpretation
  -S  Force block size to [blocksize]

Report bugs to 

As we want to dump a file we obviously need a table with some data, so:

postgres=# create table t1 ( a int, b varchar(50));
CREATE TABLE
postgres=# insert into t1 (a,b) select a, md5(a::varchar) from generate_series(1,10) a;
INSERT 0 10

Get the name of the file:

postgres=# select * from pg_relation_filenode('t1');
 pg_relation_filenode 
----------------------
                24702
(1 row)

Look it up in PGDATA:

postgres@pgbox:/home/postgres/ [PG10] cd $PGDATA
postgres@pgbox:/u02/pgdata/PG10/ [PG10] find . -name 24702
./base/13212/24702
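
As a shortcut, pg_relation_filepath() combines both steps and returns the path relative to PGDATA directly:

postgres=# select pg_relation_filepath('t1');
 pg_relation_filepath 
----------------------
 base/13212/24702
(1 row)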

… and dump it:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: None
*
* Dump created on: Wed Nov  8 10:39:33 2017
*******************************************************************
Error: Unable to read full page header from block 0.
  ===> Read 0 bytes

Hm, nothing in there. Why? The reason is simple: the data is there in PostgreSQL, but at the moment it is only in the WAL and not yet in the data file, as no checkpoint has happened (in this case):

postgres=#  checkpoint;
CHECKPOINT
Time: 100.567 ms

Do it again:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: None
*
* Dump created on: Wed Nov  8 10:40:45 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL


*** End of File Encountered. Last Block Read: 0 ***

Here we go. What can we learn from that output? It is not really human readable, but at least we see that there are ten rows. We can also list the actual contents of the rows:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -f ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -f 
*
* Dump created on: Wed Nov  8 10:41:21 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

  0000: 00000000 482c8b47 00000000 4000801d  ....H,.G....@...
  0010: 00200420 00000000 c09f7a00 809f7a00  . . ......z...z.
  0020: 409f7a00 009f7a00 c09e7a00 809e7a00  @.z...z...z...z.
  0030: 409e7a00 009e7a00 c09d7a00 809d7a00  @.z...z...z...z.

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
  1fc0: 96020000 00000000 00000000 00000000  ................
  1fd0: 01000200 02081800 01000000 43633463  ............Cc4c
  1fe0: 61343233 38613062 39323338 32306463  a4238a0b923820dc
  1ff0: 63353039 61366637 35383439 62        c509a6f75849b   

 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
  1f80: 96020000 00000000 00000000 00000000  ................
  1f90: 02000200 02081800 02000000 43633831  ............Cc81
  1fa0: 65373238 64396434 63326636 33366630  e728d9d4c2f636f0
  1fb0: 36376638 39636331 34383632 63        67f89cc14862c   

 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
  1f40: 96020000 00000000 00000000 00000000  ................
  1f50: 03000200 02081800 03000000 43656363  ............Cecc
  1f60: 62633837 65346235 63653266 65323833  bc87e4b5ce2fe283
  1f70: 30386664 39663261 37626166 33        08fd9f2a7baf3   

 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
  1f00: 96020000 00000000 00000000 00000000  ................
  1f10: 04000200 02081800 04000000 43613837  ............Ca87
  1f20: 66663637 39613266 33653731 64393138  ff679a2f3e71d918
  1f30: 31613637 62373534 32313232 63        1a67b7542122c   

 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
  1ec0: 96020000 00000000 00000000 00000000  ................
  1ed0: 05000200 02081800 05000000 43653464  ............Ce4d
  1ee0: 61336237 66626263 65323334 35643737  a3b7fbbce2345d77
  1ef0: 37326230 36373461 33313864 35        72b0674a318d5   

 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
  1e80: 96020000 00000000 00000000 00000000  ................
  1e90: 06000200 02081800 06000000 43313637  ............C167
  1ea0: 39303931 63356138 38306661 66366662  9091c5a880faf6fb
  1eb0: 35653630 38376562 31623264 63        5e6087eb1b2dc   

 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
  1e40: 96020000 00000000 00000000 00000000  ................
  1e50: 07000200 02081800 07000000 43386631  ............C8f1
  1e60: 34653435 66636565 61313637 61356133  4e45fceea167a5a3
  1e70: 36646564 64346265 61323534 33        6dedd4bea2543   

 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
  1e00: 96020000 00000000 00000000 00000000  ................
  1e10: 08000200 02081800 08000000 43633966  ............Cc9f
  1e20: 30663839 35666239 38616239 31353966  0f895fb98ab9159f
  1e30: 35316664 30323937 65323336 64        51fd0297e236d   

 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
  1dc0: 96020000 00000000 00000000 00000000  ................
  1dd0: 09000200 02081800 09000000 43343563  ............C45c
  1de0: 34386363 65326532 64376662 64656131  48cce2e2d7fbdea1
  1df0: 61666335 31633763 36616432 36        afc51c7c6ad26   

 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
  1d80: 96020000 00000000 00000000 00000000  ................
  1d90: 0a000200 02081800 0a000000 43643364  ............Cd3d
  1da0: 39343436 38303261 34343235 39373535  9446802a44259755
  1db0: 64333865 36643136 33653832 30        d38e6d163e820   



*** End of File Encountered. Last Block Read: 0 ***

But this does not help much either. When you want to see the contents in a human readable format, use the “-D” switch and provide the list of data types you want to decode:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -D int,varchar ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -D int,varchar 
*
* Dump created on: Wed Nov  8 10:42:58 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      64 (0x0040)
 Block: Size 8192  Version    4            Upper    7552 (0x1d80)
 LSN:  logid      0 recoff 0x478b2c48      Special  8192 (0x2000)
 Items:   10                      Free Space: 7488
 Checksum: 0x0000  Prune XID: 0x00000000  Flags: 0x0000 ()
 Length (including item array): 64

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 4	a87ff679a2f3e71d9181a67b7542122c
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820

And now we can see it. This is the same data as you’d get from a select on the table:

postgres=# select * from  t1;
 a  |                b                 
----+----------------------------------
  1 | c4ca4238a0b923820dcc509a6f75849b
  2 | c81e728d9d4c2f636f067f89cc14862c
  3 | eccbc87e4b5ce2fe28308fd9f2a7baf3
  4 | a87ff679a2f3e71d9181a67b7542122c
  5 | e4da3b7fbbce2345d7772b0674a318d5
  6 | 1679091c5a880faf6fb5e6087eb1b2dc
  7 | 8f14e45fceea167a5a36dedd4bea2543
  8 | c9f0f895fb98ab9159f51fd0297e236d
  9 | 45c48cce2e2d7fbdea1afc51c7c6ad26
 10 | d3d9446802a44259755d38e6d163e820
(10 rows)

What happens when we do an update?

postgres=# update t1 set b = 'a' where a = 4;
UPDATE 1
postgres=# checkpoint ;
CHECKPOINT

What does it look like in the file?

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -D int,varchar ./base/13212/24702

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 10.0
*
* File: ./base/13212/24702
* Options used: -D int,varchar 
*
* Dump created on: Wed Nov  8 11:12:35 2017
*******************************************************************

Block    0 ********************************************************
 -----
 Block Offset: 0x00000000         Offsets: Lower      68 (0x0044)
 Block: Size 8192  Version    4            Upper    7520 (0x1d60)
 LSN:  logid      0 recoff 0x478c2998      Special  8192 (0x2000)
 Items:   11                      Free Space: 7452
 Checksum: 0x0000  Prune XID: 0x00000298  Flags: 0x0000 ()
 Length (including item array): 68

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 4	a87ff679a2f3e71d9181a67b7542122c
 Item   5 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7552 (0x1d80)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:   30  Offset: 7520 (0x1d60)  Flags: NORMAL
COPY: 4	a

*** End of File Encountered. Last Block Read: 0 ***

The a=4 row is still there, but we got a new one (Item 11), which is our update. Remember that it is the job of vacuum to recycle the dead/old rows:

postgres=# vacuum t1;
VACUUM
postgres=# checkpoint ;
CHECKPOINT

Again (just displaying the data here):

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:    0  Offset:   11 (0x000b)  Flags: REDIRECT
 Item   5 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:   30  Offset: 7584 (0x1da0)  Flags: NORMAL
COPY: 4	a

… and “Item 4” is gone as a normal row: its line pointer is now a REDIRECT to Item 11 (see the offset column above). The same happens when you delete data:

postgres=# delete from t1 where a = 4;
DELETE 1
postgres=# vacuum t1;
VACUUM
postgres=# checkpoint;
CHECKPOINT

You’ll notice that both Items 4 and 11 are now gone (UNUSED):

 ------ 
 Item   1 -- Length:   61  Offset: 8128 (0x1fc0)  Flags: NORMAL
COPY: 1	c4ca4238a0b923820dcc509a6f75849b
 Item   2 -- Length:   61  Offset: 8064 (0x1f80)  Flags: NORMAL
COPY: 2	c81e728d9d4c2f636f067f89cc14862c
 Item   3 -- Length:   61  Offset: 8000 (0x1f40)  Flags: NORMAL
COPY: 3	eccbc87e4b5ce2fe28308fd9f2a7baf3
 Item   4 -- Length:    0  Offset:    0 (0x0000)  Flags: UNUSED
 Item   5 -- Length:   61  Offset: 7936 (0x1f00)  Flags: NORMAL
COPY: 5	e4da3b7fbbce2345d7772b0674a318d5
 Item   6 -- Length:   61  Offset: 7872 (0x1ec0)  Flags: NORMAL
COPY: 6	1679091c5a880faf6fb5e6087eb1b2dc
 Item   7 -- Length:   61  Offset: 7808 (0x1e80)  Flags: NORMAL
COPY: 7	8f14e45fceea167a5a36dedd4bea2543
 Item   8 -- Length:   61  Offset: 7744 (0x1e40)  Flags: NORMAL
COPY: 8	c9f0f895fb98ab9159f51fd0297e236d
 Item   9 -- Length:   61  Offset: 7680 (0x1e00)  Flags: NORMAL
COPY: 9	45c48cce2e2d7fbdea1afc51c7c6ad26
 Item  10 -- Length:   61  Offset: 7616 (0x1dc0)  Flags: NORMAL
COPY: 10	d3d9446802a44259755d38e6d163e820
 Item  11 -- Length:    0  Offset:    0 (0x0000)  Flags: UNUSED
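
One more option from the help above worth knowing is “-k”, which verifies block checksums. The pages dumped here show Checksum: 0x0000 because this cluster was initialized without data checksums; on a cluster created with initdb --data-checksums you could verify a file like this:

postgres@pgbox:/u02/pgdata/PG10/ [PG10] pg_filedump -k ./base/13212/24702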

That’s it for this introduction to pg_filedump; more to come in more detail.

 

The article Displaying the contents of a PostgreSQL data file with pg_filedump appeared first on Blog dbi services.

RMAN Backup script

DBA Scripts and Articles - Wed, 2017-11-08 03:54

This is a sample backup script I use; it already has a lot of options. Feel free to make any modifications you want. If you add some good enhancements, let me know and I can put them here so everybody can benefit from them.

The post RMAN Backup script appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 01)

Yann Neuhaus - Wed, 2017-11-08 02:30

After having worked with OVM on various architectures, I can say that it is a good technology for easily building virtualized environments for production applications. Because it is based on Xen and has simple ways to deal with existing storage (FC, iSCSI, NFS, …) and networking solutions (bond, LACP, …), it is a robust and convenient way to virtualize IT infrastructures while keeping bare-metal performance.
Besides, it is a hard partitioning technology which is compliant with the Oracle licensing policies for partitioned environments to control CPU counting for licensing.

The aim of this post is to demonstrate how simple it is to build an HA virtualized architecture with the OVM Manager command line tool only (doc link). So we will create 1 VM on each server, including all the Oracle OS, network and storage requirements to run RAC 12.2.

Initial state:

  • 2 physical servers installed with Oracle VM Server 3.4 (namely OVS, installation procedure here) to host VMs, including:
    • 5 NICs on each (no bonding for the example, but recommended for production systems)
      • eth0: administrative network connected to the organization’s administrative network
      • eth1: application network dedicated to application
      • eth2: storage network cabled to the storage for ISCSI LUNs and NFS accessibility
      • eth3: cabled between both OVS Servers for RAC interconnect link #1/2
      • eth4: cabled between both OVS Servers for RAC interconnect link #2/2
  • 1 server with Oracle VM Manager 3.4 already installed (installation procedure here)
    • eth0: administrative network connected to the organization’s administrative network
  • 1 storage system (Here we are going to use a ZFS Storage appliance)
  • 1 OVM Template from Oracle Corp. (available here)

Summary


Step 0: Connect to the OVM Manager client

Because the client connects through SSH (default port number 10000, user admin), you can connect to the OVM Manager client from wherever you have network connectivity to the OVM Manager server.
OVMCLI is a separate service from OVM Manager running on the OVM Manager server. Here I check the OVMCLI service status and connect from within the VM Manager server.

service ovmcli status

ssh -l admin localhost -p 10000

OVM>                                            # --> prompt for OVM
OVM> ?                                          # --> to show which action can be done
OVM> list ?                                     # --> to show which options are available for command "list"
OVM> set OutputMode={ Verbose | Sparse | Xml }  # --> to make output matching your automation style
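
Since the goal is automation, commands can also be sent to the OVMCLI non-interactively over SSH (a sketch; key-based authentication and the command file are assumptions):

ssh -l admin localhost -p 10000 "list Server"       # run a single command and exit
ssh -l admin localhost -p 10000 < ovm_commands.txt  # replay a prepared command file (hypothetical)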


Step 1: discover OVS servers

discoverServer ipAddress=192.168.56.101 password=oracle takeOwnership=Yes
discoverServer ipAddress=192.168.56.102 password=oracle takeOwnership=Yes


Step 2: Discover file server

In this example I am going to store the ServerPool FSs on NFS from the ZFS Storage appliance. But it could be any NFS technology, or they could be stored directly on iSCSI/FC LUNs.

OVM> list FileServerPlugin
Command: list FileServerPlugin
Status: Success
Time: 2017-10-19 14:51:31,311 CEST
Data:
id:oracle.ocfs2.OCFS2.OCFS2Plugin (0.1.0-47.5)  name:Oracle OCFS2 File system
id:oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  name:Oracle Generic Network File System

OVM> create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Command: create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Status: Success
Time: 2017-10-19 14:58:46,411 CEST
JobId: 1508417926209
Data:
id:0004fb00000900004801ecf9996f1d43  name:zfsstorage

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145

OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Command: refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Status: Success
Time: 2017-10-19 17:42:28,516 CEST
JobId: 1508427714903

OVM> refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Command: refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Status: Success
Time: 2017-10-19 17:43:02,144 CEST
JobId: 1508427760257


Step 3: Discover NAS Storage

OVM> list StorageArrayPlugin
Command: list StorageArrayPlugin
Status: Success
Time: 2017-10-19 15:28:23,932 CEST
Data:
id:oracle.s7k.SCSIPlugin.SCSIPlugin (2.1.2-3)  name:zfs_storage_iscsi_fc
id:oracle.generic.SCSIPlugin.GenericPlugin (1.1.0)  name:Oracle Generic SCSI Plugin

OVM> create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=oracle pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Command: create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=***** pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Status: Success
Time: 2017-10-19 15:48:00,761 CEST
JobId: 1508420880565
Data:
id:0004fb0000090000c105d1003f051fbd  name:zfsstorage

OVM> addAdminServer StorageArray name=zfsstorage server=ovs001
Command: addAdminServer StorageArray name=zfsstorage server=ovs001
Status: Success
Time: 2017-10-19 16:11:32,448 CEST
JobId: 1508422292175

OVM> addAdminServer StorageArray name=zfsstorage server=ovs002
Command: addAdminServer StorageArray name=zfsstorage server=ovs002
Status: Success
Time: 2017-10-19 16:11:35,424 CEST
JobId: 1508422295266

OVM> validate StorageArray name=zfsstorage
Command: validate StorageArray name=zfsstorage
Status: Success
Time: 2017-10-19 16:10:04,937 CEST
JobId: 1508422128777

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145


Step 4: Creation of a server pool

OVM needs to put its physical servers in a logical space called a server pool. A server pool will use at least 2 storage spaces:

  • a cluster storage configuration and disk heartbeat (must be at least 10GB per OVM 3.4’s recommendations); it is better to separate the network access for this storage space in order to avoid unwanted cluster evictions.
  • a storage space for the server pool in which we can store VM configuration files, Templates, ISOs and so on.

OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Command: create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Status: Success
Time: 2017-10-19 17:15:11,431 CEST
Data:
id:0004fb0000020000c6b2c32fc58646e7  name:prod01


Step 5: Add servers to the server pool

OVM> list server
Command: list server
Status: Success
Time: 2017-10-19 17:15:28,111 CEST
Data:
id:65:72:21:77:7b:0d:47:47:bc:43:e5:1f:64:3d:56:d9  name:ovs002
id:bb:06:3c:3e:a4:76:4b:e2:9c:bc:65:69:4e:35:28:b4  name:ovs001

OVM> add Server name=ovs001 to ServerPool name=prod01
Command: add Server name=ovs001 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:17:55,131 CEST
JobId: 1508426260895

OVM> add Server name=ovs002 to ServerPool name=prod01
Command: add Server name=ovs002 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:18:21,439 CEST
JobId: 1508426277115


Step 6: Creation of a repository to store VMs’ configuration files and to import the Oracle Template

OVM> list filesystem
Command: list filesystem
Status: Success
Time: 2017-10-19 17:44:23,811 CEST
Data:
id:0004fb00000500009cbc79dde9b6649e  name:Server Pool File System
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Command: create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Status: Success
Time: 2017-10-19 17:45:22,346 CEST
JobId: 1508427888238
Data:
id:0004fb0000030000f1c8182390a36c8c  name:RepoOracle

OVM> add ServerPool name=prod01 to Repository name=RepoOracle
Command: add ServerPool name=prod01 to Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:08,020 CEST
JobId: 1508428361049

OVM> refresh Repository name=RepoOracle
Command: refresh Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:40,922 CEST
JobId: 1508428394212

OVM> importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Command: importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Status: Success
Time: 2017-11-02 12:05:29,341 CET
JobId: 1509619956729
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz


Step 7: Create VMs called rac001 and rac002 for my 2-node RAC

Here we create VMs by cloning the template OVM_OL7U4_X86_64_12201DBRAC_PVHVM from Oracle.

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:06,077 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz

OVM> edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Command: edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Status: Success
Time: 2017-11-02 12:07:30,392 CET
JobId: 1509620850142

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:36,282 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Status: Success
Time: 2017-11-02 12:31:31,798 CET
JobId: 1509622291342
Data:
id:0004fb0000060000d4819629ebc0687f  name:rac001

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Status: Success
Time: 2017-11-02 13:57:34,125 CET
JobId: 1509627453634
Data:
id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 15:23:54,077 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM
id:0004fb0000060000d4819629ebc0687f  name:rac001
id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> edit vm name=rac001 memory=2048 memoryLimit=2048
Command: edit vm name=rac001 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:45,542 CET
JobId: 1509639285374

OVM> edit vm name=rac002 memory=2048 memoryLimit=2048
Command: edit vm name=rac002 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:59,458 CET
JobId: 1509639299301


Step 8: Network definition for RAC interconnect and application network

create Network roles=VIRTUAL_MACHINE name=Application-Network
create Network roles=VIRTUAL_MACHINE name=Interco-Network-01
create Network roles=VIRTUAL_MACHINE name=Interco-Network-02

OVM> list network
Command: list network
Status: Success
Time: 2017-10-17 00:31:53,673 CEST
Data:
id:108572a7ca  name:Application-Network
id:10922ff6d7  name:Interco-Network-01
id:106765828d  name:Interco-Network-02

 

Next, we attach the physical OVS NICs to the corresponding networks

OVM> list port
Command: list port
Status: Success
Time: 2017-11-02 16:03:40,026 CET
Data:
id:0004fb00002000007667fde85d2a2944  name:eth0 on ovs002
id:0004fb00002000001fa791c597d71947  name:eth1 on ovs002
id:0004fb00002000003842bd1f3acb476b  name:eth2 on ovs002
id:0004fb000020000031652acb25248275  name:eth3 on ovs002
id:0004fb00002000006fb524dac1f2319c  name:eth4 on ovs001
id:0004fb0000200000748a37db41f80fb2  name:eth4 on ovs002
id:0004fb00002000000178e5cefb3c0161  name:eth3 on ovs001
id:0004fb000020000020373da7c0cdf4cf  name:eth2 on ovs001
id:0004fb0000200000b0e747714aa822b7  name:eth1 on ovs001
id:0004fb00002000002787de2e68f61ecd  name:eth0 on ovs001

add Port id=0004fb0000200000b0e747714aa822b7 to Network name=Application-Network
add Port id=0004fb00002000000178e5cefb3c0161 to Network name=Interco-Network-01
add Port id=0004fb00002000006fb524dac1f2319c to Network name=Interco-Network-02

add Port id=0004fb00002000001fa791c597d71947 to Network name=Application-Network
add Port id=0004fb000020000031652acb25248275 to Network name=Interco-Network-01
add Port id=0004fb0000200000748a37db41f80fb2 to Network name=Interco-Network-02

 

Then create the Virtual NICs for the Virtual Machines (the order matters, as the first created will fill the first slot of the VM)

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:25:54,571 CET
Data:
id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
id:0004fb00000700005351eb55314ab34e  name:Template Vnic

create Vnic name=rac001_vnic_admin network=Admin-Network on Vm name=rac001
create Vnic name=rac001_vnic_application network=Application-Network on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-01 on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-02 on Vm name=rac001

create Vnic name=rac002_vnic_admin network=Admin-Network on Vm name=rac002
create Vnic name=rac002_vnic_application network=Application-Network on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-01 on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-02 on Vm name=rac002

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:27:34,642 CET
Data:
id:0004fb00000700005631bb2fbbeed53c  name:rac002_vnic_interconnect
id:0004fb00000700005e93ec7e8cf529b6  name:rac001_vnic_interconnect
id:0004fb0000070000c091c9091b464846  name:rac002_vnic_admin
id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
id:0004fb00000700009430b0a26566d6e3  name:rac002_vnic_application
id:0004fb0000070000c4113fb1d9375791  name:rac002_vnic_interconnect
id:0004fb00000700005351eb55314ab34e  name:Template Vnic
id:0004fb0000070000e1abd7e572bffc3a  name:rac001_vnic_admin
id:0004fb000007000079bb1fbf1d1942c9  name:rac001_vnic_application
id:0004fb000007000085d8a41dc8fd768c  name:rac001_vnic_interconnect


Step 9: Shared disks attachment to VMs for RAC ASM

Thanks to the storage plugin available for the ZFS appliance, we can directly create LUNs from the OVM CLI. You may find the plugin for your storage vendor on the Oracle web site https://www.oracle.com/virtualization/storage-connect-partner-program.html.
The storage plugin needs to be installed on each OVS server, and the OVS servers need to be rediscovered afterwards.
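
As a sketch (the package name is hypothetical and depends on your vendor’s plugin), installing the plugin on an OVS server and rediscovering it from the OVM CLI looks like this:

[root@ovs001 ~]# rpm -ivh osc-oracle-s7k-plugin.rpm   # hypothetical plugin package name
OVM> discoverServer ipAddress=192.168.56.101 password=oracle takeOwnership=Yes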

create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu001 name=clu001dgclu001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu002 name=clu001dgclu002 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu003 name=clu001dgclu003 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu004 name=clu001dgclu004 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu005 name=clu001dgclu005 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata001 name=clu001dgdata001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata002 name=clu001dgdata002 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra001 name=clu001dgfra001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra002 name=clu001dgfra002 on VolumeGroup  name=data/local/OracleTech

OVM> list PhysicalDisk
Command: list PhysicalDisk
Status: Success
Time: 2017-11-02 11:44:41,624 CET
Data:
id:0004fb0000180000ae02df42a4c8e582  name:clu001dgclu004
id:0004fb0000180000d91546f7d1a09cfb  name:clu001dgclu005
id:0004fb0000180000ab0030fb540a55b9  name:clu001dgclu003
id:0004fb0000180000d20bb1d7d50d6875  name:clu001dgfra001
id:0004fb00001800009e39a0b8b1edcf90  name:clu001dgfra002
id:0004fb00001800003742306aa30bfdd4  name:clu001dgdata001
id:0004fb00001800006131006a7a9fd266  name:clu001dgdata002
id:0004fb0000180000a5177543a1ef0464  name:clu001dgclu001
id:0004fb000018000035bd38c6f5245f66  name:clu001dgclu002

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac001_clu001dgclu001 on Vm name=rac001
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac001_clu001dgclu002 on Vm name=rac001
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac001_clu001dgclu003 on Vm name=rac001
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac001_clu001dgclu004 on Vm name=rac001
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac001_clu001dgclu005 on Vm name=rac001
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac001_clu001dgdata001 on Vm name=rac001
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac001_clu001dgdata002 on Vm name=rac001
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac001_clu001dgfra001 on Vm name=rac001
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac001_clu001dgfra002 on Vm name=rac001

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac002_clu001dgclu001 on Vm name=rac002
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac002_clu001dgclu002 on Vm name=rac002
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac002_clu001dgclu003 on Vm name=rac002
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac002_clu001dgclu004 on Vm name=rac002
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac002_clu001dgclu005 on Vm name=rac002
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac002_clu001dgfra001 on Vm name=rac002
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002

 

#Output of an attachment:
OVM> create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Command: create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Status: Success
Time: 2017-11-02 15:49:44,573 CET
JobId: 1509634184144
Data:
id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002

 

OVM> list vmdiskmapping
Command: list vmdiskmapping
Status: Success
Time: 2017-11-02 15:50:05,117 CET
Data:
id:0004fb0000130000a2e52668e38d24f0  name:Mapping for disk Id (0004fb00001200008e5043cea31e4a1c.img)
id:0004fb00001300000b0202b6af4254b1  name:asm_disk_cluster_rac002_clu001dgclu003
id:0004fb0000130000f573415ba8af814d  name:Mapping for disk Id (0004fb0000120000073fd0cff75c5f4d.img)
id:0004fb0000130000217c1b6586d88d98  name:asm_disk_cluster_rac002_clu001dgclu002
id:0004fb00001300007c8f1b4fd9e845c4  name:asm_disk_cluster_rac002_clu001dgclu001
id:0004fb00001300009698cf153f616454  name:asm_disk_cluster_rac001_clu001dgfra002
id:0004fb0000130000c9caf8763df6bfe0  name:asm_disk_cluster_rac001_clu001dgfra001
id:0004fb00001300009771ff7e2a1bf965  name:asm_disk_cluster_rac001_clu001dgdata002
id:0004fb00001300003aed42abb7085053  name:asm_disk_cluster_rac001_clu001dgdata001
id:0004fb0000130000ac45b70bac2cedf7  name:asm_disk_cluster_rac001_clu001dgclu005
id:0004fb000013000007069008e4b91b9d  name:asm_disk_cluster_rac001_clu001dgclu004
id:0004fb0000130000a8182ada5a07d7cd  name:asm_disk_cluster_rac001_clu001dgclu003
id:0004fb00001300009edf25758590684b  name:asm_disk_cluster_rac001_clu001dgclu002
id:0004fb0000130000a93c8a73900cbf80  name:asm_disk_cluster_rac002_clu001dgfra001
id:0004fb0000130000c8c35da3ad0148c4  name:asm_disk_cluster_rac001_clu001dgclu001
id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002
id:0004fb0000130000ff84c64175d7e6c1  name:asm_disk_cluster_rac002_clu001dgdata
id:0004fb00001300009c08b1803928536d  name:Mapping for disk Id (dd3c390b29af49809caba202f234a443.img)
id:0004fb0000130000e85ace19b45c0ad6  name:Mapping for disk Id (0004fb00001200002aa671facc8a1307.img)
id:0004fb0000130000e595c3dc5788b87a  name:Mapping for disk Id (0004fb000012000087341e27f9faaa17.img)
id:0004fb0000130000c66fe2d0d66b7276  name:asm_disk_cluster_rac002_clu001dgdata
id:0004fb00001300009c85bca66c400366  name:Mapping for disk Id (46da481163424b739feeb08b4d22c1b4.img)
id:0004fb0000130000768a2af09207e659  name:asm_disk_cluster_rac002_clu001dgclu004
id:0004fb000013000092836d3ee569e6ac  name:asm_disk_cluster_rac002_clu001dgclu005

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:59:32,116 CET
JobId: 1509638311277

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:57:31,703 CET
JobId: 1509638191228

add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu003 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu004 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu005 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra002 to AccessGroup name=cluster001

#Output of an Access addition:
OVM> add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Command: add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 17:10:13,636 CET
JobId: 1509639013463
OVM> refreshStorageLayer Server name=ovs001
Command: refreshStorageLayer Server name=ovs001
Status: Success
Time: 2017-11-02 16:42:26,230 CET
JobId: 1509637330270

OVM> refreshStorageLayer Server name=ovs002
Command: refreshStorageLayer Server name=ovs002
Status: Success
Time: 2017-11-02 16:42:51,296 CET
JobId: 1509637355423
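
All these repetitive create/add commands lend themselves nicely to scripting. Here is a minimal sketch (the OVM Manager hostname "ovmmanager" and key-based SSH login are assumptions) that regenerates the disk creation, mapping and access-group commands for both nodes and pipes them into the OVMCLI:

#!/bin/bash
# Sketch: generate the OVM CLI commands for the 9 shared ASM disks
# and feed them to the OVMCLI over SSH.
DISKS="clu001dgclu001 clu001dgclu002 clu001dgclu003 clu001dgclu004 clu001dgclu005 clu001dgdata001 clu001dgdata002 clu001dgfra001 clu001dgfra002"
{
slot=10
for d in $DISKS; do
  echo "create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=$d name=$d on VolumeGroup name=data/local/OracleTech"
  for vm in rac001 rac002; do
    # the same slot number is used on both VMs, as in the manual run above
    echo "create vmdiskmapping slot=$slot physicalDisk=$d name=asm_disk_cluster_${vm}_${d} on Vm name=$vm"
  done
  echo "add PhysicalDisk name=$d to AccessGroup name=cluster001"
  slot=$((slot+1))
done
} | ssh -l admin -p 10000 ovmmanager   # hypothetical OVM Manager hostname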

 

Final state: 2 VMs hosted on 2 different servers, with the OS, network and storage requirements met to run RAC 12.2.

This concludes this part and demonstrates how easy it can be to automate those commands and deploy many different architectures.
The next part will describe how to deploy RAC 12.2 on top of this infrastructure with the Oracle DeployCluster tool in a few commands …

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 

The article Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 01) appeared first on Blog dbi services.

Error handling behavior change according to PLSQL_OPTIMIZE_LEVEL

Tom Kyte - Tue, 2017-11-07 21:26
We faced a case in our application where an error message disappears according to PLSQL_OPTIMIZE_LEVEL. I have isolated the problem in a simple script. Run this script and you will see that at the first execution of the procedure "test_error_proc#" error i...
Categories: DBA Blogs

SQL Query based on performance

Tom Kyte - Tue, 2017-11-07 21:26
Hi Tom, There is a table, say t_tab, with columns a, b and c. Data in the table is huge (more than a million rows). You run the following three statements: 1. select * from t_tab 2. select a,b,c from t_tab 3. select b,c,a from t_tab Will there be a diffe...
Categories: DBA Blogs

Fuzzy Matching in SQL

Tom Kyte - Tue, 2017-11-07 21:26
Is there any SQL construct that does fuzzy matching? As an example, if I have the values Monroe, Monroe Twp, Monroe Township, "Monroe Twp,NJ", I would like to consider them as one value.
Categories: DBA Blogs

Subquery with Select statement works in 12C but not on 11g.

Tom Kyte - Tue, 2017-11-07 21:26
Hi, I am trying to run a select query which has subqueries. It runs well in a 12c environment but it's throwing an error in 11g. Could you please help me with this? Thanks, Kumar
Categories: DBA Blogs

Index creation on empty column on Large Table

Tom Kyte - Tue, 2017-11-07 21:26
Quite often we face a situation where we have a large table with hundreds of millions of records (sometimes even billions of records), and we might need to add a column to that table and then add an index on that new column. We have absolute control over...
Categories: DBA Blogs

ORA-29284: file read error for a few lines

Tom Kyte - Tue, 2017-11-07 21:26
Hi Experts, Thanks for taking the time out to read my question. I am receiving a file from a third party as a flat file, with different lines of different lengths. The first two characters of each line represent what data that line will hav...
Categories: DBA Blogs

using connect by without relationship using parent_id

Tom Kyte - Tue, 2017-11-07 21:26
Hi, I have information about fathers, mothers and children, but there is no relationship between the rows using Parent_id, as follows: drop table tbl_family; create table tbl_family ( father nvarchar2(50) , mother nvarchar2(50) , ...
Categories: DBA Blogs

database migration from AIX to Linux

Tom Kyte - Tue, 2017-11-07 21:26
Hello Tom, We are planning to migrate a database from AIX to Linux. Because of the different endianness we can't build a standby. Here is my situation: production databases have 30-40TB of data, and some tables alone have 1-5 TB of data. What is the best way...
Categories: DBA Blogs

Replication of multiple source databases to a single read-only database

Tom Kyte - Tue, 2017-11-07 21:26
Dears, hope you are fine. I have a distributed database with about 20 branches; each database has the same schema structure. We need a centralized report that reads from only four tables. Currently we take a dump file from each branch and impo...
Categories: DBA Blogs

SunMoon Food Company Implements NetSuite OneWorld to Support Rapid Global Expansion

Oracle Press Releases - Tue, 2017-11-07 20:00
Press Release
SunMoon Food Company Implements NetSuite OneWorld to Support Rapid Global Expansion Fresh produce distributor increases productivity and efficiency, saves S$20,000 in just five months

Singapore—Nov 8, 2017

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, announced today that SunMoon Food Company Limited (SunMoon), a global distributor of fruit and food products, has deployed NetSuite OneWorld to support its global growth, enabling dramatically increased overall productivity and efficiency. In just five months, NetSuite OneWorld facilitated 900 transactions, saving SunMoon 150 hours and an estimated S$20,000.

Live since April 2017, SunMoon is leveraging OneWorld for financials, inventory and order management, financial consolidation across three subsidiaries in China, Indonesia, and the US, and multicurrency transactions in 11 different currencies, including the Australian, Canadian, Hong Kong, Singapore and US Dollars, the Euro, Indonesian Rupiah, Malaysian Ringgit, Renminbi and Thai Baht. It also supports English, Chinese, and Bahasa Malay.

Established in 1983, SunMoon distributes a wide range of fresh and sustainable produce, from premium frozen durians to ready-to-eat sweet corn. The produce is directly sourced from over 200 carefully selected and certified suppliers according to the ‘SunMoon Quality Assurance’ standard, a critical checklist of freshness, quality, safety and traceability. It is then distributed to health-conscious consumers globally, across various ecommerce channels, major supermarkets and SunMoon’s own franchise outlets.

Prior to deploying NetSuite OneWorld, SunMoon primarily used emails to correspond with its farmers, suppliers and customers for stock taking, order management, invoicing and billing. This required significant manual coordination, making it extremely difficult to track orders and compare quotes, greatly impacting productivity and the company’s growth potential.

NetSuite OneWorld empowers SunMoon’s suppliers to enter expiry dates, packaging sizes and other details from any internet-connected device into the cloud-based system. Based on this information, SunMoon can easily create a quote for its customers, which they can accept with just one click. NetSuite OneWorld then automatically sends a PO to farmers and generates an invoice once the order has been fulfilled.

“SunMoon offers over 200 products through more than 11,000 points of sales to 169 customers in 20 countries, and these numbers are growing daily,” said Gary Loh, Deputy Chairman and CEO of SunMoon Food Company Limited. “With NetSuite OneWorld, we’ve been able to move our products seamlessly from farm to fork on a global scale much faster and more efficiently. Using NetSuite OneWorld’s integrated capabilities helps us transform SunMoon into an asset-light and customer-centric enterprise.”

NetSuite OneWorld also supports SunMoon’s aggressive expansion plans. “Thanks to NetSuite OneWorld, we can enter new markets more easily,” continued Loh. “Its multi-language and multi-currency features put us on the world map, empowering us to further expand our operations in Indonesia, the US and Southeast Asia. And best of all, we won’t even need an overseas IT department to support these countries. Our Singapore team can provide support remotely as NetSuite OneWorld is completely cloud based.”

Zakir Ahmed, General Manager, Oracle NetSuite Asia, commented: “With Asia Pacific accounting for nearly 60 percent of the global population, an efficient food supply chain and distribution network is even more critical here than anywhere else in the world. We are committed to giving forward-looking businesses the tools to innovate. SunMoon is a great example of a business that harnesses technology to digitise and transform this traditional market.”

NetSuite OneWorld supports 190 currencies, 20 languages, automated tax calculation and reporting in more than 100 countries, and customer transactions in more than 200 countries and territories. These global financial capabilities give SunMoon real-time organisation-wide visibility and new insights for its three subsidiaries via supplier, customer and other transaction data.

Contact Info
Suzanne Myerson
Oracle NetSuite
+61 414 101 583
suzanne.myerson@oracle.com
Mizu Chitra
Text100 Singapore
+65 6603 9000
SGNetSuite@text100.com.sg
About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world's first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.sg.

Follow NetSuite's Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About SunMoon Food Company Limited (www.sunmoonfood.com)

SunMoon Food Company Limited (“SunMoon”) is a global distributor and marketer of nutritious fresh fruits, vegetables, and products, delivered to the health-conscious consumer in the most convenient way.

Started in 1983, SunMoon has grown its product offering to over 200 product types, including fresh fruits, vegetables, freeze-dried fruit snacks, nuts, fruit cups, fruit sticks, juices, sorbets, frozen fruits and assorted water packaged under its own brand.

With an extensive sales network of over 11,000 points of sales globally, SunMoon's offering of quality, premium products is distributed via supermarkets, convenience stores, online and wholesale channels, airlines, food services as well as SunMoon's franchise outlets in Singapore.

Since 2015, the company has shifted towards an asset-light, consumer-centric and brand-focused business model by tapping into its strong brand equity and the 'Network x Geography x Product' operational model. Instead of owning farms, SunMoon works with farmers to ensure they meet its quality standards.

SunMoon's products come with the SunMoon Quality Assurance, backed by internationally recognised accreditations such as HACCP; Good Manufacturing Practice (GMP); AIB (Excellent), ISO 22000, Halal and Kosher Certification.

SunMoon was listed in 1997 on the Mainboard of the Singapore Exchange. 

Talk to a Press Contact

Suzanne Myerson

  • +61 414 101 583

Mizu Chitra

  • +65 6603 9000

Oracle SOA Suite 12c: rcu fails on Oracle Linux

Dietrich Schroff - Tue, 2017-11-07 15:39
The next step after setting up a database is running the rcu script to create the SOA Suite schemas inside the database. But this step fails with an ugly exception:

[oracle@localhost bin]$ pwd
/mnt/Middleware/Oracle_Home/oracle_common/bin
[oracle@localhost bin]$ ./rcu

    RCU log file: /tmp/RCU2017-10-07_18-13_966788282/logs/rcu.log

Exception in thread "main" java.lang.ExceptionInInitializerError
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at javax.swing.UIDefaults.getUI(UIDefaults.java:769)
    at javax.swing.UIManager.getUI(UIManager.java:1016)
    at javax.swing.JComboBox.updateUI(JComboBox.java:266)
    at javax.swing.JComboBox.init(JComboBox.java:231)
    at javax.swing.JComboBox.<init>(JComboBox.java:183)
    at oracle.help.DefaultNavigatorPanel$MinimumSizedComboBox.<init>(DefaultNavigatorPanel.java:791)
    at oracle.help.DefaultNavigatorPanel.<init>(DefaultNavigatorPanel.java:106)
    at oracle.help.Help._initHelpSystem(Help.java:1045)
    at oracle.help.Help.<init>(Help.java:353)
    at oracle.help.Help.<init>(Help.java:307)
    at oracle.help.Help.<init>(Help.java:271)
    at oracle.help.Help.<init>(Help.java:146)
    at oracle.sysman.assistants.rcu.ui.InteractiveRCUModel.initializeHelp(InteractiveRCUModel.java:261)
    at oracle.sysman.assistants.rcu.ui.InteractiveRCUModel.<init>(InteractiveRCUModel.java:151)
    at oracle.sysman.assistants.rcu.Rcu.execute(Rcu.java:360)
    at oracle.sysman.assistants.rcu.Rcu.main(Rcu.java:433)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
    at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
    at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
    at sun.font.FontDesignMetrics.<init>(FontDesignMetrics.java:350)
    at sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:302)
    at sun.swing.SwingUtilities2.getFontMetrics(SwingUtilities2.java:1113)
    at javax.swing.JComponent.getFontMetrics(JComponent.java:1626)
    at javax.swing.text.PlainView.calculateLongestLine(PlainView.java:639)
    at javax.swing.text.PlainView.updateMetrics(PlainView.java:209)
    at javax.swing.text.PlainView.updateDamage(PlainView.java:527)
    at javax.swing.text.PlainView.insertUpdate(PlainView.java:451)
    at javax.swing.text.FieldView.insertUpdate(FieldView.java:293)
    at javax.swing.plaf.basic.BasicTextUI$RootView.insertUpdate(BasicTextUI.java:1610)
    at javax.swing.plaf.basic.BasicTextUI$UpdateHandler.insertUpdate(BasicTextUI.java:1869)
    at javax.swing.text.AbstractDocument.fireInsertUpdate(AbstractDocument.java:201)
    at javax.swing.text.AbstractDocument.handleInsertString(AbstractDocument.java:748)
    at javax.swing.text.AbstractDocument.insertString(AbstractDocument.java:707)
    at javax.swing.text.PlainDocument.insertString(PlainDocument.java:130)
    at javax.swing.text.AbstractDocument.replace(AbstractDocument.java:669)
    at javax.swing.text.JTextComponent.setText(JTextComponent.java:1669)
    at javax.swing.JTextField.<init>(JTextField.java:243)
    at javax.swing.JTextField.<init>(JTextField.java:183)
    at com.jgoodies.looks.plastic.PlasticComboBoxUI.<init>(PlasticComboBoxUI.java:88)
    ... 25 more
Hmmm. Not that good.
(I was running this from a shared VirtualBox folder.)
The next step was to install the Middleware home on my Oracle Linux machine. But this fails, too:
[oracle@localhost mnt]$ java -jar fmw_12.2.1.0.0_soa_quickstart.jar
Launcher log file is /tmp/OraInstall2017-10-07_06-29-37PM/launcher2017-10-07_06-29-37PM.log.
Extracting files.......................................................
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz   Actual 2904.000 MHz    Passed
Checking monitor: must be configured to display at least 256 colors   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB   Actual 3967 MB    Passed
Checking whether this platform requires a 64-bit JVM   Actual 64    Passed (64-bit not required)
Checking temporary space: must be greater than 300 MB   Actual 16325 MB    Passed


Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-10-07_06-29-37PM
Log: /tmp/OraInstall2017-10-07_06-29-37PM/install2017-10-07_06-29-37PM.log
java.lang.ArrayIndexOutOfBoundsException: 0
    at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
    at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
    at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
    at sun.font.FontDesignMetrics.<init>(FontDesignMetrics.java:350)
    at sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:302)
    at sun.swing.SwingUtilities2.getFontMetrics(SwingUtilities2.java:1113)
    at javax.swing.JComponent.getFontMetrics(JComponent.java:1626)
    at javax.swing.text.GlyphPainter1.sync(GlyphPainter1.java:226)
    at javax.swing.text.GlyphPainter1.getSpan(GlyphPainter1.java:59)
    at javax.swing.text.GlyphView.getPreferredSpan(GlyphView.java:592)
    at javax.swing.text.FlowView$LogicalView.getPreferredSpan(FlowView.java:732)
    at javax.swing.text.FlowView.calculateMinorAxisRequirements(FlowView.java:233)
    at javax.swing.text.ParagraphView.calculateMinorAxisRequirements(ParagraphView.java:717)
    at javax.swing.text.html.ParagraphView.calculateMinorAxisRequirements(ParagraphView.java:157)
    at javax.swing.text.BoxView.checkRequests(BoxView.java:935)
    at javax.swing.text.BoxView.getMinimumSpan(BoxView.java:568)
    at javax.swing.text.html.ParagraphView.getMinimumSpan(ParagraphView.java:270)
    at javax.swing.text.BoxView.calculateMinorAxisRequirements(BoxView.java:903)
    at javax.swing.text.html.BlockView.calculateMinorAxisRequirements(BlockView.java:146)
    at javax.swing.text.BoxView.checkRequests(BoxView.java:935)
    at javax.swing.text.BoxView.getMinimumSpan(BoxView.java:568)
    at javax.swing.text.html.BlockView.getMinimumSpan(BlockView.java:378)
    at javax.swing.text.BoxView.calculateMinorAxisRequirements(BoxView.java:903)
    at javax.swing.text.html.BlockView.calculateMinorAxisRequirements(BlockView.java:146)
    at javax.swing.text.BoxView.checkRequests(BoxView.java:935)
    at javax.swing.text.BoxView.getPreferredSpan(BoxView.java:545)
    at javax.swing.text.html.BlockView.getPreferredSpan(BlockView.java:362)
    at javax.swing.plaf.basic.BasicHTML$Renderer.<init>(BasicHTML.java:383)
    at javax.swing.plaf.basic.BasicHTML.createHTMLView(BasicHTML.java:67)
    at javax.swing.plaf.basic.BasicHTML.updateRenderer(BasicHTML.java:207)
    at javax.swing.plaf.basic.BasicLabelUI.propertyChange(BasicLabelUI.java:417)
    at oracle.bali.ewt.olaf2.OracleLabelUI.propertyChange(OracleLabelUI.java:53)
    at java.beans.PropertyChangeSupport.fire(PropertyChangeSupport.java:335)
    at java.beans.PropertyChangeSupport.firePropertyChange(PropertyChangeSupport.java:327)
    at java.beans.PropertyChangeSupport.firePropertyChange(PropertyChangeSupport.java:263)
    at java.awt.Component.firePropertyChange(Component.java:8428)
    at javax.swing.JLabel.setText(JLabel.java:330)
    at oracle.as.install.engine.modules.presentation.ui.common.label.ModifiedJLabel.setText(ModifiedJLabel.java:183)
    at oracle.as.install.engine.modules.presentation.ui.screens.WelcomeWindow.jbInit(WelcomeWindow.java:303)
    at oracle.as.install.engine.modules.presentation.ui.screens.WelcomeWindow.<init>(WelcomeWindow.java:112)
    at oracle.as.install.engine.modules.presentation.action.LaunchWelcomeWindowAction.execute(LaunchWelcomeWindowAction.java:86)
    at oracle.as.install.engine.modules.presentation.util.ActionQueue.run(ActionQueue.java:70)
    at oracle.as.install.engine.modules.presentation.PresentationModule.prepareAndRunActions(PresentationModule.java:281)
    at oracle.as.install.engine.modules.presentation.PresentationModule.launchModule(PresentationModule.java:235)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at oracle.as.install.engine.InstallEngine.launchModule(InstallEngine.java:580)
    at oracle.as.install.engine.InstallEngine.processAndLaunchModules(InstallEngine.java:522)
    at oracle.as.install.engine.InstallEngine.startOperation(InstallEngine.java:471)
    at oracle.sysman.oio.oioc.OiocOneClickInstaller.main(OiocOneClickInstaller.java:717)
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at oracle.as.install.engine.InstallEngine.launchModule(InstallEngine.java:580)
    at oracle.as.install.engine.InstallEngine.processAndLaunchModules(InstallEngine.java:522)
    at oracle.as.install.engine.InstallEngine.startOperation(InstallEngine.java:471)
    at oracle.sysman.oio.oioc.OiocOneClickInstaller.main(OiocOneClickInstaller.java:717)
Caused by: java.lang.ExceptionInInitializerError
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at javax.swing.UIDefaults.getUI(UIDefaults.java:769)
    at javax.swing.UIManager.getUI(UIManager.java:1016)
    at javax.swing.JComboBox.updateUI(JComboBox.java:266)
    at javax.swing.JComboBox.init(JComboBox.java:231)
    at javax.swing.JComboBox.<init>(JComboBox.java:183)
    at oracle.help.DefaultNavigatorPanel$MinimumSizedComboBox.<init>(DefaultNavigatorPanel.java:791)
    at oracle.help.DefaultNavigatorPanel.<init>(DefaultNavigatorPanel.java:106)
    at oracle.help.Help._initHelpSystem(Help.java:1045)
    at oracle.help.Help.<init>(Help.java:243)
    at oracle.help.Help.<init>(Help.java:200)
    at oracle.help.Help.<init>(Help.java:125)
    at oracle.as.install.engine.modules.presentation.ui.common.help.WizardHelpManager.configure(WizardHelpManager.java:76)
    at oracle.as.install.engine.modules.presentation.action.WizardHelpConfigAction.execute(WizardHelpConfigAction.java:228)
    at oracle.as.install.engine.modules.presentation.util.ActionQueue.run(ActionQueue.java:70)
    at oracle.as.install.engine.modules.presentation.PresentationModule.prepareAndRunActions(PresentationModule.java:281)
    at oracle.as.install.engine.modules.presentation.PresentationModule.launchModule(PresentationModule.java:235)
    ... 8 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
    at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
    at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
    at sun.font.FontDesignMetrics.<init>(FontDesignMetrics.java:350)
    at sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:302)
    at sun.swing.SwingUtilities2.getFontMetrics(SwingUtilities2.java:1113)
    at javax.swing.JComponent.getFontMetrics(JComponent.java:1626)
    at javax.swing.text.PlainView.calculateLongestLine(PlainView.java:639)
    at javax.swing.text.PlainView.updateMetrics(PlainView.java:209)
    at javax.swing.text.PlainView.updateDamage(PlainView.java:527)
    at javax.swing.text.PlainView.insertUpdate(PlainView.java:451)
    at javax.swing.text.FieldView.insertUpdate(FieldView.java:293)
    at javax.swing.plaf.basic.BasicTextUI$RootView.insertUpdate(BasicTextUI.java:1610)
    at javax.swing.plaf.basic.BasicTextUI$UpdateHandler.insertUpdate(BasicTextUI.java:1869)
    at javax.swing.text.AbstractDocument.fireInsertUpdate(AbstractDocument.java:201)
    at javax.swing.text.AbstractDocument.handleInsertString(AbstractDocument.java:748)
    at javax.swing.text.AbstractDocument.insertString(AbstractDocument.java:707)
    at javax.swing.text.PlainDocument.insertString(PlainDocument.java:130)
    at javax.swing.text.AbstractDocument.replace(AbstractDocument.java:669)
    at javax.swing.text.JTextComponent.setText(JTextComponent.java:1669)
    at javax.swing.JTextField.<init>(JTextField.java:243)
    at javax.swing.JTextField.<init>(JTextField.java:183)
    at com.jgoodies.looks.plastic.PlasticComboBoxUI.<init>(PlasticComboBoxUI.java:88)
    ... 33 more
[ERROR]: Installer has encountered an internal Error. Contact Oracle support with details
[EXCEPTION]:java.lang.reflect.InvocationTargetException
So there is a problem with running the SOA Suite installer on Oracle Linux...
The installation worked fine on my Ubuntu machine (see here), and the rcu starts without any problem on Ubuntu:
schroff@zerberus:/home/data/opt/oracle/Middleware/Oracle_Home/oracle_common/bin$ ./rcu

    RCU log file: /tmp/RCU2017-10-14_22-36_851447466/logs/rcu.log
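
One closing hint, offered as an assumption based on the stack trace rather than a verified fix: the ArrayIndexOutOfBoundsException in sun.font.CompositeStrike typically means the JVM cannot find any usable system fonts, so installing a TrueType font package on the Oracle Linux machine (package names may differ per release) might let the Swing-based rcu and installer start:

[oracle@localhost ~]$ su -c 'yum install -y dejavu-sans-fonts dejavu-serif-fonts'
[oracle@localhost ~]$ fc-cache -f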


October 2017 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Tue, 2017-11-07 11:22

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the latest recommended versions of the following updates:

  • Oracle Database Proactive BP 12.1.0.2.171017
  • Oracle Database PSU 12.1.0.2.171017
  • Oracle JavaVM Component Database PSU 12.1.0.2.171017
  • Oracle Database Patch for Exadata BP 11.2.0.4.171017
  • Oracle Database PSU 11.2.0.4.171017
  • Oracle JavaVM Component Database PSU 11.2.0.4.171017
  • Microsoft Windows Database BP 12.1.0.2.170228
  • Oracle JavaVM Component 12.1.0.2.170228 on Windows
  • Microsoft Windows Database BP 11.2.0.4.170418
  • Oracle JavaVM Component 11.2.0.4.170418 on Windows

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.
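
For reference, ETCC ships as two scripts: checkDBpatch.sh for the database tier and checkMTpatch.sh for the application tier. A minimal sketch of a database-tier run looks like this (the patch file name and Oracle Home path are illustrative assumptions; the script prompts for anything else it needs):

$ unzip p17537119_R12_GENERIC.zip              # unpack ETCC from the My Oracle Support patch
$ export ORACLE_HOME=/u01/oracle/PROD/11.2.0   # assumption: your database tier Oracle Home
$ ./checkDBpatch.sh                            # prints the patch recommendation summary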

Categories: APPS Blogs

SQL Server Tips: Deactivate the Customer Experience Improvement Program (CEIP)

Yann Neuhaus - Tue, 2017-11-07 09:17

Before SQL Server 2016, you could check the box “Send Windows and SQL Server Error Reports…” during the installation if you wanted to be part of the Customer Experience Improvement Program (CEIP).
In SQL Server 2016, all of the CEIP services are automatically turned on after the installation.

Why?

SQL Server and SQL Azure now share the same code. On Azure, this service has existed for a long time. It collects a large amount of data to automate various tasks and keep the system functional, including support for the following:

  • Incident Management (CRIs, LSIs)
  • Alert management (proactive approach)
  • Automated management via bots (based on alerts)
  • Machine learning / data science
  • Investigating potential new features that can benefit a maximum of clients

As you can see, the idea behind integrating the CEIP service with SQL Server 2016 is to extend this ability to collect “useful” data for Microsoft, in order to maximize the impact on future developments.

My Thinking

In this article, I do not want to start a discussion about whether to leave this service active or not.
With the guarantees given by Microsoft on the information collected, it is not a question of security either.
The SQL Server Team has published an explicit policy that spells out what data is collected and when: https://www.microsoft.com/EN-US/privacystatement/SQLServer/Default.aspx
As a lot of servers have no internet access, this service is often useless (the data cannot be sent anyway).
In previous versions, I did not install the CEIP on Production environments, so by the same logic, I deactivated this service.

How to Deactivate the CEIP

Disabling this service requires 2 steps, and I use PowerShell commands for both.
The first step is to deactivate all CEIP services.

Deactivate all CEIP services

CEIP is present for 3 SQL Server services:

  • For SQL Server Engine, you have a SQL Server CEIP service
  • For SQL Server Analysis Service (SSAS), you have a SQL Analysis Services CEIP
  • For SQL Server Integration Service (SSIS), you have a SQL Server Integration Services CEIP service 13.0

[Screenshot CEIP01]
As you can see in this picture, we have one CEIP service per instance for the Engine and SSAS, and just one for SSIS (a shared component).
If you have a look at each service, the naming patterns are the same:

  • For SQL Server CEIP service, you have a SQLTELEMETRY$<InstanceName>
  • For SQL Analysis Services CEIP, you have a SSASTELEMETRY$<InstanceName>
  • For SQL Server Integration Services CEIP service 13.0 CEIP, you have just SSISTELEMETRY130

[Screenshot CEIP02]
I run PowerShell as Administrator and run the following commands to get the status of these services:

Get-WMiObject win32_service |? name -Like "SQLTELEMETRY*" | Format-Table name,startname,startmode,state
Get-WMiObject win32_service |? name -Like "SSASTELEMETRY*" | Format-Table name,startname,startmode,state
Get-WMiObject win32_service |? name -Like "SSISTELEMETRY*" | Format-Table name,startname,startmode,state

[Screenshot CEIP03]
We can also be more generic and use this command:

Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state

[Screenshot CEIP04]
I disable these services in 2 steps: the first step is to stop the services and the second step is to disable them:

  • Stop services
    Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | ? state -eq "running" | Stop-Service
  • Disable services
    Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | Set-Service -StartupType Disabled

Here is the “global” script:

##################################################
# Disable CEIP services  #
##################################################
# Show the current status of all CEIP (telemetry) services
Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
# Stop all running CEIP services
Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | ? state -eq "running" | Stop-Service
Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
# Disable all CEIP services so they stay off after a reboot
Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | Set-Service -StartupType Disabled
Get-WmiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
##################################################
##################################################

[Screenshot CEIP05]
All CEIP services are now stopped and disabled. Good job, Stéphane 8-), but we are not finished yet; there is a second step to do…
The second step is to set all CEIP registry keys to 0.

Set all CEIP registry keys to 0

This step is more complex because we have a lot of registry keys. Two parameters have to be set to 0:

  • CustomerFeedback
  • EnableErrorReporting

The first registry key is HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\130\
[Screenshot CEIP06]
The second registry key is HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\130\
[Screenshot CEIP07]
The other registry keys are per instance and per service (Engine, SSAS and SSRS):
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\
[Screenshot CEIP08]
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\
[Screenshot CEIP09]
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\
[Screenshot CEIP10]
To set all these keys to 0, I simply use PowerShell commands:

##################################################
#  Deactivate CEIP registry keys #
##################################################
# Set all CustomerFeedback & EnableErrorReporting in the key directory HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server to 0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\***\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\***\EnableErrorReporting=0
# *** --> Version of SQL Server (100,110,120,130,140,...)
# For the Engine
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\EnableErrorReporting=0
# For SQL Server Analysis Server (SSAS)
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\EnableErrorReporting=0
# For Server Reporting Server (SSRS)
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\EnableErrorReporting=0
# ** --> Version of SQL Server (10,11,12,13,14,...)
##################################################
$Key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$FoundKeys = Get-ChildItem $Key -Recurse | Where-Object -Property Property -eq 'EnableErrorReporting'
foreach ($SqlFoundKey in $FoundKeys)
{
    $SqlFoundKey | Set-ItemProperty -Name EnableErrorReporting -Value 0
    $SqlFoundKey | Set-ItemProperty -Name CustomerFeedback -Value 0
}
##################################################
# Set HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\***\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\***\EnableErrorReporting=0
# *** --> Version of SQL Server(100,110,120,130,140,...)
##################################################
$WowKey = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server"
$FoundWowKeys = Get-ChildItem $WowKey | Where-Object -Property Property -eq 'EnableErrorReporting'
foreach ($SqlFoundWowKey in $FoundWowKeys)
{
$SqlFoundWowKey | Set-ItemProperty -Name EnableErrorReporting -Value 0
$SqlFoundWowKey | Set-ItemProperty -Name CustomerFeedback -Value 0
}

As you can see, I only use the EnableErrorReporting key in the Where-Object clause to find the impacted keys. After running this script, all CEIP registry keys are set to 0…
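
To double-check the result, you can use a small verification sketch like the one below, which lists any key under the same hive whose CustomerFeedback value is still not 0 (it should return nothing):

# List any CEIP keys where CustomerFeedback is still enabled (expect no output)
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server' -Recurse |
    Where-Object { $_.Property -contains 'CustomerFeedback' } |
    Get-ItemProperty |
    Where-Object { $_.CustomerFeedback -ne 0 } |
    Select-Object PSPath, CustomerFeedback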
Et voila, CEIP is totally deactivated!

To finish, I would like to thank all my SQL Server colleagues for their help in getting a good view of this tricky subject. It was also a good internal discussion in our SQL Server Expert Team to define what to do for each customer!   :-)

 

Cet article SQL Server Tips: Deactivate the Customer Experience Improvement Program (CEIP) est apparu en premier sur Blog dbi services.

Why We Chose Mindbreeze for Enterprise Search: Fishbowl’s Competitive Analysis Across Search Platforms

Comparing Mindbreeze to Google Cloud Search, Coveo, Lucidworks, Yippy, Elasticsearch, and Solr

Last month we discussed replacing the Google Search Appliance (GSA) and the Top 5 Reasons We Chose Mindbreeze. In this follow-up, we’ll explore the other vendors who made our shortlist and how they all stack up. In case you missed the last post, here’s a recap of the key requirements against which we were evaluating each solution:

  • Options for searching on-premise content
  • Connectors and connector frameworks for indexing non-web data sources
  • Support for public and secure use cases
  • Tools and APIs for search interface integration
  • Minimal development efforts and ongoing administration required
Mindbreeze vs. Google Cloud Search

As a Google Premier Partner and GSA implementer, we naturally looked to Google for GSA replacement options. At the time of our evaluation, Google Cloud Search did not have any features available to address indexing on-premise content or serving that content through websites or web applications other than their own cloud search interface. In addition, the status of their security integration options and administration experience remained largely unknown. While it was always clear that Google’s new enterprise search index would be cloud-based, the options for pushing enterprise content from on-premise repositories into that index remain unclear. The initial product direction for Google Cloud Search (previously referred to as Springboard) focused on indexing Google’s G Suite data sources such as Gmail, Google Calendar, and Google Drive. Google has since changed their directional statements to reemphasize their intention to implement indexing mechanisms for on-premise content, but even at the time of this writing, that technology is yet to be released.

Our decision to pursue solutions other than Google, and ultimately partner with Mindbreeze, largely came down to the fact that we couldn’t confidently assure our customers that Google would have a replacement ready (and able to meet the aforementioned requirements) in time for the GSA’s end of life. While I continue to be impressed with Google’s cloud innovations and hope those eventually materialize into enterprise search options, Google Cloud Search remains in its infancy.

Mindbreeze vs. Coveo

As a leader in the enterprise search and knowledge management space, Coveo has ranked well for the past several years among the analyst reports for this market. They have a mature product which made our short list of possible solutions. Two primary concerns surrounded Coveo when compared to Mindbreeze and other vendors. First, their product direction is heavily cloud-focused, available only on Amazon Web Services, with a decreasing investment in on-premise search. Our customer base has a strong need to index on-premise content, along with a reasonable number of customers who prefer the search engine itself to be available on-premise for governance reasons.

The other concern surrounding Coveo was price. By their own admission, it is one of the most expensive solutions on the market. Mindbreeze was able to meet our requirements as well or better than Coveo, while providing a stronger commitment to on-premise indexing at a more attractive price point.

Mindbreeze vs. Lucidworks

Lucidworks offers enterprise support for the open source search platform Apache Solr. Their flagship product, Lucidworks Fusion, builds on Solr to add enterprise search features, including connectors and administration interfaces. Our primary reasons for preferring Mindbreeze over Lucidworks concern the ease and speed of both deployment and ongoing administration. While the Fusion platform goes a long way in creating a productized layer on top of Solr, the solution still requires comparatively more work to size, provision, configure, and maintain than Mindbreeze.

Another concern during evaluation was the less-flexible security model available with Lucidworks when compared to Mindbreeze. Mindbreeze supports ACL inheritance from container objects which means if a new user is granted access to a folder containing 50,000 items, only one item (the folder container) must be reindexed to apply the new permissions. Lucidworks applies permissions to each document, so all 50,000 documents would need to be reindexed. While Lucidworks was able to meet our indexing requirements, we felt Mindbreeze offered a shorter time to value, easier ongoing administration, and more flexible security options.
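
To make the reindexing cost concrete, here is a toy model in Python (my own illustration under stated assumptions, not the actual implementation of either product) of container-level ACLs, where granting access means touching a single node instead of 50,000 documents:

# Toy model of ACL inheritance: permissions live on the container,
# so granting access is one update and no document reindexing.
folder_acl = {"folder1": {"alice"}}                    # container-level ACL
docs = {f"doc{i}": "folder1" for i in range(50000)}    # 50,000 items in the folder

def can_read(user: str, doc: str) -> bool:
    # effective permission is resolved from the containing folder's ACL
    return user in folder_acl[docs[doc]]

folder_acl["folder1"].add("bob")   # grant access: one change, not 50,000
print(can_read("bob", "doc42"))    # True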

Mindbreeze vs. Yippy

The Yippy Search Appliance attempts to offer close feature parity to the GSA and is available as a cloud solution or an on-premise appliance. Our biggest concern with Yippy, when compared to Mindbreeze, was its immaturity as an enterprise search product. Born out of the Yippy metasearch engine, the Yippy Search Appliance was introduced in 2016 specifically in response to the GSA’s end of life.

The solution is notably absent from consideration by both Forrester and Gartner in their respective 2017 market reports which base inclusion criteria on factors such as referenceable enterprise customer base and proven market presence. The solution also lacks interfaces for customers and partners to create custom connectors to proprietary data sources, an important requirement for many of our customers. As a search appliance, we felt Mindbreeze offered a lower risk solution with a longer history, large reference customer base, and mature feature set.

What about open source options?

Open source options were considered during our evaluation but quickly eliminated due to the vastly greater amount of development time and steeper customer learning curve associated with their implementation. For these reasons, we felt open source search solutions were not a good fit for our customers. Due to the high volume of questions we get regarding these options, I felt it worthwhile to include a few comments on the most popular open source search tools.

Elasticsearch

Elasticsearch is a popular open source search and analytics project created by Elastic.co. Elastic itself doesn’t claim to be an enterprise search solution, but they do offer enterprise analytics solutions, and the Elasticsearch technology is often embedded into enterprise applications to provide search functionality. It’s easy to see the confusion this can create. Gartner did not include Elastic in their 2017 Magic Quadrant for Insight Engines. Elastic was included in the Forrester Wave on Cognitive Search and Knowledge Discovery as a nonparticipating vendor where Forrester stated, “Elastic says that it is not in the enterprise search market, but many enterprise customers ask Forrester about Elasticsearch, so we have included Elastic…” As a search tool, we found Elastic was better suited to log analytics than enterprise search as it lacks many enterprise search features including security, connectors, and pre-built search apps.

Solr

Apache Solr is a widely used open source search project. Many contributions to the project are made by Lucidworks (mentioned above) whose Fusion platform extends this core technology. Standalone Solr is a framework for creating a custom search engine implementation. While powerful and often used to build highly specialized search tools, it is missing out-of-the-box enterprise features including connectors, administration interfaces, and mechanisms to support secure search.

Lucene

Apache Lucene is a popular open source search engine framework. It’s a low-level library which implements indexing and search functionality and must be integrated into another application for use. Lucene provides the base search engine behind both Solr and Elasticsearch.

Finding Success with Mindbreeze

After undergoing our evaluation last winter and joining the Mindbreeze partner network, we continue to find Mindbreeze offers an excellent combination of built-in features with tools for extending capabilities when necessary. In the past year we’ve released our Oracle WebCenter Content Connector for Mindbreeze, had ten employees complete the Mindbreeze Expert Certification and helped a long-time customer migrate from GSA to Mindbreeze. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us or leave a comment below.

Time running out on your GSA?

Our expert team knows both GSA and Mindbreeze. We’ll help you understand your options and design a migration plan to fit your needs.

Contact Us

The post Why We Chose Mindbreeze for Enterprise Search: Fishbowl’s Competitive Analysis Across Search Platforms appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Oracle EBS Suite blank Login page - Post Exadata migration

Syed Jaffar - Tue, 2017-11-07 08:35
As part of an EBS database migration to Exadata, we recently deployed a brand new Exadata X5-2L (eighth rack) and migrated an Oracle EBS database from the typical storage/server technologies.

Below are the environment details:

EBS Suite 12.1.3 (running on two application servers behind a hardware load balancer)
Database: 11.2.0.4 RAC database with 2 instances

After the database migration, the buildxml and autoconfig procedures went well on both the application and database tiers. However, when the EBS login page was launched, it came up as just a blank page, and the apps passwords could not be changed through the typical procedure. We wondered what had gone wrong, as none of the procedures gave any significant failure indication; everything completed fine and we could see the successful completion messages.

After a quick initial investigation, we found that there was an issue with the GUEST user, and also that the profile was not loaded when autoconfig was run on the application server. In the autoconfig log file, we could see the process had failed to update the password (ORACLE). We then tried all the workarounds recommended on Oracle Support and other websites. Unfortunately, none of them helped us.

After spending almost a whole day investigating and analyzing the root cause, we looked at the DB components and their status through dba_registry. We found the JServer JAVA Virtual Machine component INVALID. I then realized the issue had happened during the Exadata software deployment: there was an error while applying a patch during the DB software installation, due to a conflict between patches, and as a result catbundle was never executed.

Without wasting a single second, we ran catbundle.sql followed by utlrp.sql.
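
For reference, a minimal sketch of the checks and commands involved (the catbundle arguments depend on the bundle patch applied, so treat this as an illustration rather than the exact commands we ran):

$ sqlplus / as sysdba
SQL> -- check component status: JServer JAVA Virtual Machine should be VALID
SQL> select comp_name, status from dba_registry;
SQL> -- reload the bundle patch SQL changes ("exa" for Exadata bundle patches, "psu" for PSUs)
SQL> @?/rdbms/admin/catbundle.sql exa apply
SQL> -- recompile invalid objects
SQL> @?/rdbms/admin/utlrp.sql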

Guess what: all the issues disappeared. We re-ran autoconfig on the application servers, after which we could change the apps user password and see the login page too.

It was quite a nice experience.


Chartis Names Oracle RiskTech100 Leader

Oracle Press Releases - Tue, 2017-11-07 08:00
Press Release
Chartis Names Oracle RiskTech100 Leader Oracle wins four best-in-category awards at annual RiskTech event

Redwood Shores, Calif.—Nov 7, 2017

Oracle Financial Services Analytical Applications (OFSAA) has been named a leader on this year’s Chartis RiskTech100 list. Adding to this success, Oracle brought home four category awards: Risk Data Aggregation & Reporting (3rd year in a row), Balance Sheet Risk Management, Banking (Industry Category) and Core Technology (Chartis Category). This marks the second consecutive year that Oracle Financial Services has ranked at the top of the Chartis RiskTech100.
 
“For the second year in a row we are delighted to see our OFSAA suite recognized as an industry leader on the RiskTech100 list,” said Vice President of the Oracle Financial Services Global Business Unit, Ambreesh Khanna. “Our integrated architecture with a broad array of Risk, Finance, Compliance and Front Office applications continues to help banks manage their risk, deploy capital more efficiently, reduce operating costs and improve overall profitability. Our unified data architecture, which allows banks to source data once and use it multiple times, is unique in the market. Our relentless focus on using modern technology within our platform allows us to scale our applications to meet the demands of the largest banks while significantly lowering operating costs.”
 
The Chartis RiskTech100 ranking is acknowledged globally as a comprehensive independent study of major players in risk and compliance technology. Companies featured in the RiskTech100 are drawn from a range of RiskTech specialisms and meet the needs of both financial and non-financial organizations. Rankings are determined by a focus on solutions, industry segments and success factors. Only companies that sell their own risk management software products and solutions are included within the report.
 
“Based on our extensive methodology, Oracle has once again garnered strong placement on the RiskTech100,” said Rob Stubbs, Head of Research at Chartis Research. “As the risk and financial crime landscape has evolved, Oracle Financial Services continues to keep pace with the industry to deliver strong solutions and continued growth.”
 
RiskTech100 evaluates the fast-moving RiskTech market with a considered methodology that has seen a greater focus on financial crime in 2018. The report rankings reflect Chartis analyst expert opinions along with research into market trends, participants, expenditure patterns and best practices. Chartis began accumulating data for this study in January 2017 and validated the analysis through several phases of independent verification. Individual ranking assessment criteria comprise six equally weighted categories: functionality, core technology, strategy, customer satisfaction, market presence and innovation. Oracle scored high marks across the board, re-affirming its position at the forefront of RiskTech.
 
Download an executive summary of the 2018 Chartis RiskTech100 ranking here.
Contact Info
Alex Moriconi
Oracle
+1-650-607-6598
alex.moriconi@oracle.com
About Oracle
The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at http://cloud.oracle.com.
 
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
 
About Chartis
Chartis is the leading provider of research and analysis on the global market for risk technology, and is part of InfoPro Digital. Chartis's goal is to support enterprises as they drive business performance through better risk management, corporate governance and compliance, and to help clients make informed technology and business decisions by providing in-depth analysis and actionable advice on virtually all aspects of risk technology.
 
RiskTech Quadrant®, RiskTech100® and FinTech Quadrant™ are registered trademarks of Chartis Research (http://www.chartis-research.com).
 
Talk to a Press Contact

Alex Moriconi

  • +1-650-607-6598
