Feed aggregator

Experience the Future of Cloud at Oracle OpenWorld 2017

Oracle Press Releases - Fri, 2017-09-29 11:00
Press Release
Experience the Future of Cloud at Oracle OpenWorld 2017
Technology’s Most Innovative Showcase Kicks Off in San Francisco

Oracle OpenWorld, San Francisco, Calif.—Sep 29, 2017

Oracle OpenWorld 2017 By the Numbers

This weekend, Oracle welcomes tens of thousands of customers and partners spanning 175 countries and over 18 million live-stream viewers to Oracle OpenWorld 2017. Located at San Francisco’s newly redesigned Moscone Center, conference events will span multiple venues in the city’s downtown from October 1-5. Heralded as the industry’s most important business and technology show, Oracle OpenWorld delivers unprecedented opportunities to hear from the greatest minds across all event programming, including actor and director Joseph Gordon-Levitt, former United States Senator Barbara Boxer, Executive Consultant for the Los Angeles Clippers Jerry West, former Secretary of Defense Leon Panetta, and President of the Council on Foreign Relations Richard Haass.

On Sunday, Oracle CTO and Executive Chairman Larry Ellison opens the event with a special keynote showcasing all of the innovations delivered in the Oracle Cloud. Mainstage presentations will continue throughout the week, featuring Ellison, Oracle CEO Mark Hurd, and Oracle President of Product Development Thomas Kurian. Leaders from the world’s most interesting brands, including Carbon, Trek, FedEx and Gap, will join Oracle executives on stage to discuss pressing topics impacting business and technology today and in the future.

This year, Oracle OpenWorld brings innovation to learning through a new series of session formats developed in collaboration with Stanford University, as well as reimagined exhibition halls. The conference’s latest iteration of “Collective Learning” features cutting edge session designs, including: Brain Snacks, 1:1 conversations with fellow experts, and Make Your Case, hands-on workshops tackling the best Oracle case studies. Oracle OpenWorld Exchange, the conference’s redesigned exhibition hall, debuts to foster community, spark learning, promote innovation, and unite our customers, partners, and attendees.

“As we raise the curtain on Oracle OpenWorld 2017, we welcome more than 60,000 customers and partners to learn about transforming their business with Oracle Cloud,” said Judy Sim, Oracle’s Chief Marketing Officer. “The event has evolved as our customers’ needs have changed and is now one of the leading technology conferences in the world. We are thrilled that, over the last 20 years, the event has brought a positive economic impact worth more than $3 billion to the City of San Francisco.”

To Learn and Explore:
  • Sessions: Tap into an elite network of world-class speakers totaling 67,500+ years of industry experience. Select from 2,311 sessions presented by 3,048 customer and partner speakers, more than 523 Oracle demos and case studies showcasing emerging technology, as well as hundreds of partner and customer exhibitions.
  • Oracle Keynotes:
    • Sunday, October 1, 5:00 p.m. – 7:00 p.m.
      • Oracle CTO and Executive Chairman Larry Ellison opens the conference with an inside look at the future of Oracle Cloud and its innovation path.
      • Doug Fisher, Senior Vice President and General Manager, Software and Services Group, Intel, presents the power of data, and how data offers massive enterprise-class cloud computing opportunities.
    • Monday, October 2, 9:00 a.m. – 10:15 a.m.
      • Oracle CEO Mark Hurd reveals where we are now and where we are headed in a cloud foundational world. Joining him on stage will be leaders from Oracle customers Bloom Energy, FedEx and Gap.
    • Tuesday, October 3
      • 9:00 a.m. – 11:00 a.m. – Oracle President of Product Development Thomas Kurian and Dave Donatelli, Oracle Executive Vice President, Cloud Business Group, showcase how Oracle Cloud is harnessing the power of emerging technologies like artificial intelligence, Internet of Things, and blockchain to transform organizations of all sizes. They will be joined by Richard Noble, Director of the Bloodhound Project, an inspiring initiative that engages the next generation in science, technology, engineering and math by aiming to surpass the world land speed record.
      • 2:00 p.m. – 3:00 p.m. – Larry Ellison unveils the future of databases in the cloud, including Oracle Autonomous Database, the world’s first “self-driving” database.
    • Wednesday, October 4, 9:00 a.m. – 11:00 a.m.
      • Oracle CEO Mark Hurd returns to the mainstage with NetSuite’s Executive Vice President of Development Evan Goldberg and special guests, to discuss the role technology plays in getting ahead of the competition.
  • Oracle’s Leaders Circle: Connect with luminaries on industry trends, foreign affairs, economics and security at this exclusive, invitation-only executive program hosted by Oracle CEOs Safra Catz and Mark Hurd. Join Senator Barbara Boxer and Newt Gingrich, 50th Speaker of the U.S. House of Representatives, for a provocative discussion about the future of the United States.
  • The Innovation Studio: Experience innovations from Design Tech High School students and Oracle Education Foundation. Meet startups from Oracle’s Startup Cloud Accelerator, and talk with Oracle customers, partners, and industry business unit experts.
  • Oracle Cloud User Experience Lab: Experience hands-on demos of the latest Release 13 Oracle Cloud Applications, and learn about Oracle’s vision for the future of work, including experimental robotics, artificial intelligence, augmented reality, chatbots, and more of the emerging technology tools in the smart UX toolkit.
  • JavaOne Developer Lounge: Use Oracle Internet of Things (IoT) and Big Data technologies to brew your own beer. Create your own sculptures and furniture with a 3D printer. Relive “The Matrix” and shoot your own slow motion video with 60 Raspberry Pi cameras in the BulletTime Photo Booth. Interact with a cloud chatbot robot powered by the Oracle Intelligent Bots running on Oracle Mobile Cloud Service.
  • Oracle Code Event: Join developers from around the world in this one-day event covering machine learning, chatbots, cloud, databases, programming languages, DevOps, and much more.
  • Oracle NetSuite SuiteConnect: The best of SuiteWorld comes to Oracle OpenWorld for the first time. Held on October 4, this program features NetSuite users, Oracle executives, product experts and partners.
  To Support the Community and Environment:
  • Oracle Academy’s JavaOne4Kids: Designed for children ages 10-16, this event lets attendees use Raspberry Pi and Java programming to catch escaped Pokémon, create a robot and bring it to life, and make computer games using Greenfoot and Stride, among other fun activities. Oracle Academy is one of Oracle’s key investments in our collective future. In fiscal year 2016, the program impacted over 3.5 million students in 120 countries through $3.75 billion in direct and indirect resources.
  • Plant a Billion Trees: Learn how The Nature Conservancy and Oracle Giving are helping to advance reforestation globally. As part of its participation in The Nature Conservancy’s Plant a Billion Trees initiative, Oracle has already achieved 41 percent of its goal to plant one million trees.
  • Dian Fossey Gorilla Fund International: Discover how Oracle Cloud is helping to save the gorillas. Get a sneak peek of an upcoming National Geographic three-part special on Dian Fossey’s life and work as a gorilla conservationist. Hear from Tara Stoinski, President, CEO and Chief Scientific Officer of the Dian Fossey Gorilla Fund about its 27-year partnership with Oracle, and how Oracle Cloud technology has enabled the organization to revolutionize its data management and make its database – the world’s largest, most comprehensive collection of data on a wild great ape population – available to scientists, researchers and students without charge.
  • Ride for a Reason: Support the victims of the recent hurricanes by choosing Lyft for Oracle OpenWorld transportation. Between October 1-5, five percent of the cost of rides will be donated to the American Red Cross. Enter code OOW17 using a Lyft business profile.
  To Connect and Play:
  • Oracle CloudFest.17: Dance the night away with Grammy award winners The Chainsmokers and sing along with pop sensation Ellie Goulding at Oracle’s legendary customer appreciation event taking place on October 4 at AT&T Park.
  • SuiteConnect NextUp: Celebrate the day’s experiences at a special concert with Royal Machines, joined by “special guests” on October 3 at Howard Street mainstage.
Contact Info
Julie Sugishita
Oracle Corporate Communications
+1.650.506.0076
julie.sugishita@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle OpenWorld

Oracle OpenWorld, the industry's most important business and technology conference for the past 20 years, hosts tens of thousands of in-person attendees as well as millions online. Dedicated to helping businesses leverage Cloud for their innovation and growth, the conference delivers deep insight into industry trends and breakthroughs driven by technology. Designed for attendees who want to connect, learn, explore and be inspired, Oracle OpenWorld offers more than 2,500 educational sessions led by more than 2,000 customers and partners sharing their experiences, first hand. With hundreds of demos and hands-on labs, plus exhibitions from more than 400 partners and customers from around the world, Oracle OpenWorld has become a showcase for leading cloud technologies, from Cloud Applications to Cloud Platform and Infrastructure. For more information; to register; or to watch Oracle OpenWorld keynotes, sessions, and more, visit www.oracle.com/openworld. Join the Oracle OpenWorld discussion on Twitter.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julie Sugishita

  • +1.650.506.0076

Partitioned Indexes

Hemant K Chitale - Fri, 2017-09-29 10:15
Most discussions about Partitioning in Oracle are around Table Partitioning.  Rarely do we come across Index Partitioning.
A couple of days ago, there was an Oracle Community question on Partitioned Indexes.

So, here is a quick listing of Index Partitioning options  (these tests are in 11.2.0.4)


First, I start with a regular, non-partitioned table.

SQL> create table non_partitioned  
2 (id_col number,
3 data_col_1 number,
4 data_col_2 number,
5 data_col_3 varchar2(15)
6 )
7 /

Table created.

SQL>


I now attempt to create an Equi-Partitioned (LOCAL) Index on it.

SQL> create index equi_part on non_partitioned (id_col) local;
create index equi_part on non_partitioned (id_col) local
*
ERROR at line 1:
ORA-14016: underlying table of a LOCAL partitioned index must be partitioned


SQL>


As expected I can't create a LOCAL index on a non-partitioned table.

Can I create any partitioned index on this table ?

I try two different GLOBAL PARTITIONed Indexes

SQL> create index global_part   
2 on non_partitioned (id_col) global
3 partition by range (id_col)
4 (partition p_100 values less than (101),
5 partition p_200 values less than (201)
6 )
7 /
)
*
ERROR at line 6:
ORA-14021: MAXVALUE must be specified for all columns


SQL>
SQL> create index global_part
2 on non_partitioned (id_col) global
3 partition by range (id_col)
4 (partition p_100 values less than (101),
5 partition p_200 values less than (201),
6 partition p_max values less than (MAXVALUE)
7 )
8 /

Index created.

SQL>
SQL> create index global_part_comp
2 on non_partitioned (id_col, data_col_3) global
3 partition by range (id_col, data_col_3)
4 (partition p_1 values less than (101,'M'),
5 partition p_2 values less than (101,MAXVALUE),
6 partition p_3 values less than (201,'M'),
7 partition p_4 values less than (201,MAXVALUE),
8 partition p_max values less than (MAXVALUE, MAXVALUE)
9 )
10 /

Index created.

SQL>


So, I must have a MAXVALUE partition for the Index.  Note that the two indexes above are now Partitioned without the table itself being partitioned.

SQL> select index_name, partitioned
2 from user_indexes
3 where table_name = 'NON_PARTITIONED'
4 order by 1
5 /

INDEX_NAME                     PAR
------------------------------ ---
GLOBAL_PART                    YES
GLOBAL_PART_COMP               YES

SQL>


The above indexes are Prefixed Global Partitioned Indexes. Can I create a Non-Prefixed Global Partitioned Index -- an Index where the Partition Key is not formed by the left-most columns of the index?

SQL> create index global_part_nonprefix
2 on non_partitioned (id_col, data_col_3) global
3 partition by range (data_col_1)
4 (partition p_1 values less than (101),
5 partition p_2 values less than (201),
6 partition p_max values less than (MAXVALUE)
7 )
8 /
partition by range (data_col_1)
*
ERROR at line 3:
ORA-14038: GLOBAL partitioned index must be prefixed


SQL>
SQL> !oerr ora 14038
14038, 00000, "GLOBAL partitioned index must be prefixed"
// *Cause: User attempted to create a GLOBAL non-prefixed partitioned index
// which is illegal
// *Action: If the user, indeed, desired to create a non-prefixed
// index, it must be created as LOCAL; otherwise, correct the list
// of key and/or partitioning columns to ensure that the index is
// prefixed

SQL>


So, I have proved that a Non-Partitioned Table cannot have a LOCAL Partitioned Index or a Non-Prefixed Global Partitioned Index but can still have a Global Partitioned Index where the Partition Key is left-prefixed from the Index Key. Also, that a Global Partitioned Index can be a Composite Index with columns of different datatypes.
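
As a side note, the data dictionary reports how each of these indexes is classified. Here is a quick query (a sketch, not part of the original session output) against USER_PART_INDEXES, which exposes LOCALITY (LOCAL/GLOBAL) and ALIGNMENT (PREFIXED/NON_PREFIXED):

SQL> select index_name, locality, alignment
  2  from user_part_indexes
  3  where table_name = 'NON_PARTITIONED'
  4  order by 1
  5  /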

Let me now proceed with a Partitioned Table.

SQL> create table partitioned
2 (id_col number,
3 data_col_1 number,
4 data_col_2 number,
5 data_col_3 varchar2(15)
6 )
7 partition by range (id_col)
8 (partition p_100 values less than (101),
9 partition p_200 values less than (201),
10 partition p_max values less than (MAXVALUE)
11 )
12 /

Table created.

SQL>


First, the Equi-Partitioned (LOCAL) Index.

SQL> create index part_equi_part
2 on partitioned (id_col) local
3 /

Index created.

SQL> select partition_name, partition_position
2 from user_ind_partitions
3 where index_name = 'PART_EQUI_PART'
4 order by 2
5 /

PARTITION_NAME                 PARTITION_POSITION
------------------------------ ------------------
P_100                                           1
P_200                                           2
P_MAX                                           3

SQL>


The usage of the LOCAL keyword instead of GLOBAL defines the Index as equi-partitioned with the table.  Index Partitions are automatically created to match the Table Partitions with the same Partition Names.  It is possible to create a LOCAL Partitioned Index and manually specify Partition Names but this, in my opinion, is a bad idea.  Attempting to manually name each Partition for the Index can result in a mis-match between Table Partition Names and Index Partition Names.
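
To illustrate, here is a sketch (not run as part of the original session) of manually naming the partitions of a LOCAL index -- the index partition names ip_* deliberately differ from the table partition names p_*, which is exactly the kind of mismatch to avoid:

SQL> create index part_equi_part_named
  2  on partitioned (data_col_1) local
  3  (partition ip_1,
  4   partition ip_2,
  5   partition ip_max
  6  )
  7  /

The partition list must contain exactly as many partitions as the table (three here), and each index partition maps positionally to the corresponding table partition.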

Next, I define two GLOBAL Partitioned Indexes on this table.

SQL> create index part_gbl_part  
2 on partitioned (data_col_1) global
3 partition by range (data_col_1)
4 (partition p_1 values less than (1001),
5 partition p_2 values less than (2001),
6 partition p_3 values less than (3001),
7 partition p_4 values less than (4001),
8 partition p_max values less than (MAXVALUE)
9 )
10 /

Index created.

SQL> create index part_gbl_part_comp
2 on partitioned (data_col_2, data_col_3) global
3 partition by range (data_col_2, data_col_3)
4 (partition p_a values less than (10, 'M'),
5 partition p_b values less than (10, MAXVALUE),
6 partition p_c values less than (20, 'M'),
7 partition p_d values less than (20, MAXVALUE),
8 partition p_e values less than (30, 'M'),
9 partition p_f values less than (30, MAXVALUE),
10 partition p_max values less than (MAXVALUE, MAXVALUE)
11 )
12 /

Index created.

SQL>
SQL> l
1 select index_name, partition_name, partition_position
2 from user_ind_partitions
3 where index_name in
4 (select index_name from user_indexes
5 where table_name = 'PARTITIONED'
6 )
7* order by 1,3
SQL> /

INDEX_NAME         PARTITIO PARTITION_POSITION
------------------ -------- ------------------
PART_EQUI_PART     P_100                     1
PART_EQUI_PART     P_200                     2
PART_EQUI_PART     P_MAX                     3
PART_GBL_PART      P_1                       1
PART_GBL_PART      P_2                       2
PART_GBL_PART      P_3                       3
PART_GBL_PART      P_4                       4
PART_GBL_PART      P_MAX                     5
PART_GBL_PART_COMP P_A                       1
PART_GBL_PART_COMP P_B                       2
PART_GBL_PART_COMP P_C                       3
PART_GBL_PART_COMP P_D                       4
PART_GBL_PART_COMP P_E                       5
PART_GBL_PART_COMP P_F                       6
PART_GBL_PART_COMP P_MAX                     7

15 rows selected.

SQL>


The Equi-Partitioned (LOCAL) Index has the same number (and, recommended, names) of Partitions as the Table.
However, the GLOBAL Indexes can have different numbers of Partitions.

As with the first case, I cannot create a Global Non-Prefixed Partitioned Index (where the Index Partition Key is not a left-prefix of the Index).

SQL> create index part_global_part_nonprefix
2 on partitioned (id_col, data_col_3) global
3 partition by range (data_col_1)
4 (partition p_1 values less than (101),
5 partition p_2 values less than (201),
6 partition p_max values less than (MAXVALUE)
7 )
8 /
partition by range (data_col_1)
*
ERROR at line 3:
ORA-14038: GLOBAL partitioned index must be prefixed


SQL>


In this blog post, I haven't touched on Partial Indexing (a 12c feature).

I haven't touched on Unique LOCALly Partitioned Indexes.

I haven't demonstrated the impact of Partition Maintenance operations (TRUNCATE, DROP, MERGE, ADD, SPLIT) on LOCAL and GLOBAL Indexes here -- although I have touched on such operations and LOCAL indexes in earlier blog posts.

Categories: DBA Blogs

SQL Server 2016: New Dynamic Management Views (DMVs)

Yann Neuhaus - Fri, 2017-09-29 08:32

In SQL Server 2016, you will discover a lot of new Dynamic Management Views (DMVs).
In this article, I will just give you a little overview of these useful views for us as DBAs.

SQL Server 2012 has 145 DMVs and SQL Server 2014 has 166 DMVs.
Now, SQL Server 2016 has 185 DMVs.

How to see it?

It is very easy to have a look using the sys.all_objects view:

SELECT * FROM sys.all_objects WHERE type = 'V' AND name LIKE 'dm_%' order by name ASC


From SQL Server 2012 to SQL Server 2014, we can notice that a lot of the new DMVs come with the In-Memory technology, with names of the form “dm_xtp_xxxxxxxx” or “dm_db_xtp_xxxxxxxx”.

In SQL Server 2016, a lot of new “dm_exec_xxxxxxxx” views are present.

All definitions for these views come from the Microsoft documentation or web site.

To begin, you will see 10 DMVs for the PolyBase technology:

  • dm_exec_compute_node_status
  • dm_exec_dms_workers

A useful msdn page summarizes all the DMVs for these new views here.

Other dm_exec_xxx views are particularly useful, such as:

  • dm_exec_query_optimizer_memory_gateways
    • Returns the current status of resource semaphores used to throttle concurrent query optimization.
    • Microsoft Reference here
  • dm_exec_session_wait_stats
    • Returns information about all the waits encountered by threads that executed for each session (see the example query after this list)
    • Microsoft Reference here
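
As an illustration of this last view, here is a small query sketch (the session_id value 52 is just a placeholder for a session you are interested in):

SELECT session_id, wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = 52
ORDER BY wait_time_ms DESC;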

3 new DMVs for the Columnstore technology:

  • dm_column_store_object_pool
  • dm_db_column_store_row_group_operational_stats
    • Returns current row-level I/O, locking, and access method activity for compressed rowgroups in a columnstore index.
    • Microsoft Reference here
  • dm_db_column_store_row_group_physical_stats
    • Provides current rowgroup-level information about all of the columnstore indexes in the current database
    • Microsoft Reference here

2 new DMVs for Stretch Databases in the database context and with rda (remote database archive):

  • dm_db_rda_migration_status
    • For the current database, lists the state of the remote data archive schema update task.
    • Microsoft Reference here

This list can change if a Service Pack is applied.
This is just a little reference about these useful views! 8-)

 

Cet article SQL Server 2016: New Dynamic Management Views (DMVs) est apparu en premier sur Blog dbi services.

“_suppress_identifiers_on_dupkey” – the SAP workaround for bad design

Yann Neuhaus - Fri, 2017-09-29 08:01

In SQL, ‘upsert’ is a conditional insert or update: if the row is there, you update it; if it is not there, you insert it. In Oracle, you should use a MERGE statement for that. You are clearly doing it wrong if you code something like:

begin
insert...
exception
when dup_val_on_index then update...
end;


But it seems that there are many applications with this bad design, and Oracle has introduced an underscore parameter for them: “_suppress_identifiers_on_dupkey”. You won’t be surprised that this one is part of the long list of parameters required for SAP.

Let’s investigate this.

Insert – Exception – Update

So the idea is to try an insert first, rely on the unique constraint (primary key) to get an exception if the row exists, and in this case update the existing row. There are several flaws with that.

The first problem is that it is not as easy as it looks. If a concurrent session deletes the row between your insert and your update, then the update will fail. You have to manage this. And the failed insert cannot leave a lock on the row that was not inserted.

The second problem is that the SQL engine is optimized for transactions which commit. When the ‘dup_val_on_index’ exception occurs, you have already inserted the table row, updated some indexes, etc., and all that has to be rolled back when the exception occurs. This generates unnecessary contention on the index leaf block, and unnecessary redo.

Then the third problem, and probably the worst one, is that an exception is an error. And error management has a lot of work to do, such as looking into the dictionary for the violated constraint name in order to give you a nice error message.

I’ve created the following table:

create table demo as select * from dual;
create unique index demo on demo(dummy);

And I’ve run 10 million inserts on it, all with duplicates:

exec for i in 1..1e7 loop begin insert into demo values('x'); exception when others then null; end; end loop;

Here are some extracts from the AWR report, based on manual snapshots taken before and after.

Elapsed: 20.69 (mins)
DB Time: 20.69 (mins)

This has run for 20 minutes.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 33.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 92.31 % Non-Parse CPU: 94.90
Flash Cache Hit %: 0.00

The ‘Execute to Parse %’ shows that 2/3 of the statements are parsed each time.


SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 19-20
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 180,125,740
-> Captured SQL account for 127.7% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.80094E+08 1 1.800942E+08 100.0 1,239.8 99.5 .3 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
1.60094E+08 10,000,000 16.0 88.9 983.1 100.3 .4 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')
 
49,999,995 9,999,999 5.0 27.8 201.1 103.2 0 2skwhauh2cwky
PDB: PDB1
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.u
ser#
 
19,999,998 9,999,999 2.0 11.1 148.5 98.9 0 2jfqzrxhrm93b
PDB: PDB1
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# =
cd.con# and cd.enabled = :1 and c.owner# = u.user#

My failed inserts have read on average 16 blocks for each attempt. That’s too much for doing nothing. And in addition to that, I see two expensive statements parsed and executed each time: one to get the object name and one to get the constraint name.
This is how Oracle retrieves the error message, which is:

 
ORA-00001: unique constraint (SCOTT.DEMO) violated
 

This is a big waste of resources. I did this test in PL/SQL, but if you cumulate all worst practices and run those inserts row by row, then you will see those colors:

The orange is ‘log file sync’ because you generate more redo than necessary.
The green is ‘CPU’ because you read more blocks than necessary.
The red is ‘SQL*Net break/reset to client’ when the server process sends the error.

_suppress_identifiers_on_dupkey

When you set “_suppress_identifiers_on_dupkey” to true, Oracle will not return the name of the constraint which is violated, but only the information which is already there in the session context.
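
For reference, setting it would look like this (a sketch only — as an underscore parameter it should be set only when instructed by Oracle Support or, as here, by the SAP notes; I assume it can be changed dynamically, otherwise add scope=spfile and restart):

SQL> alter system set "_suppress_identifiers_on_dupkey" = true;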

Here is the message that you get:

 
ORA-00001: unique constraint (UNKNOWN.obj#=73375) violated
 

Where 73375 is the OBJECT_ID of the index where the unique constraint exception has been violated.

You have less information, but it is faster:

Elapsed: 15.45 (mins)
DB Time: 15.48 (mins)

There is no Soft Parse overhead:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 96.43
Execute to Parse %: 99.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 90.38 % Non-Parse CPU: 99.95
Flash Cache Hit %: 0.00

Our statement is the only one using the CPU and reads less blocks:

SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 21-22
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 110,132,467
-> Captured SQL account for 81.8% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.10091E+08 1 1.100906E+08 100.0 926.2 98.8 1 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
90,090,580 10,000,000 9.0 81.8 515.7 99.1 1.9 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')

This parameter is a workaround for bad design, but not a solution.

Update – no rows – Insert

In order to avoid all this rollback and exception management overhead, there is another idea: start with the update and, when no row was found, insert it. This is easy with SQL%ROWCOUNT.

begin
update ...;
if SQL%ROWCOUNT = 0 then insert ...; end if;
end;

This is more efficient, but still subject to a concurrent session inserting the row between your update and your insert. But at least you manage the different scenarios with a condition on ROWCOUNT rather than with an exception, which is more scalable.

So what?

Always use the database in the expected way. Exceptions and errors are not for the normal scenario of the use-case. Exceptions should be unusual. The solution is to use the MERGE statement, which has been implemented exactly for this reason: do an upsert without the error management overhead, and with the statement isolation level which prevents errors in a multi-user environment.
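
For illustration, here is a minimal MERGE sketch with a hypothetical table T (ID primary key, VAL) -- the single-column DEMO table above is not a good candidate because columns referenced in the ON clause cannot be updated:

merge into t
using (select :id id, :val val from dual) s
on (t.id = s.id)
when matched then update set t.val = s.val
when not matched then insert (id, val) values (s.id, s.val);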

 

Cet article “_suppress_identifiers_on_dupkey” – the SAP workaround for bad design est apparu en premier sur Blog dbi services.

Businesses Transform Customer Engagements with Oracle Live Experience Cloud

Oracle Press Releases - Fri, 2017-09-29 07:00
Press Release
Businesses Transform Customer Engagements with Oracle Live Experience Cloud
Accelerates Transformation through Rich, Highly Personalized Mobile-First Interactions

Redwood Shores, Calif.—Sep 29, 2017

Oracle today announced Oracle Live Experience Cloud, a customer engagement service for the mobile generation. With the mobile and digital landscape shaping the way customers interact with businesses, companies must quickly adapt to changing expectations to deliver frictionless, real-time, contextual experiences across channels. With Oracle Live Experience Cloud, users can address these new requirements and bring a new dimension to their mobile and business applications by serving customers in the way that best meets their needs, be it HD voice, HD video, screen sharing, or annotations.

As such, businesses will have the ability to quickly resolve customer issues, drive greater customer loyalty, and increase satisfaction by engaging users on the right channel at the right time. Agents will also be empowered to deliver better customer experiences by having access to contextual customer data and insights, cutting call times and limiting customer frustration.

“Nearly 70% of IT and business leaders say ‘improving customer experience’ is the goal of their digital transformation initiative, and advancements in the contact center are crucial to success,” says Robin Gareiss, president of Nemertes Research1. “Successful digital transformation requires short time to market. By leveraging a cloud-based solution, organizations can start seeing improvements in CSAT scores, revenue, and customer retention immediately. What’s more, the ability to retain context across channels from within the native app is a huge development and will dramatically boost customer satisfaction.”

A recent Oracle report titled “The Future of Enterprise Communications: The Cloud Redefines Customer Experience” noted that while 65 percent of companies agree communications embedded within cloud applications will become the dominant way of communicating with employees, suppliers and customers, many currently lack the ability to do so effectively. Oracle Live Experience Cloud enables embedded contextual data and business analytics so users can easily switch between channels without losing key information already shared. Regardless of the customer’s preferred channel, the user will enjoy a more streamlined experience while the business gains valuable customer insights that can be leveraged within its core business applications.

“Nothing is more aggravating than dealing with a call center or service desk where you are stuck in a long, dehumanized loop of menu options with a slow resolution,” said Doug Suriano, senior vice president, general manager, Oracle Communications. “With Oracle Live Experience Cloud, businesses can eliminate customer friction points by harnessing the power of contextual communications and real-time engagement capabilities to offer a personalized and highly interactive digital experience that builds customer loyalty and improves business outcomes.”

A cloud-native solution, Oracle Live Experience Cloud can be easily integrated into web and mobile apps and used to proactively engage customers at key moments of their individual journey. It modernizes existing call center and CRM solutions, supporting enterprise digital transformation efforts and enabling employees to deliver contextual and responsive cross-channel engagements that satisfy the customer and ultimately drive sales. Finally, businesses can optimize engagement success by measuring interactions in real time and provisioning updates to further improve overall business results.

Features of Oracle Live Experience Cloud
  • Real-time communication capabilities - HD voice, HD video, screen sharing, annotating
  • In-application channels and mobile controls
  • Rules-based contextual routing for all channels
  • Escalation from chat bot to live assistance
  • Customer context
  • Real-time recording, search and playback
  • Integrated Analytics
  • Pre-built API integrations for key CRM systems
  • Modern desktop agent experience
  • Design personalized engagement scenarios based on context, history and business priorities
  • Insights on individual and overall service team performance, engagement success KPIs, with supervisor and administrator views.
  • Encryption and Bring Your Own Key capabilities
  • Elastic network, compute, and storage resources optimized at all layers for real time communication service
Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

1 Nemertes 2017-18 Contact Center and Customer Engagement Benchmark Research Study | https://nemertes.com/

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.415.856.5145

Calling Procedure Parallel

Tom Kyte - Fri, 2017-09-29 05:06
I have below procedure which in turn calls two other procedures. It calls and works fine, but the two procs run serially. I want to run them in parallel and get the results in the main proc's cursor. How do I do that? I tried with dbms_job.submit but could...
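
One classic building block for this (a sketch only; proc_a and proc_b are hypothetical standalone procedures, and this does not by itself feed results back into the main cursor) is to submit the two calls as background jobs:

declare
  l_job1 binary_integer;
  l_job2 binary_integer;
begin
  -- each SUBMIT queues an anonymous block to run in a background job process
  dbms_job.submit(l_job1, 'proc_a;');
  dbms_job.submit(l_job2, 'proc_b;');
  commit;  -- the jobs do not start until the transaction commits
end;
/

A common gotcha with dbms_job.submit is forgetting the COMMIT: the jobs are queued transactionally and never start without it.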
Categories: DBA Blogs

Getting sub-string from two Clobs object and compare those substrings

Tom Kyte - Fri, 2017-09-29 05:06
Hi, I am new to CLOB objects but it seems I need to get my hands dirty on this. I have a CLOB column in my table and I need to get item SKU values from this column, separated by commas. This is how my CLOB column value looks. ------- <...
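
One common approach for splitting such a comma-separated CLOB (a sketch with a hypothetical table my_table(clob_col), assuming a single row; Oracle's regular-expression functions accept CLOB arguments):

select trim(regexp_substr(clob_col, '[^,]+', 1, level)) as sku
from my_table
connect by level <= regexp_count(clob_col, ',') + 1;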
Categories: DBA Blogs

External table concepts

Tom Kyte - Fri, 2017-09-29 05:06
Hi All, I am new to oracle external table concepts. Have a very basic query - if I have a csv with the below columns Col1, Col2, Col3, Col4 .... Coln and I want to insert only Col3 & Col4 into an oracle external table, what would be my ...
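
One common pattern (a sketch with hypothetical names — the DATA_DIR directory object, data.csv file, and target_table are assumptions): an ORACLE_LOADER external table is read-only, so you map all the file fields and then select only the columns you need:

create table csv_ext (
  col1 varchar2(50),
  col2 varchar2(50),
  col3 varchar2(50),
  col4 varchar2(50)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
  )
  location ('data.csv')
);

-- the external table just projects the file; pick the columns you want
insert into target_table (col3, col4)
select col3, col4 from csv_ext;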
Categories: DBA Blogs

sql query to update a table based on data from other table

Tom Kyte - Fri, 2017-09-29 05:06
Hi, Looks like my other similar questions got closed, so asking a new question. I have a cust_bug_data table with 2 columns(ROOT_CAUSE, BUG_NUMBER) like as follows: <code>create table cust_bug_data(ROOT_CAUSE VARCHAR(250), BUG_NUMBER NUMBER N...
Categories: DBA Blogs

Oracle Big Data Cloud Service – Compute Edition

Yann Neuhaus - Fri, 2017-09-29 03:00

In this blog post, we will see how to create a Big Data cluster through the Oracle Cloud Services. If you want more details about the Oracle Big Data cloud services offering, you can refer to my previous blog post, Introduction to Oracle Big Data.

First, you need to create your trial account through the following link: https://cloud.oracle.com/tryit. Note that, when you create your trial account, all information (phone number, address, credit card…) must be from the same country. Otherwise, you will get an error message.

Then you will get an email from Oracle with your connection information. The 4 main connection details are:

During the first connection you need to change your password and answer 3 secret questions.

You are now logged into the Oracle Cloud Services Dashboard. Select the “Big Data – Compute Edition” service to create your cluster.

BDCS-CE Dashboard

Click on “Service” and “Create Service”.

BDCS-CE Create Cluster

First, complete the service information (cluster name, description…) and click on “Next”.

BDCS-CE Cluster creation

Then, you enter the details of your Big Data cluster (configuration, credentials, storage…).

Cluster configuration:

Use the “full” deployment. It will provision a cluster with Spark, MapReduce, Zeppelin, Hive, Spark Thrift, Big Data File System.

Credentials:

Generate an ssh public key and insert it (see screenshot below). Update or keep the current administrative user / password, which is very important for the next operations.

Storage:

Oracle Public Cloud works with object storage containers, which means that a storage container can be used by all cloud services. For the Big Data service you need to use an existing storage container or create one. The storage container name must follow a specific syntax.

https://<identity_domain>.storage.oraclecloud.com/v1/Storage-<identity_domain>/<container_name>

Example: https://axxxxxx.storage.oraclecloud.com/v1/Storage-axxxxxx/dbistorage

You can find the complete configuration below.

BDCS-CE Configuration Overview

Confirm your cluster configuration and click on “Next”.

During the cluster deployment, you can take the time to read the documentation: https://docs.oracle.com/en/cloud/paas/big-data-cloud/index.html

Once your service has been deployed, you can access the Big Data Cluster Console to monitor your cluster and access it.

BDCS-CE Cluster Overview

 

BDCS-CE Cluster Console

OBDCS-CE Monitoring

You have now deployed a Big Data cluster composed of 3 nodes, based on the Hortonworks distribution, with the following tools:

  • HDFS = Hadoop Distributed FileSystem
  • YARN = Resources management for the cluster
  • Hive = Data Warehouse for managing large data sets using SQL
  • Spark= Data processing framework
  • Pig = High-level platform for creating programs that run on Hadoop
  • ZooKeeper = Distributed coordination service for the cluster
  • Zeppelin = Web-based data scientist workbench
  • Alluxio = Memory-speed virtual distributed storage
  • Tez = Framework for YARN-based data processing applications in Hadoop

Your Oracle Big Data cluster, through Oracle Big Data Cloud Service – Compute Edition is now ready to use.

Enjoy ;-)

 

Cet article Oracle Big Data Cloud Service – Compute Edition est apparu en premier sur Blog dbi services.

Introduction to Oracle Big Data Services

Yann Neuhaus - Fri, 2017-09-29 01:00

For a few years now, Oracle has been moving forward in the Big Data area, as its main competitors have. The goal of this blog post is to explain how the Oracle Big Data offering is composed.

As the Oracle Big Data offering is continuously improving, I’m always open to your feedback :-)

The Oracle Big Data offering is split into 2 parts:

  • On-Premise
  • Public Cloud

Note: It’s important to know that the 2 main Big Data distributions on the market are Cloudera and Hortonworks. We will see later how Oracle stands with these 2 main distributions.

On-premise: Oracle Big Data Appliance:

The main product of the Oracle Big Data offering is the Oracle Big Data Appliance (OBDA), an engineered system based on the Cloudera distribution. The Big Data Appliance offers you an easy-to-deploy solution with Cloudera Manager for managing a Big Data cluster, including a complete Hadoop ecosystem ready to use.

Oracle Big Data Appliance starts with a “Starter” rack of 6 nodes for a storage capacity of 96TB. Below is the detailed configuration per node.

Oracle X6-2 server:

  • 2 × 22-Core Intel ® Xeon ® E5 Processors
  • 64GB Memory
  • 96TB disk space

Oracle Big Data Appliance is a combination of open source software and proprietary software from Oracle (e.g. Oracle Big Data SQL). Below is a high-level overview of the Big Data Appliance software.


Oracle Big Data Cloud Machine:

On the customer side, Oracle offers the Oracle Big Data Cloud Machine (BDCM). Fully managed by Oracle as a PaaS (Platform as a Service) offering, it runs on the customer’s infrastructure and is designed to provide the Big Data Cloud Service. In other words, the BDCM is a Big Data Appliance managed and operated by Oracle in the customer’s data center.

The Big Data Cloud Machine starts with a “Starter Pack” of 3 nodes. Below is the minimal configuration:

  • 3 nodes
  • 32 OCPU’s per node
  • 256GB RAM per node
  • 48TB disk space per node

Oracle Big Data Cloud Machine pricing: https://cloud.oracle.com/en_US/big-data/cloudmachine/pricing

Oracle Public Cloud:

Oracle provides several deployment and services for Big Data:

  • Oracle Big Data Cloud Services
  • Oracle Big Data Cloud Services – Compute Edition
  • Event Hub Cloud Services (Kafka as a Service)
  • Oracle Big Data SQL Cloud Service

Oracle public cloud services, including Big Data, is available in two payment methods, metered and non-metered.

  • Metered: You are charged based on the actual usage of the service resources:
    • OCPU/hour
    • Environment/hour
    • Host/hour
    • For the storage : GB or TB/month
  • Non-metered: Monthly or annual subscription for a service, independent of resource usage. Charging is performed monthly.

For more information you can refer to the following links:

https://blogs.oracle.com/pshuff/metered-vs-un-metered-vs-dedicated-services

Oracle Big Data Cloud Services:

OBDCS is a dedicated Big Data Appliance in the public cloud: an engineered system managed and pre-configured by Oracle. OBDCS is a large system from the start, with terabytes of storage.

The offering starts with a “Starter pack” of 3 nodes, including:

  • Platform as a Service
  • 2 payments methods: metered and non-metered
  • SSH connection to cluster nodes
  • Cloudera’s Distribution including Apache Hadoop, Enterprise Data Hub Edition
  • Oracle Big Data Connectors
  • Oracle Copy to Hadoop
  • Oracle Big Data Spatial and Graph

The entry cost is very high; that’s why this service is recommended for large and more mature business cases.

Pricing information: https://cloud.oracle.com/en_US/big-data/big-data/pricing

Oracle Big Data Cloud Services – Compute Edition:

OBDCS-CE provides you with a dedicated Hadoop cluster based on the Hortonworks distribution. The entry cost is lower than for Oracle Big Data Cloud Service, which is why this service is more suitable for smaller business use cases and proofs of concept.

OBDCS-CE offering details:

  • Platform as a Service
  • 2 payments methods: metered and non-metered
  • Apache Hadoop cluster based on Hortonworks distribution
  • Free choice of the number of nodes for the deployment – 3 nodes is the minimum for a High Availability cluster, recommended for production. You can actually have one-node clusters, but this is obviously not recommended.
  • Apache Zeppelin for Hive and Spark analytic
  • 3 access methods:
    • BDCS-CE console (GUI)
    • REST API
    • SSH

Pricing information: https://cloud.oracle.com/en_US/big-data-cloud/pricing

Summary:
  • On-Premise (customer side):
    • Engineered systems: Big Data Appliance (BDA); Big Data Cloud Machine (a BDA managed by Oracle)
    • PaaS: Oracle Cloud Machine (OCM) + BDCS – Compute edition
  • Oracle Public Cloud:
    • Engineered systems: Big Data Cloud Service (BDCS) – a BDA in the Oracle public cloud – Cloudera distribution
    • PaaS: Big Data Cloud Service – Compute edition – Hortonworks distribution

More details about Oracle PaaS offering:

http://www.oracle.com/us/corporate/contracts/paas-iaas-public-cloud-2140609.pdf

I hope this blog will help you to better understand the Oracle Big Data offering and products.

 

Cet article Introduction to Oracle Big Data Services est apparu en premier sur Blog dbi services.

Scheduler Jobs Do Not Run Automatically

Michael Dinh - Thu, 2017-09-28 21:09

After you have followed – IF: Jobs Do Not Run Automatically (Doc ID 2084527.1) – without any success,
then check to see whether the services have been created and are running.

The RAC DB is 12.1.0.2.0 and was cloned from a standby.

It just so happens that the service identified by the SQL below was not created:

select c.SERVICE
from dba_scheduler_jobs j, dba_scheduler_job_classes c
where j.JOB_CLASS=c.JOB_CLASS_NAME
and j.JOB_NAME=UPPER('&jobname')
;

To be honest, I was not able to find the issue; a teammate did.

What I found very, very strange is that manually running the job using exec dbms_scheduler.run_job is successful.

The manual job ran successfully without the service created, and on the wrong node for where the service is defined
(the service is defined to run on node 2, while the manual run was from node 1).
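
For reference, such a manual run looks like this (a sketch; MY_JOB is a placeholder). Note that DBMS_SCHEDULER.RUN_JOB has a USE_CURRENT_SESSION parameter which defaults to TRUE, so the job executes in the invoking session — which would let it run without the job class service, and on whichever node you happen to be connected to:

SQL> exec dbms_scheduler.run_job(job_name => 'MY_JOB', use_current_session => TRUE);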

Another unsolved mystery.


What are the benefits of Manufacturing Dashboards?

Nilesh Jethwa - Thu, 2017-09-28 16:15

Today, the major players in the US manufacturing economy are electronics, automobile, steel, consumer goods, and telecommunications, and they offer increasingly advanced products, including tablets and smartphones. These technological advancements significantly influence consumer lifestyles.

Along with these changes, the global manufacturing industry is currently embracing a new key player called metrics-based manufacturing. This is the latest trend that industries need to consider in their sales funnel. So, what does this mean?

Read more at http://www.infocaptor.com/dashboard/manufacturing-dashboards-what-are-their-benefits

Updating records with many-to-1 linked table relationship

Tom Kyte - Thu, 2017-09-28 10:46
I have an MS_ACCESS Query to convert to Oracle SQL. Access Query <code>UPDATE target_table T INNER JOIN source_table S ON T.linkcolumn = S.linkColumn SET T.field1 = S.field1, T.field2 = S.field2, T.field3 = S.field3;</code> Note: T...
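
A typical Oracle translation of such an Access join-update is a MERGE; here is a sketch, assuming linkcolumn is unique in source_table (otherwise the update would be non-deterministic):

merge into target_table t
using source_table s
on (t.linkcolumn = s.linkcolumn)
when matched then update set
  t.field1 = s.field1,
  t.field2 = s.field2,
  t.field3 = s.field3;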
Categories: DBA Blogs

Native dynamic sql - Refcursor

Tom Kyte - Thu, 2017-09-28 10:46
Tom, Here is an example...that i want to change one function to avoid redundant information. create or replace package p_ref_cursor is type ret_ref_cursor is ref cursor; end p_ref_cursor; / drop table "tab1"; create table "tab1" ...
Categories: DBA Blogs

ORA-06502 with CHAR parameter. What am I missing?

Tom Kyte - Thu, 2017-09-28 10:46
Sorry to bother you with a ORA-06502 error. But I'm not understanding this behavior. As I saw, the length fits (see the dbms_output in result showing that the length is 16). The only thing i can think is that in the procedure proc, pl/sql is...
Categories: DBA Blogs

Windows and .NET sessions at Openworld

Christian Shay - Thu, 2017-09-28 09:25
Interested in Oracle Database on Windows performance and security, Active Directory, or .NET development topics? At Oracle OpenWorld SF next week there's a host of Windows and .NET sessions, hands-on labs, and demogrounds for you to check out.

Here's the list of Windows sessions and demogrounds with times.
And here's the list of .NET development sessions, hands on lab, and demogrounds with times.

Use schedule builder to reserve your seats in any of those sessions before they fill up.

You can also visit us at our booth at the Moscone West "Exchange" (formerly known as "Demogrounds"). We'll have .NET experts as well as Oracle Database on Windows experts standing by to answer your questions or to give you a demo.

You can find us using this handy dandy map (we are on the left side of the exhibition hall with other Oracle application development booths) - click the image to enlarge:

See you at the show!!!


Announcing the dbi OpenDB Appliance

Yann Neuhaus - Thu, 2017-09-28 07:04

As already announced on Twitter and LinkedIn, here is the blog post describing our OpenDB appliance in more detail. I am sure you wonder what this is about, so let me explain why we are doing this. What we see day by day at our customers is that more and more databases get consolidated onto a VMWare deployment. This is not only true for the smaller ones of those but also for the critical, potentially much bigger ones. What makes it complicated, especially for smaller companies that do not necessarily have the knowhow for the specific database, is that you need to apply the best practices not only to the database deployment but also to the operating system and the VMWare deployment. But even if you have this already in place: Do you know how to deploy the PostgreSQL binaries, how to set up a PostgreSQL instance, how to monitor and how to backup and restore all that? Do you know how to do this with MySQL/MariaDB, MongoDB, Cassandra? If your answer to this is no but you need to have a PostgreSQL/MySQL/MariaDB/MongoDB/Cassandra instance ready quite fast, then the dbi OpenDB Appliance might be the solution for you. Let's dig into some details.

OpenDB-logo

A typical use case: You are forced to support an application which is running on a specific database. What do you do? Quickly set up a Linux VM, download the installer, click next, next, next, hopefully make the application connect to what you just installed, and then cross your fingers and hope that never ever something goes wrong? You laugh? There are deployments out there which got set up in exactly this way. Another option would be to hire someone who is experienced in that area. This will not help you either, as you'd at least need two people (because people tend to want to go on holidays from time to time). The next option would be to work together with external consultants, which will probably work as long as you work with the right ones. Completely outsourcing the stuff is another option (or even going to the cloud), if you want to do that. With the dbi OpenDB Appliance you'll get another option: We deliver a fully pre-configured VMWare based virtual machine image which you can easily plug into your existing VMWare landscape. Can that work? Let me explain what you would get:

As said just before, you get an image which you can import into your VMWare ESX. I said this image is pre-configured, what does that mean? Well, when you start it up it boots into a CentOS 7.3 x64 Linux operating system. No magic, I know :) Additionally you'll get the following pre-configured file systems:

/       15GB    The Linux operating system
/boot	1GB	The boot images (kernels)
/u01	50GB	All files belonging to the OpenDB itself
                All required DMK packages
                All source files (PostgreSQL, MariaDB, MongoDB, Cassandra)
                The Linux yum repositories
                The HOMEs of all product installations
                The admin directories for the initialized products
/u02	10GB	The data files belonging to the initialized products
/u03	10GB	The redo/wal files belonging to the initialized products
/u04	10GB	Backups

You are not supposed to touch the root, /boot and /u01 partitions, but of course you will be able to resize /u02 to /u04. The 10GB provided initially are just meant as a minimum setup. Resize your VMWare disk images (vmdks), and the dbi OpenDB command line utility offers to resize the file systems as well with a single call. At this point you probably wonder what the dbi OpenDB command line utility is about. In short, it is a wrapper around our various DMK packages. Using one of the various DMK packages you can deploy and monitor databases even today. The command line utility makes use of that and wraps around the various DMKs. The interface is menu driven to make it as easy as possible for you and helps you with initializing the appliance (setting the hostname, network configuration and disk resizing). In addition you can install the products we support and create database instances on top of that without knowing the details. We take care of implementing the best practices in the background (kernel settings, file system layout, initialization parameters, …). But that is not all: We go a step further and implement monitoring, alerting and backup procedures as well. The idea is that you really do not need to take care of such things: it all just comes when you set up a product.

To give you an idea you’ll get something like this when you fire up the command line utility:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit):                       =
=                                                                                            =
=      1. Deploy a database home                                                             =
=      2. List the deployed database homes                                                   =
=      3. Setup a database instance                                                          =
=      4. List the deployed database instances                                               =
=                                                                                            =
=     10. Stop and remove a database instance                                                =
=     11. Remove a database home                                                             =
=                                                                                            =
=                                                                                            =
=     99. Initialize the appliance                                                           =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

You would start with “Initialize the appliance” to set your preferred host name, initialize the network and provide the monitoring credentials. Once done, you can go on and start deploying product homes (e.g. PostgreSQL) and instances on top of that. Of course you can deploy multiple instances on the same home, and you can install several homes of the same product version.

What do we mean by a “product”? A product is what we support with a specific release of the appliance. Initially this probably will be:

  • PostgreSQL 9.6.5
  • PostgreSQL 9.5.9

So the menu would offer you something like this for deploying the binaries:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     000 - PostgreSQL 9.6.5                                                                 =
=     001 - PostgreSQL 9.5.9                                                                 =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

Once you have deployed the homes you require, you can list them:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     The following homes are available for deploying instances on:                          =
=                                                                                            =
=                                                                                            =
=     pg965:/u01/app/opendb/product/PG96/db_5/:dummy:9999:D                                  =
=     PG959:/u01/app/opendb/product/PG95/db_9/:dummy:9999:D                                  =
=     PG959_1:/u01/app/opendb/product/PG95/db_9_0:dummy:9999:D                               =
=     PG965_1:/u01/app/opendb/product/PG96/db_5_0:dummy:9999:D                               =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

Here you can see that you can have multiple homes of the same release (two for PostgreSQL 9.6.5 and two for PostgreSQL 9.5.9 in this case). The path and naming for a home follow our best practices and are generated automatically. Once you have the homes, you can start deploying your instances:

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     Please specify an alias for your new instance                                          =
=       The alias needs to be at least 4 characters                                          =
=       The alias needs to be at most  8 characters                                          =
=                                                                                            =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: MYINST1 

What happens in the background is that the PostgreSQL cluster is initialized, started, and added to the autostart configuration (systemd), so that the instance shuts down cleanly when the appliance is stopped and comes up again when the appliance is started (see the sketch below).
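
To give an idea of what such provisioning can look like, here is a minimal sketch using standard PostgreSQL and systemd tooling. The unit name, paths, and options are illustrative only, not the appliance’s actual implementation:

PGHOME=/u01/app/opendb/product/PG96/db_5
PGDATA=/u02/opendb/pgdata/MYINST1

# initialize the cluster
sudo -u opendb ${PGHOME}/bin/initdb -D ${PGDATA}

# register a systemd unit so the instance follows the appliance lifecycle
sudo tee /etc/systemd/system/opendb-MYINST1.service <<EOF
[Unit]
Description=OpenDB PostgreSQL instance MYINST1
After=network.target

[Service]
Type=forking
User=opendb
ExecStart=${PGHOME}/bin/pg_ctl start -D ${PGDATA} -l ${PGDATA}/startup.log
ExecStop=${PGHOME}/bin/pg_ctl stop -D ${PGDATA} -m fast

[Install]
WantedBy=multi-user.target
EOF

# start the instance now and have it come up at boot
sudo systemctl daemon-reload
sudo systemctl enable opendb-MYINST1.service
sudo systemctl start opendb-MYINST1.service

Listing the deployed instances is possible, too, of course: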

==============================================================================================
=                                                                                            =
=                                                                                            =
=       _ _    _    ___                 ___  ___     _             _ _                       =
=    __| | |__(_)  / _ \ _ __  ___ _ _ |   \| _ )   /_\  _ __ _ __| (_)__ _ _ _  __ ___      =
=   / _  | '_ \ | | (_) | '_ \/ -_) ' \| |) | _ \  / _ \| '_ \ '_ \ | / _  | ' \/ _/ -_)     =
=   \__,_|_.__/_|  \___/| .__/\___|_||_|___/|___/ /_/ \_\ .__/ .__/_|_\__,_|_||_\__\___|     =
=                       |_|                             |_|  |_|                             =
=                                                                                            =
=                                                                                            =
=      Please make a selection from the menu below (type 'q' to exit, 'b' to go back):       =
=                                                                                            =
=                                                                                            =
=     The following instances are currently deployed:                                        =
=                                                                                            =
=                                                                                            =
=     MYINST1:/u01/app/opendb/product/PG96/db_5/:/u02/opendb/pgdata/MYINST1:5432:Y           =
=                                                                                            =
=                                                                                            =
==============================================================================================
 
 Your input please: 

The cron jobs for monitoring, alerting, and backups have been created as well:

[opendb@opendb ~]$ crontab -l
00 01 * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-pg-dump.sh -s MYINST1 -t /u04/opendb/pgdata/MYINST1/dumps >/dev/null 2>&1
58 00 * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-pg-badger-reports.sh -s MYINST1 >/dev/null 2>&1
*/10 * * * * /u01/app/opendb/local/dmk/dmk_postgres/bin/dmk-check-postgres.sh -s MYINST1 -m  >/dev/null 2>&1
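
To decode the schedule fields (minute, hour, day of month, month, day of week), here are the same entries again with illustrative comments. The comments are my annotation and are not part of the generated crontab:

00   01  *  *  *    dmk-pg-dump.sh ...             # logical dump, daily at 01:00
58   00  *  *  *    dmk-pg-badger-reports.sh ...   # pgBadger report, daily at 00:58
*/10 *   *  *  *    dmk-check-postgres.sh ...      # health check, every 10 minutes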

With every new release/update of the appliance we plan to include more products (such as MariaDB, MongoDB, and Cassandra), provide patch sets for the existing ones, and update the Linux operating system. Updates will be delivered as tarballs, and the command line utility will take care of the rest; you do not need to worry about that. You can expect updates twice a year.

To visualize this:
[Image: the OpenDB appliance big picture]

/u02 will hold all the files that contain your user data, /u03 is there for redo/WAL/binlog where required, and /u04 holds the backups. This layout is fixed and must not be changed. Independently of which product you choose to deploy, you get a combination of pcp (Performance Co-Pilot) and Vector for real-time performance monitoring of the appliance (configured automatically, of course).
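
As a quick illustration (my own example, not appliance-specific configuration), this is how you can poke at PCP from the command line once the pcp packages are installed:

$ systemctl status pmcd         # the PCP metrics collector daemon
$ pminfo | grep -i postgres     # list PostgreSQL metrics, if the PostgreSQL PMDA is enabled
$ pmval -s 5 kernel.all.load    # sample the system load average five times

Vector then talks to the PCP daemons over HTTP and renders the metrics in your browser in real time.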

Alerting will be done by a combination of third-party (open source) projects and DMK. For PostgreSQL, for example, we will use check_postgres and pgBadger. For the other products we will announce what we use as they are included in future releases.

In addition to the VMware template, you can also run the appliance in the Hidora Cloud as a pay-as-you-go service (although that is not fully ready yet).

If you have any questions, just send us an email at: opendb[at]dbi-services[dot]com

 

The article Announcing the dbi OpenDB Appliance appeared first on Blog dbi services.

Commonwealth Edison, Baltimore Gas & Electric Awarded the 2017 Oracle Sustainability Innovation Awards

Oracle Press Releases - Thu, 2017-09-28 06:45
Press Release
Commonwealth Edison, Baltimore Gas & Electric Awarded the 2017 Oracle Sustainability Innovation Awards Leading Utility Companies Recognized for Energy Efficiency Programs

Redwood Shores, Calif.—Sep 28, 2017

Oracle today announced that Commonwealth Edison (ComEd) and Baltimore Gas and Electric (BGE) will be presented with the 2017 Oracle Sustainability Innovation Awards at Oracle OpenWorld. The awards recognize customers that are committed to making environmental issues a priority across the enterprise. ComEd and BGE showcased unique ways to advance energy efficiency through innovative green practices using Oracle technology. ComEd leadership was distinguished by earning the 2017 Oracle Chief Sustainability Office of the Year award among all nominations.

“We are inspired by the successes achieved by ComEd and BGE as part of their core commitments to a sustainable future. Both companies have implemented Oracle technologies in a way that improved eco-efficiencies, innovation and transparency,” said Rodger Smith, senior vice president and general manager, Oracle Utilities. “We’re honored to present them with the 2017 Oracle Sustainability Innovation Award during Oracle OpenWorld this year.”

Both companies have made a significant impact by employing the Oracle Utilities Opower Energy Efficiency solution. The solution includes Home Energy Reports (HERs) and online web tools that provide customers with information about their energy consumption, thereby encouraging conservation. Since 2009, ComEd has generated over 1.1 TWh in energy savings; BGE has used the program since 2012 to save 420,000 MWh. The two organizations have also successfully implemented Peak Management programs, which encourage customers to reduce their energy consumption when electricity demand is high. As a result, BGE has reduced peak demand by 16 percent for participating customers, and ComEd has reduced greenhouse gases by 10 percent.

“We’re delighted to be recognized for our commitment to sustainable practices,” said Mark Case, vice president of Regulatory Policy and Strategy for BGE, who leads energy efficiency initiatives for Exelon’s utilities. “We’ve seen great success and look forward to continuing to work with Oracle to drive energy efficiency.”

“At ComEd, we believe that meeting the energy needs of today and tomorrow will depend heavily on our ability to collaborate with our customers,” said Val Jensen, senior vice president, Customer Operations at ComEd. “Working with Oracle gives us more opportunities to leverage data-driven insights and engage with our customers to drive energy efficiency.”

Contact Info
Valerie Beaudett
Oracle
+1.650.400.7833
valerie.beaudett@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe, and Asia. For more information about Oracle, please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Searching wikipedia from the command line

Yann Neuhaus - Thu, 2017-09-28 06:41

Wouldn’t it be nice if you could search Wikipedia from the command line? I often need to quickly look up a definition or want to know more about a specific topic while I am working on the command line. So here is how you can do it …

What you need are npm and wikit. On my Debian-based system I can install both with:

$ sudo apt-get install npm
$ sudo npm install wikit -g
$ sudo ln -s /usr/bin/nodejs /usr/bin/node

The symlink is there to avoid the following issue:

$ wikit postgresql
/usr/bin/env: ‘node’: No such file or directory
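
After creating the symlink, a quick sanity check (the version printed depends on your distribution):

$ which node       # should now resolve to /usr/bin/node
$ node --version   # prints the installed nodejs version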

For Fedora/Red Hat/CentOS you should use yum:

$ sudo yum install npm -y
$ sudo npm install wikit -g

Once you have that, you can use wikit to query Wikipedia for a summary:

$ wikit postgresql
 PostgreSQL, often simply Postgres, is an object-relational database management system
 (ORDBMS) with an emphasis on extensibility and standards compliance. As a database
 server, its primary functions are to store data securely and return that data in
 response to requests from other software applications. It can handle workloads ranging
 from small single-machine applications to large Internet-facing applications (or
 for data warehousing) with many concurrent users; on macOS Server, PostgreSQL is
 the default database; and it is also available for Microsoft Windows and Linux (supplied
 in most distributions). PostgreSQL is ACID-compliant and transactional. PostgreSQL
 has updatable views and materialized views, triggers, foreign keys; supports functions
 and stored procedures, and other expandability. PostgreSQL is developed by the PostgreSQL
 Global Development Group, a diverse group of many companies and individual contributors.
 It is free and open-source, released under the terms of the PostgreSQL License,
 a permissive software license.

Cool. If you want to read the output in your default browser instead of the console, you can do that as well by adding the “-b” flag:

$ wikit postgresql -b

When you want to open the “disambiguation” page in your browser:

$ wikit postgresql -d


Changing the language is possible as well with the “--lang” switch:

$ wikit --lang de postgresql 
 PostgreSQL (englisch [,pəʊstgɹɛs kjʊ'ɛl]), oft kurz Postgres genannt, ist ein freies,
 objektrelationales Datenbankmanagementsystem (ORDBMS). Seine Entwicklung begann
 in den 1980er Jahren, seit 1997 wird die Software von einer Open-Source-Community
 weiterentwickelt. PostgreSQL ist weitgehend konform mit dem SQL-Standard ANSI-SQL
 2008, d.h. der Großteil der Funktionen ist verfügbar und verhält sich wie definiert.
 PostgreSQL ist vollständig ACID-konform (inklusive der Data Definition Language),
 und unterstützt erweiterbare Datentypen, Operatoren, Funktionen und Aggregate. Obwohl
 sich die Entwicklergemeinde sehr eng an den SQL-Standard hält, gibt es dennoch eine
 Reihe von PostgreSQL-spezifischen Funktionalitäten, wobei in der Dokumentation bei
 jeder Eigenschaft ein Hinweis erfolgt, ob dies dem SQL-Standard entspricht, oder
 ob es sich um eine spezifische Erweiterung handelt. Darüber hinaus verfügt PostgreSQL
 über ein umfangreiches Angebot an Erweiterungen durch Dritthersteller, wie z.B.
 PostGIS zur Verwaltung von Geo-Daten. PostgreSQL ist in den meisten Linux-Distributionen
 enthalten. Apple liefert ab der Version Mac OS X Lion (10.7) PostgreSQL als Standarddatenbank

Quite helpful …
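
If you use this a lot, a small wrapper function in your ~/.bashrc can save some typing. This is my own sketch, not part of wikit; it just builds on the --lang switch shown above:

# Usage: wiki postgresql        (English summary)
#        wiki -l de postgresql  (German summary)
wiki() {
  local lang="en"
  if [ "$1" = "-l" ]; then
    lang="$2"
    shift 2
  fi
  wikit --lang "$lang" "$@" | less -FX
}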

 

The article Searching wikipedia from the command line appeared first on Blog dbi services.
