Feed aggregator

Shredding and Querying with Oracle Offline Persistence in JET

Andrejus Baranovski - Sat, 2018-02-17 14:44
I think offline functionality is going to become a trend. It's great that Oracle already provides a solution for it - the Oracle Offline Persistence Toolkit. This is my second post on offline support; read the previous post - Oracle Offline Persistence Toolkit - Simple GET Response Example with JET - where I explained, with a sample app, how a simple GET response is handled offline. Today I would like to go one step further and look at how to filter offline data - shredding and querying offline.

The sample app fetches a list of employees via the Get Employees button. It shows the online/offline status - see the icon in the top right corner. Here we are online and the GET response has been cached by the persistence toolkit:


We can test offline behaviour easily through Chrome Developer Tools - just turn on Offline mode. By the way, take a look at the Initiator field for the GET request - it comes from the Oracle Offline Persistence Toolkit. As I mentioned in my previous post, once the persistence toolkit is enabled all REST calls go through it, which is how it is able to cache response data:


While offline, click the Get Employees button - you should see data returned from the cache. Did you notice that the icon in the top right corner changed to indicate we are offline:


OK, now let's see how the shredding mechanism works (read more about it on GitHub). While offline, we can search for a subset of the cached data. Search By Name does exactly that - it fetches the cached entry for Lex:


Switch back online and call the same action again, but with a different name - the REST call is invoked against the back-end server as expected. Again this is transparent to the JET developer; there is no need to worry whether the app is online or offline, the same REST request is issued in both cases:


Let's take a quick look at the implementation (the complete example is available in my GitHub repository).

The online/offline status icon is controlled by an observable variable:


It is very simple to determine the online/offline state. We just need to add event listeners for the online/offline events and update the observable variable accordingly:
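A minimal sketch of what that could look like in a JET view model (the observable name isOnline is my assumption; the original code is only shown as a screenshot in the post):

define(['knockout'], function (ko) {
  function AppViewModel() {
    var self = this;
    // observable driving the status icon; initialized from the browser's current state
    self.isOnline = ko.observable(navigator.onLine);
    // standard browser connectivity events: flip the observable so the icon re-renders
    window.addEventListener('online', function () { self.isOnline(true); });
    window.addEventListener('offline', function () { self.isOnline(false); });
  }
  return new AppViewModel();
});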


The persistence toolkit supports Simple and Oracle shredder/query handlers. I'm using ADF BC REST for the backend, so my choice is oracleRestJsonShredding and oracleRestQueryHandler. The Oracle shredder understands the REST structure returned by ADF BC REST. The Oracle query handler supports ADF BC REST filtering parameters for offline filtering - this allows the same query format to be used both online and offline. I was happy to read that the Oracle query handler explicitly supports ADF BC REST - queryHandlers:
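Roughly, registering these handlers with the toolkit looks like the sketch below (module paths assume the toolkit is mapped to 'persist' in requirejs; the scope and store name 'Employees' are my assumptions, since the original configuration is only shown as a screenshot):

define(['persist/persistenceManager', 'persist/defaultResponseProxy',
        'persist/oracleRestJsonShredding', 'persist/queryHandlers'],
  function (persistenceManager, defaultResponseProxy, oracleRestJsonShredding, queryHandlers) {
    // initialize the toolkit and register a fetch listener for the Employees endpoint
    persistenceManager.init().then(function () {
      return persistenceManager.register({ scope: '/Employees' });
    }).then(function (registration) {
      var responseProxy = defaultResponseProxy.getResponseProxy({
        // the Oracle shredder understands the ADF BC REST payload structure
        jsonProcessor: {
          shredder: oracleRestJsonShredding.getShredder('Employees'),
          unshredder: oracleRestJsonShredding.getUnshredder()
        },
        // the Oracle query handler answers ADF BC REST 'q=' filters from the local store while offline
        queryHandler: queryHandlers.getOracleRestQueryHandler('Employees')
      });
      registration.addEventListener('fetch', responseProxy.getFetchEventListener());
    });
  });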


The same REST call with filtering is executed online and offline:
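For illustration only (the endpoint URL and attribute name are hypothetical), such a filtered call could look like this; online it reaches the server, offline the query handler answers it from the shredded cache:

// ADF BC REST filters rows with the 'q' query parameter
fetch("http://host:port/restapp/rest/1.0/Employees?q=FirstName='Lex'")
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data.items); });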

What you can do when your Veritas cluster shows interfaces as down

Yann Neuhaus - Sat, 2018-02-17 07:40

Recently we had a situation where the Veritas cluster (InfoScale 7.3) showed interfaces as down on the two RedHat 7.3 nodes. This can happen, for example, when you change hardware. Although all service groups were up and running, this is a situation you usually want to avoid, as you never know what happens when the cluster is in such a state. When you have something like this:

[root@xxxxx-node1 ~]$ lltstat -nvv | head
LLT node information:
Node State Link Status Address
  * 0 xxxxx-node1 OPEN
      eth3 UP yy:yy:yy:yy:yy:yy
      eth1 UP xx:xx:xx:xx:xx:xx
      bond0 UP rr:rr:rr:rr:rr:rr
    1 xxxxx-node2 OPEN
      eth3 UP ee:ee:ee:ee:ee:ee
      eth1 DOWN tt:tt:tt:tt:tt:tt
      bond0 DOWN qq:qq:qq:qq:qq:qq

… what can you do?

In our configuration eth1 and eth3 are used for the interconnect and bond0 is the public network. As you can see above, eth1 and bond0 are reported as down for the second node. Of course, the first thing to check is the interface status at the operating system level, but that was fine in our case.
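For completeness, the OS-level check can be as simple as this (a sketch, not from the original post; interface names match the configuration above):

[root@xxxxx-node2 ~]$ ip link show eth1                      # administrative/operational state as the OS sees it
[root@xxxxx-node2 ~]$ cat /sys/class/net/eth1/operstate      # should print "up"
[root@xxxxx-node2 ~]$ ethtool eth1 | grep "Link detected"    # physical link detection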

Veritas comes with a tiny utility (dlpiping) you can use to check connectivity at the Veritas level. Using the information from the lltstat command, start dlpiping in “send” mode on the first node:

[root@xxxxx-node1 ~]$ /opt/VRTSllt/dlpiping -vs eth1

When that is running (it will not detach from the terminal), start dlpiping in “receive” mode on the second node:

[root@xxxxx-node2 ~]$ /opt/VRTSllt/dlpiping -vc eth1 xx:xx:xx:xx:xx:xx
using packet size = 78
dlpiping: sent a request to xx:xx:xx:xx:xx:xx
dlpiping: received a packet from xx:xx:xx:xx:xx:xx

This confirms that connectivity is fine for eth1. Repeat that for the remaining interfaces (eth3 and bond0); if all of them are fine, you can proceed. If not, you have a different issue than the one we faced.

The next step is to freeze all your service groups so the cluster will not touch them:

[root@xxxxx-node1 ~]$ haconf -makerw
[root@xxxxx-node1 ~]$ hagrp -freeze SERVICE_GROUP -persistent # do that for all service groups you have defined in the cluster
[root@xxxxx-node1 ~]$ haconf -dump -makero

Now the magic:

[root@xxxxx-node1 ~]$ hastop -all -force 

Why magic? This command stops the cluster stack on all nodes BUT leaves all the resources running. So you can do it without shutting down any user-defined cluster services (Oracle databases in our case). Once the stack is down on all nodes, stop gab and llt on both nodes as well:

[root@xxxxx-node1 ~]$ systemctl stop gab
[root@xxxxx-node1 ~]$ systemctl stop llt

Once llt and gab are stopped, start them again in the correct order on both systems:

[root@xxxxx-node1 ~]$ systemctl start llt
[root@xxxxx-node1 ~]$ systemctl start gab

… and after that start the cluster:

[root@xxxxx-node1 ~]$ systemctl start vcs

In our case that was enough to make llt work as expected again, and the cluster is fine:

[root@xxxxx-node1 ~]$ gabconfig -a
GAB Port Memberships
===============================================================
Port a gen f44203 membership 01
Port h gen f44204 membership 01
[root@xxxxx-node1 ~]#

[root@xxxxx-node1 ~]$ lltstat -nvv | head
LLT node information:
   Node State Link Status Address
    * 0 xxxxx-node1 OPEN
      eth3 UP yy:yy:yy:yy:yy:yy
      eth1 UP xx:xx:xx:xx:xx:xx
      bond0 UP rr:rr:rr:rr:rr:rr
    1 xxxxx-node2 OPEN
      eth3 UP ee:ee:ee:ee:ee:ee
      eth1 UP qq:qq:qq:qq:qq:qq
      bond0 UP tt:tt:tt:tt:tt:tt 

Hope that helps …

 

The post What you can do when your Veritas cluster shows interfaces as down appeared first on Blog dbi services.

CPUs: Cores versus Threads on an Oracle Server

Yann Neuhaus - Sat, 2018-02-17 06:49

When doing a performance review I often talk with the DBA about the CPU utilization of the server. How reliable is the server CPU utilization reported by tools like top, or the host CPU utilization in the AWR report? E.g. on a Linux Intel x86-64 server with 8 cores and 16 logical CPUs (Intel Hyper-Threading), what does a utilization of 50% mean?
As I had an ODA X7-M available in a test lab, I thought I'd run some tests on it.

In my old days at Oracle Support we used a small script to test the single-thread CPU performance of an Oracle DB server:


set echo on
set linesize 120
set timing on time on
with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 60 )
select /*+ ALL_ROWS */ count(*) from t,t,t,t,t
/

The SQL just burns a CPU core for around 20 seconds. Depending on your CPU's single-thread performance it may take a bit longer or complete faster.

On the ODA X7-M I have 16 cores enabled, and as hyperthreading is enabled I get 32 CPUs in /proc/cpuinfo:


oracle@dbi-oda01:/home/oracle/cbleile/ [CBL122] grep processor /proc/cpuinfo | wc -l
32
oracle@dbi-oda01:/home/oracle/cbleile/ [CBL122] lscpu | egrep "Thread|Core|Socket|Model name"
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
Model name: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz

The CPU speed was at 2.3 GHz the whole time:


[root@dbi-oda01 ~]# for a in `ls -l /sys/devices/system/cpu/cpu*/cpufreq | grep cpufreq | cut -d "/" -f6 | cut -d "u" -f2`; do echo "scale=3;`cat /sys/devices/system/cpu/cpu${a}/cpufreq/cpuinfo_cur_freq`/1000000" | bc; done
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301

The CPU is capable of running at up to 3.7 GHz, but that did not happen on my machine.

Running my SQL script on the ODA X7-M took 17.49 seconds:


18:44:00 SQL> with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 60 )
18:44:00 2 select /*+ ALL_ROWS */ count(*) from t,t,t,t,t
18:44:00 3 /
 
COUNT(*)
----------
777600000
 
Elapsed: 00:00:17.49

I then ran the following tests (a job means running the SQL script above; see the sketch after the list for one way to launch such concurrent jobs):
– 1 Job alone
– 2 Jobs concurrently
– 4 Jobs concurrently
– 8 Jobs concurrently
– 16 Jobs concurrently
– 24 Jobs concurrently
– 32 Jobs concurrently
– 40 Jobs concurrently
– 50 Jobs concurrently
– 60 Jobs concurrently
– 64 Jobs concurrently
– 128 Jobs concurrently
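For reference, a simple way to launch N concurrent jobs of this kind could look like the following script (my own sketch, not from the post; the connect string and file names are assumptions):

#!/bin/bash
# launch N concurrent copies of the CPU-burning SQL script and collect the elapsed times
N=$1
for i in $(seq 1 $N); do
  sqlplus -s scott/tiger@CBL122 @cpu_burn.sql > job_${i}.log 2>&1 &
done
wait
grep Elapsed job_*.log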

Here is the result:


Jobs  Min Time  Max Time  Avg Time  Jobs/Cores  Jobs/Threads  Avg/Single-Time  Thread utilization
   1     17.49     17.49     17.49        0.06          0.03             1.00                1.00
   2     17.51     17.58     17.55        0.13          0.06             1.00                1.00
   4     17.47     17.86     17.62        0.25          0.13             1.01                0.99
   8     17.47     17.66     17.55        0.50          0.25             1.00                1.00
  16     17.64     21.65     18.50        1.00          0.50             1.06                0.95
  24     18.00     27.38     24.20        1.50          0.75             1.38                0.72
  32     32.65     34.57     33.21        2.00          1.00             1.90                0.53
  40     34.76     42.74     40.31        2.50          1.25             2.30                0.54
  50     48.26     52.64     51.21        3.13          1.56             2.93                0.53
  60     52.40     63.60     60.63        3.75          1.88             3.47                0.54
  64     54.20     68.40     64.27        4.00          2.00             3.67                0.54
 128    119.49    134.34    129.01        8.00          4.00             7.38                0.54

(times in seconds)

When running with 16 jobs, top showed a utilization of around 50-52%. However, running more than 16 jobs increased the average time a job takes, i.e. with 16 jobs the 16-core server is already almost fully utilized. Running with 32 jobs results in an average elapsed time 1.9 times that of running 16 jobs (or fewer) concurrently. As it is 1.9 times and not 2 times, I can conclude that there is an advantage to running with hyperthreading enabled, but it's only around 5-10%.

So when calculating the utilization of your server, base it on the number of cores and not on the number of threads. When looking at the host CPU utilization in top or in the AWR report on a hyperthreading-enabled server, it's a good idea to multiply the reported utilization by 1.9.
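As a worked example of that rule of thumb (numbers purely illustrative): if top reports 50% host CPU utilization on this 16-core/32-thread box, the cores are effectively about 0.5 x 1.9 = 0.95, i.e. 95% busy, so the machine is close to saturation even though top suggests half the capacity is still free.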

 

The post CPUs: Cores versus Threads on an Oracle Server appeared first on Blog dbi services.

Oracle Code is back – Bigger and Better!

OTN TechBlog - Fri, 2018-02-16 16:24

2018 is yet another great year for developers! Oracle’s awesome global developer conference series, Oracle Code, is back – and it’s bigger and better!

In 2017 Oracle ran the first series of Oracle Code developer conferences. Across more than 20 cities around the globe, the series attracted over 10,000 developers, providing them with the opportunity to learn new skills, network with peers and take home some great memories. Following that huge success, Oracle is about to run another 14 events across the globe, kicking off in late February in Los Angeles. The great thing about Oracle Code: attending and speaking at the conferences is completely free of charge, showing Oracle holding true to its commitment to the developer communities out there. Across four continents, everything that is hot in the industry will be right at the center of Oracle Code: Blockchain, Containers, Microservices, API Design, Machine Learning, AI, Mobile, Chatbots, Databases, Low Code Development, trendy programming languages, CI/CD, DevOps and much, much more.

Throughout the one-day events, which provide space for 500 people, developers can share their experiences, participate in hands-on labs, talk to subject matter experts and, most importantly, have a lot of fun in the Oracle Code Lounge.

IoT Cloud Brewed Beer

Got a few minutes to try the IoT Cloud Brewed Beer from a local micro brewery? Extend manufacturing processes and logistics operations quickly using data from connected devices. Tech behind the brew: IoT Production Monitoring, IoT Asset Monitoring, Big Data, Event Hub, Oracle JET.


3D Builder Playground

Create your own sculptures and furniture with the 3D printer and help complete the furniture created using a Java constructive geometry library. The Oracle technology used is Application Container Cloud running a Visual IDE, and Java SE running the JSCG library.

Oracle Zip Labs Challenge

Want some bragging rights and to win prizes at the same time? Sign up for a 15-minute lab on Oracle Cloud content and see your name on the leaderboard as the person to beat in Oracle Zip Labs Challenge.

IoT Workshop

Interact and exchange ideas with other attendees at the IoT Workshop spaces. Get your own Wi-Fi microcontroller and connect to Oracle IoT Cloud Service. Oracle Developer Community is partnering with AppsLab and the Oracle Applications Cloud User Experience emerging technologies team to make these workshops happen.

Robots Rule with Cloud Chatbot Robot

Ask NAO the robot to do Tai Chi, or ask it "who brewed the beers?" So how does NAO do what it does? It uses the Intelligent Bot API on Oracle Mobile Cloud Service to understand your command and responds by speaking back to you.

Dev Live

The Oracle Code crew also thought of the folks who aren't lucky enough to participate in Oracle Code in person: Dev Live is a series of live interviews at Oracle Code that are streamed online across the globe, so that everyone can watch developers and community members share their experiences.

Register NOW!

Register now for an Oracle Code event near you at: https://developer.oracle.com/code

Have something interesting that you did and want to share it with the world? Submit a proposal in the Call for Papers at: https://developer.oracle.com/code/cfp





See you next at Oracle Code!

Oracle Adaptive Intelligent Applications for ERP

OracleApps Epicenter - Fri, 2018-02-16 11:51
Recently Oracle announced new AI-based apps for finance leaders to empower CFOs with data-driven insights to adapt to change, develop new markets and increase profitability. With Oracle Adaptive Intelligent Applications for ERP, finance leaders can benefit from: Better insight: applying analytics and artificial intelligence to finance can enhance efficiency and increase agility […]
Categories: APPS Blogs

Duplex RMAN backups between disk and tape

Yann Neuhaus - Fri, 2018-02-16 09:46

Below, a workaround is shown for how to “duplex” archivelog backups between disk and tape:

Backup on disk (normal way):

backup device type disk archivelog all;

 

Immediately back up on tape:

backup device type sbt archivelog until time 'sysdate' not backed up 2 times;

 

This backup command backs up all archivelogs that are not yet backed up twice, i.e. all those backed up by the first command. As the first backup command includes a logfile switch, no logfile switch should occur between the two backup commands, otherwise the “duplexing” does not work. The until time clause in the second command is added to prevent RMAN from performing another logfile switch, which would lead to different contents in the two backups. This clause does not filter anything out, because sysdate means the date and time at which the command is issued.
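A minimal sketch of both commands combined in a single RMAN run block, so they execute back to back (only the two backup commands come from the post; wrapping them in a run block is my own suggestion):

run {
  # first backup to disk (includes an implicit logfile switch)
  backup device type disk archivelog all;
  # immediately afterwards, the same archivelogs to tape
  backup device type sbt archivelog until time 'sysdate' not backed up 2 times;
}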

 

The post Duplex RMAN backups between disk and tape appeared first on Blog dbi services.

variable in FROM clause inside pl/sql

Tom Kyte - Fri, 2018-02-16 07:46
Hi Tom, We have an anonymous PL/SQL block which looks as follows, but using dbms_sql (the following doesn't work): declare vRows number; begin for i in (select * from user_tables) loop select count(*) into vRows from i....
Categories: DBA Blogs

Update current row with values from previous row

Tom Kyte - Fri, 2018-02-16 07:46
Hi, I'm searching for a solution to do the update with a single SQL statement instead of a PL/SQL procedure: create table test (stock_date DATE, stock NUMBER(5), stock_in NUMBER(5), stock_out NUMBER(5), stock_val NUMBER(5)); INSERT INTO tes...
Categories: DBA Blogs

How to find the SQL_ID of the sql statements running in a stored procedure?

Tom Kyte - Fri, 2018-02-16 07:46
Hi Team, I have a scenario in which I need to check which of my procedures (they run in batch) are slowing down the operations. The procedure consists of two main tasks: 1) get data from multiple tables (has multiple joins and vast data) 2) insert ...
Categories: DBA Blogs

Based on parameter value, need to execute the condition.

Tom Kyte - Fri, 2018-02-16 07:46
create or replace procedure fetch_ids(ename in varchar2, hiredate in date) as begin select * from emp where empname=ename and join_date=hiredate; end; Problem statement: 1) if I do not pass the ename, I need to fetch all the e...
Categories: DBA Blogs

how to overcome the job queue limitation of 1000

Tom Kyte - Fri, 2018-02-16 07:46
Hi Tom, I have a very large data aggregation which should ideally be done on an OLAP database using a cube. Due to some constraints, I am doing it in my transactional database. When I ran the SQL with multiple table joins, the SQL errored out due to...
Categories: DBA Blogs

Materialized View Fast Refresh and the ATOMIC_REFRESH Parameter

Tom Kyte - Fri, 2018-02-16 07:46
Hi Tom, I have about 25 MVs in my production application, mostly in two refresh groups, very few standalone. One group (let's call it GROUP-A) refreshes every minute as a business requirement and the other (GROUP-B) every hour. A few more MVs ev...
Categories: DBA Blogs

what is the difference between shrink ,move and impdp

Tom Kyte - Fri, 2018-02-16 07:46
Hi: I want to reclaim some space from some tables. There are a few ways, such as move, shrink and impdp. I want to know which one is better regardless of space considerations, and assume these tables can use all of those methods. Can you answer my...
Categories: DBA Blogs

Oracle and Active Directory

Tom Kyte - Fri, 2018-02-16 07:46
My company has selected MS Active Directory for enterprise directory services. We would like to integrate our Oracle networking with AD, in lieu of TNSNAMES or Oracle Names, for database connection resolution. However, we are having a hard time f...
Categories: DBA Blogs

Pete Finnigan Presented About Oracle Database Vault and Oracle Security

Pete Finnigan - Fri, 2018-02-16 07:06
I have not added much here on my site for some time due to a serious health issue taking a lot of my time with a close family member. So please bear with me if you email or contact me....[Read More]

Posted by Pete On 15/02/18 At 08:44 PM

Categories: Security Blogs

Why do we have to specify the authentication clause for shared private fixed database links?

Tom Kyte - Thu, 2018-02-15 13:26
Hi, A private fixed user database link only requires username + password, e.g. create database link <link_name> connect to <remote_user> identified by <remote_password> using "<tns-string>"; A SHARED private fixed user datab...
Categories: DBA Blogs

Do primary keys on an index-organized table have to be incremental?

Tom Kyte - Thu, 2018-02-15 13:26
Hi Tom, If the primary key on an index-organized table is not incremental, wouldn't this create bottlenecks as data volume grows under OLTP loads? Wouldn't the data being inserted need to be sorted and inserted in the middle of the leaves? Woul...
Categories: DBA Blogs

generic trigger for auditing column level changes

Tom Kyte - Thu, 2018-02-15 13:26
I'm trying to create a generic before update trigger which will compare all :old.column_values to all :new.column_values. If the column_values are different, then I would like to log the change to a separate table. When I try to compile :ol...
Categories: DBA Blogs

Non unique clustered index

Tom Kyte - Thu, 2018-02-15 13:26
Hi Tom, We have a table with a non-incremental composite key of (dialaog_id, insertion_date). This creates problems since we have insertions in the middle of the leaves. Data has to be relocated. We cannot use incremental keys since this cre...
Categories: DBA Blogs
