Feed aggregator

Is it possible - to show OS disk free space together with DBA_DATAFILES data ?

Tom Kyte - Mon, 2018-05-28 03:26
In some of my customer's databases the DBAs are using AUTOEXTENSIBLE datafiles, but with many datafiles sharing the same filesystem, in this sense : tablespace A, datafiles /u01/oradata/ts_A_file01.dbf autoextend unlimited tablespace B, da...
Categories: DBA Blogs

Exadata Vs RAC

Tom Kyte - Mon, 2018-05-28 03:26
Tom ? My understanding of RAC( or grid computing) , we use cluster of not so expensive servers for higher availability. But Oracle is marketing Exadata ( expensive servers ? relatively speaking ) for performance / higher availability , so on. ( if ...
Categories: DBA Blogs

Filtering LOBs

Jonathan Lewis - Mon, 2018-05-28 02:25

A two-part question about the FILTER operation appeared on the Oracle-L list server a couple of days ago. The first part was a fairly common question – one that’s often prompted by the way the optimizer used to behave in older versions of Oracle. Paraphrased, it was: “Why is the total cost of the query so high compared to the sum of its parts?”

Here’s the query, and the execution plan.

 INSERT INTO TEMP
  SELECT DISTINCT 'id',
    PHT.emplid
  FROM PHOTO PHT
  WHERE 1               =1
  AND PHT.PHOTO IS NOT NULL
  AND NOT EXISTS
    (SELECT 'x'
    FROM TEMP TMP
    WHERE PHT.EMPLID=TMP.EMPLID_SRCH
    AND TMP.OPRID  = 'id'
    )
  ;  

  
-------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          | 21210 |  3334K|  5802K  (2)| 00:03:47 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP     |       |       |            |          |
|*  2 |   FILTER                 |          |       |       |            |          |
|*  3 |    TABLE ACCESS FULL     | PHOTO    | 21211 |  3334K|   313   (1)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL     | TEMP     |     1 |    17 |   380   (2)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( NOT EXISTS (SELECT 0 FROM " TEMP" "TMP" WHERE
              "TMP"."EMPLID_SRCH"=:B1 AND "TMP"."OPRID"='id'))
   3 - filter("PHT"."PHOTO" IS NOT NULL)
   4 - filter("TMP"."EMPLID_SRCH"=:B1 AND "TMP"."OPRID"='id')

Note that the “not exists” subquery against temp runs as a filter subquery with a cost of 380 for the tablescan. Combine that with the cost of 313 for the driving tablescan of photo and you might wonder why the resulting cost isn’t something like 693 – and in some old versions of Oracle that’s probably how it would be reported.

Historically the optimizer has been very bad about producing a final cost when queries have included subqueries – whether as filter subqueries in the predicate section or as scalar subqueries in the select list. Sometimes the cost of the subquery would simply vanish from the final cost, sometimes it would be added just once to the final cost regardless of how many times the subquery might actually execute.

In this example the subquery against temp is a correlated subquery and might have to run once for every row in photo where the column photo was not null. At best it would have to run at least once for every distinct value of the photo.emplid column (the correlation column) found in those rows. In recent versions of Oracle the optimizer has tried to introduce some estimate of how many times the subquery would run as part of its calculation of the total cost. So (to a crude approximation) 5802K = 313 + N * 380. Unfortunately if we try to work backwards to N we find it would be about 15,267 which is about 72% of the 21,200 rows estimated as the result of the tablescan of photo – I haven’t tried to investigate the algorithms yet but presumably the optimizer makes some allowances somewhere for “self caching” as the subquery runs.
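For anyone who wants to check that claim, the back-of-the-envelope arithmetic (using the figures reported in the plan above) runs roughly as follows:

  5,802,000 ≈ 313 + N * 380
  N         ≈ (5,802,000 - 313) / 380 ≈ 15,267
  15,267 / 21,211 ≈ 0.72    (i.e. roughly 72% of the rows expected from the tablescan)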

The more interesting part of the question came when the OP decided to test the effect of getting rid of the subquery. Check the costs in the resulting plan:


  INSERT INTO TEMP
  SELECT DISTINCT 'id',
    PHT.emplid
  FROM PHOTO PHT
  WHERE 1               =1
  AND PHT.UC_PBI_PHOTO IS NOT NULL;

  
-------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          | 21211 |  3334K|  3659   (1)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP     |       |       |            |          |
|*  2 |   TABLE ACCESS FULL      | PHOTO    | 21211 |  3334K|  3659   (1)| 00:00:01 |
-------------------------------------------------------------------------------------
  
Predicate Information (identified by operation id):
---------------------------------------------------
     2 - filter("PHT"."PHOTO" IS NOT NULL)

Note how the cost of the tablescan of photo has gone up from 313 in the previous query to 3,659 in the simpler query! How can a tablescan that drives a subquery have a lower cost than the tablescan on its own? Bear in mind that in both cases the Cost attributed to the operation “Table Access Full” is purely about scanning the rows in the photo table and is (or should be) entirely disconnected from the cost and frequency of the subquery.

The clue is in the table definition. The column photo.photo is a BLOB.

Models

I think there are potentially two errors in the optimizer displayed by this example. The first is that it’s adding in a cost that it shouldn’t even be considering; the second is that it’s being inconsistent in the way that it’s deriving that cost.

To demonstrate what I think is happening, I built a variant of the OP’s example as follows:


rem
rem     Script:         optimizer_lob_costs.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem

create table photo (
        emplid          varchar2(11) not null,
        photo           clob,
        other_col       varchar2(1000)
)
lob (photo) 
store as
        photo_lob(
        disable storage in row 
        cache
        logging
)
;

create unique index ph_uk on photo(emplid);

insert /*+ append */ into photo
select
        lpad(2 * rownum,10,0),
        rpad('x',1000),
        rpad('x',1000)
from
        all_objects
where
        rownum <= 10000 -- > comment to avoid wordpress format issue
;

commit;

create table temp(
        oprid           varchar2(30),
        emplid_srch     varchar2(11)
)
;

insert /*+ append */ into temp
select
        'id',
        lpad(2 * rownum,10,0)
from
        all_objects
where
        rownum <= 1000 -- > comment to avoid wordpress format issue
;

commit;

execute dbms_stats.gather_table_stats(user,'photo',method_opt=>'for all columns size 1', cascade=>true)
execute dbms_stats.gather_table_stats(user,'temp', method_opt=>'for all columns size 1', cascade=>true)


I’ve changed the BLOB to a CLOB defined with storage in row disabled, and I’ve introduced a varchar2() column of the same size as the CLOB column. I’ve declared the correlating column not null and created a unique index on it. Here are the two queries I want to review – slightly simplified versions of the original:


explain plan for
insert into temp(emplid_srch)
select 
        distinct pht.emplid
from 
        photo pht
where 
        1 = 1
and  pht.photo is not null
-- and     pht.other_col is not null
and     not exists (
                select /*+ no_unnest */
                        null
                from 
                        temp tmp
                where 
                        pht.emplid=tmp.emplid_srch
        )
;  

select * from table(dbms_xplan.display);

explain plan for
insert into temp(emplid_srch)
select
        distinct pht.emplid
from    photo pht
where   1               =1
and  pht.photo is not null
-- and     pht.other_col is not nulL
;  

select * from table(dbms_xplan.display);

As you can see I’ve had to include a /*+ no_unnest */ hint in my SQL to get the FILTER operation to appear in the plan (the OP had the hidden parameter “_unnest_subquery” set to false); I’ve also allowed for two variants of each query, one referencing the CLOB column, the other referencing the varchar2() column.
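(If you wanted to reproduce the OP’s environment rather than hinting each query, the session-level setting would look something like the following – hidden parameters are, as ever, strictly for test systems:)

alter session set "_unnest_subquery" = false;

The only results I’ll show are for the queries accessing the CLOB, and here are the plans first with, then without, the subquery. Check the cost of the tablescan of the photo table in the two cases: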


----------------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       |  9999 |   956K| 10458   (3)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP  |       |       |            |          |
|*  2 |   FILTER                 |       |       |       |            |          |
|*  3 |    TABLE ACCESS FULL     | PHOTO | 10000 |   957K|   216   (1)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL     | TEMP  |     1 |    11 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( NOT EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "TEMP" "TMP"
              WHERE "TMP"."EMPLID_SRCH"=:B1))
   3 - filter("PHT"."PHOTO" IS NOT NULL)
   4 - filter("TMP"."EMPLID_SRCH"=:B1)


----------------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       | 10000 |   957K|   285   (2)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP  |       |       |            |          |
|*  2 |   TABLE ACCESS FULL      | PHOTO | 10000 |   957K|   285   (2)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("PHT"."PHOTO" IS NOT NULL)

With the subquery in place the tablescan of photo reports a cost of 216, in the absence of the subquery it reports a cost of 285, a difference of 69. Repeating the test but using the varchar2() column the cost of the tablescan was 213 in both cases – suggesting that the variation was due to the column being a LOB.

With no further clues in the plan it looked like one of those rare occasions when I have to look at the 10053 (optimizer) trace file – and this is what I got from the 12.1.0.2 trace, looking at the section headed “SINGLE TABLE ACCESS PATH” for the photo table. First the base query without the subquery:


SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for PHOTO[PHT]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#2): PHOTO(LOB)
    AvgLen: 87 NDV: 0 Nulls: 0 Density: 0.000000
  Table: PHOTO  Alias: PHT
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.000000  Non Adjusted: 10000.000000
  Scan IO  Cost (Disk) =   210.000000
  Scan CPU Cost (Disk) =   13571440.480000
  Total Scan IO  Cost  =   210.000000 (scan (Disk))
                         + 70.000000 (io filter eval) (= 0.007000 (per row) * 10000.000000 (#rows))
                       =   280.000000
  Total Scan CPU  Cost =   13571440.480000 (scan (Disk))
                         + 9138463.200000 (cpu filter eval) (= 913.846320 (per row) * 10000.000000 (#rows))
                       =   22709903.680000

Note that the “Total Scan IO Cost” includes a component labelled “(io filter eval)” – why, for the predicate “photo is not null”, would we do any special I/O when that predicate can be met in the basic table scan?

(Note: A predicate like “lob_column is null” means there is no lob locator in place, so no lob access need be done for that test. In fact the related, but very different, predicate “length(lob_column) = 0” meaning the lob locator exists but the lob is “empty” could also be satisfied during the tablescan without reference to the physical lob segment(s) because the length of the lob is included in the lob locator.)
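To put that note in concrete terms – both of the following queries against the model’s photo table (see the script above) can, in principle, be answered from the lob locator, or its absence, without touching the lob segment:

select count(*) from photo pht where pht.photo is null;          -- no lob locator stored in the row at all
select count(*) from photo pht where length(pht.photo) = 0;      -- locator present, but the lob is "empty"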

Let’s assume that the optimizer is incorrectly assuming the run-time engine will have to access the lob in some way to determine that the lob is null. The worst case scenario is that Oracle will start by accessing the LOBindex – so why don’t we check how big the LOBindex is. The first step I took was to check the object_id of the LOBindex and then do a tree dump (which showed 66 leaf blocks) and then I checked the segment header block and dumped that with the following results:


  Extent Control Header
  -----------------------------------------------------------------
  Extent Header:: spare1: 0      spare2: 0      #extents: 1      #blocks: 127
                  last map  0x00000000  #maps: 0      offset: 4128
      Highwater::  0x01400447  ext#: 0      blk#: 70     ext size: 127
  #blocks in seg. hdr's freelists: 4
  #blocks below: 70
  mapblk  0x00000000  offset: 0
                   Unlocked
     Map Header:: next  0x00000000  #extents: 1    obj#: 194295 flag: 0x40000000
  Extent Map
  -----------------------------------------------------------------
   0x01400401  length: 127

See the “Highwater::” information in the Extent Header – the allocated space in the segment is the first 70 blocks of the first extent. That’s (almost certainly) where the incremental cost of 70 (single block I/Os) comes from.  (And I did a couple of big updates to the LOB, designed to expand the LOBindex without changing the segment size of the underlying table, to corroborate that hypothesis.)
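For anyone who wants to repeat the exercise, this is roughly how you can identify the LOBINDEX and its object_id before doing the tree dump and the segment header dump – the index and segment names will vary, of course, the file and block numbers are placeholders, and the dump commands need a suitably privileged session:

select index_name, segment_name from user_lobs where table_name = 'PHOTO' and column_name = 'PHOTO';
select object_id from user_objects where object_name = '&lobindex_name';
alter session set events 'immediate trace name treedump level &object_id';
-- then find the segment header (e.g. dba_segments.header_file / header_block) and:
alter system dump datafile &header_file block &header_block;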

This brings us to the question of why the cost of the tablescan drops when the subquery is included. Again we generate the 10053 trace and examine the details under the “SINGLE TABLE ACCESS PATH”:


SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for PHOTO[PHT]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: PHOTO  Alias: PHT
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.000000  Non Adjusted: 10000.000000
  Scan IO  Cost (Disk) =   210.000000
  Scan CPU Cost (Disk) =   13571440.480000
  Total Scan IO  Cost  =   210.000000 (scan (Disk))
                         + 3.500000 (io filter eval) (= 0.000350 (per row) * 10000.000000 (#rows))
                       =   213.500000
  Total Scan CPU  Cost =   13571440.480000 (scan (Disk))
                         + 656923.160000 (cpu filter eval) (= 65.692316 (per row) * 10000.000000 (#rows))
                       =   14228363.640000


In this case the “(io filter eval)” component is only 3.5 – and if you know your optimizer and how it handles subqueries you’re allowed to guess that this is one of Oracle’s standard guesses of 5% coming into play: 5% of the 70 blocks in the LOBindex is exactly 3.5. (Again, growing the index seemed to corroborate this hypothesis.)

So here’s (possible) bug number 2 – the first bug being the addition of a cost for accessing the LOBindex when there should be no need to access the index at all. The execution plan says we will get 10,000 rows from the table, and the filter subquery predicate reports a cardinality reduced by just 1 on a column that has been declared with a uniqueness constraint, yet a fairly commonly used “guess factor” of 5% is used as an indicator of the number of times the lob predicate will be tested. The various bits of the arithmetic are not consistent with each other.

Summary notes:

If you have a tablescan with a predicate that references a lob column then the cost of the tablescan includes the cost of the lob access – and there are cases where lob access is not needed but still gets costed {this is bug number 1 – the predicates are column is/is not null, and length(column) = / != 0}.

If the lob data itself does not need to be accessed then the size of the lob index – which you can’t easily find – may have a significant impact on the cost of the tablescan.

If the query also includes predicates that result in the optimizer guessing about cardinality effects (1%, 5%, 0.25% are common guesses) then that guess may be used to scale the assumed (and potentially irrelevant) cost of the lob access. (There is scope for further experimentation in this area to examine the effects of “non-guess” predicates and the assumed order of application of predicates, e.g. are lob predicates costed as the last to be applied, and does the algorithm for costing match the execution order.)

As often happens it’s easy to see that there are oddities in the arithmetic that affect the cost of a query in ways that might make the optimizer pick a silly execution plan. Unfortunately it’s not easy to predict when you’re likely to see the effects of these oddities; the best we can do is remember that there is an anomaly with costing lob-based predicates and hope that we think of it when we see the optimizer picking a bad plan for reasons that initially are not obvious.

How to find the UUID of a device in Linux for Oracle ASM

Pakistan's First Oracle Blog - Sun, 2018-05-27 22:45
UUID stands for Universally Unique Identifier. I use the UUID for my disk device when I need to create and add disks for Oracle ASM, as the UUID is independent of the device name or mountpoint. So it's always a good idea to include the UUID of the device in the fstab file in Linux.

So here is how to find the UUID of a device in Linux for Oracle ASM:




[root@bastion ~]# ls -l /dev/disk/by-uuid
lrwxrwxrwx 1 root root 11 Jan 18 20:38 1101c254-0b92-42ca-b34a-6d283bd2d31b -> ../../sda2
lrwxrwxrwx 1 root root 11 Jan 18 20:38 11dc5104-c07b-4439-bdab-00a76fcb88df -> ../../sda1
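Alternatively, blkid reports the same UUIDs, and a typical fstab entry keyed on the UUID might look like the sketch below – the filesystem type and mount point here are just placeholders for illustration:

[root@bastion ~]# blkid /dev/sda1
/dev/sda1: UUID="11dc5104-c07b-4439-bdab-00a76fcb88df" TYPE="ext4"

# example /etc/fstab entry using the UUID instead of the device name
UUID=11dc5104-c07b-4439-bdab-00a76fcb88df  /u01  ext4  defaults  0 0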

HTH.


Categories: DBA Blogs

Oracle ADF BC REST - Performance Review and Tuning

Andrejus Baranovski - Sun, 2018-05-27 00:12
I thought to check how well ADF BC REST scales and how fast it performs. For that reason, I implemented a sample ADF BC REST application and executed a JMeter stress load test against it. You can access the source code for the application and the JMeter script on my GitHub repository. The application is called Blog Visitor Counter for a reason - I'm using the same app to count blog visitors. This means each time you access a blog page, the ADF BC REST service is triggered in the background and it logs a counter value with a timestamp (no personal data).

Application structure is straightforward - ADF BC REST implementation:


When the REST service is accessed (a GET request is executed) it creates and commits a new row in the background (this is why I like ADF BC REST - you have a lot of power and flexibility in the backend), before returning the total logged row count:


A new row is assigned a counter value from a DB sequence, as well as a timestamp. Both values are calculated in Groovy. Another bonus point for ADF BC REST, besides writing logic in Java - you can do scripting in Groovy, which makes the code simpler:


That's it - the ADF BC REST service is ready to run. You may wonder how I'm accessing it from the blog page. ADF BC REST services, like any other REST services, can be invoked through an HTTP request. In this particular case, I'm calling the GET operation through an Ajax call in JavaScript on the client side. This script is uploaded to the blogger HTML:
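Outside the browser the same call can be made with curl; the sketch below is illustrative only - the host, port, context root and resource name are placeholders, not the real endpoint of the counter service:

curl -s "http://<host>:<port>/<rest-context-root>/rest/v1/<ResourceName>" \
     -H "Accept: application/json"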


Performance

I'm using JMeter to execute the performance test. In the example below, the REST GET request is invoked in an infinite loop by 100 concurrent threads. This creates a constant load and allows us to measure how the ADF BC REST application performs under such load:


ADF BC REST scales well: with 100 concurrent threads it processes a request in 0.1 - 0.2 seconds. If we compared it to ADF UI request processing time, it would be around 10 times faster. This is expected, because JSF and ADF Faces UI classes are not used during an ADF BC REST request. Performance test statistics for 100 threads, see Avg logged time in milliseconds:


Tuning

1. Referenced Pool Size and Application Module Pooling

ADF BC REST executes requests in stateless mode - REST is stateless by nature. I thought to check what this means for Application Module tuning parameters. I have observed that changing the Referenced Pool Size value doesn't influence application performance; it works the same way with 0 or any other value. The Referenced Pool Size parameter is not important for the ADF BC REST runtime:


The application performs well under load; there are no passivations/activations logged, even when Referenced Pool Size is set to zero.


However, I found that it is still important to keep Enable Application Module Pooling = ON. If you switch it OFF, passivation will start to appear, which consumes processing power and is strongly discouraged. So, keep Enable Application Module Pooling = ON.

2. Disconnect Application Module Upon Release

It is important to set Disconnect Application Module Upon Release = ON (read more about it - ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). This will ensure there will always be near zero DB connections left open:


Otherwise if we keep Disconnect Application Module Upon Release = OFF:


DB connections will not be released promptly:


This summarises important points related to ADF BC REST tuning.

Regular Expression is not working if the search criteria with LIKE and NOT LIKE in single input field

Tom Kyte - Sat, 2018-05-26 14:46
Hi Tom, Need your help ! Please find the LiveSQL link Thanks in Advance ! I have a table st_exp with s_desc column only, user has option to search by s_desc criteria. Scenario :The user may enter text criteria LIKE and NOT LIKE in the ...
Categories: DBA Blogs

Oracle APEX 18.1 – Features

John Scott - Sat, 2018-05-26 14:21

Oracle APEX 18.1 has introduced the capability to easily add Features to your APEX application.

So what are Features, you might ask? Well, in APEX you have always been able to add region types to pages, for example Charts, Reports etc. Think of Features as pre-built components that span more than just a single region or page. Current options for Features include –

  • About Page
  • Access Control
  • Activity Reporting
  • Configuration Options
  • Feedback
  • Theme Style Selection

I’m sure in future versions of APEX this list will grow, but for now let’s take a look at how we can incorporate a feature.

Let’s create a brand new application via the Wizard –

2018-05-29_20-25-34.png

Once we select Next > we can see the new Features option

2018-05-29_20-26-40.png

At this stage you can add multiple Options, but let’s add the ability for users to modify the Theme Style Selection by themselves.

2018-05-29_20-27-59.png

We can now create the application (as an aside – I love the new dynamic progress bar you see while the application gets created).

2018-05-29_20-28-37.png

Once the application is created, we can login to it and see that an Administration section has been created for us automatically.

2018-05-29_20-30-53.png

2018-05-29_20-31-29.png

You’ll notice the Desktop Theme Style drop-down already contains a number of themes

2018-05-29_20-32-25.png

Changing the Theme and pressing Apply Changes, the new theme takes effect immediately (no need to logout, neat!).

2018-05-29_20-33-01.png

A great feature is that Administrators can enable this feature for end users

2018-05-29_20-34-32.png

Once you do this, a Customize link will appear at the bottom of the page for End Users.

2018-05-29_20-37-26.png

Clicking this link allows end users to select one of the themes and apply it to the application – note the theme choice is specific to the End User and persists between sessions.

2018-05-29_20-38-26.png

Now, you might say – “but John, we could already do something like this manually ourselves” – and you could, but the point here is that you didn’t need to do anything manually. Just by selecting an option and not writing a single line of code, you have declaratively added functionality to your application that allows your End Users to customise the application to how they like it – and they will like it!

Oracle SOA Suite 12c: How to deploy a BPEL onto a weblogic server

Dietrich Schroff - Sat, 2018-05-26 12:39
After you have created your BPEL process inside JDeveloper, you have to add an application server inside JDev:







 And here we go:

Then go to the applications view:







And after the deployment, check via the Application Server view whether the BPEL version was deployed:


Running Istio on Oracle Kubernetes Engine–the managed Kubernetes Cloud Service

Amis Blog - Sat, 2018-05-26 08:30

In a recent post, I introduced the managed Oracle Cloud Service for Kubernetes, the Oracle Kubernetes Engine (OKE): https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/. A logical next step when working with Kubernetes in somewhat challenging situations, for example with microservice style architectures and deployments, is the use of Istio – to configure, monitor and manage the so-called service mesh. Istio – https://istio.io – is brand new – not even Beta yet, although a first production release is foreseen for Q3 2018. It offers very attractive features, including:

  • intelligent routing of requests, including load balancing, A/B testing, content/condition based routing, blue/green release, canary release
  • resilience – for example through circuit breaking and throttling
  • policy enforcement and access control
  • telemetry, monitoring, reporting

In this article, I will describe how I got started with Istio on the OKE cluster that I provisioned in the previous article. Note: there is really nothing very special about OKE for Istio: it is just another Kubernetes cluster, and Istio will do its thing. More interesting perhaps is the fact that I work on a Windows laptop and use a Vagrant/VirtualBox powered Ubuntu VM to do some of the OKE interaction, especially when commands and scripts are Linux only.

The steps I will describe:

  • install Istio client in the Linux VM
  • deploy Istio to the OKE Kubernetes Cluster
  • deploy the Bookinfo sample application with Sidecar Injection (the Envoy Sidecar is the proxy that is added to every Pod to handle all traffic into and out of the Pod; this is the magic that makes Istio work)
  • try out some typical Istio things – like traffic management and monitoring

The conclusion is that leveraging Istio on OKE is quite straightforward.

 

Install Istio Client in Linux VM

The first step with Istio, prior to deploying Istio to the K8S cluster, is the installation on your client machine of the istioctl client application and associated sources, including the Kubernetes yaml files required for the actual deployment. Note: I tried deployment of Istio using a Helm chart, but that did not work and it seems that Istio 0.7.x is not suitable for Helm (release 0.8 is supposed to be ready for Helm).

Following the instructions in the quick start guide: https://istio.io/docs/setup/kubernetes/quick-start.html

and working in the Ubuntu VM that I have spun up with Vagrant and Virtual Box, I go through these steps:

Ensure that the current OCI and OKE user kubie is allowed to do cluster administration tasks:

kubectl create clusterrolebinding k8s_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaavorp3sgemd6bh5wjr3krnssvcvlzlgcxcnbrkwyodplvkbnea2dq

image

Download and install istioctl:

curl -L https://git.io/getLatestIstio | sh -

Then add the bin directory in the Istio release directory structure to the PATH variable, to make istioctl accessible from anywhere.
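In my case that boiled down to something like this – the exact directory name depends on the Istio version that the download script fetched, so istio-0.7.1 here is just an example:

cd istio-0.7.1
export PATH=$PWD/bin:$PATH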

image

    Deploy Istio to the OKE Kubernetes Cluster

    The resources that were created during the installation of the Istio client include the yaml files that can be used to deploy Istio to the Kubernetes cluster. The command to perform that installation is very straightforward:

    kubectl apply -f install/kubernetes/istio.yaml

    The screenshot shows some of the steps executed when this command is kicked off:

    image

    The namespace istio-system is created, the logical container for all Istio related resources.

    SNAGHTML197befee

    The last two commands:

    kubectl get svc -n istio-system

    and

    kubectl get pods -n istio-system

    are used to verify what has been installed and is now running [successfully] in the Kubernetes cluster.

    The Dashboard provides a similar overview:

    image

    Deploy supporting facilities

    Istio is prepared for interaction with a number of facilities that will help with monitoring and tracing – such as Zipkin, Prometheus, Jaeger and Grafana. The core installation of Istio does not include these tools. Using the following kubectl commands, we can extend the istio-system namespace with these tools:

    kubectl apply -f install/kubernetes/addons/prometheus.yaml

    kubectl apply -f install/kubernetes/addons/zipkin.yaml

    kubectl apply -n istio-system -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml

    kubectl apply -f install/kubernetes/addons/grafana.yaml

    kubectl apply -f install/kubernetes/addons/servicegraph.yaml

    image

    Istio-enabled applications can be configured to collect trace spans using Zipkin or Jaeger. On Grafana (https://grafana.com/):  The Grafana add-on is a pre-configured instance of Grafana. The base image (grafana/grafana:4.1.2) has been modified to start with both a Prometheus data source and the Istio Dashboard installed. The base install files for Istio, and Mixer in particular, ship with a default configuration of global (used for every service) metrics. The Istio Dashboard is built to be used in conjunction with the default Istio metrics configuration and a Prometheus backend. More details on Prometheus: https://prometheus.io/ .

    To view a graphical representation of your service mesh,  use the Service Graph Add-On:  https://istio.io/docs/tasks/telemetry/servicegraph.html .

    For log gathering with fluentd and writing them to Elastic Stack, see: https://istio.io/docs/tasks/telemetry/fluentd.html
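    The easiest way to peek at these dashboards from a laptop, without exposing them on the cluster, is kubectl port-forward – something along these lines (the pod labels and ports are the ones used by the standard add-on yaml files, so verify them in your own istio-system namespace):

    kubectl -n istio-system port-forward \
      $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
    # Grafana is then available on http://localhost:3000 ; the same pattern works for
    # Prometheus (port 9090) and the service graph add-on (port 8088)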

     

     

    image

    Deploy the Bookinfo sample application with Sidecar Injection

    (the Envoy Sidecar is the proxy that is added to every Pod to handle all traffic into and out of the Pod; this is the magic that makes Istio work)

    The Bookinfo sample application (https://istio.io/docs/guides/bookinfo.html) is shipped as part of the Istio client installation. This application is composed of several (versions of) microservices that interact. These services and their interactions can be used to investigate the functionality of Istio.

    image

    To install the Bookinfo application, all you need to do:

    kubectl apply -f <(istioctl kube-inject --debug -f samples/bookinfo/kube/bookinfo.yaml)

    The istioctl kube-inject instruction (see https://istio.io/docs/reference/commands/istioctl.html) performs a preprocessing of the bookinfo.yaml file – injecting the specs for the Envoy Sidecar. Note: automatic injection of the sidecar into all Pods that get deployed is supported in Kubernetes 1.9 and higher. I did not yet get that to work, so I am using manual or explicit injection.

    image

    We can list the pods and inspect one of them:

    image

    The product page pod was defined with a single container – with a Python web application. However, because of the injection that Istio performed prior to creation of the Pod on the cluster, the Pod actually contains more than a single container: the istio-proxy was added to the pod. The same thing happened in the other pods in this bookinfo application.
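    A quick way to confirm the injection from the command line is to list the container names inside one of the pods – the exact pod name below is of course specific to my deployment:

    kubectl get pod productpage-v1-<some-hash> -o jsonpath='{.spec.containers[*].name}'
    # expected output along the lines of:  productpage istio-proxy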

    SNAGHTML199d2277

     

    This is what the Bookinfo application looks like:

    image

    (note: using kubectl port-forward I can make the application accessible from my laptop, without having to expose the service on the Kubernetes cluster)
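    The port-forward trick mentioned in the note is a one-liner; the Bookinfo productpage listens on port 9080, so something like this (again with a deployment-specific pod name) makes it reachable on localhost:

    kubectl port-forward productpage-v1-<some-hash> 9080:9080
    # then browse to http://localhost:9080/productpage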

    Try out some typical Istio things – like traffic management and monitoring

      Just by accessing the application, metrics will be gathered by the sidecar and shared with Prometheus. The Prometheus dashboard visualizes these metrics:

      image

      Zipkin helps to visualize the end to end trace of requests through the service mesh. Here is the request to the productpage dissected:

      image

      A drilldown reveals:

      image

      Reviews apparently is called sequentially, after the call to Details is complete. This may be correct, but perhaps we can improve performance by performing these calls in parallel. The call to reviews takes much longer than the one to details. Both are still quite fast – no more than 35 ms.

      The Grafana dashboard plugin for Istio provides an out of the box dashboard on the HTTP requests that happen inside the service mesh. We can see the number of requests and the success rate (percentage of 200 and 300 response codes vs 400 and 500 responses)

      image

      Here are some more details presented in the dashboard:

      image

       

      At this point I am ready to start using Istio in anger – for my own microservices.

      Resources

      Istio.io – https://istio.io/

      Istio on Kubernetes – Quickstart Guide – https://istio.io/docs/setup/kubernetes/quick-start.html

      Working with the Istio Sample Application Bookinfo – https://istio.io/docs/guides/bookinfo.html

      YouTube: Module 1: Istio – Kubernetes – Getting Started – Installation and Sample Application Review by Bruno Terkaly – https://www.youtube.com/watch?v=ThEsWl3sYtM

      Istioctl reference: https://istio.io/docs/reference/commands/istioctl.html

      The post Running Istio on Oracle Kubernetes Engine–the managed Kubernetes Cloud Service appeared first on AMIS Oracle and Java Blog.

      Install MockupTiger wireframes on MacOS

      Nilesh Jethwa - Fri, 2018-05-25 23:07

      In this tutorial we will explore the option to install a specially designed download for MacOS. NOTE: If you are using Windows, Linux, MacOS and have your own setup of a LAMP, XAMPP, WAMP, MAMP or any other stack that … Continue reading →

      Via: MockupTiger Wireframes

      Top 10 Albums Meme

      Greg Pavlik - Fri, 2018-05-25 21:27

      I’ve been hit by a barrage of social media posts on people’s top 10 albums, so I thought I would take a look at what I have listened to the most in the last 5 years or so. I’m not claiming these are my favorites or “the best” albums recorded (in fact there are many better albums I enjoy). But I was somewhat surprised to find that I do return to the same albums over and over, so here’s the top 10, in no particular order.

      1) Alina, Arvo Part

      If you were going to stereotype and box in Part’s work, this would be a good album to use. It’s also amazing enough that it could run on a continuous loop forever and I’d be pretty happy with that.

      2) Benedicta: Marian Chants from Norcia, Monks of Norcia

      Yes, the music hasn’t changed much from the middle ages. And yes, these are actually monks singing, who somehow managed to top the Billboard charts. The term to use is sublime – this music is quintessentially music of peace and another album that bears repetition with ease.

      3) Mi Sueno, Ibrahim Ferrer

      I know the whole Buena Vista Social Club thing was trendy, but this music – Cuban bolero to be precise – is full of passion, charm, and romance: it is music for human beings (which is harder and harder to find these days). This is at once a work of art and a testament to real life.

      4) Dream River, Bill Callahan

      I don’t even know what to categorize this music as: it’s not popular music, rock, easy listening, country or folk. But it has elements of most of those. Callahan’s baritone voice sounds like someone is speaking to you rather than singing. This album just gets better with the years of listening and it’s by far his best.

      5) The Harrow and the Harvest, Gillian Welch

      Appalachian roots, contemporary musical twists – I don’t know what they call this: alt-bluegrass? In any case, it’s Welch’s best album and a solid, if somewhat dark, listen.

      6) In the Spur of the Moment, Steve Turre

      Turre does his jazz trombone (no conch shells on this album – which I am happy about) along with Ray Charles on piano for the first third or so, later trending toward more Afro-Cuban jazz style. I know the complaint on this one is that it feels a bit passionless in parts, but it’s a hard mix not to feel good about.

      7) Treasury of Russian Gypsy Songs, Marusia Georgevskaya and Sergei Krotkoff

      I’ll admit that it sounds like Georgevskaya has smoked more than a few cigarettes. But this is timeless music, a timeless voice, from a timeless culture. Sophie Milman’s Ochi Chernye is sultry and seductive (she is really fantastic), but somehow I like Marusia’s better.

      9) Skeleton Tree, Nick Cave

      Nick Cave is uneven at best and often mediocre but this album is distilled pain in poet form and a major work of art. For some reason I listen to this end to end semi regularly on my morning commute.

      10) Old Crow Medicine Show, Old Crow Medicine Show

      End to end, just hits the right notes over and over again. From introspective to political to just plain fun, these guys made real music for real people at their peak. Things fell apart after Willie Watson, but there is an almost perfect collection of authentic songs.

      Which Bitnami service to choose in the Oracle Cloud Infrastructure?

      Yann Neuhaus - Fri, 2018-05-25 15:40

      In the Oracle Cloud PaaS you have a marketplace where you can choose your service. Some are packaged by Bitnami and available on multiple OSes. My first idea is that in PaaS you do not care about the OS. But the Oracle Cloud has this very nice feature where you still have full access to the OS, as root, even in PaaS. Then, you choose the Linux distribution of your preference – unless performance is different. They run on different Linux kernels. Is the Oracle Linux Unbreakable Kernel more efficient?

      We need to compare with relevant and reliable measures. And for that I’ve created a Postgres service and used Kevin Closson’s SLOB method, as I’m a lucky beta tester for pgio. I’ll post later about the creation of the Postgres service in the Oracle Cloud.

      Cached IO

      First, I’ve run cached IO to measure CPU performance.

      I’ve run with the following settings in pgio.conf:

      UPDATE_PCT=0
      RUN_TIME=300
      NUM_SCHEMAS=4
      NUM_THREADS=1
      WORK_UNIT=255
      SCALE=1G

      This is 4 schemas with 1GB of data. This fit in my 30GB host. Actually, here is a vmstat sample during the run showing 5GB cached and large free memory:

      procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
      r b swpd free buff cache si so bi bo in cs us sy id wa st
      4 0 0 15508168 12444 5495652 0 0 0 0 1530 1631 28 22 50 0 0
      4 0 0 15508292 12452 5495648 0 0 0 4 1506 1564 27 23 50 0 0

      There’s no I/O to block device here, which is my goal.

      I’ve run runit.sh 10 times and here is the summary of the main result, with IOPS:


      $ for i in allopc@144.21.82.244.out allopc@144.21.82.255.out allopc@144.21.82.252.out ; do grep os-release $i | grep PRETTY | tail -1 ; grep x86 $i | tail -1 ; grep -E "runit.out.*DBNAME" $i | tail -10 | nl ; done
       
      Fri May 25 11:56:11 UTC 2018 /etc/os-release : PRETTY_NAME="Ubuntu 16.04.4 LTS"
      Linux ubuntu 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
      1 Fri May 25 11:11:05 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >393232< CACHE_HITS/s >15711<
      2 Fri May 25 11:16:06 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398179< CACHE_HITS/s >16000<
      3 Fri May 25 11:21:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >402080< CACHE_HITS/s >16019<
      4 Fri May 25 11:26:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >404682< CACHE_HITS/s >16086<
      5 Fri May 25 11:31:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >408524< CACHE_HITS/s >16327<
      6 Fri May 25 11:36:08 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >409209< CACHE_HITS/s >16390<
      7 Fri May 25 11:41:08 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >403647< CACHE_HITS/s >16327<
      8 Fri May 25 11:46:09 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397440< CACHE_HITS/s >15894<
      9 Fri May 25 11:51:09 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397273< CACHE_HITS/s >15956<
      10 Fri May 25 11:56:11 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >396906< CACHE_HITS/s >15904<
       
      Fri May 25 11:54:56 UTC 2018 /etc/os-release : PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
      Linux debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
      1 Fri May 25 11:09:53 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395672< CACHE_HITS/s >15882<
      2 Fri May 25 11:14:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >401400< CACHE_HITS/s >16188<
      3 Fri May 25 11:19:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >391551< CACHE_HITS/s >15764<
      4 Fri May 25 11:24:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >393827< CACHE_HITS/s >15802<
      5 Fri May 25 11:29:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >404462< CACHE_HITS/s >16198<
      6 Fri May 25 11:34:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >392712< CACHE_HITS/s >15732<
      7 Fri May 25 11:39:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >399389< CACHE_HITS/s >16063<
      8 Fri May 25 11:44:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >390283< CACHE_HITS/s >15567<
      9 Fri May 25 11:49:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >391426< CACHE_HITS/s >15771<
      10 Fri May 25 11:54:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >392760< CACHE_HITS/s >15874<
       
      Fri May 25 11:53:58 UTC 2018 /etc/os-release : PRETTY_NAME="Oracle Linux Server 7.5"
      Linux b5e501 4.1.12-124.15.2.el7uek.x86_64 #2 SMP Tue May 22 11:52:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
      1 Fri May 25 11:08:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395815< CACHE_HITS/s >15759<
      2 Fri May 25 11:13:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395346< CACHE_HITS/s >16009<
      3 Fri May 25 11:18:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395340< CACHE_HITS/s >15898<
      4 Fri May 25 11:23:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >402556< CACHE_HITS/s >16200<
      5 Fri May 25 11:28:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397847< CACHE_HITS/s >16039<
      6 Fri May 25 11:33:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398848< CACHE_HITS/s >16027<
      7 Fri May 25 11:38:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398817< CACHE_HITS/s >16089<
      8 Fri May 25 11:43:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398878< CACHE_HITS/s >15961<
      9 Fri May 25 11:48:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395885< CACHE_HITS/s >15606<
      10 Fri May 25 11:53:58 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395965< CACHE_HITS/s >15893<

      I’ve put this data in Excel to show the picture where, basically, all 3 environments perform exactly the same:

      CapturePGLIOPS

      Physical IO

      For the second test, I wanted to test physical IOPS. But not to measure disk performance, which is the same for my 3 services. I want to do physical I/O only to see if there’s a difference in context switches when doing non-blocking I/O – which we do not see in the previous test because they were filesystem cache hits. Then I kept the small scale of 4 sessions with 1GB so that there’s a good chance that it remains in the storage cache. But I reduced the memory in order to have less than 4GB in filesystem cache.

      pgio comes with a utility (pgio_reduce_free_memory.sh) to allocate enough huge pages to limit the filesystem cache:


      $ sudo bash pgio/pgio_reduce_free_memory.sh 2
       
      Taking action to reduce free memory down to 2GB available.
      total used free shared buff/cache available
      Mem: 30886100 124676 30698952 19088 62472 30469900
      Swap: 0 0 0
       
      Attempting to allocate 13966 huge pages
      MemAvailable: 1869148 kB
      HugePages_Total: 13966

      This is perfect: 13966 huge pages, that’s 27 GB in my 30GB VM that cannot be used by the filesystem, so that my 4x1GB will need I/O calls to the disk.
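      A quick sanity check straight from /proc/meminfo shows the same numbers the script reported:

      $ grep -E 'MemAvailable|HugePages_Total' /proc/meminfo
      MemAvailable:    1869148 kB
      HugePages_Total:   13966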

      Here is a vmstat extract to confirm that the filesystem cache is less than 2GB

      procs -----------memory---------- ---swap-- -----io---- ---system-- ------cpu-----
      r b swpd free buff cache si so bi bo in cs us sy id wa st
      0 4 0 154416 2344 2015204 0 0 93025 0 32425 24295 2 3 51 44 1
      0 4 0 153816 2428 2015848 0 0 94923 21 32400 24414 2 3 51 44 1

      runit.sh calls vmstat with a 3 seconds delay so you can see a higher number of block/s and context switches.

      Here are the interesting lines from the runit.sh output:

      $ for i in allopc@144.21.89.85.out allopc@144.21.89.53.out allopc@144.21.89.26.out ; do grep os-release $i | grep PRETTY | tail -1 ; grep x86 $i | tail -1 ; grep -E "runit.out.*DBNAME" $i | tail -10 | nl ; done
       
      Fri May 25 14:58:47 UTC 2018 /etc/os-release : PRETTY_NAME="Ubuntu 16.04.4 LTS"
      Linux ubuntu 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
      1 Fri May 25 14:13:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >1214< CACHE_HITS/s >55<
      2 Fri May 25 14:18:42 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >4545< CACHE_HITS/s >195<
      3 Fri May 25 14:23:43 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >17053< CACHE_HITS/s >682<
      4 Fri May 25 14:28:43 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18865< CACHE_HITS/s >801<
      5 Fri May 25 14:33:44 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18933< CACHE_HITS/s >794<
      6 Fri May 25 14:38:44 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18832< CACHE_HITS/s >777<
      7 Fri May 25 14:43:45 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18826< CACHE_HITS/s >757<
      8 Fri May 25 14:48:46 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >19229< CACHE_HITS/s >819<
      9 Fri May 25 14:53:46 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19408< CACHE_HITS/s >835<
      10 Fri May 25 14:58:47 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19524< CACHE_HITS/s >799<
       
      Fri May 25 14:58:41 UTC 2018 /etc/os-release : PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
      Linux debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
      1 Fri May 25 14:13:35 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >1727< CACHE_HITS/s >82<
      2 Fri May 25 14:18:36 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >10743< CACHE_HITS/s >534<
      3 Fri May 25 14:23:37 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18660< CACHE_HITS/s >763<
      4 Fri May 25 14:28:37 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18496< CACHE_HITS/s >811<
      5 Fri May 25 14:33:38 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18537< CACHE_HITS/s >757<
      6 Fri May 25 14:38:38 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18647< CACHE_HITS/s >774<
      7 Fri May 25 14:43:39 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18838< CACHE_HITS/s >775<
      8 Fri May 25 14:48:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18686< CACHE_HITS/s >786<
      9 Fri May 25 14:53:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18742< CACHE_HITS/s >782<
      10 Fri May 25 14:58:41 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >18634< CACHE_HITS/s >793<
       
      Fri May 25 14:57:25 UTC 2018 /etc/os-release : PRETTY_NAME="Oracle Linux Server 7.5"
      Linux b5e501 4.1.12-124.15.2.el7uek.x86_64 #2 SMP Tue May 22 11:52:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
      1 Fri May 25 14:12:20 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >17025< CACHE_HITS/s >721<
      2 Fri May 25 14:17:21 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19331< CACHE_HITS/s >792<
      3 Fri May 25 14:22:21 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19271< CACHE_HITS/s >770<
      4 Fri May 25 14:27:22 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19324< CACHE_HITS/s >802<
      5 Fri May 25 14:32:22 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18943< CACHE_HITS/s >802<
      6 Fri May 25 14:37:23 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19202< CACHE_HITS/s >818<
      7 Fri May 25 14:42:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18999< CACHE_HITS/s >803<
      8 Fri May 25 14:47:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19459< CACHE_HITS/s >823<
      9 Fri May 25 14:52:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19138< CACHE_HITS/s >836<
      10 Fri May 25 14:57:25 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18958< CACHE_HITS/s >781<

      And I’ve graphed them:
      CapturePGPIOPS

      As I hit the storage here, I needed a few runs to warm up the storage cache and get the I/O I wanted: low latency from the storage cache, but involving context switches on the server. And basically, all 3 alternatives (Ubuntu with Linux 4.4, Debian with Linux 4.9 and OEL with the Unbreakable Kernel 4.1) behave the same. From these tests, I can say that performance is not the major criterion for choosing one of the PaaS alternatives. Just choose the distribution you like.

      About pgio: it is great to get performance measures that are reliable and focused on what I want to test. I had no problem running it on all 3 platforms. I just needed apt-get / yum to install sysstat and bc, which are not there by default.
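      For reference, that is a one-liner per distribution – something like:

      sudo apt-get install -y sysstat bc      # Ubuntu / Debian
      sudo yum install -y sysstat bc          # Oracle Linux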

       

      Cet article Which Bitnami service to choose in the Oracle Cloud Infrastructure? est apparu en premier sur Blog dbi services.

      First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service

      Amis Blog - Fri, 2018-05-25 14:59

      Oracle recently (May 2018) launched its Managed Kubernetes Cloud Service (OKE – Oracle Kubernetes Engine) – see for example this announcement: https://blogs.oracle.com/developers/kubecon-europe-2018-oracle-open-serverless-standards-fn-project-and-kubernetes. Yesterday I got myself a new free cloud trial on the Oracle Public Cloud (https://cloud.oracle.com/tryit). Subsequently, I created a Kubernetes cluster and deployed my first pod on that cluster. In this article, I will describe the steps that I went through:

      • create Oracle Cloud Trial account
      • configure OCI (Oracle Cloud Infrastructure) tenancy
        • create service policy
        • create OCI user
        • create virtual network
        • create security lists
        • create compute instance
      • configure Kubernetes Cluster & Node Pool; have the cluster deployed
      • install and configure OCI CLI tool
      • generate kubeconfig file
      • connect to Kubernetes cluster using Kubectl – inspect and roll out a Pod

      The resources section at the end of this article references all relevant documentation.

      Configure OCI (Oracle Cloud Infrastructure) tenancy

      Within your tenancy, a suitably pre-configured compartment must already exist in each region in which you want to create and deploy clusters. The compartment must contain the necessary network resources already configured (VCN, subnets, internet gateway, route table, security lists). For example, to create a highly available cluster spanning three availability domains, the VCN must include three subnets in different availability domains for node pools, and two further subnets for load balancers.

      Within the root compartment of your tenancy, a policy statement (allow service OKE to manage all-resources in tenancy) must be defined to give Container Engine for Kubernetes access to resources in the tenancy.

      Create policy

      You have to define a policy to enable Container Engine for Kubernetes to perform operations on the compartment.

      Click on Identity | Policies:

      image

      image

      Ensure you are in the Root Compartment. Click on Create Policy. Define a new policy. The statement must be:

      allow service OKE to manage all-resources in tenancy

      image

      Click on Create. The new policy is added to the list.

      image

      Precreate the required network resources

      See for instructions: https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengnetworkconfig.htm 

      Create a Virtual Cloud Network.

      The VCN in which you want to create and deploy clusters must be configured as follows:

      • The VCN must have a CIDR block defined that is large enough for at least five subnets, in order to support the number of hosts and load balancers a cluster will have. A /16 CIDR block would be large enough for almost all use cases (10.0.0.0/16 for example). The CIDR block you specify for the VCN must not overlap with the CIDR block you specify for pods and for the Kubernetes services (see CIDR Blocks and Container Engine for Kubernetes).
      • The VCN must have an internet gateway defined.
      • The VCN must have a route table defined that has a route rule specifying the internet gateway as the target for the destination CIDR block.
      • The VCN must have five subnets defined:

        • Three subnets in which to deploy worker nodes. Each worker node subnet must be in a different availability domain. The worker node subnets must have different security lists to the load balancer subnets.
        • Two subnets to host load balancers. Each load balancer subnet must be in a different availability domain. The load balancer subnets must have different security lists to the worker node subnets.
      • The VCN must have security lists defined for the worker node subnets and the load balancer subnets. The security list for the worker node subnets must have:

        • Stateless ingress and egress rules that allow all traffic between the different worker node subnets.
        • Stateless ingress and egress rules that allow all traffic between worker node subnets and load balancer subnets.

  Optionally, you can include additional ingress rules for the worker node subnets, as shown in the screenshots below:

      image

      image

      image
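To make the subnet requirements above concrete, here is one possible CIDR layout that would satisfy them. These values are purely illustrative (not the ones in my screenshots) – any non-overlapping layout within the VCN block works:

VCN              10.0.0.0/16
workers-1        10.0.10.0/24   (availability domain 1)
workers-2        10.0.11.0/24   (availability domain 2)
workers-3        10.0.12.0/24   (availability domain 3)
loadbalancers-1  10.0.20.0/24   (availability domain 1)
loadbalancers-2  10.0.21.0/24   (availability domain 2)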

      Create Internet Gateway

      image

      image

      Create Route Table

      The VCN in which you want to create and deploy clusters must have a route table. The route table must have a route rule that specifies an internet gateway as the target for the destination CIDR block 0.0.0.0/0.

      image

      image

      Set DHCP options

      The VCN in which you want to create and deploy clusters must have DHCP Options configured. The default value for DNS Type of Internet and VCN Resolver is acceptable.

      image

Create Security Lists

      The VCN in which you want to create and deploy clusters must have security lists defined for the worker node subnets and the load balancer subnets. The security lists for the worker node subnets and the load balancer subnets must be different.

      Create list called workers

      image

      image

      Worker Node Seclist Configuration


      image

Create Security List loadbalancers

      image

      image

      Create Subnet in VCN

      The VCN in which you want to create and deploy clusters must usually have (five) subnets defined as follows:

      • (Three) subnets in which to deploy worker nodes. Each worker node subnet must be in a different availability domain. The worker node subnets must have different security lists to the load balancer subnets.
      • (Two) subnets to host load balancers. Each load balancer subnet must be in a different availability domain. The load balancer subnets must have different security lists to the worker node subnets.

      In addition, all the subnets must have the following properties set as shown:

      • Route Table: The name of the route table that has a route rule specifying an internet gateway as the target for the destination CIDR block 0.0.0.0/0
      • Subnet access: Public
      • DHCP options: Default

      image

      Subnet called workers-1

      image

      Associated Security List workers with this subnet:

      image

      image

      Create a second subnet called loadbalancers1:

      image

      Press the Create button. Now we have all subnets we need.

      image

      Create Compute Instance

      Click on Home. Then click on Create Compute Instance.

      image

      Set the attributes of the VM as shown in the two figures. I am sure other settings are fine too – but at least these work for me.

      image

image

      Click on Create Instance. The VM will now be spun up.

      image

      Create a non-federated identity – new user kubie in the OCI tenancy

      Note: initially I tried to just create the cluster as the initial user that was created when I initiated the OCI tenancy in my new trial account. However, the Create Cluster button was not enabled.

      image

Oracle Product Management suggested that my account probably was a Federated Identity, which OKE does not support at this time. In order to use OKE in one of these accounts, you need to create a native OCI Identity User.

      image

      image

Click on Create/Reset password. You will be presented with a generated password. Copy it to the clipboard or into a text file. You will not see it again.

      Add user kubie to group Administrators:

      image

      After creating the OCI user kubie we can now login as that user, using the generated password that you had saved in the clipboard or in a text file:

      image

      Configure Kubernetes Cluster & Node Pool and have the cluster deployed

After preparing the OCI tenancy to make it comply with the prerequisites for creating the K8S cluster, we can now proceed and provision the cluster.

Click on Containers in the main menu and select Clusters as the sub-menu option. Click on Create Cluster.

      image

      Provide details for the Kubernetes cluster; see instructions here: https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm . Set the name to k8s-1, select 1.9.7 as the version. Select the VCN that was created earlier on. Select the two subnets that were defined for the load-balancers.

      Set the Kubernetes Service CIDR Block: for example to 10.96.0.0/16.

      Set the Pods CIDR Block: for example to 10.244.0.0/16.

      image

      Enable Dashboard and Tiller:

      image

image

      Click on Add Node Pool.

      The details for the Node Pool:

      image

      Then press create to start the creation of the cluster.

      The k8s-1 cluster is added to the list:

      image

      At this point, the cluster creation request is being processed:

      image

      After a little while, the cluster has been created:

      image

      Install and configure OCI CLI tool

      To interact with the Oracle Cloud Infrastructure, we can make use of the OCI Command Line Interface – a Python based tool. We need to use this tool to generate the kubeconfig file that we need to interact with the Kubernetes cluster.

My laptop is a Windows machine. I have used vagrant to spin up a VM with Ubuntu Xenial (16.04 LTS) to have an isolated, Linux-based environment to work with the OCI CLI.

      In that environment, I download and install the OCI CLI:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

      image

      Some more output from the installation process:

      image

      And the final steps:

      image

Next, I need to configure the OCI CLI for my specific environment:

      oci setup config

image

The setup action generates a public and private key pair – the former in a file called oci_api_key_public.pem. The contents of this file should be added as a new public key to the OCI user – in my case the user called kubie.

image

      At this point, OCI CLI is installed and configured for the right OCI Tenancy. The public key is added to the user account. We should now be able to use OCI CLI to access the OCI Tenancy.
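For reference, the generated configuration ends up in ~/.oci/config. On my VM it looks roughly like the sketch below – the OCIDs, fingerprint and key path are placeholders, substitute your own values:

cat ~/.oci/config

[DEFAULT]
user=ocid1.user.oc1..<user-ocid>
fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef
key_file=/home/vagrant/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<tenancy-ocid>
region=us-phoenix-1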

      Try out the OCI CLI with simple calls like:

      oci compute instance list

      and

      oci compute instance list -c ocid1.tenancy.oc1..aaa

      Note: the compartment identifier parameter takes the value of the Tenancy OCID.

      image

      Generate kubeconfig file

After installing and configuring the OCI CLI tool, we can continue to generate the kubeconfig file. The OCI Console contains the page with details on the k8s-1 cluster. Press the Access Kubeconfig button. A popup opens with the instructions to generate the kubeconfig file – unfortunately there is not yet an option to simply download the file.

      Download the get-kubeconfig.sh script to the Ubuntu VM.

      image

Make this script executable and run it using the instructions copied from the popup shown above.

      Using the commands provided from the OCI Console, I can generate the kubeconfig file:

      image
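As an alternative to the helper script: more recent versions of the OCI CLI can generate the kubeconfig file directly. A sketch of what that could look like (the cluster OCID is a placeholder, and you should verify the command against your CLI version):

oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.<cluster-ocid> --file ./kubeconfig
export KUBECONFIG=$(pwd)/kubeconfig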


      Connect to Kubernetes cluster using Kubectl – inspect and roll out a Pod

        After generating the kubeconfig file, I have downloaded and installed kubectl to my Ubuntu VM, using the Git Gist:
        https://gist.githubusercontent.com/fabriciosanchez/2f193f76dfb1af3a3895661fce620b1a/raw/a28a70aca282a28d690ae240679e99125a3fd763/InstallingKubectl.sh

To download:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

        To make executable:


        chmod +x ./kubectl
        sudo mv ./kubectl /usr/local/bin/kubectl

        And try out if kubectl can be used:

        kubectl get nodes

        image

On a Windows client that has kubectl installed and that has access to the kubeconfig file that was created in the previous section, set the environment variable KUBECONFIG to reference the kubeconfig file that was generated using OCI CLI. Using that kubeconfig file, create a deployment from a test deployment.yaml through kubectl:

        image

Expose the nginx deployment. Get the list of services. Then change the type of the service – from ClusterIP to NodePort. Then get the list of services again – to retrieve the port at which the service is exposed on the cluster node (31907).

        image
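For reference, a minimal command sequence for the steps described above could look like this – assuming the test deployment.yaml creates a deployment called nginx (the names and file are illustrative, not the exact ones from my environment):

kubectl apply -f deployment.yaml
kubectl get deployments,pods
kubectl expose deployment nginx --port=80
kubectl patch service nginx -p '{"spec":{"type":"NodePort"}}'
kubectl get services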

        The Kubernetes dashboard is now made available on the client using:

kubectl proxy --kubeconfig="C:\data\lucasjellemag-oci\kubeconfig"

        image

        Now we can check the deployment in the K8S Dashboard:

        image

        Here the new Type of Service in the Kubernetes Dashboard:

        image

        Access NGINX from any client anywhere:

        image

        Resources

        Documentation on Preparing for an OKE Cluster and installing the Cluster – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm 

        Docs on how to get hold of kubeconfig file – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm 

        Installing and Configuring the OCI Command Line Interface – https://docs.us-phoenix-1.oraclecloud.com/Content/API/SDKDocs/cliinstall.htm 

        Kubectl Reference – https://kubernetes-v1-4.github.io/docs/user-guide/kubectl-overview/ 

        Git Gist for installing kubectl on Ubuntu –
        https://gist.githubusercontent.com/fabriciosanchez/2f193f76dfb1af3a3895661fce620b1a/raw/a28a70aca282a28d690ae240679e99125a3fd763/InstallingKubectl.sh

        Deploying a Sample App to the K8S Cluster – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengdeployingsamplenginx.htm 

        Articles on the availability of the Oracle Kubernetes Engine cloud service:

        http://www.devopsonline.co.uk/oracles-new-container-engine-for-kubernetes/

        https://containerjournal.com/2018/05/03/oracle-extends-kubernetes-and-serverless-frameworks-support/

        https://www.forbes.com/sites/oracle/2018/05/10/cozying-up-to-kubernetes-in-copenhagen-developers-celebrate-open-serverless-apps/

        https://www.itopstimes.com/contain/oracle-brings-new-kubernetes-features-to-its-container-engine/

        https://blogs.oracle.com/developers/kubecon-europe-2018-oracle-open-serverless-standards-fn-project-and-kubernetes

        The post First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service appeared first on AMIS Oracle and Java Blog.

        Announcing Oracle APEX 18.1

        OTN TechBlog - Fri, 2018-05-25 12:11

        Oracle Application Express (APEX) 18.1 is now generally available! APEX enables you to develop, design and deploy beautiful, responsive, data-driven desktop and mobile applications using only a browser. This release of APEX is a dramatic leap forward in both the ease of integration with remote data sources, and the easy inclusion of robust, high-quality application features.

        Keeping up with the rapidly changing industry, APEX now makes it easier than ever to build attractive and scalable applications which integrate data from anywhere - within your Oracle database, from a remote Oracle database, or from any REST Service, all with no coding.  And the new APEX 18.1 enables you to quickly add higher-level features which are common to many applications - delivering a rich and powerful end-user experience without writing a line of code.

        "Over a half million developers are building Oracle Database applications today using  Oracle Application Express (APEX).  Oracle APEX is a low code, high productivity app dev tool which combines rich declarative UI components with SQL data access.  With the new 18.1 release, Oracle APEX can now integrate data from REST services with data from SQL queries.  This new functionality is eagerly awaited by the APEX developer community", said Andy Mendelsohn, Executive Vice President of Database Server Technologies at Oracle Corporation.

         

        Some of the major improvements to Oracle Application Express 18.1 include:

        Application Features


        It has always been easy to add components to an APEX application - a chart, a form, a report.  But in APEX 18.1, you now have the ability to add higher-level application features to your app, including access control, feedback, activity reporting, email reporting, dynamic user interface selection, and more.  In addition to the existing reporting and data visualization components, you can now create an application with a "cards" report interface, a dashboard, and a timeline report.  The result?  An easily-created powerful and rich application, all without writing a single line of code.

        REST Enabled SQL Support


        Oracle REST Data Services (ORDS) REST-Enabled SQL Services enables the execution of SQL in remote Oracle Databases, over HTTP and REST.  You can POST SQL statements to the service, and the service then runs the SQL statements against Oracle database and returns the result to the client in a JSON format.  
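As an illustration (this is not from the announcement itself): calling a REST-Enabled SQL endpoint boils down to an authenticated HTTP POST with the SQL statement in the body. The /_/sql path and the Content-Type below are taken from my reading of the ORDS documentation, so treat them as assumptions and double-check them against your ORDS version:

curl -i -X POST -u scott:tiger -H "Content-Type: application/sql" --data 'select sysdate from dual;' https://myserver/ords/scott/_/sql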

        In APEX 18.1, you can build charts, reports, calendars, trees and even invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL Services.  No longer is a database link necessary to include data from remote database objects in your APEX application - it can all be done seamlessly via REST Enabled SQL.

        Web Source Modules


        APEX now offers the ability to declaratively access data services from a variety of REST endpoints, including ordinary REST data feeds, REST Services from Oracle REST Data Services, and Oracle Cloud Applications REST Services.  In addition to supporting smart caching rules for remote REST data, APEX also offers the unique ability to directly manipulate the results of REST data sources using industry standard SQL.

        REST Workshop


        APEX includes a completely rearchitected REST Workshop, to assist in the creation of REST Services against your Oracle database objects.  The REST definitions are managed in a single repository, and the same definitions can be edited via the APEX REST Workshop, SQL Developer or via documented API's.  Users can exploit the data management skills they possess, such as writing SQL and PL/SQL to define RESTful API services for their database.   The new REST Workshop also includes the ability to generate Swagger documentation against your REST definitions, all with the click of a button.

        Application Builder Improvements


        In Oracle Application Express 18.1, wizards have been streamlined with smarter defaults and fewer steps, enabling developers to create components quicker than ever before.  There have also been a number of usability enhancements to Page Designer, including greater use of color and graphics on page elements, and "Sticky Filter" which is used to maintain a specific filter in the property editor.  These features are designed to enhance the overall developer experience and improve development productivity.  APEX Spotlight Search provides quick navigation and a unified search experience across the entire APEX interface.

        Social Authentication


        APEX 18.1 introduces a new native authentication scheme, Social Sign-In.  Developers can now easily create APEX applications which can use Oracle Identity Cloud Service, Google, Facebook, generic OpenID Connect and generic OAuth2 as the authentication method, all with no coding.

        Charts


The data visualization engine of Oracle Application Express is powered by Oracle JET (JavaScript Extension Toolkit), a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles.  The charts in APEX are fully HTML5 capable and work on any modern browser, regardless of platform or screen size.  These charts provide numerous ways to visualize a data set, including bar, line, area, range, combination, scatter, bubble, polar, radar, pie, funnel, and stock charts.  APEX 18.1 features an upgraded Oracle JET 4.2 engine with updated charts and API's.  There are also new chart types including Gantt, Box-Plot and Pyramid, and better support for multi-series, sparse data sets.

        Mobile UI


APEX 18.1 introduces many new UI components to assist in the creation of mobile applications.  Three new component types, ListView, Column Toggle and Reflow Report, are now components which can be used natively with the Universal Theme and are commonly used in mobile applications.  Additional enhancements have been made to the APEX Universal Theme which are mobile-focused, namely, mobile page headers and footers which will remain consistently displayed on mobile devices, and floating item label templates, which optimize the information presented on a mobile screen.  Lastly, APEX 18.1 also includes declarative support for touch-based dynamic actions, tap and double tap, press, swipe, and pan, supporting the creation of rich and functional mobile applications.

        Font APEX


        Font APEX is a collection of over 1,000 high-quality icons, many specifically created for use in business applications.  Font APEX in APEX 18.1 includes a new set of high-resolution 32 x 32 icons which include much greater detail and the correctly-sized font will automatically be selected for you, based upon where it is used in your APEX application.

        Accessibility


        APEX 18.1 includes a collection of tests in the APEX Advisor which can be used to identify common accessibility issues in an APEX application, including missing headers and titles, and more. This release also deprecates the accessibility modes, as a separate mode is no longer necessary to be accessible.

        Upgrading


        If you're an existing Oracle APEX customer, upgrading to APEX 18.1 is as simple as installing the latest version.  The APEX engine will automatically be upgraded and your existing applications will look and run exactly as they did in the earlier versions of APEX.  

         

        "We believe that APEX-based PaaS solutions provide a complete platform for extending Oracle’s ERP Cloud. APEX 18.1 introduces two new features that make it a landmark release for our customers. REST Service Consumption gives us the ability to build APEX reports from REST services as if the data were in the local database. This makes embedding data from a REST service directly into an ERP Cloud page much simpler. REST enabled SQL allows us to incorporate data from any Cloud or on-premise Oracle database into our Applications. We can’t wait to introduce APEX 18.1 to our customers!", said Jon Dixon, co-founder of JMJ Cloud.

         

        Additional Information


        Application Express (APEX) is the low code rapid app dev platform which can run in any Oracle Database and is included with every Oracle Database Cloud Service.  APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices.  To learn more about Oracle Application Express, visit apex.oracle.com.  To learn more about Oracle Database Cloud, visit cloud.oracle.com/database

        Show parameter

        Jonathan Lewis - Fri, 2018-05-25 08:01

        Just a quick little tip for Friday afternoon.

        If you use the “show parameter” or “show spparameter” commands from SQL*Plus you’ve probably noticed that the parameter value may be too long for the defined output column, and even the parameter name may occasionally be too long. For example (from 12.2.0.1):

        SQL> show spparameter optimizer
        
        SID	 NAME			       TYPE	   VALUE
        -------- ----------------------------- ----------- ----------------------------
        *	 optimizer_adaptive_plans      boolean
        *	 optimizer_adaptive_reporting_ boolean
        	 only
        *	 optimizer_adaptive_statistics boolean
        *	 optimizer_capture_sql_plan_ba boolean
        	 selines
        *	 optimizer_dynamic_sampling    integer
        *	 optimizer_features_enable     string
        *	 optimizer_index_caching       integer
        *	 optimizer_index_cost_adj      integer
        *	 optimizer_inmemory_aware      boolean
        *	 optimizer_mode                string
        *	 optimizer_secure_view_merging boolean
        *	 optimizer_use_invisible_index boolean
        	 es
        *	 optimizer_use_pending_statist boolean
        	 ics
        *	 optimizer_use_sql_plan_baseli boolean
        	 nes
        
        SQL> show parameter audit
        
        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        audit_file_dest                      string      /u01/app/oracle/admin/orcl12c/
                                                         adump
        audit_sys_operations                 boolean     TRUE
        audit_syslog_level                   string
        audit_trail                          string      NONE
        unified_audit_sga_queue_size         integer     1048576
        
        

The column definitions for these displays are embedded in the SQL*Plus library code ($ORACLE_HOME/lib/libsqlplus.so), and the effects can be seen by spooling the output from the basic “column” command from SQL*Plus and searching for the word ‘param’ (ignoring case). The relevant column definitions are:

        
        COLUMN   SID_COL_PLUS_SHOW_SPPARAM      ON      FORMAT    a8   HEADING SID
        COLUMN   VALUE_COL_PLUS_SHOW_SPPARAM    ON      FORMAT   a28   HEADING VALUE
        COLUMN   NAME_COL_PLUS_SHOW_SPPARAM     ON      FORMAT   a29   HEADING NAME
        
        COLUMN   value_col_plus_show_param      ON      FORMAT   a30   HEADING VALUE
        COLUMN   name_col_plus_show_param       ON      FORMAT   a36   HEADING NAME
        
        

        If you want to change a few of the lengths (or even the column headings) you can simply add modified copies of these commands to the glogin script ($ORACLE_HOME/sqlplus/admin/glogin.sql) or to a login.sql script that’s referenced in your defined SQL path (and make sure you take a look at Franck Pachot’s blog to see how that has changed in 12.2).
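For example, a minimal addition to glogin.sql (or your own login.sql) that widens the NAME and VALUE columns might look like this – the widths are arbitrary, pick whatever suits your terminal:

column name_col_plus_show_param      ON format a45 heading NAME
column value_col_plus_show_param     ON format a60 heading VALUE
column name_col_plus_show_spparam    ON format a35 heading NAME
column value_col_plus_show_spparam   ON format a60 heading VALUE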

         

        OpenSSH is now available on Windows 10

        Yann Neuhaus - Fri, 2018-05-25 06:26

A couple of days ago, the Windows 10 April 2018 update was installed on my laptop. And what, you may say? Well, surprisingly this update provides some interesting “hidden” features and one of them concerns OpenSSH, which is now available on-demand in Windows 10 (and likely available soon on Windows Server).

        blog 134 - 0 - openSSH - Win10

This is obviously good news because so far I have used either putty or a bash environment directly from my Windows 10 laptop, available since the Anniversary Update on August 2, 2016. I know that some of my colleagues use Cygwin as well. A quick example of using the new bash environment from my Win10 Pro laptop:

        C:\Users\dab>bash
        Performing one-time upgrade of the Windows Subsystem for Linux file system for this distribution...
        
        mikedavem@DBI-LT-DAB:/mnt/c/Users/dab$ cat /proc/version
        Linux version 4.4.0-17134-Microsoft (Microsoft@Microsoft.com) (gcc version 5.4.0 (GCC) ) #48-Microsoft Fri Apr 27 18:06:00 PST 2018
        
        mikedavem@DBI-LT-DAB:/mnt/c/Users/dab$ ssh
        usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
                   [-D [bind_address:]port] [-E log_file] [-e escape_char]
                   [-F configfile] [-I pkcs11] [-i identity_file] [-L address]
                   [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
                   [-Q query_option] [-R address] [-S ctl_path] [-W host:port]
                   [-w local_tun[:remote_tun]] [user@]hostname [command]

         

After applying the corresponding update, the OpenSSH client is already installed and available to use.

        [dab@DBI-LT-DAB:#]> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'
        
        
        Name  : OpenSSH.Client~~~~0.0.1.0
        State : Installed
        
        Name  : OpenSSH.Server~~~~0.0.1.0
        State : NotPresent

         

If you also want to install the server, you just have to use the Add-WindowsCapability cmdlet as follows:

        [dab@DBI-LT-DAB:#]> Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
        
        
        Path          :
        Online        : True
        RestartNeeded : False
        [dab@DBI-LT-DAB:#]> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'
        
        
        Name  : OpenSSH.Client~~~~0.0.1.0
        State : Installed
        
        Name  : OpenSSH.Server~~~~0.0.1.0
        State : Installed

         

From now on, I can use an ssh command directly from both my PowerShell and my command line environment as follows:

        C:\
        [dab@DBI-LT-DAB:#]> ssh
        usage: ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
                   [-D [bind_address:]port] [-E log_file] [-e escape_char]
                   [-F configfile] [-I pkcs11] [-i identity_file]
                   [-J [user@]host[:port]] [-L address] [-l login_name] [-m mac_spec]
                   [-O ctl_cmd] [-o option] [-p port] [-Q query_option] [-R address]
                   [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]]
                   destination [command]

         

We will also be able to access a Linux server using either password-based or key-based authentication. Let’s try the first one (password-based authentication) against my Linux docker private registry:

        [dab@DBI-LT-DAB:#]> ssh dab@xxx.xxx.xxx.xxx
        The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
        ECDSA key fingerprint is SHA256:7HwUjHowFNEJ3ILErsmBmgr8sqxossLV+fFt71YsBtA.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'xxx.xxx.xxx.xxx' (ECDSA) to the list of known hosts.
        dab@xxx.xxx.xxx.xxx's password:

         

Not a big surprise here! It works as expected (assuming your SSH server is configured to accept password authentication)! Let’s now try the second method (key-based authentication). In fact, I already have an .ssh folder from a previous request to connect to our GitLab environment. For the demo, let’s use the same public/private key pair.

        C:\Users\dab
        [dab@DBI-LT-DAB:#]> dir .ssh
        
        
            Directory: C:\Users\dab\.ssh
        
        
        Mode                LastWriteTime         Length Name
        ----                -------------         ------ ----
        -a----       09.05.2018     11:25           3243 id_rsa
        -a----       09.05.2018     11:25            757 id_rsa.pub
        -a----       25.05.2018     10:24            380 known_hosts

         

The next step consists of copying my public key (id_rsa.pub) to the .ssh folder on the remote Linux server as the authorized_keys file.

        C:\Users\dab\.ssh
        [dab@DBI-LT-DAB:#]> scp .\id_rsa.pub dab@xxx.xxx.xxx.xxx:/home/dab/.ssh/authorized_keys
        id_rsa.pub

         

To avoid retyping the passphrase for each connection, let’s start the ssh-agent service on my Windows 10 machine.

        C:\Users\dab\.ssh
        [dab@DBI-LT-DAB:#]> Start-Service -Name ssh-agent
        C:\Users\dab\.ssh
        [dab@DBI-LT-DAB:#]> Get-Service ssh-agent
        
        Status   Name               DisplayName
        ------   ----               -----------
        Running  ssh-agent          OpenSSH Authentication Agent
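Note: on my machine the ssh-agent service was not configured to start automatically. If you want it to survive a reboot, a standard Set-Service call should do the trick (nothing OpenSSH-specific here):

C:\Users\dab\.ssh
[dab@DBI-LT-DAB:#]> Set-Service -Name ssh-agent -StartupType Automatic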

         

        Then I just have to add the private key to this agent …

        C:\Users\dab\.ssh
        [dab@DBI-LT-DAB:#]> ssh-add.exe id_rsa
        Identity added: id_rsa (id_rsa)

         

        … and finally to try a connection to my Linux Server as follows:

        C:\Users\dab\.ssh
        [dab@DBI-LT-DAB:#]> ssh dab@xxx.xxx.xxx.xxx
        Last login: Fri May 25 09:43:16 2018 from gateway

         

It works like a charm! I’m now connected to my Linux server as the dab user and I can get a picture of my docker containers. Note that the bash prompt has changed here (server name), even though it is pretty similar to my PowerShell prompt. Indeed, I customized my PowerShell profile to be similar to a bash shell in appearance and in some behaviors as well :)

        [dab@localhost ~]$ docker ps
        CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
        a60f3412b864        registry:2          "/entrypoint.sh /e..."   9 months ago        Up 37 minutes       0.0.0.0:5000->5000/tcp   registry

         

See you!

Cet article OpenSSH is now available on Windows 10 est apparu en premier sur Blog dbi services.

        5 main building blocks of the new Visual Builder Cloud Service

        Amis Blog - Fri, 2018-05-25 06:16

In May 2018 Oracle introduced the new version of Visual Builder Cloud Service. This version is not just aimed at the Citizen Developer; in the end an experienced JavaScript developer can do nice things with it.

In this blog I will have a look at 5 of the 6 main building blocks you build a VBCS application with:

        1. REST service connections
        2. Flows and Pages
        3. Variables
        4. Action Chains
        5. UI Components

        Putting all of this in one blog is a lot, so this is a lengthy one. The final result can be found here.

With VBCS you can create a lot using Drag and Drop! But in the end you have to be aware that it is all JavaScript, HTML5 and CSS you are creating. And it is all built on JET!

        Before we can start with these concepts, I create a New Application.

        Rest Service Connection

I start with creating some endpoints for a publicly available REST API, https://docs.openaq.org/, an API with air quality measurements.

        This API contains several endpoints, the graph I am going to create uses the measurements API. As I am from the Netherlands, I use data from the Netherlands in this blog.

        First I create the Service Connection based on the Cities endpoint.

        1. Create Service Connection by Service Endpoint

        2. Specify “URL”: https://api.openaq.org/v1/cities

        3. Service Base URL is populated with https://api.openaq.org/v1/, give Service proper name

        4. For Request > URL Parameters Add the Static Parameter “country” with value “NL”

        5. Test the endpoint and Copy to Response Body

        6. Create the endpoint
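Before wiring the endpoints into VBCS, it can help to look at the raw data with curl. The parameter names below come from the openaq documentation; the city and parameter values are just the examples I will end up using:

curl -s "https://api.openaq.org/v1/cities?country=NL"
curl -s "https://api.openaq.org/v1/measurements?country=NL&city=Amsterdam&parameter=pm25&limit=100"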

        Create flow and first Page

        I will create a Web Application that contains the main Flow and main-start Page.

        On the main-start Page I drop a Table UI Component.

I have marked in yellow the Table UI component, the Collapse/Expand Property Inspector and the Quick start button.

        For this Table we Add Data, which is available on the Quick start Tab. Select the cities Service Endpoint.

As the country is hardcoded, I won’t display it in the table. I reorder the columns with drag and drop. I select City as the Primary Key.

        In the layout editor the Data from the Service endpoint is displayed. In the code you will see that an Oracle JET oj-table component is used.

        You can also run the Application:

        Next we add a Page for the Line-graph and drag an Oracle JET Line Chart on it.

        Variables and Types

        The responses from a Rest endpoints are stored in Variables, UI components and Action Chains use Variables.

When you have a look at the Line Chart code, it contains two arrays: Groups and Series. The Groups array is an array of strings ( ['Group A','Group B'] ), the Series array is an array of objects ( [{name:'Series 1',items:[41,34]},{name:'Series 2',items:[55,30]},…] ). Each Series object consists of a string (name) and a numeric array (items).

        For the line Graph I create two Types in the main Flow.

        1. a Type with a structure that can hold the result data of a REST call
        2. a Type with a structure that can be mapped to a JET line graph

        A getCitiesResponse type was already created by VBCS for the response of the REST call. This is the final result I want:

        Action Chain

        I create an Action chain that will do these steps:

        • Show notification with City selected
        • Call measurement REST endpoint and map to linegraphDatatype
        • Map linegraphDatatype to linegraphType using JavaScript
        • Navigate to page line-graph page

When I open the Actions tab for the main-start Page, I see that an Action Chain was already created. This Action Chain saves the Primary Key of the row selected in my city Table.

I now create the mentioned Action Chain. In this Action Chain I create a variable and assign the page variable holding the selected city as its default.

         

        Next I drop a Fire Notification Action on the +-sign below  Start.

I set the Display Mode to transient and specify the Message as

{{ "AirQuality data for " + $chain.variables.selectedCity + " is being retrieved." }}

        The measurement REST endpoint is called with a Call Rest Endpoint Action. selectedCity from the Action Chain is mapped to the city parameter of this Action.

The result of this Action has to be mapped to the linegraphData variable using an Assign Variables Action.

I need to convert this linegraphData array to my linegraph object. For this I will call a piece of JavaScript. First I create a function in the main Flow JavaScript.

The complete piece of JavaScript can be found in the Application Export that is attached to this blog.

This JavaScript function can be called with a Call Module Function Action; VBCS recognizes the function added in the JavaScript. The linegraphData variable needs to be mapped to the chartData parameter.

The result from the JavaScript function needs to be mapped to the linegraph variable using an Assign Variables Action.

        Finally I navigate to the main-graph Page using a Navigate Action.

A quick way to call this Action Chain (and get the linegraph) is by calling it from the already existing Action Chain that handles the selection of a row in the Cities table. I add a Call Action Chain Action.

        UI Components

        The linegraph variable is now ready to be used by our graph. In the Data Tab of the Chart we set the Groups and Series.

        To get a readable layout for the date-axis, I enable the x-axis as time-axis-type:

        Everything together

        The final graph for Amsterdam:

        Amsterdam pm25 graph

        The VBCS export can be downloaded here

        The post 5 main building blocks of the new Visual Builder Cloud Service appeared first on AMIS Oracle and Java Blog.

        Error in date comparison for partitioned table

        Tom Kyte - Fri, 2018-05-25 02:06
        Hi Tom, I am seeing a strange issue with a query which queries data from a partitioned table having sub-partitions. Please see table, query and the error. Can you please help, what could be the reason for this error? <code>CREATE TABLE trans_de...
        Categories: DBA Blogs

        Procedure Performance Number vs Pls_integer

        Tom Kyte - Fri, 2018-05-25 02:06
        I have task to improve performance in some of the packages and procedures in our application. We have 1 package and it has subprograms around 15-20 procedures. Below are my clarifications required. Iam making changes to datatypes from NUMBER to...
        Categories: DBA Blogs

        Windocks – a different way to use SQL Server on Docker

        Yann Neuhaus - Thu, 2018-05-24 13:14

I have spent some time writing blog posts about SQL Server on Docker over the last few months, and you likely noticed they mainly concerned SQL Server on Linux. But what about SQL Server on Docker and Windows? If you take a look at the Docker images provided by Microsoft, only the 2017 version is available and it is only intended for development and testing use. Versions prior to 2017 do not seem to be supported so far (but I don’t know the Microsoft plan on this topic), and if you want to use containers with older SQL Server versions you have to start from scratch by yourself. This is not a hard task, but I have in mind a discussion with one of my customers about the maintenance of SQL Server images prior to 2017: he told me he didn’t allocate resources to create and maintain such images by himself.

In addition, I recently presented at different events, including our dbi services internal event for customers, the internal project we are managing around our DMK maintenance module tool for SQL Server. To cut a long story short, this tool aims to provide smart maintenance capabilities for customer databases, including backups, indexes, statistics etc.

Over time, the code grew and we had to support different versions of SQL Server, from 2008 to SQL Server 2017. At the same time, the number of scenarios we have to manage and test regarding the new features shipped with service packs and cumulative updates increased drastically. So we decided to move to docker containers for several obvious reasons:

        • Containers are easy to provision for unit testing
        • Lower footprint on our local machine
        • We can share easily SQL images between members of our “development” team in a private registry
• We will be able to integrate containers into a potential “future” CI pipeline

In our context, each developer has a local docker engine installed on a Windows 10 Pro laptop, but for medium/large companies, internal rules may prevent installing such a product on each individual developer laptop. Therefore, they likely prefer to provide a dedicated environment for developers that meets the internal rules while keeping the focus on flexibility.

In such a shared infrastructure, a main concern will probably be disk space because of the potential size of the testing database(s) and the number of containers provisioned at the same time. Let’s say that several developers would like to provision one SQL Server instance each, with a testing database attached to it that comes from production through a sanitized database pipeline. The total size of this database is big enough to warn about a potential disk space issue, because each instantiated container will include its own image size (roughly 10GB with SQL Server containers on Windows) + the size of the testing database – let’s say more than 50 GB, and likely more in usual cases.

        If you’re in one of the aforementioned scenarios, Windocks may be a good alternative scenario.

During this month I had the opportunity to test the product in the context of our internal project, and after some experimentation I admit I was pleasantly surprised by some built-in capabilities such as:

        • Windows authentication mode supported
        • SQL Server database cloning capabilities
        • The ability to use both Docker CLI and Web UI (even if I’m in favor of Docker CLI)
        • The ability to refresh a cloned database image from a differential backup

        Before to dig further into the above features let’s just take few seconds to see the Windocks architecture design

        blog 133 - 1 - windocks - architecture

We may notice some differences here. With a traditional Docker architecture, containers run on a shared operating system kernel (either Windows or Linux), whereas Windocks is strictly an application-construct-oriented solution and requires installing a SQL Server instance on the host as the base instance for SQL Server containers. This is a fundamental change that provides some interesting advantages:

        • We don’t have to rebuild the containers to get OS & Framework patches through base image updates because containers rely only on the SQL Server base instance
        • We don’t break compatibility with some Windows storage functionalities as VSS and SQL Writer as well if you rely on them with third-party backup tools
• We benefit from the underlying security and configuration of the SQL Server base instance, meaning we may use Windows authentication for instance

Let’s now talk about one interesting feature shipped with Windocks: cloned databases. Here is a picture (from Windocks) that shows a high-level workflow of using SQL Server containers with cloned databases.

        blog 133 - 2 - windocks - cloned db architecture

The starting point is a full database backup or a snapshot, and Windocks will generate the corresponding VHD parent image of the backup media. Then each SQL Server container generated will use a writable clone of this parent image, drastically reducing the disk footprint of each container (at least when the SQL Server container is generated). This is a common workflow that customers may implement for unit testing or for a CI/CD pipeline to refresh development environments. The production database may be big and in this context the benefit of using cloned databases becomes obvious. Another option would be to rely on mounted snapshot-based volumes from the storage provider, but at this stage I didn’t test it. Maybe next time!

To give a better picture of what is possible with cloned databases, let me show you one way to implement them in the context of our DMK maintenance tool project. The development workflow of the project is as follows:

        blog 133 - 3 - windocks - dev workflow

We use SSDT and GitLab to manage our DMK maintenance tool sources and we perform unit testing by provisioning one or several SQL Server containers regarding the target version we want to validate at that moment. Furthermore, we developed testing scenarios based on the tSQLt framework that we run after provisioning the SQL Server containers. With such an architecture, we initially have to maintain images of different SQL Server versions and each time we want to create a container we have to attach one copy of the customized AdventureWorks_dbi database. Let’s say we want to work with 4 containers (SQL Server 2016) at a time. We must ensure we have sufficient disk space for 4 copies of this database (4 x 5GB = 20GB) + the space required for each container (4 x 10GB = 40GB) = 60GB. The dbi_tools database is intended to stay very small (512MB up to 1GB) compared to the other testing components, which is why I didn’t include it in the math.

        Let’s now apply Windocks with the above context and the new architecture becomes as follows:

        blog 133 - 4 - windocks - dev workflow

In this second scenario, we may include both the AdventureWorks_dbi and dbi_tools databases (including tSQLt framework objects) in a VHD parent image. For the demo, I used a SQL Server 2016 instance installed on the host that will be controlled by the Windocks service during the container generation.

        As I said previously we may rely on the configuration and the security of the base instance. Thus, I configured my security based on both Windows Authentication (dbi-services\clustadmin domain account) for SQL Server sysadmins and SQL authentication for development purpose (windock user).

In terms of configuration, the tSQLt framework requires enabling CLR at the server level, so I changed the configuration directly on the SQL Server base instance to allow all my containers to inherit this configuration change.

        SELECT 
        	[name],
        	principal_id,
        	[type_desc]
        FROM 
        	sys.server_principals 
        WHERE 
        	[type] IN ('U', 'S')
        	AND [name] NOT LIKE '##MS_%##'
        	AND [name] NOT LIKE 'NT SERVICE%'

         

        blog 133 - 41- windocks - base instance security

        Here the content of my docker file.

        FROM mssql-2016
        SETUPCLONING FULL AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.bak
        SETUPCLONING FULL dbi_tools \\Win2012sql16ls\windock\dbi_tools.bak

         

        You may notice some new commands here:

        • FROM mssql-2016 indicates we will use the SQL Server 2016 instance as base image.
• SETUPCLONING FULL indicates that both the AdventureWorks_dbi and dbi_tools databases will be included in the generated VHD parent image, from full backups located on a remote network path

Let’s go ahead and generate the corresponding SQL Server cloned docker image, tagging it with the -t flag

        docker build -t 2016withdmk C:\DMK\WithClone\BasedImage\

         

        blog 133 - 5- windocks - cloned database image

The process may take some time to generate the VHD parent image, depending on the database sizes and the location (local fast disk, network share bandwidth, etc.)

        blog 133 - 6- windocks - cloned db size

        As expected, because the VHD parent image is a full byte copy of the data, the size is basically the sum of both AdventureWorks_dbi and dbi_tools database sizes.

Compared to a traditional approach, the provisioning of the associated containers will be faster irrespective of the database size. Let’s create 2 containers from the image generated previously with the following docker commands:

        PS C:\DMK\WithClone> docker run --name sql20161 -d 2016withdmk
        PS C:\DMK\WithClone> docker run --name sql20162 -d 2016withdmk

         

Note that by default, Windocks will choose a random port within a range configured in node.conf (START_PORT / PORTS_PER_USER), unless you override the default behavior using the -p parameter.

        blog 133 - 7- windocks - create containers

        Let’s get a picture of the existing containers. My 2 containers have been generated correctly from the 2016withdmk base image.

        PS C:\DMK\WithClone> docker ps
        CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
        89344d99758b        2016withdmk         ""                  5 minutes ago       Started             1433/
        4dfd1a198626        2016withdmk         ""                  4 minutes ago       Started             1434/

         

        Let’s take another look at the storage side:

        blog 133 - 8 - windocks - cloned databases diff

The interesting point here is that the two differential disks created during the creation of my 2 previous containers are very small (~70MB per container).

        Let’s try to connect from SSMS to the new fresh containers:

        blog 133 - 9 - windocks - ssms connection

        It works and did you notice I was connected with my domain account? :)

The last topic I wanted to share with you is the Windocks capability to update a base image from differential backups. In a real context, you may need to frequently refresh your environment with recent data for developers, and depending on your full backup size it might be advantageous to deal with differential backups.

Let’s consider this process with my internal project environment. During the development process we already had to scale the database schema of our testing database with the features shipped with new versions, service packs or cumulative updates over time. Let’s say we have to add a new dbo.bigTransactionHistory_cci table to test columnstore index scenarios. We first add the concerned table. Then we will perform a differential backup and finally we will update the 2016withdmk base image with it. Obviously in the context of my demo, the database size is likely not big enough to take full advantage of this feature, but I trust you to draw a comparison with a more realistic scenario.
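As a sketch, the differential backups themselves could be taken like this – I am assuming here that the instance on Win2012sql16ls hosts the databases and that the share is writable, so adjust the server name and paths to your environment:

sqlcmd -S Win2012sql16ls -E -Q "BACKUP DATABASE AdventureWorks_dbi TO DISK = N'\\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF' WITH DIFFERENTIAL, INIT"
sqlcmd -S Win2012sql16ls -E -Q "BACKUP DATABASE dbi_tools TO DISK = N'\\Win2012sql16ls\windock\dbi_tools.DIFF' WITH DIFFERENTIAL, INIT"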

The image I want to update is named 2016withdmk. Note that the mssql-xxx images are in fact images of the SQL Server base instances installed on my host server.

        PS C:\DMK\WithClone> docker images
        REPOSITORY            TAG                 IMAGE ID            CREATED                  VIRTUAL SIZE
        2016withdmk           none                b3a249ba-2cf        Less than a second ago   0 B
        agitated_heisenberg   none                bbd0ce26-4bb        Less than a second ago   0 B
        dotnet-4.5            none                dotnet-4.5          3 years ago              0 B
        windows               none                windows             3 years ago              0 B
        mssql-2008r2          none                mssql-2008r2        3 years ago              0 B
        mssql-2016            none                mssql-2016          3 years ago              0 B
        mssql-2014            none                mssql-2014          3 years ago              0 B

         

        My new docker file content to update the 2016withdmk base image is as follows.

        FROM 2016withdmk
        
        SETUPCLONING DIFF AdventureWorks_dbi \\Win2012sql16ls\windock\AdventureWorks_dbi_2008.DIFF
        SETUPCLONING DIFF dbi_tools \\Win2012sql16ls\windock\dbi_tools.DIFF

         

I used the differential backup of the AdventureWorks_dbi database with the SETUPCLONING DIFF command.

Let’s start updating the 2016withdmk base image with the following docker command (I tagged my new image with a 1.2 suffix):

        docker build -t 2016withdmk1.2 C:\DMK\WithClone\DiffImage\

         

        blog 133 - 10 - windocks - cloned database diff backup

Although using a differential backup may allow for substantial productivity gains, keep in mind that updating a VHD parent image requires creating an additional VHD parent image that becomes another full byte copy of the environment, as shown below:

        blog 133 - 11- windocks - cloned db size diff

After running the same docker run command shown before, let’s get a picture of the containers running on my server. The container id 789ce49562d0 is the new fresh container with updated data (2016withdmk1.2 image).

        PS C:\DMK\WithClone> docker ps
        CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS
        89344d99758b        2016withdmk         ""                  About an hour ago   Started             1433/
        4dfd1a198626        2016withdmk         ""                  About an hour ago   Started             1434/
        789ce49562d0        2016withdmk1.2      ""                  2 minutes ago       Started             1436/

         

I can retrieve the new dbo.bigTransactionHistory_cci table after connecting to the corresponding SQL Server instance. I’m now able to update and execute tSQLt scenarios to test columnstore index maintenance!

        blog 133 - 12- windocks - ssms connection 2

         

In this blog post we’ve only scratched the surface of the capabilities and possibilities provided by Windocks. There are other interesting features such as encrypted passwords in the docker file (as we may use with Docker Swarm for instance), the support of TDE databases and, lastly, the ability to manage SQL Server on Linux. I will probably blog about them in the near future. Stay tuned!

Cet article Windocks – a different way to use SQL Server on Docker est apparu en premier sur Blog dbi services.

        Pages

        Subscribe to Oracle FAQ aggregator