Feed aggregator

Lightning Web Components - Events, and listening to your children

Rob Baillie - Thu, 2018-12-20 09:13

Another fantastic inclusion in Lightning Web Components is the completely reworked events model.

Decomposing functionality and building smaller, more generic building blocks has become much simpler and much more intuitive.

In the world of Lightning Components I never got on with events. The idea of adding a Salesforce configuration for an event, registering events on the dispatcher template, and then registering listeners on the receiving template seemed really cumbersome. Added onto that were the differences in syntax between component and application events. They just felt really unnatural.

In Lightning Web Components all this has become significantly simpler, and much more in-keeping with the standard HTML / Javascript model.

We've already seen how we can use @api to allow state to be passed into our components. Now we're talking about notifying our parents when events occur.

I could go into deep detail on how this is done, but the documentation on this area is spot on, and there's no need to repeat it - follow the guide in the docs and you can't go far wrong. It's particularly well written and introduces the concept brilliantly.

That said, there has to be something to say, right?

Well, yes, and before I go into some of the less obvious limitations, let's just present a simple example:

  • In the child component, we create and dispatch an event.
  • When we include the child component, we specify the handler for the event.

Something along the lines of:

Child component's Javascript

import { LightningElement, track } from 'lwc';

export default class ChildComponent extends LightningElement {

    @track value;

    // Called from the onchange handler on an input
    handleValueChanged( event ) {
        this.value = event.target.value;
        this.dispatchEvent( new CustomEvent( 'valuechanged', { detail: this.value } ) );
    }
}
Parent component's template

<c-child-component onvaluechanged={handleOnValueChanged}></c-child-component>
Parent component's Javascript

import { LightningElement, track } from 'lwc';

export default class ParentComponent extends LightningElement {

    @track updatedValue;

    handleOnValueChanged( event ) {
        this.updatedValue = event.detail;
    }
}

OK. So how simple is that? No Salesforce configuration to create, nice simple syntax, event handlers defined in the template - exactly the same way you would do it if it was a standard HTML tag.

Without wanting to repeat the documentation from Salesforce, it's worth calling out a few important points:

  • dispatchEvent and CustomEvent are standard Javascript.
  • When you include the child component, you specify the handler for the event in the template.
    • The event name should not start with 'on'; the attribute you assign the handler to is the event name with 'on' prefixed to it.
    • The fact we can specify the handler as 'onvaluechanged' when we create the tag is LWC specific, and for very good reason (explained later). You cannot do this with standard Web Components.
  • We can pass data from the child component in the event, by passing an object as the second parameter.
    • Note that the data can only be passed in the 'detail' property. If you add data to any other property you may accidentally overwrite a standard property, and if you use a non-standard property it won't be visible in the event anyway - you put data into 'detail', and that's all you have. Live with it.
    • You can pass an object in 'detail', but if you do you should construct it there and then (as in the sketch below). But you probably shouldn't.
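
To make the last point concrete, here's a minimal sketch of dispatching an event whose detail is an object constructed at the point of dispatch (the 'source' field is hypothetical, purely for illustration):

this.dispatchEvent( new CustomEvent( 'valuechanged', {
    // build the detail object inline, at the point of dispatch
    detail : {
        value  : this.value,
        source : 'child-component'
    }
} ) );

The receiving handler would then read event.detail.value rather than event.detail.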

OK, that's all well and good - but where are the limitations?

Well, the main one I've found was a real surprise to me - to the point that I'm worried that I've misunderstood something.

In the standard Javascript events model - all events propagate to all levels.

For example, if I have the following HTML:


<div id="grandparent" onchange="handleChange();">
    <div id="parent">
        <div id="child">
            <input onchange="handleChange();"/>
        </div>
    </div>
</div>

When the value of the input changes, the onchange event is handled by both the onchange handler on the input and the one on the 'grandparent' div. Events propagate through the whole DOM, unless a handler stops it by calling 'stopPropagation' against the event.

It's generally recognised that this is a good thing, and that events should not be stopped unless there's very good reason.

However, as far as I can see, this is not true when you cross boundaries between LWCs.

For example, if I had the above example for a child component, and included it in a parent as such:

Parent component's template

<c-child-component onvaluechanged={handleOnValueChanged}></c-child-component>

And then included that in the grandparent as such:

Grandparent component's template

<c-parent-component onvaluechanged={handleOnValueChanged}></c-parent-component>

Assuming that the parent component does not raise a 'valuechanged' event of its own, the 'onvaluechanged' handler on the grandparent component will never get called.

It seems that you can only handle a component's event in its parent's scope.

Note: these are actually slightly different scenarios I'm explaining, but I think it's worthwhile in order to illustrate the point. Also, there is a 'bubbles' property that you can set on the CustomEvent when you create it, although I didn't see a change in behaviour when I did that - a sketch of what I tried follows.
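
For reference, the standard shadow DOM rule is that an event crosses shadow boundaries only when both 'bubbles' and 'composed' are set to true. This is the shape of the dispatch I tried:

this.dispatchEvent( new CustomEvent( 'valuechanged', {
    detail   : this.value,
    bubbles  : true,    // bubble up through the DOM
    composed : true     // allow the event to cross shadow boundaries
} ) );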

As I've said, I'm surprised by this behaviour, so am happy to be told I'm wrong, and learn where my mistake is.

Adding an event handler via Javascript

So what of the 'on' behaviour? Why is this such a cool addition?

Well, that's best explained by illustrating what we would need to do if this wasn't available to us.

Let's go back to our child component

Child component's Javascript

import { LightningElement, track } from 'lwc';

export default class ChildComponent extends LightningElement {

    @track value;

    // Called from the onchange handler on an input
    handleValueChanged( event ) {
        this.value = event.target.value;
        this.dispatchEvent( new CustomEvent( 'valuechanged', { detail: this.value } ) );
    }
}

It dispatches a 'valuechanged' event that we can handle in a parent component.

We include the child component with a simple node:

Parent component's template


<c-child-component></c-child-component>

Note we are no longer setting onvaluechanged because, in our hypothetical scenario, this is not possible.

Now, in order to handle the event, we need to attach a handler to the component in our parent component's Javascript.

First we need to find it, so we set a property on the component that we can use to retrieve it. You may default to setting an 'id', but it turns out that Salesforce will adjust the ids on nodes, so we can't rely on that. Instead, we decide to set a class:

Parent component's template

<c-child-component class="child"></c-child-component>

Now, the parent component's Javascript. We need to hook into one of the lifecycle callbacks in order to attach our handler.

You can see the docs for those functions here.

From there we find:

  • We can't use the constructor, as the component hasn't been added to the DOM yet.
  • We can't use the connectedCallback, as the component's children haven't been rendered yet.
  • We can use the renderedCallback, but this gets called multiple times - whenever any reactive properties change, so we need to protect against multiple adds.

So, maybe we can do this:


allocatedEventListeners = false;

renderedCallback() {
    if ( ! this.allocatedEventListeners ) {
        this.template.querySelector( '.child' )
            .addEventListener( 'valuechanged', this.handleOnValueChanged );
        this.allocatedEventListeners = true;
    }
}

That is a bit clunky, but it looks like it should work - we attach the 'handleOnValueChanged' function as the listener for the 'valuechanged' event.

Unfortunately, it doesn't, because of a fundamental behaviour of Javascript - the event handler doesn't have access to the 'this' you expect. And if you're not an experienced Javascript developer then that's when things start to get a bit crazy (actually, even if you ARE an experienced Javascript developer, I suspect it STILL gets a little messed up).

Basically, 'this' isn’t guaranteed to be what you think it is. If you write code that behaves in a procedural way, then it will generally be the object in which the method is defined. But as soon as you add in callbacks, Promises and asynchronous behaviour, it isn't guaranteed to be.

'this' can be simply the context in which the function runs, rather than the object or class in which the function is defined. This is an incredibly powerful aspect of Javascript that is very difficult to get to grips with unless you’re used to seeing it.
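
A minimal, plain Javascript illustration of that context sensitivity:

const counter = {
    count : 0,
    increment() {
        this.count++;    // 'this' depends entirely on how increment is called
    }
};

counter.increment();                  // works - 'this' is counter
const detached = counter.increment;
detached();                           // broken - 'this' is undefined (in strict mode), not counter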

In Lightning Components you can see the effect of this in code such as Apex callouts in helpers where you end up with:


let self = this;

In our particular case, you could use an alternative - the fat arrow notation for defining functions.


event => { this.handleOnValueChanged( event ) }

Which would transpile to (or is synonymous with) this:


var _this = this;
( function( event ) { _this.handleOnValueChanged( event ); } );

Look familiar?

The resulting code for adding the event handler could end up like this:


allocatedEventListeners = false;

renderedCallback() {
    if ( ! this.allocatedEventListeners ) {
        this.template.querySelector( '.child' )
            .addEventListener( 'valuechanged',
                ( event ) => { this.handleOnValueChanged( event ); } );
        this.allocatedEventListeners = true;
    }
}

In the end, this would work. But no-one would suggest it was elegant. And in order to get it working we had to brush up against some advanced behaviour of 'this'. Now, I admit that people are going to have to learn how 'this' and its binding behaves in order to write reliable Lightning Web Components - but just to add an event handler?

The reality is that we don't have to think about it - Salesforce have given us a very usable shorthand for it, and we should be extremely grateful for it!

Transitive Closure

Jonathan Lewis - Thu, 2018-12-20 07:19

This is a follow-up to a note I wrote nearly 12 years ago, looking at the problems of transitive closure (or absence thereof) from the opposite direction. Transitive closure gives the optimizer one way of generating new predicates from the predicates you supply in your where clause (or, in some cases, your constraints); but it’s a mechanism with some limitations. Consider the following pairs of predicates:


    t1.col1 = t2.col2
and t2.col2 = t3.col3

    t1.col1 = t2.col2
and t2.col2 = 'X'

A person can see that the first pair of predicates allows us to infer that “t1.col1 = t3.col3” and the second pair of predicates allows us to infer that “t1.col1 = ‘X'”. The optimizer is coded only to recognize the second inference. This has an important side effect that can have a dramatic impact on performance in a way that’s far more likely to appear if your SQL is generated by code. Consider this sample data set (reproduced from the 2006 article):

rem
rem     Script:         transitive_loop.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2006
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem

create table t1 
as
select
        mod(rownum,100) col1,
        rpad('x',200)   v1
from
        all_objects
where   
        rownum <= 2000
;

create table t2
as
select
        mod(rownum,100) col2,
        rpad('x',200)   v2
from
        all_objects
where   
        rownum <= 2000
;

create table t3
as
select
        mod(rownum,100) col3,
        rpad('x',200)   v3
from
        all_objects
where   
        rownum <= 2000
;

-- gather stats if necessary

set autotrace traceonly explain

prompt  =========================
prompt  Baseline - two hash joins
prompt  =========================

select 
        t1.*, t2.*, t3.*
from
        t1, t2, t3
where
        t2.col2 = t1.col1
and     t3.col3 = t2.col2
;

prompt  ================================================
prompt  Force mismatch between predicates and join order
prompt  ================================================

select 
        /*+
                leading(t1 t3 t2)
        */
        t1.*, t2.*, t3.*
from
        t1, t2, t3
where
        t2.col2 = t1.col1
and     t3.col3 = t2.col2
;

The first query simply joins the tables in the from clause order on a column we know will have 20 rows for each distinct value, so the result sets will grow from 2,000 rows to 40,000 rows to 800,000 rows. Looking at the second query we would like to think that when we force Oracle to use the join order t1 -> t3 -> t2 it would be able to use the existing predicates to generate the predicate “t3.col3 = t1.col1” and therefore be able to do the same amount of work as the first query (and, perhaps, manage to produce the same final cardinality estimate).

Here are the two plans, taken from an instance of 12.2.0.1:


=========================
Baseline - two hash joins
=========================

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |   800K|   466M|    48  (38)| 00:00:01 |
|*  1 |  HASH JOIN          |      |   800K|   466M|    48  (38)| 00:00:01 |
|   2 |   TABLE ACCESS FULL | T3   |  2000 |   398K|    10   (0)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 40000 |    15M|    21   (5)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| T1   |  2000 |   398K|    10   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T2   |  2000 |   398K|    10   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."COL3"="T2"."COL2")
   3 - access("T2"."COL2"="T1"."COL1")

================================================
Force mismatch between predicates and join order
================================================

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |   800K|   466M| 16926   (3)| 00:00:01 |
|*  1 |  HASH JOIN            |      |   800K|   466M| 16926   (3)| 00:00:01 |
|   2 |   TABLE ACCESS FULL   | T2   |  2000 |   398K|    10   (0)| 00:00:01 |
|   3 |   MERGE JOIN CARTESIAN|      |  4000K|  1556M| 16835   (2)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | T1   |  2000 |   398K|    10   (0)| 00:00:01 |
|   5 |    BUFFER SORT        |      |  2000 |   398K| 16825   (2)| 00:00:01 |
|   6 |     TABLE ACCESS FULL | T3   |  2000 |   398K|     8   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."COL2"="T1"."COL1" AND "T3"."COL3"="T2"."COL2")

As you can see, there’s a dramatic difference between the two plans, and a huge difference in cost (though the predicted time for both is still no more than 1 second).

The first plan, where we leave Oracle to choose the join order, builds an in-memory hash table from t3, then joins t1 to t2 with a hash table and uses the result to join to t3 by probing the in-memory hash table.

The second plan, where we force Oracle to use a join order that (I am pretending) we believe to be a better join order results in Oracle doing a Cartesian merge join between t1 and t3 that explodes the intermediate result set up to 4 million rows (and the optimizer’s estimate is correct) before eliminating a huge amount of redundant data.

As far as performance is concerned, the first query took 0.81 seconds to generate its result set, the second query took 8.81 seconds. In both cases CPU time was close to 100% of the total time.

As a follow-up demo I added the extra predicate “t3.col3 = t1.col1” to the second query, allowing the optimizer to use a hash join with the join order t1 -> t3 -> t2, and this brought the run time back down (with a slight increase due to the extra predicate check on the second join).
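
For reference, that follow-up query is just the hinted query from above with the extra predicate as its final line:

select
        /*+
                leading(t1 t3 t2)
        */
        t1.*, t2.*, t3.*
from
        t1, t2, t3
where
        t2.col2 = t1.col1
and     t3.col3 = t2.col2
and     t3.col3 = t1.col1
;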

Summary

The choice of columns in join predicates may stop Oracle from choosing the best join order because it is not able to use transitive closure to generate all the extra predicates that the human eye can see. If you are using programs to generate SQL rather than writing SQL by hand you are more likely to see this limitation resulting in some execution plans being less efficient than they could be.

Oracle Data Cloud Launches Yield Intelligence to Help Publishers Better Monetize Inventory with Brand Safety, Fraud and Viewability Controls

Oracle Press Releases - Thu, 2018-12-20 07:00
Press Release
Oracle Data Cloud Launches Yield Intelligence to Help Publishers Better Monetize Inventory with Brand Safety, Fraud and Viewability Controls Bloomberg Media Group, Bonnier Corp., Vox Media Among Early Adopters

Redwood Shores, Calif.—Dec 20, 2018

Oracle Data Cloud today announced Moat Yield Intelligence, a new feature of Oracle’s Moat Analytics helping publishers ensure their advertising campaigns are viewable, avoid invalid traffic, and are delivered in a brand safe environment. Moat Yield Intelligence capabilities have been extensively tested through early implementation with key publishers, including Bloomberg Media Group, Bonnier Corp., and Vox Media. Using this new functionality, publishers have been able to protect media spend by reducing waste, optimizing yield, and gaining greater control of their inventory.

“Publishers want to be able to demonstrate to advertisers that their inventory is viewable, fraud-free, and brand safe,” said Dave Constantino, Head of Publisher Development, Oracle Data Cloud. “Moat Yield Intelligence functionality enables publishers to protect their advertisers’ media spend and can help to improve campaign performance while strengthening trust between publishers and advertisers to build proven and successful long-term relationships.”

The Moat Yield Intelligence feature provides a suite of targeting metrics that assist direct and programmatic sales efforts, including support for programmatic direct, sales via private marketplaces, and market leading SSPs:

  • Viewability thresholds that align with agency standards, the ability to customize those thresholds, and unprecedented precision through ad slot level targeting.
  • Robust fraud interdiction using Moat’s leading IVT measurement and analysis technology, including learnings from Oracle’s other anti-fraud acquisitions, such as Dyn, a DNS provider that helps detect abnormal traffic patterns on the web, and ZenEdge, a leading cybersecurity suite to detect botnets.
  • Contextual brand safety using technology from Oracle’s Grapeshot acquisition that analyzes and interprets the actual content of each page, not just URL-level keywords.

“Managing yield effectively requires sophisticated data to maximize efficiency, digital revenues and ultimately drive stronger outcomes for advertisers,” said Derek Gatts, Global Head of Advertising Operations, Bloomberg Media Group. “Moat Yield Intelligence helps us reach our goal of maximizing the value of each impression, while also ensuring our partners run in a brand safe environment that is optimal for their unique needs.”

“With a single implementation, Moat Yield Intelligence powers targeting against the full suite of media quality metrics that are crucial to our customer success," said Sean Holzman, Chief Digital Revenue Officer, Bonnier Corp. "In particular, market leading solutions for SIVT detection and brand safety avoidance enable us to stay at the forefront and offer a leading media product for our advertisers.”

"As the flight to quality accelerates for marketers, we can't keep relying on standard metrics that are optimized only for scale, like impressions and clicks," says Ryan Pauley, Chief Revenue Officer, Vox Media. "Moat Yield Intelligence is a tool that allows us to offer deeper intelligence, performance, and optimization of our inventory ultimately resulting in increased return for our marketing partners."

Contact Info
Shasta Smith
Oracle
+1.650.506.8673
shasta.smith@oracle.com
About Oracle Data Cloud

Oracle Data Cloud helps marketers use data to capture consumer attention and drive results. Used by 199 of the 200 largest advertisers, our Audience, Context and Measurement solutions extend across the top media platforms and a global footprint of more than 100 countries. We give marketers the data and tools needed for every stage of the marketing journey, from audience planning to pre-bid brand safety, contextual relevance, viewability confirmation, fraud protection, and ROI measurement. Oracle Data Cloud combines the leading technologies and talent from Oracle’s acquisitions of AddThis, BlueKai, Crosswise, Datalogix, Grapeshot, and Moat.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Shasta Smith

  • +1.650.506.8673

Unique constraint on function based index

Tom Kyte - Thu, 2018-12-20 04:54
Hi TOM, I try to create an unique constraint using FBI, it fails with "ORA-00904: : invalid identifier" error. It looks like "alter table" doesn't accept functions in expressions for unique constraints - <code> create table ctest (a integer, b i...
Categories: DBA Blogs

Scheduling backup

Tom Kyte - Thu, 2018-12-20 04:54
Schedule "backup Oracle database" as follows: 1- Saturday backup incremental 0 at 22:15 2- Rest of the week incremental 1 at 23:00
Categories: DBA Blogs

Lightning Web Components - Unit Testing immediate impression.

Rob Baillie - Wed, 2018-12-19 12:37

One of the things I’m most excited about in Lightning Web Components is the ability to write Unit Tests for the components.

The unit testing framework of choice is Jest, and it looks well suited. Not least it’s the framework of choice for Facebook, and describes itself as well suited to React apps. Why should that matter? Well, React is a 1-way bound Javascript framework - and so is LWC.

So I was looking forward to getting into Unit Testing, following the documentation for testing wired components.

Unfortunately, the documentation didn’t work out for me, and it looked like there are a couple of mistakes in there:

The example ‘getRecord.json’ file isn’t valid JSON.

In order for the file to work, the field names need to be surrounded in double quotes.

I.E. Instead of:


// getRecord.json
{
    fields: {
        Name: {
            value: "DYNAMO X1"
        }
    }
}

The file should be:


// getRecord.json
{
    "fields": {
        "Name": {
            "value": "DYNAMO X1"
        }
    }
}

Interrogating the ‘element’ for its state does not seem to work.

Instead, I found that I needed to get data from the document object.

I.E. The following does not work:


// Resolve a promise to wait for a rerender of the new content.
return Promise.resolve().then(() => {
    const content = element.querySelector('.content');
    expect(content.textContent).toBe("Name:DYNAMO X1");
});

But the following does:


// Resolve a promise to wait for a rerender of the new content.
return Promise.resolve().then(() => {
    const content = document.body.querySelector('.content');
    expect(content.textContent).toBe("Name:DYNAMO X1");
});

Mocking doesn't seem to work for Apex, only LDS

From the quick test I did, I could get the mocking framework to work for the Lightning Data Service, once my implementation of the example was tweaked. However, I couldn't get it to work with an imported Apex method.

I didn't see a reference to this being missing, though I guess I may have just missed that, and I know that the recommendation is to use LDS whenever you can. I just worry that there's a really important use case out there - it seems natural to me that components that use custom Apex are likely to be more complex than ones that use LDS. And with that in mind, it feels like missing Apex will be a big loss to the testing framework.
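
For the record, the shape of the test I was attempting was along these lines - the controller and method names are hypothetical, and I'm not claiming this is the supported pattern, it's simply the sketch that didn't work for me:

// exampleComponent.test.js
import getRecords from '@salesforce/apex/ExampleController.getRecords';

// Register the Apex import as a virtual module so Jest doesn't try to resolve it
jest.mock(
    '@salesforce/apex/ExampleController.getRecords',
    () => ({ default: jest.fn() }),
    { virtual: true }
);

describe('c-example-component', () => {
    it('renders data returned by the Apex method', () => {
        getRecords.mockResolvedValue([ { Name: 'DYNAMO X1' } ]);
        // ...create the element here, then assert against the rendered output
    });
});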

Hopefully the last part is already known about, is a bit of missing documentation, or is simply that I misunderstood something.

Whatever the case, I plan on doing more investigations into the Unit Testing capabilities, and will obviously blog my findings - but I have to admit that I found the initial experience a little disappointing after the extremely polished experience up to now.

I sincerely hope that it isn’t an indicator that Unit Testing is bit of an after-thought.

Debbie Saves Christmas – Database Development in a Devops Wonderland : Re-runnable DDL

The Anti-Kyte - Wed, 2018-12-19 11:45

Debbie felt a shiver run down her spine. To be fair, that wasn’t much of a surprise since Lapland at this time of the year does tend to be a little chilly.
However, it wasn’t the weather that was the cause of her discomfort. Someone high up in the IT Department of her employer, The National Elf ( aka Santa’s Grotto) had decided that Continuous Integration was the way to go and had decreed that it should be used forthwith across all projects and technologies in the Company.
This included the application that Debbie was responsible for.
Written around 15 years ago, this Stock Control Application had already survived one major database upgrade but was now resolutely “stuck” on Oracle 11g.
The thing about so many modern software development techniques is that they are based on the premise that code is file based. Of course, this is also true (or at least, true enough) for some database objects, but tables are a little different.
You couldn’t simply “replace” a table like you could any other program as doing so would destroy any data in that table. For this reason, any changes required to tables for a mature application such as this would be applied by means of DDL ALTER statements.
Of course, there are tools around for this sort of thing. Liquibase, FlexDeploy – these were just two of the tools that Debbie had no chance of getting approval to use in the face of a bureaucracy that made the Vogon Civil Service look like it was following Extreme Programming.
If she was going to get her changes through by her Christmas Eve deadline, she would have to get creative…

Object types and their inherent “re-runnability”

Debbie began by making a list of the types of database object in her application and whether or not they could be generated by a re-runnable DDL statement – i.e. a CREATE OR REPLACE statement.
Those that could included :

  • packages (specifications and bodies)
  • functions
  • procedures
  • views
  • triggers
  • types (specifications and bodies)

These were more of a challenge :

  • tables
  • sequences
  • indexes
  • constraints
  • materialized views

The code changes that Debbie needed to deploy mostly consisted of object types of the latter group…

Elf and Safety Gone Mad !

The staff in the warehouse had gotten fed up with loading parcels that didn’t actually need to be delivered. Elf and Safety had gotten involved, saying something about repetitive heavy lifting and…well…Debbie had come up with the following code.

First of all, there were a couple of new tables :

create table countries(
    iso_code varchar2(3),
    country_name varchar2(4000)
)
/

…and…

create table deliveries(
    id number constraint del_pk primary key,
    recipient_name varchar2(4000) not null,
    date_of_birth date not null,
    country_code varchar2(3),
    belief_end_date date, 
    constraint del_uk unique ( recipient_name, date_of_birth)
)
/

…then a sequence for the DELIVERIES.ID values( this is 11g remember, identity columns are not yet a thing)…

create sequence del_id_seq
    start with 1
    increment by 1
    nocycle
/
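
As an aside: in 11g the usual companion to such a sequence would be a before-insert trigger - identity columns only arrive in 12c. A minimal sketch, with a trigger name of my own invention (it isn't part of Debbie's release scripts):

create or replace trigger del_id_trg
    before insert on deliveries
    for each row
begin
    if :new.id is null then
        :new.id := del_id_seq.nextval;
    end if;
end;
/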

In subsequent iterations of her development, Debbie decided to add a Primary Key…

alter table countries add constraint coun_pk primary key (iso_code)
/

…rename a column…

alter table deliveries 
    rename column country_code to coun_iso_code
/

… add a Foreign Key…

alter table deliveries add constraint 
    del_coun_fk foreign key (coun_iso_code) references countries(iso_code)
/

…and a Not Null constraint…

alter table deliveries modify coun_iso_code not null
/

…a physical column…

alter table deliveries add naughty_flag varchar2(1) not null
/

…a check constraint…

alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))
/

…and a virtual column…

alter table deliveries add 
    active_flag generated always as ( 
        case when belief_end_date is not null then 'Y' else 'N' end) virtual visible
/

Finally, there was a view on the table

create or replace view deliveries_vw as
    select del.recipient_name, coun.country_name
    from deliveries del
    inner join countries coun 
        on coun.iso_code = del.coun_iso_code
    where del.active_flag = 'Y'
    and del.naughty_flag = 'N'
/

After all of that, Debbie then needed to add some static (reference) data :

-- Furthest stop on the round !
insert into countries(iso_code, country_name)
values('ATA', 'ANTARCTICA')
/

insert into countries(iso_code, country_name)
values('CAN', 'CANADA')
/

insert into countries(iso_code, country_name)
values('COL', 'COLOMBIA')
/

-- Company Head Office is here...
insert into countries(iso_code, country_name)
values('FIN', 'FINLAND')
/


insert into countries(iso_code, country_name)
values('DEU', 'GERMANY')
/


insert into countries(iso_code, country_name)
values('IND', 'INDIA')
/


insert into countries(iso_code, country_name)
values('MDG', 'MADAGASCAR')
/

insert into countries(iso_code, country_name)
values('NZL', 'NEW ZEALAND')
/


-- Can't find the ISO code for Wales for some reason !
insert into countries(iso_code, country_name)
values('GBR', 'UNITED KINGDOM')
/


insert into countries(iso_code, country_name)
values('USA', 'UNITED STATES OF AMERICA')
/

commit;

It’s probably worth noting that Debbie was following the company’s coding standards which – among other things – specified that :

  1. Primary Key, Foreign Key and Check constraints were always explicitly named (although not NOT NULL constraints)
  2. The definitions of tables, constraints etc would not change between runs – i.e. if the Primary Key columns on a table were to change then the appropriate DDL would be written to drop the existing Primary Key

These changes were executed via a master release script :

prompt Creating new tables

@countries_tbl.sql
@deliveries_tbl.sql

prompt Creating a sequence

@del_id_seq.sql

prompt Adding PK to a table

@coun_pk.sql

prompt Renaming a column 

@rename_del_coun_code.sql

prompt Adding Foreign Key

@del_coun_fk.sql

prompt adding a Not Null constraint

@del_iso_code_nn.sql

prompt adding new column 

@naughty_flag.sql

prompt creating check constraint

@del_nf_chk.sql

prompt adding virtual column

@del_active_virt.sql

prompt creating view

@deliveries_vw.sql

prompt creating COUNTRIES records...

@add_countries.sql

prompt Deployment completed.

Whilst the first run of this script would work as expected…

…second and subsequent runs would cause the CI server to light up like a Christmas Tree ( and not in a good way)…

So, armed with a roll of PL/SQL and a sprinkling of SQL*Plus fairy dust, Debbie prepared to write some re-runnable DDL…

Option 1 – Exceptional Scripting

The first option was simply to anticipate the errors that might come up when the same DDL statement was executed multiple times.
The general format of such a script would be something like :

declare
    e_obj_exists exception;
    
    -- ORA-00955: name is already used by an existing object
    pragma exception_init( e_obj_exists, -955);    
begin
    -- nested blocks so that each exception can be handled individually and the script can then continue
    begin
        execute immediate 'create table sleeps_till_xmas( sleeps number)';
    exception when e_obj_exists then
        dbms_output.put_line('Someone started counting early this year !');
    end;
end;
/

Using this pattern, Debbie could knock up something like…

set serveroutput on size unlimited
spool rerun_master1.log

declare
    --
    -- Create exceptions for the errors we may anticipate in the event of second or subsequent
    -- execution of DDL. 
    --

    -- Table and object exceptions
    e_obj_exists exception;
    e_no_such_tab exception;
    
    -- Sequence exceptions
    e_no_such_seq exception;

    -- Column change exceptions
    e_dup_col_name exception;
    e_col_exists exception;
    e_no_such_col exception;
    e_col_already_not_null exception;

    -- Constraint exceptions
    e_tab_has_pk exception;
    e_cons_exists exception;
    e_fk_exists exception;
    
    -- ORA-00955: name is already used by an existing object
    pragma exception_init( e_obj_exists, -955);

    -- ORA-00942: table or view does not exist 
    pragma exception_init( e_no_such_tab, -942);

    -- ORA-02289: sequence does not exist
    pragma exception_init( e_no_such_seq, -2289);

    -- ORA-00957: duplicate column name
    pragma exception_init( e_dup_col_name, -957);
    
    -- ORA-01430: column being added already exists in table
    pragma exception_init( e_col_exists, -1430);

    -- ORA-00904: "%s": invalid identifier
    pragma exception_init( e_no_such_col, -904);

    -- ORA-01442: column to be modified to NOT NULL is already NOT NULL
    pragma exception_init( e_col_already_not_null, -1442);

    -- ORA-02260 : table can have only one primary key
    pragma exception_init( e_tab_has_pk, -2260);

    -- ORA-02264: name already used by an existing constraint
    pragma exception_init( e_cons_exists, -2264);

    -- ORA-02275: such a referential constraint already exists in the table
    pragma exception_init( e_fk_exists, -2275);

begin
    dbms_output.put_line('Creating new tables');
    -- each DDL statement will need to be in its own block so we can handle the exceptions separately
    begin
        dbms_output.put_line('COUNTRIES');
        execute immediate 
            'create table countries(
                iso_code varchar2(3),
                country_name varchar2(4000))';
    exception when e_obj_exists then 
        dbms_output.put_line('Table exists - skipping.');
    end;

    begin
        dbms_output.put_line('DELIVERIES');
        execute immediate
            'create table deliveries(
                id number not null,
                recipient_name varchar2(4000) not null,
                country_code varchar2(3),
                belief_end_date date,
                date_of_birth date)';
    exception when e_obj_exists then 
        dbms_output.put_line('Table exists - skipping.');
    end;

    dbms_output.put_line('Creating Sequence');
    begin
        execute immediate 
            'create sequence del_id_seq
                start with 1
                increment by 1
                nocycle';
        exception when e_obj_exists then
            dbms_output.put_line('Sequence exists - skipping');
    end;

    dbms_output.put_line('Adding PK to a table');
    begin
        execute immediate 'alter table countries add constraint coun_pk primary key (iso_code)';
    exception when e_tab_has_pk then
        dbms_output.put_line('PK already exists - skipping');
    end;

    dbms_output.put_line('Renaming a column');
    begin
        execute immediate 'alter table deliveries rename column country_code to coun_iso_code';
    exception when e_dup_col_name then
        dbms_output.put_line('Column already renamed - skipping');
    end;

    dbms_output.put_line('Adding a Foreign Key');
    begin
        execute immediate 
            'alter table deliveries add constraint 
                del_coun_fk foreign key (coun_iso_code) references countries(iso_code)';
    exception when e_fk_exists then
        dbms_output.put_line('FK already exists - skipping');
    end;

    dbms_output.put_line('adding a Not Null constraint');
    begin
        execute immediate 'alter table deliveries modify coun_iso_code not null';
    exception when e_col_already_not_null then
        dbms_output.put_line('Column is already Not Null - skipping');
    end;

    dbms_output.put_line('adding new column');
    begin
        execute immediate 'alter table deliveries add naughty_flag varchar2(1) not null';
    exception when e_col_exists then
        dbms_output.put_line('Column already exists - skipping');
    end;

    dbms_output.put_line('creating check constraint');
    begin
        execute immediate q'[alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))]';
    exception when e_cons_exists then 
        dbms_output.put_line('Constraint already exists - skipping');
    end;

    dbms_output.put_line('adding virtual column');
    begin
        execute immediate 
            q'[alter table deliveries add 
                active_flag generated always as ( 
                    case when belief_end_date is not null then 'Y' else 'N' end) virtual visible]';
    exception when e_col_exists then
        dbms_output.put_line('Column already exists - skipping');
    end;

end;
/

rem 
rem View statement is inherently re-runnable (CREATE OR REPLACE) so just execute the script...
rem

prompt Creating View
@deliveries_vw.sql


prompt Creating COUNTRIES records

declare

    procedure ins( i_code in countries.iso_code%type, i_name countries.country_name%type) 
    is
    begin
        merge into countries
        using dual 
        on ( iso_code = i_code)
        when not matched then 
            insert( iso_code, country_name)
            values( i_code, i_name);
    end ins;

begin
  -- Furthest stop on the round !
ins('ATA', 'ANTARCTICA');
ins('CAN', 'CANADA');
ins('COL', 'COLOMBIA');
-- Company Head Office is here...
ins('FIN', 'FINLAND');
ins('DEU', 'GERMANY');
ins('IND', 'INDIA');
ins('MDG', 'MADAGASCAR');
ins('NZL', 'NEW ZEALAND');
-- Can't find the ISO code for Wales for some reason !
ins('GBR', 'UNITED KINGDOM');
ins('USA', 'UNITED STATES OF AMERICA');
commit;

end;
/

prompt Deployment completed.
spool off

On first execution, this script would run pretty much in the same way as the original :

Subsequent runs, however, would be a little smoother…

Whilst it did the job, Debbie felt that this approach had some shortcomings.

For one thing, table creation statements could get quite lengthy and complex so having them in-line as literals could get a bit fiddly.
For another, it was necessary to anticipate which exceptions you would run into and handle them accordingly. This was not always straightforward.
For example, trying to add an existing column to a table would result in :

ORA-01430: column being added already exists in table

However, renaming a column that already exists would give you :

ORA-00957: duplicate column name

On top of that, this approach required the re-coding of the exception handlers every time you wrote a new master release script.
It was for this reason that you would find exceptions that were declared but not used in a script ( as there are in this one).
Additionally, there was the temptation to throw all of the DDL into a single script to allow re-use of the exception declarations. This was likely to lead to a script which would quickly become quite large.

Debbie thought that there was an alternative that, with a bit of preparation, would require her to do rather less typing in the long run…

Using the Data Dictionary

To start with, Debbie created a package containing functions to check for the existing state of objects :

create or replace package ddl_checks
    authid current_user
as

    function object_exists( 
        i_name in user_objects.object_name%type,
        i_type in user_objects.object_type%type)
        return boolean;

    function column_exists(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean;

    function column_is_nullable(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean;

    function constraint_exists(
        i_table user_constraints.table_name%type,
        i_cons_name user_constraints.constraint_name%type,
        i_cons_type user_constraints.constraint_type%type)
        return boolean;
end ddl_checks;
/

create or replace package body ddl_checks
as

    -- PRIVATE package members
    function is_valid_type( i_type in user_objects.object_type%type)
        return boolean deterministic
    --
    -- Returns true if the object type is one that would be included in user_objects
    -- and can be used as the direct subject of a CREATE statement
    --
    is
    begin
        return upper(i_type) in (
            'TABLE', 'SEQUENCE', 'SYNONYM', 'INDEX', 'MATERIALIZED VIEW', 'VIEW',
            'FUNCTION', 'PROCEDURE', 'TRIGGER',
            'PACKAGE', 'PACKAGE BODY',
            'TYPE', 'TYPE BODY');
    end is_valid_type;

    -- PUBLIC package members
    function object_exists( 
        i_name in user_objects.object_name%type,
        i_type in user_objects.object_type%type)
        return boolean
    is
        dummy pls_integer;
        e_invalid_type exception;
    begin
        if not is_valid_type( i_type) then
            raise e_invalid_type;
        end if;
        select null into dummy 
        from user_objects
        where object_name = upper( i_name)
        and object_type = upper( i_type);

        return true;
    exception 
        when no_data_found then
            return false;
        when e_invalid_type then
            raise_application_error(-20900, 'Cannot verify the existence of this type of object');
    end object_exists;

    function column_exists(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean
    is
        dummy pls_integer;
    begin
        select null into dummy
        from user_tab_columns
        where table_name = upper(i_table)
        and column_name = upper(i_column);

        return true;
    exception when no_data_found then
        return false;
    end column_exists;

    function column_is_nullable(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean
    is
        l_nullable user_tab_columns.nullable%type;
    begin 
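        -- NB: this assumes the column actually exists - if it does not, the
        -- NO_DATA_FOUND raised by this query will propagate to the caller unhandled.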
        select nullable into l_nullable  
        from user_tab_columns 
        where table_name = upper(i_table)
        and column_name = upper( i_column);

        return( l_nullable = 'Y');
    end column_is_nullable;

    function constraint_exists(
        i_table user_constraints.table_name%type,
        i_cons_name user_constraints.constraint_name%type,
        i_cons_type user_constraints.constraint_type%type)
        return boolean
    is
        dummy pls_integer;
        e_invalid_type exception;
    begin
        if upper(i_cons_type) not in ('P', 'U', 'R', 'C', 'V') then
            -- valid constraint types in 11g are :
            -- P(rimary Key)
            -- U(nique Key)
            -- R(eferential Integrity Constraint or Foreign Key)
            -- C(heck constraint)
            -- V(iew - usually a check option)
            raise e_invalid_type;
        end if;

        select null into dummy
        from user_constraints
        where table_name = upper(i_table)
        and constraint_name = upper(i_cons_name)
        and constraint_type = upper(i_cons_type);

        return true;
    exception
        when no_data_found then
            return false;
        when e_invalid_type then
            raise_application_error( -20901, 'Not a valid constraint type value');
    end constraint_exists;

end ddl_checks;
/

Nothing too exotic here – Debbie simply uses the data dictionary to work out the existence or state of database objects in the current schema.
The use of invoker’s rights ensures that these functions will only report on objects in the calling schema.
This fitted in with the deployment practices for this application, where the application owner schema was used to run the deployment.

Now, Debbie may have been tempted at this point to make use of some of SQLcl’s nifty new features. Unfortunately the Vogons had put the kybosh on that particular option, so she just had to do her best with good old SQL*Plus…

set serverout on
var command varchar2(4000)
set verify off

declare
    i_type varchar2(30) := '&1';
    i_name varchar2(30) := '&2';
    i_path varchar2(4000) := '&3';

begin    
    if ddl_checks.object_exists( i_name, i_type) then
        :command := 'prompt '||i_type||' '||i_name||' already exists - skipping';
    else
        :command := '@'||i_path;
    end if;   
end;
/
set heading off 
set feedback off 
set termout off

spool create_object.tmp
print :command
spool off

set termout on
set feedback on
@create_object.tmp

undef command

It’s probably worth pausing here to go through what this script is doing.
First, Debbie declared a SQL*Plus variable called command.

This variable is then populated in an anonymous PL/SQL block based on whether the object specified by the first two arguments passed in already exists. If it does then command will simply be set to output a message to this effect. Otherwise it will be set to call the script specified in the third argument.

The value of command is written to a file called create_object.tmp which is then executed.

So, to use this script for the COUNTRIES table, which has its DDL in the script ddl/tables/countries_tbl.sql, we can run…

@create_object.sql TABLE COUNTRIES ddl/tables/countries_tbl.sql

The first time this is run ( i.e. when the table does not already exist), the output file – create_object.tmp – looks like this :

@ddl/tables/countries_tbl.sql

As a result, the script is executed and the table is created :


The second and subsequent runs produce a file containing…

prompt TABLE COUNTRIES already exists - skipping

which produces the output :

A similar script can then be used for column creation :

set serverout on
var command varchar2(4000)
set verify off
declare
    i_table varchar2(30) := '&1';
    i_column varchar2(30) := '&2';
    i_path varchar2(4000) := '&3';

begin    
    if ddl_checks.column_exists( i_table, i_column) then
        :command := 'prompt Column '||i_table||'.'||i_column||' already exists - skipping';
    else
        :command := '@'||i_path;
    end if;   
end;
/
set heading off 
set feedback off 
set termout off

spool add_column.tmp
print :command
spool off

set termout on
set feedback on
@add_column.tmp

undef command
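
Usage mirrors create_object.sql - table name, then column name, then the path to the DDL script :

@add_column.sql deliveries Naughty_Flag ddl/table_alter/naughty_flag.sql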

To add the new COUNTRIES records, Debbie preferred the option of simply moving the merge statement into a package :

create or replace package manage_countries as
    procedure save_country( i_code countries.iso_code%type, i_name countries.country_name%type);
end manage_countries;
/

create or replace package body manage_countries as
    procedure save_country( i_code countries.iso_code%type, i_name countries.country_name%type)
    is 
    begin
        merge into countries
            using dual
            on( iso_code = i_code)
            when matched then update
                set country_name = i_name
            when not matched then 
                insert( iso_code, country_name)
                values( i_code, i_name);
    end save_country;

end manage_countries;
/

With these changes in place, her individual scripts could remain largely recognisable (and in some cases, unchanged) as simple DDL statements.
The scripts that she would have to change were those to do with constraints…

The Primary Key :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'countries', 'coun_pk', 'P') then
        msg := 'PK already exists - skipping';
    else
        execute immediate 'alter table countries add constraint coun_pk primary key (iso_code)';
        msg := 'PK Constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

The Foreign Key :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'deliveries', 'del_coun_fk', 'R') then
        msg := 'FK already exists - skipping';
    else
        execute immediate 
            'alter table deliveries add constraint del_coun_fk foreign key (coun_iso_code) references countries(iso_code)';
        msg := 'FK Constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

The Not Null constraint :

declare
    msg varchar2(4000);
begin
    if ddl_checks.column_is_nullable( 'deliveries', 'coun_iso_code') then
        execute immediate 'alter table deliveries modify coun_iso_code not null';
        msg := 'Column made mandatory';
    else 
        msg := 'Column is already Not Null - skipping';
    end if;
    dbms_output.put_line(msg);
end;
/

The check constraint :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'deliveries', 'del_nf_chk', 'C') then
        msg := 'Check constraint already exists - skipping';
    else
        execute immediate 
            q'[alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))]';
        msg := 'Check constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

…and the insert script :

begin
    manage_countries.save_country('ATA', 'ANTARCTICA');
    manage_countries.save_country('CAN', 'CANADA');
    manage_countries.save_country('COL', 'COLOMBIA');
    manage_countries.save_country('FIN', 'FINLAND');
    manage_countries.save_country('DEU', 'GERMANY');
    manage_countries.save_country('IND', 'INDIA');
    manage_countries.save_country('MDG', 'MADAGASCAR');
    manage_countries.save_country('NZL', 'NEW ZEALAND');
    manage_countries.save_country('GBR', 'UNITED KINGDOM');
    manage_countries.save_country('USA', 'UNITED STATES OF AMERICA');

    commit;
end;
/

With these changes in place, Debbie was now able to run the following master release script, confident that any errors reported would be genuine…

prompt Creating new tables

@create_object.sql TABLE COUNTRIES ddl/tables/countries_tbl.sql
@create_object.sql Table Deliveries ddl/tables/deliveries_tbl.sql

prompt Creating a sequence

@create_object.sql sequence del_id_seq ddl/sequences/del_id_seq.sql

prompt Adding PK to a table

@ddl/table_alter/coun_pk.sql

prompt Renaming a column 

@add_column.sql DELIVERIES coun_iso_code ddl/table_alter/rename_del_coun_code.sql

prompt Adding Foreign Key

@ddl/table_alter/del_coun_fk.sql

prompt adding a Not Null constraint

@ddl/table_alter/del_iso_code_nn.sql

prompt adding new column 
@add_column.sql deliveries Naughty_Flag ddl/table_alter/naughty_flag.sql

prompt creating check constraint

@ddl/table_alter/del_nf_chk.sql

prompt adding virtual column

@add_column.sql Deliveries ACTIVE_FLAG ddl/table_alter/del_active_virt.sql

prompt creating view

@ddl/views/deliveries_vw.sql

prompt Creating package

@ddl/packages/manage_countries.pks
@ddl/packages/manage_countries.pkb

prompt creating COUNTRIES records...

@static_data/add_countries.sql

prompt Deployment completed.

The initial run looked like this :

Subsequent runs worked as expected :

“Well”, thought Debbie, “I think I’ve really earned my Christmas Bonus this year !”

NOTE : This story is fictional. Any resemblance to any Debbies, living or Welsh, is purely intentional.

I’ll drink to that !

How to unload table data to csv file in old oracle version, the fastest way

Tom Kyte - Wed, 2018-12-19 10:46
We are currently using Oracle Database 8i. We have multiples table and around 600 millions of rows. We use SQLPLUS to export rows to csv with nested queries. Configuration: <code> set term off set feedback off set linesize 32767 set hea off...
Categories: DBA Blogs

Recursive query to get base tables for views, but from a list of views

Tom Kyte - Wed, 2018-12-19 10:46
Hi, I appreciate that similar questions have been asked before, but I am struggling with the next step if you like. I want to get all the base tables (recursively) used by a list of views. From various posts here I know how to do this for a sin...
Categories: DBA Blogs

Transfer data from one data file to another data file,so that I can drop a data file without losing data

Tom Kyte - Wed, 2018-12-19 10:46
Dear Team, My table space has 28 data files each with size of 30g(approx). Below the query I used to find out each data file size and free space in that data file. <code> select t1.file_id,total,free from (select file_id,trunc(bytes/1024/10...
Categories: DBA Blogs

PL/SQL Parallel function performance

Tom Kyte - Wed, 2018-12-19 10:46
Hi, I was trying to optimize one of the batch update queries (given below) in my application <code> UPDATE schema1.TEST_RECORDS PARTITION (PARTDEF) gr SET gr.purge_status_cd = 'PURGE_PENDING', gr.purge_date = SYSDATE WHERE EXISTS ( SELECT 1 F...
Categories: DBA Blogs

Nesting of functions

Tom Kyte - Wed, 2018-12-19 10:46
Hi all, I have seen many number of times a SQL function nested one another to greater depth Concat(substring(instring,concat()))... this is just an example My question is exactly this nesting work and how it is evaluated to ,which function goin...
Categories: DBA Blogs

Can one segment contain data from more then one object & Block size of Buffer cache

Tom Kyte - Wed, 2018-12-19 10:46
Hello Tom, I have a few doubts regarding Segments, extents & data_block 1. Can a Segment contain data from more then one object (Table, Index or MV) i) if no then new segments must be created with each new objects and upon DDL operations (like...
Categories: DBA Blogs

Huge Read Only Data Protection and H.A, Best Practice

Tom Kyte - Wed, 2018-12-19 10:46
Hi TOM, Suppose we have a VLDB with partitioned tables on distinct tablespaces per partition. Data on partitions (Tablespaces) can be read-only after a few months. Now we have huge (RMAN) backups from this read-only tablesapces. We want to setu...
Categories: DBA Blogs

Virtual Patching or Good Security Design instead?

Pete Finnigan - Wed, 2018-12-19 10:46
I got an email from someone recently who asked me about virtual patching for Oracle as they were running an out of date version of Oracle and were thinking that virtual patching may be a good solution to make their database....[Read More]

Posted by Pete On 19/12/18 At 01:32 PM

Categories: Security Blogs

Lightning Web Components - @api, slots and getters

Rob Baillie - Wed, 2018-12-19 05:49

I've blogged about a few of the behaviours of Lightning Web Components, but the proof is really in building useful bits. What happens when you actually try to make a re-usable component?

For our example, we'll rebuild 'ui:message': a now (seemingly) defunct base component that would render a message in a box, coloured according to the 'severity' of the message being shown. In the original it could also be set to 'closable', although we're going to ignore that and focus on just the rendering.

In a Lightning component we would use it like this:

Original usage - Lightning Component

<ui:message title="Error" severity="error" >{!v.errorMessages}</ui:message>

Ideally, the version we create would be used like this:

Desired usage

<c-message title="Error" severity="error" >{errorMessages}</c-message>

Looks pretty straightforward, and actually - it is. Just as long as we know about a few simple concepts.

Before we go into them, let's see what a working example could look like:

Javascript component

import { LightningElement, api } from 'lwc';

export default class Message extends LightningElement {

    @api title;
    @api severity;

    get classes() {
        return this.severity + ' uiMessage';
    }
}
HTML Template

<template>
    <div class={classes} role="alert" >
        <div class="uiBlock" >
            <div class="bBody" >
                <h4>{title}</h4><slot></slot>
            </div>
        </div>
    </div>
</template>

OK then, let's pick a few of these bits apart, and hopefully we'll explain a few little behaviours along the way.

First up, let's take a look at the '@api' declarations.

@api

The @api property lines are pretty simple to understand - they define that 'title' and 'severity' are publicly available properties of the component. In the context of Lightning Web Components, public means 'available outside of the component' and private means 'invisible outside of the component'. It's tempting to think that this is to do with the scope of the Javascript, but it's not.

That is, every property of the Javascript component is available to be referenced in the HTML template - whether it is 'public' or 'private'. One way of thinking about it is that the HTML template forms part of the component, and so it can see private properties.

Another (probably more accurate) way of thinking about it is that the template is processed by the Javascript component (that code isn't immediately obvious, but it's almost certainly in LightningElement - which this class extends), and the Javascript can see its own properties, so the private ones are available.

However, other components (like the ones that include this one in their templates) can only see public properties. @api is how you make them public. Doing so means that they are available as attributes on the tag you use to include the component (hence <c-message title="Error"... is possible).
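To make that distinction concrete, here's a minimal sketch of my own (the component and property names are hypothetical, not part of the example we're building) with one public and one private property. The template could render both, but only the public one can be set by a parent - and note that a camelCase property is exposed as a kebab-case attribute:

Javascript component

import { LightningElement, api } from 'lwc';

export default class Example extends LightningElement {

    // Public: a parent can set this as <c-example public-label="Hi" >
    @api publicLabel;

    // Private: usable in this component's own template, invisible to parents
    privateCounter = 0;
}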

Not only that, but every @api decorated property is also 'reactive'. That is, whenever its value changes, the component is re-rendered. The documentation is pretty clear on that point - it's presented as a fundamental characteristic of a public property:

Public Properties

To expose a public property, decorate it with @api. Public properties define the API for a component. An owner component that uses the component in its markup can access the component’s public properties. Public properties are reactive. If the value of a reactive property changes, the component’s template rerenders any content that references the property.


Why would a public property be reactive?

Put simply, if we change the value of one of those properties in a parent component, we want the component to re-render - and it's pretty much guaranteed that we ALWAYS want the component to re-render.

For example, we may do the following:


<c-message title={title} severity={severity} >{messages}</c-message>

When the value of 'title' or 'severity' changes, we would always want the message box to re-render to show our new values. And so the framework takes care of that and makes EVERY public property of the component reactive.
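As a sketch of that behaviour (this parent component, and its 'downgrade' handler, are hypothetical), all the parent has to do is change its own properties - the framework takes care of re-rendering the child:

Parent component's Javascript

import { LightningElement, track } from 'lwc';

export default class ParentComponent extends LightningElement {

    @track title = 'Error';
    @track severity = 'error';

    // Called from, say, an onclick handler in the parent's template.
    // Because 'title' and 'severity' are passed into <c-message> as public
    // properties, changing them here re-renders the message box.
    downgrade() {
        this.title = 'Warning';
        this.severity = 'warning';
    }
}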

So that takes care of the attributes we need to pass in, what about the content?

Slots

Lightning Components had facets. And they weren't intuitive. I mean they weren't complex, but they weren't in keeping with HTML - they always felt unnatural - especially in the simplest of cases.

Lightning Web Components fixes that, with slots. And in the simple case they are trivial. The documentation isn't long, and doesn't need to be.

All we need to do, in this simple case, is add <slot></slot> into our component, and the body of any tag that instantiates the component will be rendered in that slot.

Now something that's missing from the documentation, which is a fairly obvious behaviour once you see it in action, is that slots are effectively reactive.

That is, if you change the content of the tag, that content is immediately reflected in the component's rendered output.

So, in our example:


<c-message title="Error" severity="error" >{errorMessages}</c-message>

Whenever the value of 'errorMessages' changes, the slot inside the 'message' component is re-rendered to include the new content.

I admit, I had assumed that this would be the case, but I didn't immediately realise that it was an assumption. So I thought it was worth calling out.
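By way of illustration, here's a sketch of a parent that exercises this (the 'addError' handler is hypothetical) - any change to the property rendered in the tag's body shows up in the slot:

Parent component's Javascript

import { LightningElement, track } from 'lwc';

export default class ParentComponent extends LightningElement {

    @track errorMessages = '';

    // Hypothetical handler: each call appends to 'errorMessages', and the
    // body of <c-message> - and therefore its slot - re-renders to match.
    addError( message ) {
        this.errorMessages = this.errorMessages + ' ' + message;
    }
}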

Getters

The final part of the example that I want to explain is the use of the 'getter':


get classes() {
    return this.severity + ' uiMessage';
}

What we're doing here is building the list of CSS classes for a node in the component, combining one of the passed-in attributes with a standard class that must always be applied.

The use of the getter illustrates an important difference between the behaviour of the templates in Lightning Components (LC) and Lightning Web Components (LWC), as well as serving as a reminder of the behaviour of properties.

That is, in LC we could have done the following in our template:


<div class="{!v.severity + ' uiMessage'}" role="alert" >

In LC, our replacements could include expressions, so we could build up strings in the template. In LWC we can't do this; we can only reference properties or getters.

Not only that, but we can't build up the strings in the attribute assignment.

I.E. We can't do this:


<div class="{severity} uiMessage" role="alert" >

In LWC we don't assign properties to attributes in this way; the framework takes care of wrapping the value in double quotes, escaping the strings, and other such things, so we can only assign the property, and that's it.

I.E. This is what is allowed:


<div class={severity} role="alert" >

So, if we want to assign more than just the value of 'severity' to the class attribute, we need to build that string up outside of the template.

Your first reaction might be - OK, we can create a tracked property to store it, right?


@track classes = this.severity + ' uiMessage';

But this doesn't work. You'll end up with the classes property defined as 'undefined uiMessage', and it won't change. Why is that?

Well, it's tempting to think that 'track' and 'api' mean that the Javascript will re-run when things change, but that's not what they do - nor what the documentation says they'll do.

Rather, if a property is reactive it means that the component will be re-rendered when the property changes. That says nothing about running Javascript.

So when we look at the above, what happens is the property 'classes' is set when the Javascript object is constructed. At this point the property 'severity' is undefined. When the 'severity' is updated via the attribute, the component is re-rendered and the 'classes' property is re-injected into the template, but the Javascript that sets the classes property is not re-run - that is only executed when the object is instantiated.
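If it helps, the property initialiser is just shorthand for an assignment in the constructor. Written out explicitly (purely as a sketch to show the timing), it's clear why 'severity' is still undefined at the point 'classes' is set:

Javascript component

import { LightningElement, api } from 'lwc';

export default class Message extends LightningElement {

    @api severity;

    constructor() {
        super();
        // Runs once, when the component is instantiated - before the parent
        // has passed in any attributes - so this.severity is still undefined.
        this.classes = this.severity + ' uiMessage';
    }
}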

So, instead of setting the 'classes' property directly, we set up a getter for it:

Javascript component

get classes() {
    return this.severity + ' uiMessage';
}

Now, when the 'severity' property changes, the 'classes' property is re-injected. In order to get the value for 'classes', the getter is executed - this is the only way the property can be retrieved. In doing so, the string concatenation is re-evaluated and the new value is returned.
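This also makes the getter a natural home for any defaulting logic. For example (an extension of my own, not part of the original component), we could fall back to a standard severity when none has been passed in:

Javascript component

get classes() {
    // Hypothetical default: treat a missing severity as 'info'
    return ( this.severity || 'info' ) + ' uiMessage';
}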

Summary

None of the concepts here are particularly difficult, or really that earth shattering, but building even the simplest of re-usable components starts to shed some light on what the parts do and why.

The framework has been very thoughtfully put together, and some of the techniques will be different to what people are used to, having cut their Javascript teeth with Lightning Components, but the changes are for very good reasons. An example like this really shows how those changes make for simple components.

[Solved] Java Forms opening issue in EBS (R12)

Online Apps DBA - Wed, 2018-12-19 04:11

Oracle Forms can be configured in two modes, i.e. Servlet and Socket. An issue that arises frequently in EBS R12 is Java Forms failing to open. Are you facing the same issue? If so, you've got nothing to worry about. Visit: https://k21academy.com/appsdba40 and learn about: ✔All about the issue ✔How to resolve this issue (Solution) & […]

The post [Solved] Java Forms opening issue in EBS (R12) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Podcast: REST or GraphQL? An Objective Comparison

OTN TechBlog - Tue, 2018-12-18 23:00

Are you a RESTafarian? Or are you a GraphQL aficionado? Either way you'll want to listen to the latest Oracle Groundbreaker Podcast, as a panel of experts weighs the pros and cons of each technology.

Representational State Transfer, known to its friends as REST, has been around for nearly two decades and has a substantial following. GraphQL, on the other hand, became publicly available in 2015, and only a few weeks ago moved under the control of the GraphQL Foundation, a project of the Linux Foundation. But despite its relative newcomer status, GraphQL has gained a substantial following of its own.

So which technology is best suited for your projects? That's your call. But this discussion will help you make that decision, as the panel explores essential questions, including: 

  • What circumstances or conditions favor one over the other?
  • How do the two technologies complement each other?
  • How difficult is it for long-time REST users to make the switch to GraphQL?

This program is Oracle Groundbreaker Podcast #361. It was recorded on Wednesday, December 12, 2018. Listen!

The Panelists

  • Luis Weir - CTO, Oracle Practice, Capgemini; Oracle Groundbreaker Ambassador; Oracle ACE Director
  • Chris Kincanon - Engineering Manager / Technical Product Owner, Spreemo
  • Dolf Dijkstra - Consulting Solutions Architect, A-Team - Cloud Solutions Architect, Oracle
  • James Neate - Oracle PaaS Consultant, Capgemini

Additional Resources
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via all the usual podcast platforms.
