Feed aggregator

How to gather statistics on a standard edition database

Tom Kyte - Thu, 2018-05-24 07:46
Hi, I'd like to gather some statistics on long-running statements on a Standard Edition database. Can you please suggest the best way to gather stats on this statement? BANNER ...
Categories: DBA Blogs

Limit and conversion very long IN list : WHERE x IN ( ,,, ...)

Tom Kyte - Thu, 2018-05-24 07:46
How many elements may be in the WHERE x IN (,,,) list? I see 2 ways to overcome the IN list limitation: 1) use x=el_1 OR x=el_2 OR x=el_3 OR ... 2) create a temporary table, but another question arises here: why create table A( X INTEGER, Y...
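For context: Oracle raises ORA-01795 once a parenthesised value list exceeds 1,000 elements. A minimal sketch of the temporary-table workaround, with hypothetical table and column names:

create global temporary table in_list_values ( x integer ) on commit delete rows;
-- populate once per transaction, then join instead of using the long IN list:
insert into in_list_values values (1);
insert into in_list_values values (2);
select t.* from my_table t join in_list_values v on v.x = t.x;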
Categories: DBA Blogs

ADWC – System and session settings (DWCS lockdown profile)

Yann Neuhaus - Thu, 2018-05-24 05:04

The Autonomous Data Warehouse Cloud service is a PaaS managed service where we have a PDB and an ADMIN user which has most of the system privileges. For example, we have the privilege to change initialization parameters:
SQL> select * from dba_sys_privs where grantee=user and privilege like 'ALTER S%';
 
GRANTEE PRIVILEGE ADMIN_OPTION COMMON INHERITED
------- --------- ------------ ------ ---------
ADMIN ALTER SESSION YES NO NO
ADMIN ALTER SYSTEM YES NO NO

Still, not everything is allowed, for two reasons: to ensure that we cannot break the Oracle-managed CDB, and to force us to use only the features allowed in the ‘autonomous’ service. This is enforced with a lockdown profile:
SQL> show parameter pdb_lockdown
 
NAME TYPE VALUE
------------ ------ -----
pdb_lockdown string DWCS

DWCS means Data Warehouse Cloud Service, which was the name of the Autonomous Data Warehouse Cloud service until Larry Ellison announced this self-driving, no-human trend under the marketing umbrella of ‘autonomous’.

The limitations are all documented, but I like to see them for myself, and in 18c we have a means to see the lockdown rules from the PDB itself, through V$LOCKDOWN_RULES.

ALTER SYSTEM

Basically, in this ADWC all ALTER SYSTEM statements are disallowed, and then a few exceptions are added back for what we are allowed to do:

SQL> select * from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause_option is null;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SYSTEM DISABLE ALL 73
STATEMENT ALTER SYSTEM SET ENABLE COMMON 73
STATEMENT ALTER SYSTEM KILL SESSION ENABLE ALL 73

You can ignore what is enabled for COMMON users because we have no common user to connect to our PDB. We will see below which ALTER SYSTEM SET clauses are allowed; in addition to those, only ALTER SYSTEM KILL SESSION is allowed.
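For example, killing a session uses the usual syntax (the SID and serial# here are made up):

SQL> alter system kill session '123,45678';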

Here is the detail about the parameters we can set:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='ENABLE';
 
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
APPROX_FOR_AGGREGATION STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace exact aggregation with approximate aggregation
APPROX_FOR_COUNT_DISTINCT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace count distinct with approx_count_distinct
APPROX_FOR_PERCENTILE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 none Replace percentile_* with approx_percentile
AWR_PDB_AUTOFLUSH_ENABLED STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE Enable/Disable AWR automatic PDB flushing
NLS_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICAN NLS language name
NLS_SORT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS linguistic definition name
NLS_TERRITORY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICA NLS territory name
NLS_CALENDAR STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS calendar system name
NLS_COMP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BINARY NLS comparison
NLS_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS local currency symbol
NLS_DATE_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DD-MON-YYYY HH24:MI:ss NLS Oracle date format
NLS_DATE_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS date language name
NLS_DUAL_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 Dual currency symbol
NLS_ISO_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS ISO currency territory name
NLS_LENGTH_SEMANTICS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BYTE create columns using byte or char semantics by default
NLS_NCHAR_CONV_EXCP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE NLS raise an exception instead of allowing implicit conversion
NLS_NUMERIC_CHARACTERS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS numeric characters
NLS_TIMESTAMP_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time stamp format
NLS_TIMESTAMP_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 timestamp with timezone format
NLS_TIME_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time format
NLS_TIME_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time with timezone format
OPTIMIZER_IGNORE_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables the embedded hints to be ignored
OPTIMIZER_IGNORE_PARALLEL_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables embedded parallel hints to be ignored
PLSCOPE_SETTINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 identifiers:all plscope_settings controls the compile time collection, cross reference, and storage of PL/SQL source code identifier and SQL statement data
PLSQL_CCFLAGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 PL/SQL ccflags
PLSQL_DEBUG STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE PL/SQL debug
PLSQL_OPTIMIZE_LEVEL STATEMENT ALTER SYSTEM SET ENABLE ALL 73 1 PL/SQL optimize level
PLSQL_WARNINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DISABLE:ALL PL/SQL compiler warnings settings

The APPROX_ ones, disabled by default, can be used to transparently use approximations for faster results.
The NLS_ ones can be used to set NLS defaults for our sessions.
The OPTIMIZER_IGNORE_ ones are new in 18c and are set by default here to ignore embedded hints. However, we can set them to false.
The PLSQL_ ones are the defaults for sessions, and I don’t understand why warnings are not enabled by default. Fortunately, we are able to change that at PDB level.
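For example, this is the statement I would run for that (the value is my choice, not a service default):

SQL> alter system set plsql_warnings='ENABLE:ALL';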

There are also some rules that disable certain ALTER SYSTEM SET options. These are relevant for common users only (who do have ALTER SYSTEM SET enabled), but they are interesting because they show what Oracle chose to set in the ADWC service and what cannot be changed in the PDB, even by their common users:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='DISABLE';
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
DB_FILES STATEMENT ALTER SYSTEM SET DISABLE ALL 73 25 max allowable # db files
"_PDB_INHERIT_CFD" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_AUDIT_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_DIAG_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
MAX_IDLE_TIME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 60 maximum session idle time in minutes
PARALLEL_DEGREE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 AUTO policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO/ADAPTIVE)
_PARALLEL_CLUSTER_CACHE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 ADAPTIVE policy used for parallel execution on cluster(ADAPTIVE/CACHED)
_ENABLE_PARALLEL_DML STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enables or disables parallel dml
RESULT_CACHE_MODE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE result cache operator usage mode
RESULT_CACHE_MAX_RESULT STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 maximum result size as percent of cache size
RESOURCE_MANAGER_PLAN STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE:DWCS_PLAN resource mgr top plan
_CELL_OFFLOAD_VECTOR_GROUPBY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE enable SQL processing offload of vector group by
PARALLEL_MIN_DEGREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 CPU controls the minimum DOP computed by Auto DOP
_MAX_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 Maximum I/O size in bytes for sequential file accesses
_LDR_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation
_LDR_IO_SIZE2 STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation of EHCC with HWMB
_OPTIMIZER_GATHER_STATS_ON_LOAD_ALL STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online statistics gathering for nonempty segments
_OPTIMIZER_GATHER_STATS_ON_LOAD_HIST STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online histogram gathering for loads
_DATAPUMP_GATHER_STATS_ON_LOAD STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Gather table statistics during Data Pump load rather than importing statistics from the dump file. This should be set to TRUE in the lockdown profile in a DWCS environment.
_OPTIMIZER_ANSWERING_QUERY_USING_STATS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable statistics-based query transformation
_PX_XTGRANULE_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 128000 default size of a external table granule (in KB)
_OPTIMIZER_ALLOW_ALL_ACCESS_PATHS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE allow all access paths
_DATAPUMP_INHERIT_SVCNAME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Inherit and propagate service name throughout job
_DEFAULT_PCT_FREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 Default value of PCT_FREE enforced by DWCS lockdown

So, among the interesting ones: Result Cache is forced for all results (RESULT_CACHE_MODE=FORCE), Parallel DML is enabled for all sessions (but we will see that we can disable it at session level), PCTFREE will always be 1 (_DEFAULT_PCT_FREE=1), and statistics are gathered during load (an 18c feature). And we cannot change any of that.
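For example, a session that does not want parallel DML can still opt out:

SQL> alter session disable parallel dml;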

There are only a few additional options allowed at session level (ALTER SESSION SET) on top of the parameters we saw above for ALTER SYSTEM SET:

SQL> select * from v$lockdown_rules where rule in ('ALTER SESSION') and clause is not null and clause_option is not null
and (clause_option,status,users) not in (select clause_option,status,users from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause is not null and clause_option is not null)
;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION SET CONTAINER ENABLE ALL 73
STATEMENT ALTER SESSION SET CURRENT_SCHEMA ENABLE ALL 73
STATEMENT ALTER SESSION SET EDITION ENABLE ALL 73
STATEMENT ALTER SESSION SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES ENABLE ALL 73
STATEMENT ALTER SESSION SET DEFAULT_COLLATION ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL ENABLE ALL 73
STATEMENT ALTER SESSION SET TIME_ZONE ENABLE ALL 73

Besides setting parameters, here is what we can do with ALTER SESSION:

SQL> select * from v$lockdown_rules where rule='ALTER SESSION' and clause_option is null;
 
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION DISABLE ALL 73
STATEMENT ALTER SESSION SET ENABLE COMMON 73
STATEMENT ALTER SESSION ADVISE COMMIT ENABLE ALL 73
STATEMENT ALTER SESSION CLOSE DATABASE LINK ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL QUERY ENABLE ALL 73
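So, for example, we can still force parallel execution at session level (the degree here is arbitrary):

SQL> alter session force parallel query parallel 4;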

I’ll show other rules (other than ALTER SYSTEM and ALTER SESSION statements) in a future post. Lockdown profiles are a great feature because they have very fine granularity and make it easy to document what is allowed or not. Oracle introduced them for their own usage in the public cloud. You can use the same on-premises for your private cloud. This requires the multitenant architecture, but the option is not mandatory.
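To give an idea, here is a minimal sketch of defining such a profile on-premises (the profile name and the single rule are only examples):

SQL> create lockdown profile my_dwcs_like;
SQL> alter lockdown profile my_dwcs_like disable statement=('ALTER SYSTEM') clause=('SET') option=('DB_FILES');
SQL> alter system set pdb_lockdown='MY_DWCS_LIKE';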

 

The post ADWC – System and session settings (DWCS lockdown profile) appeared first on Blog dbi services.

Missing Audit

Jonathan Lewis - Thu, 2018-05-24 04:27

Here’s a detail I discovered a long time ago – and rediscovered very recently: it’s possible to delete data from a table which is subject to audit without the delete being audited. I think the circumstances where it would matter are a little peculiar, and I’ve rarely seen systems that use the basic Oracle audit feature anyway, but this may solve a puzzle for someone, somewhere, someday.

The anomaly appears if you create a referential integrity constraint as “on delete cascade”. A delete from the parent table will automatically (pre-)delete matching rows from the child table but the delete on the child table will not be audited. Here’s a demonstration script – note that you will need to have set the parameter audit_trail to ‘DB’ to prove the point.


rem
rem     Script:         del_cascade_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2004
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem

drop table t2 purge;
drop table t1 purge;

create table t1 (
        id              number(6),
        v1              varchar2(10),
        padding         varchar2(100),
        constraint t1_pk 
                primary key (id)
);


create table t2 (
        id_par          number(6),
        id_ch           number(6),
        v1              varchar2(10),
        padding         varchar2(100),
        constraint t2_pk 
                primary key (id_par,id_ch),
        constraint t2_fk_t1 
                foreign key (id_par) references t1 
                on delete cascade
);


insert into t1
select
        rownum,
        rownum,
        rpad('x',100)
from
        all_objects
where
        rownum <= 100 -- > comment to avoid wordpress format issue
;


insert into t2
select
        1+trunc((rownum-1)/5),
        rownum,
        rownum,
        rpad('x',100)
from
        all_objects
where
        rownum <= 500 -- > comment to avoid wordpress format issue
;

commit;

prompt  =================================
prompt  Parent/Child rowcounts for id = 1
prompt  =================================

select count(*) from t1 where id = 1;
select count(*) from t2 where id_par = 1;

column now new_value m_now
select to_char(sysdate,'dd-mon-yyyy hh24:mi:ss') now from dual;

audit delete on t2 by access; 
audit delete on t1 by access; 

prompt  =======================================================
prompt  If you allow the cascade (keep the t2 delete commented)
prompt  then the cascade deletion is not audited.
prompt  =======================================================

-- delete from t2 where id_par = 1;
delete from t1 where id = 1;

noaudit delete on t1; 
noaudit delete on t2; 

column obj_name format a32

select  action_name, obj_name 
from    user_audit_trail
where   timestamp >= to_date('&m_now','dd-mon-yyyy hh24:mi:ss')
;

The script has an optional delete from the child table (t2) just before the delete from the parent table (t1). When you run the script you should see that before the delete t1 reports one row while t2 reports 5 rows. After the delete(s) both tables will report zero rows.

If you leave the t2 delete commented out then the delete from t2 will have been the recursive delete due to the cascade and the number of rows returned from user_audit_trail will be one (the delete from t1). If you allow the explicit delete from t2 to take place then user_audit_trail will report two rows, one each for t1 and t2.

Sample output (with a little cosmetic editing) – when the delete from t2 is commented out:

=================================
Parent/Child rowcounts for id = 1
=================================

  COUNT(*)
----------
         1


  COUNT(*)
----------
         5

Audit succeeded.
Audit succeeded.

=======================================================
If you allow the cascade (keep the t2 delete commented)
then the cascade deletion is not audited.
=======================================================

1 row deleted.


Noaudit succeeded.
Noaudit succeeded.


  COUNT(*)
----------
         0


  COUNT(*)
----------
         0


ACTION_NAME                  OBJ_NAME
---------------------------- --------------------------------
DELETE                       T1

1 row selected.

As you can see, I’ve deleted just one row from one table (the t1 delete), and the final query against t2 shows that the child rows have also been deleted, yet the only audit record reported is the one for the parent – despite the fact that if you enable sql_trace before the delete from t1 you will find the actual recursive statement 'delete from "TEST_USER"."T2" where "ID_PAR" = :1' in the trace file.

The “recursive” nature of the delete in the trace file might be a bit of a clue – it is reported as being operated by SYS (lid = 0), not by the real end-user, and the parsing_user_id and parsing_schema_id in v$sql are both SYS (i.e. 0). Checking dba_audit_trail and the audit_file_dest for audited SYS operations, though, there was still no sign of any audit record for the child delete.
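For reference, this is the kind of wrapper you could put around the parent delete to capture that recursive statement in a trace file (standard syntax, nothing specific to this demo):

alter session set sql_trace = true;
delete from t1 where id = 1;
alter session set sql_trace = false;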

 

EMEA Edge Conference 2018

Anthony Shorten - Wed, 2018-05-23 19:41

I will be attending the EMEA Oracle Utilities Edge Conference on 26-27 June 2018 in the Oracle London office. This year we are running an extended set of technical sessions around on-premise implementations and the Oracle Utilities Cloud Services. This forum is open to Oracle Utilities customers and Oracle Utilities partners.

The sessions mirror the technical sessions for the conference in the USA held earlier this year with the following topics:

Reducing Your Storage Costs Using Information Life-cycle Management: The increasing costs of maintaining storage and of satisfying business data retention rules can be challenging. Using the Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.

Integration using Inbound Web Services and REST with Oracle Utilities: Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.

Optimizing Your Implementation: Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost Of Ownership.

Testing Your On-Premise and Cloud Implementations: Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.

Securing Your Implementations: With the increase in cybersecurity and privacy concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.

Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option: The Oracle Database In-Memory option allows both OLTP and Analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.

Developing Extensions using Groovy: Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: This session will be very technical in nature.

Ask Us Anything Session: Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

Note: These sessions are not recorded or materials distributed outside this forum.

This year we have decided to not only discuss capabilities but also give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs for you to use as a template for on-premise and hybrid implementations.

See you there if you are attending.

If you wish to attend, contact your Oracle Utilities local sales representative for details of the forum and the registration process.

Running Code as SYS From Another User not SYSDBA

Pete Finnigan - Wed, 2018-05-23 13:06
I have been embroiled in a Twitter thread today about the post I made in this blog yesterday around granting privileges to a user and who should do the granting. Patrick today asked a further question: How do you make....[Read More]

Posted by Pete On 22/05/18 At 08:42 PM

Categories: Security Blogs

Who Should Grant Object Rights?

Pete Finnigan - Wed, 2018-05-23 13:06
Patrick Jolliffe posted a question via a tweet back in April but due to personal health pressures with a close relative of mine I have not had the time to deal with much over the last few months. I did....[Read More]

Posted by Pete On 21/05/18 At 07:08 PM

Categories: Security Blogs

Plaintiffs' Law Firms to Pay Oracle $270,000 to Settle Sanctions

Oracle Press Releases - Wed, 2018-05-23 12:43
Press Release
Plaintiffs' Law Firms to Pay Oracle $270,000 to Settle Sanctions

Redwood Shores, Calif.—May 23, 2018

Today, four plaintiffs’ law firms agreed to pay Oracle Corporation $270,000 to avoid Oracle’s motion for sanctions over their misconduct in a lawsuit related to Oracle’s acquisition of Micros Systems, Inc. in Case No. 13-C-14-099672, in the Circuit Court for Howard County, Maryland.

In 2014, Brower Piven, Robbins Arroyo LLC, Weisslaw LLC, and Pomerantz, LLP, along with other firms, sued Micros and its directors, claiming that Micros’s shareholders were not adequately informed about the transaction and that Oracle’s offer price was too low. The firms also sued Oracle for aiding and abetting the Micros board, despite the fact that Oracle simply engaged in arm’s-length negotiations to obtain the best price possible. The trial court dismissed all claims with prejudice against all defendants, including Oracle, and then courts at every level of the state system rejected five subsequent motions for reconsideration and appeals brought by these firms, affirming the trial judge’s finding that “with respect to Oracle, the Plaintiffs have failed to allege any acts, alleged acts, by agents or employees or Oracle that were impermissible under the law. The mere fact that Oracle pursued the merger is insufficient.” The outrageous litigation conduct drew the sanctions motion by Oracle, which the four firms identified above agreed to settle rather than defend.

“This substantial monetary settlement reflects the strength of our sanctions motion for what Oracle believes was clearly a ‘strike’ suit brought against Oracle. For more than a decade, plaintiffs’ lawyers have brought these suits, challenging legitimate public mergers, in order to line their own pockets at the expense of shareholders. We are grateful that the Maryland courts recognized that the claims had no merit, and we urge other public companies to challenge these baseless suits. Shareholders’ attorneys cannot simply claim and collect what is effectively an unwarranted tax on mergers,” said Dorian Daley, Oracle’s General Counsel.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, SCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Four Enhancements for Concurrent Processing in EBS 12.1 Now Available

Steven Chan - Wed, 2018-05-23 12:15

We have just released four new enhancements for Concurrent Processing in E-Business Suite 12.1.3:

  1. Storage Strategies for Log and Output File Locations: Create custom storage strategies for management of large numbers of concurrent processing log and output files. Customers can specify the strategy that best suits their particular needs. These strategies are called schemes.
  2. Output File Naming Strategies: The output file naming conventions are now based on USER.REQID and REQID.OUT. The old USER format is desupported.
  3. Timed Shutdown: Submit a normal, graceful shutdown and also specify a number of minutes after which an Abort command will be executed. If Concurrent Processing has not shut down after this number of minutes has passed, the graceful shutdown will be converted to an Abort, and all remaining Concurrent Processing processes will be aborted.
  4. 64-bit Java Support for the Output Post-Processor Service: A 64-bit Java Virtual Machine (JVM) is now supported for the Output Post Processor (OPP). This support allows for a larger heap size to be set, compared to the 2G heap size that 32-bit Java allows. This larger heap size will decrease out-of-memory errors. Note that the 64-bit JVM can be run for the OPP service only.

Details about these four new enhancements are published here:

You can download these enhancements here:


Categories: APPS Blogs

Users, schemas & privileges in #Exasol

The Oracle Instructor - Wed, 2018-05-23 09:18


In Exasol, a database user may own multiple schemas – or even none at all. I connect to my Community Edition to show that:

C:\Users\uh>cd \Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus

C:\Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus>exaplusx64 -c 192.168.56.101:8563 -u sys -p exasol -lang EN

EXAplus 6.0.8 (c) EXASOL AG

Wednesday, May 23, 2018 3:28:29 PM CEST
Connected to database EXAone as user sys.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> create user adam identified by adam;
EXA: create user adam identified by adam;

Rows affected: 0

SQL_EXA> grant dba to adam;
EXA: grant dba to adam;

Rows affected: 0

SQL_EXA> select user_name from exa_dba_users;
EXA: select user_name from exa_dba_users;

USER_NAME
------------------------------------------------------------
SYS
ADAM

2 rows in resultset.

SQL_EXA> select schema_owner,schema_name from exa_schemas;
EXA: select schema_owner,schema_name from exa_schemas;

SCHEMA_OWNER
-------------------------------------------------------------
SCHEMA_NAME
-------------------------------------------------------------
SYS
RETAIL

1 row in resultset.

SQL_EXA> connect adam/ADAM;

Wednesday, May 23, 2018 3:34:42 PM CEST
Connected to database EXAone as user adam.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> create table t1 (n number);
EXA: create table t1 (n number);
Error: [42000] no schema specified or opened or current schema has been dropped [line 1, column 27] (Session: 1601269589413551548)
SQL_EXA> open schema adam;
EXA: open schema adam;
Error: [42000] schema ADAM not found [line 1, column 13] (Session: 1601269589413551548)

Demo user adam has the DBA role granted but there is no adam schema yet. I need to create it first:

EXA: create schema adam;

Rows affected: 0

SQL_EXA> open schema adam;
EXA: open schema adam;

Rows affected: 0

SQL_EXA> create table t1 (n number);
EXA: create table t1 (n number);

Rows affected: 0

SQL_EXA> create schema adam2;
EXA: create schema adam2;

Rows affected: 0

SQL_EXA> create table adam2.t2 (n number);
EXA: create table adam2.t2 (n number);

Rows affected: 0

SQL_EXA> select table_schema,table_name from exa_user_tables;
EXA: select table_schema,table_name from exa_user_tables;

TABLE_SCHEMA
--------------------------------------------------------
TABLE_NAME
--------------------------------------------------------
ADAM
T1
ADAM2
T2

2 rows in resultset.

As you see, user adam now has two schemas with different tables in them. Now briefly to privileges:

SQL_EXA> create user fred identified by fred;
EXA: create user fred identified by fred;

Rows affected: 0

SQL_EXA> grant create session to fred;
EXA: grant create session to fred;

Rows affected: 0

SQL_EXA> grant select on adam.t1 to fred;
EXA: grant select on adam.t1 to fred;

Rows affected: 0

SQL_EXA> connect fred/FRED;

Wednesday, May 23, 2018 3:53:34 PM CEST
Connected to database EXAone as user fred.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> select * from adam.t1;
EXA: select * from adam.t1;

N
-----------------

0 rows in resultset.

SQL_EXA> select * from adam2.t2;
EXA: select * from adam2.t2;
Error: [42500] insufficient privileges: SELECT on table T2 (Session: 1601270776421928841)
SQL_EXA> connect adam/ADAM;

Wednesday, May 23, 2018 3:54:33 PM CEST
Connected to database EXAone as user adam.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> create role allonadam2;
EXA: create role allonadam2;

Rows affected: 0

SQL_EXA> grant all on adam2 to allonadam2;
EXA: grant all on adam2 to allonadam2;

Rows affected: 0

SQL_EXA> grant allonadam2 to fred;
EXA: grant allonadam2 to fred;

Rows affected: 0

SQL_EXA> connect fred/FRED;

Wednesday, May 23, 2018 3:55:54 PM CEST
Connected to database EXAone as user fred.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA> select * from adam2.t2;
EXA: select * from adam2.t2;

N
-----------------

0 rows in resultset.

SQL_EXA> drop table adam2.t2;
EXA: drop table adam2.t2;
Error: [42500] insufficient privileges for dropping table (Session: 1601270923042332982)

That’s because ALL contains ALTER, DELETE, EXECUTE, INSERT, SELECT and UPDATE, but not DROP, which can be confirmed using EXA_DBA_OBJ_PRIVS.
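A quick way to verify this, assuming the usual GRANTEE/OBJECT_NAME/PRIVILEGE columns in that system view:

SQL_EXA> select object_name, privilege from exa_dba_obj_privs where grantee='ALLONADAM2';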

Categories: DBA Blogs

GDPR or AVG - regain control Part 2: Your Own Mail

Frank van Bortel - Wed, 2018-05-23 09:09
Create your own mail server

Drop Yahoo, Google or Microsoft mail - they are reading your mail. Debian, Postfix, Dovecot, MariaDB, rspamd. This is the second (and last) part of setting up your own internet tools in order to gain back control. The goal is to set up an email server (receive and send), secure it, and filter spam.

Hardware considerations

I used an abandoned ASRock ION330, where I ...

Running Kafka, KSQL and the Confluent Open Source Platform 4.x using Docker Compose on a Windows machine

Amis Blog - Wed, 2018-05-23 06:39


For conducting some experiments and preparing several demonstrations I needed a locally running Kafka Cluster (of a recent release) in combination with a KSQL server instance. Additional components from the Core Kafka Project and the Confluent Open Source Platform (release 4.1) would be convenient to have. I needed everything to run on my Windows laptop.

This article describes how I could get what I needed using Vagrant and VirtualBox, Docker and Docker Compose and two declarative files. One is the vagrant file that defines the Ubuntu Virtual Machine that Vagrant spins up in collaboration with VirtualBox and that will contain Docker and Docker Compose. This file is discussed in more detail in this article: https://technology.amis.nl/2018/05/21/rapidly-spinning-up-a-vm-with-ubuntu-and-docker-on-my-windows-machine-using-vagrant-and-virtualbox/. The file itself can be found here, as GitHub Gist: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2.

The second file is the Docker Compose file – which can be found on GitHub as well: https://gist.github.com/lucasjellema/c06f8a790114396f11eadd10434d9b7e . Note: I received great help from Guido Schmutz from Trivadis for this file!

The Docker Compose file is shared into the VM when Vagrant boots up the VM, and is executed automatically by the Vagrant docker-compose provisioner.

Alternatively, you can ssh into the VM and execute it manually using these commands:

cd /vagrant

docker-compose up -d


Docker Compose will start all Docker Containers configured in this file, in the order determined by the dependencies between the containers. Note: the IP address in this file (192.168.188.102) should correspond with the IP address defined in the Vagrantfile. The two gists currently do not correspond, because the Vagrantfile defines 192.168.188.110 as the IP address for the VM.

Once Docker Compose has done its thing, all containers configured in the docker-compose.yml file will be running. The Kafka Broker is accessible at 192.168.188.102:9092, the Zoo Keeper at 192.168.188.102:2181 and the REST API at port 8084; the Kafka Connect UI at 8001, the Schema Registry UI at 8002 and the KSQL Server at port 8088. The Kafka Manager listens at port 9000.


To run the KSQL Command Line, use this command to execute the shell in the Docker container called ksql-server:

docker exec -it vagrant_ksql-server_1 /bin/bash

Then, inside that container, simply type

ksql

And for example list all topics:

list topics;
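From there you can start defining streams and querying them; here is a minimal sketch (the pageviews topic and its columns are hypothetical, not created by this setup):

CREATE STREAM pageviews (viewtime BIGINT, userid VARCHAR, pageid VARCHAR) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

SELECT pageid, COUNT(*) FROM pageviews GROUP BY pageid;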


Here follows the complete contents of the docker-compose.yml file (largely credited to Guido Schmutz):


version: '2'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:4.1.0"
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker-1:
    image: "confluentinc/cp-enterprise-kafka:4.1.0"
    hostname: broker-1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_BROKER_RACK: rack-a
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_HOST_NAME: 192.168.188.102
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://192.168.188.102:9092'
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: 'broker-1'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema_registry:
    image: "confluentinc/cp-schema-registry:4.1.0"
    hostname: schema_registry
    container_name: schema_registry
    depends_on:
      - zookeeper
      - broker-1
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema_registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
      SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
      SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'

  connect:
    image: confluentinc/cp-kafka-connect:3.3.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker-1
      - schema_registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker-1:9092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    volumes:
      - ./kafka-connect:/etc/kafka-connect/jars

  rest-proxy:
    image: confluentinc/cp-kafka-rest
    hostname: rest-proxy
    depends_on:
      - broker-1
      - schema_registry
    ports:
      - "8084:8084"
    environment:
      KAFKA_REST_ZOOKEEPER_CONNECT: '192.168.188.102:2181'
      KAFKA_REST_LISTENERS: 'http://0.0.0.0:8084'
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
      KAFKA_REST_HOST_NAME: 'rest-proxy'

  adminer:
    image: adminer
    ports:
      - 8080:8080

  db:
    image: mujz/pagila
    environment:
      - POSTGRES_PASSWORD=sample
      - POSTGRES_USER=sample
      - POSTGRES_DB=sample

  kafka-manager:
    image: trivadisbds/kafka-manager
    hostname: kafka-manager
    depends_on:
      - zookeeper
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: 'zookeeper:2181'
      APPLICATION_SECRET: 'letmein'   

  connect-ui:
    image: landoop/kafka-connect-ui
    container_name: connect-ui
    depends_on:
      - connect
    ports:
      - "8001:8000"
    environment:
      - "CONNECT_URL=http://connect:8083"

  schema-registry-ui:
    image: landoop/schema-registry-ui
    hostname: schema-registry-ui
    depends_on:
      - broker-1
      - schema_registry
    ports:
      - "8002:8000"
    environment:
      SCHEMAREGISTRY_URL: 'http://192.168.188.102:8081'

  ksql-server:
    image: "confluentinc/ksql-cli:4.1.0"
    hostname: ksql-server
    ports:
      - '8088:8088'
    depends_on:
      - broker-1
      - schema_registry
    # Note: The container's `run` script will perform the same readiness checks
    # for Kafka and Confluent Schema Registry, but that's ok because they complete fast.
    # The reason we check for readiness here is that we can insert a sleep time
    # for topic creation before we start the application.
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
                       cub kafka-ready -b 192.168.188.102:9092 1 20 && \
                       echo Waiting for Confluent Schema Registry to be ready... && \
                       cub sr-ready schema_registry 8081 20 && \
                       echo Waiting a few seconds for topic creation to finish... && \
                       sleep 2 && \
                       /usr/bin/ksql-server-start /etc/ksql/ksql-server.properties'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_OPTS: "-Dbootstrap.servers=192.168.188.102:9092 -Dksql.schema.registry.url=http://schema_registry:8081 -Dlisteners=http://0.0.0.0:8088"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"

    extra_hosts:
      - "moby:127.0.0.1"
      

Resources

Vagrant File: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2

Docker Compose file: https://gist.github.com/lucasjellema/c06f8a790114396f11eadd10434d9b7e

The post Running Kafka, KSQL and the Confluent Open Source Platform 4.x using Docker Compose on a Windows machine appeared first on AMIS Oracle and Java Blog.

GDPR or AVG - regain control Part 1: Your Own Cloud

Frank van Bortel - Wed, 2018-05-23 06:08
Create your own Cloud

Replace Google or Dropbox, and gain control over your own data. Encrypt it, protect it, share your data only with whom you want. ARM, Ubuntu and secured Nextcloud. This episode will be followed by an entry on email. For now, I settle for a relatively cheap ARM device (an ODroid XU4, to be precise), run it with Ubuntu, and install Nextcloud. The choice for Ubuntu has ...

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

OTN TechBlog - Wed, 2018-05-23 02:00

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine now comes with the capability to define build pipelines visually. Read more about it in my previous blog.

In this blog we will delve deeper into some of the functionality of the Build Pipeline feature of the new CI Engine in Oracle Developer Cloud.

Auto Start

Auto Start is an option given to the user while creating a build pipeline on Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new Pipeline, with a checkbox which needs to be checked to make the pipeline auto-start: when one of the build jobs in the pipeline is executed externally, that triggers the execution of the rest of the build jobs in the pipeline.

The screenshot below shows the pipeline for a NodeJS application, created with Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure.

Scenarios When Auto Start is enabled for the Pipeline:

Scenario 1:

If we run the build-microservice build job externally, it will lead to the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the execution of the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below, which shows the executed build jobs in green.

Scenario 2:

If we run the test-microservice build job externally, it will lead to the execution of the loadtest-microservice build job only. Please refer to the screenshot below, which shows the executed build jobs in green.

Scenario 3:

If we run the loadtest-microservice build job externally, it will lead to no other build job execution in the pipeline, across both build sequences.

Exclusive Build

This enables users to prevent the pipeline's jobs from being built externally while the build pipeline is executing. It is an option given to the user while creating a build pipeline on Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new Pipeline, with a checkbox which needs to be checked to ensure that build jobs in the pipeline cannot be built in parallel to the pipeline execution.

When you run the pipeline, you will see the build jobs queued for execution in the Build History. In this case two build jobs are queued, build-microservice and WiremockInstall, as they start the two parallel sequences of the same pipeline.

Now if you try to run any of the build jobs in the pipeline externally, for example test-microservice, you will get an error message, as shown in the screenshot below.

 

Pipeline Instances:

If you click the Build Pipeline name link in the Pipelines tab, you will be able to see the pipeline instances. A pipeline instance is a record of one execution of the pipeline.

The screenshot below shows the pipeline instances with the timestamp of each execution. Hovering on the status icon of a pipeline instance shows whether the pipeline was auto-started by an external execution of a build job, or a success status if all the build jobs of the pipeline were built successfully. For each pipeline instance, the build jobs that executed successfully are shown in green; the build jobs that did not get executed have a white background. You also get an option to cancel the pipeline while it is executing, and you may choose to delete the instance after the pipeline has run.

 

Conditional Build:

The visual build pipeline editor in Oracle Developer Cloud supports conditional builds. Double-click the link connecting two build jobs and select one of the conditions given below:

Successful: To proceed to the next build job in the sequence if the previous one was a success.

Failed: To proceed to the next build job in the sequence if the previous one failed.

Test Failed: To proceed to the next build job in the sequence if the test failed in the previous build job in the pipeline.

 

Fork and Join:

Scenario 1: Fork

In this scenario, three other build jobs depend on a single build job such as build-microservice: “DockerBuild”, which builds a deployable Docker image for the code; “terraformBuild”, which provisions the instance on Oracle Cloud Infrastructure and deploys the code artifact; and “ArtifactoryUpload”, which uploads the generated artifact to Artifactory. In that case you can fork the build jobs as shown below.

 

Scenario 2: Join

If you have a build job test-microservice which depends on two other build jobs (build-microservice, which builds and deploys the application, and WiremockConfigure, which configures the service stub), then you need to create a join in the pipeline as shown in the screenshot below.

 

You can refer to the Build Pipeline documentation here.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

ADF Performance Tuning: Manage Your Fetched Data

Amis Blog - Wed, 2018-05-23 01:41

In this blog I want to stress how important it is to manage the data that you fetch and load into your ADF application. I blogged on this subject earlier. It is still underestimated in my opinion. Recently I was involved in troubleshooting the performance in two different ADF projects. They had one thing in common: their servers became frequently unavailable, and they fetched far too many rows from the database. This will likely lead to memory over-consumption, ‘stop the world’ garbage collections that can run far too long, a much slower application, or in the worst case even servers that run into an OutOfMemoryError and become unavailable.

Developing a plan to manage and monitor fetched data during the whole lifetime of your ADF application is an absolute must. Keeping your sessions small is indispensable to your performance success. This blog shows a few examples of what can happen if you do not do that.

Normal JVM Heap and Garbage Collection

First, just for our reference, let’s have a look at how a normal, ‘healthy’ JVM heap and garbage collection should look (bottom left). The ADF Performance Monitor shows real-time or historic heap usage and garbage collection times. The heap space (purple) over time is like a saw-tooth shaped line – showing a healthy JVM. There are many small and just a few big garbage collections (pink). This is because there are basically two types of garbage collectors. The big garbage collections do not run longer than 5 seconds:

Read more on adfpm.com.

The post ADF Performance Tuning: Manage Your Fetched Data appeared first on AMIS Oracle and Java Blog.

Professional Tips For Building Mockups For Web And Mobile

Nilesh Jethwa - Tue, 2018-05-22 23:57

Given how incredibly easy it is to develop a web or mobile application, startups have become commonplace as a lot of players have already jumped on the bandwagon. You might be here because you want to learn how to compete against … Continue reading …

Original: MockupTiger Wireframes

InfoCaptor Reviews – Customer Feedback and Testimonials

Nilesh Jethwa - Tue, 2018-05-22 23:52

My organization was looking for a way to chart MySQL data on our website. One of the requirements was to use something that didn’t require extensive coding or multiple js files to run. InfoCaptor fit the bill. The programmer interface … Continue reading …

Credit: InfoCaptor Dashboard

InfoCaptor Stack bar label fix

Nilesh Jethwa - Tue, 2018-05-22 23:47

A quick patch was released to fix a problem in the display of data labels for the stacked bar chart. Please grab the new version.

By: InfoCaptor Dashboard

Wireframe Software – MockupTiger new version released

Nilesh Jethwa - Tue, 2018-05-22 23:42

A quick note that a new version of MockupTiger is now available. This release is for cloud access; the desktop version of the wireframe tool is coming very soon. Link: Wireframe software online and desktop. Key highlights of this mockup and … Continue reading …

Via: InfoCaptor Dashboard

How to edit connection on existing dashboard or visualization

Nilesh Jethwa - Tue, 2018-05-22 23:37

Scenario: You have built a dashboard on a development database connection and you need to migrate to a production server using the production database. How do you change the connection on the dashboard charts and visualizations? There are two ways to … Continue reading …

By: InfoCaptor Dashboard

Subscribe to Oracle FAQ aggregator