Feed aggregator

Database Password Change

Tom Kyte - Tue, 2018-10-02 15:26
Hello Experts, I am just curious about changing passwords via cmd sql for SYS user. I issued the below command: alter user SYS identified by new_password; commit; Then restart the instance and OS. When I tried to use the new password, it ...
Categories: DBA Blogs

Regarding Undo tablespace

Tom Kyte - Tue, 2018-10-02 15:26
Which background process writes the copy of "old image" from "db buffer cache" to undo -segment of undo tablespace ?
Categories: DBA Blogs

Oracle Offline Persistence Toolkit - Before Request Sync Listener

Andrejus Baranovski - Tue, 2018-10-02 15:09
One more post from me related to the Oracle Offline Persistence Toolkit. I have already described how the after request listener can be used to read response data after sync - Oracle Offline Persistence Toolkit - After Request Sync Listener. Today I will explain when the before request listener can be useful. Like the after request listener, it is defined during persistence manager registration:


The before request listener must return a promise, and we can control the resolved action. For example, if there is no need to update the request, we simply resolve with continue. We do need to update the request if the same row is updated multiple times during sync: the change indicator value must be updated in the request payload. We read the latest change indicator value from the array initialised in the after request listener, convert the request payload to JSON, update the value, and then construct a new request and resolve with replay. The API allows us to provide a new request that replaces the original:


Here is the use case. While offline, update a value:


While remaining offline, update the same value again:


When going back online, we can trace the requests executed during sync. The first request, initiated by the first change, uses change indicator value 292:


The second request uses the updated change indicator value 293:


Without the before and after request listener logic, the second request would execute with the same change indicator value as the first one, which would lead to a data conflict on the backend.

Sample application code is available on GitHub.

PDB Snapshot Carousel Oracle 18.3

Yann Neuhaus - Tue, 2018-10-02 09:09

A new feature in Oracle 18c is the PDB snapshot carousel. As indicated by its name, a PDB snapshot is a copy of a PDB at a specific point in time. You can create up to eight snapshots; when the maximum number of snapshots is reached, the oldest snapshot is overwritten. The snapshot carousel is simply the name for the set of all your PDB snapshots.

We can create automatic snapshots using the “snapshot mode every” clause when creating or altering a PDB. For example, you can change the snapshot mode of a PDB to every 3 hours:

SQL> alter session set container=pdb;

Session altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
MANUAL

SQL> alter pluggable database snapshot mode every 3 hours;

Pluggable database altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
AUTO			  3

To return to manual mode, just type:

SQL> alter pluggable database snapshot mode manual;

Pluggable database altered.

We can also create PDB snapshots manually, with or without giving them a specific name:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

SQL> alter pluggable database snapshot;

Pluggable database altered.

We can query the dba_pdb_snapshots view to display the location of the PDB snapshots:

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS ORDER BY SNAP_SCN;

CON_ID CON_NAME SNAPSHOT_NAME SNAP_SCN

FULL_SNAPSHOT_PATH

3        PDB	  PDB_SNAP    1155557
/home/oracle/oradata/DB18/pdb/snap_2263384607_1155557.pdb

3        PDB	  SNAP_2263384607_987432172  1155823
/home/oracle/oradata/DB18/pdb/snap_2263384607_1155823.pdb

If you want to drop a snapshot, you have two methods:

You can delete the snapshot with the following alter pluggable database statement:

SQL> alter pluggable database drop snapshot SNAP_2263384607_987432172;

Pluggable database altered.

Otherwise you set the MAX_PDB_SNAPSHOTS property to zero in the PDB.

You can query the CDB_PROPERTIES and CDB_PDBS views to display the current value of the parameter:

SELECT r.CON_ID, p.PDB_NAME, PROPERTY_NAME,
  	PROPERTY_VALUE AS value, DESCRIPTION
  	FROM   CDB_PROPERTIES r, CDB_PDBS p
  	WHERE  r.CON_ID = p.CON_ID
  	AND    PROPERTY_NAME LIKE 'MAX_PDB%'
  	AND    description like 'maximum%'
  	ORDER BY PROPERTY_NAME

CON_ID  PDB_NAME  PROPERTY_NAME       VALUE  DESCRIPTION
     3  PDB       MAX_PDB_SNAPSHOTS       8  maximum number of snapshots for a given PDB

And if you set it to zero, all your PDB snapshots will be dropped:

SQL> alter session set container=pdb;

Session altered.

SQL> alter pluggable database set max_pdb_snapshots = 0;

Pluggable database altered.

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS
ORDER BY SNAP_SCN;

no rows selected

But the main interest of PDB snapshots is to create new PDBs from a productive environment, based on a specific point in time of the production PDB.

So we create a PDB snapshot named PDB_SNAP:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

And now we create a PDB from the PDB_SNAP snapshot:

SQL> create pluggable database PDB2 from PDB using snapshot PDB_SNAP create_file_dest='/home/oracle/oradata/DB18/pdb2';

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB		                  READ WRITE NO
	 4 APPPSI			  READ WRITE NO
	 5 PDB2 			  READ WRITE NO

We also have the possibility to change the snapshot mode:

SQL> alter session set container=pdb;

Session altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
MANUAL


SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE EVERY 1 HOURS;

Pluggable database altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
AUTO		  1

We also have the possibility to create a PDB that takes snapshots every 15 minutes:

SQL> create pluggable database pdb_new from pdb
  2  file_name_convert=('pdb','pdb_new')
  3  snapshot mode every 15 minutes;

Pluggable database created.

There is one prerequisite for configuring automatic PDB snapshots: the CDB must be in local undo mode.
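
As a side note (not part of the original test), you can verify this prerequisite from the root container with a query like the one below; DATABASE_PROPERTIES exposes the LOCAL_UNDO_ENABLED property in 12.2 and later:

-- Run in CDB$ROOT: TRUE means the CDB uses local undo,
-- which is required before automatic PDB snapshots can be configured.
select property_name, property_value
from   database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';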

Finally, the snapshots are correctly created in my environment every 15 minutes:

oracle@localhost:/home/oracle/oradata/DB183/pdb/ [DB183] ls -lrt snap*
-rw-r--r--. 1 oracle dba 65690276 Oct  1 15:04 snap_3893567541_798493.pdb
-rw-r--r--. 1 oracle dba 65740202 Oct  1 15:19 snap_3893567541_801189.pdb
-rw-r--r--. 1 oracle dba 65823279 Oct  1 15:34 snap_3893567541_803706.pdb

And to verify that it works, I created a location table in the psi schema of my pdb_new environment, with two records inserted at 15:20:

SQL> create table psi.location (name varchar2(10));

Table created.

SQL> insert into psi.location values ('London');

1 row created.

SQL> insert into psi.location values('Paris');

1 row created.

SQL> commit;

And we create a new PDB from the snapshot to verify that the data is correct:

SQL> create pluggable database pdb_psi from pdb_new 
     using snapshot SNAP_45745043_988386045 
     create_file_dest='/home/oracle/oradata/DB183/pdb_psi';

Pluggable database created.

We open pdb_psi and we check:

SQL> alter session set container=pdb_psi;

Session altered.

SQL> select * from psi.location;

NAME
----------
London
Paris

This feature might be very useful for testing purposes. Imagine you have a production PDB: you only have to create a refreshable clone named PDB_MASTER and configure it to create daily snapshots. If you need a PDB for testing, you only have to create a clone from any snapshot.
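
As a rough sketch of that workflow (mine, not taken from my test environment: the database link prod_link, the file destinations, the PDB_TEST name and the intervals are assumptions for illustration):

-- Hypothetical sketch: keep a refreshable clone of the production PDB
-- and let it build up a daily snapshot carousel.
create pluggable database PDB_MASTER from PDB@prod_link
  refresh mode every 1440 minutes
  snapshot mode every 24 hours
  create_file_dest='/home/oracle/oradata/DB183/pdb_master';

-- When a test PDB is needed, clone it from any snapshot of the carousel
-- (the real snapshot name can be taken from DBA_PDB_SNAPSHOTS).
create pluggable database PDB_TEST from PDB_MASTER
  using snapshot SNAP_MASTER_EXAMPLE
  create_file_dest='/home/oracle/oradata/DB183/pdb_test';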

Conclusion

All these tests were performed on a Linux x86-64 server with Oracle 18.3 Enterprise Edition. My DB183 database has been initialized with the “_exadata_feature_on” hidden parameter to avoid the “ORA-12754 Feature PDB Snapshot Carousel is disabled due to missing capability” error message.

If you have a look at the Database Licensing User Manual:

https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-B6113390-9586-46D7-9008-DCC9EDA45AB4

Feature / Option / Pack   SE2  EE  EE-ES  DBCS SE  DBCS EE  DBCS EE-HP  DBCS EE-EP  ExaCS  Notes
PDB Snapshot Carousel     N    N   Y      N        Y        Y           Y           Y

 

You will see that the PDB Snapshot Carousel (and a lot of interesting new features in Oracle 18.3) is only available on Engineered Systems or in the Cloud, and not with Enterprise Edition on third-party hardware. I really hope Oracle will change this in future releases.

 

The article PDB Snapshot Carousel Oracle 18.3 appeared first on Blog dbi services.

Oracle Rolls Out Autonomous NoSQL Database Service

Oracle Press Releases - Tue, 2018-10-02 07:00
Press Release
Oracle Rolls Out Autonomous NoSQL Database Service
Oracle’s self-driving NoSQL database empowers developers to easily build modern applications with high availability and lower cost than Amazon DynamoDB

Redwood Shores, Calif.—Oct 2, 2018

Oracle today announced the availability of Oracle Autonomous NoSQL Database, the newest addition to the Oracle Autonomous Database portfolio. Designed for workloads needing fast and predictable responses to simple operations, the self-driving database service enables developers to easily deploy massive-scale applications, including UI personalization, shopping carts, online fraud detection, gaming, and advertising. Using innovative machine learning and automation capabilities, Oracle Autonomous NoSQL Database delivers a significantly more reliable NoSQL database with 99.95 percent availability and is up to 70 percent lower cost than Amazon DynamoDB1.

The fully managed Oracle Autonomous NoSQL Database handles NoSQL applications that require low latency, data model flexibility, and elastic scaling. Using simple APIs, developers can focus on application development without having to worry about managing servers, storage expansion, cluster deployments, software installation, or backup. Developers can simply specify the throughput and capacity that they wish to provision, and resources are allocated and scaled accordingly to meet dynamic workload requirements.

“We continue to leverage our revolutionary autonomous capabilities to transform the database market," said Andrew Mendelsohn, executive vice president, Oracle Database. "Our latest self-driving database cloud service, Oracle Autonomous NoSQL Database, provides extreme reliability and performance at very low costs to achieve a highly flexible application development framework.”

Oracle Autonomous Database offers exceptional performance while helping reduce risk and costs for enterprises. Running on Oracle Cloud Infrastructure, the autonomous database portfolio is self-driving, self-securing, and self-repairing to automate key management processes including patching, tuning and upgrading to keep critical infrastructure automatically running. Oracle Autonomous NoSQL joins Oracle Autonomous Transaction Processing and Oracle Autonomous Data Warehouse, which became available earlier this year. Each database cloud service is tuned to a specific workload.

Oracle Autonomous NoSQL Database provides a wealth of features to meet the needs of today’s developers:

  • Modern: A developer-oriented solution, Oracle Autonomous NoSQL Database is designed for flexibility. The database supports key value APIs including simple declarative SQL API and command line interfaces along with flexible data models for data representation including relational and ad-hoc JSON.

  • Open: The service provides a non-proprietary SQL language, delivering innovative interoperability between standard relational and standard JSON data models. Users also have deployment options to run the same application in the cloud or on-premises with no platform lock-in.

  • Easy: With an available SDK and support for popular languages including Python, Node.JS and Java, Oracle offers a no hassle application development solution to easily connect to Oracle Autonomous NoSQL Database.

1 Cost reference is for read-only workloads.

Contact Info
Dan Muñoz
Oracle
+1.650.506.2904
dan.munoz@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Dan Muñoz

  • +1.650.506.2904

Nicole Maloney

  • +1.650.506.0806

Find the min date from a list of date without using a sub select query

Tom Kyte - Mon, 2018-10-01 21:06
Hi, I am working on a project and I need some help with the query. I have the following data in my existing table: ----------------- S_ID S_DATE ----------------- A 01-FEB-12 A 14-MAR-12 A 28-APR-14 A 28-MAR...
Categories: DBA Blogs

Live Three-day Fluid Training Event in Seattle Dec 4

Jim Marion - Mon, 2018-10-01 18:55

Are you interested in learning PeopleTools Fluid? Have you already taken a Fluid training course, but still don't feel comfortable with Fluid? Please join us in beautiful downtown Seattle from December 4th through the 6th to learn all about PeopleTools Fluid. Our curriculum starts with Fluid navigation, works its way into Fluid page construction, and finishes with advanced topics such as site-specific CSS3, JavaScript, and event mapping. This course is packed with best practices and tips.

Through the material in this course you will become comfortable with Fluid and proficient with Fluid development. You will learn the skills necessary to apply PeopleSoft-specific CSS and how to write your own custom CSS. You will learn several shortcuts for converting existing custom Classic pages to Fluid.

With most of HCM Employee Self Service Classic set to retire on December 31st of this year (My Oracle Support document 1348959.1), there is no better time to learn Fluid. Space is limited and the early bird discount expires soon, so register now to ensure a seat in the best Fluid class available!

node-oracledb 3.0 Introduces SODA Document Storage

Christopher Jones - Mon, 2018-10-01 17:11


Release announcement: Node-oracledb 3.0.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: Simple Oracle Document Access (SODA) preview, Connection Pool draining, Call timeouts.

 

Node-oracledb 3 has been released, which is very exciting. This release adds support for some Oracle Database and Oracle Client 18c features.

As with earlier versions, node-oracledb 3 is usable with Oracle Client libraries 11.2 onwards. This allows it to connect to Oracle Database 9.2 or later, depending on the client version. But only when using Oracle Client 18.3 libraries and connected to Oracle Database 18.3 will you get the latest and greatest set of Oracle features, which we are pleased to bring you.

Here are the highlights of the node-oracledb 3.0 release:

  • Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See the section lower down for examples.

  • A new drainTime argument to pool.close() allows pools to be force-closed after a specified number of seconds. This feature was a contribution from Danilo Silva. Thanks Danilo!

    When a pool is closed with a given drainTime, any subsequent getConnection() calls will fail, but connections currently in use will continue working. This allows code to complete and outstanding transactions to be committed. When no connections are in use, or at the end of the drain time (whichever occurs first), the pool and all its connections will be forcibly closed.

    Although the words 'forcibly closed' seem harsh, the drain time actually allows more graceful shutdown of applications, so that users will get clear messages that the pool is closing (or has closed), and letting the database-side sessions be cleanly freed without waiting for them to timeout. The drainTime can also be zero, forcing the immediate close of all sessions - this is handy when you want to kill an app but be nice to the database.

  • Installation of the pre-built node-oracledb binaries got a bit easier with basic proxy authentication support. Also the 'npm config' proxy value is now used when installing if there are no proxy environment variables set. These changes were a contribution from Cemre Mengu. Thank you Cemre!

  • Added a connection.callTimeout property to interrupt long running database calls. This is available when node-oracledb 3 is using Oracle Client libraries version 18.1, or later, regardless of Oracle Database version.

    The call timeout is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution. For background, the main code layer beneath node-oracledb's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by node-oracledb, zero or more 'round-trips' to the database can occur - calling the database and getting a response back.

    The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in node-oracledb before or after the completion of each round-trip is not counted.

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and an error is returned.

    • In the case where a node-oracledb operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout.

    • If no round-trip is required, the operation will never be interrupted.

    When callTimeout is exceeded, node-oracledb attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds.

    If the cleanup was successful, a "DPI-1067: call timeout of N ms exceeded with ORA-XXX" error will be returned and the application can continue to use the connection.

    For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed.

  • On Windows, node-oracledb will now attempt to load the Oracle Client libraries from the 'node_modules\oracledb\build\Release' directory before doing the standard Windows library directory search i.e. of the PATH directories.

    This new feature could be useful if you are bundling up applications on Windows and want to include the Oracle Instant Client. By putting the client libraries in the 'node_modules\oracledb\build\Release' directory there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle Client libraries in PATH.

  • poolPingInterval functionality has been 're-enabled' when using the connection pool with Oracle Client libraries 12.2, or later. Previously it was deliberately not enabled with these versions, since an internal check in those Oracle clients is very efficient for seeing if the network has dropped out. However users were unhappy that the internal check does not identify connections that are unusable because they have exceeded database session resource limits (e.g. return ORA-02396 when used), or have been explicitly closed by DBAs (e.g. return ORA-00028). This is where poolPingInterval helps.

    This change can make your apps seem more highly available but there is a drawback: your apps may be silently reconnecting more than is optimal, and you might not be aware of connection storms if a large pool needs to be re-established. You should monitor AWR reports to see if connections are occurring too frequently, and then work with your network and DBA administrators to prevent idle sessions being killed.

These are just the highlights. For other changes and improvements see the CHANGELOG. But read on to hear more about SODA . . . .

Simple Oracle Document Access (SODA) in node-oracledb

Oracle Simple Document Access (SODA) is a set of NoSQL-style APIs that let you create and store collections of documents (in particular JSON) in Oracle Database, retrieve them, and query them, without needing to know SQL. SODA support is now available in node-oracledb 3 when using Oracle Database 18.3 and Oracle Client 18.3, or higher. SODA APIs are also available for Python, C, Java, PL/SQL and via REST, so it is widely accessible and bound to be a useful tool in your data toolkit. We are currently labelling node-oracledb 3 SODA APIs as a 'preview' but, with a future version of the Oracle Client libraries, this will change.

The class diagram of node-oracledb shows the separation of the relational and SODA worlds:

node-oracledb class overview diagram

In reality, SODA is backed by Oracle Database tables, providing a well known, secure, and efficient storage solution. You could access those tables via SQL but this would rarely be needed, perhaps for some advanced Oracle Database functionality such as analytics for reporting.

Instead you will almost certainly just use the new classes and methods. Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents (e.g JSON) in them. Some basic examples are:

// Create the parent object for SODA.
soda = await connection.getSodaDatabase();

// Create a new SODA collection, if it doesn't exist.
// This will open an existing collection, if the name is already in use.
collection = await soda.createCollection("mycollection");

// Insert a document.
// A system generated key is created by default.
content = {name: "Matilda", address: {city: "Melbourne"}};
doc = await collection.insertOneAndGet(content);
key = doc.key;
console.log("The key of the new SODA document is: ", key);

You can then get documents back via a key look up, or by a search. A key lookup is straightforward:

// Fetch the document back
doc = await collection.find().key(key).getOne(); // A SodaDocument
content = doc.getContent();                      // A JavaScript object
console.log('Retrieved SODA document as an object:');
console.log(content);

For documents that can be converted to JSON you can alternatively get them as a string:

content = doc.getContentAsString();              // A JSON string
console.log('Retrieved SODA document as a string:');
console.log(content);

The find() method is an operation builder, with methods that allow progressively limiting criteria to be set, reducing the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() or count(), amongst others.

With JSON documents, a complete filtering specification language can be used for query-by-example (QBE) to find documents. A brief example is:

// Find all documents with city names starting with 'S'
documents = await collection.find()
  .filter({"address.city": {"$like": "S%"}})
  .getDocuments();

for (let i = 0; i < documents.length; i++) {
  content = documents[i].getContent();
  console.log(' city is: ', content.address.city);
}

A runnable example is in soda1.js

Check out the node-oracledb SODA manual section and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity.

You don't have Oracle Database 18.3 yet? Get it from here. Or you may be interested in using JSON with older versions of Oracle Database.

Summary

We are continuing to introduce important features to node-oracledb to make your development experience better. We have a long wishlist and will continue our work. Contributions from the community are always welcome, and we thank the people who have contributed to this and previous releases of node-oracledb for helping to make it better.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub or Slack (link to join Slack).

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Follow us on Twitter or Facebook.

Using the Query Cache for good performance in #Exasol

The Oracle Instructor - Mon, 2018-10-01 08:52

The result of a query can be cached in Exasol, to the effect that repeated identical queries complete in no time. This feature was introduced in version 5 and is enabled by default.

SQL_EXA> select session_value,system_value  from exa_parameters where parameter_name='QUERY_CACHE';
EXA: select session_value,system_value  from exa_parameters where parameter...

SESSION_VALUE        SYSTEM_VALUE
-------------------- --------------------
ON                   ON

1 row in resultset.

The Query Cache can be (de-)activated on the session level as well as on the system level.
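
For the system-wide setting the corresponding ALTER SYSTEM statement can be used; a small sketch, not part of the demo session below:

-- Switch the Query Cache off (or back on) for the whole system;
-- a session that sets its own query_cache value (as below) overrides this.
alter system set query_cache='off';
alter system set query_cache='on';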

SQL_EXA> alter session set query_cache='off';
EXA: alter session set query_cache='off';

Rows affected: 0
SQL_EXA> select object_name,mem_object_size/1024/1024 as mb from exa_user_object_sizes where object_name='T';
EXA: select object_name,mem_object_size/1024/1024 as mb from exa_user_objec...

OBJECT_NAME          MB
-------------------- ----------------------------------
T                             1537.49641990661621093750

1 row in resultset.

SQL_EXA> select count(*) from t;
EXA: select count(*) from t;

COUNT(*)
---------------------
            320000000

1 row in resultset.
SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 4
Elapsed: 00:00:03.022

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 5
Elapsed: 00:00:02.620

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 6
Elapsed: 00:00:02.724

Without using the Query Cache the repeated query takes roughly 3 seconds.

SQL_EXA> alter session set query_cache='on';
EXA: alter session set query_cache='on';

Rows affected: 0


Timing element: 7
Elapsed: 00:00:00.008

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 8
Elapsed: 00:00:00.009

Lightning fast! If statement profiling is enabled, QUERY CACHE RESULT shows as PART_NAME in tables like EXA_USER_PROFILE_LAST_DAY.
Also EXECUTION_MODE from EXA_SQL_LAST_DAY shows the usage of the Query Cache:

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
                   35

1 row in resultset.

SQL_EXA> show autocommit;
AUTOCOMMIT = "ON"
SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=33 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         0.005                     1 CACHED

1 row in resultset.
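
As a side note, when statement profiling is enabled the cache hit also shows up as a profile part, as mentioned above. A hypothetical check could look like the following sketch (PART_NAME is mentioned in the text above, the other column names are my assumption, and 33 is the statement id of the cached SELECT in this session):

-- Assumes profiling was switched on beforehand with: alter session set profile='ON';
select part_id, part_name
from   exa_user_profile_last_day
where  session_id = current_session
and    stmt_id = 33;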

If DML changes the table, the result in the Query Cache is invalidated automatically:

SQL_EXA> update t set numcol2=1 where rowid in (select rowid from t limit 1);
EXA: update t set numcol2=1 where rowid in (select rowid from t limit 1);
Rows affected: 1

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.

Timing element: 10
Elapsed: 00:00:02.870

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
                   51

1 row in resultset.

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------

0 rows in resultset.

There’s a one-minute interval for syncing the statistics tables. It can be triggered manually, though:

SQL_EXA> flush statistics;
EXA: flush statistics;

Rows affected: 0

SQL_EXA> commit;
EXA: commit;

Rows affected: 0

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         2.862                     1 EXECUTE

1 row in resultset.

The runtime and the EXECUTION_MODE EXECUTE confirm that the Query Cache was invalidated by the UPDATE above. Now it’s automatically refreshed:

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 11
Elapsed: 00:00:00.010
Categories: DBA Blogs

Oracle Food and Beverage Enables Continued Innovation with Omnivore Integration

Oracle Press Releases - Mon, 2018-10-01 07:00
Press Release
Oracle Food and Beverage Enables Continued Innovation with Omnivore Integration
Easy Integrations Allow Restaurant Operators to Extend Value of Point-of-Sale Investment with Third Party Solutions

Redwood Shores, Calif.—Oct 1, 2018

Oracle Food and Beverage has collaborated with Omnivore, a universal point of sale (POS) connection for restaurant app development and a Gold level member of Oracle PartnerNetwork (OPN), to encourage restaurant operators to develop differentiated customer experiences and further streamline operations with their Oracle Food and Beverage investment. With this Omnivore connection, restaurant operators can easily take advantage of an ecosystem of third party solutions through a single point of integration into the Oracle Food and Beverage Simphony restaurant management platform.   

“Guests are demanding engaging and personalized experiences from dining establishments and we believe continuous implementation of technology is essential to helping restaurant operators keep pace,” said Chris Adams, vice president of strategy, Oracle Food and Beverage. “Through our collaboration with Omnivore, Oracle Food and Beverage is accelerating innovation in the restaurant community by making it easier than ever for operators to take advantage of the latest third party solutions.”

Omnivore connects restaurant point of sale to over 150 restaurant technologies, aligning brands with consumer, staff and support center needs in order to grow transactions while minimizing expenses.

"Together, Omnivore and Oracle Food and Beverage bring merchants a true open, best-of-breed POS technology platform, giving restaurants access to the best available technologies to meet even the most creative brands' digital needs," said Mike Wior, CEO, Omnivore.  “We’re proud to help Oracle customers navigate innovative third party solutions within a rapidly evolving digital landscape.”

Together Oracle and Omnivore are enabling operators to create a unique technology footprint that will enable restaurants to establish a competitive differentiation that reflects their guest experience. By creating a single source of integration into Oracle Food and Beverage restaurant management software and POS hardware, operators can leverage Omnivore to further streamline operations and quickly deploy new technologies faster than ever.

Contact Info
Matt Torres
Oracle
415.595.1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

About Omnivore

Revolutionizing the worlds of hospitality and retail, Omnivore connects a retailer’s point-of-sale (POS) with new technologies, driving engagement through the full lifecycle of the consumer experience. Omnivore’s cloud-based platform enables a single integration through their API, seamlessly facilitating the connections to POS systems now and into the future. The platform helps restaurants discover apps for payment, reservations, delivery, loyalty, analytics and more to connect with millions of consumers around the world. Omnivore enables access to real-time, quality consumer level point of purchase intelligence. Located in the San Francisco Bay area, Omnivore is a privately held company. Visit omnivore.io.

Talk to a Press Contact

Matt Torres

  • 415.595.1584

Covert Column to Rows Dynamically

Tom Kyte - Mon, 2018-10-01 02:46
Hi, I have situation when I am trying to convert rows into columns(Dynamically). Here is the scenario - 1. I have a function(userdefined) which takes in a SQL query as input and returns the result as a table type. 2. The result is ...
Categories: DBA Blogs

Insufficient privilege to access object SYS.DBMS_LOCK

Tom Kyte - Mon, 2018-10-01 02:46
Dear Tom, If we are using the dbms_lock package in anonymous block it is working fine.But it is not working inside the procedure. We have gone through the following link asktom.oracle.com/~tkyte/Misc/RolesAndProcedures.html and even tried with '...
Categories: DBA Blogs

Alter Table

Tom Kyte - Mon, 2018-10-01 02:46
Hi Tom , i want ALTER TABLE by adding columns to it , if the columns already exist int the table then it doesn't add , else adds the columns . how i can do that ? Regards, Adil
Categories: DBA Blogs

Lizok's Bookshelf

Greg Pavlik - Sun, 2018-09-30 17:34
The first of Eugene Vodolazkin's novels translated to English was, of course, Laurus, which ranks as one of the significant literary works of the current century. I was impressed by the translator's ability to convey not just a feel for what I presume the original has, but a kind of "other-time-yet-our-timeness" that seems an essential part of the author's objective. I recently picked up Vodolazkin's Aviator and thought to look up the translator as well. I was delighted to find her blog on modern Russian literature, which can be found here:

http://lizoksbooks.blogspot.com/2018/09/the-2018-nose-award-longlist.html

Case Study

Jonathan Lewis - Sun, 2018-09-30 13:59

A question about reading execution plans and optimising queries arrived on the ODC database forum a little while ago; the owner says the following statement is taking 14 minutes to return 30,000 rows and wants some help understanding why.

If you look at the original posting you’ll see that we’ve been given the text of the query and the execution plan including rowsource execution stats. There’s an inconsistency between the supplied information and the question asked, and I’ll get back to that shortly, but to keep this note fairly short I’ve excluded the 2nd half of the query (which is a UNION ALL) because the plan says the first part of the query took 13 minutes and 20 seconds and the user is worried about a total of 14 minutes.

SELECT /*+ gather_plan_statistics*/ DISTINCT
                rct.org_id,
                hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1 order_number,
                rct.customer_trx_id,
                rct.trx_number,
                rct.trx_date,
                rctd.gl_date,
                rct.creation_date,
                rctl.line_number,
                rct.invoice_currency_code inv_currency,
                (
                       SELECT SUM (rct_1.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_1
                       WHERE  rct_1.customer_trx_id = rct.customer_trx_id
                       AND    rct_1.line_type = 'LINE') inv_net_amount,
                (
                       SELECT SUM (rct_2.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_2
                       WHERE  rct_2.customer_trx_id = rct.customer_trx_id
                       AND    rct_2.line_type = 'TAX') inv_tax_amount,
                (
                       SELECT SUM (rct_3.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_3
                       WHERE  rct_3.customer_trx_id = rct.customer_trx_id) inv_gross_amount,
                gll.currency_code                                    func_currency,
                Round((
                        (
                        SELECT SUM (rct_4.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_4
                        WHERE  rct_4.customer_trx_id = rct.customer_trx_id
                        AND    rct_4.line_type = 'LINE')*gdr.conversion_rate),2) func_net_amount,
                Round((
                        (
                        SELECT SUM (rct_5.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_5
                        WHERE  rct_5.customer_trx_id = rct.customer_trx_id
                        AND    rct_5.line_type = 'TAX')*gdr.conversion_rate),2) func_tax_amount,
                Round((
                        (
                        SELECT SUM (rct_6.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_6
                        WHERE  rct_6.customer_trx_id = rct.customer_trx_id)*gdr.conversion_rate),2) func_gross_amount,
                glcc.segment1                                                                 company,
                glcc.segment2                                                                 account,
                hg.geography_name                                                             billing_country,
                gdr.conversion_rate
FROM            apps.hz_parties hzp,
                apps.hz_cust_accounts hca,
                apps.ra_customer_trx_all rct,
                apps.ra_customer_trx_lines_all rctl,
                apps.ra_cust_trx_line_gl_dist_all rctd,
                apps.gl_code_combinations_kfv glcc,
                apps.hz_cust_site_uses_all hcsua,
                apps.hz_cust_acct_sites_all hcasa,
                apps.hz_party_sites hps,
                apps.hz_locations hl,
                apps.hz_geographies hg,
                apps.gl_ledgers gll,
                apps.gl_daily_rates gdr
WHERE           hzp.party_id = hca.party_id
AND             hca.cust_account_id = rct.bill_to_customer_id
AND             hca.cust_account_id = hcasa.cust_account_id
AND             rct.customer_trx_id = rctl.customer_trx_id
AND             rctl.customer_trx_line_id = rctd.customer_trx_line_id
AND             glcc.code_combination_id = rctd.code_combination_id
AND             rct.bill_to_site_use_id = hcsua.site_use_id
AND             hcsua.cust_acct_site_id = hcasa.cust_acct_site_id
AND             hcasa.party_site_id = hps.party_site_id
AND             hps.location_id = hl.location_id
AND             hl.country = hg.country_code
AND             hg.geography_type = 'COUNTRY'
AND             rctl.line_type = 'TAX'
AND             gll.ledger_id = rct.set_of_books_id
AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             gdr.conversion_type = 'Corporate'
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')
AND             glcc.segment1 = '2600'
AND             glcc.segment2 = '206911'
GROUP BY        hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1,
                rct.trx_number,
                rct.trx_date,
                rct.creation_date,
                rctl.line_number,
                rctl.unit_selling_price,
                rct.org_id,
                rctd.gl_date,
                rct.customer_trx_id,
                glcc.segment1,
                glcc.segment2,
                hg.geography_name,
                rct.invoice_currency_code,
                gll.currency_code,
                gdr.conversion_rate 

We note that there are six scalar subqueries in the text I’ve reported – and they form two groups of three, and the difference between the two groups is that one group is multiplied by a conversion rate while the other isn’t; moreover in each group the three subqueries are simply querying subsets of the same correlated data set. So it looks as if all 6 scalar subqueries could be eliminated and replaced by the inclusion of an aggregate view in the from clause and the projection of 6 columns from that view.
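
To make that suggestion a little more concrete, here is a sketch (mine, not part of the original posting) of the aggregate view that could replace all six scalar subqueries: it would be joined, probably outer-joined, to the rest of the query on customer_trx_id, and the three “func_” columns would come from multiplying these sums by gdr.conversion_rate:

-- Sketch only: one pass over ra_customer_trx_lines_all producing the
-- three sums that the six scalar subqueries currently compute row by row.
select customer_trx_id,
       sum(case when line_type = 'LINE' then extended_amount end) as inv_net_amount,
       sum(case when line_type = 'TAX'  then extended_amount end) as inv_tax_amount,
       sum(extended_amount)                                       as inv_gross_amount
from   apps.ra_customer_trx_lines_all
group by customer_trx_id;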

However, before pursuing that option, take a look at the plan with the rowsource execution stats – where is the time going?


-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                                           |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   1 |  UNION-ALL                                                 |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   2 |   HASH UNIQUE                                              |                              |      1 |      1 |    501 |00:13:20.17 |    3579K|  
|   3 |    HASH GROUP BY                                           |                              |      1 |      1 |  19827 |00:13:20.15 |    3579K|  
|   4 |     NESTED LOOPS                                           |                              |      1 |        |  21808 |00:13:10.26 |    3579K|  
|   5 |      NESTED LOOPS                                          |                              |      1 |      1 |  21808 |00:13:10.11 |    3578K|  
|   6 |       NESTED LOOPS OUTER                                   |                              |      1 |      1 |  21808 |00:13:09.90 |    3576K|  
|   7 |        NESTED LOOPS OUTER                                  |                              |      1 |      1 |  21808 |00:13:09.25 |    3501K|  
|   8 |         NESTED LOOPS OUTER                                 |                              |      1 |      1 |  21808 |00:13:08.48 |    3426K|  
|   9 |          NESTED LOOPS OUTER                                |                              |      1 |      1 |  21808 |00:13:07.66 |    3333K|  
|  10 |           NESTED LOOPS OUTER                               |                              |      1 |      1 |  21808 |00:13:06.92 |    3258K|  
|  11 |            NESTED LOOPS OUTER                              |                              |      1 |      1 |  21808 |00:13:06.08 |    3183K|  
|  12 |             NESTED LOOPS                                   |                              |      1 |      1 |  21808 |00:13:04.69 |    3090K|  
|  13 |              NESTED LOOPS                                  |                              |      1 |      1 |  21808 |00:13:05.75 |    3026K|  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  16 |                 NESTED LOOPS                               |                              |      1 |    351 |  33459 |00:00:03.67 |    1025K|  
|  17 |                  NESTED LOOPS                              |                              |      1 |    351 |  33459 |00:00:03.06 |     926K|  
|  18 |                   NESTED LOOPS                             |                              |      1 |    351 |  33459 |00:00:02.47 |     827K|  
|* 19 |                    HASH JOIN                               |                              |      1 |    351 |  33459 |00:00:01.90 |     730K|  
|  20 |                     TABLE ACCESS FULL                      | GL_LEDGERS                   |      1 |     38 |     39 |00:00:00.01 |      15 |  
|  21 |                     NESTED LOOPS                           |                              |      1 |        |  33459 |00:00:01.75 |     730K|  
|  22 |                      NESTED LOOPS                          |                              |      1 |    351 |  33459 |00:00:01.44 |     696K|  
|  23 |                       NESTED LOOPS                         |                              |      1 |    351 |  33459 |00:00:01.11 |     646K|  
|* 24 |                        HASH JOIN                           |                              |      1 |    385 |  33459 |00:00:00.40 |     526K|  
|* 25 |                         TABLE ACCESS BY INDEX ROWID BATCHED| GL_CODE_COMBINATIONS         |      1 |     35 |      1 |00:00:00.01 |     108 |  
|* 26 |                          INDEX RANGE SCAN                  | GL_CODE_COMBINATIONS_N2      |      1 |    499 |     77 |00:00:00.01 |       3 |  
|* 27 |                         TABLE ACCESS BY INDEX ROWID BATCHED| RA_CUST_TRX_LINE_GL_DIST_ALL |      1 |    651K|   1458K|00:00:02.22 |     526K|  
|* 28 |                          INDEX RANGE SCAN                  | RA_CUST_TRX_LINE_GL_DIST_N2  |      1 |    728K|   1820K|00:00:01.60 |   11147 |  
|* 29 |                        TABLE ACCESS BY INDEX ROWID         | RA_CUSTOMER_TRX_LINES_ALL    |  33459 |      1 |  33459 |00:00:00.53 |     119K|  
|* 30 |                         INDEX UNIQUE SCAN                  | RA_CUSTOMER_TRX_LINES_U1     |  33459 |      1 |  33459 |00:00:00.31 |   86364 |  
|* 31 |                       INDEX UNIQUE SCAN                    | RA_CUSTOMER_TRX_U1           |  33459 |      1 |  33459 |00:00:00.21 |   49850 |  
|  32 |                      TABLE ACCESS BY INDEX ROWID           | RA_CUSTOMER_TRX_ALL          |  33459 |      1 |  33459 |00:00:00.20 |   33459 |  
|  33 |                    TABLE ACCESS BY INDEX ROWID             | HZ_CUST_ACCOUNTS             |  33459 |      1 |  33459 |00:00:00.42 |   97887 |  
|* 34 |                     INDEX UNIQUE SCAN                      | HZ_CUST_ACCOUNTS_U1          |  33459 |      1 |  33459 |00:00:00.24 |   64428 |  
|  35 |                   TABLE ACCESS BY INDEX ROWID              | HZ_PARTIES                   |  33459 |      1 |  33459 |00:00:00.44 |   98783 |  
|* 36 |                    INDEX UNIQUE SCAN                       | HZ_PARTIES_U1                |  33459 |      1 |  33459 |00:00:00.26 |   65175 |  
|  37 |                  TABLE ACCESS BY INDEX ROWID               | HZ_CUST_SITE_USES_ALL        |  33459 |      1 |  33459 |00:00:00.46 |   98374 |  
|* 38 |                   INDEX UNIQUE SCAN                        | HZ_CUST_SITE_USES_U1         |  33459 |      1 |  33459 |00:00:00.28 |   64915 |  
|* 39 |                 TABLE ACCESS BY INDEX ROWID                | HZ_CUST_ACCT_SITES_ALL       |  33459 |      1 |  33459 |00:00:00.45 |   98195 |  
|* 40 |                  INDEX UNIQUE SCAN                         | HZ_CUST_ACCT_SITES_U1        |  33459 |      1 |  33459 |00:00:00.26 |   64736 |  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
|  43 |               TABLE ACCESS BY INDEX ROWID                  | HZ_PARTY_SITES               |  21808 |      1 |  21808 |00:00:00.35 |   64339 |  
|* 44 |                INDEX UNIQUE SCAN                           | HZ_PARTY_SITES_U1            |  21808 |      1 |  21808 |00:00:00.23 |   42531 |  
|  45 |              TABLE ACCESS BY INDEX ROWID                   | HZ_LOCATIONS                 |  21808 |      1 |  21808 |00:00:00.33 |   64353 |  
|* 46 |               INDEX UNIQUE SCAN                            | HZ_LOCATIONS_U1              |  21808 |      1 |  21808 |00:00:00.18 |   42545 |  
|  47 |             VIEW PUSHED PREDICATE                          | VW_SSQ_1                     |  21808 |      1 |  21808 |00:00:01.17 |   93476 |  
|  48 |              SORT GROUP BY                                 |                              |  21808 |      1 |  21808 |00:00:01.06 |   93476 |  
|  49 |               TABLE ACCESS BY INDEX ROWID BATCHED          | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.84 |   93476 |  
|* 50 |                INDEX RANGE SCAN                            | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.36 |   59938 |  
|  51 |            VIEW PUSHED PREDICATE                           | VW_SSQ_2                     |  21808 |      1 |  21808 |00:00:00.69 |   74433 |  
|  52 |             SORT GROUP BY                                  |                              |  21808 |      1 |  21808 |00:00:00.59 |   74433 |  
|  53 |              TABLE ACCESS BY INDEX ROWID BATCHED           | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.49 |   74433 |  
|* 54 |               INDEX RANGE SCAN                             | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.24 |   59903 |  
|  55 |           VIEW PUSHED PREDICATE                            | VW_SSQ_3                     |  21808 |      1 |  21808 |00:00:00.61 |   74852 |  
|  56 |            SORT GROUP BY                                   |                              |  21808 |      1 |  21808 |00:00:00.51 |   74852 |  
|  57 |             TABLE ACCESS BY INDEX ROWID BATCHED            | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.38 |   74852 |  
|* 58 |              INDEX RANGE SCAN                              | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.19 |   59148 |  
|  59 |          VIEW PUSHED PREDICATE                             | VW_SSQ_4                     |  21808 |      1 |  21808 |00:00:00.70 |   93490 |  
|  60 |           SORT GROUP BY                                    |                              |  21808 |      1 |  21808 |00:00:00.61 |   93490 |  
|  61 |            TABLE ACCESS BY INDEX ROWID BATCHED             | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.63 |   93490 |  
|* 62 |             INDEX RANGE SCAN                               | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.25 |   59950 |  
|  63 |         VIEW PUSHED PREDICATE                              | VW_SSQ_5                     |  21808 |      1 |  21808 |00:00:00.63 |   74427 |  
|  64 |          SORT GROUP BY                                     |                              |  21808 |      1 |  21808 |00:00:00.54 |   74427 |  
|  65 |           TABLE ACCESS BY INDEX ROWID BATCHED              | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.44 |   74427 |  
|* 66 |            INDEX RANGE SCAN                                | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.21 |   59900 |  
|  67 |        VIEW PUSHED PREDICATE                               | VW_SSQ_6                     |  21808 |      1 |  21808 |00:00:00.59 |   74846 |  
|  68 |         SORT GROUP BY                                      |                              |  21808 |      1 |  21808 |00:00:00.50 |   74846 |  
|  69 |          TABLE ACCESS BY INDEX ROWID BATCHED               | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.35 |   74846 |  
|* 70 |           INDEX RANGE SCAN                                 | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.17 |   59144 |  
|* 71 |       INDEX RANGE SCAN                                     | HZ_GEOGRAPHIES_N11           |  21808 |   5812 |  21808 |00:00:00.13 |    2684 |  
|  72 |      TABLE ACCESS BY INDEX ROWID                           | HZ_GEOGRAPHIES               |  21808 |    168 |  21808 |00:00:00.07 |     620 |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

Let’s start by raising some concerns about the quality of information available.

First, the OP says it takes 14 minutes to return 30,000 rows: but the top line of the plan says it has taken 13 minutes and 20 seconds to return the first 501 rows, and if we look a little further down the plan operation 3 (Hash Group By) reports 00:13:20.15 to aggregate down to 19,827 rows. So this half of the plan cannot return more than 19,827 rows, and the half I have discarded (for the moment) must be returning the other 10,000+ rows. The information we have is incomplete.

Of course you may think that whatever the rest of the plan does is fairly irrelevant – it’s only going to be responsible for at most another 40 seconds of processing – except my previous experience of rowsource execution statistics tells me that when you do a large number of small operations the times reported can be subject to fairly large rounding errors and that enabling the measurement can increase the execution time by a factor of three or four. It’s perfectly feasible that this half of the query is actually the faster half under normal run-time circumstances but runs much more slowly (with a much higher level of CPU utilisation) when rowsource execution stats is in enabled. So let’s not get too confident.

With that warning in mind, what can we see in this half of the plan?

Big picture: the inline scalar subqueries have disappeared. In 12c the optimizer can unnest scalar subqueries in the select list and turn them into outer joins, and we can see that there are 6 “Nested Loop Outer” operations, corresponding to 6 “View Pushed Predicate” operations against views labelled VW_SSQ1 through to VW_SSQ6 (SSQ = Scalar Sub Query ?). This goes back to my earlier comment – a person could probably rewrite the 6 scalar subqueries as a single aggregate view in the from clause: the optimizer isn’t quite clever enough to manage that in this case, but in simpler cases it might be able to do exactly that.

Big picture 2: most of the 13 minutes 20 seconds appears at operation 14 as it processes the 33,459 rows supplied to it from the 4.33 seconds of work done by operation 15 and its descendants. Reducing this part of the execution plan to the smallest relevant section we get the following:

-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

For each row supplied by operation 15 Oracle calls operation 41, which calls operation 42 to do an index range scan to supply a set of rowids so that operation 41 can access a table and return rows. Apparently the total time spent by operation 41 waiting for operation 42 to return rowids and then doing its own work is 12 minutes 44 seconds, while the range scans alone (all 33,459 of them) take 13 minutes and 8 seconds. Remember, though, that “lots of small operations = scope of rounding errors” when you look at these timings. Despite the inconsistency between the timings for operations 41 and 42 it’s reasonable to conclude that between them that’s where most of the execution time went.

Two questions – (a) can we refine our analysis of how the time is split between the two operations and (b) why do these lines take so much time.

Check the Starts and the A-rows: (reminder: for comparison, we expect A-rows to be approximately E-rows * Starts) for both operations we see 33,459 starts and 21,808 rows. The index range scans return (on average) a single rowid about two-thirds of the time, and every time a range scan returns a rowid the corresponding row is returned from the table (If you check the Id column there’s no asterisk on operation 41 so no extra predicate is applied as Oracle accesses the table row – but even if there were an extra predicate we’d still be happy to infer that if 21,808 rowids returned from operation 42 turned into 21,808 rows returned from the table then there are no wasted accesses to the table).

Now look at the Buffers for the index range scan – 1.837M: that’s roughly 56 buffers per range scan – that’s a lot of index to range through to find one rowid, which is a good clue that perhaps we do a lot of work with each Start and really do use up a lot of CPU on this operation. Let’s see what the Predicate Section of the plan tells us about this range scan:


Predicate Information (identified by operation id):  
---------------------------------------------------  
  42 - access("GDR"."FROM_CURRENCY"="RCT"."INVOICE_CURRENCY_CODE" AND "GDR"."TO_CURRENCY"="GLL"."CURRENCY_CODE" AND   
              "GDR"."CONVERSION_TYPE"='Corporate')  
       filter(("GDR"."CONVERSION_TYPE"='Corporate' AND TO_DATE(INTERNAL_FUNCTION("GDR"."CONVERSION_DATE"))=TO_DATE(INTERNAL_FUNCTION("RCTD"."  
              GL_DATE"))))  

We have access predicates (things which narrow down the number of leaf blocks that we walk through) and filter predicates (things we do to test every key entry we access). Notably gdr.conversion_type is a filter predicate as well as an access predicate – and that suggests that our set of predicates has “skipped over” a column in the index: from_currency and to_currency might be the first two columns in the index, but conversion_type is then NOT the third.

More significantly, though, there’s a column called conversion_date in the index (maybe that’s column 3 in the index – it feels like it ought to be); but for every index entry we’ve selected from the 56 blocks we walk through we do some sort of internal conversion (or un-translated transformation) to the column then convert the result to a date to compare it with another date (similarly processed from an earlier operation). What is that “internal function” – let’s check the query:


AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             gdr.conversion_type = 'Corporate'
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')

(I’ve swapped the order of a couple of lines to highlight a detail).

The filter predicate is comparing gdr.conversion_date with rctd.gl_date – and we can probably assume that both columns really are dates because (a) the word “date” is in their names and (b) the rctd.gl_date is being compared with genuine date values in the next predicate down (and – though I haven’t shown it – the way the plan reports the next predicate proves that the column really is a date datatype).

So the predicate in the SQL applies the to_date() function to two columns that are dates – which means the optimizer has to convert the date columns to some default character format and then convert them back to dates. The “internal function” is a to_char() call. Conversions between date and character formats are CPU-intensive, and we’re doing a double conversion (to_date(to_char(column_value))) to every data value in roughly 56 blocks of an index each time we call that line of the plan. It’s not surprising we spend a lot of time in that line.

Initial strategy:

Check the column types for those two columns, if they are both date types decide whether or not the predicate could be modified to a simple gdr.conversion_date = rctd.gl_date (though it’s possible that something slightly more sophisticated should be used) but whatever you do avoid the redundant conversion through character format.

Warning

Simply eliminating the to_date() calls may change the results. Here’s a demonstration of how nasty things happen when you apply to_date() to a date:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 D1                                     DATE
 D2                                     DATE

SQL> insert into t1 values(sysdate, sysdate + 10/86400);

1 row created.

SQL> select * from t1 where d1 = d2;

no rows selected

SQL> select * from t1 where to_date(d1) = to_date(d2);

D1        D2
--------- ---------
30-SEP-18 30-SEP-18

1 row selected.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> select * from t1 where to_date(d1) = to_date(d2);

no rows selected

Different users could get different results because they have different settings for their nls_date_format.
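For completeness, here is a sketch of the kind of rewrite I have in mind, assuming both columns really are DATE columns and the comparison is meant at day granularity (remember that to_date(to_char(some_date)) under a typical time-free nls_date_format effectively truncates the value to midnight):

AND     gdr.conversion_date = trunc(rctd.gl_date)

or, if conversion_date itself might carry a time component and you want to keep it available as an index access predicate:

AND     gdr.conversion_date >= trunc(rctd.gl_date)
AND     gdr.conversion_date <  trunc(rctd.gl_date) + 1

Either version needs to be validated against the data and the intended semantics before going anywhere near production.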

Reminder

I started my analysis with two comments about the quality of information – first, we don’t really know whether or not this half of the union all would be responsible for most of the time if rowsource execution statistics were not enabled; secondly a large number of small operations can lead to a lot of rounding errors in timing. There are six occurrences of unnested scalar subqueries which are all called 21,808 times – and the recorded time for all 6 of them is remarkably small given the number of executions, even when allowing for the precision with which they operate; it’s possible that these subqueries take a larger fraction of the total time than the plan indicates, in which case it might become necessary (rather than just nice) to do a manual unnesting and reduce the number of inline views to 3 (one for each line_type), 2 (one with, one without, conversion_rate) or just one.

Footnote

Once again I’ve spent a couple of hours writing notes to explain the thoughts that went through my mind in roughly 10 minutes of reading the original posting. It’s a measure really of how many bits of information you can pull together, and possibly discard, very quickly once you understand how many things the optimizer is capable of doing and how the execution plan tries to show you how a statement was (or will be) handled.

 

Partitioning -- 6 : Hash Partitioning

Hemant K Chitale - Sun, 2018-09-30 06:25
Unlike Range or List Partitioning where you define the rule which identifies which Partition a row will be inserted into (based on the value in the Partition Key Column(s)),  Hash Partitioning relies on Oracle applying a "hashing formula (algorithm)" to "randomly" distribute incoming rows across the available Partitions in the table.
This would be useful when you want to break up a table into smaller physical segments (maybe into even separate Tablespaces on different disks) without considering grouping of data.  In Date based Range Partitioning data is grouped into different Partitions -- i.e. physical segments on disk --  based on the Date value (e.g. by Month or Year).  In List Partitioning, data is grouped based on the value in the Partition Key Column.

Here is a small example on Hash Partitioning :

SQL> create table iot_incoming_data
2 (data_item_number number,
3 data_item_key varchar2(32),
4 data_item_value varchar2(64),
5 data_item_timestamp timestamp)
6 partition by hash (data_item_number)
7 (partition p1 tablespace hash_ptn_1,
8 partition p2 tablespace hash_ptn_2,
9 partition p3 tablespace hash_ptn_3,
10 partition p4 tablespace hash_ptn_4)
11 /

Table created.

SQL>


In this definition of the table, I have "randomly" distributed incoming rows across 4 Partitions in 4 different Tablespaces.  Given the incoming "data_item_number" values (either machine generated or from a sequence), each of the 4 Partitions would be roughly equally loaded.
(In contrast, in Date based Range Partitioning of, say, a SALES table, you might have fewer rows in older Partitions and an increasing number of rows in new Partitions as your business and Sales Volume grow over time !).

Unlike Range Partitioning, Hash Partitioning will not perform well for a "range based query"  (e.g. a range of sales dates or a range of data item numbers).  It is suitable for "equality" or "in-list" predicates.  If you do need a range based query, you would need a Global Index.
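As a quick sketch using the table created above (the detail to look at is the Pstart/Pstop columns that DBMS_XPLAN shows for each statement), an equality predicate on the Partition Key lets the optimizer prune down to a single Hash Partition, while a range predicate has to visit all four Partitions:

select data_item_value
  from iot_incoming_data
 where data_item_number = 1234;

select count(*)
  from iot_incoming_data
 where data_item_number between 1000 and 2000;

The first plan should report PARTITION HASH SINGLE, the second PARTITION HASH ALL.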

Note that it is advised that you should use a Power of 2 for the number of Hash Partitions.

Let me demonstrate insertion of data :

SQL> insert into iot_incoming_data
2 select rownum,
3 dbms_random.string('X',16),
4 dbms_random.string('X',32),
5 systimestamp
6 from dual
7 connect by level < 10001;

10000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('HEMANT','IOT_INCOMING_DATA');

PL/SQL procedure successfully completed.

SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'IOT_INCOMING_DATA'
4 order by partition_position
5 /

PARTITION_NAME NUM_ROWS
------------------------------ ----------
P1 2471
P2 2527
P3 2521
P4 2481

SQL>


Note that I have inserted the 10,000 rows from a single session.  In the real world, you would have multiple sessions concurrently inserting rows into the table.
Based on the Hashing algorithm that Oracle used (note : this is internal to Oracle and we cannot use any custom algorithm), Oracle has more or less evenly distributed the incoming rows across the 4 Partitions.

Let me select some random rows from the Partitions :

SQL> select data_item_number  
2 from iot_incoming_data partition (P1)
3 where rownum < 6
4 order by 1;

DATA_ITEM_NUMBER
----------------
8361
8362
8369
8379
8380

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P2)
3 where rownum < 6
4 order by 1
5 /

DATA_ITEM_NUMBER
----------------
8087
8099
8101
8105
8109

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P3)
3 where rownum < 6
4 and data_item_number < 100
5 order by 1
6 /

DATA_ITEM_NUMBER
----------------
2
5
8
18
20

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P4)
3 where rownum < 6
4 and data_item_number between 1000 and 1100
5 order by 1
6 /

DATA_ITEM_NUMBER
----------------
1001
1002
1005
1008
1009

SQL>


(The first two queries returned rows with values greater than 8000 simply because I didn't specify a range of values as a filter and those rows came from the first few blocks that Oracle read from the buffer cache).
Note how the DATA_ITEM_NUMBER values indicate "near-random" distribution of rows across the Partitions.  It is likely that if I had created multiple sessions concurrently running inserts into the table, distribution of the rows would have been even more "random".



Categories: DBA Blogs

Oracle Cloud Infrastructure Architect Associate (1Z0-932) | DBCS Feature

Online Apps DBA - Sat, 2018-09-29 07:39

[1Z0-932 Exam Q/A] The Oracle Cloud Infrastructure Architect Exam (1Z0-932) consists of 70 questions and you need to score 68%, so around 49 questions correct. It covers topics such as Architecture, Storage, Networking, IAM, Compute, HA, and Database. Check one of the many questions that you can expect in OCI Architect 1Z0-932 from Database Cloud Service […]

The post Oracle Cloud Infrastructure Architect Associate (1Z0-932) | DBCS Feature appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Performance Tuning Process

Online Apps DBA - Sat, 2018-09-29 07:36

Do you know there are four major generic processes for increasing the performance of your database? Visit https://k21academy.com/tuning12 for an in-depth look at the first of them, “Identify”, which includes the following steps: ✔ Identify which SQL to tune ✔ Identify end-to-end view ✔ Identify database time view ✔ Identify simplification Do You […]

The post Performance Tuning Process appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Insertion over db links creating extra rows than expected

Tom Kyte - Fri, 2018-09-28 19:46
Hi Tom, I am facing a weird issue. Below is the scenario: my package creates insert statements for 4 tables which lie on some other Oracle 11g db: tab1, tab2, tab3, tab4. The same 4 tables exist in 7 servers (admin and cus1 to cu6). ...
Categories: DBA Blogs

A SQLite extension for gawk (part II)

Yann Neuhaus - Fri, 2018-09-28 17:21

Welcome to part II of a three-part article on extending gawk with a SQLite binding. Part I is here. Part II is followed by Part III, which gives some explanations for the code presented here and shows how to use the extension with a stress test.
Here, I’ll list the source code of the extension and give instructions to compile and use it in gawk. Beware though that the code should be taken with several grains of salt, actually a whole wheelbarrow of it, because it has been only superficially tested. Some more serious testing is required in order to trust it entirely. So, caveat emptor !
I assume the source code of gawk is already installed (see for example the instructions here). We still need the SQLite source code. Go here and download the amalgamation zip file. Unzip it somewhere and then copy the file sqlite3.c to the gawk extension directory, ~/dmgawk/gawk-4.2.1/extension. Compile it with the command below:

gcc -c sqlite3.c -DHAVE_READLINE -fPIC -lpthread -ldl

Now, edit the file Makefile.am and add the references to the new extension, as shown below:

vi Makefile.am
pkgextension_LTLIBRARIES = \
filefuncs.la \
...
sqlite_gawk.la <-----
 
noinst_LTLIBRARIES = \
...
time_la_SOURCES = time.c
time_la_LDFLAGS = $(MY_MODULE_FLAGS)
time_la_LIBADD = $(MY_LIBS)
sqlite_gawk_la_SOURCES = sqlite_gawk.c <-----
sqlite_gawk_la_LDFLAGS = $(MY_MODULE_FLAGS) <-----
sqlite_gawk_la_LIBADD = $(MY_LIBS) -lpthread -ldl -lreadline <-----

...

Save and quit; that’s all for the make file;
We are still in the extension directory. Let’s edit the interface sqlite_gawk.c now and insert the code below;
vi sqlite_gawk.c

/*
 * sqlite-gawk.c - an interface to the sqlite3 library;
 * Cesare Cervini
 * dbi-services.com
 * 8/2018
*/
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <sys/types.h>
#include <sys/stat.h>

#include "gawkapi.h"

// extension;
#include <time.h>
#include <errno.h>
#include <limits.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <regex.h>
#include <sqlite3.h>

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

static const gawk_api_t *api;   /* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "an interface to sqlite3: version 1.0";

int plugin_is_GPL_compatible;

/* internal structure and variables */
/*
internally stores the db handles so an integer is returned to gawk as the index of an array;
*/
#define MAX_DB 100
static unsigned nb_sqlite_free_handles = MAX_DB;
static sqlite3 *sqlite_handles[MAX_DB];

/* init_sqlite_handles */
/*
since gawk does not know pointers, we use integers as db handles, which are actually indexes into a table of sqlite db handles;
initializes the sqlite_handles array to null pointers and resets the number of free handles;
called at program start time;
*/
static awk_bool_t init_sqlite_handles(void) {
   for (unsigned i = 0; i < MAX_DB; i++)
      sqlite_handles[i] = NULL;
   nb_sqlite_free_handles = MAX_DB;

   register_ext_version(ext_version);

   return awk_true;
}

/*
readfile() and writefile() functions for blob I/Os from/to file into/from memory;
e.g.:
   INSERT INTO chickenhouse(my_blob) VALUES (readfile('chicken_run.mp4'));
   SELECT writefile("'Mary Poppins Returns.mp4'", my_blob) FROM children_movies;
they are taken directly from the source file sqlite3.c, i.e. sqlite3's shell program;
those functions are later registered in do_sqlite_open() via a call to sqlite_create_function() for use as extending SQL functions;
to be done while opening a db in do_sqlite_open() and for each opened db where the extended functions must be available, i.e. all for short;
*/

// ------------------------------------------- begin of imported functions from sqlite3.c ---------------------------------------------
/*
** This function is used in place of stat().  On Windows, special handling
** is required in order for the included time to be returned as UTC.  On all
** other systems, this function simply calls stat().
*/
static int fileStat(
  const char *zPath,
  struct stat *pStatBuf
){
  return stat(zPath, pStatBuf);
}

/*
** Set the result stored by context ctx to a blob containing the 
** contents of file zName.
*/
static void readFileContents(sqlite3_context *ctx, const char *zName){
  FILE *in;
  long nIn;
  void *pBuf;

  in = fopen(zName, "rb");
  if( in==0 ) return;
  fseek(in, 0, SEEK_END);
  nIn = ftell(in);
  rewind(in);
  pBuf = sqlite3_malloc( nIn );
  if( pBuf && 1==fread(pBuf, nIn, 1, in) ){
    sqlite3_result_blob(ctx, pBuf, nIn, sqlite3_free);
  }else{
    sqlite3_free(pBuf);
  }
  fclose(in);
}

/*
** Implementation of the "readfile(X)" SQL function.  The entire content
** of the file named X is read and returned as a BLOB.  NULL is returned
** if the file does not exist or is unreadable.
*/
static void readfileFunc(
  sqlite3_context *context,
  int argc,
  sqlite3_value **argv
){
  const char *zName;
  (void)(argc);  /* Unused parameter */
  zName = (const char*)sqlite3_value_text(argv[0]);
  if( zName==0 ) return;
  readFileContents(context, zName);
}

/*
** This function does the work for the writefile() UDF. Refer to 
** header comments at the top of this file for details.
*/
static int writeFile(
  sqlite3_context *pCtx,          /* Context to return bytes written in */
  const char *zFile,              /* File to write */
  sqlite3_value *pData,           /* Data to write */
  mode_t mode,                    /* MODE parameter passed to writefile() */
  sqlite3_int64 mtime             /* MTIME parameter (or -1 to not set time) */
){
  if( S_ISLNK(mode) ){
    const char *zTo = (const char*)sqlite3_value_text(pData);
    if( symlink(zTo, zFile)<0 ) return 1;
  }else
  {
    if( S_ISDIR(mode) ){
      if( mkdir(zFile, mode) ){
        /* The mkdir() call to create the directory failed. This might not
        ** be an error though - if there is already a directory at the same
        ** path and either the permissions already match or can be changed
        ** to do so using chmod(), it is not an error.  */
        struct stat sStat;
        if( errno!=EEXIST
         || 0!=fileStat(zFile, &sStat)
         || !S_ISDIR(sStat.st_mode)
         || ((sStat.st_mode&0777)!=(mode&0777) && 0!=chmod(zFile, mode&0777))
        ){
          return 1;
        }
      }
    }else{
      sqlite3_int64 nWrite = 0;
      const char *z;
      int rc = 0;
      FILE *out = fopen(zFile, "wb");
      if( out==0 ) return 1;
      z = (const char*)sqlite3_value_blob(pData);
      if( z ){
        sqlite3_int64 n = fwrite(z, 1, sqlite3_value_bytes(pData), out);
        nWrite = sqlite3_value_bytes(pData);
        if( nWrite!=n ){
          rc = 1;
        }
      }
      fclose(out);
      if( rc==0 && mode && chmod(zFile, mode & 0777) ){
        rc = 1;
      }
      if( rc ) return 2;
      sqlite3_result_int64(pCtx, nWrite);
    }
  }

  if( mtime>=0 ){
#if defined(AT_FDCWD) && 0 /* utimensat() is not universally available */
    /* Recent unix */
    struct timespec times[2];
    times[0].tv_nsec = times[1].tv_nsec = 0;
    times[0].tv_sec = time(0);
    times[1].tv_sec = mtime;
    if( utimensat(AT_FDCWD, zFile, times, AT_SYMLINK_NOFOLLOW) ){
      return 1;
    }
#else
    /* Legacy unix */
    struct timeval times[2];
    times[0].tv_usec = times[1].tv_usec = 0;
    times[0].tv_sec = time(0);
    times[1].tv_sec = mtime;
    if( utimes(zFile, times) ){
      return 1;
    }
#endif
  }

  return 0;
}

/*
** Argument zFile is the name of a file that will be created and/or written
** by SQL function writefile(). This function ensures that the directory
** zFile will be written to exists, creating it if required. The permissions
** for any path components created by this function are set to (mode&0777).
**
** If an OOM condition is encountered, SQLITE_NOMEM is returned. Otherwise,
** SQLITE_OK is returned if the directory is successfully created, or
** SQLITE_ERROR otherwise.
*/
static int makeDirectory(
  const char *zFile,
  mode_t mode
){
  char *zCopy = sqlite3_mprintf("%s", zFile);
  int rc = SQLITE_OK;

  if( zCopy==0 ){
    rc = SQLITE_NOMEM;
  }else{
    int nCopy = (int)strlen(zCopy);
    int i = 1;

    while( rc==SQLITE_OK ){
      struct stat sStat;
      int rc2;

      for(; zCopy[i]!='/' && i<nCopy; i++);
      if( i==nCopy ) break;
      zCopy[i] = '\0';

      rc2 = fileStat(zCopy, &sStat);
      if( rc2!=0 ){
        if( mkdir(zCopy, mode & 0777) ) rc = SQLITE_ERROR;
      }else{
        if( !S_ISDIR(sStat.st_mode) ) rc = SQLITE_ERROR;
      }
      zCopy[i] = '/';
      i++;
    }

    sqlite3_free(zCopy);
  }

  return rc;
}

/*
** Set the error message contained in context ctx to the results of
** vprintf(zFmt, ...).
*/
static void ctxErrorMsg(sqlite3_context *ctx, const char *zFmt, ...){
  char *zMsg = 0;
  va_list ap;
  va_start(ap, zFmt);
  zMsg = sqlite3_vmprintf(zFmt, ap);
  sqlite3_result_error(ctx, zMsg, -1);
  sqlite3_free(zMsg);
  va_end(ap);
}

/*
** Implementation of the "writefile(W,X[,Y[,Z]]])" SQL function.  
** Refer to header comments at the top of this file for details.
*/
static void writefileFunc(
  sqlite3_context *context,
  int argc,
  sqlite3_value **argv
){
  const char *zFile;
  mode_t mode = 0;
  int res;
  sqlite3_int64 mtime = -1;

  if( argc<2 || argc>4 ){
    sqlite3_result_error(context, 
        "wrong number of arguments to function writefile()", -1
    );
    return;
  }

  zFile = (const char*)sqlite3_value_text(argv[0]);
  if( zFile==0 ) return;
  if( argc>=3 ){
    mode = (mode_t)sqlite3_value_int(argv[2]);
  }
  if( argc==4 ){
    mtime = sqlite3_value_int64(argv[3]);
  }

  res = writeFile(context, zFile, argv[1], mode, mtime);
  if( res==1 && errno==ENOENT ){
    if( makeDirectory(zFile, mode)==SQLITE_OK ){
      res = writeFile(context, zFile, argv[1], mode, mtime);
    }
  }

  if( argc>2 && res!=0 ){
    if( S_ISLNK(mode) ){
      ctxErrorMsg(context, "failed to create symlink: %s", zFile);
    }else if( S_ISDIR(mode) ){
      ctxErrorMsg(context, "failed to create directory: %s", zFile);
    }else{
      ctxErrorMsg(context, "failed to write file: %s", zFile);
    }
  }
}
// ------------------------------------------- end of imported functions from sqlite3.c ---------------------------------------------

/* get_free_sqlite_handle */
/*
looks for a free slot in sqlite_handles;
return its index if found, -1 otherwise;
*/
static unsigned get_free_sqlite_handle(void) {
   if (0 == nb_sqlite_free_handles) {
       fprintf(stderr, "maximum of open db [%d] reached, no free handles !\n", MAX_DB);
       return -1;
   }
   for (unsigned i = 0; i < MAX_DB; i++)
      if (NULL == sqlite_handles[i])
         return i;
   // should never come so far;
   return -1;
}

/* do_sqlite_open */
/* returns -1 if error, a db handle in the range 0 .. MAX_DB - 1 otherwise; */
static awk_value_t *
do_sqlite_open(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_name;
   short int ret;

   assert(result != NULL);

   unsigned int db_handle = get_free_sqlite_handle();
   if (-1 == db_handle)
      return make_number(-1, result);

   if (get_argument(0, AWK_STRING, &db_name)) {
      sqlite3 *db;

      ret = sqlite3_open(db_name.str_value.str, &db);

      if (ret) {
         char error_string[1000];
         sprintf(error_string, "sqlite3_open(): cannot open database [%s], error %s\n", db_name.str_value.str, sqlite3_errmsg(db));
         fprintf(stderr, "%s\n", error_string);
         update_ERRNO_string(_(error_string));
         ret = -1;
      }
      else {
         sqlite_handles[db_handle] = db;
         nb_sqlite_free_handles--;
         ret = db_handle;

         // register the extension functions readfile() and writefile() for blobs;
         // keep the registration status in a separate variable so that ret still holds the db handle returned to gawk;
         int rc = sqlite3_create_function(db, "readfile", 1, SQLITE_UTF8, 0, readfileFunc, 0, 0);
         if (SQLITE_OK == rc) {
            rc = sqlite3_create_function(db, "writefile", -1, SQLITE_UTF8, 0, writefileFunc, 0, 0);
            if (SQLITE_OK != rc)
               fprintf(stderr, "%s\n", "could not register function writefile()");
         }
         else
            fprintf(stderr, "%s\n", "could not register function readfile()");
      }
   }
   else {
      update_ERRNO_string(_("sqlite3_open(): missing parameter database name"));
      ret = -1;
   }

   return make_number(ret, result);
}

/* do_sqlite_close */
/* returns -1 if error, 0 otherwise; */
static awk_value_t *
do_sqlite_close(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_handle;
   int ret;

   assert(result != NULL);

   if (get_argument(0, AWK_NUMBER, &db_handle)) {
      sqlite3_close(sqlite_handles[(int) db_handle.num_value]);
      sqlite_handles[(int) db_handle.num_value] = NULL;
      nb_sqlite_free_handles++;
      ret = 0;
   }
   else {
      update_ERRNO_string(_("sqlite3_close(): missing parameter database handle"));
      ret = -1;
   }
   return make_number(ret, result);
}

/* do_sqlite_exec */
/*
returns -1 if error, 0 otherwise;
sqlite_exec is overloaded;
if 2 parameters, usual DML/DDL statements;
if 6 parameters, then incremental blob I/O;
sqlite_exec(db, db_name, table, column, rowid, readfile(file_name))
or
sqlite_exec(db, db_name, table, column, rowid, writefile(file_name))
implements sqlite3'c shell readfile()/writefile() syntax with incremental blob I/Os;
Example of usage:
first, get the rowid of the row that contains the blob to access;
   sqlite_select(db, "select rowid from <table> where <condition>", array)
then, call the sqlite_exec function with the tuple (<db_name>, <table>, <blob_column>, <rowid>) and the action to do, either readfile() or writefile();
   sqlite_exec(db, <db_name>, '<table>', '<blob_column>', array[0]["rowid"], readfile(file_name))
   sqlite_exec(db, <db_name>, '<table>', '<blob_column>', array[0]["rowid"], writefile(file_name))
e.g.:
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/setup_files/documentum.tar)")
note how the file name is not quoted;
in case of readfile(), if the blob's size changes, an update of the blob filled with zero-byte bytes and with the new size is first performed, then the blob is reopened;
see doc here for incremental blob I/Os: https://sqlite.org/c3ref/blob_open.html;
int sqlite3_blob_open(sqlite3*, const char *zDb, const char *zTable, const char *zColumn, sqlite3_int64 iRow, int flags, sqlite3_blob **ppBlob);
int sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64);
int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset);
int sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset);
int sqlite3_blob_close(sqlite3_blob *);
*/
static awk_value_t *
do_sqlite_exec(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_handle;
   int ret = 1;

   assert(result != NULL);

   if (!get_argument(0, AWK_NUMBER, &db_handle)) {
      fprintf(stderr, "in do_sqlite_exec, cannot get the db handle argument\n");
      ret = -1;
      goto end;
   }
   if (2 == nargs) {
      awk_value_t sql_stmt;
      if (!get_argument(1, AWK_STRING, &sql_stmt))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the sql_stmt argument\n");
         ret = -1;
         goto end;
      }
      char *errorMessg = NULL;
      ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, NULL, NULL, &errorMessg);
      if (SQLITE_OK != ret) {
         fprintf(stderr, "in do_sqlite_exec, SQL error %s while executing [%s]\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sql_stmt.str_value.str);
         sqlite3_free(errorMessg);
         ret = -1;
         goto end;
      }
   }
   else if (6 == nargs) {
      awk_value_t arg, number_value;
      char *db_name = NULL, *table_name = NULL, *column_name = NULL, *file_stmt = NULL, *file_name = NULL;

      if (!get_argument(1, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the db_name argument\n");
         ret = -1;
         goto abort;
      }
      db_name = strdup(arg.str_value.str);

      if (!get_argument(2, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the table_name argument\n");
         ret = -1;
         goto abort;
      }
      table_name = strdup(arg.str_value.str);

      if (!get_argument(3, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the column_name argument\n");
         ret = -1;
         goto abort;
      }
      column_name = strdup(arg.str_value.str);
      
      if (!get_argument(4, AWK_NUMBER, &number_value))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the rowid argument\n");
         ret = -1;
         goto abort;
      }
      long int rowid = number_value.num_value;
      
      if (!get_argument(5, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the readfile()/writefile() argument\n");
         ret = -1;
         goto abort;
      }
      file_stmt = strdup(arg.str_value.str);
      
      unsigned short bRead2Blob;
      char *RE_readfile = "^readfile\\(([^)]+)\\)$";
      char *RE_writefile = "^writefile\\(([^)]+)\\)$";
      regex_t RE;
      regmatch_t pmatches[2];

      if (regcomp(&RE, RE_readfile, REG_EXTENDED)) {
         fprintf(stderr, "in do_sqlite_exec, error compiling REs %s\n", RE_readfile);
         ret = -1;
         goto abort;
      }
      if (regexec(&RE, file_stmt, 2, pmatches, 0)) {
         // no call to readfile() requested, try writefile();
         regfree(&RE);
         if (regcomp(&RE, RE_writefile, REG_EXTENDED)) {
            fprintf(stderr, "in do_sqlite_exec, error compiling REs %s\n", RE_writefile);
            ret = -1;
            goto abort;
         }
         if (regexec(&RE, file_stmt, 2, pmatches, 0)) {
            fprintf(stderr, "in do_sqlite_exec, error executing RE %s and RE %s against %s;\nneither readfile(file_name) nor writefile(file_name) was found\n", RE_readfile, RE_writefile, file_stmt);
            ret = -1;
            goto abort;
         }
         else bRead2Blob = 0;
      }
      else bRead2Blob = 1;
      file_name = strndup(file_stmt + pmatches[1].rm_so, pmatches[1].rm_eo - pmatches[1].rm_so);
      regfree(&RE);
      sqlite3_blob *pBlob;
      if (bRead2Blob) {
         ret = sqlite3_blob_open(sqlite_handles[(int) db_handle.num_value], db_name, table_name, column_name, rowid, 1, &pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec, at reading blob, with parameters: db_name=%s, table_name=%s, column_name=%s, rowid=%ld, file statement=%s, error in sqlite3_blob_open %s\n%s\n",
                            db_name, table_name, column_name, rowid, file_stmt,
                            sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
            ret = -1;
            goto abort;
         }

         FILE *fs_in = fopen(file_name, "r");
         if (NULL == fs_in) {
            fprintf(stderr, "in do_sqlite_exec, error opening file %s for reading\n", file_name);
            ret = -1;
            goto local_abort_w;
         }

         // will the blob size change ?
         fseek(fs_in, 0, SEEK_END);
         unsigned long file_size = ftell(fs_in);
         rewind(fs_in);
         unsigned long blobSize = sqlite3_blob_bytes(pBlob);
         if (file_size != blobSize) {
            // yes, must first update the blob with the new size and reopen it;
            char stmt[500];
            char *errorMessg = NULL;
            sprintf(stmt, "update %s set %s = zeroblob(%ld) where rowid = %ld", table_name, column_name, file_size, rowid);
            ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], stmt, NULL, NULL, &errorMessg);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, SQL error %s while changing the blob's size through [%s]:\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), stmt, errorMessg);
               sqlite3_free(errorMessg);
               ret = -1;
               goto local_abort_w;
            }
            ret = sqlite3_blob_reopen(pBlob, rowid);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, error while reopening the blob: %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
               ret = -1;
               goto local_abort_w;
            }
         }

         // let's work with a 10 MiB large buffer;
         unsigned long BUFFER_SIZE = 10 * 1024 * 1024;
         char *pBuffer = (char *) malloc(sizeof(char) * BUFFER_SIZE);
         unsigned long nbBytes;
         unsigned long offset = 0;
         while ((nbBytes = fread(pBuffer, sizeof(char), BUFFER_SIZE, fs_in)) > 0) {
            ret = sqlite3_blob_write(pBlob, pBuffer, nbBytes, offset);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_write, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
               ret = -1;
               free(pBuffer);
               goto local_abort_w;
            }
            offset += nbBytes;
         }
         free(pBuffer);
local_abort_w:
         fclose(fs_in);
         ret = sqlite3_blob_close(pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_close, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
            ret = -1;
         }
      }
      else {
         ret = sqlite3_blob_open(sqlite_handles[(int) db_handle.num_value], db_name, table_name, column_name, rowid, 0, &pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec at writing blob, error %d in sqlite3_blob_open with parameters: db_name=%s, table_name=%s, column_name=%s, rowid=%ld, file statement=%s\n",
                            ret,
                            db_name, table_name, column_name, rowid, file_stmt);
            ret = -1;
            goto abort;
         }
         unsigned long BUFFER_SIZE = 10 * 1024 * 1024;
         char *pBuffer = (char *) malloc(sizeof(char) * BUFFER_SIZE);
         unsigned long offset = 0;
         FILE *fs_out = fopen(file_name, "w");
         if (NULL == fs_out) {
            fprintf(stderr, "in do_sqlite_exec, error %d opening file %s for writing\n", errno, file_name);
            ret = -1;
            goto local_abort_r;
         }
         unsigned long blobSize = sqlite3_blob_bytes(pBlob);
         if (BUFFER_SIZE >= blobSize) {
            ret = sqlite3_blob_read(pBlob, pBuffer, blobSize, offset);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_read, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
               ret = -1;
               goto local_abort_r;
            }
            // write only the blob's actual size, not the whole buffer;
            unsigned long nbBytes = fwrite(pBuffer, sizeof(char), blobSize, fs_out);
            if (nbBytes < blobSize) {
               fprintf(stderr, "in do_sqlite_exec, error in fwrite()\n");
               ret = -1;
               goto local_abort_r;
            }
         }
         else {
            unsigned long nbBytes;
            while ((nbBytes = (blobSize <= BUFFER_SIZE ? blobSize : BUFFER_SIZE)) > 0) {
               ret = sqlite3_blob_read(pBlob, pBuffer, nbBytes, offset);
               if (SQLITE_OK != ret) {
                  fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_read, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
                  ret = -1;
                  goto local_abort_r;
               }
               ret = fwrite(pBuffer, sizeof(char), nbBytes, fs_out);
               if (ret < nbBytes) {
                  fprintf(stderr, "in do_sqlite_exec, error in fwrite()\n");
                  ret = -1;
                  goto local_abort_r;
               }
               offset += nbBytes;
               blobSize -= nbBytes;
            }
         }
local_abort_r:
         fclose(fs_out);
         free(pBuffer);
         ret = sqlite3_blob_close(pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec, processing of writefile(), sqlite3_blob_close, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
            ret = -1;
         }
      }
abort:
      free(db_name);
      free(table_name);
      free(column_name);
      free(file_stmt);
      free(file_name);
   }
   else {
      fprintf(stderr, "in do_sqlite_exec, unsupported number of parameters in statement while processing [%d]\n", nargs);
      ret = -1;
   }
end:
   return make_number(ret, result);
}

static unsigned max(unsigned n1, unsigned n2) {
   if (n1 > n2)
      return n1;
   else return n2;
}

// this struct is used to pass parameters to the callbacks;
typedef struct DISPLAYED_TABLE {
   char *sqlStmt;

   unsigned short bHeaderPrinted;
   char *COL_SEPARATOR;
   char *ELLIPSIS;
   unsigned short len_ellipsis;
   unsigned short MAX_WIDTH;
   unsigned short MIN_WIDTH;

   unsigned nb_columns;
   unsigned *max_col_widths;
   unsigned *actual_col_widths;
   char *col_overflow_action;
   char **headers;
   unsigned widest_column;

   char *size_list;  // list of blank- or comma-separated column widths;

   unsigned long NR;
   unsigned short bStoreOrDisplay;
   awk_array_t gawk_array;

   unsigned short bEpilog;
} DISPLAYED_TABLE;

void cleanup(DISPLAYED_TABLE *dt) {
   if (dt -> sqlStmt)
      free(dt -> sqlStmt);
   if (dt -> max_col_widths)
      free(dt -> max_col_widths);
   if (dt -> actual_col_widths)
      free(dt -> actual_col_widths);
   for (unsigned i = 0; i < dt -> nb_columns; i++) {
      if (dt -> headers)
         free(dt -> headers[i]);
   }
   if (dt -> headers)
      free(dt -> headers);
   if (dt -> size_list)
      free(dt -> size_list);
}

// strip the trailing blanks so they are not counted toward the column width;
// and returns the number of characters from the beginning;
// a former version attempted to insert a '\0' terminator but it caused an error when the string resides in a code area (e.g. SELECT sqlite_version()) because modifications are not allowed there for obvious reasons;
unsigned getUsefulLen(char * str) {
   unsigned len = strlen(str);
   char *p = str + len - 1;
   while (' ' == *p && p > str) p--;
   if (' ' != *p)
      len = p - str + 1;
   else
      len = 0;
   return(len);
}

char *fillStr(char *S, char ch, unsigned max_len) {
   S[max_len] = '\0';
   for (char *p = S; max_len; p++, max_len--)
      *p = ch;
   return S;
}

/* select_callback_raw */
// displays the data without truncation or cleaning up of trailing blanks;
// columns are separated by dt -> COL_SEPARATOR, which is useful to import data as CSV;
static int select_callback_raw(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);
      cleanup(dt);
      return 0;
   }

   if (!dt -> bHeaderPrinted) {
      // header has not been printed yet, print it and afterwards print the first row;
      for (unsigned i = 0; i < nb_columns; i++)
         printf("%s%s", column_names[i], i  COL_SEPARATOR : "");
      printf("\n");
      dt -> bHeaderPrinted = 1;
   }

   for (unsigned i = 0; i < nb_columns; i++)
      printf("%s%s", column_values[i], i  COL_SEPARATOR : "");
   printf("\n");
   dt -> NR++;
   return 0;
}

/* select_callback_draft */
/*
display the data in maximum 15-character wide columns, with possible truncation, in which case an ellipsis (...) is appended;
at the end, the optimum widths for each column are listed so they can be passed as a string list in the call to sqlite_select(,, "....") to avoid truncation;
this output is convenient as a quick draft;
*/
static int select_callback_draft(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   char col_str[dt -> MAX_WIDTH + 1];

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);

      printf("\nOptimum column widths\n");
      printf("=====================\n");
      printf("for query: %s\n", dt -> sqlStmt);
      for (unsigned i = 0; i < dt -> nb_columns; i++)
         printf("%-*s  %d\n", dt -> widest_column + 5, dt -> headers[i], dt -> max_col_widths[i]);

      cleanup(dt);
      return 0;
   }

   if (!dt -> bHeaderPrinted) {
      // header has not been printed yet, print it and afterwards print the first row;
      dt -> nb_columns = nb_columns; 
      dt -> max_col_widths = (unsigned *) malloc(sizeof(unsigned) * nb_columns);
      dt -> actual_col_widths = (unsigned *) malloc(sizeof(unsigned) * nb_columns);
      dt -> headers = (char **) malloc(sizeof(char *) * nb_columns);

      char *header_line = NULL;

      for (unsigned i = 0; i < nb_columns; i++) {
         char *tmp_s;
         unsigned len = strlen(column_names[i]);
         dt -> max_col_widths[i] = len;
         dt -> widest_column = max(dt -> widest_column, len);
         if (len > dt -> MAX_WIDTH) {
            // column overflow, apply a truncation with ellipsis;
            dt -> actual_col_widths[i] = dt -> MAX_WIDTH;
            strncpy(col_str, column_names[i], dt -> MAX_WIDTH - dt -> len_ellipsis);
            col_str[dt -> MAX_WIDTH - dt -> len_ellipsis] = '\0';
            strcat(col_str, dt -> ELLIPSIS);
            tmp_s = col_str;
         }
         else if (len < dt -> MIN_WIDTH) {
            dt -> actual_col_widths[i] = dt -> MIN_WIDTH;
            tmp_s = column_names[i];
         }
         else {
            dt -> actual_col_widths[i] = len;
            tmp_s = column_names[i];
         }
         printf("%-*s%s", dt -> actual_col_widths[i], tmp_s, i  COL_SEPARATOR : "");
         dt -> headers[i] = strdup(column_names[i]);
      }
      printf("\n");

      for (unsigned i = 0; i < nb_columns; i++) {
         header_line = (char *) realloc(header_line, sizeof(char) * dt -> actual_col_widths[i]);
         fillStr(header_line, '-', dt -> actual_col_widths[i]);
         printf("%s%s", header_line, i  COL_SEPARATOR : "");
      }
      printf("\n");
      free(header_line);

      dt -> bHeaderPrinted = 1;
   }
   // header has been printed, print the rows now;
   for (unsigned i = 0; i < nb_columns; i++) {
      char *tmp_s;
      unsigned len = getUsefulLen(column_values[i]);
      dt -> max_col_widths[i] = max(dt -> max_col_widths[i], len);
      if (len > dt -> actual_col_widths[i]) {
         strncpy(col_str, column_values[i], dt -> actual_col_widths[i] - dt -> len_ellipsis);
         col_str[dt -> actual_col_widths[i] - dt -> len_ellipsis] = '\0';
         strcat(col_str, dt -> ELLIPSIS);
         tmp_s = col_str;
      }
      else {
         tmp_s = column_values[i];
      }
      printf("%-*.*s%s", dt -> actual_col_widths[i], dt -> actual_col_widths[i], tmp_s, i  COL_SEPARATOR : "");
   }
   printf("\n");
   dt -> NR++;
   return 0;
}

/* printConstrained */
// prints the row's column in constrained column widths;
static void printConstrained(DISPLAYED_TABLE *dt, char **data, unsigned nb_columns) {
   // let's replicate the data because they will be modified locally;
   char **ldata = (char **) malloc(sizeof(char *) * nb_columns);
   for (unsigned i = 0; i < nb_columns; i++)
      ldata[i] = strndup(data[i], getUsefulLen(data[i]));
   unsigned bWrapOccured;
   do {
      bWrapOccured = 0;
      for (unsigned i = 0; i < nb_columns; i++) {
         char *col_str = NULL;
         unsigned len = strlen(ldata[i]);
         dt -> actual_col_widths[i] = dt -> max_col_widths[i];
         if (len > dt -> max_col_widths[i]) {
            // column width overflow, apply the requested action: either wrap-around, truncate with ellipsis or truncate without ellipsis;
            if ('e' == dt -> col_overflow_action[i]) {
               if (dt -> max_col_widths[i] < dt -> len_ellipsis)
                  dt -> actual_col_widths[i] = dt -> len_ellipsis;
               col_str = strndup(ldata[i], dt -> actual_col_widths[i] - dt -> len_ellipsis);
               col_str[dt -> actual_col_widths[i] - dt -> len_ellipsis] = '\0';
               strcat(col_str, dt -> ELLIPSIS);
               sprintf(ldata[i], "%*s", len, " ");
            }
            else if ('t' == dt -> col_overflow_action[i]) {
               col_str = strndup(ldata[i], dt -> actual_col_widths[i]);
               sprintf(ldata[i], "%*s", len, " ");
            }
            else if ('w' == dt -> col_overflow_action[i]) {
               col_str = strndup(ldata[i], dt -> actual_col_widths[i]);
               // shift the column names by as many printed characters;
               // the new column names will be printed at the next cycle of the inner loop 
               unsigned j;
               for (j = dt -> actual_col_widths[i]; j < len; j++)
                  ldata[i][j - dt -> actual_col_widths[i]] = ldata[i][j];
               ldata[i][len - dt -> actual_col_widths[i]] = '\0';
               bWrapOccured = 1;
            }
         }
         else {
            col_str = strdup(ldata[i]);
            // no wrap-around necessary here but prepare the str for the next cycle just in case;
            sprintf(ldata[i], "%*s", len, " ");
         }
         printf("%-*s%s", dt -> actual_col_widths[i], col_str, i  COL_SEPARATOR : "");
         free(col_str);
      }
      printf("\n");
   } while (bWrapOccured);
   for (unsigned i = 0; i < nb_columns; i++)
      free(ldata[i]);
   free(ldata);
}

/* select_callback_sized */
// displays the columns within predetermined sizes, wrap-around if overflow;
static int select_callback_sized(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);
      cleanup(dt);
      return 0;
   }

   if (!dt -> bHeaderPrinted) {
      // header has not been printed yet, print it and afterwards print the first row;
      dt -> actual_col_widths = (unsigned *) malloc(sizeof(unsigned) * nb_columns);
      if (dt -> nb_columns < nb_columns) {
         unsigned last_width = dt -> max_col_widths[dt -> nb_columns - 1];
         fprintf(stderr, "warning: missing column sizes, extending the last provided one %d\n", last_width);
         dt -> max_col_widths = (unsigned *) realloc(dt -> max_col_widths, sizeof(unsigned) * nb_columns);
         dt -> col_overflow_action = (char *) realloc(dt -> col_overflow_action, sizeof(char) * nb_columns);
         char last_overflow_action = dt -> col_overflow_action[dt -> nb_columns - 1];
         for (unsigned i = dt -> nb_columns; i < nb_columns; i++) {
            dt -> max_col_widths[i] = last_width;
            dt -> col_overflow_action[i] = last_overflow_action;
         }
         dt -> nb_columns = nb_columns;
      }
      else if (dt -> nb_columns > nb_columns) {
         fprintf(stderr, "warning: too many columns widths given, %d vs actual %d, ignoring the %d in excess\n", dt -> nb_columns, nb_columns, dt -> nb_columns - nb_columns);
         dt -> nb_columns = nb_columns;
      }
      printConstrained(dt, column_names, nb_columns);

      char *header_line = NULL;
      for (unsigned i = 0; i < nb_columns; i++) {
         header_line = (char *) realloc(header_line, sizeof(char) * dt -> actual_col_widths[i]);
         fillStr(header_line, '-', dt -> actual_col_widths[i]);
         printf("%s%s", header_line, i < nb_columns - 1 ? dt -> COL_SEPARATOR : "");
      }
      printf("\n");
      free(header_line);
      dt -> bHeaderPrinted = 1;
   }
   printConstrained(dt, column_values, nb_columns);
   dt -> NR++;
   return 0;
}

/* select_callback_array */
/*
returns the database rows into the gawk associative array passed as a parameter;
its structure is as follows:
array[0] = sub-array_0
array[1] = sub-array_1
...
array[count-1] = sub-array_count-1
where the sub-arrays are associative arrays too with structure:
sub-array0[col1] = value1
sub-array0[col2] = value2
...
sub-array0[coln] = valuen
sub-array1[col1] = value1
...
sub-array1[coln] = valuen
...
sub-arraym[col1] = value1
...
sub-arraym[coln] = valuen
Said otherwise, the returned array is an array of associative arrays whose first dimension contains the rows and second dimension contains the columns, 
i.e. it's a table of database rows, an array of hashes, or a list of dictionaries;
in Perl lingo, it's an array of hashes;
in Python, it would be a list of dictionaries;
*/
static int select_callback_array(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);
      cleanup(dt);
      return 0;
   }

   awk_array_t row;
   awk_value_t value;
   awk_value_t row_index;
   awk_value_t col_index, col_value;

   if (!dt -> bHeaderPrinted) {
      // create the main array once;
      // doesn't work; keep the code in case a fix is found;
      //db_table = create_array();
      //value.val_type = AWK_ARRAY;
      //value.array_cookie = db_table; 
   
      // add it to gawk's symbol table so it appear magically in gawk's script namespace;
      //if (!sym_update(dt -> array_name, &value)) 
      //   fatal(ext_id, "in select_callback_array, creation of table array %s failed\n", dt -> array_name);
      //db_table = value.array_cookie;

      // nothing special to do here;
      dt -> bHeaderPrinted = 1;
   }
   char index_str[50];
   unsigned len = sprintf(index_str, "%ld", dt -> NR);
   make_const_string(index_str, len, &row_index);

   // create the sub-array for each row;
   // indexes are the column names and values are the column values;
   row = create_array();
   value.val_type = AWK_ARRAY;
   value.array_cookie = row;
   if (! set_array_element(dt -> gawk_array, &row_index, &value))
      fatal(ext_id, "in select_callback_array, creation of row array %ld failed\n", dt -> NR);
   row  = value.array_cookie;

   for (unsigned i = 0; i  < nb_columns; i++) {
      make_const_string(column_names[i], strlen(column_names[i]), &col_index);
      make_const_string(column_values[i], strlen(column_values[i]), &col_value);
      if (! set_array_element(row, &col_index, &col_value))
         fatal(ext_id, "in select_callback_array, assigned value %s to index %s at row %ld failed\n", column_values[i], column_names[i], dt -> NR);
   }

   dt -> NR++;
   return 0;
}

/* do_sqlite_select */
/*
generic select entry point;
possible invocations:
Case: call profile:                                          --> action;
   0: sqlite_select(db, sql_stmt)                            --> draft output, default fixed width columns, with needed column widths list at the end;
   1: sqlite_select(db, sql_stmt, "")                        --> raw output, no truncation, | as default separator;
   2: sqlite_select(db, sql_stmt, "separator-string")        --> raw output, no truncation, use given string as separator;
   2: sqlite_select(db, sql_stmt, "list-of-columns-widths")  --> fixed sized column output, a w|t|e suffix is allowed for wrapping-around or truncating too large columns without or with ellipsis;
   3: sqlite_select(db, sql_stmt, dummy, gawk_array)         --> raw output into the gawk associative array gawk_array;
the appropriate callback will be called based on the invocation's profile;
returns -1 if error, 0 otherwise;
*/
static awk_value_t *
do_sqlite_select(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_handle, sql_stmt, col_sizes;
   int ret = 0;

   assert(result != NULL);

   if (!get_argument(0, AWK_NUMBER, &db_handle)) {
      fprintf(stderr, "in do_sqlite_select, cannot get the db handle argument\n");
      ret = -1;
      goto quit;
   }
   if (!get_argument(1, AWK_STRING, &sql_stmt)) {
      fprintf(stderr, "do_sqlite_select, cannot get the sql_stmt argument\n");
      ret = -1;
      goto quit;
   }
   DISPLAYED_TABLE dt;
   dt.sqlStmt = strdup(sql_stmt.str_value.str);
   dt.bHeaderPrinted = 0;
   dt.COL_SEPARATOR = "  ";
   dt.ELLIPSIS = "..."; dt.len_ellipsis = strlen(dt.ELLIPSIS); 
   dt.MAX_WIDTH = 15;
   dt.MIN_WIDTH = dt.len_ellipsis + 5;
   dt.nb_columns = 0;
   dt.max_col_widths = NULL;
   dt.actual_col_widths = NULL;
   dt.col_overflow_action = NULL;
   dt.headers = NULL;
   dt.widest_column = 0;
   dt.size_list = NULL;
   dt.NR = 0;
   dt.bStoreOrDisplay = 1;
   dt.gawk_array = NULL;
   dt.bEpilog = 0;

   unsigned short bCase;
   unsigned short bFoundSeparator = 0;
   char *errorMessg = NULL;

   if (4 == nargs) {
      bCase = 3;
      awk_value_t value;
      if (!get_argument(3, AWK_ARRAY, &value))
         fatal(ext_id, "in do_sqlite_select, accessing the gawk array parameter failed\n");
      dt.gawk_array = value.array_cookie;
      clear_array(dt.gawk_array);
   }
   else if (get_argument(2, AWK_STRING, &col_sizes)) {
      if (0 == strlen(col_sizes.str_value.str))
         // raw, unformatted output;
         bCase = 1;
      else {
         // columns are output with constrained widths and possible wrapping-around or truncation with/without ellipsis;
         bCase = 2;
         char *width_str, *tmp_str, *next_tok_iter;
         long width_value;
         tmp_str = strdup(col_sizes.str_value.str);
         next_tok_iter = tmp_str;
         while ((width_str = strtok(next_tok_iter, " ,/"))) {
            errno = 0;
            char *overflow_action_suffix;
            width_value = strtol(width_str, &overflow_action_suffix, 10);
            if ((errno == ERANGE && (width_value == LONG_MAX || width_value == LONG_MIN)) ||
                (errno != 0 && width_value == 0) ||
                (width_value < 0)) {
               if (0 == dt.nb_columns) {
                  // let's take this as a separator for select_callback_raw();
                  dt.COL_SEPARATOR = width_str;
                  bFoundSeparator = 1;
                  bCase = 0;
               }
               else {
                  fprintf(stderr, "invalid number in size string [%s], exiting ...\n", width_str);
                  if (dt.nb_columns > 0) {
                     free(dt.max_col_widths);
                     free(dt.col_overflow_action);
                  }
                  free(tmp_str);
                  ret = -1;
                  goto quit;
               }
            }
            else if (bFoundSeparator) {
               // nothing else is accepted after a separator;
               fprintf(stderr, "separator [%s] must be the only parameter in raw output, exiting ...\n", dt.COL_SEPARATOR);
               free(tmp_str);
               ret = -1;
               goto quit;
            }
            dt.max_col_widths = (unsigned *) realloc(dt.max_col_widths, sizeof(unsigned) * (dt.nb_columns + 1));
            dt.col_overflow_action = (char *) realloc(dt.col_overflow_action, sizeof(char) * (dt.nb_columns + 1));
            dt.max_col_widths[dt.nb_columns] = width_value;
            if (NULL == overflow_action_suffix || ! *overflow_action_suffix)
               dt.col_overflow_action[dt.nb_columns] = 'e';
            else if ('t' == *overflow_action_suffix || 'w' == *overflow_action_suffix || 'e' == *overflow_action_suffix)
               dt.col_overflow_action[dt.nb_columns] = *overflow_action_suffix;
            else if (0 == dt.nb_columns) {
               bCase = 0;
               dt.COL_SEPARATOR = strdup(width_str);
               bFoundSeparator = 1;
               dt.nb_columns++;
               break;      
            }
            else {
               // allowed overflow suffix is one of t, w or e;
               fprintf(stderr, "invalid overflow action suffix [%c]; it must be one of w (wrap-around), t (truncation without ellipsis) or e (truncation with ellipsis), exiting ...\n", *overflow_action_suffix);
               free(tmp_str);
               ret = -1;
               goto quit;
            }
            if ('e' == dt.col_overflow_action[dt.nb_columns] && width_value < dt.len_ellipsis) {
               fprintf(stderr, "column [%d] has maximum width [%ld] and requests a truncation with ellipsis [%s] but a minimum width of [%d] characters is necessary for this, assuming that minimum width\n", dt.nb_columns, width_value, dt.ELLIPSIS, dt.len_ellipsis);
               dt.max_col_widths[dt.nb_columns] = dt.len_ellipsis;
            }
            dt.nb_columns++;
            next_tok_iter = NULL;
         }
         free(tmp_str);
      }
   }
   else
      // draft output, i.e. default column width, possible truncation, optimal column widths listed at the end;
      bCase = 0;

   switch (bCase) {
      case 0: ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_draft, &dt, &errorMessg);
              break;
      case 1: if (!bFoundSeparator)
                 // use default separator
                 dt.COL_SEPARATOR = "|";
              ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_raw, &dt, &errorMessg);
              break;
      case 2: ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_sized, &dt, &errorMessg);
              break;
      case 3: ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_array, &dt, &errorMessg);
              break;
      default: fprintf(stderr, "programming error: did you forget a case?\n");
   }
   if (SQLITE_OK == ret) {
      dt.bEpilog = 1;
      0 == bCase ? select_callback_draft(&dt, 0, NULL, NULL) :
      1 == bCase ? select_callback_raw(&dt, 0, NULL, NULL) :
      2 == bCase ? select_callback_sized(&dt, 0, NULL, NULL) :
      3 == bCase ? select_callback_array(&dt, 0, NULL, NULL) :
      0;
   }
   else {
      fprintf(stderr, "do_sqlite_select, SQL error %s while executing [%s]\n", errorMessg, sql_stmt.str_value.str);
      sqlite3_free(errorMessg);
   }
quit:
   return make_number(ret, result);
}

/* these are the exported functions along with their min and max arities; */
static awk_ext_func_t func_table[] = {
        {"sqlite_open", do_sqlite_open, 1, 1, awk_false, NULL},
        {"sqlite_close", do_sqlite_close, 1, 1, awk_false, NULL},
        {"sqlite_exec", do_sqlite_exec, 6, 2, awk_false, NULL},
        {"sqlite_select", do_sqlite_select, 4, 2, awk_false, NULL},
};

static awk_bool_t (*init_func)(void) = init_sqlite_handles;

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, sqlite_gawk, "")

Quite the extension! Sorry for this lengthy listing, but there is a lot going on here.
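Before building it, here is a minimal gawk sketch recapping the invocation profiles documented at the top of do_sqlite_select; the database file, table and column names below are made up for illustration only:

@load "sqlite_gawk"
BEGIN {
   db = sqlite_open("some.db")
   rc = sqlite_select(db, "SELECT a, b FROM t")              # draft output, optimum column widths listed at the end
   rc = sqlite_select(db, "SELECT a, b FROM t", "")          # raw output with the default | separator
   rc = sqlite_select(db, "SELECT a, b FROM t", ";")         # output with the given string as column separator
   rc = sqlite_select(db, "SELECT a, b FROM t", "10e 30w")   # fixed widths: 10 with ellipsis for a, 30 with wrap-around for b
   rc = sqlite_select(db, "SELECT a, b FROM t", 0, rows)     # rows returned in the gawk array rows[row_nb][column_name]
   sqlite_close(db)
}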
Next, let’s build gawk and the new extension. Here are the incantations:

pwd
/home/dmadmin/dmgawk/gawk-4.2.1/extension
./configure
make
cd .libs; gcc -o sqlite_gawk.so -shared sqlite_gawk.o ../sqlite3.o -pthread
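
At this point, a quick smoke test can confirm that the freshly built shared object loads and returns a database handle; this is just a sketch, reusing the test.db path that the test script below also uses, and run from the gawk-4.2.1 top directory:

cd /home/dmadmin/dmgawk/gawk-4.2.1
AWKLIBPATH=extension/.libs ./gawk -l sqlite_gawk 'BEGIN {
   db = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", db
   sqlite_close(db)
}'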

That’s it. As said elsewhere, an additional sudo make install will install the new gawk and its extension to their canonical locations, i.e. /usr/local/bin/gawk for gawk and /usr/local/lib/gawk for the extensions. But for the moment, let’s test it; for this, we still need a test gawk script.
vi tsqlite.awk

# test program for the sqlite_gawk, interface to sqlite3;
# Cesare Cervini
# dbi-services.com
# 8/2018

@load "sqlite_gawk"

BEGIN {
   my_db = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", my_db

   my_db2 = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", my_db2

   sqlite_close(my_db)
   sqlite_close(my_db2)

   my_db = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", my_db

   printf "\n"

   rc = sqlite_exec(my_db, "CREATE TABLE IF NOT EXISTS test1(n1 NUMBER, s1 TEXT, s2 CHAR(100))")
   print "return code = ", rc

   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(100, \"hello1\", \"hello0101\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(200, \"hello2\", \"hello0102\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(300, \"hello3\", \"hello0103\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello4\", \"hello0104\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello5 with spaces       \", \"hello0105 with spaces              \")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello6 with spaces        \", \"hello0106   \")")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT * FROM test1";
   split("", a_test)
   print "sqlite_select(my_db, " stmt ", 0, a_test)"
   rc = sqlite_select(my_db, stmt, 0, a_test)
   dumparray("a_test", a_test);
   for (row in a_test) {
      printf("row %d: ", row)
      for (col in a_test[row])
         printf("  %s = %s", col, a_test[row][col])
      printf "\n"
   }
   printf "\n"

   # print in draft format;
   stmt = "SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' ORDER BY 1"
   print "sqlite_select(my_db, \"" stmt "\")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # print in draft format;
   stmt = "SELECT sql FROM sqlite_master ORDER BY tbl_name, type DESC, name"
   print "sqlite_select(my_db, \"" stmt "\", \"100\")"
   rc = sqlite_select(my_db, stmt , "100")
   print "return code = ", rc
   printf "\n"

   # print in draft format;
   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # print in raw format with non default separator;
   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ", \"||\")"
   rc = sqlite_select(my_db, stmt, "||")
   print "return code = ", rc
   printf "\n"

   # now that we know the needed column widths, let's use them;
   # trailing spaces are removed to compact the column somewhat;
   stmt = "SELECT * FROM test1" 
   print "sqlite_select(my_db, " stmt ", \"3 18 21\")"
   rc = sqlite_select(my_db, stmt, "3 18 21")
   print "return code = ", rc
   printf "\n"

   # print in raw format, with default | separator;
   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ", \"\")"
   rc = sqlite_select(my_db, stmt, "")
   print "return code = ", rc
   printf "\n"

   stmt = "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello6-with-spaces        \", \"hello0106-12345\")" 
   print "sqlite_exec(my_db, " stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ", \"2e 15e 10w\")"
   rc = sqlite_select(my_db, stmt, "2e 15e 10w")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT count(*) FROM test1"
   print "sqlite_select(my_db," stmt ")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "DELETE FROM test1"
   print "sqlite_exec(my_db, " stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT count(*) FROM test1"
   print "sqlite_select(my_db," stmt ")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   rc = sqlite_exec(my_db, "CREATE TABLE IF NOT EXISTS test_with_blob(n1 NUMBER, my_blob BLOB)")
   print "return code = ", rc

   rc = sqlite_exec(my_db, "DELETE FROM test_with_blob")
   print "return code = ", rc

   stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(1, readfile(\"gawk-4.2.1.tar.gz\"))" 
   print "sqlite_exec(my_db," stmt ")"
   #rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   
   stmt = "SELECT n1, writefile('yy' || rowid, my_blob) FROM test_with_blob" 
   print "sqlite_select(my_db, " stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # file too large, > 3 Gb, fails silently;
   # so don't do it;
   # stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile(\"/home/dmadmin/setup_files/documentum.tar\"))" 

   # this one is OK at 68 Mb;
   stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile(\"/home/dmadmin/setup_files/instantclient-basic-linux.x64-12.2.0.1.0.zip\"))" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT n1, writefile('\"yy' || rowid || '\"', my_blob) FROM test_with_blob where n1 = 1000" 
   print "sqlite_select(my_db, " stmt ")"
   #rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(5000, readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c'))" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c') where n1 = 1000" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # xx is a 990'000'000 bytes file; the import using a memory buffer of that size takes some time to complete;
   # the incremental blob I/Os below seem faster;
   # to make one, use: dd if=/dev/zero of=xx count=990 bs=1000000
   stmt = "UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/xx') where n1 = 1000" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # this is needed to enforce the typing of a_test as an array;
   # split("", a_test)
   delete(a_test)
   print "sqlite_select(my_db, \"select rowid from test_with_blob where n1 = 1000 limit 1\", 0, a_test)"
   sqlite_select(my_db, "select rowid from test_with_blob where n1 = 1000 limit 1", 0, a_test)
   print "after getting blob"
   dumparray("a_test", a_test)
   print "sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', " a_test[0]["rowid"] ", writefile(~/dmgawk/my_blob_" a_test[0]["rowid"] "))"
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "writefile(/home/dmadmin/dmgawk/my_blob_" a_test[0]["rowid"] ")")
   print "return code = ", rc
   printf "\n"

   #print "sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', " a_test[0]["rowid"] ", readfile(/home/dmadmin/setup_files/documentum.tar))"
   #rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/setup_files/documentum.tar)")
   #rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/setup_files/patch.bin)")
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/dmgawk/xx)")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT n1, hex(my_blob) FROM test_with_blob where n1 = 2000 limit 1" 
   stmt = "SELECT n1, my_blob FROM test_with_blob where n1 = 2000 limit 1" 
   stmt = "SELECT n1, substr(my_blob, 1) FROM test_with_blob where n1 = 2000 limit 1" 
   rc = sqlite_select(my_db, stmt)
   rc = sqlite_select(my_db, stmt, "10 100w")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT n1, replace(my_blob, '\n', '\\n') as 'noLF' FROM test_with_blob where n1 = 5000 limit 2" 
   print "sqlite_select(my_db," stmt ", 10, 100w)"
   rc = sqlite_select(my_db, stmt, "10, 100w")
   print "return code = ", rc
   printf "\n"

   sqlite_close(my_db)

   exit(0)
}

function dumparray(name, array, i) {
   for (i in array)
      if (isarray(array[i]))
         dumparray(name "[\"" i "\"]", array[i])
      else
         printf("%s[\"%s\"] = %s\n", name, i, array[i])
}
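
Also note the special six-argument form of sqlite_exec used above for the incremental blob I/Os; stripped of the surrounding test code, the calling pattern boils down to the sketch below (the /tmp paths are hypothetical, the other names are taken from the script):

   # six arguments: db handle, database name, table, column, rowid, and a readfile()/writefile() pseudo-call;
   rowid = a_test[0]["rowid"]
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", rowid, "writefile(/tmp/my_blob_" rowid ")")
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", rowid, "readfile(/tmp/some_big_file)")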

To execute the test:

AWKLIBPATH=gawk-4.2.1/extension/.libs gawk-4.2.1/gawk -f tsqlite.awk
db opened: 0
db opened: 0
db opened: 0
 
return code = 0
return code = 0
return code = 0
return code = 0
return code = 0
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, 0, a_test)
6 rows selected
a_test["0"]["n1"] = 100
a_test["0"]["s1"] = hello1
a_test["0"]["s2"] = hello0101
a_test["1"]["n1"] = 200
a_test["1"]["s1"] = hello2
a_test["1"]["s2"] = hello0102
a_test["2"]["n1"] = 300
a_test["2"]["s1"] = hello3
a_test["2"]["s2"] = hello0103
a_test["3"]["n1"] = 400
a_test["3"]["s1"] = hello4
a_test["3"]["s2"] = hello0104
a_test["4"]["n1"] = 400
a_test["4"]["s1"] = hello5 with spaces
a_test["4"]["s2"] = hello0105 with spaces
a_test["5"]["n1"] = 400
a_test["5"]["s1"] = hello6 with spaces
a_test["5"]["s2"] = hello0106
row 0: n1 = 100 s1 = hello1 s2 = hello0101
row 1: n1 = 200 s1 = hello2 s2 = hello0102
row 2: n1 = 300 s1 = hello3 s2 = hello0103
row 3: n1 = 400 s1 = hello4 s2 = hello0104
row 4: n1 = 400 s1 = hello5 with spaces s2 = hello0105 with spaces
row 5: n1 = 400 s1 = hello6 with spaces s2 = hello0106
 
sqlite_select(my_db, "SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' ORDER BY 1")
name
--------
test
test1
test_...
3 rows selected
1 columns displayed
 
Optimum column widths
=====================
for query: SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' ORDER BY 1
name 14
return code = 0
 
sqlite_select(my_db, "SELECT sql FROM sqlite_master ORDER BY tbl_name, type DESC, name", "100")
sql
----------------------------------------------------------------------------------------------------
CREATE TABLE test(a1 number)
CREATE TABLE test1(n1 NUMBER, s1 TEXT, s2 CHAR(100))
CREATE TABLE test_with_blob(n1 NUMBER, my_blob BLOB)
3 rows selected
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1)
n1        s1        s2
--------  --------  --------
100       hello1    hello...
200       hello2    hello...
300       hello3    hello...
400       hello4    hello...
400       hello...  hello...
400       hello...  hello...
6 rows selected
3 columns displayed
 
Optimum column widths
=====================
for query: SELECT * FROM test1
n1 3
s1 18
s2 21
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "||")
n1      ||s1      ||s2
--------||--------||--------
100     ||hello1  ||hello...
200     ||hello2  ||hello...
300     ||hello3  ||hello...
400     ||hello4  ||hello...
400     ||hello...||hello...
400     ||hello...||hello...
6 rows selected
3 columns displayed
 
Optimum column widths
=====================
for query: SELECT * FROM test1
n1 3
s1 18
s2 21
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "3 18 21")
n1   s1                  s2
---  ------------------  ---------------------
100  hello1              hello0101
200  hello2              hello0102
300  hello3              hello0103
400  hello4              hello0104
400  hello5 with spaces  hello0105 with spaces
400  hello6 with spaces  hello0106
6 rows selected
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "")
n1|s1|s2
100|hello1|hello0101
200|hello2|hello0102
300|hello3|hello0103
400|hello4|hello0104
400|hello5 with spaces |hello0105 with spaces
400|hello6 with spaces |hello0106
6 rows selected
return code = 0
 
sqlite_exec(my_db, INSERT INTO test1(n1, s1, s2) VALUES(400, "hello6-with-spaces ", "hello0106-12345"))
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "2e 15e 10w")
column [0] has maximum width [2] and requests a truncation with ellipsis [...] but a minimum width of [3] characters is necessary for this, assuming that minimum width
n1   s1               s2
---  ---------------  ----------
100  hello1           hello0101
200  hello2           hello0102
300  hello3           hello0103
400  hello4           hello0104
400  hello5 with ...  hello0105
                      with space
                      s
400  hello6 with ...  hello0106
400  hello6-with-...  hello0106-
                      12345
7 rows selected
return code = 0
 
sqlite_select(my_db,SELECT count(*) FROM test1)
count(*)
--------
7
1 rows selected
1 columns displayed
 
Optimum column widths
=====================
for query: SELECT count(*) FROM test1
count(*) 8
return code = 0
 
sqlite_exec(my_db, DELETE FROM test1)
return code = 0
 
sqlite_select(my_db,SELECT count(*) FROM test1)
count(*)
--------
0
1 rows selected
1 columns displayed
 
Optimum column widths
=====================
for query: SELECT count(*) FROM test1
count(*) 8
return code = 0
 
return code = 0
return code = 0
sqlite_exec(my_db,INSERT INTO test_with_blob(n1, my_blob) VALUES(1, readfile("gawk-4.2.1.tar.gz")))
return code = 0
 
sqlite_select(my_db, SELECT n1, writefile('yy' || rowid, my_blob) FROM test_with_blob)
return code = 0
 
sqlite_exec(my_db,INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile("/home/dmadmin/setup_files/instantclient-basic-linux.x64-12.2.0.1.0.zip")))
return code = 0
 
sqlite_select(my_db, SELECT n1, writefile('"yy' || rowid || '"', my_blob) FROM test_with_blob where n1 = 1000)
return code = 0
 
sqlite_exec(my_db,INSERT INTO test_with_blob(n1, my_blob) VALUES(5000, readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c')))
return code = 0
 
sqlite_exec(my_db,UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c') where n1 = 1000)
return code = 0
 
sqlite_exec(my_db,UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/xx') where n1 = 1000)
return code = 0
 
sqlite_select(my_db, "select rowid from test_with_blob where n1 = 1000 limit 1", 0, a_test)
1 rows selected
after getting blob
a_test["0"]["rowid"] = 1
sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', 1, writefile(~/dmgawk/my_blob_1))
return code = 0
 
sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', 1, readfile(/home/dmadmin/setup_files/documentum.tar))
return code = 0
 
sqlite_select(my_db)
return code = 0
 
sqlite_select(my_db,SELECT n1, replace(my_blob, '\n', '\n') as 'noLF' FROM test_with_blob where n1 = 5000 limit 2, 10, 100w)
n1 noLF
---------- ----------------------------------------------------------------------------------------------------
5000 /*\n * sqlite-gawk.c - an interface to sqlite() library;\n * Cesare Cervini\n * dbi-services.com\n *
8/2018\n*/\n#ifdef HAVE_CONFIG_H\n#include \n#endif\n\n#include \n#include \n#include \n#include \n#include \n\n#include \n#inc
lude \n\n#include "gawkapi.h"\n\n// extension;\n#include \n#include \n#
include \n#include \n#include \n#include \n#inclu
de \n#include \n\n#include "gettext.h"\n#define _(msgid) gettext(msgid)\n#defi
ne N_(msgid) msgid\n\nstatic const gawk_api_t *api; /* for convenience macros to work */\nstatic a
wk_ext_id_t ext_id;\nstatic const char *ext_version = "an interface to sqlite3: version 1.0";\n\nint
plugin_is_GPL_compatible;\n\n/* internal structure and variables */\n/*\ninternally stores the db h
...
c)(void) = init_sqlite_handles;\n\n/* define the dl_load function using the boilerplate macro */\n\n
dl_load_func(func_table, sqlite_gawk, "")\n\n
1 rows selected
return code = 0

That was a very long second part. If you are still there, please turn now to Part III for an explanation of all this.

 

The article A SQLite extension for gawk (part II) appeared first on Blog dbi services.
