Feed aggregator

I NEED YOU HELP WITH THIS QUERY! URGENT.

Tom Kyte - 11 hours 59 min ago
Hello I need your help, i don't know how to do this, I'm noob in this field and if you can help I really really gonna be happy. Here is... Assume that the following query is taking a very long time to run. The logins table has 10M records,...
Categories: DBA Blogs

How to handle two diff types in a single column ? is there any work around ?

Tom Kyte - 11 hours 59 min ago
Hi All, I need your inputs and suggestions for the below ; I have 2 column say A and B . In simple words they want to find percentage .But before that they are performing few validations In excel they had these validation before deriving t...
Categories: DBA Blogs

show a grid of deleted records

Tom Kyte - 11 hours 59 min ago
Hi Tom, please i need your help ASAP !!! i have a grid of (agents) in oracle apex with the button DELETE , and i want to know the steps how to show deleted rows in another grid
Categories: DBA Blogs

Oracle database 19c: documentation released

Dietrich Schroff - Fri, 2019-02-22 11:45
In January, Oracle released the documentation for 19c:


If you are interested in the new features, take a look here:
https://docs.oracle.com/en/database/oracle/oracle-database/19/whats-new.html

Also very nice is this link: the Interactive Architecture Diagram, which gives a very good introduction to Oracle Database with many pictures like this one:


I find this new feature particularly interesting:
Root Scripts Automation Support for Oracle Database Installation
Starting with Oracle Database 19c, the database installer, or setup wizard, provides options to set up permissions to run the root configuration scripts automatically, as required, during a database installation. You continue to have the option to run the root configuration scripts manually.
Setting up permissions for root configuration scripts to run without user intervention can simplify database installation and help avoid inadvertent permission errors.
But 19c has not yet been released for on-premises deployments, so I have to wait to test this feature:

 Release date for Linux: Q2 2019?

Documentum – Documents not transferred to WebConsumer

Yann Neuhaus - Fri, 2019-02-22 10:33

Receiving an incident is never a pleasure, but sharing the solution afterwards always is!
A few days ago, I received an incident regarding WebConsumer on a production environment, reporting that documents were not transferred to WebConsumer as expected.

The issue didn't happen for all documents, which is why I immediately suspected the High Availability configuration of this environment. Moreover, I knew that the IDS is installed only on CS1 (as designed). So I checked the JMS logs on both Content Servers:
CS1 : No errors found there.

CS2 : Errors found :

2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-11 04:05:39,141 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.044s
2019-02-11 04:05:39,773 UTC INFO  [stdout] (default task-89) 2019-02-11 04:05:39,773 UTC ERROR [com.domain.repository1.dctm.methods.WCPublishDoc] (default task-89) DfException:: THREAD: default task-89; 
MSG: [DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND]error:  "The app_server_name/servlet_name 'WebCache' is not specified in dm_server_config/dm_jms_config."; ERRORCODE: 100; NEXT: null

To cross check:

On CS1:

[dmadmin@CONTENT_SERVER1 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER1 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
0

On CS2:

[dmadmin@CONTENT_SERVER2 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER2 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
60

So I checked the app servers list configured in the dm_server_config:

On CS1:

API> retrieve,c,dm_server_config
...
3d01e24080000102
API> dump,c,3d01e24080000102
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: WebCache
                               [4]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://CONTENT_SERVER1:6679/services/scs/publish
                               [4]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Good, WebCache is configured here.

On CS2:

API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Ok! The root cause of this error is clear now.

The method concerned is WCPublishDocumentMethod, but when is it applied, and by whom?

I noticed that in the log above:

D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'

So, WCPublishDocumentMethod is applied by the D2LifecycleConfig. But that, in turn, is applied when, and by whom?
I searched the server.log file and found:

2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : User  : repository1
2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : New session manager creation.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager set identity.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager get session.
2019-02-11 04:05:06,006 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem ID: 4a01e2408002bd3d
2019-02-11 04:05:06,023 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow tracker...
2019-02-11 04:05:06,031 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow config...
2019-02-11 04:05:06,032 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Get packaged documents...
2019-02-11 04:05:06,067 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Apply on masters...
2019-02-11 04:05:06,068 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem acquire...
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Applying lifecycle (Target state : On Approved / Transition :promote
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : No workflow properties
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching target state name and/or transition type.
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target state name :On Approved
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target transition type :promote
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Performing D2 lifecycle on :FRM-8003970 (0901e240800311cd)
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching associated D2 lifecycle...
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::getInstancesForObject start time 0.000s
...
2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
...

Hmmm, so the D2WFLifeCycleMethod is applied by the job D2JobLifecycleBatch. I checked the target server of this job:

1> SELECT target_server FROM dm_job WHERE object_name='D2JobLifecycleBatch';
2> go
target_server                                                                                                                                                                               
-------------
 
(1 row affected)

As I suspected, no target server is defined! That means the job can be executed on "Any Running Server", which is why this method was executed on CS2, even though CS2 is not configured for it.

Now, two solutions are possible:
1. Change the target_server to use only CS1 (idql):

UPDATE dm_job OBJECTS SET target_server='repository1.repository1@CONTENT_SERVER1' WHERE object_name='D2JobLifecycleBatch';

2. Add the app server WebCache to CS2, pointing to CS1 (iapi):

API>fetch,c,dm_server_config
API>append,c,l,app_server_name
WebCache
API>append,c,l,app_server_uri

https://CONTENT_SERVER1:6679/services/scs/publish

API>save,c,l

Check after update:
API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
                               [4]: WebCache
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
                               [4]: https://CONTENT_SERVER1:6679/services/scs/publish
...

We chose the second option, because:
– The job is handled by the application team;
– Modifying the job to run only on CS1 would resolve this case, but if the method were applied by another job or manually on CS2, we would get the same error again.

After this update, no error has been recorded in the log file:

...
2019-02-12 04:06:10,948 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.007s
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : No ARG_RETURN_ID in mapArguments
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : newObject created, user session used: 0801e2408023f714
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.D2SysObject                    : getFolderIdFromCache: got folder: /System/D2/Data/c6_method_return, object id: 0b01e2408000256b, docbase: repository1
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : mapArguments: {-method_return_id=0801e2408023f714}
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : origArguments: {-id=0901e24080122a59}
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodName: WCPublishDocumentMethod
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodParams: -id 0901e24080122a59 -user_name dmadmin -docbase_name repository1
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : Start WCPublishDocumentMethod method with JMS (Java Method Services) runLocally hint set is false
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : key: -method_return_id, and value: 0801e2408023f714
...

I hope this blog will help you to quickly resolve this kind of incident.

The article Documentum – Documents not transferred to WebConsumer appeared first on Blog dbi services.

MySQL 8 and Security – Encryption of binary logs

Yann Neuhaus - Fri, 2019-02-22 10:13

As I discussed in some of my recent talks at conferences (at the DOAG for example), MySQL 8 came out with new features which bring lots of improvements in terms of security.

“At-Rest” encryption has been available for several releases now:
– InnoDB Tablespace Encryption: since 5.7.11
– Redo and Undo Log Data Encryption: since 8.0.1
Now, starting with version 8.0.14, you can also encrypt binary and relay log files. In this blog post we will see how to configure that and do some tests.

Case 1: Binary log files are not encrypted

Binary log file encryption is disabled by default:

mysql> show variables like 'binlog_encryption';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| binlog_encryption | OFF   |
+-------------------+-------+
1 row in set (0.02 sec)

With this configuration, we could extract sensitive information with simple OS commands without even connecting to the database, which means that if an OS user account were compromised, we could face serious security issues.
First of all, I create a database and a table and insert some sensitive information into it:

mysql> create database cards;
Query OK, 1 row affected (0.01 sec)
mysql> use cards;
Database changed
mysql> CREATE TABLE cards.banking_card (id int (128), day int(2), month int(2), year int(4), type varchar(128), code varchar(128));
Query OK, 0 rows affected (0.04 sec)
mysql> INSERT INTO cards.banking_card VALUES (1, 8, 3, 1984, 'secret code', '01-234-5678');
Query OK, 1 row affected (0.01 sec)

I check which binary log file is currently in use:

mysql> SHOW BINARY LOGS;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| mysqld8-bin.000001 |      1384 | No        |
| mysqld8-bin.000002 |       178 | No        |
| mysqld8-bin.000003 |       974 | No        |
+--------------------+-----------+-----------+
mysql> SHOW BINLOG EVENTS IN 'mysqld8-bin.000003';
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Log_name           | Pos | Event_type     | Server_id | End_log_pos | Info                                                                                                                                                    |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysqld8-bin.000003 |   4 | Format_desc    |         8 |         124 | Server ver: 8.0.15, Binlog ver: 4                                                                                                                       |
| mysqld8-bin.000003 | 124 | Previous_gtids |         8 |         155 |                                                                                                                                                         |
| mysqld8-bin.000003 | 155 | Anonymous_Gtid |         8 |         232 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 232 | Query          |         8 |         341 | create database cards /* xid=17 */                                                                                                                      |
| mysqld8-bin.000003 | 341 | Anonymous_Gtid |         8 |         420 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 420 | Query          |         8 |         635 | use `cards`; CREATE TABLE cards.banking_card (id int (128), day int(2), month int(2), year int(4), type varchar(128), code varchar(128)) /* xid=23 */ |
| mysqld8-bin.000003 | 635 | Anonymous_Gtid |         8 |         714 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 714 | Query          |         8 |         790 | BEGIN                                                                                                                                                   |
| mysqld8-bin.000003 | 790 | Table_map      |         8 |         865 | table_id: 66 (cards.banking_card)                                                                                                                     |
| mysqld8-bin.000003 | 865 | Write_rows     |         8 |         943 | table_id: 66 flags: STMT_END_F                                                                                                                          |
| mysqld8-bin.000003 | 943 | Xid            |         8 |         974 | COMMIT /* xid=24 */                                                                                                                                     |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
11 rows in set (0.00 sec)

With the two following commands, for example (od to dump files in octal and other formats, and xxd to make a hex dump), I can easily read the content of my binary log file:

# od -c mysqld8-bin.000003
0001440         a 003           B           001
0001460   005   c   a   r   d   s   016   b   a   n   k   i   n   g
0001500   _   c   a   r   d   ?     006 003 003 003 003 017 017 004
0001520   002   002   ? 001 001   002 003 374 377   005 224 022
0001540 332 375   ~   n   \ 036  \b         N       257 003
0001560           B           001   002   006 377
0001600   001        \b       003       300  \a  
0001620    \v     s   e   c   r   e   t       c   o   d   e  \v  
0001640   0   1   -   2   3   4   -   5   6   7   8   9   N   _ 312 375

# xxd mysqld8-bin.000003
00001f0: ff00 1300 6361 7264 7300 4352 4541 5445  ....cards.CREATE
0000200: 2054 4142 4c45 2063 6172 6473 2e63 6172   TABLE cards.ban
0000210: 645f 656e 6372 7970 7465 6420 2869 6420  king_card    (id
0000220: 696e 7420 2831 3238 292c 2064 6179 2069  int (128), day i
0000230: 6e74 2832 292c 206d 6f6e 7468 2069 6e74  nt(2), month int
0000240: 2832 292c 2079 6561 7220 696e 7428 3429  (2), year int(4)
0000250: 2c20 7479 7065 2076 6172 6368 6172 2831  , type varchar(1
0000260: 3238 292c 2063 6f64 6520 7661 7263 6861  28), code varcha
0000270: 7228 3132 3829 29f5 771f aafd 7e6e 5c22  r(128)).w...~n\"
0000280: 0800 0000 4f00 0000 ca02 0000 0000 0000  ....O...........
0000290: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00002a0: 0000 0000 0000 0002 0200 0000 0000 0000  ................
00002b0: 0300 0000 0000 0000 e692 3509 6582 05fc  ..........5.e...
00002c0: 5301 8f38 0100 9993 56a3 fd7e 6e5c 0208  S..8....V..~n\..
00002d0: 0000 004c 0000 0016 0300 0008 000b 0000  ...L............
00002e0: 0000 0000 0005 0000 1d00 0000 0000 0001  ................
00002f0: 2000 a045 0000 0000 0603 7374 6404 ff00   ..E......std...
0000300: ff00 ff00 12ff 0063 6172 6473 0042 4547  .......cards.BEG
0000310: 494e 0567 d2c2 fd7e 6e5c 1308 0000 004b  IN.g...~n\.....K
0000320: 0000 0061 0300 0000 0042 0000 0000 0001  ...a.....B......
0000330: 0005 6361 7264 7300 0e63 6172 645f 656e  ..cards..banking
0000340: 6372 7970 7465 6400 0603 0303 030f 0f04  _card...........
0000350: 0002 0002 3f01 0100 0203 fcff 0005 9412  ....?...........
0000360: dafd 7e6e 5c1e 0800 0000 4e00 0000 af03  ..~n\.....N.....
0000370: 0000 0000 4200 0000 0000 0100 0200 06ff  ....B...........
0000380: 0001 0000 0008 0000 0003 0000 00c0 0700  ................
0000390: 000b 0073 6563 7265 7420 636f 6465 0b00  ...secret code..
00003a0: 3031 2d32 3334 2d35 3637 3839 4e5f cafd  01-234-56789N_..
00003b0: 7e6e 5c10 0800 0000 1f00 0000 ce03 0000  ~n\.............
00003c0: 0000 1800 0000 0000 0000 f9d0 d057       .............W

Yes, that’s not good news.

Case 2: Binary log files are encrypted

Now let’s try to activate encryption through the following steps:
1) Activate a keyring plugin for the master key management (for the Community Edition, it’s called keyring_file and it stores keyring data in a local file):

# mysqld_multi stop 8
mysqld_multi log file version 2.16; run: Thu Feb 21 10:54:19 2019
Stopping MySQL servers
# mysqld_multi --defaults-file=/u01/app/mysql/admin/mysqld8/etc/my.cnf start 8 > /dev/null 2>&1
# ps -eaf|grep mysqld
mysql     3362     1 13 10:58 pts/0    00:00:00 /u01/app/mysql/product/mysql-8.0.15/bin/mysqld --port=33008 --socket=/u01/app/mysql/admin/mysqld8/socket/mysqld8.sock --pid-file=/u01/app/mysql/admin/mysqld8/socket/mysqld8.pid --log-error=/u01/app/mysql/admin/mysqld8/log/mysqld8.err --datadir=/u02/mysqldata/mysqld8/ --basedir=/u01/app/mysql/product/mysql-8.0.15/ --slow_query_log=0 --slow_query_log_file=/u01/app/mysql/admin/mysqld8/log/mysqld8-slow-query.log --log-bin=/u01/app/mysql/admin/mysqld8/binlog/mysqld8-bin --innodb_flush_log_at_trx_commit=1 --sync_binlog=1 --local-infile=0 --general_log=0 --general_log_file=/u01/app/mysql/admin/mysqld8/log/mysqld8.log --lc_messages_dir=/u01/app/mysql/product/mysql-8.0.15/share/ --lc_messages=en_US --server-id=8 --log_timestamps=SYSTEM --early-plugin-load=keyring_file.so

The /u01/app/mysql/admin/mysqld8/etc/my.cnf file is defined as follows:

[mysqld8]
port                           = 33008
mysqladmin                     = /u01/app/mysql/product/mysql-8.0.15/bin/mysqladmin
...
server-id                      = 8
early-plugin-load              = keyring_file.so

2) Turn on the binlog_encryption variable:

mysql> SET PERSIST binlog_encryption=ON;
Query OK, 0 rows affected (0.03 sec)

At this point, encryption is enabled:

mysql> flush logs;
Query OK, 0 rows affected (0.05 sec)
mysql> SHOW BINARY LOGS;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| mysqld8-bin.000001 |      1384 | No        |
| mysqld8-bin.000002 |       178 | No        |
| mysqld8-bin.000003 |      1023 | No        |
| mysqld8-bin.000004 |       716 | Yes       |
| mysqld8-bin.000005 |       667 | Yes       |
+--------------------+-----------+-----------+

I insert some more data into my table:

mysql> use cards;
Database changed
mysql> INSERT INTO cards.banking_card VALUES (2, 5, 9, 1986, 'secret code', '01-234-5678');
Query OK, 1 row affected (0.02 sec)

I can try to extract some information from the binary log files on disk:

# xxd mysqld8-bin.000005
...
0000250: 2796 8d0c 9171 7109 df65 2434 9d0e 4f40  '....qq..e$4..O@
0000260: e024 07e8 9db7 ae84 f0d5 5728 90d4 905f  .$........W(..._
0000270: 9cc4 6c33 d4e1 5839 aa1f 97bb af04 b24d  ..l3..X9.......M
0000280: e36d dd05 3d0c f9d8 fbee 2379 2b85 2744  .m..=.....#y+.'D
0000290: efe4 29cb 3eff 03b8 b934 ec6b 4e9c 9189  ..).>....4.kN...
00002a0: d14b 402c 3d80 effe c34d 0d27 3be7 b427  .K@,=....M.';..'
00002b0: 5389 3208 b199 7da6 acf6 d98a 7ac3 299c  S.2...}.....z.).
00002c0: 3de0 5e12 3ed6 5849 f907 3d2c da66 f1a1  =.^.>.XI..=,.f..
00002d0: 7556 c62b b88f a3da 1a47 230b aae8 c63c  uV.+.....G#....<
00002e0: 6751 4f31 2d14 66e9 5a17 a980 4d37 2067  gQO1-.f.Z...M7 g
00002f0: 034c e0d7 b8ad 8cb4 b6d0 16e9 f6a5 3f90  .L............?.
0000300: 95aa 008e 79e1 7fda d74e ada2 f602 cc3b  ....y....N.....;
0000310: 1b61 c657 b656 3840 712d 2bb3 61b9 3c44  .a.W.V8@q-+.a..
0000390: 2a6b e68f e14c 6b3d b6ac e4cf 4f75 a828  *k...Lk=....Ou.(
00003a0: 0e21 24ad 27c7 e970 37a2 c883 46b0 ff26  .!$.'..p7...F..&
00003b0: 7c2a cf9f 9845 e4ca c067 f763 cd80 b1b3  |*...E...g.c....
00003c0: 74b8 6066 b1c0 634e fabc 9312 d0c4 ed8d  t.`f..cN........
00003d0: 880d 41b7 a1d4 3c59 bea3 63e7 ab61 11b7  ..A...<Y..c..a..
00003e0: 9f40 4555 f469 38b8 1add 1336 f03d       .@EU.i8....6.=

Well done, no clear-text data is readable anymore!
One more note: when encryption is turned on, we have to use the --read-from-remote-server (-R) option to display the content of the binary log files with mysqlbinlog; otherwise mysqlbinlog has no access to them:

# cd /u01/app/mysql/admin/mysqld8/binlog
# mysqlbinlog mysqld8-bin.000005
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
ERROR: Reading encrypted log files directly is not supported.
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
#End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

# mysqlbinlog -R --protocol TCP --host 192.168.25.2 --port 33008 --user root -p mysqld8-bin.000005
Enter password:
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
#at 4
#190221 12:01:55 server id 8  end_log_pos 124 CRC32 0x92413637  Start: binlog v 4, server v 8.0.15 created 190221 12:01:55
BINLOG '
I4VuXA8IAAAAeAAAAHwAAAAAAAQAOC4wLjE1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEwANAAgAAAAABAAEAAAAYAAEGggAAAAICAgCAAAACgoKKioAEjQA
CgE3NkGS
'/*!*/;
#at 124
#190221 12:01:55 server id 8  end_log_pos 155 CRC32 0x7d47d67e  Previous-GTIDs
#[empty]
#at 155
...

Stay tuned with MySQL 8! ;)

The article MySQL 8 and Security – Encryption of binary logs appeared first on Blog dbi services.

Now Available: HTTP Strict Transport Security (HSTS) with EBS 12.2 and 12.1

Steven Chan - Fri, 2019-02-22 09:48

We've recently updated our documentation on enabling Transport Layer Security (TLS) (see the References section below) to include guidelines for deploying HTTP Strict Transport Security (HSTS) with Oracle E-Business Suite Releases 12.2 and 12.1. HSTS allows you to specify a time period during which all browser communication must use HTTPS only. Using HSTS with Oracle E-Business Suite is an optional configuration.

If you plan to configure HSTS for Oracle E-Business Suite Release 12.2 or 12.1, we recommend the following practices:

  • Configure HSTS at the TLS termination point (for example at the Oracle HTTP Server (OHS) or the load balancer).
  • Either use the default HTTPS port (443), or specify the HTTPS port in all URLs.
  • Begin by testing HSTS with a short time period.
References

Related Articles

Categories: APPS Blogs

Do you know how to troubleshoot a long-running concurrent request?

Online Apps DBA - Fri, 2019-02-22 08:00

[BLOG] Troubleshoot/Debug Long Running Concurrent Request in Oracle EBS (R12/11i) Learn here https://k21academy.com/appsdba22 about: ✔ What is Long-running Concurrent Request ✔ The 6 basic steps to troubleshoot the Error ✔ Download your FREE Guide to know more about Concurrent Manager. […]

The post Do you know how to troubleshoot a long-running concurrent request? appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Materialized view log , logminer

Tom Kyte - Fri, 2019-02-22 07:06
Hello Tom, would you explain a little which way is better to check on delta of information to populate DWh on schema? Materialized view log or LOGMNR?
Categories: DBA Blogs

Oracle 12c Performance issue after upgrading database from 10.2.0.5.0 to 12.1.0.2.0

Tom Kyte - Fri, 2019-02-22 07:06
Hello, One of our client has upgraded Test environment Oracle database from 10.2.0.5.0 to 12.1.0.2.0 with Production data. After upgrading the database, we are facing performance issue in few processes which are taking huge time to complete com...
Categories: DBA Blogs

EXP: how to include the "CREATE USER" statement without perform a full export?

Tom Kyte - Fri, 2019-02-22 07:06
Hello. I need to export an oracle schema using exp, included the "CREATE USER" statement. Now, for do that the only think to do that I know is perform a full export of the database. The problem is that in my database there are a lot of users and th...
Categories: DBA Blogs

rman question

Tom Kyte - Fri, 2019-02-22 07:06
thomas, happy holiday. I have question(s) related to rman 1. what is the difference between - to back up the current control file and to backup up control file copy(backup controlfile copy) 2. when using either of above command tag and ...
Categories: DBA Blogs

Working with files on the filesystem in PostgreSQL

Yann Neuhaus - Fri, 2019-02-22 04:18

PostgreSQL comes with various helper functions for working with files on the filesystem of the host PostgreSQL is running on. You might ask yourself why that is important, but there are use cases for it. Maybe you want to list the contents of a directory because new files that showed up since the last check trigger something. Maybe you want to load a file into the database (which you can, and even should, do using copy if it is text-based and reasonably well formatted, but that is not the scope of this post).

For listing files in a directory there is this one:

postgres=# select * from pg_ls_dir('.');
      pg_ls_dir       
----------------------
 pg_wal
 global
 pg_commit_ts
 pg_dynshmem
 pg_notify
 pg_serial
 pg_snapshots
 pg_subtrans
 pg_twophase
 pg_multixact
 base
 pg_replslot
 pg_tblspc
 pg_stat
 pg_stat_tmp
 pg_xact
 pg_logical
 PG_VERSION
 postgresql.conf
 postgresql.auto.conf
 pg_hba.conf
 pg_ident.conf
 pg_log
 postmaster.opts
 autoprewarm.blocks
 postmaster.pid
 current_logfiles
(27 rows)

By default the ‘.’ and ‘..’ entries are omitted, but you can control this:

postgres=# select * from pg_ls_dir('.',true,true);
      pg_ls_dir       
----------------------
 .
 ..
 pg_wal
 global
 pg_commit_ts
 pg_dynshmem
 pg_notify
 pg_serial
 pg_snapshots
 pg_subtrans
 pg_twophase
 pg_multixact
 base
 pg_replslot
 pg_tblspc
 pg_stat
 pg_stat_tmp
 pg_xact
 pg_logical
 PG_VERSION
 postgresql.conf
 postgresql.auto.conf
 pg_hba.conf
 pg_ident.conf
 pg_log
 postmaster.opts
 autoprewarm.blocks
 postmaster.pid
 current_logfiles
(29 rows)

There is no option to control sorting, but of course you can add an ORDER BY clause to do this:

postgres=# select * from pg_ls_dir('.',true,true) order by 1;
      pg_ls_dir       
----------------------
 .
 ..
 autoprewarm.blocks
 base
 current_logfiles
 global
 pg_commit_ts
 pg_dynshmem
 pg_hba.conf
 pg_ident.conf
 pg_log
 pg_logical
 pg_multixact
 pg_notify
 pg_replslot
 pg_serial
 pg_snapshots
 pg_stat
 pg_stat_tmp
 pg_subtrans
 pg_tblspc
 pg_twophase
 PG_VERSION
 pg_wal
 pg_xact
 postgresql.auto.conf
 postgresql.conf
 postmaster.opts
 postmaster.pid
(29 rows)

You could load that into an array and then do whatever further processing you want with it:

postgres=# \x
Expanded display is on.
postgres=# with dirs as (select pg_ls_dir('.'::text,true,true) dir order by 1)
                select array_agg(dir) from dirs;
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
array_agg | {.,..,autoprewarm.blocks,base,current_logfiles,global,pg_commit_ts,pg_dynshmem,pg_hba.conf,pg_ident.conf,pg_log,pg_logical,pg_multixact,pg_notify,pg_replslot,pg_serial,pg_snapshots,pg_stat,pg_stat_tmp,pg_subtrans,pg_tblspc,pg_twophase,PG_VERSION,pg_wal,pg_xact,postgresql.auto.conf,postgresql.conf,postmaster.opts,postmaster.pid}

When you try to list the files of a directory you do not have permission for, that of course fails:

postgres=# select pg_ls_dir('/root');
ERROR:  could not open directory "/root": Permission denied

All other directories the PostgreSQL operating system user has access to can be listed:

postgres=# \x
Expanded display is off.
postgres=# select pg_ls_dir('/var/tmp');
                                pg_ls_dir                                
-------------------------------------------------------------------------
 yum-postgres-uSpYMT
 systemd-private-f706224b798a404a8b1b7efbbb7137c9-chronyd.service-saK1Py
 systemd-private-bcd40d1946c94f1fbcb73d1047ee2fc2-chronyd.service-Fr7WgV
 systemd-private-798725e073664df6bbc5c6041151ef61-chronyd.service-kRvvJa
(4 rows)

When you need to get some statistics about a file there is pg_stat_file:

postgres=# select pg_stat_file('postgresql.conf');
                                     pg_stat_file                                      
---------------------------------------------------------------------------------------
 (26343,"2019-02-21 17:35:22+01","2019-02-05 15:41:11+01","2019-02-05 15:41:11+01",,f)
(1 row)
postgres=# select pg_size_pretty((pg_stat_file('postgresql.conf')).size);
 pg_size_pretty 
----------------
 26 kB
(1 row)

Loading a file into the database is possible as well:

postgres=# create table t1 ( a text );
CREATE TABLE
postgres=# insert into t1 select pg_read_file('postgresql.conf');
INSERT 0 1
postgres=# select * from t1;
                                                        a                                                        
-----------------------------------------------------------------------------------------------------------------
 # -----------------------------                                                                                +
 # PostgreSQL configuration file                                                                                +
 # -----------------------------                                                                                +
 #                                                                                                              +
 # This file consists of lines of the form:                                                                     +
 #                                                                                                              +
 #   name = value                                                                                               +
...

This works even with binary files (but do you really want to have binary files in the database?):

postgres=# create table t2 ( a bytea );
CREATE TABLE
postgres=# insert into t2 select pg_read_binary_file('/bin/cp');
INSERT 0 1
postgres=# select * from t2;
                                                                                                                                                                                                                   
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 \x7f454c4602010100000000000000000002003e0001000000293e4000000000004000000000000000c0560200000000000000000040003800090040001f001e000600000005000000400000000000000040004000000000004000400000000000f801000000000000
(1 row)
postgres=# drop table t1,t2;
DROP TABLE

As usual this is all very well documented in the PostgreSQL documentation.

The article Working with files on the filesystem in PostgreSQL appeared first on Blog dbi services.

Oracle VM Server: How to Upgrade Oracle Manager

Dietrich Schroff - Thu, 2019-02-21 14:47
Because of my connection problem to ovmcli via

ssh -l admin@localhost -p 10000
I decided to upgrade my OVM Manager.

After downloading OVM Manager 3.4.6, I mounted the ISO image:
mount /dev/sr0 /mnt
cd /mnt
[root@oraVMManager mnt]# ls -l
insgesamt 149832
drwxr-xr-x. 7 root root      8192 18. Nov 21:00 components
-r-xr-x---. 1 root root     11556 18. Nov 20:59 createOracle.sh
-rw-r--r--. 1 root root       230 18. Nov 20:59 oracle-validated.params
-r-xr-x---. 1 root root 149109589 18. Nov 21:00 ovmm-installer.bsx
-rw-r--r--. 1 root root   4293118 18. Nov 20:55 OvmSDK_3.4.6.2105.zip
-r-xr-x---. 1 root root      1919 18. Nov 20:59 runInstaller.sh
-rw-r--r--. 1 root root       372 18. Nov 20:59 sample.yml
-r--r--r--. 1 root root      1596 18. Nov 21:00 TRANS.TBL
The upgrade procedure can be found here (Oracle Documentation). So let's start:
# ./runInstaller.sh --installtype Upgrade

Oracle VM Manager Release 3.4.6 Installer

Oracle VM Manager Installer log file:
/var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log

Verifying upgrading prerequisites ...
*** WARNING: Ensure that each Oracle VM Server for x86 has at least 200MB of available space for the /boot partition and 3GB of available space for the / partition.
*** WARNING: Recommended memory for the Oracle VM Manager server installation using Local MySql DB is 7680 MB RAM

Starting Upgrade ...

Reading database parameters from config ...

==========================
Typically the current Oracle VM Manager database password will be the same as the Oracle VM Manager application password.

==========================
Database Repository
==========================
Please enter the current Oracle VM Manager database password for user ovs:

Oracle VM Manager application
=============================
Please enter the current Oracle VM Manager application password for user admin:

Oracle Weblogic Server 12c
==========================
Please enter the current password for the WebLogic domain administrator:

Please enter your fully qualified domain name, e.g. ovs123.us.oracle.com, (or IP address) of your management server for SSL certification generation 192.168.178.37 [oraVMManager.fritz.box]: 
Successfully verified password for user root
Successfully verified password for user appfw

Verifying configuration ...
Verifying 3.4.4 meets the minimum version for upgrade ...

Upgrading from version 3.4.4.1709 to version 3.4.6.2105

Start upgrading Oracle VM Manager:
   1: Continue
   2: Abort

   Select Number (1-2): 1

Running full database backup ...
Successfully backed up database to /u01/app/oracle/mysql/dbbackup/3.4.4_preUpgradeBackup-20190125_051150
Running ovm_preUpgrade script, please be patient this may take a long time ...
Exporting weblogic embedded LDAP users
Stopping service on Linux: ovmcli ...
Stopping service on Linux: ovmm ...
Exporting core database, please be patient this may take a long time  ...
NOTE: To monitor progress, open another terminal session and run: tail -f /var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log

Product component : Java in '/u01/app/oracle/java'
Java is installed ...

Removing Java installation ...


Installing Java ...


DB component : MySQL RPM package

MySQL RPM package installed by OVMM was found...

Removing MySQL RPM package installation ...


Installing Database Software...
Retrieving MySQL Database 5.6 ...
Unzipping MySQL RPM File ...
Installing MySQL 5.6 RPM package ...
Configuring MySQL Database 5.6 ...
Installing MySQL backup RPM package ...

Product component : Oracle VM Manager in '/u01/app/oracle/ovm-manager-3/'
Oracle VM Manager is installed ...
Removing Oracle VM Manager installation ...

Product component : Oracle WebLogic Server in '/u01/app/oracle/Middleware/'
Oracle WebLogic Server is installed

Removing Oracle WebLogic Server installation ...
Service ovmm is deleted.
Service ovmcli is deleted.


Retrieving Oracle WebLogic Server 12c and ADF ...
Installing Oracle WebLogic Server 12c and ADF ...
Applying patches to Weblogic ...
Applying patch to ADF ...


Installing Oracle VM Manager Core ...
Retrieving Oracle VM Manager Application ...
Extracting Oracle VM Manager Application ...

Retrieving Oracle VM Manager Upgrade tool ...
Extracting Oracle VM Manager Upgrade tool ...
Installing Oracle VM Manager Upgrade tool ...
Installing Oracle VM Manager WLST Scripts ...


Dropping the old database user 'appfw' ...
Dropping the old database 'appfw' ...
Creating new domain...
Creating new domain done.
Upgrading core database, please be patient this may take a long time ...
NOTE: To monitor progress, open another terminal session and run: tail -f /var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log
Starting restore domain's SSL configuration and create appfw database tables.
Restore domain's SSL configuration and create appfw database tables done.
AdminServer started.
Importing weblogic embedded LDAP users

Retrieving Oracle VM Manager CLI tool ...
Extracting Oracle VM Manager CLI tool...
Installing Oracle VM Manager CLI tool ...



Retrieving Oracle VM Manager Shell & API ...
Extracting Oracle VM Manager Shell & API ...
Installing Oracle VM Manager Shell & API ...

Retrieving Oracle VM Manager Wsh tool ...
Extracting Oracle VM Manager Wsh tool ...
Installing Oracle VM Manager Wsh tool ...

Retrieving Oracle VM Manager Tools ...
Extracting Oracle VM Manager Tools ...
Installing Oracle VM Manager Tools ...

Retrieving ovmcore-console ...
The ovmcore-console RPM package is latest, needn't to upgrade ...
Copying Oracle VM Manager shell to '/usr/bin/ovm_shell.sh' ...
Installing ovm_admin.sh in '/u01/app/oracle/ovm-manager-3/bin' ...
Installing ovm_upgrade.sh in '/u01/app/oracle/ovm-manager-3/bin' ...



Enabling Oracle VM Manager service ...
Shutting down Oracle VM Manager instance ...
Starting Oracle VM Manager instance ...

Please wait while WebLogic configures the applications...
Trying to connect to core via ovmwsh (attempt 1 of 20) ...
Trying to connect to core via ovm_shell (attempt 1 of 5)...

Installation Summary
--------------------
Database configuration:
  Database type               : MySQL
  Database host name          : localhost
  Database name               : ovs
  Database listener port      : 49500
  Database user               : ovs

Weblogic Server configuration:
  Administration username     : weblogic

Oracle VM Manager configuration:
  Username                    : admin
  Core management port        : 54321
  UUID                        : 0004fb000001000019f1074e05c43aa1


Passwords:
There are no default passwords for any users. The passwords to use for Oracle VM Manager, Database, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all passwords are the same.

Oracle VM Manager UI:
  https://oraVMManager.fritz.box:7002/ovm/console
Log in with the user 'admin', and the password you set during the installation.

For more information about Oracle Virtualization, please visit:
  http://www.oracle.com/virtualization/

3.2.10/3.2.11 Oracle VM x86 Servers and SPARC agent 3.3.1 managed Servers are no longer supported in Oracle VM Manager 3.4. Please upgrade your Server to a more current version for full support
For instructions, see the Oracle VM 3.4 Installation and Upgrade guide.

Oracle VM Manager upgrade complete.

Please remove configuration file /tmp/ovm_configA6Dpxd.

And afterwards the GUI shows the new version via "Help":




Where is My New Optional Default Tile?

Jim Marion - Thu, 2019-02-21 14:14
Navigation is critical to any business application. Classic used breadcrumbs for navigation. As I'm sure you noticed, Fluid is different, using Tiles and Homepages as the starting point for application navigation.
"In Fluid, tiles and homepages represent the primary navigation model, replacing Classic's breadcrumb menu."In Classic, breadcrumb navigation is managed by administrators. It is fixed, not variable, not personalizable. Users cannot personalize Classic navigation (other than creating favorites). Did I say Fluid is different? Yes. Fluid gives users significant control over their navigational view by allowing them to personalize tiles and homepages. This can cause significant problems, with users removing tiles that represent critical business functions. There are a few solutions for this problem (disable personalization, mark tiles as required, etc, see Section 4 of Simon's blog post for ideas). What I want to focus on is confusion regarding optional default tiles, where an optional default tile doesn't default onto a homepage. Here is the scenario:
  • A homepage already exists
  • As an administrator, you configure a new tile as Optional Default



After configuring the homepage, all users that have NOT personalized will see the tile. Put another way, any user that has personalized the homepage will not see the new tile (and a simple accidental drag and drop will result in a personalization). Here is what users that personalize will see:


If it is optional default, what happened to the default part? When users personalize their homepages, PeopleSoft clones the current state of the homepage into a user table. Let's say Tom and Jill both personalize their home pages. Tom will now have a personalized copy of the default configuration and Jill will have an entirely different personalized copy.



Administrators will continue to insert optional default content into homepages, but Tom and Jill will not see those optional default tiles. Tom and Jill's homepages are now detached from the source. We can push optional default tiles into Tom's and Jill's copies by using the Tile Publish button available to each homepage content reference (in the portal registry). This App Engine program inserts a row for each optional default tile into each user's copy of the homepage metadata.

Pretty clear and straightforward so far? OK, let's make it more complicated. Let's say an administrator adds a new optional default tile to the default homepage described above and presses the Publish Tile button. After the App Engine runs, the administrator notices Tom sees the tile, but Jill does not. What went wrong? If Jill doesn't have security access to the tile's target, Jill won't see the new tile. Let's say Jill is supposed to have security access, so we update permissions and roles. We check Jill's homepage again. Does Jill see the tile? No. Why not? When we published the tile, Jill did not have security access, so PeopleSoft didn't insert a row into Jill's personalization metadata. How can we make this tile appear for Jill? We could publish again. If we recognize and resolve the security issue immediately after publishing, this may be reasonable.

Let's play out this scenario a little differently. Some time has passed since we published. Tom has seen and removed the new tile from his homepage. One day Tom is at the water cooler talking about this annoying new tile that just appeared one day so he removed it. Jill overhears Tom and logs in to look for this annoying tile. After some searching, however, she doesn't see it on her homepage. She calls the help desk to find out why she doesn't have access to the annoying tile (that she will probably remove after seeing it). This is when you discover the security issue and make the tile available to Jill. For Jill to see this tile as a default, however, you will need to republish the tile. When you republish the tile, what will happen to Tom's homepage? Yes, you guessed it. Tom will see the tile appear again and will likely call the help desk to complain about the annoying tile that just reappeared.

What's the solution? At this time there is no delivered, recommended solution. The App Engine is very short, containing a couple of SQL statements. Using it as a guide, it is trivial to write a one-off metadata insert for Jill and all others affected by the security change without affecting Tom. When writing SQL inserts into PeopleTools tables, however, we must consider cache, version increments, and many other risk factors (I probably would not do this). I would say it is safer to annoy Tom.

--

Jim is the Principal PeopleTools instructor at JSMPROS. Take your PeopleTools skills to the next level by scheduling PeopleTools training with us today!

a sql code to remove all the special characters from a particular column of a table

Tom Kyte - Thu, 2019-02-21 12:46
a sql code to remove all the special characters from a particular column of a table . for example : iNPUT-ABC -D.E.F OUTPUT ABC DEF AND IF THERE IS TWO NAMES LIKE ABC PRIVATE LTD ONE SPACE SHOULD BE MAINTAINED BETWEEN 2 WORDS.
Categories: DBA Blogs

DISTINCT vs UNION

Tom Kyte - Thu, 2019-02-21 12:46
Hello Tom, my test case create table xxx as select * from dba_tables; insert into xxx select * from dba_tables; I tried 2 queries 1* select distinct * from xxx 2359 rows selected. Execution Plan -----------------------------------------...
Categories: DBA Blogs

Guidance for Providing Access to the Oracle E-Business Suite Database for Extensions and ...

Steven Chan - Thu, 2019-02-21 10:05

Oracle E-Business Suite is often extended, customized and integrated with third-party products. We highly recommend that you follow our published guidance for performing extensions, customizations and third-party integrations.

In addition to these guidelines, you should also follow our equivalent recommendations for granting access to Oracle E-Business Suite database objects. Our guidance for accessing an EBS database takes a least-privileges approach to providing access to database objects. The database access recommendations complement our guidance for deploying customizations, extensions and third-party integrations.

References

Related Articles

Categories: APPS Blogs

Spark Streaming and Kafka - Creating a New Kafka Connector

Rittman Mead Consulting - Thu, 2019-02-21 08:39
More Kafka and Spark, please!

Hello, world!

Having joined Rittman Mead more than 6 years ago, I feel the time has come for my first blog post. Let me start by standing on the shoulders of blogging giants and revisiting Robin's old blog post Getting Started with Spark Streaming, Python, and Kafka.

The blog post was very popular, touching on the subjects of Big Data and Data Streaming. To put my own twist on it, I decided to:

  • not use Twitter as my data source, because there surely must be other interesting data sources out there,
  • use Scala, my favourite programming language, to see how different the experience is from using Python.
Why Scala?

Scala is admittedly more challenging to master than Python. However, because Scala compiles into Java bytecode, it can be used pretty much anywhere where Java is being used. And Java is being used everywhere. Python is arguably even more widely used than Java, however it remains a dynamically typed scripting language that is easy to write in but can be hard to debug.

Is there a case for using Scala instead of Python for the job? Both Spark and Kafka were written in Scala (and Java), hence they should get on like a house on fire, I thought. Well, we are about to find out.

My data source: OpenWeatherMap

When it comes to finding sample data sources for data analysis, the selection out there is amazing. At the time of this writing, Kaggle offers freely available 14,470 datasets, many of them in easy-to-digest formats like CSV and JSON. However, when it comes to real-time sample data streams, the selection is quite limited. Twitter is usually the go-to choice - easily accessible and well documented. Too bad I decided not to use Twitter as my source.

Another alternative is the Wikipedia Recent changes stream. Although in the stream schema there are a few values that would be interesting to analyse, overall this stream is more boring than it sounds - the text changes themselves are not included.

Fortunately, I came across the OpenWeatherMap real-time weather data website. They have a free API tier, which is limited to 1 request per second, which is quite enough for tracking changes in weather. Their different API schemas return plenty of numeric and textual data, all interesting for analysis. The APIs work in a very standard way - first you apply for an API key. With the key you can query the API with a simple HTTP GET request (Apply for your own API key instead of using the sample one - it is easy.):

This request

https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22

gives the following result:

{
  "coord": {"lon":-0.13,"lat":51.51},
  "weather":[
    {"id":300,"main":"Drizzle","description":"light intensity drizzle","icon":"09d"}
  ],
  "base":"stations",
  "main": {"temp":280.32,"pressure":1012,"humidity":81,"temp_min":279.15,"temp_max":281.15},
  "visibility":10000,
  "wind": {"speed":4.1,"deg":80},
  "clouds": {"all":90},
  "dt":1485789600,
  "sys": {"type":1,"id":5091,"message":0.0103,"country":"GB","sunrise":1485762037,"sunset":1485794875},
  "id":2643743,
  "name":"London",
  "cod":200
}
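
To make this concrete, here is a minimal Scala sketch of issuing that GET request and reading the JSON response as a plain string. It reuses the sample URL and key shown above, so substitute your own API key:

import scala.io.Source

object WeatherFetchSketch extends App {
  // Sample endpoint and key from the request above - replace the appid with your own API key
  val url = "https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22"

  // Issue the HTTP GET and read the whole JSON response as a string
  val response = Source.fromURL(url)
  try println(response.mkString)
  finally response.close()
}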
Getting data into Kafka - considering the options

There are several options for getting your data into a Kafka topic. If the data will be produced by your application, you should use the Kafka Producer Java API. You can also develop Kafka Producers in .Net (usually C#), C, C++, Python, Go. The Java API can be used by any programming language that compiles to Java bytecode, including Scala. Moreover, there are Scala wrappers for the Java API: skafka by Evolution Gaming and Scala Kafka Client by cakesolutions.
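
As an aside, producing such a record from Scala with the plain Java Producer API takes only a few lines. This is just a sketch; the broker address and topic name are assumptions:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object WeatherProducerSketch extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // assumed broker address
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)
  // Key the record by city name and send the raw JSON payload to an assumed 'weather' topic
  val weatherJson = """{"name":"London","cod":200}"""
  producer.send(new ProducerRecord[String, String]("weather", "London", weatherJson))
  producer.close()
}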

OpenWeatherMap is not my application and what I need is integration between its API and Kafka. I could cheat and implement a program that would consume OpenWeatherMap's records and produce records for Kafka. The right way of doing that however is by using Kafka Source connectors, for which there is an API: the Connect API. Unlike the Producers, which can be written in many programming languages, for the Connectors I could only find a Java API. I could not find any nice Scala wrappers for it. On the upside, the Confluent's Connector Developer Guide is excellent, rich in detail though not quite a step-by-step cookbook.

However, before we decide to develop our own Kafka connector, we must check for existing connectors. The first place to go is Confluent Hub. There are quite a few connectors there, complete with installation instructions, ranging from connectors for particular environments like Salesforce, SAP, IRC, Twitter to ones integrating with databases like MS SQL, Cassandra. There is also a connector for HDFS and a generic JDBC connector. Is there one for HTTP integration? Looks like we are in luck: there is one! However, this connector turns out to be a Sink connector.

Ah, yes, I should have mentioned - there are two flavours of Kafka Connectors: the Kafka-inbound are called Source Connectors and the Kafka-outbound are Sink Connectors. And the HTTP connector in Confluent Hub is Sink only.

Googling for Kafka HTTP Source Connectors gives few interesting results. The best I could find was Pegerto's Kafka Connect HTTP Source Connector. Contrary to what the repository name suggests, the implementation is quite domain-specific, for extracting Stock prices from particular web sites and has very little error handling. Searching Scaladex for 'Kafka connector' does yield quite a few results but nothing for http. However, there I found Agoda's nice and simple Source JDBC connector (though for a very old version of Kafka), written in Scala. (Do not use this connector for JDBC sources, instead use the one by Confluent.) I can use this as an example to implement my own.

Creating a custom Kafka Source Connector

The best place to start when implementing your own Source Connector is the Confluent Connector Development Guide. The guide uses JDBC as an example. Our source is a HTTP API so early on we must establish if our data source is partitioned, do we need to manage offsets for it and what is the schema going to look like.

Partitions

Is our data source partitioned? A partition is a division of source records that usually depends on the source medium. For example, if we are reading our data from CSV files, we can consider the different CSV files to be a natural partition of our source data. Another example of partitioning could be database tables. But in both cases the best partitioning approach depends on the data being gathered and its usage. In our case, there is only one API URL and we are only ever requesting current data. If we were to query weather data for different cities, that would be a very good partitioning - by city. Partitioning would allow us to parallelise the Connector data gathering - each partition would be processed by a separate task. To make my life easier, I am going to have only one partition.
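
To make the parallelisation idea concrete, here is a hypothetical sketch of a per-city split: each resulting configuration map would be handed to a separate task. The 'cities' key is made up for this example - our connector does not do any of this.

// Hypothetical: spread the cities round-robin across maxTasks task configurations.
val cities = List("London", "Riga", "Paris", "Oslo")

def cityTaskConfigs(maxTasks: Int, baseConfig: Map[String, String]): List[Map[String, String]] =
  cities.zipWithIndex
    .groupBy { case (_, idx) => idx % maxTasks }
    .values
    .map(group => baseConfig + ("cities" -> group.map(_._1).mkString(",")))
    .toList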

Offsets

Offsets are for keeping track of the records already read and the records yet to be read. An example of that is reading data from a file that is continuously being appended to - there can be rows already inserted into a Kafka topic, and we do not want to process them again and create duplicates. Why would that be a problem? Surely, when going through a source file row by row, we know which row we are looking at: anything above the current row is processed, anything below is new records. Unfortunately, most of the time it is not as simple as that. First of all, Kafka supports concurrency, meaning there can be more than one Task busy processing Source records. Another consideration is resilience - if a Kafka Task process fails, another process will be started up to continue the job. This can be an important consideration when developing a Kafka Source Connector.

Is it relevant for our HTTP API connector? We are only ever requesting current weather data. If our process fails, we may miss some time periods but we cannot recover then later on. Offset management is not required for our simple connector.
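
Had we needed them, offset recovery inside a Source Task would look roughly like the sketch below. This is hypothetical - our connector skips it entirely - and the partition map key and value are made up.

// Hypothetical sketch, inside a Source Task, of reading back the last committed offset.
import scala.collection.JavaConverters._

val partition = Map("service" -> "OpenWeatherMap").asJava
val lastCommitted: java.util.Map[String, AnyRef] = context.offsetStorageReader().offset(partition)
// New SourceRecords would then carry an offset map (e.g. the timestamp of the last fetched reading),
// so a restarted or parallel task could skip what has already been processed.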

So that is Partitions and Offsets dealt with. Can we make our lives just a bit more difficult? Fortunately, we can: we can create a custom Schema and then parse the source data to populate a Schema-based Struct. But we will come to that later. First let us establish the framework for our Source Connector.

Source Connector - the Framework

The starting point for our Source Connector are two Java API classes: SourceConnector and SourceTask. We will put them into separate .scala source files but they are shown here together:

import org.apache.kafka.connect.source.{SourceConnector, SourceTask}

class HttpSourceConnector extends SourceConnector {...}
class HttpSourceTask extends SourceTask {...}

These two classes will be the basis for our Source Connector implementation:

  • HttpSourceConnector represents the Connector process management. Each Connector process will have only one SourceConnector instance.
  • HttpSourceTask represents the Kafka task doing the actual data integration work. There can be one or many Tasks active for an active SourceConnector instance.

We will have some additional classes for config and for HTTP access.
But first let us look at each of the two classes in more detail.

SourceConnector class

SourceConnector is an abstract class that defines an interface that our HttpSourceConnector needs to adhere to. The first function we need to override is config:

  private val configDef: ConfigDef =
      new ConfigDef()
          .define(HttpSourceConnectorConstants.HTTP_URL_CONFIG, Type.STRING, Importance.HIGH, "Web API Access URL")
          .define(HttpSourceConnectorConstants.API_KEY_CONFIG, Type.STRING, Importance.HIGH, "Web API Access Key")
          .define(HttpSourceConnectorConstants.API_PARAMS_CONFIG, Type.STRING, Importance.HIGH, "Web API additional config parameters")
          .define(HttpSourceConnectorConstants.SERVICE_CONFIG, Type.STRING, Importance.HIGH, "Kafka Service name")
          .define(HttpSourceConnectorConstants.TOPIC_CONFIG, Type.STRING, Importance.HIGH, "Kafka Topic name")
          .define(HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG, Type.STRING, Importance.HIGH, "Polling interval in milliseconds")
          .define(HttpSourceConnectorConstants.TASKS_MAX_CONFIG, Type.INT, Importance.HIGH, "Kafka Connector Max Tasks")
          .define(HttpSourceConnectorConstants.CONNECTOR_CLASS, Type.STRING, Importance.HIGH, "Kafka Connector Class Name (full class path)")

  override def config: ConfigDef = configDef

This is validation for all the required configuration parameters. We also provide a description for each configuration parameter, that will be shown in the missing configuration error message.

HttpSourceConnectorConstants is an object where config parameter names are defined - these configuration parameters must be provided in the connector configuration file:

object HttpSourceConnectorConstants {
  val HTTP_URL_CONFIG               = "http.url"
  val API_KEY_CONFIG                = "http.api.key"
  val API_PARAMS_CONFIG             = "http.api.params"
  val SERVICE_CONFIG                = "service.name"
  val TOPIC_CONFIG                  = "topic"
  val TASKS_MAX_CONFIG              = "tasks.max"
  val CONNECTOR_CLASS               = "connector.class"

  val POLL_INTERVAL_MS_CONFIG       = "poll.interval.ms"
  val POLL_INTERVAL_MS_DEFAULT      = "5000"
}
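
For illustration, a connector configuration file using these parameter names could look something like the example below. The file name and values are hypothetical, the API key is a placeholder, and connector.class would be the fully qualified name of our HttpSourceConnector class.

# weather-http-source.properties (hypothetical example)
name=weather-http-source
connector.class=<fully qualified name of HttpSourceConnector>
tasks.max=1
service.name=OpenWeatherMap
topic=weather-topic
http.url=https://api.openweathermap.org/data/2.5/weather
http.api.key=<your OpenWeatherMap API key>
http.api.params=q=London,uk&units=metric
poll.interval.ms=5000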

Another simple function to be overridden is taskClass - for the SourceConnector class to know its corresponding SourceTask class.

  override def taskClass(): Class[_ <: SourceTask] = classOf[HttpSourceTask]

The last two functions to be overridden here are start and stop. These are called upon the creation and termination of a SourceConnector instance (not a Task instance). JavaMap here is an alias for java.util.Map - a Java Map, not to be confused with the native Scala Map, which cannot be used here. (If you are a Python developer, a Map in Java/Scala is similar to the Python dictionary, but strongly typed.) The interface requires Java data structures, but that is fine - we can convert between the two. By far the biggest problem here is the assignment of the connectorConfig variable - we cannot have a functional-programming-friendly immutable value here. The variable is defined at the class level

  private var connectorConfig: HttpSourceConnectorConfig = _

and is set in the start function and then referred to in the taskConfigs function further down. This does not look pretty in Scala. Hopefully somebody will write a Scala wrapper for this interface.

Because there is no logout/shutdown/sign-out required for the HTTP API, the stop function just writes a log message.

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try (new HttpSourceConnectorConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => connectorConfig = cfg
      case Failure(err) => connectorLogger.error(s"Could not start Kafka Source Connector ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }
  }

  override def stop(): Unit = {
    connectorLogger.info(s"Stopping Kafka Source Connector ${this.getClass.getName}.")
  }

HttpSourceConnectorConfig is a thin wrapper class around the configuration.
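
The wrapper itself is not listed here, so below is a minimal sketch of what it might look like. The getter names are taken from how the configuration is used later on; the '&'-separated key=value format for http.api.params is purely my assumption.

import org.apache.kafka.connect.errors.ConnectException

class HttpSourceConnectorConfig(val connectorProperties: Map[String, String]) {
  import HttpSourceConnectorConstants._

  // Fail fast if a mandatory parameter is missing from the connector properties.
  private def required(key: String): String =
    connectorProperties.getOrElse(key, throw new ConnectException(s"Missing required config: $key"))

  def getApiHttpUrl: String = required(HTTP_URL_CONFIG)
  def getApiKey: String     = required(API_KEY_CONFIG)
  def getTopic: String      = required(TOPIC_CONFIG)
  def getService: String    = required(SERVICE_CONFIG)

  def getPollInterval: Long =
    connectorProperties.getOrElse(POLL_INTERVAL_MS_CONFIG, POLL_INTERVAL_MS_DEFAULT).toLong

  // Assumed format: key=value pairs joined with '&', e.g. "q=London,uk&units=metric".
  def getApiParams: Map[String, String] =
    required(API_PARAMS_CONFIG).split("&").toList.map(_.trim).filter(_.nonEmpty).map { kv =>
      kv.split("=", 2) match {
        case Array(k, v) => k -> v
        case Array(k)    => k -> ""
      }
    }.toMap
}

// The Task-side config can simply reuse it - Connector and Task configuration are the same here.
class HttpSourceTaskConfig(props: Map[String, String]) extends HttpSourceConnectorConfig(props)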

We are almost done here. The last function to be overridden is taskConfigs.
This function is in charge of producing (potentially different) configurations for different Source Tasks. In our case, there is no reason for the Source Task configurations to differ. In fact, our HTTP API will benefit little from parallelism, so, to keep things simple, we can assume the number of tasks always to be 1.

  override def taskConfigs(maxTasks: Int): JavaList[JavaMap[String, String]] = List(connectorConfig.connectorProperties.asJava).asJava

The name of the taskConfigs function was changed in Kafka version 2.1.0 - please bear that in mind when using this code with older Kafka versions.

Source Task class

In a similar manner to the Source Connector class, we implement the Source Task abstract class. It is only slightly more complex than the Connector class.

Just like for the Connector, there are start and stop functions to be overridden for the Task.

Remember the taskConfigs function from above? This is where task configuration ends up - it is passed to the Task's start function. Also, similarly to the Connector's start function, we parse the connection properties with HttpSourceTaskConfig, which is the same as HttpSourceConnectorConfig - configuration for Connector and Task in our case is the same.

We also set up the Http service that we are going to use in the poll function - we create an instance of the WeatherHttpService class. (Please note that start is executed only once, upon the creation of the task and not every time a record is polled from the data source.)

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try(new HttpSourceTaskConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => taskConfig = cfg
      case Failure(err) => taskLogger.error(s"Could not start Task ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }

    val apiHttpUrl: String = taskConfig.getApiHttpUrl
    val apiKey: String = taskConfig.getApiKey
    val apiParams: Map[String, String] = taskConfig.getApiParams

    val pollInterval: Long = taskConfig.getPollInterval

    taskLogger.info(s"Setting up an HTTP service for ${apiHttpUrl}...")
    Try( new WeatherHttpService(taskConfig.getTopic, taskConfig.getService, apiHttpUrl, apiKey, apiParams) ) match {
      case Success(service) =>  sourceService = service
      case Failure(error) =>    taskLogger.error(s"Could not establish an HTTP service to ${apiHttpUrl}")
                                throw error
    }

    taskLogger.info(s"Starting to fetch from ${apiHttpUrl} each ${pollInterval}ms...")
    running = new JavaBoolean(true)
  }

The Task also has the stop function. But, just like for the Connector, it does not do much, because there is no need to sign out from an HTTP API session.
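
For completeness, a sketch of what it might look like - assuming running is an AtomicBoolean-style flag, as its use in poll below suggests:

  override def stop(): Unit = {
    taskLogger.info(s"Stopping Kafka Source Task ${this.getClass.getName}.")
    running.set(false)   // poll will then return null instead of fetching new records
  }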

Now let us see how we get the data from our HTTP API - by overriding the poll function.

The fetchRecords function uses the sourceService HTTP service initialised in the start function. sourceService's sourceRecords function requests data from the HTTP API.

  override def poll(): JavaList[SourceRecord] = this.synchronized { if(running.get) fetchRecords else null }

  private def fetchRecords: JavaList[SourceRecord] = {
    taskLogger.debug("Polling new data...")

    val pollInterval = taskConfig.getPollInterval
    val startTime    = System.currentTimeMillis

    val fetchedRecords: Seq[SourceRecord] = Try(sourceService.sourceRecords) match {
      case Success(records)                    => if(records.isEmpty) taskLogger.info(s"No data from ${taskConfig.getService}")
                                                  else taskLogger.info(s"Got ${records.size} results for ${taskConfig.getService}")
                                                  records

      case Failure(error: Throwable)           => taskLogger.error(s"Failed to fetch data for ${taskConfig.getService}: ", error)
                                                  Seq.empty[SourceRecord]
    }

    val endTime     = System.currentTimeMillis
    val elapsedTime = endTime - startTime

    if(elapsedTime < pollInterval) Thread.sleep(pollInterval - elapsedTime)

    fetchedRecords.asJava
  }

Phew - that is the interface implementation done. Now for the fun part...

Requesting data from OpenWeatherMap's API

The fun part is rather straightforward. We use the scalaj.http library to issue a very simple HTTP request and get a response.

Our WeatherHttpService implementation will have two functions:

  • httpServiceResponse that will format the request and get data from the API
  • sourceRecords that will parse the Schema and wrap the result within the Kafka SourceRecord class.

Please note that error handling takes place in the fetchRecords function above.

    override def sourceRecords: Seq[SourceRecord] = {
        val weatherResult: HttpResponse[String] = httpServiceResponse
        logger.info(s"Http return code: ${weatherResult.code}")
        val record: Struct = schemaParser.output(weatherResult.body)

        List(
            new SourceRecord(
                Map(HttpSourceConnectorConstants.SERVICE_CONFIG -> serviceName).asJava, // partition
                Map("offset" -> "n/a").asJava, // offset
                topic,
                schemaParser.schema,
                record
            )
        )
    }

    private def httpServiceResponse: HttpResponse[String] = {

        @tailrec
        def addRequestParam(accu: HttpRequest, paramsToAdd: List[(String, String)]): HttpRequest = paramsToAdd match {
            case (paramKey,paramVal) :: rest => addRequestParam(accu.param(paramKey, paramVal), rest)
            case Nil => accu
        }

        val baseRequest = Http(apiBaseUrl).param("APPID",apiKey)
        val request = addRequestParam(baseRequest, apiParams.toList)

        request.asString
    }

Parsing the Schema

Now the last piece of the puzzle - our Schema parsing class.

The short version of it, which would do just fine, is just 2 lines of class (actually - object) body:

object StringSchemaParser extends KafkaSchemaParser[String, String] {
    override val schema: Schema = Schema.STRING_SCHEMA
    override def output(inputString: String) = inputString
}

Here we say we just want to use the pre-defined STRING_SCHEMA value as our schema definition. And pass inputString straight to the output, without any alteration.

Looks too easy, does it not? Schema parsing could be a big part of Source Connector implementation. Let us implement a proper schema parser. Make sure you read the Confluent Developer Guide first.

Our schema parser will be encapsulated in the WeatherSchemaParser object. KafkaSchemaParser is a trait with two type parameters - the inbound and the outbound data type. This indicates that the parser receives data in String format and produces a Kafka Struct value.

object WeatherSchemaParser extends KafkaSchemaParser[String, Struct]
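
The trait itself is not shown in this post; judging by how it is used, its shape is roughly the following (an assumed sketch, not the original code):

import org.apache.kafka.connect.data.Schema

trait KafkaSchemaParser[Input, Output] {
  val schema: Schema                 // the Kafka Connect schema describing the parsed output
  def output(input: Input): Output   // parse an inbound value into the outbound representation
}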

The first step is to create a schema value with the SchemaBuilder. Our schema is rather large, therefore I will skip most fields. The field names given are a reflection of the hierarchy structure in the source JSON. What we are aiming for is a flat, table-like structure - a likely Schema creation scenario.

For JSON parsing we will be using the Scala Circe library, which in turn is based on the Scala Cats library. (If you are a Python developer, you will see that Scala JSON parsing is a bit more involved - this might be an understatement - but, on the flip side, you can be sure about the result you are getting out of it.)
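
The decoding shown further down relies on imports roughly like these - the exact packages depend on the Circe version, so treat this as an assumption. Automatic derivation covers the straightforward case classes, while Rain below gets an explicit decoder.

import io.circe.Decoder
import io.circe.generic.auto._   // derives decoders for the simple case classes
import io.circe.parser.decode    // decode[A](json: String): Either[io.circe.Error, A]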

    override val schema: Schema = SchemaBuilder.struct().name("weatherSchema")
        .field("coord-lon", Schema.FLOAT64_SCHEMA)
        .field("coord-lat", Schema.FLOAT64_SCHEMA)

        .field("weather-id", Schema.FLOAT64_SCHEMA)
        .field("weather-main", Schema.STRING_SCHEMA)
        .field("weather-description", Schema.STRING_SCHEMA)
        .field("weather-icon", Schema.STRING_SCHEMA)
        
        // ...
        
        .field("rain", Schema.FLOAT64_SCHEMA)
        
        // ...

Next we define case classes, into which we will be parsing the JSON content.

   case class Coord(lon: Double, lat: Double)
   case class WeatherAtom(id: Double, main: String, description: String, icon: String)

That is easy enough. Please note that the case class attribute names match one-to-one with the attribute names in the JSON. However, our weather JSON is rather relaxed when it comes to attribute naming: it contains names like type and 3h, both of which are invalid identifiers in Scala (type is a reserved word and 3h starts with a digit). What do we do? We give the attributes valid Scala names and then implement a decoder:

    case class Rain(threeHours: Double)
    object Rain {
        implicit val decoder: Decoder[Rain] = Decoder.instance { h =>
            for {
                threeHours <- h.get[Double]("3h")
            } yield Rain(
                threeHours
            )
        }
    }

The Rain case class is rather short, with only one attribute. The corresponding JSON name is 3h, which we map to the Scala attribute threeHours.

Not quite as simple as JSON parsing in Python, is it?

In the end, we assemble all sub-case classes into the WeatherSchema case class, representing the whole result JSON.

    case class WeatherSchema(
                                coord: Coord,
                                weather: List[WeatherAtom],
                                base: String,
                                mainVal: Main,
                                visibility: Double,
                                wind: Wind,
                                clouds: Clouds,
                                dt: Double,
                                sys: Sys,
                                id: Double,
                                name: String,
                                cod: Double
                            )

Now, the parsing itself. (Drums, please!)

structInput here is the input JSON in String format. WeatherSchema is the case class we created above. The Circe decode function returns a Scala Either, with the error on the Left() and the successful parsing result on the Right() - nice and tidy. And safe.

        val weatherParsed: WeatherSchema = decode[WeatherSchema](structInput) match {
            case Left(error) => {
                logger.error(s"JSON parser error: ${error}")
                emptyWeatherSchema
            }
            case Right(weather) => weather
        }

Now that we have the WeatherSchema object, we can construct our Struct object that will become part of the SourceRecord returned by the sourceRecords function in the WeatherHttpService class. That in turn is called from the HttpSourceTask's poll function that is used to populate the Kafka topic.

        val weatherStruct: Struct = new Struct(schema)
            .put("coord-lon", weatherParsed.coord.lon)
            .put("coord-lat", weatherParsed.coord.lat)

            .put("weather-id", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).id)
            .put("weather-main", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).main)
            .put("weather-description", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).description)
            .put("weather-icon", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).icon)

            // ...

Done!

Considering that Schema parsing in our simple example was optional, creating a Kafka Source Connector for us meant creating a Source Connector class, a Source Task class and a Source Service class.

Creating JAR(s)

JAR creation is described in Confluent's Connector Development Guide. The guide mentions two options: either all the library dependencies are added to the target JAR file, a.k.a. an 'uber-JAR', or the dependencies are copied to the target folder. In the latter case they must all reside in the same folder, with no subfolder structure. For no particular reason, I went with the latter option.

The Developer Guide says it is important not to include the Kafka Connect API libraries there; instead, they should be added to the CLASSPATH. Note, however, that for the latest Kafka versions it is advised not to put these custom JARs on the CLASSPATH at all, but to add them to the connectors' plugin.path instead. But that we will leave for another blog post.
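
For what it is worth, a rough sbt sketch of the 'uber-JAR' route could look like this. The plugin choice (sbt-assembly) and all version numbers are my assumptions, not part of the original setup; the Connect API is marked Provided so that it stays out of the packaged JAR.

// project/plugins.sbt (assumed)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt (versions are illustrative)
name := "kafka-http-source-connector"
scalaVersion := "2.12.8"

libraryDependencies ++= Seq(
  "org.apache.kafka" %  "connect-api"   % "2.1.0" % Provided, // keep the Connect API off the packaged JAR
  "org.scalaj"       %% "scalaj-http"   % "2.4.1",
  "io.circe"         %% "circe-parser"  % "0.11.1",
  "io.circe"         %% "circe-generic" % "0.11.1"
)

// Running `sbt assembly` then produces a single JAR with the remaining dependencies included.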

Scala - was it worth using it?

Only if you are a big fan. The code I wrote is very Java-like, and it might have been better to write it in Java. However, if somebody writes a Scala wrapper for the Connector interfaces, or, even better, if a Kafka Scala API is released, writing Connectors in Scala would be a very good choice.

Categories: BI & Warehousing

[Video 3 of 5] Oracle Cloud: Create VCN, Subnet, Firewall (Security List), IGW, DRG: Step By Step

Online Apps DBA - Thu, 2019-02-21 08:22

You are asked to deploy a three-tier, highly available application, including a Load Balancer, on Oracle’s Gen 2 Cloud. Then: ✔ How would you define Network, Subnet & Firewalls in Oracle’s Gen 2 Cloud? ✔ How do you allow the database port, but only from the Application Tier? ✔ What are Ingress and Egress Security Rules? […]

The post [Video 3 of 5] Oracle Cloud: Create VCN, Subnet, Firewall (Security List), IGW, DRG: Step By Step appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
