Feed aggregator

How to delete or disable JDBC or database connection

Nilesh Jethwa - Tue, 2018-05-22 23:32

Here are the steps to disable or remove any connection handle: From the launch pad, click on Manage project/Users. Switch to the “Database” tab. Select the connection handle by clicking on it. Next, click on the pencil icon to edit … Continue reading →

Source: InfoCaptor Dashboard

Firebird JDBC example and SQL to build dashboard

Nilesh Jethwa - Tue, 2018-05-22 23:27

Here are the detailed steps to get connected with Firebird using JDBC and build a dashboard with InfoCaptor. Assumptions: Make sure Firebird is installed. Make sure the Firebird server is running. Set up the SYSDBA user and password (the default password is masterkey). 1. … Continue reading →

Source: InfoCaptor Dashboard

Top 12 Rank Tracking Software and services for your business needs

Nilesh Jethwa - Tue, 2018-05-22 23:22

The role of Search Engine Optimization and keyword tracking tools is important in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically … Continue reading →

By: InfoCaptor Dashboard

How to Attract Visitors to Your Site?

Nilesh Jethwa - Tue, 2018-05-22 23:17

It’s not that easy to go from zero visitors to thousands of potential customers in an instant. But if you implement the right traffic-generating strategy, you can increase the number of visitors coming into your website. If you can … Continue reading →

Via: InfoCaptor Dashboard

What Are the Steps to Optimizing Your Website

Nilesh Jethwa - Tue, 2018-05-22 23:12

People use search engines like Google when looking for products or brands these days. In fact, 60 percent of consumers take advantage of Google search just to find exactly what they want, and more than 80 percent of online search … Continue reading →

Credit: InfoCaptor Dashboard

Best Tools for Keyword Rank Tracking

Nilesh Jethwa - Tue, 2018-05-22 23:08

An important element of search engine optimization (SEO) is choosing the right keyword. With the right keywords, you can make your content rank on search engines. But the work doesn’t stop after ranking; you still need to track the position … Continue reading →

Hat Tip To: InfoCaptor Dashboard

ORA-01659 on creation of a not unique global partitioned index

Tom Kyte - Tue, 2018-05-22 19:06
Dear Tom, I have a table that stores climatic data with this layout: idcell, day, field1, .... This table is locally partitioned by range on day and it has a local PK index: idcell, day. I want to create a non-unique global partitioned index on i...
Categories: DBA Blogs

PL/SQL programming to write to file in batches of 2 million rows

Tom Kyte - Tue, 2018-05-22 19:06
I have been assigned to the task. Task: in one table, I have 10 million records and I need to export the table data into CSV/text files, but the condition is that I need to export into 5 files, each file containing 2 million records. 1) ...
Categories: DBA Blogs

Partition pruning with MEMBER OF operator

Tom Kyte - Tue, 2018-05-22 19:06
Hello Tom! Is it possible to force Oracle to use (sub-)partition pruning when the MEMBER OF operator is used on some nested table? For example: <code>SELECT * FROM A_TABLE WHERE COL_1 MEMBER OF NUMBER_TAB_TYPE(1,10,4);</code> where NUMBER_TAB_TY...
Categories: DBA Blogs

HOW TO GET OLD VALUE OF A DATA

Tom Kyte - Tue, 2018-05-22 19:06
Hi there, I have a situation here: in one of my tables I have a loc_id column. My requirement is that I want all loc_id that have changed to a new loc_id, e.g.: LOC_ID CUST_NAME ---------- -----------------...
Categories: DBA Blogs

How to change the subscript/index value in an associative array?

Tom Kyte - Tue, 2018-05-22 19:06
Hi Tom, Is it possible to change the subscript/index value of an existing element in an associative array? <code> declare type a_arr is table of varchar2(20) index by pls_integer; tb1 a_arr; begin tb1(1) := 'aaaa'; tb1(2) := '...
Categories: DBA Blogs

TDE Encryption Wallet Change Password

Tom Kyte - Tue, 2018-05-22 19:06
Requirement: We need to be able to change the password on our Oracle TDE Encryption Wallet, which we use to encrypt some of our Tablespaces. This frequently asked questions article states that this can be done with OWM or orapki http://www.orac...
Categories: DBA Blogs

Docker: Lessons learned - Logging

Dietrich Schroff - Tue, 2018-05-22 14:23
After some time working with Docker, here are my experiences:

Some days ago I created my own container with a minimal web service.

Here is the ncweb.sh:
#!/bin/ash
sed -i  's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;}  | nc  -l -p 8080  2>&1 >> logfile; done
This is the Dockerfile:
FROM alpine

WORKDIR /tmp

RUN mkdir ncweb

ADD .  /tmp

ENTRYPOINT [ "/tmp/ncweb.sh" ]

After building the image
docker build -t ncweb:0.4 .
And starting the container:
docker run -d -p 8080:8080 --name ncweb0.4 ncweb:0.4
I was able to connect to the container and view the log:

To get the right command:
docker ps  |grep  ncweb:0.4 |awk '{print "docker exec -it "$1" ash"}'
and then use the output:
docker exec -it e4f9960fc8e5 ash
alpine:~/ncweb# docker exec -it e4f9960fc8e5 ash
/tmp # ls
Dockerfile  hexdump     index.html  logfile     ncweb       ncweb.sh
/tmp # cat logfile
GET / HTTP/1.1
Host: 192.168.178.46:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en,de;q=0.7,en-US;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0

Thu May 10 10:01:23 UTC 2018 request done
But this is not the right way to collect logs.
If I change ncweb.sh to
#!/bin/ash
sed -i  's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;}  | nc  -l -p 8080 ;done
then you can do the following (after building a new container version):

alpine:~/ncweb# docker run -d -p 8080:8080 ncweb:0.5 --name ncweb0.5
9589f77fc289a3713354a365f8f08098279e6d0e893de99a0431d8fbd62c834a

alpine:~/ncweb# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
9589f77fc289        ncweb:0.5           "/tmp/ncweb.sh --n..."   8 seconds ago       Up 7 seconds        0.0.0.0:8080->8080/tcp   gracious_archimedes
To get the logs (which are written to STDOUT):

alpine:~/ncweb# docker logs -f 9589f77fc289
GET / HTTP/1.1
Host: 192.168.178.46:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en,de;q=0.7,en-US;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0


Conclusion: It is better to use STDOUT than local logfiles. Or even better: use syslog or other central logging mechanisms.
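For the syslog route, Docker's logging drivers can take care of this without touching the image at all; a minimal sketch (the syslog endpoint is only a placeholder, adjust it to your environment):

docker run -d -p 8080:8080 \
  --log-driver=syslog \
  --log-opt syslog-address=udp://192.168.178.1:514 \
  --name ncweb0.5 ncweb:0.5

Everything the container writes to STDOUT/STDERR is then forwarded by the Docker daemon to the syslog server instead of being collected with docker logs.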


Where Did the EBS Certifications Summary Go?

Steven Chan - Tue, 2018-05-22 11:46

The official word on all EBS certifications is the Certifications database on My Oracle Support (MOS):

Since I announce all of my team's certifications here on this blog, I've also created a one-page summary of all blog articles covering EBS technology stack certifications. This summary page is maintained manually, so if there's any gap between the MOS Certifications database and this summary, the Certifications database wins.

Our blogging engine has changed multiple times over the years, so the URL for the Certifications summary has also changed periodically. The Certifications summary lives here now - update your bookmarks:

You can also navigate to it from this blog's home page by clicking on the "Check Your Certifications" button:

 

Categories: APPS Blogs

Adding a Documentum extension into python

Yann Neuhaus - Tue, 2018-05-22 08:15

Many years ago, out of frustration with the poor scripting tools around Documentum, I wrote a Documentum binding for python using distutils, and I remember how easy and straightforward it had been, even for someone not really into these things on a daily basis. Recently, I wanted to reuse that work but couldn’t find the source files; not that there were many, but I did not want to do it all over again. Finally, I had to give up and admit it: I lost them for good somewhere among the tens of machines, O/Ses, disks and USB drives I played with during that period. Those were sunny, cloudless times (got it?). Anyway, I decided to do it again but, as I hate repeating work, by using another method of extending python: this time I went for ctypes (cf. the documentation here: https://docs.python.org/3/library/ctypes.html).
One of the advantages of ctypes over distutils is that no compilation is needed, even when changing the version of python or of the O/S, because the interface is python and it gets interpreted at run-time. Thanks to ctypes, there isn’t much to do, and interfacing to a run-time library such as libdmcl40.so is a no-brainer.
There was however a big change in the evolution from python 2 to python 3: strings are no longer arrays of bytes but are now a distinct, incompatible type storing unicode characters. Transformation functions are of course provided to go from one type to the other and back. For low-level work such as interfacing a C/C++ shared library, the distinction is important because in C, strings are accessed as “char *”, i.e. arrays of bytes, and one cannot just pass around python text strings with 1 to 4 bytes per character. Fortunately, there was no need to produce two versions of the interface because python 2.7, the last version of python 2, understands the type conversion functions used here:

string.encode('ascii', 'ignore') to convert python 3's strings to python 2's arrays of bytes compatible with C/C++ char*
b.decode() to convert python 2's arrays of bytes to python3 unicode strings

(see https://docs.python.org/3/howto/unicode.html). Thus, it was sufficient to just write one version for python 3 and it would also be compatible with python 2.
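As a quick illustration of those two conversions, outside of any Documentum context (plain python, nothing else assumed):

s = "connect,dmtest,dmadmin,dmadmin"   # a python 3 unicode string
b = s.encode('ascii', 'ignore')        # bytes, i.e. what a C char* parameter expects
print(b)                               # b'connect,dmtest,dmadmin,dmadmin'
print(b.decode() == s)                 # True: back to a text string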
I started my work on a venerable 32-bit Ubuntu 14.04 with python 2.7. The Documentum library I used there was the 32-bit libdmcl40.so included in the Content Server v5.3 binaries. Later, I installed python 3 on the same VM and made the necessary changes to the interface so it would be accepted by this interpreter. Later on, I copied the interface file to another VM running a 64-bit Ubuntu 16.04 and v7.3 Documentum Content Server binaries, but I couldn’t make it work with the included 64-bit libdmcl40.so: I kept receiving SIGSEGV and core dumps from both python2 and python3. A case for a gdb session sometime… Fortunately, with the java-enabled libdmcl.so library, both pythons worked well, albeit with a perceptible delay at startup because of all the jars to load; a small price to pay though.

The interface

The C functions that libdmcl*.so exports are the following:

int dmAPIInit();
int dmAPIDeInit();
int dmAPIExec(const char *str);
char* dmAPIGet(const char *str);
int dmAPISet(const char *str, const char *arg);
char* dmAPIDesc(const char *str, int *code, int *type, int *sess);
char* dmAPIEval(const char *str, const char *arg);
char* dmGetPassword(const char *str);
int dmGetVersion( int *, int * );

However, the last 2 functions don’t seem to really be available from the library. Also, dmAPIDesc() (not to be confused with the describe server method) and dmAPIEval() are not documented in the API reference manual. Therefore, I’ve only considered the first 5 functions, the ones that really do the job as the building blocks of any Documentum script.
From within python, those functions are accessed through the wrapper functions below:

def dmInit()
def dmAPIDeInit()
def dmAPIGet(s)
def dmAPISet(s, value)
def dmAPIExec(stmt)

Those take care of the string conversion operations so you don’t have to; they are the only ones that talk directly to the Documentum API and the only ones to use for API work. Generally, they return True or a string if successful, and False or None if not.
Every Documentum client should start with a call to dmInit() in order to load and initialize the libdmcl*.so library’s internal state. To guarantee that, the interface does it itself at load time. As this function is idempotent, further calls at script startup don’t have any effect. On the other hand, dmAPIDeInit() is not really necessary; just exiting the script will do.
Here, I named the proxy function dmInit() instead of dmAPIInit() for a reason. This function does not just invoke the library’s dmAPIInit() but also initializes the python interface and its usage of ctypes: it loads the shared library and describes the types of the API functions’ arguments (argtypes) and return values (restype). Here is a snippet of its main part:

dmlib = 'libdmcl40.so'
...
dm = ctypes.cdll.LoadLibrary(dmlib); dm.restype = ctypes.c_char_p
...
dm.dmAPIInit.restype = ctypes.c_int;
dm.dmAPIDeInit.restype = ctypes.c_int;
dm.dmAPIGet.restype = ctypes.c_char_p; dm.dmAPIGet.argtypes = [ctypes.c_char_p]
dm.dmAPISet.restype = ctypes.c_int; dm.dmAPISet.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
dm.dmAPIExec.restype = ctypes.c_int; dm.dmAPIExec.argtypes = [ctypes.c_char_p]
status = dm.dmAPIInit()

The shared library whose name is in dmlib must be reachable through the LD_LIBRARY_PATH (or SHLIB_PATH or LIBPATH, depending on the Unix flavor); specifying its full path name works too. As I wrote before, if the script crashes, try setting dmlib to libdmcl.so instead, if it’s available.
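For example, before launching a script (the installation path and script name below are only assumptions, adjust them to your environment):

export LD_LIBRARY_PATH=/opt/documentum/product/7.3/bin:$LD_LIBRARY_PATH
python3 my_dctm_script.py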
The wrapper functions are used by all the verbs documented in the API Reference Manual. When the manual says for example:

Fetch
Purpose Fetches an object from the repository without placing a lock on the object.
Syntax
dmAPIExec("fetch,session,object_id[,type][,persistent_cache][,consistency_check_value]")
...
Return value
The Fetch method returns TRUE if successful or FALSE if unsuccessful.
...

it is the function dmAPIExec() that conveys the verb “fetch” and its arguments to the shared library. It takes just one argument, a string, and returns False if the call failed or True if it succeeded.

Another example:

Getservermap
Purpose Returns information about the servers known to a connection broker.
Syntax
dmAPIGet("getservermap,session,repository_name[,protocol][,host_name][,port_number]")
...
Return value
The Getservermap method returns a non-persistent ID for a server locator object.
...

Here, it’s dmAPIGet() that does it for the verb “getservermap”. It returns an empty string if the call failed (remapped to None to be more pythonic), a non-empty one with an ID if it succeeded.
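Putting such verbs together through the wrappers gives the familiar API look and feel; here is a small sketch (the session comes from connect() below and the object id is the one dumped in the test script at the end of this article):

if dmAPIExec("fetch," + session + ",0900c35080008107"):
   print(dmAPIGet("get," + session + ",0900c35080008107,object_name"))
else:
   print(dmAPIGet("getmessage," + session))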

For more usage comfort, a few functions have been added in the interface:

def connect(docbase, user_name, password):
"""
connects to given docbase as user_name/password;
returns a session id if OK, None otherwise
"""
def execute(session, dql_stmt):
"""
execute non-SELECT DQL statements;
returns TRUE if OK, False otherwise;
"""
def select(session, dql_stmt, attribute_names):
"""
execute the DQL SELECT statement passed in dql_stmt and outputs the result to stdout;
attributes_names is a list of attributes to extract from the result set;
return True if OK, False otherwise;
"""
def disconnect(session):
"""
closes the given session;
returns True if no error, False otherwise;
"""

Basically, they only wrap some error handling code around the calls to dmAPIGet()/dmAPIExec(). execute() and select() are just examples of how to use the interface and could be removed from it. Let’s take a look at the latter one, for instance:

def select(session, dql_stmt, attribute_names):
   """
   execute the DQL SELECT statement passed in dql_stmt and outputs the result to stdout;
   attributes_names is a list of attributes to extract from the result set;
   return True if OK, False otherwise;
   """
   show("in select(), dql_stmt=" + dql_stmt)
   try:
      query_id = dmAPIGet("query," + session + "," + dql_stmt)
      if query_id is None:
         raise(getOutOfHere)

      s = ""
      for attr in attribute_names:
         s += "[" + attr + "]\t"
      print(s)
      resp_cntr = 0
      while dmAPIExec("next," + session + "," + query_id):
         s = ""
         for attr in attribute_names:
            value = dmAPIGet("get," + session + "," + query_id + "," + attr)
            if "r_object_id" == attr and value is None:
               raise(getOutOfHere)
            s += "[" + (value if value else "None") + "]\t"
         resp_cntr += 1
         show(str(resp_cntr) + ": " + s)
      show(str(resp_cntr) + " rows iterated")

      err_flag = dmAPIExec("close," + session + "," + query_id)
      if not err_flag:
         raise(getOutOfHere)

      status = True
   except getOutOfHere:
      show(dmAPIGet("getmessage," + session).rstrip())
      status = False
   except Exception as e:
      print("Exception in select():")
      print(e)
      traceback.print_stack()
      print(resp_cntr); print(attr); print(s); print("[" + value + "]")
      status = False
   finally:
      show("exiting select()")
      return status

If it weren’t for the error handling, it really looks like dmawk code fresh from the API manual!
And here are two invocations:

   print("")
   stmt = "select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document"
   status = DctmAPI.select(session, stmt, ("r_object_id", "object_name", "owner_name", "acl_domain", "acl_name"))
   if status:
      print("select [" + stmt + "] was successful")
   else:
      print("select [" + stmt + "] was not successful")

   print("")
   stmt = "select count(*) from dm_document"
   status = DctmAPI.select(session, stmt,  ["count(*)"])
   if status:
      print("select [" + stmt + "] was successful")
   else:
      print("select [" + stmt + "] was not successful"

Resulting in the following output:

in select(), dql_stmt=select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document
[r_object_id] [object_name] [owner_name] [acl_domain] [acl_name]
1: [0900c350800001d0] [Default Signature Page Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
2: [6700c35080000100] [CSEC Plugin] [dmadmin] [dmadmin] [dm_4500c35080000101]
3: [6700c35080000101] [Snaplock Connector] [dmadmin] [dmadmin] [dm_4500c35080000101]
4: [0900c350800001ff] [Blank Word 2007 / 2010 Document] [dmadmin] [dmadmin] [dm_4500c35080000101]
5: [0900c35080000200] [Blank Word 2007 / 2010 Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
6: [0900c35080000201] [Blank Word 2007 / 2010 Macro-enabled Document] [dmadmin] [dmadmin] [dm_4500c35080000101]
7: [0900c35080000202] [Blank Word 2007 / 2010 Macro-enabled Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
8: [0900c35080000203] [Blank Excel 2007 / 2010 Workbook] [dmadmin] [dmadmin] [dm_4500c35080000101]
9: [0900c35080000204] [Blank Excel 2007 / 2010 Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
10: [0900c350800001da] [11/21/2017 16:31:10 dm_PostUpgradeAction] [dmadmin] [dmadmin] [dm_4500c35080000101]
11: [0900c35080000205] [Blank Excel 2007 / 2010 Macro-enabled Workbook] [dmadmin] [dmadmin] [dm_4500c35080000101]
12: [0900c35080000206] [Blank Excel 2007 / 2010 Macro-enabled Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
13: [0900c35080000207] [Blank Excel 2007 / 2010 Binary Workbook] [dmadmin] [dmadmin] [dm_4500c35080000101]
14: [0900c35080000208] [Blank PowerPoint 2007 / 2010 Presentation] [dmadmin] [dmadmin] [dm_4500c35080000101]
15: [0900c35080000209] [Blank PowerPoint 2007 / 2010 Slide Show] [dmadmin] [dmadmin] [dm_4500c35080000101]
...
880 rows iterated
exiting select()
select [select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document] was successful
in select(), dql_stmt=select count(*) from dm_document
[count(*)]
1: [880]
1 rows iterated
exiting select()
select [select count(*) from dm_document] was successful

Admittedly, the above select() function could be cleverer and discover the queried attributes by itself by inspecting the returned collection; also, the output could be more structured. Stay tuned on this channel, it’s coming up!

The packaging

In order to make the interface easily usable, it has been packaged into a module named DctmAPI. To use it, just add an “import DctmAPI” statement in the client script and prefix the functions from the module with “DctmAPI”, the module namespace, when calling them, as shown in the example above.
I’ve given some thought to making a class out of it, but the benefits were not so obvious because many functions are so generic that most of them would have been @staticmethods of the class anyway. Moreover, the only state variable would have been the session id, so instead of carrying it around, an instance of the class would have to be used instead; no real improvement here. Even worse, as the session id would be hidden in the instance, the statements passed to an instance object would have to be changed not to include it and leave that to the instance, which would hurt the habit of using the standard API argument format; also, as a few API verbs don’t need a session id, exceptions to the rule would need to be introduced, which would mess up the class even more. Therefore, I chose to stick as close as possible to the syntax documented in the API manual, at the only cost of introducing a namespace with the module.
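For the record, such a class would essentially have been a thin shell around the module’s functions with the session id as its only state; a quick sketch of that rejected design:

class DctmSession:
   "thin OO wrapper; the instance only carries the session id"
   def __init__(self, docbase, user_name, password):
      self.session = connect(docbase, user_name, password)

   def select(self, dql_stmt, attribute_names):
      return select(self.session, dql_stmt, attribute_names)

   def disconnect(self):
      return disconnect(self.session)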

The source

Without further ado, here is the full interface module DctmAPI.py:

"""
This module is a python - Documentum binding based on ctypes;
requires libdmcl40.so/libdmcl.so to be reachable through LD_LIBRARY_PATH;
C. Cervini - dbi-services.com

The binding works as-is for both python2 and python3; no recompilation required; that's the good thing with ctypes compared to e.g. distutils/SWIG;
Under a 32-bit O/S, it must use the libdmcl40.so, whereas under a 64-bit Linux it must use the java backed one, libdmcl.so;

For compatibility with python3 (where strings are now unicode ones and no longer arrays of bytes), ctypes string parameters are always converted to bytes, either by prefixing them
with a b if literal or by invoking their encode('ascii', 'ignore') method; to get back to text from bytes, b.decode() is used; this works in python2 as well as in python3, so the source is compatible with these two versions of the language;
"""

import os
import ctypes
import sys, traceback

# use foreign C library;
# use this library under a 32-bit O/S;
dmlib = 'libdmcl40.so'
# use this library under a 64-bit Linux, Content server >= v6.x (java-backed);
#dmlib = 'libdmcl.so'

dm = 0
logLevel = 1

class getOutOfHere(Exception):
   pass

def show(mesg):
   "displays the message mesg if allowed"
   if logLevel > 0:
      print(mesg)

def dmInit():
   """
   initializes the Documentum part;
   returns True if successfull, False otherwise;
   """

   show("in dmInit()")
   global dm

   try:
      dm = ctypes.cdll.LoadLibrary(dmlib);  dm.restype = ctypes.c_char_p
      show("dm=" + str(dm) + " after loading library " + dmlib)
      dm.dmAPIInit.restype    = ctypes.c_int;
      dm.dmAPIDeInit.restype  = ctypes.c_int;
      dm.dmAPIGet.restype     = ctypes.c_char_p;      dm.dmAPIGet.argtypes  = [ctypes.c_char_p]
      dm.dmAPISet.restype     = ctypes.c_int;         dm.dmAPISet.argtypes  = [ctypes.c_char_p, ctypes.c_char_p]
      dm.dmAPIExec.restype    = ctypes.c_int;         dm.dmAPIExec.argtypes = [ctypes.c_char_p]
      status  = dm.dmAPIInit()
   except Exception as e:
      print("exception in dminit(): ")
      print(e)
      traceback.print_stack()
      status = False
   finally:
      show("exiting dmInit()")
      return True if 0 != status else False
   
def dmAPIDeInit():
   """
   releases the memory structures in documentum's library;
   returns True if no error, False otherwise;
   """
   status = dm.dmAPIDeInit()
   return True if 0 != status else False
   
def dmAPIGet(s):
   """
   passes the string s to dmAPIGet() method;
   returns a non-empty string if OK, None otherwise;
   """
   value = dm.dmAPIGet(s.encode('ascii', 'ignore'))
   return value.decode() if value is not None else None

def dmAPISet(s, value):
   """
   passes the string s to dmAPISet() method;
   returns TRUE if OK, False otherwise;
   """
   status = dm.dmAPISet(s.encode('ascii', 'ignore'), value.encode('ascii', 'ignore'))
   return True if 0 != status else False

def dmAPIExec(stmt):
   """
   passes the string s to dmAPIExec() method;
   returns TRUE if OK, False otherwise;
   """
   status = dm.dmAPIExec(stmt.encode('ascii', 'ignore'))
   return True if 0 != status else False

def connect(docbase, user_name, password):
   """
   connects to given docbase as user_name/password;
   returns a session id if OK, None otherwise
   """
   show("in connect(), docbase = " + docbase + ", user_name = " + user_name + ", password = " + password) 
   try:
      session = dmAPIGet("connect," + docbase + "," + user_name + "," + password)
      if session is None or not session:
         raise(getOutOfHere)
      else:
         show("successful session " + session)
         show(dmAPIGet("getmessage," + session).rstrip())
   except getOutOfHere:
      print("unsuccessful connection to docbase " + docbase + " as user " + user_name)
      session = None
   except Exception as e:
      print("Exception in connect():")
      print(e)
      traceback.print_stack()
      session = None
   finally:
      show("exiting connect()")
      return session

def execute(session, dql_stmt):
   """
   execute non-SELECT DQL statements;
   returns TRUE if OK, False otherwise;
   """
   show("in execute(), dql_stmt=" + dql_stmt)
   try:
      query_id = dmAPIGet("query," + session + "," + dql_stmt)
      if query_id is None:
         raise(getOutOfHere)
      err_flag = dmAPIExec("close," + session + "," + query_id)
      if not err_flag:
         raise(getOutOfHere)
      status = True
   except getOutOfHere:
      show(dmAPIGet("getmessage," + session).rstrip())
      status = False
   except Exception as e:
      print("Exception in execute():")
      print(e)
      traceback.print_stack()
      status = False
   finally:
      show(dmAPIGet("getmessage," + session).rstrip())
      show("exiting execute()")
      return status

def select(session, dql_stmt, attribute_names):
   """
   execute the DQL SELECT statement passed in dql_stmt and outputs the result to stdout;
   attributes_names is a list of attributes to extract from the result set;
   return True if OK, False otherwise;
   """
   show("in select(), dql_stmt=" + dql_stmt)
   try:
      query_id = dmAPIGet("query," + session + "," + dql_stmt)
      if query_id is None:
         raise(getOutOfHere)

      s = ""
      for attr in attribute_names:
         s += "[" + attr + "]\t"
      print(s)
      resp_cntr = 0
      while dmAPIExec("next," + session + "," + query_id):
         s = ""
         for attr in attribute_names:
            value = dmAPIGet("get," + session + "," + query_id + "," + attr)
            if "r_object_id" == attr and value is None:
               raise(getOutOfHere)
            s += "[" + (value if value else "None") + "]\t"
         resp_cntr += 1
         show(str(resp_cntr) + ": " + s)
      show(str(resp_cntr) + " rows iterated")

      err_flag = dmAPIExec("close," + session + "," + query_id)
      if not err_flag:
         raise(getOutOfHere)

      status = True
   except getOutOfHere:
      show(dmAPIGet("getmessage," + session).rstrip())
      status = False
   except Exception as e:
      print("Exception in select():")
      print(e)
      traceback.print_stack()
      print(resp_cntr); print(attr); print(s); print("[" + value + "]")
      status = False
   finally:
      show("exiting select()")
      return status

def disconnect(session):
   """
   closes the given session;
   returns True if no error, False otherwise;
   """
   show("in disconnect()")
   try:
      status = dmAPIExec("disconnect," + session)
   except Exception as e:
      print("Exception in disconnect():")
      print(e)
      traceback.print_stack()
      status = False
   finally:
      show("exiting disconnect()")
      return status

# initializes the interface;
dmInit()

The test script

Here is an example of script showing how to use the interface:

#!/usr/bin/env python

"""
Test the ctypes-based python interface to Documentum API;
"""

import DctmAPI

# -----------------
# main;
if __name__ == "__main__":
   DctmAPI.logLevel = 1

   # not really needed as it is done in the module itself;
   status = DctmAPI.dmInit()
   if status:
      print("dmInit() was successful")
   else:
      print("dmInit() was not successful")

   print("")
   session = DctmAPI.connect(docbase = "dmtest", user_name = "dmadmin", password = "dmadmin")
   if session is None:
      print("no session opened, exiting ...")
      exit(1)
   
   print("")
   dump = DctmAPI.dmAPIGet("dump," + session + "," + "0900c35080008107")
   print("object 0900c35080008107 dumped:\n" + dump)
   
   print("")
   stmt = "update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'"
   status = DctmAPI.execute(session, stmt)
   if status:
      print("execute [" + stmt + "] was successful")
   else:
      print("execute [" + stmt + "] was not successful")

   print("")
   stmt = "select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document"
   status = DctmAPI.select(session, stmt, ("r_object_id", "object_name", "owner_name", "acl_domain", "acl_name"))
   if status:
      print("select [" + stmt + "] was successful")
   else:
      print("select [" + stmt + "] was not successful")

   print("")
   stmt = "select count(*) from dm_document"
   status = DctmAPI.select(session, stmt,  ["count(*)"])
   if status:
      print("select [" + stmt + "] was successful")
   else:
      print("select [" + stmt + "] was not successful")

   print("")
   status = DctmAPI.disconnect(session)
   if status:
      print("successfully disconnected")
   else:
      print("error while  disconnecting")

   print("")
   status = DctmAPI.dmAPIDeInit()
   if status:
      print("successfully deInited")
   else:
      print("error while  deInited")

I’m not a day-to-day user of python, so I guess there are ways to make the interface more idiomatic, or pythonic as they say. Feel free to adapt it to your tastes and needs. Comments and suggestions are welcome, of course.

 

The post Adding a Documentum extension into python appeared first on Blog dbi services.

Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code

Amis Blog - Tue, 2018-05-22 07:29

Put simply – CQRS (Command Query Responsibility Segregation) is an architecture pattern that recognizes the fact that it may be wise to separate the database that processes data manipulations from the engines that handle queries. When data retrieval requires special formats, scale, availability, TCO, location, search options and response times, it is worth considering introducing additional databases to handle those specific needs. These databases can provide data in a way that caters for the special needs to special consumers – by offering data in filtered, preprocessed format or shape or aggregation, with higher availability, at closer physical distance, with support for special search patterns and with better performance and scalability.

A note of caution: you only introduce CQRS in a system if there is a clear need for it. Not because you feel obliged to implement such a shiny, much talked about pattern or you feel as if everyone should have it. CQRS is not a simple thing – especially in existing systems, packaged applications and legacy databases. Detecting changes and extracting data from the source, transporting and converting the data and applying the data in a reliable, fast enough way with the required level of consistency is not trivial.

In many of my conference presentations, I show demonstrations with running software. To better clarify what I am talking about, to allow the audience to try things out for themselves and because doing demos usually is fun. And a frequent element in these demos is Twitter. Because it is well known and because it allows the audience to participate in the demo. I can invite an audience to tweet using an agreed hashtag and their tweets trigger the demo or at least make an appearance. In this article, I discuss one of these demos – showing an example of CQRS. The picture shows the outline: tweets are consumed by a Node application. Each tweet is converted to an event on a Kafka Topic. This event is consumed by a second Node application (potentially one of multiple instances in a Kafka Consumer Group, to allow for more scalability). This Node application creates a new record in an Elastic Search index – the Query destination in this little CQRS spiel. The out of the box dashboard tool Kibana allows us to quickly inspect and analyse the tweet records. Additionally we can create an advanced query service on top of Elastic Search.

This article shows the code behind this demo. The code was prepared for the JEEConf 2018 conference in Kyiv, Ukraine – and can be found on GitHub: https://github.com/lucasjellema/50-shades-of-data-jeeconf2018-kyiv/tree/master/twitter-kafka-demo .

image

The main elements in the demo:

1. Kafka Topic tweets-topic (in my demo, this topic is created in Oracle Cloud Event Hub Service, a managed Kafka cloud service)

2. Node application that consumes from Twitter – and publishes to the Kafka topic

3. (Postman Collection to create) Elastic Search Index plus custom mapping (primarily to extract the proper creation date time value from a date string) (in my demo, this Elastic Search Index is created in an Elastic Search instance running in a Docker Container on Oracle Container Cloud)

4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the index. In this demo, the Node application is also running on Oracle Cloud (Application Container Cloud), but that does not have to be the case

5. Kibana dashboard on top of the Tweets Index. In my demo, Kibana is also running in a Docker container in Oracle Container Cloud

1. Kafka Tweets Topic on Oracle Event Hub Cloud Service

image

After completing the wizard, the topic is created and can be accessed by producers and consumers.
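For reference, on a self-managed Kafka cluster the same topic could be created from the command line; a sketch (the zookeeper address and the partition/replication settings are just assumptions):

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic a516817-tweetstopic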

2. Node application that consumes from Twitter – and publishes to the Kafka topic

The Node application consists of an index.js file that handles HTTP Requests – for health checking – and consumes from Twitter and publishes to a Kafka Topic. It uses a file twitterconfig.js (not included) that contains the secret details of a Twitter client. The contents of this file should look like this – and should contain your own Twitter Client Details:

// CHANGE THIS **************************************************************
// go to https://apps.twitter.com/ to register your app
var twitterconfig = {
    consumer_key: 'mykey',
    consumer_secret: 'mysecret',
    access_token_key: 'at-key',
    access_token_secret: 'at-secret'  
    };
    
    module.exports = {twitterconfig};

The index.js file requires the npm libraries kafka-node and twit as well as express and http for handling http requests.
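If you want to run it yourself, these dependencies would typically be installed with npm, for example (body-parser is also used, for parsing request bodies):

npm install express body-parser kafka-node twit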

The code can be said to be divided into three parts:

  • initialization, create HTTP server and handle HTTP requests
  • Consume from Twitter
  • Publish to Kafka

Here are the three code sections:

const express = require('express');
var http = require('http')
const app = express();
var PORT = process.env.PORT || 8144;
const server = http.createServer(app);
var APP_VERSION = "0.0.3"

const startTime = new Date()
const bodyParser = require('body-parser');
app.use(bodyParser.json());
var tweetCount = 0;
app.get('/about', function (req, res) {
  var about = {
    "about": "Twitter Consumer and Producer to " + TOPIC_NAME,
    "PORT": process.env.PORT,
    "APP_VERSION ": APP_VERSION,
    "Running Since": startTime,
    "Total number of tweets processed": tweetCount

  }
  res.json(about);
})
server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

Code for consuming from Twitter – in this case for the hash tags #jeeconf,#java and #oraclecode:

var Twit = require('twit');
const { twitterconfig } = require('./twitterconfig');

var T = new Twit({
  consumer_key: twitterconfig.consumer_key,
  consumer_secret: twitterconfig.consumer_secret,
  access_token: twitterconfig.access_token_key,
  access_token_secret: twitterconfig.access_token_secret,
  timeout_ms: 60 * 1000,
});


var twiterHashTags = process.env.TWITTER_HASHTAGS || '#oraclecode,#java,#jeeconf';
var tracks = { track: twiterHashTags.split(',') };

let tweetStream = T.stream('statuses/filter', tracks)
tweetstream(tracks, tweetStream);

function tweetstream(hashtags, tweetStream) {
  console.log("Started tweet stream for hashtag #" + JSON.stringify(hashtags));

  tweetStream.on('connected', function (response) {
    console.log("Stream connected to twitter for #" + JSON.stringify(hashtags));
  })
  tweetStream.on('error', function (error) {
    console.log("Error in Stream for #" + JSON.stringify(hashtags) + " " + error);
  })
  tweetStream.on('tweet', function (tweet) {
    produceTweetEvent(tweet);
  });
}

Code for publishing to the Kafka Topic a516817-tweetstopic:

const kafka = require('kafka-node');
const APP_NAME = "TwitterConsumer"

var EVENT_HUB_PUBLIC_IP = process.env.KAFKA_HOST || '129.1.1.116';
var TOPIC_NAME = process.env.KAFKA_TOPIC || 'a516817-tweetstopic';

var Producer = kafka.Producer;
var client = new kafka.Client(EVENT_HUB_PUBLIC_IP);
var producer = new Producer(client);
KeyedMessage = kafka.KeyedMessage;

producer.on('ready', function () {
  console.log("Producer is ready in " + APP_NAME);
});
producer.on('error', function (err) {
  console.log("failed to create the client or the producer " + JSON.stringify(err));
})


let payloads = [
  { topic: TOPIC_NAME, messages: '*', partition: 0 }
];

function produceTweetEvent(tweet) {
  var hashtagFound = false;
  try {
    // find out which of the original hashtags { track: ['oraclecode', 'java', 'jeeconf'] } occurs in the hashtags of this tweet;
    // that is the one for the tagFilter property;
    // select one other hashtag from tweet.entities.hashtags to set in property hashtag
    var tagFilter = "#jeeconf";
    var extraHashTag = "liveForCode";
    for (var i = 0; i < tweet.entities.hashtags.length; i++) {
      var tag = '#' + tweet.entities.hashtags[i].text.toLowerCase();
      console.log("inspect hashtag " + tag);
      var idx = tracks.track.indexOf(tag);
      if (idx > -1) {
        tagFilter = tag;
        hashtagFound = true;
      } else {
        extraHashTag = tag
      }
    }//for

    if (hashtagFound) {
      var tweetEvent = {
        "eventType": "tweetEvent"
        , "text": tweet.text
        , "isARetweet": tweet.retweeted_status ? "y" : "n"
        , "author": tweet.user.name
        , "hashtag": extraHashTag
        , "createdAt": tweet.created_at
        , "language": tweet.lang
        , "tweetId": tweet.id
        , "tagFilter": tagFilter
        , "originalTweetId": tweet.retweeted_status ? tweet.retweeted_status.id : null
      };
      eventPublisher.publishEvent(tweet.id, tweetEvent)
      tweetCount++
    }// if hashtag found
  } catch (e) {
    console.log("Exception in publishing Tweet Event " + JSON.stringify(e))
  }
}

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: TOPIC_NAME, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(data));
  });
}//publishEvent

3. (Postman Collection to create) Elastic Search Index plus custom mapping

Preparation of an Elastic Search environment is done through REST API calls. These can be made from code or from the command line (using CURL) or from a tool such as Postman. In this case, I have created a Postman collection with a number of calls to prepare the Elastic Search index tweets.

image

The following requests are relevant:

  • Check if the Elastic Search server is healthy: GET {{ELASTIC_HOME}}:9200/_cat/health
  • Create the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets
  • Create the mapping for the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets/_mapping/doc

The body for the last request is relevant:

{
                "properties": {
                    "author": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "createdAt": {
                        "type": "date",
          "format": "EEE MMM dd HH:mm:ss ZZ yyyy"
  
                    },
                    "eventType": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "hashtag": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "isARetweet": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "language": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "tagFilter": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "text": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "tweetId": {
                        "type": "long"
                    }
                }
            }

The custom aspect of the mapping is primarily to extract proper creation date time value from a date string.
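The same requests can of course be issued with curl instead of Postman; for example (substitute your own Elastic Search host for ELASTIC_HOME; tweets-mapping.json is a hypothetical file holding the mapping body shown above):

curl -X GET "$ELASTIC_HOME:9200/_cat/health"
curl -X PUT "$ELASTIC_HOME:9200/tweets"
curl -X PUT -H "Content-Type: application/json" -d @tweets-mapping.json "$ELASTIC_HOME:9200/tweets/_mapping/doc"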

4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the elastic search index

The tweetListener.js file contains the code for two main purposes: handling HTTP requests (primarily for health checks) and consuming events from the Kafka Topic for tweets. This file requires the npm modules express, http and kafka-node for this. It also imports the local module model from the file model.js. That module writes Tweet records to the Elastic Search index, using the npm module elasticsearch.

The code in tweetListener.js is best read in two sections:

First section for handling HTTP requests:

const express = require('express');
var https = require('https')
  , http = require('http')
const app = express();
var PORT = process.env.PORT || 8145;
const server = http.createServer(app);
var APP_VERSION = "0.0.3"


const bodyParser = require('body-parser');
app.use(bodyParser.json());
var tweetCount = 0;
app.get('/about', function (req, res) {
  var about = {
    "about": "Twitter Consumer from  " +SOURCE_TOPIC_NAME,
    "PORT": process.env.PORT,
    "APP_VERSION ": APP_VERSION,
    "Running Since": startTime,
    "Total number of tweets processed": tweetCount

  }
  res.json(about);
})
server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

Second section for consuming Kafka events from tweets topic – and invoking the model module for each event:

var kafka = require('kafka-node');
var model = require("./model");

var tweetListener = module.exports;

var subscribers = [];
tweetListener.subscribeToTweets = function (callback) {
  subscribers.push(callback);
}

// var kafkaHost = process.env.KAFKA_HOST || "192.168.188.102";
// var zookeeperPort = process.env.ZOOKEEPER_PORT || 2181;
// var TOPIC_NAME = process.env.KAFKA_TOPIC ||'tweets-topic';

var KAFKA_ZK_SERVER_PORT = 2181;

var SOURCE_KAFKA_HOST = '129.1.1.116';
var SOURCE_TOPIC_NAME = 'a516817-tweetstopic';

var consumerOptions = {
    host: SOURCE_KAFKA_HOST + ':' + KAFKA_ZK_SERVER_PORT ,
  groupId: 'consume-tweets-for-elastic-index',
  sessionTimeout: 15000,
  protocol: ['roundrobin'],
  fromOffset: 'latest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
};

var topics = [SOURCE_TOPIC_NAME];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumer1' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);

function onMessage(message) {
  console.log('%s read msg Topic="%s" Partition=%s Offset=%d', this.client.clientId, message.topic, message.partition, message.offset);
  console.log("Message Value " + message.value)

  subscribers.forEach((subscriber) => {
    subscriber(message.value);

  })
}

function onError(error) {
  console.error(error);
  console.error(error.stack);
}

process.once('SIGINT', function () {
  async.each([consumerGroup], function (consumer, callback) {
    consumer.close(true, callback);
  });
});


tweetListener.subscribeToTweets((message) => {
  var tweetEvent = JSON.parse(message);
  tweetCount++; 
  // ready to elastify tweetEvent
  console.log("Ready to put on Elastic "+JSON.stringify(tweetEvent));
  model.saveTweet(tweetEvent).then((result, error) => {
    console.log("Saved to Elastic "+JSON.stringify(result)+'Error?'+JSON.stringify(error));
})

})

The file model.js connects to the Elastic Search server and saves tweets to the tweets index when so requested. Very straightforward, and without any real exception handling, for example for the case where the Elastic Search server does not accept a record or is simply unavailable. Remember: this is just the code for a demo.

var tweetsModel = module.exports;
var elasticsearch = require('elasticsearch');

var ELASTIC_SEARCH_HOST = process.env.ELASTIC_CONNECTOR || 'http://129.150.114.134:9200';

var client = new elasticsearch.Client({
    host: ELASTIC_SEARCH_HOST,
});

client.ping({
    requestTimeout: 30000,
}, function (error) {
    if (error) {
        console.error('elasticsearch cluster is down!');
    } else {
        console.log('Connection to Elastic Search is established');
    }
});

tweetsModel.saveTweet = async function (tweet) {
    try {
        var response = await client.index({
            index: 'tweets',
            id: tweet.tweetId,
            type: 'doc',
            body: tweet
        }
        );

        console.log("Response: " + JSON.stringify(response));
        return tweet;
    }
    catch (e) {
        console.error("Error in Elastic Search - index document " + tweet.tweetId + ":" + JSON.stringify(e))
    }

}

5. Kibana dashboard on top of the Tweets Index.

Kibana is an out of the box application, preconfigured in my case for the colocated Elastic Search server. Once I provide the name of the index I am interested in – tweets – Kibana immediately shows an overview of (selected time ranges in) this index (the peaks in the screenshot indicate May 19th and 20th, when JEEConf was taking place in Kyiv and I presented this demo):

image

The same results in the Twitter UI:

image
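And to close the CQRS loop: the same index can also back a simple query service. A sketch using the same elasticsearch npm client as in model.js (the query values are only an illustration):

// return the ten most recent tweets for a given tag filter
async function findTweets(tag) {
  const response = await client.search({
    index: 'tweets',
    body: {
      query: { match: { tagFilter: tag } },
      sort: [{ createdAt: { order: 'desc' } }],
      size: 10
    }
  });
  return response.hits.hits.map(hit => hit._source);
}

findTweets('#jeeconf').then(tweets => console.log(JSON.stringify(tweets)));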

The post Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code appeared first on AMIS Oracle and Java Blog.

Costs associated in a query execution plan

Tom Kyte - Tue, 2018-05-22 00:46
Hello, I have a question to be clarified related to the costs associated with SQL query execution. I have a view built on 3 big tables. <code>CREATE OR REPLACE VIEW TEXT_TRANSLATION AS SELECT c.module ...
Categories: DBA Blogs

Write PLSQL procedure to get query result in an excel automatically

Tom Kyte - Tue, 2018-05-22 00:46
Hi, though I have checked Google many times for the answer to my question 'Write PLSQL procedure to get query result in an excel automatically', I still do not have a clear understanding. Could you please explain it in the simplest way possible....
Categories: DBA Blogs

Update all database tables with current timestamp

Tom Kyte - Tue, 2018-05-22 00:46
Hi, I have a Production database ( version shown below) with about 2000 tables, having over 10 million rows each. <code>BANNER ----------------------------------------...
Categories: DBA Blogs

ora-01720 when creating a view with sys.dual

Tom Kyte - Tue, 2018-05-22 00:46
I have this create view statement that is failing with ORA-01720: grant option does not exist for 'SYS.DUAL' on line 7 create or replace view v_decisions (Sort_No, Name, Abbreviation, Include_In_EM, WPSetup_ID, Include_In_MBP, Web_Name ) as sel...
Categories: DBA Blogs
