Feed aggregator

Introducing SQL Server on Docker Swarm orchestration

Yann Neuhaus - Fri, 2018-02-02 07:48

SQL Server 2017 is available on multiple platforms: Windows, Linux and Docker. The latter provides containerization with no lengthy setup and no special prerequisites before running your SQL Server databases, which is probably the key to its adoption by developers.

It was my case as a developer for our DMK management kit, which provides our customers with a SQL Server database maintenance solution on all editions from SQL Server 2005 to SQL Server 2017 (including Linux). In the context of our DMK, we have to develop for different versions of SQL Server, including cumulative updates and service packs that may introduce new database maintenance features, and it can be challenging when we frequently have to release a new development or fix and perform unit tests on different SQL Server versions or platforms. At this stage you may certainly point out that virtualization already addresses those requirements, and you're right: we have used a lot of provisioned virtual machines on Hyper-V so far.

The obvious question is: why switch to Docker container technologies? Well, for different reasons. Firstly, sharing my SQL Server containers and test databases with my team is pretty straightforward: we may use a Docker registry and the docker push / pull commands. Then, provisioning a new SQL Server instance is quicker with containers than with virtual machines and generally leads to a lower CPU / memory / disk footprint on my laptop. I talked a little bit about it at the last SQL Pass meetup in Geneva, by the way.
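
To give an idea of that sharing workflow, here is a minimal sketch of tagging, pushing and pulling an image through a team registry; the registry address (myregistry:5000) and the repository name (dbi/mssql-dev) are purely illustrative.

# Tag a local SQL Server image for a team registry (registry and repository names are illustrative)
$ docker tag microsoft/mssql-server-linux:2017-latest myregistry:5000/dbi/mssql-dev:2017

# Push it so colleagues can reuse it
$ docker push myregistry:5000/dbi/mssql-dev:2017

# A teammate then simply pulls the same image
$ docker pull myregistry:5000/dbi/mssql-dev:2017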

But in this blog post I would like to take another step with Docker and go beyond the development area. As DBAs we may have to deal with container management in production in the near future (unless it is already the case for you :) ) and we need a more global picture of the Docker ecosystem. I remember a discussion with an attendee during my SQL Server Docker and Microservices session at TugaIT 2017 in Lisbon who told me Docker and containers are only for developers and not suitable for production. At the time of this discussion, I had to admit he was not entirely wrong. Firstly, let's say that, as with virtualization before, container-based application adoption will probably take time. This is at least what I can conclude from my experience and from what I notice around me, even if DevOps and microservices architectures seem to be improving the situation. This is probably because production environments introduce other challenges and key factors than those we have in the development area, such as service availability, patching and upgrades, monitoring and alerting, performance, and so on. At the same time, Docker and, more generally speaking, container technologies are constantly maturing, as are the tools to manage such infrastructures; and in production, as you know, we prefer to be safe and there is no room for unstable, non-established products that may compromise the core business.

So, we may wonder what the part of the DBA is in all of this. Well, regardless of the underlying infrastructure, we still have the same responsibilities: providing the configuration, ensuring databases are backed up, managing and maintaining data (including performance), preventing security threats and, finally, guaranteeing data availability. In fact, looking back at the last decade, we already faced exactly the same situation with the emergence of the virtualization paradigm, when we had to install our SQL Server instances on such infrastructures. I still remember some reluctance and heated discussions among DBAs.

From my side, I always keep high availability and performance in mind because they are the main concerns of my customers when it comes to production environments. So, I was curious to dig further into container technologies in this area, starting with how to deal with the different orchestration tools. The main leaders on the market are probably Docker Swarm, Kubernetes, Mesosphere, CoreOS fleet (CoreOS was recently acquired by Red Hat), Red Hat OpenShift, Amazon ECS and Azure Container Service.

In this first blog post, I decided to write about the Docker Swarm orchestrator, probably because I was already comfortable with native Docker commands and Docker Swarm offers an additional set of docker commands. When going into the details, the interesting point is that I discovered plenty of other concepts, which led me to realize I was reaching another world … a production world. This time it is not just about pulling / pushing containers, for sure :) Before you keep reading this blog post, it is important to state that it is not intended to teach you how to implement Docker Swarm; the Docker web site is well designed for that. My intention is just to highlight some important key features I think DBAs should be aware of before starting to manage container infrastructures.

Firstly, implementing a Docker Swarm requires being familiar with some key architecture concepts. Fortunately, most of them are easy to understand if you are already comfortable with SQL Server and cluster-based architectures, including SQL Server FCIs or availability groups.

Let’s have a look at the main components:

  • Nodes: A node is just an instance of the Docker engine participating in the swarm
  • Manager nodes: They are primarily designed to dispatch units of work (called tasks, tied to containers) to worker nodes according to your service definition
  • Worker nodes: They receive and execute the tasks dispatched by the manager nodes

Here is the first implementation of my Docker lab infrastructure:

[Image: blog 127 - 0 - swarm architecture lab]

It was composed of 3 Docker nodes, one of which acted as both worker and manager. Obviously, this is not an ideal scenario for production because this architecture lacks a fault-tolerant design, but it was enough to start with my basic container labs.

I use Docker Server version 17.12.0-ce and, as shown below, swarm mode is enabled.

$ sudo docker info
Containers: 13
 Running: 4
 Paused: 0
 Stopped: 9
Images: 37
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: 7a9v6uv6jur5cf8x3bi6ggpiz
 Is Manager: true
 ClusterID: 63pdntf40nzav9barmsnk91hb
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
…

The IP address of the manager is reachable from the host operating system.

$ sudo docker node ls
ID                            HOSTNAME                      STATUS              AVAILABILITY        MANAGER STATUS
s6pu7x3htoxjqvg9vilkoffj1     sqllinux2.dbi-services.test   Ready               Active
ptcay2nq4uprqb8732u8k451a     sqllinux3.dbi-services.test   Ready               Active
7a9v6uv6jur5cf8x3bi6ggpiz *   sqllinux.dbi-services.test    Ready               Active              Leader

Here is the IP address of the manager (sqllinux node) used during the swarm initialization with the --advertise-addr parameter. To put it simply, this is the address the other nodes use to connect to this node during the joining phase.
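
For reference, here is a minimal sketch of how such a swarm is typically initialized and joined. The manager IP and the port come from my lab; the join token is a placeholder printed by the init command.

# On the manager node (sqllinux): initialize the swarm and advertise the manager address
$ sudo docker swarm init --advertise-addr 192.168.40.20

# On each worker node: join the swarm using the token displayed by the previous command
$ sudo docker swarm join --token <worker-join-token> 192.168.40.20:2377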

There are plenty of options to configure and change the behavior of the swarm. Some of them concern resource and placement management, and as a DBA it makes sense to know how such settings affect your database environments. Maybe in a next blog post.
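
Just to give an idea, here is a hedged sketch of what such resource and placement settings look like, applied to the sql2 service that will be created later in this post; the limits and the constraint are purely illustrative values, not recommendations.

# Illustrative only: cap CPU / memory and pin the service to a specific worker node
$ sudo docker service update \
   --limit-cpu 2 \
   --limit-memory 4GB \
   --constraint-add 'node.hostname == sqllinux2.dbi-services.test' \
   sql2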

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:13:4b brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.20/24 brd 192.168.40.255 scope global eth0

 

I also opened the required ports on each node:

  • TCP port 2376 for secure docker client communication (Docker machine)
  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic (container ingress networking)

$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh nfs mountd rpc-bind
  ports: 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp 1433/tcp 8080/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
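
For completeness, here is a hedged sketch of how those swarm ports could have been opened with firewalld on each node (the other ports in the list above, such as 1433 or 8080, relate to the services published in my lab).

# Illustrative: open the swarm-related ports permanently, then reload the firewall
$ sudo firewall-cmd --permanent --add-port=2376/tcp --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
$ sudo firewall-cmd --reload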

 

After initializing the swarm, the next step consists in deploying containers on it. The point here is that swarm mode changes the game a little, because you have to deal with service or stack (collections of services) deployments rather than plain container deployments. It does not mean you cannot deploy containers directly, but you won't benefit from the swarm features in that case.

Let's continue with the stack / service model on Docker. Because a picture is often worth a thousand words, here is an overview of the relationships between stacks, services and tasks.

[Image: blog 127 - 1 - swarm stack / service / task relationship]

You may find the definition of each component in the Docker documentation, but let's make a brief summary of the important things: services are the primary point of interaction with the swarm and include the definition of the tasks to execute on the manager or worker nodes. Tasks, in turn, carry the Docker containers and the commands to run inside them. Maybe the most important thing to keep in mind here: /!\ once a task is assigned to a node, it cannot move to another node; it can only run on the assigned node or fail /!\. This is a different behavior from virtualized architectures, where you may use different features to manually move a virtual machine from one host to another (vMotion, DRS for VMware, etc.). Finally, a stack is just a collection of services (1-N) that make up an application in a specific environment.

From a user perspective, you may deploy a service directly or go through a stack definition if you have to deal with an application composed of several services and the relationships between them. For the latter, you can probably guess that this model is well suited to microservices architectures. These concepts may seem obscure, but with practice they become clearer.

But just before introducing service deployment models, one aspect we have not covered so far is the storage layout. Docker has long been considered as designed for stateless applications, and storage persistence is seen as a weakness in the database world. Furthermore, from a container perspective, it is always recommended to isolate the data from the container to retain the benefits of containerization: data management should be separate from the container lifecycle. Docker has managed to overcome this issue by providing different ways to persist data outside containers since version 1.9, including the capability to share volumes between containers on the same host (aka data volumes). But thinking about production environments, customers will certainly deploy Docker clusters, rendering these options useless and the containers non-portable as well. In my context, I want to be able to share data between containers on different hosts, and the good news is that Docker supports distributed filesystems. I picked NFS for convenience, but other solutions exist, like Ceph or GlusterFS for instance. A direct mapping between my host directory and the directory inside my SQL Server container, over distributed storage based on an NFS share, works well in my case. From a SQL Server perspective this is not an issue as long as you deploy the service with a maximum of one replica at a time to avoid data corruption. My updated architecture is as follows:

[Image: blog 127 - 2 - swarm architecture lab with nfs]

Here is the configuration from one node for the mount point based on the NFS share. Database files will be stored in /u01/sql2 in my case.

$ cat /etc/fstab
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=eccbc689-88c6-4e5a-ad91-6b47b60557f6 /boot                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/sdb1       /u99    xfs     defaults        0 0
192.168.40.14:/u01      /u01    nfs     nfsvers=4.2,timeo=14,intr       0 0

$ sudo showmount -e 192.168.40.14
Export list for 192.168.40.14:
/u01 192.168.40.22,192.168.40.21,192.168.40.20
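
I did not show the server-side NFS configuration, but for illustration a matching /etc/exports entry on the NFS server (192.168.40.14) could look like the sketch below; the export options are assumptions, not necessarily the exact ones I used.

# Illustrative /etc/exports entry on the NFS server, exporting /u01 to the three swarm nodes
/u01    192.168.40.20(rw,sync,no_root_squash) 192.168.40.21(rw,sync,no_root_squash) 192.168.40.22(rw,sync,no_root_squash)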

 

My storage is in place, so let's continue with the network considerations. As with virtualization products, you have different options to configure the network:

  • Bridge: Allows internal communication between containers on the same host
  • docker_gwbridge: Network created when the swarm is initialized; it is dedicated to the communication between nodes
  • Ingress: By default, all nodes participate in the ingress routing mesh. I will probably introduce this feature in my next blog post, but let's say that the routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there's no task running on that node
  • Overlay: The manager node automatically extends the overlay network to nodes that run service tasks, to allow communication between containers across hosts. Not available if you deploy containers directly

Let's add that Docker includes an embedded DNS server which provides name resolution among containers connected to the same user-defined network. A pretty useful feature when you deploy applications with dependent services!
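
To see the embedded DNS in action, here is a hedged sketch using the sql2 service deployed later in this post: from inside a container attached to the same overlay network, the service name resolves to the service's virtual IP. The container id below is a placeholder obtained from docker ps on the node running the task, and getent is assumed to be available in the image.

# Find the container backing the sql2 task on the node where it runs
$ sudo docker ps --filter name=sql2 --format '{{.ID}} {{.Names}}'

# From inside that container, the service name resolves through Docker's embedded DNS
$ sudo docker exec -it <sql2_container_id> getent hosts sql2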

So, I created 2 isolated networks: one dedicated to the back-end communication (backend-server) and the other one to the front-end communication (frontend-server).

$ sudo docker network create \
  --driver overlay \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  backend-server

$ sudo docker network create \
  --driver overlay \
  --subnet 172.19.0.0/16 \
  --gateway 172.19.0.1 \
  frontend-server

$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
oab2ck3lsj2o        backend-server      overlay             swarm
1372e2d1c92f        bridge              bridge              local
aeb179876301        docker_gwbridge     bridge              local
qmlsfg6vjdsb        frontend-server     overlay             swarm
8f834d49873e        host                host                local
2dz9wi4npgjw        ingress             overlay             swarm

We are finally able to deploy our first service, based on the SQL Server on Linux image:

$ sudo docker service create \
   --name "sql2" \
   --mount 'type=bind,src=/u01/sql2,dst=/var/opt/mssql' \
   --replicas 1 \
   --network backend-server \
   --env "ACCEPT_EULA=Y" --env "MSSQL_SA_PASSWORD=P@$$w0rd1" \
   --publish published=1500,target=1433 \
   microsoft/mssql-server-linux:2017-latest

Important settings are:

  • --name "sql2" = the name of the service to deploy
  • --replicas 1 = we tell the manager to deploy only one replica of the SQL Server container at a time on the Docker workers
  • --mount 'type=bind,src=…,dst=…' = here we define the data persistence strategy. It maps the /u01/sql2 directory on the host to the /var/opt/mssql directory within the container. If we shut down or remove the container, the data is persisted. If the container moves to another Docker node, the data is still available thanks to the distributed storage over NFS.
  • --network backend-server = we attach the sql2 service to the backend-server user-defined network
  • microsoft/mssql-server-linux:2017-latest = the container base image used in this case (the latest image available for SQL Server 2017 on Linux)
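
One setting not listed above is --publish published=1500,target=1433, which exposes the container's SQL Server port 1433 on port 1500 through the swarm's routing mesh. As a hedged illustration, assuming the mssql-tools package (sqlcmd) is installed on a client machine, a connection through the published port of any swarm node could look like this; the password placeholder stands for the MSSQL_SA_PASSWORD defined at service creation.

# Illustrative: connect through the published port using any node's IP (here the manager)
$ sqlcmd -S 192.168.40.20,1500 -U sa -P '<sa_password>' -Q "SELECT @@SERVERNAME"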

After deploying the sql2 service, let's have a look at the services installed, from the manager. We get interesting output including the service name, the replication mode and the listening port. You may notice the replication mode is set to replicated: in this service model, the swarm distributes a specific number of replicas among the nodes. In my context I capped the maximum number of tasks at 1, as discussed previously.

$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                      PORTS
ih4e2acqm2dm        registry            replicated          1/1                 registry:2                                 *:5000->5000/tcp
bqx1u9lc8dni        sql2                replicated          1/1                 microsoft/mssql-server-linux:2017-latest   *:1433->1433/tcp

Maybe you have noticed one additional service: registry. When you deploy services, the corresponding base images must be available from all the nodes where they may be deployed. You may use images stored in a public Docker registry or use a private one if you deploy internal images.

Let’s dig further by looking at the tasks associated to the sql2 service. We get other useful information as the desired state, current state and the node where the task is running.

$ sudo docker service ps sql2
ID                  NAME                IMAGE                                      NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
zybtgztgavsd        sql2.1              microsoft/mssql-server-linux:2017-latest   sqllinux3.dbi-services.test   Running             Running 11 minutes ago
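
As discussed earlier, this task will never move by itself: if the sqllinux3 node fails or is drained, the swarm schedules a brand-new task on another eligible node. A hedged way to observe this behavior, for example before a planned maintenance, could be:

# Illustrative: drain the node currently hosting the task, then check where the replacement task runs
$ sudo docker node update --availability drain sqllinux3.dbi-services.test
$ sudo docker service ps sql2

# Once the maintenance is done, make the node schedulable again
$ sudo docker node update --availability active sqllinux3.dbi-services.test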

In the previous example I deployed a service that concerned only my SQL Server instance. For some scenarios this is fine, but generally speaking a back-end service doesn't come alone in the container world; it is often part of a more global application architecture. This is where stack deployment comes into play.

As stated in the Docker documentation, stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately. Stack files include environment variables, deployment tags, the number of services and their dependencies, the number of tasks to deploy, related environment-specific configuration, etc. If you have already dealt with docker-compose files to deploy containers and their dependencies, you will be comfortable with stack files: a stack file is nothing more than a docker-compose file adjusted for stack deployments. I used one to deploy the voting-app application here. This application is composed of 5 services: Python, NodeJS, a Java worker, a Redis cache and, of course, SQL Server.
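
To make this more concrete, here is a hedged, heavily simplified sketch of what the SQL Server section of such a stack file might look like and how it would be deployed. The file name, the db service definition and the password placeholder are illustrative; this is not the exact content of the voting-app stack file.

# Illustrative, simplified stack file containing only the SQL Server service
$ cat > myapp-stack.yml <<'EOF'
version: '3.3'
services:
  db:
    image: microsoft/mssql-server-linux:2017-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "<YourStrongPassword>"   # placeholder, set your own SA password
    volumes:
      - /u01/sql2:/var/opt/mssql
    networks:
      - backend-server
    deploy:
      replicas: 1
networks:
  backend-server:
    external: true
EOF

# Deploy (or update) the stack from a manager node
$ sudo docker stack deploy -c myapp-stack.yml myapp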

Here is the result on my lab environment. My SQL Server instance is just one of the services composing the stack related to my application. Once again, you may use docker commands to get a picture of the stack hierarchy.

$ sudo docker stack ls
NAME                SERVICES
myapp               5

$ sudo docker stack services myapp
ID                  NAME                MODE                REPLICAS            IMAGE                                               PORTS
fo2g822czblu        myapp_worker        replicated          1/1                 127.0.0.1:5000/examplevotingapp_worker:latest
o4wj3gn5sqd2        myapp_result-app    replicated          1/1                 127.0.0.1:5000/examplevotingapp_result-app:latest   *:8081->80/tcp
q13e25byovdr        myapp_db            replicated          1/1                 microsoft/mssql-server-linux:2017-latest            *:1433->1433/tcp
rugcve5o6i7g        myapp_redis         replicated          1/1                 redis:alpine                                        *:30000->6379/tcp
tybmrowq258s        myapp_voting-app    replicated          1/1                 127.0.0.1:5000/examplevotingapp_voting-app:latest   *:8080->80/tcp

 

So, let's finish with the following question: what is the role of DBAs in such infrastructure as code? I don't pretend to hold the truth, but here is my opinion:

From an installation and configuration perspective, database images (from official vendors) are often released without any standards or best practices. I believe very strongly (and it seems I'm aligned with the dbi services philosophy on this point) that the responsibility of the DBA team here is to prepare, build and provide well-configured images, as well as the related deployment files (at least the database service section(s)) for their context: simple containers as well as more complex environments with built-in high availability, for instance.

In addition, from a management perspective, containers will not really change the game for the DBA's daily job. DBAs are still responsible for the core data business regardless of the underlying infrastructure the database systems are running on.

In this blog post we just scratched the surface of the Docker Swarm principles, and over time I will try to cover other important aspects DBAs may have to be aware of with such infrastructures.

See you!
The article Introducing SQL Server on Docker Swarm orchestration appeared first on Blog dbi services.

Migrate Windows VM with more than 4 Disks from VMware to OVM

Yann Neuhaus - Fri, 2018-02-02 06:24

Suppose you have an OVA image created on VMware, the VM contains more than 4 disks, and you have to migrate this machine from VMware to OVM.

As a first step, you import the OVA into OVM in the usual way:

[Screenshot 2018-02-02 09:43:12]

You can see that the appliance was imported successfully; we have 5 disks:

[Screenshot 2018-02-02 09:45:25]

Now you create your VM from the imported appliance:

[Screenshot 2018-02-02 09:47:34]

So far so good, let's have a look at our newly created VM:

[Screenshot 2018-02-02 09:51:22]

All seems good, but if you edit the machine (we want to add a network and set a boot order for the system) you will be surprised:

[Screenshot 2018-02-02 10:11:35]

Oops, we lost a disk. What happened? You didn't make a mistake, it's a restriction in OVM:
http://www.oracle.com/us/technologies/virtualization/ovm3-supported-config-max-459314.pdf states that a VM can have a maximum of 4 IDE disks, and an imported Windows VM is considered a Xen HVM domain type, so you can only attach 4 disks to the VM.

And now, how can we solve the problem? Let's first check whether the system boots:

[Screenshot 2018-02-02 10:25:32]

OK, the system is up; what's next? We uninstall all the VMware utilities:

[Screenshot 2018-02-02 10:35:40]

For the next step we download the Oracle VM Server for x86 Windows PV Drivers 3.4.2.0.0 for Microsoft Windows x64 (64-bit) from https://edelivery.oracle.com and install them on our Windows box:

[Screenshot 2018-02-02 11:04:28]

After a system restart, all disks except the C: drive are gone:

[Screenshot 2018-02-02 11:08:51]

We shut down the Windows box and put the VM into the Xen HVM PV Driver domain type:

[Screenshot 2018-02-02 12:07:39]

After that we can add our lost disk without any problems:

[Screenshot 2018-02-02 12:09:00]

OK, let's restart the system and see what happens:

[Screenshot 2018-02-02 12:16:48]

OK, all disks are there; we can now bring them online:

[Screenshot 2018-02-02 12:59:29]

After a reboot we can see that our drives are used correctly:

ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 4743
Current log# 3 seq# 4743 mem# 0: I:\ORADATA\TESTIDM1\REDO03.LOG
Successful open of redo thread 1
The article Migrate Windows VM with more than 4 Disks from VMware to OVM appeared first on Blog dbi services.

Oracle Utilities Customer Care and Billing 2.6.0.1.0 Available

Anthony Shorten - Thu, 2018-02-01 22:09

Oracle Utilities Customer Care And Billing V2.6.0.1.0 is now available from My Oracle Support as a patch, or as a complete download from Oracle Software Delivery Cloud. This release uses Oracle Utilities Application Framework V4.3.0.5.0. The release notes available from those download sites contain a full list of the new, updated and deprecated functionality available for Oracle Utilities Customer Care And Billing V2.6.0.1.0 and Oracle Utilities Application Framework V4.3.0.5.0. Please refer to these documents for details.

The documentation also covers upgrading from previous versions of Oracle Utilities Customer Care And Billing.

Oracle SQL developer's native package???

Tom Kyte - Thu, 2018-02-01 19:06
hi Tom: when I check my ODBC connection under Administration tool, I don't not find any Oracle driver. the strange thing is I can still use Oracle SQL developer to connect to Oracle. I assume we always need a Oracle driver to connect to oracle dat...
Categories: DBA Blogs

date format

Tom Kyte - Thu, 2018-02-01 19:06
Hi Tom, In my pl/sql program I get date in format of string .. say like '2010/06/30 00:00:00' Now I need to convert this into a date and pass it as a parameter to another program.. what is the best way to that.. do i need to read nls_dat...
Categories: DBA Blogs

impdp with multiple REMAP_SCHEMA statements tries to load data twice in the same schema

Tom Kyte - Thu, 2018-02-01 19:06
Dear TOM, We are using impdp Release 11.2.0.3.0 - Production to exchange data between two databases, both in version 11.2.0.3.0 - 64bit Production. We use the following import.par parfile with several REMAP_SCHEMA statements: <code>DUMPFILE=...
Categories: DBA Blogs

No estimate time remainings drop column unused, with checkpoint.

Tom Kyte - Thu, 2018-02-01 19:06
Hi Tom! I had a problem removing a column set unused(version RDBMS 11.2.0.4) Because he was using CHECKPOINT, could not keep track of a running development, as well as estimate in v$SESSION_LONGOPS. For time remaings estimate, I was using an ave...
Categories: DBA Blogs

Few SQL & PL/SQL question(s)

Tom Kyte - Thu, 2018-02-01 19:06
Hi, There are three small questions in this, I wanted to ask in december but you guys were on vacation that's why I could not ask. Q1. What is the exact size restrictions of trigger I have read 32k here k is what is it KB or (32000 bytes or 3276...
Categories: DBA Blogs

SQL request with analytics to fill previous values

Tom Kyte - Thu, 2018-02-01 19:06
Hello, I have a table with this structure table values: <code>(code varchar2(10), date_value date, value number);</code> Example data : <code>'Code1','15/03/2017',25000 'Code1','06/06/2017',26000 'Code1','18/07/2017',29000 'Code1...
Categories: DBA Blogs

APEX World 2018: Het gaat los! (Here we go!)

Joel Kallman - Thu, 2018-02-01 05:53
While we're delighted that there are more APEX-focused conferences springing up around the globe, there is one conference that started this trend, and that is...APEX World.  Thanks to the foresight of the Dutch Oracle User Group, Robin Buitenhuis and Oracle ACE Learco Brizzi, APEX World was started 9 years ago, with the primary purpose of bringing together the growing APEX community in the Netherlands.  Since that time, it has grown into a premier conference for the APEX community in Europe and beyond.

When I first had the opportunity to present at APEX World in 2014, it was a one-day event in the quaint city of Zeist.  Since that time, APEX World has blossomed into a large, international, multi-track 2-day conference with more than 400 attendees expected in 2018.  Even better, the conference itself is on board the cruise ship SS Rotterdam, which is very historic and beautiful.  The theme of this year's conference is "A Deep Dive into Low Code", and it will be held on March 22nd and 23rd in Rotterdam, Netherlands.

Four members of the Oracle APEX product development team will be at APEX World 2018 - John Snyders, Anthony Rayner, Shakeeb Rahman and David Peake.  We are all very open, we would love to understand what you're doing, and we also want to learn how we can help you be successful.  I encourage you to come to this great event and join the large and growing international community at APEX World 2018.


Infographic from APEX World 2017.  2018 will be even bigger and better!

where to use metadata table as a plsql developer

Tom Kyte - Thu, 2018-02-01 00:26
Hii friends, how, when, where to use metadata views like ALL_SOURCE, USER_SOURCE, ALL_DEPENDENCIES, USER_DEPENDENCIES, ALL_OBJECTS, USER_OBJECTS. THANKS.
Categories: DBA Blogs

Identify session using Oracle XA

Tom Kyte - Thu, 2018-02-01 00:26
Is there a way to identify if the session executing the PL/SQL application is part of the XA Transaction. We have an application running on WebLogic 12.2.1 using a JDBC Pool. It connects to the database through the Connection Pool. Some of the re...
Categories: DBA Blogs

How to detect where error occurred in PL/SQL

Tom Kyte - Thu, 2018-02-01 00:26
If i'm having a procedure with three select statements while debugging how will u find exactly in which statement u r getting error?
Categories: DBA Blogs

NO_DATA_FOUND exception when passing a empty row in associative array to procedure

Tom Kyte - Thu, 2018-02-01 00:26
I am trying to pass a procedure a record that is, or will be rather a new row in an associative array but am getting a a NO_DATA_FOUND exception. When I set to position in the collection to NULL, it works fine. Example simplified down: -- spec...
Categories: DBA Blogs

Keep archived logs for 5 days

Tom Kyte - Thu, 2018-02-01 00:26
Hi, I want to keep archived logs for 5 days in physical location on windows. can you please suggest on this ?
Categories: DBA Blogs

Invalid Views in PeopleSoft and the Oracle Database

David Kurtz - Wed, 2018-01-31 14:33
I was listening to the section on Invalid Views in PSADMIN Podcast #117 (@19:00). Essentially, when you drop and recreate a view that is referenced by a second view, the status of the second view in the database goes invalid. This is not a huge problem because as soon as you query the second view it is compiled. However, you would like to know whether any change to a view prevents any dependent views from compiling, although you would expect to have teased these errors out before migration to production.
The PeopleSoft solution to this is to include all the dependent views in the Application Designer package. However, as pointed out in the podcast, you are now releasing code, possibly unintentionally releasing code changes, and certainly updating last-change dates on record definitions, when really you just need to recompile the database objects.  PeopleSoft does this because it is a platform-generic solution, but really it is using the PeopleSoft Application Designer to solve a database management issue.
A similar problem also occurs in the Oracle database with dependent PL/SQL procedures and packages, where you sometimes get referential loops. Oracle provides a procedure, DBMS_UTILITY.COMPILE_SCHEMA, that recompiles all invalid objects in a schema and reports any errors in the ALL_ERRORS view.  I think this is a much safer option.

Here is a very simple (non-PeopleSoft) example:
drop table t purge;
drop view a;
drop view b;

create table t (a number);
insert into t values (1);
create view a as select a from t;
create view b as select a from a;

column object_name format a12
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
Dropping and recreating view A renders view B invalid.
drop view a;
create view a as select a from t;

select object_type, object_name, status
from user_objects
where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID

select * from b;
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/
Just querying B has made it valid again.
OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE T VALID
VIEW B VALID
VIEW A VALID
Let's make B invalid again by rebuilding A, but this time I will change the name of the column in view A from A to T so that view B cannot compile without an error.
drop view a;
create view a (t) as select a from t;

select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID
And I can query the errors from USER_ERRORS:
NAME
---------------------------------------------------
TYPE           SEQUENCE       LINE   POSITION
------------ ---------- ---------- ----------
TEXT
---------------------------------------------------
ATTRIBUTE MESSAGE_NUMBER
--------- --------------
B
VIEW                  1          0          0
ORA-00904: "A": invalid identifier
ERROR                  0
N.B.: if you use CREATE OR REPLACE VIEW, the dependent view is not left invalid unless there is an error. Unfortunately, Application Designer always drops and recreates views.

Assemble Systems and Oracle Construction and Engineering Offer Model-based Scheduling for Construction

Oracle Press Releases - Wed, 2018-01-31 11:00
Press Release
Assemble Systems and Oracle Construction and Engineering Offer Model-based Scheduling for Construction Construction industry to benefit from timeline simulation and BIM data integration with Oracle’s Primavera P6 schedules

Redwood Shores, Calif.—Jan 31, 2018

Assemble Systems, a Gold level member of Oracle Partner Network (OPN), today announced a model-based scheduling integration with Oracle’s Primavera P6 Enterprise Project Portfolio Management scheduling software. Oracle’s Primavera P6 is the de facto standard and a leader in planning and scheduling for the Commercial Building and the Engineering, Procurement and Construction industry. Oracle customers will be able to enhance the value of their Primavera P6 implementation and improve their construction projects by leveraging Building Information Model (BIM) data through this integration.

Assemble’s integration with Primavera P6 enables construction managers to combine the baseline schedule and the BIM data to communicate construction sequencing to owners and other partners. Market research indicates that 4D schedule simulation is helpful with its ability to marry highly detailed scheduling to design and construction models. However, it is often viewed as too complex and difficult to share across project teams, due to the constantly changing nature of the models, activities, and other schedule data used. The new integration will allow users to easily create simulations, cost-loaded schedules, micro-schedules, and manage constant project changes.

“BIM has been playing an increasingly vital role in the evolution of the construction industry, however there has been a need to bring the same efficiencies from BIM to scheduling processes. We are pleased to work with Assemble to bring this combination of scope and schedule to our Primavera P6 customers,” said Andy Verone, Vice President Product Strategy, Oracle Construction and Engineering.

“I’m proud of our innovative development team who built this integration to provide our customers with another powerful tool from the Assemble Cloud,” said Donald Henrich, Assemble CEO. “Assemble focuses on combining data from models, drawings, and point clouds and making connections to existing customer systems including Oracle’s Primavera P6.  We are very happy to team with Oracle Construction and Engineering to drive the construction industry forward.”

The integration was unveiled at the Construction CPM Conference in New Orleans this week.  If you would like to learn more about the Assemble and Oracle integration, contact the Assemble team today here.

Contact Info
Judi Palmer
Oracle
650.506.0266
judi.palmer@oracle.com
About Oracle Construction and Engineering

Oracle Construction and Engineering delivers best-in-class project management solutions that empower organizations to proactively manage projects, gain complete visibility, improve collaboration, and manage change. Its cloud-based solutions for global project planning and execution help improve strategy execution, operations, and financial performance. For more information, please visit www.oracle.com/construction-and-engineering.

About Assemble

Assemble Systems provides a SaaS solution that acts as a HUB consuming Building Information Models, drawings, and point clouds. This solution enables construction professionals to condition, query, and connect the data to key workflows. These workflows include bid management, estimating, project management, scheduling, site management, and finance. Today, Assemble is used by over 195 companies including one fourth of the ENR 400 and at over 1,000 construction offices and sites around the world. To learn more about Assemble Systems, visit www.assemblesystems.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • 650.506.0266

Announcing the Oracle WebLogic Server Kubernetes Operator

OTN TechBlog - Wed, 2018-01-31 08:00

We are very excited to announce the Oracle WebLogic Server Kubernetes Operator, which is available today as a Technology Preview and which is delivered in open source at https://oracle.github.io/weblogic-kubernetes-operator.  The operator can manage any number of WebLogic domains running in a Kubernetes environment.  It provides a mechanism to create domains, automates domain startup, allows scaling WebLogic clusters up and down either manually (on-demand) or through integration with the WebLogic Diagnostics Framework or Prometheus, manages load balancing for web applications deployed in WebLogic clusters, and provides integration with ElasticSearch, logstash and Kibana.

The operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image, which can be found in the Docker Store or in the Oracle Container Registry.  It treats this image as immutable, and all of the state is persisted in a Kubernetes persistent volume.  This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers at runtime (because there is none).

The diagram below gives a high level overview of the layout of a domain in Kubernetes when using the operator:

The operator can expose the WebLogic Server Administration Console to external users (if desired), and can also allow external T3 access; for example for WLST.  Domains can talk to each other, allowing distributed transactions, and so on. All of the pods are configured with Kubernetes liveness and readiness probes, so that Kubernetes can automatically restart failing pods, and the load balancer configuration can include only those Managed Servers in the cluster that are actually ready to service user requests.

We have a lot of documentation available on the project pages on GitHub including details about our design philosophy and architecture, as well as instructions on how to use the operator, video demonstrations of the operator in action, and a developer page for people who are interested in contributing to the operator.

We hope you take the opportunity to play with the Technology Preview and we look forward to getting your feedback.

Getting Started

The Oracle WebLogic Server Kubernetes Operator has the following requirements:

  • Kubernetes 1.7.5+, 1.8.0+ (check with kubectl version)
  • Flannel networking v0.9.1-amd64 (check with docker images | grep flannel)
  • Docker 17.03.1.ce (check with docker version)
  • Oracle WebLogic Server 12.2.1.3.0

For more details on the certification and support statement of WebLogic Server on Kubernetes, refer to My Oracle Support Doc Id 2349228.1.

A series of video demonstrations of the operator is available on the project's GitHub pages.

The overall process of installing and configuring the operator and using it to manage WebLogic domains consists of the following steps. The provided scripts will perform most of these steps, but some must be performed manually:

  • Registering for access to the Oracle Container Registry
  • Setting up secrets to access the Oracle Container Registry
  • Customizing the operator parameters file
  • Deploying the operator to a Kubernetes cluster
  • Setting up secrets for the Administration Server credentials
  • Creating a persistent volume for a WebLogic domain
  • Customizing the domain parameters file
  • Creating a WebLogic domain

Complete up-to-date instructions are available at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md or read on for an abbreviated version:

Build the Docker image for the operator

To run the operator in a Kubernetes cluster, you need to build the Docker image and then deploy it to your cluster.

First run the build using this command:

mvn clean install

Then create the Docker image as follows:

docker build -t weblogic-kubernetes-operator:developer --no-cache=true .

We recommend that you use a tag other than latest to make it easy to distinguish your image. In the example above, the tag could be the GitHub ID of the developer.

Next, upload your image to your Kubernetes server as follows:

# on your build machine
docker save weblogic-kubernetes-operator:developer > operator.tar
scp operator.tar YOUR_USER@YOUR_SERVER:/some/path/operator.tar

# on the Kubernetes server
docker load < /some/path/operator.tar

Verify that you have the right image by running docker images | grep weblogic-kubernetes-operator on both machines and comparing the image IDs.

We will be publishing the image in Oracle Container Registry and the instructions will be updated when it is available there.  After it is published, you will not need to build the image yourself, you will have the option to pull it from the registry instead.

Customizing the operator parameters file

The operator is deployed with the provided installation script, create-weblogic-operator.sh. The input to this script is the file create-operator-inputs.yaml, which needs to be updated to reflect the target environment.

The following parameters must be provided in the input file:

Configuration parameters for the operator:

  • externalOperatorCert: A base64 encoded string containing the X.509 certificate that the operator will present to clients accessing its REST endpoints. Only used when externalRestOption is set to custom-cert. No default.
  • externalOperatorKey: A base64 encoded string containing the corresponding private key. Only used when externalRestOption is set to custom-cert. No default.
  • externalRestOption: Controls how the operator's REST endpoints are exposed. Allowed values: none (the REST server is disabled), self-signed-cert (the operator will use a self-signed certificate for its REST server; if this value is specified, the externalSans parameter must also be set), custom-cert (certificates created and signed by some other means are supplied; if this value is specified, externalOperatorCert and externalOperatorKey must also be provided). Default: none.
  • externalSans: A comma-separated list of Subject Alternative Names that should be included in the X.509 certificate, for example DNS:myhost,DNS:localhost,IP:127.0.0.1. No default.
  • namespace: The Kubernetes namespace that the operator will be deployed in. It is recommended that a namespace be created for the operator rather than using the default namespace. Default: weblogic-operator.
  • targetNamespaces: A list of the Kubernetes namespaces that may contain WebLogic domains that the operator will manage. The operator will not take any action against a domain that is in a namespace not listed here. Default: default.
  • remoteDebugNodePort: If debugging is enabled, the operator will start a Java remote debug server on the provided port and will suspend execution until a remote debugger has attached. Default: 30999.
  • restHttpsNodePort: The NodePort number that should be allocated for the operator REST server to listen for HTTPS requests on. Default: 31001.
  • serviceAccount: The name of the service account that the operator will use to make requests to the Kubernetes API server. Default: weblogic-operator.
  • loadBalancer: The load balancer that is installed to provide load balancing for WebLogic clusters. Allowed values: none (do not configure a load balancer), traefik (configure the Traefik Ingress provider), nginx (reserved for future use), ohs (reserved for future use). Default: traefik.
  • loadBalancerWebPort: The NodePort for the load balancer to accept user traffic. Default: 30305.
  • enableELKintegration: Determines whether the ELK integration will be enabled. If set to true, then ElasticSearch, Logstash and Kibana will be installed, and Logstash will be configured to export the operator's logs to ElasticSearch. Default: false.

Decide which REST configuration to use

The operator provides three REST certificate options:

  • none will disable the REST server.
  • self-signed-cert will generate self-signed certificates.
  • custom-cert provides a mechanism to supply certificates that were created and signed by some other means.
Decide which optional features to enable

The operator provides some optional features that can be enabled in the configuration file.

Load Balancing

The operator can install the Traefik Ingress provider to provide load balancing for web applications running in WebLogic clusters. If enabled, an instance of Traefik and an Ingress will be created for each WebLogic cluster. Additional configuration is performed when creating the domain.

Note that the Technology Preview release provides only basic load balancing:

  • Only HTTP(S) is supported. Other protocols are not supported.
  • A root path rule is created for each cluster. Rules based on the DNS name, or on URL paths other than ‘/’, are not supported.
  • No non-default configuration of the load balancer is performed in this release. The default configuration gives round robin routing and WebLogic Server will provide cookie-based session affinity.

Note that Ingresses are not created for servers that are not part of a WebLogic cluster, including the Administration Server. Such servers are exposed externally using NodePort services.

Log integration with ELK

The operator can install the ELK stack and publish its logs into ELK. If enabled, ElasticSearch and Kibana will be installed in the default namespace, and a logstash pod will be created in the operator’s namespace. Logstash will be configured to publish the operator’s logs into Elasticsearch, and the log data will be available for visualization and analysis in Kibana.

To enable the ELK integration, set the enableELKintegration option to true.

Deploying the operator to a Kubernetes cluster

To deploy the operator, run the deployment script and give it the location of your inputs file:

./create-weblogic-operator.sh -i /path/to/create-operator-inputs.yaml

What the script does

The script will carry out the following actions:

  • A set of Kubernetes YAML files will be created from the inputs provided.
  • A namespace will be created for the operator.
  • A service account will be created in that namespace.
  • If ELK integration was enabled, a persistent volume for ELK will be created.
  • A set of RBAC roles and bindings will be created.
  • The operator will be deployed.
  • If requested, the load balancer will be deployed.
  • If requested, ELK will be deployed and logstash will be configured for the operator’s logs.

The script will validate each action before it proceeds.

This will deploy the operator in your Kubernetes cluster.  Please refer to the documentation for next steps, including using the REST services, creating a WebLogic domain, starting a domain, and so on.
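
As a quick, hedged sanity check once the script completes, you could verify that the operator is actually running in its namespace. The namespace below is the default weblogic-operator value from the inputs file, and the pod name is a placeholder reported by the first command.

# List the operator pod(s) in the operator namespace
kubectl get pods -n weblogic-operator

# Inspect the operator's log output (replace the placeholder with the actual pod name)
kubectl logs -n weblogic-operator <operator-pod-name>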

Philippine Nonprofits Amplify Social Impact with NetSuite

Oracle Press Releases - Wed, 2018-01-31 08:00
Press Release
Philippine Nonprofits Amplify Social Impact with NetSuite Zuellig Family Foundation, the Philippine Council for NGO Certification and Grupo Kalinangan, Inc. Improve Efficiency and Transparency with Oracle NetSuite Social Impact

SAN MATEO, Calif. and MAKATI CITY, Philippines—Jan 31, 2018

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERPHRProfessional Services Automation (PSA) and omnichannel commerce software suites, today announced three nonprofit customers in the Philippines that have streamlined mission-critical processes while saving significant time and costs by moving to NetSuite. The Zuellig Family Foundation (ZFF), the Philippine Council for NGO Certification (PCNC) and Grupo Kalinangan, Inc. are among a growing number of Philippine nonprofits and social enterprises that have improved operational and financial efficiency, transparency for donors, regulatory reporting and capacity for informed, data-driven decision making since graduating from entry-level business applications to NetSuite. By eliminating labor-intensive manual work, nonprofits are freeing up resources to better focus on their social missions across the Philippines in such areas as healthcare, education, business development, disaster relief and alleviation of poverty.

Zuellig Family Foundation, PCNC and Grupo Kalinangan are supported by the Oracle NetSuite Social Impact group, which makes available free and discounted software licensing to qualified nonprofits and social enterprises. The program also includes Suite Pro Bono, wherein NetSuite employees volunteer their time and expertise to help nonprofits with training and customizations to make the most of NetSuite. More than 1,000 nonprofits and social enterprises globally use Oracle NetSuite Social Impact offerings, including Philippine organizations like Coca-Cola Foundation Philippines, American Chamber Foundation Philippines, Cartwheel Foundation, Virlanie Foundation, Microventures Foundation, BagoSphere and Theo & Philo. To learn more about Oracle NetSuite Social Impact, please visit www.netsuite.com/socialimpact.

ZFF Improves Healthcare Delivery with Leadership Training

Zuellig Family Foundation (www.zuelligfoundation.org) has provided local training to improve healthcare in nearly 650 rural municipalities across 32 provinces since its founding in 2008. Established and funded by the Zuellig family, whose patriarch, Frederick Zuellig, emigrated from Switzerland in 1901 to establish trading companies in the Philippines, ZFF focuses on training local leaders to address healthcare issues. More than 2,000 leaders have participated in ZFF’s health leadership and governance training programs, applying their insights to improve healthcare at the local level. Based in Parañaque City with 80 full-time employees, ZFF partners with other organizations such as USAID, UNICEF and MSD Merck for Mothers, with funding rising 35 percent since 2013 to $196.8 million USD in 2016. NetSuite helps ZFF better manage its financials, gaining speed and visibility and eliminating time-consuming manual work the nonprofit experienced with its previous application. ZFF estimates PHP1.65 million in annual savings and cost avoidance compared to an SAP solution it evaluated before selecting NetSuite, going live in 2015. With those savings in licensing, IT infrastructure, IT personnel and bookkeeper, ZFF is better able to focus resources on improving rural healthcare.

“NetSuite has simplified and facilitated our compliance with reporting requirements by supporting internal controls and providing accurate, real-time reporting with a complete set of audit trails,” said Wesley Villanueva, Manager. “We have greater accountability and transparency and can generate real-time reports that aid in decision-making.”

PCNC Drives Effectiveness, Accountability with NGO Certifications

The Philippine Council for NGO Certification (www.pcnc.com.ph), a nonprofit that certifies NGOs as meeting organizational standards for financial management and transparency, helps improve accountability and effectiveness of nonprofits involved in education, healthcare, poverty alleviation and other missions. Created in 1997 by a consortium of NGO networks, PCNC currently certifies approximately 450 NGOs with a “Seal of Good Housekeeping,” based on evaluations conducted by more than a hundred volunteer evaluators on six criteria including mission, goals and financial management. That gives donors confidence that funding will be used for its intended purpose. Based in Manila with eight employees, PCNC also helps match NGOs with communities in need, improving local outcomes across the nation. Since going live in 2016, NetSuite has been playing a critical role by giving PCNC new capabilities to track and segment NGOs by mission, regions and other characteristics, helping bring together NGOs and beneficiaries. PCNC also uses NetSuite for automated communications, such as notification emails to NGOs six months before certification expires, and intends to use NetSuite LightCMS for a more modern website. NetSuite technical support and Suite Pro Bono volunteers have been instrumental in helping PCNC make the most of its system.

“What used to take a day of gathering and aggregating data now takes minutes,” said Luis Morales, PCNC Executive Director. “We’re saving a lot of time with the ability to generate data very easily and that means less cost and less manual work. We didn’t realize what we were missing with NetSuite. NetSuite really knows how to work with nonprofits.”

Grupo Kalinangan Advocates Preservation of Filipino Heritage

Grupo Kalinangan, Inc. (www.grupokalinangan.org), an organization founded in 2015 in Makati City, develops and deploys IT tools, services and support systems for local government and grassroots communities for improved cultural heritage management in the Philippines. Bringing together advocates and practitioners from allied fields including social science, architecture and IT, GKI is building a data platform and map of Philippine history and culture, while implementing projects and activities aligned with understanding how to better preserve and promote Philippine culture and heritage through big data and analytics. A key focus area for GKI, with five board members and about 50 volunteers, is the use of data exchange, analytics and software such as Arches, an open-source GIS tool to collect and share data, heightening awareness of the importance of Filipino heritage. GKI, being an early adopter of technology, has always embraced modern applications to streamline its own operations. Replacing a free Wave Accounting application with NetSuite, which went live in 2016, NetSuite helps GKI save time and money in managing financials, preparing annual reports, and improving transparency for donors and in government filings. Despite Grupo Kalinangan’s initial concern that NetSuite would be too complicated, the system has proven to be easy to use, especially with the training provided by Suite Pro Bono volunteers over a six-week period. As a more cost-efficient organization, GKI can put full focus on heritage without needless overhead.

“We’ve definitely improved our processes with NetSuite and its capabilities,” said Karl Aguilar, Corporate Treasurer. “We can focus on the most important things for our organization without getting bogged down by problems and complicated financial processes. NetSuite does the work for us.”

Addressing Modern Challenges with Modern Cloud Technology

Nonprofits and social enterprises in the Philippines and elsewhere see transformational improvements by using NetSuite for functions such as accounting, donor and project management, online commerce, fund-raising and marketing. At the same time, NetSuite helps nonprofits address key pain points such as tight budgets, limited IT resources and increasing competition for funding. Benefits include:

  • Efficient financials and operations. Nonprofits eliminate error-prone, paper-based processes, dramatically reducing labor and cost and channeling savings into social impact programs.
  • Real-time visibility. Management can easily track revenue vs. expenses, fund-raising effectiveness and project status, enabling informed decisions on priority focus areas and complete tracking of results.
  • Greater transparency. Grant accounting at a line-item level helps nonprofits track how, when and where a grant is spent on any given project, helping to meet exacting donor requirements.
  • Regulatory reporting. Nonprofits speed up and simplify external audits as well as reporting to regulatory bodies, publicly documenting key financial metrics used by prospective donors to assess a nonprofit’s worthiness of funding.
  • Increased fund-raising effectiveness. Organizations gain a 360-degree view of their constituents and can utilize email marketing tools for personalized outreach and building long-term relationships.
Contact Info
Michael S. Robinson
Oracle NetSuite
781-974-9401
michael.s.robinson@oracle.com
About Oracle NetSuite Social Impact

Founded in 2006, the Oracle NetSuite Social Impact group is empowering nonprofits to use NetSuite to further their mission, regardless of their ability to pay. More than 1,000 nonprofits and social enterprises around the world are supported by NetSuite Social Impact, which makes available free and discounted software licensing to qualified organizations. The program also includes Suite Pro Bono, under which NetSuite employees provide their expertise to help nonprofits with training and customizations to make the most of the platform. To learn more about NetSuite Social Impact, please visit http://www.netsuite.com/socialimpact.

About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries.

For more information, please visit http://www.netsuite.com.

Follow Oracle NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Michael S. Robinson

  • 781-974-9401
