Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

Step by Step: Ansible Role To Setup Oracle ACFS On Multiple Nodes

Thu, 2018-09-06 17:46
This post contains step-by-step instructions for creating an Ansible role, acfssetup, to set up Oracle ASM Cluster File System (ACFS) on multiple nodes of a cluster. It assumes that Grid Infrastructure 12.1.0.2.0 is already installed on the nodes and that ASM is working fine. It also assumes that Ansible is installed on a controller host, with SSH equivalency set up to the nodes for both the root and oracle users.




Step 1: Create the directory structure for the role acfssetup:


$ cd /etc/ansible/roles
$ mkdir acfssetup
$ cd acfssetup
$ mkdir files handlers meta templates tasks vars
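
Alternatively (assuming the ansible-galaxy command that ships with Ansible is available on the controller), the standard role skeleton can be generated in one step:

$ cd /etc/ansible/roles
$ ansible-galaxy init acfssetup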

Step 2: Create the Tasks (/etc/ansible/roles/acfssetup/tasks/main.yml):

---

- name: Install the ACFS/ADVM modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/acfsroot install"
  tags: acfs

- name: Start and enable the ACFS modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/acfsload start && {{ gi_home_path }}/bin/acfsroot enable"
  tags: acfs

# The volume only needs to be created once, so this runs on the first node only
- name: As oracle user, create an ASM volume for ACFS on the first node
  when: inventory_hostname in groups['node1']
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volcreate -G {{ asm_dg_name }} -s {{ acfs_vol_size }} {{ acfs_vol_name }}"
  tags: acfs

- name: As oracle user, capture the device name of the volume just created
  when: inventory_hostname in groups['node1']
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volinfo -G {{ asm_dg_name }} {{ acfs_vol_name }} | grep Device | sed 's/.*://'"
  register: device_name
  tags: acfs

- name: As oracle user, create the filesystem on the volume which was just created
  when: inventory_hostname in groups['node1']
  become_user: "{{ gi_owner }}"
  shell: "/sbin/mkfs -t acfs {{ device_name.stdout | trim }}"
  tags: acfs

- name: As root, create an empty directory on every node which will house the file system
  become_user: "{{ superuser }}"
  shell: "mkdir -p /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chown root:{{ gi_group }} /{{ acfs_mount_name }}; chmod 770 /{{ acfs_mount_name }}; chown -R {{ gi_owner }}:{{ gi_group }} /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chmod 775 /{{ acfs_mount_name }}/{{ acfs_vol_name }}"
  tags: acfs

- name: As root, set up the file system to be auto-mounted by clusterware
  when: inventory_hostname in groups['node1']
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/srvctl add volume -volume {{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -device {{ device_name.stdout | trim }}; {{ gi_home_path }}/bin/srvctl add filesystem -device {{ device_name.stdout | trim }} -path /{{ acfs_mount_name }}/{{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -user {{ gi_owner }} -fstype ACFS -description \"ACFS General Purpose Mount\""
  tags: acfs

Step 3: Create the Variables (/etc/ansible/roles/acfssetup/vars/main.yml):

ver: "12.1.0.2.0"
superuser: root
asm_instance: "+ASM"
asm_dg_name: DATA
acfs_vol_name: ACFSVOL1
acfs_vol_size: 10G
acfs_mount_name: acfsmounts
# device_name is not set here; it is captured at runtime by the register in tasks/main.yml
gi_owner: oracle
gi_group: oinstall
gi_base_path: "/u01/app/oracle"
# With ver 12.1.0.2.0, gi_home_path resolves to /u01/app/oracle/product/12.1.0.2/grid
gi_home_path: "{{ gi_base_path }}/product/{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1.\\2.\\3') }}/grid"
gi_home_name: "OraGI{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1\\2') }}"

Step 4: Configure the Ansible hosts file (/etc/ansible/hosts), grouping the nodes so that groups['node1'] used in the tasks resolves:

[node1]
node1 ansible_host=node1.foo.com

[node2]
node2 ansible_host=node2.foo.com

Step 5: Create the skeleton Playbook (/etc/ansible/acfs.yml):

---
- hosts: all
  become: true
  roles:
    - acfssetup

Step 6: Run the playbook

$ ansible-playbook acfs.yml
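
Optionally, you can list the tagged tasks first or limit the run to them; for example:

$ ansible-playbook acfs.yml --tags acfs --list-tasks
$ ansible-playbook acfs.yml --tags acfs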
Categories: DBA Blogs

OPS$Oracle user after Wallet Creation in Oracle 12c

Fri, 2018-08-10 00:37

----- In Oracle 12.1.0.2, I created the wallet by using the commands below:

TEST$ orapki wallet create -wallet "/u01/app/oracle/admin/TEST/wallet" -pwd ****  -auto_login_local
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.


TEST$ mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys ********
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1

----- But when I logged into the database with the sys credential, show user showed the OPS$ORACLE user instead of sys:

TEST$ sqlplus /@TEST2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 9 13:09:38 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Aug 09 2018 03:18:20 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> sho user
USER is "OPS$ORACLE"
SQL>

----- So I made the following changes and it worked fine:

I put the following entry in the sqlnet.ora file:

SQLNET.WALLET_OVERRIDE = TRUE
The SQLNET.WALLET_OVERRIDE entry allows this method to override any existing OS authentication configuration.
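
For reference, a minimal sqlnet.ora sketch for wallet-based login (using the wallet directory created above; adjust the path for your environment) would also point to the wallet location:

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/TEST/wallet)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE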

Then I used mkstore to recreate the credential in the wallet:

TEST$  mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys

Categories: DBA Blogs

Network Slowness Caused Database Contention That Caused Goldengate Lag

Mon, 2018-07-30 00:55
I got paged for a GoldenGate extract lagging behind. I checked the extract configuration; it was a normal extract and it seemed stuck, without giving any error in ggserr.log or anywhere else. It wasn't abended either and was in the running state.


I tried stopping and restarting it, but it still remained in the running state while doing nothing, and the lag kept increasing. So the issue was clearly outside of GoldenGate. I checked the database, starting from the alert log, and didn't see any errors there either.

I jumped into the database and ran some queries to see which sessions were active and what they were running. After going through the various active sessions, it turned out that a few of them were running long transactions over a dblink; these sessions were several hours old and seemed stuck. They were also putting heavy pressure on the temp tablespace and were blocking other sessions. Due to the undersized temp plus these stuck long-running transactions, database performance was also slower than usual.

I ran a select statement over that dblink and it was very slow. I used tnsping against that remote database and it returned with a delay. Then I used network commands like ping, tracert, etc. to check the network status, and it all pointed to a delay in the network.

I killed the long-running transactions as they were going nowhere, and that eased the pressure on the temp tablespace, which in turn enabled the extract to work through the lag.
Categories: DBA Blogs

Log Buffer #546: A Carnival Of The Vanities For DBAs

Mon, 2018-07-30 00:38
This Log Buffer Edition covers Cloud, Oracle, and PostgreSQL.
Cloud:
Google Maps platform now integrated with the GCP Console
Getting more value from your Stackdriver logs with structured data
Improving application availability with Alias IPs, now with hot standby
Performing a large-scale principal component analysis faster using Amazon SageMaker
Optimized TensorFlow 1.8 now available in the AWS deep learning AMIs to accelerate training on Amazon EC2 C5 and P3 instances


Oracle:
Using GoldenGate LogDump to find bad data
Partition-Wise Operations: new features in 12c and 18c
SOA Suite 12c in Docker containers: only a couple of commands, no installers, no third party scripts
Checking if the current user is logged into Application Builder
PostgreSQL:
Let’s start out with some fun! I really enjoyed Wendy Kuhn’s article on May 5 about the history of PostgreSQL. She starts out by relaying the importance of learning the history behind new technical tools & concepts when you’re learning. I couldn’t agree more.
Speaking of history, I’ve been waiting for the right time to mention this fun article from August 2016. Now is the time, because it relates to the previous article and because I saw a few retweets last week mentioning it. Did you ever wonder why the PostgreSQL logo is an elephant? Or what his name is?? Or even better – did turtles or cheetahs ever represent PostgreSQL???? Patrycja Dybka answers these questions and more. Check it out!
OK, moving on to the serious stuff. :) First off, we’ve got a new round of minor releases of PostgreSQL. Versions 10.4, 9.6.9, etc., were released on May 10. Time to start planning those upgrade cycles!
Next up, JD posted a nice summary of PGConf US in New Jersey on May 7. I saw a lot of familiar faces in his pictures! One quick call-out: I heard good things about the speed mentoring at the career fair. I think that was a great idea. (Among many at PGConf.)
Another interesting thing JD touched on in his blog post was the growing role of larger companies in the community. He gave a few specific examples related to Google and Microsoft. Back on April 17, Pivotal published an article listing a number of specific ways they contribute to PostgreSQL development, as well.
Speaking of cloud companies, who doesn’t like a nice rowdy comparison? Over on the SeveralNines blog, we got exactly that on May 1: a quick side-by-side comparison of a few of the many cloud providers who ship PostgreSQL. There are a bunch more – feel free to leave comments on their blog post with the providers they left out!
As long as we’re doing comparisons, I saw this old website on Twitter last week, and it’s fun enough to pass along. Thomas Kellerer from Germany wrote a nice open-source tool called SQL Workbench/J. In the process of supporting many different databases, he’s learned a lot about the differences between them. And his website has a really detailed list. Check out this list of SQL features by database – PostgreSQL is looking good!
I always enjoy a good story. Singapore-based Ashnik recently published a new case study about a global insurance company who deployed a bank data exchange system on PostgreSQL: a fine example of the serious business that runs on PostgreSQL every day.
Moving into the technology space, infrastructure company Datrium has recently published a series of interesting articles about the benchmarking and heavyweight workloads they’re throwing at PostgreSQL. The most recent article on April 25 discusses PostgreSQL on bare metal and it has links to many previous articles.
In the category of query tuning, how would you like to make a small tweak to your schema and SQL, then experience a 290x speedup? That’s exactly what happened to Yulia Oletskaya! She writes about it in this article on May 7.
“What’s common between DBA and detective? They both solve murder and mystery while trying to make sense of the nonsense.” That’s the first sentence of Alexey Lesovsky’s April 17 article about troubleshooting a PostgreSQL crash.
Going a little deeper, I have a handful of recent articles about specific database features in PostgreSQL.
First, how about a demonstration of PostgreSQL’s top-notch built-in support for full-text search? What better example than analyzing the public email of PostgreSQL contributor Tom Lane to find what his waking hours are? Turns out that he’s very consistent. In fact, it turns out you can use Tom Lane’s consistent email habits to spot server timezone misconfigurations.
Citus also published a nice article back at the beginning of April about row-level security. I didn’t include it last month but it’s worth mentioning now. PostgreSQL’s capabilities here are quite nice.
My past newsletters have been following Dimitri Fontaine’s series on PostgreSQL data types. We’ve got three new ones this time around: JSON, Enum, and Point types.
A big selling point for PostgreSQL is its extensibility. On May 8, Luca Ferrari from Italy published an article in BSD magazine which walked through the process of building a custom extension to provide a new foreign data wrapper that connects the database directly to a file system data source.
Our friends at Timescale put out an article about streaming replication on May 3. Lee Hampton gives one of the best descriptions of this critical HA concept that I’ve seen anywhere.
Finally, can a week go by without new articles about vacuum in PostgreSQL? It seems not!
On Apr 30, Jorge Torralba published an article on DZone about tuning autovacuum. He has a specific focus on bloat, which is an important reason for vacuuming. There are some nice examples here.
And back on April 3, Emily Chang from Datadog published perhaps one of the most detailed articles about vacuum that I’ve seen. Well worth reviewing.
To close up this edition: something a little different. This year marks the 15th anniversary of pgpool. And Tatsuo Ishii reminded us with a blog post on April 15.
So in honor of the 15th anniversary, let’s collect a few recent links *just* about pgpool!
Tatsuo also published two other articles in April about various features of pgpool:
And Vladimir Svedov at severalnines published a two-part series on pgpool in April as well.
And that’s a wrap for this week. Likely more content than you’ll have time for, as usual! My job here is done. :)

Originally posted at https://blog.pythian.com/log-buffer-546-carnival-vanities-dbas/
Categories: DBA Blogs

How to find the UUID of a device in Linux for Oracle ASM

Sun, 2018-05-27 22:45
UUID stands for Universally Unique Identifier. I use the UUID of a disk device when I need to create and add disks for Oracle ASM, as the UUID is independent of the device name or mountpoint. So it's always a good idea to use the UUID of a device in the fstab file in Linux.

So here is how to find the UUID of a device in Linux for Oracle ASM:




[root@bastion ~]$ ls -l /dev/disk/by-uuid
lrwxrwxrwx 1 root root 11 Jan 18 20:38 1101c254-0b92-42ca-b34a-6d283bd2d31b -> ../../sda2
lrwxrwxrwx 1 root root 11 Jan 18 20:38 11dc5104-c07b-4439-bdab-00a76fcb88df -> ../../sda1
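
As an illustration (the mount point and file system type here are made up; adapt them to your setup), an fstab entry keyed on the UUID instead of the device name looks like this:

UUID=1101c254-0b92-42ca-b34a-6d283bd2d31b  /u01  ext4  defaults  0 0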

HTH.


Categories: DBA Blogs

Relocate Goldengate Processes to Other Node with agctl

Fri, 2018-04-20 22:00
Oracle Grid Infrastructure Agents can be used to manage Oracle GoldenGate through Oracle GI. agctl is the utility to add, modify, and relocate the GoldenGate resource. These Oracle GI agents can also be used with other products like WebLogic, MySQL, etc.


Frits has a good article about installation and general commands regarding GI agents for a non-clustered environment.

Following are the commands to check the configuration and relocate the GoldenGate processes to the other node with agctl:


[gi@hostname ~]$ agctl status goldengate [service_name]
[gi@hostname ~]$ agctl config goldengate [service_name]
[gi@hostname ~]$ agctl relocate goldengate [service_name] --node [node_name]
[gi@hostname ~]$ agctl config goldengate [service_name]
[gi@hostname ~]$ agctl status goldengate [service_name]

Hope that helps.
Categories: DBA Blogs

Oracle DBAs and GDPR

Wed, 2018-04-18 01:32
The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) is a regulation by which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU).


To align an Oracle database with the GDPR, we have to encrypt all the databases and files on disk, aka encryption at rest (when data is stored). We also have to encrypt the database network traffic.

The Transparent Data Encryption (TDE) feature allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system. 

You cannot encrypt an existing tablespace in place. So if you wish to encrypt existing data, you need to move it from unencrypted tablespaces to encrypted tablespaces. For doing this you can use any of the following methods (a brief example follows the list):

i) Oracle Data Pump utility.
ii) Commands like CREATE TABLE ... AS SELECT ...
iii) Moving tables with ALTER TABLE ... MOVE ... and rebuilding indexes.
iv) Oracle Online Table Redefinition.
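
As a minimal sketch of option iii (the tablespace, table, and index names are hypothetical, and it assumes a TDE keystore is already configured and open):

-- Create an encrypted tablespace and move existing objects into it
CREATE TABLESPACE users_enc DATAFILE SIZE 100M
  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);

ALTER TABLE app.customers MOVE TABLESPACE users_enc;
ALTER INDEX app.customers_pk REBUILD TABLESPACE users_enc;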

In order to encrypt network traffic between client and server, we have 2 options from Oracle:

i) Native Network Encryption for Database Connections
ii) Configuration of TCP/IP with SSL and TLS for Database Connections

Native Network Encryption is all about setting parameters in the sqlnet.ora file and doesn't have the overhead of the second option, where you have to configure various network files at the server and client and also have to obtain certificates and create a wallet. With the first option, encryption is not necessarily guaranteed (it depends on how the parameters are negotiated), whereas with the second option encryption is guaranteed.
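
As a minimal server-side sqlnet.ora sketch for the first option (the algorithm choices here are just examples; pick them per your security policy):

SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)

With ENCRYPTION_SERVER set to REQUIRED, connections that cannot negotiate encryption are rejected.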
Categories: DBA Blogs

AWS Pricing Made Easy By Simple Monthly Calculator

Wed, 2018-04-18 01:26
With ever-changing pricing models and services, it's hard to keep track of AWS costs.





If you want to check how much it would cost to use a certain AWS service, tailored to your requirements, then use the Simple Monthly Calculator from AWS:

AWS Price Calculator.
Categories: DBA Blogs

AWS CLI is Boon for DBAs

Sat, 2018-04-14 01:45
For most production RDS databases, we normally have a related EC2 server to access that RDS database through tools like Data Pump, SQL*Plus, etc.



RDS is great for point and click, but if you want to run your own monitoring or other administration-related scripts, you need an EC2 instance with the AWS CLI installed. For example, if you want to check the status of RDS instances, or check whether today's RDS snapshots happened and notify the DBA by page or email, you can do that with the AWS CLI.
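
As a minimal sketch of that kind of check (the instance identifier mydb is hypothetical, and it assumes credentials and a default region are already configured for the CLI):

$ aws rds describe-db-instances --query 'DBInstances[].[DBInstanceIdentifier,DBInstanceStatus]' --output table

$ aws rds describe-db-snapshots --db-instance-identifier mydb --query 'DBSnapshots[].[DBSnapshotIdentifier,SnapshotCreateTime,Status]' --output table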

I will be writing and sharing some RDS-related shell scripts using the AWS CLI in the coming days, so stay tuned.
Categories: DBA Blogs

Oracle DBAs and Meltdown & Spectre (M&S) vulnerability Patch

Thu, 2018-03-29 21:01
So what do Oracle DBAs need to do regarding the Meltdown & Spectre (M&S) vulnerability patches?



Well, they should ask the sysadmins to install the patches on the affected versions. They need to get a maintenance window for that. They need to take a full backup of the Oracle infrastructure and databases before the patching, and they should capture a baseline of OS metrics to compare with the post-patch state of the system.

There is not much for Oracle DBAs to do in this regard, as this vulnerability is in hardware and is mainly the sysadmins' domain. Nonetheless, Oracle DBAs should take this opportunity to install the latest CPU (Critical Patch Update).

The vulnerability is in the chipset itself, unrelated to the OS. These vulnerabilities exist at the hardware layer and provide attackers with a way to essentially read the memory used by other processes. Because of the nature of this exploit, the database itself is not currently thought to be a vector in the risk; in fact, the real "fix" for this issue relies on fixing the architecture at the chipset level.

To mitigate the risk currently without replacing your chips, OS vendors are releasing patches that fundamentally change how processes interact with memory structures. This is why we're seeing, in "Addendum to the January 2018 CPU Advisory for Spectre and Meltdown (Doc ID 2347948.1)", that Oracle is releasing patches for Oracle VM (virtual machines are particularly susceptible to this exploit, as one VM can read the memory of processes in another, making this particularly dangerous for cloud computing) and Oracle Enterprise Linux. We do understand that Oracle is exploring the possibility that additional patches may be needed for Oracle Enterprise and Standard Edition databases themselves.


For Exadata only, you need to apply the latest Exadata 12.2.1.1.6 software bundle (the full version number is 12.2.1.1.6.180125.1). The Spectre/Meltdown patches are included in it.


The best course of action would be to confirm with Oracle Support whether any database-related patch is required.

Categories: DBA Blogs

Move a Datafile from one ASM Diskgroup to Another Diskgroup

Thu, 2018-03-29 20:49
Following are steps to move a datafile from one ASM diskgroup to another diskgroup in the same ASM instance:




For this example, let's suppose the full path of the datafile to be moved is +DATA/test/datafile/test.22.121357823 and the datafile number is 11.

Step 1: From RMAN, put datafile 11 offline:

SQL 'ALTER DATABASE DATAFILE ''+DATA/test/datafile/test.22.121357823'' OFFLINE';

Step 2: Backup Datafile 11 to Copy using RMAN:

$ rman target /
BACKUP AS COPY DATAFILE 11 FORMAT '+DATA_NEW';

--- Make a note of the path and name of the generated datafile copy.

Step 3: From RMAN, switch datafile 11 to copy:

SWITCH DATAFILE "+DATA/test/datafile/test.22.121357823" TO COPY;

Step 4: From RMAN, Recover Datafile 11:

RECOVER DATAFILE 11;

Step 5: From RMAN, put datafile 11 online:

SQL 'ALTER DATABASE DATAFILE 11 ONLINE';

Step 6: From SQL*Plus, verify that datafile 11 was correctly switched and is online:

sqlplus / as sysdba
SQL> select file_id,file_name,online_status from dba_data_files where file_id in (11);
Categories: DBA Blogs

It's the Cloud Service, Not the Oracle 18c RDBMS, Which is Self-Driving and Autonomous

Thu, 2018-01-04 20:08
Look at the following picture displayed on Oracle's official website here, and you would forgive anyone who naively believes that Oracle 18c is a self-driving, autonomous database.





Now, in the above-mentioned article, after reading the title and the first paragraph, one still maintains the notion that Oracle 18c is an autonomous and self-driving database. It's only after reading the second paragraph carefully that one understands the true picture.

The second paragraph in that article clearly says that what is self-driving and autonomous is the Oracle Autonomous Database Cloud, powered by the Oracle 18c database, and not Oracle Database 18c itself.

So this autonomy is about the cloud service and not about the RDBMS. This cloud service could very well run on Oracle 12c, or any other version for that matter, and claim the same. The DBA's role in such managed database cloud services is still nominal. But if it's not on the cloud, then the DBA's role is more challenging than before, as every new version is packed with new features.
Categories: DBA Blogs

ORA-00240: control file enqueue held for more than 120 seconds 12c

Mon, 2017-12-11 22:30
Occasionally, in an Oracle 12c database, you may get the "ORA-00240: control file enqueue held for more than 120 seconds" error in the alert log file.


Well, as this error mentions the control file, it looks scary, but if you are getting it infrequently and the instance stays up and running, then there is no need to worry and it can be ignored as a fleeting glitch.

But if it starts happening too often, and in the worst case hangs or crashes the instance, then more than likely it's a bug, and you need to apply either the latest PSU or a one-off patch available from Oracle Support after raising an SR.

There have been some occurrences where this error occurred due to a high number of sessions conflicting with the operating system's ulimit, resulting in hanging of ASM and the RDBMS. Sometimes it could be due to shared pool latch contention, as per a few MOS documents.

So if it's rare, then ignore it and move on with life, as there are plenty of other things to worry about. If it's frequent and a show-stopper, then by all means raise a SEV-1 SR with Oracle Support as soon as possible.
Categories: DBA Blogs

Enter Amazon Aurora Serverless

Thu, 2017-11-30 23:06
More often than not, database administrators across technologies have to fight high load on their databases. It could be ad hoc queries, urgent reports, overrunning jobs, or simply a high frequency and volume of queries from end users.

DBAs try their best to do generous capacity planning to ensure optimal response time and throughput for end users. But there are various scenarios where it becomes very hard to predict the demand; storage and processing needs for unpredictable loads are hard to foretell in advance.





Cloud computing offers the promise of unmatched scalability for processing and storage needs. AWS has introduced a new service which gets closer to that ultimate scalability. Amazon Aurora is a hosted relational database service by AWS. You set your instance size and storage needs while setting Aurora up. If your processing requirements change, you change your instance size, and if you need more read throughput, you add more read replicas.

But that is good for the loads we know about and can more or less predict. What about the loads which appear out of the blue? Maybe for a blogging site, where some post has suddenly gone viral and has started getting millions of views instead of hundreds? And then the traffic disappears as suddenly as it appeared, and maybe after some days the same thing happens with another post?

In this case, if you are running Amazon Aurora, it would be fairly expensive to just increase the instance size or add read replicas in anticipation of a traffic burst that might or might not come.

Faced with this uncertainty, enter Amazon Aurora Serverless. With Serverless Aurora, you don't select your instance size. You simply specify an endpoint and all the queries are routed to that endpoint. Behind that endpoint lies a warm proxy fleet of database capacity which can scale to your requirements within about 5 seconds.

It's all on-demand and ideal for transient, spiky loads. What's even sweeter is that billing is on a per-second basis in Aurora capacity units, with a 1-minute minimum for each newly addressed resource.
Categories: DBA Blogs

Query Flat Files in S3 with Amazon Athena

Tue, 2017-11-21 21:01
Amazon Athena enables you to access data present in flat files stored in S3 (Simple Storage Service) as if it were in a table in a database, and you don't have to set up any server or any other software to accomplish that.

That's another glowing example of being 'Serverless.'


So if a telecommunications company has hundreds of thousands or more call detail record (CDR) files in CSV, Apache Parquet, or any other supported format, they can just be uploaded to an S3 bucket, and then, by using AWS Athena, that CDR data can be queried using well-known ANSI SQL.
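
As a rough sketch of that idea (the table, column, and bucket names are made up), you would define an external table over the S3 location and then query it:

CREATE EXTERNAL TABLE cdr (
  caller           STRING,
  callee           STRING,
  call_start       TIMESTAMP,
  duration_seconds INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://example-bucket/cdr/';

SELECT caller, SUM(duration_seconds) AS total_seconds
FROM cdr
GROUP BY caller
ORDER BY total_seconds DESC
LIMIT 10;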

Ease of use, performance, and cost savings are a few of the benefits of the AWS Athena service. True to the cloud promise, with Athena you are charged for what you actually do; i.e., you are only charged for the queries, at $5 per terabyte scanned. Beyond S3 there are no additional storage costs.

So if you have a huge amount of formatted data in files and all you want to do is query that data using familiar ANSI SQL, then AWS Athena is the way to go. Be aware that Athena is not for enterprise reporting and business intelligence; for that purpose there is AWS Redshift. Athena is also not for running highly distributed processing frameworks such as Hadoop; for that purpose there is AWS EMR. Athena is more suitable for running interactive queries on supported formatted data in S3.

Remember to keep reading the AWS Athena documentation as it will keep improving, lifting limitations, and changing like everything else in the cloud.
Categories: DBA Blogs

List of Networking Concepts to Pass AWS Cloud Architect Associate Exam

Wed, 2017-11-08 16:31
Networking is a pivotal concept in cloud computing, and knowing it is a must to be a successful Cloud Architect. Of course you won't be physically stripping cables to put RJ45 connectors on, but you must know the various facets of logical networking.


You never know what exactly is going to be in the exam, but that's what exams are all about. In order to prepare for the AWS Cloud Architect Associate exam, you must thoroughly read and understand the following from the AWS documentation:


Before you read the above, it would be very beneficial if you also go and learn the following networking concepts:

  • LAN
  • WAN
  • IP addressing
  • Difference between IPV4 and IPV6
  • CIDR
  • SUBNET
  • VPN
  • NAT
  • DNS
  • OSI Layers
  • TCP
  • UDP
  • ICMP
  • Router, Switch
  • HTTP
  • NACL
  • Internet Gateway
  • Virtual Private Gateway
  • Caching, Latency
  • Networking commands like route, netstat, ping, tracert, etc.
Feel free to add in the comments any other network concepts which I might have missed.
Categories: DBA Blogs

Guaranteed Way to Pass AWS Cloud Architect Certification Exam

Tue, 2017-11-07 06:00
Today, and for some time to come, one of the hottest IT certifications to hold is the AWS Cloud Architect certification. There are various reasons for that:



  • If you pass it, it really means you know the stuff properly
  • AWS is the cloud platform of choice the world over and it's not going anywhere
  • There is literally a mad rush out there as companies scramble to shift or extend their infrastructure to the cloud to stay relevant and to cut costs.
  • There is a huge shortage of professionals with theoretical and hands-on knowledge of the cloud, and this shortage is growing alarmingly.
So it's not surprising that sysadmins, developers, DBAs, and other IT professionals are yearning to achieve cloud credentials, and there is no better way to do that than getting AWS certified.

So is there any guaranteed way to pass the AWS Cloud Architect certification exam?

I say Yes and here is the way:

Read the AWS documentation about the following AWS services. Read about these services, then read about them again and again. Learn them like you know your own name. Get a free account and play with these services. When you feel comfortable enough with them and can explain them to anyone inside out, then go ahead and sit the exam, and you will pass it for sure. So read and learn all the services under these sections:


  • Compute
  • Storage
  • Database 
  • Network & Content Delivery
  • Messaging
  • Identity and Access Management
Also make sure to read the FAQs of all the above services. Also read and remember what AWS Kinesis, WAF, Data Pipeline, EMR, and WorkSpaces are. No details are necessary for these ones, just what they stand for and what they do.

Best of Luck.
Categories: DBA Blogs

Passed the AWS Certified Solutions Architect - Associate Exam

Tue, 2017-11-07 05:11
Well, it was quite an enriching experience to take the AWS certification exam, and I am humbled to say that I passed it. It was the first time I had taken any AWS exam, and I must say that the quality was high and it was challenging and interesting.

I will be writing soon about how I prepared and my tips for passing this exam.

Good night for now.
Categories: DBA Blogs

CIDR for Dummies DBA in Cloud

Sun, 2017-10-01 02:00
For DBAs in the cloud, it's imperative to learn various networking concepts, and CIDR is one of them. Without going into much detail, I will just post a quick note here on what CIDR is and how to use it.



A CIDR looks something like this:

10.0.0.0/28

The 10.0.0.0/28 represents a range of IP addresses, and no, it's NOT from 10.0.0.0 to 10.0.0.28. Here is what it is:

So in order to know how many IP addresses are in that range, and where it starts and where it ends, the formula is:

2 ^ (32 - prefix length)

So for the CIDR 10.0.0.0/28 :

2 ^ (32 - 28) = 2 ^ 4 = 2 * 2 * 2 * 2 = 16

So in the CIDR range 10.0.0.0/28, we have 16 IP addresses, in which:

Start IP = 10.0.0.0
End IP  = 10.0.0.15



Also, cloud providers normally reserve a few IPs out of a subnet's CIDR range for services like DNS, NAT, etc. For example, AWS reserves the first 4 IP addresses and the last IP address of every subnet, so out of those 16 addresses you would actually have fewer usable IPs to work with in AWS.

So in the case of AWS, we would have a region containing a VPC, and the CIDR block is assigned to that VPC. In that VPC, for example, we could have 2 subnets and distribute the address space of our CIDR 10.0.0.0/28 between both subnets. Below, purely for illustration, I am giving 5 IPs to each subnet. A subnet is just a logically separate network.

For example we can give:

Subnet 1:

10.0.0.5 to 10.0.0.9

Subnet 2:

10.0.0.10 to 10.0.0.14 

Hope that helps.

PS: And oh, CIDR stands for Classless Inter-Domain Routing (or supernetting).
Categories: DBA Blogs

Idempotent and Nullipotent in Cloud

Tue, 2017-09-19 04:50
I was going through the documentation of Oracle Cloud IaaS when I came across the vaguely familiar term idempotent.



One great thing which I have felt very strongly amid all this cloud-mania is the recall of various theoretical computing concepts which we learned/read in university courses way back. From networking through web concepts to operating systems, there is a plethora of concepts which are coming back into very active practice in the everyday life of cloud professionals.

Two such mouthfuls are idempotent and nullipotent. These are types of actions, and the difference between an idempotent and a nullipotent action is the effect they have when performed repeatedly.

In simple terms:

    When executed, an idempotent action produces its result the first time, and that result (and the resulting state) remains the same no matter how many times the action is repeated after that first time.
  
    A nullipotent action always provides the same result, and leaves the state unchanged, whether it is executed several times or not executed at all.
 
So in terms of the cloud, where REST (Representational State Transfer) APIs and HTTP (Hypertext Transfer Protocol) are the norm, these two concepts of idempotent and nullipotent are very important. In order to manage resources in the cloud (through URIs), there are various HTTP actions which can be performed. Some of these actions are idempotent and some are nullipotent.

For example, the GET action of HTTP is nullipotent: no matter how many times you execute it, it doesn't affect the state of the resource and returns the same result. And PUT is an idempotent HTTP action: it changes the state of the resource the first time it's executed, and all subsequent executions of the same PUT leave the resource in the same state as after the first one.
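
As a tiny illustration (the URL and payload are made up, not from any real cloud API):

$ curl -X GET https://api.example.com/v1/instances/42                            # nullipotent: read-only, same result every time
$ curl -X PUT https://api.example.com/v1/instances/42 -d '{"shape": "small"}'    # idempotent: repeating it leaves the same end state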
Categories: DBA Blogs
