Feed aggregator

GDPR or AVG - regain control Part 2: Your Own Mail

Frank van Bortel - Wed, 2018-05-23 09:09
Create your own mail server. Drop Yahoo, Google or Microsoft mail - they are reading your mail. Debian, Postfix, Dovecot, MariaDB, rspamd. This is the second (and last) part of setting up your own internet tools in order to gain back control. The goal is to set up an email server (receive and send), secure it, and filter spam. Hardware considerations: I used an abandoned ASRock ION330, where I …

Running Kafka, KSQL and the Confluent Open Source Platform 4.x using Docker Compose on a Windows machine

Amis Blog - Wed, 2018-05-23 06:39


For conducting some experiments and preparing several demonstrations I needed a locally running Kafka Cluster (of a recent release) in combination with a KSQL server instance. Additional components from the Core Kafka Project and the Confluent Open Source Platform (release 4.1) would be convenient to have. I needed everything to run on my Windows laptop.

This article describes how I could get what I needed using Vagrant and VirtualBox, Docker and Docker Compose and two declarative files. One is the Vagrantfile that defines the Ubuntu Virtual Machine that Vagrant spins up in collaboration with VirtualBox and that will contain Docker and Docker Compose. This file is discussed in more detail in this article: https://technology.amis.nl/2018/05/21/rapidly-spinning-up-a-vm-with-ubuntu-and-docker-on-my-windows-machine-using-vagrant-and-virtualbox/. The file itself can be found here, as a GitHub Gist: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2.

The second file is the Docker Compose file – which can be found on GitHub as well: https://gist.github.com/lucasjellema/c06f8a790114396f11eadd10434d9b7e . Note: I received great help from Guido Schmutz from Trivadis for this file!
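If you want to reproduce this setup, bringing the VM up is a single command, run from the directory that holds both files. The directory name below is only a placeholder for wherever you saved the two gists:

cd my-kafka-vm   # hypothetical directory containing the Vagrantfile and docker-compose.yml
vagrant up       # creates the Ubuntu VM, installs Docker and Docker Compose and runs the provisioners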

The Docker Compose file is shared into the VM when Vagrant boots the VM, and it is executed automatically by the Vagrant docker-compose provisioner.

Alternatively, you can ssh into the VM and execute it manually using these commands:

cd /vagrant

docker-compose up -d

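Getting a shell inside the VM in the first place, and checking afterwards that all containers came up, can be done like this (a sketch that assumes the default Vagrant setup from the Vagrantfile above):

vagrant ssh          # run from the host directory that contains the Vagrantfile
cd /vagrant          # the folder that Vagrant shares into the VM
docker-compose ps    # list the containers defined in docker-compose.yml and their state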

Docker Compose will start all containers configured in this file, in the order determined by the dependencies between the containers. Note: the IP address in this file (192.168.188.102) should match the IP address defined in the Vagrantfile. The two gists currently do not correspond, because the Vagrantfile defines 192.168.188.110 as the IP address for the VM.
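If you would rather leave the Vagrantfile untouched, you can align the Docker Compose file with the VM's address before starting the containers. This is only a sketch; it assumes the compose file sits in the shared /vagrant folder inside the VM:

# inside the VM: rewrite every occurrence of the old IP address in the compose file
sed -i 's/192.168.188.102/192.168.188.110/g' /vagrant/docker-compose.yml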

Once Docker Compose has done its thing, all containers configured in the docker-compose.yml file will be running. The Kafka broker is accessible at 192.168.188.102:9092, ZooKeeper at 192.168.188.102:2181 and the REST Proxy at port 8084; the Kafka Connect UI listens at port 8001, the Schema Registry UI at port 8002 and the KSQL Server at port 8088. The Kafka Manager listens at port 9000.
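A quick way to verify from the host that the stack is reachable is to ask the REST Proxy for its list of topics, which it returns as a JSON array (adjust the IP address if you changed it in the Vagrantfile):

curl http://192.168.188.102:8084/topics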


To run the KSQL Command Line, use this command to execute the shell in the Docker container called ksql-server:

docker exec -it vagrant_ksql-server_1 /bin/bash

Then, inside that container, simply type

ksql

And, for example, list all topics:

list topics;
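From there you can keep working at the ksql prompt. The statements below are only a sketch against a hypothetical topic called my-topic; substitute one of the topics that list topics returned:

-- my-topic and my_stream are hypothetical names, replace them with your own
print 'my-topic' from beginning;
create stream my_stream (payload varchar) with (kafka_topic='my-topic', value_format='DELIMITED');
select * from my_stream;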


Here are the complete contents of the docker-compose.yml file (largely credited to Guido Schmutz):


version: '2'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:4.1.0"
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker-1:
    image: "confluentinc/cp-enterprise-kafka:4.1.0"
    hostname: broker-1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_BROKER_RACK: rack-a
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_HOST_NAME: 192.168.188.102
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://192.168.188.102:9092'
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: 'broker-1'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema_registry:
    image: "confluentinc/cp-schema-registry:4.1.0"
    hostname: schema_registry
    container_name: schema_registry
    depends_on:
      - zookeeper
      - broker-1
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema_registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
      SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
      SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'

  connect:
    image: confluentinc/cp-kafka-connect:3.3.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker-1
      - schema_registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker-1:9092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    volumes:
      - ./kafka-connect:/etc/kafka-connect/jars

  rest-proxy:
    image: confluentinc/cp-kafka-rest
    hostname: rest-proxy
    depends_on:
      - broker-1
      - schema_registry
    ports:
      - "8084:8084"
    environment:
      KAFKA_REST_ZOOKEEPER_CONNECT: '192.168.188.102:2181'
      KAFKA_REST_LISTENERS: 'http://0.0.0.0:8084'
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
      KAFKA_REST_HOST_NAME: 'rest-proxy'

  adminer:
    image: adminer
    ports:
      - 8080:8080

  db:
    image: mujz/pagila
    environment:
      - POSTGRES_PASSWORD=sample
      - POSTGRES_USER=sample
      - POSTGRES_DB=sample

  kafka-manager:
    image: trivadisbds/kafka-manager
    hostname: kafka-manager
    depends_on:
      - zookeeper
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: 'zookeeper:2181'
      APPLICATION_SECRET: 'letmein'   

  connect-ui:
    image: landoop/kafka-connect-ui
    container_name: connect-ui
    depends_on:
      - connect
    ports:
      - "8001:8000"
    environment:
      - "CONNECT_URL=http://connect:8083"

  schema-registry-ui:
    image: landoop/schema-registry-ui
    hostname: schema-registry-ui
    depends_on:
      - broker-1
      - schema_registry
    ports:
      - "8002:8000"
    environment:
      SCHEMAREGISTRY_URL: 'http://192.168.188.102:8081'

  ksql-server:
    image: "confluentinc/ksql-cli:4.1.0"
    hostname: ksql-server
    ports:
      - '8088:8088'
    depends_on:
      - broker-1
      - schema_registry
    # Note: The container's `run` script will perform the same readiness checks
    # for Kafka and Confluent Schema Registry, but that's ok because they complete fast.
    # The reason we check for readiness here is that we can insert a sleep time
    # for topic creation before we start the application.
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
                       cub kafka-ready -b 192.168.188.102:9092 1 20 && \
                       echo Waiting for Confluent Schema Registry to be ready... && \
                       cub sr-ready schema_registry 8081 20 && \
                       echo Waiting a few seconds for topic creation to finish... && \
                       sleep 2 && \
                       /usr/bin/ksql-server-start /etc/ksql/ksql-server.properties'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_OPTS: "-Dbootstrap.servers=192.168.188.102:9092 -Dksql.schema.registry.url=http://schema_registry:8081 -Dlisteners=http://0.0.0.0:8088"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"

    extra_hosts:
      - "moby:127.0.0.1"
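When you are done experimenting, the usual Docker Compose lifecycle commands apply; run them in the directory that holds docker-compose.yml, which is /vagrant inside the VM:

docker-compose logs -f broker-1   # follow the log of a single container
docker-compose stop               # stop all containers but keep their state
docker-compose down               # stop and remove the containers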
      

Resources

Vagrant File: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2

Docker Compose file: https://gist.github.com/lucasjellema/c06f8a790114396f11eadd10434d9b7e

The post Running Kafka, KSQL and the Confluent Open Source Platform 4.x using Docker Compose on a Windows machine appeared first on AMIS Oracle and Java Blog.

GDPR or AVG - regain control Part 1: Your Own Cloud

Frank van Bortel - Wed, 2018-05-23 06:08
Create your own Cloud. Replace Google or Dropbox, and gain control over your own data. Encrypt it, protect it, share your data only with whom you want. ARM, Ubuntu and secured Nextcloud. This episode will be followed by an entry on email. For now, I settle for a relatively cheap ARM device (an ODroid XU4, to be precise), run it with Ubuntu, and install NextCloud. The choice for Ubuntu has …

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

OTN TechBlog - Wed, 2018-05-23 02:00

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine now comes with the capability to define build pipelines visually. Read more about it in my previous blog.

In this blog we will delve deeper into some of the functionalities of Build Pipeline feature of the new CI Engine in Oracle Developer Cloud.

Auto Start

Auto Start is an option offered while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog for creating a new pipeline; it contains a checkbox which, when checked, makes the pipeline start automatically: when one of the build jobs in the pipeline is executed externally, the rest of the build jobs in the pipeline are triggered as well.

The screenshot below shows the pipeline for a Node.js application created in Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure.

Scenarios When Auto Start is enabled for the Pipeline:

Scenario 1:

If we run the build-microservice build job externally, it triggers the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below, in which only the executed build jobs are shown in green.

Scenario 2:

If we run the test-microservice build job externally, it triggers the execution of the loadtest-microservice build job only. Please refer to the screenshot below, in which only the executed build jobs are shown in green.

Scenario 3:

If we run the loadtest-microservice build job externally, no other build job in the pipeline is executed, in either build sequence.

Exclusive Build

This option prevents the jobs in a pipeline from being built externally while the pipeline itself is executing. It is also offered while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog for creating a new pipeline; checking its checkbox ensures that the pipeline's build jobs cannot be built in parallel to the pipeline execution.

When you run the pipeline, the build jobs are queued for execution, which you can follow in the Build History. In this case two build jobs are queued: build-microservice and WiremockInstall, since they start the two parallel sequences of the same pipeline.

Now if you try to run any of the build jobs in the pipeline, for example test-microservice, you will get an error message, as shown in the screenshot below.

 

Pipeline Instances:

If you click the build pipeline's name link in the Pipelines tab, you can see the pipeline instances. A pipeline instance represents a single execution of the pipeline.

The screenshot below shows the pipeline instances with the timestamp of each execution. Hovering over the status icon of a pipeline instance shows whether the pipeline was auto-started by an external execution of a build job, or shows the success status if all build jobs of the pipeline were built successfully. For each instance, the build jobs that executed successfully are shown in green, while build jobs that did not execute have a white background. You can also cancel a pipeline while it is executing, and you may choose to delete the instance after the pipeline has run.

 

Conditional Build:

The visual build pipeline editor in Oracle Developer Cloud supports conditional builds. Double-click the link connecting two build jobs and select one of the conditions listed below:

Successful: To proceed to the next build job in the sequence if the previous one was a success.

Failed: To proceed to the next build job in the sequence if the previous one failed.

Test Failed: To proceed to the next build job in the sequence if the test failed in the previous build job in the pipeline.

 

Fork and Join:

Scenario 1: Fork

In this scenario you have a build job such as build-microservice on which three other build jobs depend: "DockerBuild", which builds a deployable Docker image for the code; "terraformBuild", which provisions the instance on Oracle Cloud Infrastructure and deploys the code artifact; and "ArtifactoryUpload", which uploads the generated artifact to Artifactory. In that case you can fork the build jobs as shown below.

 

Scenario 2: Join

If you have a build job test-microservice that depends on two other build jobs (build-microservice, which builds and deploys the application, and WiremockConfigure, which configures the service stub), then you need to create a join in the pipeline, as shown in the screenshot below.

 

You can refer to the Build Pipeline documentation here.

Happy Coding!

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

ADF Performance Tuning: Manage Your Fetched Data

Amis Blog - Wed, 2018-05-23 01:41

In this blog I want to stress how important it is to manage the data that you fetch and load into your ADF application. I blogged on this subject earlier. It is still underestimated, in my opinion. Recently I was involved in troubleshooting the performance of two different ADF projects. They had one thing in common: their servers frequently became unavailable, and they fetched far too many rows from the database. This will likely lead to memory over-consumption, ‘stop the world’ garbage collections that can run far too long, a much slower application, or in the worst case even servers that run into an OutOfMemoryError and become unavailable.

Developing a plan to manage and monitor fetched data during the whole lifetime of your ADF application is an absolute must. Keeping your sessions small is indispensable to your performance success. This blog shows a few examples of what can happen if you do not do that.

Normal JVM Heap and Garbage Collection

First, just for our reference, let’s have a look at what a normal, ‘healthy’ JVM heap and garbage collection should look like (bottom left). The ADF Performance Monitor shows real-time or historic heap usage and garbage collection times. The heap space (purple) over time is like a saw-tooth shaped line – showing a healthy JVM. There are many small and just a few big garbage collections (pink). This is because there are basically two types of garbage collectors. The big garbage collections do not run longer than 5 seconds:

Read more on adfpm.com.

The post ADF Performance Tuning: Manage Your Fetched Data appeared first on AMIS Oracle and Java Blog.

Professional Tips For Building Mockups For Web And Mobile

Nilesh Jethwa - Tue, 2018-05-22 23:57

Given how incredibly easy it is to develop a web or mobile application, startups have become commonplace, as a lot of players have already jumped on the bandwagon. You might be here because you want to learn how to compete against … Continue reading …

Original: MockupTiger Wireframes

InfoCaptor Reviews – Customer Feedback and Testimonials

Nilesh Jethwa - Tue, 2018-05-22 23:52

My organization was looking for a way to chart MySQL data on our website. One of the requirements was to use something that didn’t require extensive coding or multiple js files to run. InfoCaptor fit the bill. The programmer interface … Continue reading …

Credit: InfoCaptor Dashboard

InfoCaptor Stack bar label fix

Nilesh Jethwa - Tue, 2018-05-22 23:47

A quick patch was released to fix the problem with the display of data labels for the stack bar chart. Please grab the new versions.

By: InfoCaptor Dashboard

Wireframe Software – MockupTiger new version released

Nilesh Jethwa - Tue, 2018-05-22 23:42

A quick note that a new version of MockupTiger is now available. This release is for cloud access; the desktop version of the wireframe tool is coming very soon. Link: Wireframe software online and desktop. Key highlights of this mockup and … Continue reading …

Via: InfoCaptor Dashboard

How to edit connection on existing dashboard or visualization

Nilesh Jethwa - Tue, 2018-05-22 23:37

Scenario: you have built a dashboard on a development database connection and you need to migrate to a production server using the production database. How do you change the connection on the dashboard charts and visualizations? There are two ways to … Continue reading …

By: InfoCaptor Dashboard

How to delete or disable JDBC or database connection

Nilesh Jethwa - Tue, 2018-05-22 23:32

Here are the steps to disable or remove any connection handle: from the launch pad, click on Manage project/Users; switch to the “Database” tab; select the connection handle by clicking on it; next, click on the pencil icon to edit … Continue reading …

Source: InfoCaptor Dashboard

Firebird JDBC example and SQL to build dashboard

Nilesh Jethwa - Tue, 2018-05-22 23:27

Here are the detailed steps to get connected to Firebird using JDBC and build a dashboard with InfoCaptor. Assumptions: make sure Firebird is installed; make sure the Firebird server is running; set up the SYSDBA user and password (the default password is masterkey). 1. … Continue reading …

Source: InfoCaptor Dashboard

Top 12 Rank Tracking Software and services for your business needs

Nilesh Jethwa - Tue, 2018-05-22 23:22

Search engine optimization and keyword tracking tools play an important role in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically … Continue reading …

By: InfoCaptor Dashboard

How to Attract Visitors to Your Site?

Nilesh Jethwa - Tue, 2018-05-22 23:17

It’s not that easy to go from zero visitors to thousands of potential customers in an instant. But if you implement the right traffic-generating strategy, you can increase the number of visitors coming to your website. If you can … Continue reading …

Via: InfoCaptor Dashboard

What Are the Steps to Optimizing Your Website

Nilesh Jethwa - Tue, 2018-05-22 23:12

People use search engines like Google when looking for products or brands these days. In fact, 60 percent of consumers take advantage of Google search just to find exactly what they want, and more than 80 percent of online search … Continue reading …

Credit: InfoCaptor Dashboard

Best Tools for Keyword Rank Tracking

Nilesh Jethwa - Tue, 2018-05-22 23:08

An important element of search engine optimization (SEO) is choosing the right keyword. With the right keywords, you can make your content rank on search engines. But the work doesn’t stop after ranking; you still need to track the position … Continue reading …

Hat Tip To: InfoCaptor Dashboard

ORA-01659 on creation of a not unique global partitioned index

Tom Kyte - Tue, 2018-05-22 19:06
Dear Tom, I have a table that stores climatic data with this layout: idcell, day, field1, .... This table is locally partitioned by range on day and it has a local PK index: idcell, day. I want to create a non-unique global partitioned index on i...
Categories: DBA Blogs

PL/SQL programming to write to file in batches of 2 million rows

Tom Kyte - Tue, 2018-05-22 19:06
I have been assigned this task: in one table I have 10 million records and I need to export the table data into CSV/text files, with the condition that I export into 5 files, each containing 2 million records. 1) ...
Categories: DBA Blogs

Partition pruning with MEMBER OF operator

Tom Kyte - Tue, 2018-05-22 19:06
Hello Tom! Is it possible to force Oracle to use (sub-)partition pruning when the MEMBER OF operator is used on some nested table? For example: SELECT * FROM A_TABLE WHERE COL_1 MEMBER OF NUMBER_TAB_TYPE(1,10,4); where NUMBER_TAB_TY...
Categories: DBA Blogs

HOW TO GET OLD VALUE OF A DATA

Tom Kyte - Tue, 2018-05-22 19:06
Hi there, I have a situation here: in one of my tables I have a loc_id column, and my requirement is that I want all loc_id values that have changed to a new loc_id, e.g.: LOC_ID CUST_NAME ---------- -----------------...
Categories: DBA Blogs
