Running Docker Containers

by Chandrakant Rai

June 20, 2017

In this blog series we are going to explore how to run Docker containers on different container orchestration services such as Kubernetes and Docker Swarm. We will also explore the container services offered by different cloud providers, such as AWS ECS, Azure Container Service, and Google Container Engine.

This article does not cover best practices for running Docker Swarm or Kubernetes in production environments; it is just a simple orchestration setup for our internal DevOps practice.

Initially we will explore setting up a simple NGINX container and orchestrating it using Docker Swarm mode.

Pre-requisite: a minimum of two VM instances on Google Cloud Platform with Docker CE version 1.12 or above installed.

Docker Overview

Docker is a platform that enables users to build, package, ship and run applications. Docker users package their applications into Docker images; these images are portable artifacts that can be distributed across Linux environments. Docker was originally based on Linux LXC, and the main secret sauce behind it is namespaces, cgroups and union filesystems. Namespaces and cgroups provide isolation for the container environment: some of the namespaces Docker uses are pid, net, ipc, mnt and uts, while cgroups limit resources such as memory and CPU. Docker containers are lightweight and portable and enable consistent environments (immutable infrastructure), solving the common problem of mismatches between dev, test and prod environments.

Newer releases of Docker (1.12 onwards) have introduced native orchestration and cluster-management capabilities, which can help scale your infrastructure up or down based on the capacity needs of the application. For example, an ecommerce site needs to scale up its infrastructure rapidly to meet demand during the peak holiday season and scale down during off-peak periods.

Docker Installation

We will install the latest stable version of Docker CE (Community Edition) on CentOS 7 VMs on Google Cloud Platform, following the recommended approach of setting up the Docker repository and installing from it.

To set up the Docker CE repository, the following steps need to be followed:

Install yum-utils.
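On CentOS 7 this is a single yum command:

```shell
# Install yum-utils, which provides the yum-config-manager utility
sudo yum install -y yum-utils
```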

 


 

Use the following command to set up the stable repository.
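For CentOS, the stable repository is added with:

```shell
# Add the stable Docker CE repository for CentOS
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```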

 


 

 

Update the yum package index.
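On CentOS 7 this looks like:

```shell
# Refresh the yum package index so the new Docker repo is picked up
sudo yum makecache fast
```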

 


 

 

Install the latest version of Docker CE.
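With the repository in place, the install is:

```shell
# Install the latest available docker-ce package from the stable repo
sudo yum install -y docker-ce
```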

 


 

 

Start Docker and verify that Docker is installed correctly by running the hello-world image.
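These two steps are:

```shell
# Start the Docker daemon (and optionally enable it at boot)
sudo systemctl start docker
sudo systemctl enable docker

# Verify the installation: this pulls and runs the hello-world image
sudo docker run hello-world
```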

 


 

 

Repeat the above steps on the second VM instance.

Create a Swarm Cluster

Initialize the swarm on the first VM using the command below. This VM joins the swarm cluster as the manager node; all administrative commands can be executed only on a manager node.
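A typical init command looks like this, where `<manager_ip>` is a placeholder for the internal IP of the first VM:

```shell
# Initialize swarm mode; this node becomes the (only) manager
docker swarm init --advertise-addr <manager_ip>
```

The output includes a ready-made `docker swarm join` command containing a worker token.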

 


 

 

 

 

 

Copy the `docker swarm join` command from the output above and run it on the worker node. Until the worker joins, there is only one node in the swarm cluster, which can be confirmed by running the following command on the manager node.
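For example (the token and IP below are placeholders; use the values from your own `swarm init` output):

```shell
# On the manager: list swarm nodes (only the manager at this point)
docker node ls

# On the worker VM: join the swarm with the token from `swarm init`
docker swarm join --token <worker_token> <manager_ip>:2377

# Back on the manager: `docker node ls` now shows both nodes
docker node ls
```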

 


 

 

 


 

 

 

Creating Services and Scaling a Service Up or Down

Next we will create a simple service from the NGINX container and show how to scale the service up or down based on load. This is not a real-world microservice application, but it could stand in for a simple static-website service within an overall ecommerce microservice application. In future blog posts we will cover how to create an application stack and install the whole stack on the swarm cluster.

The steps to create a service and scale it up or down are shown below.

Run the following command to create a Docker service from the nginx container.
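A minimal service definition could look like this (the published port is our choice for illustration; the service name `web` matches the service referred to later in this post):

```shell
# Create a single-replica service named "web" from the official nginx image,
# publishing container port 80 on port 80 of every swarm node
docker service create --name web --publish 80:80 nginx
```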

 


 

 

 

You can confirm that the nginx service is up by browsing to http://ip_of_vm:80 or running curl http://ip_vm1:80.

Currently only one replica of nginx is running, which can be confirmed by running the "docker service ps" command.
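Assuming the service is named `web`:

```shell
# Show the tasks (replicas) of the "web" service and where they run
docker service ps web
```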

 


 

 

Now let's scale the service by running the following command; Docker will spread the replicas evenly across all nodes of the swarm cluster. You can confirm that replicas were started on node 2 by looking at the output of the command below. In a real application, both nodes of the swarm cluster would sit behind a load balancer.
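Scaling the `web` service to four replicas (an illustrative count) looks like:

```shell
# Scale the service to 4 replicas; swarm spreads tasks across both nodes
docker service scale web=4

# Confirm that some replicas now run on the second node
docker service ps web
```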

 


 

 

 

 

 

If load drops and you would like to scale the service down and run the application on a reduced number of cloud instances, you can drain one of the nodes of the swarm cluster and scale the service down using the following commands.
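Draining the second node (here assumed to be named `instance-2`, as later in this post) looks like:

```shell
# Mark instance-2 as drained: swarm stops scheduling tasks on it and
# reschedules its existing tasks onto the remaining active nodes
docker node update --availability drain instance-2
```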

 


 

 

 

 

After draining instance-2, scale the web service down to 3 replicas and note that Docker places all 3 replicas on instance-1 only.
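With the service still named `web`, the scale-down is:

```shell
# Scale the service down to 3 replicas
docker service scale web=3

# All 3 replicas should now be running on instance-1
docker service ps web
```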

 


 

 

 

 

 

 

This was a short introduction to container orchestration using Docker Swarm. In future blog posts we will delve into how an application stack built on a microservice architecture (using the sample voting app from the Docker site) can be composed as a Docker stack and orchestrated on a swarm cluster, so stay tuned. We will also be delving into how an Oracle Service Bus stack can be orchestrated on Docker Swarm; you can also check out the blog post from Oracle, who are likewise tinkering with running Oracle Service Bus using Docker Swarm.

References:

https://docs.docker.com/engine/getstarted-voting-app/create-swarm/#initialize-the-swarm

https://blogs.oracle.com/reynolds/entry/building_an_fmw_cluster_using