Stage 2: Raising Docker by script and the Consul cluster using Supervisor

Obsolete. See http://www.disasterproject.com/2016/07/stage-6-redesign-of-project-using.html for Docker 1.12 and later.


We have the virtual machines running, and the infrastructure is prepared to host a container service and a small synchronized database with information about the containers.


Deploying the Docker engine, two Consul server nodes, and three Consul client nodes

In our simple design, every machine needs the Docker engine installed, plus a service discovery agent.

Containers are a very powerful technology that lets us deploy small services encapsulated in something like a box. But containers have their own problems: one is knowing where a container is running, another is knowing whether it is alive.

Docker is the most widely used container technology for Linux. Docker can raise an application in an isolated space with limited resources (memory, CPU, I/O, network), and it can do so faster than a virtual machine while consuming fewer resources.
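As a quick illustration (the image name and the limit values below are only examples, not part of the lab), once Docker is installed it can raise a container with explicit resource limits:

# run an nginx container in the background, limited to 256 MB of RAM and half the default CPU shares
sudo docker run -d --name web --memory 256m --cpu-shares 512 nginx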


Consul Servers joined in a cluster.



Installing Docker

There is an easy way to install Docker on Ubuntu, with curl as its only dependency; luckily, curl comes with the default Ubuntu packaging.

curl -sSL https://get.docker.com/ | sh
This executes a script that detects the Linux distribution of the virtual machine, configures the repositories and installs any needed packages. Finally it installs the Docker package for your distribution. More about installing Docker at https://docs.docker.com/engine/installation/
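Once the script finishes, a quick way to check the installation is to ask the engine for its version:

sudo docker version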

Configuring Consul

Consul is a simple but powerful replicated database with a lot of associated services. The information is stored in a key/value format by the providers that use it (for example Swarm).
Consul can form clusters (in production at least three server nodes should be used), where the data is replicated between the nodes. The nodes outside the cluster must run in client mode, joining the nodes of the cluster in order to communicate with it.
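As a rough sketch of both modes (the addresses and the datacenter name come from this lab, the data directory is an assumption, and the exact flags used by the provisioning scripts may differ; the consul binary is installed under /opt/consul/bin, see below):

# server node: this lab expects two servers in the cluster
/opt/consul/bin/consul agent -server -bootstrap-expect 2 -data-dir /opt/consul/data -bind 192.168.8.2 -dc elasticldap

# client node: joins the servers to talk to the cluster
/opt/consul/bin/consul agent -data-dir /opt/consul/data -bind 192.168.8.100 -dc elasticldap -join 192.168.8.2 -join 192.168.8.3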

Consul can be installed from GitHub and built with Go, but that is the hard way to work with it; the easy way is to download the binary and run it.

mkdir -p /opt/consul/bin && wget --quiet -N -P /opt/consul https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip && unzip -d /opt/consul/bin /opt/consul/consul_0.6.4_linux_amd64.zip

Supervisor

The Consul service must be executed from a service launcher; to keep things easy, the lab will use a service called supervisord.

This service can be installed from the Ubuntu package:

apt-get install supervisor
The configuration for each service lives in a file under /etc/supervisor/conf.d; the best thing about supervisord is how it keeps the services running, restarting them when needed.
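A minimal sketch of such a file for Consul, for example /etc/supervisor/conf.d/consul.conf (the command line, configuration directory and log path are assumptions; the real file in the repository may differ):

[program:consul]
command=/opt/consul/bin/consul agent -config-dir /etc/consul.d
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/consul.log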


Vagrantfile

    The Vagrant configuration file "Vagrantfile" has some changes: in the first part of the file we add some scripts to install Consul and Docker, and to add a swap file for better memory management of the virtual machine.
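    A minimal sketch of that provisioning script, reusing the commands shown above (the swap size and the script layout are assumptions; the script in the stage2 branch may differ in the details):

    #!/bin/sh
    # create and enable a 1 GB swap file for the virtual machine
    dd if=/dev/zero of=/swapfile bs=1M count=1024
    chmod 600 /swapfile
    mkswap /swapfile && swapon /swapfile
    # install Docker with the official script
    curl -sSL https://get.docker.com/ | sh
    # download the Consul binary into /opt/consul/bin
    apt-get install -y unzip
    mkdir -p /opt/consul/bin
    wget --quiet -N -P /opt/consul https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
    unzip -d /opt/consul/bin /opt/consul/consul_0.6.4_linux_amd64.zip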



    Then the Vagrantfile must run that script, select and place the correct Consul configuration for each machine, and finally install Supervisor and run Consul.
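    A rough sketch of that part of the Vagrantfile (the script, configuration and supervisord file names are assumptions; the real file in the stage2 branch may be organized differently):

    Vagrant.configure("2") do |config|
      # run the installation script described above on every machine
      config.vm.provision "shell", path: "scripts/install.sh"
      # copy the Consul configuration chosen for this machine (server or client)
      config.vm.provision "file", source: "consul/server.json", destination: "/tmp/consul.json"
      # install supervisord, place the configurations and let supervisord start Consul
      config.vm.provision "shell", inline: <<-SHELL
        apt-get install -y supervisor
        mkdir -p /etc/consul.d && mv /tmp/consul.json /etc/consul.d/
        cp /vagrant/supervisor/consul.conf /etc/supervisor/conf.d/
        supervisorctl reread && supervisorctl update
      SHELL
    end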




    The servers.yaml file, where each machine is defined, has no changes from Stage 1.




    Stage 2 Shell Execution

    These are the steps to raise the virtual machines, followed by a video of the full execution.
    $ git clone -b stage2 https://github.com/aescanero/elasticmmldap elasticmmldap_stage2
    $ cd elasticmmldap_stage2
    ~/elasticmmldap_stage2$ vagrant up
    The test to check that everything is OK:
    ~/elasticmmldap_stage2$ vagrant ssh swarm-master-1
    $ sudo /opt/consul/bin/consul members
    The result should be:

    
    Node            Address             Status  Type    Build  Protocol  DC
    swarm-master-1  192.168.8.2:8301    alive   server  0.6.4  2         elasticldap
    swarm-master-2  192.168.8.3:8301    alive   server  0.6.4  2         elasticldap
    swarm-node-1    192.168.8.100:8301  alive   client  0.6.4  2         elasticldap
    swarm-node-2    192.168.8.101:8301  alive   client  0.6.4  2         elasticldap
    swarm-node-3    192.168.8.102:8301  alive   client  0.6.4  2         elasticldap
    To clear the lab:
    ~/elasticmmldap_stage2$ vagrant destroy -f
    You can access the code of this lab at: https://github.com/aescanero/elasticmmldap/tree/stage2
    The video of the deployment:




