Stage 6. Redesign of the project using Docker 1.12



Docker 1.12 brings a bunch of changes that make the current design obsolete.

Many of the technologies used in the lab are made redundant by the features Docker ships in this release.

Faster Swarm Deployment

We no longer need an external discovery service or orchestration tool to deploy Swarm. We only need to init the cluster and join new managers and nodes.


Now it is really easy: just type docker swarm init on the first node and the manager is raised without any other tool.
Adding any other node as a manager (docker swarm join --manager MASTER_IP) or as a worker (docker swarm join MASTER_IP) is so easy that it reduces the Swarm deployment to minimal effort.
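
A minimal sketch of that sequence (the IP address is a placeholder; note that later 1.12 builds replace --manager with the join token printed by docker swarm init):

$ docker swarm init --listen-addr 192.168.99.100:2377    # on the first manager
$ docker swarm join --manager 192.168.99.100:2377        # on an additional manager
$ docker swarm join 192.168.99.100:2377                  # on a worker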



Swarm manages the cluster

This brings important changes to the design, because Consul is no longer needed to bring up and maintain the cluster.



Manage networking from inside

We already know that Docker can create private networks, but now Swarm publishes ports externally on all the machines and reroutes the traffic to a load-balanced internal network where the containers reside.

This changes the design a lot, making Weave unnecessary for network connectivity.
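
For example, a service attached to an overlay network can publish a port on every engine and let Swarm balance the traffic internally. A sketch (the network, service, image and port are illustrative, not the lab's actual values):

$ docker network create --driver overlay ldap-net
$ docker service create --name haproxy --network ldap-net --publish 389:389 haproxy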



Tag Engines

Another interesting feature is the ability to select the engines where we want to run the group of containers of one class (now called services).

We tag the engine by putting a label in the configuration (/etc/docker/docker.json) of each engine; for example, the engines meant to run the haproxy service will have this docker.json:

{"labels":["elasticmmldap.service=haproxy"]}

New design and GlusterFS

GlusterFS is a filesystem developed for cluster environments. It is really powerful and can be deployed easily in our lab. The mission of GlusterFS is to give us a coherent filesystem that the LDAP service can use without problems.


GlusterFS will run in dispersed mode (files are broken into pieces and recovery information is added; the volume has a redundancy value that defines how many peers can be lost), so we can lose one server without incident.
Each LDAP container will use a folder below /persistent-storage on the engine.
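
On each engine the volume only has to be mounted under /persistent-storage, for example through an fstab entry. A sketch, assuming the GlusterFS volume is called persistent-storage and swarm-node-1 is one of its peers (both are assumptions, not taken from the lab code):

# /etc/fstab
swarm-node-1:/persistent-storage  /persistent-storage  glusterfs  defaults,_netdev  0 0
$ sudo mount /persistent-storage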






Changes in the Vagrant file and commands



The server.yaml file with the commands to execute on each server:



After launching vagrant up, there are some steps to activate the distributed storage, which must be executed on the first master node:
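
A hypothetical sketch of those steps, assuming a dispersed volume with redundancy 1 across the three nodes (hostnames, brick paths and the volume name are placeholders, not the lab's actual provisioning):

$ sudo gluster peer probe swarm-node-2
$ sudo gluster peer probe swarm-node-3
$ sudo gluster volume create persistent-storage disperse 3 redundancy 1 \
    swarm-node-1:/bricks/brick1 swarm-node-2:/bricks/brick1 swarm-node-3:/bricks/brick1
# (gluster asks for "force" if the bricks sit on the root partition)
$ sudo gluster volume start persistent-storage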

Snapshot with Vagrant

Vagrant can save the current state of the virtual machines as snapshots. With all the infrastructure deployed, we use snapshots to make tests faster, as shown in the next figure:
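
A minimal sketch of that workflow for a single machine (illustrative):

~/elasticmmldap$ vagrant snapshot push swarm-node-1    # save the current state
~/elasticmmldap$ vagrant snapshot pop swarm-node-1     # restore it (and drop the pushed snapshot)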



Stage 6 Command Line Execution

Case 1: From Stage 5
These are the steps to bring up the virtual machines, followed by a video of the full execution.


$ if [ -d base_docker ];then rm -rf base_docker;fi
$ git clone -b base_docker https://github.com/aescanero/elasticmmldap base_docker
$ cd base_docker
~/base_docker$ vagrant box remove elasticmmldap/base_docker
~/base_docker$ vagrant box remove elasticmmldap/base_swarm
~/base_docker$ vagrant box remove elasticmmldap/base_weave
~/base_docker$ vagrant box update
~/base_docker$ vagrant plugin install vagrant-vbguest
~/base_docker$ vagrant up
~/base_docker$ vagrant halt
~/base_docker$ vagrant package
~/base_docker$ vagrant box add package.box --name elasticmmldap/base_docker
~/base_docker$ vagrant destroy -f
~/base_docker$ cd ..
$ if [ -d elasticmmldap ];then rm -rf elasticmmldap;fi
$ git clone https://github.com/aescanero/elasticmmldap elasticmmldap
$ cd elasticmmldap
~/elasticmmldap$ vagrant up
~/elasticmmldap$ for i in swarm-node-1 swarm-node-2 swarm-node-3; do vagrant ssh $i -c "sudo mount /persistent-storage";done
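
Optionally, a couple of sanity checks confirm that the Swarm and the GlusterFS volume are up (a sketch):

~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo docker node ls"
~/elasticmmldap$ vagrant ssh swarm-node-1 -c "sudo gluster volume status"
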
A video with the execution:





To "save" the lab in snapshots to recover later:
~/elasticmmldap$ for i in swarm-master-1 swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant snapshot push $i;done
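To restore that state later, the same loop with vagrant snapshot pop brings every machine back (pop restores and removes the most recently pushed snapshot):
~/elasticmmldap$ for i in swarm-master-1 swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant snapshot pop $i;done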
All the code for this lab is at: https://github.com/aescanero/elasticmmldap

In the next stage: the basic LDAP service.
