Stage 3: Time for Puppet and Swarm

Obsolete. See http://www.disasterproject.com/2016/07/stage-6-redesign-of-project-using.html for Docker 1.12 and later.

Docker is up and we have a Docker base box for Vagrant, so it is time to face the problem of moving everything to a cluster solution: Swarm.

Swarm is a simple and fast clustering solution from Docker: when a container is launched, Swarm selects where it runs (usually the node with the lowest load), and we can update the network configuration on all the nodes at once.
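
As a rough sketch of how the classic standalone Swarm works (all IP addresses below are placeholders, not values from this lab): each node joins the cluster through a discovery backend such as Consul, a manager schedules the containers, and the regular Docker client simply points at the manager:

```shell
# Join this node to the cluster, using Consul as the discovery backend
# (192.168.33.10 is a placeholder Consul address).
swarm join --advertise=192.168.33.21:2375 consul://192.168.33.10:8500

# Start a manager on a master node; clients will talk to it on port 4000.
# --replication lets several managers run, with leader election.
swarm manage --replication --advertise 192.168.33.11:4000 \
  consul://192.168.33.10:8500

# From now on the Docker client targets the manager, which picks the
# node (usually the least loaded one) where the container will run.
docker -H tcp://192.168.33.11:4000 run -d nginx
```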

To deploy Swarm we use Puppet. Puppet is a powerful tool to manage and orchestrate a datacenter; we don't really need it just to bring up Swarm, but we will need it in the near future to build more complex labs.

First, as we did in Stage 2.1, it is time to build a Swarm base box to make this and future labs faster.


We will use the code in https://github.com/aescanero/elasticmmldap/tree/base_swarm and run the following commands:


$ git clone -b base_swarm https://github.com/aescanero/elasticmmldap base_swarm
$ cd base_swarm
~/base_swarm$ vagrant up
~/base_swarm$ vagrant halt
~/base_swarm$ vagrant package --output base_swarm.box base-swarm
~/base_swarm$ vagrant box add base_swarm.box --name elasticmmldap/base_swarm
~/base_swarm$ vagrant destroy
The code behind all this: first, the Vagrantfile:


The Vagrantfile has two interesting parts. The first block is necessary to install Puppet on Debian-like distributions, but there is one interesting line:


puppet module install scotty-docker_swarm
scotty-docker_swarm is a Puppet module that deploys Swarm. The command automatically installs its dependencies; one of them is the garethr-docker module, needed to run and configure Docker containers.

With the modules installed, the next step is the second block. During provisioning, the VM applies a Puppet manifest; the default manifest must be in puppet/manifests/default.pp.
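
I don't reproduce the repository's manifest here; as an illustration only, this is how a minimal default.pp could be created using the garethr-docker module (the container resource is a made-up example, not part of this lab):

```shell
# Illustration only: write a minimal puppet/manifests/default.pp that
# installs Docker and manages one example container with garethr-docker.
mkdir -p puppet/manifests
cat > puppet/manifests/default.pp <<'EOF'
# Install and run the Docker engine.
class { 'docker': }

# Example container resource managed by Puppet (hypothetical).
docker::run { 'helloworld':
  image   => 'ubuntu',
  command => '/bin/sh -c "while true; do echo hello; sleep 1; done"',
}
EOF
```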






Swarm Cluster

Well, the next part is a little chaotic, so let's pause to review what we have done so far:


  1. Vagrant raises the Ubuntu virtual machines (nodes)
  2. Installs the Docker service in each virtual machine
  3. Deploys a discovery service called Consul, with a bootstrap master, a master and three agent nodes
  4. Installs Puppet with the Docker and Swarm modules in each node
  5. Installs Go and Swarm
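
The Consul layout in step 3 corresponds, roughly, to invocations like these (the IP addresses are placeholders; the exact flags live in the repository's provisioning scripts):

```shell
# Bootstrap master: the first server, which elects itself as leader.
consul agent -server -bootstrap -data-dir /tmp/consul \
  -bind 192.168.33.10

# Second master: another server that joins the bootstrap node.
consul agent -server -data-dir /tmp/consul \
  -bind 192.168.33.11 -join 192.168.33.10

# Agent nodes (three of them): non-server members of the cluster.
consul agent -data-dir /tmp/consul \
  -bind 192.168.33.21 -join 192.168.33.10
```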

We are close to having the cluster running; now we need to connect all the services.

        All the Docker daemons use the local Consul node, and there are three Swarm services to run. Every node runs a Swarm agent that provides the cluster service to Docker, and the master nodes run two more services, replication and manage, needed to handle the distribution and state of the cluster.

        I chose supervisor to run the Swarm services because I want them to start at boot and to be independent of the Ubuntu release.
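
As an illustration of the idea (the real configuration is generated by the provisioning scripts in the repository, and the paths and addresses below are placeholders), a supervisor program entry for the Swarm agent would look like this; on the node it would live under /etc/supervisor/conf.d/:

```shell
# Illustration only: write a supervisor program entry for the Swarm
# agent (placeholder paths and addresses; written locally here, but it
# would go under /etc/supervisor/conf.d/ on the node).
cat > swarm-agent.conf <<'EOF'
[program:swarm-agent]
command=/usr/local/bin/swarm join --advertise=192.168.33.21:2375 consul://192.168.33.10:8500
autostart=true
autorestart=true
stdout_logfile=/var/log/swarm-agent.log
EOF
```

With entries like this, supervisor restarts the Swarm services if they die and starts them at boot, whatever init system the Ubuntu release ships.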





These are the changes needed in the Vagrantfile to let the cloned Docker daemons run correctly, and the part that configures supervisor with the Swarm init scripts.





Stage 3 Shell Execution

These are the steps to bring up the virtual machines, plus a video of the full execution.
$ git clone -b stage3 https://github.com/aescanero/elasticmmldap elasticmmldap_stage3
$ cd elasticmmldap_stage3
~/elasticmmldap_stage3$ vagrant up
For the checks, see the last part of the video:




To clear the lab:
~/elasticmmldap_stage3$ vagrant destroy -f
Remember: the code for this lab is available at https://github.com/aescanero/elasticmmldap/tree/stage3

