Wednesday, July 27, 2016

Stage 6. Redesign of the project using Docker 1.12



Docker 1.12 gives us a bunch of changes that make the current design obsolete.

Many of the technologies used in this lab are made obsolete by the features included in this release.

Faster Deployment of Swarm

We no longer need an extra protocol or orchestration tool to deploy Swarm. We only need to init the cluster and join new managers and nodes.


Now it is really easy: just type docker swarm init on the first node and the manager is raised without any other tool.
Adding any other node as a manager (docker swarm join --manager MASTER_IP) or as a worker (docker swarm join MASTER_IP) is so easy that it reduces the deployment of Swarm to minimal effort.
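A minimal sketch of the new workflow (the address below is a placeholder, and the join syntax is the one shown above for this release):

# On the first node: initialize the cluster; this node becomes a manager.
docker swarm init
# On any other machine: join as an additional manager or as a worker
# (192.168.33.11 stands for the first manager's address).
docker swarm join --manager 192.168.33.11
docker swarm join 192.168.33.11
# Back on a manager: list the members of the cluster.
docker node ls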



Swarm manages the cluster

This brings important changes to the design, because Consul is no longer needed to raise and maintain the cluster.



Manage networking from inside

We know that Docker can create private networks. But now Swarm publishes ports externally on all the machines and reroutes the traffic to a load-balanced internal network where the containers reside.

This changes the design considerably, making Weave unnecessary for network connectivity.
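A minimal sketch of the idea (the network name, image and port are illustrative placeholders, not the lab's final configuration):

# Create an overlay network that spans the whole Swarm.
docker network create -d overlay elasticmmldap-net
# Create a service attached to it: the published port is reachable on every node
# of the cluster, and the routing mesh balances the traffic to the containers.
docker service create --name ldap --network elasticmmldap-net \
  --publish 389:389 --replicas 3 my-ldap-image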



Tag Engines

Another interesting feature is selecting the engines where we want to run the group of containers of one class (now called a service).

We tag the engine by putting a label in the configuration (/etc/docker/docker.json) of each engine; for example, the engines for the haproxy service will have this docker.json:

{"labels":["elasticmmldap.service=haproxy"]}

New design and GlusterFS

GlusterFS is a filesystem developed to support cluster environments. It is really powerful and can be deployed easily in our lab. The mission of GlusterFS is to give us a coherent filesystem usable by the LDAP service without problems.


GlusterFS will run in disperse mode (files are broken into pieces and recovery information is added; the volume has a redundancy value which defines how many peers can be lost), so we can lose one server without incident.
Each LDAP container will use a folder below /persistent-storage on the engine.






Changes in the Vagrantfile and commands



The server.yaml file with the commands to execute on each server:



After launching vagrant up, there are some steps to activate the distributed storage; they must be executed on the first master node:
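A hypothetical sketch of those steps, assuming the same five bricks as in Stage 5 but with a redundancy of one (matching the single server we can lose, as described above); the exact values used by the lab may differ:

~/elasticmmldap$ for i in swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant ssh swarm-master-1 -c "sudo gluster peer probe $i";done
~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo gluster volume create persistent-storage disperse 5 redundancy 1 transport tcp swarm-master-1:/glusterfs swarm-master-2:/glusterfs swarm-node-1:/glusterfs swarm-node-2:/glusterfs swarm-node-3:/glusterfs force"
~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo gluster volume start persistent-storage"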

Snapshots with Vagrant

Vagrant can save the current state of the virtual machines in snapshots. With all the infrastructure deployed, we use snapshots to make tests faster, as seen in the next figure:



Stage 6 Command Line Execution

Case 1: From Stage 5
These are the steps to raise the virtual machines, followed by a video of the full execution.


$ if [ -d base_docker ];then rm -rf base_docker;fi
$ git clone -b base_docker https://github.com/aescanero/elasticmmldap base_docker
$ cd base_docker
~/base_docker$ vagrant box remove elasticmmldap/base_docker
~/base_docker$ vagrant box remove elasticmmldap/base_swarm
~/base_docker$ vagrant box remove elasticmmldap/base_weave
~/base_docker$ vagrant box update
~/base_docker$ vagrant plugin install vagrant-vbguest
~/base_docker$ vagrant up
~/base_docker$ vagrant halt
~/base_docker$ vagrant package
~/base_docker$ vagrant box add package.box --name elasticmmldap/base_docker
~/base_docker$ vagrant destroy -f
~/base_docker$ cd ..
$ if [ -d elasticmmldap ];then rm -rf elasticmmldap;fi
$ git clone https://github.com/aescanero/elasticmmldap elasticmmldap
$ cd elasticmmldap
~/elasticmmldap$ vagrant up
~/elasticmmldap$ for i in swarm-node-1 swarm-node-2 swarm-node-3; do vagrant ssh $i -c "sudo mount /persistent-storage";done
A video with the execution:





To "save" the lab in snapshots to recover later:
~/elasticmmldap$ for i in swarm-master-1 swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant snapshot push $i;done
All the code for this lab is at: https://github.com/aescanero/elasticmmldap

In the next stage: networking, haproxy and a basic LDAP service.

Monday, May 30, 2016

Stage 5. Redundant, Distributed and Persistent storage for Swarm with GlusterFS



At this stage we can raise containers with LDAP services, but the data in the containers is not permanent and the LDAP database is lost when a container is stopped.

We need persistent storage, and it must always be the same storage. This is an important problem because the LDAP containers can be raised on any host, and the data of the LDAP service must persist across sessions.

Configuring DNS resolution

Name resolution is critical in a cluster environment. This lab design has three sources for name resolution: Weave, Consul and external.

We make this easy with dnsmasq; three configuration files define how to reach the sources (a sketch of the resulting drop-in files follows the list):
  • Resolve .consul domains (for example, for hosts) via the Consul DNS service on port 8600 on each host: server=/consul/127.0.0.1#8600
  • Send .weave.local domains (used by containers in the Weave network) to the Weave DNS service on port 5353 on each host: server=/weave.local/127.0.0.1#5353
  • All other names must be resolved by an external source (using the gateway): server=/*/GWIP
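A minimal sketch of those dnsmasq drop-in files (the file names are illustrative, and GWIP stands for the gateway address as above):

# .consul names go to the local Consul DNS agent (port 8600)
echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul.conf
# .weave.local names go to the local WeaveDNS service (port 5353)
echo 'server=/weave.local/127.0.0.1#5353' | sudo tee /etc/dnsmasq.d/20-weave.conf
# everything else is forwarded to the gateway (replace GWIP with its IP)
echo 'server=/*/GWIP' | sudo tee /etc/dnsmasq.d/30-default.conf
sudo service dnsmasq restart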


Distributed Storage with GlusterFS

GlusterFS is a filesystem developed to support cluster environments. It is really powerful and can be deployed easily in our lab. The mission of GlusterFS is to give us a coherent filesystem usable by every server without problems.



Gluster volumes can be defined in four modes: replication, distribution, striping and disperse. Each mode has its own features and can be mixed with the other modes (more information in the GlusterFS documentation).

  1. Replication: a file is saved on n different storage peers, depending on the number of replicas we need. This gives us protection against the loss of storage.
  2. Distribution: files A and B can be saved on different storage peers.
  3. Striping: file A can be broken into pieces and saved on different storage peers. The main goal is faster access to the information.
  4. Disperse: files are broken into pieces and recovery information is added. The volume has a redundancy value which defines how many peers can be lost.
We will choose a disperse configuration where we can lose two peers. This gives us a folder on each server, called /persistent-storage, to be published into the Docker containers.






Changes in the Vagrantfile and commands



After launching vagrant up, there are some steps to activate the distributed storage; they must be executed on the first master node:

  • Add the other peers: sudo gluster peer probe PEER
  • Create the volume in disperse mode, noting the disperse and redundancy values: sudo gluster volume create persistent-storage disperse 3 redundancy 2 transport tcp ...
  • Activate the volume: sudo gluster volume start persistent-storage
And then we can mount the shared folder (mount /persistent-storage), as sketched below.
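A plain mount /persistent-storage works when /etc/fstab already has an entry for the volume; a hedged sketch, assuming the volume is mounted from the local Gluster peer (the fstab line is hypothetical, not taken from the lab's provisioning):

# Equivalent explicit mount of the disperse volume on this machine
sudo mkdir -p /persistent-storage
sudo mount -t glusterfs localhost:/persistent-storage /persistent-storage
# A matching /etc/fstab entry would look like:
#   localhost:/persistent-storage  /persistent-storage  glusterfs  defaults,_netdev  0  0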

Snapshots with Vagrant

Vagrant can save the current state of the virtual machines in snapshots. With all the infrastructure deployed, we use snapshots to make tests faster, as seen in the next figure:



Stage 5 Command Line Execution

These are the steps to raise the virtual machines, followed by a video of the full execution.

$ vagrant box remove elasticmmldap/base_docker
$ vagrant box remove elasticmmldap/base_swarm
$ vagrant box remove elasticmmldap/base_weave
$ if [ -d base_docker ];then rm -rf base_docker;fi
$ git clone -b base_docker https://github.com/aescanero/elasticmmldap base_docker
$ cd base_docker
~/base_docker$ vagrant up
~/base_docker$ vagrant halt
~/base_docker$ vagrant package
~/base_docker$ vagrant box add package.box --name elasticmmldap/base_docker
~/base_docker$ vagrant destroy -f
~/base_docker$ cd ..
$ if [ -d base_swarm ];then rm -rf base_swarm;fi
$ git clone -b base_swarm https://github.com/aescanero/elasticmmldap base_swarm
$ cd base_swarm
~/base_swarm$ vagrant up
~/base_swarm$ vagrant halt
~/base_swarm$ vagrant package
~/base_swarm$ vagrant box add package.box --name elasticmmldap/base_swarm
~/base_swarm$ vagrant destroy -f
~/base_swarm$ cd ..
$ if [ -d base_weave ];then rm -rf base_weave;fi
$ git clone -b base_weave https://github.com/aescanero/elasticmmldap base_weave
$ cd base_weave
~/base_weave$ vagrant up
~/base_weave$ vagrant halt
~/base_weave$ vagrant package
~/base_weave$ vagrant box add package.box --name elasticmmldap/base_weave
~/base_weave$ vagrant destroy -f
~/base_weave$ cd ..
$ git clone https://github.com/aescanero/elasticmmldap elasticmmldap
$ cd elasticmmldap
~/elasticmmldap$ vagrant up
~/elasticmmldap$ for i in swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant ssh swarm-master-1 -c "sudo gluster peer probe $i";done
~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo gluster volume create persistent-storage disperse 3 redundancy 2 transport tcp swarm-master-1:/glusterfs swarm-master-2:/glusterfs swarm-node-1:/glusterfs swarm-node-2:/glusterfs swarm-node-3:/glusterfs force"
~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo gluster volume start persistent-storage"
~/elasticmmldap$ for i in swarm-master-1 swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant ssh $i -c "sudo mount /persistent-storage";done
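A couple of optional checks (not part of the lab's scripts) to confirm the volume is healthy before using it:

~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo gluster peer status"
~/elasticmmldap$ vagrant ssh swarm-master-1 -c "sudo gluster volume info persistent-storage"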


A video with the execution:





To "save" the lab in snapshots to recover later:
~/elasticmmldap$ for i in swarm-master-1 swarm-master-2 swarm-node-1 swarm-node-2 swarm-node-3; do vagrant snapshot push $i;done
All the code for this lab is at: https://github.com/aescanero/elasticmmldap


Monday, May 16, 2016

Stage 4.1: Networking in Swarm with Weave



Managing the Swarm Network with Weave


Even with Swarm up and with its overlay network feature, we find some problems getting access to the deployed containers. We need a better network layer, and there are some interesting projects working in this area:

  1. Flannel: a network layer for the CoreOS project that uses the distributed key-value store etcd. It is a good solution if you use CoreOS, but there is no option to use Consul, which is what we are using.
  2. Weave: a neutral solution that can be used in Swarm or Kubernetes clusters. It creates a bridge between the container and the host and manages communications between hosts via VXLAN, as well as between each host and its own containers.
  3. Open vSwitch: an SDN that is like a Swiss army knife for network management, but it needs some extra effort to run.

Of the three solutions, the second is clearly the closest to our design. It needs a new box (the weave box, based on the swarm box), as we did in Stage 3, to raise virtual machines faster.
The third option will be discussed in another stage.





The code to create the weave box is in https://github.com/aescanero/elasticmmldap/tree/base_weave and this is how to package and add the box:


$ git clone -b base_weave https://github.com/aescanero/elasticmmldap base_weave
$ cd base_weave
~/base_weave$ vagrant up
~/base_weave$ vagrant halt
~/base_weave$ vagrant package
~/base_weave$ vagrant box add package.box --name elasticmmldap/base_weave
~/base_weave$ vagrant destroy -f
Looking at what we are doing:




To install Weave we only need to download a script called weave from git.io/weave; in the base box we only need the current version of the script, because Weave is built upon containers: router, manager, DNS and proxy, and database.
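A minimal sketch of that installation step (the destination path is an assumption):

# Download the weave wrapper script and make it executable; the containers it
# drives are pulled on the first "weave launch".
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave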

With the base box configured and added to Vagrant, it is easy to raise the five virtual machines.

Swarm + Weave Cluster

Inside each node of the Swarm cluster we launch the Weave containers, and then connect each node with the others.

Weave will give us a DNS solution and access from the hosts to any container on any node.

With the command 'weave status' we'll get all the information about the network. Three very important changes from the Swarm stage:

  1. The Swarm agent will communicate with the Docker service through a Weave proxy on port 12375 (after the weave launch, the proxy is stopped and relaunched with the correct configuration).
  2. Each node will change the default bridge to weave and expose an IP for Weave.
  3. A script will query the Consul nodes to connect the Weave nodes to each other (a sketch of this bring-up follows the list).
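A hypothetical sketch of that per-node bring-up (not the lab's exact provisioning script; it assumes the local Consul agent answers on port 8500 and that host names resolve between machines):

# Launch the Weave router, DNS and proxy on this node.
weave launch
# Ask the local Consul agent for the cluster members and peer with each of them.
for peer in $(curl -s http://127.0.0.1:8500/v1/catalog/nodes | grep -o '"Node":"[^"]*"' | cut -d'"' -f4); do
  [ "$peer" != "$(hostname)" ] && weave connect "$peer"
done
# The Swarm agent then talks to Docker through the Weave proxy on port 12375.
export DOCKER_HOST=tcp://127.0.0.1:12375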







Stage 4 Command Line Execution

These are the steps to raise the virtual machines, followed by a video of the full execution.




$ vagrant box remove elasticmmldap/base_docker
$ vagrant box remove elasticmmldap/base_swarm
$ vagrant box remove elasticmmldap/base_weave
$ if [ -d base_docker ];then rm -rf base_docker;fi
$ git clone -b base_docker https://github.com/aescanero/elasticmmldap base_docker
$ cd base_docker
~/base_docker$ vagrant up
~/base_docker$ vagrant halt
~/base_docker$ vagrant package
~/base_docker$ vagrant box add package.box --name elasticmmldap/base_docker
~/base_docker$ vagrant destroy -f
~/base_docker$ cd ..
$ if [ -d base_swarm ];then rm -rf base_swarm;fi
$ git clone -b base_swarm https://github.com/aescanero/elasticmmldap base_swarm
$ cd base_swarm
~/base_swarm$ vagrant up
~/base_swarm$ vagrant halt
~/base_swarm$ vagrant package
~/base_swarm$ vagrant box add package.box --name elasticmmldap/base_swarm
~/base_swarm$ vagrant destroy -f
~/base_swarm$ cd ..
$ if [ -d base_weave ];then rm -rf base_weave;fi
$ git clone -b base_weave https://github.com/aescanero/elasticmmldap base_weave
$ cd base_weave
~/base_weave$ vagrant up
~/base_weave$ vagrant halt
~/base_weave$ vagrant package
~/base_weave$ vagrant box add package.box --name elasticmmldap/base_weave
~/base_weave$ vagrant destroy -f
~/base_weave$ cd ..
$ git clone -b stage4 https://github.com/aescanero/elasticmmldap elasticmmldap_stage4
$ cd elasticmmldap_stage4
~/elasticmmldap_stage4$ vagrant up


A video with the execution:






Some checks:



To clear the lab:
~/elasticmmldap_stage4$ vagrant destroy -f
~/elasticmmldap_stage4$ vagrant box remove elasticmmldap/base_docker
~/elasticmmldap_stage4$ vagrant box remove elasticmmldap/base_swarm
~/elasticmmldap_stage4$ vagrant box remove elasticmmldap/base_weave
All the code for this lab is at: https://github.com/aescanero/elasticmmldap/tree/stage4

Next lab: persistent storage with GlusterFS.


Thursday, May 5, 2016

Stage 3: Time for Puppet and Swarm

Obsolete. See http://www.disasterproject.com/2016/07/stage-6-redesign-of-project-using.html for Docker 1.12 and later.

Docker is up and we have a Docker base box for Vagrant, so it is time to face the problem of moving everything to a cluster solution: Swarm.

Swarm is a simple and fast cluster solution from Docker: when a container is raised, Swarm selects where to run it (usually the node with the lowest load), and we can update the network configuration on all the nodes at once.

To deploy Swarm we use Puppet. Puppet is a powerful tool to manage and orchestrate a datacenter; we don't really need it to raise Swarm, but we will need it in the near future for more complex labs.

First, as we did in Stage 2.1, it is time to build a Swarm base box to make this and future labs faster.



Time for Puppet and Swarm




We will use the code in https://github.com/aescanero/elasticmmldap/tree/base_swarm and launch this process:


$ git clone -b base_swarm https://github.com/aescanero/elasticmmldap base_swarm
$ cd base_swarm
~/base_swarm$ vagrant up
~/base_swarm$ vagrant halt
~/base_swarm$ vagrant package --output base_swarm.box base-swarm
~/base_swarm$ vagrant box add base_swarm.box --name elasticmmldap/base_swarm
~/base_swarm$ vagrant destroy
The code behind this stuff: first the Vagrantfile:


The Vagrantfile has two interesting parts. The first box is necessary to install Puppet on Debian-like distributions, but there is one interesting line:


puppet module install scotty-docker_swarm
scotty-docker_swarm is a Puppet module that raises Swarm; the command automatically installs its dependencies, one of which is the garethr-docker module, necessary to raise and configure Docker containers.

With the modules loaded, the next step is the second box. During provisioning the VM applies a Puppet manifest; the default manifest must be in puppet/manifests/default.pp.
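Under the hood, the provisioning ends up doing something equivalent to this (a simplification; the module path is Puppet's default, not necessarily the lab's exact invocation):

# Apply the default manifest with the modules installed in the previous step.
sudo puppet apply --modulepath=/etc/puppet/modules /vagrant/puppet/manifests/default.pp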






Swarm Cluster

Well, the next part is a little chaotic. A break to review how far we have advanced:


  1. Vagrant raises the Ubuntu virtual machines (nodes)
  2. Installs the Docker service on each virtual machine
  3. Deploys a discovery service called Consul, configured as a bootstrap master, a master and three agent nodes
  4. Installs Puppet with the Docker and Swarm modules on each node
  5. Installs Go and Swarm

We are almost there with the cluster running; now we need to connect all the services.

All the Docker engines use the local Consul node, and there are three Swarm services to raise. On every node there is a Swarm agent that gives cluster service to Docker, and on the master nodes there are two more services, replication and master, needed to manage the cluster distribution and information.

I chose supervisor to raise the Swarm services because I want them to run at start-up, independently of the Ubuntu release.
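The supervisor programs boil down to running these commands at boot; a hedged sketch using the standalone Swarm binary and the local Consul agent (the advertised interface, the Consul path and the ports are assumptions):

# On every node: the Swarm agent registers the local Docker engine in Consul.
swarm join --advertise=$(hostname -I | awk '{print $2}'):2375 consul://127.0.0.1:8500/swarm
# On the master nodes: the Swarm manager, with replication enabled so a replica
# can take over if the primary manager is lost.
swarm manage -H tcp://0.0.0.0:3375 --replication \
  --advertise=$(hostname -I | awk '{print $2}'):3375 consul://127.0.0.1:8500/swarm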





The changes needed in the Vagrantfile to let the cloned Docker engines run correctly, and the part that configures supervisor with the Swarm init scripts:





Stage 3 Shell Execution

These are the steps to raise the virtual machines, followed by a video of the full execution.
$ git clone -b stage3 https://github.com/aescanero/elasticmmldap elasticmmldap_stage3
$ cd elasticmmldap_stage3
~/elasticmmldap_stage3$ vagrant up
For checks see the last part of the video:




To clear the lab:
~/elasticmmldap_stage3$ vagrant destroy -f
Remember: the code for this lab is at https://github.com/aescanero/elasticmmldap/tree/stage3


Tuesday, April 19, 2016

Stage 2 Addendum 1: Using Vagrant to make a baseline

Vagrant and the box life cycle


In Stage 2 we built five virtual machines with Docker and Consul, but it was slow, and we need to iterate faster; for example, in Stage 3 we will test two options with new and complex software, and we cannot afford to lose time waiting to build the same machine again and again.


We need to make a base for all the other machines, installing Consul and Docker only once. Then we can build a hierarchical set of base machines for other uses, like Swarm servers or other specialized servers, and even build base machines based on other bases.


To build a base machine we need to package a full virtual machine and use this package to create other virtual machines and packages.


In Vagrant this feature is called a box: you can build a box from any virtual machine and import it into a repository to be reused at any moment.


We need to know some Vagrant commands and how to use them to manipulate virtual machines and boxes.


Vagrant Box Life cycle

  • vagrant up MACHINE_NAME

The up command makes Vagrant read the file called "Vagrantfile" and execute it. If called without arguments, it will raise all the machines configured in the Vagrantfile.

  • vagrant halt MACHINE_NAME

halt shuts down running machines; this is necessary before building a package of a virtual machine.

  • vagrant package MACHINE_NAME

package builds boxes. A box is a (stopped) virtual machine encapsulated so it can be imported to create other virtual machines.


To make a base machine and package it as a box we can do:

~/base_docker$ vagrant halt
~/base_docker$ vagrant package --output ~/repository/base_docker.box base-docker


  • vagrant box SUBCOMMAND

box lets us manage external boxes and add our own boxes created by the package command. In our case we can add the box with:

~/base_docker$ vagrant box add ~/repository/base_docker.box --name elasticmmldap/base_docker --box-version=`date +"%Y%m%d%H%M"`

  • vagrant destroy MACHINE_NAME

destroy does the opposite of up: it stops and erases all machines, or only the named machine.

In each stage it is possible that we need to destroy all the machines in order to raise them again in the next stage.

For example, after Stage 1 and Stage 2, all the machines must be cleared:

~/elasticmmldap_stage1$ vagrant destroy -f

The whole process in a single script:

$ mkdir ~/repository
$ git clone -b base_docker https://github.com/aescanero/elasticmmldap base_docker
$ cd base_docker
~/base_docker$ vagrant up
~/base_docker$ vagrant halt
~/base_docker$ vagrant package --output ~/repository/base_docker.box base-docker
~/base_docker$ vagrant box add ~/repository/base_docker.box --name elasticmmldap/base_docker
~/base_docker$ vagrant destroy