Docker: Reduce the size of a container

In container environments (Docker or Kubernetes) we need to deploy quickly, and image size is the most important factor: the larger a container image is, the slower it is to download from the registry and to start. We must therefore reduce image size, while keeping the impact on the complexity of the relations between services to a minimum.

For a demonstration we are going to use a PowerDNS-based solution. I find that the original PowerDNS-Admin service container (https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/docker/Production/Dockerfile) has the following characteristics:

  • The developer is very active and the project includes Python code, JavaScript (Node.js) and CSS. The images on Docker Hub are obsolete with respect to the code.
  • The production Dockerfile does not generate a valid image.
  • It is based on Debian Slim, which, although it removes a large number of files, is not sufficiently small.

There are images on Docker Hub, but few are recent enough, and some do not use the original repository, so the result is built from an old version of the code. For example, the most downloaded image (really/powerdns-admin) does not include the modifications of the last year and does not use yarn for the Node.js layer.

First step: Decide whether to create a new Docker image

Sometimes it is a matter of necessity or time; in this case we have decided to create a new image taking the above into account. As minimum requirements we need a GitHub account (https://www.github.com), a Docker Hub account (https://hub.docker.com/), basic knowledge of git, and advanced knowledge of writing Dockerfiles.

In this case, https://github.com/aescanero/docker-powerdns-admin-alpine and https://cloud.docker.com/repository/docker/aescanero/powerdns-admin are created and linked so that the images are built automatically whenever a Dockerfile is pushed to GitHub.

Second step: Choose the base of a container

Using a base of very small size and oriented to the reduction of each of the components (executables, libraries, etc.)…
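
The excerpt is cut here, but to illustrate the idea, here is a minimal multi-stage sketch (file names, packages and scripts are illustrative, not the actual Dockerfile of the repository): a Node.js stage builds the static assets with yarn, and only the results are copied into a small Alpine-based final image.

# Illustrative sketch: build assets in a throwaway stage
FROM node:12-alpine AS assets
WORKDIR /build
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build

# Only the runtime and the built assets reach the final image
FROM alpine:3.10
RUN apk add --no-cache python3
WORKDIR /app
COPY . .
COPY --from=assets /build/dist ./static
CMD ["python3", "run.py"]

The build tools, caches and intermediate files stay in the first stage and never reach the image that is pushed to the registry.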

How to launch a Helm Chart without installing Tiller

One of the most interesting details that I found when using K3s (https://k3s.io/) is the way it deploys Traefik, for which it uses a Helm chart (note: in Kubernetes the command to execute is sudo kubectl, but in K3s it is sudo k3s kubectl because everything is integrated to use minimal resources).

$ sudo k3s kubectl get pods -A
NAMESPACE        NAME                              READY   STATUS      RESTARTS   AGE
...
kube-system      helm-install-traefik-ksqsj        0/1     Completed   0          10m
kube-system      traefik-9cfbf55b6-5cds5           1/1     Running     0          9m28s
$ sudo k3s kubectl get jobs -A
NAMESPACE      NAME                        COMPLETIONS   DURATION   AGE
kube-system    helm-install-traefik        1/1           50s        12m

We find that Helm is not installed, but we can see a job that runs the Helm client, so we get its power without needing Tiller (the Helm server) running and consuming resources we could save. But how does it work?

Klipper Helm

The first detail we can see is the use of a job (a task that is usually executed only once as a container) based on the image “rancher/klipper-helm” (https://github.com/rancher/klipper-helm), which provides a Helm environment by simply downloading it and running a single script: https://raw.githubusercontent.com/rancher/klipper-helm/master/entry

As a requirement, it needs a service account with administrator permissions in the kube-system namespace; for “traefik” it is:

$ sudo k3s kubectl get clusterrolebinding helm-kube-system-traefik -o yaml
...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: helm-traefik
  namespace: kube-system

What we must take into account is the need to create the service account and, once the Helm installation task has finished, to remove it, since it will not be needed again until another Helm removal or update operation.
As an example we will create a task to install a weave-scope service using its Helm chart (https://github.com/helm/charts/tree/master/stable/weave-scope).

Service Creation

We create a workspace to isolate the new service (namespace in kubernetes, project in Openshift) that we will call helm-weave-scope:

$ sudo k3s kubectl create namespace helm-weave-scope
namespace/helm-weave-scope created

We create a new service account and assign it administrator permissions:

$ sudo k3s kubectl create serviceaccount helm-installer-weave-scope -n helm-weave-scope
serviceaccount/helm-installer-weave-scope created
$ sudo k3s kubectl create clusterrolebinding helm-installer-weave-scope --clusterrole=cluster-admin --serviceaccount=helm-weave-scope:helm-installer-weave-scope
clusterrolebinding.rbac.authorization.k8s.io/helm-installer-weave-scope created
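
The rest of this excerpt is cut off, but a Job along the lines of the Traefik one above might look like the following. This is a hypothetical sketch: the exact arguments the klipper-helm entry script passes to the helm client should be checked against its repository.

apiVersion: batch/v1
kind: Job
metadata:
  name: helm-install-weave-scope
  namespace: helm-weave-scope
spec:
  template:
    spec:
      serviceAccountName: helm-installer-weave-scope
      restartPolicy: OnFailure
      containers:
      - name: helm
        image: rancher/klipper-helm
        # Helm v2-style arguments handed to the helm client (assumed interface)
        args: ["install", "--name", "weave-scope", "--namespace", "helm-weave-scope", "stable/weave-scope"]

We would apply it with sudo k3s kubectl apply -f job.yaml and, following the advice above, remove the service account once the job completes.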

K3s: Simplify Kubernetes

What is K3s?

K3s (https://k3s.io/) is a Kubernetes solution created by Rancher Labs (https://rancher.com/) that promises easy installation, few requirements and minimal memory usage.

For a demo/development environment this is a great improvement over what we discussed previously in Kubernetes: Create a minimal environment for demos, where we saw that creating the Kubernetes environment is complex and requires too many resources, even when Ansible does the hard work.

We will see whether these promises hold and whether we can include Metallb, which will allow us to emulate the power of the cloud providers' load balancers, and K8dash, which will allow us to track the infrastructure status.

K3s Download

We configure the virtual machines in the same way as for Kubernetes, with the installation of dependencies:

#Debian
sudo apt-get install -y ebtables ethtool socat libseccomp2 conntrack ipvsadm
#Centos
sudo yum install -y ebtables ethtool socat libseccomp conntrack-tools ipvsadm

We download the latest version of k3s from https://github.com/rancher/k3s/releases/latest/download/k3s and put it in /usr/bin with execution permissions. We must do this on all the nodes.
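
For example (a sketch, assuming amd64 nodes):

#On every node: download the k3s binary and install it with execution permissions
curl -Lo k3s https://github.com/rancher/k3s/releases/latest/download/k3s
sudo install -m 755 k3s /usr/bin/k3s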

What does K3s include?

K3s includes three “extra” services that change the initial approach we used for Kubernetes. The first is Flannel: integrated into K3s, it handles the entire internal network layer of Kubernetes. Although it is not as feature-complete as Weave (it lacks multicast support, for example), it is compatible with Metallb. A very complete comparison of Kubernetes network providers can be found at https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/ .

The second service is Traefik, which handles ingress from outside the Kubernetes cluster. It is a powerful reverse proxy/load balancer with multiple features that operates at network layer 7, running behind Metallb, which acts as the layer 3 load balancer.…


Kubernetes: Create a minimal environment for demos

Every day more business environments are migrating to the cloud or to Kubernetes/Openshift, and demonstrations need to meet these requirements.

Kubernetes is not a friendly environment to carry on a notebook of medium capacity (8GB to 16GB of RAM), even less so with a demo that requires certain resources.

Deploy Kubernetes with kubeadm, containerd, metallb and weave

This case is based on the kubeadm deployment guide (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), using containerd (https://containerd.io/) as the container lifecycle manager. For minimal network management we will use metallb (https://metallb.universe.tf/), which allows us to emulate the power of the cloud providers' load balancers (such as AWS Elastic Load Balancer), and Weave (https://www.weave.works/blog/weave-net-kubernetes-integration/), which manages the container networks and integrates seamlessly with metallb.
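
Condensed, and leaving the details for the steps below, the deployment boils down to something like this sketch (it assumes containerd is already installed, and the MetalLB manifest version is illustrative):

#Initialize the control plane using containerd as the runtime
sudo kubeadm init --cri-socket /run/containerd/containerd.sock
#Install Weave as the pod network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
#Install MetalLB to provide LoadBalancer-type services
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.8.1/manifests/metallb.yaml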

Finally, taking advantage of the infrastructure, we deploy the real-time resource manager K8dash (https://github.com/herbrandson/k8dash), which will allow us to track the status of the infrastructure and of the applications we deploy on it.
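
At the time of writing, K8dash's README proposed installing it from a single manifest; a sketch (the path may have changed since):

#Deploy K8dash into the cluster from its published manifest
kubectl apply -f https://raw.githubusercontent.com/herbrandson/k8dash/master/kubernetes-k8dash.yaml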

Although the Ansible roles we have used before (see https://github.com/aescanero/disasterproject) allow us to deploy the environment easily and cleanly, we will examine it to understand how the changes we will introduce in subsequent chapters (using k3s) have an important impact on the availability and performance of the deployed demo/development environment.

First step: Install Containerd

The first step in the installation is Kubernetes' dependencies. A very good reference on them is the documentation that Kelsey Hightower makes available to those who need to know Kubernetes thoroughly (https://github.com/kelseyhightower/kubernetes-the-hard-way), especially to anyone interested in Kubernetes certifications such as the CKA (https://www.cncf.io/certification/cka/).


We start with a series of network packages:

#Debian
sudo apt-get install -y ebtables ethtool socat libseccomp2 conntrack ipvsadm
#Centos
sudo yum install -y ebtables ethtool socat libseccomp conntrack-tools ipvsadm

We install the container lifecycle manager (a containerd build that includes CRI and CNI) and take advantage of the packages that ship with the Kubernetes network interfaces (CNI, the Container Network Interface):

sudo sh -c "curl -LSs https://storage.googleapis.com/cri-containerd-release/cri-containerd-cni-1.2.7.linux-amd64.tar.gz

Choose between Docker and Podman for test and development environments

When must we choose between Docker and Podman?

Many times we find that there are very few resources available and we need an environment to perform a complete product demonstration at a customer site.

In those cases we need to simulate an environment in the simplest way possible and with minimal resources. For this we will adopt containers, but which is the best solution for those small environments?

Docker

Docker is the standard container environment. It is the most widespread and puts together a set of powerful tools: a command-line client, an API server, a container lifecycle manager (containerd) and a container launcher (runc).

[Figure: running Docker with containerd]

Installing Docker is easy, since Docker supplies a script that prepares and configures the necessary requirements and repositories and finally installs and configures Docker, leaving the service ready to use.
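
For reference, the usual way to use that script is the following (it is worth reviewing get.docker.com before running it as root):

#Download and run Docker's convenience installation script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh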

Podman

Podman is a container environment that does not use a service and therefore does not have an API server; requests are made only from the command line, which has advantages and disadvantages that we will explain in this article.

Installing Podman is easy in a CentOS environment (yum install -y podman for CentOS 7 and yum install -y container-tools for CentOS 8), but it needs some work in a Debian environment:

$ sudo apt update && sudo apt install -y software-properties-common dirmngr
$ sudo apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 0x018BA5AD9DF57A4448F0E6CF8BECF1637AD8C79D
$ sudo sh -c "echo 'deb http://ppa.launchpad.net/projectatomic/ppa/ubuntu bionic main' > /etc/apt/sources.list.d/container.list"
$ sudo apt update && sudo apt install -y podman skopeo buildah uidmap debootstrap
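
Since there is no service behind it, each Podman command does all the work itself; a quick check once it is installed:

$ sudo podman run --rm docker.io/library/alpine echo "hello from podman"
$ sudo podman ps -a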

Deploy with Ansible

In our case we have used the Ansible roles developed at https://github.com/aescanero/disasterproject to deploy two virtual machines, one with Podman and the other with Docker.

If we are using a Debian-based distribution, we must install Ansible:

$ sudo sh -c 'echo "deb http://ppa.launchpad.net/ansible/ansible/ubuntu

Linux virtual machine with KVM from the command line

If we want to run virtual machines on a Linux host that has no graphical environment, we can create them from the command line using an XML template.

This article explains how the deployment performed with Ansible and libvirt in “KVM, Ansible and how to deploy a test environment” works internally.

Install Qemu-KVM and Libvirt


First we must install libvirt and Qemu-KVM. On Ubuntu/Debian they are installed with:

$ sudo apt-get install -y libvirt-daemon-system python-libvirt python-lxml

And in CentOS / Redhat with:

$ sudo yum install -y libvirt-daemon-kvm python-lxml

To launch the service we must do:

$ sudo systemctl enable libvirtd && sudo systemctl start libvirtd

Configure a network template

Libvirt provides a powerful tool for managing virtual machines called ‘virsh’, which we must use to manage KVM virtual machines from the command line.
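
For example, to check that virsh can talk to the local libvirt daemon:

$ sudo virsh list --all
$ sudo virsh net-list --all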

For a virtual machine we mainly need three elements. The first is a network configuration that provides IPs to the virtual machines via DHCP. For this, libvirt needs an XML template like the following (which we will name “net.xml”):

<network>
  <name>NETWORK_NAME</name>
  <forward mode='nat'>
    <nat>
      <port start='1' end='65535'/>
    </nat>
  </forward>
  <bridge name='BRIDGE_NAME' stp='on' delay='0'/>
  <ip address='IP_HOST' netmask='NETWORK_MASK'>
    <dhcp>
      <range start='BEGIN_DHCP_RANGE' end='END_DHCP_RANGE'/>
    </dhcp>
  </ip>
</network>

Its main elements are (see the virsh commands after this list):

  • NETWORK_NAME: Descriptive name that we are going to use to designate the network, for example, “test_net” or “production_net”.
  • BRIDGE_NAME: Each network creates an interface on the host server that serves as the gateway for the input/output packets between that network and the outside. Here we assign a descriptive name that lets us identify the interface.
  • IP_HOST: The IP that this interface will have on the host server, which will be the gateway of the virtual machines.
  • NETWORK_MASK: Depends on the network; for testing, a class C mask (255.255.255.0) is usually used.
  • BEGIN_DHCP_RANGE: To assign IPs to the virtual machines, libvirt runs an internal DHCP server (based on dnsmasq); here we define the first IP of the range that can be served to the virtual machines.
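
Once the placeholders are filled in, the network is defined and started with virsh:

$ sudo virsh net-define net.xml
$ sudo virsh net-start NETWORK_NAME
$ sudo virsh net-autostart NETWORK_NAME

net-autostart makes the network come up automatically whenever libvirtd starts.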
