
PowerDNS on Docker or Podman, easy to run

What is PowerDNS and DNS as a critical service

PowerDNS is a DNS server, and DNS is an especially critical service in any infrastructure we deploy, since it is the main link between services and operators.

If, among all the options we find when looking for a DNS server (a long list is available at https://en.wikipedia.org/wiki/Comparison_of_DNS_server_software), we require the following three conditions: easy management, simple deployment and open source, we are left with a single option: PowerDNS, together with PowerDNS-Admin for its management.

PowerDNS (whose development can be followed at https://github.com/PowerDNS/pdns, with more than 1800 stars) is a powerful DNS server whose most interesting features for management are a web service with a powerful API and the ability to store its data in databases such as MySQL.

We select PowerDNS-Admin for two reasons: it is actively maintained (https://github.com/ngoduykhanh/PowerDNS-Admin, more than 750 stars) and it offers a friendlier interface, with a look and feel similar to the tools Red Hat currently uses.

Why PowerDNS with PowerDNS-Admin?

Because they make up a powerful package where we have the following advantages:

  • Easy to install
  • Easy to maintain
  • Intuitive interface
  • Everything is stored in a database (which facilitates replication, backups, high availability, etc.)
  • It does not require special browser settings (unlike Red Hat IdM, which requires installing the server certificate on clients)
  • Authentication against multiple sources (LDAP, AD, SAML, GitHub, Google, etc.)
  • Per-domain access permissions

To these advantages we must add the existence of multiple container images that greatly simplify deploying and updating this solution.


Deploy PowerDNS with Docker Compose

The PowerDNS solution consists of three parts: the DNS server, for which we will use the pschiffe/pdns-mysql:alpine container (https://github.com/pschiffe/docker-pdns/tree/master/pdns), the MariaDB database server via the yobasystems/alpine-mariadb container (https://github.com/yobasystems/alpine-mariadb), and the aescanero/powerdns-admin container that we explained in a previous post (https://www.disasterproject.com/index.php/2019/08/docker-reducir-tamano/).…
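A minimal docker-compose sketch of those three parts could look like the following. The environment variable names, exposed ports and credentials are assumptions taken from each image's documentation and must be verified (and changed) before any real deployment:

version: '3'
services:
  mariadb:
    image: yobasystems/alpine-mariadb
    environment:
      # Placeholder credentials, change them for a real deployment
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: powerdns
      MYSQL_USER: powerdns
      MYSQL_PASSWORD: changeme
    volumes:
      - ./mysql-data:/var/lib/mysql
  pdns:
    image: pschiffe/pdns-mysql:alpine
    depends_on:
      - mariadb
    ports:
      - "53:53"
      - "53:53/udp"
    environment:
      # gmysql backend settings pointing at the mariadb service above
      # (names assumed from the pschiffe/pdns-mysql documentation)
      PDNS_gmysql_host: mariadb
      PDNS_gmysql_dbname: powerdns
      PDNS_gmysql_user: powerdns
      PDNS_gmysql_password: changeme
  pdns-admin:
    image: aescanero/powerdns-admin
    depends_on:
      - pdns
    ports:
      # Web interface mapping is an assumption; check the image documentation
      - "8080:80"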

Kubernetes: adventures and misadventures of patching (kubectl patch).

Kubernetes is a powerful container orchestration tool in which many different objects run, and at some point we will want to modify them.

For this, Kubernetes offers us an interesting mechanism: patch. We are going to explain how to use it, and we will see that this tool falls short of what would be desirable.

Patching operations in Kubernetes

According to the Kubernetes documentation and the Kubernetes API rules, three patch types are defined (selected with --type):

Strategic

This is the patch type that Kubernetes uses by default, and it is a native type defined in the SIG. The patch follows the structure of the original object but indicates only the changes (by default the join operation is a merge, which is why it is known as a strategic merge patch) in a YAML file. For example, if we have the following service (in the service.yml file):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: traefik
  name: traefik
  namespace: kube-system
spec:
  clusterIP: 10.43.122.171
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30200
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31832
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: traefik
  type: LoadBalancer

We are going to use the command kubectl patch -f service.yml --type="strategic" -p "$(cat patch.yml)" --dry-run -o yaml, which allows us to test patches on objects without the danger of modifying their content in the Kubernetes cluster.

In this case, if we want this service to listen on an additional port, we will use the "merge" strategy and apply the following patch (patch.yml):

spec:
  ports:
  - name: dashboard
    port: 8080
    protocol: TCP
    targetPort: dashboard

As we can see, the patch follows the structure of the service object only down to the point where we want to make the change (the "ports" array), and since it is a "strategic merge" change, the new port is added to the list, as seen in the command dump:

...
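The same strategic merge patch can also be passed inline as JSON instead of reading it from a file; a minimal sketch against the service above (the resource name and namespace come from the service.yml shown earlier, and --dry-run keeps the cluster untouched):

# Inline JSON equivalent of patch.yml
kubectl patch service traefik -n kube-system --type="strategic" \
  -p '{"spec":{"ports":[{"name":"dashboard","port":8080,"protocol":"TCP","targetPort":"dashboard"}]}}' \
  --dry-run -o yaml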

Docker: Reduce the size of a container

In container environments (Docker or Kubernetes) we need to deploy quickly, and image size is key: the larger a container image is, the slower it is to download from the registry and to start. We must therefore reduce image size as much as possible, while keeping the impact on the complexity of the relations between services to a minimum.

As a demonstration we are going to use a PowerDNS-based solution. Looking at the original PowerDNS-Admin service container (https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/docker/Production/Dockerfile), it has the following characteristics:

  • The developer is very active, and the project includes Python, JavaScript (Node.js) and CSS code. The images on Docker Hub are obsolete with respect to the code.
  • The production Dockerfile does not generate a valid image.
  • It is based on Debian Slim, which, although it deletes a large number of files, is not sufficiently small.

On Docker Hub there are images, but few are sufficiently recent, or they do not use the original repository, so the result comes from an old version of the code. For example, the most downloaded image (really/powerdns-admin) does not include the modifications of the last year and does not use yarn for the Node.js layer.

First step: Decide whether to create a new Docker image

Sometimes it is a matter of necessity or time; in this case we decided to create a new image taking the above into account. As minimum requirements we need a GitHub account (https://www.github.com), a Docker Hub account (https://hub.docker.com/), basic knowledge of git and advanced knowledge of writing Dockerfiles.

In this case, https://github.com/aescanero/docker-powerdns-admin-alpine and https://cloud.docker.com/repository/docker/aescanero/powerdns-admin were created and linked so that images are built automatically whenever a Dockerfile is pushed to GitHub.

Second step: Choose the base of a container

Using a very small base image, oriented to reducing each of the components (executables, libraries, etc.) that will be used, is the minimum requirement to reduce the size of a container, and the choice should always be Alpine (https://alpinelinux.org/).…
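As an illustration only (this is not the actual Dockerfile of the aescanero/powerdns-admin image), a minimal Alpine-based starting point for a Python/Node.js application such as PowerDNS-Admin could look like this; the package names, application entry point and dependency list are assumptions:

# Hypothetical sketch: small Alpine base instead of Debian Slim
FROM alpine:3.10

# Install only runtime dependencies (illustrative list; the real
# application needs its complete set of packages)
RUN apk add --no-cache python3 py3-pip nodejs yarn

# Copy the application and install its Python requirements without
# leaving pip caches behind in the layer
COPY . /app
WORKDIR /app
RUN pip3 install --no-cache-dir -r requirements.txt

CMD ["python3", "run.py"]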

How to launch a Helm Chart without installing Tiller

One of the most interesting details I found when using K3s (https://k3s.io/) is the way it deploys Traefik, using a Helm chart (note: in Kubernetes the command to execute is sudo kubectl, but in k3s it is sudo k3s kubectl, because everything is integrated to use minimal resources).

$ sudo k3s kubectl get pods -A
NAMESPACE        NAME                              READY   STATUS      RESTARTS   AGE
...
kube-system      helm-install-traefik-ksqsj        0/1     Completed   0          10m
kube-system      traefik-9cfbf55b6-5cds5           1/1     Running     0          9m28s
$ sudo k3s kubectl get jobs -A
NAMESPACE      NAME                        COMPLETIONS   DURATION   AGE
kube-system    helm-install-traefik        1/1           50s        12m

We find that Helm is not installed, but we can see a job that runs the helm client, so we get its power without needing tiller (the Helm server) running and consuming resources we could save. But how does it work?

Klipper Helm

The first detail we can see is the use of a Job (a task that is normally executed only once as a container) based on the image "rancher/klipper-helm" (https://github.com/rancher/klipper-helm), which runs a Helm environment simply by downloading it and executing a single script: https://raw.githubusercontent.com/rancher/klipper-helm/master/entry

As a requirement, it needs a service account with administrator permissions in the kube-system namespace; for "traefik" it is:

$ sudo k3s kubectl get clusterrolebinding helm-kube-system-traefik -o yaml
...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: helm-traefik
  namespace: kube-system

What we must take into account is the need to create the service account and, once the Helm installation task has finished, to remove it, since it will not be needed until another Helm removal or update operation (an example of this cleanup is shown after the Result output below).
As an example we will create a task to install a weave-scope service using its Helm chart (https://github.com/helm/charts/tree/master/stable/weave-scope).

Service Creation

We create a workspace to isolate the new service (a namespace in Kubernetes, a project in OpenShift), which we will call helm-weave-scope:

$ sudo k3s kubectl create namespace helm-weave-scope
namespace/helm-weave-scope created

We create a new service account and assign it administrator permissions:

$ sudo k3s kubectl create serviceaccount helm-installer-weave-scope -n helm-weave-scope
serviceaccount/helm-installer-weave-scope created
$ sudo k3s kubectl create clusterrolebinding helm-installer-weave-scope --clusterrole=cluster-admin --serviceaccount=helm-weave-scope:helm-installer-weave-scope
clusterrolebinding.rbac.authorization.k8s.io/helm-installer-weave-scope created

Our next step is to create the Job; we define it in the task.yml file:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: helm-install-weave-scope
  namespace: helm-weave-scope
spec:
  backoffLimit: 1000
  completions: 1
  parallelism: 1
  template:
    metadata:
      labels:
        jobname: helm-install-weave-scope
    spec:
      containers:
      - args:
        - install
        - --namespace
        - helm-weave-scope
        - --name
        - helm-weave-scope
        - --set-string
        - service.type=LoadBalancer
        - stable/weave-scope 
        env:
        - name: NAME
          value: helm-weave-scope
        image: rancher/klipper-helm:v0.1.5
        name: helm-weave-scope
      serviceAccount: helm-installer-weave-scope
      serviceAccountName: helm-installer-weave-scope
      restartPolicy: OnFailure

Execution

And we execute it with:

$ sudo k3s kubectl apply -f task.yml
job.batch/helm-install-weave-scope created

This will launch all the resources that the chart defines:

# k3s kubectl get pods -A -w
NAMESPACE          NAME                                   READY   STATUS      RESTARTS   AGE
helm-weave-scope   helm-install-weave-scope-vhwk2         1/1     Running     0          9s
helm-weave-scope   weave-scope-agent-helm-weave-scope-lrfs2   0/1     Pending     0          0s
helm-weave-scope   weave-scope-agent-helm-weave-scope-drl8v   0/1     Pending     0          0s
helm-weave-scope   weave-scope-agent-helm-weave-scope-lrfs2   0/1     Pending     0          0s
helm-weave-scope   weave-scope-agent-helm-weave-scope-drl8v   0/1     Pending     0          0s
helm-weave-scope   weave-scope-frontend-helm-weave-scope-844c4b9f6f-d22mn   0/1     Pending     0          0s
helm-weave-scope   weave-scope-frontend-helm-weave-scope-844c4b9f6f-d22mn   0/1     Pending     0          0s
helm-weave-scope   weave-scope-agent-helm-weave-scope-lrfs2                 0/1     ContainerCreating   0          1s
helm-weave-scope   weave-scope-agent-helm-weave-scope-drl8v                 0/1     ContainerCreating   0          1s
helm-weave-scope   weave-scope-frontend-helm-weave-scope-844c4b9f6f-d22mn   0/1     ContainerCreating   0          1s
helm-weave-scope   helm-install-weave-scope-vhwk2                           0/1     Completed           0          10s
helm-weave-scope   weave-scope-agent-helm-weave-scope-lrfs2                 1/1     Running             0          13s
helm-weave-scope   weave-scope-agent-helm-weave-scope-drl8v                 1/1     Running             0          20s
helm-weave-scope   weave-scope-frontend-helm-weave-scope-844c4b9f6f-d22mn   1/1     Running             0          20s

Result

We can see that the application installs correctly, without Helm installed or tiller running on the system:

# k3s kubectl get services -A -w
NAMESPACE          NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                     AGE
helm-weave-scope   helm-weave-scope-weave-scope   LoadBalancer   10.43.182.173   192.168.8.242   80:32567/TCP                                7m5s
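As noted earlier, once the installation job has completed, the service account and its cluster-admin binding are no longer needed until the next Helm update or removal, so they can be deleted:

$ sudo k3s kubectl delete clusterrolebinding helm-installer-weave-scope
$ sudo k3s kubectl delete serviceaccount helm-installer-weave-scope -n helm-weave-scope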

Update

With the arrival of Helm v3, Tiller is no longer necessary and its use is much simpler.…


K3s: Simplify Kubernetes

What is K3s?

K3s (https://k3s.io/) is a Kubernetes solution created by Rancher Labs (https://rancher.com/) that promises easy installation, few requirements and minimal memory usage.

For a demo/development environment this is a great improvement over what we discussed previously in Kubernetes: Create a minimal environment for demos, where we saw that creating the Kubernetes environment is complex and requires too many resources, even when Ansible does the hard work.

We will see whether these promises hold, and whether we can include Metallb, which will allow us to emulate the power of the cloud providers' load balancers, and K8dash, which will allow us to track the infrastructure status.

K3s Download

We configure the virtual machines in the same way as for Kubernetes, with the installation of dependencies:

#Debian
sudo apt-get install -y ebtables ethtool socat libseccomp2 conntrack ipvsadm
#Centos
sudo yum install -y ebtables ethtool socat libseccomp conntrack-tools ipvsadm

We download the latest version of k3s from https://github.com/rancher/k3s/releases/latest/download/k3s and put it in /usr/bin with execution permissions. We must do this on all the nodes.
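A minimal sketch of that download on a node (the URL, destination path and permissions all come from the paragraph above):

sudo curl -L -o /usr/bin/k3s https://github.com/rancher/k3s/releases/latest/download/k3s
sudo chmod +x /usr/bin/k3s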

K3s integrated services

K3s includes three "extra" services that change the initial approach we used for Kubernetes. The first is Flannel, integrated into K3s, which handles the whole internal network management layer of Kubernetes; although it is not as complete in features as Weave (for example, multicast support), it is compatible with Metallb. A very complete comparison of Kubernetes network providers can be found at https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/ .

The second service is Traefik, which performs ingress functions from outside the Kubernetes cluster; it is a powerful reverse proxy/balancer with many features operating at network layer 7, running behind Metallb, which acts as the layer 3 load balancer.…


Kubernetes: Create a minimal environment for demos

Every day more business environments are migrating to the Cloud or to Kubernetes/OpenShift, and demonstrations need to meet these requirements.

Kubernetes is not a friendly environment to carry on a notebook of medium capacity (8 GB to 16 GB of RAM), and even less so with a demo that requires certain resources.

Deploy Kubernetes with kubeadm, containerd, metallb and weave

This case is based on the kubeadm deployment (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) for Kubernetes, using containerd (https://containerd.io/) as the container lifecycle manager. To obtain minimal network management we will use metallb (https://metallb.universe.tf/), which allows us to emulate the power of the cloud providers' load balancers (such as AWS Elastic Load Balancer), and Weave (https://www.weave.works/blog/weave-net-kubernetes-integration/), which manages the container networks and integrates seamlessly with metallb.

Finally, taking advantage of the infrastructure, we deploy the real-time resource manager K8dash (https://github.com/herbrandson/k8dash) that will allow us to track the status of the infrastructure and the applications that we deploy in it.

Although the Ansible roles that we have used before (see https://github.com/aescanero/disasterproject) allow us to deploy the environment with ease and cleanliness, we will examine it to understand how the changes we will use in subsequent chapters (using k3s) have an important impact on the availability and performance of the deployed demo/development environment.

First step: Install Containerd

The first step in the installation is the dependencies that Kubernetes has, and a very good reference about them is the documentation that Kelsey Hightower makes available to those who need to know Kubernetes thoroughly (https://github.com/kelseyhightower/kubernetes-the-hard-way), especially for those interested in Kubernetes certifications such as the CKA (https://www.cncf.io/certification/cka/).


We start with a series of network packages:

#Debian
sudo apt-get install -y ebtables ethtool socat libseccomp2 conntrack ipvsadm
#Centos
sudo yum install -y ebtables ethtool socat libseccomp conntrack-tools ipvsadm

We install the container lifecycle manager (a containerd build that includes CRI and CNI) and take advantage of the packages that come with the Kubernetes network interfaces (CNI, or Container Network Interface):

sudo sh -c "curl -LSs https://storage.googleapis.com/cri-containerd-release/cri-containerd-cni-1.2.7.linux-amd64.tar.gz |tar --no-overwrite-dir -C / -xz"

The package includes the systemd unit, so it is enough to enable and start the service:

sudo systemctl enable containerd
sudo systemctl start containerd

Second Step: kubeadm and kubelet

Now we download the Kubernetes executables. The first machine to configure will be the "master", and we have to download the kubeadm binary (the Kubernetes installer) and kubelet (the agent that will connect with containerd on each machine).…
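A sketch of that download, assuming the official release bucket; the exact version is an assumption and should be replaced by the latest stable release:

# Version is an assumption; check the latest stable release
sudo curl -L -o /usr/bin/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubeadm
sudo curl -L -o /usr/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubeadm /usr/bin/kubelet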
