How to publish Kubernetes services with External DNS, MetalLB and Traefik.

Kubernetes with External DNS, MetalLB and Traefik will help us publish web applications (in a microservice environment or not), since the basic requirements are to resolve the name of the host in DNS and to route the web path that leads to the application.

The big map

After the steps taken in K3s: Simplify Kubernetes and Helm v3 to deploy PowerDNS over Kubernetes, we are going to shape a more complete Kubernetes solution, so that you can publish services under your own domain and route and make them accessible from outside, always using the minimum resources for the task.

MetalLB

MetalLB will allow us to emulate the power of the load balancers of the Cloud providers. The requirements of this solution are Kubernetes version 1.13 or higher, no other network load balancer operating in the cluster, and a network controller supported in the list at https://metallb.universe.tf/installation/network-addons/; we must bear in mind that K3s includes Flannel, which is supported, and that in the case of others like Weave some modifications are required.

To install MetalLB you only need to apply the yaml file that deploys all its elements, and to activate MetalLB we must create a configuration (file pool.yml, sketched after this excerpt). When applied with k3s kubectl apply -f pool.yml, it configures MetalLB so that services of type "LoadBalancer" use one of the IPs defined in the specified range (in this case 192.168.9.240/28). MetalLB gives us a great advantage over other types of local solutions, since it does not require the use of SDN (such as Kubernetes on VMware NSX) or specific servers for publishing (such as OpenShift, which in addition to SDN needs specific machines to publish services).

Traefik

Traefik is a router service with multiple features, such as:

- Edge router
- Service discovery
- Layer 7 load balancer
- TLS termination and Let's Encrypt (ACME) support
- Kubernetes Ingress Controller
- IngressRoute CRD
- Canary deployments
- Traces, metrics and logging

With K3s, Traefik is deployed automatically when starting the master node; in the case of Kubernetes it is done through Helm, and for its configuration we need a yaml file like the following:

    dashboard:
      enabled: "true"
      domain: "traefik-dashboard.DOMAIN"
      auth:
        basic:
          admin: $apr1$zjjGWKW4$W2JIcu4m26WzOzzESDF0W/
    rbac:
      enabled: "true"
    ssl:
      enabled: "true"
      mtls:
        enabled: "true"
        optional: "true"
      generateTLS: "true"
    kubernetes:
      ingressEndpoint:
        publishedService: "kube-system/traefik"
    metrics:
      prometheus:
        enabled: "true"

This configuration is very similar to the one that K3s […]
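A minimal sketch of the two MetalLB steps above, assuming Layer 2 mode and the ConfigMap-based configuration the project used at the time (the release tag in the URL is illustrative and should match the MetalLB version you install):

    # Deploy all the MetalLB elements from the project manifest
    k3s kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.8.3/manifests/metallb.yaml

    # pool.yml: a Layer 2 pool handing out IPs from 192.168.9.240/28
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.9.240/28

    # Activate the pool
    k3s kubectl apply -f pool.yml

Any Service declared with type LoadBalancer is then assigned one of the sixteen addresses in that range.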

Helm v3 to deploy PowerDNS over Kubernetes

In the article about PowerDNS on Docker or Podman, easy to run we left its realization on Kubernetes pending; this is because the Kubernetes service structure is much more complex than that of Docker or Podman, and therefore a completely different approach must be taken.

Package with Helm v3

The first thing to keep in mind is to package applications so that deployment doesn't require extensive knowledge about Kubernetes (make life easier for the user and the developer); the second is that we can have many users on the same environment wishing to run the same application, and since we can't create a package for each one, we must be able to reuse the package we have created. In Kubernetes the package standard is Helm, which will allow us to manage deployments easily and reuse them by user, project, etc. The Helm package consists of:

- Chart.yaml: where the meta information about the package itself lives.
- templates: folder where we define the objects that are going to be deployed in Kubernetes, with certain modifications so that they are flexible.
- values.yaml: the objects to be deployed use a series of variables (e.g. servers to access, access codes, databases on a remote server, etc.); in this file we define the default value of each variable, which the user who launches the package can later adjust.
- NOTES.txt: placed within "templates", this file contains a message about the result of the package installation, such as the IP obtained or the entry URL.
- _helpers.tpl: also found inside the "templates" folder, it contains definitions of variables that we can use in the objects, such as the name of the package, and allows us to make multiple deployments of the same application in the same namespace simply by changing the release name.

Helm v3 is the stable version of Helm (at the time of writing these lines Helm is at version v3.0.2) and it has such interesting capabilities as life-cycle management and not requiring services integrated in Kubernetes to deploy packages (to do something similar in version 2 see How to launch a Helm Chart without install Tiller, which requires a job and a launcher pod with Tiller and represents greater complexity). A sketch of a chart layout and its installation follows this excerpt.

Using Helm

Once we have arranged our Kubernetes cluster (see Kubernetes: Create a […]
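As a minimal sketch of that structure (the chart and release names here are hypothetical, as is the mysql.password value), a PowerDNS chart and its Helm v3 installation would look like this:

    powerdns/
      Chart.yaml          # meta information: name, version, description
      values.yaml         # default values the user can override
      templates/
        NOTES.txt         # message shown after the installation
        _helpers.tpl      # helper definitions, e.g. the release-qualified name
        deployment.yaml   # the Kubernetes objects, templated for flexibility
        service.yaml

    # Helm v3 needs no Tiller: it talks to the cluster directly
    kubectl create namespace dns
    helm install my-pdns ./powerdns --namespace dns --set mysql.password=secret

The same chart can be installed several times in the same namespace simply by choosing a different release name in each helm install.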


PowerDNS on Docker or Podman, easy to run

What is PowerDNS and DNS as a critical service

PowerDNS is a DNS server, an especially critical service in any infrastructure that we want to deploy, since it is the main connection between services and operators. If among all the options we find when looking for a DNS server (there is a long list at https://en.wikipedia.org/wiki/Comparison_of_DNS_server_software) we require the following three conditions: easy management, simple deployment and open source, we are left with only one option: PowerDNS, and for its management, PowerDNS-Admin. PowerDNS (whose development can be seen at https://github.com/PowerDNS/pdns, with more than 1800 stars) is a powerful DNS server whose most interesting features for management are a web service with a powerful API and the ability to store information in databases, such as MySQL. And we select PowerDNS-Admin for two reasons: it is actively maintained (https://github.com/ngoduykhanh/PowerDNS-Admin, more than 750 stars) and visually it is a friendlier environment, with a format similar to the one RedHat tools are currently using.

Why PowerDNS with PowerDNS-Admin?

Because they make up a powerful package with the following advantages:

- Easy to install
- Easy to maintain
- Intuitive interface
- Everything is stored in a database (which facilitates replication, backups, high availability, etc.)
- It does not require special browser settings (unlike RedHat IDM, which requires installing the server certificate in clients)
- Authentication against multiple sources (LDAP, AD, SAML, Github, Google, etc.)
- Per-domain access permissions

To these advantages we must add the existence of multiple container images that greatly ease deploying and updating this solution.

Deploy PowerDNS with Docker Compose

The solution with PowerDNS consists of three parts: the DNS server, for which we will use the pschiffe/pdns-mysql:alpine container (https://github.com/pschiffe/docker-pdns/tree/master/pdns), the MariaDB database server through the yobasystems/alpine-mariadb container (https://github.com/yobasystems/alpine-mariadb), and the aescanero/powerdns-admin container that we explained in a previous post (Docker: Reduce the size of a container). It is important to note that the three containers have active maintenance and are small in size, which allows rapid deployment. Ports 53/UDP and 9191/TCP must be available on the machine running the containers, as the sketch after this excerpt shows. In order to provide storage space for the database, a volume has been added […]
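A hedged sketch of such a docker-compose.yml, assuming the environment variable conventions of the three images (the PDNS_* names follow the pschiffe/docker-pdns README and the MYSQL_* names the usual MariaDB image convention; passwords are placeholders):

    version: "3"
    services:
      db:
        image: yobasystems/alpine-mariadb
        environment:                 # assumed standard MariaDB variables
          MYSQL_ROOT_PASSWORD: changeme
          MYSQL_DATABASE: pdns
          MYSQL_USER: pdns
          MYSQL_PASSWORD: changeme
        volumes:
          - pdns-db:/var/lib/mysql   # persistent storage for the database
      pdns:
        image: pschiffe/pdns-mysql:alpine
        ports:
          - "53:53/udp"              # the DNS service itself
        environment:                 # assumed PDNS_* convention of the image
          PDNS_gmysql_host: db
          PDNS_gmysql_dbname: pdns
          PDNS_gmysql_user: pdns
          PDNS_gmysql_password: changeme
        depends_on:
          - db
      admin:
        image: aescanero/powerdns-admin
        ports:
          - "9191:9191"              # the PowerDNS-Admin web interface
        depends_on:
          - db
    volumes:
      pdns-db: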

Kubernetes: adventures and misadventures of patching (kubectl patch).

Kubernetes is a powerful container orchestration tool in which many different objects run, and at some point in time we will be interested in modifying them. For this, Kubernetes offers us an interesting mechanism: patch. We are going to explain how to patch, and we will see that this tool is far from being as complete as would be desirable.

Patching operations in Kubernetes

According to the Kubernetes documentation and the Kubernetes API rules, three patch types are defined (selected with --type):

Strategic

This is the type of patch that Kubernetes uses by default and is a native type, defined in the SIG. It follows the structure of the original object but indicates only the changes (by default joining with merge, which is why it is known as a strategic merge patch) in a yaml file. For example, if we have a service defined in a service.yml file (a sketch is shown after this excerpt), we can use the command kubectl patch -f service.yml --type="strategic" -p "$(cat patch.yml)" --dry-run -o yaml, which allows us to perform tests on objects without the danger of modifying their content in the Kubernetes cluster. In this case, if we want this service to listen on an additional port, we use the "merge" strategy and apply a patch (patch.yml). The patch only follows the service object down to the place where we want to make the change (the "ports" array), and being a "strategic merge" change it is added to the list, as seen in the command dump.

But if instead of "merge" we use "replace", what we do is eliminate all the content of the subtree where we place the label "$patch: replace" and put the content of the patch directly in its place, for example to change the content of the array. In that case the entire contents of "ports:" are deleted and the object defined after the "$patch: replace" tag is left instead; the order is not important, the tag can come first with the same effect. Finally, "delete", indicated by "$patch: delete", deletes the content of the subtree; even if the patch contains new content, it is not added. The result will be an empty spec content.

Merge

This type of patch is a radical change compared to "strategic" because it requires the use of JSON Merge Patch (RFC 7386); it can be applied as yaml or […]
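A minimal sketch of the files this excerpt refers to (service and port names are hypothetical): the first patch adds a port via strategic merge, the second replaces the whole "ports" array using the "$patch: replace" directive.

    # service.yml: the original object
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-app
      ports:
      - name: http
        port: 80

    # patch.yml (strategic merge): the https port is appended to "ports"
    spec:
      ports:
      - name: https
        port: 443

    # patch.yml with $patch: replace: "ports" ends up containing only https
    spec:
      ports:
      - $patch: replace
      - name: https
        port: 443

    # Dry-run test, without touching the cluster
    kubectl patch -f service.yml --type="strategic" -p "$(cat patch.yml)" --dry-run -o yaml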

Docker: Reduce the size of a container

In container environments (Docker or Kubernetes) we need to deploy quickly, and the most important factor for this is image size: the larger a container image is, the slower its download from the registry and its execution, so we must reduce it so that size affects the complexity of the relations between services as little as possible. For a demonstration we are going to use a PowerDNS-based solution. I find that the original PowerDNS-Admin service container (https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/docker/Production/Dockerfile) has the following characteristics:

- The developer is very active, and the project includes Python code, JavaScript (nodejs) and CSS.
- The images in Docker Hub are obsolete with respect to the code.
- The production Dockerfile does not generate a valid image.
- It is based on Debian Slim, which, although it deletes a large number of files, is not sufficiently small.
- In Docker Hub there are images, but few are sufficiently recent or they do not use the original repository, so the result comes from an old version of the code. For example, the most downloaded image (really/powerdns-admin) does not take into account the modifications of the last year and does not use yarn for the nodejs layer.

First step: Decide whether to create a new Docker image

Sometimes it is a matter of necessity or time; in this case it has been decided to create a new image taking the above into account. As minimum requirements we need a GitHub account (https://www.github.com), a Docker Hub account (https://hub.docker.com/) and basic knowledge of git, as well as advanced knowledge of writing Dockerfiles. In this case, https://github.com/aescanero/docker-powerdns-admin-alpine and https://cloud.docker.com/repository/docker/aescanero/powerdns-admin are created and linked so that images are built automatically when a Dockerfile is uploaded to GitHub.

Second step: Choose the base of a container

Using a base of very small size, oriented to the reduction of each of the components (executables, libraries, etc.) that are going to be used, is the first requirement to reduce the size of a container, and the choice should always be Alpine (https://alpinelinux.org/). Besides the points already indicated, standard distributions (based on Debian or RedHat) require a large number of items for package management, base system, auxiliary libraries, etc. Alpine eliminates these dependencies and provides a simple package system that allows identifying which package groups have been installed together, so they can also be removed together (very useful for development dependencies, as the sketch after this excerpt shows). Using Alpine can reduce the size and deployment time of a container by up […]
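A minimal Dockerfile sketch of that technique (the package list and requirements file are illustrative, not the actual PowerDNS-Admin build): build-time packages are installed under the virtual name .build-deps and removed in the same layer, so they never inflate the image.

    FROM alpine:3.10
    # Runtime packages stay in the final image
    RUN apk add --no-cache python3 py3-pip
    COPY requirements.txt /app/requirements.txt
    # Install the build group, compile the Python dependencies,
    # then drop the whole group at once in the same layer
    RUN apk add --no-cache --virtual .build-deps gcc musl-dev python3-dev \
     && pip3 install -r /app/requirements.txt \
     && apk del .build-deps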


K3s: Simplify Kubernetes

What is K3s?

K3s (https://k3s.io/) is a Kubernetes solution created by Rancher Labs (https://rancher.com/) that promises easy installation, few requirements and minimal memory usage. For a Demo/Development environment this is a great improvement over what we discussed previously in Kubernetes: Create a minimal environment for demos, where we saw that creating the Kubernetes environment is complex and requires too many resources, even when Ansible does the hard work. We will see whether what it promises is true and whether we can include MetalLB, which will allow us to emulate the power of the Cloud providers' balancers, and K8dash, which will allow us to track the state of the infrastructure.

K3s Download

We configure the virtual machines in the same way as for Kubernetes, with the installation of dependencies. We download the latest version of k3s from https://github.com/rancher/k3s/releases/latest/download/k3s and put it in /usr/bin with execution permissions. We must do this on all the nodes.

K3s "extra" services

K3s includes three "extra" services that change the initial approach we used for Kubernetes. The first is Flannel; integrated into K3s, it handles the entire internal network management layer of Kubernetes, and although it is not as feature-complete as Weave (for example, it lacks multicast support), it is compatible with MetalLB. A very complete comparison of Kubernetes network providers can be seen at https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/. The second service is Traefik, which handles traffic entering the Kubernetes cluster from outside; it is a powerful reverse proxy/balancer with multiple features that operates at network layer 7, running behind MetalLB, which performs the balancing functions at network layer 3. The last "extra" service of K3s is servicelb, which provides an application load balancer; the problem with this service is that it works at layer 3, the same layer where MetalLB works, so we cannot install both.

K3s Master Install

On the first node (which will be the master), we execute /usr/bin/k3s server --no-deploy servicelb --bind-address IP_MACHINE. If we want the execution to be carried out every time the machine starts, we need to create a service file /etc/systemd/system/k3smaster.service (a sketch is shown after this excerpt) and then execute the commands that enable it. To launch K3s and install the master node, at the end of the installation (about 20 seconds), we will save the contents of […]
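A minimal sketch of that unit file, assuming the --no-deploy flag of the K3s releases of that time and keeping IP_MACHINE as the placeholder used in the text:

    # /etc/systemd/system/k3smaster.service
    [Unit]
    Description=K3s master node
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/k3s server --no-deploy servicelb --bind-address IP_MACHINE
    Restart=always

    [Install]
    WantedBy=multi-user.target

    # Reload systemd and start K3s on every boot
    systemctl daemon-reload
    systemctl enable --now k3smaster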
