How to keep Kubernetes node labels up to date



Let’s say we have a Kubernetes cluster, on bare metal or with a cloud provider, that doesn’t preserve node labels after a maintenance window.

Every time nodes are recreated, or a new node is added to the cluster manually or by an Auto Scaling group, the new nodes don’t come with the labels your pods rely on for scheduling. This might sound familiar. Let’s see how I resolved it.

I wrote a microservice here that makes those labels survive maintenance windows, node failures, and the addition of new nodes to the cluster.

When you spin up a Kubernetes cluster with any cloud provider, it sets default labels in each node’s metadata by default.

This microservice uses those default labels as a reference to keep custom labels up to date. You can use any default label as a reference and define the rules in the code in any combination.

Let’s take the example of a K8s cluster created in AWS with kOps. Below is a sample of the default labels set by the cloud provider:

beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.xlarge
beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1b
kops.k8s.io/instancegroup=nodes,kubernetes.io/hostname=ip-172-40-72-196.ec2.internal
kubernetes.io/role=node

Let’s say you want:

  • Label all nodes that have instance-type=t2.xlarge with the custom label app=web
  • Label all nodes that have zone=us-east-1b with the custom labels app=api and environment=production
  • Use any other combination of labeling you need.


This can be defined in the microservice as:

// RulesList maps a provider-set default label/value pair to the
// custom label/value that should be applied to matching nodes.
var RulesList = []Rules{
    {
        DefaultLabel: "beta.kubernetes.io/instance-type",
        DefaultValue: "t2.xlarge",
        CustomLabel:  "app",
        CustomValue:  "web",
    },
    {
        DefaultLabel: "failure-domain.beta.kubernetes.io/zone",
        DefaultValue: "us-east-1b",
        CustomLabel:  "app",
        CustomValue:  "api",
    },
    {
        DefaultLabel: "failure-domain.beta.kubernetes.io/zone",
        DefaultValue: "us-east-1b",
        CustomLabel:  "environment",
        CustomValue:  "production",
    },
}
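The core matching logic can be sketched as a pure function: given a node’s current labels, return the custom labels the rules say it should carry. This is a simplified illustration of the idea, not the repository’s actual code; the `LabelsToApply` helper is a name I made up for this sketch:

```go
package main

import "fmt"

// Rules pairs a provider-set default label/value with the custom
// label/value to apply to matching nodes.
type Rules struct {
	DefaultLabel string
	DefaultValue string
	CustomLabel  string
	CustomValue  string
}

// LabelsToApply returns the custom labels a node should carry,
// given its current (default) labels and the rule list.
func LabelsToApply(nodeLabels map[string]string, rules []Rules) map[string]string {
	out := map[string]string{}
	for _, r := range rules {
		if nodeLabels[r.DefaultLabel] == r.DefaultValue {
			out[r.CustomLabel] = r.CustomValue
		}
	}
	return out
}

func main() {
	// Labels as a t2.xlarge node in us-east-1b would report them.
	node := map[string]string{
		"beta.kubernetes.io/instance-type":       "t2.xlarge",
		"failure-domain.beta.kubernetes.io/zone": "us-east-1b",
	}
	rules := []Rules{
		{"beta.kubernetes.io/instance-type", "t2.xlarge", "app", "web"},
		{"failure-domain.beta.kubernetes.io/zone", "us-east-1b", "environment", "production"},
	}
	// This node matches both rules, so it gets app=web and environment=production.
	fmt.Println(LabelsToApply(node, rules))
}
```

The real microservice then only has to run this matching on every node event and patch the node when the returned labels are missing.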

It is as easy as that. Once you run this pod in the Kubernetes cluster, it keeps your node labels up to date at all times, whether there was a maintenance window, a new node was added, a node was recreated after a failure, or nodes were added or deleted by an Auto Scaling group.

Instructions on how to run it are in the GitHub repository here

Improvements

We can’t always use default labels as a reference for setting our custom ones. Thanks to the flexibility of the Kubernetes API and its objects, we can use other node attributes as a reference instead. For instance, we can set labels based on a node’s CPU and RAM. To see all the node fields you could use as a reference when extending the microservice, run this against your Kubernetes cluster:

kubectl get --raw /api/v1/nodes | python -m json.tool
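As a sketch of that idea, a capacity-based rule could bucket each node into a size class from the CPU and memory reported in its status. The `sizeLabel` helper and its thresholds below are hypothetical, purely for illustration:

```go
package main

import "fmt"

// sizeLabel is a hypothetical rule: classify a node by its CPU
// (in millicores) and memory (in bytes), as reported in the node's
// status fields. The thresholds are arbitrary examples.
func sizeLabel(cpuMillicores, memBytes int64) string {
	const gib = int64(1) << 30
	switch {
	case cpuMillicores >= 8000 && memBytes >= 32*gib:
		return "large"
	case cpuMillicores >= 4000 && memBytes >= 16*gib:
		return "medium"
	default:
		return "small"
	}
}

func main() {
	// A t2.xlarge has 4 vCPUs (4000m) and 16 GiB of RAM.
	fmt.Println("size=" + sizeLabel(4000, 16*(int64(1)<<30))) // size=medium
}
```

The extended microservice would read these values from each node’s status and apply the resulting label the same way it applies the rule-based ones.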

As usual, if you have any questions, send me a message at contact@wecloudpro.com
