Declarative Kubernetes
With docker, we used docker run ... for each container instance. At some point we reached a situation where our commands became too long with too many parameters, and running the same command again and again was time-consuming and error-prone. That is where we stopped using the imperative approach and started using the declarative approach with docker-compose yaml files. The same applies to the imperative approach with kubectl ..., which I showed you in the previous blog post about Kubernetes basics. This time let's look at how we can achieve the same, and more advanced things, with the declarative approach using Kubernetes resource definition files.
Resource definition
Resources can be defined declaratively using yaml syntax, in a file like config.yaml in your project folder. The config file contains all details of the resource that we want to create. We will create two resources in this post, a Deployment and a Service.
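Regardless of the kind, most resource definitions share roughly the same top-level skeleton (a ConfigMap, which we will meet later, is one exception; it holds data instead of spec):
apiVersion: apps/v1 # API group and version that the resource type belongs to
kind: Deployment # the type of resource being defined
metadata: # identifying information, most importantly the name
  name: my-resource
spec: # the desired state of the resource
  ...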
Deployment definition
So let's first define a yaml file with a Deployment resource:
apiVersion: apps/v1
kind: Deployment # Deployment is a resource that manages a set of pods
metadata:
  name: web-app # name of the deployment
spec:
  replicas: 1
  selector: # selector is used to select which pods are part of this deployment
    matchLabels:
      app: web-app
  template:
    metadata:
      labels: # labels identify the pods so that the selector of the Deployment can match them
        app: web-app # label of the pod
    spec: # spec of the pod resource
      containers: # list of containers that will run in the pod
        - name: api # name of the container
          image: nginx:latest
          ports:
            - containerPort: 80
Selectors and labels
Selectors are used to select which pods are part of a deployment or a service. First, we label the pods with any combination of key and value, or with multiple labels. These labels are then used in the selector of the deployment or service to select the pods. This way we can select multiple pods with the same label, or only one pod with a specific label. We used matchLabels in the selector to select pods that match the label app: web-app.
The other option would be matchExpressions, where we can use more complex expressions to select pods. We use curly braces {} to define an expression, made up of key, operator and values. For example key: app, operator: In, values: [web-app] would select all pods with the label app set to the value web-app.
Valid operators are In, NotIn, Exists and DoesNotExist (Gt and Lt also exist, but only for node selectors, not for label selectors like these). In my experience, I have never used matchExpressions because I could do everything with just matchLabels, but it is worth knowing what this is about.
Applying the Deployment resource
We have now defined a Deployment resource that will create a Pod with a single nginx container listening on port 80. We can now apply this resource definition to Kubernetes with the kubectl apply -f deployment.yaml command.
$ kubectl apply -f deployment.yaml
deployment.apps/web-app created
We can now check if the deployment was created with the kubectl get deployments command.
$ kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
web-app   1/1     1            1           2m
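Behind the scenes the deployment also created a pod; its name gets a generated suffix, so yours will differ:
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
web-app-68bc6fd8d-hphdd   1/1     Running   0          2m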
This nginx runs, but we can't access it from outside the cluster. We need to expose it with a Service resource.
Service definition
As mentioned in the previous post, a Service is a resource that exposes a set of pods as a network service. We can define a Service resource in a service.yaml file:
apiVersion: v1
kind: Service # Service is a resource that exposes a set of pods as a network service
metadata:
  name: web-app # name of the service
spec:
  selector: # selector is used to select which pods are part of this service
    app: web-app # label of the pods that the service will expose
  ports:
    - name: http
      protocol: 'TCP'
      port: 80 # port on which the service will be exposed
      targetPort: 80 # port on which the pod is listening
    - name: https
      protocol: 'TCP'
      port: 443
      targetPort: 443
  type: LoadBalancer # type of the service, same as with the kubectl expose command
We can now apply this resource definition to Kubernetes with:
$ kubectl apply -f service.yaml
service/web-app created
We can now check if the service was created with the kubectl get services command.
$ kubectl get services
NAME      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
web-app   LoadBalancer
or to see it in the browser:
$ minikube service web-app
|-----------|---------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------|-------------|---------------------------|
| default | web-app | http/80 | http://192.168.49.2:31258 |
| | | https/443 | http://192.168.49.2:30923 |
|-----------|---------|-------------|---------------------------|
We can now access the nginx running in the pod from outside the cluster with the URL provided by the minikube service
command.
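A quick check with the http URL from the output above should return the default nginx welcome page:
$ curl http://192.168.49.2:31258
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...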
Updating resources
We can simply update the resource yaml files, apply them again with the kubectl apply -f ... command, and Kubernetes will update the resources accordingly.
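For example, if we change replicas: 1 to replicas: 3 in deployment.yaml and apply it again, Kubernetes scales the existing deployment instead of recreating it:
$ kubectl apply -f deployment.yaml
deployment.apps/web-app configured
$ kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
web-app   3/3     3            3           5m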
Deleting resources
We can delete resources with the kubectl delete -f <yaml config> command. This will delete all resources defined in the config file. You can even delete resources from multiple files with the kubectl delete -f <yaml config1> -f <yaml config2> command.
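For our example, deleting everything we created so far would look like this:
$ kubectl delete -f deployment.yaml -f service.yaml
deployment.apps "web-app" deleted
service "web-app" deleted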
Configuration definition
As we have seen with docker, we can configure some environment variables used by our application, so there needs to be a way to pass these variables to the pods/containers with Kubernetes as well. First, we need to update our application code to read the environment variables. In the case of nginx, I just checked the nginx docker page and saw that there is an environment variable NGINX_VERSION that can be set. We will use it for our example, but in real life we would set some more useful configs. If we had a real python application, we would update it to read and use the environment variable with something like this:
import os
port = os.getenv('MY_ENV_NAME', 'my_default_value')
or in the case of javascript:
const port = process.env.MY_ENV_NAME || 'my_default_value';
With Kubernetes, there are two ways of setting these environment variables, or so-called application configurations: the first is doing it inline, directly in the deployment resource definition, and the second is to define a ConfigMap resource and reference it in the deployment resource definition. Let's look at both ways.
Inline configuration
Let's adapt our previously used nginx container specification with an inline env field, set directly in the deployment resource definition.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: api
          image: nginx:latest
          ports:
            - containerPort: 80
          env: # inline configuration
            - name: NGINX_VERSION # name of the environment variable
              value: "1.26.0" # value of the environment variable
            - name: MY_OTHER_ENV_NAME # another environment variable
              value: "hello world" # value of another environment variable
            - name: MESSAGE
              value: "$(MY_OTHER_ENV_NAME) from Kubernetes" # we can reference other environment variables
In this case we set the docker container environment variable NGINX_VERSION to 1.26.0. We can now apply this resource definition to Kubernetes with the kubectl apply -f deployment-w-env.yaml command.
$ kubectl apply -f deployment-w-env.yaml -f service.yaml
deployment.apps/web-app configured
service/web-app unchanged
Kubernetes will tell the pod to start the docker container with the environment variable NGINX_VERSION, and the specific value will be used when starting the docker image. We will not see a real change in the application behaviour, but we can check that the environment variable is set with the kubectl exec -it <pod-name> -- env command.
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
web-app-68bc6fd8d-hphdd   1/1     Running   0          3m34s
$ kubectl exec -it web-app-68bc6fd8d-hphdd -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=web-app-68bc6fd8d-hphdd
TERM=xterm
NGINX_VERSION=1.26.0
...
ConfigMap configuration
The second way of setting environment variables is to use a ConfigMap resource, which is basically a k8s object with key-value pairs. These values can be injected into your application by referencing ConfigMap keys in your pod definitions, or even in the command line arguments of the container. Let's create such an example with a ConfigMap resource defined in web-app-config.yaml:
apiVersion: v1
kind: ConfigMap # ConfigMap is a resource type that holds key-value pairs
metadata: # as with each resource, we want to be able to reference it
  name: web-app-config # name of the ConfigMap that we will reference in the deployment
data: # start of the section where we define key-value pairs
  NGINX_VERSION: '1.26.0' # a key NGINX_VERSION with value 1.26.0
  MY_OTHER_ENV_NAME: 'hello world' # another key MY_OTHER_ENV_NAME with value hello world
Before we can use this ConfigMap, we need to reference it in the deployment resource definition. We will change the env part of the container specification and call the file deployment-w-configmap.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: api
          image: nginx:latest
          ports:
            - containerPort: 80
          env:
            - name: NGINX_VERSION
              valueFrom: # valueFrom is used to reference defined ConfigMaps
                configMapKeyRef:
                  name: web-app-config # name of the ConfigMap from the metadata section of the ConfigMap
                  key: NGINX_VERSION # key of the ConfigMap from the data section of the ConfigMap
We can now apply this resource definition to Kubernetes with the kubectl apply -f deployment-w-configmap.yaml command.
$ kubectl apply -f web-app-config.yaml -f deployment-w-configmap.yaml -f service.yaml
configmap/web-app-config created
deployment.apps/web-app configured
service/web-app unchanged
And we should be able to see that the old pods are terminated and new pods are created with the new environment variables.
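We can watch the rollout finish with the kubectl rollout status command:
$ kubectl rollout status deployment/web-app
deployment "web-app" successfully rolled out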
With this selective approach you pick the keys one by one; alternatively, you can use envFrom to inject all keys from the ConfigMap into the pod's environment:
...
          # env:
          #   - name: NGINX_VERSION
          #     valueFrom:
          #       configMapKeyRef:
          #         name: web-app-config
          #         key: NGINX_VERSION
          envFrom: # inject all keys from the ConfigMap into the pod's environment
            - configMapRef: # reference to the ConfigMap
                name: web-app-config # name of the ConfigMap
It is worth mentioning that a ConfigMap can be used to store any kind of configuration, not just environment variables. It is not intended for storing secrets, though, as k8s has a separate resource called Secret for that. You can read more about it in the official k8s documentation.
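Just to give you a taste, a minimal Secret could look like the sketch below; the name, key and value are purely illustrative:
apiVersion: v1
kind: Secret
metadata:
  name: web-app-secret # illustrative name
type: Opaque # generic key-value secret
stringData: # plain-text input, stored base64-encoded by Kubernetes
  DB_PASSWORD: 'changeme' # illustrative key and value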
Free tips
Strategies for managing resources
I have created two config files, for the Deployment and Service resources, because that makes sense to me, but you are not limited to this way of working. If you are creating too many resources, you could put everything into a single config file that defines all resources. In such a case, make sure the resource definitions in the file don't run into each other by separating them with three dashes ---. Not two, not four, but three dashes!
apiVersion: apps/v1
kind: Deployment
...
--- # this is a separator between 2 resource definitions
apiVersion: v1
kind: Service
...
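Applying such a combined file (here hypothetically named web-app.yaml) then creates or updates both resources with a single command:
$ kubectl apply -f web-app.yaml
deployment.apps/web-app created
service/web-app created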
Health checks
The pod will restart automatically when the kubelet detects that the container has crashed, but sometimes the container is up and running while the application inside has become unresponsive, slow or keeps throwing errors, for example because one of the processes running in the container got stuck. So how do we make sure that the application is in good condition, and how do we tell Kubernetes to restart it? The solution is already in the Kubernetes core, so there is no need for additional extensions: it is called livenessProbe. It can be added to the specification of any pod; you just define an endpoint that should be pinged and how often. The pod is considered healthy if it responds within the given timeframe and unhealthy if it doesn't.
livenessProbe:
  httpGet:
    path: /healthcheck # this url will be checked
    port: 80
  periodSeconds: 10 # the healthcheck URL will be checked every 10 seconds
  initialDelaySeconds: 30 # wait 30 seconds after the container starts before the first check
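One caveat: the stock nginx image does not serve a /healthcheck path, so with this exact snippet the probe would get a 404, be considered failing, and the pod would restart in a loop; point the path at an endpoint your application actually serves. A failing probe shows up in the pod events, roughly like this:
$ kubectl describe pod web-app-68bc6fd8d-hphdd
...
Events:
  Warning  Unhealthy  Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    Container api failed liveness probe, will be restarted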
Conclusion
The declarative approach is easier to audit and maintain than the imperative approach, because you see all definitions in one place. So if you want to share the way you run your apps with colleagues in your team, it is much easier to read the yaml files than to pass around long commands. The declarative approach can also be combined with the imperative one, so it is not a replacement but rather an enhancement. I hope the examples in this post will help you start using Kubernetes.