Tuesday, 22 February 2022

Rollout and Rollback on Kubernetes

 

Rollback and rolling updates

How to manage rolling updates using a Deployment in Kubernetes?

A Kubernetes Deployment is a resource object that provides declarative updates to applications. It lets you describe an application’s life cycle, such as which image to use for the app, the number of pod replicas, and the way in which they should be updated.

We will cover:

  • Revisit updating and rolling out deployments
  • How to roll back the update using deployment?
  • How to check the rollback and rollout status?

Revisiting Updating Deployments in K8s:

When you want to make any changes to the deployment type workloads, you can do so by changing the specifications defined in .spec.template.

Remember!

A Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e., .spec.template) is modified. Changing the scaling parameter alone will not trigger a rollout, but changing the pod template’s labels or container image will.
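
For example, once a deployment such as the test-deploy we create below exists, you can see the difference:

$ kubectl scale deployment test-deploy --replicas=5              # scaling only, no new revision
$ kubectl set image deployment/test-deploy nginx=nginx:1.16.1    # changes .spec.template, triggers a rollout
$ kubectl rollout history deployment/test-deploy                 # only the image change adds a revision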

Let’s first create a deployment-type workload and deploy it in our cluster.

Pre-requisites:

As a prerequisite, you must have k3s installed. If not, follow the link below to get started with the installation: https://www.devopstreams.com/2022/07/how-to-install-k3s.html

Defining a Deployment, Rolling it out & Managing Rollback:

Defining Deployment Using the imperative command:

Here we use an imperative command to create a deployment named test-deploy, with the Docker image nginx and 3 replicas, using the kubectl CLI as shown below:

$ kubectl create deployment test-deploy --image=nginx --replicas=3

Run the above command in your terminal, then execute:

$ kubectl get deployments

In the output image below you can see that our test-deploy workload is up and running in the cluster.
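
In text form, the output looks roughly like this (the AGE value will differ on your cluster):

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
test-deploy   3/3     3            3           30s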

Now as our deployment workload is up and ready, let’s make some changes to it and try to roll back the same.

Updating the test-deploy deployment:

As discussed earlier, an update is triggered only when the deployment’s pod template changes, for example its labels or container image.

Let’s update the nginx Pods to use the nginx:1.16.1 image instead of the plain nginx image we used earlier to create our deployment.

We can update the image by using the imperative command given below:

$ kubectl set image deployment.v1.apps/test-deploy nginx=nginx:1.16.1

Output:

We can see below that our test-deploy deployment has been updated with a new nginx version.

Let’s use the describe command to see the change in the deployment:

$ kubectl describe deploy test-deploy

Output:

As can be seen in the highlighted output below, the deployment has been updated with the new nginx image:

Image: nginx:1.16.1
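
The same update could also be made declaratively by editing the pod template and re-applying it; a minimal sketch, assuming the manifest is saved locally as test-deploy.yaml (a hypothetical file name):

# test-deploy.yaml (only the relevant part of the pod template shown)
spec:
  template:
    spec:
      containers:
      - name: nginx            # default container name created by kubectl create deployment
        image: nginx:1.16.1    # bump the image tag here

Then re-apply the manifest:

$ kubectl apply -f test-deploy.yaml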

Rolling back the Update in the deployment:

What if the update has broken something and your deployment workload is crashing or unstable? Don’t worry, Kubernetes has a rollback feature in place.

In K8s, by default, all of the Deployment’s rollout history is kept in the system so that you can roll back at any time (you can change this by modifying the revision history limit).
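
The knob behind this is .spec.revisionHistoryLimit on the Deployment; a minimal sketch of where it sits (the default is 10, the value below is just an example):

spec:
  revisionHistoryLimit: 5   # keep only the 5 most recent revisions available for rollback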

Let’s understand how to roll back a deployment with an example.

Earlier we have rolled out an update by changing the nginx image to nginx:1.16.1.

Suppose that while updating the deployment, the developer mistakenly changes the nginx image name to nginx:1.191, which is not a valid nginx version:

$ kubectl set image deployment/test-deploy nginx=nginx:1.191

Now let’s check the rollout status by using the command given below:

$ kubectl rollout status deployment/test-deploy
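
With the broken image, the status command keeps reporting a message of this shape (illustrative; replica counts may differ):

Waiting for deployment "test-deploy" rollout to finish: 1 out of 3 new replicas have been updated...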

We can see from the output that our test-deploy rollout is stuck. So, as a k8s cluster administrator/developer, you need to roll back the update.

We can further investigate the issue by running

$ kubectl get pods
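
The pod list will look something like this (pod name suffixes and ages are illustrative); note that the old ReplicaSet’s pods keep running while the new pod fails to pull its image:

NAME                           READY   STATUS             RESTARTS   AGE
test-deploy-6f6c9ccf7b-x2k9q   0/1     ImagePullBackOff   0          2m
test-deploy-58c47c6d78-7hq4n   1/1     Running            0          15m
test-deploy-58c47c6d78-m8zwp   1/1     Running            0          15m
test-deploy-58c47c6d78-trk5d   1/1     Running            0          15m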

You can see that the test-deploy workload is showing an error: ImagePullBackOff

Now, if we want to roll back the new update to the older version, we can use the rollout undo command.

Rolling Back to an Older Version:

To roll back an existing deployment to any previous version, K8s provides the rollout undo functionality.

Type the following command in your terminal:

$ kubectl rollout undo deployment/test-deploy

The output below clearly shows that the deployment has been rolled back

Let’s go ahead and check the deployment status

$ kubectl rollout status deployment/test-deploy
$ kubectl get deployment test-deploy

We can see that our deployment has now been rolled back and is up and running.

Rolling Back to a specific version:

We can also roll back the deployment to a specific version, since K8s maintains the revision history of the deployment workload.

So let's check the history to find the revision details and then pick the specific revision to roll back to:

$ kubectl rollout history deployment/test-deploy

Output:

We can see that our test-deploy deployment has three revisions: 1, 3, and 4.
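
For reference, the history output has roughly this shape (the CHANGE-CAUSE column stays empty unless a change cause is recorded):

deployment.apps/test-deploy
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>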

Now if we want to roll back to a specific revision, we can use the following command:

$ kubectl rollout undo deployment/test-deploy --to-revision=3

Let’s see the output:

Let’s check the test-deploy workload details: first we will use the describe command, and then look at the revision history.

$ kubectl describe deployment test-deploy
$ kubectl rollout history deployment/test-deploy

The output will look like this:

We can see in the describe output that the deployment has been rolled back to revision 3’s template, and the rollout history now shows revisions 1, 4, and 5 instead of the earlier 1, 3, and 4: rolling back to revision 3 re-records that template as a new revision, 5.




Deploying an Application to a Cluster

Let's try to deploy an application on the cluster using a deployment and a service YAML file.

The command to create the deployment and the service from a YAML file is:

kubectl create -f <filename>

Create the deployment file with vi DemoApp01.yml and paste in the manifest below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
  labels:
    app: app-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-v1
  template:
    metadata:
      labels:
        app: app-v1
    spec:
      affinity:
        nodeAffinity:   # schedule these pods only on amd64 or arm64 nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch   # on newer clusters this label is kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: deploy-images
        image: kellyamadin/d-imags:v1
        ports:
        - containerPort: 8080
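
Apply the manifest and confirm that the deployment and its pods come up (a quick check; the output will vary):

$ kubectl create -f DemoApp01.yml
$ kubectl get deploy,po -l app=app-v1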


Create the service file with the command below and paste in the manifest that follows:

vi ServiceApp01.yml

apiVersion: v1
kind: Service
metadata:
  name: svc1
  labels:
    app: app-v1
spec:
  ports:
  - port: 8080
    nodePort: 32000
    protocol: TCP
  selector:
    app: app-v1
  type: NodePort
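
Create the service the same way and verify that the NodePort (32000) has been assigned:

$ kubectl create -f ServiceApp01.yml
$ kubectl get svc svc1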



Copy the public IP of a cluster node, make sure the NodePort (32000) is open in the security group (SG), and paste <node-public-ip>:32000 into the browser.



You can change the type from NodePort to LoadBalancer by using the service file below:

apiVersion: v1
kind: Service
metadata:
  name: svc1
  labels:
    app: app-v1
spec:
  ports:
  - port: 8080
    nodePort: 32000
    protocol: TCP
  selector:
    app: app-v1
  type: LoadBalancer



Go to <LoadBalancer DNS>:8080 in your browser.
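
If you are unsure of the load balancer DNS name, check the EXTERNAL-IP column of the service (on AWS this is the ELB hostname; it can take a minute or two to populate):

$ kubectl get svc svc1 -o wide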




Pods - Deploying Pods and Exposing Them as a Service

 

  1. Deploying Nginx Container

    kubectl create deployment sample-nginx --image=nginx --replicas=2 --port=80
    kubectl get pods
    kubectl get deployments
  2. Expose the deployment as a service. This will create an ELB in front of those 2 pods and allow us to access them publicly.

    kubectl expose deployment sample-nginx --port=80 --type=LoadBalancer
    kubectl get services -o wide
    
    

    Copy the load balancer DNS name into your browser to access the nginx application (see the quick check below).
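
Once the EXTERNAL-IP is populated you can also test from the terminal (the hostname below is a placeholder; substitute the value shown by kubectl):

$ kubectl get services sample-nginx
$ curl http://<external-load-balancer-dns>    # placeholder; use the EXTERNAL-IP value from the command above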




Pods - Multi-Container Pod - Use Case

 

Example of Multi-Container Pod

Let’s talk about communication between containers in a Pod. Having multiple containers in a single Pod makes it relatively straightforward for them to communicate with each other. They can do this using several different methods.

Use Cases for Multi-Container Pods

The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a primary application. There are some general patterns for using helper processes in Pods:

Sidecar containers help the main container. Some examples include log or data change watchers, monitoring adapters, and so on. A log watcher, for example, can be built once by a different team and reused across different applications. Another example of a sidecar container is a file or data loader that generates data for the main container.

Proxies, bridges, and adapters connect the main container with the external world. For example, Apache HTTP server or nginx can serve static files. It can also act as a reverse proxy to a web application in the main container to log and limit HTTP requests. Another example is a helper container that re-routes requests from the main container to the external world. This makes it possible for the main container to connect to the localhost to access, for example, an external database, but without any service discovery.

Shared volumes in a Kubernetes Pod

In Kubernetes, you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. For most cases, it is sufficient to use a directory on the host that is shared with all containers within a Pod.

Kubernetes Volumes enables data to survive container restarts, but these volumes have the same lifetime as the Pod. That means that the volume (and the data it holds) exists exactly as long as that Pod exists. If that Pod is deleted for any reason, even if an identical replacement is created, the shared Volume is also destroyed and created anew.

A standard use case for a multi-container Pod with a shared Volume is when one container writes logs or other files to the shared directory, and the other container reads from the shared directory. For example, we can create a Pod like so (pods03.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done

In this file (pods03.yaml) a volume named html has been defined. Its type is emptyDir, which means that the volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. The 1st container runs nginx server and has the shared volume mounted to the directory /usr/share/nginx/html. The 2nd container uses the Debian image and has the shared volume mounted to the directory /html. Every second, the 2nd container adds the current date and time into the index.html file, which is located in the shared volume. When the user makes an HTTP request to the Pod, the Nginx server reads this file and transfers it back to the user in response to the request.


kubectl apply -f pods03.yaml
[Captains-Bay]🚩 >  kubectl get po,svc
NAME      READY     STATUS    RESTARTS   AGE
po/mc1    2/2       Running   0          11s

NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.15.240.1   <none>        443/TCP   1h
[Captains-Bay]🚩 >  kubectl describe po mc1
Name:         mc1
Namespace:    default
Node:         gke-k8s-lab1-default-pool-fd9ef5ad-pc18/10.140.0.16
Start Time:   Wed, 08 Jan 2020 14:29:08 +0530
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"mc1","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"1st","v...
              kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container 1st; cpu request for container 2nd
Status:       Running
IP:           10.12.2.6
Containers:
  1st:
    Container ID:   docker://b08eb646f90f981cd36c605bf8fead3ca62178c7863598fd4558cb026ed067dd
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:36b77d8bb27ffca25c7f6f53cadd059aca2747d46fb6ef34064e31727325784e
    Port:           <none>
    State:          Running
      Started:      Wed, 08 Jan 2020 14:29:09 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhgmm (ro)
  2nd:
    Container ID:  docker://63180b4128d477810d6062342f4b8e499de684ffd69ad245c29118e1661eafcb
    Image:         debian
    Image ID:      docker-pullable://debian@sha256:c99ed5d068d4f7ff36c7a6f31810defebecca3a92267fefbe0e0cf2d9639115a
    Port:          <none>
    Command:
      /bin/sh
      -c
    Args:
      while true; do date >> /html/index.html; sleep 1; done
    State:          Running
      Started:      Wed, 08 Jan 2020 14:29:14 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /html from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhgmm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  html:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  default-token-xhgmm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xhgmm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                              Message
  ----    ------     ----  ----                                              -------
  Normal  Scheduled  18s   default-scheduler                                 Successfully assigned default/mc1 to gke-k8s-lab1-default-pool-fd9ef5ad-pc18
  Normal  Pulling    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  pulling image "nginx"
  Normal  Pulled     17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Successfully pulled image "nginx"
  Normal  Created    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Created container
  Normal  Started    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Started container
  Normal  Pulling    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  pulling image "debian"
  Normal  Pulled     13s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Successfully pulled image "debian"
  Normal  Created    12s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Created container
  Normal  Started    12s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Started container
$ kubectl exec mc1 -c 1st -- /bin/cat /usr/share/nginx/html/index.html
...
Wed Jan  8 08:59:14 UTC 2020
Wed Jan  8 08:59:15 UTC 2020
Wed Jan  8 08:59:16 UTC 2020
 
$ kubectl exec mc1 -c 2nd -- /bin/cat /html/index.html
...
Wed Jan  8 08:59:14 UTC 2020
Wed Jan  8 08:59:15 UTC 2020
Wed Jan  8 08:59:16 UTC 2020

Cleaning Up

kubectl delete -f pods03.yaml

Pods - Multi-Container Pods

 




Adding a 2nd container to a Pod

In the microservices architecture, each module should live in its own space and communicate with other modules following a set of rules. But, sometimes we need to deviate a little from this principle. Suppose you have an Nginx web server running and we need to analyze its web logs in real-time. The logs we need to parse are obtained from GET requests to the web server. The developers created a log watcher application that will do this job and they built a container for it. In typical conditions, you’d have a pod for Nginx and another for the log watcher. However, we need to eliminate any network latency so that the watcher can analyze logs the moment they are available. A solution for this is to place both containers on the same pod.

Having both containers on the same pod allows them to communicate through the loopback interface (ifconfig lo) as if they were two processes running on the same host. They also share the same storage volume.

Let us see how a pod can host more than one container. Let’s take a look at the multipod.yaml file. It contains the following lines:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: webwatcher
    image: afakharany/watcher:latest

Run the following command:

$ kubectl apply -f multipod.yaml
$ kubectl get po -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE                                                NOMINATED NODE   READINESS GATES
webserver   0/2     ContainerCreating   0          13s   <none>   gke-standard-cluster-1-default-pool-78257330-5hs8   <none>           <none>
$ kubectl get po,svc,deploy
NAME            READY   STATUS    RESTARTS   AGE
pod/webserver   2/2     Running   0          3m6s
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.12.0.1    <none>        443/TCP   107m
$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP         NODE                                                NOMINATED NODE   READINESS GATES
webserver   2/2     Running   0          3m37s   10.8.0.5   gke-standard-cluster-1-default-pool-78257330-5hs8   <none>           <none>

How to verify 2 containers are running inside a Pod?

$ kubectl describe po
Containers:
  webserver:
    Container ID:   docker://0564fcb88f7c329610e7da24cba9de6555c0183814cf517e55d2816c6539b829
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:36b77d8bb27ffca25c7f6f53cadd059aca2747d46fb6ef34064e31727325784e
    Port:           80/TCP
    State:          Running
      Started:      Wed, 08 Jan 2020 13:21:57 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhgmm (ro)
  webwatcher:
    Container ID:   docker://4cebbb220f7f9695f4d6492509e58152ba661f3ab8f4b5d0a7adec6c61bdde26
    Image:          afakharany/watcher:latest
    Image ID:       docker-pullable://afakharany/watcher@sha256:43d1b12bb4ce6e549e85447678a28a8e7b9d4fc398938a6f3e57d2908a9b7d80
    Port:           <none>
    State:          Running
      Started:      Wed, 08 Jan 2020 13:22:26 +0530
    Ready:          True
    Restart Count:  0
    Requests:

Since we have two containers in a pod, we will need to use the -c option with kubectl when we need to address a specific container. For example:

$ kubectl exec -it webserver -c webwatcher -- /bin/bash

root@webserver:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.8.0.5        webserver

Please exit from the shell (/bin/bash) session.

root@webserver:/# exit
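
Because both containers share the Pod's network namespace, the webwatcher container can also reach nginx over the loopback interface; a quick check, assuming the watcher image ships curl (any HTTP client inside the container works the same way):

$ kubectl exec webserver -c webwatcher -- curl -s http://localhost:80    # assumes curl exists in the watcher image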

Cleaning up

kubectl delete -f multipod.yaml

Pods - Creating Pods Using a Command

Creating a Pod Using a Command

kubectl run

Run a particular image on the cluster.

Synopsis

Create and run a particular image, possibly replicated. Creates a deployment or job to manage the created container(s).

kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] -- [COMMAND] [args...]

Deploying Nginx pods on Kubernetes


Note: pod Shortcode = po

  1. Deploying Nginx Container

    kubectl run sample-nginx --image=nginx --replicas=2 --port=80
    kubectl get pods
    kubectl get deployments
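
Note that on current kubectl versions (v1.18 and later) kubectl run creates a single Pod and no longer accepts --replicas; the equivalent commands today would look roughly like this:

    kubectl run sample-nginx --image=nginx --port=80                              # single Pod
    kubectl create deployment sample-nginx --image=nginx --replicas=2 --port=80   # replicated workload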
