
Tuesday, 22 February 2022

What is Kubernetes - New

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available. The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014; it combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

Going back in time

Let's take a look at why Kubernetes is so useful by going back in time.



Traditional deployment era: Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.

Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
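To make the distinction concrete, here is a minimal sketch (assuming Docker is installed and the public alpine image can be pulled) showing that a container carries its own filesystem while sharing the host's kernel:

$ docker run --rm alpine ls /        # the container sees its own root filesystem
$ docker run --rm alpine uname -r    # yet it reports the host machine's kernel version

The second command prints the host's kernel release, which is exactly what "relaxed isolation" means: the OS kernel is shared, so containers start fast and stay lightweight.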

Containers have become popular because they provide extra benefits, such as:

  • Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
  • Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
  • Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
  • Observability: surfaces not only OS-level information and metrics, but also application health and other signals.
  • Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
  • Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
  • Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  • Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine.
  • Resource isolation: predictable application performance.
  • Resource utilization: high efficiency and density.

Why you need Kubernetes and what it can do


Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?

That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources to the new container.
  • Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
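As a quick sketch of a few of these features in action (assuming a working cluster and a configured kubectl; the deployment name web is just an example), the commands below demonstrate load-balanced exposure, scaling, and self-healing:

$ kubectl create deployment web --image=nginx --replicas=2   # run two nginx containers
$ kubectl expose deployment web --port=80                    # create a Service: DNS name + load balancing across pods
$ kubectl scale deployment web --replicas=4                  # scale out with one command
$ kubectl delete pod -l app=web --wait=false                 # kill the pods...
$ kubectl get pods -l app=web                                # ...and watch Kubernetes self-heal by recreating them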

The architecture of Kubernetes (Master/Worker Node)


What happens in the Kubernetes control plane?

Control plane

Let’s begin in the nerve center of our Kubernetes cluster: The control plane. Here we find the Kubernetes components that control the cluster, along with data about the cluster’s state and configuration. These core Kubernetes components handle the important work of making sure your containers are running in sufficient numbers and with the necessary resources. 

The control plane is in constant contact with your compute machines. You’ve configured your cluster to run a certain way. The control plane makes sure it does.

kube-apiserver

Need to interact with your Kubernetes cluster? Talk to the API. The Kubernetes API is the front end of the Kubernetes control plane, handling internal and external requests. The API server determines if a request is valid and, if it is, processes it. You can access the API through REST calls, through the kubectl command-line interface, or through other command-line tools such as kubeadm.
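For example (a sketch assuming kubectl is already configured for a cluster), you can reach the API server's REST endpoints through kubectl proxy:

$ kubectl proxy --port=8001 &                                 # open an authenticated local tunnel to the API server
$ curl http://localhost:8001/version                          # ask the API server for its version over plain REST
$ curl http://localhost:8001/api/v1/namespaces/default/pods   # list pods in the default namespace via the API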

kube-scheduler

Is your cluster healthy? If new containers are needed, where will they fit? These are the concerns of the Kubernetes scheduler.

The scheduler considers the resource needs of a pod, such as CPU or memory, along with the health of the cluster. Then it schedules the pod to an appropriate compute node.
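One hedged way to watch the scheduler at work (assuming a pod named my-pod exists in the default namespace) is to read the pod's events, where the scheduling decision is recorded:

$ kubectl describe pod my-pod                             # the Events section shows: Successfully assigned default/my-pod to <node>
$ kubectl get events --field-selector reason=Scheduled    # recent scheduling decisions in the namespace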

kube-controller-manager

Controllers take care of actually running the cluster, and the Kubernetes controller-manager contains several controller functions in one. One controller consults the scheduler and makes sure the correct number of pods is running. If a pod goes down, another controller notices and responds. A controller connects services to pods, so requests go to the right endpoints. And there are controllers for creating accounts and API access tokens.

etcd

Configuration data and information about the state of the cluster lives in etcd, a key-value store database. Fault-tolerant and distributed, etcd is designed to be the ultimate source of truth about your cluster.

What happens in a Kubernetes node?

Nodes

A Kubernetes cluster needs at least one compute node, but will normally have many. Pods are scheduled and orchestrated to run on nodes. Need to scale up the capacity of your cluster? Add more nodes.

Pods

A pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application. Each pod is made up of a container or a series of tightly coupled containers, along with options that govern how the containers are run. Pods can be connected to persistent storage in order to run stateful applications.
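As a small sketch (assuming kubectl access to a cluster; with recent kubectl versions, kubectl run creates a bare pod), you can create and inspect a single-container pod imperatively:

$ kubectl run my-nginx --image=nginx    # create a pod named my-nginx with one nginx container
$ kubectl get pod my-nginx -o wide      # see its status, IP address, and the node it landed on
$ kubectl delete pod my-nginx           # bare pods are disposable; nothing recreates this one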

Container runtime engine

To run the containers, each compute node has a container runtime engine. Docker is one example, but Kubernetes supports other Open Container Initiative-compliant runtimes as well, such as containerd and CRI-O.

kubelet

Each compute node contains a kubelet, a tiny application that communicates with the control plane. The kubelet makes sure containers are running in a pod. When the control plane needs something to happen in a node, the kubelet executes the action.

kube-proxy

Each compute node also contains kube-proxy, a network proxy for facilitating Kubernetes networking services. The kube-proxy handles network communications inside or outside of your cluster—relying either on your operating system’s packet filtering layer, or forwarding the traffic itself.

How to set up EKS (Elastic Kubernetes Service) on AWS


Commands to follow:

Prerequisites:

Create an EC2 instance with the Amazon Linux 2 AMI, and create an IAM role with the required permissions (Security Credentials ----> Roles), giving the role a unique name.



Attach the role you created above to the EC2 instance:

Click on Actions ----> Security ----> Modify IAM role and select the role you created earlier.



Command to Install Amazon EKS 

pip:

   curl -O https://bootstrap.pypa.io/get-pip.py

   python3 get-pip.py --user

(or install it from the distribution repositories instead: sudo yum install python3-pip)

   ls -a ~

   export PATH=~/.local/bin:$PATH

   source ~/.bash_profile

   pip3 --version

AWS CLI

   pip3 install awscli --upgrade --user

   aws --version

   aws configure

When prompted, enter your default region (e.g. us-east-1) and output format (json).
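The prompts will look roughly like this; the access key fields can be left blank here because the instance authenticates through the IAM role attached earlier:

AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-east-1
Default output format [None]: json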


eksctl

 curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

sudo mv /tmp/eksctl /usr/local/bin

  eksctl version

If you get an error like "command not found", run the command below:

sudo cp -rf /usr/local/bin/eksctl /usr/bin


kubectl (use a kubectl release within one minor version of your cluster's Kubernetes version)

 curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl

 chmod +x ./kubectl

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

 kubectl version --short --client

aws-iam-authenticator

   curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator

    chmod +x ./aws-iam-authenticator

   mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin

   echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

   aws-iam-authenticator help

Cluster creation

eksctl create cluster --name EKSDemo003 --version 1.18 --region us-east-2 --nodegroup-name standard-workers --node-type t2.medium --nodes 3 --nodes-min 1 --nodes-max 3 --managed

It will take around 10-15 minutes for the cluster to come up.

Validate the cluster with the following command:

kubectl get nodes

When you are done with the cluster, delete it to avoid unnecessary charges:

eksctl delete cluster --region us-east-2 --name EKSDemo003


Rollout and Rollback on Kubernetes

 

Rollback and rolling updates

How do you manage rolling updates using a Deployment in Kubernetes?

A Kubernetes Deployment is a resource object in Kubernetes that provides declarative updates to applications. It allows you to describe an application's life cycle, such as which images to use for the app, the number of pod replicas, and the way in which they should be updated.

We will cover:

  • Revisit updating and rolling out deployments
  • How to roll back the update using deployment?
  • How to check the rollback and rollout status?

Revisiting Updating Deployments in K8s:

When you want to make any changes to the deployment type workloads, you can do so by changing the specifications defined in .spec.template.

Remember!

A Deployment's rollout is triggered if and only if the Deployment's pod template (i.e., .spec.template) is modified. Changing the scaling parameters will not trigger a rollout, but changing the deployment's labels or container image information will trigger a rollout to apply the update.
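A quick sketch of this rule (assuming a deployment named web whose container is named nginx already exists in your cluster):

$ kubectl scale deployment web --replicas=5              # scaling only: no new rollout revision
$ kubectl set image deployment/web nginx=nginx:1.16.1    # pod template change: triggers a rollout
$ kubectl rollout history deployment/web                 # only the image change added a revision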

Let’s first create a deployment-type workload and deploy it in our cluster.

Prerequisites:

As a prerequisite, you must have k3s installed; if not, follow the link below to get started with the installation: https://www.devopstreams.com/2022/07/how-to-install-k3s.html

Defining a Deployment, Rolling It Out & Managing Rollback:

Defining a Deployment using an imperative command:

Here we use an imperative command via the kubectl CLI to create a deployment named test-deploy with the Docker image nginx and 3 replicas, as shown below:

$ kubectl create deployment test-deploy --image=nginx --replicas=3

After running the above command in your terminal, execute:

$ kubectl get deployments

The output should show that our test-deploy workload is up and running in the cluster.
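Assuming all three replicas come up cleanly, the output will look something like this (the AGE value will vary):

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
test-deploy   3/3     3            3           25s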

Now that our deployment workload is up and ready, let's make some changes to it and then try to roll them back.

Updating the test-deploy deployment:

As discussed earlier, an update is triggered against a given deployment only when its template changes, for example its labels or container images.

Let's update the nginx Pods to use the nginx:1.16.1 image instead of the image we used earlier to create our deployment.

We can update the image using the imperative command given below:

$ kubectl set image deployment.v1.apps/test-deploy nginx=nginx:1.16.1

Output:

We can see below that our test-deploy deployment has been updated with a new nginx version.
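For reference, the set image command itself typically prints a one-line confirmation like:

deployment.apps/test-deploy image updated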

Let’s use describe command to see the change in the deployment:

$ kubectl describe deploy test-deploy

Output:

As can be seen in the output below, the deployment has been updated with the new nginx image:

Image: nginx:1.16.1

Rolling back the Update in the deployment:

What if the update has created some mess and your deployment workload is crashing and unstable? Don't worry: Kubernetes has a rollback feature in place.

In K8s by default, all of the Deployment’s rollout history is kept in the system so that you can roll back anytime you want (you can change that by modifying the revision history limit).

Let's understand how to roll back a deployment with an example.

Earlier we rolled out an update by changing the nginx image to nginx:1.16.1.

Now suppose that while updating the deployment, a developer mistakenly changes the nginx image name to nginx:1.191, which is not a valid nginx version:

$ kubectl set image deployment/test-deploy nginx=nginx:1.191

Now let's check the rollout status using the command given below:

$ kubectl rollout status deployment/test-deploy
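A stuck rollout typically hangs on a message like the following (the replica counts will vary):

Waiting for deployment "test-deploy" rollout to finish: 1 out of 3 new replicas have been updated...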

We can see from the output that our test-deploy rollout is stuck, so as a k8s cluster administrator/developer you need to roll back the update.

We can further investigate the issue by running

$ kubectl get pods
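The pod listing will look roughly like this (the pod name hashes are illustrative); the old ReplicaSet's pods keep running while the new pod fails to pull its image:

NAME                           READY   STATUS             RESTARTS   AGE
test-deploy-59b7f9d6c8-x2k4p   0/1     ImagePullBackOff   0          2m
test-deploy-7c6f7d9b5d-abcde   1/1     Running            0          15m
test-deploy-7c6f7d9b5d-fghij   1/1     Running            0          15m
test-deploy-7c6f7d9b5d-klmno   1/1     Running            0          15m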

You can see that the test-deploy workload is showing an error: ImagePullBackOff

Now, if we want to roll back the new update to the older version, we can use the rollout undo command.

Rolling Back to an Older Version:

To roll back the existing deployment to any previous version, k8s provides a rollout undo functionality

Type the following command in your terminal:

$ kubectl rollout undo deployment/test-deploy

The output confirms that the deployment has been rolled back.
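With recent kubectl versions the confirmation looks like:

deployment.apps/test-deploy rolled back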

Let’s go ahead and check the deployment status

$ kubectl rollout status deployment/test-deploy

$ kubectl get deployment test-deploy

We can see that our deployment has now been rolled back and is up and running.

Rolling Back to a specific version:

We can also roll back the deployment to a specific version, as K8s maintains the revision history of the deployment workload.

So let's check the history to find the revision details and then pick the specific revision to roll back to:

$ kubectl rollout history deployment/test-deploy

Output:
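Assuming the steps above, the history will look roughly like this:

deployment.apps/test-deploy
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>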

We can see that our test-deploy deployment has three revisions: 1, 3, and 4. (Revision 2 is missing because the earlier rollout undo re-recorded it as revision 4.)

Now, if we want to roll back to a specific revision, we can use the following command:

$ kubectl rollout undo deployment/test-deploy --to-revision=3

Let's see the output:

Let's check the test-deploy workload details. First we will use the describe command, and then view the revision history details:

$ kubectl describe deployment test-deploy

$ kubectl rollout history deployment/test-deploy

The output will look like this:

We can see in the describe command output that our deployment is now running the revision 3 image, and our rollout history shows the revisions as 1, 4, 5, which earlier were 1, 3, 4: rolling back to revision 3 re-records it as the new revision 5.



