Wednesday, 3 November 2021

Automate Kubernetes EKS Cluster with Terraform

Amazon EKS is a managed service that lets you run Kubernetes on AWS without having to install, operate, and maintain your own Kubernetes control plane. Building an EKS cluster with Terraform allows you to create resources quickly, efficiently, and with an automated approach.

In this lab, you will learn how to build and run a Terraform configuration that creates an EKS cluster, step by step.

Note: Before you can start using kubectl, you must install the AWS CLI and kubectl on your computer.
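
For example, you can quickly confirm both tools are available from a terminal (the version numbers will differ on your machine):

    # verify the AWS CLI is installed
    aws --version

    # verify kubectl is installed (client version only; no cluster is needed yet)
    kubectl version --client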

Prerequisites
  •  Create an IAM User 
  •  Configure Terraform Backend with S3 Storage
  •  Setting up CD Pipeline for EKS Cluster
  •  Create Terraform Workspace for EKS Cluster
  •  Run Pipeline Job
  •  Configure AWS 
  •  Map IAM User to EKS using ClusterRole & ClusterRoleBinding
  •  Verify EKS Cluster is Active and Nodes are Visible

  • Create an IAM User 
  • Go to AWS Console
  • Search for IAM as shown below



  • Select Users and create a user called terraform-user with console access and the AdministratorAccess policy attached. Be sure to download the credentials once the user has been created, as they are required to log in to the EKS cluster.



  • Navigate to Policies, select Create policy, and name it eks-assume


  • Select JSON and copy the code below to create the policy (Note: be sure to replace AWS-ACCOUNT-NUMBER with your own account number). A CLI alternative is sketched after the policy document:

    {

        "Version": "2012-10-17",

        "Statement": {

            "Effect": "Allow",

            "Action": "sts:AssumeRole",

            "Resource": "arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster"

        }

    }
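
Alternatively, the same policy can be created from the command line. This is a minimal sketch that assumes you have saved the JSON above to a local file named eks-assume.json (a hypothetical file name):

    # create the eks-assume customer-managed policy from the saved JSON document
    aws iam create-policy \
      --policy-name eks-assume \
      --policy-document file://eks-assume.json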




  • Navigate to Policies again and create a second policy called eks-permission

  • Select JSON and copy the code below to create the policy:

    {

        "Version": "2012-10-17",

        "Statement": [

            {

                "Effect": "Allow",

                "Action": [

                    "eks:DescribeNodegroup",

                    "eks:ListNodegroups",

                    "eks:DescribeCluster",

                    "eks:ListClusters",

                    "eks:AccessKubernetesApi",

                    "ssm:GetParameter",

                    "eks:ListUpdates",

                    "eks:ListFargateProfiles"

                ],

                "Resource": "*"

            }

        ]

    }


  • Create a group called eksgroup, attach the eks-permission and eks-assume policies to it, and add terraform-user to the group

  • Verify the group and policies are attached to terraform-user by navigating to Users.
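
If you prefer the CLI over the console, the group setup can be sketched roughly as follows (the policy ARNs are assumed to follow the standard customer-managed pattern; copy the real ARNs from the IAM console if they differ):

    # create the group and attach the two customer-managed policies
    aws iam create-group --group-name eksgroup
    aws iam attach-group-policy --group-name eksgroup \
      --policy-arn arn:aws:iam::AWS-ACCOUNT-NUMBER:policy/eks-assume
    aws iam attach-group-policy --group-name eksgroup \
      --policy-arn arn:aws:iam::AWS-ACCOUNT-NUMBER:policy/eks-permission

    # add terraform-user to the group and confirm the attachments
    aws iam add-user-to-group --group-name eksgroup --user-name terraform-user
    aws iam list-attached-group-policies --group-name eksgroup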


Configure Terraform Backend with S3 Storage
  • Create an S3 bucket in AWS to configure the backend and store the Terraform state file. (Name the S3 bucket whatever you prefer.)
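
A minimal CLI sketch, using the S3-BUCKET-NAME placeholder from provider.tf and the us-east-2 region used later in this lab:

    # create the state bucket (bucket names must be globally unique)
    aws s3api create-bucket \
      --bucket S3-BUCKET-NAME \
      --region us-east-2 \
      --create-bucket-configuration LocationConstraint=us-east-2

    # optional: enable versioning so earlier state files can be recovered
    aws s3api put-bucket-versioning \
      --bucket S3-BUCKET-NAME \
      --versioning-configuration Status=Enabled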


Setting up CD Pipeline for EKS Cluster
  • Go to Jenkins > New Item. Enter eks-pipeline in the name field > Choose Pipeline > Click OK


  • Select Configure after creation.
  • Go to Build Triggers and enable Trigger builds remotely.
  • Enter tf_token as Authentication Token

 

Bitbucket Changes
    • Create a new Bitbucket Repo and call it eks-pipeline
    • Go to Repository Settings after creation and select Webhooks
    • Click Add Webhooks
    • Enter tf_token as the Title
    • Copy and paste the url as shown below
              http://JENKINS_URL:8080/job/eks-pipeline/buildWithParameters?token=tf_token

  • The status should be Active
  • Enable skip certificate verification
  • Under Triggers, select Repository push
  • Go back to Jenkins and select Configure
  • Scroll down to Pipeline and click on the drop down to select Pipeline Script From SCM
  • Enter the credentials for Bitbucket, leave master as the default branch, and make sure the script path is Jenkinsfile.
  • Apply and Save.
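
To confirm the remote trigger and webhook URL work end to end, you can call the trigger URL directly. A hedged example (depending on your Jenkins security settings you may also need a user API token or CSRF crumb):

    # manually fire the remote build trigger configured above
    curl -X POST "http://JENKINS_URL:8080/job/eks-pipeline/buildWithParameters?token=tf_token"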


Create Terraform Workspace for EKS Pipeline

  • Open File Explorer, navigate to the Desktop, and create a folder called my-eks-cluster

  • Once the folder has been created, open Visual Studio Code and add the folder to your workspace







  • Open a new terminal
  • Run the following command before cloning the repo: git init
  • Navigate to the eks-pipeline repo in Bitbucket
  • Clone the repo with SSH or HTTPS
  • Make sure to cd into eks-pipeline and create the new files in that folder
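
A quick sketch of those terminal steps (the clone URL is a placeholder; copy the real SSH or HTTPS URL from Bitbucket):

    # initialise git, then clone the empty eks-pipeline repository
    git init
    git clone git@bitbucket.org:YOUR-WORKSPACE/eks-pipeline.git

    # work inside the repository folder so the new files are tracked
    cd eks-pipeline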

 
  • Create a new file eks-asg.tf and copy the code below

resource "aws_eks_cluster" "tf_eks" {

  name            = local.cluster_name

  enabled_cluster_log_types = ["authenticator","api", "controllerManager", "scheduler"]

  role_arn        = aws_iam_role.tf-eks-master.arn

  version         = var.kube_version


  vpc_config {

    security_group_ids = [aws_security_group.eks-master-sg.id]

    subnet_ids         = data.aws_subnet_ids.public.ids

  }


  timeouts {

    create = var.cluster_create_timeout

    delete = var.cluster_delete_timeout

  }  


  depends_on = [

    aws_iam_role_policy_attachment.tf-cluster-AmazonEKSClusterPolicy,

    aws_iam_role_policy_attachment.tf-cluster-AmazonEKSServicePolicy,

  ]

  

  tags = local.common_tags

}


########################################################################################

# Setup AutoScaling Group for worker nodes

########################################################################################


locals {

  tf-eks-node-userdata = <<USERDATA

#!/bin/bash

set -o xtrace

/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.tf_eks.endpoint}' --b64-cluster-ca '${aws_eks_cluster.tf_eks.certificate_authority.0.data}' '${local.cluster_name}'

USERDATA

}


resource "aws_launch_configuration" "config" {

  associate_public_ip_address = true

  iam_instance_profile        = aws_iam_instance_profile.node.name

  image_id                    = data.aws_ami.eks-worker.id

  instance_type               = var.instance_type

  name_prefix                 = "my-eks-cluster"

  security_groups             = [aws_security_group.eks-node-sg.id, aws_security_group.worker_ssh.id]

  user_data_base64            = base64encode(local.tf-eks-node-userdata)

  key_name                    = var.keypair-name


  lifecycle {

    create_before_destroy = true

  }

  ebs_optimized           = true

  root_block_device {

    volume_size           = 100

    delete_on_termination = true

  }

}


resource "aws_autoscaling_group" "asg" {

  desired_capacity     = 2

  launch_configuration = aws_launch_configuration.config.id

  max_size             = 2

  min_size             = 2

  name                 = local.cluster_name

  vpc_zone_identifier  = data.aws_subnet_ids.public.ids


  tag {

    key                 = "eks-worker-nodes"

    value               = local.cluster_name

    propagate_at_launch = true

  }


  tag {

    key                 = "kubernetes.io/cluster/${aws_eks_cluster.tf_eks.name}"

    value               = "owned"

    propagate_at_launch = true

  }

}


  • Create a new file iam.tf and copy the code below

# Setup for IAM role needed to setup an EKS clusters

resource "aws_iam_role" "tf-eks-master" {

  name = "terraform-eks-cluster"


  assume_role_policy = <<POLICY

{

  "Version": "2012-10-17",

  "Statement": [

    {

      "Effect": "Allow",

      "Principal": {

        "Service": "eks.amazonaws.com",

        "AWS": "arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user"

      },

      "Action": "sts:AssumeRole"

    }

  ]

}

POLICY

  lifecycle {

    create_before_destroy = true

  }

}


resource "aws_iam_role_policy_attachment" "tf-cluster-AmazonEKSClusterPolicy" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"

  role       = aws_iam_role.tf-eks-master.name

}


resource "aws_iam_role_policy_attachment" "tf-cluster-AmazonEKSServicePolicy" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"

  role       = aws_iam_role.tf-eks-master.name

}


resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNode" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

  role       = aws_iam_role.tf-eks-master.name

}


########################################################################################

# Setup IAM role & instance profile for worker nodes


resource "aws_iam_role" "tf-eks-node" {

  name = "terraform-eks-tf-eks-node"


  assume_role_policy = <<POLICY

{

  "Version": "2012-10-17",

  "Statement": [

    {

      "Effect": "Allow",

      "Principal": {

        "Service": "ec2.amazonaws.com"

      },

      "Action": "sts:AssumeRole"

    }

  ]

}

POLICY

}


resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNodePolicy" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

  role       = aws_iam_role.tf-eks-node.name

}


resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKS_CNI_Policy" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

  role       = aws_iam_role.tf-eks-node.name

}


resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEC2ContainerRegistryReadOnly" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

  role       = aws_iam_role.tf-eks-node.name

}


resource "aws_iam_instance_profile" "node" {

  name = "terraform-eks-node"

  role = aws_iam_role.tf-eks-node.name

}


  • Create a new file kube.tf and copy the code below

########################################################################################

# Setup provider for kubernetes

# ---------------------------------------------------------------------------------------

# Get an authentication token to communicate with the EKS cluster.

# By default (before other roles are added to the Auth ConfigMap), you can authenticate to EKS cluster only by assuming the role that created the cluster.

# `aws_eks_cluster_auth` uses IAM credentials from the AWS provider to generate a temporary token.

# If the AWS provider assumes an IAM role, `aws_eks_cluster_auth` will use the same IAM role to get the auth token.

# https://www.terraform.io/docs/providers/aws/d/eks_cluster_auth.html


data "aws_eks_cluster_auth" "aws_iam_authenticator" {

  name = "${aws_eks_cluster.tf_eks.name}"

}


data "aws_iam_user" "terraform_user" {

  user_name = "terraform-user"

}


locals {

  # roles to allow kubernetes access via cli and allow ec2 nodes to join eks cluster

  configmap_roles = [{

    rolearn  = "${data.aws_iam_user.terraform_user.arn}"

    username = "{{SessionName}}"

    groups   = ["system:masters"]

  },

  {

    rolearn  =  "${aws_iam_role.tf-eks-node.arn}"

    username = "system:node:{{EC2PrivateDNSName}}"

    groups   = ["system:bootstrappers","system:nodes"]

  },

    {

    rolearn  = "${aws_iam_role.tf-eks-master.arn}"

    username = "{{SessionName}}"

    groups   = ["system:masters"]

  },]

}


# Allow worker nodes to join cluster via config map

resource "kubernetes_config_map" "aws_auth" {

  metadata {

    name = "aws-auth"

    namespace = "kube-system"

  }

 data = {

    mapRoles = yamlencode(local.configmap_roles)

  }

}




locals {

  kubeconfig = <<KUBECONFIG

apiVersion: v1

clusters:

- cluster:

    server: ${aws_eks_cluster.tf_eks.endpoint}

    certificate-authority-data: ${aws_eks_cluster.tf_eks.certificate_authority.0.data}

  name: kubernetes

contexts:

- context:

    cluster: kubernetes

    user: aws

  name: aws

current-context: aws

kind: Config

preferences: {}

users:

- name: aws

  user:

    exec:

      apiVersion: client.authentication.k8s.io/v1alpha1

      command: aws-iam-authenticator

      args:

        - "token"

        - "-i"

        - "${aws_eks_cluster.tf_eks.name}"

KUBECONFIG

}



  • Create a new file output.tf and copy the code below

output "eks_kubeconfig" {

  value = "${local.kubeconfig}"

  depends_on = [

    aws_eks_cluster.tf_eks

  ]

}
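
After the cluster has been applied, this output can be used to write a kubeconfig locally. A minimal sketch, run from a workspace initialised against the same S3 backend (the target path is an assumption; on Terraform 0.14+ use -raw so the string is not wrapped in quotes):

    # render the generated kubeconfig and point kubectl at it
    terraform output -raw eks_kubeconfig > ~/.kube/eks-config
    export KUBECONFIG=~/.kube/eks-config
    kubectl get nodes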


  • Create a new file provider.tf and copy the code below

terraform {

backend "s3" {

      bucket = "S3-BUCKET-NAME"

      key    = "eks/terraform.tfstate"

      region = "us-east-2"

   }

}


provider "aws" {

    region     = var.region

    version    = "~> 2.0"

 }


provider "kubernetes" {

  host                      = aws_eks_cluster.tf_eks.endpoint

  cluster_ca_certificate    = base64decode(aws_eks_cluster.tf_eks.certificate_authority.0.data)

  token                     = data.aws_eks_cluster_auth.aws_iam_authenticator.token

}


  • Create a new file sg-eks.tf and copy the code below

# # #SG to control access to worker nodes

resource "aws_security_group" "eks-master-sg" {

    name        = "terraform-eks-cluster"

    description = "Cluster communication with worker nodes"

    vpc_id      = var.vpc_id


    egress {

        from_port   = 0

        to_port     = 0

        protocol    = "-1"

        cidr_blocks = ["0.0.0.0/0"]

    }

    

    tags = merge(

    local.common_tags,

    map(

      "Name","eks-cluster",

      "kubernetes.io/cluster/${local.cluster_name}","owned"

    )

  )

}


resource "aws_security_group" "eks-node-sg" {

        name        = "terraform-eks-node"

        description = "Security group for all nodes in the cluster"

        vpc_id      = var.vpc_id


        egress {

            from_port   = 0

            to_port     = 0

            protocol    = "-1"

            cidr_blocks = ["0.0.0.0/0"]

        }


        tags = merge(

    local.common_tags,

    map(

      "Name","eks-worker-node",

      "kubernetes.io/cluster/${aws_eks_cluster.tf_eks.name}","owned"

    )

  )

}


resource "aws_security_group" "worker_ssh" {

  name_prefix = "worker_ssh"

  vpc_id      = var.vpc_id

  egress {

    from_port   = 0

    to_port     = 0

    protocol    = "-1"

    cidr_blocks = ["0.0.0.0/0"]

  }

  ingress {

    from_port = 22

    to_port   = 22

    protocol  = "tcp"


    cidr_blocks = ["0.0.0.0/0"]

  }

  tags = merge(

    local.common_tags,

    map(

      "Name","worker_ssh",

    )

  )

}


  • Create a new file sg-rules-eks.tf and copy the code below


# Allow inbound traffic from your local workstation external IP

# to the Kubernetes. You will need to replace A.B.C.D below with

# your real IP. Services like icanhazip.com can help you find this.

resource "aws_security_group_rule" "tf-eks-cluster-ingress-workstation-https" {

  cidr_blocks       = ["0.0.0.0/0"]

  description       = "Allow workstation to communicate with the cluster API Server"

  from_port         = 443

  protocol          = "tcp"

  security_group_id = aws_security_group.eks-master-sg.id

  to_port           = 443

  type              = "ingress"

}


########################################################################################

# Setup worker node security group


resource "aws_security_group_rule" "tf-eks-node-ingress-self" {

  description              = "Allow node to communicate with each other"

  from_port                = 0

  protocol                 = "-1"

  security_group_id        = aws_security_group.eks-node-sg.id

  source_security_group_id = aws_security_group.eks-node-sg.id

  to_port                  = 65535

  type                     = "ingress"

}


resource "aws_security_group_rule" "tf-eks-node-ingress-cluster" {

  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"

  from_port                = 1025

  protocol                 = "tcp"

  security_group_id        = aws_security_group.eks-node-sg.id

  source_security_group_id = aws_security_group.eks-master-sg.id

  to_port                  = 65535

  type                     = "ingress"

}


# allow worker nodes to access EKS master

resource "aws_security_group_rule" "tf-eks-cluster-ingress-node-https" {

  description              = "Allow pods to communicate with the cluster API Server"

  from_port                = 443

  protocol                 = "tcp"

  security_group_id        = aws_security_group.eks-node-sg.id

  source_security_group_id = aws_security_group.eks-master-sg.id

  to_port                  = 443

  type                     = "ingress"

}


resource "aws_security_group_rule" "tf-eks-node-ingress-master" {

  description              = "Allow cluster control to receive communication from the worker Kubelets"

  from_port                = 443

  protocol                 = "tcp"

  security_group_id        = aws_security_group.eks-master-sg.id

  source_security_group_id = aws_security_group.eks-node-sg.id

  to_port                  = 443

  type                     = "ingress"

}


  • Create a new file variables.tf and copy the code below

# Setup data source to get amazon-provided AMI for EKS nodes

data "aws_ami" "eks-worker" {

  filter {

    name   = "name"

    values = ["amazon-eks-node-1.21-*"]

  }


  most_recent = true

  owners      = ["602401143452"] # Amazon EKS AMI Account ID

}



data "aws_subnet_ids" "public" {

  vpc_id = var.vpc_id

  

  filter {

    name   = "tag:Name"

    values = ["subnet-public-*"]

  }

}


variable "region" {

  type        = string

  default = "us-east-2"


}


variable "cluster_create_timeout" {

  description = "Timeout value when creating the EKS cluster."

  type        = string

  default     = "30m"

}


variable "cluster_delete_timeout" {

  description = "Timeout value when deleting the EKS cluster."

  type        = string

  default     = "15m"

}


variable "vpc_id" {

  type = string

  default = "PASTE-VPC-ID-HERE"

}


variable "keypair-name" {

  type = string

  default = "KEY-NAME"

}


variable "creator" {

  description = "Creator of deployed servers"

  type        = string

  default     = "YOUR-NAME"

}


variable "instance_type" {}


variable "env" {}


## Application/workspace specific inputs

variable "app" {

  description = "Name of Application"

  type        = string

  default     = "my-eks"

}


variable "kube_version" {

  type        = string

  description = "Kubernetes version for eks"

}



## Tagging naming convention

locals {

  common_tags = {

  env = var.env,

  creator  = var.creator,

  app = var.app

  }

  cluster_name = "${var.app}-${var.env}"

}
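
The Jenkinsfile below supplies instance_type, env, and kube_version through TF_VAR_* environment variables. If you want to run a plan locally outside the pipeline, a rough equivalent would be (assuming your AWS credentials are already configured):

    # export the same variables the pipeline sets, then plan locally
    export TF_VAR_env=dev
    export TF_VAR_instance_type=m4.large
    export TF_VAR_kube_version=1.21

    terraform init
    terraform validate
    terraform plan -out tfplan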


  • Create a new file Jenkinsfile and copy the code below

pipeline {

    agent {

      node {

        label "master"

      } 

    }


    parameters {

        choice(choices: ['dev', 'qa', 'prod'], description: 'Select Lifecycle to deploy', name: 'Environment')

        choice(choices: ['master', 'feature_1', 'feature_2'], description: 'Select Branch to clone', name: 'Branch')

        choice(choices: ['m4.large', 'm4.xlarge', 'm4.2xlarge'], description: 'Select Instance Size', name: 'InstanceSize')

        choice(choices: ['1.18', '1.20', '1.21'], description: 'Select Kubernetes Version', name: 'KubeV')

        booleanParam(name: 'autoApprove', defaultValue: false, description: 'Automatically run apply after generating plan?')

        booleanParam(name: 'ACCEPTANCE_TESTS_LOG_TO_FILE', defaultValue: true, description: 'Should debug logs be written to a separate file?')

        choice(name: 'ACCEPTANCE_TESTS_LOG_LEVEL', choices: ['WARN', 'ERROR', 'DEBUG', 'INFO', 'TRACE'], description: 'The Terraform Debug Level')

    }



     environment {

        AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')

        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')

        TF_LOG                = "${params.ACCEPTANCE_TESTS_LOG_LEVEL}"

        TF_LOG_PATH           = "${params.ACCEPTANCE_TESTS_LOG_TO_FILE ? 'tf_log.log' : '' }"

        TF_VAR_env = "${params.Environment}"

        TF_VAR_instance_type = "${params.InstanceSize}"

        TF_VAR_kube_version = "${params.KubeV}"

        TF_VAR_environment = "${params.Branch}"

    }

// 


    stages {

      stage('checkout') {

        steps {

            echo "Pulling changes from the branch ${params.Branch}"

            git credentialsId: 'bitbucket', url: 'https://bitbucket.org/username/eks-sample.git' , branch: "${params.Branch}"

        }

      }


        stage('terraform plan') {

            steps {

                sh "pwd ; terraform init -input=true"

                sh "terraform validate"

                sh "terraform plan -input=true -out tfplan"

                sh 'terraform show -no-color tfplan > tfplan.txt'

            }

        }

        

        stage('terraform apply approval') {

           when {

               not {

                   equals expected: true, actual: params.autoApprove

               }

           }


           steps {

               script {

                    def plan = readFile 'tfplan.txt'

                    input message: "Do you want to apply the plan?",

                    parameters: [text(name: 'Plan', description: 'Please review the plan', defaultValue: plan)]

               }

           }

       }


        stage('terraform apply') {

            steps {

                sh "terraform apply -input=true tfplan"

            }

        }

        

        stage('terraform destroy approval') {

            steps {

                input 'Run terraform destroy?'

            }

        }

        stage('terraform destroy') {

            steps {

                sh 'terraform destroy -auto-approve'

            }

        }

    }


  }


  • Commit and push the code changes to the repo via the command line or VS Code

    • Run the following commands to commit the code to Bitbucket:
      - git pull
      - git add *
      - git commit -m "update"
      - git push

      OR

      In VS Code, navigate to the Source Control icon in the sidebar (Note: this only works with SSH configured for Bitbucket)
    • Enter a commit message
    • Click the + icon to stage the changes

    • Push the changes by clicking the sync indicator (🔄 0 ⬇️ 1 ⬆️) in the status bar as shown below

 

Run Pipeline Job

  • Go to eks-pipeline on Jenkins and run build 
Note: The pipeline job will fail the first time; this initial run registers the parameters defined in the Jenkinsfile

  • The next time you run a build, you should see the build parameters as shown below


  • Select dev in the Environment field
  • Select master as the branch
  • Choose m4.large, m4.xlarge or m4.2xlarge for EKS Cluster.
  • Choose Kubernetes version 1.18, 1.20 or 1.21.
  • Check the box ACCEPTANCE_TESTS_LOG_TO_FILE to enable Terraform logging
  • Select Trace for debug logging
  • Go to Console Output to track progress
Note: You can abort at the destroy step; to delete the resources created, rerun the destroy stage later (installing the Blue Ocean plugin on Jenkins makes it easy to rerun individual stages).


  • Configure AWS Credentials

  • Open a Git Bash terminal

  • Run the following command to configure your credentials, entering the access key and secret key of terraform-user when prompted:

  • aws configure


Once configured, run vi ~/.aws/config and add the following block:

[profile adminrole]

role_arn = arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster

source_profile = default 
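
To confirm the profile can actually assume the role before touching EKS, a quick check:

    # should return the assumed-role ARN for terraform-eks-cluster
    aws sts get-caller-identity --profile adminrole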


  • Map IAM User to EKS using ClusterRole & ClusterRoleBinding 
  • Run the following commands to update the kubeconfig for your cluster (the cluster name is <app>-<env> from variables.tf, e.g. my-eks-dev with the defaults; replace eks-sample-dev below with your actual cluster name)

aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev

aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev --profile adminrole 

  • In the terminal, cd ~, create a file named rbac.yaml (touch rbac.yaml), and paste the following code:

    ---

    apiVersion: rbac.authorization.k8s.io/v1

    kind: ClusterRole

    metadata:

      name: eks-console-dashboard-full-access-clusterrole

    rules:

    - apiGroups:

      - ""

      resources:

      - nodes

      - namespaces

      - pods

      verbs:

      - get

      - list

    - apiGroups:

      - apps

      resources:

      - deployments

      - daemonsets

      - statefulsets

      - replicasets

      verbs:

      - get

      - list

    - apiGroups:

      - batch

      resources:

      - jobs

      verbs:

      - get

      - list

    ---

    apiVersion: rbac.authorization.k8s.io/v1

    kind: ClusterRoleBinding

    metadata:

      name: eks-console-dashboard-full-access-binding

    subjects:

    - kind: User

      name: terraform-user

      apiGroup: rbac.authorization.k8s.io

    roleRef:

      kind: ClusterRole

      name: eks-console-dashboard-full-access-clusterrole

      apiGroup: rbac.authorization.k8s.io

Once the file is created, run kubectl apply -f rbac.yaml. (This binds the ClusterRole to the IAM user, giving it permission to perform the listed operations.)
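
You can confirm both objects were created with, for example:

    # verify the ClusterRole and ClusterRoleBinding defined in rbac.yaml exist
    kubectl get clusterrole eks-console-dashboard-full-access-clusterrole
    kubectl get clusterrolebinding eks-console-dashboard-full-access-binding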

Use the command kubectl edit configmap aws-auth -n kube-system to edit the ConfigMap and change it to the following:

data:

  mapRoles: |

    - groups:

      - system:masters

      rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user

      username: "{{SessionName}}"

    - groups:

      - system:bootstrappers

      - system:nodes

      rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-tf-eks-node

      username: system:node:{{EC2PrivateDNSName}}

    - groups:

      - system:masters

      rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster

      username: "{{SessionName}}"

  mapUsers: |

    - userarn: arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user

      username: terraform-user

      groups:

      - system:masters 

Once edited, type :wq! to save and quit.


  • Verify EKS Cluster is Active and Nodes are Visible
  • Log in to the AWS Console with the terraform-user credentials
  • Navigate to EKS and select your cluster (e.g. eks-sample-dev); the nodes should be visible.
  • Open a terminal, cd ~, and log in to EKS with the command:
  • aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev --profile adminrole
  • Verify you are logged in with: kubectl get nodes or kubectl get pods --all-namespaces

 

Sunday, 31 October 2021

Difference between APT and APT-GET

 

Difference Between apt and apt-get Explained

Brief: This article explains the difference between the apt and apt-get commands in Linux. It also lists some of the most commonly used apt commands that replace the older apt-get commands.

One of the noticeable new features of Ubuntu 16.04 was the ‘introduction’ of the apt command. The reality is that the first stable version of apt was released in 2014, but people started noticing it in 2016 with the release of Ubuntu 16.04.

It became common to see apt install package instead of the usual apt-get install package. Eventually, many other distributions followed in Ubuntu’s footsteps and started to encourage users to use apt instead of apt-get.

You might be wondering what the difference between apt-get and apt is. And if they have a similar command structure, what was the need for a new apt command? Is apt better than apt-get? Should you use the new apt command or stick with the good old apt-get commands?

I’ll explain all these questions in this article and I hope that by the end of this article, you’ll have a clearer picture.

apt vs apt-get

Just a quick word for Linux Mint users. A few years ago, Linux Mint implemented a python wrapper called apt that actually uses apt-get but provides more friendly options. This apt which we are discussing here is not the same as the one in Linux Mint.

Before we see the difference between apt and apt-get, let’s go into the backdrop of these commands and what exactly they try to achieve.

Why apt was introduced in the first place?

Debian, the mother of Linux distributions like Ubuntu, Linux Mint, elementary OS, etc., has a robust packaging system, and every component and application is built into a package that is installed on your system. Debian uses a set of tools called the Advanced Packaging Tool (APT) to manage this packaging system. Don’t confuse it with the command apt; it’s not the same.

There are various tools that interact with APT and allow you to install, remove, and manage packages in Debian-based Linux distributions. apt-get is one such command-line tool that is widely popular. Another popular tool is Aptitude, with both GUI and command-line options.

If you have read my guide on apt-get commands, you might have come across a number of similar commands such as apt-cache. And this is where the problem arises.

You see, these commands are way too low-level, and they have so many functionalities that are perhaps never used by an average Linux user. On the other hand, the most commonly used package management commands are scattered across apt-get and apt-cache.

The apt command was introduced to solve this problem. apt consists of some of the most widely used features from apt-get and apt-cache, leaving aside obscure and seldom-used features. It can also manage the apt.conf file.

With apt, you don’t have to juggle between apt-get and apt-cache commands. apt is more structured and provides you with the options needed to manage packages.

Bottom line: apt = the most commonly used command options from apt-get and apt-cache.

Difference between apt and apt-get

So with apt, you get all the necessary tools in one place. You won’t be lost under tons of command options. The main aim of apt is to provide an efficient way of handling packages in a way that is “pleasant for end users”.

When Debian says “pleasant for end users”, it actually means that. It has fewer, but sufficient, command options presented in a more organized way. On top of that, it enables a few options by default that are actually helpful for end users.

For example, you get to see the progress bar while installing or removing a program in apt.

apt shows the progress bar

apt also prompts you with the number of packages that can be upgraded when you update the repository database.

apt shows the number of packages that can be upgraded

You can achieve the same with apt-get as well if you use additional command options. apt enables them by default and takes the pain away.
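
For instance, the fancy progress bar can be switched on for apt-get through an apt configuration option. A hedged example (the option name comes from apt.conf, and the file name under apt.conf.d is just a convention):

    # one-off: enable the coloured progress bar for a single apt-get run
    sudo apt-get -o Dpkg::Progress-Fancy=1 install htop

    # persistent: enable it for every apt-get invocation
    echo 'Dpkg::Progress-Fancy "1";' | sudo tee /etc/apt/apt.conf.d/99progressbar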

Difference between apt and apt-get commands

While apt does have some similar command options as apt-get, it’s not backward compatible with apt-get. That means it won’t always work if you just replace the apt-get part of an apt-get command with apt.

Let’s see which apt command replaces which apt-get and apt-cache command options.

apt command         the command it replaces    function of the command
apt install         apt-get install            Installs a package
apt remove          apt-get remove             Removes a package
apt purge           apt-get purge              Removes package with configuration
apt update          apt-get update             Refreshes repository index
apt upgrade         apt-get upgrade            Upgrades all upgradable packages
apt autoremove      apt-get autoremove         Removes unwanted packages
apt full-upgrade    apt-get dist-upgrade       Upgrades packages with auto-handling of dependencies
apt search          apt-cache search           Searches for the program
apt show            apt-cache show             Shows package details

apt has a few commands of its own as well.

new apt command     function of the command
apt list            Lists packages with criteria (installed, upgradable etc)
apt edit-sources    Edits sources list
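
A few quick examples of these apt-only commands:

    # list installed packages, or only those with pending upgrades
    apt list --installed
    apt list --upgradable

    # open the sources list in your default editor
    sudo apt edit-sources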

One point to note here is that apt is under continuous development, so you may see a few new options added to the command in future versions.

Is apt-get deprecated?

I didn’t find any information that says apt-get will be discontinued. And it actually shouldn’t be; it still has a lot more functionality to offer than apt.

For low-level operations, in scripting, etc., apt-get will still be used.

Should I use apt or apt-get?

You might be wondering whether you should use apt or apt-get. As a regular Linux user, my answer is to go with apt.

apt is the command being recommended by Linux distributions. It provides the necessary options to manage packages. Most important of all, it is easier to use, with fewer but easier-to-remember options.

I see no reason to stick with apt-get unless you are going to do specific operations that utilize more features of apt-get.

Conclusion

I hope I was able to explain the difference between apt and apt-get. In the end, to summarize the apt vs apt-get debate:

  • apt is a subset of the apt-get and apt-cache commands, providing the necessary commands for package management
  • while apt-get won’t be deprecated, as a regular user, you should start using apt more often
