Sunday, 31 October 2021

Difference between APT and APT-GET

 

Difference Between apt and apt-get Explained

Brief: This article explains the difference between the apt and apt-get commands in Linux. It also lists some of the most commonly used apt commands that replace the older apt-get commands.

One of the noticeable new features of Ubuntu 16.04 was the ‘introduction’ of the apt command. In reality, the first stable version of apt was released in 2014, but people only started noticing it in 2016 with the release of Ubuntu 16.04.

It became common to see apt install package instead of the usual apt-get install package. Eventually, many other distributions followed in Ubuntu’s footsteps and began encouraging users to use apt instead of apt-get.

You might be wondering what the difference between apt-get and apt is. And if they have a similar command structure, what was the need for a new apt command? Is apt better than apt-get? Should you use the new apt command or stick with the good old apt-get commands?

I’ll answer all of these questions in this article, and I hope that by the end of it, you’ll have a clearer picture.

apt vs apt-get


Just a quick word for Linux Mint users. A few years ago, Linux Mint implemented a Python wrapper called apt that actually uses apt-get underneath but provides more user-friendly options. The apt we are discussing here is not the same as the one in Linux Mint.

Before we look at the difference between apt and apt-get, let’s go over the background of these commands and what they try to achieve.

Why was apt introduced in the first place?

Debian, the mother of Linux distributions like Ubuntu, Linux Mint and elementary OS, has a robust packaging system: every component and application is built into a package that is installed on your system. Debian uses a set of tools called the Advanced Package Tool (APT) to manage this packaging system. Don’t confuse it with the command apt; they are not the same.

There are various tools that interact with APT and allow you to install, remove and manage packages on Debian-based Linux distributions. apt-get is one such command-line tool, and it is widely popular. Another popular tool is Aptitude, which offers an interactive interface as well as command-line options.

If you have read my guide on apt-get commands, you might have come across a number of similar commands such as apt-cache. And this is where the problem arises.

You see, these commands are quite low level, and they have so many functions that an average Linux user may never use most of them. On the other hand, the most commonly used package management commands are scattered across apt-get and apt-cache.

The apt command was introduced to solve this problem. apt consists of some of the most widely used features from apt-get and apt-cache, leaving aside the obscure and seldom-used ones. It can also manage the apt.conf file.

With apt, you don’t have to fiddle your way between apt-get and apt-cache commands. apt is more structured and provides you with the options needed to manage packages.

Bottom line: apt = the most commonly used command options from apt-get and apt-cache.

Difference between apt and apt-get

So with apt, you get all the necessary tools in one place, and you won’t get lost under tons of command options. The main aim of apt is to provide an efficient way of handling packages in a way that is “pleasant for end users”.

When Debian says “pleasant for end users”, it means it. apt has fewer, but sufficient, command options, organized in a tidier way. On top of that, it enables a few options by default that are genuinely helpful for end users.

For example, you get to see the progress bar while installing or removing a program in apt.

[Image: apt shows the progress bar]

apt also prompts you with the number of packages that can be upgraded when you update the repository database.

[Image: apt shows the number of packages that can be upgraded]

You can achieve the same with apt-get if you use additional command options; apt simply enables them by default and takes the pain away.
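For the curious, a sketch of the apt-get options that approximate apt's defaults (assumes a Debian-based system; the install/upgrade lines are shown as comments so nothing is modified, and the last line is a harmless read-only query):

```shell
# apt's progress bar can be enabled for apt-get too:
#   sudo apt-get -o Dpkg::Progress-Fancy=1 install <package>
# and 'apt upgrade' roughly corresponds to:
#   sudo apt-get upgrade --with-new-pkgs
# A harmless way to inspect the current APT configuration:
apt-config dump 2>/dev/null | grep -i 'Progress-Fancy' || echo "Progress-Fancy not set (or apt-config unavailable)"
```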

Difference between apt and apt-get commands

While apt shares some command options with apt-get, it is not backward compatible with apt-get. That means it won’t always work if you simply replace the apt-get part of a command with apt.

Let’s see which apt command replaces which apt-get and apt-cache command options.

apt command        replaces                function
apt install        apt-get install         Installs a package
apt remove         apt-get remove          Removes a package
apt purge          apt-get purge           Removes a package along with its configuration
apt update         apt-get update          Refreshes the repository index
apt upgrade        apt-get upgrade         Upgrades all upgradable packages
apt autoremove     apt-get autoremove      Removes unwanted packages
apt full-upgrade   apt-get dist-upgrade    Upgrades packages, auto-handling dependencies
apt search         apt-cache search        Searches for a program
apt show           apt-cache show          Shows package details

apt has a few commands of its own as well.

new apt command    function
apt list           Lists packages by criteria (installed, upgradable, etc.)
apt edit-sources   Edits the sources list
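A quick, read-only tour of these subcommands (a sketch assuming a Debian/Ubuntu system; it falls back gracefully elsewhere):

```shell
# Harmless read-only queries demonstrating apt's own subcommands
if command -v apt >/dev/null 2>&1; then
  apt list --installed 2>/dev/null | head -n 5    # first few installed packages
  apt list --upgradable 2>/dev/null | head -n 5   # packages with newer versions available
else
  echo "apt not available - run this on a Debian/Ubuntu system"
fi
```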

One point to note here is that apt is under continuous development, so you may see new options added to the command in future versions.

Is apt-get deprecated?

I haven’t found any information saying that apt-get will be discontinued, and it really shouldn’t be: it still offers a lot more functionality than apt.

For low-level operations and in scripting, apt-get will still be used.

Should I use apt or apt-get?

You might be wondering whether you should use apt or apt-get. As a regular Linux user, my answer is to go with apt.

apt is the command now recommended by Linux distributions. It provides the necessary options to manage packages. Most importantly, it is easier to use, with fewer but easier-to-remember options.

I see no reason to stick with apt-get unless you are going to do specific operations that utilize more features of apt-get.

Conclusion

I hope I was able to explain the difference between apt and apt-get. In the end, to summarize the apt vs apt-get debate:

  • apt is a subset of the apt-get and apt-cache commands, providing the commands needed for everyday package management
  • while apt-get won’t be deprecated, as a regular user you should start using apt more often

Sunday, 24 October 2021

Deploy Nginx with Terraform and Ansible

The goal is to apply DevOps best practices by using Terraform for provisioning and Ansible for configuring your server in a Jenkins pipeline. We will go over the main concepts that need to be considered and a Jenkinsfile that runs Terraform and Ansible. The Jenkinsfile consists of parameters that allow us to pass data as variables into our pipeline job.

  • Install Terraform on Jenkins Server
  • Install Python on Jenkins Server
  • Install Ansible on Jenkins Server
  • Install Terraform & Credentials Plugin on Jenkins
  • Configure Terraform
  • Store and Encrypt Credentials in Jenkins
  • Setting up CD Pipeline with Terraform + Ansible to Deploy Nginx
  • Playbook to Deploy Nginx
  • Run Pipeline Job


Install Terraform on Jenkins Server


Use the following commands to install Terraform on Jenkins server and move the binaries to the correct path as shown below.


  • sudo apt install unzip
  • wget https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_linux_amd64.zip
  • unzip terraform_0.12.24_linux_amd64.zip
  • sudo mv terraform /usr/bin/
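After running the steps above, a quick check confirms the binary is on PATH (skips gracefully if it is not):

```shell
# Verify the Terraform install succeeded
if command -v terraform >/dev/null 2>&1; then
  terraform version
else
  echo "terraform not found - re-check the unzip/mv steps above"
fi
```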

Install Python on Jenkins Server


Use the following commands to install Python on Jenkins server 

  • sudo apt-get update
  • sudo apt-get install python-minimal -y
  • python --version


Install Ansible on Jenkins Server


Use the following commands to install Ansible on Jenkins server 

  • sudo apt-get update 
  • sudo apt-get install software-properties-common

  • sudo apt-add-repository ppa:ansible/ansible
  • sudo apt-get update
  • sudo apt install ansible
  • ansible --version
  • (Note: Ansible version should show 2.9.27 or higher)

Install Terraform & Credentials plugin on Jenkins

Go to Manage Jenkins > Manage Plugins >Available > search Terraform as shown below:



Make sure to install the Credentials and Credentials Binding plugins as well, as these are required for securing your credentials.

As you can see, the Terraform plugin is already installed on my Jenkins instance, which is why it's displayed in the Installed section.

Store and Encrypt Credentials in Jenkins (Access and Secret Key) 

In this step, we will be storing and encrypting the access and secret key in Jenkins to maximize security and minimize the chances of exposing our credentials.

    • Go to Manage Jenkins > Manage Credentials > Click on Jenkins the highlighted link as shown below


    • Select Add Credentials
    • Choose Secret text in the Kind field
    • Enter the following (replace the placeholder values with your own):
    • Secret = EnterYourSecretKeyHere
    • ID = AWS_SECRET_ACCESS_KEY
    • Description = AWS_SECRET_ACCESS_KEY
    • Click OK

    Add another credential and enter the following:

    • Secret = EnterYourAccessIDHere
    • ID = AWS_ACCESS_KEY_ID
    • Description = AWS_ACCESS_KEY_ID

    Click OK

    • Choose SSH Username with private key in the Kind field
    • ID = ENTER-KEY-NAME
    • Description = ENTER-KEY-NAME
    • Private Key = ENTER-PRIVATE-KEY

    • Click OK

    Configure Terraform

    Go to Manage Jenkins > Global Tool Configuration; Terraform will be displayed in the list.

    • Enter terraform in the Name field
    • Provide the path /usr/bin/ as shown below

    Setting up CD Pipeline with Terraform + Ansible to Deploy Nginx


    • Go to Jenkins > New Items. Enter nginx-pipeline in name field > Choose Pipeline > Click OK


    • Select Configure after creation.
    • Go to Build Triggers and enable Trigger builds remotely.
    • Enter tf_token as Authentication Token

     

    Bitbucket Changes
      • Create a new Bitbucket Repo and call it nginx-pipeline
      • Go to Repository Settings after creation and select Webhooks
      • Click Add Webhooks
      • Enter tf_token as the Title
      • Copy and paste the url as shown below
                  http://JENKINS_URL:8080/job/nginx-pipeline/buildWithParameters?token=tf_token

    • Status should be Active
    • Click on Skip certificate verification
    • Under Triggers, select Repository push
    • Go back to Jenkins and select Configure
    • Scroll down to Pipeline and click on the drop down to select Pipeline Script From SCM
    • Enter credentials for Bitbucket, leave the branch master as the default, and make sure the script path is Jenkinsfile
    • Right click on Pipeline Syntax and open in a new tab. 
    • Choose Checkout from Version Control in the Sample Step field
    • Enter Bitbucket Repository URL and Credentials, leave the branches blank

    • Click GENERATE PIPELINE SCRIPT and copy the credentialsId and url (these are required for the Jenkinsfile script)
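Once the webhook is saved, Bitbucket will POST to the trigger URL on every push. A quick sketch of how that URL is assembled (JENKINS_URL here is a placeholder hostname; the curl line is commented out since it needs a reachable Jenkins):

```shell
# Build the remote-trigger URL used by the Bitbucket webhook
JENKINS_URL="jenkins.example.com"   # placeholder - substitute your Jenkins host
JOB="nginx-pipeline"
TOKEN="tf_token"
HOOK_URL="http://${JENKINS_URL}:8080/job/${JOB}/buildWithParameters?token=${TOKEN}"
echo "Webhook URL: ${HOOK_URL}"
# To trigger a build manually instead of via the webhook:
# curl -X POST "${HOOK_URL}"
```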



    Create Workspace for Terraform and Ansible Pipeline

    • Open File Explorer, navigate to Desktop and create a folder

      terraform_ansible

    • Once the folder has been created, open Visual Studio Code and add the folder to your workspace

    • Open a New Terminal
    • Before cloning the repo, run: git init
    • Navigate to nginx-pipeline repo in Bitbucket
    • Clone the repo with SSH or HTTPS
    • Create an S3 bucket in AWS to configure the backend and store Terraform state files. (Name the S3 bucket whatever you prefer.)

    • Create a new file main.tf and copy in the code below

    provider "aws" {
      region  = var.region
      version = "~> 2.0"
    }

    terraform {
      backend "s3" {
        bucket = "S3-BUCKET-NAME"
        key    = "nginx/terraform.tfstate"
        region = "us-east-2"
      }
    }

    locals {
      ssh_user         = "ubuntu"
      key_name         = "name-of-key"
      private_key_path = "files/name-of-key.pem"
    }

    resource "aws_instance" "nginx" {
      ami                         = data.aws_ami.ubuntu.id
      subnet_id                   = "subnet-7ce9c814"
      instance_type               = "t2.micro"
      associate_public_ip_address = true
      security_groups             = [aws_security_group.nginx.id]
      key_name                    = local.key_name

      # Wait until SSH is ready before handing off to Ansible
      provisioner "remote-exec" {
        inline = [
          "echo 'Wait until SSH is ready'",
        ]

        ## PASS PRIVATE KEY AS VARIABLE
        connection {
          type        = "ssh"
          user        = local.ssh_user
          private_key = file(local.private_key_path)
          host        = aws_instance.nginx.public_ip
        }
      }

      provisioner "local-exec" {
        command = "ansible-playbook -i ${aws_instance.nginx.public_ip}, --private-key ${local.private_key_path} nginx.yml"
      }

      tags = {
        Name        = "u2-${var.environment}-${var.application}"
        CreatedBy   = var.launched_by
        Application = var.application
        OS          = var.os
        Environment = var.environment
      }
    }

    output "nginx_ip" {
      value = aws_instance.nginx.public_ip
    }

    data "aws_ami" "ubuntu" {
      most_recent = true

      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
      }

      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }

      owners = ["099720109477"] # Canonical
    }
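Before committing, it can help to run Terraform's built-in checks locally. This is an optional sketch that skips gracefully when terraform is not installed (note `terraform validate` needs `terraform init` to have run first):

```shell
# Optional local sanity checks on the .tf files before committing
if command -v terraform >/dev/null 2>&1; then
  terraform fmt -check -diff || true   # flag formatting drift, don't fail the shell
  terraform validate || true           # catch syntax errors (requires prior 'terraform init')
else
  echo "terraform not installed - skipping local validation"
fi
echo "sanity checks complete"
```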

    • Create a new file nginx.yml and copy in the code below

    ---
    - name: Install Nginx
      hosts: all
      remote_user: ubuntu
      become: yes

      roles:
        - nginx

    • Create a new file security.tf and copy in the code below

    resource "aws_security_group" "nginx" {
      name        = "u2-${var.environment}-sg-${var.application}"
      description = "EC2 SG"

      ingress {
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      ingress {
        from_port   = 8080
        to_port     = 8080
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      ingress {
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      # Allow all outbound
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    • Create a new file variable.tf and copy in the code below.

    variable "region" {
      type    = string
      default = "us-east-2"
    }

    variable "instance_type" {}
    variable "application" {}
    variable "environment" {}

    ############## tags
    variable "os" {
      type    = string
      default = "Ubuntu"
    }

    variable "launched_by" {
      type    = string
      default = "USER"
    }
    ############## end tags
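These variables are fed from the pipeline's environment block: Terraform reads any `TF_VAR_<name>` environment variable as the value of `variable "<name>"`. A quick illustration (the values here are just examples):

```shell
# Terraform maps TF_VAR_<name> env vars onto variables of the same name,
# which is how the Jenkinsfile's environment block passes parameters through
export TF_VAR_instance_type="t2.micro"
export TF_VAR_application="nginx"
export TF_VAR_environment="dev"
env | grep '^TF_VAR_' | sort
```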


    Ansible Playbook to Deploy Nginx

    • Create the folders roles/nginx/tasks in the same directory, then create a file called main.yml and copy in the code below.

    ---
    - name: Install Python Dependencies
      raw: "{{ item }}"
      loop:
        - sudo apt-get update
        - sudo apt-get -y install python
      become: true
      ignore_errors: true

    - name: Ensure Nginx is at the latest version
      apt:
        name: nginx
        state: latest
        update_cache: yes

    - name: Make sure Nginx is running
      systemd:
        state: started
        name: nginx


    • Create the folders roles/nginx/handlers in the same directory, then create a file called main.yml and copy in the code below.

    - name: restart nginx
      service:
        name: nginx
        state: restarted


    • Create a new file Jenkinsfile and copy in the code below.


      pipeline {
          agent {
              node {
                  label "master"
              }
          }

          parameters {
              string(name: 'AppName', defaultValue: 'Enter App Name', description: 'Name of application')
              choice(choices: ['master', 'dev', 'qa', 'prod'], description: 'Select lifecycle to deploy', name: 'Branch')
              choice(choices: ['t2.micro', 't2.small', 't2.medium'], description: 'Select instance size', name: 'InstanceSize')
              booleanParam(name: 'autoApprove', defaultValue: false, description: 'Automatically run apply after generating plan?')
          }

          environment {
              AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')
              AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
              TF_VAR_instance_type  = "${params.InstanceSize}"
              TF_VAR_environment    = "${params.Branch}"
              TF_VAR_application    = "${params.AppName}"
          }

          stages {
              stage('checkout') {
                  steps {
                      echo "Pulling changes from the branch ${params.Branch}"
                      git credentialsId: 'bitbucket', url: 'paste-url-here', branch: "${params.Branch}"
                  }
              }

              stage('terraform plan') {
                  steps {
                      withCredentials([sshUserPrivateKey(credentialsId: 'name-of-key', keyFileVariable: 'SSH_KEY')]) {
                          sh 'mkdir -p files'
                          sh 'if [ -f "files/name-of-key.pem" ]; then rm -f files/name-of-key.pem; else echo "No entry found"; fi'
                          sh 'cp "$SSH_KEY" files/name-of-key.pem'
                          sh 'pwd; terraform init -input=true'
                          sh 'terraform plan -input=true -out tfplan'
                          sh 'terraform show -no-color tfplan > tfplan.txt'
                      }
                  }
              }

              stage('terraform apply approval') {
                  when {
                      not {
                          equals expected: true, actual: params.autoApprove
                      }
                  }
                  steps {
                      script {
                          def plan = readFile 'tfplan.txt'
                          input message: "Do you want to apply the plan?",
                                parameters: [text(name: 'Plan', description: 'Please review the plan', defaultValue: plan)]
                      }
                  }
              }

              stage('terraform apply') {
                  steps {
                      sh 'terraform apply -input=true tfplan'
                  }
              }

              stage('terraform destroy approval') {
                  steps {
                      input 'Run terraform destroy?'
                  }
              }

              stage('terraform destroy') {
                  steps {
                      // -auto-approve replaces the deprecated -force flag
                      sh 'terraform destroy -auto-approve'
                  }
              }
          }
      }


    • Commit and push code changes to the repo with the following:
      • In VS Code, navigate to the Source Control icon in the side bar
      • Enter a commit message
      • Click the + icon to stage changes
      • Push changes by clicking the 🔄 0 ⬇️ 1 ⬆️ sync control in the status bar as shown below

    Run Pipeline Job

    • Go to nginx-pipeline on Jenkins and run a build
    Note: The pipeline job will fail the first time; this first run is needed for Jenkins to pick up the parameters from the Jenkinsfile

    • The next time you run a build, you should see the parameters as shown below

    • Enter Nginx in the AppName field
    • Select a Branch/Lifecycle to deploy the server
    • Choose t2.small or t2.medium for the Nginx server
    • Go to Console Output to track progress
    Note: You can abort the destroy step and rerun it later to delete the created resources; installing the Blue Ocean plugin on Jenkins makes rerunning individual stages easier.

