Friday, 11 March 2022

How to Build a Terraform Script from Scratch to Skyscraper

Step 1: Create the main Terraform config file - main.tf

Go to your Terraform workspace and launch VS Code.


Step 2: Get the Code Block for The Provider Section

Go to https://registry.terraform.io/ and select Browse Providers.


Step 3: Select your provider: AWS (hashicorp/aws).

Step 4: Copy the code block skeleton shown below and paste it into your main.tf file:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
}

Step 5: Let's get the configuration options. Click on Documentation and scroll down to the usage example:



Step 6: Copy the configuration options from the usage example and replace them with your own region, access key and secret key.
So our code will now look like this:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "your access key"
  secret_key = "your secret key"
}


Step 7: Now let us add a default tag to our code. Scroll down on the page to the default tags usage example and copy the code (modify as required).


See the code to copy below; add it below the secret key.
default_tags {
    tags = {
      Environment = "Test"
      Name        = "Provider Tag"
    }
  }
Our code will now look like this:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "AKetetettetetettetwwuquuququq"
  secret_key = "wtwtetetett2tt22266262wfwffwf"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

This takes care of our connection to AWS. The next thing is to create a resource. To do this, browse the available resources for the AWS provider in the registry documentation.




AWS has lots of resources; scroll to the one you want. In this example we will create an EC2 instance.

Copy the aws_instance resource block and add it to your script; this will form our template.

So our code will look like this:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "Axccxcxcxcxccxcxcxccxcxc"
  secret_key = "pxxcxcxcxccxvxvvx"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "web" {
}

Now that we have the resource block ready, it's time to inject the configuration arguments for the instance resource.


Step 8: Go to the registry: https://registry.terraform.io/ and select Browse Modules. We want to create an EC2 instance, so we will select a module for that.

In the search box type ec2 and look for the module for creating an EC2 instance; scroll to find it under Modules.




Scroll down, copy the module usage block shown on the page, and add it to your code. It looks roughly like the sketch below.

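A sketch of that usage block (based on the terraform-aws-modules/ec2-instance example on the registry; the exact version and sample values shown on the page may differ):

module "ec2_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 3.0" # whatever version the registry page shows

  name = "single-instance"

  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "user1"
  vpc_security_group_ids = ["sg-12345678"]

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}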
After pasting, please note that the following module arguments aren't required, since we will use the plain aws_instance resource rather than the module:
  • name
  • source
  • version

Our new code will look like below

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "Axccxcxcxcxccxcxcxccxcxc"
  secret_key = "pxxcxcxcxccxvxvvx"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "myec2-instance" {
  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "Augustkey"
  vpc_security_group_ids = ["sg-12345678"]
  tags = {
    Terraform   = "true"
    Environment = "Dev"
    Name        = "My_ec2 instance"
  }
}

Now modify vpc_security_group_ids so that it references the security group we will create later in the script (instead of the hardcoded sg-12345678):
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

Our code will now look like below:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "xcxcxcxcvxvxbxbbxbxbx"
  secret_key = "sggsgsgsgsggsgsgsggs"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "single-instance" {
  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "Augustkey"
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
  tags = {
    Terraform   = "true"
    Environment = "Dev"
    Name        = "My_ec2 instance"
  }
}
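
Note that key_name points to an EC2 key pair that must already exist in your account and region. If you would rather let Terraform create it, a minimal sketch (the local public key path is an assumption) looks like this:

resource "aws_key_pair" "augustkey" {
  key_name   = "Augustkey"
  public_key = file("~/.ssh/id_rsa.pub") # assumes you already have a local SSH public key
}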


Now let's add the security group resource.

In the registry documentation, open the aws_security_group resource, copy the example code, and add it at the bottom of the script. We will modify it to suit our environment.

Modify the ingress ports to suit your environment; the egress block doesn't need to be changed.
vpc_id: is optional
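If you do want to pin the security group to a particular VPC instead of your account's default VPC, you would add the attribute inside the aws_security_group block (the VPC ID below is a made-up placeholder):

  # inside the aws_security_group block
  vpc_id = "vpc-0123456789abcdef0" # placeholder; omit vpc_id to use the default VPC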
Our code will now look like:


terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "xcxcxcxcvxvxbxbbxbxbx"
  secret_key = "sggsgsgsgsggsgsgsggs"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "single-instance" {
  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "Augustkey"
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
  tags = {
    Terraform   = "true"
    Environment = "Dev"
    Name        = "My_ec2 instance"
  }
}

resource "aws_security_group" "ec2_sg" {
  name        = "ec2-dev-sg"
  description = "EC2 SG"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  # Allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "ec2-dev-sg"
  }
}

Now we have all the code blocks to create our environment. Save the file, then run the following in your terminal:
$ terraform init
$ terraform plan
$ terraform apply

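One note on the hardcoded access_key and secret_key: as a safer pattern (a sketch only; the variable names here are my own, not from the original), you can declare them as sensitive variables, or drop them from the provider block entirely and export AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in your shell, which is essentially what the pipeline setup below does:

variable "access_key" {
  type      = string
  sensitive = true
}

variable "secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-east-2"
  access_key = var.access_key
  secret_key = var.secret_key
}
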
Friday, 4 March 2022

Setting Up a CD Pipeline for Terraform

 

  • Go back to Jenkins, select your terraform pipeline and click Configure
  • Scroll down to Pipeline and, from the drop-down, select Pipeline Script From SCM
  • Enter your Bitbucket credentials, leave the branch as master (the default), and make sure the Script Path is Jenkinsfile
  • Right-click on Pipeline Syntax and open it in a new tab
  • Choose Checkout from Version Control in the Sample Step field
  • Enter the Bitbucket repository URL and credentials; leave the branches field blank
  • Click Generate Pipeline Script, then copy the credentialsId and url values (these are needed for the Jenkinsfile script)



Create Workspace for Terraform Pipeline
  • Open File Explorer, navigate to Desktop and create a folder cd_pipeline

  • Once the folder has been created, open Visual Studio Code and add the folder to your workspace







  • Open a New Terminal
  • Before cloning the repo, run: git init
  • Navigate to terraform-pipeline repo in Bitbucket
  • Clone the repo with SSH or HTTPS
  • Create a new file main.tf and copy the code below

provider "aws" {
  region  = var.region
  version = "~> 2.0"
}

resource "aws_instance" "ec2" {
  user_data = base64encode(file("deploy.sh"))
  ami       = "ami-0782e9ee97725263d" ##Change AMI to meet OS requirement as needed.

  root_block_device {
    volume_type           = "gp2"
    volume_size           = 200
    delete_on_termination = true
    encrypted             = true
  }

  tags = {
    Name        = "u2-${var.environment}-${var.application}"
    CreatedBy   = var.launched_by
    Application = var.application
    OS          = var.os
    Environment = var.environment
  }

  instance_type          = var.instance_type
  key_name               = "Enter_KEYPAIR_Name_Here"
  vpc_security_group_ids = [aws_security_group.ec2_SecurityGroups.id]
}

output "ec2_ip" {
  value = [aws_instance.ec2.*.private_ip]
}

output "ec2_ip_public" {
  value = [aws_instance.ec2.*.public_ip]
}

output "ec2_name" {
  value = [aws_instance.ec2.*.tags.Name]
}

output "ec2_instance_id" {
  value = aws_instance.ec2.*.id
}


  • Create a new file security.tf and copy the code below

resource "aws_security_group" "ec2_SecurityGroups" {
  name        = "u2-${var.environment}-sg-${var.application}"
  description = "EC2 SG"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8081
    to_port     = 8081
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8082
    to_port     = 8082
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

  • Create a new file variable.tf and copy the code below.

variable "region" {
  type    = string
  default = "us-east-2"
}
variable "instance_type" {}
variable "application" {}
variable "environment" {}
############## tags
variable "os" {
  type    = string
  default = "Ubuntu"
}
variable "launched_by" {
  type    = string
  default = "USER"
}
############## end tags
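
For context (this note is mine, not from the original post): instance_type, application and environment have no defaults, so their values must come from somewhere. The Jenkinsfile below supplies them as TF_VAR_* environment variables, which Terraform picks up automatically; for a local run you could use a terraform.tfvars file instead, roughly like this (values are placeholders):

# terraform.tfvars -- placeholder values for a local run; in the pipeline these
# come from TF_VAR_instance_type, TF_VAR_application and TF_VAR_environment
instance_type = "t2.small"
application   = "artifactory"
environment   = "dev"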


Bash Script to Deploy Artifactory

  • Create a new file deploy.sh and copy the code below.
#!/bin/bash
set -x

exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1 

echo ""
echo "........................................"
echo "Installation of application"
echo "........................................"
echo "Today's date: `date`"
echo "........................................"
echo ""
sudo pip install awscli
sudo apt-get install -y unzip
sudo apt update
sudo apt dist-upgrade -y
sudo apt autoremove -y
sudo apt update
sudo apt-get install -y openjdk-8-jdk openjdk-8-doc
java -version
sudo apt install -y wget software-properties-common
# Note: the Bintray repositories below have been retired; if these steps fail,
# point them at JFrog's current apt repository instead.
sudo wget -qO - https://api.bintray.com/orgs/jfrog/keys/gpg/public.key | sudo apt-key add -
sudo add-apt-repository -y "deb [arch=amd64] https://jfrog.bintray.com/artifactory-debs $(lsb_release -cs) main"
sudo apt update
sudo apt install -y jfrog-artifactory-oss
sudo systemctl stop artifactory.service
sudo systemctl start artifactory.service
sudo systemctl enable artifactory.service
sudo systemctl status artifactory.service
echo ""
echo "........................................"
echo "Installation of application completed"
echo "........................................"
echo "Today's date: `date`"
echo "........................................"
echo ""




  • Create a new file Jenkinsfile and copy the code below.

pipeline {
    agent{ label '!master' }
    parameters {
        string(name: 'AppName', defaultValue: 'Enter App Name', description: 'Name of application')
        choice(choices: ['master', 'dev', 'qa', 'prod'], description: 'Select lifecycle to Deploy', name: 'Branch')
        choice(choices: ['t2.micro', 't2.small', 't2.medium'], description: 'Select Instance Size', name: 'InstanceSize')
        booleanParam(name: 'autoApprove', defaultValue: false, description: 'Automatically run apply after generating plan?')
    }


     environment {
        AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        TF_VAR_instance_type = "${params.InstanceSize}"
        TF_VAR_environment = "${params.Branch}"
        TF_VAR_application = "${params.AppName}"
    }
// 

    stages {
      stage('checkout') {
        steps {
            echo "Pulling changes from the branch ${params.Branch}"
            git credentialsId: 'paste-credentialsId-here', url: 'paste-url-here' , branch: "${params.Branch}"
        }
      }

        stage('terraform plan') {
            steps {
                sh "pwd ; terraform init -input=true"
                sh "terraform plan -input=true -out tfplan"
                sh 'terraform show -no-color tfplan > tfplan.txt'
            }
        }
        
        stage('terraform apply approval') {
           when {
               not {
                   equals expected: true, actual: params.autoApprove
               }
           }

           steps {
               script {
                    def plan = readFile 'tfplan.txt'
                    input message: "Do you want to apply the plan?",
                    parameters: [text(name: 'Plan', description: 'Please review the plan', defaultValue: plan)]
               }
           }
       }

        stage('terraform apply') {
            steps {
                sh "terraform apply -input=true tfplan"
            }
        }
        
        stage('terraform destroy approval') {
            steps {
                input 'Run terraform destroy?'
            }
        }
        stage('terraform destroy') {
            steps {
                sh 'terraform destroy -auto-approve'
            }
        }
    }

  }

  • Commit and push code changes to the repo with the following:
    • In VS Code, navigate to the Source Control icon in the Activity Bar on the side
    • Enter a commit message
    • Click the + icon to stage the changes

    • Push the changes by clicking on the sync icon (🔄 0 ⬇️ 1 ⬆️) in the status bar

Run Pipeline Job

  • Go to terraform-pipeline on Jenkins and run build 
Note: The pipeline job will fail the first time it runs; this first run lets Jenkins capture the parameters defined in the Jenkinsfile

  • The next time you run a build, you will be prompted for the build parameters





  • Enter Artifactory in the AppName field
  • Select a Branch/Lifecycle to deploy server
  • Choose t2.small or t2.medium for Artifactory server.
  • Go to Console Output to track progress
Note: You can abort the destroy step and rerun it later (for example, with the Blue Ocean plugin installed on Jenkins) to delete the resources that were created.

How to configure Webhooks for Pipeline(Terraform)

Create a pipeline in Jenkins for your Terraform automation

  • Go to Jenkins > New Item. Enter terraform-pipeline in the name field, choose Pipeline, and click OK


  • Select Configure after creation.
  • Go to Build Triggers and enable Trigger builds remotely.
  • Enter tf_token as Authentication Token

 









Now Save

Next 

Bitbucket Changes

    • Create a new Bitbucket Repo and call it terraform-pipeline
    • Go to Repository Settings after creation and select Webhooks
    • Click Add Webhooks
    • Enter tf_token as the Title
    • Copy and paste the url as shown below
              http://JENKINS_URL:8080/job/terraform-pipeline/buildWithParameters?token=tf_token
    • Status should be active
    • Check Skip certificate verification
    • Under Triggers, select Repository push
Now, whenever you push changes to Bitbucket, it will trigger the pipeline.
