Tuesday, 24 November 2020

How to Commit Changes to a Docker Image

Step 1: Pull a Docker Image from the Docker Registry

To illustrate how to commit changes, you first need an image to work with. In this article, we work with the latest CentOS image for Docker. Download the image from Docker's library with:

sudo docker pull centos

Copy the IMAGE ID for later use.
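If you did not note the IMAGE ID during the pull, you can look it up by listing your local images (the ID in the later examples is just an example; yours will differ):

```shell
# List all local images; copy the value from the IMAGE ID column
sudo docker images
```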

Step 2: Deploy the Container

Add the IMAGE ID to the command that will create a container based on the image:

sudo docker run -it 0d120b6ccaa8 /bin/bash

The -it options instruct Docker to launch the container in interactive mode with a terminal attached. Upon executing the command, a new container launches and you are dropped into a shell prompt inside it.


Step 3: Modify the Container

Now that you are in the container, you can modify the image. In the following example, we add Nmap, a tool for network discovery and security auditing:

yum install nmap

Alternatively, you can install Git with the command below:

yum install git


The command will download the Nmap package and install it inside the running container.


You can verify the installation by running:

nmap --version

The output shows you that Nmap version 7.70 is installed and ready to use.

Once you finish modifying the new container, exit out of it:

exit

Prompt the system to display a list of launched containers:

sudo docker ps -a

You will need the CONTAINER ID to save the changes you made to the existing image. Copy the ID value from the output.

Step 4: Commit Changes to Image

Finally, create a new image by committing the changes using the following syntax:

sudo docker commit [CONTAINER_ID] [new_image_name]

Therefore, in our example it will be:

sudo docker commit 85b95fd4423a devop-app

Where 85b95fd4423a is the CONTAINER ID and devop-app is the name of the new image.
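To confirm the commit actually captured your changes, you can run Nmap inside a throwaway container started from the new image (devop-app is the image name from the example above):

```shell
# Run nmap --version in a container from the newly committed image;
# --rm removes the container again once the command exits
sudo docker run --rm devop-app nmap --version
```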


Your newly created image should now be available on the list of local images. You can verify by checking the image list again:

sudo docker images

Conclusion

Now that you have learned how to commit changes to a Docker image and create a new one for future use, take a look at our tutorial on how to set up and use a private/public Docker registry.

Thursday, 29 October 2020

Automate the creation of EC2 Instance with Terraform

TERRAFORM LAB 2

Please follow the link to INSTALLING TERRAFORM ON YOUR LOCAL MACHINE before starting this lab.


Deploying AWS EC2 instances with Terraform is one of the easiest ways to build infrastructure as code and automate the provisioning, deployment, and maintenance of EC2 resources. This lab will walk you through the basics of configuring a single instance using a simple configuration file and the Terraform AWS provider.

Prerequisites:

AWS access and secret keys are required to provision resources on AWS cloud.

  • Open Visual Studio Code, then click File > Preferences > Extensions, then search for and install the Terraform extension
  • Log in to the AWS console, click on your username in the top right corner and go to My Security Credentials

  • Click on Access Keys and Create New Key
Step I: Open File Explorer, navigate to Desktop and create a folder named terraform_workspace.
Step II: Once the folder has been created, open Visual Studio Code and add the folder to your workspace.
Step III: Create a new file main.tf and copy in the code below:
provider "aws" {
  access_key = "ACCESS KEY"
  secret_key = "SECRET KEY"
  region     = "us-east-2"
}

resource "aws_instance" "ec2" {
  ami                    = "ami-0a91cd140a1fc148a"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  tags = {
    Name = "ec2_instance"
  }
}


Add the blocks below to main.tf to output the private IP, public IP and EC2 name after creation. (Note: this is optional.)

output "ec2_ip" {
  value = aws_instance.ec2.private_ip
}

output "ec2_ip_public" {
  value = aws_instance.ec2.public_ip
}

output "ec2_name" {
  value = aws_instance.ec2.tags.Name
}
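Hard-coding keys in main.tf is fine for a throwaway lab, but the AWS provider can also read credentials from environment variables, in which case the access_key and secret_key lines can be dropped from the provider block. A minimal sketch with placeholder values:

```shell
# Export placeholder credentials; the Terraform AWS provider picks these
# up automatically when access_key/secret_key are omitted from main.tf
export AWS_ACCESS_KEY_ID="YOUR-ACCESS-KEY"
export AWS_SECRET_ACCESS_KEY="YOUR-SECRET-KEY"
```

This keeps the secrets out of files you might later commit to source control.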



Step IV: Create a new file security.tf and copy in the code below:

resource "aws_security_group" "ec2_sg" {
  name        = "ec2-dev-sg"
  description = "EC2 SG"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "ec2-dev-sg"
  }
}


Step V: Open the Terminal in VSCode.
Step VI: Execute the commands below.

terraform init
The command above will download the necessary plugins for AWS.

terraform plan
The command above will show how many resources will be added:
Plan: 2 to add, 0 to change, 0 to destroy.

Then execute the command below:
terraform apply
Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Yay!
We have successfully deployed our first EC2 instance with Terraform!

Now log in to the AWS console to verify the new instance is up and running.
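When you are done with the lab, remember to tear the resources down again so you are not billed for an idle instance:

```shell
# Destroy everything created by this configuration; like apply,
# it shows a plan and asks for a 'yes' confirmation
terraform destroy
```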



Friday, 16 October 2020

How to use a Jenkinsfile in a Pipeline (Pipeline as Code) / Convert your Scripted Pipeline into a Jenkinsfile

Prerequisite: Please make sure you have completed the exercise at https://violetstreamstechnology.blogspot.com/2020/09/understanding-pipelines-how-to-create.html


What is a Jenkinsfile?

Jenkins pipelines can be defined in a text file called a Jenkinsfile. With a Jenkinsfile you can implement pipeline as code, defined using a domain-specific language (DSL). In a Jenkinsfile, you write the steps needed to run a Jenkins pipeline.

The benefits of using a Jenkinsfile are:

  • You can create pipelines automatically for all branches and execute pull requests with just one Jenkinsfile.
  • You can review your pipeline code.
  • You can audit your Jenkins pipeline.
  • It is the single source of truth for your pipeline and can be modified by multiple users.



This pipeline was defined by the Groovy code placed in the pipeline section of the job.

You may notice something: anyone who has access to this job can modify the pipeline as they wish. This can cause many problems, especially in large teams: developers can manipulate their builds to always pass, there is no accountability or integrity of process, and the pipeline is hard to maintain.
To remediate these issues, Jenkins gives us the ability to use a Jenkinsfile, so that the pipeline code can be stored in a repo and version controlled instead of living inside Jenkins.

How to convert your existing Jenkins Pipeline to Jenkinsfile

Step 1: Go to your project on your computer and open Git Bash.

Step 2: Go into your repo (cd myfirstrepo), then open VSCode.
Step 3: Create a new file in VSCode and name it Jenkinsfile (note: this file has no extension).

Step 4: Go to your existing Jenkins pipeline, copy the pipeline code, and paste it into the Jenkinsfile.
Your code should look like this:
node {
    stage('Checkout') {
        build job: 'CheckOut'
    }
    stage('Build') {
        build job: 'Build'
    }
    stage('Code Quality scan') {
        build job: 'Code_Quality'
    }
    stage('Archive Artifacts') {
        build job: 'Archive_Artifacts'
    }
    stage('Publish to Artifactory') {
        build job: 'Publish_To_Artifactory'
    }
    stage('DEV Approve') {
        echo "Taking approval from DEV"
        timeout(time: 7, unit: 'DAYS') {
            input message: 'Do you want to deploy?', submitter: 'admin'
        }
    }
    stage('DEV Deploy') {
        build job: 'Deploy_To_Container'
    }
    stage('Slack notification') {
        build job: 'Slack_Notification'
    }
}

Step 5: Save and push your changes to your repo (you can do this with VSCode too, but I will use Git Bash).
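From Git Bash inside the repo, the push boils down to the usual three commands (the branch name may be master or main depending on how your repo was created):

```shell
# Stage the new Jenkinsfile, commit it, and push it to the remote repo
git add Jenkinsfile
git commit -m "Add Jenkinsfile"
git push origin master
```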



Check for your Jenkinsfile in the repo.


Step 6: Go to Jenkins and create a new pipeline job: Pipeline_From_JenkinsFile.
Select "Pipeline script from SCM".

Enter your Bitbucket credentials, specify the branch, and make sure the Script Path is Jenkinsfile.




Step 7: Save and Run








Thursday, 15 October 2020

How to use Vault in Ansible Tower to Encrypt Credentials and Sensitive Data

 


Ansible Vault is a feature of Ansible that allows you to keep sensitive data, such as passwords or keys, in encrypted files rather than as plaintext in playbooks or roles. These vault files can then be distributed or placed in source control.

Recall our previous tutorial on how to create an EC2 instance with Ansible: https://violetstreamstechnology.blogspot.com/2020/09/how-to-create-ec2-instance-using.html

If you notice, we had the aws_secret_key and aws_access_key placed as extra variables in Ansible Tower.

This is not best practice. Best practice is to encrypt the access_key and secret_key using Ansible Vault to hide this sensitive data.

Let's do this to make our project comply with best practices:

Step 1:

Create a vault credential

Create the vault credential:

Left menu (Credentials) > click [+] > fill in the form > click [SAVE]

Give the name as access_key (any name will work).

VAULT_PASSWORD: set any password you desire; for this tutorial you can use admin123.


Step 2: Encrypt the access_key and secret_key strings

Step 2.1: Log into the Ansible Tower machine.

Step 2.2: Use the command below:

ansible-vault encrypt_string "AKIAXR5FQWYQMB" --name "access_key"

Replace the example key above with your own access_key.

You will be prompted for a vault password; use admin123.

You should get a result like the one below. Copy the encrypted output and save it in a notepad:

access_key: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          32666533393238663538663035343932386637386562383830363963643163356537646161316565
          6631303132633362663138313334653531306230333866310a373866356135623732613765643234
          31643430313236306664376633356564343639376637323832323832313036346231353964336236
          6435313139666530380a316331633837376263613637623630633033343734333839326234396131
          30303933623736393735393762353863333262313431663130643235636663663236

If you get an error like

Error reading config file (/etc/ansible/ansible.cfg): File contains no section headers.

file: <???>, line:  9 

u'collections_paths = /home/ubuntu/.ansible/collections/ansible_collections\n'

open /etc/ansible/ansible.cfg with the vi editor and insert
collections_paths = /home/ubuntu/.ansible/collections/ansible_collections below the [defaults] header

$ sudo vi /etc/ansible/ansible.cfg
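After the edit, the relevant part of /etc/ansible/ansible.cfg should look roughly like this (the path comes from the error message above):

```
[defaults]
collections_paths = /home/ubuntu/.ansible/collections/ansible_collections
```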



Then try again



Step 3: Repeat the same process for the secret_key; we will use the same password, admin123.
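The command is the same as for the access key, just with a different string and variable name (the value shown is a placeholder for your own secret key):

```shell
# Encrypt the secret key under the name "secret_key"; enter admin123
# when prompted for the vault password
ansible-vault encrypt_string "YOUR_SECRET_KEY" --name "secret_key"
```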

Step 4: Go to your playbook from the previous exercise, where the keys are still plain-text variables:

---
- hosts: "{{ host }}"
  gather_facts: true
  vars:
    access_key: YOUR_ACCESSKEY
    secret_key: YOUR_SECRETKEY
  tasks:
    - name: Provision instance
      ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        key_name: "{{ pem_key }}"
        instance_type: t2.micro
        image: ami-03657b56516ab7912
        wait: yes
        count: 1
        region: us-east-2





Replace the access_key and secret_key variables with the encrypted strings you copied from Vault.
The new playbook should look like the one below:

---
- hosts: "{{ host }}"
  gather_facts: true
  vars:
    access_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      34376266633337386531356334323464633063633238356564623535653733346531663638393833
      3439633230316565363365326436313063363865396565640a306136623863383365613231396166
      64303062633561306338346364633132656435396166623361666534353730616365383134663532
      3934363563613764310a313661643034666530663235316438336266663833323933343562306337
      64343738633030346537386363653464616166343832616561336231313763616266
    secret_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      37333631633938653231633238353434373063663865666434343266383636346336343936643336
      6338316330316461336365373165313163363432333630360a343334316665643336333762363665
      62383035383534386238376363373339666531613262376239393466653234376330326138633239
      6361646661323037640a306530663331616339343062333164366666343263383332333962643936
      31316338653139633837303563396463313461343232396166346664376230316565376330356166
      3436366138363430653838313064653563653731626539306664
  tasks:
    - name: Provision instance
      ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        key_name: "{{ pem_key }}"
        instance_type: t2.micro
        image: ami-03657b56516ab7912
        wait: yes
        count: 1
        region: us-east-2

Step 5: In Templates, go to the Credentials section and select the vault credential you created for access_key.

Save and run your template. The playbook will use the new encrypted variables.







