Tuesday, 16 November 2021

How to enable code coverage report using JaCoCo, Maven and Jenkins

 

What is Code Coverage?

Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not. In general, a code coverage system collects information about the running program and then combines that with source information to generate a report on the test suite's code coverage.

Code coverage is part of a feedback loop in the development process. As tests are developed, code coverage highlights aspects of the code which may not be adequately tested and which require additional testing. This loop will continue until coverage meets some specified target.

Why Measure Code Coverage?

It is well understood that unit testing improves the quality and predictability of your software releases. Do you know, however, how well your unit tests actually test your code? How many tests are enough? Do you need more tests? These are the questions code coverage measurement seeks to answer.

Coverage measurement also helps to avoid test entropy. As your code goes through multiple release cycles, there can be a tendency for unit tests to atrophy. As new code is added, it may not meet the same testing standards you put in place when the project was first released. Measuring code coverage can keep your testing up to the standards you require. You can be confident that when you go into production there will be minimal problems because you know the code not only passes its tests but that it is well tested.

In summary, we measure code coverage for the following reasons:

  • To know how well our tests actually test our code
  • To know whether we have enough testing in place
  • To maintain the test quality over the lifecycle of a project

Code coverage is not a panacea. Coverage generally follows an 80-20 rule. Increasing coverage values becomes difficult, with new tests delivering less and less incrementally. If you follow defensive programming principles, where failure conditions are often checked at many levels in your software, some code can be very difficult to reach with practical levels of testing. Coverage measurement is not a replacement for good code review and good programming practices.

In general you should adopt a sensible coverage target and aim for even coverage across all of the modules that make up your code. Relying on a single overall coverage figure can hide large gaps in coverage.

Code coverage is an important aspect of maintaining quality. There are different ways to manage code quality; one of the effective ways is to measure code coverage using plug-ins such as JaCoCo or Cobertura.


We will see how to enable code coverage for your Java project and view the coverage report in the Jenkins UI.

Step 1: Add the JaCoCo Maven plugin to your project's pom.xml, inside the <plugins> section (below <finalName>MyWebApp</finalName>).


<plugins>
  <plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.7.7.201606060606</version>
    <executions>
      <execution>
        <id>jacoco-initialize</id>
        <goals>
          <goal>prepare-agent</goal>
        </goals>
      </execution>
      <execution>
        <id>jacoco-report</id>
        <phase>test</phase>
        <goals>
          <goal>report</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
</plugins>

It should look similar to below:




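Before wiring this into Jenkins, you can verify the plugin locally. A minimal sketch, assuming a standard Maven project layout:

# Run the unit tests; the prepare-agent goal instruments the test JVM and the
# report goal (bound to the test phase above) generates the coverage report.
mvn clean test

# The HTML coverage report is written to:
#   target/site/jacoco/index.html
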
Step 2: Install the JaCoCo plug-in in Jenkins:


Step 3:

For a Freestyle job:
Enable the coverage report in the Jenkins job by going to Post-build Actions and adding "Record JaCoCo coverage report".



Step 4: Run the job by clicking Build Now.

Step 5: Open the job to view the code coverage report.

Tuesday, 9 November 2021

Project March 2022 (latest)

Cloud Eta LLC is an emerging consulting firm that designs business solutions for emerging markets.

They currently have a legacy web application called FOI App, written in Java and hosted on their private server: https://projectfoiappdevops.s3.us-east-2.amazonaws.com/FoiAppLanding/index.html

Updates are manual and usually take 5 hours, which incurs a lot of downtime and hurts the business: clients get locked out, which gives their competitors the upper hand.




Your task is to migrate this application to the cloud and implement DevOps practices across their entire software development life cycle.

You should demonstrate concepts that implement Plan -- Code -- Build -- Test -- Deploy -- Monitor.



TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)

a. Get the docker-compose file from the link below:

https://github.com/bitnami/bitnami-docker-dokuwiki/blob/master/docker-compose.yml

Or use the following command in your terminal to fetch the YAML and create a Docker Compose file:

curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-dokuwiki/master/docker-compose.yml -o docker-compose.yml

b. Mount your own data volume on this container.

Hint: do this by modifying the Docker Compose file, e.g.:



c. Change the default port so the wiki server runs on port 100.

d. Change the default user and password to:

         Username: Foi

         Password: admin

Hint: use the official image documentation to find the details needed to accomplish all of this; a docker run sketch follows the link below.

https://github.com/bitnami/bitnami-docker-dokuwiki
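As an alternative sketch, the same requirements (port 100, your own data volume, custom credentials) can be expressed with plain docker run flags; the container port, volume path and DOKUWIKI_* variable names below are assumptions taken from the Bitnami image documentation and should be verified there:

# Sketch: run the Bitnami DokuWiki image directly (verify the internal port
# and environment variable names against the Bitnami docs).
docker run -d --name dokuwiki \
  -p 100:8080 \
  -v "$PWD/dokuwiki_data:/bitnami/dokuwiki" \
  -e DOKUWIKI_USERNAME=Foi \
  -e DOKUWIKI_PASSWORD=admin \
  bitnami/dokuwiki:latest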

TASK A  Acceptance Criteria: 

i. The wiki server should be up and running and serving on port 100

ii. Mount your own container volume to persist data

iii. Login with Credentials Foi/admin


TASK B: Version Control The FoiApp Project

Plan & Code

App Name: FoiApp

  • WorkStation A- Team Lion- 3.145.18.54
  • WorkStation B- Team Eagle- 3.22.241.224
  • WorkStation C - Team Elephant-    3.21.105.249 
  • WorkStation D- Team Bear-  3.145.96.17  
  • WorkStation E- Team Unicorn-  3.17.181.196 
Developer workstations are Windows machines. Your project supervisor will provide the password you will use to log into the machine assigned to your group; the username is Administrator. You can use MobaXterm or Remote Desktop to connect.

When you access the Developer workstation assigned to your group, you will find the code base in the below location:
C: --> Documents --> App --> FoiApp


(You can use GitHub or Bitbucket.)

1) Set up 2 repos: a Build repo to store the code base and a Deployment repo to store all your deployment scripts, and name them as shown below:

  • Build repo: FoiApp_Build  --> Developers' access
  • Deployment repo: FoiApp_Deployment  --> Your team's (DevOps) access

2) Version control the FoiApp project located on the developer workstation so the developers can migrate their code to the source control management tool (Bitbucket/Git). A sketch of the commands follows the branching lists below.

  • Set up the developer workstations' SSH keys in Bitbucket to access the Build repo, and your team's (DevOps) workstation SSH keys in Bitbucket to access the Deployment repo

3) Git branching strategy for FoiApp_Build

  • master
  • release: eg    release/release-v1
  • feature:   eg  feature/feature-v1
  • develop

4) Git branching strategy for FoiApp_Deploy
  • master
  • feature eg feature/feature-v1
  • develop
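
A minimal sketch of points 2 and 3, assuming Git Bash on the Windows workstation and a placeholder Bitbucket workspace in the remote URL:

# Sketch: put the existing FoiApp code base under version control and push it
# to the FoiApp_Build repo (remote URL and local path are placeholders).
cd /c/Users/Administrator/Documents/App/FoiApp
git init
git remote add origin git@bitbucket.org:<workspace>/foiapp_build.git
git add .
git commit -m "Initial import of FoiApp code base"
git push -u origin master

# Create the agreed branches
git checkout -b develop && git push -u origin develop
git checkout -b release/release-v1 master && git push -u origin release/release-v1
git checkout -b feature/feature-v1 develop && git push -u origin feature/feature-v1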

TASK B Acceptance Criteria: 

1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the FoiApp_Build repo in Source Control Management(SCM) 

2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the FoiApp_Deploy repo

3. Demonstrate the git branching Strategy


TASK C: Set up your Infrastructure

1. Set up your Environment: DEV, UAT, QA, PROD A, PROD B

Provision 6 Apache Tomcat servers. You can use any IaC tool (Terraform, CloudFormation, Ansible Tower) and any cloud provider to host them: AWS, Google Cloud, Azure.

i. DEV: t2.micro, 8 GB

ii. UAT (User Acceptance Testing): t2.small, 10 GB

iii. QA (Quality Assurance): t2.large, 20 GB

iv. PROD A: t2.xlarge, 30 GB

v. PROD B: t2.xlarge, 30 GB

Apache Tomcat Servers should be exposed on Port 4444

Linux Distribution for Apache Tomcat Servers: Ubuntu 16.04

Note: When bootstrapping your servers, make sure you install the Datadog agent (a sample bootstrap sketch follows below).
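
A minimal user-data sketch for one of these Tomcat servers on Ubuntu 16.04; the tomcat8 package and config path, and the Datadog one-line installer, are assumptions to verify against the package layout and current Datadog documentation, and DD_API_KEY is a placeholder:

#!/bin/bash
# Sketch of a bootstrap/user-data script for a Tomcat environment server.
apt-get update -y
apt-get install -y tomcat8

# Move the Tomcat HTTP connector from the default 8080 to 4444
sed -i 's/port="8080"/port="4444"/' /etc/tomcat8/server.xml
systemctl restart tomcat8

# Install the Datadog agent (replace the placeholder API key)
DD_API_KEY="<YOUR_DATADOG_API_KEY>" \
  bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"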

2. Set up your Devops tools servers:

(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g. Amazon Linux 2, Debian, Ubuntu, etc.)

1 Jenkins (CI/CD): t2.xlarge, 20 GB

1 SonarQube (code analysis): t2.small, 8 GB

1 Ansible Tower: T2xxl, 15 GB

1 Artifactory server: T2xxl, 8 GB

1 Kubernetes server: you can use EKS, k3s, kubeadm or minikube (note: Kubernetes can be installed on your Jenkins server)

 TASK D: Set Up a 3-Node Kubernetes Cluster (Container Orchestration) and Deploy the DokuWiki Server from Task A into It

Label the Nodes: Dev, QA, Prod

1. Set up a Jenkins pipeline to Create/Delete the cluster

2. Set up a Jenkins pipeline to deploy the DokuWiki server onto any of the nodes (Dev, QA, Prod) within your cluster

3. Expose the application using a Load balancer or NodePort

Tip: Convert your docker-compose.yml to Kubernetes deployment and service files using kompose; a short sketch follows.
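
A short sketch of that conversion and of pinning workloads to labelled nodes; the generated manifest names and node names are illustrative and depend on the service name in your Compose file:

# Sketch: convert the Compose file to Kubernetes manifests, label the nodes,
# and deploy (file and node names are illustrative).
kompose convert -f docker-compose.yml

kubectl label node <node-1> env=Dev
kubectl label node <node-2> env=QA
kubectl label node <node-3> env=Prod

kubectl apply -f dokuwiki-deployment.yaml -f dokuwiki-service.yaml
kubectl get svc -o wide        # note the NodePort / LoadBalancer endpoint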

 TASK D Acceptance Criteria: 

1. You should be able to create/delete a kubernetes cluster

2. Be able to deploy your application into any Node(Dev,Qa,Prod)

3. You should be able to access the application through Nodeport or LoadBalancer

TASK E: Set Up Automated Build for Developers 

The developers use Maven to compile the code.

a. Set up a CI pipeline in Jenkins using a Jenkinsfile.

b. Enable webhooks in Bitbucket to trigger automated builds of the pipeline job.

c. The CI Pipeline job should run on an Agent(Slave)

d. Help the developers version their artifacts so that each build produces a unique artifact version.

Tips: https://jfrog.com/knowledge-base/configuring-build-artifacts-with-appropriate-build-numbers-for-jenkins-maven-project/


Pipeline job Name: FoiApp_Build

The pipeline should check out the code from SCM, build it with Maven, run code analysis and code coverage with SonarQube, upload the artifacts to Artifactory, send an email to the team, and version the artifacts.

The pipeline should send a Slack channel notification with the build status.
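
A minimal sketch of the build and versioning steps such a pipeline could run (point d), using the versions-maven-plugin to stamp the Jenkins BUILD_NUMBER into the artifact version; the base version and the exact goals are assumptions to adapt to your POM, SonarQube and Artifactory setup:

# Sketch: commands a CI stage might run to produce a uniquely versioned artifact.
mvn versions:set -DnewVersion="1.0.${BUILD_NUMBER}"   # unique version per build
mvn clean verify sonar:sonar                          # build, unit tests, Sonar analysis and coverage
mvn deploy                                            # publish to Artifactory (distributionManagement must point there)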


i. Acceptance Criteria:

 Automated build after code is pushed to the repository

1. Sonar Analysis on the sonarqube server

2. Artifact uploaded to artifactory

3. Email notification on success or failure

4. Slack Channel Notification

5. Each artifact has a unique version number

6. Code coverage displayed


TASK F: Deploy & Operate (Continuous Deployment)

a. Set up a CD pipeline in Jenkins using a Jenkinsfile.

Create 4 CD pipeline jobs, one per environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments.

Pipeline job name: e.g. FoiApp_Dev_Deploy


i. The pipeline should be able to deploy to any of your LLEs (Dev, UAT, QA) or HLEs (Prod A, Prod B).

You can use the Deploy to Container plugin in Jenkins, or deploy using Ansible Tower to pull the artifact from Artifactory and deploy it to Dev, UAT, QA or Prod.

ii. The pipeline should send a Slack channel notification with the deployment status.

iii. Pipeline should have email notification

iv. Deployment Gate

1. Acceptance criteria:

i. Deployment is seen and verified in either Dev, Uat, Qa or Prod

ii. Notification is seen in slack channel

iii. Email notification


TASK G: Monitoring

a. Set up continuous monitoring with Datadog by installing the Datadog agent on all your servers (a configuration sketch follows the acceptance criteria).

 Acceptance criteria: 

 i. All your infrastructure server metrics should be monitored (infrastructure monitoring)

ii. All running processes on all your servers should be monitored (process monitoring)

iii. Tag all your servers on the Datadog dashboard
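
A minimal sketch of the per-host agent configuration behind criteria ii and iii; the datadog.yaml path and the process_config key follow the agent v6/v7 documentation, and the tag values are illustrative:

# Sketch: enable live process monitoring and tag the host, then restart the agent.
cat <<'EOF' | sudo tee -a /etc/datadog-agent/datadog.yaml
tags:
  - env:dev
  - role:tomcat
process_config:
  enabled: "true"
EOF
sudo systemctl restart datadog-agent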


TASK H: Deployment and Rollback

a. Automate the manual deployment of a specific version of the Deli application using Ansible Tower.

The manual deployment process is below (a scripted sketch follows the steps):


Step 1: Log in to the Tomcat server

Step 2: Download the artifact

Step 3: Switch to root

Step 4: Extract the artifact into the deployment folder

Deployment folder: /var/lib/tomcat8/webapps

Use service id : ubuntu
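
The same steps as a bash sketch that an Ansible Tower job could wrap; the Artifactory URL, repository path and artifact name are placeholders, and Tomcat is assumed to expand the copied WAR itself:

#!/bin/bash
# Sketch: deploy a specific application version from Artifactory to a Tomcat server.
VERSION="$1"    # e.g. ./deploy.sh 1.0.42
ARTIFACT_URL="https://<artifactory-host>/artifactory/libs-release-local/FoiApp/${VERSION}/FoiApp-${VERSION}.war"

# Steps 1-2: as the ubuntu service account, download the artifact
curl -fSL -o /tmp/FoiApp.war "$ARTIFACT_URL"

# Steps 3-4: switch to root and place the artifact in the deployment folder
sudo cp /tmp/FoiApp.war /var/lib/tomcat8/webapps/FoiApp.war
sudo systemctl restart tomcat8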


Acceptance Criteria:

i. Deploy new artifact from artifactory to either Dev, Uat, Qa or  Prod

ii. Roll back to an older artifact from Artifactory to either Dev, UAT, QA or Prod

iii. All credentials should be encrypted


TASK I: Demonstrate Bash Automation of the following (a sketch follows the list):

i. Tomcat

ii. jenkins

iii. Apache
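
A minimal sketch of such a control script, assuming the Ubuntu service names tomcat8, jenkins and apache2:

#!/bin/bash
# Sketch: start/stop/restart/check Tomcat, Jenkins and Apache from one script.
SERVICES="tomcat8 jenkins apache2"
ACTION="${1:-status}"    # usage: ./services.sh start|stop|restart|status

for svc in $SERVICES; do
  echo "==> ${ACTION} ${svc}"
  sudo systemctl "$ACTION" "$svc"
done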


Acceptance criteria: 

1. Show bash scripts and successfully execute them

Bonus Task:

Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

Register a Domain using Route 53, eg www.teamdevops.com

Point that domain to the Elastic/Application Loadbalancer 

Acceptance Criteria: When you Enter your domain in the browser, it should Point to Either Prod A or Prod B

Project Team
Team Leads In Yellow
    Team A (Supervisor- Valentine)Lion
    Voke - Team Lead
    Pelatiah
    Bidemi
    Godswill
    Joseph
    vitalis

    Team B(Supervisor - Johnson)Eagle
    Peter 
    Sean - Team Lead
    Victoria Ojo
    Apple
    Shantel
    Damian

    Team C(Supervisor- Juwon)Elephant
    Franklin --Team Lead
    Rita
    Ezekiel
    Onuma
    Mahammad
    Victory

    Team D(Supervisor- Etim/Themmy)Bear
    Paul
    Okoye--Team Lead
    Chidiebere
    henry
    Benard Ogbu
    minie
    Jonathan Henson

    Team E(Supervisor- Adaeze)Unicorn
    Kc
    Solomon-----Team Lead
    Benjamin
    Deji
    iyiola
    Oluwatosin

    Lead Architect - Prince

    • Each Team is to work independently with their supervisors to complete this project.
    • Every Task is expected to be completed within 1 week
    • We are adopting an Agile style, so each team is expected to have 15-minute daily stand-up meetings with your supervisors (or in some cases the Lead Architect) where you will discuss your progress: what you did yesterday, what you will do today, how far you are in achieving your goals, and general updates.
    • This will be a 2-week sprint, after which you will have a demo to present all your accomplishments.
    • Please note: DOEs (DevOps Engineers) and Architects from other establishments have been invited to your demo, so be prepared.
    Demo date: 02/03/2022, Time: 8pm






    Sunday, 7 November 2021

    Deploy Prometheus and Grafana Helm Chart on Kubernetes using Terraform

    Helm is a package manager for Kubernetes applications, even the more complex ones, via charts. Basically, it templates the YAML components from a single file containing the custom values. There are Terraform providers for Helm and Kubernetes to integrate them into your codebase.
    In this lab, we'll set up a monitoring stack with Prometheus and Grafana. Terraform will configure the chart values to make them communicate with each other. When you use the Helm provider, the charts are treated as resources. Reusing attributes from one Helm chart in another creates a dependency and drives the deployment orchestration.


    Note: before you can start using kubectl, you have to install the AWS CLI and kubectl on your computer.
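
    A minimal install sketch for a Linux workstation; the AWS CLI v2 bundle and the kubectl release URL below follow the official AWS and Kubernetes download locations, and the pinned kubectl version is an assumption you should match to your cluster version:

    # Sketch: install the AWS CLI v2 and kubectl on Linux.
    curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
    unzip -q awscliv2.zip && sudo ./aws/install

    curl -sSLO "https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl"
    chmod +x kubectl && sudo mv kubectl /usr/local/bin/

    aws --version && kubectl version --client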

    Prerequisites
    •  Create an IAM User 
    •  Configure Terraform Backend with S3 Storage
    •  Setting up CD Pipeline for EKS Cluster
    •  Create Terraform Workspace for EKS Cluster
    •  Map IAM User to EKS using ClusterRole & ClusterRoleBinding
    •  Run Pipeline Job
    •  Configure AWS 
    •  Authenticate to EKS Cluster 
    •  Verify EKS Cluster is Active and Nodes are Visible
    •  Verify Helm Deployment
    •  Access Grafana Dashboard
    •  Access Prometheus UI

    • Create an IAM User 
    • Go to AWS Console
    • Search for IAM as shown below



    • Select Users and create a user called terraform-user with console access and the AdministratorAccess policy attached. Be sure to download your credentials once the user has been created, as they are required to log in to the EKS cluster.



    • Navigate to Policies, select Create policy, and name it eks-assume


    • Select JSON and paste the code below to create the policy (note: be sure to change AWS-ACCOUNT-NUMBER):

      {

          "Version": "2012-10-17",

          "Statement": {

              "Effect": "Allow",

              "Action": "sts:AssumeRole",

              "Resource": "arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster"

          }

      }




    • Navigate to Policies, select Create policy, and name it eks-permission

    • Select JSON and paste the code below to create the policy:

      {

          "Version": "2012-10-17",

          "Statement": [

              {

                  "Effect": "Allow",

                  "Action": [

                      "eks:DescribeNodegroup",

                      "eks:ListNodegroups",

                      "eks:DescribeCluster",

                      "eks:ListClusters",

                      "eks:AccessKubernetesApi",

                      "ssm:GetParameter",

                      "eks:ListUpdates",

                      "eks:ListFargateProfiles"

                  ],

                  "Resource": "*"

              }

          ]

      }


    • Create a group called eksgroup and attach the eks-permission and eks-assume policies to terraform-user

    • Verify the group and policies are attached to terraform-user by navigating to Users.


    Configure Terraform Backend with S3 Storage
    • Create an S3 bucket in AWS to configure the backend and store Terraform state files in storage. (Name the S3 bucket whatever you prefer.)


    Setting up CD Pipeline for EKS Cluster
    • Go to Jenkins > New Items. Enter eks-pipeline in name field > Choose Pipeline > Click OK


    • Select Configure after creation.
    • Go to Build Triggers and enable Trigger builds remotely.
    • Enter tf_token as Authentication Token

     

    Bitbucket Changes
      • Create a new Bitbucket Repo and call it eks-pipeline
      • Go to Repository Settings after creation and select Webhooks
      • Click Add Webhooks
      • Enter tf_token as the Title
      • Copy and paste the url as shown below
                  http://JENKINS_URL:8080/job/eks-pipeline/buildWithParameters?token=tf_token

    • Status should be active
    • Click on skip certificate verification
    • triggers --> repository push
    • Go back to Jenkins and select Configure
    • Scroll down to Pipeline and click on the drop down to select Pipeline Script From SCM
    • Enter the credentials for Bitbucket, leave the branch as master (the default), and make sure the script path is Jenkinsfile.
    • Apply and Save.


    Create Terraform Workspace for EKS Pipeline

    • Open File Explorer, navigate to Desktop and create a folder my-eks-cluster

    • Once the folder has been created, open Visual Studio Code and add the folder to your workspace







    • Open a New Terminal
    • Run the command before cloning repo: git init
    • Navigate to eks-pipeline repo in Bitbucket
    • Clone the repo with SSH or HTTPS
    • Make sure to cd eks-pipeline and create new files in the eks-pipeline folder

     
    • Create a new file eks-asg.tf and copy the code below


    resource "aws_eks_cluster" "tf_eks" {

      name            = local.cluster_name

      enabled_cluster_log_types = ["authenticator","api", "controllerManager", "scheduler"]

      role_arn        = aws_iam_role.tf-eks-master.arn

      version         = var.kube_version


      vpc_config {

        security_group_ids = [aws_security_group.eks-master-sg.id]

        subnet_ids         = data.aws_subnet_ids.public.ids

      }


      timeouts {

        create = var.cluster_create_timeout

        delete = var.cluster_delete_timeout
      }


      depends_on = [

        aws_iam_role_policy_attachment.tf-cluster-AmazonEKSClusterPolicy,

        aws_iam_role_policy_attachment.tf-cluster-AmazonEKSServicePolicy,

      ]

      

      tags = local.common_tags

    }


    ########################################################################################

    # Setup AutoScaling Group for worker nodes

    ########################################################################################


    locals {

      tf-eks-node-userdata = <<USERDATA

    #!/bin/bash

    set -o xtrace

    /etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.tf_eks.endpoint}' --b64-cluster-ca '${aws_eks_cluster.tf_eks.certificate_authority.0.data}' '${local.cluster_name}'

    USERDATA

    }


    resource "aws_launch_configuration" "config" {

      associate_public_ip_address = true

      iam_instance_profile        = aws_iam_instance_profile.node.name

      image_id                    = data.aws_ami.eks-worker.id

      instance_type               = var.instance_type

      name_prefix                 = "my-eks-cluster"

      security_groups             = [aws_security_group.eks-node-sg.id, aws_security_group.worker_ssh.id]

      user_data_base64            = base64encode(local.tf-eks-node-userdata)

      key_name                    = var.keypair-name


      lifecycle {

        create_before_destroy = true

      }

      ebs_optimized           = true

      root_block_device {

        volume_size           = 100

        delete_on_termination = true

      }

    }


    resource "aws_autoscaling_group" "asg" {

      desired_capacity     = 2

      launch_configuration = aws_launch_configuration.config.id

      max_size             = 2

      min_size             = 2

      name                 = local.cluster_name

      vpc_zone_identifier  = data.aws_subnet_ids.public.ids


      tag {

        key                 = "eks-worker-nodes"

        value               = local.cluster_name

        propagate_at_launch = true

      }


      tag {

        key                 = "kubernetes.io/cluster/${aws_eks_cluster.tf_eks.name}"

        value               = "owned"

        propagate_at_launch = true

      }

    }


    • Create a new file iam.tf and copy the code below

    # Setup for IAM role needed to setup an EKS clusters

    resource "aws_iam_role" "tf-eks-master" {

      name = "terraform-eks-cluster"


      assume_role_policy = <<POLICY

    {

      "Version": "2012-10-17",

      "Statement": [

        {

          "Effect": "Allow",

          "Principal": {

            "Service": "eks.amazonaws.com",

            "AWS": "arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user"

          },

          "Action": "sts:AssumeRole"

        }

      ]

    }

    POLICY

      lifecycle {

        create_before_destroy = true

      }

    }


    resource "aws_iam_role_policy_attachment" "tf-cluster-AmazonEKSClusterPolicy" {

      policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"

      role       = aws_iam_role.tf-eks-master.name

    }


    resource "aws_iam_role_policy_attachment" "tf-cluster-AmazonEKSServicePolicy" {

      policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"

      role       = aws_iam_role.tf-eks-master.name

    }


    resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNode" {

      policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

      role       = aws_iam_role.tf-eks-master.name

    }


    ########################################################################################

    # Setup IAM role & instance profile for worker nodes


    resource "aws_iam_role" "tf-eks-node" {

      name = "terraform-eks-tf-eks-node"


      assume_role_policy = <<POLICY

    {

      "Version": "2012-10-17",

      "Statement": [

        {

          "Effect": "Allow",

          "Principal": {

            "Service": "ec2.amazonaws.com"

          },

          "Action": "sts:AssumeRole"

        }

      ]

    }

    POLICY

    }


    resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNodePolicy" {

      policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

      role       = aws_iam_role.tf-eks-node.name

    }


    resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKS_CNI_Policy" {

      policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

      role       = aws_iam_role.tf-eks-node.name

    }


    resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEC2ContainerRegistryReadOnly" {

      policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

      role       = aws_iam_role.tf-eks-node.name

    }


    resource "aws_iam_instance_profile" "node" {

      name = "terraform-eks-node"

      role = aws_iam_role.tf-eks-node.name

    }


    • Create a new file kube.tf and copy the code below

    ########################################################################################

    # Setup provider for kubernetes

    # ---------------------------------------------------------------------------------------

    # Get an authentication token to communicate with the EKS cluster.

    # By default (before other roles are added to the Auth ConfigMap), you can authenticate to EKS cluster only by assuming the role that created the cluster.

    # `aws_eks_cluster_auth` uses IAM credentials from the AWS provider to generate a temporary token.

    # If the AWS provider assumes an IAM role, `aws_eks_cluster_auth` will use the same IAM role to get the auth token.

    # https://www.terraform.io/docs/providers/aws/d/eks_cluster_auth.html


    data "aws_eks_cluster_auth" "aws_iam_authenticator" {

      name = "${aws_eks_cluster.tf_eks.name}"

    }


    data "aws_iam_user" "terraform_user" {

      user_name = "terraform-user"

    }


    locals {

      # roles to allow kubernetes access via cli and allow ec2 nodes to join eks cluster

      configmap_roles = [{

        rolearn  = "${data.aws_iam_user.terraform_user.arn}"

        username = "{{SessionName}}"

        groups   = ["system:masters"]

      },

      {

        rolearn  =  "${aws_iam_role.tf-eks-node.arn}"

        username = "system:node:{{EC2PrivateDNSName}}"

        groups   = ["system:bootstrappers","system:nodes"]

      },

        {

        rolearn  = "${aws_iam_role.tf-eks-master.arn}"

        username = "{{SessionName}}"

        groups   = ["system:masters"]

      },]

    }


    # Allow worker nodes to join cluster via config map

    resource "kubernetes_config_map" "aws_auth" {

      metadata {

        name = "aws-auth"

        namespace = "kube-system"

      }

     data = {

        mapRoles = yamlencode(local.configmap_roles)

      }

    }




    locals {

      kubeconfig = <<KUBECONFIG

    apiVersion: v1

    clusters:

    - cluster:

        server: ${aws_eks_cluster.tf_eks.endpoint}

        certificate-authority-data: ${aws_eks_cluster.tf_eks.certificate_authority.0.data}

      name: kubernetes

    contexts:

    - context:

        cluster: kubernetes

        user: aws

      name: aws

    current-context: aws

    kind: Config

    preferences: {}

    users:

    - name: aws

      user:

        exec:

          apiVersion: client.authentication.k8s.io/v1alpha1

          command: aws-iam-authenticator

          args:

            - "token"

            - "-i"

            - "${aws_eks_cluster.tf_eks.name}"

    KUBECONFIG

    }



    • Create a new file output.tf and copy the code below

    output "eks_kubeconfig" {

      value = "${local.kubeconfig}"

      depends_on = [

        aws_eks_cluster.tf_eks

      ]

    }


    • Create a new file provider.tf and copy the code below

    terraform {

    backend "s3" {

          bucket = "S3-BUCKET-NAME"

          key    = "eks/terraform.tfstate"

          region = "us-east-2"

       }

    }


    provider "aws" {

        region     = var.region

        version    = "~> 2.0"

     }


    provider "kubernetes" {

      host                      = aws_eks_cluster.tf_eks.endpoint

      cluster_ca_certificate    = base64decode(aws_eks_cluster.tf_eks.certificate_authority.0.data)

      token                     = data.aws_eks_cluster_auth.aws_iam_authenticator.token

    }


    provider "helm" {

      kubernetes {

      host                   = aws_eks_cluster.tf_eks.endpoint

      cluster_ca_certificate = base64decode(aws_eks_cluster.tf_eks.certificate_authority.0.data)

      token                  = data.aws_eks_cluster_auth.aws_iam_authenticator.token

      }

    }

    • Create a new file sg-eks.tf and copy the code below

    # # #SG to control access to worker nodes

    resource "aws_security_group" "eks-master-sg" {

        name        = "terraform-eks-cluster"

        description = "Cluster communication with worker nodes"

        vpc_id      = var.vpc_id


        egress {

            from_port   = 0

            to_port     = 0

            protocol    = "-1"

            cidr_blocks = ["0.0.0.0/0"]

        }

        

        tags = merge(

        local.common_tags,

        map(

          "Name","eks-cluster",

          "kubernetes.io/cluster/${local.cluster_name}","owned"

        )

      )

    }


    resource "aws_security_group" "eks-node-sg" {

            name        = "terraform-eks-node"

            description = "Security group for all nodes in the cluster"

            vpc_id      = var.vpc_id


            egress {

                from_port   = 0

                to_port     = 0

                protocol    = "-1"

                cidr_blocks = ["0.0.0.0/0"]

            }


            tags = merge(

        local.common_tags,

        map(

          "Name","eks-worker-node",

          "kubernetes.io/cluster/${aws_eks_cluster.tf_eks.name}","owned"

        )

      )

    }


    resource "aws_security_group" "worker_ssh" {

      name_prefix = "worker_ssh"

      vpc_id      = var.vpc_id

      egress {

        from_port   = 0

        to_port     = 0

        protocol    = "-1"

        cidr_blocks = ["0.0.0.0/0"]

      }

      ingress {

        from_port = 22

        to_port   = 22

        protocol  = "tcp"


        cidr_blocks = ["0.0.0.0/0"]

      }

      tags = merge(

        local.common_tags,

        map(

          "Name","worker_ssh",

        )

      )

    }


    • Create a new file sg-rules-eks.tf and copy the code below


    # Allow inbound traffic from your local workstation external IP

    # to the Kubernetes. You will need to replace A.B.C.D below with

    # your real IP. Services like icanhazip.com can help you find this.

    resource "aws_security_group_rule" "tf-eks-cluster-ingress-workstation-https" {

      cidr_blocks       = ["0.0.0.0/0"]

      description       = "Allow workstation to communicate with the cluster API Server"

      from_port         = 443

      protocol          = "tcp"

      security_group_id = aws_security_group.eks-master-sg.id

      to_port           = 443

      type              = "ingress"

    }


    ########################################################################################

    # Setup worker node security group


    resource "aws_security_group_rule" "tf-eks-node-ingress-self" {

      description              = "Allow node to communicate with each other"

      from_port                = 0

      protocol                 = "-1"

      security_group_id        = aws_security_group.eks-node-sg.id

      source_security_group_id = aws_security_group.eks-node-sg.id

      to_port                  = 65535

      type                     = "ingress"

    }


    resource "aws_security_group_rule" "tf-eks-node-ingress-cluster" {

      description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"

      from_port                = 1025

      protocol                 = "tcp"

      security_group_id        = aws_security_group.eks-node-sg.id

      source_security_group_id = aws_security_group.eks-master-sg.id

      to_port                  = 65535

      type                     = "ingress"

    }


    # allow worker nodes to access EKS master

    resource "aws_security_group_rule" "tf-eks-cluster-ingress-node-https" {

      description              = "Allow pods to communicate with the cluster API Server"

      from_port                = 443

      protocol                 = "tcp"

      security_group_id        = aws_security_group.eks-node-sg.id

      source_security_group_id = aws_security_group.eks-master-sg.id

      to_port                  = 443

      type                     = "ingress"

    }


    resource "aws_security_group_rule" "tf-eks-node-ingress-master" {

      description              = "Allow cluster control to receive communication from the worker Kubelets"

      from_port                = 443

      protocol                 = "tcp"

      security_group_id        = aws_security_group.eks-master-sg.id

      source_security_group_id = aws_security_group.eks-node-sg.id

      to_port                  = 443

      type                     = "ingress"

    }


    • Create a new file variables.tf and copy the code below

    # Setup data source to get amazon-provided AMI for EKS nodes

    data "aws_ami" "eks-worker" {

      filter {

        name   = "name"

        values = ["amazon-eks-node-1.21-*"]

      }


      most_recent = true

      owners      = ["602401143452"] # Amazon EKS AMI Account ID

    }



    data "aws_subnet_ids" "public" {

      vpc_id = var.vpc_id

      

      filter {

        name   = "tag:Name"

        values = ["subnet-public-*"]

      }

    }


    variable region {

      type        = string

      default = "us-east-2"


    }


    variable "cluster_create_timeout" {

      description = "Timeout value when creating the EKS cluster."

      type        = string

      default     = "30m"

    }


    variable "cluster_delete_timeout" {

      description = "Timeout value when deleting the EKS cluster."

      type        = string

      default     = "15m"

    }


    variable "vpc_id" {

      type = string

      default = "PASTE-VPC-ID-HERE"

    }


    variable "keypair-name" {

      type = string

      default = "KEY-NAME"

    }


    variable "creator" {

      description = "Creator of deployed servers"

      type        = string

      default     = "YOUR-NAME"

    }


    variable "instance_type" {}


    variable "env" {}


    variable "grafana_password" {}


    ## Application/workspace specific inputs

    variable "app" {

      description = "Name of Application"

      type        = string

      default     = "my-eks"

    }


    variable "kube_version" {

      type        = string

      description = "Kubernetes version for eks"

    }



    ## Tagging naming convention

    locals {

      common_tags = {

      env = var.env,

      creator  = var.creator,

      app = var.app

      }

      cluster_name = "${var.app}-${var.env}"

    }

    • Create a new folder templates, then a new file in the templates folder called grafana-values.yaml, and copy the code below

      rbac:

        create: true

        pspEnabled: true

        pspUseAppArmor: true

        namespaced: true

        extraRoleRules: []

        # - apiGroups: []

        #   resources: []

        #   verbs: []

        extraClusterRoleRules: []

        # - apiGroups: []

        #   resources: []

        #   verbs: []

      serviceAccount:

        create: true

        name: ${GRAFANA_SERVICE_ACCOUNT}

        nameTest:

      #  annotations:


      replicas: 1


      ## See `kubectl explain poddisruptionbudget.spec` for more

      ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/

      podDisruptionBudget: {}

      #  minAvailable: 1

      #  maxUnavailable: 1


      ## See `kubectl explain deployment.spec.strategy` for more

      ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy

      deploymentStrategy:

        type: RollingUpdate


      readinessProbe:

        httpGet:

          path: /api/health

          port: 3000


      livenessProbe:

        httpGet:

          path: /api/health

          port: 3000

        initialDelaySeconds: 60

        timeoutSeconds: 30

        failureThreshold: 10


      ## Use an alternate scheduler, e.g. "stork".

      ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/

      ##

      # schedulerName: "default-scheduler"


      image:

        repository: grafana/grafana

        tag: 7.1.1

        sha: ""

        pullPolicy: IfNotPresent


        ## Optionally specify an array of imagePullSecrets.

        ## Secrets must be manually created in the namespace.

        ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

        ##

        # pullSecrets:

        #   - myRegistrKeySecretName


      testFramework:

        enabled: true

        image: "bats/bats"

        tag: "v1.1.0"

        imagePullPolicy: IfNotPresent

        securityContext: {}


      securityContext:

        runAsUser: 472

        runAsGroup: 472

        fsGroup: 472



      extraConfigmapMounts: []

        # - name: certs-configmap

        #   mountPath: /etc/grafana/ssl/

        #   subPath: certificates.crt # (optional)

        #   configMap: certs-configmap

      #   readOnly: true



      extraEmptyDirMounts: []

        # - name: provisioning-notifiers

      #   mountPath: /etc/grafana/provisioning/notifiers



      ## Assign a PriorityClassName to pods if set

      # priorityClassName:


      downloadDashboardsImage:

        repository: curlimages/curl

        tag: 7.70.0

        sha: ""

        pullPolicy: IfNotPresent


      downloadDashboards:

        env: {}

        resources: {}


      ## Pod Annotations

      # podAnnotations: {}


      ## Pod Labels

      podLabels:

        app: grafana


      podPortName: grafana


      ## Deployment annotations

      # annotations: {}


      ## Expose the grafana service to be accessed from outside the cluster (LoadBalancer service).

      ## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.

      ## ref: http://kubernetes.io/docs/user-guide/services/

      ##

      service:

        type: ClusterIP

        port: 80

        targetPort: 3000

        # targetPort: 4181 To be used with a proxy extraContainer

        annotations: {}

        labels:

          app: grafana

        portName: service


      extraExposePorts: []

        # - name: keycloak

        #   port: 8080

        #   targetPort: 8080

      #   type: ClusterIP


      # overrides pod.spec.hostAliases in the grafana deployment's pods

      hostAliases: []

        # - ip: "1.2.3.4"

        #   hostnames:

      #     - "my.host.com"


      ingress:

        enabled: false

        # Values can be templated

        #annotations: 

          #kubernetes.io/ingress.class: nginx

          #kubernetes.io/tls-acme: "true"

        labels: {}

        path: /

        hosts:

          - chart-example.local

        ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

        extraPaths: []

        # - path: /*

        #   backend:

        #     serviceName: ssl-redirect

        #     servicePort: use-annotation

        tls: []

        #  - secretName: chart-example-tls

        #    hosts:

        #      - chart-example.local


      resources:

        limits:

          cpu: 100m

          memory: 128Mi

        requests:

          cpu: 100m

          memory: 128Mi


      ## Node labels for pod assignment

      ## ref: https://kubernetes.io/docs/user-guide/node-selection/

      #

      nodeSelector: {}


      ## Tolerations for pod assignment

      ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/

      ##

      tolerations: []


      ## Affinity for pod assignment

      ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

      ##

      affinity: {}


      extraInitContainers: []


      ## Enable an Specify container in extraContainers. This is meant to allow adding an authentication proxy to a grafana pod

      extraContainers: |

      # - name: proxy

      #   image: quay.io/gambol99/keycloak-proxy:latest

      #   args:

      #   - -provider=github

      #   - -client-id=

      #   - -client-secret=

      #   - -github-org=<ORG_NAME>

      #   - -email-domain=*

      #   - -cookie-secret=

      #   - -http-address=http://0.0.0.0:4181

      #   - -upstream-url=http://127.0.0.1:3000

      #   ports:

      #     - name: proxy-web

      #       containerPort: 4181


      ## Volumes that can be used in init containers that will not be mounted to deployment pods

      extraContainerVolumes: []

      #  - name: volume-from-secret

      #    secret:

      #      secretName: secret-to-mount

      #  - name: empty-dir-volume

      #    emptyDir: {}


      ## Enable persistence using Persistent Volume Claims

      ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/

      ##

      persistence:

        type: pvc

        enabled: false

        # storageClassName: default

        accessModes:

          - ReadWriteOnce

        size: 10Gi

        # annotations: {}

        finalizers:

          - kubernetes.io/pvc-protection

        # subPath: ""

        # existingClaim:


      initChownData:

        ## If false, data ownership will not be reset at startup

        ## This allows the prometheus-server to be run with an arbitrary user

        ##

        enabled: true


        ## initChownData container image

        ##

        image:

          repository: busybox

          tag: "1.31.1"

          sha: ""

          pullPolicy: IfNotPresent


        ## initChownData resource requests and limits

        ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/

        ##

        resources: {}

        #  limits:

        #    cpu: 100m

        #    memory: 128Mi

        #  requests:

        #    cpu: 100m

        #    memory: 128Mi



      # Administrator credentials when not using an existing secret (see below)

      adminUser: ${GRAFANA_ADMIN_USER}

      adminPassword: ${GRAFANA_ADMIN_PASSWORD}


      ## Define command to be executed at startup by grafana container

      ## Needed if using `vault-env` to manage secrets (ref: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/)

      ## Default is "run.sh" as defined in grafana's Dockerfile

      # command:

      # - "sh"

      # - "/run.sh"


      ## Use an alternate scheduler, e.g. "stork".

      ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/

      ##

      # schedulerName:


      ## Extra environment variables that will be pass onto deployment pods

      env: {}


      ## "valueFrom" environment variable references that will be added to deployment pods

      ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core

      ## Renders in container spec as:

      ##   env:

      ##     ...

      ##     - name: <key>

      ##       valueFrom:

      ##         <value rendered as YAML>

      envValueFrom: {}


      ## The name of a secret in the same kubernetes namespace which contain values to be added to the environment

      ## This can be useful for auth tokens, etc. Value is templated.

      envFromSecret: ""


      ## Sensible environment variables that will be rendered as new secret object

      ## This can be useful for auth tokens, etc

      envRenderSecret: {}


      ## Additional grafana server secret mounts

      # Defines additional mounts with secrets. Secrets must be manually created in the namespace.

      extraSecretMounts: []

        # - name: secret-files

        #   mountPath: /etc/secrets

        #   secretName: grafana-secret-files

        #   readOnly: true

      #   subPath: ""


      ## Additional grafana server volume mounts

      # Defines additional volume mounts.

      extraVolumeMounts: []

        # - name: extra-volume

        #   mountPath: /mnt/volume

        #   readOnly: true

      #   existingClaim: volume-claim


      ## Pass the plugins you want installed as a list.

      ##

      plugins: []

        # - digrich-bubblechart-panel

      # - grafana-clock-panel


      ## Configure grafana datasources

      ## ref: http://docs.grafana.org/administration/provisioning/#datasources

      ##

      datasources:

        datasources.yaml:

          apiVersion: 1

          datasources:

          - name: Prometheus

            type: prometheus

            url: http://${PROMETHEUS_SVC}.${NAMESPACE}.svc.cluster.local

            access: proxy

            isDefault: true


      ## Configure notifiers

      ## ref: http://docs.grafana.org/administration/provisioning/#alert-notification-channels

      ##

      notifiers: {}

      #  notifiers.yaml:

      #    notifiers:

      #    - name: email-notifier

      #      type: email

      #      uid: email1

      #      # either:

      #      org_id: 1

      #      # or

      #      org_name: Main Org.

      #      is_default: true

      #      settings:

      #        addresses: an_email_address@example.com

      #    delete_notifiers:


      ## Configure grafana dashboard providers

      ## ref: http://docs.grafana.org/administration/provisioning/#dashboards

      ##

      ## `path` must be /var/lib/grafana/dashboards/<provider_name>

      ##

      dashboardProviders:

        dashboardproviders.yaml:

          apiVersion: 1

          providers:

          - name: 'default'

            orgId: 1

            folder: ''

            type: file

            disableDeletion: false

            editable: true

            options:

              path: /var/lib/grafana/dashboards/default


      ## Configure grafana dashboard to import

      ## NOTE: To use dashboards you must also enable/configure dashboardProviders

      ## ref: https://grafana.com/dashboards

      ##

      ## dashboards per provider, use provider name as key.

      ##

      dashboards:

        # default:

        #   some-dashboard:

        #     json: |

        #       $RAW_JSON

        #   custom-dashboard:

        #     file: dashboards/custom-dashboard.json

        #   prometheus-stats:

        #     gnetId: 2

        #     revision: 2

        #     datasource: Prometheus

        #   local-dashboard:

        #     url: https://example.com/repository/test.json

        #   local-dashboard-base64:

        #     url: https://example.com/repository/test-b64.json

        #   b64content: true

        default:

          prometheus-stats:

            gnetId: 10000

            revision: 1

            datasource: Prometheus




      ## Reference to external ConfigMap per provider. Use provider name as key and ConfiMap name as value.

      ## A provider dashboards must be defined either by external ConfigMaps or in values.yaml, not in both.

      ## ConfigMap data example:

      ##

      ## data:

      ##   example-dashboard.json: |

      ##     RAW_JSON

      ##

      dashboardsConfigMaps: {}

      #  default: ""


      ## Grafana's primary configuration

      ## NOTE: values in map will be converted to ini format

      ## ref: http://docs.grafana.org/installation/configuration/

      ##

      grafana.ini:

        paths:

          data: /var/lib/grafana/data

          logs: /var/log/grafana

          plugins: /var/lib/grafana/plugins

          provisioning: /etc/grafana/provisioning

        analytics:

          check_for_updates: true

        log:

          mode: console

        grafana_net:

          url: https://grafana.net

            ## grafana Authentication can be enabled with the following values on grafana.ini

            # server:

          # The full public facing url you use in browser, used for redirects and emails

        #    root_url:

        # https://grafana.com/docs/grafana/latest/auth/github/#enable-github-in-grafana

        # auth.github:

        #    enabled: false

        #    allow_sign_up: false

        #    scopes: user:email,read:org

        #    auth_url: https://github.com/login/oauth/authorize

        #    token_url: https://github.com/login/oauth/access_token

        #    api_url: https://github.com/user

        #    team_ids:

        #    allowed_organizations:

        #    client_id:

        #    client_secret:

        ## LDAP Authentication can be enabled with the following values on grafana.ini

        ## NOTE: Grafana will fail to start if the value for ldap.toml is invalid

        # auth.ldap:

        #   enabled: true

        #   allow_sign_up: true

        #   config_file: /etc/grafana/ldap.toml


      ## Grafana's LDAP configuration

      ## Templated by the template in _helpers.tpl

      ## NOTE: To enable the grafana.ini must be configured with auth.ldap.enabled

      ## ref: http://docs.grafana.org/installation/configuration/#auth-ldap

      ## ref: http://docs.grafana.org/installation/ldap/#configuration

      ldap:

        enabled: false

        # `existingSecret` is a reference to an existing secret containing the ldap configuration

        # for Grafana in a key `ldap-toml`.

        existingSecret: ""

        # `config` is the content of `ldap.toml` that will be stored in the created secret

        config: ""

        # config: |-

        #   verbose_logging = true


        #   [[servers]]

        #   host = "my-ldap-server"

        #   port = 636

        #   use_ssl = true

        #   start_tls = false

        #   ssl_skip_verify = false

        #   bind_dn = "uid=%s,ou=users,dc=myorg,dc=com"


      ## Grafana's SMTP configuration

      ## NOTE: To enable, grafana.ini must be configured with smtp.enabled

      ## ref: http://docs.grafana.org/installation/configuration/#smtp

      smtp:

        # `existingSecret` is a reference to an existing secret containing the smtp configuration

        # for Grafana.

        existingSecret: ""

        userKey: "user"

        passwordKey: "password"


      ## Sidecars that collect the configmaps with specified label and stores the included files them into the respective folders

      ## Requires at least Grafana 5 to work and can't be used together with parameters dashboardProviders, datasources and dashboards

      sidecar:

        image:

          repository: kiwigrid/k8s-sidecar

          tag: 0.1.151

          sha: ""

        imagePullPolicy: IfNotPresent

        resources: {}

        #   limits:

        #     cpu: 100m

        #     memory: 100Mi

        #   requests:

        #     cpu: 50m

        #     memory: 50Mi

        # skipTlsVerify Set to true to skip tls verification for kube api calls

        # skipTlsVerify: true

        enableUniqueFilenames: false

        dashboards:

          enabled: false

          SCProvider: true

          # label that the configmaps with dashboards are marked with

          label: grafana_dashboard

          # folder in the pod that should hold the collected dashboards (unless `defaultFolderName` is set)

          folder: /tmp/dashboards

          # The default folder name, it will create a subfolder under the `folder` and put dashboards in there instead

          defaultFolderName: null

          # If specified, the sidecar will search for dashboard config-maps inside this namespace.

          # Otherwise the namespace in which the sidecar is running will be used.

          # It's also possible to specify ALL to search in all namespaces

          searchNamespace: null

          # provider configuration that lets grafana manage the dashboards

          provider:

            # name of the provider, should be unique

            name: sidecarProvider

            # orgid as configured in grafana

            orgid: 1

            # folder in which the dashboards should be imported in grafana

            folder: ''

            # type of the provider

            type: file

            # disableDelete to activate a import-only behaviour

            disableDelete: false

            # allow updating provisioned dashboards from the UI

            allowUiUpdates: false

        datasources:

          enabled: false

          # label that the configmaps with datasources are marked with

          label: grafana_datasource

          # If specified, the sidecar will search for datasource config-maps inside this namespace.

          # Otherwise the namespace in which the sidecar is running will be used.

          # It's also possible to specify ALL to search in all namespaces

          searchNamespace: null

        notifiers:

          enabled: false

          # label that the configmaps with notifiers are marked with

          label: grafana_notifier

          # If specified, the sidecar will search for notifier config-maps inside this namespace.

          # Otherwise the namespace in which the sidecar is running will be used.

          # It's also possible to specify ALL to search in all namespaces

          searchNamespace: null


      ## Override the deployment namespace

      ##

      namespaceOverride: ""

    • Create a new file helm-prometheus.tf and copy the code below

      resource "helm_release" "prometheus" {

        chart = "prometheus"

        name = "prometheus"

        namespace = "default"

        repository = "https://prometheus-community.github.io/helm-charts"


        # When you want to directly specify the value of an element in a map you need \\ to escape the point.

        set {

          name = "podSecurityPolicy\\.enabled"

          value = true

        }


        set {

          name = "server\\.persistentVolume\\.enabled"

          value = false

        }


        set {

          name = "server\\.resources"

          # You can provide a map of value using yamlencode  

          value = yamlencode({

            limits = {

              cpu = "200m"

              memory = "50Mi"

            }

            requests = {

              cpu = "100m"

              memory = "30Mi"

            }

          })

        }

      }

    • Create a new file helm-grafana.tf and copy the code below

      data "template_file" "grafana_values" {

          template = file("templates/grafana-values.yaml")


          vars = {

            GRAFANA_SERVICE_ACCOUNT = "grafana"

            GRAFANA_ADMIN_USER = "admin"

            GRAFANA_ADMIN_PASSWORD = var.grafana_password

            PROMETHEUS_SVC = "${helm_release.prometheus.name}-server"

            NAMESPACE = "default"

          }

      }


      resource "helm_release" "grafana" {

        chart = "grafana"

        name = "grafana"

        repository = "https://grafana.github.io/helm-charts"

        namespace = "default"


        values = [

          data.template_file.grafana_values.rendered

        ]

        set {

          name  = "service.type"

          value = "LoadBalancer"

        }

      }


    • Create a new file Jenkinsfile and copy the code below

    pipeline {

        agent {

          node {

            label "master"

          } 

        }


        parameters {

            choice(choices: ['dev', 'qa', 'prod'], description: 'Select Lifecycle to deploy', name: 'Environment')

            password(name: 'GrafanaPassword', description: 'Enter Grafana Password Here')

            choice(choices: ['master', 'feature_1', 'feature_2'], description: 'Select Branch to clone', name: 'Branch')

            choice(choices: ['m4.large', 'm4.xlarge', 'm4.2xlarge'], description: 'Select Instance Size', name: 'InstanceSize')

            choice(choices: ['1.18', '1.20', '1.21'], description: 'Select Kubernetes Version', name: 'KubeV')

            booleanParam(name: 'autoApprove', defaultValue: false, description: 'Automatically run apply after generating plan?')

            booleanParam(name: 'ACCEPTANCE_TESTS_LOG_TO_FILE', defaultValue: true, description: 'Should debug logs be written to a separate file?')

            choice(name: 'ACCEPTANCE_TESTS_LOG_LEVEL', choices: ['WARN', 'ERROR', 'DEBUG', 'INFO', 'TRACE'], description: 'The Terraform Debug Level')

        }



         environment {

            AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')

            AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')

            TF_LOG                = "${params.ACCEPTANCE_TESTS_LOG_LEVEL}"

            TF_LOG_PATH           = "${params.ACCEPTANCE_TESTS_LOG_TO_FILE ? 'tf_log.log' : '' }"

            TF_VAR_grafana_password = "${params.GrafanaPassword}"

            TF_VAR_env = "${params.Environment}"

            TF_VAR_instance_type = "${params.InstanceSize}"

            TF_VAR_kube_version = "${params.KubeV}"

            TF_VAR_environment = "${params.Branch}"

        }

    // 


        stages {

          stage('checkout') {

            steps {

                echo "Pulling changes from the branch ${params.Branch}"

                git credentialsId: 'bitbucket', url: 'https://bitbucket.org/username/eks-sample.git' , branch: "${params.Branch}"

            }

          }


            stage('terraform plan') {

                steps {

                    sh "pwd ; terraform init -input=true"

                    sh "terraform validate"

                    sh "terraform plan -input=true -out tfplan"

                    sh 'terraform show -no-color tfplan > tfplan.txt'

    }

                }

            

            stage('terraform apply approval') {

               when {

                   not {

                       equals expected: true, actual: params.autoApprove

                   }

               }


               steps {

                   script {

                        def plan = readFile 'tfplan.txt'

                        input message: "Do you want to apply the plan?",

                        parameters: [text(name: 'Plan', description: 'Please review the plan', defaultValue: plan)]

                   }

               }

           }


            stage('terraform apply') {

                steps {

                    sh "terraform apply -input=true tfplan"

                }

            }

            

            stage('terraform destroy approval') {

                steps {

                    input 'Run terraform destroy?'

                }

            }

            stage('terraform destroy') {

                steps {

                    sh 'terraform destroy -force'

                }

            }

        }


      }
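
    The TF_VAR_* environment variables above map to Terraform input variables of the same names, so the configuration needs matching declarations. A minimal sketch, assuming string variables and illustrative defaults (adjust to whatever your existing .tf files already declare):

    # variables.tf (sketch; defaults are illustrative)
    variable "grafana_password" {
      type      = string
      sensitive = true
    }

    variable "env" {
      type    = string
      default = "dev"
    }

    variable "instance_type" {
      type    = string
      default = "m4.large"
    }

    variable "kube_version" {
      type    = string
      default = "1.21"
    }

    variable "environment" {
      type    = string
      default = "master"
    }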


    • Map the IAM user to EKS using a Kubernetes Role & RoleBinding
    • Create a new file called kube-rolebinding.tf and paste the code below.
    • (This binds the IAM user terraform-user so it can view and update resources in the default namespace.)

    resource "kubernetes_role" "admin_role" {
      metadata {
        name      = "eks-console-dashboard-full-access-clusterrole"
        namespace = "default"
      }

      rule {
        api_groups = ["*"]
        resources  = ["*"]
        verbs      = ["get", "list", "patch", "update", "watch"]
      }
    }

    resource "kubernetes_role_binding" "admin_role_binding" {
      metadata {
        name      = "eks-console-dashboard-full-access-binding"
        namespace = "default"
      }

      role_ref {
        api_group = "rbac.authorization.k8s.io"
        kind      = "Role"
        name      = "eks-console-dashboard-full-access-clusterrole"
      }

      subject {
        kind      = "User"
        name      = "terraform-user"
        api_group = "rbac.authorization.k8s.io"
      }
    }
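
    Once the pipeline has applied this file, a quick sanity check (assuming your kubeconfig points at the cluster) is to list the objects in the default namespace:

    kubectl get role,rolebinding -n default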

    • Commit and push the code changes to the repo via the command line or VS Code

      • Run the following commands to commit the code to Bitbucket:
        - git pull
        - git add *
        - git commit -m "update"
        - git push

        OR

        In VS Code, open the Source Control icon in the sidebar (Note: this only works with SSH configured for Bitbucket)
      • Click the + icon to stage the changes
      • Enter a commit message
      • Push the changes by clicking on the 🔄 0 ⬇️ 1 ⬆️ indicator as shown below

     

    Run Pipeline Job

    • Go to the eks-pipeline job in Jenkins and run a build
    Note: The first build will fail; it is needed so Jenkins can pick up the parameters defined in the Jenkinsfile

    • The next time you run a build you should see the parameters as shown below


    • Select dev in the Environment field
    • Enter the Grafana password
    • Select master as the branch
    • Choose m4.large, m4.xlarge or m4.2xlarge as the instance size for the EKS cluster nodes
    • Choose Kubernetes version 1.18, 1.20 or 1.21
    • Check the ACCEPTANCE_TESTS_LOG_TO_FILE box to write Terraform debug logs to a file
    • Select TRACE as the log level for the most verbose Terraform output
    • Go to Console Output to track progress
    Note: You can abort the build at the destroy step; to delete the resources later, rerun that stage (installing the Blue Ocean plugin on Jenkins lets you restart the pipeline from a specific stage).


    • Configure AWS Credentials

    • Open a Git Bash terminal
    • Run the following command and enter the access key and secret key of the terraform-user when prompted:
    • aws configure


    Once configured, run vi ~/.aws/config and add the following block:

    [profile adminrole]
    role_arn = arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster
    source_profile = default
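
    To confirm the adminrole profile can actually assume the terraform-eks-cluster role, a quick check is:

    aws sts get-caller-identity --profile adminrole

    If the assumption succeeds, the returned ARN should be an assumed-role ARN for terraform-eks-cluster.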




    • Authenticate into the EKS Cluster
    • Open a terminal, cd ~, and update your kubeconfig for the cluster with the following commands (the second uses the adminrole profile):
    • aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev
    • aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev --profile adminrole
    • Run kubectl edit configmap aws-auth -n kube-system and change the configmap to the following:

    data:
      mapRoles: |
        - groups:
          - system:masters
          rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user
          username: "{{SessionName}}"
        - groups:
          - system:bootstrappers
          - system:nodes
          rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-tf-eks-node
          username: system:node:{{EC2PrivateDNSName}}
        - groups:
          - system:masters
          rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster
          username: "{{SessionName}}"
      mapUsers: |
        - userarn: arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user
          username: terraform-user
          groups:
          - system:masters

    Once edited, type :wq! to save and quit.
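
    To confirm the mapping was saved, you can print the ConfigMap back out:

    kubectl get configmap aws-auth -n kube-system -o yaml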


    • Verify EKS Cluster is Active and Nodes are Visible
    • Log in to the AWS Console with the terraform-user credentials
    • Navigate to EKS, select eks-sample-dev, and the nodes should be visible.
    • Open a terminal, cd ~, and log in to EKS with:
    • aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev --profile adminrole
    • Verify you are logged in with kubectl get nodes or kubectl get pods --all-namespaces

    • Verify Helm Deployment
      Note: This step is optional; it only lists the Helm releases.
    • Ensure you have Helm installed on your computer before running the following command:
    • helm list

    • Access Grafana Dashboard
    • Run kubectl get svc and copy Grafana's EXTERNAL-IP, which is a load balancer DNS name, as shown below.


    • You will be able to access Grafana from http://loadbalancer-dns
    • Log in with the username admin and the Grafana password you entered as the GrafanaPassword pipeline parameter. If Grafana prompts you to change the password on first login, set a new one.

      User: admin
      Pass: YOUR-GRAFANA-PASSWORD

     

    • Access Prometheus UI
    • You can now access the Prometheus UI with port forwarding using the following command:
    • kubectl port-forward -n default svc/prometheus-server 8080:80


    • You will be able to access Prometheus from
      http://127.0.0.1:8080
