- Create an IAM User
- Configure Terraform Backend with S3 Storage
- Setting up CD Pipeline for EKS Cluster
- Create Terraform Workspace for EKS Cluster
- Run Pipeline Job
- Configure AWS Credentials
- Map IAM User to EKS using ClusterRole & ClusterRoleBinding
- Verify EKS Cluster is Active and Nodes are Visible
- Create an IAM User
- Go to AWS Console
- Search for IAM as shown below
- Select Users and create a user called terraform-user with console access and the AdministratorAccess policy attached. Be sure to download the credentials once the user has been created, as they are required to log in to the EKS cluster.
- Navigate to Policies and create a policy called eks-assume
- Select JSON and paste the policy below (Note: replace AWS-ACCOUNT-NUMBER with your AWS account number):
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster"
  }
}
- Navigate to Policies and create a policy called eks-permission
- Select JSON and paste the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:AccessKubernetesApi",
        "ssm:GetParameter",
        "eks:ListUpdates",
        "eks:ListFargateProfiles"
      ],
      "Resource": "*"
    }
  ]
}
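If you prefer the AWS CLI, the same policies can be created from files. A minimal sketch, assuming the two JSON documents above are saved locally as eks-assume.json and eks-permission.json (hypothetical filenames):
# create the policies from the saved JSON documents
aws iam create-policy --policy-name eks-assume --policy-document file://eks-assume.json
aws iam create-policy --policy-name eks-permission --policy-document file://eks-permission.json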
- Create a group called eksgroup, attach the eks-permission and eks-assume policies to it, and add terraform-user to the group
- Verify the group and policies are attached to terraform-user by navigating to Users.
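The same check can also be done from the CLI (standard IAM calls; the names match the group and user created above):
# list the policies attached to the group and confirm the user's group membership
aws iam list-attached-group-policies --group-name eksgroup
aws iam list-groups-for-user --user-name terraform-user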
Configure Terraform Backend with S3 Storage
Create an S3 bucket in AWS to configure the backend and store the Terraform state files. (Name the S3 bucket whatever you prefer.)
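If you would rather create the bucket from the CLI, a minimal sketch (the bucket name is a placeholder; use your own, and keep the region consistent with the backend block in provider.tf):
# YOUR-TFSTATE-BUCKET is a placeholder; bucket names must be globally unique
aws s3api create-bucket --bucket YOUR-TFSTATE-BUCKET --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
# optional but recommended: enable versioning so older state files are retained
aws s3api put-bucket-versioning --bucket YOUR-TFSTATE-BUCKET --versioning-configuration Status=Enabled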
Setting up CD Pipeline for EKS Cluster
- Go to Jenkins > New Item. Enter eks-pipeline in the name field > Choose Pipeline > Click OK
Bitbucket Changes
- Create a new Bitbucket repo and call it eks-pipeline
- After creation, go to Repository settings and select Webhooks
- Click Add webhook
- Enter tf_token as the Title
- Copy and paste the Jenkins webhook URL as shown below
- Status should be Active
- Check Skip certificate verification
- Under Triggers, select Repository push
- Go back to Jenkins and select Configure
- Scroll down to Pipeline and click the drop-down to select Pipeline script from SCM
- Enter your Bitbucket credentials, leave the branch as master (the default), and make sure the script path is Jenkinsfile.
- Apply and Save.
Create Terraform Workspace for EKS Cluster
Open File Explorer, navigate to Desktop and create a folder called my-eks-cluster
Once the folder has been created, open Visual Studio Code and add the folder to your workspace
- Open a New Terminal
- Before cloning the repo, run: git init
- Navigate to eks-pipeline repo in Bitbucket
- Clone the repo with SSH or HTTPS
- Make sure to cd eks-pipeline and create the new files below in the eks-pipeline folder
- Create a new file for the cluster and autoscaling group (e.g. eks-cluster.tf) and copy the code below:
resource "aws_eks_cluster" "tf_eks" {
name = local.cluster_name
enabled_cluster_log_types = ["authenticator","api", "controllerManager", "scheduler"]
role_arn = aws_iam_role.tf-eks-master.arn
version = var.kube_version
vpc_config {
security_group_ids = [aws_security_group.eks-master-sg.id]
subnet_ids = data.aws_subnet_ids.public.ids
}
timeouts {
create = var.cluster_create_timeout
delete = var.cluster_delete_timeout
}
depends_on = [
aws_iam_role_policy_attachment.tf-cluster-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.tf-cluster-AmazonEKSServicePolicy,
]
tags = local.common_tags
}
########################################################################################
# Setup AutoScaling Group for worker nodes
########################################################################################
locals {
  tf-eks-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.tf_eks.endpoint}' --b64-cluster-ca '${aws_eks_cluster.tf_eks.certificate_authority.0.data}' '${local.cluster_name}'
USERDATA
}
resource "aws_launch_configuration" "config" {
associate_public_ip_address = true
iam_instance_profile = aws_iam_instance_profile.node.name
image_id = data.aws_ami.eks-worker.id
instance_type = var.instance_type
name_prefix = "my-eks-cluster"
security_groups = [aws_security_group.eks-node-sg.id, aws_security_group.worker_ssh.id]
user_data_base64 = base64encode(local.tf-eks-node-userdata)
key_name = var.keypair-name
lifecycle {
create_before_destroy = true
}
ebs_optimized = true
root_block_device {
volume_size = 100
delete_on_termination = true
}
}
resource "aws_autoscaling_group" "asg" {
desired_capacity = 2
launch_configuration = aws_launch_configuration.config.id
max_size = 2
min_size = 2
name = local.cluster_name
vpc_zone_identifier = data.aws_subnet_ids.public.ids
tag {
key = "eks-worker-nodes"
value = local.cluster_name
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${aws_eks_cluster.tf_eks.name}"
value = "owned"
propagate_at_launch = true
}
}
- Create a new file iam.tf and copy the code below (replace AWS-ACCOUNT-NUMBER with your AWS account number):
# Setup for IAM role needed to set up an EKS cluster
resource "aws_iam_role" "tf-eks-master" {
  name = "terraform-eks-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com",
        "AWS": "arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_iam_role_policy_attachment" "tf-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.tf-eks-master.name
}

resource "aws_iam_role_policy_attachment" "tf-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.tf-eks-master.name
}

resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNode" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.tf-eks-master.name
}

########################################################################################
# Setup IAM role & instance profile for worker nodes
resource "aws_iam_role" "tf-eks-node" {
  name = "terraform-eks-tf-eks-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.tf-eks-node.name
}

resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.tf-eks-node.name
}

resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.tf-eks-node.name
}

resource "aws_iam_instance_profile" "node" {
  name = "terraform-eks-node"
  role = aws_iam_role.tf-eks-node.name
}
- Create a new file kube.tf and copy the code below:
########################################################################################
# Setup provider for kubernetes
# ---------------------------------------------------------------------------------------
# Get an authentication token to communicate with the EKS cluster.
# By default (before other roles are added to the Auth ConfigMap), you can authenticate to EKS cluster only by assuming the role that created the cluster.
# `aws_eks_cluster_auth` uses IAM credentials from the AWS provider to generate a temporary token.
# If the AWS provider assumes an IAM role, `aws_eks_cluster_auth` will use the same IAM role to get the auth token.
# https://www.terraform.io/docs/providers/aws/d/eks_cluster_auth.html
data "aws_eks_cluster_auth" "aws_iam_authenticator" {
name = "${aws_eks_cluster.tf_eks.name}"
}
data "aws_iam_user" "terraform_user" {
user_name = "terraform-user"
}
locals {
# roles to allow kubernetes access via cli and allow ec2 nodes to join eks cluster
configmap_roles = [{
rolearn = "${data.aws_iam_user.terraform_user.arn}"
username = "{{SessionName}}"
groups = ["system:masters"]
},
{
rolearn = "${aws_iam_role.tf-eks-node.arn}"
username = "system:node:{{EC2PrivateDNSName}}"
groups = ["system:bootstrappers","system:nodes"]
},
{
rolearn = "${aws_iam_role.tf-eks-master.arn}"
username = "{{SessionName}}"
groups = ["system:masters"]
},]
}
# Allow worker nodes to join cluster via config map
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = yamlencode(local.configmap_roles)
}
}
locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.tf_eks.endpoint}
    certificate-authority-data: ${aws_eks_cluster.tf_eks.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${aws_eks_cluster.tf_eks.name}"
KUBECONFIG
}
- Create a new file output.tf and copy the code below:
output "eks_kubeconfig" {
value = "${local.kubeconfig}"
depends_on = [
aws_eks_cluster.tf_eks
]
}
- Create a new file provider.tf and copy the code below (replace S3-BUCKET-NAME with the bucket created earlier):
terraform {
  backend "s3" {
    bucket = "S3-BUCKET-NAME"
    key    = "eks/terraform.tfstate"
    region = "us-east-2"
  }
}

provider "aws" {
  region  = var.region
  version = "~> 2.0"
}

provider "kubernetes" {
  host                   = aws_eks_cluster.tf_eks.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.tf_eks.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.aws_iam_authenticator.token
}
- Create a new file sg-eks.tf and copy the code below:
# SG to control access to worker nodes
resource "aws_security_group" "eks-master-sg" {
  name        = "terraform-eks-cluster"
  description = "Cluster communication with worker nodes"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(
    local.common_tags,
    map(
      "Name", "eks-cluster",
      "kubernetes.io/cluster/${local.cluster_name}", "owned"
    )
  )
}

resource "aws_security_group" "eks-node-sg" {
  name        = "terraform-eks-node"
  description = "Security group for all nodes in the cluster"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(
    local.common_tags,
    map(
      "Name", "eks-worker-node",
      "kubernetes.io/cluster/${aws_eks_cluster.tf_eks.name}", "owned"
    )
  )
}

resource "aws_security_group" "worker_ssh" {
  name_prefix = "worker_ssh"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(
    local.common_tags,
    map(
      "Name", "worker_ssh"
    )
  )
}
- Create a new file sg-rules-eks.tf and copy the code below:
# Allow inbound traffic to the Kubernetes API server. For tighter security,
# replace 0.0.0.0/0 below with your workstation's external IP (e.g. A.B.C.D/32).
# Services like icanhazip.com can help you find this.
resource "aws_security_group_rule" "tf-eks-cluster-ingress-workstation-https" {
  cidr_blocks       = ["0.0.0.0/0"]
  description       = "Allow workstation to communicate with the cluster API Server"
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.eks-master-sg.id
  to_port           = 443
  type              = "ingress"
}

########################################################################################
# Setup worker node security group rules
resource "aws_security_group_rule" "tf-eks-node-ingress-self" {
  description              = "Allow nodes to communicate with each other"
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.eks-node-sg.id
  source_security_group_id = aws_security_group.eks-node-sg.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "tf-eks-node-ingress-cluster" {
  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  from_port                = 1025
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks-node-sg.id
  source_security_group_id = aws_security_group.eks-master-sg.id
  to_port                  = 65535
  type                     = "ingress"
}

# Allow worker nodes to access the EKS control plane
resource "aws_security_group_rule" "tf-eks-cluster-ingress-node-https" {
  description              = "Allow pods to communicate with the cluster API Server"
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks-node-sg.id
  source_security_group_id = aws_security_group.eks-master-sg.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "tf-eks-node-ingress-master" {
  description              = "Allow cluster control plane to receive communication from the worker Kubelets"
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks-master-sg.id
  source_security_group_id = aws_security_group.eks-node-sg.id
  to_port                  = 443
  type                     = "ingress"
}
- Create a new file variables.tf and copy the code below (replace the vpc_id, keypair-name, and creator defaults with your own values):
# Setup data source to get amazon-provided AMI for EKS nodes
data "aws_ami" "eks-worker" {
  filter {
    name   = "name"
    values = ["amazon-eks-node-1.21-*"]
  }
  most_recent = true
  owners      = ["602401143452"] # Amazon EKS AMI Account ID
}

data "aws_subnet_ids" "public" {
  vpc_id = var.vpc_id
  filter {
    name   = "tag:Name"
    values = ["subnet-public-*"]
  }
}

variable "region" {
  type    = string
  default = "us-east-2"
}

variable "cluster_create_timeout" {
  description = "Timeout value when creating the EKS cluster."
  type        = string
  default     = "30m"
}

variable "cluster_delete_timeout" {
  description = "Timeout value when deleting the EKS cluster."
  type        = string
  default     = "15m"
}

variable "vpc_id" {
  type    = string
  default = "PASTE-VPC-ID-HERE"
}

variable "keypair-name" {
  type    = string
  default = "KEY-NAME"
}

variable "creator" {
  description = "Creator of deployed servers"
  type        = string
  default     = "YOUR-NAME"
}

variable "instance_type" {}

variable "env" {}

## Application/workspace specific inputs
variable "app" {
  description = "Name of Application"
  type        = string
  default     = "my-eks"
}

variable "kube_version" {
  type        = string
  description = "Kubernetes version for eks"
}

## Tagging naming convention
locals {
  common_tags = {
    env     = var.env
    creator = var.creator
    app     = var.app
  }
  cluster_name = "${var.app}-${var.env}"
}
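To find values to paste into the vpc_id and keypair-name defaults, you can query AWS from the CLI (a quick sketch; adjust the region if yours differs):
# list VPC IDs and EC2 key pair names in us-east-2
aws ec2 describe-vpcs --region us-east-2 --query "Vpcs[].VpcId" --output text
aws ec2 describe-key-pairs --region us-east-2 --query "KeyPairs[].KeyName" --output text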
- Create a new file Jenkinsfile and copy the code below:
pipeline {
  agent {
    node {
      label "master"
    }
  }

  parameters {
    choice(choices: ['dev', 'qa', 'prod'], description: 'Select Lifecycle to deploy', name: 'Environment')
    choice(choices: ['master', 'feature_1', 'feature_2'], description: 'Select Branch to clone', name: 'Branch')
    choice(choices: ['m4.large', 'm4.xlarge', 'm4.2xlarge'], description: 'Select Instance Size', name: 'InstanceSize')
    choice(choices: ['1.18', '1.20', '1.21'], description: 'Select Kubernetes Version', name: 'KubeV')
    booleanParam(name: 'autoApprove', defaultValue: false, description: 'Automatically run apply after generating plan?')
    booleanParam(name: 'ACCEPTANCE_TESTS_LOG_TO_FILE', defaultValue: true, description: 'Should debug logs be written to a separate file?')
    choice(name: 'ACCEPTANCE_TESTS_LOG_LEVEL', choices: ['WARN', 'ERROR', 'DEBUG', 'INFO', 'TRACE'], description: 'The Terraform Debug Level')
  }

  environment {
    AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')
    AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
    TF_LOG                = "${params.ACCEPTANCE_TESTS_LOG_LEVEL}"
    TF_LOG_PATH           = "${params.ACCEPTANCE_TESTS_LOG_TO_FILE ? 'tf_log.log' : '' }"
    TF_VAR_env            = "${params.Environment}"
    TF_VAR_instance_type  = "${params.InstanceSize}"
    TF_VAR_kube_version   = "${params.KubeV}"
    TF_VAR_environment    = "${params.Branch}"
  }

  stages {
    stage('checkout') {
      steps {
        echo "Pulling changes from the branch ${params.Branch}"
        git credentialsId: 'bitbucket', url: 'https://bitbucket.org/username/eks-sample.git', branch: "${params.Branch}"
      }
    }

    stage('terraform plan') {
      steps {
        sh "pwd ; terraform init -input=true"
        sh "terraform validate"
        sh "terraform plan -input=true -out tfplan"
        sh 'terraform show -no-color tfplan > tfplan.txt'
      }
    }

    stage('terraform apply approval') {
      when {
        not {
          equals expected: true, actual: params.autoApprove
        }
      }
      steps {
        script {
          def plan = readFile 'tfplan.txt'
          input message: "Do you want to apply the plan?",
            parameters: [text(name: 'Plan', description: 'Please review the plan', defaultValue: plan)]
        }
      }
    }

    stage('terraform apply') {
      steps {
        sh "terraform apply -input=true tfplan"
      }
    }

    stage('terraform destroy approval') {
      steps {
        input 'Run terraform destroy?'
      }
    }

    stage('terraform destroy') {
      steps {
        sh 'terraform destroy -force'
      }
    }
  }
}
- Commit and push the code changes to the repo via the command line or VSCode
- Run the following commands to commit the code to Bitbucket:
- git pull
- git add *
- git commit -m "update"
- git push
OR
- In VSCode, open the Source Control icon in the side bar (Note: only works with SSH configured for Bitbucket)
- Click the + icon to stage changes
- Enter a commit message and commit
- Push changes by clicking the sync icon in the status bar as shown below
Run Pipeline Job
- Go to eks-pipeline on Jenkins and run build
- The first build registers the pipeline parameters; the next time you run a build you should see them as shown below
- Select dev in the Environment field
- Select master as the branch
- Choose m4.large, m4.xlarge or m4.2xlarge for the EKS worker nodes.
- Choose Kubernetes version 1.18, 1.20 or 1.21.
- Check ACCEPTANCE_TESTS_LOG_TO_FILE to write Terraform debug logs to a file (tf_log.log)
- Select TRACE under ACCEPTANCE_TESTS_LOG_LEVEL for the most verbose debug logging
- Go to Console Output to track progress
- Configure AWS Credentials
Open a Git Bash terminal
Run the following command and enter the access key and secret key of terraform-user when prompted:
- aws configure
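The prompts look roughly like this; the keys are the ones downloaded when terraform-user was created, and the region should match the region variable in variables.tf:
AWS Access Key ID [None]: <terraform-user access key>
AWS Secret Access Key [None]: <terraform-user secret key>
Default region name [None]: us-east-2
Default output format [None]: json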
Once configured, run vi ~/.aws/config and add the following block (replace AWS-ACCOUNT-NUMBER):
[profile adminrole]
role_arn = arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster
source_profile = default
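As an optional sanity check, confirm the adminrole profile can assume the role (a standard STS call; it should return the terraform-eks-cluster role identity):
aws sts get-caller-identity --profile adminrole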
- Map IAM User to EKS using ClusterRole & ClusterRoleBinding
- Run the following commands to update your kubeconfig. (Note: the cluster name is <app>-<env> from variables.tf, so with the defaults it is my-eks-dev; adjust the name below if yours differs.)
- aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev
- aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev --profile adminrole
- In the terminal, cd ~, create a file named rbac.yaml (for example with touch rbac.yaml), and paste the following:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console-dashboard-full-access-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: User
  name: terraform-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
Once the file is created, run kubectl apply -f rbac.yaml (this binds the ClusterRole, giving the IAM user permission to perform the listed operations).
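To confirm the objects were created, you can list them (standard kubectl calls; the names match the manifest above):
kubectl get clusterrole eks-console-dashboard-full-access-clusterrole
kubectl get clusterrolebinding eks-console-dashboard-full-access-binding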
Run kubectl edit configmap aws-auth -n kube-system and change the ConfigMap data to the following (replace AWS-ACCOUNT-NUMBER):
data:
  mapRoles: |
    - groups:
        - system:masters
      rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user
      username: "{{SessionName}}"
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-tf-eks-node
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
        - system:masters
      rolearn: arn:aws:iam::AWS-ACCOUNT-NUMBER:role/terraform-eks-cluster
      username: "{{SessionName}}"
  mapUsers: |
    - userarn: arn:aws:iam::AWS-ACCOUNT-NUMBER:user/terraform-user
      username: terraform-user
      groups:
        - system:masters
Once edited, type :wq! to save and quit.
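To confirm the edit took effect, you can print the ConfigMap back out (a standard kubectl call):
kubectl describe configmap aws-auth -n kube-system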
- Verify EKS Cluster is Active and Nodes are Visible
- Log in to the AWS Console with the terraform-user credentials
- Navigate to EKS and select your cluster (eks-sample-dev in this example); the nodes should be visible.
- Open a terminal, cd ~, and log in to EKS with:
- aws eks --region us-east-2 update-kubeconfig --name eks-sample-dev --profile adminrole
- Verify you are logged in with kubectl get nodes or kubectl get pods --all-namespaces
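If kubectl reports the wrong cluster, check the active kubeconfig context first (standard kubectl commands):
kubectl config current-context
kubectl config get-contexts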