Showing posts with label Terraform.

Saturday, 7 February 2026

Key Terraform Rules: Execution, Files, Folders and Directories

 

 Key Terraform Rule

Terraform loads and merges ALL .tf files in a directory automatically.

There is:

  • ❌ no “main file”

  • ❌ no execution order by filename

  • ✅ one configuration per directory

So:

terraform apply

applies everything in that folder.
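For example, a configuration split across two files behaves exactly as if it were written in one file. A minimal sketch (file names and resource names are illustrative):

```hcl
# variables.tf
variable "bucket_name" {
  type    = string
  default = "my-lab-bucket"
}

# bucket.tf -- can reference the variable even though it lives in another file
resource "aws_s3_bucket" "lab" {
  bucket = var.bucket_name
}
```

Terraform merges both files into one configuration before planning.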


✅ How You SHOULD structure your files

๐Ÿ“ Recommended folder structure

terraform-lab/
├── provider.tf
├── data.tf
├── instance.tf
├── outputs.tf
└── variables.tf

Terraform reads them all together.

LAB: BREAK YOUR MAIN.TF INTO DIFFERENT COMPONENTS

provider.tf

provider "aws" {
  region = "us-east-1"
}

data.tf

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

instance.tf

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.default.ids[0]

  tags = {
    Name = "terraform-lab"
  }
}

outputs.tf

output "instance_id" {
  value = aws_instance.web.id
}

▶️ Running Terraform

From the directory:

terraform init
terraform plan
terraform apply

Terraform automatically:

  • loads all .tf files

  • builds the dependency graph

  • applies in the correct order
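The "correct order" comes from references, not filenames. In this sketch (names illustrative), the security group is created before the instance only because the instance refers to it:

```hcl
resource "aws_security_group" "web_sg" {
  name = "web-sg"
}

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t3.micro"

  # This reference puts an edge in the dependency graph,
  # so the security group is always created first.
  vpc_security_group_ids = [aws_security_group.web_sg.id]
}
```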


❌ Common misconception

“Terraform executes files top to bottom”

Wrong.

Terraform:

  • builds a dependency graph

  • executes based on references

  • ignores file order and filenames


🧠 KEY TAKEAWAYS

Terraform directory = one application
.tf files = chapters in the same book

You don’t run chapters — you run the book.


🧪 Advanced (Optional): Lab separation strategies

Option A — New folder per lab (recommended for beginners)

labs/
├── lab1-default-vpc/
├── lab2-alb/
└── lab3-asg/

Option B — Same folder, comment/uncomment (not ideal)

Option C — Use variables / count (advanced)


⚠️ One important rule

Terraform only reads files in the current directory.

Subfolders are ignored unless you use modules (advanced topic).
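If you do want Terraform to read a subfolder, you must reference it explicitly as a module. A minimal sketch (the path and name are illustrative):

```hcl
module "network" {
  # The subfolder is only read because this block points at it
  source = "./modules/network"
}
```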


✅ Summary

  • You don’t “apply a file”

  • You apply a directory

  • Terraform merges all .tf files automatically

  • File naming is for human readability only


🧠 One-sentence takeaway for students

Terraform applies directories, not files.

Understanding VPCs and Filter Blocks in Terraform

 

Confirm the Default VPC (AWS Console)

  1. Open AWS Console → VPC

  2. Go to Your VPCs

  3. Identify the VPC marked Default = Yes

  4. Go to Subnets

    • Notice one subnet per Availability Zone

💡 Key Concept

EC2 instances are launched into subnets, and subnets belong to VPCs.


🔬 LAB 2 — Create Terraform Project

Create main.tf:

provider "aws" {
  region = "us-east-1"
}

Initialize:

terraform init

🔬 LAB 3 — Look Up the Default VPC (Data Source)

Add to main.tf:

data "aws_vpc" "default" {
  default = true
}

Add output:

output "default_vpc_id" {
  value = data.aws_vpc.default.id
}

Run:

terraform apply -auto-approve

✅ Terraform prints the default VPC ID.

💡 Key Concept

data blocks read existing infrastructure — they do NOT create anything.
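Another common read-only lookup is finding an AMI. A sketch using the aws_ami data source (the name pattern is illustrative):

```hcl
# Reads AWS -- creates nothing
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```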


🔬 LAB 4 — Find Subnets Using a filter Block (Core Concept)

Now we want subnets that belong ONLY to the default VPC.

Add:

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

Add output:

output "default_subnet_ids" {
  value = data.aws_subnets.default.ids
}

Apply:

terraform apply -auto-approve

๐Ÿ” Understanding the filter Block (IMPORTANT)

What the filter block does

It tells Terraform:
“Only return AWS resources that match this condition.”

In this case:

“Give me only the subnets that belong to the default VPC.”


Line-by-line explanation

filter {
  name   = "vpc-id"
  values = [data.aws_vpc.default.id]
}
  • filter {}
    Defines a condition AWS must match

  • name = "vpc-id"
    The AWS API attribute we are filtering on
    (This is an AWS field, not a Terraform keyword)

  • values = [...]
    Acceptable value(s) for that attribute
    Here, it dynamically uses the default VPC ID


What Terraform is doing behind the scenes

Terraform sends AWS a request like:

“List all subnets WHERE vpc-id = vpc-xxxxxxxx”

AWS returns only matching subnets.


Remember this

Think of AWS like a database:

SELECT * FROM subnets WHERE vpc_id = 'vpc-xxxxxxxx';

That’s exactly what the filter block does.
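Multiple filter blocks are ANDed together, like adding more WHERE conditions. A sketch (the availability-zone value is illustrative):

```hcl
data "aws_subnets" "default_in_1a" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }

  # Second condition: AND availability-zone = us-east-1a
  filter {
    name   = "availability-zone"
    values = ["us-east-1a"]
  }
}
```

The filter names ("vpc-id", "availability-zone") are fields of the underlying AWS DescribeSubnets API, not Terraform keywords.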


Why this is better than hardcoding

❌ Bad:

subnet_id = "subnet-0abc123"

✅ Good:

subnet_id = data.aws_subnets.default.ids[0]

Benefits:

  • Works across AWS accounts

  • Works across regions

  • Real-world Terraform pattern

⚠️ Note for students

The order of subnet IDs is not guaranteed.
Using [0] is fine for labs, but production code should be deterministic.
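One way to make the choice deterministic is to sort the IDs before indexing. A sketch:

```hcl
locals {
  # sort() returns a stable order, so [0] always picks the same subnet
  first_subnet_id = sort(data.aws_subnets.default.ids)[0]
}
```

You can then use local.first_subnet_id in place of data.aws_subnets.default.ids[0].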


🔬 LAB 5 — Launch EC2 in the Default VPC

Add:

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316" # Amazon Linux 2 (us-east-1)
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.default.ids[0]

  tags = {
    Name = "terraform-default-vpc-lab"
  }
}

Apply:

terraform apply -auto-approve

✅ EC2 instance launches in the default VPC.


🔬 LAB 6 — Use the Default Security Group (Optional but Best Practice)

Add:

data "aws_security_group" "default" {
  name   = "default"
  vpc_id = data.aws_vpc.default.id
}

Update EC2:

vpc_security_group_ids = [data.aws_security_group.default.id]

Apply again.

💡 Teaching Point

Never assume defaults — always declare dependencies explicitly.


🔬 LAB 7 — Cleanup (Critical Habit)

terraform destroy -auto-approve

🧠 Key Takeaways (Interview / Exam Ready)

  • aws_instance has no vpc_id

  • ✅ EC2 → Subnet → VPC

  • filter blocks safely query AWS

  • ❌ Hardcoding IDs is fragile

  • ✅ Default VPC is OK for labs, not production



Friday, 11 March 2022

How to Build a Terraform Script: Scratch to Skyscraper

Step 1: Create the main terraform config file - main.tf

Go to your Terraform workspace and launch VS Code.


Step 2: Get the Code Block for The Provider Section

Go to https://registry.terraform.io/ and select Browse Providers.


Step 3: Select your provider: AWS

Step 4: Copy the code block skeleton below and paste it into your main.tf file:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
}

Step 5: Let's get the config options. Click on Documentation and scroll down to the usage example:



Step 6: Copy the config options from the usage example and replace the region, access key, and secret key with your own. (Note: hardcoding credentials in a .tf file is insecure; prefer environment variables or a shared credentials file.)
So our code will now look like below:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "ur access key"
  secret_key = "ur secret key"
}


Step 7: Now let's add default tags to our code. Scroll down on the page to the default tags usage example and copy the code (modify as required).


See the code to copy below; add it below secret_key:
default_tags {
    tags = {
      Environment = "Test"
      Name        = "Provider Tag"
    }
  }
Our Code will now look like this:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "AKetetettetetettetwwuquuququq"
  secret_key = "wtwtetetett2tt22266262wfwffwf"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

This takes care of our connection to AWS. The next thing is to create a resource. To do this, browse the available resources for AWS.

AWS has lots of resources; scroll to the one you want. In this example we will create an EC2 instance.

Copy the aws_instance resource block and add it to your script; this will form our template.
So our code will look like this:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "Axccxcxcxcxccxcxcxccxcxc"
  secret_key = "pxxcxcxcxccxvxvvx"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "web" {

}

Now that we have the resource block ready, it's time to inject the config values for the instance resource.


Step 8: Go to the registry at https://registry.terraform.io/ and click Browse Modules. We want to create an EC2 instance, so we will select a module for that.

In the search box, type ec2 and look for the module for creating an EC2 instance; scroll to find it under Modules.




Scroll down, copy the module's example block, and add it to your code.
Please note, the following variables from the module example aren't required here:
  • name - not required
  • source - not required
  • version - not required

Our new code will look like below

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "Axccxcxcxcxccxcxcxccxcxc"
  secret_key = "pxxcxcxcxccxvxvvx"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "myec2-instance" {
  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "Augustkey"
  vpc_security_group_ids = ["sg-12345678"]
  tags = {
    Terraform   = "true"
    Environment = "Dev"
    Name        = "My_ec2 instance"
  }
}

Now modify vpc_security_group_ids so that it references the security group we will create below:
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

Our code will now look like below:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "xcxcxcxcvxvxbxbbxbxbx"
  secret_key = "sggsgsgsgsggsgsgsggs"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "single-instance" {
  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "Augustkey"
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
  tags = {
    Terraform   = "true"
    Environment = "Dev"
    Name        = "My_ec2 instance"
  }
}


Now to add the Security Group resource:

Copy the code and add it at the bottom of the script. We will modify it to suit our environment.

Modify the ingress ports to suit your environment; the egress block doesn't need to be changed.
vpc_id is optional.
Our code will now look like:


terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.4.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region     = "us-east-2"
  access_key = "xcxcxcxcvxvxbxbbxbxbx"
  secret_key = "sggsgsgsgsggsgsgsggs"
  default_tags {
    tags = {
      Environment = "Dev"
      Name        = "aws_dev"
    }
  }
}

resource "aws_instance" "single-instance" {
  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "Augustkey"
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
  tags = {
    Terraform   = "true"
    Environment = "Dev"
    Name        = "My_ec2 instance"
  }
}

resource "aws_security_group" "ec2_sg" {
  name        = "ec2-dev-sg"
  description = "EC2 SG"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  # Allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "ec2-dev-sg"
  }
}

Now we have all the code blocks to create our environment. Save the file.
In your terminal, enter:
$ terraform init
$ terraform plan
$ terraform apply

Friday, 4 March 2022

Setting Up a CD Pipeline for Terraform

 

  • Go back to Jenkins and select your terraform pipeline and click  Configure
  • Scroll down to Pipeline and click on the drop down to select Pipeline Script From SCM
  • Enter credentials for Bitbucket, Leave the Branch master as the default, Make sure script path is Jenkinsfile
  • Right click on Pipeline Syntax and open in a new tab. 
  • Choose Checkout from Version Control in the Sample Step field
  • Enter Bitbucket Repository URL and Credentials, leave the branches blank
  • Click GENERATE PIPELINE SCRIPT, copy credentialsId and url (This is required for Jenkinsfile script)



Create Workspace for Terraform Pipeline
  • Open File Explorer, navigate to Desktop and create a folder cd_pipeline

  • Once folder has been created, open Visual Code Studio and add folder to workspace







  • Open a New Terminal
  • Run the command before cloning repo: git init
  • Navigate to terraform-pipeline repo in Bitbucket
  • Clone the repo with SSH or HTTPS
  • Create a new file main.tf and copy the code below
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.region
}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

resource "aws_instance" "ec2" {
  ami                    = "ami-0782e9ee97725263d" # MUST be Ubuntu 22 if using deploy.sh
  instance_type          = var.instance_type
  subnet_id              = data.aws_subnets.default.ids[0]
  vpc_security_group_ids = [aws_security_group.ec2_SecurityGroups.id]

  user_data = file("deploy.sh")

  root_block_device {
    volume_type           = "gp3"
    volume_size           = 200
    delete_on_termination = true
    encrypted             = true
  }

  tags = {
    Name        = "u2-${var.environment}-${var.application}"
    CreatedBy   = var.launched_by
    Application = var.application
    OS          = var.os
    Environment = var.environment
  }
}

output "ec2_private_ip"  { value = aws_instance.ec2.private_ip }
output "ec2_public_ip"   { value = aws_instance.ec2.public_ip }
output "ec2_name"        { value = aws_instance.ec2.tags["Name"] }
output "ec2_instance_id" { value = aws_instance.ec2.id }



  • Create a new file security.tf and copy the code below

resource "aws_security_group" "ec2_SecurityGroups" {
  name        = "u2-${var.environment}-sg-${var.application}"
  description = "EC2 SG"
  vpc_id      = data.aws_vpc.default.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8081
    to_port     = 8081
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8082
    to_port     = 8082
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

  • Create a new file variable.tf and copy the code below.

variable "region" {
  type    = string
  default = "us-east-2"
}

variable "os" {
  type    = string
  default = "Ubuntu"
}

variable "launched_by" {
  type    = string
  default = "USER"
}

variable "instance_type" { type = string }
variable "application"   { type = string }
variable "environment"   { type = string }
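The last three variables have no default, so values must be supplied at plan/apply time. In this pipeline they arrive as TF_VAR_* environment variables set in the Jenkinsfile; for local testing you could use a terraform.tfvars file instead (values illustrative):

```hcl
# terraform.tfvars -- loaded automatically by terraform plan/apply
instance_type = "t2.small"
application   = "artifactory"
environment   = "dev"
```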




Bash Script to Deploy Artifactory

  • Create a new file deploy.sh and copy the code below.
#!/usr/bin/env bash
set -euo pipefail

# Log user-data output
exec > >(tee /var/log/user-data-artifactory.log | logger -t user-data -s 2>/dev/console) 2>&1

ART_VERSION="7.21.5"
ART_TGZ="jfrog-artifactory-oss-${ART_VERSION}-linux.tar.gz"
ART_URL="https://releases.jfrog.io/artifactory/bintray-artifactory/org/artifactory/oss/jfrog-artifactory-oss/${ART_VERSION}/${ART_TGZ}"

BASE_DIR="/opt/artifactory"
APP_DIR="${BASE_DIR}/app"     # final target
RUN_USER="artifactory"

echo "==== Installing prerequisites ===="
apt-get update -y
apt-get install -y curl tar gzip unzip jq

echo "==== Installing OpenJDK 17 ===="
apt-get install -y openjdk-17-jdk
java -version || true

echo "==== Creating artifactory user (if needed) ===="
if ! id "${RUN_USER}" >/dev/null 2>&1; then
  useradd --system --create-home --home-dir /var/opt/jfrog --shell /usr/sbin/nologin "${RUN_USER}"
fi

echo "==== Creating directories ===="
mkdir -p "${BASE_DIR}"
cd "${BASE_DIR}"

echo "==== Downloading Artifactory OSS ${ART_VERSION} ===="
# download only if not already present
if [ ! -f "${BASE_DIR}/${ART_TGZ}" ]; then
  curl -fL "${ART_URL}" -o "${BASE_DIR}/${ART_TGZ}"
fi

echo "==== Extracting ===="
# Clean previous extract folder if exists (safe for fresh builds)
rm -rf "${BASE_DIR}/artifactory-oss-${ART_VERSION}" || true
tar -xzf "${BASE_DIR}/${ART_TGZ}"

echo "==== Normalizing folder layout to /opt/artifactory/app ===="
# The tarball extracts as: artifactory-oss-7.21.5/{app,var}
# We'll move that folder to /opt/artifactory/app so you end up with:
# /opt/artifactory/app/app/bin/artifactory.sh (matches your confirmed structure)
rm -rf "${APP_DIR}" || true
mv "${BASE_DIR}/artifactory-oss-${ART_VERSION}" "${APP_DIR}"

echo "==== Setting ownership ===="
chown -R "${RUN_USER}:${RUN_USER}" "${BASE_DIR}"

echo "==== Creating systemd unit ===="
cat >/etc/systemd/system/artifactory.service <<'EOF'
[Unit]
Description=JFrog Artifactory OSS
After=network.target

[Service]
Type=forking
User=artifactory
Group=artifactory
WorkingDirectory=/opt/artifactory/app
ExecStart=/opt/artifactory/app/app/bin/artifactory.sh start
ExecStop=/opt/artifactory/app/app/bin/artifactory.sh stop
TimeoutStartSec=180
TimeoutStopSec=180
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

echo "==== Enabling and starting Artifactory ===="
systemctl daemon-reload
systemctl enable artifactory.service
systemctl restart artifactory.service

echo "==== Status (last 20 lines) ===="
systemctl --no-pager --full status artifactory.service | tail -n 40 || true

echo "==== Done. Access Artifactory on port 8081 (default) ===="





  • Create a new file Jenkinsfile and copy the code below.

pipeline {
    agent any
    parameters {
        string(name: 'AppName', defaultValue: 'Enter App Name', description: 'Name of application', )
        choice(choices: ['main', 'dev', 'qa', 'prod'], description: 'Select lifecycle to Deploy', name: 'Branch')
        choice(choices: ['t3.micro', 't2.small', 't2.medium'], description: 'Select Instance Size', name: 'InstanceSize')
        booleanParam(name: 'autoApprove', defaultValue: false, description: 'Automatically run apply after generating plan?')
    }


     environment {
        AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        TF_VAR_instance_type = "${params.InstanceSize}"
        TF_VAR_environment = "${params.Branch}"
        TF_VAR_application = "${params.AppName}"
    }
// 

    stages {
      stage('checkout') {
        steps {
            echo "Pulling changes from the branch ${params.Branch}"
            git credentialsId: '3eb34bc6-f086-49b6-bf20-8ca407bf2063', url: 'https://maworld9284@bitbucket.org/maworld9284/terraform.git' , branch: "${params.Branch}"
        }
      }

        stage('terraform plan') {
            steps {
                sh "pwd ; terraform init -input=true"
                sh "terraform plan -input=true -out tfplan"
                sh 'terraform show -no-color tfplan > tfplan.txt'
            }
        }
        
        stage('terraform apply approval') {
           when {
               not {
                   equals expected: true, actual: params.autoApprove
               }
           }

           steps {
               script {
                    def plan = readFile 'tfplan.txt'
                    input message: "Do you want to apply the plan?",
                    parameters: [text(name: 'Plan', description: 'Please review the plan', defaultValue: plan)]
               }
           }
       }

        stage('terraform apply') {
            steps {
                sh "terraform apply -input=true tfplan"
            }
        }
        
        stage('terraform destroy approval') {
            steps {
                input 'Run terraform destroy?'
            }
        }
        stage('terraform destroy') {
            steps {
                sh 'terraform destroy -auto-approve'
            }
        }
    }

  }

  • Commit and push code changes to Repo with the following:
    • In Vscode, navigate to Source Code Icon on the right tabs on the side
    • Enter commit message
    • Click the + icon to stage changes 

    • Push changes by clicking the sync icon (🔄 0 ⬇️ 1 ⬆️) in the status bar

Run Pipeline Job

  • Go to terraform-pipeline on Jenkins and run build 
Note: The pipeline job will fail the first time; Jenkins needs one run to pick up the parameters defined in the Jenkinsfile

  • On the next build, the parameters defined in the Jenkinsfile will appear

  • Enter Artifactory in the AppName field
  • Select a Branch/Lifecycle to deploy server
  • Choose t2.small or t2.medium for Artifactory server.
  • Go to Console Output to track progress
Note: You can abort the destroy step and rerun just that step later (for example, with the Blue Ocean plugin installed on Jenkins) to delete the resources created.


Install Pipeline Stage View Plugin

If the stage graph still doesn't show, install the modern plugin Pipeline: Stage View (plugin ID: pipeline-stage-view):

  • Go to Manage Jenkins → Plugins → Available
  • Search for Pipeline Stage View
  • Install and restart.

Bash Script To Install Ansible Automation Platform ( AWX)

#!/bin/bash # --- Configuration --- AWX_OPERATOR_VERSION="2.19.1" NAMESPACE="awx" KUBECONFIG_PATH="/etc/rancher/k3s...