Saturday, 28 February 2026

Bash Script To Install Ansible Automation Platform (AWX)

#!/bin/bash
set -euo pipefail

# --- Configuration ---
AWX_OPERATOR_VERSION="2.19.1"
NAMESPACE="awx"
KUBECONFIG_PATH="/etc/rancher/k3s/k3s.yaml"

echo "🧹 Phase 1: Cleaning up existing K3s for a fresh start..."
if [ -f /usr/local/bin/k3s-uninstall.sh ]; then
  /usr/local/bin/k3s-uninstall.sh
fi

# Remove old manifests to avoid conflicts
rm -f kustomization.yaml awx-instance.yaml

echo "📦 Phase 2: Installing fresh K3s..."
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
export KUBECONFIG="$KUBECONFIG_PATH"

echo "⏳ Waiting for K3s node to reach 'Ready' state..."
sleep 20
kubectl wait --for=condition=Ready "node/$(hostname)" --timeout=90s

# Create the namespace idempotently
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

echo "🏗️ Phase 3: Deploying AWX Operator via Kustomize (with image fixes)..."

# This kustomization solves the 404 URL error AND the gcr.io ImagePullBackOff error
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=$AWX_OPERATOR_VERSION
images:
  - name: quay.io/ansible/awx-operator
    newTag: $AWX_OPERATOR_VERSION
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: quay.io/brancz/kube-rbac-proxy
    newTag: v0.15.0
namespace: $NAMESPACE
EOF

# Apply the operator
kubectl apply -k .

echo "📝 Phase 4: Creating AWX instance manifest..."
cat <<EOF > awx-instance.yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: $NAMESPACE
spec:
  service_type: nodeport
  postgres_storage_class: local-path
EOF

# Ensure CRDs are registered before applying the instance
echo "🛰️ Waiting for CRDs to settle, then deploying AWX instance..."
sleep 20
kubectl apply -f awx-instance.yaml

echo "----------------------------------------------------------"
echo "🚀 AWX DEPLOYMENT INITIALIZED"
echo "----------------------------------------------------------"

# Final phase: credential discovery
echo "🔑 Waiting for AWX to generate the admin password..."
until kubectl get secret awx-demo-admin-password -n "$NAMESPACE" &> /dev/null; do
  echo -n "."
  sleep 10
done

# Grab details automatically
ADMIN_PASS=$(kubectl get secret awx-demo-admin-password -n "$NAMESPACE" -o jsonpath='{.data.password}' | base64 --decode)
NODE_PORT=$(kubectl get svc awx-demo-service -n "$NAMESPACE" -o jsonpath='{.spec.ports[0].nodePort}')
SERVER_IP=$(hostname -I | awk '{print $1}')

echo -e "\n\n✅ INSTALL COMPLETE!"
echo "----------------------------------------------------------"
echo "ACCESS URL: http://$SERVER_IP:$NODE_PORT"
echo "USERNAME:   admin"
echo "PASSWORD:   $ADMIN_PASS"
echo "----------------------------------------------------------"
echo "🔍 Watch progress: kubectl get pods -n $NAMESPACE -w"



-------------------------------------------------------------------------------------------------------------------









Run the command below to retrieve the AWX admin password:

kubectl get secret awx-demo-admin-password -n awx -o jsonpath='{.data.password}' | base64 --decode; echo
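This pipeline works because Kubernetes stores Secret values base64-encoded; the jsonpath expression pulls the encoded field and base64 --decode recovers the plaintext. A minimal local illustration of the decode step (the sample value is made up):

```shell
# Encode a sample value the way Kubernetes stores it, then decode it back
ENCODED=$(printf 'S3cretPass' | base64)
printf '%s' "$ENCODED" | base64 --decode
echo
# prints: S3cretPass
```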


# Find the NodePort (the 5-digit number after the '80:')

kubectl get svc awx-demo-service -n awx


# Find your Public/Private IP

hostname -I | awk '{print $1}'

Saturday, 7 February 2026

Key Terraform Rule: Execution, Files, Folders, and Directories


Terraform loads and merges ALL .tf files in a directory automatically.

There is:

  • ❌ no “main file”

  • ❌ no execution order by filename

  • ✅ one configuration per directory

So:

terraform apply

applies everything in that folder.
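A quick way to convince yourself: create two hypothetical .tf files in a scratch directory. Terraform would treat them as a single configuration; the filenames below are invented for illustration, and actually running apply requires Terraform installed:

```shell
# Two .tf files in one directory form ONE configuration
DEMO=$(mktemp -d)
printf 'output "from_network" { value = "network" }\n' > "$DEMO/network.tf"
printf 'output "from_compute" { value = "compute" }\n' > "$DEMO/compute.tf"
# 'terraform apply' run in $DEMO would print BOTH outputs;
# the filenames only organize the code for humans
ls "$DEMO"
```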


✅ How You SHOULD structure your files

๐Ÿ“ Recommended folder structure

terraform-lab/
├── provider.tf
├── data.tf
├── instance.tf
├── outputs.tf
└── variables.tf

Terraform reads them all together.

LAB: BREAK YOUR MAIN.TF INTO SEPARATE COMPONENTS

provider.tf

provider "aws" { region = "us-east-1" }

data.tf

data "aws_vpc" "default" { default = true } data "aws_subnets" "default" { filter { name = "vpc-id" values = [data.aws_vpc.default.id] } }

instance.tf

resource "aws_instance" "web" { ami = "ami-0c02fb55956c7d316" instance_type = "t3.micro" subnet_id = data.aws_subnets.default.ids[0] tags = { Name = "terraform-lab" } }

outputs.tf

output "instance_id" { value = aws_instance.web.id }

▶️ Running Terraform

From the directory:

terraform init
terraform plan
terraform apply

Terraform automatically:

  • loads all .tf files

  • builds the dependency graph

  • applies in the correct order


❌ Common misconception

“Terraform executes files top to bottom”

Wrong.

Terraform:

  • builds a dependency graph

  • executes based on references

  • ignores file order and filenames


🧠 KEY TAKEAWAYS

Terraform directory = one application
.tf files = chapters in the same book

You don’t run chapters — you run the book.


🧪 Advanced (Optional): Lab separation strategies

Option A — New folder per lab (recommended for beginners)

labs/
├── lab1-default-vpc/
├── lab2-alb/
└── lab3-asg/

Option B — Same folder, comment/uncomment (not ideal)

Option C — Use variables / count (advanced)


⚠️ One important rule

Terraform only reads files in the current directory.

Subfolders are ignored unless you use modules (advanced topic).
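If you do want Terraform to pull in a subfolder, a module block is the mechanism. A minimal sketch, assuming a lab folder laid out like Option A above (the path is hypothetical):

```hcl
# Hypothetical: include a lab subfolder as a module
module "lab1" {
  source = "./labs/lab1-default-vpc"
}
```

After adding a module block, run terraform init again so Terraform registers the new module.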


✅ Recap

  • You don’t “apply a file”

  • You apply a directory

  • Terraform merges all .tf files automatically

  • File naming is for human readability only


🧠 One-sentence takeaway for students

Terraform applies directories, not files.

Understanding VPCs and filter Blocks in Terraform

 

Confirm the Default VPC (AWS Console)

  1. Open AWS Console → VPC

  2. Go to Your VPCs

  3. Identify the VPC marked Default = Yes

  4. Go to Subnets

    • Notice one subnet per Availability Zone

💡 Key Concept

EC2 instances are launched into subnets, and subnets belong to VPCs.


🔬 LAB 2 — Create Terraform Project

Create main.tf:

provider "aws" { region = "us-east-1" }

Initialize:

terraform init

🔬 LAB 3 — Look Up the Default VPC (Data Source)

Add to main.tf:

data "aws_vpc" "default" { default = true }

Add output:

output "default_vpc_id" { value = data.aws_vpc.default.id }

Run:

terraform apply -auto-approve

✅ Terraform prints the default VPC ID.

💡 Key Concept

data blocks read existing infrastructure — they do NOT create anything.


🔬 LAB 4 — Find Subnets Using a filter Block (Core Concept)

Now we want subnets that belong ONLY to the default VPC.

Add:

data "aws_subnets" "default" { filter { name = "vpc-id" values = [data.aws_vpc.default.id] } }

Add output:

output "default_subnet_ids" { value = data.aws_subnets.default.ids }

Apply:

terraform apply -auto-approve

๐Ÿ” Understanding the filter Block (IMPORTANT)

What the filter block does

It tells Terraform:
“Only return AWS resources that match this condition.”

In this case:

“Give me only the subnets that belong to the default VPC.”


Line-by-line explanation

filter {
  name   = "vpc-id"
  values = [data.aws_vpc.default.id]
}
  • filter {}
    Defines a condition AWS must match

  • name = "vpc-id"
    The AWS API attribute we are filtering on
    (This is an AWS field, not a Terraform keyword)

  • values = [...]
    Acceptable value(s) for that attribute
    Here, it dynamically uses the default VPC ID


What Terraform is doing behind the scenes

Terraform sends AWS a request like:

“List all subnets WHERE vpc-id = vpc-xxxxxxxx”

AWS returns only matching subnets.


remember this

Think of AWS like a database:

SELECT * FROM subnets WHERE vpc_id = 'vpc-xxxxxxxx';

That’s exactly what the filter block does.
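To make the analogy concrete, the same WHERE-style selection can be mimicked on plain text with standard tools; the subnet and VPC IDs below are made up:

```shell
# Keep only the rows (subnet-id vpc-id) whose second column matches the target VPC
printf 'subnet-aaa vpc-111\nsubnet-bbb vpc-222\nsubnet-ccc vpc-111\n' \
  | awk '$2 == "vpc-111" { print $1 }'
# prints: subnet-aaa then subnet-ccc
```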


Why this is better than hardcoding

❌ Bad:

subnet_id = "subnet-0abc123"

✅ Good:

subnet_id = data.aws_subnets.default.ids[0]

Benefits:

  • Works across AWS accounts

  • Works across regions

  • Real-world Terraform pattern

⚠️ Note for students

The order of subnet IDs is not guaranteed.
Using [0] is fine for labs, but production code should be deterministic.
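One hedged way to make the choice deterministic is to look up a single subnet by a stable attribute instead of indexing the list. A sketch using the singular aws_subnet data source; the availability zone is an assumption about your account:

```hcl
# Pin the subnet by availability zone rather than relying on list order
data "aws_subnet" "primary" {
  vpc_id            = data.aws_vpc.default.id
  availability_zone = "us-east-1a" # assumption: the default VPC has a subnet here
}
```

The instance would then reference data.aws_subnet.primary.id instead of data.aws_subnets.default.ids[0].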


🔬 LAB 5 — Launch EC2 in the Default VPC

Add:

resource "aws_instance" "web" { ami = "ami-0c02fb55956c7d316" # Amazon Linux 2 (us-east-1) instance_type = "t3.micro" subnet_id = data.aws_subnets.default.ids[0] tags = { Name = "terraform-default-vpc-lab" } }

Apply:

terraform apply -auto-approve

✅ EC2 instance launches in the default VPC.
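The hardcoded AMI ID above is region-specific and will age out. The same anti-hardcoding principle from LAB 4 applies here; a common alternative is an aws_ami data source lookup (a sketch, not verified against your account):

```hcl
# Look up the latest Amazon Linux 2 AMI instead of hardcoding the ID
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```

The instance would then set ami = data.aws_ami.amazon_linux_2.id, which works across regions.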


🔬 LAB 6 — Use the Default Security Group (Optional but Best Practice)

Add:

data "aws_security_group" "default" { name = "default" vpc_id = data.aws_vpc.default.id }

Update EC2:

vpc_security_group_ids = [data.aws_security_group.default.id]

Apply again.

💡 Teaching Point

Never assume defaults — always declare dependencies explicitly.


🔬 LAB 7 — Cleanup (Critical Habit)

terraform destroy -auto-approve

🧠 Key Takeaways (Interview / Exam Ready)

  • aws_instance has no vpc_id

  • ✅ EC2 → Subnet → VPC

  • filter blocks safely query AWS

  • ❌ Hardcoding IDs is fragile

  • ✅ Default VPC is OK for labs, not production


