Sunday, 15 March 2026

HealthPulse Portal — Complete Capstone Project

**HealthPulse Inc.** is a healthcare technology startup that has built a patient portal as a **React/TypeScript single-page application**. The application allows patients to view appointments, lab results, medications, and communicate with their care team.


Currently, the development team **manually builds and deploys** the application by:

1. Running `npm run build` on a developer's laptop

2. SCP-ing the `dist/` folder to a single Nginx server

3. SSHing into the server and restarting Nginx

The application is currently hosted on their server at:

https://healthpulse-capstone.vercel.app/

This process takes **45 minutes per deployment**, is error-prone, and has caused **3 production outages** in the last quarter from misconfigurations. There is **no testing in the pipeline**, **no code quality checks**, **no security scanning**, and **no monitoring**.


**HealthPulse Inc. has hired your DevOps team** to design and implement a complete CI/CD pipeline, multi-environment infrastructure, container orchestration, and observability platform on **AWS**.


---

## Application Details


| Item | Detail |
|------|--------|
| **App Name** | HealthPulse Portal |
| **Tech Stack** | React 18, TypeScript, Vite, shadcn/ui, Tailwind CSS |
| **Testing** | Vitest (unit), Playwright (e2e) |
| **Build Output** | Static files (`dist/`) served by Nginx |
| **Container** | Multi-stage Dockerfile (Node build → Nginx serve) |
| **Health Endpoint** | `GET /health` → `{"status":"healthy"}` |



Stack: React 18 + TypeScript + Vite + shadcn/ui + Tailwind CSS + Recharts



## Repository Structure

healthpulse-capstone/
├── src/                        # Application source code
│   ├── components/ui/          # shadcn/ui components
│   ├── components/layout/      # Layout (Sidebar, Header)
│   ├── pages/                  # Login, Dashboard, Appointments, LabResults, etc.
│   ├── data/                   # Mock data
│   ├── types/                  # TypeScript types
│   ├── lib/                    # Utilities
│   └── test/                   # Unit tests
├── tests/e2e/                  # Playwright e2e tests

## Tools & Technologies

| Category | Tool | Purpose |
|----------|------|---------|
| **CI/CD** | Jenkins OR GitLab CI OR Azure DevOps | Pipeline automation (student chooses one) |
| **Cloud** | AWS (ECS Fargate, ALB, VPC, Route 53) | Infrastructure hosting |
| **IaC** | Terraform | Infrastructure provisioning |
| **Config Mgmt** | Ansible Tower | Application deployment & rollback |
| **Containers** | Docker | Application containerization |
| **Orchestration** | Kubernetes (EKS) | Container orchestration |
| **Artifact Repo** | JFrog Artifactory | Docker images + build artifacts |
| **Code Quality** | SonarQube | Static analysis + code coverage |
| **Security** | Snyk | Dependency vulnerability scanning |
| **Monitoring** | Datadog | Infrastructure + application monitoring |
| **Version Control** | Git (Bitbucket/GitHub/GitLab) | Source code management |

---






### TASK A: Documentation Platform (Docs-as-Code)

Set up a **MkDocs Material** documentation site using the docs-as-code approach. Documentation lives in the Git repository as Markdown files and is built/served via Docker.

#### Why Docs-as-Code?
This is how top DevOps teams (AWS, Kubernetes, Terraform) manage documentation — Markdown files in Git, built by CI, deployed as a static site. You'll use the same multi-stage Docker pattern as the main application.

| Requirement | Detail |
|-------------|--------|
| Tool | MkDocs with Material theme |
| Container Port | `84` |
| Build | Multi-stage Docker (mkdocs build → nginx serve) |
| Dev Mode | `mkdocs serve` with live reload on port `8084` |
| Location | `docs/` directory in the deployment repo |
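The multi-stage build described above could be sketched like this (a sketch, not the provided file — base-image tags and the port-84 server block are assumptions):

```dockerfile
# Stage 1: build the static site with mkdocs-material
FROM python:3.12-alpine AS build
WORKDIR /docs
RUN pip install --no-cache-dir mkdocs-material
COPY . .
RUN mkdocs build

# Stage 2: serve the built site with nginx
FROM nginx:alpine
COPY --from=build /docs/site /usr/share/nginx/html
# Listen on 84 inside the container, per the task's port requirement
RUN printf 'server { listen 84; root /usr/share/nginx/html; index index.html; }\n' \
      > /etc/nginx/conf.d/default.conf
EXPOSE 84
```

Only the final nginx stage ships; the Python toolchain never reaches the production image.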


| New File | Purpose |
|----------|---------|
| `docs/mkdocs.yml` | MkDocs config with Material theme, dark/light toggle, nav, extensions |
| `docs/Dockerfile` | Multi-stage build (mkdocs-material → nginx:alpine) |
| `docs/docker-compose.yml` | Prod on port 84 + live-reload dev mode on port 8084 |
| `docs/docs/index.md` | Home page with project overview, team roster template |
| `docs/docs/architecture.md` | ADR templates (CI/CD platform + container orchestration) |
| `docs/docs/environments.md` | Environment matrix table (Dev/UAT/QA/Prod) |
| `docs/docs/runbooks.md` | 4 runbook templates (deploy, rollback, scale, incident) |
| `docs/docs/pipeline.md` | CI/CD pipeline stage docs with diagrams |




#### Required Documentation Pages

| Page | Content |
|------|---------|
| Home (`index.md`) | Project overview, team roster, quick links |
| Architecture Decisions (`architecture.md`) | ADR-001: CI/CD Platform choice, ADR-002: Container orchestration choice |
| Environment Matrix (`environments.md`) | Dev/UAT/QA/Prod table with IPs, URLs, instance sizes |
| Runbooks (`runbooks.md`) | Deploy, rollback, scale, incident response procedures |
| CI/CD Pipeline (`pipeline.md`) | Pipeline stages, tools, configuration notes |

#### Commands
```bash
# Build and serve docs (production)
cd docs && docker-compose up docs-prod
# → Docs at http://localhost:84

# Live reload dev mode
cd docs && docker-compose up docs-dev
# → Docs at http://localhost:8084 (auto-refreshes on file save)
```

**Acceptance Criteria:**
- [ ] MkDocs site builds via multi-stage Dockerfile
- [ ] Docs served on port 84 via docker-compose
- [ ] Live reload dev mode working on port 8084
- [ ] All 5 documentation pages created with real content
- [ ] `mkdocs.yml` and all Markdown files committed to Git
- [ ] Docs auto-build in CI pipeline on changes to `docs/` folder


Summary Task A 

TASK A: Documentation Platform (Docs-as-Code)

1. Set up MkDocs with Material theme inside the deployment repo
2. Create a docker-compose.yml to serve docs on port 84
3. Write initial documentation pages:
   - Team roster and roles
   - ADR: "Why we chose [Jenkins/GitLab/Azure DevOps]"
   - Environment matrix (Dev/UAT/QA/Prod)
   - Runbook template
   - CI/CD pipeline overview
4. Build docs via Docker (multi-stage: mkdocs build → nginx serve)
5. CI pipeline auto-builds docs site on push to /docs folder

Acceptance Criteria:
- Docs served on port 84 via Docker
- mkdocs.yml and all markdown files committed to Git
- Multi-stage Dockerfile builds and serves the docs
- 5 documentation pages created with real content


MORE ABOUT MKDOCS  

1. Live Reload Dev Mode

When writing documentation (editing the Markdown files), you need to see how your changes look in real time. That's what dev mode does:

Student edits runbooks.md → saves file → browser auto-refreshes → sees updated page instantly

Without dev mode: Edit markdown → rebuild Docker image → restart container → refresh browser → check result. That's painful and slow.

With dev mode: MkDocs watches the files. The second you hit save, the browser updates automatically. It's the same concept as npm run dev for the React app — hot reload for docs.

In the docker-compose.yml, there are two services:

| Service | Port | Purpose |
|---------|------|---------|
| `docs-prod` | 84 | Built static site served by Nginx (what users/team see) |
| `docs-dev` | 8084 | Live preview with auto-refresh (only used while writing docs) |
Students use 8084 while writing, then build and deploy to 84 for production. It's a workflow thing — not two permanent servers.
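Those two services could be sketched in `docker-compose.yml` like this (service names and ports from the table above; the dev image and volume mount are assumptions):

```yaml
services:
  docs-prod:
    build: .                # multi-stage Dockerfile: mkdocs build -> nginx
    ports:
      - "84:84"             # production docs site

  docs-dev:
    image: squidfunk/mkdocs-material:latest
    command: serve --dev-addr=0.0.0.0:8084
    ports:
      - "8084:8084"         # live-reload preview
    volumes:
      - .:/docs             # mount the source so edits reload instantly
```

The dev service never builds an image of your docs; it just watches the mounted files, which is what makes the reload instant.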

2. Runbook Template

A runbook is an operational instruction manual — step-by-step procedures for when things happen in production. Think of it like a recipe book, but for servers.

Every real DevOps team has them. For example, when it's 2 AM and production is down, you don't want the on-call engineer guessing — you want them following a tested checklist.

Here's an example of what you would fill in as you complete the project:

RUNBOOK: Deploy New Version
═══════════════════════════
When to use:  New release ready for production
Who can run:  DevOps team lead

Steps:
  1. Verify build passed in Jenkins → check #healthpulse-builds Slack
  2. Confirm SonarQube quality gate passed
  3. Approve deployment in pipeline (manual gate)
  4. Monitor Datadog dashboard during rollout
  5. Verify /health endpoint returns 200
  6. If health check fails → pipeline auto-rolls back via Ansible

───────────────────────────

RUNBOOK: Rollback Production
════════════════════════════
When to use:  Production deployment caused errors
Who can run:  Any DevOps team member

Steps:
  1. Run: ./scripts/k8s-manage.sh rollback
     OR: Trigger Ansible Tower rollback job
  2. Verify previous version is serving traffic
  3. Check Datadog for error rate returning to normal
  4. Post incident summary in wiki

───────────────────────────

RUNBOOK: Scale Application
══════════════════════════
When to use:  High traffic / slow response times
Who can run:  Any DevOps team member

Steps:
  1. Check Datadog → confirm CPU/memory is the bottleneck
  2. Run: REPLICAS=6 ./scripts/k8s-manage.sh scale
  3. Monitor HPA: kubectl get hpa -n healthpulse-prod
  4. Scale back down after traffic normalizes

TIP — Starter repo: https://github.com/princexav/mkdocs (in that repo, change the port from 100 to 84).

healthpulse-docs/
├── mkdocs.yml                  # Site config + navigation
├── Dockerfile                  # Multi-stage build (mkdocs → nginx)
├── docker-compose.yml          # Prod (port 84) + dev (port 8084)
└── docs/
    ├── index.md                # Home — project overview, team roster, quick links
    ├── architecture.md         # ADR templates (CI/CD choice, orchestration choice)
    ├── environments.md         # Environment matrix (IPs, URLs, sizing)
    ├── pipeline.md             # CI/CD pipeline stages and config
    ├── setup-template.md       # Reusable template — copy for each tool install
    ├── runbooks.md             # Deploy, rollback, scale, health check procedures
    ├── incidents.md            # Incident log template — track issues + root causes
    └── changelog.md            # Weekly progress log — what was built, when, by whom

How Students Use It

| Page | When |
|------|------|
| Setup Template | Copy to `setup-jenkins.md`, `setup-sonarqube.md`, `setup-artifactory.md`, `setup-ansible-tower.md`, `setup-datadog.md` — one per tool they install. Documents every command they ran. |
| Runbooks | Fill in real commands and URLs as they complete Tasks F-H |
| Incident Log | Every time something breaks during the project, they log it |
| Changelog | Weekly entries tracking progress across all tasks |
| Architecture / Environments / Pipeline | Fill in as they make decisions and provision infrastructure |

One template, students create as many copies as they need. Keeps it simple.

The docs site is fully self-contained — it'll build and run independently:

https://github.com/princexav/mkdocs


cd healthpulse-docs
docker compose up docs-prod   # → port 84
docker compose up docs-dev    # → port 8084 (live reload)




### TASK B: Version Control & Code Security

Plan & Code

App Name: Healthpulse


  • Workstation A - Team Pipeline Pirates - 3.15.209.165
  • Workstation B - Team DevopsAvengers - 3.143.221.53
  • Workstation C - Team Devius - 3.142.240.0

Developer workstations are Windows machines. Your project supervisor will provide the IP/DNS and credentials for the machine assigned to your group; use MobaXterm or Remote Desktop to connect. The username is Administrator.

When you access the developer workstation assigned to your group, you will find the codebase at:
This PC → Desktop → healthpulseapp

B.1 — Repository Setup

Create two repositories:

| Repository | Purpose | Access |
|------------|---------|--------|
| HealthPulse_App | Application source code | Developers |
| HealthPulse_Deployment | IaC, Ansible, pipelines, scripts | DevOps team |

B.2 — Branching Strategy

Implement GitFlow in the App repository:

main ─────────────────────────────────────────►
  └── develop ─────────────────────────────────►
        ├── feature/login-page ──► (merge to develop)
        ├── feature/dashboard ───► (merge to develop)
        └── release/1.0.0 ───────► (merge to main + develop)
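The GitFlow diagram above translates directly into git commands. A scratch-repo sketch (branch names from the diagram; commit messages are illustrative):

```shell
# Run in a throwaway directory: demonstrates the GitFlow merges above
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git config user.email student@example.com
git config user.name "HealthPulse Student"
git commit -q --allow-empty -m "initial commit"

# develop branches off main
git checkout -q -b develop

# feature branches off develop, then merges back to develop
git checkout -q -b feature/login-page
git commit -q --allow-empty -m "feat: login page"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login-page" feature/login-page

# release branches off develop, then merges to main AND develop
git checkout -q -b release/1.0.0
git checkout -q main
git merge -q --no-ff -m "merge release/1.0.0" release/1.0.0
git checkout -q develop
git merge -q release/1.0.0
git log --oneline --graph --all
```

The `--no-ff` flag keeps an explicit merge commit, so the branch history in the diagram stays visible in `git log --graph`.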

B.3 — Repository Security (Layer 1 & Layer 3)

Secure your repo:

Repository security follows a defense-in-depth approach with 3 layers. In this task you set up Layer 1 (local hooks) and Layer 3 (branch protection). Layer 2 (gitleaks in the CI pipeline) comes later in Task F once the pipeline exists.

Layer 1 (this task):  Local hooks      → fast feedback for developers
Layer 2 (Task F):     CI pipeline scan  → server-side safety net
Layer 3 (this task):  Branch protection → platform-enforced rules

Layer 1: Local Git Hooks (pre-commit + pre-push)

Install pre-commit and pre-push hooks so developers get early feedback when they accidentally commit secrets. Understand that developers can bypass these with --no-verify — that's why Layer 3 exists.

| Hook | Tool | Purpose |
|------|------|---------|
| pre-commit | detect-secrets | Scans staged changes for secrets using entropy + pattern analysis |
| pre-push | custom script | Warns on direct push to main/develop |

Use the provided .pre-commit-config.yaml and scripts/setup-git-hooks.sh.

# Step 1: Download the provided pre-commit config
curl -O https://raw.githubusercontent.com/princexav/security/refs/heads/main/.pre-commit-config.yaml

# Step 2: Install the pre-commit framework
pip install pre-commit

# Step 3: Install hooks into the repo
pre-commit install

# Step 4: Test it — this should be BLOCKED
echo "AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" >> test.txt
git add test.txt && git commit -m "test secret"
# Expected: detect-secrets blocks the commit

# Step 5: Clean up
git checkout -- test.txt

# Step 6: Test the pre-push hook
git checkout main
git push origin main
# Expected: Warning message about direct push to protected branch
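A minimal sketch of what such a pre-push warning could look like (hypothetical; the real hook is installed via the provided scripts/setup-git-hooks.sh):

```shell
# Sketch of a pre-push guard: warn when the branch being pushed is protected.
# check_protected prints a warning for main/develop, an ok message otherwise.
check_protected() {
  branch="$1"
  case "$branch" in
    main|develop)
      echo "WARNING: direct push to protected branch '$branch'" ;;
    *)
      echo "ok: pushing branch '$branch'" ;;
  esac
}

# In a real .git/hooks/pre-push you would call it with the current branch:
check_protected "feature/login-page"
# → ok: pushing branch 'feature/login-page'
```

Note this only warns — it deliberately exits successfully, because blocking is Layer 3's job.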

Key lesson: Run git commit --no-verify -m "test" and notice the hook is skipped entirely. This is why local hooks alone are NOT enough — you need Layer 3.

Layer 3: Branch Protection Rules (platform-level — cannot be bypassed)

Configure these in your Git hosting platform (GitHub / GitLab / Bitbucket). Unlike hooks, these are enforced by the server — no developer can skip them.

| Rule | Setting |
|------|---------|
| Require pull request before merging | main and develop |
| Require at least 1 approval | main and develop |
| Do not allow bypassing the above | Even admins must follow the rules |
Note: The rule "Require CI status checks to pass" will be added in Task F once your pipeline is built. For now, configure the PR and approval requirements.

# Test it — this should be REJECTED by the platform
git checkout main
git commit --allow-empty -m "testing direct push"
git push origin main
# Expected: Rejected — branch protection requires a pull request

Acceptance Criteria:

  •  Both repos created with proper access controls
  •  GitFlow branching strategy demonstrated (main, develop, feature/, release/)
  •  SSH key authentication configured for repo access
  •  pre-commit install runs successfully and hooks are active
  •  Demonstrate: committing a fake AWS key is blocked by detect-secrets
  •  Demonstrate: --no-verify bypasses the hook (explain why this matters)
  •  Demonstrate: pre-push hook warns on direct push to main
  •  Branch protection rules configured on main and develop (screenshot required)
  •  PR requires at least 1 approval before merge
  •  Direct push to main is rejected by the platform (not just the hook)
  •  Document the security setup in your MkDocs wiki

### TASK C: Bare-Metal Deployment (Nginx on EC2)

Before containers, deploy the application the traditional way — built files served directly by Nginx on an EC2 instance. This teaches what containers replace and why they exist.

C.1 — Provision the Server (Terraform)

Use the provided terraform/baremetal/ configuration to create a VPC, subnet, and EC2 instance with Nginx pre-installed. 

See guides:

- https://www.devopstreams.com/2026/03/aws-credentials-setup-best-practices.html - IAM setup
- https://www.devopstreams.com/2026/03/task-c-bare-metal-deployment-nginx-on.html - step-by-step guide
- https://github.com/princexav/mkdocs/tree/main/baremetal - Terraform files

cd terraform/baremetal
terraform init
terraform plan -var-file=dev.tfvars -var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)"
terraform apply -var-file=dev.tfvars -var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)"

What Terraform creates:

| Resource | Detail |
|----------|--------|
| VPC + Subnet | Isolated network with internet gateway and route table |
| EC2 Instance | Ubuntu 22.04, t2.micro |
| Nginx | Installed and configured via user_data bootstrap |
| Security Group | Ports 22 (SSH), 80 (HTTP), 443 (HTTPS) |
| Elastic IP | Static public IP |
| Nginx Config | SPA fallback, gzip, security headers, /health endpoint |
| Deploy Path | /var/www/healthpulse |
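The "Nginx Config" row could be sketched as a server block like this (a sketch: the root path and /health JSON come from this task, the rest is illustrative):

```nginx
server {
    listen 80;
    root /var/www/healthpulse;
    index index.html;

    # Health endpoint used by the pipeline and load balancer
    location = /health {
        default_type application/json;
        return 200 '{"status":"healthy","deploy":"baremetal"}';
    }

    # SPA fallback: unknown paths are routed to index.html for React Router
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Basic security headers
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;

    gzip on;
    gzip_types text/css application/javascript application/json;
}
```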

Detailed walkthrough: See guides/TASK-G-GUIDE.md for step-by-step instructions.

Manual deploy (for learning):

# SSH into the server
ssh -i ~/.ssh/healthpulse-key.pem ubuntu@<ELASTIC_IP>

# On the server — this is what Ansible automates
cd /var/www/healthpulse
# Copy dist/ files here
sudo systemctl reload nginx

# Verify
curl http://localhost/health
# → {"status":"healthy","deploy":"baremetal"}

Acceptance Criteria:

  •  EC2 instance provisioned via Terraform with Nginx running
  •  Application accessible at http://<ELASTIC_IP>
  •  Health check returns 200 at /health
  •  Pain points documented in MkDocs wiki
  •  SSH into the server and explain what Nginx is serving and from where

### TASK D: Set Up Your Infrastructure

1. Create a 3-node k3s cluster (1 master + 2 workers) using Terraform.

2. Provision the DevOps tools:

| Tool | Instance Type | Purpose |
|------|---------------|---------|
| Jenkins / GitLab / GitHub Actions / Azure DevOps | t2.large | CI/CD server |
| SonarQube | t2.xlarge | Code analysis |
| Ansible Tower | t2.2xlarge | Configuration management |
| JFrog Artifactory | t2.2xlarge | Artifact repository |

Acceptance Criteria:

  •  k3s cluster provisioned with 3 nodes (1 master + 2 workers)
  •  kubectl get nodes shows all nodes Ready
  •  Infrastructure tagged properly (namespaces created for Dev, QA, Prod)
  •  Can terraform destroy and re-create cleanly
  •  HPA configured (kubectl get hpa shows targets)
  •  Can SSH into the master and explain the cluster architecture
  •  Tool setup and configuration documented in the MkDocs wiki
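For the HPA criterion, a minimal HorizontalPodAutoscaler might look like this (a sketch — the deployment name and namespace are assumptions based on names used elsewhere in this project):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: healthpulse
  namespace: healthpulse-prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: healthpulse
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when avg CPU exceeds 70%
```

After applying, `kubectl get hpa -n healthpulse-prod` should show the target utilization and current replica count.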
### TASK E: Monitoring & Observability (Datadog)

Install and configure Datadog agents on all servers.

Use the provided monitoring/datadog/datadog-agent-setup.yml Ansible playbook.

| Requirement | Detail |
|-------------|--------|
| Infrastructure metrics | CPU, memory, disk, network |
| Container monitoring | Docker container metrics |
| Process monitoring | Running process visibility |
| Server tagging | app:healthpulse, env:&lt;environment&gt;, team:&lt;team-name&gt; |

Acceptance Criteria:

  •  Datadog agent running on all servers
  •  Infrastructure metrics visible in Datadog dashboard
  •  Containers monitored with the Docker integration
  •  Process-level monitoring enabled
  •  All servers tagged and filterable by environment
### TASK F: DNS & Domain

Register a team domain and configure DNS.

| Requirement | Detail |
|-------------|--------|
| Domain | e.g., team-healthpulse.com |
| DNS Provider | Route 53 (preferred), GoDaddy, etc. |
| Records | A/CNAME pointing to ALB |
| Environments | dev.team-healthpulse.com, uat.team-healthpulse.com, team-healthpulse.com |

Acceptance Criteria:

  •  Domain registered
  •  DNS records pointing to load balancers
  •  Application accessible via domain name
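Since this project already uses Terraform, the ALB alias record could be sketched like this (the zone and ALB resource names are hypothetical):

```hcl
resource "aws_route53_zone" "main" {
  name = "team-healthpulse.com"
}

resource "aws_route53_record" "dev" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "dev.team-healthpulse.com"
  type    = "A"

  alias {
    name                   = aws_lb.dev.dns_name # hypothetical ALB resource
    zone_id                = aws_lb.dev.zone_id
    evaluate_target_health = true
  }
}
```

An alias A record (rather than a CNAME) is the Route 53 idiom for pointing a name at an ALB, and it also works at the zone apex.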






Saturday, 28 February 2026

Bash Script To Install Ansible Automation Platform (AWX)

#!/bin/bash

# --- Configuration ---
AWX_OPERATOR_VERSION="2.19.1"
NAMESPACE="awx"
KUBECONFIG_PATH="/etc/rancher/k3s/k3s.yaml"

echo "🧹 Phase 1: Cleaning up existing K3s for a fresh start..."
[ -f /usr/local/bin/k3s-uninstall.sh ] && /usr/local/bin/k3s-uninstall.sh

# Remove old manifests to avoid conflicts
rm -f kustomization.yaml awx-instance.yaml

echo "📦 Phase 2: Installing fresh K3s..."
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
export KUBECONFIG=$KUBECONFIG_PATH

echo "⏳ Waiting for K3s node to reach 'Ready' state..."
sleep 20
kubectl wait --for=condition=Ready node/$(hostname) --timeout=90s

# Create Namespace
kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

echo "🏗️ Phase 3: Deploying AWX Operator via Kustomize (with Image Fixes)..."

# This Kustomization solves the 404 URL error AND the gcr.io ImagePullBackOff error
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=$AWX_OPERATOR_VERSION
images:
  - name: quay.io/ansible/awx-operator
    newTag: $AWX_OPERATOR_VERSION
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: quay.io/brancz/kube-rbac-proxy
    newTag: v0.15.0
namespace: $NAMESPACE
EOF

# Apply the operator
kubectl apply -k .

echo "📝 Phase 4: Creating AWX Instance manifest..."
cat <<EOF > awx-instance.yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: $NAMESPACE
spec:
  service_type: nodeport
  postgres_storage_class: local-path
EOF

# Ensure CRDs are registered before applying the instance
echo "🛰️ Waiting for CRDs to settle, then deploying AWX Instance..."
sleep 20
kubectl apply -f awx-instance.yaml

echo "----------------------------------------------------------"
echo "🚀 AWX DEPLOYMENT INITIALIZED"
echo "----------------------------------------------------------"

# Final Phase: Credential Discovery
echo "🔑 Waiting for AWX to generate the admin password..."
until kubectl get secret awx-demo-admin-password -n $NAMESPACE &> /dev/null; do
  echo -n "."
  sleep 10
done

# Grab details automatically
ADMIN_PASS=$(kubectl get secret awx-demo-admin-password -n $NAMESPACE -o jsonpath='{.data.password}' | base64 --decode)
NODE_PORT=$(kubectl get svc awx-demo-service -n $NAMESPACE -o jsonpath='{.spec.ports[0].nodePort}')
SERVER_IP=$(hostname -I | awk '{print $1}')

echo -e "\n\n✅ INSTALL COMPLETE!"
echo "----------------------------------------------------------"
echo "ACCESS URL: http://$SERVER_IP:$NODE_PORT"
echo "USERNAME:   admin"
echo "PASSWORD:   $ADMIN_PASS"
echo "----------------------------------------------------------"
echo "🔍 Watch progress: kubectl get pods -n $NAMESPACE -w"



-------------------------------------------------------------------------------------------------------------------









Run the command below to retrieve the admin password:

kubectl get secret awx-demo-admin-password -n awx -o jsonpath='{.data.password}' | base64 --decode; echo


# Find the NodePort (it will be the 5-digit number after the '80:')

kubectl get svc awx-demo-service -n awx


# Find your Public/Private IP

hostname -I | awk '{print $1}'

Saturday, 7 February 2026

Key Terraform Rule: Execution, Files, Folders, and Directories

 

 Key Terraform Rule

Terraform loads and merges ALL .tf files in a directory automatically.

There is:

  • ❌ no “main file”

  • ❌ no execution order by filename

  • ✅ one configuration per directory

So:

terraform apply

applies everything in that folder.


✅ How You SHOULD structure your files

📁 Recommended folder structure

terraform-lab/
├── provider.tf
├── data.tf
├── instance.tf
├── outputs.tf
└── variables.tf

Terraform reads them all together.

LAB- BREAK YOUR MAIN.TF INTO DIFFERENT COMPONENTS

provider.tf

provider "aws" {
  region = "us-east-1"
}

data.tf

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

instance.tf

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.default.ids[0]

  tags = {
    Name = "terraform-lab"
  }
}

outputs.tf

output "instance_id" {
  value = aws_instance.web.id
}

▶️ Running Terraform

From the directory:

terraform init
terraform plan
terraform apply

Terraform automatically:

  • loads all .tf files

  • builds the dependency graph

  • applies in the correct order


❌ Common misconception

“Terraform executes files top to bottom”

Wrong.

Terraform:

  • builds a dependency graph

  • executes based on references

  • ignores file order and filenames


🧠 KEY TAKEAWAYS

Terraform directory = one application
.tf files = chapters in the same book

You don’t run chapters — you run the book.


🧪 Advanced (Optional): Lab separation strategies

Option A — New folder per lab (recommended for beginners)

labs/
├── lab1-default-vpc/
├── lab2-alb/
└── lab3-asg/

Option B — Same folder, comment/uncomment (not ideal)

Option C — Use variables / count (advanced)


⚠️ One important rule

Terraform only reads files in the current directory.

Subfolders are ignored unless you use modules (advanced topic).


✅ Summary

  • You don’t “apply a file”

  • You apply a directory

  • Terraform merges all .tf files automatically

  • File naming is for human readability only


🧠 One-sentence takeaway for students

Terraform applies directories, not files.

Understanding VPC, Filter Blocks in Terraform

 

Confirm the Default VPC (AWS Console)

  1. Open AWS Console → VPC

  2. Go to Your VPCs

  3. Identify the VPC marked Default = Yes

  4. Go to Subnets

    • Notice one subnet per Availability Zone

💡 Key Concept

EC2 instances are launched into subnets, and subnets belong to VPCs.


🔬 LAB 2 — Create Terraform Project

Create main.tf:

provider "aws" {
  region = "us-east-1"
}

Initialize:

terraform init

🔬 LAB 3 — Look Up the Default VPC (Data Source)

Add to main.tf:

data "aws_vpc" "default" {
  default = true
}

Add output:

output "default_vpc_id" {
  value = data.aws_vpc.default.id
}

Run:

terraform apply -auto-approve

✅ Terraform prints the default VPC ID.

💡 Key Concept

data blocks read existing infrastructure — they do NOT create anything.


🔬 LAB 4 — Find Subnets Using a filter Block (Core Concept)

Now we want subnets that belong ONLY to the default VPC.

Add:

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

Add output:

output "default_subnet_ids" {
  value = data.aws_subnets.default.ids
}

Apply:

terraform apply -auto-approve

🔍 Understanding the filter Block (IMPORTANT)

What the filter block does

It tells Terraform:
“Only return AWS resources that match this condition.”

In this case:

“Give me only the subnets that belong to the default VPC.”


Line-by-line explanation

filter {
  name   = "vpc-id"
  values = [data.aws_vpc.default.id]
}
  • filter {}
    Defines a condition AWS must match

  • name = "vpc-id"
    The AWS API attribute we are filtering on
    (This is an AWS field, not a Terraform keyword)

  • values = [...]
    Acceptable value(s) for that attribute
    Here, it dynamically uses the default VPC ID


What Terraform is doing behind the scenes

Terraform sends AWS a request like:

“List all subnets WHERE vpc-id = vpc-xxxxxxxx”

AWS returns only matching subnets.


remember this

Think of AWS like a database:

SELECT * FROM subnets WHERE vpc_id = 'vpc-xxxxxxxx';

That’s exactly what the filter block does.


Why this is better than hardcoding

❌ Bad:

subnet_id = "subnet-0abc123"

✅ Good:

subnet_id = data.aws_subnets.default.ids[0]

Benefits:

  • Works across AWS accounts

  • Works across regions

  • Real-world Terraform pattern

⚠️ Note for students

The order of subnet IDs is not guaranteed.
Using [0] is fine for labs, but production code should be deterministic.
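One way to make the selection deterministic is Terraform's built-in `sort()` function (a sketch):

```hcl
locals {
  # sort() gives a stable ordering across runs, so [0] always
  # refers to the same subnet
  sorted_subnet_ids = sort(data.aws_subnets.default.ids)
}

# then, in the instance resource:
#   subnet_id = local.sorted_subnet_ids[0]
```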


🔬 LAB 5 — Launch EC2 in the Default VPC

Add:

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316" # Amazon Linux 2 (us-east-1)
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.default.ids[0]

  tags = {
    Name = "terraform-default-vpc-lab"
  }
}

Apply:

terraform apply -auto-approve

✅ EC2 instance launches in the default VPC.


🔬 LAB 6 — Use the Default Security Group (Optional but Best Practice)

Add:

data "aws_security_group" "default" {
  name   = "default"
  vpc_id = data.aws_vpc.default.id
}

Update EC2:

vpc_security_group_ids = [
  data.aws_security_group.default.id
]

Apply again.

💡 Teaching Point

Never assume defaults — always declare dependencies explicitly.


🔬 LAB 7 — Cleanup (Critical Habit)

terraform destroy -auto-approve

🧠 Key Takeaways (Interview / Exam Ready)

  • aws_instance has no vpc_id

  • ✅ EC2 → Subnet → VPC

  • filter blocks safely query AWS

  • ❌ Hardcoding IDs is fragile

  • ✅ Default VPC is OK for labs, not production



TASK D: Kubernetes Deployment (k3s on EC2) — Step-by-Step Guide

  Overview In this task, you deploy the HealthPulse Portal to a  real Kubernetes cluster  running on AWS EC2 instances. You'll use  k3s ...