Sunday, 15 March 2026

HealthPulse Portal — Complete Capstone Project

 




**HealthPulse Inc.** is a healthcare technology startup that has built a patient portal as a **React/TypeScript single-page application**. The application allows patients to view appointments, lab results, medications, and communicate with their care team.


Currently, the development team **manually builds and deploys** the application by:

1. Running `npm run build` on a developer's laptop

2. SCP-ing the `dist/` folder to a single Nginx server

3. SSHing into the server and restarting Nginx

The application is currently hosted on the company's own server; a demo build is available at:

https://healthpulse-capstone.vercel.app/

This process takes **45 minutes per deployment**, is error-prone, and has caused **3 production outages** in the last quarter from misconfigurations. There is **no testing in the pipeline**, **no code quality checks**, **no security scanning**, and **no monitoring**.


**HealthPulse Inc. has hired your DevOps team** to design and implement a complete CI/CD pipeline, multi-environment infrastructure, container orchestration, and observability platform on **AWS**.


---

## Application Details


| Item | Detail |
|------|--------|
| **App Name** | HealthPulse Portal |
| **Tech Stack** | React 18, TypeScript, Vite, shadcn/ui, Tailwind CSS |
| **Testing** | Vitest (unit), Playwright (e2e) |
| **Build Output** | Static files (`dist/`) served by Nginx |
| **Container** | Multi-stage Dockerfile (Node build → Nginx serve) |
| **Health Endpoint** | `GET /health` → `{"status":"healthy"}` |
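On a static Nginx deployment, the `/health` endpoint is typically just a canned JSON response. A hypothetical location block (not taken from the project's actual config) would be:

```nginx
# Hypothetical Nginx snippet backing GET /health
location = /health {
    default_type application/json;
    return 200 '{"status":"healthy"}';
}
```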



Stack: React 18 + TypeScript + Vite + shadcn/ui + Tailwind CSS + Recharts



## Repository Structure

healthpulse-capstone/
├── src/                        # Application source code
│   ├── components/ui/          # shadcn/ui components
│   ├── components/layout/      # Layout (Sidebar, Header)
│   ├── pages/                  # Login, Dashboard, Appointments, LabResults, etc.
│   ├── data/                   # Mock data
│   ├── types/                  # TypeScript types
│   ├── lib/                    # Utilities
│   └── test/                   # Unit tests
├── tests/e2e/                  # Playwright e2e tests

## Tools & Technologies

| Category | Tool | Purpose |
|----------|------|---------|
| **CI/CD** | Jenkins OR GitLab CI OR Azure DevOps | Pipeline automation (student chooses one) |
| **Cloud** | AWS (ECS Fargate, ALB, VPC, Route 53) | Infrastructure hosting |
| **IaC** | Terraform | Infrastructure provisioning |
| **Config Mgmt** | Ansible Tower | Application deployment & rollback |
| **Containers** | Docker | Application containerization |
| **Orchestration** | Kubernetes (EKS) | Container orchestration |
| **Artifact Repo** | JFrog Artifactory | Docker images + build artifacts |
| **Code Quality** | SonarQube | Static analysis + code coverage |
| **Security** | Snyk | Dependency vulnerability scanning |
| **Monitoring** | Datadog | Infrastructure + application monitoring |
| **Version Control** | Git (Bitbucket/GitHub/GitLab) | Source code management |

---






### TASK A: Documentation Platform (Docs-as-Code)

Set up a **MkDocs Material** documentation site using the docs-as-code approach. Documentation lives in the Git repository as Markdown files and is built/served via Docker.

#### Why Docs-as-Code?
This is how top DevOps teams (AWS, Kubernetes, Terraform) manage documentation — Markdown files in Git, built by CI, deployed as a static site. You'll use the same multi-stage Docker pattern as the main application.

| Requirement | Detail |
|-------------|--------|
| Tool | MkDocs with Material theme |
| Container Port | `84` |
| Build | Multi-stage Docker (mkdocs build → nginx serve) |
| Dev Mode | `mkdocs serve` with live reload on port `8084` |
| Location | `docs/` directory in the deployment repo |
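A multi-stage `docs/Dockerfile` matching the requirements above could look like this sketch (the image tags are assumptions, not the project's actual file):

```dockerfile
# Stage 1: build the static site with MkDocs Material
FROM squidfunk/mkdocs-material AS build
COPY . /docs
RUN mkdocs build --strict

# Stage 2: serve the built site with Nginx
FROM nginx:alpine
COPY --from=build /docs/site /usr/share/nginx/html
EXPOSE 80
```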


| New File | Purpose |
|----------|---------|
| `docs/mkdocs.yml` | MkDocs config with Material theme, dark/light toggle, nav, extensions |
| `docs/Dockerfile` | Multi-stage build (mkdocs-material → nginx:alpine) |
| `docs/docker-compose.yml` | Prod on port 84 + live-reload dev mode on port 8084 |
| `docs/docs/index.md` | Home page with project overview, team roster template |
| `docs/docs/architecture.md` | ADR templates (CI/CD platform + container orchestration) |
| `docs/docs/environments.md` | Environment matrix table (Dev/UAT/QA/Prod) |
| `docs/docs/runbooks.md` | 4 runbook templates (deploy, rollback, scale, incident) |
| `docs/docs/pipeline.md` | CI/CD pipeline stage docs with diagrams |




#### Required Documentation Pages

| Page | Content |
|------|---------|
| Home (`index.md`) | Project overview, team roster, quick links |
| Architecture Decisions (`architecture.md`) | ADR-001: CI/CD Platform choice, ADR-002: Container orchestration choice |
| Environment Matrix (`environments.md`) | Dev/UAT/QA/Prod table with IPs, URLs, instance sizes |
| Runbooks (`runbooks.md`) | Deploy, rollback, scale, incident response procedures |
| CI/CD Pipeline (`pipeline.md`) | Pipeline stages, tools, configuration notes |

#### Commands
```bash
# Build and serve docs (production)
cd docs && docker-compose up docs-prod
# → Docs at http://localhost:84

# Live reload dev mode
cd docs && docker-compose up docs-dev
# → Docs at http://localhost:8084 (auto-refreshes on file save)
```

**Acceptance Criteria:**
- [ ] MkDocs site builds via multi-stage Dockerfile
- [ ] Docs served on port 84 via docker-compose
- [ ] Live reload dev mode working on port 8084
- [ ] All 5 documentation pages created with real content
- [ ] `mkdocs.yml` and all Markdown files committed to Git
- [ ] Docs auto-build in CI pipeline on changes to `docs/` folder


Summary Task A 

TASK A: Documentation Platform (Docs-as-Code)

1. Set up MkDocs with Material theme inside the deployment repo
2. Create a docker-compose.yml to serve docs on port 84
3. Write initial documentation pages:
   - Team roster and roles
   - ADR: "Why we chose [Jenkins/GitLab/Azure DevOps]"
   - Environment matrix (Dev/UAT/QA/Prod)
   - Runbook template
4. Build docs via Docker (multi-stage: mkdocs build → nginx serve)
5. CI pipeline auto-builds docs site on push to /docs folder

Acceptance Criteria:
- Docs served on port 84 via Docker
- mkdocs.yml and all markdown files committed to Git
- Multi-stage Dockerfile builds and serves the docs
- 4 documentation pages created with real content


MORE ABOUT MKDOCS  

1. Live Reload Dev Mode

When writing documentation (editing the Markdown files), you need to see how your changes look in real time. That's what dev mode does:

Student edits runbooks.md → saves file → browser auto-refreshes → sees updated page instantly

Without dev mode: Edit markdown → rebuild Docker image → restart container → refresh browser → check result. That's painful and slow.

With dev mode: MkDocs watches the files. The second you hit save, the browser updates automatically. It's the same concept as npm run dev for the React app — hot reload for docs.

In the docker-compose.yml, there are two services:

| Service | Port | Purpose |
|---------|------|---------|
| docs-prod | 84 | Built static site served by Nginx (what users/team see) |
| docs-dev | 8084 | Live preview with auto-refresh (only used while writing docs) |

Students use 8084 while writing, then build and deploy to 84 for production. It's a workflow thing — not two permanent servers.
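A `docker-compose.yml` implementing the two services might look like this sketch (the official `squidfunk/mkdocs-material` image is assumed for dev mode; the provided repo may differ):

```yaml
services:
  docs-prod:
    build: .                 # multi-stage Dockerfile: mkdocs build -> nginx
    ports:
      - "84:80"
  docs-dev:
    image: squidfunk/mkdocs-material
    command: serve --dev-addr=0.0.0.0:8000
    volumes:
      - .:/docs              # live edits are picked up by mkdocs serve
    ports:
      - "8084:8000"
```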

2. Runbook Template

A runbook is an operational instruction manual — step-by-step procedures for when things happen in production. Think of it like a recipe book, but for servers.

Every real DevOps team has them. For example, when it's 2 AM and production is down, you don't want the on-call engineer guessing — you want them following a tested checklist.

Here's an example of what you would fill in as you complete the project:

RUNBOOK: Deploy New Version
═══════════════════════════
When to use:  New release ready for production
Who can run:  DevOps team lead

Steps:
  1. Verify build passed in Jenkins → check #healthpulse-builds Slack
  2. Confirm SonarQube quality gate passed
  3. Approve deployment in pipeline (manual gate)
  4. Monitor Datadog dashboard during rollout
  5. Verify /health endpoint returns 200
  6. If health check fails → pipeline auto-rolls back via Ansible

───────────────────────────

RUNBOOK: Rollback Production
════════════════════════════
When to use:  Production deployment caused errors
Who can run:  Any DevOps team member

Steps:
  1. Run: ./scripts/k8s-manage.sh rollback
     OR: Trigger Ansible Tower rollback job
  2. Verify previous version is serving traffic
  3. Check Datadog for error rate returning to normal
  4. Post incident summary in wiki

───────────────────────────

RUNBOOK: Scale Application
══════════════════════════
When to use:  High traffic / slow response times
Who can run:  Any DevOps team member

Steps:
  1. Check Datadog → confirm CPU/memory is the bottleneck
  2. Run: REPLICAS=6 ./scripts/k8s-manage.sh scale
  3. Monitor HPA: kubectl get hpa -n healthpulse-prod
  4. Scale back down after traffic normalizes

TIP: Starter repo: https://github.com/princexav/mkdocs

Change the exposed port from 100 to 84 for this project.

healthpulse-docs/
├── mkdocs.yml                  # Site config + navigation
├── Dockerfile                  # Multi-stage build (mkdocs → nginx)
├── docker-compose.yml          # Prod (port 84) + dev (port 8084)
└── docs/
    ├── index.md                # Home — project overview, team roster, quick links
    ├── architecture.md         # ADR templates (CI/CD choice, orchestration choice)
    ├── environments.md         # Environment matrix (IPs, URLs, sizing)
    ├── pipeline.md             # CI/CD pipeline stages and config
    ├── setup-template.md       # Reusable template — copy for each tool install
    ├── runbooks.md             # Deploy, rollback, scale, health check procedures
    ├── incidents.md            # Incident log template — track issues + root causes
    └── changelog.md            # Weekly progress log — what was built, when, by whom

How Students Use It

| Page | When |
|------|------|
| Setup Template | Copy to `setup-jenkins.md`, `setup-sonarqube.md`, `setup-artifactory.md`, `setup-ansible-tower.md`, `setup-datadog.md` — one per tool they install. Documents every command they ran. |
| Runbooks | Fill in real commands and URLs as they complete Tasks F-H |
| Incident Log | Every time something breaks during the project, they log it |
| Changelog | Weekly entries tracking progress across all tasks |
| Architecture/Environments/Pipeline | Fill in as they make decisions and provision infrastructure |

One template, students create as many copies as they need. Keeps it simple.

The docs site is fully self-contained — it'll build and run independently:

https://github.com/princexav/mkdocs


```bash
cd healthpulse-docs
docker compose up docs-prod   # → port 84
docker compose up docs-dev    # → port 8084 (live reload)
```




### TASK B: Version Control & Code Security

Plan & Code

App Name: HealthPulse


  • Workstation A (Team Pipeline Pirates): 3.15.209.165
  • Workstation B (Team DevOpsAvengers): 3.143.221.53
  • Workstation C (Team Devius): 3.142.240.0

The developer workstations are Windows machines. Your Project Supervisor will provide the IP/DNS and credentials you will use to log into the machine assigned to your group. You can connect with MobaXterm or Remote Desktop; the username is Administrator.

When you access the developer workstation assigned to your group, you will find the code base at:
This PC → Desktop → healthpulseapp

B.1 — Repository Setup

Create two repositories:

| Repository | Purpose | Access |
|------------|---------|--------|
| HealthPulse_App | Application source code | Developers |
| HealthPulse_Deployment | IaC, Ansible, pipelines, scripts | DevOps team |

B.2 — Branching Strategy

Implement GitFlow in the App repository:

main ─────────────────────────────────────────►
  └── develop ─────────────────────────────────►
        ├── feature/login-page ──► (merge to develop)
        ├── feature/dashboard ───► (merge to develop)
        └── release/1.0.0 ───────► (merge to main + develop)
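The flow in the diagram can be exercised end-to-end in a throwaway repository (branch names come from the diagram; the commit messages and identity are placeholders, and `git init -b` requires Git ≥ 2.28):

```shell
#!/bin/sh
# Walk the GitFlow branch flow from the diagram in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email devops@example.com   # placeholder identity
git config user.name "DevOps Team"
git commit -q --allow-empty -m "chore: initial commit"

git checkout -q -b develop                 # long-lived integration branch
git checkout -q -b feature/login-page develop
git commit -q --allow-empty -m "feat: login page"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login-page" feature/login-page

git checkout -q -b release/1.0.0 develop
git commit -q --allow-empty -m "chore: prepare 1.0.0"
git checkout -q main
git merge -q --no-ff -m "release 1.0.0" release/1.0.0
git tag v1.0.0
git checkout -q develop
git merge -q --no-ff -m "back-merge release/1.0.0" release/1.0.0
git log --oneline --graph --all
```

Running the script prints a graph showing the feature merge into `develop` and the release merge into both `main` (tagged `v1.0.0`) and `develop`.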

B.3 — Repository Security (Layer 1 & Layer 3)

Secure your repo:

Repository security follows a defense-in-depth approach with 3 layers. In this task you set up Layer 1 (local hooks) and Layer 3 (branch protection). Layer 2 (gitleaks in the CI pipeline) comes later in Task F once the pipeline exists.

Layer 1 (this task):  Local hooks      → fast feedback for developers
Layer 2 (Task F):     CI pipeline scan  → server-side safety net
Layer 3 (this task):  Branch protection → platform-enforced rules

Layer 1: Local Git Hooks (pre-commit + pre-push)

Install pre-commit and pre-push hooks so developers get early feedback when they accidentally commit secrets. Understand that developers can bypass these with --no-verify — that's why Layer 3 exists.

| Hook | Tool | Purpose |
|------|------|---------|
| pre-commit | detect-secrets | Scans staged changes for secrets using entropy + pattern analysis |
| pre-push | custom script | Warns on direct push to main/develop |

Use the provided .pre-commit-config.yaml and scripts/setup-git-hooks.sh.
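For orientation, a minimal `.pre-commit-config.yaml` wiring in detect-secrets looks roughly like this (the `rev` is an assumption; use whatever the provided file pins):

```yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0   # assumed version; use the rev from the provided file
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
```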

```bash
# Step 1: Download the provided pre-commit config
curl -O https://raw.githubusercontent.com/princexav/security/refs/heads/main/.pre-commit-config.yaml

# Step 2: Install the pre-commit framework
pip install pre-commit

# Step 3: Install hooks into the repo
pre-commit install

# Step 4: Test it — this should be BLOCKED
echo "AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" >> test.txt
git add test.txt && git commit -m "test secret"
# Expected: detect-secrets blocks the commit

# Step 5: Clean up
git checkout -- test.txt

# Step 6: Test the pre-push hook
git checkout main
git push origin main
# Expected: Warning message about direct push to protected branch
```
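The custom pre-push hook itself is provided by the setup script; a minimal sketch of what `.git/hooks/pre-push` could contain (`warn_on_protected` is a name invented here) looks like:

```shell
#!/bin/sh
# Hypothetical pre-push hook body (.git/hooks/pre-push).
# Git feeds the hook lines of "<local_ref> <local_sha> <remote_ref> <remote_sha>" on stdin.
warn_on_protected() {
  while read -r local_ref local_sha remote_ref remote_sha; do
    case "$remote_ref" in
      refs/heads/main|refs/heads/develop)
        echo "WARNING: direct push to ${remote_ref#refs/heads/} - open a pull request instead."
        ;;
    esac
  done
}
# The real hook file would end by calling: warn_on_protected
```

Because this only warns (it exits 0), the push still goes through — which is exactly why Layer 3 exists.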

Key lesson: Run git commit --no-verify -m "test" and notice the hook is skipped entirely. This is why local hooks alone are NOT enough — you need Layer 3.

Layer 3: Branch Protection Rules (platform-level — cannot be bypassed)

Configure these in your Git hosting platform (GitHub / GitLab / Bitbucket). Unlike hooks, these are enforced by the server — no developer can skip them.

| Rule | Setting |
|------|---------|
| Require pull request before merging | main and develop |
| Require at least 1 approval | main and develop |
| Do not allow bypassing the above | Even admins must follow the rules |

Note: The rule "Require CI status checks to pass" will be added in Task F once your pipeline is built. For now, configure the PR and approval requirements.

```bash
# Test it — this should be REJECTED by the platform
git checkout main
git commit --allow-empty -m "testing direct push"
git push origin main
# Expected: Rejected — branch protection requires a pull request
```

Acceptance Criteria:

  •  Both repos created with proper access controls
  •  GitFlow branching strategy demonstrated (main, develop, feature/, release/)
  •  SSH key authentication configured for repo access
  •  pre-commit install runs successfully and hooks are active
  •  Demonstrate: committing a fake AWS key is blocked by detect-secrets
  •  Demonstrate: --no-verify bypasses the hook (explain why this matters)
  •  Demonstrate: pre-push hook warns on direct push to main
  •  Branch protection rules configured on main and develop (screenshot required)
  •  PR requires at least 1 approval before merge
  •  Direct push to main is rejected by the platform (not just the hook)
  •  Document the security setup in your MkDocs wiki

### TASK C: Bare-Metal Deployment (Nginx on EC2)

Before containers, deploy the application the traditional way — built files served directly by Nginx on an EC2 instance. This teaches what containers replace and why they exist.

C.1 — Provision the Server (Terraform)

Use the provided terraform/baremetal/ configuration to create a VPC, subnet, and EC2 instance with Nginx pre-installed. 

See guides:

  • IAM setup: https://www.devopstreams.com/2026/03/aws-credentials-setup-best-practices.html
  • Step-by-step walkthrough: https://www.devopstreams.com/2026/03/task-c-bare-metal-deployment-nginx-on.html
  • Terraform files: https://github.com/princexav/mkdocs/tree/main/baremetal

```bash
cd terraform/baremetal
terraform init
terraform plan -var-file=dev.tfvars -var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)"
terraform apply -var-file=dev.tfvars -var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)"
```

What Terraform creates:

| Resource | Detail |
|----------|--------|
| VPC + Subnet | Isolated network with internet gateway and route table |
| EC2 Instance | Ubuntu 22.04, t2.micro |
| Nginx | Installed and configured via user_data bootstrap |
| Security Group | Ports 22 (SSH), 80 (HTTP), 443 (HTTPS) |
| Elastic IP | Static public IP |
| Nginx Config | SPA fallback, gzip, security headers, /health endpoint |
| Deploy Path | /var/www/healthpulse |

Detailed walkthrough: See guides/TASK-G-GUIDE.md for step-by-step instructions.

Manual deploy (for learning):

```bash
# SSH into the server
ssh -i ~/.ssh/healthpulse-key.pem ubuntu@<ELASTIC_IP>

# On the server — this is what Ansible automates
cd /var/www/healthpulse
# Copy dist/ files here
sudo systemctl reload nginx

# Verify
curl http://localhost/health
# → {"status":"healthy","deploy":"baremetal"}
```
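The verification step (and the "auto-rollback on failed health check" idea from the deploy runbook) can be sketched as a small retry loop. `check_health` and `CHECK_CMD` are names invented for this sketch; against a real instance `CHECK_CMD` would be `curl -fsS http://localhost/health`:

```shell
#!/bin/sh
# Sketch: poll a health check until it passes, or give up after N attempts.
# CHECK_CMD is a stand-in for the real probe so the loop runs without a server.
check_health() {
  attempts="${1:-5}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if ${CHECK_CMD:-false} >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
  done
  echo "unhealthy after $attempts attempts" >&2
  return 1
}
```

A pipeline would call this right after a deploy and trigger the rollback job whenever it returns non-zero.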

Acceptance Criteria:

  •  EC2 instance provisioned via Terraform with Nginx running
  •  Application accessible at http://<ELASTIC_IP>
  •  Health check returns 200 at /health
  •  Pain points documented in MkDocs wiki
  •  SSH into the server and explain what Nginx is serving and from where

Saturday, 28 February 2026

Bash Script To Install Ansible Automation Platform ( AWX)

```bash
#!/bin/bash

# --- Configuration ---
AWX_OPERATOR_VERSION="2.19.1"
NAMESPACE="awx"
KUBECONFIG_PATH="/etc/rancher/k3s/k3s.yaml"

echo "🧹 Phase 1: Cleaning up existing K3s for a fresh start..."
[ -f /usr/local/bin/k3s-uninstall.sh ] && /usr/local/bin/k3s-uninstall.sh
# Remove old manifests to avoid conflicts
rm -f kustomization.yaml awx-instance.yaml

echo "📦 Phase 2: Installing fresh K3s..."
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
export KUBECONFIG=$KUBECONFIG_PATH

echo "⏳ Waiting for K3s node to reach 'Ready' state..."
sleep 20
kubectl wait --for=condition=Ready node/$(hostname) --timeout=90s

# Create Namespace
kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

echo "🏗️ Phase 3: Deploying AWX Operator via Kustomize (with Image Fixes)..."

# This Kustomization solves the 404 URL error AND the gcr.io ImagePullBackOff error
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=$AWX_OPERATOR_VERSION
images:
  - name: quay.io/ansible/awx-operator
    newTag: $AWX_OPERATOR_VERSION
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: quay.io/brancz/kube-rbac-proxy
    newTag: v0.15.0
namespace: $NAMESPACE
EOF

# Apply the operator
kubectl apply -k .

echo "📝 Phase 4: Creating AWX Instance manifest..."
cat <<EOF > awx-instance.yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: $NAMESPACE
spec:
  service_type: nodeport
  postgres_storage_class: local-path
EOF

# Ensure CRDs are registered before applying the instance
echo "🛰️ Waiting for CRDs to settle, then deploying AWX Instance..."
sleep 20
kubectl apply -f awx-instance.yaml

echo "----------------------------------------------------------"
echo "🚀 AWX DEPLOYMENT INITIALIZED"
echo "----------------------------------------------------------"

# Final Phase: Credential Discovery
echo "🔑 Waiting for AWX to generate the admin password..."
until kubectl get secret awx-demo-admin-password -n $NAMESPACE &> /dev/null; do
  echo -n "."
  sleep 10
done

# Grab details automatically
ADMIN_PASS=$(kubectl get secret awx-demo-admin-password -n $NAMESPACE -o jsonpath='{.data.password}' | base64 --decode)
NODE_PORT=$(kubectl get svc awx-demo-service -n $NAMESPACE -o jsonpath='{.spec.ports[0].nodePort}')
SERVER_IP=$(hostname -I | awk '{print $1}')

echo -e "\n\n✅ INSTALL COMPLETE!"
echo "----------------------------------------------------------"
echo "ACCESS URL: http://$SERVER_IP:$NODE_PORT"
echo "USERNAME:   admin"
echo "PASSWORD:   $ADMIN_PASS"
echo "----------------------------------------------------------"
echo "🔍 Watch progress: kubectl get pods -n $NAMESPACE -w"
```



---









To retrieve the admin password later, run:

```bash
kubectl get secret awx-demo-admin-password -n awx -o jsonpath='{.data.password}' | base64 --decode; echo

# Find the NodePort (the 5-digit number after the '80:')
kubectl get svc awx-demo-service -n awx

# Find your public/private IP
hostname -I | awk '{print $1}'
```
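The password command works because Kubernetes stores Secret values base64-encoded; `kubectl ... -o jsonpath` prints the encoded value, hence the pipe through `base64 --decode`. A quick round-trip illustration (the sample value is invented):

```shell
# Encode a sample value the way Kubernetes stores Secret data,
# then decode it the way the kubectl command above does.
encoded=$(printf 'S3cr3tPass!' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # → S3cr3tPass!
```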

Saturday, 7 February 2026

Key Terraform Rule: Execution, Files, Folders, and Directories

 

 Key Terraform Rule

Terraform loads and merges ALL .tf files in a directory automatically.

There is:

  • ❌ no “main file”

  • ❌ no execution order by filename

  • ✅ one configuration per directory

So:

terraform apply

applies everything in that folder.


✅ How You SHOULD structure your files

📁 Recommended folder structure

terraform-lab/
├── provider.tf
├── data.tf
├── instance.tf
├── outputs.tf
└── variables.tf

Terraform reads them all together.

LAB: Break your main.tf into separate component files

provider.tf

provider "aws" {
  region = "us-east-1"
}

data.tf

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

instance.tf

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.default.ids[0]
  tags = {
    Name = "terraform-lab"
  }
}

outputs.tf

output "instance_id" {
  value = aws_instance.web.id
}

▶️ Running Terraform

From the directory:

terraform init
terraform plan
terraform apply

Terraform automatically:

  • loads all .tf files

  • builds the dependency graph

  • applies in the correct order


❌ Common misconception

“Terraform executes files top to bottom”

Wrong.

Terraform:

  • builds a dependency graph

  • executes based on references

  • ignores file order and filenames


🧠 KEY TAKEAWAYS

Terraform directory = one application
.tf files = chapters in the same book

You don’t run chapters — you run the book.


🧪 Advanced (Optional): Lab separation strategies

Option A — New folder per lab (recommended for beginners)

labs/
├── lab1-default-vpc/
├── lab2-alb/
└── lab3-asg/

Option B — Same folder, comment/uncomment (not ideal)

Option C — Use variables / count (advanced)


⚠️ One important rule

Terraform only reads files in the current directory.

Subfolders are ignored unless you use modules (advanced topic).
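For completeness, a module block is the one mechanism that pulls a subfolder in (sketch only; the path and name are hypothetical):

```hcl
module "network" {
  # Terraform reads ./modules/network only because this block references it
  source = "./modules/network"
}
```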


✅ Summary

  • You don’t “apply a file”

  • You apply a directory

  • Terraform merges all .tf files automatically

  • File naming is for human readability only


🧠 One-sentence takeaway for students

Terraform applies directories, not files.

Understanding VPCs and Filter Blocks in Terraform

 

Confirm the Default VPC (AWS Console)

  1. Open AWS Console → VPC

  2. Go to Your VPCs

  3. Identify the VPC marked Default = Yes

  4. Go to Subnets

    • Notice one subnet per Availability Zone

💡 Key Concept

EC2 instances are launched into subnets, and subnets belong to VPCs.


🔬 LAB 2 — Create Terraform Project

Create main.tf:

provider "aws" {
  region = "us-east-1"
}

Initialize:

terraform init

🔬 LAB 3 — Look Up the Default VPC (Data Source)

Add to main.tf:

data "aws_vpc" "default" {
  default = true
}

Add output:

output "default_vpc_id" {
  value = data.aws_vpc.default.id
}

Run:

terraform apply -auto-approve

✅ Terraform prints the default VPC ID.

💡 Key Concept

data blocks read existing infrastructure — they do NOT create anything.


🔬 LAB 4 — Find Subnets Using a filter Block (Core Concept)

Now we want subnets that belong ONLY to the default VPC.

Add:

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

Add output:

output "default_subnet_ids" {
  value = data.aws_subnets.default.ids
}

Apply:

terraform apply -auto-approve

🔍 Understanding the filter Block (IMPORTANT)

What the filter block does

It tells Terraform:
“Only return AWS resources that match this condition.”

In this case:

“Give me only the subnets that belong to the default VPC.”


Line-by-line explanation

filter {
  name   = "vpc-id"
  values = [data.aws_vpc.default.id]
}
  • filter {}
    Defines a condition AWS must match

  • name = "vpc-id"
    The AWS API attribute we are filtering on
    (This is an AWS field, not a Terraform keyword)

  • values = [...]
    Acceptable value(s) for that attribute
    Here, it dynamically uses the default VPC ID


What Terraform is doing behind the scenes

Terraform sends AWS a request like:

“List all subnets WHERE vpc-id = vpc-xxxxxxxx”

AWS returns only matching subnets.


remember this

Think of AWS like a database:

SELECT * FROM subnets WHERE vpc_id = 'vpc-xxxxxxxx';

That’s exactly what the filter block does.


Why this is better than hardcoding

❌ Bad:

subnet_id = "subnet-0abc123"

✅ Good:

subnet_id = data.aws_subnets.default.ids[0]

Benefits:

  • Works across AWS accounts

  • Works across regions

  • Real-world Terraform pattern

⚠️ Note for students

The order of subnet IDs is not guaranteed.
Using [0] is fine for labs, but production code should be deterministic.


🔬 LAB 5 — Launch EC2 in the Default VPC

Add:

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316" # Amazon Linux 2 (us-east-1)
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.default.ids[0]
  tags = {
    Name = "terraform-default-vpc-lab"
  }
}

Apply:

terraform apply -auto-approve

✅ EC2 instance launches in the default VPC.


🔬 LAB 6 — Use the Default Security Group (Optional but Best Practice)

Add:

data "aws_security_group" "default" {
  name   = "default"
  vpc_id = data.aws_vpc.default.id
}

Update EC2:

vpc_security_group_ids = [data.aws_security_group.default.id]

Apply again.

💡 Teaching Point

Never assume defaults — always declare dependencies explicitly.


🔬 LAB 7 — Cleanup (Critical Habit)

terraform destroy -auto-approve

🧠 Key Takeaways (Interview / Exam Ready)

  • aws_instance has no vpc_id

  • ✅ EC2 → Subnet → VPC

  • filter blocks safely query AWS

  • ❌ Hardcoding IDs is fragile

  • ✅ Default VPC is OK for labs, not production



Monday, 19 January 2026

Understanding Software Testing Using a Simple JSP Web App

 

🎯 Lab Context (Very Important)

This project is a basic Maven web application created for learning DevOps concepts.

  • It displays a simple Hello World page (index.jsp)

  • It runs on Tomcat

  • Initially, it has no Java logic and no tests

  • Our goal is NOT to build a full application

  • Our goal IS to understand testing in CI/CD

⚠️ This lab focuses on learning testing concepts, not building features.


 

1️⃣ What Is Software Testing? 

✅ What

Software testing is how we verify that code behaves as expected.

In simple words:

“If I change something, how do I know I didn’t break it?”

✅ Why Testing Exists (DevOps Perspective)

In DevOps:

  • Code changes are frequent

  • Builds are automated

  • Deployments are fast

Without tests:

  • Bugs reach production

  • Pipelines deploy broken code

  • Teams lose confidence

Tests act as safety checks in the pipeline. 

✅ Types of Testing (High-Level)

| Type | What it tests | Tool |
|------|---------------|------|
| Unit Test | Java logic | JUnit |
| UI Test | Web pages | Selenium |
| Integration Test | Multiple components | Test frameworks |
| Security Test | Vulnerabilities | Snyk |
| Quality Scan | Code quality | SonarQube |








2️⃣ Why We Do NOT Test index.jsp with JUnit

What index.jsp is:

  • A view

  • Mostly HTML

  • Rendered by Tomcat

What JUnit is designed for:

  • Java classes

  • Java methods

  • Business logic

JUnit:
❌ does not start Tomcat
❌ does not render JSPs
❌ does not test HTML

✅ Key Learning

Not all code is tested the same way.

This is a very important DevOps concept.

3️⃣ The Real-World Testing Pattern (What Companies Do)

Instead of testing JSPs directly, companies:

  • Keep JSPs simple

  • Put logic in Java classes

  • Unit-test the Java logic

Simple Architecture

Browser
  ↓
index.jsp (VIEW – not unit tested)
  ↓
HelloService (LOGIC – unit tested)

This keeps testing:

  • fast

  • reliable

  • automation-friendly



4️⃣ Why We Add a Small Java Class (Even for Hello World)

You may ask:

“Why add Java code if the app is just Hello World?”

Answer:

Because testing needs executable logic.

Without Java code:

  • No tests can run

  • No coverage can be generated

  • JaCoCo has nothing to measure

So we add the smallest possible logic to teach testing correctly.


5️⃣ Step-by-Step: Add Testable Java Logic

Step 5.1 — Create a simple Java class

Path

MyWebApp/src/main/java/com/mywebapp/HelloService.java

You can create this directly in Bitbucket, or locally and then push to Bitbucket.


```java
package com.mywebapp;

public class HelloService {
    public String getMessage() {
        return "Hello World";
    }
}
```

Commit the changes.

📘 This class:

  • Contains logic

  • Can be executed

  • Can be tested

  • Can be measured by coverage tools


6️⃣ Step-by-Step: Write a Unit Test (JUnit)

Step 6.1 — Create a test class

Path

MyWebApp/src/test/java/com/mywebapp/HelloServiceTest.java


```java
package com.mywebapp;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class HelloServiceTest {
    @Test
    void shouldReturnHelloWorldMessage() {
        HelloService service = new HelloService();
        assertEquals("Hello World", service.getMessage());
    }
}
```

What this test does:

  • Calls the method

  • Checks the output

  • Passes if correct

  • Fails if changed


7️⃣ Running Tests (Understanding the Commands)

🔹 Run tests only

mvn clean test

This:

  • Compiles code

  • Runs unit tests

  • Does not generate coverage


🔹 Run tests WITH coverage (important for this lab)

mvn clean verify -Pcoverage

What -Pcoverage means:

  • Activates the coverage profile

  • Enables JaCoCo

  • Attaches the JaCoCo agent

  • Generates coverage data

📍 Output file:

MyWebApp/target/site/jacoco/jacoco.xml

8️⃣ What Is Code Coverage? (Simple Explanation)

Coverage answers:

“How much of my Java code was executed by tests?”

Examples:

  • 0% → No tests ran

  • 50% → Some code executed

  • 100% → All code executed

⚠️ Coverage does NOT mean “bug-free”
It means tested execution paths


9️⃣ How This Fits into Jenkins (Big Picture)

In Jenkins:

  1. Code is checked out

  2. Maven runs tests

  3. JaCoCo generates coverage

  4. SonarQube reads the coverage

  5. Pipeline decides:

    • PASS → deploy

    • FAIL → stop

This is real CI/CD behavior.


In your Jenkins freestyle job, update the Maven goals in the build configuration:

change from

clean install -Dv=${BUILD_NUMBER} sonar:sonar

To

clean verify -Pcoverage -Dv=${BUILD_NUMBER} sonar:sonar -Dsonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml


✅ Report File Pattern

**/target/site/jacoco/jacoco.xml



1) Add the Maven Surefire plugin

The build log showed an ancient Surefire version (2.12.4), which often results in "No tests to run" even when tests exist.

UPDATE YOUR POM

Add this inside <build><plugins> (outside the profile):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.2.5</version>
</plugin>
```

FINAL POM BELOW

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mywebapp</groupId>
  <artifactId>mywebapp</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <name>MyWebApp</name>

  <properties>
    <maven.compiler.source>21</maven.compiler.source>
    <maven.compiler.target>21</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <jacoco.version>0.8.11</jacoco.version>
  </properties>

  <profiles>
    <profile>
      <id>coverage</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.2.5</version>
          </plugin>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>${jacoco.version}</version>
            <executions>
              <!-- 1️⃣ Prepare the agent before tests -->
              <execution>
                <id>prepare-agent</id>
                <goals>
                  <goal>prepare-agent</goal>
                </goals>
              </execution>
              <!-- 2️⃣ Generate the XML report after tests -->
              <execution>
                <id>report</id>
                <phase>verify</phase>
                <goals>
                  <goal>report</goal>
                </goals>
                <configuration>
                  <!-- We only need the XML form for SonarQube -->
                  <outputDirectory>${project.build.directory}/site/jacoco</outputDirectory>
                  <outputEncoding>UTF-8</outputEncoding>
                  <formats>
                    <format>XML</format>
                  </formats>
                </configuration>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>

  <dependencies>
    <!-- Old JUnit already present — keep it -->
    <!-- Example dependencies for unit tests -->
    <dependency>
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter-api</artifactId>
      <version>5.10.0</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter-engine</artifactId>
      <version>5.10.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <finalName>MyWebApp</finalName>
    <plugins>
      <!-- JDK 21 Maven Compiler -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.11.0</version>
        <configuration>
          <release>21</release>
        </configuration>
      </plugin>
      <!-- WAR packaging support -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>3.4.0</version>
      </plugin>
      <plugin>
        <groupId>org.sonarsource.scanner.maven</groupId>
        <artifactId>sonar-maven-plugin</artifactId>
        <version>5.2.0.4988</version>
      </plugin>
    </plugins>
  </build>
</project>
```

