Friday, 1 May 2026

TASK F - GUIDE

 

DNS & Domain (Route 53) — Step-by-Step Guide

Overview

In this task, you register a team domain and configure DNS records so your HealthPulse application is accessible via a human-readable URL (like team-healthpulse.com) instead of raw IP addresses (like 107.21.5.223).

What is DNS?

  • DNS (Domain Name System) is the phone book of the internet
  • It translates domain names (google.com) into IP addresses (142.250.80.46)
  • Without DNS, users would have to memorize IP addresses for every website
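
This lookup is a one-liner in most languages. A minimal Python sketch using only the standard library (it resolves localhost, which comes from the local hosts file, so it works offline):

```python
import socket

# The OS resolver performs exactly the "phone book" lookup described above:
# name in, IP out. "localhost" resolves locally; for a real domain the same
# call triggers a full DNS resolution over the network.
ip = socket.gethostbyname("localhost")
print(ip)   # typically 127.0.0.1
```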

What is Route 53?

  • AWS's managed DNS service
  • Named after TCP/UDP port 53 (the DNS protocol port)
  • You create a hosted zone (a container for DNS records) and add records that map names to IPs
  • Route 53 runs on a global network of DNS servers — queries resolve from the nearest location

What you'll do:

  1. Register a domain (or use an existing one)
  2. Understand DNS concepts: hosted zones, record types, TTL, nameservers
  3. Add Route 53 IAM permissions
  4. Create a Route 53 hosted zone with Terraform
  5. Create DNS records pointing to your infrastructure
  6. Update your domain registrar's nameservers
  7. Verify DNS resolution
  8. Configure Traefik Ingress on k3s for hostname-based routing
  9. Test everything end-to-end
  10. Document in MkDocs

Prerequisites

Before starting Task F, ensure you have completed:


  • OPTIONAL: Bare-metal EC2 server provisioned (you need its Elastic IP)
  • OPTIONAL: k3s cluster provisioned (you need the master's Elastic IP)
  • OPTIONAL: AWS CLI configured with the healthpulse profile
  • A registered domain name (see Step 1)

Tip: Get your IPs now — you'll need them throughout this guide:

# Bare-metal Elastic IP (OPTIONAL, for testing only)
cd terraform/baremetal && terraform output public_ip

# k3s master Elastic IP
cd terraform/k3s && terraform output master_public_ip

Step 1: Register a Domain

You need a domain name. There are two options:

Option A: Register via Route 53 (Simplest)

Route 53 can act as both registrar (where you buy the domain) and DNS host (where records live). This means nameservers are already configured — skip Step 6 entirely.

  1. Go to AWS Console → Route 53 → Registered domains
  2. Click Register domains
  3. Search for your team domain (e.g., team-healthpulse.com)
  4. Select and proceed to checkout
  5. Fill in contact details → Register
| Domain Extension | Approximate Cost |
|------------------|------------------|
| .com             | ~$13/year        |
| .net             | ~$11/year        |
| .io              | ~$39/year        |
| .dev             | ~$12/year        |
| .click           | ~$3/year         |

Budget option: .click domains are ~$3/year. team-healthpulse.click works just as well for a capstone project.

Option B: Register via External Registrar

If you already have a domain from GoDaddy, Namecheap, Google Domains, etc., you can use it. You'll just need to update its nameservers to point to Route 53 (Step 6).

Option C: Free Subdomain (No Cost)

If budget is zero, use a free subdomain service like FreeDNS. Note: some free services have limitations. For learning purposes, this works fine.


Step 2: Understand DNS Concepts

Before touching Terraform, understand what you're building:

The Architecture

┌──────────────────────────────────────────────────────────────────────┐
│                     HOW DNS WORKS FOR YOUR PROJECT                    │
│                                                                      │
│  User types: team-healthpulse.com                                    │
│       │                                                              │
│       ▼                                                              │
│  Browser asks DNS: "What IP is team-healthpulse.com?"                │
│       │                                                              │
│       ▼                                                              │
│  DNS Resolver → Root servers → .com servers → Route 53 nameservers   │
│       │                                                              │
│       ▼                                                              │
│  Route 53 answers: "It's 54.123.45.67"  (your k3s master EIP)       │
│       │                                                              │
│       ▼                                                              │
│  Browser connects to 54.123.45.67:80                                 │
│       │                                                              │
│       ▼                                                              │
│  k3s Traefik Ingress receives request                                │
│       │  Checks: "Which hostname was requested?"                     │
│       │  Matches: team-healthpulse.com → healthpulse-service         │
│       ▼                                                              │
│  HealthPulse Pod serves the page                                     │
└──────────────────────────────────────────────────────────────────────┘
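
The hop-by-hop flow above can be sketched as a toy resolver. Every server name and IP below is an illustrative placeholder:

```python
# Each hop only knows who to ask next, until the authoritative server
# (Route 53) answers with the A record.
root_servers = {"com": "gtld-server.example"}                    # root: knows the .com servers
com_servers = {"team-healthpulse.com": "ns-1234.awsdns-26.org"}  # .com: knows the zone's nameservers
route53 = {"team-healthpulse.com": "54.123.45.67"}               # authoritative: holds the A record

def resolve(domain: str) -> str:
    tld = domain.rsplit(".", 1)[-1]     # "com"
    tld_server = root_servers[tld]      # hop 1: root refers us to the .com servers
    ns = com_servers[domain]            # hop 2: .com refers us to Route 53
    return route53[domain]              # hop 3: Route 53 answers with the IP

print(resolve("team-healthpulse.com"))  # 54.123.45.67
```

A real resolver caches each referral, which is why repeat lookups skip the root and TLD hops entirely.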

Key Concepts

| Concept      | What It Is                                                    | Analogy                                                              |
|--------------|---------------------------------------------------------------|----------------------------------------------------------------------|
| Hosted Zone  | A container for all DNS records of one domain                 | A phone book for one company                                         |
| A Record     | Maps a domain name to an IPv4 address                         | "John's number is 555-1234"                                          |
| CNAME Record | Maps a domain name to another domain name                     | "Call John's number when you dial Jane"                              |
| NS Record    | Lists the nameservers responsible for the zone                | "This company's phone book is maintained by AWS"                     |
| TTL          | Time-to-live — how long resolvers cache the answer (seconds)  | "This number is valid for 5 minutes before you should check again"   |
| Nameservers  | The DNS servers that Route 53 assigns to your hosted zone     | The phone operators who answer queries about your domain             |

DNS Record Types You'll Use

A Record:
  team-healthpulse.com → 54.123.45.67 (k3s master EIP)
  "When someone asks for team-healthpulse.com, send them to this IP"

A Record (subdomain):
  baremetal.team-healthpulse.com → 107.21.5.223 (bare-metal EIP)
  "When someone asks for baremetal.team-healthpulse.com, send them to this other IP"

Wildcard A Record:
  *.team-healthpulse.com → 54.123.45.67 (k3s master EIP)
  "Any subdomain not explicitly defined? Send to the k3s master"

Your DNS Record Map

team-healthpulse.com            → k3s master EIP    (production — k3s)
├── baremetal.team-healthpulse.com → bare-metal EIP  (bare-metal Nginx)- OPTIONAL
├── dev.team-healthpulse.com       → bare-metal EIP  (Task G deployment)
├── k8s.team-healthpulse.com       → k3s master EIP  (k3s API/apps)
├── uat.team-healthpulse.com       → k3s master EIP  (k3s UAT namespace)
├── prod.team-healthpulse.com      → k3s master EIP  (k3s prod namespace)
└── *.team-healthpulse.com         → k3s master EIP  (wildcard catch-all)

Why does dev point to bare-metal? Because Task G deploys directly to the Nginx EC2 server — that's your "traditional" deployment. The k3s subdomains are your Kubernetes deployments.
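
As a sketch of how this record map is consulted (an explicit record wins, the wildcard catches everything else), using the example IPs from above:

```python
# Simplified lookup over the record map above. Real DNS matching follows
# RFC rules for wildcards; this shows only the exact-then-wildcard idea.
RECORDS = {
    "team-healthpulse.com":           "54.123.45.67",   # k3s master
    "baremetal.team-healthpulse.com": "107.21.5.223",   # bare-metal Nginx
    "dev.team-healthpulse.com":       "107.21.5.223",   # Task G deployment
    "*.team-healthpulse.com":         "54.123.45.67",   # wildcard catch-all
}

def lookup(name: str) -> str:
    if name in RECORDS:                 # explicit A record wins
        return RECORDS[name]
    parent = name.split(".", 1)[1]      # strip the leftmost label
    return RECORDS["*." + parent]       # fall back to the wildcard

print(lookup("dev.team-healthpulse.com"))       # 107.21.5.223 (explicit record)
print(lookup("anything.team-healthpulse.com"))  # 54.123.45.67 (wildcard)
```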


Step 3: Add Route 53 IAM Permissions

Your existing IAM policy covers EC2/VPC but not Route 53. You need to add a new policy.

3.1 — Create the Route 53 Policy

  1. Go to IAM → Policies → Create policy
  2. Click JSON tab and paste:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Route53Read",
      "Effect": "Allow",
      "Action": [
        "route53:GetHostedZone",
        "route53:ListHostedZones",
        "route53:ListHostedZonesByName",
        "route53:GetHostedZoneCount",
        "route53:ListResourceRecordSets",
        "route53:GetChange",
        "route53:ListTagsForResource"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Route53Mutate",
      "Effect": "Allow",
      "Action": [
        "route53:CreateHostedZone",
        "route53:DeleteHostedZone",
        "route53:ChangeResourceRecordSets",
        "route53:ChangeTagsForResource"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Route53Domains",
      "Effect": "Allow",
      "Action": [
        "route53domains:GetDomainDetail",
        "route53domains:ListDomains",
        "route53domains:UpdateDomainNameservers"
      ],
      "Resource": "*"
    }
  ]
}
  3. Name it: HealthPulseRoute53Policy
  4. Click Create policy
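
Optionally, sanity-check the policy JSON locally before pasting it. This sketch embeds a trimmed copy of the statements; in practice, load the full JSON you intend to paste:

```python
import json

# Parse the policy and confirm the mutating action Terraform needs
# (ChangeResourceRecordSets) is actually granted.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "Route53Mutate", "Effect": "Allow",
     "Action": ["route53:CreateHostedZone", "route53:ChangeResourceRecordSets"],
     "Resource": "*"}
  ]
}
""")

allowed = {action
           for stmt in policy["Statement"] if stmt["Effect"] == "Allow"
           for action in stmt["Action"]}
assert "route53:ChangeResourceRecordSets" in allowed
print("policy grants record changes")
```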

3.2 — Attach to Your Group

  1. Go to IAM → User groups → healthpulse-devops
  2. Click Permissions → Add permissions → Attach policies
  3. Search for HealthPulseRoute53Policy → check it → Attach policies

Why no region lock? Route 53 is a global AWS service — it doesn't belong to any specific region. You can't restrict it to us-east-1 like you did with EC2.

3.3 — Verify Permissions

export AWS_PROFILE=healthpulse

# This should work now (returns empty list if no zones exist yet)
aws route53 list-hosted-zones

Expected output:

{
    "HostedZones": []
}

Step 4: Review the Terraform Configuration

Before running Terraform, understand what terraform/dns/ creates.

4.1 — File Structure

terraform/dns/
├── main.tf          # Route 53 hosted zone + DNS records
├── variables.tf     # Domain name, infrastructure IPs
├── outputs.tf       # Nameservers, URLs
└── dev.tfvars       # Your team's values

4.2 — What main.tf Creates

| Resource                      | Purpose                                                  |
|-------------------------------|----------------------------------------------------------|
| aws_route53_zone.main         | Hosted zone — the container for all your DNS records     |
| aws_route53_record.root       | team-healthpulse.com → k3s master IP                     |
| aws_route53_record.baremetal  | baremetal.team-healthpulse.com → bare-metal IP (OPTIONAL)|
| aws_route53_record.k8s        | k8s.team-healthpulse.com → k3s master IP                 |
| aws_route53_record.dev        | dev.team-healthpulse.com → bare-metal IP                 |
| aws_route53_record.uat        | uat.team-healthpulse.com → k3s master IP                 |
| aws_route53_record.prod       | prod.team-healthpulse.com → k3s master IP                |
| aws_route53_record.wildcard   | *.team-healthpulse.com → k3s master IP                   |

4.3 — How DNS Records Work in Terraform

resource "aws_route53_record" "baremetal" {
  zone_id = aws_route53_zone.main.zone_id   # Which hosted zone
  name    = "baremetal.${var.domain_name}"   # The subdomain
  type    = "A"                              # A record = name → IP
  ttl     = 300                              # Cache for 5 minutes
  records = [var.baremetal_ip]               # The IP address
}

TTL = 300 means DNS resolvers cache this answer for 5 minutes. After changing a record, it takes up to 5 minutes for the change to propagate. In production you'd use 3600 (1 hour) or higher for stability; 300 is good for development where you change IPs often.
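
The caching behaviour that TTL controls can be sketched in a few lines. The clock is passed in explicitly so the example is deterministic:

```python
# A resolver reuses a cached answer until its TTL expires; only then does it
# re-query the authoritative server. Lower TTL = changes visible sooner,
# at the cost of more queries.
class TtlCache:
    def __init__(self):
        self.store = {}                     # name -> (ip, expires_at)

    def put(self, name, ip, ttl, now):
        self.store[name] = (ip, now + ttl)

    def get(self, name, now):
        hit = self.store.get(name)
        if hit and now < hit[1]:
            return hit[0]                   # still fresh: serve from cache
        return None                         # expired: must re-resolve

cache = TtlCache()
cache.put("team-healthpulse.com", "54.123.45.67", ttl=300, now=0)
print(cache.get("team-healthpulse.com", now=100))   # 54.123.45.67 (cached)
print(cache.get("team-healthpulse.com", now=400))   # None: TTL expired
```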


Step 5: Run Terraform

5.1 — Get Your Infrastructure IPs

You need the Elastic IPs from your other Terraform configurations:

# Get bare-metal IP (OPTIONAL)
cd terraform/baremetal
terraform output public_ip
# Example: 107.21.5.223

# Get k3s master IP
cd ../k3s
terraform output master_public_ip
# Example: 54.123.45.67

5.2 — Update dev.tfvars

Edit terraform/dns/dev.tfvars with your actual values:

environment   = "dev"
team_name     = "team-excellence"

domain_name   = "team-healthpulse.com"       # Your registered domain
baremetal_ip  = "107.21.5.223"               # From step 5.1
k3s_master_ip = "54.123.45.67"               # From step 5.1

5.3 — Initialize and Apply

cd terraform/dns

# Initialize Terraform (downloads AWS provider)
terraform init

# Preview what will be created
terraform plan -var-file=dev.tfvars

# Apply — creates the hosted zone and all records
terraform apply -var-file=dev.tfvars

5.4 — Check the Output

After terraform apply completes:

terraform output

Expected output:

domain_name = "team-healthpulse.com"
nameservers = tolist([
  "ns-1234.awsdns-26.org",
  "ns-567.awsdns-08.net",
  "ns-890.awsdns-44.co.uk",
  "ns-12.awsdns-01.com",
])
registrar_instructions = "Update your domain registrar's nameservers to: ns-1234.awsdns-26.org, ns-567.awsdns-08.net, ns-890.awsdns-44.co.uk, ns-12.awsdns-01.com"
urls = {
  "baremetal" = "http://baremetal.team-healthpulse.com"
  "dev"       = "http://dev.team-healthpulse.com"
  "k8s"       = "http://k8s.team-healthpulse.com"
  "prod"      = "http://prod.team-healthpulse.com"
  "root"      = "http://team-healthpulse.com"
  "uat"       = "http://uat.team-healthpulse.com"
}
zone_id = "Z1234567890ABC"

Save those nameservers — you need them in the next step.


Step 6: Update Your Registrar's Nameservers

Skip this step if you registered the domain via Route 53 (Option A in Step 1). Route 53 automatically uses its own nameservers when it's both registrar and DNS host.

This is the critical step that connects your domain to Route 53. Your domain registrar (GoDaddy, Namecheap, etc.) currently points to its own DNS servers. You need to tell it: "Route 53 is now responsible for my domain's DNS."

How It Works

BEFORE (domain registered at GoDaddy, DNS at GoDaddy):
  User → "team-healthpulse.com?" → GoDaddy DNS → ???  (no records configured)

AFTER (domain registered at GoDaddy, DNS at Route 53):
  User → "team-healthpulse.com?" → Route 53 DNS → 54.123.45.67  (your k3s master)

6.1 — For GoDaddy

  1. Log into GoDaddy
  2. Go to My Products → your domain → DNS → Nameservers
  3. Click Change → Enter my own nameservers (advanced)
  4. Replace all existing nameservers with the 4 from Terraform output:
    ns-1234.awsdns-26.org
    ns-567.awsdns-08.net
    ns-890.awsdns-44.co.uk
    ns-12.awsdns-01.com
    
  5. Click Save

6.2 — For Namecheap

  1. Log into Namecheap
  2. Go to Domain List → your domain → Manage
  3. Under Nameservers → select Custom DNS
  4. Enter the 4 Route 53 nameservers
  5. Click the green checkmark to save

6.3 — For Google Domains / Squarespace

  1. Go to Google Domains (now migrated to Squarespace Domains)
  2. Select your domain → DNS
  3. Switch to Custom name servers
  4. Add the 4 Route 53 nameservers
  5. Click Save

Propagation time: Nameserver changes can take up to 48 hours to propagate globally, but usually complete within 15-30 minutes. Be patient.


Step 7: Verify DNS Resolution

7.1 — Check Nameserver Propagation

After updating nameservers, verify they're pointing to Route 53:

# Check which nameservers are authoritative for your domain
nslookup -type=NS team-healthpulse.com

# Or with dig (Linux/Mac)
dig NS team-healthpulse.com +short

Expected: You should see the 4 Route 53 nameservers from your Terraform output.

If you still see the old registrar's nameservers, wait a few more minutes and try again.

7.2 — Check A Record Resolution

# Check the root domain
nslookup team-healthpulse.com

# Check subdomains
nslookup baremetal.team-healthpulse.com

nslookup dev.team-healthpulse.com
nslookup k8s.team-healthpulse.com
nslookup prod.team-healthpulse.com

Expected output for root domain:

Server:    8.8.8.8
Address:   8.8.8.8#53

Non-authoritative answer:
Name:    team-healthpulse.com
Address: 54.123.45.67       ← Your k3s master EIP

Expected output for baremetal subdomain:

Name:    baremetal.team-healthpulse.com
Address: 107.21.5.223       ← Your bare-metal EIP

7.3 — Test in the Browser

# Bare-metal server (Task G deployment)
curl -I http://baremetal.team-healthpulse.com
curl http://dev.team-healthpulse.com/health
# → {"status":"healthy","deploy":"baremetal"}

# k3s master (if app is deployed via NodePort)
curl -I http://team-healthpulse.com

Note: The bare-metal subdomains (baremetal, dev) should work immediately — Nginx is already listening on port 80 for any hostname. The k3s subdomains need Traefik Ingress configured first (Step 8).

7.4 — Troubleshooting DNS

| Problem                                    | Cause                              | Fix                                                                   |
|--------------------------------------------|------------------------------------|-----------------------------------------------------------------------|
| nslookup returns old/wrong IP              | DNS cache                          | Wait for the TTL (5 min). Flush the local cache: ipconfig /flushdns (Windows) |
| nslookup says "can't find domain"          | Nameservers not propagated yet     | Wait 15-30 minutes. Check the registrar's NS settings.                |
| Browser shows "This site can't be reached" | DNS works but server isn't listening | Check: is Nginx running? Is the security group allowing port 80?    |
| SERVFAIL                                   | Route 53 zone has no SOA/NS records | Terraform should create these. Run terraform apply again.            |

Step 8: Configure Traefik Ingress on k3s

Your k3s cluster comes with Traefik as a built-in ingress controller. Traefik listens on ports 80/443 on the master node and routes traffic to the correct Kubernetes service based on the hostname in the HTTP request.

8.1 — Why Ingress?

Without Ingress, you'd access k3s services via NodePort (e.g., http://54.123.45.67:31234). With Ingress:

WITHOUT Ingress:
  http://54.123.45.67:31234   ← ugly, hard to remember, port required

WITH Ingress:
  http://prod.team-healthpulse.com   ← clean, standard port 80

8.2 — How Traefik Ingress Works

Browser request: "GET / HTTP/1.1  Host: prod.team-healthpulse.com"
    │
    ▼
DNS resolves prod.team-healthpulse.com → 54.123.45.67 (k3s master EIP)
    │
    ▼
Traefik (running on k3s, listening :80) receives the request
    │
    ├─ Host = prod.team-healthpulse.com?  → route to healthpulse-service in healthpulse-prod
    ├─ Host = uat.team-healthpulse.com?   → route to healthpulse-service in healthpulse-uat
    └─ Host = k8s.team-healthpulse.com?   → route to healthpulse-service in healthpulse-dev
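
The routing decision above reduces to a lookup on the Host header. A sketch using the namespaces and service name from this guide:

```python
# Same IP and port for every request; the Host header alone selects the
# backend (namespace, service) pair, mirroring the Traefik rules above.
ROUTES = {
    "prod.team-healthpulse.com": ("healthpulse-prod", "healthpulse-service"),
    "uat.team-healthpulse.com":  ("healthpulse-uat",  "healthpulse-service"),
    "k8s.team-healthpulse.com":  ("healthpulse-dev",  "healthpulse-service"),
}

def route(host_header: str):
    # No rule matching the hostname -> Traefik answers 404
    return ROUTES.get(host_header, (None, "404 not found"))

print(route("uat.team-healthpulse.com"))  # ('healthpulse-uat', 'healthpulse-service')
print(route("unknown.example.com"))       # (None, '404 not found')
```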

8.3 — Review the Ingress Manifest

The provided kubernetes/ingress.yml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: healthpulse-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: REPLACE_WITH_YOUR_SUBDOMAIN    # ← change this per namespace
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: healthpulse-service   # ← must match your Service name
                port:
                  number: 80

8.4 — Apply Ingress for Each Namespace

You need to apply an Ingress in each namespace with the correct hostname. Either maintain one manifest per namespace (ingress-dev.yml, ingress-uat.yml, ingress-prod.yml, as below), or copy kubernetes/ingress.yml and use sed to substitute the placeholder hostname for each namespace:

# Set your kubeconfig
export KUBECONFIG=~/.kube/healthpulse-config
# ─── Production namespace ───
kubectl apply -f kubernetes/ingress-dev.yml
kubectl apply -f kubernetes/ingress-uat.yml
kubectl apply -f kubernetes/ingress-prod.yml

On Windows without sed? SSH into the master and run these commands there, or manually edit the YAML for each namespace.

8.5 — Verify Ingress Resources

# Check all ingress resources across namespaces
kubectl get ingress -A

# Expected:
# NAMESPACE           NAME                   CLASS     HOSTS                            ADDRESS   PORTS   AGE
# healthpulse-prod    healthpulse-ingress    traefik   prod.team-healthpulse.com        80        80      1m
# healthpulse-uat     healthpulse-ingress    traefik   uat.team-healthpulse.com         80        80      1m
# healthpulse-dev     healthpulse-ingress    traefik   k8s.team-healthpulse.com         80        80      1m

# Check Traefik is running (k3s installs it in kube-system)
kubectl get pods -n kube-system | grep traefik
# traefik-xxxxx-xxxxx   1/1     Running   0   2d

8.6 — Test Hostname-Based Routing

# Test production
curl -H "Host: prod.team-healthpulse.com" http://<K3S_MASTER_IP>/health
# → {"status":"healthy"}

# Test UAT
curl -H "Host: uat.team-healthpulse.com" http://<K3S_MASTER_IP>/health
# → {"status":"healthy"}

# Once DNS is propagated, use the actual domain:
curl http://prod.team-healthpulse.com/health
curl http://uat.team-healthpulse.com/health

The -H "Host: ..." trick: This lets you test Ingress routing before DNS propagates. You're hitting the IP directly but telling the server "pretend I came from this hostname."


Step 9: Verify Everything End-to-End

Run through the full checklist:

9.1 — Bare-Metal Subdomains

# These go to the bare-metal Nginx server
curl http://baremetal.team-healthpulse.com/health
# → {"status":"healthy","deploy":"baremetal"}

curl http://dev.team-healthpulse.com/health
# → {"status":"healthy","deploy":"baremetal"}

# Open in browser — should see the HealthPulse app
# http://baremetal.team-healthpulse.com
# http://dev.team-healthpulse.com

9.2 — k3s Subdomains (via Traefik Ingress)

# These go to k3s → Traefik → correct namespace
curl http://prod.team-healthpulse.com/health
curl http://uat.team-healthpulse.com/health
curl http://k8s.team-healthpulse.com/health

# Open in browser
# http://prod.team-healthpulse.com

9.3 — Root Domain

curl http://team-healthpulse.com/health
# → Should resolve to k3s master IP

Note: The root domain points to the k3s master IP, but unless you have a Traefik Ingress rule matching team-healthpulse.com (without a subdomain prefix), Traefik may return a 404. If you want the root domain to work, add another Ingress rule with host: team-healthpulse.com in your production namespace.

9.4 — View in Route 53 Console

  1. Go to AWS Console → Route 53 → Hosted zones
  2. Click your domain
  3. You should see:
| Record Name                    | Type | Value                          |
|--------------------------------|------|--------------------------------|
| team-healthpulse.com           | NS   | (4 nameservers — auto-created) |
| team-healthpulse.com           | SOA  | (auto-created)                 |
| team-healthpulse.com           | A    | 54.123.45.67                   |
| baremetal.team-healthpulse.com | A    | 107.21.5.223                   |
| dev.team-healthpulse.com       | A    | 107.21.5.223                   |
| k8s.team-healthpulse.com       | A    | 54.123.45.67                   |
| uat.team-healthpulse.com       | A    | 54.123.45.67                   |
| prod.team-healthpulse.com      | A    | 54.123.45.67                   |
| *.team-healthpulse.com         | A    | 54.123.45.67                   |

Step 10: Document in MkDocs

Add DNS information to your MkDocs documentation site.

10.1 — Update environments.md

Add a URL column to your environment matrix:

## Environment Matrix

| Environment | Deploy Type | IP Address | URL | Deployment Method |
|-------------|-----------|------------|-----|-------------------|
| Dev (bare-metal) | Bare-metal | 107.21.5.223 | dev.team-healthpulse.com | SCP + MANUAL |
| Dev (k3s) | Kubernetes | 54.123.45.67 | k8s.team-healthpulse.com | kubectl apply |
| UAT | Kubernetes | 54.123.45.67 | uat.team-healthpulse.com | kubectl apply |
| Production | Kubernetes | 54.123.45.67 | prod.team-healthpulse.com | kubectl apply |

10.2 — Document DNS Setup in architecture.md

Add an Architecture Decision Record:

## ADR-003: DNS Strategy

**Status:** Accepted

**Context:** The application is deployed across a bare-metal server and a k3s cluster.
Users need friendly URLs instead of IP addresses.

**Decision:** Use AWS Route 53 as the DNS provider with:
- Subdomains per environment (dev, uat, prod)
- Bare-metal deployments on the `baremetal` and `dev` subdomains
- Kubernetes deployments on `k8s`, `uat`, and `prod` subdomains
- Wildcard record for future flexibility
- Traefik Ingress on k3s for hostname-based routing

**Consequences:**
- Route 53 costs $0.50/month per hosted zone + $0.40 per million queries
- Nameserver changes required at the domain registrar
- TTL of 300s means DNS changes propagate within 5 minutes

Step 11: Cleanup

When you're done with the capstone:

# Destroy DNS records and hosted zone
cd terraform/dns
terraform destroy -var-file=dev.tfvars

# IMPORTANT: After destroying the hosted zone, update your registrar's
# nameservers back to the default (or the domain will stop resolving entirely)

Order matters: Destroy DNS before destroying EC2 instances. If you destroy EC2 first, the DNS records will point to dead IPs (not harmful, just broken).


Troubleshooting

DNS Not Resolving

# Step 1: Check if Route 53 has your records
aws route53 list-resource-record-sets \
  --hosted-zone-id $(cd terraform/dns && terraform output -raw zone_id) \
  --query "ResourceRecordSets[?Type=='A']"

# Step 2: Check nameserver delegation
nslookup -type=NS team-healthpulse.com 8.8.8.8

# Step 3: Query Route 53 directly (bypass cache)
nslookup team-healthpulse.com ns-1234.awsdns-26.org

# Step 4: Flush local DNS cache
# Windows:
ipconfig /flushdns
# Mac:
sudo dscacheutil -flushcache
# Linux (systemd-resolved):
sudo resolvectl flush-caches
# (older systems: sudo systemd-resolve --flush-caches)

Traefik Returns 404

# Check Traefik logs
kubectl logs -n kube-system -l app.kubernetes.io/name=traefik --tail=50

# Check Ingress is correct
kubectl describe ingress healthpulse-ingress -n healthpulse-prod

# Common causes:
# 1. host: doesn't match the domain in the request
# 2. Service name in Ingress doesn't match actual Service name
# 3. Service is in a different namespace
# 4. Pods aren't running (kubectl get pods -n healthpulse-prod)

Permission Errors in Terraform

# Verify Route 53 permissions
aws route53 list-hosted-zones
# If "AccessDenied" → check IAM policy is attached to your group

# Verify your identity
aws sts get-caller-identity --profile healthpulse

Key Concepts Summary

| Concept                | What You Learned                                                              |
|------------------------|-------------------------------------------------------------------------------|
| DNS resolution chain   | Browser → resolver → root → TLD → authoritative (Route 53) → IP               |
| Hosted zone            | Container for DNS records. Route 53 assigns 4 nameservers per zone.           |
| A record               | Maps a domain name directly to an IP address                                  |
| Wildcard record        | *.domain.com catches any undefined subdomain                                  |
| TTL                    | How long resolvers cache the answer. Lower = faster propagation, more queries.|
| Nameserver delegation  | Telling your registrar "Route 53 answers queries for my domain"               |
| Ingress controller     | Reverse proxy inside Kubernetes that routes by hostname (Traefik in k3s)      |
| Host-based routing     | Same IP, different hostname → different backend service                       |

Cost

| Resource             | Monthly Cost                                            |
|----------------------|---------------------------------------------------------|
| Route 53 hosted zone | $0.50                                                   |
| DNS queries          | $0.40 per million queries (negligible for a capstone)   |
| Domain registration  | $3–$39/year depending on extension                      |
| Total                | ~$0.50/month + domain registration                      |
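
A back-of-envelope check of the monthly figure (domain registration is billed yearly and excluded here):

```python
# Monthly Route 53 cost from the table above: flat hosted-zone fee plus a
# per-million-queries charge.
def monthly_cost(millions_of_queries: float) -> float:
    return 0.50 + 0.40 * millions_of_queries

print(round(monthly_cost(0.001), 2))  # 0.5  (a capstone's traffic is negligible)
print(round(monthly_cost(1.0), 2))    # 0.9  (at one million queries/month)
```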

Remember: terraform destroy removes the hosted zone and records. The domain registration (if bought via Route 53) is non-refundable and persists even after the hosted zone is destroyed.
