In this task, you register a team domain and configure DNS records so your HealthPulse application is accessible via a human-readable URL (like team-healthpulse.com) instead of raw IP addresses (like 107.21.5.223).
What is DNS?
- DNS (Domain Name System) is the phone book of the internet
- It translates domain names (google.com) into IP addresses (142.250.80.46)
- Without DNS, users would have to memorize IP addresses for every website
What is Route 53?
- AWS's managed DNS service
- Named after TCP/UDP port 53 (the DNS protocol port)
- You create a hosted zone (a container for DNS records) and add records that map names to IPs
- Route 53 runs on a global network of DNS servers — queries resolve from the nearest location
What you'll do:
- Register a domain (or use an existing one)
- Understand DNS concepts: hosted zones, record types, TTL, nameservers
- Add Route 53 IAM permissions
- Create a Route 53 hosted zone with Terraform
- Create DNS records pointing to your infrastructure
- Update your domain registrar's nameservers
- Verify DNS resolution
- Configure Traefik Ingress on k3s for hostname-based routing
- Test everything end-to-end
- Document in MkDocs
Before starting Task E, ensure you have completed:
- Bare-metal EC2 server provisioned (optional; you need its Elastic IP)
- k3s cluster provisioned (optional; you need the master's Elastic IP)
- AWS CLI configured with the healthpulse profile (optional)
- A registered domain name (see Step 1)
Tip: Get your IPs now — you'll need them throughout this guide:
# Bare-metal Elastic IP (optional, for testing only)
cd terraform/baremetal && terraform output public_ip

# k3s master Elastic IP
cd terraform/k3s && terraform output master_public_ip
You need a domain name. There are two options:
Route 53 can act as both registrar (where you buy the domain) and DNS host (where records live). This means nameservers are already configured — skip Step 6 entirely.
- Go to AWS Console → Route 53 → Registered domains
- Click Register domains
- Search for your team domain (e.g., team-healthpulse.com)
- Select and proceed to checkout
- Fill in contact details → Register
| Domain Extension | Approximate Cost |
|---|---|
| .com | ~$13/year |
| .net | ~$11/year |
| .io | ~$39/year |
| .dev | ~$12/year |
| .click | ~$3/year |
Budget option: .click domains are ~$3/year. team-healthpulse.click works just as well for a capstone project.
If you already have a domain from GoDaddy, Namecheap, Google Domains, etc., you can use it. You'll just need to update its nameservers to point to Route 53 (Step 6).
If budget is zero, use a free subdomain service like FreeDNS. Note: some free services have limitations. For learning purposes, this works fine.
Before touching Terraform, understand what you're building:
┌──────────────────────────────────────────────────────────────────────┐
│ HOW DNS WORKS FOR YOUR PROJECT │
│ │
│ User types: team-healthpulse.com │
│ │ │
│ ▼ │
│ Browser asks DNS: "What IP is team-healthpulse.com?" │
│ │ │
│ ▼ │
│ DNS Resolver → Root servers → .com servers → Route 53 nameservers │
│ │ │
│ ▼ │
│ Route 53 answers: "It's 54.123.45.67" (your k3s master EIP) │
│ │ │
│ ▼ │
│ Browser connects to 54.123.45.67:80 │
│ │ │
│ ▼ │
│ k3s Traefik Ingress receives request │
│ │ Checks: "Which hostname was requested?" │
│ │ Matches: team-healthpulse.com → healthpulse-service │
│ ▼ │
│ HealthPulse Pod serves the page │
└──────────────────────────────────────────────────────────────────────┘
| Concept | What It Is | Analogy |
|---|---|---|
| Hosted Zone | A container for all DNS records of one domain | A phone book for one company |
| A Record | Maps a domain name to an IPv4 address | "John's number is 555-1234" |
| CNAME Record | Maps a domain name to another domain name | "Call John's number when you dial Jane" |
| NS Record | Lists the nameservers responsible for the zone | "This company's phone book is maintained by AWS" |
| TTL | Time-to-live — how long resolvers cache the answer (seconds) | "This number is valid for 5 minutes before you should check again" |
| Nameservers | The DNS servers that Route 53 assigns to your hosted zone | The phone operators who answer queries about your domain |
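The TTL concept from the table can be sketched as a tiny resolver cache. This is a minimal Python illustration, not a real resolver; the record data and 300-second TTL are example values, and `ResolverCache` is a hypothetical helper:

```python
import time

class ResolverCache:
    """Caches DNS answers and honors each record's TTL."""

    def __init__(self, authoritative):
        self.authoritative = authoritative  # name -> (ip, ttl_seconds)
        self.cache = {}                     # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and now < hit[1]:
            return hit[0], "cache"          # cached answer is still fresh
        ip, ttl = self.authoritative[name]  # "ask Route 53" again
        self.cache[name] = (ip, now + ttl)
        return ip, "authoritative"

# Illustrative zone data (not your real records)
zone = {"team-healthpulse.com": ("54.123.45.67", 300)}
resolver = ResolverCache(zone)

print(resolver.resolve("team-healthpulse.com", now=0))    # ('54.123.45.67', 'authoritative')
print(resolver.resolve("team-healthpulse.com", now=100))  # ('54.123.45.67', 'cache')
print(resolver.resolve("team-healthpulse.com", now=301))  # TTL expired -> authoritative again
```

This is why a lower TTL means faster propagation: resolvers throw away the cached answer sooner and re-query the authoritative nameservers.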
A Record:
team-healthpulse.com → 54.123.45.67 (k3s master EIP)
"When someone asks for team-healthpulse.com, send them to this IP"
A Record (subdomain):
baremetal.team-healthpulse.com → 107.21.5.223 (bare-metal EIP)
"When someone asks for baremetal.team-healthpulse.com, send them to this other IP"
Wildcard A Record:
*.team-healthpulse.com → 54.123.45.67 (k3s master EIP)
"Any subdomain not explicitly defined? Send to the k3s master"
team-healthpulse.com → k3s master EIP (production — k3s)
├── baremetal.team-healthpulse.com → bare-metal EIP (bare-metal Nginx, OPTIONAL)
├── dev.team-healthpulse.com → bare-metal EIP (Task G deployment)
├── k8s.team-healthpulse.com → k3s master EIP (k3s API/apps)
├── uat.team-healthpulse.com → k3s master EIP (k3s UAT namespace)
├── prod.team-healthpulse.com → k3s master EIP (k3s prod namespace)
└── *.team-healthpulse.com → k3s master EIP (wildcard catch-all)
Why does dev point to bare-metal? Because Task G deploys directly to the Nginx EC2 server — that's your "traditional" deployment. The k3s subdomains are your Kubernetes deployments.
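The lookup order behind the wildcard catch-all can be sketched in a few lines. This is a simplified Python illustration (exact records win, otherwise the wildcard answers); the IPs are this guide's example values and `lookup` is a hypothetical helper, not Route 53's full matching algorithm:

```python
K3S_IP = "54.123.45.67"        # example k3s master EIP from this guide
BAREMETAL_IP = "107.21.5.223"  # example bare-metal EIP

RECORDS = {
    "team-healthpulse.com": K3S_IP,
    "baremetal.team-healthpulse.com": BAREMETAL_IP,
    "dev.team-healthpulse.com": BAREMETAL_IP,
    "k8s.team-healthpulse.com": K3S_IP,
    "uat.team-healthpulse.com": K3S_IP,
    "prod.team-healthpulse.com": K3S_IP,
    "*.team-healthpulse.com": K3S_IP,  # wildcard catch-all
}

def lookup(name):
    """Exact records win; otherwise fall back to the zone's wildcard record."""
    if name in RECORDS:
        return RECORDS[name]
    # Replace the leftmost label with '*' and try the wildcard
    _, _, parent = name.partition(".")
    return RECORDS.get("*." + parent)

print(lookup("dev.team-healthpulse.com"))      # 107.21.5.223 (explicit record)
print(lookup("staging.team-healthpulse.com"))  # 54.123.45.67 (wildcard catch-all)
```

A subdomain you never defined (like `staging`) still resolves, landing on the k3s master, where Traefik decides whether any Ingress rule matches it.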
Your existing IAM policy covers EC2/VPC but not Route 53. You need to add a new policy.
- Go to IAM → Policies → Create policy
- Click JSON tab and paste:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Route53Read",
"Effect": "Allow",
"Action": [
"route53:GetHostedZone",
"route53:ListHostedZones",
"route53:ListHostedZonesByName",
"route53:GetHostedZoneCount",
"route53:ListResourceRecordSets",
"route53:GetChange",
"route53:ListTagsForResource"
],
"Resource": "*"
},
{
"Sid": "Route53Mutate",
"Effect": "Allow",
"Action": [
"route53:CreateHostedZone",
"route53:DeleteHostedZone",
"route53:ChangeResourceRecordSets",
"route53:ChangeTagsForResource"
],
"Resource": "*"
},
{
"Sid": "Route53Domains",
"Effect": "Allow",
"Action": [
"route53domains:GetDomainDetail",
"route53domains:ListDomains",
"route53domains:UpdateDomainNameservers"
],
"Resource": "*"
}
]
}
- Name it: HealthPulseRoute53Policy
- Click Create policy
- Go to IAM → User groups → healthpulse-devops
- Click Permissions → Add permissions → Attach policies
- Search for HealthPulseRoute53Policy → check it → Attach policies
Why no region lock? Route 53 is a global AWS service — it doesn't belong to any specific region. You can't restrict it to us-east-1 like you did with EC2.
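To sanity-check what a statement grants, you can simulate the Allow-side action match. This is a deliberately simplified Python sketch; real IAM evaluation also considers explicit Deny statements, resources, and conditions, and `is_allowed` is a hypothetical helper:

```python
import fnmatch

def is_allowed(policy, action):
    """True if any Allow statement's Action list matches (supports '*' wildcards)."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        if any(fnmatch.fnmatch(action, pattern) for pattern in actions):
            return True
    return False

# Abbreviated version of the policy above
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "Route53Read", "Effect": "Allow",
         "Action": ["route53:ListHostedZones", "route53:GetHostedZone"],
         "Resource": "*"},
    ],
}

print(is_allowed(policy, "route53:ListHostedZones"))   # True
print(is_allowed(policy, "route53:DeleteHostedZone"))  # False (not in this statement)
```

If `aws route53 list-hosted-zones` returns AccessDenied later, this is the mental model for checking which statement should have matched.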
export AWS_PROFILE=healthpulse
# This should work now (returns empty list if no zones exist yet)
aws route53 list-hosted-zones

Expected output:
{
"HostedZones": []
}

Before running Terraform, understand what terraform/dns/ creates.
terraform/dns/
├── main.tf # Route 53 hosted zone + DNS records
├── variables.tf # Domain name, infrastructure IPs
├── outputs.tf # Nameservers, URLs
└── dev.tfvars # Your team's values
| Resource | Purpose |
|---|---|
| aws_route53_zone.main | Hosted zone — the container for all your DNS records |
| aws_route53_record.root | team-healthpulse.com → k3s master IP |
| aws_route53_record.baremetal | baremetal.team-healthpulse.com → bare-metal IP (OPTIONAL) |
| aws_route53_record.k8s | k8s.team-healthpulse.com → k3s master IP |
| aws_route53_record.dev | dev.team-healthpulse.com → bare-metal IP |
| aws_route53_record.uat | uat.team-healthpulse.com → k3s master IP |
| aws_route53_record.prod | prod.team-healthpulse.com → k3s master IP |
| aws_route53_record.wildcard | *.team-healthpulse.com → k3s master IP |
resource "aws_route53_record" "baremetal" {
zone_id = aws_route53_zone.main.zone_id # Which hosted zone
name = "baremetal.${var.domain_name}" # The subdomain
type = "A" # A record = name → IP
ttl = 300 # Cache for 5 minutes
records = [var.baremetal_ip] # The IP address
}

TTL = 300 means DNS resolvers cache this answer for 5 minutes. After changing a record, it takes up to 5 minutes for the change to propagate. In production you'd use 3600 (1 hour) or higher for stability; 300 is good for development where you change IPs often.
You need the Elastic IPs from your other Terraform configurations:
# Get bare-metal IP (optional)
cd terraform/baremetal
terraform output public_ip
# Example: 107.21.5.223
# Get k3s master IP
cd ../k3s
terraform output master_public_ip
# Example: 54.123.45.67

Edit terraform/dns/dev.tfvars with your actual values:
environment = "dev"
team_name = "team-excellence"
domain_name = "team-healthpulse.com" # Your registered domain
baremetal_ip = "107.21.5.223" # From step 5.1
k3s_master_ip = "54.123.45.67"   # From step 5.1

cd terraform/dns
# Initialize Terraform (downloads AWS provider)
terraform init
# Preview what will be created
terraform plan -var-file=dev.tfvars
# Apply — creates the hosted zone and all records
terraform apply -var-file=dev.tfvars

After terraform apply completes:
terraform output

Expected output:
domain_name = "team-healthpulse.com"
nameservers = tolist([
"ns-1234.awsdns-26.org",
"ns-567.awsdns-08.net",
"ns-890.awsdns-44.co.uk",
"ns-12.awsdns-01.com",
])
registrar_instructions = "Update your domain registrar's nameservers to: ns-1234.awsdns-26.org, ns-567.awsdns-08.net, ns-890.awsdns-44.co.uk, ns-12.awsdns-01.com"
urls = {
"baremetal" = "http://baremetal.team-healthpulse.com"
"dev" = "http://dev.team-healthpulse.com"
"k8s" = "http://k8s.team-healthpulse.com"
"prod" = "http://prod.team-healthpulse.com"
"root" = "http://team-healthpulse.com"
"uat" = "http://uat.team-healthpulse.com"
}
zone_id = "Z1234567890ABC"
Save those nameservers — you need them in the next step.
Skip this step if you registered the domain via Route 53 (Option A in Step 1). Route 53 automatically uses its own nameservers when it's both registrar and DNS host.
This is the critical step that connects your domain to Route 53. Your domain registrar (GoDaddy, Namecheap, etc.) currently points to its own DNS servers. You need to tell it: "Route 53 is now responsible for my domain's DNS."
BEFORE (domain registered at GoDaddy, DNS at GoDaddy):
User → "team-healthpulse.com?" → GoDaddy DNS → ??? (no records configured)
AFTER (domain registered at GoDaddy, DNS at Route 53):
User → "team-healthpulse.com?" → Route 53 DNS → 54.123.45.67 (your k3s master)
- Log into GoDaddy
- Go to My Products → your domain → DNS → Nameservers
- Click Change → Enter my own nameservers (advanced)
- Replace all existing nameservers with the 4 from Terraform output:
ns-1234.awsdns-26.org
ns-567.awsdns-08.net
ns-890.awsdns-44.co.uk
ns-12.awsdns-01.com
- Click Save
- Log into Namecheap
- Go to Domain List → your domain → Manage
- Under Nameservers → select Custom DNS
- Enter the 4 Route 53 nameservers
- Click the green checkmark to save
- Go to Google Domains
- Select your domain → DNS
- Switch to Custom name servers
- Add the 4 Route 53 nameservers
- Click Save
Propagation time: Nameserver changes can take up to 48 hours to propagate globally, but usually complete within 15-30 minutes. Be patient.
After updating nameservers, verify they're pointing to Route 53:
# Check which nameservers are authoritative for your domain
nslookup -type=NS team-healthpulse.com
# Or with dig (Linux/Mac)
dig NS team-healthpulse.com +short

Expected: You should see the 4 Route 53 nameservers from your Terraform output.
If you still see the old registrar's nameservers, wait a few more minutes and try again.
# Check the root domain
nslookup team-healthpulse.com
# Check subdomains
nslookup baremetal.team-healthpulse.com
nslookup dev.team-healthpulse.com
nslookup k8s.team-healthpulse.com
nslookup prod.team-healthpulse.com

Expected output for root domain:
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: team-healthpulse.com
Address: 54.123.45.67 ← Your k3s master EIP
Expected output for baremetal subdomain:
Name: baremetal.team-healthpulse.com
Address: 107.21.5.223 ← Your bare-metal EIP
# Bare-metal server (Task G deployment)
curl -I http://baremetal.team-healthpulse.com
curl http://dev.team-healthpulse.com/health
# → {"status":"healthy","deploy":"baremetal"}
# k3s master (if app is deployed via NodePort)
curl -I http://team-healthpulse.com

Note: The bare-metal subdomains (baremetal, dev) should work immediately — Nginx is already listening on port 80 for any hostname. The k3s subdomains need Traefik Ingress configured first (Step 8).
| Problem | Cause | Fix |
|---|---|---|
| nslookup returns old/wrong IP | DNS cache | Wait for TTL (5 min). Flush local cache: ipconfig /flushdns (Windows) |
| nslookup says "can't find domain" | Nameservers not propagated yet | Wait 15-30 minutes. Check registrar NS settings. |
| Browser shows "This site can't be reached" | DNS works but server isn't listening | Check: is Nginx running? Is the security group allowing port 80? |
| SERVFAIL | Route 53 zone has no SOA/NS records | Terraform should create these. Run terraform apply again. |
Your k3s cluster comes with Traefik as a built-in ingress controller. Traefik listens on ports 80/443 on the master node and routes traffic to the correct Kubernetes service based on the hostname in the HTTP request.
Without Ingress, you'd access k3s services via NodePort (e.g., http://54.123.45.67:31234). With Ingress:
WITHOUT Ingress:
http://54.123.45.67:31234 ← ugly, hard to remember, port required
WITH Ingress:
http://prod.team-healthpulse.com ← clean, standard port 80
Browser request: "GET / HTTP/1.1 Host: prod.team-healthpulse.com"
│
▼
DNS resolves prod.team-healthpulse.com → 54.123.45.67 (k3s master EIP)
│
▼
Traefik (running on k3s, listening :80) receives the request
│
├─ Host = prod.team-healthpulse.com? → route to healthpulse-service in healthpulse-prod
├─ Host = uat.team-healthpulse.com? → route to healthpulse-service in healthpulse-uat
└─ Host = k8s.team-healthpulse.com? → route to healthpulse-service in healthpulse-dev
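Host-based routing itself is simple enough to demonstrate locally. Below is a minimal Python sketch standing in for Traefik: a tiny HTTP server that picks a backend from the Host header, queried the same way `curl -H "Host: ..."` works. The hostnames are this guide's examples and the backends are placeholder strings, not real Kubernetes Services:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

# Hostname -> backend, mimicking the Ingress rules (backends are placeholder strings)
ROUTES = {
    "prod.team-healthpulse.com": "healthpulse-service (prod namespace)",
    "uat.team-healthpulse.com": "healthpulse-service (uat namespace)",
    "k8s.team-healthpulse.com": "healthpulse-service (dev namespace)",
}

class HostRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]  # strip any :port suffix
        backend = ROUTES.get(host)
        body = (f"routed to {backend}" if backend else "no rule for this host").encode()
        self.send_response(200 if backend else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), HostRouter)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same trick as `curl -H "Host: ..."`: hit the IP directly, but claim a hostname
req = urllib.request.Request(f"http://127.0.0.1:{server.server_port}/",
                             headers={"Host": "prod.team-healthpulse.com"})
resp = urllib.request.urlopen(req).read().decode()
print(resp)  # routed to healthpulse-service (prod namespace)
server.shutdown()
```

Same IP, different Host header, different backend — that is all Traefik's rule matching adds on top of a reverse proxy.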
The provided kubernetes/ingress.yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: healthpulse-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
rules:
- host: REPLACE_WITH_YOUR_SUBDOMAIN # ← change this per namespace
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: healthpulse-service # ← must match your Service name
port:
number: 80

You need to apply the Ingress in each namespace with the correct hostname. Replace the REPLACE_WITH_YOUR_SUBDOMAIN placeholder (e.g., with sed) to produce one manifest per namespace, then apply them:
# Set your kubeconfig
export KUBECONFIG=~/.kube/healthpulse-config
# ─── Apply per-namespace Ingress manifests ───
kubectl apply -f kubernetes/ingress-dev.yml
kubectl apply -f kubernetes/ingress-uat.yml
kubectl apply -f kubernetes/ingress-prod.yml

On Windows without sed? SSH into the master and run these commands there, or manually edit the YAML for each namespace.
# Check all ingress resources across namespaces
kubectl get ingress -A
# Expected:
# NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
# healthpulse-prod healthpulse-ingress traefik prod.team-healthpulse.com 80 80 1m
# healthpulse-uat healthpulse-ingress traefik uat.team-healthpulse.com 80 80 1m
# healthpulse-dev healthpulse-ingress traefik k8s.team-healthpulse.com 80 80 1m
# Check Traefik is running (k3s installs it in kube-system)
kubectl get pods -n kube-system | grep traefik
# traefik-xxxxx-xxxxx   1/1   Running   0   2d

# Test production
curl -H "Host: prod.team-healthpulse.com" http://<K3S_MASTER_IP>/health
# → {"status":"healthy"}
# Test UAT
curl -H "Host: uat.team-healthpulse.com" http://<K3S_MASTER_IP>/health
# → {"status":"healthy"}
# Once DNS is propagated, use the actual domain:
curl http://prod.team-healthpulse.com/health
curl http://uat.team-healthpulse.com/health

The -H "Host: ..." trick: This lets you test Ingress routing before DNS propagates. You're hitting the IP directly but telling the server "pretend I came from this hostname."
Run through the full checklist:
# These go to the bare-metal Nginx server
curl http://baremetal.team-healthpulse.com/health
# → {"status":"healthy","deploy":"baremetal"}
curl http://dev.team-healthpulse.com/health
# → {"status":"healthy","deploy":"baremetal"}
# Open in browser — should see the HealthPulse app
# http://baremetal.team-healthpulse.com
# http://dev.team-healthpulse.com

# These go to k3s → Traefik → correct namespace
curl http://prod.team-healthpulse.com/health
curl http://uat.team-healthpulse.com/health
curl http://k8s.team-healthpulse.com/health
# Open in browser
# http://prod.team-healthpulse.com

curl http://team-healthpulse.com/health
# → Should resolve to k3s master IP

Note: The root domain points to the k3s master IP, but unless you have a Traefik Ingress rule matching team-healthpulse.com (without a subdomain prefix), Traefik may return a 404. If you want the root domain to work, add another Ingress rule with host: team-healthpulse.com in your production namespace.
- Go to AWS Console → Route 53 → Hosted zones
- Click your domain
- You should see:
| Record Name | Type | Value |
|---|---|---|
| team-healthpulse.com | NS | (4 nameservers — auto-created) |
| team-healthpulse.com | SOA | (auto-created) |
| team-healthpulse.com | A | 54.123.45.67 |
| baremetal.team-healthpulse.com | A | 107.21.5.223 |
| dev.team-healthpulse.com | A | 107.21.5.223 |
| k8s.team-healthpulse.com | A | 54.123.45.67 |
| uat.team-healthpulse.com | A | 54.123.45.67 |
| prod.team-healthpulse.com | A | 54.123.45.67 |
| *.team-healthpulse.com | A | 54.123.45.67 |
Add DNS information to your MkDocs documentation site.
Add a URL column to your environment matrix:
## Environment Matrix
| Environment | Deploy Type | IP Address | URL | Deployment Method |
|-------------|-----------|------------|-----|-------------------|
| Dev (bare-metal) | Bare-metal | 107.21.5.223 | dev.team-healthpulse.com | SCP + MANUAL |
| Dev (k3s) | Kubernetes | 54.123.45.67 | k8s.team-healthpulse.com | kubectl apply |
| UAT | Kubernetes | 54.123.45.67 | uat.team-healthpulse.com | kubectl apply |
| Production | Kubernetes | 54.123.45.67 | prod.team-healthpulse.com | kubectl apply |

Add an Architecture Decision Record:
## ADR-003: DNS Strategy
**Status:** Accepted
**Context:** The application is deployed across a bare-metal server and a k3s cluster.
Users need friendly URLs instead of IP addresses.
**Decision:** Use AWS Route 53 as the DNS provider with:
- Subdomains per environment (dev, uat, prod)
- Bare-metal deployments on the `baremetal` and `dev` subdomains
- Kubernetes deployments on `k8s`, `uat`, and `prod` subdomains
- Wildcard record for future flexibility
- Traefik Ingress on k3s for hostname-based routing
**Consequences:**
- Route 53 costs $0.50/month per hosted zone + $0.40 per million queries
- Nameserver changes required at the domain registrar
- TTL of 300s means DNS changes propagate within 5 minutes

When you're done with the capstone:
# Destroy DNS records and hosted zone
cd terraform/dns
terraform destroy -var-file=dev.tfvars
# IMPORTANT: After destroying the hosted zone, update your registrar's
# nameservers back to the default (or the domain will stop resolving entirely)

Order matters: Destroy DNS before destroying EC2 instances. If you destroy EC2 first, the DNS records will point to dead IPs (not harmful, just broken).
# Step 1: Check if Route 53 has your records
aws route53 list-resource-record-sets \
--hosted-zone-id $(cd terraform/dns && terraform output -raw zone_id) \
--query "ResourceRecordSets[?Type=='A']"
# Step 2: Check nameserver delegation
nslookup -type=NS team-healthpulse.com 8.8.8.8
# Step 3: Query Route 53 directly (bypass cache)
nslookup team-healthpulse.com ns-1234.awsdns-26.org
# Step 4: Flush local DNS cache
# Windows:
ipconfig /flushdns
# Mac:
sudo dscacheutil -flushcache
# Linux:
sudo systemd-resolve --flush-caches   # or: resolvectl flush-caches on newer systemd

# Check Traefik logs
kubectl logs -n kube-system -l app.kubernetes.io/name=traefik --tail=50
# Check Ingress is correct
kubectl describe ingress healthpulse-ingress -n healthpulse-prod
# Common causes:
# 1. host: doesn't match the domain in the request
# 2. Service name in Ingress doesn't match actual Service name
# 3. Service is in a different namespace
# 4. Pods aren't running (kubectl get pods -n healthpulse-prod)

# Verify Route 53 permissions
aws route53 list-hosted-zones
# If "AccessDenied" → check IAM policy is attached to your group
# Verify your identity
aws sts get-caller-identity --profile healthpulse

| Concept | What You Learned |
|---|---|
| DNS resolution chain | Browser → resolver → root → TLD → authoritative (Route 53) → IP |
| Hosted zone | Container for DNS records. Route 53 assigns 4 nameservers per zone. |
| A record | Maps domain name directly to an IP address |
| Wildcard record | *.domain.com catches any undefined subdomain |
| TTL | How long resolvers cache the answer. Lower = faster propagation, more queries. |
| Nameserver delegation | Telling your registrar "Route 53 answers queries for my domain" |
| Ingress controller | Reverse proxy inside Kubernetes that routes by hostname (Traefik in k3s) |
| Host-based routing | Same IP, different hostname → different backend service |
| Resource | Monthly Cost |
|---|---|
| Route 53 hosted zone | $0.50 |
| DNS queries | $0.40 per million queries (negligible for a capstone) |
| Domain registration | $3–$39/year depending on extension |
| Total | ~$0.50/month + domain registration |
Remember: terraform destroy removes the hosted zone and records. The domain registration (if bought via Route 53) is non-refundable and persists even after the hosted zone is destroyed.