In this task, you will deploy the HealthPulse Portal the traditional way — static files served directly by Nginx on an EC2 instance. This is intentionally done before container deployment (Task H) so you understand what problems containers solve.
What you'll do:
- Generate an SSH key pair for server access
- Provision an EC2 instance with Terraform (Nginx pre-installed)
- Build the application locally
- Deploy manually via SSH (feel the pain)
- Deploy automatically via Ansible
- Test rollback via Ansible
- Integrate with your CI/CD pipeline
- Document the pain points (sets up the "why containers?" lesson)
Time estimate: This is a Week 5 task.
Before starting this task, ensure you have completed:
- App is built and producing dist/ artifacts
- Terraform CLI installed (terraform --version → v1.5+)
- AWS CLI configured with the healthpulse profile (aws sts get-caller-identity --profile healthpulse works)
- AWS credentials set up: https://www.devopstreams.com/2026/03/aws-credentials-setup-best-practices.html
Note: Terraform creates the VPC, subnet, internet gateway, and route table automatically. You do NOT need a pre-existing VPC.
You need an SSH key to access the EC2 instance.
# Generate a new key pair (if you don't already have one)
ssh-keygen -t ed25519 -f ~/.ssh/healthpulse-key -N "" -C "healthpulse-capstone"
# Verify the files were created
ls -la ~/.ssh/healthpulse-key*
# → healthpulse-key (private key — NEVER share this)
# → healthpulse-key.pub (public key — this goes to AWS)

Why ED25519 and not RSA? AWS ImportKeyPair has a 2048-byte limit on public key material. RSA-4096 keys exceed this limit. ED25519 keys are shorter (68 chars), faster, and more secure than RSA.
Windows users (Git Bash): The same command works. Your key will be at C:\Users\<you>\.ssh\healthpulse-key.
Security note: The private key (healthpulse-key) stays on your machine. The public key (healthpulse-key.pub) is uploaded to the EC2 instance by Terraform.
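Before moving on, it's worth checking the key's fingerprint and locking down its permissions: ssh silently ignores private keys that other users can read. A small guarded sketch (the path matches the ssh-keygen step above):

```shell
# Sanity-check the key pair generated above (path assumed from the earlier step).
KEY="${KEY:-$HOME/.ssh/healthpulse-key}"
if [ -f "$KEY" ]; then
  chmod 600 "$KEY"            # private key must be owner-only or ssh rejects it
  ssh-keygen -lf "$KEY.pub"   # prints bit length, fingerprint, and key type
else
  echo "no key at $KEY - run the ssh-keygen step first"
fi
```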
Before running anything, understand the infrastructure:
┌─────────────────────────────────────────────────┐
│ AWS VPC │
│ ┌───────────────────────────────────────────┐ │
│ │ Public Subnet │ │
│ │ ┌─────────────────────────────────────┐ │ │
│ │ │ EC2 Instance (t2.micro) │ │ │
│ │ │ ┌───────────────────────────────┐ │ │ │
│ │ │ │ Ubuntu 22.04 │ │ │ │
│ │ │ │ ┌─────────────────────────┐ │ │ │ │
│ │ │ │ │ Nginx Server │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ /var/www/healthpulse │ │ │ │ │
│ │ │ │ │ ├── index.html │ │ │ │ │
│ │ │ │ │ ├── assets/ │ │ │ │ │
│ │ │ │ │ └── ... │ │ │ │ │
│ │ │ │ └─────────────────────────┘ │ │ │ │
│ │ │ └───────────────────────────────┘ │ │ │
│ │ └─────────────────────────────────────┘ │ │
│ └───────────────────────────────────────────┘ │
│ ↑ Elastic IP: x.x.x.x │
└─────────────────────────────────────────────────┘
Security Group:
✅ Port 22 (SSH) — restricted to your IP
✅ Port 80 (HTTP) — open to all
✅ Port 443 (HTTPS) — open to all
Open terraform/baremetal/main.tf and read through it. Pay attention to:
- The user_data script — this is the bootstrap script that runs when the EC2 instance first boots
- It installs Nginx, creates /var/www/healthpulse, and configures the Nginx virtual host
- The /health endpoint is hardcoded in the Nginx config — it returns {"status":"healthy","deploy":"baremetal"}
- SPA fallback (try_files $uri $uri/ /index.html) ensures React Router works
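The bootstrap flow above can be sketched as follows. This is a hypothetical stand-in, not the actual script from terraform/baremetal/main.tf: paths default to /tmp so it can run locally, and a real bootstrap would also run apt-get update && apt-get install -y nginx.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a user_data bootstrap like the one described above.
set -euo pipefail

DEPLOY_PATH="${DEPLOY_PATH:-/tmp/healthpulse-bootstrap-demo/var/www/healthpulse}"
SITE_CONF="${SITE_CONF:-/tmp/healthpulse-bootstrap-demo/etc/nginx/sites-available/healthpulse}"

mkdir -p "$DEPLOY_PATH" "$(dirname "$SITE_CONF")"

# Placeholder page so the server answers before the first real deploy
echo '<h1>HealthPulse - awaiting first deploy</h1>' > "$DEPLOY_PATH/index.html"

# Virtual host: SPA fallback plus the hardcoded /health endpoint
cat > "$SITE_CONF" <<'NGINX'
server {
    listen 80 default_server;
    root /var/www/healthpulse;
    index index.html;

    location / {
        # SPA fallback: unknown paths serve index.html for React Router
        try_files $uri $uri/ /index.html;
    }

    location /health {
        default_type application/json;
        return 200 '{"status":"healthy","deploy":"baremetal"}';
    }
}
NGINX

echo "bootstrap sketch complete: $SITE_CONF"
```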
cd terraform/baremetal
# Initialize Terraform (downloads the AWS provider)
terraform init

You should see:
Terraform has been successfully initialized!
Now run a plan to preview what will be created:
terraform plan \
-var-file=dev.tfvars \
-var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)" \
-var="ssh_allowed_cidr=$(curl -s ifconfig.me)/32"

What's happening here:
- -var-file=dev.tfvars loads environment-specific values (instance size, team name, VPC CIDR)
- ssh_public_key reads your public key file and passes it to AWS
- ssh_allowed_cidr restricts SSH access to your current IP address (security best practice)
- Terraform creates the entire network (VPC, subnet, gateway, routes) automatically
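To avoid retyping the -var flags on every plan, apply, and destroy, you can export them once as environment variables: Terraform reads any variable named TF_VAR_<name>. A small convenience sketch (the key path is from the earlier step):

```shell
# Terraform picks up environment variables named TF_VAR_<variable_name>,
# so these replace the repeated -var flags for plan, apply, and destroy.
export TF_VAR_ssh_public_key="$(cat ~/.ssh/healthpulse-key.pub 2>/dev/null)"
export TF_VAR_ssh_allowed_cidr="$(curl -s ifconfig.me 2>/dev/null)/32"

# The commands then shrink to:
#   terraform plan  -var-file=dev.tfvars
#   terraform apply -var-file=dev.tfvars
echo "allowed CIDR: $TF_VAR_ssh_allowed_cidr"
```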
Review the plan output. You should see 9 resources to be created:
- aws_vpc.baremetal — your isolated network
- aws_internet_gateway.baremetal — allows internet access
- aws_subnet.public — where the EC2 instance lives
- aws_route_table.public + aws_route_table_association.public — route traffic to the internet
- aws_key_pair.deployer — your SSH key uploaded to AWS
- aws_security_group.web — firewall rules (SSH, HTTP, HTTPS)
- aws_instance.web — the EC2 server with Nginx
- aws_eip.web — static public IP
terraform apply \
-var-file=dev.tfvars \
-var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)" \
-var="ssh_allowed_cidr=$(curl -s ifconfig.me)/32"

Type yes when prompted. Wait 2–3 minutes for the instance to launch and bootstrap.
terraform output

You'll see:
instance_id = "i-0abc123def456..."
public_ip = "54.210.XX.XX"
app_url = "http://54.210.XX.XX"
ssh_command = "ssh -i ~/.ssh/healthpulse-key ubuntu@54.210.XX.XX"
deploy_path = "/var/www/healthpulse"
Save these values! You'll need the IP address for the rest of this task.
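Rather than copy-pasting the IP into later commands, you can pull outputs straight into shell variables with terraform output -raw. The fallback placeholder below is only so the sketch degrades gracefully when run outside the terraform directory:

```shell
# Capture Terraform outputs for reuse in later commands.
# `terraform output -raw <name>` prints just the value, with no quotes.
ELASTIC_IP="$(terraform output -raw public_ip 2>/dev/null || echo '<ELASTIC_IP>')"
APP_URL="http://$ELASTIC_IP"
echo "SSH with: ssh -i ~/.ssh/healthpulse-key ubuntu@$ELASTIC_IP"
echo "App URL:  $APP_URL"
```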
# Test SSH access (wait 1-2 minutes after apply for bootstrap to finish)
ssh -i ~/.ssh/healthpulse-key ubuntu@<ELASTIC_IP>
# Once connected, verify Nginx is running
sudo systemctl status nginx
# → Active: active (running)
# Check the Nginx config
sudo nginx -t
# → syntax is ok / test is successful
# Check the deploy directory exists
ls -la /var/www/healthpulse/
# → index.html (placeholder from bootstrap)
# Test the health endpoint
curl http://localhost/health
# → {"status":"healthy","deploy":"baremetal"}
# Visit in your browser
# → http://<ELASTIC_IP> should show the placeholder page
# Exit SSH
exit

Checkpoint: At this point you have a running EC2 instance with Nginx but no application deployed. The health endpoint works because it's hardcoded in the Nginx config.
Build the app on your local machine:
# Go to the project root
cd /path/to/healthpulse-capstone
# Install dependencies
pnpm install
# Build the application
pnpm build
# Verify the dist/ directory was created
ls dist/
# → index.html assets/ ...
# Check the size
du -sh dist/
# → approximately 1-3 MB

This step is intentionally manual. You are doing what the fictional HealthPulse team does today: SCP files to a server and restart Nginx. The goal is to feel why this is painful so Task H (containers) makes sense.
# From your local machine, copy the entire dist/ directory to the server
scp -i ~/.ssh/healthpulse-key -r dist/* ubuntu@<ELASTIC_IP>:/var/www/healthpulse/

What just happened:
- scp = secure copy (copies files over SSH)
- -r = recursive (copies directories and their contents)
- dist/* = all files in your build output
- ubuntu@<IP>:/var/www/healthpulse/ = destination on the remote server

Pain point #1: You're manually copying files. What if you forget a file? What if your local build is different from someone else's?
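Pain point #1 has a subtle second half: copying dist/* over the old release never deletes files the previous build produced. A local simulation (hypothetical paths, standing in for the server) shows the stale-file problem:

```shell
# Simulate what `scp -r dist/*` does to a deploy dir that already has a release in it.
DEMO=/tmp/healthpulse-stale-demo
rm -rf "$DEMO"
mkdir -p "$DEMO/dist" "$DEMO/deploy"

echo v1 > "$DEMO/deploy/old-chunk.js"   # leftover file from the previous release
echo v2 > "$DEMO/dist/index.html"       # the new build output

cp -r "$DEMO"/dist/* "$DEMO/deploy/"    # what the scp effectively does

ls "$DEMO/deploy"
# old-chunk.js is still there: the server now mixes two releases
```

Syncing with deletion (e.g. rsync --delete) or deploying each release to a fresh directory avoids this; that kind of discipline is exactly what Ansible and containers automate for you.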
# SSH into the server
ssh -i ~/.ssh/healthpulse-key ubuntu@<ELASTIC_IP>
# Verify the files were copied
ls -la /var/www/healthpulse/
# → You should see index.html, assets/, etc.
# Test Nginx config (should still be valid)
sudo nginx -t
# Reload Nginx to pick up the new files
sudo systemctl reload nginx
# Test the health endpoint
curl http://localhost/health
# → {"status":"healthy","deploy":"baremetal"}

Pain point #2: You had to SSH into the server and run commands manually. What if you forget to reload? What if the config test fails?
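One small improvement you can make even at this stage (our suggestion, not a required step) is collapsing the interactive SSH session into a single guarded command:

```shell
# Everything after the scp can run as one non-interactive SSH command.
# The && chain stops before the reload if the config test fails.
REMOTE_RELOAD='sudo nginx -t && sudo systemctl reload nginx && curl -sf http://localhost/health'
echo ssh -i ~/.ssh/healthpulse-key "ubuntu@<ELASTIC_IP>" "$REMOTE_RELOAD"
# Drop the leading `echo` (and substitute your IP) to actually run it.
```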
Open your browser and navigate to:
http://<ELASTIC_IP>
You should see the HealthPulse Portal login page!
Navigate around the app:
- Try /dashboard, /appointments, /lab-results
- Verify that page refresh works on any route (this proves SPA fallback is working)
- Check the /health endpoint in your browser
Pain point #3: How do you know this is the right version? There's no version tag, no deploy log, no audit trail.
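There is a cheap mitigation worth noting (our suggestion, not part of the capstone repo): stamp each build with the commit it came from, so the deployed version is at least identifiable.

```shell
# Write a version manifest into the build output before deploying.
# dist/ is assumed to be the output directory from `pnpm build`.
mkdir -p dist
GIT_SHA="$(git rev-parse --short HEAD 2>/dev/null || echo unknown)"
BUILT_AT="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
printf '{"version":"%s","built":"%s"}\n' "$GIT_SHA" "$BUILT_AT" > dist/version.json
cat dist/version.json
```

After the next scp, curl http://<ELASTIC_IP>/version.json tells you exactly which commit is live.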
Make any small change to the app (e.g., change a title in src/pages/Dashboard.tsx), rebuild, and redeploy:
# On your local machine
pnpm build
# Copy again
scp -i ~/.ssh/healthpulse-key -r dist/* ubuntu@<ELASTIC_IP>:/var/www/healthpulse/
# SSH in and reload
ssh -i ~/.ssh/healthpulse-key ubuntu@<ELASTIC_IP>
sudo systemctl reload nginx
exit

Pain point #4: You just deployed a new version. How do you roll back to the previous one? The old files are gone — you overwrote them. There's no backup, no versioning, no way to undo.
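For contrast, here is the minimal scaffolding a rollback would have needed: a snapshot before every overwrite. Paths are local stand-ins for illustration; on the server this would wrap the scp step.

```shell
# Hypothetical rollback scaffolding the manual process lacks.
DEPLOY=/tmp/healthpulse-rollback-demo/deploy
BACKUPS=/tmp/healthpulse-rollback-demo/backups
rm -rf /tmp/healthpulse-rollback-demo
mkdir -p "$DEPLOY" "$BACKUPS"
echo "old version" > "$DEPLOY/index.html"

STAMP="$(date +%Y%m%d%H%M%S)"
tar -czf "$BACKUPS/release-$STAMP.tar.gz" -C "$DEPLOY" .   # snapshot before deploy
echo "new version" > "$DEPLOY/index.html"                  # the (bad) deploy

# Roll back: restore the snapshot over the broken release
tar -xzf "$BACKUPS/release-$STAMP.tar.gz" -C "$DEPLOY"
cat "$DEPLOY/index.html"
# → old version
```

Ansible playbooks typically build this versioned-releases pattern in, which is why the task has you test rollback through Ansible rather than by hand.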
At this point, write down in your notes:
- How many steps did the manual deploy take?
- How many SSH sessions did you open?
- How long did the whole process take?
- What could go wrong at each step?
- How would you do this for 10 servers? 50 servers?
This is the most important part of this task. After completing the deployment, create a page in your MkDocs wiki (Task A) that answers these questions.
Add a new page docs/baremetal-deployment.md to your MkDocs site:
# Bare-Metal Deployment — Lessons Learned
## Deployment Process
Describe the steps you took to deploy the application to a bare-metal Nginx server.
## Pain Points
### 5. Reproducibility
**Question:** How long does it take to set up a brand new server from scratch?
**Your Answer:** _(describe the time cost of server provisioning and setup)_
## Time
| Step | Manual Deploy | Ansible Deploy |
|------|---------------|----------------|
| Build | ___ min | ___ min |
| Transfer files | ___ min | ___ sec |
| Restart server | ___ min | ___ sec |
| Verify | ___ min | ___ sec |
| **Total** | **___ min** | **___ sec** |

When you're done with this task and ready to move on:
# Destroy the bare-metal infrastructure to avoid AWS charges
cd terraform/baremetal
terraform destroy \
-var-file=dev.tfvars \
-var="ssh_public_key=$(cat ~/.ssh/healthpulse-key.pub)"

Don't destroy yet if you're about to start Task H! You'll want the bare-metal server running alongside the container deployment to compare them side-by-side.
Before marking this task as complete, verify:
- EC2 instance provisioned via Terraform with Nginx running
- Application accessible at http://<ELASTIC_IP>
- Health check returns 200 at /health
- Pain points documented in MkDocs wiki
- SSH into the server and explain what Nginx is serving and from where
Be prepared to:
- SSH into the server live and show the file structure at /var/www/healthpulse
- Explain the Nginx config — what is try_files? Why is it needed for a React SPA?
- Deploy a new version using the Ansible playbook while the instructor watches
- Articulate the pain points — what would happen if you had 50 servers instead of 1?
# Check security group allows port 22 from your IP
aws ec2 describe-security-groups --group-ids <SG_ID> \
--query "SecurityGroups[0].IpPermissions[?FromPort==\`22\`]"
# Your IP may have changed — update ssh_allowed_cidr and re-apply Terraform

# SSH in and check Nginx config
ssh -i ~/.ssh/healthpulse-key ubuntu@<IP>
sudo nginx -t
sudo cat /etc/nginx/sites-enabled/healthpulse
# Check if files exist in deploy path
ls -la /var/www/healthpulse/
# Check Nginx error logs
sudo tail -20 /var/log/nginx/error.log

# Test locally on the server
ssh -i ~/.ssh/healthpulse-key ubuntu@<IP>
curl -v http://localhost/health
# If 404, the Nginx config might not have the /health location block
# Check: sudo cat /etc/nginx/sites-available/healthpulse

Ansible "Permission denied" error
# Check AWS credentials
aws sts get-caller-identity
# Check the AMI ID exists in your region
aws ec2 describe-images --image-ids <AMI_ID>
# If AMI not found, find the current Ubuntu 22.04 AMI for your region:
aws ec2 describe-images \
--owners 099720109477 \
--filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
--query "Images | sort_by(@, &CreationDate) | [-1].ImageId" --output text

| Concept | What It Means |
|---|---|
| user_data | A bash script that runs once when an EC2 instance first launches. Used to bootstrap software. |
| Elastic IP | A static public IP address that persists even if you stop/start the instance. |
| SPA fallback | try_files $uri $uri/ /index.html — if Nginx can't find the requested file, it serves index.html and lets React Router handle the URL. Without this, refreshing /dashboard gives a 404. |
| Idempotent | Running the same operation multiple times produces the same result. |
| Configuration drift | When servers that should be identical gradually become different due to manual changes. This is the #1 problem bare-metal deployment creates. |
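"Idempotent" is easiest to see with a two-line demo, and it is the property Ansible modules are built around:

```shell
# mkdir -p is idempotent: any number of runs converges on the same end state.
rm -rf /tmp/idem-demo
mkdir -p /tmp/idem-demo/site    # run 1: creates the directory
mkdir -p /tmp/idem-demo/site    # run 2: no error, nothing to do
# Plain mkdir is not idempotent: the second run is an error.
mkdir /tmp/idem-demo/site 2>/dev/null || echo "plain mkdir fails once the dir exists"
```

Manual SSH deploys are full of non-idempotent steps, which is one source of the configuration drift described above.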