How I Built This

A technical deep-dive into the infrastructure and CI/CD pipeline powering this portfolio

Architecture Overview

The flow, end to end: a developer runs git push, which kicks off the GitHub Actions CI/CD pipeline (Lint & Audit → Build Image → Push to Hub → Trivy Scan → Update Helm). The pipeline pushes the image to Docker Hub and updates values.yaml in the eks-helm-charts repository. ArgoCD, the GitOps engine, watches that repository, pulls the image, and deploys the Kubernetes workloads on a single AWS EC2 t3.small running k3s: cert-manager (Let's Encrypt), ingress-nginx (ServiceLB), and the fuhriman-website itself. An iptables hairpin NAT fix routes pod CIDR traffic into kube-proxy's chains, and the site is served at https://fuhriman.org.

Cost-Optimized Design: A single t3.small EC2 instance (2GB RAM, $17/mo) runs k3s (lightweight Kubernetes) instead of managed EKS. No NAT Gateway or multiple nodes needed, reducing costs from ~$80/mo to ~$22/mo while maintaining GitOps best practices.

Technology Stack

  • Frontend: Next.js (React framework for the website)
  • Container: Docker (optimized production builds, AMD64)
  • Orchestration: k3s (lightweight Kubernetes on EC2)
  • GitOps: ArgoCD (continuous deployment from Git)
  • IaC: Terraform (Infrastructure as Code for AWS)
  • CI/CD: GitHub Actions (build and push automation)
  • TLS: cert-manager (Let's Encrypt certificates)
  • Networking: ingress-nginx (traffic routing and TLS termination)

Infrastructure as Code

The entire AWS infrastructure is defined in Terraform, organized into reusable modules:

terraform/
├── tf-modules/
│   ├── aws-vpc/           # VPC, public subnet, Internet Gateway
│   └── aws-k3s/           # EC2 instance, k3s, ArgoCD (cloud-init)
├── main.tf                # Module composition
├── providers.tf           # AWS provider configuration
├── backend.tf             # S3 state with DynamoDB locking
├── budget.tf              # AWS budget alert ($25/mo)
└── variables.tf           # Configuration variables

VPC Module

Creates a simple VPC (10.0.0.0/16) with a single public subnet in one availability zone. No NAT Gateway needed since everything runs in the public subnet, significantly reducing costs.

k3s Module

Provisions a single t3.small EC2 instance (Amazon Linux 2023, 2GB RAM) and installs k3s via cloud-init. ArgoCD and the app-of-apps pattern are deployed via Helm charts during bootstrap, with all output logged to /var/log/k3s-init.log.
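
To make the bootstrap concrete, the cloud-init user data follows roughly this shape. This is a minimal sketch rather than the real user_data.sh: the location of the app-of-apps parent chart is an assumption, while the k3s installer, the Argo Helm repository, and the log file match the description above.

cloud-init user data (sketch)
#cloud-config
runcmd:
  # Install single-node k3s and capture output for debugging
  - curl -sfL https://get.k3s.io | sh - >> /var/log/k3s-init.log 2>&1
  # Install Helm, then ArgoCD from the official Argo Helm repository
  - curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash >> /var/log/k3s-init.log 2>&1
  - export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
  - helm repo add argo https://argoproj.github.io/argo-helm >> /var/log/k3s-init.log 2>&1
  - helm install argocd argo/argo-cd --namespace argocd --create-namespace >> /var/log/k3s-init.log 2>&1
  # Bootstrap the app-of-apps parent chart (path is hypothetical)
  - helm install app-of-apps /opt/bootstrap/app-of-apps --namespace argocd >> /var/log/k3s-init.log 2>&1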

Hairpin NAT Fix

Installs iptables rules that jump pod CIDR traffic destined for the public IP directly into kube-proxy's KUBE-EXT chains. This solves the AWS hairpin NAT problem for cert-manager HTTP-01 validation at the network layer.

GitOps with ArgoCD

ArgoCD implements the GitOps pattern where Git is the single source of truth for the desired cluster state.

1. App of Apps Pattern

A parent Application manages child Applications, enabling hierarchical deployment of the entire stack.

2. Sync Waves

Applications deploy in order: cert-manager (-2) → ingress-nginx (-1) → website (0), ensuring dependencies are ready.

3. Auto-Sync & Self-Heal

ArgoCD automatically applies Git changes and reverts any manual cluster modifications to maintain the desired state. The annotated manifest sketch below illustrates all three mechanisms.
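
A child Application under the parent looks roughly like the sketch below. The manifest schema, the sync-wave annotation, and the automated sync policy are standard ArgoCD; the application name, target revision, and destination namespace are assumptions for illustration.

ArgoCD child Application (sketch)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fuhriman-website              # hypothetical child app created by the parent
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "0" # cert-manager: -2, ingress-nginx: -1, website: 0
spec:
  project: default
  source:
    repoURL: https://github.com/furryman/eks-helm-charts.git
    targetRevision: main              # assumed branch
    path: fuhriman-chart
  destination:
    server: https://kubernetes.default.svc
    namespace: default                # assumed namespace
  syncPolicy:
    automated:
      prune: true                     # remove resources deleted from Git
      selfHeal: true                  # revert manual cluster changes back to Git state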

CI/CD Pipeline

Every push to the main branch triggers a fully automated build and deployment pipeline:

1. Lint & Audit

ESLint checks code quality and npm audit --audit-level=critical scans dependencies for known vulnerabilities before anything builds.

2. Build & Push

Multi-stage Docker build via Buildx creates an optimized AMD64 image, pushed to Docker Hub with a timestamp tag (ga-YYYY.MM.DD-HHMM) and latest.

3. Scan

Trivy scans the pushed image for CRITICAL and HIGH CVEs. The pipeline fails if unfixed vulnerabilities are found, preventing insecure images from deploying.

4. Update

The Helm chart's values.yaml is updated with the new image tag and committed to eks-helm-charts, triggering ArgoCD to sync (the fields the pipeline rewrites are sketched after the workflow below).

.github/workflows/build-deploy.yaml
name: Build and Deploy
on:
  push:
    branches: [main]

permissions:
  contents: read           # Least-privilege security

jobs:
  lint:                     # Gate: code quality + dependency audit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd7190...  # Pinned SHA
      - uses: actions/setup-node@49933ea...
      - run: npm ci
      - run: npm audit --audit-level=critical
      - run: npm run lint

  build-and-deploy:
    needs: lint             # Only runs if lint passes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd7190...
      - id: tag               # id lets later steps reference steps.tag.outputs.tag
        run: echo "tag=ga-$(date +'%Y.%m.%d-%H%M')" >> $GITHUB_OUTPUT

      - uses: docker/login-action@74a5d142...
      - uses: docker/setup-buildx-action@b5ca5143...

      - uses: docker/build-push-action@26343531...
        with:
          push: true
          tags: |
            furryman/fuhriman-website:${{ steps.tag.outputs.tag }}
            furryman/fuhriman-website:latest

      # Trivy CVE scan — fails on CRITICAL/HIGH
      - uses: aquasecurity/trivy-action@6c175e9c...
        with:
          image-ref: furryman/fuhriman-website:${{ steps.tag.outputs.tag }}
          severity: CRITICAL,HIGH
          exit-code: 1

      # Update Helm chart to trigger ArgoCD
      - uses: actions/checkout@11bd7190...
        with:
          repository: furryman/eks-helm-charts
          token: ${{ secrets.GH_PAT }}
      - run: yq -i '.image.tag = "..."' fuhriman-chart/values.yaml
      - run: git commit -am "Update image" && git push
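
For reference, the fields that the Update step rewrites and that ArgoCD then syncs look roughly like this; only image.tag is confirmed by the workflow above, and the other field names are assumptions about the chart's values.yaml.

fuhriman-chart/values.yaml (sketch)
image:
  repository: furryman/fuhriman-website   # assumed field name
  tag: ga-YYYY.MM.DD-HHMM                 # placeholder; CI writes the real timestamp tag via yq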

Kubernetes Resources

The website runs as a Deployment with associated Service and Ingress resources; trimmed manifest sketches follow the Deployment and Ingress lists below:

Deployment

  • 1 replica (sufficient for single-node cluster)
  • Resource limits: 100m CPU, 128Mi memory
  • Liveness and readiness probes on port 3000
  • Rolling update strategy with health checks
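
A trimmed manifest matching those settings might look like this; the resource names, labels, and probe paths are assumptions, while the replica count, limits, and port come from the list above.

Deployment (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fuhriman-website
spec:
  replicas: 1                          # single-node cluster, one replica
  selector:
    matchLabels:
      app: fuhriman-website
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: fuhriman-website
    spec:
      containers:
        - name: website
          image: furryman/fuhriman-website:ga-YYYY.MM.DD-HHMM  # tag managed by the Helm chart
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: /                  # assumed probe path
              port: 3000
          readinessProbe:
            httpGet:
              path: /                  # assumed probe path
              port: 3000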

Service

  • ClusterIP type for internal access
  • Port 80 → target port 3000
  • Label selector for pod discovery

Ingress

  • NGINX ingress class with LoadBalancer (k3s ServiceLB)
  • TLS termination with Let's Encrypt
  • Hosts: fuhriman.org, www.fuhriman.org
  • Automatic certificate renewal via cert-manager
  • SSL redirect disabled for ACME HTTP-01 challenges
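
The Ingress ties the pieces above together roughly as follows; the cert-manager annotation and tls block drive the automatic certificates, while the issuer name, secret name, and Service name are assumptions.

Ingress (sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fuhriman-website
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod    # assumed ClusterIssuer name
    nginx.ingress.kubernetes.io/ssl-redirect: "false"   # keep HTTP open for ACME HTTP-01
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - fuhriman.org
        - www.fuhriman.org
      secretName: fuhriman-org-tls                      # assumed certificate Secret
  rules:
    - host: fuhriman.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fuhriman-website                  # the ClusterIP Service above
                port:
                  number: 80
    - host: www.fuhriman.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fuhriman-website
                port:
                  number: 80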

Repository Structure

The project is organized across four repositories, following separation of concerns.

Certificate Management & Hairpin NAT Solution

One of the interesting technical challenges was getting Let's Encrypt certificates to work on a single-node cluster behind a public IP.

The Hairpin NAT Problem

When cert-manager validates HTTP-01 challenges, it connects to the public IP from inside the cluster. AWS VPC doesn't support hairpin NAT — the VPC router won't loop packets back to the same host — so these connections fail even though external validation works fine.
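
That in-cluster connection comes from cert-manager's own pre-check of the challenge URL before it asks Let's Encrypt to validate. The issuer side is a standard HTTP-01 solver; a minimal sketch, with the issuer name and contact email as placeholders, looks like this.

ClusterIssuer (sketch)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod                 # assumed name, matching the Ingress annotation
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@fuhriman.org            # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx      # challenge response served through ingress-nginx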

iptables Network-Layer Fix

During cloud-init on Amazon Linux 2023, the script waits for ArgoCD to deploy ingress-nginx, then waits for kube-proxy to create its LoadBalancer iptables rules (fixing a race condition where chain discovery would fail). It then discovers the KUBE-EXT chain names and adds rules that jump pod CIDR (10.42.0.0/16) traffic destined for the public IP directly into those chains — piggy-backing on kube-proxy's existing DNAT-to-pod routing with no application-level workarounds.

Why Not Simple DNAT?

iptables DNAT is a terminating target — once it fires, the packet exits the chain. A native DNAT to the private IP would bypass kube-proxy's service routing rules entirely. By jumping into kube-proxy's own chains instead, the packet follows the same path as external traffic.

iptables Hairpin NAT Rules (from user_data.sh)
# Wait for ArgoCD to deploy ingress-nginx
until kubectl get svc -n ingress-nginx ingress-nginx-controller &>/dev/null; do
  sleep 5
done

# Race condition fix: wait for kube-proxy to create LoadBalancer rules
# (lags behind service creation)
until iptables -t nat -L KUBE-SERVICES -n 2>/dev/null \
  | grep -q "ingress-nginx-controller:http loadbalancer"; do
  sleep 2
done

# Discover kube-proxy's KUBE-EXT chain names
HTTP_CHAIN=$(iptables -t nat -L KUBE-SERVICES -n \
  | grep "ingress-nginx-controller:http loadbalancer" | awk '{print $1}')
HTTPS_CHAIN=$(iptables -t nat -L KUBE-SERVICES -n \
  | grep "ingress-nginx-controller:https loadbalancer" | awk '{print $1}')

# Jump pod traffic to the public IP into kube-proxy's chains
iptables -t nat -A PREROUTING -s 10.42.0.0/16 -d $PUBLIC_IP \
  -p tcp --dport 80 -j $HTTP_CHAIN
iptables -t nat -A PREROUTING -s 10.42.0.0/16 -d $PUBLIC_IP \
  -p tcp --dport 443 -j $HTTPS_CHAIN

Key DevOps Principles

Infrastructure as Code

All infrastructure is version-controlled in Terraform, enabling reproducible deployments and peer review of changes.

GitOps

Git is the single source of truth. All changes flow through pull requests, providing audit trails and rollback capabilities.

Immutable Infrastructure

Each deployment creates a new container image with a unique tag. No in-place modifications to running containers.

Declarative Configuration

Desired state is declared in YAML manifests. Kubernetes and ArgoCD continuously reconcile actual state to match.

Cost Optimization

Using k3s on a single t3.small instance instead of managed EKS reduces monthly costs from ~$80 to ~$22 while maintaining production-grade GitOps practices.

Automation First

Certificate renewal, hairpin NAT configuration, and application deployment are fully automated. Zero manual intervention required after initial setup.