This project demonstrates the deployment of a full-stack e-commerce application (YOLO) on Google Kubernetes Engine (GKE) using modern DevOps practices. The application features a React frontend, Node.js/Express backend, and MongoDB database with persistent storage.
🚀 Application URL: http://34.67.157.94
Status: ✅ Running on GKE
Cluster: yolo-cluster (us-central1-a)
Deployment Date: November 7, 2025
Jacob Taraya
- GitHub: @jtaraya
- DockerHub: jtaraya
- Repository: https://github.com/jtaraya/yolo.git
- Email: jacobtaraya@gmail.com
| Layer | Technology | Version | Port | Replicas |
|---|---|---|---|---|
| Frontend | React + Nginx | 18 / 1.29.3 | 80 | 2 |
| Backend | Node.js/Express | 16+ | 5000 | 2 |
| Database | MongoDB | 5.0 | 27017 | 1 |
                 INTERNET
                     │
                     ▼
            ┌─────────────────┐
            │  Load Balancer  │
            │  34.67.157.94   │
            └────────┬────────┘
                     │ Port 80
                     ▼
         ┌──────────────────────────┐
         │     Frontend Service     │
         │      (LoadBalancer)      │
         └────────────┬─────────────┘
                      │
         ┌────────────┴─────────────┐
         │                          │
         ▼                          ▼
    ┌─────────┐               ┌─────────┐
    │Frontend │               │Frontend │
    │  Pod 1  │               │  Pod 2  │
    │ (Nginx) │               │ (Nginx) │
    └────┬────┘               └────┬────┘
         │                         │
         └────────────┬────────────┘
                      │ http://backend-service:5000
                      ▼
         ┌──────────────────────────┐
         │      Backend Service     │
         │  (ClusterIP - Internal)  │
         └────────────┬─────────────┘
                      │
         ┌────────────┴─────────────┐
         │                          │
         ▼                          ▼
    ┌─────────┐               ┌─────────┐
    │ Backend │               │ Backend │
    │  Pod 1  │               │  Pod 2  │
    │(Node.js)│               │(Node.js)│
    └────┬────┘               └────┬────┘
         │                         │
         └────────────┬────────────┘
                      │ mongodb://mongodb-service:27017
                      ▼
         ┌──────────────────────────┐
         │      MongoDB Service     │
         │  (Headless - ClusterIP)  │
         └────────────┬─────────────┘
                      │
                      ▼
               ┌───────────┐
               │  MongoDB  │
               │    Pod    │
               │(StatefulSet)
               └─────┬─────┘
                     │
                     ▼
              ┌──────────────┐
              │  Persistent  │
              │    Volume    │
              │    (5Gi)     │
              └──────────────┘
| Service | Image | Tag | DockerHub |
|---|---|---|---|
| Backend | jtaraya/yolo-backend | v1.0.0 | View |
| Frontend | jtaraya/yolo-frontend | v1.0.0 | View |
| Database | mongo | 5.0 | Official |
Format: `jtaraya/<app-name>:<version>`

# Backend
docker build -t jtaraya/yolo-backend:v1.0.0 .
docker push jtaraya/yolo-backend:v1.0.0

# Frontend
docker build -t jtaraya/yolo-frontend:v1.0.0 .
docker push jtaraya/yolo-frontend:v1.0.0
Benefits:
- Clear ownership identification
- Semantic versioning for tracking
- Easy rollbacks to previous versions
- Professional Docker Hub organization
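The naming format can be sketched as a tiny helper script. The `OWNER`, `APP`, and `VERSION` variable names here are illustrative, not part of the project:

```shell
#!/bin/sh
# Compose an image reference following the jtaraya/<app-name>:<version> format.
# OWNER, APP, and VERSION are example values; substitute your own.
OWNER="jtaraya"
APP="yolo-backend"
VERSION="v1.0.0"
IMAGE="${OWNER}/${APP}:${VERSION}"
echo "$IMAGE"
```

Keeping the tag in one variable like this helps the same reference stay consistent across `docker build`, `docker push`, and the Kubernetes manifests.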
Ensure you have the following installed:
- ✅ Google Cloud SDK (`gcloud`) - Install
- ✅ kubectl - Install
- ✅ Docker (optional, for building images) - Install
- ✅ Git - Install
- ✅ GCP account with billing enabled
gcloud --version
kubectl version --client
docker --version
git --version
Successfully updated Docker images
Successfully created yolo-cluster with 3 e2-medium nodes in us-central1-a
Applied MongoDB StatefulSet, Backend Deployment, and Frontend Deployment
All 5 pods showing 1/1 Ready status
NAME READY STATUS RESTARTS AGE
backend-deployment-5f9b5946f5-hz6qs 1/1 Running 0 17m
backend-deployment-5f9b5946f5-kn2g6 1/1 Running 0 17m
frontend-deployment-67965d4479-8ntg4 1/1 Running 0 57s
frontend-deployment-67965d4479-wpkkc 1/1 Running 0 50s
mongodb-0 1/1 Running 0 7h4m
Frontend service with LoadBalancer type and external IP assigned
NAME TYPE EXTERNAL-IP PORT(S)
frontend-service LoadBalancer 34.67.157.94 80:30141/TCP
backend-service ClusterIP 34.118.238.220 5000/TCP
mongodb-service ClusterIP None 27017/TCP
MongoDB StatefulSet with 5Gi PersistentVolumeClaim in Bound status
YOLO e-commerce homepage accessible at http://34.67.157.94
Successfully added products to the shopping cart
Cart items persist after MongoDB pod deletion - proving persistent storage works!
GKE workloads visible in Google Cloud Console
Backend logs showing successful MongoDB connection
Attempting to connect to MongoDB with URI: mongodb://admin:password@mongodb-service:27017/yolomy?authSource=admin
Server listening on port 5000
Database connected successfully
git clone https://github.com/jtaraya/yolo.git
cd yolo

# Login to Google Cloud
gcloud auth login
# Set your project ID
gcloud config set project YOUR_PROJECT_ID
# Set default zone
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1

# Enable Compute Engine API
gcloud services enable compute.googleapis.com
# Enable Kubernetes Engine API
gcloud services enable container.googleapis.com
# Verify APIs are enabled
gcloud services list --enabled | grep -E 'container|compute'

# Create cluster with 3 nodes
gcloud container clusters create yolo-cluster \
--num-nodes=3 \
--zone=us-central1-a \
--machine-type=e2-medium \
--disk-size=20 \
--enable-autoupgrade \
--enable-autorepair
# This takes approximately 5-10 minutes

Expected Output:
Created [https://container.googleapis.com/.../yolo-cluster].
kubeconfig entry generated for yolo-cluster.
NAME LOCATION MASTER_VERSION MACHINE_TYPE NUM_NODES STATUS
yolo-cluster us-central1-a 1.33.5-gke... e2-medium 3 RUNNING
gcloud container clusters get-credentials yolo-cluster \
--zone=us-central1-a
# Verify connection
kubectl get nodes

Expected Output:
NAME STATUS ROLES AGE VERSION
gke-yolo-cluster-default-pool-xxxxx-xxxx Ready <none> 5m v1.33.5-gke...
gke-yolo-cluster-default-pool-xxxxx-yyyy Ready <none> 5m v1.33.5-gke...
gke-yolo-cluster-default-pool-xxxxx-zzzz Ready <none> 5m v1.33.5-gke...
# Deploy MongoDB with persistent storage
kubectl apply -f manifests/mongodb-statefulset.yaml
# Wait for MongoDB to be ready
kubectl wait --for=condition=ready pod -l app=mongodb --timeout=300s
# Verify deployment
kubectl get statefulset
kubectl get pvc

Expected Output:
NAME READY AGE
mongodb 1/1 2m
NAME STATUS VOLUME CAPACITY STORAGECLASS
mongodb-data-mongodb-0 Bound pvc-... 5Gi standard
# Deploy backend
kubectl apply -f manifests/backend-deployment.yaml
# Wait for backend to be ready
kubectl wait --for=condition=ready pod -l app=backend --timeout=300s
# Check logs
kubectl logs -l app=backend --tail=10

Expected Output:
Attempting to connect to MongoDB with URI: mongodb://admin:password@mongodb-service:27017/yolomy?authSource=admin
Server listening on port 5000
Database connected successfully
# Deploy frontend
kubectl apply -f manifests/frontend-deployment.yaml
# Wait for frontend to be ready
kubectl wait --for=condition=ready pod -l app=frontend --timeout=300s

# Get services
kubectl get svc
# Wait for EXTERNAL-IP (may take 2-3 minutes)
kubectl get svc frontend-service -w

Press Ctrl+C when you see the external IP.
Expected Output:
NAME TYPE EXTERNAL-IP PORT(S)
frontend-service LoadBalancer 34.67.157.94 80:30141/TCP
# Get the URL
export FRONTEND_URL=$(kubectl get svc frontend-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "🚀 Application URL: http://$FRONTEND_URL"
# Open in browser
xdg-open http://$FRONTEND_URL

Your application is now live at: http://34.67.157.94
# View all resources
kubectl get all
# Check pods (should all be 1/1 Ready)
kubectl get pods
# Check services
kubectl get svc
# Check persistent volumes
kubectl get pvc

Expected Output:
NAME READY STATUS RESTARTS AGE
backend-deployment-xxxxx 1/1 Running 0 10m
backend-deployment-yyyyy 1/1 Running 0 10m
frontend-deployment-xxxxx 1/1 Running 0 8m
frontend-deployment-yyyyy 1/1 Running 0 8m
mongodb-0 1/1 Running 0 15m
1. Homepage Test:
curl -I http://34.67.157.94
# Should return: HTTP/1.1 200 OK

2. Browser Test:
- Visit: http://34.67.157.94
- ✅ Homepage loads
- ✅ Products display
- ✅ Navigation works

3. Cart Functionality:
- ✅ Add items to cart
- ✅ View cart
- ✅ Items persist
This test verifies that your StatefulSet with PVC is working correctly:
# Step 1: Add items to cart using the browser
# Visit http://34.67.157.94 and add 2-3 products
# Step 2: Verify MongoDB pod is running
kubectl get pods -l app=mongodb
# Step 3: Delete the MongoDB pod
kubectl delete pod mongodb-0
# Step 4: Watch the pod recreate
kubectl get pods -l app=mongodb -w
# Press Ctrl+C when mongodb-0 shows 1/1 Running
# Step 5: Refresh your browser
# ✅ Cart items should still be there!
# Step 6: Check PVC is still bound
kubectl get pvc

Result: ✅ Data persists after pod deletion, proving persistent storage works!
yolo/
├── .gitignore                      # Git ignore rules
├── README.md                       # This file
├── explanation.md                  # Assignment objectives explanation
├── backend/
│   ├── Dockerfile                  # Backend container definition
│   ├── server.js                   # Node.js/Express server
│   ├── package.json                # Node dependencies
│   └── ...
├── client/                         # Frontend application
│   ├── Dockerfile                  # Frontend container definition
│   ├── package.json                # React dependencies
│   ├── src/
│   ├── public/
│   └── ...
├── manifests/                      # Kubernetes YAML files
│   ├── mongodb-statefulset.yaml    # MongoDB StatefulSet with PVC
│   ├── backend-deployment.yaml     # Backend Deployment and Service
│   └── frontend-deployment.yaml    # Frontend Deployment and LoadBalancer
└── screenshots/                    # Deployment evidence
    ├── 01-gke-cluster-creation.png
    ├── 02-kubectl-apply.png
    ├── 03-pods-running.png
    ├── 04-services-external-ip.png
    ├── 05-statefulset-pvc.png
    ├── 06-application-home.png
    ├── 07-add-to-cart.png
    ├── 08-data-persistence-test.png
    ├── 09-gcp-console-workloads.png
    └── 10-pod-logs.png
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: "mongodb-service"
  replicas: 1
  volumeClaimTemplates:
  - metadata:
      name: mongodb-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

Why StatefulSet?
- ✅ Stable, unique network identifiers
- ✅ Ordered deployment and scaling
- ✅ Persistent storage per pod
- ✅ Perfect for databases
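Those stable network identifiers mean pod 0 always gets a predictable DNS name via the headless service. A sketch of how that name is composed (assuming the resources live in the `default` namespace):

```shell
# Compose the cluster-internal FQDN of the first StatefulSet pod.
# Pattern: <statefulset-name>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local
STS_NAME="mongodb"
SVC_NAME="mongodb-service"   # must match spec.serviceName in the StatefulSet
NAMESPACE="default"          # assumption: resources are in the default namespace
ORDINAL=0
FQDN="${STS_NAME}-${ORDINAL}.${SVC_NAME}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"
```

This is why the backend can reliably reach `mongodb-service` even after the pod is rescheduled.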
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate

Why Deployment?
- ✅ Stateless applications
- ✅ Easy horizontal scaling
- ✅ Rolling updates with zero downtime
- ✅ Self-healing capabilities
LoadBalancer (Frontend):
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80

- Exposes application to internet
- Automatic external IP provisioning

ClusterIP (Backend):
spec:
  type: ClusterIP
  ports:
  - port: 5000
    targetPort: 5000

- Internal-only access
- Security best practice

Headless (MongoDB):
spec:
  clusterIP: None

- For StatefulSet DNS
- Direct pod-to-pod communication
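For completeness, the headless MongoDB service might be declared in full roughly as follows. This is a sketch based on the snippets above; the selector label `app: mongodb` is an assumption, not confirmed by the manifests shown:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None        # headless: DNS resolves directly to pod IPs
  selector:
    app: mongodb         # assumed pod label
  ports:
  - port: 27017
    targetPort: 27017
```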
Backend:
env:
- name: MONGO_URI
  value: "mongodb://admin:password@mongodb-service:27017/yolomy?authSource=admin"
- name: PORT
  value: "5000"

Frontend:
env:
- name: REACT_APP_BACKEND_URL
  value: "http://backend-service:5000"

Backend:
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"

Frontend:
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"

MongoDB:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

TCP Probes (Backend & Frontend):
readinessProbe:
  tcpSocket:
    port: 5000   # or 80 for frontend
  initialDelaySeconds: 5
  periodSeconds: 5

Why TCP instead of HTTP?
- ✅ More reliable for our application
- ✅ Doesn't require specific endpoints
- ✅ Checks whether the port is accepting connections
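As a hardening note, the plaintext credentials embedded in `MONGO_URI` above could instead be sourced from a Kubernetes Secret. A sketch of the env entry; the Secret name `mongodb-secret` and its key are assumptions, not part of the current manifests:

```yaml
env:
- name: MONGO_URI
  valueFrom:
    secretKeyRef:
      name: mongodb-secret   # assumed Secret, e.g. created via kubectl create secret generic
      key: mongo-uri         # assumed key holding the full connection string
```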
Symptoms:
kubectl get pods
# backend-deployment-xxxxx 0/1 CrashLoopBackOff

Diagnosis:
kubectl logs backend-deployment-xxxxx
kubectl describe pod backend-deployment-xxxxx

Common Causes:
- Environment variable mismatch
- Can't connect to MongoDB
- Image pull errors
Solutions:
# Check environment variables
kubectl describe pod backend-deployment-xxxxx | grep -A 5 "Environment:"
# Check MongoDB is running
kubectl get pods -l app=mongodb
# Verify image exists
docker pull jtaraya/yolo-backend:v1.0.0

Symptoms:
kubectl get svc frontend-service
# EXTERNAL-IP <pending>

Solutions:
# Wait 2-3 minutes for GCP to provision
# If still pending after 5 minutes:
kubectl describe svc frontend-service
# Check events
kubectl get events --sort-by='.lastTimestamp'
# Delete and recreate
kubectl delete svc frontend-service
kubectl apply -f manifests/frontend-deployment.yaml

Symptoms:
kubectl logs backend-deployment-xxxxx
# Error: connect ECONNREFUSED

Solutions:
# Verify MongoDB is running
kubectl get pods -l app=mongodb
# Check service exists
kubectl get svc mongodb-service
# Test DNS resolution
kubectl exec -it backend-deployment-xxxxx -- nslookup mongodb-service
# Verify environment variable
kubectl exec -it backend-deployment-xxxxx -- env | grep MONGO_URI

Symptoms:
kubectl get pods
# backend-deployment-xxxxx 0/1 Running

Solutions:
# Check readiness probe
kubectl describe pod backend-deployment-xxxxx | grep -A 10 "Readiness:"
# Check logs
kubectl logs backend-deployment-xxxxx
# Check if port is open
kubectl exec -it backend-deployment-xxxxx -- netstat -tuln | grep 5000

Symptoms:
kubectl get pvc
# mongodb-data-mongodb-0 Pending

Solutions:
# Check PVC details
kubectl describe pvc mongodb-data-mongodb-0
# Check storage class
kubectl get storageclass
# Check if volume can be provisioned
gcloud compute disks list

# View all resources
kubectl get all
# Check pod logs
kubectl logs <pod-name>
kubectl logs <pod-name> --previous # Previous container
# Describe resources
kubectl describe pod <pod-name>
kubectl describe svc <service-name>
kubectl describe pvc <pvc-name>
# Execute commands in pod
kubectl exec -it <pod-name> -- /bin/bash
kubectl exec -it <pod-name> -- env
# Check events
kubectl get events --sort-by='.lastTimestamp'
# Monitor resources
kubectl top pods
kubectl top nodes
# Port forwarding for testing
kubectl port-forward svc/backend-service 5000:5000
kubectl port-forward pod/mongodb-0 27017:27017

# Delete all application resources
kubectl delete -f manifests/mongodb-statefulset.yaml
kubectl delete -f manifests/backend-deployment.yaml
kubectl delete -f manifests/frontend-deployment.yaml
# Verify deletion
kubectl get all
kubectl get pvc

Note: PVCs may need manual deletion:
kubectl delete pvc mongodb-data-mongodb-0

# Delete the GKE cluster (THIS REMOVES EVERYTHING)
gcloud container clusters delete yolo-cluster \
--zone=us-central1-a \
--quiet
# Verify deletion
gcloud container clusters list

This will:
- Delete all pods, services, deployments
- Delete PersistentVolumes and data
- Delete load balancers
- Remove the entire cluster
💰 Cost Note: Deleting the cluster stops all billing for GKE resources.
Running this cluster costs approximately:
| Resource | Quantity | Cost/Hour | Daily Cost |
|---|---|---|---|
| e2-medium nodes | 3 | $0.033 each | ~$2.40 |
| Load Balancer | 1 | $0.025 | ~$0.60 |
| Persistent Disk (5Gi) | 1 | ~$0.0002/Gi | ~$0.02 |
| Total | | | ~$3/day |
Monthly estimate: ~$90
💡 Tip: Delete the cluster when not in use to avoid unnecessary charges!
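The daily total in the table can be sanity-checked with a quick calculation; the per-hour prices used here are the approximate figures from the table above:

```shell
# Rough daily-cost check; awk handles the floating-point arithmetic.
NODES=$(awk 'BEGIN { printf "%.2f", 3 * 0.033 * 24 }')  # three e2-medium nodes
LB=$(awk 'BEGIN { printf "%.2f", 0.025 * 24 }')         # one load balancer
TOTAL=$(awk -v n="$NODES" -v l="$LB" 'BEGIN { printf "%.2f", n + l + 0.02 }')
echo "nodes \$$NODES + lb \$$LB + disk \$0.02 = ~\$$TOTAL per day"
```

At roughly $3.00/day, the ~$90/month estimate follows directly.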
gcloud container clusters delete yolo-cluster --zone=us-central1-a

- explanation.md - Detailed explanation of design decisions and implementation choices
- Kubernetes Documentation - Official Kubernetes docs
- GKE Documentation - Google Kubernetes Engine docs
- Docker Documentation - Docker container docs
- Kubernetes Orchestration
  - StatefulSets vs Deployments
  - Service types and networking
  - Persistent storage management
- Cloud Infrastructure
  - GKE cluster management
  - Load balancer provisioning
  - Resource optimization
- DevOps Practices
  - Container orchestration
  - Health monitoring
  - High availability design
- Version Control
  - Descriptive commit messages
  - Documentation best practices
  - Project organization
This is an educational project for Moringa School DevOps Week 8 IP4. While this is primarily for coursework, feedback and suggestions are welcome!
- Fork the repository
- Create a feature branch (`git checkout -b feature/improvement`)
- Commit your changes (`git commit -m 'Add improvement'`)
- Push to the branch (`git push origin feature/improvement`)
- Open a Pull Request
This project was created for educational purposes as part of the DevOps IP4 Orchestration assignment.
- Google Cloud Platform - GKE infrastructure and $300 free credits
- Kubernetes Community - Excellent documentation and best practices
- Docker Community - Container technology and resources
- Technical Mentors - Guidance and support throughout the project
Jacob Taraya
- GitHub: @jtaraya
- Email: jacobtaraya@gmail.com
- DockerHub: jtaraya
Project Repository: https://github.com/jtaraya/yolo
Questions? Open an issue on GitHub or contact via email.
- Live Application: http://34.67.157.94
- GitHub Repository: https://github.com/jtaraya/yolo
- DockerHub - Backend: https://hub.docker.com/r/jtaraya/yolo-backend
- DockerHub - Frontend: https://hub.docker.com/r/jtaraya/yolo-frontend
- GCP Project: amplified-brook-460520-t1
- GKE Cluster: yolo-cluster (us-central1-a)