WorkerPod Operator

A Kubernetes operator that manages WorkerPod custom resources with automatic scaling and fault tolerance. Built with Kubebuilder, this operator ensures a specified number of worker pods are always running and automatically handles pod failures.

Overview

The WorkerPod operator implements a custom Kubernetes controller that:

  • Manages WorkerPod custom resources
  • Automatically scales pods up/down based on spec.replicas
  • Provides fault tolerance by recreating failed pods
  • Updates status with current replica counts (see the API type sketch below)
  • Includes comprehensive monitoring and logging
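
The spec and status fields used throughout this README are defined in api/v1/workerpod_types.go. The following is a minimal sketch of what those types look like; the exact field names and JSON tags are inferred from the sample manifest and status queries below, not copied from the repository.

// api/v1/workerpod_types.go (sketch; field names inferred from this README)
package v1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WorkerPodSpec defines the desired state of WorkerPod.
type WorkerPodSpec struct {
    // Replicas is the number of worker pods to keep running.
    Replicas int32 `json:"replicas"`
    // Image is the container image each worker pod runs.
    Image string `json:"image"`
}

// WorkerPodStatus defines the observed state of WorkerPod.
type WorkerPodStatus struct {
    // AvailableReplicas is the number of worker pods currently running.
    AvailableReplicas int32 `json:"availableReplicas"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// WorkerPod is the Schema for the workerpods API.
type WorkerPod struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   WorkerPodSpec   `json:"spec,omitempty"`
    Status WorkerPodStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// WorkerPodList contains a list of WorkerPod.
type WorkerPodList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []WorkerPod `json:"items"`
}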

Prerequisites

  • Go: Version 1.21+ for development
  • Kubebuilder: For CRD generation and scaffolding
  • Kubernetes cluster: Minikube, Kind, or remote cluster (v1.11.3+)
  • kubectl: CLI tool (v1.11.3+)
  • Docker: Version 17.03+ for building images

Project Structure

workerpod-operator/
├── api/v1/                             # CRD definitions
│   └── workerpod_types.go              # WorkerPod type definitions
├── internal/controller/                # Controller logic
│   ├── workerpod_controller.go         # Reconciliation logic
│   ├── workerpod_controller_test.go    # Integration tests
│   └── suite_test.go                   # Test suite setup
├── config/                             # Kubernetes manifests
│   ├── crd/bases/                      # Generated CRD YAML
│   ├── samples/                        # Sample WorkerPod manifests
│   ├── rbac/                           # RBAC permissions
│   └── manager/                        # Manager deployment
├── main.go                             # Entry point
├── Dockerfile                          # Container image
└── Makefile                            # Build/deploy targets

Setup

1. Install CRDs

Install the WorkerPod Custom Resource Definition:

make install

2. Run the Operator

Option A: Run locally (development)

make run

Option B: Deploy to cluster

# Build and push image
make docker-build docker-push IMG=<your-registry>/workerpod-operator:tag

# Deploy to cluster
make deploy IMG=<your-registry>/workerpod-operator:tag

3. Apply Sample Manifests

Create a WorkerPod instance:

kubectl apply -f config/samples/orchestration_v1_workerpod.yaml

Sample WorkerPod manifest:

apiVersion: orchestration.example.com/v1
kind: WorkerPod
metadata:
  name: example-workerpod
spec:
  replicas: 3
  image: busybox

How to View Status and Logs

Check WorkerPod Status

# View WorkerPod resources
kubectl get workerpods

# Detailed status
kubectl describe workerpod example-workerpod

# Check status field
kubectl get workerpod example-workerpod -o yaml

View Managed Pods

# List pods with worker label
kubectl get pods -l app=worker

# Check specific WorkerPod's pods
kubectl get pods -l workerpod=example-workerpod
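
Both selectors assume the controller stamps every pod it creates with an app=worker label plus a workerpod=<owner name> label, and sets an owner reference back to the WorkerPod. Below is a sketch of how such a pod could be built inside the controller; the helper name, container name, and module path are illustrative rather than taken from the repository.

package controller

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

    orchestrationv1 "github.com/leenpaws/workerpod-operator/api/v1" // module path is an assumption
)

// newWorkerPod builds one managed pod for a WorkerPod resource. The labels
// here are what the kubectl selectors in this section rely on.
func newWorkerPod(wp *orchestrationv1.WorkerPod, scheme *runtime.Scheme) (*corev1.Pod, error) {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            GenerateName: wp.Name + "-",
            Namespace:    wp.Namespace,
            Labels: map[string]string{
                "app":       "worker", // kubectl get pods -l app=worker
                "workerpod": wp.Name,  // kubectl get pods -l workerpod=<name>
            },
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "worker",
                Image: wp.Spec.Image,
            }},
        },
    }
    // The owner reference ties each pod's lifecycle to its WorkerPod and lets
    // the controller watch the pods it created.
    if err := controllerutil.SetControllerReference(wp, pod, scheme); err != nil {
        return nil, err
    }
    return pod, nil
}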

Operator Logs

# If running locally
# Logs appear in terminal where you ran 'make run'

# If deployed to cluster
kubectl logs -n workerpod-operator-system deployment/workerpod-operator-controller-manager

# Follow logs
kubectl logs -f -n workerpod-operator-system deployment/workerpod-operator-controller-manager

How to Run Tests and Exercise Sample Worker Scaling

Run Tests

# Run unit and integration tests
make test

# Run tests with coverage
go test -v -cover ./...

# Run specific test
go test -v ./internal/controller -run TestWorkerPodController
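
In the standard Kubebuilder setup, the integration tests in internal/controller run against an envtest control plane wired up by suite_test.go. The sketch below shows what a scaling test could look like; it assumes the scaffolded k8sClient variable, a Ginkgo/Gomega suite that starts the controller manager, and the module path github.com/leenpaws/workerpod-operator, none of which are confirmed by this README.

package controller

import (
    "context"
    "time"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"

    orchestrationv1 "github.com/leenpaws/workerpod-operator/api/v1" // module path is an assumption
)

// Sketch of an envtest-style scaling test; k8sClient is assumed to come from
// the generated suite_test.go, with the reconciler running against envtest.
var _ = Describe("WorkerPod controller", func() {
    It("creates the requested number of worker pods", func() {
        ctx := context.Background()
        wp := &orchestrationv1.WorkerPod{
            ObjectMeta: metav1.ObjectMeta{Name: "scale-test", Namespace: "default"},
            Spec:       orchestrationv1.WorkerPodSpec{Replicas: 3, Image: "busybox"},
        }
        Expect(k8sClient.Create(ctx, wp)).To(Succeed())

        // The reconciler should converge on three labelled pods.
        Eventually(func(g Gomega) {
            var pods corev1.PodList
            g.Expect(k8sClient.List(ctx, &pods,
                client.InNamespace("default"),
                client.MatchingLabels{"workerpod": "scale-test"})).To(Succeed())
            g.Expect(pods.Items).To(HaveLen(3))
        }, time.Minute, time.Second).Should(Succeed())
    })
})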

Test Sample Worker Scaling

  1. Create initial WorkerPod:

    kubectl apply -f config/samples/orchestration_v1_workerpod.yaml
  2. Verify pods are created:

    kubectl get pods -l app=worker
    kubectl get workerpod example-workerpod
  3. Test scaling up:

    kubectl patch workerpod example-workerpod --type='merge' -p='{"spec":{"replicas":5}}'
    kubectl get pods -l workerpod=example-workerpod
  4. Test scaling down:

    kubectl patch workerpod example-workerpod --type='merge' -p='{"spec":{"replicas":2}}'
    kubectl get pods -l workerpod=example-workerpod
  5. Test fault tolerance (see the watch setup sketch after this list):

    # Force-delete the worker pods to simulate failures
    kubectl delete pod -l workerpod=example-workerpod --force --grace-period=0
    
    # Watch automatic recreation
    kubectl get pods -l workerpod=example-workerpod -w
  6. Monitor status updates:

    kubectl get workerpod example-workerpod -o jsonpath='{.status.availableReplicas}'
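
Automatic recreation in step 5 works because the controller watches the pods it owns: whenever an owned Pod changes or disappears, the parent WorkerPod is re-queued for reconciliation. A minimal sketch of that wiring, assuming the Kubebuilder-scaffolded reconciler struct and the assumed module path from the earlier snippets:

package controller

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"

    orchestrationv1 "github.com/leenpaws/workerpod-operator/api/v1" // module path is an assumption
)

// WorkerPodReconciler mirrors the struct Kubebuilder scaffolds by default.
type WorkerPodReconciler struct {
    client.Client
    Scheme *runtime.Scheme
}

// SetupWithManager registers the controller. Owns(&corev1.Pod{}) means that
// deleting a worker pod re-triggers reconciliation of its parent WorkerPod,
// which is what recreates the pod automatically.
func (r *WorkerPodReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&orchestrationv1.WorkerPod{}).
        Owns(&corev1.Pod{}).
        Complete(r)
}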

Development

Local Development

# Install dependencies
go mod tidy

# Generate code
make generate

# Run locally
make run

Build and Deploy

# Build image
make docker-build IMG=<your-registry>/workerpod-operator:tag

# Push image
make docker-push IMG=<your-registry>/workerpod-operator:tag

# Deploy
make deploy IMG=<your-registry>/workerpod-operator:tag

Cleanup

# Delete WorkerPod instances
kubectl delete -f config/samples/

# Uninstall CRDs
make uninstall

# Remove operator deployment
make undeploy

Reference Documentation

Features

  • Automatic Scaling: Maintains desired replica count
  • Fault Tolerance: Recreates failed pods automatically
  • Status Reporting: Real-time status updates
  • Smart Deletion: Prioritizes failed pods for removal when scaling down (see the reconcile sketch below)
  • Prometheus Metrics: Built-in observability
  • Comprehensive Tests: Unit and integration test coverage
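
The scaling, fault-tolerance, and status-reporting behaviour above comes together in the controller's Reconcile method. The following is a simplified sketch of that loop, reusing the assumptions from the earlier snippets (label scheme, newWorkerPod helper, WorkerPodReconciler struct); the repository's actual implementation may differ in error handling and requeue behaviour.

package controller

import (
    "context"
    "sort"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"

    orchestrationv1 "github.com/leenpaws/workerpod-operator/api/v1" // module path is an assumption
)

// Reconcile drives the set of owned pods toward spec.replicas and reports the
// result in status.availableReplicas. WorkerPodReconciler and newWorkerPod are
// the type and helper sketched earlier in this README.
func (r *WorkerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var wp orchestrationv1.WorkerPod
    if err := r.Get(ctx, req.NamespacedName, &wp); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Find the pods this WorkerPod currently owns via its label.
    var pods corev1.PodList
    if err := r.List(ctx, &pods, client.InNamespace(wp.Namespace),
        client.MatchingLabels{"workerpod": wp.Name}); err != nil {
        return ctrl.Result{}, err
    }

    desired := int(wp.Spec.Replicas)

    // Scale up: create whatever is missing.
    for i := len(pods.Items); i < desired; i++ {
        pod, err := newWorkerPod(&wp, r.Scheme)
        if err != nil {
            return ctrl.Result{}, err
        }
        if err := r.Create(ctx, pod); err != nil {
            return ctrl.Result{}, err
        }
    }

    // Scale down: delete the surplus, preferring failed pods ("smart deletion").
    if surplus := len(pods.Items) - desired; surplus > 0 {
        sort.SliceStable(pods.Items, func(i, j int) bool {
            return pods.Items[i].Status.Phase == corev1.PodFailed &&
                pods.Items[j].Status.Phase != corev1.PodFailed
        })
        for i := 0; i < surplus; i++ {
            if err := r.Delete(ctx, &pods.Items[i]); err != nil && !apierrors.IsNotFound(err) {
                return ctrl.Result{}, err
            }
        }
    }

    // Report how many owned pods are actually running.
    available := int32(0)
    for _, pod := range pods.Items {
        if pod.Status.Phase == corev1.PodRunning {
            available++
        }
    }
    wp.Status.AvailableReplicas = available
    if err := r.Status().Update(ctx, &wp); err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}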

License

Copyright 2025.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

