Merged

17 commits
d8fe81a
Add Copilot instructions and update dependencies
benc-uk Feb 11, 2026
dcb7fbf
Update Node.js version specification in CI workflow
benc-uk Feb 11, 2026
ace22f7
lots of things
benc-uk Feb 14, 2026
db91ca5
Add GitOps & Flux section with Kustomize examples and deployment conf…
benc-uk Feb 14, 2026
46460e2
Enhance workshop content and structure by adding detailed description…
benc-uk Feb 14, 2026
c56988e
Yeah stuff
benc-uk Feb 14, 2026
6be33ea
Update version and description in package.json for clarity and accuracy
benc-uk Feb 14, 2026
b56f74f
Refactor section titles for clarity and consistency in documentation
benc-uk Feb 14, 2026
3f9a8f4
Refactor code structure for improved readability and maintainability
benc-uk Feb 14, 2026
5529458
Update Gateway API installation command for accuracy and clarity
benc-uk Feb 17, 2026
c123f44
Enhance Operations Cheat Sheet with additional commands and improved …
benc-uk Feb 17, 2026
9e98b9e
Add Kind configuration and deployment manifests for local Kubernetes …
benc-uk Feb 17, 2026
d5e4b5a
Add custom sorted collections for section and extra tags
benc-uk Feb 17, 2026
67a1379
Merge branch 'main' into 2026-feb-update
benc-uk Feb 17, 2026
0400219
Fix formatting in postgres-service.yaml by adding a newline at the en…
benc-uk Feb 17, 2026
6344760
Remove commented tab completion command from Azure CLI and Helm verif…
benc-uk Feb 17, 2026
f310b40
Update theme toggle button text and remove sorting collections from E…
benc-uk Feb 17, 2026
75 changes: 75 additions & 0 deletions .github/copilot-instructions.md
@@ -0,0 +1,75 @@
# Copilot Instructions — kube-workshop

## Project Overview

An Eleventy v3 static site generating a hands-on Kubernetes (AKS) workshop. Content is authored in Markdown under `content/`, built to `_site/`, and deployed to GitHub Pages. The workshop walks developers through deploying a multi-tier app (Postgres → API → Frontend) on AKS.

The intent of this project is to provide a comprehensive, step-by-step learning experience for developers new to Kubernetes, with a focus on practical application and real-world scenarios. The content is structured into sections that cover everything from cluster setup to advanced operations, with a mix of explanations, code snippets, and exercises.

## Build & Dev Commands

- `npm start` — dev server with hot reload at `http://localhost:8080`
- `npm run build` — production build to `_site/`
- `npm run lint` — auto-format Markdown with Prettier (`--write`)
- `npm run lint:check` — CI formatting check (runs in GitHub Actions)
- `npm run clean` — remove `_site/`

## Content Authoring

### Frontmatter (required on every section page)

```yaml
---
tags: section # "section" (main flow 00–09), "extra" (bonus 10–12), or "alternative" (e.g. 09a)
index: 4 # Numeric order, matches directory prefix
title: Deploying The Backend
summary: One-line description for the home page listing
layout: default.njk # Always this value
icon: 🚀 # Single emoji shown in sidebar and headings
---
```

### Directory & File Conventions

- Section directories: `content/{NN}-{slug}/index.md` (zero-padded two-digit prefix)
- Supporting files (YAML manifests, `.sql`, `.png`, `.sh`, `.svg`) go alongside `index.md` — Eleventy copies them via passthrough
- YAML manifests use `__ACR_NAME__` as a user-replaceable placeholder
- The home page `content/index.md` has only `title` and `layout` (no tags/index/icon/summary)
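
The `__ACR_NAME__` convention can be rendered with a simple `sed` pass before applying a manifest. This is a minimal
sketch, assuming a made-up file and registry name (in the workshop itself the YAML is usually edited by hand):

```shell
# Sketch only: substitute the __ACR_NAME__ placeholder in a manifest.
# The manifest content and registry name here are illustrative examples.
ACR_NAME="myregistry"
cat > /tmp/frontend-deployment.yaml <<'EOF'
image: __ACR_NAME__.azurecr.io/frontend:latest
EOF
sed "s/__ACR_NAME__/${ACR_NAME}/g" /tmp/frontend-deployment.yaml > /tmp/rendered.yaml
cat /tmp/rendered.yaml # prints: image: myregistry.azurecr.io/frontend:latest
```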

### Content Patterns

- Raw HTML is enabled in Markdown (`html: true` in markdown-it config)
- Use `<details>`/`<summary>` for collapsible solution/cheat blocks containing YAML code
- Use `markdown-it-attrs` syntax (`{.class #id}`) for adding attributes to elements
- External links auto-open in new tabs (custom markdown-it plugin)
- Prefix external doc links with 📚 emoji, e.g. `[📚 Kubernetes Docs: Deployments](...)`
- Use emojis as sub-section visual markers (🔨, 🧪, 🌡️, etc.)
- Prettier config: 120-char width, `proseWrap: "always"` — run `npm run lint` before committing

### Navigation

- `tags: section` pages get automatic prev/next links and appear in sidebar
- `tags: extra` pages appear in a separate sidebar group below a divider
- `tags: alternative` pages are not auto-listed — link to them manually from related sections

## Key Files

- `eleventy.config.js` — Eleventy plugins (syntax highlight, markdown-it-attrs), passthrough copy rules, custom filters (`zeroPad`, `cssmin`), external links plugin
- `content/_includes/default.njk` — Single layout template with sidebar nav, theme toggle, prev/next footer
- `content/_includes/main.css` / `main.js` — Inlined (not linked) into the template via Nunjucks `{% include %}` + `cssmin` filter
- `content/.prettierrc` — Prettier config (`printWidth: 120`, `proseWrap: "always"`)
- `gitops/` — Kustomize manifests used by the GitOps/Flux section (section 11); contains `base/`, `apps/`, `disabled/` directories
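
As a quick illustration of the `zeroPad` filter's job (the real implementation lives in `eleventy.config.js`; this shell
`printf` is just an analogous sketch), an `index` of 4 maps to the `04-` directory prefix:

```shell
# Analogous sketch of the zeroPad filter: pad a section index to two digits so
# it matches the content/{NN}-{slug} directory convention. Names are examples.
index=4
slug="deployment"
printf "content/%02d-%s/index.md\n" "$index" "$slug" # prints: content/04-deployment/index.md
```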

## CI/CD

Single workflow `.github/workflows/ci-build-deploy.yaml`:

1. **lint** job — `npm run lint:check` on all pushes/PRs to `main`
2. **deploy** job — builds site and deploys to GitHub Pages (only on `main` branch)

## Gotchas

- Never edit files in `_site/` — it's a generated output directory
- The `archive/k3s/` directory contains a deprecated K3S workshop path — don't update it
- Collections are sorted by `index` field, not directory name — keep them in sync
- CSS/JS are inlined into the HTML template, not served as separate static files
6 changes: 3 additions & 3 deletions .github/workflows/ci-build-deploy.yaml
@@ -21,12 +21,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6

- name: Set up Node
uses: actions/setup-node@v4
uses: actions/setup-node@v6
with:
node-version: "22"
node-version: "lts/*"

- name: Install dependencies
run: npm ci
Expand Down
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,4 +1,4 @@
Copyright 2025 Ben Coleman
Copyright 2026 Ben Coleman

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

Expand Down
5 changes: 2 additions & 3 deletions content/00-pre-reqs/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: ⚒️
---

# {{ icon }} Workshop Pre Requisites
# {{ icon }} {{ title }}

As this is an entirely hands-on workshop, you will need several things before you can start:

@@ -115,7 +115,6 @@ Double check that everything is installed and working correctly with:

```bash
# Verify Azure CLI and Helm are working
# Try commands with tab completion
az
helm
```
@@ -167,7 +166,7 @@ RES_GROUP="kube-workshop"
REGION="westeurope"
AKS_NAME="__change_me__"
ACR_NAME="__change_me__"
KUBE_VERSION="1.27.1"
KUBE_VERSION="1.33.6"
```

> New versions of Kubernetes are released all the time, and eventually older versions are removed from Azure. Rather
6 changes: 5 additions & 1 deletion content/00-pre-reqs/vars.sh.sample
@@ -2,4 +2,8 @@ RES_GROUP="kube-workshop"
REGION="westeurope"
AKS_NAME="__change_me__"
ACR_NAME="__change_me__" # NOTE: Can not contain underscores or hyphens
KUBE_VERSION="1.33"
KUBE_VERSION="1.33.6"

# Do not edit below this line

echo -e "Using variables: Region=$REGION\nResource Group=$RES_GROUP\nAKS Name=$AKS_NAME\nACR Name=$ACR_NAME\nKubernetes Version=$KUBE_VERSION"
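
The sample above is intended to be copied (e.g. to `vars.sh`), edited, then sourced into the shell so later `az`
commands can read the variables. A minimal sketch, with illustrative values and a temp path:

```shell
# Sketch: write a vars file (stand-in for editing vars.sh.sample), source it,
# and confirm the variables are visible to subsequent commands.
cat > /tmp/vars.sh <<'EOF'
RES_GROUP="kube-workshop"
REGION="westeurope"
KUBE_VERSION="1.33.6"
EOF
. /tmp/vars.sh
echo "Using resource group $RES_GROUP in $REGION (Kubernetes $KUBE_VERSION)"
```
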
8 changes: 6 additions & 2 deletions content/01-cluster/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: 🚀
---

# {{ icon }} Deploying Kubernetes
# {{ icon }} {{ title }}

Deploying AKS and Kubernetes can be extremely complex, with many networking, compute and other aspects to consider.
However for the purposes of this workshop, a default and basic cluster can be deployed very quickly.
@@ -65,7 +65,11 @@ This should take around 5 minutes to complete, and creates a new AKS cluster with:

- Two small B-Series _Nodes_ in a single node pool. _Nodes_ are what your workloads will be running on. This is about as
small and cheap as you can go and still have a cluster that is useful for learning and experimentation.
- Basic 'Kubenet' networking, which creates an Azure network and subnet etc for us.
- It's quite possible the subscription you are using has limits or controls on what VM sizes can be used. If you get an
error about the VM size not being available, try changing to a different size, e.g. `Standard_D4ds_v5`.
- It will use 'Azure CNI Overlay' networking, which creates an Azure network, subnet etc. for us. We don't have to worry
about any of the underlying network configuration, and it will just work with Azure services.
[See docs if you wish to learn more about this topic](https://docs.microsoft.com/azure/aks/operator-best-practices-network)
- Local cluster admin account, with RBAC enabled. This means we don't need to worry about setting up users or assigning
roles etc.
10 changes: 5 additions & 5 deletions content/02-container-registry/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: 📦
---

# {{ icon }} Container Registry & Images
# {{ icon }} {{ title }}

We will deploy & use a private registry to hold the application container images. This is not strictly necessary, as we
could pull the images directly from a public registry; however, using a private registry is a more realistic approach.
@@ -22,8 +22,7 @@ Deploying a new ACR is very simple:

```bash
az acr create --name $ACR_NAME --resource-group $RES_GROUP \
--sku Standard \
--admin-enabled true
--sku Standard
```

> When you pick a name for the resource with `$ACR_NAME`, this has to be **globally unique**, and not contain any
@@ -102,9 +101,10 @@ will need to proceed to the alternative approach below.

## 🔌 Connect AKS to ACR - Alternative Workaround

If you do not have 'Owner' permissions in Azure, you will need to fall back to an alternative approach. This involves
two things:
If you do not have 'Owner' permissions in Azure (on the resource group you are using), you will need to fall back to an
alternative approach. This involves three things:

- Enabling password authentication by running `az acr update --name $ACR_NAME --admin-enabled true`
- Adding a _Secret_ to the cluster containing the credentials to pull images from the ACR.
- Including a reference to this _Secret_ in every _Deployment_ you create, or updating the _ServiceAccount_ used by the
_Pods_ to reference this _Secret_.
2 changes: 1 addition & 1 deletion content/03-the-application/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: ❇️
---

# {{ icon }} Overview Of The Application
# {{ icon }} {{ title }}

This section simply serves as an introduction to the application; there are no tasks to be carried out.

2 changes: 1 addition & 1 deletion content/04-deployment/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: 🚀
---

# {{ icon }} Deploying The Backend
# {{ icon }} {{ title }}

We'll deploy the app piece by piece, and at first we'll deploy & configure things in a sub-optimal way. This is in order
to explore the Kubernetes concepts and show their purpose. Then we'll iterate and improve towards the final
8 changes: 7 additions & 1 deletion content/05-network-basics/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: 🌐
---

# {{ icon }} Basic Networking
# {{ icon }} {{ title }}

Pods are both ephemeral and "mortal"; they should be considered effectively transient. Kubernetes can terminate and
reschedule pods for a whole range of reasons, including rolling updates, hitting resource limits, scaling up & down and
Expand All @@ -17,6 +17,12 @@ directly (e.g. by name or IP address).
Kubernetes solves this with _Services_, which act as a network abstraction over a group of pods, and have their own
independent and more stable life cycle. We can use them to greatly improve what we've deployed.

Networking in Kubernetes is a complex topic, and could be the subject of an entire workshop on its own. For now we will
cover just enough to get our app working, and in the next part we will look at how to expose the frontend to the
internet.

[📚 Kubernetes Docs: Cluster Networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/)

## 🧩 Deploy PostgreSQL Service

Now to put a _Service_ in front of the PostgreSQL pod, if you want to create the service YAML yourself, you can refer to
2 changes: 1 addition & 1 deletion content/06-frontend/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: 💻
---

# {{ icon }} Adding The Frontend
# {{ icon }} {{ title }}

We've ignored the frontend until this point; with the API and DB in place, we are finally ready to deploy it. We need to
use a _Deployment_ and _Service_ just as before (you might be starting to see a pattern!). We can pick up the pace a
1 change: 0 additions & 1 deletion content/07-improvements/api-deployment.yaml
@@ -28,7 +28,6 @@ spec:
cpu: 50m
memory: 50Mi
limits:
cpu: 100m
memory: 128Mi

readinessProbe:
1 change: 0 additions & 1 deletion content/07-improvements/frontend-deployment.yaml
@@ -28,7 +28,6 @@ spec:
cpu: 50m
memory: 50Mi
limits:
cpu: 100m
memory: 128Mi

readinessProbe:
14 changes: 8 additions & 6 deletions content/07-improvements/index.md
@@ -7,7 +7,7 @@ layout: default.njk
icon: ✨
---

# {{ icon }} Path to Production Readiness
# {{ icon }} {{ title }}

We've cut several corners so far in order to simplify things and introduce concepts one at a time; now it is time to
make some improvements. What constitutes best practice is a moving target, and often subjective, but there are some
@@ -24,10 +24,14 @@ can do this two ways:
- **Resource requests**: Used by the Kubernetes scheduler to help assign _Pods_ to a node with sufficient resources.
This is only used when starting & scheduling pods, and not enforced after they start.
- **Resource limits**: _Pods_ will be prevented from using more resources than their assigned limits. These limits are
enforced and can result in a _Pod_ being terminated. It's highly recommended to set limits to prevent one workload
from monopolizing cluster resources and starving other workloads.
enforced and can result in a _Pod_ being terminated.

It's worth reading the offical docs especially on the units & specifiers used for memory and CPU, which can feel a
It's highly recommended to set **memory limits** to prevent one workload from monopolizing cluster resources and
starving other workloads. However, **CPU limits are widely considered harmful** and can cause more problems than they
solve, so we'll only set CPU requests, and not limits. You can do some further reading on this topic if you like, but
we'll skip the reasoning for this in the interest of time.

It's worth reading the official docs especially on the units & specifiers used for memory and CPU, which can feel a
little unintuitive at first.

[📚 Kubernetes Docs: Resource Management](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
@@ -42,7 +46,6 @@ resources:
cpu: 50m
memory: 50Mi
limits:
cpu: 100m
memory: 128Mi
```

@@ -53,7 +56,6 @@ resources:
cpu: 50m
memory: 100Mi
limits:
cpu: 100m
memory: 512Mi
```

1 change: 0 additions & 1 deletion content/07-improvements/postgres-deployment.yaml
@@ -39,7 +39,6 @@ spec:
cpu: 50m
memory: 100Mi
limits:
cpu: 100m
memory: 512Mi

readinessProbe:
27 changes: 14 additions & 13 deletions content/08-more-improvements/index.md
@@ -1,13 +1,13 @@
---
tags: section
index: 8
title: Production Readiness Continued
title: Production Readiness (Cont.)
summary: More recommended practices; ConfigMaps & Volumes
layout: default.njk
icon: 🏆
---

# {{ icon }} Production Readiness Continued
# {{ icon }} {{ title }}

We're not done improving things yet! This section is a continuation of the previous one, where we will further enhance
our deployment by adding a few more important features. Using _ConfigMaps_ and volumes, we'll continue stepping towards
@@ -48,23 +48,24 @@ kubectl create configmap nanomon-sql-init --from-file=nanomon_init.sql
> Like every object in Kubernetes, ConfigMaps can also be created with a YAML manifest, but when working with external
> files/scripts etc, kubectl is your only real option.

There are three mains ways to use a _ConfigMap_ in with a _Pod_: as container command and args, as environment
variables, or as files in a volume. In this section we'll use the volume method.
There are two main ways to use a _ConfigMap_ with a _Pod_: as environment variables, or as (virtual) files in a volume.
In this section we'll use the volume method, which means we need to explain volumes and volume mounts first, before we
can use the _ConfigMap_.

## 💾 Volumes & Volume Mounts

A Volume in Kubernetes is a directory that is accessible to containers in a pod. Volumes are used to persist data, share
data between containers, and manage configuration. When it comes to persisting data and storage in Kubernetes, it's a
stageringly complex & deep topic. However volumes can also be used to easily provide a container with access to
staggeringly complex & deep topic. However volumes can also be used to easily provide a container with access to
configuration files, via a _ConfigMap_.

[📚 Kubernetes Docs: Volumes](https://kubernetes.io/docs/concepts/storage/volumes/)

There's always two parts to using a volume:

1. Define the volume in the _Pod_ spec, and specify the source of the volume.
2. Define a volume mount in the container spec, which references the volume, and specifies the filesystem path inside
the container where the volume should be mounted.
1. Define the **volume** in the _Pod_ spec, and specify the source of the volume; there are many types of sources.
2. Define a **volume mount** in the container spec, which references the volume, and specifies the filesystem path
inside the container where the volume should be mounted.
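
The two halves above can be sketched together in one fragment. The volume name `init-scripts` is an assumed example; the
ConfigMap name and mount path are the ones used in this section:

```shell
# Sketch of the two halves described above, emitted as one YAML fragment.
# "init-scripts" is an assumed volume name; adjust to match your manifest.
cat > /tmp/volume-sketch.yaml <<'EOF'
volumes: # 1. Pod spec: declare the volume and its source (a ConfigMap here)
  - name: init-scripts
    configMap:
      name: nanomon-sql-init
containers:
  - name: postgres
    volumeMounts: # 2. Container spec: mount the volume at a filesystem path
      - name: init-scripts
        mountPath: /docker-entrypoint-initdb.d
        readOnly: true
EOF
cat /tmp/volume-sketch.yaml
```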

Update the Postgres deployment manifest to include the volume and volume mount, as follows:

@@ -86,12 +87,12 @@ volumeMounts:
readOnly: true
```

Hey, what's this `/docker-entrypoint-initdb.d` path? Is this some Kubernetes thing? No, this is a special directory in
the official Postgres image. Any `*.sql` or `*.sh` files found in this directory when the container starts will be
automatically executed by the Postgres entrypoint script. This is a really useful feature of the official Postgres
image, and is why we don't need to create our own custom Postgres image.
Hey, what's this `/docker-entrypoint-initdb.d` path? Is this some Kubernetes thing? No, this is a special directory
(yes, it looks like a file, but it is actually a directory) used by the official Postgres image. Any `*.sql` or `*.sh`
files found in this directory when the container starts will be automatically executed when the container is
initialized. This is a really useful feature of the official Postgres image, and is why we don't need to create our own
custom Postgres image.

The last thing to do is to update the Postgres container spec to use the official Postgres image, hosted publically on
The last thing to do is to update the Postgres container spec to use the official Postgres image, hosted publicly on
Dockerhub, rather than our custom one. Change the image line to:

```yaml
1 change: 0 additions & 1 deletion content/08-more-improvements/postgres-deployment.yaml
@@ -44,7 +44,6 @@ spec:
cpu: 50m
memory: 100Mi
limits:
cpu: 100m
memory: 512Mi

readinessProbe:
1 change: 0 additions & 1 deletion content/08-more-improvements/runner-deployment.yaml
@@ -25,7 +25,6 @@ spec:
cpu: 50m
memory: 50Mi
limits:
cpu: 100m
memory: 128Mi

env:
1 change: 0 additions & 1 deletion content/09-helm-ingress/frontend-deployment.yaml
@@ -28,7 +28,6 @@ spec:
cpu: 50m
memory: 50Mi
limits:
cpu: 100m
memory: 128Mi

readinessProbe: