An interactive web app that teaches you how AI image generation (diffusion models) actually works. Built for people who learn by doing, not just reading.
## Why

Most ML explanations are either:
- Too simple: "AI learns from data" (useless)
- Too complex: Pages of math (overwhelming)
This course makes you work before you get answers. You predict, explain, build, and diagnose - then discover the concepts through your own reasoning.
## What's inside

8 modules, ~80 challenges:
- The Big Picture - Why AI image generation exists and how it works
- Text to Numbers - How words become something a model can use
- The Diffusion Process - How noise becomes images
- The Transformer - The architecture that makes it work
- Latent Space - Why we compress before generating
- Distillation - How to make it fast (why Z-Image needs only 8 steps)
- Putting It Together - The complete mental model
- Path to Contributing - From understanding to building your own
## Features

- Active learning: Predict → Struggle → Discover → Verify
- Spaced retrieval: Review system that resurfaces concepts at optimal intervals
- Progress tracking: localStorage persistence, understanding scores
- Accessible: Keyboard navigation, screen reader support
- No fluff: Minimal design, focused on learning
## Tech stack

- Next.js 14 (App Router)
- TypeScript
- Tailwind CSS
- Framer Motion
## Getting started

```bash
npm install
npm run dev
```

## Who it's for

- Web developers curious about ML
- Anyone who wants to understand diffusion models without a PhD
- People who learn better by doing than reading papers
## License

MIT