ReForge Stable Diffusion WebUI containers, updated weekly.
Currently builds:
- ReForge Neo.
ReForge Neo Stable Diffusion WebUI container with:
- neo branch from Haoming02.
- CUDA 13.
- Sageattention 2.
- Python 3.11 and UV.
- Latest Torch and Torchvision.
- Volumes and bind-mounts for all config, outputs, and runtime dependencies.
- Everything included in the image.
The following tags are available:
- snw35/reforge:neo - Always points to the latest version of Neo.
- snw35/reforge:neo-x.x - Points to a specific version of Neo.
This container allows you to create AI images and videos with the latest models (Lumina, Quant, Wan, etc) on the most cutting-edge version of ReForge webui, with all the latest dependencies. You will need a host capable of running it (requirements outlined below). A modern gaming PC with an Nvidia GPU and Docker/Podman Desktop would be suitable without building something more serious (home server, etc).
This container is a large image (around 5GB) due to the WebUI's large number of dependencies. These are pre-packaged in the image, so nothing needs to be downloaded or compiled at runtime.
Important - your host MUST have CUDA runtime version 13 or above installed. The host's CUDA version must be equal to, or greater than, the version inside the container (13 in this case).
- An Nvidia GPU with 8GB+ VRAM and CUDA 13 support, RTX 20+ recommended.
- At least 16GB of RAM for SD/XL and distilled models, 64GB for full versions of models (e.g Flux).
- At least 200GB of disk space for dependencies and models (500GB or more recommended).
- Using a VM is fine, as long as your GPU is passed-through and detected inside it.
You need a container runtime that can support passing your GPU through to the container:
- Windows: Install CUDA 13, then either Docker Desktop with GPU pass-through or Podman Desktop with GPU pass-through.
- MacOS: Install CUDA 13, then Podman Desktop with GPU pass-through.
- Linux: Install CUDA 13, then Podman Desktop with GPU pass-through.
- Virtual Machine: A Linux VM with GPU pass-through works well (see below).
If you go the Virtual Machine route, or want to set up natively on Linux, you will need to install:
- CUDA 13 Runtime.
- A supported Linux distro for CUDA 13 (Ubuntu server 24.04 LTS recommended).
- Compatible Nvidia drivers (both 'open' and 'proprietary' flavours work).
- Docker-CE.
- Nvidia container toolkit.
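Once everything is installed, each layer can be sanity-checked in turn before trying the full WebUI. The script below is a sketch: nvidia-smi verifies the driver on the host, and the --entrypoint override (which replaces the image's normal startup command just for this check) verifies that the container toolkit exposes the GPU inside containers.

```shell
# Sketch: sanity-check each layer of the GPU stack, degrading gracefully.
host_gpu=no
container_gpu=no

# 1. Is the Nvidia driver (and its CUDA version) visible on the host?
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi; then
  host_gpu=yes
else
  echo "nvidia-smi failed or not found: check the Nvidia driver install"
fi

# 2. Is the GPU visible inside a container?
#    (needs Docker-CE plus the Nvidia container toolkit)
if command -v docker >/dev/null 2>&1 \
   && docker run --rm --runtime=nvidia --gpus all \
        --entrypoint nvidia-smi snw35/reforge:neo; then
  container_gpu=yes
else
  echo "GPU not visible in containers: check Docker-CE and the Nvidia container toolkit"
fi

echo "host GPU: $host_gpu, container GPU: $container_gpu"
```

If both checks report yes, the standalone test run below should work.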
Once you have all of the above installed and working, you can test this image in standalone mode to see if it will run:
docker run -it --runtime=nvidia --gpus all -p 7860:7860 snw35/reforge:neo
The webui should be available on port 7860 of the VM/host once it has started up, e.g http://your-host-ip:7860
Any errors that cause the container to exit, or CUDA/Nvidia runtime or driver errors in the logs, are likely fatal and will need troubleshooting on the host to resolve.
For long-term use the bundled docker-compose.yaml file is required because it maps critical files (such as output images and configuration) to named volumes and the local directory, preserving them between restarts.
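For reference, a minimal sketch of what such a compose file can look like is below. The in-container mount paths and the GPU reservation block are assumptions for illustration; the bundled docker-compose.yaml in the repo is authoritative.

```yaml
# Illustrative sketch only - use the bundled docker-compose.yaml from the repo.
services:
  reforge:
    image: snw35/reforge:neo
    ports:
      - "7860:7860"
    volumes:
      # Container-side paths below are illustrative assumptions.
      - reforge-config:/home/ubuntu/webui/config
      - reforge-extensions:/home/ubuntu/webui/extensions
      # Bind-mounts into the local directory, preserved between restarts.
      - ./outputs:/home/ubuntu/webui/outputs
      - ./models:/home/ubuntu/webui/models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  reforge-config:
  reforge-extensions:
```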
The purpose of each named volume is explained below:
- reforge-config - Webui settings files.
- reforge-extensions - Any custom extensions installed by the user.
Clone the repo and start the compose project:
git clone https://github.com/snw35/reforge
cd reforge
docker compose up -d && docker compose logs -f
This will create two important directories:
- outputs - This is where your generated images are placed.
- models - Place your models, Loras, etc here.
See Haoming02's wiki page for where to download model files, and for references on how to run them.
Place your model files in the models folder in the layout below. You will need to create the sub-folders for each type of model. This folder is bind-mounted into the container so all files are available to the webui.
- Put Checkpoint / UNet / DiT files in models/Stable-diffusion.
- Put Text Encoders in models/text_encoder.
- Put VAE files in models/VAE.
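Assuming the three sub-folders above have been created, the resulting host-side layout looks like:

```
models/
├── Stable-diffusion/   # checkpoints, UNet, DiT
├── text_encoder/       # text encoders
└── VAE/                # VAEs
```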
This container can be built by simply cloning the repo and running e.g docker build -t reforge:neo ./neo/.
The build is two-stage due to the compilers and dev packages needed to build the venv. A fair number of dependencies are 'sdist' packages, i.e source distributions that require compilation on install. The build tools are not needed at runtime, so they are dropped and left behind in the 'builder' stage.
Everything is packaged inside /home/ubuntu, including all runtime files and the WebUI install. The ubuntu user is used to run the WebUI itself. The CUDA -base- image is used as the base image, because the CUDA runtime libraries are required.
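The two-stage pattern described above can be sketched roughly as follows. The base-image tags, package names, and install steps are illustrative assumptions; the real Dockerfile in the repo is authoritative.

```dockerfile
# Illustrative sketch only - see the real Dockerfile in ./neo/.

# Stage 1: 'builder' carries the compilers and dev headers needed
# to install sdist packages into the venv.
FROM nvidia/cuda:13.0.0-devel-ubuntu24.04 AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
      build-essential git python3.11 python3.11-dev
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /home/ubuntu
RUN uv venv --python 3.11 venv && \
    . venv/bin/activate && \
    uv pip install torch torchvision   # plus the WebUI's other requirements

# Stage 2: the runtime image keeps only the CUDA runtime libraries
# (hence the -base- image) and the finished venv; compilers stay behind.
FROM nvidia/cuda:13.0.0-base-ubuntu24.04
RUN apt-get update && apt-get install -y --no-install-recommends python3.11
COPY --from=builder /home/ubuntu /home/ubuntu
USER ubuntu
WORKDIR /home/ubuntu
EXPOSE 7860
```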
This repo will not provide containers or support for older CUDA versions (e.g 11.x, needed by some older Nvidia hardware) at the current time, as this project is focused on achieving the fastest performance on the latest versions of ReForge and its dependencies. AI image generation moves fast, and installation methods can vary widely between e.g CUDA 12.x and CUDA 13 for the same packages.