Easy setup for ComfyUI with WAN2.1 models and LoRA support.
BOOTSTRAP_TMPFILE=$(mktemp) && wget https://raw.githubusercontent.com/richowen/ComfyUI-Auto_installer/main/bootstrap.sh -O "$BOOTSTRAP_TMPFILE" && chmod +x "$BOOTSTRAP_TMPFILE" && bash "$BOOTSTRAP_TMPFILE" && rm -f "$BOOTSTRAP_TMPFILE"

Features:
- WAN2.1 Models: Text-to-video and image-to-video capabilities
- Custom LoRAs: Download from civitai.com and huggingface.co
- Model Options: bf16, fp16, fp8, and GGUF quantized versions
- GPU Optimization: Standard and low VRAM modes
- MultiGPU Support: Load and run models across multiple GPUs
- Custom Node Auto-setup: Automatically runs install.py scripts in custom nodes
During installation, the script will:
- Set up a Python virtual environment
- Install ComfyUI and core dependencies
- Install specialized packages (onnxruntime, insightface, facexlib, filterpy)
- Set up acceleration packages (sageattention, triton, xformers)
- Install custom nodes with their requirements
- Run any install.py scripts found in custom nodes to complete their setup
- Configure environment for optimal performance
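The custom-node setup step above follows a simple pattern: scan each node's directory and run its `install.py` if one exists. Here is a self-contained sketch of that pattern using a throwaway fixture directory; the real script's internals aren't shown here, and the actual layout is `ComfyUI/custom_nodes/<node>/`.

```shell
# Demo of the custom-node auto-setup pattern: run any install.py found
# inside a custom node's directory. The demo_nodes fixture is illustrative.
mkdir -p demo_nodes/node_a demo_nodes/node_b
printf 'open("setup_ran.txt", "w").write("ok")\n' > demo_nodes/node_a/install.py
# node_b ships no install.py and is skipped
for d in demo_nodes/*/; do
  if [ -f "${d}install.py" ]; then
    ( cd "$d" && python3 install.py )  # real script would use the venv's python
  fi
done
```

After running, `demo_nodes/node_a/setup_ran.txt` exists while `node_b` is untouched, mirroring how only nodes that ship an `install.py` get extra setup.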
After installation:
- Run one of the provided scripts:
  - ./run_comfyui.sh (standard mode)
  - ./run_comfyui_lowvram.sh (low VRAM mode)
  - ./scripts/run_nvidia_gpu-sageattention.sh (SageAttention mode for better performance)
- Open http://localhost:8188 in your browser
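To confirm the server came up, a quick check from another terminal (assuming the default port 8188 and that `curl` is installed) is:

```shell
# Succeeds once ComfyUI is serving on its default port
curl -sf http://localhost:8188 > /dev/null && echo "ComfyUI is up"
```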
./bootstrap.sh --directory PATH --yes

Options:
- --directory PATH: Custom install location
- --branch BRANCH: Use a specific branch
- --yes: Non-interactive mode
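For example, an unattended install into a custom location (the path here is just an illustration) combines the flags like this:

```shell
# Hypothetical install path; --yes skips all interactive prompts
./bootstrap.sh --directory "$HOME/apps/comfyui" --yes
```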