A professional desktop application for creating storyboards with AI-powered image generation through ComfyUI. Export workflows from ComfyUI in API format and import them into the app; automatic widget recognition creates the basic parameters, and anything it misses can be added manually from Edit Workflow Parameters. Categorize each workflow as image-to-image (I2I), text-to-image (T2I), or image editing, and start creating storyboards!
- Features
- Screenshots
- Installation
- Quick Start
- Configuration
- Templates
- Camera Angles
- UI Overview
- Export Options
- Creating Templates
- Development
- Testing
- Troubleshooting
- Contributing
- License
- Template-Based Generation: Use pre-configured workflow templates for different AI models (Qwen, Z Image Turbo, Flux, Stable Diffusion)
- Camera Angle Library: 144 professionally organized camera angles (3 shot sizes × 4 heights × 12 view directions)
- LoRA Integration: Enable, disable, and configure multiple LoRA slots per template
- Parameter Control: Adjust sampling steps, CFG scale, dimensions, and more with intuitive widgets
- Reference Images: Support for img2img and ControlNet workflows with drag-and-drop image input
- Prompt Building: Comprehensive prompt editor with negative prompts, style tags, and quality presets
- Mask Painting: Draw masks on reference images for inpainting workflows with adjustable brush size
- Dark Theme: Professional dark interface optimized for creative work
- Responsive Grid: Scrollable storyboard grid with customizable column count
- Real-time Preview: Live parameter updates and image previews
- Drag-and-Drop: Easy image file selection with visual feedback
- Context Menus: Right-click actions for regenerate, export, view metadata, delete, import image, and re-use parameters (load a panel's saved workflow + parameters)
- Panel Notes: Add text notes under each panel for scene descriptions
- Metadata Viewing: View generation parameters (prompts, seed, LoRAs) for any panel
- Image Formats: PNG (lossless), JPEG (adjustable quality), WebP (compressed)
- PDF Grid Export: Professional storyboard layouts with configurable grids (2x3, 2x2, 3x3, 1x1, 3x2)
- CSV Export: Metadata export with prompt and parameter information
- Batch Operations: Export individual panels or entire storyboards
- Save Sessions: Preserve your storyboard with all images, notes, and metadata
- Load Sessions: Resume work on previous storyboards
- Auto-copied Images: Session images are stored independently from ComfyUI output
- Saved Generation Metadata: Newer generations include full metadata saved with each panel: template name, parameter values, prompts, LoRA settings, seed, camera angle token, whether the camera angle was enabled, whether "Next Scene" prefix was enabled, and reference image paths (if used). Use "Re-use Parameters" from the panel context menu to load these back into the generation panels.
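As a rough illustration, a panel's saved metadata can be pictured as a flat record like the sketch below. The field names here are hypothetical, not the app's actual session schema:

```python
# Hypothetical sketch of per-panel generation metadata; field names are
# illustrative and may not match the app's real session format.
panel_metadata = {
    "template_name": "next_scene_and_multiple_angles",
    "prompt": "A detective enters a dim warehouse",
    "negative_prompt": "blurry, low quality",
    "seed": 123456789,
    "parameters": {"steps": 30, "cfg_scale": 7.0, "width": 1024, "height": 1024},
    "loras": [{"name": "style_lora", "enabled": True, "strength": 0.75}],
    "camera_angle_token": "<sks> front eye-level shot medium shot",
    "camera_angle_enabled": True,
    "next_scene_enabled": False,
    "reference_images": [],
}

def is_reusable(meta: dict) -> bool:
    """Re-use Parameters needs at least the template, parameters, and seed."""
    return all(key in meta for key in ("template_name", "parameters", "seed"))
```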
A word of warning: the workflows included in the installation will probably not work for you out of the box, because the model and LoRA paths in them point to locations on my ComfyUI setup (see Troubleshooting for how to fix this).
- Python: 3.10 or higher
- ComfyUI: Running instance with REST API enabled
- Download from: https://github.com/comfyanonymous/ComfyUI
- Enable API: Add `--listen 0.0.0.0` to the launch command
Using uv (recommended):
# Install uv (if not already installed)
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone the repository
git clone https://github.com/yourusername/storyboard-maker.git
cd storyboard-maker
# Create virtual environment and install all dependencies (uses uv.lock)
uv sync
# Run the application with uv
uv run python -m storyboard_app.main

Using venv/pip:
# Clone the repository
git clone https://github.com/yourusername/storyboard-maker.git
cd storyboard-maker
# Create virtual environment
python -m venv .venv
# Activate virtual environment (Windows)
.venv\Scripts\activate
# Activate virtual environment (macOS/Linux)
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt

Using pip (editable install):
# Clone the repository
git clone https://github.com/yourusername/storyboard-maker.git
cd storyboard-maker
# Install with pip
pip install -e .
# Or install with development dependencies
pip install -e ".[dev]"

Using poetry:
# Install with poetry
poetry install

Why pyproject.toml + uv.lock?
- Modern PEP 621 standard
- uv provides fast, reproducible installs via lockfile
- Works with uv, poetry, and pip
- Supports dependency groups ([dev])
- Better metadata and tooling configuration
# Run the application (with uv)
uv run python -m storyboard_app.main
# Or if using venv (activate first)
python -m storyboard_app.main
# With debug logging
python -m storyboard_app.main --debug

# Navigate to ComfyUI directory
cd /path/to/ComfyUI
# Launch with API enabled
python main.py --listen 0.0.0.0 --port 8188

Then start the app:
python -m storyboard_app.main
- Open File → Settings
- Verify the ComfyUI URL (default: `http://127.0.0.1:8188`)
- Click Test Connection to verify
- Click Save
- Select Template: Choose from the dropdown (e.g., "Qwen Image Generator")
- Adjust Parameters: Modify steps, CFG scale, and dimensions as needed
- Add Camera Angle: Select from 144 available angles (optional)
- Write Prompt: Describe the scene in the prompt builder
- Generate: Click "Generate Panels" to create storyboard panels
Configuration is stored in `storyboard_app/config.json`:
{
"comfyui": {
"server_url": "http://127.0.0.1:8188",
"timeout": 120,
"max_retries": 3
},
"paths": {
"templates_dir": "templates",
"user_templates_dir": "user_templates",
"output_dir": "output",
"loras_dir": null
},
"ui": {
"theme": "dark",
"default_panel_count": 6,
"grid_columns": 3,
"panel_width": 300
},
"generation": {
"default_steps": 30,
"default_cfg_scale": 7.0,
"default_seed": -1
}
}

Override configuration with environment variables:
# Set ComfyUI server URL
export STORYBOARD_COMFYUI_URL=http://192.168.1.100:8188
# Set generation timeout
export STORYBOARD_COMFYUI_TIMEOUT=180
# Set output directory
export STORYBOARD_OUTPUT_DIR=/path/to/output

The following sample templates are included:
| Template | Engine | Description |
|---|---|---|
| `flux_canny_model.json` | FLUX | Image-to-image with Canny |
| `flux_depth_lora.json` | FLUX | Image-to-image with depth |
| `flux_fill_outpaint.json` | FLUX | Image outpainting |
| `flux_schnell.json` | FLUX | Classic Flux Schnell image generation |
| `head_swap_v4.json` | Qwen Edit | Qwen edit with Head Swap v4 LoRA |
| `image_z_image_turbo.json` | Z Image Turbo | Classic Z Image Turbo text-to-image workflow |
| `multiple-angles.json` | Qwen Edit | Deprecated Multiple Angles Qwen Edit 2511 workflow |
| `next_scene_and_multiple_angles.json` | Qwen Edit | Improved image-to-image and image-editing workflow with Next Scene and Multiple Camera Angles LoRAs in the UI |
| `qwen_edit_fusion.json` | Qwen Edit | Qwen Fusion LoRA for image-to-image generation |
| `qwen_image.json` | Qwen | Classic Qwen 2512 image generation |
Templates define:
- Metadata: Name, version, engine, category
- Parameters: Adjustable workflow parameters with constraints
- LoRA Slots: Configurable LoRA positions
- Image Inputs: Reference image requirements
- Workflow: Complete ComfyUI graph definition
Create custom templates:
- Export from ComfyUI: Click Template → Import from ComfyUI
- Load a workflow JSON file
- Add metadata (name, description, engine)
- Save to the `user_templates/` directory
Templates are automatically loaded on next refresh.
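A template loader along these lines could scan both directories and skip files with invalid JSON. This is a minimal sketch, not the app's actual loader:

```python
import json
from pathlib import Path

def load_templates(*dirs: str) -> dict[str, dict]:
    """Index template JSON files by their meta name (filename stem as fallback)."""
    templates: dict[str, dict] = {}
    for d in dirs:
        directory = Path(d)
        if not directory.is_dir():
            continue  # a missing user_templates/ dir is not an error
        for path in sorted(directory.glob("*.json")):
            try:
                data = json.loads(path.read_text(encoding="utf-8"))
            except json.JSONDecodeError:
                continue  # skip templates with invalid JSON syntax
            name = data.get("meta", {}).get("name", path.stem)
            templates[name] = data
    return templates
```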
Camera angles only work if you have the Multiple Angles LoRA (`Qwen-Edit-2509-Multiple-angles.safetensors`) from fal.ai: https://huggingface.co/fal/Qwen-Image-Edit-2511-Multiple-Angles-LoRA
The angle library provides 144 camera angles organized by:
Shot Size (3 types):
- Close-up
- Medium Shot
- Wide Shot
Camera Height (4 types):
- Low-angle
- Eye-level
- Elevated
- High-angle
View Direction (12 types):
- Front, Front-right quarter, Right side, Back-right quarter
- Back, Back-left quarter, Left side, Front-left quarter
Angles use the Multiple Angles LoRA token format:
<sks> {direction} {height} shot {size}
Example:
<sks> front-right quarter view eye-level shot close-up
- Enable camera angle tokens in the Angle Selector panel
- Select desired angle from the dropdown
- Preview the generated token
- Include in prompt during generation
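Token assembly is simple string composition; a sketch following the format above (the helper name is illustrative):

```python
# Build a Multiple Angles LoRA token from the three angle dimensions,
# following the README format: "<sks> {direction} {height} shot {size}".
def angle_token(direction: str, height: str, size: str) -> str:
    return f"<sks> {direction} {height} shot {size}"

# 3 shot sizes x 4 camera heights x 12 view directions = 144 combinations.
```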
- Import Image: Right-click any panel and choose "Import Image..." to load an external image into a panel slot. Imported images can carry minimal metadata and are stored in your session when saving.
- Re-use Parameters: Right-click a populated panel and choose "Re-use Parameters" to load the workflow template and parameter values used to create that panel. This restores parameters and (for generations created after 2026-01-09) will also restore camera angle state and reference images if they still exist on disk.
┌─────────────────────────────────────────────────────────────────────┐
│ File Template Help [Main Toolbar] │
├──────────────────┬──────────────────────────────────────────────────┤
│ │ │
│ Template: │ Storyboard Grid (3 columns) │
│ [Select ▼] │ │
│ │ ┌─────┐ ┌─────┐ ┌─────┐ │
│ Parameters: │ │ #1 │ │ #2 │ │ #3 │ │
│ [Scrollable] │ │ │ │ │ │ │ │
│ │ └─────┘ └─────┘ └─────┘ │
│ Camera Angle: │ │
│ [☑ Enable] │ ┌─────┐ ┌─────┐ ┌─────┐ │
│ [Select ▼] │ │ #4 │ │ #5 │ │ #6 │ │
│ Token: <...> │ │ │ │ │ │ │ │
│ │ └─────┘ └─────┘ └─────┘ │
│ Prompt: │ │
│ [Text Area] │ [Scroll for more...] │
│ │ │
│ Style Tags: │ │
│ ☑ Cinematic ☑ Realistic │
│ │ │
│ [Generate Panels] │
│ │ │
├──────────────────┴──────────────────────────────────────────────────┤
│ Status: Connected to ComfyUI [Progress] │
└─────────────────────────────────────────────────────────────────────┘
Panels (left sidebar):
- Template Selector: Choose generation template
- Parameters Panel: Adjust workflow parameters
- LoRA Panel: Configure LoRA settings
- Image Inputs: Set reference images
- Angle Selector: Choose camera angle if the workflow has the correct Qwen Edit Loras
- Prompt Builder: Write and enhance prompts
Widgets (reusable components):
- Image Drop Zone: Drag-and-drop image selection
- Value Widgets: Parameter inputs (spinbox, slider, dropdown, checkbox)
- Panel Slot: Individual storyboard display with notes field and context menu
- Mask Painter: Draw masks on images for inpainting workflows
Context Menu Actions (right-click on panel):
- View Full Size: Open panel in full-resolution viewer
- View Metadata...: Show generation parameters
- Export...: Save panel to file
- Regenerate: Generate new image for this panel
- Delete: Clear panel
Supported Formats:
| Format | Description | Quality Settings |
|---|---|---|
| PNG | Lossless compression | 100% |
| JPEG | Lossy compression | 1-100 (default: 95) |
| WebP | Modern format | 1-100 (default: 90) |
Export Individual Panel:
Right-click panel → Export...
Export All Panels:
File → Export...
Generate professional storyboard PDF documents:
Grid Layouts:
| Preset | Grid | Per Page | Best For |
|---|---|---|---|
| 2×3 | 2 columns × 3 rows | 6 panels | Standard storyboards |
| 2×2 | 2 columns × 2 rows | 4 panels | Detailed review |
| 3×3 | 3 columns × 3 rows | 9 panels | Compact overview |
| 1×1 | 1 column × 1 row | 1 panel | Full-page presentation |
| 3×2 | 3 columns × 2 rows | 6 panels | Landscape layouts |
Features:
- Panel numbers displayed on each cell
- Optional notes text under each panel
- Professional borders and spacing
- Automatic page breaks
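The page-count math behind automatic page breaks is straightforward; a sketch assuming the "CxR" preset naming used above:

```python
import math

def parse_preset(preset: str) -> tuple[int, int]:
    """Turn a grid preset like "2x3" into (columns, rows)."""
    cols, rows = (int(n) for n in preset.lower().split("x"))
    return cols, rows

def pages_needed(panel_count: int, preset: str) -> int:
    """Pages required when each page holds columns * rows panels."""
    cols, rows = parse_preset(preset)
    return math.ceil(panel_count / (cols * rows))
```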
Export metadata for documentation:
- Panel number
- Generated images paths
- Prompt used
- Parameter values
- LoRA settings
- Camera angle
- Panel notes
Save Session (File → Save Session):
- Enter a session name
- All panel images are copied to the session folder
- Notes and metadata are preserved
- Sessions are stored in the `sessions/` folder
Load Session (File → Load Session):
- Browse to select a previous session
- Restores all panels, images, notes, and metadata
- Confirms before replacing current storyboard
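Conceptually, saving a session amounts to copying panel images into the session folder and writing a manifest with notes and metadata. A minimal sketch — the real session format and field names here are assumptions:

```python
import json
import shutil
from pathlib import Path

def save_session(name: str, panels: list[dict], sessions_root: str = "sessions") -> Path:
    """Copy panel images into a session folder and write a manifest (sketch)."""
    session_dir = Path(sessions_root) / name
    session_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for i, panel in enumerate(panels, start=1):
        entry = {"panel": i, "notes": panel.get("notes", ""), "image": None}
        src = panel.get("image_path")
        if src:
            # Copy so the session no longer depends on ComfyUI's output folder.
            dest = session_dir / f"panel_{i:03d}{Path(src).suffix}"
            shutil.copy2(src, dest)
            entry["image"] = dest.name
        manifest.append(entry)
    (session_dir / "session.json").write_text(
        json.dumps(manifest, indent=2), encoding="utf-8"
    )
    return session_dir
```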
{
"meta": {
"name": "My Custom Template",
"version": "1.0.0",
"author": "Your Name",
"description": "Template description",
"engine": "flux",
"category": "generation",
"supports_angles": true,
"supports_next_scene": false,
"requires_images": false
}
}

{
"parameters": [
{
"name": "steps",
"display_name": "Sampling Steps",
"type": "integer",
"node_id": "3",
"input_name": "steps",
"default": 30,
"constraints": {
"min": 10,
"max": 150,
"step": 1
},
"description": "Number of denoising steps"
}
]
}

Parameter Types:
- `integer`: Whole number with min/max/step
- `float`: Decimal number with min/max/step (slider + spinbox)
- `seed`: Integer with a random button (-1 = random)
- `enum`: Dropdown with predefined options
- `boolean`: Checkbox for on/off values
- `prompt`: Text input for prompt strings
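A value widget can enforce these constraints with a small clamp helper; an illustrative sketch matching the integer schema shown above:

```python
import random

def clamp_parameter(spec: dict, value):
    """Clamp a value to its template constraints; resolve -1 seeds to random.
    (Illustrative sketch mirroring the parameter schema, not the app's code.)"""
    kind = spec["type"]
    c = spec.get("constraints", {})
    if kind in ("integer", "float"):
        if c.get("min") is not None:
            value = max(c["min"], value)
        if c.get("max") is not None:
            value = min(c["max"], value)
        return int(value) if kind == "integer" else float(value)
    if kind == "seed" and value == -1:
        return random.randint(0, 2**32 - 1)  # -1 means "pick a random seed"
    return value
```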
{
"loras": [
{
"name": "style_lora",
"display_name": "Style LoRA",
"node_id": "10",
"strength_inputs": {
"model": "strength_model",
"clip": "strength_clip"
},
"default_enabled": false,
"default_strength": 0.75,
"required": false
}
]
}

{
"inputs": [
{
"name": "reference_image",
"display_name": "Reference Image",
"node_id": "12",
"input_name": "image",
"type": "image",
"required": false,
"batch_min": 1,
"batch_max": 1,
"description": "Optional reference image"
}
]
}

storyboard-maker/
├── storyboard_app/
│ ├── __init__.py
│ ├── main.py # Application entry point
│ ├── config.py # Configuration management
│ ├── models/ # Data models
│ │ ├── template.py
│ │ ├── parameter.py
│ │ ├── lora.py
│ │ └── image_input.py
│ ├── core/ # Business logic
│ │ ├── template_loader.py
│ │ ├── comfyui_client.py
│ │ ├── angle_library.py
│ │ ├── workflow_builder.py
│ │ ├── prompt_builder.py
│ │ ├── export_manager.py
│ │ └── session_manager.py
│ ├── ui/ # User interface
│ │ ├── main_window.py
│ │ ├── panels/
│ │ │ ├── template_selector.py
│ │ │ ├── template_editor.py
│ │ │ ├── settings_panel.py
│ │ │ ├── parameters_panel.py
│ │ │ ├── lora_panel.py
│ │ │ ├── image_inputs.py
│ │ │ ├── angle_selector.py
│ │ │ ├── prompt_builder.py
│ │ │ └── storyboard_grid.py
│ │ └── widgets/
│ │ ├── image_drop_zone.py
│ │ ├── value_widgets.py
│ │ └── panel_slot.py
│ ├── templates/ # Built-in templates
│ ├── user_templates/ # User-created templates
│ ├── data/ # Data files
│ │ └── angles.txt # Camera angle definitions
│ ├── output/ # Generated images
│ └── config.json # Configuration file
├── tests/ # Unit tests
├── requirements.txt # Python dependencies
├── README.md # This file
└── tasks.md # Development task tracking
Style Guide:
- Line length: 100 characters maximum
- Indentation: 4 spaces (no tabs)
- Quotes: Double quotes for strings
- Type hints: Required for all function signatures
- Imports: Standard library → Third-party → Local
Type Hints:
from typing import Any, Optional, List, Dict
def process_data(
    data: Dict[str, Any],
    config: Optional[Config] = None,
) -> List[str]:
    ...

Error Handling:
class CustomError(Exception):
    """Base exception for custom errors."""
    pass

try:
    result = operation()
except CustomError as e:
    logger.error(f"Operation failed: {e}")
    raise

# Install test dependencies
pip install pytest pytest-qt pytest-cov requests-mock
# Run all tests
pytest tests/
# Run with coverage
pytest tests/ --cov=storyboard_app --cov-report=html
# Run specific test
pytest tests/test_config.py -v

Issue: "Cannot connect to ComfyUI"
Solutions:
- Verify ComfyUI is running
- Check server URL in Settings
- Ensure the `--listen` flag is used when launching ComfyUI
- Check firewall settings (port 8188)
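A quick way to tell "server down" from "wrong URL in Settings" is a reachability probe. A sketch using only the standard library; `/system_stats` is a common ComfyUI API route, but any HTTP response at all proves the server is listening:

```python
from urllib.error import HTTPError
from urllib.request import urlopen

def server_reachable(server_url: str, timeout: float = 3.0) -> bool:
    """Return True if something is answering HTTP at the ComfyUI URL."""
    try:
        with urlopen(f"{server_url.rstrip('/')}/system_stats", timeout=timeout):
            return True
    except HTTPError:
        return True   # server answered, just not with 200 OK
    except OSError:
        return False  # connection refused, unreachable, or timed out
```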
Issue: "Template not found"
Solutions:
- Check templates are in the correct directory (`storyboard_app/templates/`)
- Verify JSON syntax is valid
- Refresh template list (Template → Refresh Templates)
- Check template schema matches expected format
Issue: "Generation failed"
Solutions:
- Check ComfyUI console for error messages
- Verify ComfyUI has required models loaded
- Confirm parameter values are within valid ranges
- Check available GPU memory
- Reduce batch size or image dimensions
Issue: "Generation fails because model and LoRA paths differ in the provided templates"
Solutions:
- Open each JSON file in the `StoryboardMaker_Data\user_templates` folder in a text editor and find the model and LoRA names. Change their paths to match your ComfyUI installation and save. For example, from a block like:
"loras": [
{
"name": "lora_1",
"display_name": "next-scene_lora_v1-3000",
"node_id": "73",
"strength_inputs": {
"model": "strength_model",
"clip": "strength_clip",
"lora_name": ""
},
"compatible_patterns": [],
"default_enabled": true,
"default_strength": 1,
"required": true
},
change `"display_name": "next-scene_lora_v1-3000"` to `"display_name": "my lora name"`
- Find the next block of code:
"inputs": {
"lora_name": "Qwen\\next-scene_lora_v1-3000.safetensors",
"strength_model": 1,
"model": [
"75",
0
]
},
"class_type": "LoraLoaderModelOnly",
"_meta": {
"title": "LoraLoaderModelOnly"
}
},
and change `"lora_name": "Qwen\\next-scene_lora_v1-3000.safetensors"` to `"lora_name": "my path to the lora\\my lora name.safetensors"`
- Repeat for all models and LoRAs, and the provided templates will work.
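The manual edit above can also be scripted. A hedged sketch that rewrites every `lora_name` input anywhere in a template file (the helper name is illustrative, and back up your templates before running anything like this):

```python
import json
from pathlib import Path

def retarget_lora(template_path: str, old_name: str, new_name: str) -> int:
    """Replace every 'lora_name' equal to old_name in a template JSON file.
    Returns the number of occurrences changed. (Illustrative helper.)"""
    path = Path(template_path)
    data = json.loads(path.read_text(encoding="utf-8"))
    changed = 0

    def walk(node):
        nonlocal changed
        if isinstance(node, dict):
            if node.get("lora_name") == old_name:
                node["lora_name"] = new_name
                changed += 1
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(data)
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
    return changed
```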
Issue: "Import error: No module named 'PyQt6'"
Solutions:
# Using uv (recommended)
uv sync
# Using pip
pip install -e .
# Reinstall PyQt6
uv pip install --reinstall "PyQt6>=6.6.0"
Or do a clean reinstall with pip:
# Reinstall PyQt6
pip uninstall PyQt6
pip install "PyQt6>=6.6.0"

Issue: "PDF export failed"
Solutions:
# Install ReportLab
pip install "reportlab>=4.0.0"

We welcome contributions! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
<type>(<scope>): <subject>
<body>
<footer>
Types:
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `style`: Code style changes (formatting)
- `refactor`: Code refactoring
- `test`: Adding tests
- `chore`: Maintenance tasks
Example:
feat(lora): Add strength slider for individual LoRA slots
- Separate model and CLIP strength controls
- Add visual feedback for enabled/disabled state
- Update template schema for new inputs
Closes #42
This project is licensed under the MIT License - see the LICENSE file for details.
- ComfyUI: Node-based UI for Stable Diffusion
- PyQt6: Cross-platform GUI framework
- Multiple Angles LoRA: Camera angle control system
- Community: All contributors and users
- Issues: Report bugs at GitHub Issues
- Discussions: Ask questions in GitHub Discussions
- Documentation: Full docs at Project Wiki
Made with ❤️ for creators and filmmakers
If you find this project useful, please consider ⭐ starring it on GitHub!