A modern low-code visual programming platform built on NodeGraphQt and qfluentwidgets, supporting drag-and-drop component orchestration, asynchronous execution, file operations, control flow logic, and one-click export of workflows into standalone, executable projects, enabling a seamless transition from development to deployment.
| Traditional Low-Code Tools | CanvasMind |
|---|---|
| Static component assembly | Dynamic expressions + global variables drive parameters |
| Only serial execution | Full conditional branching, iteration, and loops |
| No custom logic | Embedded code editor for writing Python components freely |
| Execution = endpoint | One-click export to standalone projects (API, CLI, Docker) |
| AI disconnected from canvas | Deep LLM integration: yellow jump / purple create buttons for canvas-aware intelligent completion |
- Fluent Design – Built with `qfluentwidgets` for a polished, native look
- Dark Theme – Eye-friendly dark mode by default
- Responsive Layout – Adapts to various screen sizes
- Drag-and-Drop Nodes – Drag components from the panel onto the canvas
- Dataflow Connections – Connect output and input ports to define dependencies
- Backdrop Grouping – Visually group related nodes using Backdrop nodes
- Context Menus – Full right-click menu for node operations
- Type-Aware Suggestions – Automatically match compatible downstream components based on output port types
- Multi-Port Grouping – Recommendations grouped by source port for clarity
- Visual Differentiation – Color-coded suggestions per port type
- Cross-Canvas Learning – Tracks component connection frequency to improve recommendations over time
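Cross-canvas learning can be pictured as a frequency table keyed by source port type. A minimal stdlib-only sketch (the class name and API are invented for illustration, not CanvasMind's actual internals):

```python
from collections import Counter, defaultdict

class ConnectionRecommender:
    """Toy frequency-based recommender: rank downstream components by
    how often they have followed a given output port type in the past."""

    def __init__(self):
        # source port type -> Counter of downstream component names
        self._history = defaultdict(Counter)

    def record(self, source_type, target_component):
        """Call whenever the user wires source_type -> target_component."""
        self._history[source_type][target_component] += 1

    def suggest(self, source_type, top_n=3):
        """Most frequently used downstream components, best first."""
        return [name for name, _ in self._history[source_type].most_common(top_n)]

rec = ConnectionRecommender()
rec.record("CSV", "Column Filter")
rec.record("CSV", "Column Filter")
rec.record("CSV", "Train Model")
print(rec.suggest("CSV"))  # ['Column Filter', 'Train Model']
```

Persisting such counts across sessions is what lets recommendations improve over time.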
- Yellow Jump Buttons: When the LLM references an existing node, a yellow `[Node Name](jump)` button appears; click it to instantly navigate to that node on the canvas.
- Purple Create Buttons: When recommending a new capability, a purple `[Component Name](create)` button is generated; click it to instantiate the component from your library and auto-connect it.
- Multimodal Context Injection: Automatically passes node JSON, variable states, and base64-encoded images to the LLM for precise, actionable suggestions.
- Canvas-Aware Completion: Supports simultaneous references to multiple existing nodes (yellow) and recommendations for missing components (purple), enabling end-to-end workflow completion.
- Non-Blocking Execution – Uses `QThreadPool` to keep the UI responsive
- Real-Time Status Visualization – Node states shown via color (idle / running / success / failure)
- Topological Scheduling – Automatically resolves dependencies and executes in the correct order
- Efficient Serialization – Uses `pickle` and `pyarrow` for fast data passing and storage
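Topological scheduling boils down to ordering nodes so that every node runs only after all of its upstream dependencies. A stdlib-only sketch of the idea using Kahn's algorithm (not CanvasMind's actual scheduler):

```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm: `edges` is a list of (upstream, downstream) pairs."""
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        downstream[src].append(dst)
        indegree[dst] += 1
    # Nodes with no unfinished upstream work are ready to run.
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(nodes):
        raise ValueError("cycle detected in workflow graph")
    return order

print(topological_order(
    ["load", "clean", "train", "report"],
    [("load", "clean"), ("clean", "train"), ("clean", "report")],
))
```

The cycle check doubles as validation: a workflow whose connections loop back on themselves cannot be scheduled.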
- Conditional Branching – Enable/disable branches based on `$...$` expressions (if/else logic)
- Iteration – Loop over lists or arrays, executing subgraphs per element
- Loop Control – Fixed-count or condition-driven loops
- Dynamic Subgraph Skipping – Entire downstream subgraphs of inactive branches are skipped for efficiency
- Expression-Driven Logic – Branch conditions, loop counts, and similar parameters support dynamic expressions
- Structured Scopes – Three variable scopes: `env` (environment), `custom` (user-defined), and `node_vars` (node outputs)
- Dynamic Expressions – Use `$env_user_id$` or `$custom_threshold * 2$` in any parameter field
- Runtime Evaluation – Expressions are resolved before execution, with support for nested dicts/lists
- Secure Sandbox – Powered by `asteval`; prevents unsafe operations and isolates environments via `contextmanager`
- UI Integration – Select variables or type expressions directly in component property panels
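Parameter resolution can be sketched as: detect the `$...$` wrapper, then evaluate the inner expression against the merged variable scopes. The stand-in below uses a restricted `eval` purely to stay stdlib-only; the real system uses `asteval`'s `Interpreter` for genuine sandboxing, and the function name here is invented:

```python
import re

def resolve(value, variables):
    """Toy stand-in for the expression resolver: if `value` is a
    $expression$ string, evaluate it against the merged env/custom/
    node_vars scopes; otherwise pass it through unchanged."""
    match = re.fullmatch(r"\$(.+)\$", str(value))
    if not match:
        return value  # plain literal, no evaluation needed
    # NOTE: the real implementation sandboxes this via asteval.
    return eval(match.group(1), {"__builtins__": {}}, dict(variables))

scopes = {"env_user_id": "u42", "custom_threshold": 0.4}
print(resolve("$custom_threshold * 2$", scopes))  # 0.8
print(resolve("plain text", scopes))              # plain text
```

Running the same resolver recursively over nested dicts/lists gives the "runtime evaluation" behavior described above.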
- Full Python Logic – Write complete `run()` methods and helper functions inside nodes
- Dynamic Ports – Add/remove input/output ports via the UI; bind global variables as defaults
- Full Feature Integration – Leverages global variables, expressions, auto dependency install, logging, and status visualization
- Safe Execution – Runs in isolated subprocesses with timeout control, error capture, and retry support
- Developer-Friendly Editor – Professional code editor with dark theme, syntax highlighting, intelligent autocomplete, folding, and error diagnostics
- Dynamic Loading – Auto-scans the `components/` directory and loads new components
- Pydantic Schemas – Define inputs, outputs, and properties using Pydantic models
- Per-Node Logging – Each node maintains its own execution log
- State Persistence – Save/load entire workflows
- Auto Dependency Resolution – Components declare `requirements`; missing packages are auto-installed at runtime
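Auto dependency resolution can be approximated in a few lines: try to import each declared package, and pip-install whatever is missing before retrying the component. A hedged sketch (the function name is invented, and it assumes the import name matches the distribution name, which is not always true, e.g. PyPI's scikit-learn imports as sklearn):

```python
import importlib
import re
import subprocess
import sys

def ensure_requirements(requirements):
    """For each requirement spec like 'pandas>=1.3.0', import the module
    if possible; otherwise install the spec with pip in this interpreter."""
    for spec in requirements:
        # Strip version constraints to get the bare module name.
        module = re.split(r"[<>=!~]", spec, maxsplit=1)[0].strip()
        try:
            importlib.import_module(module)
        except ImportError:
            subprocess.check_call([sys.executable, "-m", "pip", "install", spec])

# Stdlib modules are already importable, so no pip call is made here.
ensure_requirements(["json", "csv"])
```

After a successful install, the failed component can simply be re-run, which matches the retry behavior described above.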
- Subgraph Export – Select any group of nodes and export it as a self-contained project
- Train/Inference Separation – Export only inference logic, with trained models bundled
- Zero-Dependency Runtime – Generated projects run independently; no CanvasMind required
- Multi-Environment Support – An auto-generated `requirements.txt` enables deployment to servers, Docker, or CLI environments
- Direct Invocation – The canvas can call exported project scripts by name and retrieve results
- Parameter Passing – Node properties define tool-call parameters, passed automatically at runtime
- Full Logging – Detailed logs of tool execution are captured and returned for debugging
- LLM Function Calling Ready – Standardized tool name, input/output schema, and examples for seamless LLM integration
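A "function calling ready" tool descriptor typically takes the JSON-Schema shape most LLM APIs accept. The descriptor below is invented for illustration (tool name and parameters are hypothetical, not generated by CanvasMind), but it shows the standardized name/schema contract described above:

```python
import json

# Hypothetical descriptor for an exported sentiment workflow.
sentiment_tool = {
    "name": "run_sentiment_model",
    "description": "Run the exported sentiment-analysis workflow and return a score.",
    "parameters": {
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "Input text to score"},
            "threshold": {"type": "number", "description": "Decision threshold (0-1)"},
        },
        "required": ["text"],
    },
}

print(json.dumps(sentiment_tool, indent=2))
```

Because the descriptor is plain JSON, the same structure can be handed to any function-calling LLM without CanvasMind-specific glue.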
- Python 3.8+
- PyQt5 or PySide2
Install dependencies and launch the app:

```bash
pip install -r requirements.txt
python main.py
```

Package a standalone build with PyInstaller:

```bash
pyinstaller --onedir --windowed --add-data "app;app" --add-data "icons;icons" -i icons/logo3.png main.py
```

- Create a file in `components/`:
```python
# components/data/my_component.py
class MyComponent(BaseComponent):
    name = "My Processor"
    category = "Data"
    description = "Example component"
    requirements = "pandas>=1.3.0"

    inputs = [{"name": "input_data", "type": "TEXT"}]
    outputs = [{"name": "output_data", "type": "TEXT"}]
    properties = {"param1": {"type": "TEXT", "default": "hello"}}

    def run(self, params, inputs=None):
        """
        params: node properties from the UI
        inputs: upstream data (key = input port name)
        return: output data (key = output port name)
        """
        input_data = inputs.get("input_data") if inputs else None
        param1 = params.get("param1", "default_value")
        result = f"Processed: {input_data} + {param1}"
        return {"output_data": result}
```

- Auto-Loading – Components are automatically scanned and added to the panel
- Auto Dependency Install – If a component fails due to missing packages, CanvasMind installs them from the `requirements` field and retries
| Type | Description | Example |
|---|---|---|
| `TEXT` | Text input | String parameters |
| `LONGTEXT` | Long text input | Multi-line strings |
| `INT` | Integer | Numeric values |
| `FLOAT` | Floating point | Decimal numbers |
| `BOOL` | Boolean | Toggle switches |
| `CSV` | CSV list data | Column selections |
| `JSON` | JSON structure | Dynamic nested data |
| `EXCEL` | Excel data | Cell ranges |
| `FILE` | File path | Local file reference |
| `UPLOAD` | Document upload | User-uploaded files |
| `SKLEARNMODEL` | Scikit-learn model | Trained .pkl models |
| `TORCHMODEL` | PyTorch model | .pt or .pth models |
| `IMAGE` | Image data | Base64 or file paths |
| Type | Description | Example |
|---|---|---|
| `TEXT` | Text input | Short strings |
| `LONGTEXT` | Long text input | Code snippets, prompts |
| `INT` / `FLOAT` | Numeric input | Thresholds, counts |
| `BOOL` | Toggle | Enable/disable flags |
| `CHOICE` | Dropdown | Predefined options |
| `DYNAMICFORM` | Dynamic form | Variable-length lists |
| `RANGE` | Numeric range | Min/max sliders |
- Create Node – Drag from the left panel to the canvas
- Connect Nodes – Drag from an output port to an input port
- Run Node – Right-click → "Run This Node"
- View Logs – Right-click → "View Node Logs"
- Loops – Use Loop/Iterate nodes with a Backdrop for structured iteration
- File Handling – Click the file picker in the property panel
- Workflow Management – Save/load via the top-left buttons
- Node Grouping – Select multiple nodes → right-click → "Create Backdrop"
- Dependency Management – Failed components auto-install missing `requirements`
- `Ctrl+R` – Run workflow
- `Ctrl+S` – Save workflow
- `Ctrl+O` – Load workflow
- `Ctrl+A` – Select all nodes
- `Del` – Delete selected nodes
- Idle – Gray border
- Running – Blue border
- Success – Green border
- Failed – Red border
- Idle – Yellow
- Input Active – Blue
- Output Active – Green
- Each node has independent logs with timestamps
- Powered by Loguru – use `self.logger` in components
- All `print()` output is automatically captured
- Inputs auto-populated from upstream outputs
- Outputs stored by port name
- Full multi-input/multi-output support
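Auto-populating inputs amounts to a lookup: for each of a node's input ports, follow the connection back to the upstream node's output and pull that value from the already-computed results. A hedged sketch (function name and data layout are invented for illustration):

```python
def gather_inputs(node, connections, results):
    """Build the `inputs` dict handed to a node's run() method.
    `connections` maps (node, input_port) -> (upstream_node, output_port);
    `results` maps node -> {output_port: value} for finished nodes."""
    inputs = {}
    for (dst, port), (src, out_port) in connections.items():
        if dst == node:
            inputs[port] = results[src][out_port]
    return inputs

results = {"loader": {"output_data": "raw text"}}
conns = {("cleaner", "input_data"): ("loader", "output_data")}
print(gather_inputs("cleaner", conns, results))  # {'input_data': 'raw text'}
```

Because each entry is keyed by port name, multiple inputs and multiple outputs fall out of the same mechanism for free.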
Export any subgraph as a self-contained project that runs in any Python environment; no CanvasMind required.
- Train/Inference Split – Export only inference logic, with models bundled
- Team Sharing – Share full workflows as runnable projects
- Production Deployment – Run on servers or in Docker
- Offline Execution – CLI-only environments
- ✅ Smart Dependency Analysis – Copies only necessary component code
- ✅ Path Rewriting – Model/data files are copied and converted to relative paths
- ✅ Column Selection Preserved – CSV column config fully retained
- ✅ Environment Isolation – Auto-generated `requirements.txt`
- ✅ Ready-to-Run – Includes `run.py` and `api_server.py`
- Select Nodes – Choose any nodes on the canvas (multi-select supported)
- Click Export – Top-left "Export Model" button
- Choose Directory – The project folder is auto-generated
- Run Externally:

```bash
# Install dependencies
pip install -r requirements.txt

# Run model
python run.py
```

```
model_xxxxxxxx/
├── model.workflow.json   # Full workflow definition (nodes, connections, column selections)
├── project_spec.json     # Input/output schema
├── preview.png           # Canvas preview snapshot
├── README.md             # Project overview
├── requirements.txt      # Auto-analyzed dependencies
├── run.py                # CLI entrypoint
├── api_server.py         # FastAPI microservice
├── scan_components.py    # Component loader
├── runner/
│   ├── component_executor.py
│   └── workflow_runner.py
├── components/           # Original component code (preserved structure)
│   ├── base.py
│   └── your_components/
└── inputs/               # Bundled models/data files
```
- Integrate LSP (Language Server Protocol) to replace Jedi-based static completion
- Submit workflows to remote servers / Kubernetes / Ray clusters
- Local = orchestration only; execution = distributed
- Ideal for LLMs and big data workloads
- Problem: Current execution is serial, which underutilizes CPU/GPU
- Solution:
- Parallel execution of independent nodes
- GPU-aware scheduling (e.g., assign PyTorch models to specific devices)
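The planned parallel scheduler can be sketched as layered execution: repeatedly collect every node whose upstream work is done (these are independent of each other by construction) and run that layer concurrently. A stdlib-only sketch under those assumptions, with `run_node` as a hypothetical per-node callback:

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_parallel_layers(nodes, edges, run_node):
    """Execute a DAG layer by layer; nodes within a layer run concurrently.
    `edges` is a list of (upstream, downstream) pairs."""
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        downstream[src].append(dst)
        indegree[dst] += 1
    ready = [n for n in nodes if indegree[n] == 0]
    with ThreadPoolExecutor() as pool:
        while ready:
            # Every node in `ready` has no pending dependencies: run together.
            list(pool.map(run_node, ready))
            next_ready = []
            for node in ready:
                for nxt in downstream[node]:
                    indegree[nxt] -= 1
                    if indegree[nxt] == 0:
                        next_ready.append(nxt)
            ready = next_ready
```

GPU-aware scheduling would then be a matter of the layer dispatcher assigning each node's `run_node` call to a specific device instead of a generic thread pool.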
- ✅ Visual canvas (NodeGraphQt)
- ✅ Control flow: conditionals, loops, iteration
- ✅ Global variables + expression system
- ✅ Dynamic code components (embedded editor)
- ✅ Intelligent node recommendations
- ✅ One-click export (CLI + API)
- ✅ Multi-environment management
- ✅ LLM context integration (yellow jump / purple create buttons)
- ⏳ Code-to-canvas auto-creation (from editor → new node)
- ⏳ Parallel & remote execution
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the GPLv3 License.
- NodeGraphQt – Node graph framework
- PyQt-Fluent-Widgets – Fluent Design UI library
- Loguru – Elegant Python logging

