A Node-RED-like visual workflow editor with a Python backend. Create workflows by connecting Python nodes that process and route messages.
- Visual Node Editor: Drag-and-drop interface for creating workflows
- Python Backend: All nodes are Python classes that can be easily extended
- Fully Extensible: Third-party nodes can be added without modifying core code
- UI Components: Nodes can define interactive controls (buttons, toggles, displays) in their cards
- Node-RED Compatible Messages: Message structure with `payload` and `topic` fields
- Built-in Nodes:
  - InjectNode: Generate messages with configurable payloads
  - FunctionNode: Execute custom Python code on messages
  - DebugNode: Display messages in the debug panel
  - ChangeNode: Modify message properties
  - SwitchNode: Route messages based on conditions
  - DelayNode: Delay message delivery
  - GateNode: Control message flow with real-time toggle
  - RateProbeNode: Monitor message throughput
- Vision Nodes: Camera input, YOLO detection, image processing
- REST API: Complete API for programmatic workflow management
- Export/Import: Save and load workflows as JSON
- Dynamic Properties: Node properties and UI components defined in node classes
The setup scripts will create a virtual environment, detect CUDA if available, install PyTorch with appropriate GPU support, and install all dependencies.
Windows:
```bat
# Use Python from PATH
setup.bat

# Or specify Python path
setup.bat "C:\Python312\python.exe"
```

Linux/Mac:

```bash
chmod +x setup.sh
./setup.sh
```

The scripts will:
- Create a virtual environment in `appenv/`
- Detect CUDA version and install matching PyTorch build
- Install CPU-only PyTorch if CUDA is not detected
- Install all required dependencies
- Optionally install node-specific dependencies
Activate the environment:
- Windows: `appenv\Scripts\activate.bat`
- Linux/Mac: `source appenv/bin/activate`
If you prefer manual installation or have specific requirements:
```bash
pip install -e .
```

Run the application:

```bash
pynode
# or
python -m pynode
```

Navigate to http://localhost:5000
PyNode can be run in a Docker container with GPU support (CUDA 12.6).
For mDNS service discovery to work correctly inside Docker, you need to set the HOST_IP environment variable to your host machine's IP address:
```bash
# Set the host IP address
export HOST_IP=$(hostname -I | awk '{print $1}')

# Start the container
docker compose up -d
```

The container will:
- Use NVIDIA CUDA 12.6 runtime (requires nvidia-docker)
- Install PyTorch with CUDA 12.6 support
- Install all dependencies including node-specific packages
- Expose port 5000 for web interface
- Support mDNS broadcasting with the correct host IP
Why set HOST_IP? When using the mDNS Broadcast Node inside Docker, it needs to advertise the host machine's IP address (not the container's internal IP like 172.x.x.x) so other devices on your network can discover and connect to the service.
Access the application:
- Web UI: http://localhost:5000
- From other devices: http://<your-host-ip>:5000
GPU Access: The Docker setup requires NVIDIA Container Toolkit to be installed on the host system.
For more details, see DOCKER.md.
PyNode is designed to be easily extended with custom nodes:
- Creating Custom Nodes - Complete guide to creating your own nodes
- UI Components Guide - Add interactive controls to your nodes
- Extensibility Overview - Architecture and design principles
```
pynode/ # Project root
├── pynode/ # Main package
│ ├── __init__.py
│ ├── __main__.py # Entry point for 'python -m pynode'
│ ├── _version.py # Version information
│ ├── main.py # CLI application
│ ├── server.py # Flask REST API with SSE support
│ ├── workflow_engine.py # Workflow management
│ ├── models/ # ML model storage
│ ├── nodes/ # Node implementations (each in its own folder)
│ │ ├── __init__.py
│ │ ├── base_node.py # BaseNode class
│ │ ├── InjectNode/ # Generate messages
│ │ ├── FunctionNode/ # Custom Python code
│ │ ├── DebugNode/ # Debug output
│ │ ├── ChangeNode/ # Modify messages
│ │ ├── SwitchNode/ # Route based on conditions
│ │ ├── DelayNode/ # Delay messages
│ │ ├── GateNode/ # Control message flow
│ │ ├── RateProbeNode/ # Monitor throughput
│ │ ├── QueueLengthProbeNode/ # Monitor queue lengths
│ │ ├── CounterNode/ # Count messages
│ │ ├── CameraNode/ # Camera input
│ │ ├── UltralyticsNode/ # YOLO detection
│ │ ├── ImageViewerNode/ # Display images
│ │ ├── ImageWriterNode/ # Save images
│ │ ├── TrackerNode/ # Object tracking
│ │ ├── MQTTNode/ # MQTT communication
│ │ ├── MessageWriterNode/ # Save message data
│ │ ├── VideoWriterNode/ # Save video
│ │ ├── OpenCV/ # OpenCV operations
│ │ └── ... # 30+ other node types
│ └── static/ # Web UI
│ ├── index.html
│ ├── style.css
│ ├── js/ # JavaScript modules
│ │ ├── nodes.js
│ │ ├── events.js
│ │ ├── connections.js
│ │ ├── debug.js
│ │ └── ...
│ └── images/ # UI assets
├── examples/ # Example workflows and tutorials
│ ├── camera_workflow.py
│ ├── camera_yolo_workflow.py
│ └── README.md
├── readmes/ # Extended documentation
├── models/ # ML models (YOLO, etc.)
├── docs/ # Documentation files
│ ├── CUSTOM_NODES.md # Guide to creating custom nodes
│ ├── UI_COMPONENTS.md # Guide to node UI components
│ └── EXTENSIBILITY.md # Extensibility overview
├── _backup/ # Workflow backups
├── setup.py # Package installation
├── setup.bat / setup.sh # Setup scripts
├── requirements.txt # Core dependencies
├── pyproject.toml # Build configuration
├── INSTALL.md # Installation guide
├── DOCKER.md # Docker setup
├── docker-compose.yml # Docker compose config
├── Dockerfile # Docker build (CUDA)
├── Dockerfile.cpu # Docker build (CPU only)
├── README.md
└── workflow.json # Current workflow
```
PyNode is fully extensible! All node information (visual properties, property schemas, behavior) is contained within the node class itself. The main application has no hardcoded knowledge of specific node types.
For a complete guide, see docs/CUSTOM_NODES.md
Here's a simple example:
```python
from base_node import BaseNode


class MyCustomNode(BaseNode):
    """Example custom node."""

    # Visual properties
    category = 'custom'
    color = '#FFA07A'
    border_color = '#FF7F50'
    text_color = '#000000'

    # Property schema (appears in UI)
    properties = [
        {
            'name': 'multiplier',
            'label': 'Multiplier',
            'type': 'text'
        }
    ]

    def __init__(self, node_id=None, name="custom"):
        super().__init__(node_id, name)
        self.configure({
            'multiplier': 2
        })

    def on_input(self, msg, input_index=0):
        # Process the incoming message
        payload = msg['payload']
        multiplier = float(self.config.get('multiplier', 2))

        # Modify the payload
        new_payload = payload * multiplier

        # Create and send new message
        new_msg = self.create_message(
            payload=new_payload,
            topic=msg.get('topic', '')
        )
        self.send(new_msg)


# Register your node in app.py
from nodes import MyCustomNode
engine.register_node_type(MyCustomNode)
```

Messages follow the Node-RED format:
```python
{
    'payload': 'any data type',   # The main message content
    'topic': 'string',            # Optional topic/category
    '_msgid': 'unique-id',        # Auto-generated message ID
    # ... any additional properties
}
```

The REST API provides the following endpoints.

Node endpoints:
- `GET /api/nodes` - List all nodes
- `POST /api/nodes` - Create a node
- `GET /api/nodes/<id>` - Get node details
- `PUT /api/nodes/<id>` - Update node
- `DELETE /api/nodes/<id>` - Delete node
- `POST /api/nodes/<id>/<action>` - Trigger node action (e.g., inject, start_broadcast, etc.)

Connection endpoints:
- `POST /api/connections` - Create connection
- `DELETE /api/connections` - Delete connection

Workflow endpoints:
- `GET /api/workflow` - Export workflow
- `POST /api/workflow` - Import workflow
- `POST /api/workflow/start` - Start workflow
- `POST /api/workflow/stop` - Stop workflow
- `GET /api/workflow/stats` - Get statistics

Debug endpoints:
- `GET /api/nodes/<id>/debug` - Get debug messages
- `DELETE /api/nodes/<id>/debug` - Clear debug messages
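For illustration, here is a minimal sketch of driving these endpoints from Python with the `requests` library. It assumes the server is running on `http://localhost:5000`, that the endpoints return JSON, and the node IDs below are placeholders:

```python
import requests

BASE = "http://localhost:5000"  # assumed default server address

# List all registered nodes
print(requests.get(f"{BASE}/api/nodes").json())

# Start the workflow
requests.post(f"{BASE}/api/workflow/start")

# Trigger an inject node ('inject' is one of the node actions listed above)
inject_id = "replace-with-your-inject-node-id"  # placeholder
requests.post(f"{BASE}/api/nodes/{inject_id}/inject")

# Read and then clear a debug node's messages
debug_id = "replace-with-your-debug-node-id"  # placeholder
print(requests.get(f"{BASE}/api/nodes/{debug_id}/debug").json())
requests.delete(f"{BASE}/api/nodes/{debug_id}/debug")
```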
Workflows can also be created programmatically from Python:

```python
from pynode.workflow_engine import WorkflowEngine
from pynode.nodes import InjectNode, FunctionNode, DebugNode
# Create and configure engine
engine = WorkflowEngine()
engine.register_node_type(InjectNode)
engine.register_node_type(FunctionNode)
engine.register_node_type(DebugNode)
# Create nodes
inject = engine.create_node('InjectNode', name='source')
inject.configure({'payload': 10, 'payloadType': 'num'})
func = engine.create_node('FunctionNode', name='multiply')
func.configure({'func': 'msg["payload"] = msg["payload"] * 2\nreturn msg'})
debug = engine.create_node('DebugNode', name='output')
# Connect nodes: inject -> function -> debug
engine.connect_nodes(inject.id, func.id)
engine.connect_nodes(func.id, debug.id)
# Start and trigger
engine.start()
engine.trigger_inject_node(inject.id)
# Check debug output
messages = engine.get_debug_messages(debug.id)
print(messages)  # Should show payload=20
```

- Add Nodes: Drag nodes from the palette onto the canvas
- Connect Nodes: Click on an output port (right side) and drag to an input port (left side)
- Configure Nodes: Click a node to show its properties panel
- Test Workflow:
  - Click "Start" to activate the workflow
  - Use "Inject" button on inject nodes to send messages
  - View output in the debug panel at the bottom
- Save/Load: Use Export/Import buttons to save workflows
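The same save/load step can also be scripted. A minimal sketch against the workflow endpoints above, assuming the server runs at `http://localhost:5000` and returns the workflow as JSON (the file name is arbitrary):

```python
import json
import requests

BASE = "http://localhost:5000"  # assumed default server address

# Export the current workflow to a local JSON file
workflow = requests.get(f"{BASE}/api/workflow").json()
with open("my_workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)

# Later, import it back
with open("my_workflow.json") as f:
    requests.post(f"{BASE}/api/workflow", json=json.load(f))
```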
- Create a new Python class in `pynode/nodes/`
- Inherit from `BaseNode`
- Override `on_input()` for message processing
- Define `properties` for UI configuration
- Create `requirements.txt` in the node's directory if needed (see the layout sketch after these steps)
- Reload the server to automatically detect new nodes
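As a rough illustration of that layout (the file names are hypothetical; only the per-node folder and the optional requirements.txt come from the steps above), a third-party node might look like:

```
pynode/nodes/
└── MyCustomNode/
    ├── __init__.py        # makes the folder importable / exposes the node class
    ├── my_custom_node.py  # the MyCustomNode class from the example above
    └── requirements.txt   # optional node-specific dependencies
```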
Nodes can:
- Modify message payload
- Add/remove message properties
- Send to multiple outputs
- Send multiple messages
- Filter messages
- Store state between messages
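As a concrete sketch of two of these patterns (filtering and keeping state between messages), the node below forwards only every Nth message. It uses only the BaseNode API from the example above; the node and property names are made up for illustration:

```python
from base_node import BaseNode


class EveryNthNode(BaseNode):
    """Forward only every Nth message and drop the rest (illustrative sketch)."""

    category = 'custom'
    properties = [
        {'name': 'every_nth', 'label': 'Forward every Nth', 'type': 'text'}
    ]

    def __init__(self, node_id=None, name="every_nth"):
        super().__init__(node_id, name)
        self.configure({'every_nth': 5})
        self._count = 0  # state kept between messages

    def on_input(self, msg, input_index=0):
        self._count += 1
        n = int(self.config.get('every_nth', 5))
        if self._count % n == 0:
            # Forward the message unchanged; all other messages are filtered out
            self.send(msg)
```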
- Background Processing: Use threading for long-running operations
- External APIs: Make HTTP requests from function nodes
- Database Integration: Store/retrieve data from databases
- File I/O: Read/write files in custom nodes
- Scheduling: Implement timed node execution
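For the background-processing pattern, a hedged sketch (again reusing only the BaseNode API shown earlier; the two-second sleep stands in for real work such as an external API call) is to hand each message to a daemon thread so the workflow is not blocked:

```python
import threading
import time

from base_node import BaseNode


class BackgroundWorkNode(BaseNode):
    """Run a slow operation per message in a background thread (illustrative sketch)."""

    category = 'custom'

    def on_input(self, msg, input_index=0):
        # Start a daemon thread so the long-running task does not block the workflow
        threading.Thread(target=self._work, args=(msg,), daemon=True).start()

    def _work(self, msg):
        time.sleep(2)  # stand-in for a slow external call or computation
        result = self.create_message(
            payload={'original': msg.get('payload'), 'status': 'done'},
            topic=msg.get('topic', '')
        )
        self.send(result)
```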
- Remove some nodes from standard set
- Create a new repo for extra nodes
- Model repository management
- Centralize more strings / constants
- Test all nodes
- Add multiple workspaces / canvases
- OCR (PaddlePaddle) Node
- Qwen VLM Node
- SAM3 Node
- REST Endpoint Node
- Webhook Node
- UDP/TCP Node
- Bird Seed level monitor
- Capture data and send to Roboflow / Geti
- Track objects time in zone
- Live VLMs
- ANPR (Detect, Crop, OCR, MQTT)
- YOLO: Add custom model support
- YOLO: Add custom target HW string
- Roboflow: rfdetr
- Roboflow: upload images
- DeepSort: Add option to use different feature extractor model
MIT License - Feel free to use and modify!
Contributions welcome! Add new node types, improve the UI, or enhance the engine.