Train AI to create art autonomously
An MCP server that integrates Cables.gl visual programming with Reinforcement Learning for autonomous visual parameter optimization and generative art exploration.
Features • Installation • Quick Start • Documentation • Examples
Cables-RL connects Claude Code with Cables.gl (a powerful visual shader system) and Reinforcement Learning, allowing AI agents to:
- 🎯 Discover aesthetically interesting shader parameters automatically
- 📈 Optimize visual outputs using reward-based learning
- 🖼️ Generate unique generative art by exploring parameter spaces
- 🎓 Learn what looks "good" through aesthetic reward functions
Think of it as an autonomous visual artist that explores the creative possibilities of your Cables.gl patches.
| Feature | Description |
|---|---|
| 🎮 Real-time Control | Modify Cables.gl parameters via browser automation |
| 🧠 RL Integration | Support for the PPO, SAC, and TD3 algorithms |
| 🎨 Aesthetic Rewards | Multi-component reward function (entropy, color harmony, motion coherence) |
| 🔍 Novelty Exploration | Bonus rewards for discovering new visual territories |
| 💾 Session Persistence | Save and resume exploration sessions |
| 📸 Auto-capture | Automatically save high-reward frames |
| ⚡ High Performance | <0.01 ms reward calculation, <0.2 ms action generation |
```
┌──────────────────────────────────────────────────────────────┐
│                    Claude Code (Client)                      │
│                                                              │
│   Your prompts ←→ cables-rl MCP ←→ Playwright MCP            │
└──────────────────────────────────────────────────────────────┘
                             │
              ┌──────────────┼──────────────┐
              ▼              ▼              ▼
       ┌────────────┐ ┌────────────┐ ┌────────────┐
       │     RL     │ │  Cables.gl │ │   Reward   │
       │   Agent    │ │   Browser  │ │   System   │
       │ (PPO/SAC)  │ │   Control  │ │ (Aesthetic)│
       └────────────┘ └────────────┘ └────────────┘
```
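The Node.js server talks to the Python RL process through a bridge (`python-bridge.ts` / `agent_wrapper.py`). As an illustration only — the method names and message shape below are hypothetical, not the project's actual protocol — a common pattern for such bridges is newline-delimited JSON over stdin/stdout:

```python
import json
import sys

def handle_request(request: dict) -> dict:
    """Dispatch one bridge message. Method names here are hypothetical."""
    method = request.get("method")
    if method == "ping":
        return {"id": request.get("id"), "result": "pong"}
    if method == "get_action":
        # A real agent would run a policy forward pass here; this stub
        # returns a zero action of the requested dimensionality.
        return {"id": request.get("id"), "result": [0.0] * request.get("action_dim", 2)}
    return {"id": request.get("id"), "error": f"unknown method: {method}"}

def main() -> None:
    # Read newline-delimited JSON requests from stdin, write responses to stdout.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        response = handle_request(json.loads(line))
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

One process per agent keeps the TypeScript side simple: it writes a request line and awaits exactly one response line.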
Prerequisites:

- Node.js 18+
- Python 3.8+
- Claude Code CLI
```bash
# Clone the repository
git clone https://github.com/adkr1989/cables-rl-mcp.git
cd cables-rl-mcp

# Install Node.js dependencies
npm install

# Install Python dependencies
pip install numpy

# Build the project
npm run build
```

Add to your Claude Code MCP settings (`~/.claude.json` or project settings):
```json
{
  "mcpServers": {
    "cables-rl": {
      "type": "stdio",
      "command": "node",
      "args": ["/path/to/cables-rl-mcp/dist/index.js"],
      "env": {
        "PYTHON_PATH": "python",
        "OUTPUT_DIR": "/path/to/cables-rl-mcp/output"
      }
    },
    "playwright": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```

In Claude Code, ask:
> Load the Anomaly patch from Cables.gl and show me what parameters are available

> Change the blur amount to 15 and the chromatic aberration to 8, then take a screenshot

> Initialize a SAC agent to optimize the Timer speed (0-5) and Blur amount (0-20) for aesthetic visuals

> Run autonomous exploration for 5 minutes, saving any interesting frames
| Tool | Description |
|---|---|
| `cables_load_patch` | Load a Cables.gl patch in the browser |
| `cables_set_parameter` | Set a parameter value in real time |
| `cables_get_parameters` | Get all available parameters |
| `cables_batch_set_parameters` | Set multiple parameters efficiently |
| `cables_get_frame_metrics` | Capture a frame and compute visual metrics |
| Tool | Description |
|---|---|
| `rl_init_agent` | Initialize an RL agent (PPO/SAC/TD3) |
| `rl_get_action` | Get the next action from the agent |
| `rl_update_reward` | Send a reward signal to the agent |
| `rl_training_step` | Execute one or more training steps |
| `rl_save_checkpoint` | Save the agent state |
| `rl_load_checkpoint` | Load the agent state |
| Tool | Description |
|---|---|
| `rl_autonomous_explore` | Run autonomous visual exploration |
| `rl_stop_exploration` | Stop a running exploration |
| `rl_get_exploration_status` | Get the current exploration status |
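At its core, autonomous exploration is a loop of "propose parameters, apply them, score the frame, keep the best." The sketch below is illustrative only: `compute_reward` is a stand-in for the real frame-scoring pipeline, and the random-search agent stands in for the actual PPO/SAC/TD3 policy.

```python
import random

def compute_reward(params: dict) -> float:
    """Stand-in reward: the real system scores a captured frame instead."""
    return sum(params.values()) / (len(params) or 1)

def explore(param_ranges: dict, steps: int, seed: int = 0) -> tuple[float, dict]:
    """Random-search sketch of the exploration loop: act, score, keep the best."""
    rng = random.Random(seed)
    best_reward, best_params = float("-inf"), {}
    for _ in range(steps):
        # 1. Propose an action (here: uniform random within each parameter range;
        #    a trained agent would sample from its policy instead).
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
        # 2. Apply the parameters to the patch and score the resulting frame.
        reward = compute_reward(params)
        # 3. Track the best-scoring parameters (the server also auto-captures
        #    high-reward frames to OUTPUT_DIR).
        if reward > best_reward:
            best_reward, best_params = reward, params
    return best_reward, best_params

best_reward, best_params = explore({"timer_speed": (0, 5), "blur": (0, 20)}, steps=50)
```

The real loop additionally feeds each reward back to the agent (`rl_update_reward`) so the policy improves over time rather than searching blindly.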
The aesthetic reward function combines multiple components:
```python
reward = (
    0.3 * entropy_score       # Visual complexity
    + 0.2 * color_harmony     # Color distribution
    + 0.2 * motion_coherence  # Smooth motion
    + 0.2 * novelty_bonus     # Exploration bonus
    - 0.1 * fps_penalty       # Performance penalty
)
```

| Component | Description |
|---|---|
| Entropy Score | Rewards balanced visual complexity (not too simple, not noise) |
| Color Harmony | Rewards balanced color distribution |
| Motion Coherence | Rewards smooth, intentional motion over jitter |
| Novelty Bonus | Bonus for exploring new parameter regions |
| FPS Penalty | Penalizes parameter combos that hurt performance |
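To make the entropy component concrete, here is a deliberately simplified sketch: score a grayscale frame by the Shannon entropy of its pixel histogram, peaking partway between "flat" and "pure noise." The 4-bit target is an illustrative choice, not the project's actual threshold.

```python
import math
from collections import Counter

def entropy_score(pixels: list[int], target: float = 4.0) -> float:
    """Shannon entropy (bits) of an 8-bit pixel list, peaked near `target` bits.

    Returns 1.0 when entropy equals `target` and falls off linearly toward 0,
    so both flat images (low entropy) and pure noise (high entropy) score low.
    """
    counts = Counter(pixels)
    n = len(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # 8-bit pixels carry at most 8 bits of entropy; penalize distance to target.
    return max(0.0, 1.0 - abs(entropy - target) / target)

flat = entropy_score([128] * 256)          # constant image: 0 bits of entropy
varied = entropy_score(list(range(16)) * 16)  # 16 values, uniform: 4 bits
```

This "balanced complexity" shape is why the table above says the entropy score rewards images that are neither too simple nor pure noise.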
| Metric | Result |
|---|---|
| Reward calculation | 0.007 ms |
| Action generation | 0.14 ms |
| Theoretical training rate | 150+ steps/sec |
| Cables.gl FPS | 145 FPS |
```
cables-rl-mcp/
├── src/
│   ├── index.ts                # MCP server entry point
│   ├── types.ts                # TypeScript definitions
│   ├── tools/
│   │   ├── cables-control.ts   # Cables.gl browser automation
│   │   ├── rl-agent.ts         # RL agent bridge
│   │   ├── reward-system.ts    # Visual metrics & rewards
│   │   └── training-loop.ts    # Autonomous training
│   └── bridge/
│       └── python-bridge.ts    # Python process communication
├── python/
│   ├── agent_wrapper.py        # RL agent wrapper
│   ├── reward_functions.py     # Aesthetic reward calculations
│   └── env_cables.py           # Gym environment
├── package.json
├── tsconfig.json
└── README.md
```
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Cables.gl - Visual programming platform
- Model Context Protocol - MCP specification
- Playwright - Browser automation
Made with ❤️ by A_Development
Empowering AI creativity through visual exploration