A multi-layered defensive architecture designed to contain, degrade, disrupt, and commandeer autonomous offensive AI agents.
Built by DaxxSec & Claude (Anthropic)
TUI Dashboard — Live Event Log
Web Dashboard — Real-Time Monitoring
Autonomous AI agents are being deployed for offensive cyber operations — automated recon, exploitation, and lateral movement at machine speed. But AI agents have cognitive dependencies that humans don't — and almost nobody is building defenses that target those dependencies.
LABYRINTH changes that.
You need Docker (or a compatible runtime) and optionally Go 1.22+ to build from source.
macOS users: We recommend OrbStack over Docker Desktop. It's significantly faster, uses less memory, and is a drop-in replacement — all `docker` and `docker compose` commands work identically.

```shell
brew install orbstack
```
```shell
# Clone, build, and install (installs Go if needed)
git clone https://github.com/DaxxSec/labyrinth.git
cd labyrinth && ./scripts/install.sh

# Run the smoke test to verify everything works
./scripts/smoke-test.sh

# Deploy a test environment
labyrinth deploy -t

# Launch the TUI monitoring dashboard
labyrinth tui
```

Or grab a pre-built binary and run `./labyrinth --install`.
```shell
labyrinth bait drop       # Plant randomized bait for agents to discover
labyrinth attacker setup  # Pick and configure an attacker agent
```

The deploy command prompts you to drop bait automatically. Bait generates a unique randomized identity (company, users, passwords) and plants discoverable credentials on the portal trap services — giving attacker agents breadcrumbs to find their way in. Every deployment gets a unique identity with no static signatures, preventing fingerprinting by scanners or adversarial AI.
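The identity randomization can be pictured with a minimal sketch. The field names, helper function, and file location below are illustrative assumptions for this README, not LABYRINTH's actual bait schema:

```shell
#!/bin/sh
# Hypothetical sketch of per-deployment bait identity generation.
# Field names and paths are assumptions for illustration only.
set -eu

# Draw N random bytes and render them as lowercase hex
rand_hex() { head -c "$1" /dev/urandom | od -An -tx1 | tr -d ' \n'; }

COMPANY="acme-$(rand_hex 3)"   # e.g. acme-a1b2c3, unique per deployment
BAIT_USER="svc_$(rand_hex 2)"  # randomized service account name
BAIT_PASS="$(rand_hex 12)"     # throwaway credential

# Plant a discoverable credential file for the portal trap to serve
BAIT_DIR="${TMPDIR:-/tmp}/labyrinth-bait"
mkdir -p "$BAIT_DIR"
printf 'company=%s\nuser=%s\npass=%s\n' "$COMPANY" "$BAIT_USER" "$BAIT_PASS" \
  > "$BAIT_DIR/credentials.txt"
```

Because every value is drawn from `/dev/urandom`, no two deployments share a signature, which is the property the anti-fingerprinting feature relies on.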
Choose from PentAGI, PentestAgent, Strix, or bring your own tool. All agents run in isolated Docker containers on the LABYRINTH network — nothing touches your host.
```shell
labyrinth attacker list        # See all agents and their status
labyrinth attacker run pentagi # Quick-launch an agent
labyrinth attacker stop --all  # Stop all agents when done
labyrinth bait clean           # Remove bait when done
```

```
Agent connects to portal trap service
└─→ Container spawned (Layer 1 — THRESHOLD)
    └─→ Agent explores contradictory environment (Layer 2 — MINOTAUR)
        └─→ Terminal encoding corrupted (Layer 3 — BLINDFOLD)
            └─→ System prompt intercepted & rewritten (Layer 4 — PUPPETEER)

All activity captured → Dashboard at http://localhost:9000
```
LABYRINTH implements a reverse kill chain — the deeper an agent penetrates, the more compromised it becomes.
| Layer | Codename | Action | What It Does |
|---|---|---|---|
| L0 | BEDROCK | HARDEN | AES-256 forensics, VLAN isolation, retention policy |
| L1 | THRESHOLD | CONTAIN | Portal trap routes connections into isolated containers |
| L2 | MINOTAUR | DEGRADE | Contradictory environments erode the agent's world model |
| L3 | BLINDFOLD | DISRUPT | Encoding corruption blinds the agent's I/O parsing |
| L4 | PUPPETEER | CONTROL | MITM intercept captures and rewrites the agent's instructions |
Depth of penetration = Depth of compromise
See Layer Architecture for the full technical breakdown.
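To give a flavor of what Layer 3-style encoding corruption does to an agent's I/O parsing, here is a toy sketch. It uses ROT13 as a stand-in and is not LABYRINTH's actual payload:

```shell
#!/bin/sh
# Toy illustration of BLINDFOLD-style output corruption (NOT the real payload):
# once bytes are remapped, the structured strings an agent greps for,
# such as paths, usernames, and JSON keys, no longer match anything it expects.
corrupt() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }

printf '/etc/passwd\n' | corrupt   # an agent looking for "passwd" sees "cnffjq"
```

The real layer goes further (corrupting terminal state, locale, and recovery paths), but even this trivial remap breaks pattern-matching-driven tooling.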
| Document | Description |
|---|---|
| Getting Started | Installation, first deployment, prerequisites |
| CLI Reference | All commands, TUI keybindings, registry format |
| Testing Guide | Setting up attacker agents, safety, monitoring |
| Layer Architecture | L0-L4 technical details and configuration |
| Captured Output | Session reports, JSONL schema, JSON export, API |
| Architecture Spec | Condensed architecture reference |
| Threat Model | AI agent cognitive dependencies and countermeasures |
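The Captured Output document defines the real JSONL schema; as a hedged sketch, events of roughly this shape can be filtered with standard tools. The file path and field names (`ts`, `layer`, `event`) below are assumptions, not the documented schema:

```shell
#!/bin/sh
# Sketch: filtering a JSONL capture stream by layer.
# Field names (ts, layer, event) are illustrative, not LABYRINTH's schema.
set -eu
EVENTS="${TMPDIR:-/tmp}/labyrinth-events.jsonl"

cat > "$EVENTS" <<'EOF'
{"ts":"2025-01-01T00:00:00Z","layer":"L1","event":"container_spawned"}
{"ts":"2025-01-01T00:00:05Z","layer":"L2","event":"contradiction_served"}
{"ts":"2025-01-01T00:00:09Z","layer":"L4","event":"prompt_intercepted"}
EOF

# Keep only Layer 4 (PUPPETEER) interceptions
grep '"layer":"L4"' "$EVENTS"
```

One-line-per-event JSONL keeps captures greppable and easy to push to a SIEM later.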
- Architecture specification (v0.2)
- Layer 0 operational security framework
- One-click test deployment (`labyrinth deploy -t`)
- Go CLI binary with full environment lifecycle
- TUI monitoring dashboard (5 tabs)
- Real-time web capture dashboard
- Attacker agent CLI (`labyrinth attacker` — setup, run, stop, uninstall)
- JSONL forensic event capture & export
- Build-from-source installer (`install.sh`)
- Structured documentation (docs/)
- Anti-fingerprinting (randomized identities per deployment — no static signatures)
- Layer 1 container orchestration (PAM hooks, bait watchers, HTTP portal trap)
- Layer 2 contradiction seeding engine (13 contradictions, adaptive density)
- Layer 3 encoding corruption payloads (enhanced: urandom, TERM, recovery traps)
- Layer 4 MITM proxy interception (5 AI APIs, 3 swap modes)
- Orchestrator event loop (watchdog, Docker SDK, session lifecycle)
- End-to-end smoke test (full kill chain verification)
- Auto CA cert injection on container spawn
- L0 BEDROCK runtime enforcement (VLAN validation, forensic encryption)
- SIEM integration (event push to external endpoints)
- Forensic data retention policy (automated cleanup/aging)
- Production deployment guide (Docker, K8s, Edge)
We welcome contributions from the defensive security community.
- Fork the repository
- Create a feature branch (`git checkout -b feature/your-feature`)
- Commit changes (`git commit -m 'Add your feature'`)
- Push to branch (`git push origin feature/your-feature`)
- Open a Pull Request
Important
LABYRINTH does not phone home. All forensic data — captured credentials, session logs, HTTP access events — is stored locally on your machine in Docker volumes and ~/.labyrinth/. Nothing is transmitted to any remote server, cloud service, or third party. There is no telemetry, no analytics, no remote collection of any kind. You own your data, period.
Note
This project is intended for defensive security research only. The techniques described are designed to be deployed within controlled portal trap environments that the operator owns and controls. Always ensure compliance with applicable laws and organizational policies.
AGPL-3.0 License — see LICENSE for details.
This means you can freely use, modify, and distribute LABYRINTH, but if you deploy a modified version as a network service, you must release your source code under the same license. This discourages closed-source commercial forks while keeping the project fully open source.
LABYRINTH integrates with the following open-source offensive AI projects for testing. We thank their authors and communities:
| Project | Repository | Description |
|---|---|---|
| PentAGI | vxcontrol/pentagi | Fully autonomous multi-agent penetration testing system with web UI |
| PentestAgent | GH05TCREW/PentestAgent | AI-powered pentesting framework with TUI, Agent & Crew modes |
| Strix | UseStrix/strix | AI hacker agents with CLI/TUI and Docker sandbox isolation |
| Kali Linux | kali.org | Industry-standard penetration testing distribution (Docker images) |
Built by DaxxSec & Claude (Anthropic)
Defending against the future, today.