# CLI Reference
The labyrinth CLI is a Go binary built with Cobra. It manages the full lifecycle of LABYRINTH environments.
Deploy a new LABYRINTH environment.
```
labyrinth deploy -t [name]            # Test environment (default name: labyrinth-test)
labyrinth deploy -p <name> --docker   # Production Docker Compose
labyrinth deploy -p <name> --k8s      # Production Kubernetes (coming soon)
labyrinth deploy -p <name> --edge     # Production Edge (coming soon)
labyrinth deploy -p                   # List available production types
```

Flags:
| Flag | Description |
|---|---|
| `-t` | Test mode — single Docker Compose stack |
| `-p` | Production mode — requires a name and deployment type |
| `--docker` | Deploy as Docker Compose (production) |
| `--k8s` | Deploy to Kubernetes (coming soon) |
| `--edge` | Deploy as edge node (coming soon) |
What `-t` does:
- Runs preflight checks (Docker daemon, Compose binary, port availability)
- Builds all Docker images
- Starts `docker compose up -d`
- Registers the environment in the local registry
Show status of environments.
```
labyrinth status [name]   # Specific environment (or all if omitted)
```

Displays container states via `docker compose ps`.
List all registered environments.
```
labyrinth list
```

Shows a formatted table with name, type, mode, and creation timestamp.
Launch the interactive TUI monitoring dashboard.
```
labyrinth tui                            # Default
labyrinth tui --notify                   # Enable desktop notifications
labyrinth tui --webhook https://hooks... # Send alerts to Slack/Discord
labyrinth tui --env my-env               # Target specific environment
```

Flags:
| Flag | Description |
|---|---|
| `--notify` | Enable desktop notifications for new sessions/events |
| `--webhook` | Webhook URL for Slack/Discord alert integration |
| `--env` | Target a specific registered environment by name |
See TUI-Dashboard for full details.
Reset an environment — kill session containers and clear forensic data without tearing down infrastructure.
```
labyrinth reset [name]   # Reset specific env (defaults to first registered)
```

What reset does:
- Sends `POST /api/reset` to the orchestrator via the dashboard API
- Force-removes all session containers (`label=layer=session`)
- Clears forensic files: session logs, auth events, HTTP logs
- Infrastructure containers (SSH, HTTP, proxy, orchestrator, dashboard) keep running
Use this when stale sessions or forensic data cause incorrect counts in the TUI/dashboard.
Tear down environments.
```
labyrinth teardown <name>   # Specific environment
labyrinth teardown --all    # All environments
```

What teardown does:
- Stops and removes containers (`docker compose down -v`)
- Removes LABYRINTH-labeled Docker images
- Deletes the environment from the registry
Manage offensive AI agents for testing LABYRINTH environments.
```
labyrinth attacker                     # Show subcommand help
labyrinth attacker list                # Table of all agents: name, description, status
labyrinth attacker setup               # Interactive: pick agent, provide API key, install
labyrinth attacker status              # Detailed status of installed agents
labyrinth attacker run <agent>         # Quick-launch agent against the portal
labyrinth attacker stop <agent>        # Stop a running agent
labyrinth attacker stop --all          # Stop all running agents
labyrinth attacker uninstall <agent>   # Remove agent containers/images/config
labyrinth attacker uninstall --all     # Uninstall all agents
labyrinth attacker prompt              # Print suggested prompt for current bait
```

Agent Catalog:
| ID | Name | Type | Description |
|---|---|---|---|
| `pentagi` | PentAGI | Compose stack | Autonomous multi-agent system with Web UI |
| `pentestagent` | PentestAgent | Single container | AI pentesting framework with TUI |
| `strix` | Strix | Host CLI + sandbox | AI hacker agents for web app testing |
| `kali` | Custom Kali | Single container | Kali Linux with pre-installed pentest tools |
Status values:
| Status | Meaning |
|---|---|
| Available | Agent is in the catalog but not installed |
| Installed | Docker image exists locally or config saved |
| Active | Container(s) currently running |
| Stopped | Installed but not running |
Config storage: Agent configurations are stored in ~/.labyrinth/attackers/<id>/. API keys are never stored — they are read from environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY) or prompted each time.
Example workflow:
```
labyrinth deploy -t                    # Deploy the portal trap first
labyrinth attacker setup               # Pick PentAGI, provide OpenAI key
labyrinth attacker list                # Shows PentAGI as "Active"
labyrinth tui                          # Watch captures in real time
labyrinth attacker stop pentagi        # Done testing
labyrinth attacker uninstall pentagi   # Clean up
```

Plant or remove randomized bait credentials on the portal trap services.
```
labyrinth bait drop    # Generate and plant randomized bait
labyrinth bait clean   # Remove all planted bait
labyrinth bait show    # Display current bait credentials
```

What bait drop does:
- Generates a randomized identity (company name, domain, usernames, passwords, API keys)
- Creates SSH users with the random passwords in the `labyrinth-ssh` container
- Plants web-discoverable files in the `labyrinth-http` container at paths agents commonly scan
- Updates the SSH-side escalation bait file with randomized content
- Saves a manifest to `~/.labyrinth/bait.json` for cleanup
The bait trail:

```
Agent scans HTTP → discovers /robots.txt → finds sensitive paths
→ reads /.env, /backup/credentials.csv → extracts SSH credentials
→ logs into SSH with discovered creds → enters the portal trap
→ finds bait files inside SSH → triggers escalation (L2)
→ L2 contradictions + L3 blindfold activate automatically
```
Web paths planted:

| Path | Content |
|---|---|
| `/robots.txt` | Disallow hints pointing to sensitive paths |
| `/.env` | Randomized environment variables with SSH credentials |
| `/backup/credentials.csv` | Spreadsheet-style credential dump |
| `/backup/ssh-config.txt` | SSH config fragment with credentials in comments |
| `/server-info` | HTML status page with service accounts |
| `/api/internal/config` | JSON service config with SSH connection details |
| `/admin/notes.txt` | Operations notes with plaintext passwords |
Why randomized? Each bait drop generates a unique identity so the portal trap cannot be fingerprinted. No two deployments share the same company name, credentials, API keys, or bait content.
API keys are never stored — the manifest only records usernames and passwords for cleanup. Keys and secrets are generated fresh each time.
Generate a forensic attack report from captured session data.
```
labyrinth report                           # Latest session (terminal output)
labyrinth report <session-id>              # Specific session
labyrinth report --all                     # All sessions
labyrinth report --format md -o report.md  # Export Markdown with Mermaid attack graph
labyrinth report --format json             # JSON to stdout
```

Flags:
| Flag | Default | Description |
|---|---|---|
| `--format` | `terminal` | Output format: `terminal`, `md`, `json` |
| `-o` / `--output` | — | Write output to file instead of stdout |
| `--all` | `false` | Generate reports for all sessions |
Report contents:
| Section | Description |
|---|---|
| Executive Summary | Duration, attacker type, layers reached, confusion score, risk level |
| Attack Timeline | Chronological events with MITRE ATT&CK tactic and technique mapping |
| Credentials | Planted vs captured credentials, bait hit rate, novel attempts |
| Services Explored | Phantom service engagement (connections, auth, queries, samples) |
| Tools Analysis | User agent, SDK fingerprint, API keys (masked), model usage, command inventory |
| Attack Graph | Mermaid flowchart showing attack progression with layer-colored nodes |
| Effectiveness Assessment | What worked, what didn't, intelligence captured |
Data sources: Tries the dashboard API first (http://localhost:9000), falls back to reading JSONL files directly from /var/labyrinth/forensics/.
The Markdown format produces a GitHub/VS Code-renderable document with a mermaid fenced block for the attack graph.
Run diagnostic health checks on the LABYRINTH environment.
```
labyrinth doctor
```

Runs 12+ checks and reports pass/fail with remediation tips:
| Check | What It Verifies |
|---|---|
| Docker daemon | Docker is running and accessible |
| Docker Compose | Compose binary installed |
| Core containers | SSH, HTTP, orchestrator, proxy, dashboard running |
| Port bindings | Ports 22, 8080, 9000 are mapped correctly |
| Dashboard API | `/api/health` responds |
| Proxy entrypoint | `start.sh` running (not bare mitmdump) |
| Phantom services | PostgreSQL, Redis, ES, Consul, Jenkins, SSH relay listening |
| Identity config | `config.json` loaded in proxy container |
| MITM interceptor | Interceptor addon active |
| Bait planted | Bait credentials exist |
| Bait sync | Credentials consistent across SSH and HTTP containers |
| L4 services API | `/api/l4/services` responds |
Stream real-time color-coded logs from LABYRINTH containers.
```
labyrinth logs              # All services
labyrinth logs ssh http     # Specific services only
labyrinth logs --tail 50    # Show last 50 lines before following
```

Flags:

| Flag | Default | Description |
|---|---|---|
| `--tail` | `20` | Number of historical lines to show before streaming |
Services: ssh, http, orchestrator, proxy, dashboard. Each service gets a distinct color in the merged log stream.
Also available as labyrinth logwatch (alias).
Generate or install shell autocompletion.
```
labyrinth completion bash      # Print bash completion script
labyrinth completion zsh       # Print zsh completion script
labyrinth completion fish      # Print fish completion script
labyrinth completion install   # Auto-install for current shell
```

Install the binary to `~/.local/bin`.

```
labyrinth --install
```

Copies the binary and validates the PATH.
Launch an aggregated dashboard across all registered environments.
```
labyrinth multi-dashboard              # Default port 9999
labyrinth multi-dashboard --port 8888  # Custom port
```

Exposes `/api/environments`, `/api/aggregate/stats`, and `/api/aggregate/sessions` endpoints that merge data from all registered environments.
Environments are tracked as JSON files in ~/.labyrinth/environments/:
```json
{
  "name": "labyrinth-test",
  "type": "test",
  "mode": "docker-compose",
  "created": "2026-02-26T10:00:00Z",
  "compose_project": "labyrinth-labyrinth-test"
}
```

Both the Go CLI and `deploy.sh` can read/write the same registry files.
```
cd cli
go build -o labyrinth .                    # Dev build
go build -ldflags="-s -w" -o labyrinth .   # Optimized build (~7-8MB)
```

```
go test ./...                    # All tests (39 tests)
go test -race ./...              # With race detector
go test ./internal/registry/...  # Single package
go test ./test/... -v            # Integration tests (verbose)
go test -cover ./...             # Coverage report
```

Getting Started
Architecture
Layers
Operations