
CLI Reference

Stephen edited this page Mar 5, 2026 · 5 revisions


The labyrinth CLI is a Go binary built with Cobra. It manages the full lifecycle of LABYRINTH environments.


Commands

deploy

Deploy a new LABYRINTH environment.

labyrinth deploy -t [name]              # Test environment (default name: labyrinth-test)
labyrinth deploy -p <name> --docker     # Production Docker Compose
labyrinth deploy -p <name> --k8s        # Production Kubernetes (coming soon)
labyrinth deploy -p <name> --edge       # Production Edge (coming soon)
labyrinth deploy -p                     # List available production types

Flags:

| Flag | Description |
|------|-------------|
| `-t` | Test mode — single Docker Compose stack |
| `-p` | Production mode — requires a name and deployment type |
| `--docker` | Deploy as Docker Compose (production) |
| `--k8s` | Deploy to Kubernetes (coming soon) |
| `--edge` | Deploy as edge node (coming soon) |

What -t does:

  1. Runs preflight checks (Docker daemon, Compose binary, port availability)
  2. Builds all Docker images
  3. Starts docker compose up -d
  4. Registers the environment in the local registry
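The port-availability part of the preflight step can be sketched in Go. This is a minimal illustration, not the CLI's actual implementation; `portFree` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net"
)

// portFree reports whether a TCP port on localhost can be bound,
// which is roughly what a preflight port-availability check needs.
func portFree(port int) bool {
	l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
	if err != nil {
		return false
	}
	l.Close()
	return true
}

func main() {
	// 22, 8080, and 9000 are the ports the stack expects to map.
	for _, p := range []int{22, 8080, 9000} {
		fmt.Printf("port %d free: %v\n", p, portFree(p))
	}
}
```

Binding and immediately closing the listener is a common way to test availability without holding the port.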

status

Show status of environments.

labyrinth status [name]                 # Specific environment (or all if omitted)

Displays container states via docker compose ps.


list

List all registered environments.

labyrinth list

Shows a formatted table with name, type, mode, and creation timestamp.


tui

Launch the interactive TUI monitoring dashboard.

labyrinth tui                              # Default
labyrinth tui --notify                     # Enable desktop notifications
labyrinth tui --webhook https://hooks...   # Send alerts to Slack/Discord
labyrinth tui --env my-env                 # Target specific environment

Flags:

| Flag | Description |
|------|-------------|
| `--notify` | Enable desktop notifications for new sessions/events |
| `--webhook` | Webhook URL for Slack/Discord alert integration |
| `--env` | Target a specific registered environment by name |

See TUI-Dashboard for full details.


reset

Reset an environment — kill session containers and clear forensic data without tearing down infrastructure.

labyrinth reset [name]                  # Reset specific env (defaults to first registered)

What reset does:

  1. Sends POST /api/reset to the orchestrator via the dashboard API
  2. Force-removes all session containers (label=layer=session)
  3. Clears forensic files: session logs, auth events, HTTP logs
  4. Infrastructure containers (SSH, HTTP, proxy, orchestrator, dashboard) keep running

Use this when stale sessions or forensic data cause incorrect counts in the TUI/dashboard.


teardown

Tear down environments.

labyrinth teardown <name>               # Specific environment
labyrinth teardown --all                # All environments

What teardown does:

  1. Stops and removes containers (docker compose down -v)
  2. Removes LABYRINTH-labeled Docker images
  3. Deletes the environment from the registry

attacker

Manage offensive AI agents for testing LABYRINTH environments.

labyrinth attacker                      # Show subcommand help
labyrinth attacker list                 # Table of all agents: name, description, status
labyrinth attacker setup                # Interactive: pick agent, provide API key, install
labyrinth attacker status               # Detailed status of installed agents
labyrinth attacker run <agent>          # Quick-launch agent against the portal
labyrinth attacker stop <agent>         # Stop a running agent
labyrinth attacker stop --all           # Stop all running agents
labyrinth attacker uninstall <agent>    # Remove agent containers/images/config
labyrinth attacker uninstall --all      # Uninstall all agents
labyrinth attacker prompt               # Print suggested prompt for current bait

Agent Catalog:

| ID | Name | Type | Description |
|----|------|------|-------------|
| `pentagi` | PentAGI | Compose stack | Autonomous multi-agent system with Web UI |
| `pentestagent` | PentestAgent | Single container | AI pentesting framework with TUI |
| `strix` | Strix | Host CLI + sandbox | AI hacker agents for web app testing |
| `kali` | Custom Kali | Single container | Kali Linux with pre-installed pentest tools |

Status values:

| Status | Meaning |
|--------|---------|
| Available | Agent is in the catalog but not installed |
| Installed | Docker image exists locally or config saved |
| Active | Container(s) currently running |
| Stopped | Installed but not running |

Config storage: Agent configurations are stored in ~/.labyrinth/attackers/<id>/. API keys are never stored — they are read from environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY) or prompted each time.
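The "env var or prompt" lookup can be sketched as a first-non-empty scan over candidate variables. A minimal illustration (the `apiKeyFromEnv` helper is hypothetical, not the CLI's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// apiKeyFromEnv returns the first non-empty value among the given
// environment variables, so keys never need to be written to disk.
func apiKeyFromEnv(vars ...string) (string, bool) {
	for _, v := range vars {
		if val := os.Getenv(v); val != "" {
			return val, true
		}
	}
	return "", false
}

func main() {
	if key, ok := apiKeyFromEnv("OPENAI_API_KEY", "ANTHROPIC_API_KEY"); ok {
		// Never print the key itself; length is enough for a sanity check.
		fmt.Println("using key of length", len(key))
	} else {
		fmt.Println("no key in environment; would prompt interactively")
	}
}
```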

Example workflow:

labyrinth deploy -t                     # Deploy the portal trap first
labyrinth attacker setup                # Pick PentAGI, provide OpenAI key
labyrinth attacker list                 # Shows PentAGI as "Active"
labyrinth tui                           # Watch captures in real time
labyrinth attacker stop pentagi         # Done testing
labyrinth attacker uninstall pentagi    # Clean up

bait

Plant or remove randomized bait credentials on the portal trap services.

labyrinth bait drop                     # Generate and plant randomized bait
labyrinth bait clean                    # Remove all planted bait
labyrinth bait show                     # Display current bait credentials

What bait drop does:

  1. Generates a randomized identity (company name, domain, usernames, passwords, API keys)
  2. Creates SSH users with the random passwords in the labyrinth-ssh container
  3. Plants web-discoverable files in the labyrinth-http container at paths agents commonly scan
  4. Updates the SSH-side escalation bait file with randomized content
  5. Saves a manifest to ~/.labyrinth/bait.json for cleanup
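Step 1 and the manifest in step 5 can be sketched with `crypto/rand`. Both the manifest schema and the field names below are assumptions for illustration, not the actual `bait.json` layout:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// baitManifest approximates the kind of record ~/.labyrinth/bait.json
// might hold for cleanup; the exact schema is an assumption.
type baitManifest struct {
	Company  string   `json:"company"`
	Users    []string `json:"users"`
	Password string   `json:"password"`
}

// randomToken returns n random bytes hex-encoded, so every bait drop
// produces unique, non-fingerprintable content.
func randomToken(n int) string {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	m := baitManifest{
		Company:  "acme-" + randomToken(3),
		Users:    []string{"svc-" + randomToken(2)},
		Password: randomToken(8),
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
```

Using `crypto/rand` rather than `math/rand` matters here: bait values double as secrets, so they should be unpredictable.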

The bait trail:

Agent scans HTTP → discovers /robots.txt → finds sensitive paths
  → reads /.env, /backup/credentials.csv → extracts SSH credentials
    → logs into SSH with discovered creds → enters the portal trap
      → finds bait files inside SSH → triggers escalation (L2)
        → L2 contradictions + L3 blindfold activate automatically

Web paths planted:

| Path | Content |
|------|---------|
| `/robots.txt` | Disallow hints pointing to sensitive paths |
| `/.env` | Randomized environment variables with SSH credentials |
| `/backup/credentials.csv` | Spreadsheet-style credential dump |
| `/backup/ssh-config.txt` | SSH config fragment with credentials in comments |
| `/server-info` | HTML status page with service accounts |
| `/api/internal/config` | JSON service config with SSH connection details |
| `/admin/notes.txt` | Operations notes with plaintext passwords |

Why randomized? Each bait drop generates a unique identity so the portal trap cannot be fingerprinted. No two deployments share the same company name, credentials, API keys, or bait content.

API keys are never stored — the manifest only records usernames and passwords for cleanup. Keys and secrets are generated fresh each time.


report

Generate a forensic attack report from captured session data.

labyrinth report                           # Latest session (terminal output)
labyrinth report <session-id>              # Specific session
labyrinth report --all                     # All sessions
labyrinth report --format md -o report.md  # Export Markdown with Mermaid attack graph
labyrinth report --format json             # JSON to stdout

Flags:

| Flag | Default | Description |
|------|---------|-------------|
| `--format` | `terminal` | Output format: `terminal`, `md`, `json` |
| `-o` / `--output` | (none) | Write output to file instead of stdout |
| `--all` | `false` | Generate reports for all sessions |

Report contents:

| Section | Description |
|---------|-------------|
| Executive Summary | Duration, attacker type, layers reached, confusion score, risk level |
| Attack Timeline | Chronological events with MITRE ATT&CK tactic and technique mapping |
| Credentials | Planted vs captured credentials, bait hit rate, novel attempts |
| Services Explored | Phantom service engagement (connections, auth, queries, samples) |
| Tools Analysis | User agent, SDK fingerprint, API keys (masked), model usage, command inventory |
| Attack Graph | Mermaid flowchart showing attack progression with layer-colored nodes |
| Effectiveness Assessment | What worked, what didn't, intelligence captured |

Data sources: Tries the dashboard API first (http://localhost:9000), falls back to reading JSONL files directly from /var/labyrinth/forensics/.

The Markdown format produces a GitHub/VS Code-renderable document with a mermaid fenced block for the attack graph.


doctor

Run diagnostic health checks on the LABYRINTH environment.

labyrinth doctor

Runs 12+ checks and reports pass/fail with remediation tips:

| Check | What It Verifies |
|-------|------------------|
| Docker daemon | Docker is running and accessible |
| Docker Compose | Compose binary installed |
| Core containers | SSH, HTTP, orchestrator, proxy, dashboard running |
| Port bindings | Ports 22, 8080, 9000 are mapped correctly |
| Dashboard API | `/api/health` responds |
| Proxy entrypoint | `start.sh` running (not bare `mitmdump`) |
| Phantom services | PostgreSQL, Redis, ES, Consul, Jenkins, SSH relay listening |
| Identity config | `config.json` loaded in proxy container |
| MITM interceptor | Interceptor addon active |
| Bait planted | Bait credentials exist |
| Bait sync | Credentials consistent across SSH and HTTP containers |
| L4 services API | `/api/l4/services` responds |
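The "Dashboard API" check reduces to a GET with a timeout. A minimal sketch, using an `httptest` stand-in server so the example runs anywhere; the `healthy` helper and the "any 200 is a pass" rule are assumptions, not the CLI's actual logic:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// healthy performs the kind of probe the "Dashboard API" check implies:
// GET <base>/api/health and treat a 200 response as a pass.
func healthy(base string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(base + "/api/health")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Stand-in dashboard so the sketch is self-contained.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()
	fmt.Println("dashboard healthy:", healthy(srv.URL))
}
```

The explicit client timeout matters for a diagnostic tool: a hung dashboard should fail the check quickly rather than block `doctor` indefinitely.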

logs

Stream real-time color-coded logs from LABYRINTH containers.

labyrinth logs                             # All services
labyrinth logs ssh http                    # Specific services only
labyrinth logs --tail 50                   # Show last 50 lines before following

Flags:

| Flag | Default | Description |
|------|---------|-------------|
| `--tail` | `20` | Number of historical lines to show before streaming |

Services: ssh, http, orchestrator, proxy, dashboard. Each service gets a distinct color in the merged log stream.

Also available as labyrinth logwatch (alias).
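Per-service coloring in a merged stream is typically a stable service-to-ANSI-code map. A minimal sketch; the specific color assignments here are illustrative, not what the CLI actually uses:

```go
package main

import "fmt"

// serviceColor assigns a fixed ANSI color per service so lines from
// different containers stay distinguishable in one merged stream.
var serviceColor = map[string]string{
	"ssh":          "\033[36m", // cyan
	"http":         "\033[32m", // green
	"orchestrator": "\033[33m", // yellow
	"proxy":        "\033[35m", // magenta
	"dashboard":    "\033[34m", // blue
}

// colorize prefixes a log line with its service tag and color,
// passing unknown services through unchanged.
func colorize(service, line string) string {
	c, ok := serviceColor[service]
	if !ok {
		return line
	}
	return c + "[" + service + "] " + line + "\033[0m"
}

func main() {
	fmt.Println(colorize("ssh", "accepted password for svc-backup"))
}
```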


completion

Generate or install shell autocompletion.

labyrinth completion bash                  # Print bash completion script
labyrinth completion zsh                   # Print zsh completion script
labyrinth completion fish                  # Print fish completion script
labyrinth completion install               # Auto-install for current shell

install

Install the binary to ~/.local/bin.

labyrinth --install

Copies the binary to ~/.local/bin and checks that the directory is on your PATH.


multi-dashboard

Launch an aggregated dashboard across all registered environments.

labyrinth multi-dashboard                  # Default port 9999
labyrinth multi-dashboard --port 8888      # Custom port

Exposes /api/environments, /api/aggregate/stats, and /api/aggregate/sessions endpoints that merge data from all registered environments.


Environment Registry

Environments are tracked as JSON files in ~/.labyrinth/environments/:

{
  "name": "labyrinth-test",
  "type": "test",
  "mode": "docker-compose",
  "created": "2026-02-26T10:00:00Z",
  "compose_project": "labyrinth-labyrinth-test"
}

Both the Go CLI and deploy.sh can read/write the same registry files.


Build from Source

cd cli
go build -o labyrinth .                        # Dev build
go build -ldflags="-s -w" -o labyrinth .       # Optimized build (~7-8MB)

Running Tests

go test ./...                                   # All tests (39 tests)
go test -race ./...                             # With race detector
go test ./internal/registry/...                 # Single package
go test ./test/... -v                           # Integration tests (verbose)
go test -cover ./...                            # Coverage report
