
Troubleshooting

Stephen edited this page Mar 5, 2026 · 2 revisions


Common issues and fixes when running LABYRINTH.

Tip: Run labyrinth doctor first — it performs 12+ automated health checks and provides specific remediation tips for each failure.


Docker Network Subnet Conflict

Error:

failed to create network labyrinth_labyrinth-net: Error response from daemon:
invalid pool request: Pool overlaps with other one on this address space

Cause: The 172.30.0.0/24 subnet defined in docker-compose.yml is already claimed by an existing Docker network — usually a leftover from a previous deployment under a different Compose project name.

This typically happens when:

  • You previously deployed with labyrinth deploy -t (project name labyrinth-labyrinth-test) and then ran bare docker compose up (project name labyrinth, derived from the directory name)
  • A previous teardown didn't fully clean up the network
  • Another Docker project is using the same subnet
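Docker's "pool overlaps" rejection is essentially a CIDR overlap test between the requested subnet and every existing pool: two blocks collide when they agree on the shorter of the two prefixes. A rough illustration of that check in plain shell (not LABYRINTH code — just the arithmetic behind the error):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Return success (0) if two CIDR blocks overlap.
cidr_overlap() {
  net1=${1%/*}; len1=${1#*/}
  net2=${2%/*}; len2=${2#*/}
  i1=$(ip_to_int "$net1"); i2=$(ip_to_int "$net2")
  # Compare under the shorter (wider) prefix: if the masked networks
  # match, one block contains or equals the other.
  if [ "$len1" -lt "$len2" ]; then len=$len1; else len=$len2; fi
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  [ $(( i1 & mask )) -eq $(( i2 & mask )) ]
}

cidr_overlap 172.30.0.0/24 172.30.0.0/16 && echo overlap || echo disjoint  # → overlap
```

A leftover network claiming anything from a /16 down to the exact /24 will trip the daemon's check, which is why the fixes below all target the stale network rather than the compose file.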

Fix — Option A (recommended): Use the CLI, which tracks the correct project name:

labyrinth teardown labyrinth-test
labyrinth deploy -t

Fix — Option B: Use the correct Compose project name directly:

docker compose -p labyrinth-labyrinth-test down
docker compose -p labyrinth-labyrinth-test up -d

Fix — Option C: Find and remove the conflicting network manually:

# Find which network owns the subnet (lists every network with its pool)
docker network ls -q | xargs docker network inspect \
  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' | grep 172.30

# Remove it (stop containers first if needed)
docker compose -p <project-name> down
docker network rm <network-name>

# Then redeploy
labyrinth deploy -t

Prevention: Always use labyrinth deploy / labyrinth teardown instead of bare docker compose commands. The CLI tracks the Compose project name in the environment registry (~/.labyrinth/environments/) so operations always target the correct stack.

Important: When rebuilding individual services, you must pass the -p flag to match the project name used by the CLI. Bare docker compose derives the project name from the directory (e.g., labyrinth), but labyrinth deploy -t uses labyrinth-labyrinth-test. Using mismatched project names causes Docker to attempt creating a second network with the same subnet.

# Wrong — uses project name "labyrinth" from directory
docker compose build orchestrator && docker compose up -d orchestrator

# Correct — matches the CLI's project name
docker compose -p labyrinth-labyrinth-test build orchestrator
docker compose -p labyrinth-labyrinth-test up -d orchestrator

"Active Sessions" Count Doesn't Match Reality

Symptom: The TUI or web dashboard shows active sessions that no longer exist.

Cause: Session forensic files (.jsonl) persist on the Docker volume after sessions end. If a session was interrupted without a clean session_end event (e.g., containers restarted, teardown during active session), the dashboard counts those orphaned files as active.

Fix: Reset the environment to clear stale forensic data:

labyrinth reset

This kills any leftover session containers and removes forensic log files. Infrastructure containers keep running.
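The stale count can be approximated from the host by counting .jsonl files that never recorded a clean session_end event. A minimal sketch — the forensic directory layout and exact event key are assumptions (the real files live on the Docker volume), and the demo uses throwaway fixtures:

```shell
# Count forensic logs with no session_end event (i.e. "orphaned" sessions).
orphaned_sessions() {
  dir=$1; count=0
  for f in "$dir"/*.jsonl; do
    [ -e "$f" ] || continue
    grep -q '"session_end"' "$f" || count=$((count + 1))
  done
  echo "$count"
}

# Demo with throwaway fixtures standing in for the volume contents.
tmp=$(mktemp -d)
printf '{"event":"session_start"}\n{"event":"session_end"}\n' > "$tmp/a.jsonl"
printf '{"event":"session_start"}\n' > "$tmp/b.jsonl"
orphaned_sessions "$tmp"  # → 1
```

If this count matches the inflated dashboard number, labyrinth reset is the right fix.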


Dashboard API Returns 404

Symptom: labyrinth reset or other commands fail with unexpected status: 404.

Cause: The running dashboard container has old code that doesn't include the requested API endpoint. This happens after pulling new code without rebuilding the Docker images.

Fix: Rebuild and restart the affected containers:

labyrinth teardown labyrinth-test
labyrinth deploy -t

Or if you want to rebuild in place:

docker compose -p labyrinth-labyrinth-test build orchestrator dashboard
docker compose -p labyrinth-labyrinth-test up -d orchestrator dashboard

Port Already in Use

Error:

Error starting userland proxy: listen tcp4 0.0.0.0:22: bind: address already in use

Cause: Another process is using port 22, 8080, or 9000.

Fix: Find and stop the conflicting process:

lsof -i :22    # Check what's using the port

Or change the port mapping in docker-compose.yml:

ports:
  - "2223:22"    # Use a different host port
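Before editing the mapping, it can help to confirm the replacement host port is actually free. A small sketch using bash's /dev/tcp redirection (bash-specific; the 2223-2300 range is just an example):

```shell
# Print the first host port in [start, end] with nothing listening on it.
first_free_port() {
  p=$1; last=$2
  while [ "$p" -le "$last" ]; do
    # /dev/tcp connect succeeds only when something is listening there.
    if ! (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
      echo "$p"; return 0
    fi
    p=$((p + 1))
  done
  return 1
}

first_free_port 2223 2300
```

Use the printed port as the host side of the mapping, e.g. "2224:22".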

Containers Show "Exited" or Won't Start

Symptom: labyrinth status or the TUI shows containers in "exited" state.

Fix: Check container logs for the specific error:

docker logs labyrinth-orchestrator
docker logs labyrinth-ssh
docker logs labyrinth-dashboard

Common causes:

  • Orchestrator exits immediately: Docker socket not mounted or inaccessible
  • SSH container exits: Port conflict or missing configuration
  • Dashboard exits: Template file not found (build issue — rebuild with labyrinth deploy -t)
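For the first cause, a quick host-side sanity check is whether the Docker socket the orchestrator mounts even exists. The default path is an assumption — adjust it (or set the hypothetical DOCKER_SOCK variable below) if your daemon listens elsewhere:

```shell
# Verify the Docker socket exists and is actually a socket before
# digging further into the orchestrator's logs.
sock=${DOCKER_SOCK:-/var/run/docker.sock}
if [ -S "$sock" ]; then
  echo "socket present: $sock"
else
  echo "socket missing or not a socket: $sock"
fi
```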

TUI Shows "NO CONNECTION"

Symptom: The TUI overview shows "NO CONNECTION" instead of "RUNNING".

Cause: The TUI can't reach the dashboard API at http://localhost:9000.

Fix:

  1. Check that the dashboard container is running: docker ps | grep dashboard
  2. Check that port 9000 is mapped: docker port labyrinth-dashboard
  3. Check dashboard logs: docker logs labyrinth-dashboard

The TUI will automatically fall back to reading forensic files directly from disk if the API is unavailable, but container status and some real-time features require the API connection.
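A one-shot probe of the API URL from the symptom above separates "dashboard down" from "TUI-side problem" (the root path is an assumption — any endpoint that answers will do):

```shell
# Probe the dashboard API the TUI polls.
if curl -fsS --max-time 2 http://localhost:9000/ >/dev/null 2>&1; then
  echo "dashboard reachable -> suspect the TUI side"
else
  echo "dashboard unreachable -> check the container and port mapping"
fi
```

If the probe succeeds but the TUI still shows NO CONNECTION, restart the TUI before rebuilding anything.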


Layer Status Shows Unexpected Values

| Layer | Shows "active" when... | Shows "awaiting" when... |
|---|---|---|
| L0: FOUNDATION | Infrastructure containers running | N/A (shows standby if no containers) |
| L1: THRESHOLD | SSH/HTTP portal containers running | N/A (shows standby if portals down) |
| L2: MINOTAUR | Session with container_spawned or depth_increase events | Portal is up but no sessions yet |
| L3: BLINDFOLD | Session with blindfold_activated event (depth >= 3) | Waiting for session escalation |
| L4: INTERCEPT | Session with api_intercepted event | MITM proxy running but no interceptions |

L2-L4 only go active based on forensic events, not container status. If you reset forensic data, they return to "awaiting" even though the infrastructure is ready.
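The event-driven part of this logic can be mimicked from the host by grepping the forensic logs for the event names in the table. The log directory is an assumption, so the sketch takes it as a parameter and demos with throwaway files:

```shell
# Report a layer as active if any forensic log contains its trigger event.
layer_status() {
  dir=$1; event=$2
  if grep -hq "\"$event\"" "$dir"/*.jsonl 2>/dev/null; then
    echo active
  else
    echo awaiting
  fi
}

# Demo fixtures standing in for the real forensic volume.
tmp=$(mktemp -d)
printf '{"event":"container_spawned"}\n' > "$tmp/s1.jsonl"
layer_status "$tmp" container_spawned    # L2: MINOTAUR → active
layer_status "$tmp" blindfold_activated  # L3: BLINDFOLD → awaiting
```

This also shows why a reset flips L2-L4 back to "awaiting": the trigger events are gone even though the containers behind L0/L1 never stopped.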

