# Troubleshooting
Common issues and fixes when running LABYRINTH.
Tip: Run `labyrinth doctor` first; it performs 12+ automated health checks and provides specific remediation tips for each failure.
## "Pool overlaps" network error on deploy

Error:

```
failed to create network labyrinth_labyrinth-net: Error response from daemon:
invalid pool request: Pool overlaps with other one on this address space
```
Cause: The 172.30.0.0/24 subnet defined in docker-compose.yml is already claimed by an existing Docker network — usually a leftover from a previous deployment under a different Compose project name.
This typically happens when:
- You previously deployed with `labyrinth deploy -t` (project name `labyrinth-labyrinth-test`) and then ran bare `docker compose up` (project name `labyrinth` from the directory name)
- A previous teardown didn't fully clean up the network
- Another Docker project is using the same subnet
Fix (Option A, recommended): Use the CLI, which tracks the correct project name:

```
labyrinth teardown labyrinth-test
labyrinth deploy -t
```

Fix (Option B): Use the correct Compose project name directly:

```
docker compose -p labyrinth-labyrinth-test down
docker compose -p labyrinth-labyrinth-test up -d
```

Fix (Option C): Find and remove the conflicting network manually:
```
# Find which network owns the subnet
docker network ls | grep labyrinth

# Remove it (stop containers first if needed)
docker compose -p <project-name> down
docker network rm <network-name>

# Then redeploy
labyrinth deploy -t
```

Prevention: Always use `labyrinth deploy` / `labyrinth teardown` instead of bare `docker compose` commands. The CLI tracks the Compose project name in the environment registry (`~/.labyrinth/environments/`) so operations always target the correct stack.
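If you do need to fall back to manual `docker compose` commands, the safest sequence is to read the project name out of the registry first rather than letting Compose guess it. The snippet below is a sketch under assumptions: the registry directory comes from the text above, but the per-environment file name and format are hypothetical, and a temp directory stands in for `~/.labyrinth/environments/` so the demo is self-contained:

```shell
# Stand-in for ~/.labyrinth/environments/ so this sketch runs anywhere.
reg=$(mktemp -d)

# Hypothetical registry entry: one file per environment holding the
# Compose project name the CLI used at deploy time.
echo "labyrinth-labyrinth-test" > "$reg/labyrinth-test"

# Read the project name back and build the manual command from it,
# instead of letting docker compose derive it from the directory name.
project=$(cat "$reg/labyrinth-test")
echo "docker compose -p $project down"

rm -rf "$reg"
```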
Important: When rebuilding individual services, you must pass the `-p` flag to match the project name used by the CLI. Bare `docker compose` derives the project name from the directory (e.g., `labyrinth`), but `labyrinth deploy -t` uses `labyrinth-labyrinth-test`. Using mismatched project names causes Docker to attempt creating a second network with the same subnet.

```
# Wrong: uses project name "labyrinth" from the directory
docker compose build orchestrator && docker compose up -d orchestrator

# Correct: matches the CLI's project name
docker compose -p labyrinth-labyrinth-test build orchestrator
docker compose -p labyrinth-labyrinth-test up -d orchestrator
```
## Dashboard shows sessions that don't exist

Symptom: The TUI or web dashboard shows active sessions that no longer exist.
Cause: Session forensic files (.jsonl) persist on the Docker volume after sessions end. If a session was interrupted without a clean session_end event (e.g., containers restarted, teardown during active session), the dashboard counts those orphaned files as active.
Fix: Reset the environment to clear stale forensic data:

```
labyrinth reset
```

This kills any leftover session containers and removes forensic log files. Infrastructure containers keep running.
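To see why a reset helps, here is a sketch of how an orphaned session can be detected: a forensic `.jsonl` file whose final line is not a `session_end` event. The event names come from the text above, but the exact JSON layout and file location are assumptions, and a temp file stands in for a file on the forensic volume:

```shell
# Temp file standing in for a session .jsonl on the forensic volume.
tmp=$(mktemp)
printf '%s\n' '{"event":"session_start"}' '{"event":"container_spawned"}' > "$tmp"

# A cleanly closed session ends with a session_end event; anything else
# would be counted as a still-"active" (orphaned) session.
status=ended
tail -n 1 "$tmp" | grep -q '"event":"session_end"' || status=orphaned
echo "$tmp: $status"

rm -f "$tmp"
```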
## Commands fail with "unexpected status: 404"

Symptom: `labyrinth reset` or other commands fail with `unexpected status: 404`.
Cause: The running dashboard container has old code that doesn't include the requested API endpoint. This happens after pulling new code without rebuilding the Docker images.
Fix: Rebuild and restart the affected containers:

```
labyrinth teardown labyrinth-test
labyrinth deploy -t
```

Or, to rebuild in place:

```
docker compose -p labyrinth-labyrinth-test build orchestrator dashboard
docker compose -p labyrinth-labyrinth-test up -d orchestrator dashboard
```

## Port already in use

Error:

```
Error starting userland proxy: listen tcp4 0.0.0.0:22: bind: address already in use
```
Cause: Another process is using port 22, 8080, or 9000.
Fix: Find and stop the conflicting process:

```
lsof -i :22   # Check what's using the port
```

Or change the port mapping in docker-compose.yml:

```
ports:
  - "2223:22"  # Use a different host port
```

## Containers keep exiting

Symptom: `labyrinth status` or the TUI shows containers in "exited" state.
Fix: Check container logs for the specific error:
```
docker logs labyrinth-orchestrator
docker logs labyrinth-ssh
docker logs labyrinth-dashboard
```

Common causes:
- Orchestrator exits immediately: Docker socket not mounted or inaccessible
- SSH container exits: Port conflict or missing configuration
- Dashboard exits: Template file not found (build issue; rebuild with `labyrinth deploy -t`)
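For the first of these causes, a quick host-side check is whether the Docker socket even exists before investigating the mount. A minimal sketch, assuming the conventional socket path:

```shell
# Conventional Docker socket path; adjust if your daemon uses another.
sock=/var/run/docker.sock

if [ -S "$sock" ]; then
  echo "socket present: check that it is mounted into the orchestrator"
else
  echo "socket missing: the Docker daemon may not be running on this host"
fi
```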
## TUI shows "NO CONNECTION"

Symptom: The TUI overview shows "NO CONNECTION" instead of "RUNNING".
Cause: The TUI can't reach the dashboard API at http://localhost:9000.
Fix:

- Check that the dashboard container is running: `docker ps | grep dashboard`
- Check that port 9000 is mapped: `docker port labyrinth-dashboard`
- Check dashboard logs: `docker logs labyrinth-dashboard`
The TUI will automatically fall back to reading forensic files directly from disk if the API is unavailable, but container status and some real-time features require the API connection.
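You can reproduce the TUI's reachability check by hand. This sketch probes the API address from the text; the exact endpoint the TUI polls is not documented here, so it just hits the root URL:

```shell
# Probe the dashboard API the same way the TUI effectively does.
state="NO CONNECTION"
if curl -sf --max-time 2 http://localhost:9000/ >/dev/null 2>&1; then
  state="RUNNING"
fi
echo "dashboard: $state"
```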
## Layer status reference

| Layer | Shows "active" when... | Shows "awaiting" when... |
|---|---|---|
| L0: FOUNDATION | Infrastructure containers running | N/A (shows standby if no containers) |
| L1: THRESHOLD | SSH/HTTP portal containers running | N/A (shows standby if portals down) |
| L2: MINOTAUR | Session with `container_spawned` or `depth_increase` events | Portal is up but no sessions yet |
| L3: BLINDFOLD | Session with `blindfold_activated` event (depth >= 3) | Waiting for session escalation |
| L4: INTERCEPT | Session with `api_intercepted` event | MITM proxy running but no interceptions |

L2-L4 only go active based on forensic events, not container status. If you reset forensic data, they return to "awaiting" even though the infrastructure is ready.
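Since L2-L4 are pure functions of the forensic event stream, you can predict the TUI's layer badges from a log alone. A sketch using the event names from the table; the JSON layout is an assumption, and a temp file stands in for a real session log:

```shell
# Fabricated session log: the session spawned a container and escalated
# to depth 3, but the MITM proxy intercepted nothing.
log=$(mktemp)
printf '%s\n' \
  '{"event":"container_spawned"}' \
  '{"event":"blindfold_activated","depth":3}' > "$log"

# Mirror the table: each layer goes "active" only if its event appears.
l2=awaiting; grep -qE '"event":"(container_spawned|depth_increase)"' "$log" && l2=active
l3=awaiting; grep -q '"event":"blindfold_activated"' "$log" && l3=active
l4=awaiting; grep -q '"event":"api_intercepted"' "$log" && l4=active
echo "L2:$l2 L3:$l3 L4:$l4"

rm -f "$log"
```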
## See also

- CLI-Reference: Full command reference
- Configuration: Customize `labyrinth.yaml`
- Testing-with-Attackers: Set up an attacker agent