Build and deploy AI agents in minutes, not weeks.
AgentShip is the production layer for AI agents. Built on Google ADK, it provides everything you need to ship agents to production: REST API, session management, observability, and one-command deployment.
```bash
git clone https://github.com/harshuljain13/ship-ai-agents.git
cd ship-ai-agents/ai/ai-ecosystem
make docker-setup
```

That's it! The script will:
- ✅ Check Docker installation
- ✅ Create `.env` file
- ✅ Prompt for your API key
- ✅ Start everything
Access your services:
- API (Swagger): http://localhost:7001/swagger
- Documentation: http://localhost:7001/docs
- Debug UI: http://localhost:7001/debug-ui
```bash
make docker-up     # Start containers (with hot-reload)
make docker-down   # Stop containers
make docker-logs   # View logs
```

Hot-reload enabled! Edit code in `src/` and changes auto-reload.
AgentShip's architecture is designed for production-scale AI agent deployment:
The system includes:
- FastAPI Entrypoint: HTTP, SSE, and WebSocket support
- Main Ecosystem: YAML-based agent configurations, LLM sidecar, observability, and guardrails
- LLM Tooling Layer: Utils, tools, and MCP integration
- Memory Layer: Session memory, external context stores, caching, and file storage
- Data Ingestion Pipeline: Processes data from various sources
- Observability: OPIK & Langfuse integration for monitoring and evaluation
```bash
# 1. Create directory
mkdir -p src/agents/all_agents/my_agent
cd src/agents/all_agents/my_agent

# 2. Create main_agent.yaml
cat > main_agent.yaml << EOF
agent_name: my_agent
llm_provider_name: openai
llm_model: gpt-4o
temperature: 0.4
description: My helpful assistant
instruction_template: |
  You are a helpful assistant that answers questions clearly.
EOF

# 3. Create main_agent.py
cat > main_agent.py << EOF
from src.agents.all_agents.base_agent import BaseAgent
from src.models.base_models import TextInput, TextOutput
from src.agents.utils.path_utils import resolve_config_path


class MyAgent(BaseAgent):
    def __init__(self):
        super().__init__(
            config_path=resolve_config_path(relative_to=__file__),
            input_schema=TextInput,
            output_schema=TextOutput
        )
EOF
```

Restart the server → the agent is automatically discovered!
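The README does not show how auto-discovery works internally, but the convention above (one directory per agent, each containing a `main_agent.py`) suggests a simple filesystem scan. The sketch below is an illustrative, stdlib-only approximation; AgentShip's actual discovery logic may differ.

```python
from pathlib import Path


def discover_agents(root: str) -> list[str]:
    """Return names of agent packages under `root` that contain a main_agent.py.

    Illustrative only: approximates the directory convention shown above.
    """
    return sorted(p.parent.name for p in Path(root).glob("*/main_agent.py"))


# Demo against a throwaway directory tree mimicking src/agents/all_agents/
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    for name in ("my_agent", "other_agent"):
        d = Path(tmp) / name
        d.mkdir()
        (d / "main_agent.py").write_text("# agent module\n")
    print(discover_agents(tmp))  # ['my_agent', 'other_agent']
```

Because discovery is driven purely by directory layout, adding an agent never requires touching a central registry file.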
AgentShip includes a Gradio-based Debug UI for testing agents interactively:
Access: http://localhost:7001/debug-ui (same port as API)
Features:
- Interactive chat with any registered agent
- Dynamic input forms from Pydantic schemas
- Real-time debug logs
- Session management (new/clear conversations)
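Generating input forms from a schema comes down to introspecting its fields. As a dependency-free illustration of that idea, the sketch below uses a stdlib `dataclass` as a stand-in for the project's Pydantic models (the real `TextInput` lives in `src.models.base_models` and its fields are not shown here):

```python
from dataclasses import dataclass, fields, MISSING


@dataclass
class TextInput:  # hypothetical stand-in for the project's Pydantic TextInput
    text: str
    max_tokens: int = 256


def form_spec(schema) -> list[dict]:
    """Derive a simple form description (name, type, default) from a schema."""
    return [
        {
            "name": f.name,
            "type": f.type if isinstance(f.type, str) else f.type.__name__,
            "default": None if f.default is MISSING else f.default,
        }
        for f in fields(schema)
    ]


print(form_spec(TextInput))
# [{'name': 'text', 'type': 'str', 'default': None},
#  {'name': 'max_tokens', 'type': 'int', 'default': 256}]
```

Pydantic exposes the same information via `model_fields`, which is presumably what the Debug UI reads to render one widget per field.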
```bash
make docker-setup     # First-time setup (builds + starts)
make docker-up        # Start containers (after first setup)
make docker-down      # Stop containers
make docker-restart   # Restart containers
make docker-logs      # View logs

make dev              # Start dev server → http://localhost:7001
                      # Debug UI at → http://localhost:7001/debug-ui

make heroku-deploy    # Deploy to Heroku (one command)
make help             # See all commands
make test             # Run tests
```

AgentShip uses PostgreSQL for session storage. Different environments use different databases:
| Environment | Command | Database | Access |
|---|---|---|---|
| Docker | `make docker-up` | Docker PostgreSQL (`ai_agents_store`) | `postgres:5432` (inside containers) |
| Local | `make dev` | Local PostgreSQL (`ai_agents_session_store`) | `localhost:5432` |
| Heroku | Auto-provisioned | Heroku PostgreSQL | `DATABASE_URL` env var |
Note: Docker and local development use separate databases. Data does not sync between them.
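On Heroku, the auto-provisioned database arrives as a single `DATABASE_URL` connection string. Splitting such a URL into the parameters a driver expects is a one-liner with the stdlib; the URL below is a made-up example with placeholder credentials:

```python
from urllib.parse import urlparse


def parse_database_url(url: str) -> dict:
    """Split a postgres:// connection URL into its component parameters."""
    u = urlparse(url)
    return {
        "host": u.hostname,
        "port": u.port or 5432,  # default PostgreSQL port if none given
        "user": u.username,
        "password": u.password,
        "dbname": u.path.lstrip("/"),
    }


# Hypothetical Heroku-style URL (host and credentials are placeholders)
cfg = parse_database_url("postgres://user:secret@db.example.com:5432/ai_agents_store")
print(cfg["host"], cfg["dbname"])  # db.example.com ai_agents_store
```

In practice most PostgreSQL drivers (e.g. psycopg) also accept the URL directly, so explicit parsing is only needed when building per-field configuration.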
Inside Docker, containers communicate via service names, not localhost:
- ✅ `postgres:5432` - Correct (Docker service name)
- ❌ `localhost:5432` - Wrong (refers to the container's own network)
The docker-compose.yml automatically overrides the database URL for Docker networking.
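The repo's actual `docker-compose.yml` is not reproduced in this README; the fragment below is a minimal sketch of what such an override typically looks like, with hypothetical service names and placeholder credentials:

```yaml
# Hypothetical sketch only - the real docker-compose.yml in the repo may use
# different service names, images, and variables.
services:
  api:
    environment:
      # Point the app at the "postgres" service name, not localhost
      DATABASE_URL: postgres://postgres:postgres@postgres:5432/ai_agents_store
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: ai_agents_store
```

Because Compose puts both services on one network, the service name `postgres` resolves to the database container's address from inside the `api` container.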
- Full Documentation: http://localhost:7001/docs (when running locally)
- Quick Start: http://localhost:7001/docs (see "Quick Start" in sidebar)
- Building Agents: http://localhost:7001/docs (see "Building Agents" in sidebar)
- API Reference: http://localhost:7001/docs (see "API Reference" in sidebar)
All documentation is available at /docs when the server is running.
MIT License | GitHub

