The LLM Gateway for Multi-Provider AI Development
Transform Claude Code (or any Anthropic API client) into a multi-provider client for OpenAI, Anthropic, Poe, Azure, Gemini, and other compatible APIs.
Route requests to any LLM provider with simple model prefixes.
First, install uv if you don't already have it. Then:
# Install in seconds
uv tool install vandamme-proxy
# Configure 1 or multiple API keys for production resilience
export POE_API_KEY="sk-key1 sk-key2 sk-key3"
# Run Claude Code CLI wrapped by Vandamme
claude.vdm
# Open dashboard page for monitoring
open http://localhost:8082/dashboard/

# Use with Claude Code CLI
export ANTHROPIC_BASE_URL=http://localhost:8082
claude --model openai:gpt-4o "Analyze this code"
claude --model poe:gemini-flash "Quick question"
claude --model fast "Fast response"      # Smart alias

A lightweight, production-ready proxy with enterprise features:
- Zero-Configuration Discovery - Providers auto-configured from environment variables
- Dual API Format Support - Native OpenAI conversion + Anthropic passthrough
- Smart Model Aliases - Case-insensitive substring matching for cleaner workflows
- Secure API Key Passthrough - Multi-tenant deployments with the `!PASSTHRU` sentinel
- Extensible Middleware - Chain-of-responsibility pattern for custom logic
- Built-in Observability - Metrics, health checks, and structured logging
- Major Providers: OpenAI, Anthropic, Poe, Azure OpenAI, and more
- Custom Endpoints: Any OpenAI/Anthropic-compatible API
- Auto-Discovery: Configure via `{PROVIDER}_API_KEY` environment variables
- Mixed Formats: Run OpenAI conversion and Anthropic passthrough simultaneously
- Provider Prefix Routing: `provider:model` syntax
- Smart Model Aliases: Substring matching with priority ordering
- Dynamic Provider Selection: Switch providers per-request without configuration changes
- Multi-API Key Support: Configure multiple keys per provider with automatic round-robin rotation
- API Key Passthrough: Set `{PROVIDER}_API_KEY=!PASSTHRU` to enable client-provided keys
- Intelligent Failover: Automatic key rotation on authentication failures (401/403/429)
- Mixed Authentication: Static keys + passthrough simultaneously per-provider
- Isolated Configuration: Per-provider settings, custom headers, API versions
- Handy CLI (`vdm`): Server management, health checks, configuration validation
- Streaming support, metrics endpoints
- Extensible Architecture: Built-in middleware for Google Gemini thought signatures
- Hot reload support during development
┌────────────────────────────────────────────────┐
│              Your AI Application               │
│       (Claude Code CLI, Custom Clients)        │
└───────────────────────┬────────────────────────┘
                        │
                        ▼
      ┌───────────────────────────────────┐
      │      Vandamme Proxy Gateway       │
      │      http://localhost:8082        │
      │                                   │
      │  ┌─────────────────────────────┐  │
      │  │     Smart Alias Engine      │  │
      │  │ "fast" → "poe:gemini-flash" │  │
      │  └─────────────────────────────┘  │
      │                                   │
      │  ┌─────────────────────────────┐  │
      │  │   Dynamic Provider Router   │  │
      │  │     Dual Format Handler     │  │
      │  └─────────────────────────────┘  │
      └─────────────────┬─────────────────┘
                        │
      ┌─────────────┬───┴─────────┬─────────────┐
      │             │             │             │
      ▼             ▼             ▼             ▼
 ┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐
 │ OpenAI  │   │Anthropic│   │   Poe   │   │  Azure  │
 │         │   │ Format: │   │ (!PASS  │   │ Gemini  │
 │ Static  │   │Anthropic│   │  THRU)  │   │ Custom  │
 │  Key    │   │         │   │         │   │         │
 └─────────┘   └─────────┘   └─────────┘   └─────────┘
Request Flow:
- Anthropic Client sends request to Vandamme Proxy
- Smart alias resolution (if applicable)
- Provider routing based on model prefix
- Format selection (OpenAI conversion vs Anthropic passthrough)
- Response transformation and middleware processing
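For example, the flow can be exercised directly with curl (a sketch; whether a client API key header is also required depends on whether the target provider uses a static key or `!PASSTHRU`):

# Anthropic-format request; the "openai:" prefix selects the OpenAI provider
# and the body is converted to OpenAI format on the way out and back
curl -s http://localhost:8082/v1/messages \
  -H "content-type: application/json" \
  -d '{
    "model": "openai:gpt-4o",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'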
# Using uv (fastest)
uv pip install vandamme-proxy
# Or using pip
pip install vandamme-proxy
# Verify installation
vdm version

# Interactive setup (recommended for new users)
vdm config setup
# Or create .env file manually
cat > .env << 'EOF'
# Provider API Keys
OPENAI_API_KEY=sk-your-openai-key
POE_API_KEY=!PASSTHRU # Client provides key per-request
ANTHROPIC_API_KEY=sk-ant-your-key
ANTHROPIC_API_FORMAT=anthropic # Direct passthrough (no conversion)
# Smart Aliases (provider-specific)
POE_ALIAS_FAST=gemini-flash
ANTHROPIC_ALIAS_CHAT=claude-3-5-sonnet-20241022
OPENAI_ALIAS_CODE=gpt-4o
# Default Provider (when no prefix specified)
# Overrides the default-provider from src/config/defaults.toml
VDM_DEFAULT_PROVIDER=openai
EOF

# Development mode (with hot reload)
vdm server start --reload
# Production mode
vdm server start --host 0.0.0.0 --port 8082

# Point Claude Code to proxy
export ANTHROPIC_BASE_URL=http://localhost:8082
# Use provider routing
claude --model openai:gpt-4o "Analyze this code"
claude --model poe:gemini-flash "Quick question"
# Use smart aliases
claude --model fast "Fast response needed"
claude --model chat "Deep conversation"
# For passthrough providers (!PASSTHRU), provide your API key
ANTHROPIC_API_KEY=your-poe-key claude --model poe:gemini-flash "..."

# Check server health
vdm health server
# Test upstream provider connectivity
vdm health upstream
# Show current configuration
vdm config show
# View active model aliases
curl http://localhost:8082/v1/aliases

You're all set! Now using multiple LLM providers through a single, elegant interface.
Vandamme Proxy uses a hierarchical configuration system. Settings from higher levels override those from lower levels:
Environment Variables (highest priority)
├── Local: ./vandamme-config.toml
├── User: ~/.config/vandamme-proxy/vandamme-config.toml
└── Package: src/config/defaults.toml (lowest priority)
The default provider is determined in this order:
1. `VDM_DEFAULT_PROVIDER` environment variable (if set)
2. `default-provider` from your local `./vandamme-config.toml`
3. `default-provider` from your user config `~/.config/vandamme-proxy/vandamme-config.toml`
4. `default-provider` from `src/config/defaults.toml` (defaults to "openai")
The src/config/defaults.toml file provides built-in defaults:
- Default provider: "openai"
- Fallback model aliases for providers like Poe
You can override any of these settings using environment variables or your own TOML configuration files.
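For example, a project-local override could look like this (a minimal sketch; `default-provider` is the only key documented here):

# ./vandamme-config.toml beats the user and package configs,
# but is itself overridden by VDM_DEFAULT_PROVIDER if that is exported
cat > vandamme-config.toml << 'EOF'
default-provider = "poe"
EOF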
Route requests by prefixing model names with the provider identifier:
# Format: provider:model_name
claude --model openai:gpt-4o # Routes to OpenAI
claude --model poe:gemini-flash # Routes to Poe
claude --model anthropic:claude-3 # Routes to Anthropic
claude --model gpt-4o                # Routes to VDM_DEFAULT_PROVIDER

Providers are auto-discovered from environment variables:
- `OPENAI_API_KEY` → creates "openai" provider
- `POE_API_KEY` → creates "poe" provider
- `CUSTOM_API_KEY` → creates "custom" provider
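As a sketch, any OpenAI-compatible endpoint can be registered the same way; the provider name, key, and URL below are placeholders:

# Hypothetical provider "mylocal": the key creates it, the base URL points it
# at an OpenAI-compatible server, and the format keeps conversion mode
export MYLOCAL_API_KEY=sk-placeholder
export MYLOCAL_BASE_URL=http://localhost:11434/v1
export MYLOCAL_API_FORMAT=openai

claude --model mylocal:llama3 "Hello"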
Complete Routing Guide →
Create memorable shortcuts with powerful substring matching:
# .env configuration
POE_ALIAS_FAST=gemini-flash
POE_ALIAS_HAIKU=gpt-4o-mini
ANTHROPIC_ALIAS_CHAT=claude-3-5-sonnet-20241022

Intelligent Matching Rules:
- Case-Insensitive: `fast`, `Fast`, `FAST` all match
- Substring Matching: `my-fast-model` matches the `FAST` alias
- Hyphen/Underscore: `my-alias` and `my_alias` both match `MY_ALIAS`
- Provider-Scoped: Each alias is tied to a specific provider
- Priority Order: Exact match → Longest substring → Provider order → Alphabetical
- Automatic Fallbacks: Default mappings for `haiku`, `sonnet`, `opus`
- Project Overrides: Local configuration files
- User Preferences: System-wide defaults
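Taken together, a single alias covers several spellings (a sketch using the rules above):

export POE_ALIAS_FAST=gemini-flash

claude --model fast          "..."   # exact match
claude --model FAST          "..."   # case-insensitive
claude --model my-fast-model "..."   # substring match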
OpenAI Format (default):
PROVIDER_API_FORMAT=openai      # Requests converted to/from OpenAI format

Anthropic Format (passthrough):
PROVIDER_API_FORMAT=anthropic   # Zero conversion overhead, direct passthrough

Mix formats in a single instance:
OPENAI_API_FORMAT=openai # Conversion mode
ANTHROPIC_API_FORMAT=anthropic # Passthrough mode
BEDROCK_API_FORMAT=anthropic    # AWS Bedrock passthrough

This enables using Claude natively on AWS Bedrock, Google Vertex AI, or any Anthropic-compatible endpoint without conversion overhead.
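A sketch of such a passthrough provider; the provider name and URL are placeholders, and the base-URL variable assumes the same `{PROVIDER}_BASE_URL` pattern shown in the configuration reference below:

# Hypothetical Anthropic-compatible endpoint registered as provider "upstream"
export UPSTREAM_API_KEY=your-key
export UPSTREAM_API_FORMAT=anthropic     # direct passthrough, no conversion
export UPSTREAM_BASE_URL=https://anthropic-compatible.example.com

claude --model upstream:claude-3-5-sonnet-20241022 "..."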
Anthropic API Support Guide →
Enable client-provided API keys with the !PASSTHRU sentinel:
# Proxy stores and uses a static API key
OPENAI_API_KEY=sk-your-static-key
# Client provides their own key per-request
POE_API_KEY=!PASSTHRU

Use Cases:
- Multi-Tenant Deployments - Each client uses their own API keys
- Cost Distribution - Clients pay for their own API usage
- Client Autonomy - Users maintain control of their credentials
- Gradual Migration - Move providers to passthrough one at a time
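For clients other than Claude Code, the same idea looks roughly like this (a sketch; it assumes the proxy reads the client key from the standard Anthropic `x-api-key` header, as Claude Code sends it):

# POE_API_KEY=!PASSTHRU on the proxy, so each client supplies its own Poe key
curl -s http://localhost:8082/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: $MY_OWN_POE_KEY" \
  -d '{"model": "poe:gemini-flash", "max_tokens": 128, "messages": [{"role": "user", "content": "Hi"}]}'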
API Key Passthrough Guide →
VanDamme Proxy is designed around a specific problem space: acting as a multi-provider LLM gateway that is natively compatible with Claude Code and Anthropic's SSE protocol, while still supporting OpenAI-style APIs and other providers.
Most alternatives solve adjacent but different problems. The comparisons below are scoped specifically to Claude Code compatibility and protocol behavior, not general LLM usage.
What it does well
- Native compatibility with Claude Code
- Correct implementation of Anthropic's SSE protocol
- Simple, focused design
Limitations (by design)
- Primarily Anthropic-focused
- Simultaneous multi-provider routing is not a first-class concern
- Limited abstraction for adding heterogeneous providers with different API semantics
Summary
Claude Code Proxy is purpose-built for Anthropic and Claude Code. VanDamme builds on this idea but generalizes it into a provider-agnostic gateway, while preserving Claude's protocol semantics.
References
What it does well
- Broad multi-provider support
- OpenAI-compatible API normalization
- Production-oriented features (logging, retries, caching)
Limitations in the context of Claude Code
- Normalizes providers around OpenAI semantics
- Does not implement Anthropic's native SSE event model
- Claude Code expects Anthropic-specific streaming events and will not function correctly with OpenAI-style streaming
Summary
LiteLLM is an excellent OpenAI-compatible gateway, but it is not designed to support Claude Code, which relies on Anthropic's native streaming protocol rather than OpenAI's.
References
What it does well
- Simple access to many hosted models
- No local infrastructure required
Limitations
- Fully hosted (not self-hostable)
- No control over routing, protocol handling, or extensions
- Not designed to proxy Claude Code traffic locally
Summary
OpenRouter is a hosted aggregation service, not a local gateway or protocol bridge. VanDamme targets self-hosted, local, and extensible workflows.
References
Choose Vandamme if you:
- Use Claude Code CLI and want seamless multi-provider support
- Need flexible per-provider API key passthrough for multi-tenant scenarios
- Want smart model aliases with substring matching
- Require Anthropic-format native passthrough (Z.Ai, AWS Bedrock, Google Vertex AI)
- Want extensible middleware for custom request/response logic
Choose LiteLLM if you:
- Need enterprise-grade load balancing and automatic failover
- Require extensive logging and observability integrations
- Want managed caching layers and retry strategies
Choose OpenRouter if you:
- Prefer a managed cloud service over self-hosting
- Want access to exclusive model partnerships and providers
- Don't require self-hosted infrastructure
For production deployments, configure multiple API keys per provider for automatic load balancing and failover:
# Multiple keys for automatic round-robin rotation
export OPENAI_API_KEY="sk-proj-key1 sk-proj-key2 sk-proj-key3"
export ANTHROPIC_API_KEY="sk-ant-prod1 sk-ant-prod2 sk-ant-backup"
# Start with high availability
vdm server start

Key Features:
- Round-Robin Load Balancing - Distribute requests across keys
- Automatic Failover - Skip failed keys (401/403/429 errors)
- Thread-Safe Operation - Process-global rotation state
- Backward Compatible - Single-key configurations still work
Learn more about multi-API key configuration →
- Quick Start Guide - Get running in 5 minutes
- Dashboard Guide - Monitor and manage your proxy
- Architecture Overview - Deep dive into design decisions
- Development Workflows - Makefile targets and best practices
- Multi-API Key Support - Load balancing and automatic failover
- Multi-Provider Routing - Complete routing and configuration guide
- Smart Model Aliases - Alias configuration and matching rules
- Fallback Model Aliases - Automatic defaults for special model names
- API Key Passthrough - Security and multi-tenancy patterns
- Anthropic API Support - Dual-format operation details
- `POST /v1/messages` - Chat completions
- `POST /v1/messages/count_tokens` - Token counting
- `GET /v1/models` - List available models
- `GET /v1/aliases` - View active model aliases
- `GET /health` - Health check with provider status
- `GET /metrics/running-totals` - Usage metrics
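All of these can be probed directly, for example:

curl -s http://localhost:8082/health                  # provider status
curl -s http://localhost:8082/v1/models               # available models
curl -s http://localhost:8082/v1/aliases              # active aliases
curl -s http://localhost:8082/metrics/running-totals  # usage metrics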
- `vdm server start` - Start the proxy server
- `vdm config setup` - Interactive configuration
- `vdm health server` - Check server health
- `vdm health upstream` - Test provider connectivity
- `vdm test connection` - Validate API access
- `vdm test models` - List available models
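A typical first-run sequence using only the commands above:

vdm config setup        # interactive configuration
vdm server start        # start the proxy (in its own terminal)
vdm health server       # is the proxy responding?
vdm health upstream     # can each configured provider be reached?
vdm test models         # list the models each provider exposes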
# Clone the repository
git clone https://github.com/CedarVerse/vandamme-proxy.git
cd vandamme-proxy
# Initialize development environment (recommended)
make dev-env-init
# Or setup dependencies manually
make dev-env-setup
make check-install

# Start development server with hot reload
make dev
# Run tests (excluding e2e by default)
make test
# Run all static checks (format + lint + typecheck, NO tests)
make sanitize
# Format code
make format
# Quick validation (format + lint + quick tests)
make validate

The project follows a three-tier testing pyramid:
- Unit Tests (~90%): Fast, mocked tests using RESPX for HTTP-layer mocking
- Integration Tests (~10%): Require running server, no external API calls
- E2E Tests (<5%): Real API calls for critical validation (requires API keys)
# Run specific test suites
make test-unit # Unit tests only (fastest)
make test-integration # Integration tests (requires server)
make test-e2e # E2E tests (requires API keys, incurs costs)
make test-all          # All tests including E2E

We welcome contributions! Please see our development guide for details:
- Development Workflows - Makefile targets and best practices
- Architecture Overview - Design decisions and code structure
- Code Style Guide - Formatting and linting standards
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-key
POE_API_KEY=your-poe-key
# Any {PROVIDER}_API_KEY creates a provider

# Default provider for models without provider prefixes
# Overrides the default-provider from src/config/defaults.toml
# If not set, uses value from defaults.toml (defaults to "openai")
VDM_DEFAULT_PROVIDER=openai

# API Format: "openai" (default) or "anthropic"
ANTHROPIC_API_FORMAT=anthropic
# Base URL (optional, has sensible defaults)
OPENAI_BASE_URL=https://api.openai.com/v1
AZURE_BASE_URL=https://your-resource.openai.azure.com
# API Version (for Azure)
AZURE_API_VERSION=2024-02-15-preview

HOST=0.0.0.0             # Server host
PORT=8082 # Server port
LOG_LEVEL=INFO # Logging level
MAX_TOKENS_LIMIT=4096 # Maximum tokens
REQUEST_TIMEOUT=90 # Request timeout in seconds
MAX_RETRIES=2            # Retry attempts

# Google Gemini thought signatures
GEMINI_THOUGHT_SIGNATURES_ENABLED=true
THOUGHT_SIGNATURE_CACHE_TTL=3600
THOUGHT_SIGNATURE_MAX_CACHE_SIZE=10000

# Automatically converted to HTTP headers
CUSTOM_HEADER_ACCEPT=application/json
CUSTOM_HEADER_X_API_KEY=your-key

# Build and start with Docker Compose
docker compose up -d
# View logs
docker compose logs -f
# Stop services
docker compose down

# Create systemd service file
sudo tee /etc/systemd/system/vandamme-proxy.service > /dev/null <<EOF
[Unit]
Description=Vandamme Proxy
After=network.target
[Service]
Type=simple
User=vandamme
WorkingDirectory=/opt/vandamme-proxy
Environment=HOST=0.0.0.0
Environment=PORT=8082
# Wrap uses systemd logging by default; server can opt-in with --systemd
ExecStart=/opt/vandamme-proxy/.venv/bin/vdm server start --systemd
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# Enable and start service
sudo systemctl enable vandamme-proxy
sudo systemctl start vandamme-proxy
# View logs (systemd journal)
journalctl -t vandamme-proxy -f

- Vandamme Proxy depends on `systemd` (systemd-python) and supports journal logging.
- `vdm server start --systemd` sends logs to the journal instead of the console.
- The `vdm wrap` command always uses systemd logging (no flag needed).
- View logs with `journalctl -t vandamme-proxy` (use `-f` to follow).
- If systemd is unavailable, logging falls back to console.
- Install with systemd dependency (required): already in base dependencies.
- For development without systemd, run without `--systemd` to keep console output.
- If running outside a systemd environment, the server will warn and fall back to console when `--systemd` is used.
Example:
vdm server start --systemd
vdm wrap run           # always systemd

This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: Report bugs and request features
- Discussions: Join community discussions
- Repository: GitHub
Built with ❤️ for the AI development community. Inspired by the need for seamless multi-provider integration in modern AI workflows.
Keywords: LLM gateway, API proxy, Claude Code, OpenAI, Anthropic, multi-provider, AI proxy, LLM router, API gateway, large language model, AI development, prompt engineering, model routing, API management