3 changes: 2 additions & 1 deletion .dev.vars.example
Original file line number Diff line number Diff line change
@@ -6,9 +6,10 @@ ANTHROPIC_API_KEY=sk-ant-...
# OPENAI_API_KEY=sk-...

# Cloudflare AI Gateway (alternative to direct provider keys)
# CLOUDFLARE_AI_GATEWAY_API_KEY=your-gateway-api-key
# CLOUDFLARE_AI_GATEWAY_API_KEY=your-provider-api-key
# CF_AI_GATEWAY_ACCOUNT_ID=your-account-id
# CF_AI_GATEWAY_GATEWAY_ID=your-gateway-id
# CF_AI_GATEWAY_MODEL=workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast

# Legacy AI Gateway (still supported)
# AI_GATEWAY_API_KEY=your-key
10 changes: 9 additions & 1 deletion .github/workflows/test.yml
@@ -56,6 +56,9 @@ jobs:
env:
DISCORD_BOT_TOKEN: "fake-discord-bot-token-for-e2e"
DISCORD_DM_POLICY: "pairing"
- name: workers-ai
env:
CF_AI_GATEWAY_MODEL: "workers-ai/@cf/openai/gpt-oss-120b"

name: e2e (${{ matrix.config.name }})

@@ -98,7 +101,11 @@ jobs:
CF_ACCESS_TEAM_DOMAIN: ${{ secrets.E2E_CF_ACCESS_TEAM_DOMAIN }}
R2_ACCESS_KEY_ID: ${{ secrets.E2E_R2_ACCESS_KEY_ID }}
R2_SECRET_ACCESS_KEY: ${{ secrets.E2E_R2_SECRET_ACCESS_KEY }}
# AI provider (optional, for chat tests)
# AI provider — Cloudflare AI Gateway (preferred)
CLOUDFLARE_AI_GATEWAY_API_KEY: ${{ secrets.CLOUDFLARE_AI_GATEWAY_API_KEY }}
CF_AI_GATEWAY_ACCOUNT_ID: ${{ secrets.CF_AI_GATEWAY_ACCOUNT_ID }}
CF_AI_GATEWAY_GATEWAY_ID: ${{ secrets.CF_AI_GATEWAY_GATEWAY_ID }}
# AI provider — legacy (still supported)
AI_GATEWAY_API_KEY: ${{ secrets.AI_GATEWAY_API_KEY }}
AI_GATEWAY_BASE_URL: ${{ secrets.AI_GATEWAY_BASE_URL }}
# Unique test run ID for parallel isolation
@@ -108,6 +115,7 @@ jobs:
TELEGRAM_DM_POLICY: ${{ matrix.config.env.TELEGRAM_DM_POLICY }}
DISCORD_BOT_TOKEN: ${{ matrix.config.env.DISCORD_BOT_TOKEN }}
DISCORD_DM_POLICY: ${{ matrix.config.env.DISCORD_DM_POLICY }}
CF_AI_GATEWAY_MODEL: ${{ matrix.config.env.CF_AI_GATEWAY_MODEL }}
run: cctr -vv test/e2e

- name: Convert video and generate thumbnail
75 changes: 56 additions & 19 deletions README.md
@@ -65,9 +65,10 @@ npm install
# Set your API key (direct Anthropic access)
npx wrangler secret put ANTHROPIC_API_KEY

# Or use AI Gateway instead (see "Optional: Cloudflare AI Gateway" below)
# npx wrangler secret put AI_GATEWAY_API_KEY
# npx wrangler secret put AI_GATEWAY_BASE_URL
# Or use Cloudflare AI Gateway instead (see "Optional: Cloudflare AI Gateway" below)
# npx wrangler secret put CLOUDFLARE_AI_GATEWAY_API_KEY
# npx wrangler secret put CF_AI_GATEWAY_ACCOUNT_ID
# npx wrangler secret put CF_AI_GATEWAY_GATEWAY_ID

# Generate and set a gateway token (required for remote access)
# Save this token - you'll need it to access the Control UI
@@ -348,42 +349,78 @@ See `skills/cloudflare-browser/SKILL.md` for full documentation.

## Optional: Cloudflare AI Gateway

You can route API requests through [Cloudflare AI Gateway](https://developers.cloudflare.com/ai-gateway/) for caching, rate limiting, analytics, and cost tracking. AI Gateway supports multiple providers — configure your preferred provider in the gateway and use these env vars:
You can route API requests through [Cloudflare AI Gateway](https://developers.cloudflare.com/ai-gateway/) for caching, rate limiting, analytics, and cost tracking. OpenClaw has native support for Cloudflare AI Gateway as a first-class provider.

AI Gateway acts as a proxy between OpenClaw and your AI provider (e.g., Anthropic). Requests are sent to `https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/anthropic` instead of directly to `api.anthropic.com`, giving you Cloudflare's analytics, caching, and rate limiting. You still need a provider API key (e.g., your Anthropic API key) — the gateway forwards it to the upstream provider.
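As a sketch of the routing described above (the function and names here are illustrative, not the project's actual code), the gateway endpoint is just the documented Cloudflare URL pattern with your account ID, gateway ID, and provider slug filled in:

```typescript
// Sketch: build the Cloudflare AI Gateway endpoint for a given provider.
// Requests that would otherwise go to api.anthropic.com are sent here instead;
// the gateway forwards your provider API key to the upstream provider.
function gatewayUrl(accountId: string, gatewayId: string, provider: string): string {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/${provider}`;
}

console.log(gatewayUrl("abc123", "my-gateway", "anthropic"));
// → https://gateway.ai.cloudflare.com/v1/abc123/my-gateway/anthropic
```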

### Setup

1. Create an AI Gateway in the [AI Gateway section](https://dash.cloudflare.com/?to=/:account/ai/ai-gateway/create-gateway) of the Cloudflare Dashboard.
2. Add a provider (e.g., Anthropic) to your gateway
3. Set the gateway secrets:

You'll find the base URL on the Overview tab of your newly created gateway. At the bottom of the page, expand the **Native API/SDK Examples** section and select "Anthropic".
2. Set the three required secrets:

```bash
# Your provider's API key (e.g., Anthropic API key)
npx wrangler secret put AI_GATEWAY_API_KEY
# Your AI provider's API key (e.g., your Anthropic API key).
# This is passed through the gateway to the upstream provider.
npx wrangler secret put CLOUDFLARE_AI_GATEWAY_API_KEY

# Your Cloudflare account ID
npx wrangler secret put CF_AI_GATEWAY_ACCOUNT_ID

# Your AI Gateway endpoint URL
npx wrangler secret put AI_GATEWAY_BASE_URL
# Enter: https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/anthropic
# Your AI Gateway ID (from the gateway overview page)
npx wrangler secret put CF_AI_GATEWAY_GATEWAY_ID
```

4. Redeploy:
All three are required. OpenClaw constructs the gateway URL from the account ID and gateway ID, and passes the API key to the upstream provider through the gateway.

3. Redeploy:

```bash
npm run deploy
```

The `AI_GATEWAY_*` variables take precedence over `ANTHROPIC_*` if both are set.
When Cloudflare AI Gateway is configured, it takes precedence over direct `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`.

### Choosing a Model

By default, OpenClaw routes requests through the gateway to Anthropic's Claude Sonnet 4.5. To use a different model or provider, set `CF_AI_GATEWAY_MODEL` in the format `provider/model-id`:

```bash
npx wrangler secret put CF_AI_GATEWAY_MODEL
# Enter: workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast
```

This works with any [AI Gateway provider](https://developers.cloudflare.com/ai-gateway/usage/providers/):

| Provider | Example `CF_AI_GATEWAY_MODEL` value | API key is... |
|----------|-------------------------------------|---------------|
| Workers AI | `workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast` | Cloudflare API token |
| OpenAI | `openai/gpt-4o` | OpenAI API key |
| Anthropic | `anthropic/claude-sonnet-4-5` | Anthropic API key |
| Groq | `groq/llama-3.3-70b` | Groq API key |

**Note:** `CLOUDFLARE_AI_GATEWAY_API_KEY` must match the provider you're using — it's your provider's API key, forwarded through the gateway. You can only use one provider at a time through the gateway. For multiple providers, use direct keys (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) alongside the gateway config.
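One subtlety in the `provider/model-id` format: the value is split at the *first* slash only, because Workers AI model IDs (e.g. `@cf/meta/...`) contain slashes themselves. A minimal sketch of that parsing (illustrative, not the project's actual code):

```typescript
// Sketch: split CF_AI_GATEWAY_MODEL into provider and model ID at the
// FIRST slash, so slashes inside the model ID are preserved.
function parseGatewayModel(raw: string): { provider: string; modelId: string } {
  const slashIdx = raw.indexOf("/");
  if (slashIdx === -1) throw new Error(`Expected "provider/model-id", got: ${raw}`);
  return {
    provider: raw.slice(0, slashIdx),  // e.g. "workers-ai"
    modelId: raw.slice(slashIdx + 1),  // e.g. "@cf/meta/llama-3.3-70b-instruct-fp8-fast"
  };
}

const parsed = parseGatewayModel("workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast");
console.log(parsed.provider); // → workers-ai
```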

#### Workers AI with Unified Billing

With [Unified Billing](https://developers.cloudflare.com/ai-gateway/features/unified-billing/), you can use Workers AI models without a separate provider API key — Cloudflare bills you directly. Set `CLOUDFLARE_AI_GATEWAY_API_KEY` to your [AI Gateway authentication token](https://developers.cloudflare.com/ai-gateway/configuration/authentication/) (the `cf-aig-authorization` token).

### Legacy AI Gateway Configuration

The previous `AI_GATEWAY_API_KEY` + `AI_GATEWAY_BASE_URL` approach is still supported for backward compatibility but is deprecated in favor of the native configuration above.

## All Secrets Reference

| Secret | Required | Description |
|--------|----------|-------------|
| `AI_GATEWAY_API_KEY` | Yes* | API key for your AI Gateway provider (requires `AI_GATEWAY_BASE_URL`) |
| `AI_GATEWAY_BASE_URL` | Yes* | AI Gateway endpoint URL (required when using `AI_GATEWAY_API_KEY`) |
| `ANTHROPIC_API_KEY` | Yes* | Direct Anthropic API key (fallback if AI Gateway not configured) |
| `ANTHROPIC_BASE_URL` | No | Direct Anthropic API base URL (fallback) |
| `CLOUDFLARE_AI_GATEWAY_API_KEY` | Yes* | Your AI provider's API key, passed through the gateway (e.g., your Anthropic API key). Requires `CF_AI_GATEWAY_ACCOUNT_ID` and `CF_AI_GATEWAY_GATEWAY_ID` |
| `CF_AI_GATEWAY_ACCOUNT_ID` | Yes* | Your Cloudflare account ID (used to construct the gateway URL) |
| `CF_AI_GATEWAY_GATEWAY_ID` | Yes* | Your AI Gateway ID (used to construct the gateway URL) |
| `CF_AI_GATEWAY_MODEL` | No | Override default model: `provider/model-id` (e.g. `workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast`). See [Choosing a Model](#choosing-a-model) |
| `ANTHROPIC_API_KEY` | Yes* | Direct Anthropic API key (alternative to AI Gateway) |
| `ANTHROPIC_BASE_URL` | No | Direct Anthropic API base URL |
| `OPENAI_API_KEY` | No | OpenAI API key (alternative provider) |
| `AI_GATEWAY_API_KEY` | No | Legacy AI Gateway API key (deprecated, use `CLOUDFLARE_AI_GATEWAY_API_KEY` instead) |
| `AI_GATEWAY_BASE_URL` | No | Legacy AI Gateway endpoint URL (deprecated) |
| `CF_ACCESS_TEAM_DOMAIN` | Yes* | Cloudflare Access team domain (required for admin UI) |
| `CF_ACCESS_AUD` | Yes* | Cloudflare Access application audience (required for admin UI) |
| `MOLTBOT_GATEWAY_TOKEN` | Yes | Gateway token for authentication (pass via `?token=` query param) |
15 changes: 15 additions & 0 deletions src/gateway/env.test.ts
@@ -129,6 +129,21 @@ describe('buildEnvVars', () => {
expect(result.OPENCLAW_DEV_MODE).toBe('true');
});

// AI Gateway model override
it('passes CF_AI_GATEWAY_MODEL to container', () => {
const env = createMockEnv({
CF_AI_GATEWAY_MODEL: 'workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast',
});
const result = buildEnvVars(env);
expect(result.CF_AI_GATEWAY_MODEL).toBe('workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast');
});

it('passes CF_ACCOUNT_ID to container', () => {
const env = createMockEnv({ CF_ACCOUNT_ID: 'acct-123' });
const result = buildEnvVars(env);
expect(result.CF_ACCOUNT_ID).toBe('acct-123');
});

it('combines all env vars correctly', () => {
const env = createMockEnv({
ANTHROPIC_API_KEY: 'sk-key',
2 changes: 2 additions & 0 deletions src/gateway/env.ts
@@ -45,6 +45,8 @@ export function buildEnvVars(env: MoltbotEnv): Record<string, string> {
if (env.DISCORD_DM_POLICY) envVars.DISCORD_DM_POLICY = env.DISCORD_DM_POLICY;
if (env.SLACK_BOT_TOKEN) envVars.SLACK_BOT_TOKEN = env.SLACK_BOT_TOKEN;
if (env.SLACK_APP_TOKEN) envVars.SLACK_APP_TOKEN = env.SLACK_APP_TOKEN;
if (env.CF_AI_GATEWAY_MODEL) envVars.CF_AI_GATEWAY_MODEL = env.CF_AI_GATEWAY_MODEL;
if (env.CF_ACCOUNT_ID) envVars.CF_ACCOUNT_ID = env.CF_ACCOUNT_ID;
if (env.CDP_SECRET) envVars.CDP_SECRET = env.CDP_SECRET;
if (env.WORKER_URL) envVars.WORKER_URL = env.WORKER_URL;

16 changes: 11 additions & 5 deletions src/routes/api.ts
@@ -38,9 +38,11 @@ adminApi.get('/devices', async (c) => {
await ensureMoltbotGateway(sandbox, c.env);

// Run OpenClaw CLI to list devices
// Must specify --url to connect to the gateway running in the same container
// Must specify --url and --token (OpenClaw v2026.2.3 requires explicit credentials with --url)
const token = c.env.MOLTBOT_GATEWAY_TOKEN;
const tokenArg = token ? ` --token ${token}` : '';
const proc = await sandbox.startProcess(
'openclaw devices list --json --url ws://localhost:18789',
`openclaw devices list --json --url ws://localhost:18789${tokenArg}`,
);
await waitForProcess(proc, CLI_TIMEOUT_MS);

@@ -93,8 +95,10 @@ adminApi.post('/devices/:requestId/approve', async (c) => {
await ensureMoltbotGateway(sandbox, c.env);

// Run OpenClaw CLI to approve the device
const token = c.env.MOLTBOT_GATEWAY_TOKEN;
const tokenArg = token ? ` --token ${token}` : '';
const proc = await sandbox.startProcess(
`openclaw devices approve ${requestId} --url ws://localhost:18789`,
`openclaw devices approve ${requestId} --url ws://localhost:18789${tokenArg}`,
);
await waitForProcess(proc, CLI_TIMEOUT_MS);

@@ -127,8 +131,10 @@ adminApi.post('/devices/approve-all', async (c) => {
await ensureMoltbotGateway(sandbox, c.env);

// First, get the list of pending devices
const token = c.env.MOLTBOT_GATEWAY_TOKEN;
const tokenArg = token ? ` --token ${token}` : '';
const listProc = await sandbox.startProcess(
'openclaw devices list --json --url ws://localhost:18789',
`openclaw devices list --json --url ws://localhost:18789${tokenArg}`,
);
await waitForProcess(listProc, CLI_TIMEOUT_MS);

@@ -158,7 +164,7 @@ adminApi.post('/devices/approve-all', async (c) => {
try {
// eslint-disable-next-line no-await-in-loop -- sequential device approval required
const approveProc = await sandbox.startProcess(
`openclaw devices approve ${device.requestId} --url ws://localhost:18789`,
`openclaw devices approve ${device.requestId} --url ws://localhost:18789${tokenArg}`,
);
// eslint-disable-next-line no-await-in-loop
await waitForProcess(approveProc, CLI_TIMEOUT_MS);
1 change: 1 addition & 0 deletions src/types.ts
@@ -11,6 +11,7 @@ export interface MoltbotEnv {
CF_AI_GATEWAY_ACCOUNT_ID?: string; // Cloudflare account ID for AI Gateway
CF_AI_GATEWAY_GATEWAY_ID?: string; // AI Gateway ID
CLOUDFLARE_AI_GATEWAY_API_KEY?: string; // API key for requests through the gateway
CF_AI_GATEWAY_MODEL?: string; // Override model: "provider/model-id" e.g. "workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast"
// Legacy AI Gateway configuration (still supported for backward compat)
AI_GATEWAY_API_KEY?: string; // API key for the provider configured in AI Gateway
AI_GATEWAY_BASE_URL?: string; // AI Gateway URL (e.g., https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/anthropic)
59 changes: 48 additions & 11 deletions start-openclaw.sh
@@ -189,16 +189,54 @@ if (process.env.OPENCLAW_DEV_MODE === 'true') {
config.gateway.controlUi.allowInsecureAuth = true;
}

// Legacy AI Gateway base URL override — patch into provider config
// (only needed when using AI_GATEWAY_BASE_URL, not native cloudflare-ai-gateway)
if (process.env.ANTHROPIC_BASE_URL && process.env.ANTHROPIC_API_KEY) {
const baseUrl = process.env.ANTHROPIC_BASE_URL.replace(/\/+$/, '');
config.models = config.models || {};
config.models.providers = config.models.providers || {};
config.models.providers.anthropic = config.models.providers.anthropic || {};
config.models.providers.anthropic.baseUrl = baseUrl;
config.models.providers.anthropic.apiKey = process.env.ANTHROPIC_API_KEY;
console.log('Patched Anthropic provider with base URL:', baseUrl);
// Legacy AI Gateway base URL override:
// ANTHROPIC_BASE_URL is picked up natively by the Anthropic SDK,
// so we don't need to patch the provider config. Writing a provider
// entry without a models array breaks OpenClaw's config validation.

// AI Gateway model override (CF_AI_GATEWAY_MODEL=provider/model-id)
// Adds a provider entry for any AI Gateway provider and sets it as default model.
// Examples:
// workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast
// openai/gpt-4o
// anthropic/claude-sonnet-4-5
if (process.env.CF_AI_GATEWAY_MODEL) {
const raw = process.env.CF_AI_GATEWAY_MODEL;
const slashIdx = raw.indexOf('/');
const gwProvider = raw.substring(0, slashIdx);
const modelId = raw.substring(slashIdx + 1);

const accountId = process.env.CF_AI_GATEWAY_ACCOUNT_ID;
const gatewayId = process.env.CF_AI_GATEWAY_GATEWAY_ID;
const apiKey = process.env.CLOUDFLARE_AI_GATEWAY_API_KEY;

let baseUrl;
if (accountId && gatewayId) {
baseUrl = 'https://gateway.ai.cloudflare.com/v1/' + accountId + '/' + gatewayId + '/' + gwProvider;
if (gwProvider === 'workers-ai') baseUrl += '/v1';
} else if (gwProvider === 'workers-ai' && process.env.CF_ACCOUNT_ID) {
baseUrl = 'https://api.cloudflare.com/client/v4/accounts/' + process.env.CF_ACCOUNT_ID + '/ai/v1';
}

if (baseUrl && apiKey) {
const api = gwProvider === 'anthropic' ? 'anthropic-messages' : 'openai-completions';
const providerName = 'cf-ai-gw-' + gwProvider;

config.models = config.models || {};
config.models.providers = config.models.providers || {};
config.models.providers[providerName] = {
baseUrl: baseUrl,
apiKey: apiKey,
api: api,
models: [{ id: modelId, name: modelId, contextWindow: 131072, maxTokens: 8192 }],
};
config.agents = config.agents || {};
config.agents.defaults = config.agents.defaults || {};
config.agents.defaults.model = { primary: providerName + '/' + modelId };
console.log('AI Gateway model override: provider=' + providerName + ' model=' + modelId + ' via ' + baseUrl);
} else {
console.warn('CF_AI_GATEWAY_MODEL set but missing required config (account ID, gateway ID, or API key)');
}
}

// Telegram configuration
@@ -244,7 +282,6 @@ if (process.env.SLACK_BOT_TOKEN && process.env.SLACK_APP_TOKEN) {

fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
console.log('Configuration patched successfully');
console.log('Config:', JSON.stringify(config, null, 2));
EOFPATCH

# ============================================================
2 changes: 1 addition & 1 deletion test/e2e/_setup.txt
@@ -34,7 +34,7 @@ WORKER_URL=$(cat "$CCTR_FIXTURE_DIR/worker-url.txt")
./pw --session=moltworker-e2e open "$WORKER_URL/?token=$TOKEN"
# Wait for pairing required message (worker shows loading screen first, then UI loads)
./pw --session=moltworker-e2e run-code "async page => {
await page.waitForSelector('text=Pairing required', { timeout: 300000 });
await page.waitForSelector('text=Pairing required', { timeout: 480000 });
}"
echo "Worker is ready"
---
31 changes: 31 additions & 0 deletions test/e2e/_teardown.txt
@@ -1,3 +1,34 @@
===
dump gateway logs for debugging
===
WORKER_URL=$(cat "$CCTR_FIXTURE_DIR/worker-url.txt" 2>/dev/null || echo "")
if [ -n "$WORKER_URL" ]; then
PROCS=$(./curl-auth -s "$WORKER_URL/debug/processes" 2>/dev/null || echo "")
PROC_ID=$(echo "$PROCS" | grep -o '"id":"[^"]*"' | head -1 | cut -d'"' -f4)
if [ -n "$PROC_ID" ]; then
echo "=== Gateway process logs ($PROC_ID) ==="
./curl-auth -s "$WORKER_URL/debug/logs?id=$PROC_ID" 2>/dev/null | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
if d.get('stdout'): print('STDOUT:', d['stdout'][-3000:])
if d.get('stderr'): print('STDERR:', d['stderr'][-3000:])
except Exception: print('Failed to parse logs')
" || echo "Failed to fetch logs"
else
echo "No gateway process found"
echo "Processes: $PROCS"
fi
else
echo "No worker URL found"
fi
echo "dump complete"
---
{{ output }}
---
where
* output contains "dump complete"

===
stop video recording
===
15 changes: 15 additions & 0 deletions test/e2e/fixture/server/deploy
@@ -70,12 +70,27 @@ echo "$R2_BUCKET" | npx wrangler secret put R2_BUCKET_NAME --name "$WORKER_NAME" >&2
echo "$CLOUDFLARE_ACCOUNT_ID" | npx wrangler secret put CF_ACCOUNT_ID --name "$WORKER_NAME" >&2

# Set AI provider keys if available
# Cloudflare AI Gateway (preferred)
if [ -n "$CLOUDFLARE_AI_GATEWAY_API_KEY" ]; then
echo "$CLOUDFLARE_AI_GATEWAY_API_KEY" | npx wrangler secret put CLOUDFLARE_AI_GATEWAY_API_KEY --name "$WORKER_NAME" >&2
fi
if [ -n "$CF_AI_GATEWAY_ACCOUNT_ID" ]; then
echo "$CF_AI_GATEWAY_ACCOUNT_ID" | npx wrangler secret put CF_AI_GATEWAY_ACCOUNT_ID --name "$WORKER_NAME" >&2
fi
if [ -n "$CF_AI_GATEWAY_GATEWAY_ID" ]; then
echo "$CF_AI_GATEWAY_GATEWAY_ID" | npx wrangler secret put CF_AI_GATEWAY_GATEWAY_ID --name "$WORKER_NAME" >&2
fi
if [ -n "$CF_AI_GATEWAY_MODEL" ]; then
echo "$CF_AI_GATEWAY_MODEL" | npx wrangler secret put CF_AI_GATEWAY_MODEL --name "$WORKER_NAME" >&2
fi
# Legacy AI Gateway (still supported)
if [ -n "$AI_GATEWAY_API_KEY" ]; then
echo "$AI_GATEWAY_API_KEY" | npx wrangler secret put AI_GATEWAY_API_KEY --name "$WORKER_NAME" >&2
fi
if [ -n "$AI_GATEWAY_BASE_URL" ]; then
echo "$AI_GATEWAY_BASE_URL" | npx wrangler secret put AI_GATEWAY_BASE_URL --name "$WORKER_NAME" >&2
fi
# Direct provider keys
if [ -n "$ANTHROPIC_API_KEY" ]; then
echo "$ANTHROPIC_API_KEY" | npx wrangler secret put ANTHROPIC_API_KEY --name "$WORKER_NAME" >&2
fi