feat: support multiple LM Studio provider instances#10

Open
jleaders wants to merge 3 commits into agustif:main from jleaders:feat/multi-host-support
Conversation

@jleaders

Summary

  • Scans config.provider for all keys matching /^lm.?studio/i (e.g. lmstudio, lm-studio, lm-studio-wooden, lm-studio-alienware) and queries each in parallel for model discovery
  • Previously only the exact key lmstudio was handled — lm-studio and any additional named instances were silently ignored
  • Updates isLMStudioProvider type guard to match the same pattern so the chat.params hook correctly validates models on all instances
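The key-matching described above can be sketched roughly as follows. This is a minimal illustration of the pattern, assuming helper names (`isLMStudioProviderKey`, `findLMStudioProviders`) taken from the change list below, not the plugin's exact implementation:

```typescript
// Any provider key beginning with "lmstudio" or "lm-studio" (case-insensitive,
// with an optional single separator character) is treated as an LM Studio instance.
const LM_STUDIO_KEY = /^lm.?studio/i;

function isLMStudioProviderKey(key: string): boolean {
  return LM_STUDIO_KEY.test(key);
}

// Collect every matching provider key from the config, not just the exact
// "lmstudio" key that was previously hardcoded.
function findLMStudioProviders(provider: Record<string, unknown>): string[] {
  return Object.keys(provider).filter(isLMStudioProviderKey);
}
```

With this guard, named instances such as `lm-studio-wooden` and `lm-studio-alienware` match alongside the plain `lmstudio` and `lm-studio` keys.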

Changes

  • enhance-config: extract processHost(), add findLMStudioProviders() to scan all provider keys matching /^lm.?studio/i; run all hosts in parallel via Promise.allSettled
  • config-hook: replace hardcoded lmstudio key check with isLMStudioProviderKey(); sum models across all providers for logging
  • type-guards: update isLMStudioProvider to match /^lm.?studio/i
  • test: add multi-provider test; fix mockClear → mockReset to prevent mock bleed-through between tests
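The parallel-discovery step can be sketched as below. This is a hedged sketch under assumed names (`discoverAll`, and `processHost` injected as a parameter); the real `processHost` would query one host's API, stubbed out here:

```typescript
// Query every LM Studio host concurrently. Promise.allSettled ensures an
// offline host rejects in isolation instead of failing the whole discovery.
async function discoverAll(
  hosts: string[],
  processHost: (host: string) => Promise<string[]>,
): Promise<string[]> {
  const results = await Promise.allSettled(hosts.map(processHost));
  // Keep models from reachable hosts; rejected entries (offline hosts) are skipped.
  return results.flatMap((r) => (r.status === "fulfilled" ? r.value : []));
}
```

This is why the test plan below can claim offline hosts are skipped gracefully: a rejected promise only drops that host's models.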

Usage

Configure multiple LM Studio instances in opencode.json and the plugin will discover models from all of them automatically:

{
  "provider": {
    "lm-studio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": { "baseURL": "http://localhost:1234/v1" }
    },
    "lm-studio-wooden": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (wooden)",
      "options": { "baseURL": "http://wooden.local:1234/v1" }
    }
  }
}

Test plan

  • All 17 existing tests pass (npm test)
  • Single provider config works as before
  • Multiple lm-studio-* providers each populate with models from their respective hosts
  • Offline hosts are skipped gracefully without blocking other providers

🤖 Generated with Claude Code

jleaders and others added 3 commits February 24, 2026 17:59
Scans config.provider for all keys matching /^lm.?studio/i (e.g.
lmstudio, lm-studio, lm-studio-wooden, lm-studio-alienware) and
queries each one in parallel for model discovery.

Previously only the exact key 'lmstudio' was handled, meaning
'lm-studio' and any additional named instances were silently ignored.

- enhance-config: extract processHost(), add findLMStudioProviders()
  to scan all lm?studio provider keys; run all hosts in parallel via
  Promise.allSettled
- config-hook: replace hardcoded lmstudio key check with
  isLMStudioProviderKey(); sum models across all providers for logging
- type-guards: update isLMStudioProvider to match /^lm.?studio/i so
  chat.params hook correctly validates models on all instances
- test: add multi-provider test; fix mockClear → mockReset to prevent
  mock bleed-through between tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Switch model discovery from /v1/models to LM Studio's /api/v0/models
endpoint which exposes loaded_context_length, max_context_length, type,
publisher, and capabilities per model.

- Sets limit.context from loaded_context_length (the active window) falling
  back to max_context_length, fixing the 0% context usage display in opencode
- Uses API-provided type field ('embeddings') instead of name heuristics for
  embedding detection
- Uses publisher field for organizationOwner instead of name parsing
- Sets tool_call: true for models advertising 'tool_use' capability
- Health check continues to use /v1/models (broadly supported); discovery
  now uses /api/v0/models
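The field mapping this commit describes can be sketched as follows; the response shape and the target entry shape are assumptions based on the fields named above, not the plugin's actual types:

```typescript
// Assumed shape of one record from LM Studio's /api/v0/models endpoint.
type V0Model = {
  id: string;
  type?: string;                  // "embeddings" for embedding models
  publisher?: string;
  loaded_context_length?: number; // context window of the currently loaded model
  max_context_length?: number;
  capabilities?: string[];        // may include "tool_use"
};

function toModelEntry(m: V0Model) {
  return {
    id: m.id,
    // Prefer the active window; fall back to the model's maximum.
    limit: { context: m.loaded_context_length ?? m.max_context_length ?? 0 },
    // API-provided type field instead of name heuristics.
    embedding: m.type === "embeddings",
    organizationOwner: m.publisher ?? "",
    tool_call: (m.capabilities ?? []).includes("tool_use"),
  };
}
```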

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…nd tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
