feat: support multiple LM Studio provider instances #10
Open
jleaders wants to merge 3 commits into agustif:main from
Conversation
Scans `config.provider` for all keys matching `/^lm.?studio/i` (e.g. `lmstudio`, `lm-studio`, `lm-studio-wooden`, `lm-studio-alienware`) and queries each one in parallel for model discovery. Previously only the exact key `lmstudio` was handled, meaning `lm-studio` and any additional named instances were silently ignored.

- enhance-config: extract `processHost()`, add `findLMStudioProviders()` to scan all `lm?studio` provider keys; run all hosts in parallel via `Promise.allSettled`
- config-hook: replace hardcoded `lmstudio` key check with `isLMStudioProviderKey()`; sum models across all providers for logging
- type-guards: update `isLMStudioProvider` to match `/^lm.?studio/i` so the `chat.params` hook correctly validates models on all instances
- test: add multi-provider test; fix `mockClear` → `mockReset` to prevent mock bleed-through between tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
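The scan-and-query flow described in this commit can be sketched roughly as follows. This is an illustrative sketch, not the plugin's actual code: `processHost()` is stubbed out, and all names and signatures here are assumptions.

```typescript
// Key matching: "lm" + optional separator + "studio", case-insensitive and
// unanchored at the end, so named instances like "lm-studio-wooden" also match.
const LM_STUDIO_KEY_RE = /^lm.?studio/i;

function isLMStudioProviderKey(key: string): boolean {
  return LM_STUDIO_KEY_RE.test(key);
}

// Stub standing in for the real processHost(), which would query one host's
// model list. Here a baseURL containing "down" simulates an unreachable host.
async function processHost(baseURL: string): Promise<string[]> {
  if (baseURL.includes("down")) throw new Error(`unreachable: ${baseURL}`);
  return [`model@${baseURL}`];
}

// Scan config.provider for matching keys and query every host in parallel.
// Promise.allSettled lets one unreachable host fail on its own without
// aborting discovery for the remaining hosts.
async function discoverAll(
  provider: Record<string, { baseURL: string }>,
): Promise<string[]> {
  const hosts = Object.entries(provider).filter(([key]) =>
    isLMStudioProviderKey(key),
  );
  const results = await Promise.allSettled(
    hosts.map(([, cfg]) => processHost(cfg.baseURL)),
  );
  return results.flatMap((r) => (r.status === "fulfilled" ? r.value : []));
}
```

With `Promise.all` a single offline instance would reject the whole discovery pass; `allSettled` is what makes extra named instances safe to configure.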
Switch model discovery from /v1/models to LM Studio's /api/v0/models
endpoint which exposes loaded_context_length, max_context_length, type,
publisher, and capabilities per model.
- Sets limit.context from loaded_context_length (the active window) falling
back to max_context_length, fixing the 0% context usage display in opencode
- Uses API-provided type field ('embeddings') instead of name heuristics for
embedding detection
- Uses publisher field for organizationOwner instead of name parsing
- Sets tool_call: true for models advertising 'tool_use' capability
- Health check continues to use /v1/models (broadly supported); discovery
now uses /api/v0/models
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
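The field mapping this commit describes can be sketched as below. The response shape and field names (`loaded_context_length`, `max_context_length`, `type`, `publisher`, `capabilities`) come from the bullets above; the `toModelInfo` helper and its output shape are illustrative, not the plugin's actual API.

```typescript
// Assumed shape of one entry from LM Studio's /api/v0/models response,
// limited to the fields the commit message mentions.
interface V0Model {
  id: string;
  type: string;                    // e.g. "llm" | "embeddings"
  publisher?: string;
  loaded_context_length?: number;  // context window of the loaded instance
  max_context_length?: number;
  capabilities?: string[];         // may include "tool_use"
}

function toModelInfo(m: V0Model) {
  return {
    id: m.id,
    // Prefer the active window; fall back to the model's maximum.
    contextLimit: m.loaded_context_length ?? m.max_context_length ?? 0,
    // API-provided type replaces name heuristics for embedding detection.
    embedding: m.type === "embeddings",
    // Publisher field replaces name parsing for the owner.
    organizationOwner: m.publisher ?? "",
    toolCall: m.capabilities?.includes("tool_use") ?? false,
  };
}
```

Using `loaded_context_length` first is what fixes the 0% context usage display: the limit reflects the window the model is actually loaded with, not its theoretical maximum.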
…nd tests Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
- Scans `config.provider` for all keys matching `/^lm.?studio/i` (e.g. `lmstudio`, `lm-studio`, `lm-studio-wooden`, `lm-studio-alienware`) and queries each in parallel for model discovery
- Previously only the exact key `lmstudio` was handled — `lm-studio` and any additional named instances were silently ignored
- Updates the `isLMStudioProvider` type guard to match the same pattern so the `chat.params` hook correctly validates models on all instances

Changes
- enhance-config: extract `processHost()`, add `findLMStudioProviders()` to scan all `lm?studio` provider keys; run all hosts in parallel via `Promise.allSettled`
- config-hook: replace hardcoded `lmstudio` key check with `isLMStudioProviderKey()`; sum models across all providers for logging
- type-guards: update `isLMStudioProvider` to match `/^lm.?studio/i`
- test: add multi-provider test; fix `mockClear` → `mockReset` to prevent mock bleed-through between tests

Usage
Configure multiple LM Studio instances in `opencode.json` and the plugin will discover models from all of them automatically:

```json
{
  "provider": {
    "lm-studio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": { "baseURL": "http://localhost:1234/v1" }
    },
    "lm-studio-wooden": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (wooden)",
      "options": { "baseURL": "http://wooden.local:1234/v1" }
    }
  }
}
```

Test plan
- Tests pass (`npm test`)
- Verified the `lm-studio-*` providers each populate with models from their respective hosts

🤖 Generated with Claude Code