Conversation

@google-labs-jules
Contributor

Added support for vLLM and LM Studio as explicit local providers.
Implemented smart default URL logic: if the user selects a specific provider (e.g., vLLM) and hasn't manually overridden the base URL, the extension automatically uses the provider's standard port (e.g., 8000 for vLLM, 1234 for LM Studio).
Removed the hardcoded "mistral" default for the local model, allowing it to be empty as requested.


PR created automatically by Jules for task 4977356095703715999 started by @gasatrya
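Below is a minimal sketch of the smart-default URL logic described above. The helper name, type alias, and map are illustrative only and are not the extension's actual exports:

```typescript
// Illustrative sketch only: names and shapes are hypothetical, not the extension's real API.
type LocalProvider = 'ollama' | 'vllm' | 'lmstudio';

// Standard local ports: Ollama 11434, vLLM 8000, LM Studio 1234.
const DEFAULT_BASE_URLS: Record<LocalProvider, string> = {
  ollama: 'http://localhost:11434',
  vllm: 'http://localhost:8000',
  lmstudio: 'http://localhost:1234',
};

// An explicit base URL always wins; an empty setting falls back to the provider's standard port.
function resolveLocalBaseUrl(provider: LocalProvider, userBaseUrl: string): string {
  return userBaseUrl.trim() !== '' ? userBaseUrl : DEFAULT_BASE_URLS[provider];
}

// Example: selecting vLLM without overriding the URL yields http://localhost:8000.
console.log(resolveLocalBaseUrl('vllm', ''));
console.log(resolveLocalBaseUrl('lmstudio', 'http://192.168.1.5:1234')); // override respected
```

With this shape, an explicit base URL always takes precedence, and leaving the setting empty opts into the provider's conventional localhost port.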

- Update `package.json` to include 'vllm' and 'lmstudio' in provider options and remove default local model.
- Update `src/core/config.ts` to export `DEFAULT_LOCAL_URL` and set the default `localModel` to an empty string.
- Update `src/providers/local/index.ts` to register `vllm` and `lmstudio` with smart default URLs.
- Update `src/test/config.test.ts` to verify new defaults.
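
A rough sketch of what that defaults test could assert, using Node's built-in test runner for illustration; the import path and the `getDefaultConfig` helper are assumptions, and the repository's real test harness may differ:

```typescript
// Illustrative sketch only: the import path and getDefaultConfig helper are
// assumptions about what src/core/config.ts exposes.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { DEFAULT_LOCAL_URL, getDefaultConfig } from '../core/config';

test('localModel has no hardcoded default', () => {
  // The previous "mistral" default was removed, so no model is preselected.
  assert.equal(getDefaultConfig().localModel, '');
});

test('DEFAULT_LOCAL_URL is exported and non-empty', () => {
  assert.equal(typeof DEFAULT_LOCAL_URL, 'string');
  assert.notEqual(DEFAULT_LOCAL_URL, '');
});
```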
@google-labs-jules
Contributor Author

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly afterward. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

- Rename 'local' provider to 'ollama' in `src/providers/local/index.ts` and `src/core/config.ts`.
- Update `package.json` default for `predicteCommit.localBaseUrl` to `""`.
- Implement logic to use provider-specific defaults (Ollama: 11434, vLLM: 8000, LM Studio: 1234) only when `localBaseUrl` is empty.
- Update `predicteCommit.provider` description and `predicteCommit.useLocal` description to be more accurate.
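
As a sketch of how the fallback described above might read those settings: only `vscode.workspace.getConfiguration` and the setting keys come from this PR; the `getLocalBaseUrl` name and the `'ollama'` fallback for unknown providers are assumptions:

```typescript
import * as vscode from 'vscode';

// Provider-specific defaults (Ollama 11434, vLLM 8000, LM Studio 1234),
// applied only when predicteCommit.localBaseUrl is left empty.
const PROVIDER_DEFAULTS: Record<string, string> = {
  ollama: 'http://localhost:11434',
  vllm: 'http://localhost:8000',
  lmstudio: 'http://localhost:1234',
};

// Hypothetical helper name; falling back to 'ollama' for an unknown provider is an assumption.
export function getLocalBaseUrl(): string {
  const cfg = vscode.workspace.getConfiguration('predicteCommit');
  const provider = cfg.get<string>('provider', 'ollama');
  const userUrl = cfg.get<string>('localBaseUrl', '');

  // Only use the provider's standard port when the user hasn't set a URL themselves.
  return userUrl.trim() !== '' ? userUrl : (PROVIDER_DEFAULTS[provider] ?? PROVIDER_DEFAULTS.ollama);
}
```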
@gasatrya gasatrya marked this pull request as ready for review January 15, 2026 09:31
@gasatrya gasatrya merged commit e699422 into main Jan 15, 2026
3 checks passed
@gasatrya gasatrya deleted the feat-vllm-lmstudio-providers-4977356095703715999 branch January 15, 2026 14:54
@github-actions

🎉 This PR is included in version 1.0.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
