OpenAI-Compatible Client for LM Studio

Turn LM Studio into a cloud-ready powerhouse. Access local and cloud inference simultaneously in the same app! This fork lets you chat with Cerebras, Groq, OpenRouter, Claude, GPT-4o, Gemini-2.5, DeepSeek, Kimi-K2, GLM-4.5—or any OpenAI-shaped endpoint—without ever leaving the comfy LM Studio UI.
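
Under the hood, “OpenAI-shaped” just means the provider speaks the standard /v1/chat/completions protocol. As a rough sketch (the base URL, key, and model name below are placeholders, not values shipped with this plugin), any such endpoint can be exercised with the official openai Python client:

```python
from openai import OpenAI

# Placeholder endpoint and credentials -- substitute your provider's values.
client = OpenAI(
    base_url="https://api.example.com/v1",  # any OpenAI-compatible base URL
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="your-model-name",  # provider-specific model identifier
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```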

What’s new

  • Native OpenAI sampling knobs (temperature, top-p, frequency penalty, etc.) exposed in the GUI (see the sketch after this list)
  • Zero config: paste your API key, pick the model, start vibing
  • Keeps all local-model superpowers intact—switch between cloud and local on the fly
  • Custom system prompt support
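
The sampling knobs and the custom system prompt map one-to-one onto standard OpenAI chat-completion parameters. A minimal sketch, again with a placeholder endpoint and model name:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

resp = client.chat.completions.create(
    model="your-model-name",
    messages=[
        # A custom system prompt corresponds to a "system" message.
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain top-p sampling in one sentence."},
    ],
    temperature=0.7,        # sampling randomness
    top_p=0.9,              # nucleus-sampling cutoff
    frequency_penalty=0.5,  # penalize frequently repeated tokens
)
print(resp.choices[0].message.content)
```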

Install in 5 seconds

  1. Navigate to LM Studio Hub
  2. Hit “Run in LM Studio” on the plugin page
  3. Done! The plugin appears in the Chat view under the Your Generators section

Full-text search

Navigate instantly between search hits with built-in full-text search across your entire chat history.


Quick start

  1. Grab an API key from your favorite provider (Cerebras, Groq, OpenRouter, Anthropic, OpenAI, etc.); you can sanity-check it with the snippet at the end of this section
  2. In LM Studio → Chat → Your Generators, load the plugin, hit the Show Settings shortcut, pick the Generators tab, and paste your key
  3. Type the model name, select your AI provider, tweak the sampling settings, and chat away!
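
If a key or model name misbehaves, it's easy to rule the plugin out by querying the provider directly. A minimal sketch (placeholder base URL; model listing is part of the standard OpenAI-compatible API, though not every provider implements it):

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

# Print the models this key can access.
for model in client.models.list():
    print(model.id)
```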

Roadmap & bugs

Got feature requests or bugs?
Drop them in the Issues tab—every ticket gets love.
