
Add LM Studio backend support alongside OpenClaw #1

Open

wende wants to merge 6 commits into main from claude/add-lm-studio-support-dOooD

Conversation

@wende (Owner) commented Feb 7, 2026

Summary

This PR adds support for LM Studio as an alternative backend to the existing OpenClaw WebSocket connection. Users can now choose between OpenClaw, LM Studio, or Demo mode when connecting, with each backend having its own configuration and connection handling.

Key Changes

  • New LM Studio backend module (lib/lmStudio.ts):

    • Implements OpenAI-compatible HTTP + Server-Sent Events (SSE) streaming
    • Supports model fetching, text streaming, reasoning/thinking content, and tool call streaming
    • Converts internal message format to OpenAI chat completion format
    • Provides callbacks for streaming lifecycle events
  • Updated connection setup dialog:

    • Added mode selector (segmented control) to choose between OpenClaw and LM Studio
    • Conditional UI rendering based on selected backend
    • LM Studio-specific fields: server URL, API key (optional), and model selector
    • "Fetch" button to dynamically load available models from LM Studio server
    • Persistent storage of selected mode and backend-specific credentials
  • Backend mode management (rough type sketch after this list):

    • New BackendMode type: "openclaw" | "lmstudio" | "demo"
    • New ConnectionConfig interface to unify connection parameters across backends
    • Separate handler refs for LM Studio (lmStudioHandlerRef, lmStudioConfigRef)
    • Auto-detection and restoration of previously used backend on app load
  • Message routing:

    • Demo and LM Studio modes bypass WebSocket entirely
    • LM Studio sends full conversation history via HTTP POST with SSE response streaming
    • OpenClaw continues to use existing WebSocket protocol
  • UI indicators:

    • Status badge shows current backend (Demo/LM Studio/OpenClaw)
    • Different colored indicators for each mode
  • Pull-to-refresh behavior:

    • LM Studio and Demo modes skip server history fetch (no persistent history)
    • OpenClaw continues to fetch history from server
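
A rough sketch of the types named under backend mode management; the BackendMode union is quoted from this PR, while the ConnectionConfig fields beyond url, apiKey, and model are assumptions rather than the actual code:

```ts
// Sketch only: field names other than url/apiKey/model are guesses.
type BackendMode = "openclaw" | "lmstudio" | "demo";

interface ConnectionConfig {
  mode: BackendMode;
  url: string;      // OpenClaw WebSocket URL or LM Studio server base URL
  apiKey?: string;  // optional, LM Studio only
  model?: string;   // LM Studio model id chosen in the setup dialog
}
```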

Implementation Details

  • LM Studio handler uses AbortController for cancellation support
  • Tool calls are accumulated across multiple SSE chunks before firing callbacks (see the streaming sketch after this list)
  • Message conversion handles system messages, user messages, assistant messages with tool calls
  • Tool result messages are filtered out when converting to OpenAI format (LM Studio doesn't execute tools)
  • localStorage keys namespaced by backend: openclaw-* and lmstudio-*
  • Graceful error handling with user-friendly error messages for connection failures
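
A minimal sketch of that streaming loop, assuming LM Studio's OpenAI-compatible /v1/chat/completions endpoint; the callback shape and helper names are illustrative, not the actual lib/lmStudio.ts code:

```ts
// Illustrative sketch, not the actual lib/lmStudio.ts implementation.
interface ToolCallAccumulator { id: string; name: string; args: string }

async function streamChat(
  baseUrl: string,
  payload: object,
  onText: (chunk: string) => void,
  signal: AbortSignal,          // from an AbortController, enabling cancellation
): Promise<ToolCallAccumulator[]> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...payload, stream: true }),
    signal,
  });
  if (!res.ok || !res.body) throw new Error(`LM Studio request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  const toolCalls: ToolCallAccumulator[] = [];
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";   // keep a partial SSE line for the next read
    for (const line of lines) {
      if (!line.startsWith("data: ") || line === "data: [DONE]") continue;
      const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta ?? {};
      if (delta.content) onText(delta.content);
      // Tool calls arrive fragmented across chunks; accumulate by index
      // and only hand them to callbacks once complete.
      for (const tc of delta.tool_calls ?? []) {
        const acc = (toolCalls[tc.index] ??= { id: "", name: "", args: "" });
        if (tc.id) acc.id = tc.id;
        if (tc.function?.name) acc.name += tc.function.name;
        if (tc.function?.arguments) acc.args += tc.function.arguments;
      }
    }
  }
  return toolCalls;
}
```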

https://claude.ai/code/session_01JV6o5twKqGVp6RBQ3ifxGp

Add a new backend mode for connecting to LM Studio's OpenAI-compatible
API (HTTP+SSE), in addition to the existing OpenClaw WebSocket mode.

- New lib/lmStudio.ts: HTTP+SSE streaming client with support for text,
  reasoning/thinking blocks, and tool call streaming
- SetupDialog now has a segmented control to switch between OpenClaw and
  LM Studio modes, each with appropriate configuration fields
- LM Studio mode includes model fetching from /v1/models and a model
  selector dropdown (fetch helper sketched below)
- Full conversation history sent with each request (client-side session)
- Backend mode persisted in localStorage for auto-reconnect
- Header status indicator shows "LM Studio" when in that mode

https://claude.ai/code/session_01JV6o5twKqGVp6RBQ3ifxGp
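
The /v1/models fetch behind the setup dialog's "Fetch" button might look roughly like the following; the helper name and error message are assumptions, while the { data: [{ id }] } response shape is the standard OpenAI-compatible format that LM Studio serves:

```ts
// Hypothetical helper; only the /v1/models endpoint and response shape come from the PR.
async function fetchLmStudioModels(baseUrl: string, apiKey?: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/v1/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined,
  });
  if (!res.ok) throw new Error(`Failed to fetch models: HTTP ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```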
vercel bot commented Feb 7, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project      Deployment   Actions            Updated (UTC)
mobileclaw   Ready        Preview, Comment   Feb 8, 2026 0:49am

@sourcery-ai sourcery-ai bot left a comment

Hey - I've found 2 security issues and left some high-level feedback:

Security issues:

  • Insecure WebSocket Detected. WebSocket Secure (wss) should be used for all WebSocket connections. (link)
  • Insecure WebSocket Detected. WebSocket Secure (wss) should be used for all WebSocket connections. (link)

General comments:

  • There are duplicated helpers like getTextFromContent in both app/page.tsx and lib/lmStudio.ts; consider extracting shared message/content utilities into a common module to avoid divergence and keep behavior consistent across backends.
  • In the initial auto-connect useEffect, savedMode === "demo" falls through to the OpenClaw branch; if you intend to persist demo mode, it may be clearer and safer to handle the demo value explicitly (e.g., set backendMode/isDemoMode and skip any OpenClaw auto-connect logic).
  • In handleDisconnect, you clear OpenClaw-specific keys and mobileclaw-mode but leave lmstudio-* keys; if a user disconnects from LM Studio and later reconnects, this stale state may be surprising—consider either clearing LM Studio keys as well or making the retention behavior explicit in the reconnection logic.
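
A minimal sketch of the explicit demo handling suggested in the second point; the switch branches, setter calls, and restore helpers are assumptions, not the code under review:

```ts
type BackendMode = "openclaw" | "lmstudio" | "demo";

// Hypothetical resolver for the auto-connect effect: map the saved mode to
// an explicit branch instead of letting "demo" fall through to OpenClaw.
function resolveSavedMode(storage: Storage): BackendMode {
  const saved = storage.getItem("mobileclaw-mode");
  if (saved === "demo" || saved === "lmstudio" || saved === "openclaw") return saved;
  return "openclaw"; // default when nothing was persisted
}

// Inside the effect (sketch), each value gets its own branch:
// switch (resolveSavedMode(localStorage)) {
//   case "demo":     setBackendMode("demo"); setIsDemoMode(true); break;  // skip auto-connect
//   case "lmstudio": connectLmStudio(savedLmStudioConfig); break;         // hypothetical helpers
//   case "openclaw": connectOpenclaw(savedOpenclawConfig); break;
// }
```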
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- There are duplicated helpers like `getTextFromContent` in both `app/page.tsx` and `lib/lmStudio.ts`; consider extracting shared message/content utilities into a common module to avoid divergence and keep behavior consistent across backends.
- In the initial auto-connect `useEffect`, `savedMode === "demo"` falls through to the OpenClaw branch; if you intend to persist demo mode, it may be clearer and safer to handle the `demo` value explicitly (e.g., set `backendMode`/`isDemoMode` and skip any OpenClaw auto-connect logic).
- In `handleDisconnect`, you clear OpenClaw-specific keys and `mobileclaw-mode` but leave `lmstudio-*` keys; if a user disconnects from LM Studio and later reconnects, this stale state may be surprising—consider either clearing LM Studio keys as well or making the retention behavior explicit in the reconnection logic.

## Individual Comments

### Comment 1
<location> `app/page.tsx:2320` </location>
<code_context>
    if (!config.url.startsWith("ws://") && !config.url.startsWith("wss://")) {
</code_context>

<issue_to_address>
**security (javascript.lang.security.detect-insecure-websocket):** Insecure WebSocket Detected. WebSocket Secure (wss) should be used for all WebSocket connections.

*Source: opengrep*
</issue_to_address>

### Comment 2
<location> `app/page.tsx:2321` </location>
<code_context>
      wsUrl = config.url.replace(/^http:\/\//, "ws://").replace(/^https:\/\//, "wss://");
</code_context>

<issue_to_address>
**security (javascript.lang.security.detect-insecure-websocket):** Insecure WebSocket Detected. WebSocket Secure (wss) should be used for all WebSocket connections.

*Source: opengrep*
</issue_to_address>
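
One way to address both findings is to tolerate plain ws:// only for obviously local addresses and force wss:// otherwise; the helper below is a sketch under that assumption, not a drop-in for the flagged code:

```ts
// Hypothetical replacement for the inline URL rewriting flagged above.
function toWebSocketUrl(raw: string): string {
  if (raw.startsWith("ws://") || raw.startsWith("wss://")) return raw;
  const isLocal = /^https?:\/\/(localhost|127\.0\.0\.1)([:/]|$)/.test(raw);
  if (isLocal) {
    // Plain ws:// stays available for local development only.
    return raw.replace(/^http:\/\//, "ws://").replace(/^https:\/\//, "wss://");
  }
  return raw.replace(/^https?:\/\//, "wss://"); // everything else is upgraded to wss://
}
```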


@sourcery-ai sourcery-ai bot commented on app/page.tsx

    lmStudioHandlerRef.current = null;
    setOpenclawUrl(config.url);
    let wsUrl = config.url;
    if (!config.url.startsWith("ws://") && !config.url.startsWith("wss://")) {

security (javascript.lang.security.detect-insecure-websocket): Insecure WebSocket Detected. WebSocket Secure (wss) should be used for all WebSocket connections.

Source: opengrep

@sourcery-ai sourcery-ai bot commented on app/page.tsx

    setOpenclawUrl(config.url);
    let wsUrl = config.url;
    if (!config.url.startsWith("ws://") && !config.url.startsWith("wss://")) {
      wsUrl = config.url.replace(/^http:\/\//, "ws://").replace(/^https:\/\//, "wss://");

security (javascript.lang.security.detect-insecure-websocket): Insecure WebSocket Detected. WebSocket Secure (wss) should be used for all WebSocket connections.

Source: opengrep

wende and others added 2 commits February 8, 2026 13:42
GLM models output thinking content immediately without an explicit
<think> opening tag, only using </think> to end thinking blocks.
Detect GLM from model name to start in thinking mode, add fallback
</think> detection in text mode for other models, and reset thinking
state after tool execution for subsequent agentic rounds.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
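
A compact sketch of that thinking-state handling; the phase names and helpers are illustrative, not the actual implementation:

```ts
// Illustrative sketch of the thinking/text split described in the commit message.
type StreamPhase = "thinking" | "text";

// GLM models emit thinking content immediately, with no opening <think> tag,
// so their streams start in the "thinking" phase. Calling this again after
// tool execution resets the state for the next agentic round.
function initialPhase(model: string): StreamPhase {
  return /glm/i.test(model) ? "thinking" : "text";
}

// Split one streamed chunk into thinking and visible text, switching phase
// when a closing </think> tag appears (also as a fallback in text mode).
function splitChunk(phase: StreamPhase, chunk: string) {
  const end = chunk.indexOf("</think>");
  if (end === -1) {
    return phase === "thinking"
      ? { phase, thinking: chunk, text: "" }
      : { phase, thinking: "", text: chunk };
  }
  return {
    phase: "text" as StreamPhase,
    thinking: chunk.slice(0, end),
    text: chunk.slice(end + "</think>".length),
  };
}
```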
