Support for Custom base_url to enable Local LLMs (Ollama, LM Studio, vLLM) #3

@peschull

Description

Is your feature request related to a problem? Please describe.

Currently, the framework appears to be optimized for OpenAI's API endpoints (api.openai.com). Many researchers and security professionals prefer testing vulnerabilities on local instances or self-hosted models for privacy, cost, and legal reasons. Tools like Ollama, LM Studio, and LocalAI provide OpenAI-compatible endpoints (e.g., http://localhost:11434/v1), but often require changing the client's base_url.

Describe the solution you'd like

I would like to request the ability to configure the base_url and api_key via the configuration file or environment variables.
For example, when initializing the OpenAI client:
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
    api_key=os.getenv("OPENAI_API_KEY")
)

Additional context

This change would allow users to run MasterKey against:

  • Ollama (Port 11434)
  • LM Studio (Port 1234)
  • vLLM (Port 8000)

This is crucial for testing "Attacker" vs. "Target" scenarios completely offline.
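To illustrate, the configuration lookup requested above could be factored into a small helper like the one below. This is a minimal sketch only: the helper name `resolve_client_config` and the `"not-needed-for-local"` placeholder key are illustrative assumptions, not part of the framework (local OpenAI-compatible servers such as Ollama generally accept any non-empty API key).

```python
import os

def resolve_client_config() -> dict:
    """Resolve client settings from environment variables,
    falling back to OpenAI's hosted endpoint (sketch)."""
    return {
        "base_url": os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
        # Local servers typically ignore the key, but the client
        # library still expects a non-empty value.
        "api_key": os.getenv("OPENAI_API_KEY", "not-needed-for-local"),
    }

# Point the framework at a local Ollama instance instead of api.openai.com:
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
config = resolve_client_config()
print(config["base_url"])  # http://localhost:11434/v1
```

The same pattern covers LM Studio (`http://localhost:1234/v1`) and vLLM (`http://localhost:8000/v1`) by changing only the environment variable, with no code changes.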
