
Add guide for setting up MasterKey with Local LLMs #5

Description

@peschull

Is your feature request related to a problem? Please describe.

There is growing interest in using MasterKey with local inference servers such as Ollama and LM Studio. However, it is not immediately clear how to configure the environment so that calls intended for the OpenAI API are redirected to a localhost endpoint instead.

Describe the solution you'd like

It would be very helpful to add a section to the README.md or a docs/LOCAL_SETUP.md covering:

  1. The required environment variables (e.g., OPENAI_API_BASE) — see the sketch after this list.
  2. Recommended local models for the "Attacker" role (e.g., uncensored instruction-tuned models).
  3. The common default ports for tools like Ollama (11434) and LM Studio (1234).
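For illustration, here is a minimal sketch of what that guide could show, assuming MasterKey uses the standard OpenAI Python SDK. The endpoint URLs, the fallback API key, and the model name "llama3" are all assumptions for a typical local setup, not confirmed project details:

```python
import os
from openai import OpenAI

# Assumption: Ollama exposes an OpenAI-compatible API at /v1 on its
# default port 11434; LM Studio users would point at
# http://localhost:1234/v1 instead.
client = OpenAI(
    base_url=os.environ.get("OPENAI_API_BASE", "http://localhost:11434/v1"),
    # Local servers generally ignore the key, but the SDK requires a non-empty value.
    api_key=os.environ.get("OPENAI_API_KEY", "ollama"),
)

# "llama3" is a placeholder; use whichever model you have pulled locally.
response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

One detail worth stating in the guide: with openai>=1.0 the variable the SDK reads natively is OPENAI_BASE_URL, while OPENAI_API_BASE was the pre-1.0 name, so the docs should say which SDK version MasterKey pins.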

Additional context

I have successfully tested similar setups by patching the client initialization manually and would love to see this officially documented to help others in the community.
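For reference, the manual patch I used looks roughly like the following. This is a sketch only: it relies on the OpenAI SDK's module-level default-client attributes and assumes MasterKey has not constructed its client before this code runs.

```python
# Hypothetical monkey-patch: set the module-level defaults before MasterKey
# builds its OpenAI client, so every request is routed to the local server.
import openai

openai.base_url = "http://localhost:11434/v1"  # Ollama's default OpenAI-compatible endpoint
openai.api_key = "ollama"                      # any non-empty string; local servers ignore it
```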
