Is your feature request related to a problem? Please describe.
There is growing interest in using MasterKey with local inference servers such as Ollama and LM Studio. However, it is not immediately clear how to configure the environment so that calls intended for the OpenAI API are redirected to a localhost endpoint.
Describe the solution you'd like
It would be very helpful to add a section to the README.md, or a new docs/LOCAL_SETUP.md, covering:
- Required environment variables (e.g., OPENAI_API_BASE); see the sketch after this list.
- Recommended local models for the "Attacker" role (e.g., uncensored models).
- Common default ports for tools like Ollama (11434) and LM Studio (1234).
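For reference, a minimal sketch of the environment-based approach, using the default local ports listed above. Which variable MasterKey actually honors is an assumption on my part: the current openai Python client reads OPENAI_BASE_URL, while older code paths use OPENAI_API_BASE, so setting both is the safe option.

```python
import os

# Point OpenAI-compatible clients at a local server before MasterKey
# initializes its client.
os.environ["OPENAI_BASE_URL"] = "http://localhost:11434/v1"    # Ollama default port
# os.environ["OPENAI_BASE_URL"] = "http://localhost:1234/v1"   # LM Studio default port
os.environ["OPENAI_API_BASE"] = os.environ["OPENAI_BASE_URL"]  # cover the legacy variable too
os.environ["OPENAI_API_KEY"] = "not-needed-locally"            # local servers ignore the key value
```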
Additional context
I have successfully tested similar setups by patching the client initialization manually and would love to see this officially documented to help others in the community.
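For context, the manual patch I used looks roughly like the sketch below. It assumes MasterKey constructs a standard `openai.OpenAI` client somewhere; the base URL, API key, and model name are placeholders for a local setup, not the project's actual defaults.

```python
from openai import OpenAI

# Build a client aimed at a local OpenAI-compatible server instead of api.openai.com.
# Ollama serves at http://localhost:11434/v1, LM Studio at http://localhost:1234/v1;
# the model name must match one that the local server has loaded.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # placeholder: any locally served model
    messages=[{"role": "user", "content": "Hello from a local MasterKey setup"}],
)
print(response.choices[0].message.content)
```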