
Add support for locally running Ollama as an LLM backend #5

@appenz

Description

Right now this tool requires OpenAI. Instead, the tool should be able to use a locally running Ollama server that it accesses via a local socket connection. Specifically (a rough sketch follows the list below):

  • Add Ollama as an LLM backend
  • Add a command line option to select OpenAI or Ollama
  • Add a menu toggle to switch between the two modes
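A rough sketch of how this could fit together, assuming the tool is Python-based. The `--backend` flag and the `ollama_complete` / `openai_complete` helpers are hypothetical names used only for illustration, not existing code in this repo; the default local endpoint and request shape are taken from Ollama's `/api/chat` API.

```python
# Sketch only: backend names, flag name, and helpers are placeholders.
import argparse
import json
import urllib.request

# Ollama's default local HTTP endpoint for chat completions.
OLLAMA_URL = "http://localhost:11434/api/chat"


def ollama_complete(messages, model="llama3"):
    """Send a chat request to a locally running Ollama server."""
    payload = json.dumps({
        "model": model,
        "messages": messages,
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]


def openai_complete(messages, model="gpt-4o"):
    """Placeholder for the tool's existing OpenAI code path."""
    raise NotImplementedError("existing OpenAI backend goes here")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--backend",
        choices=["openai", "ollama"],
        default="openai",
        help="Which LLM backend to use",
    )
    args = parser.parse_args()

    messages = [{"role": "user", "content": "Hello!"}]
    if args.backend == "ollama":
        print(ollama_complete(messages))
    else:
        print(openai_complete(messages))


if __name__ == "__main__":
    main()
```

The menu toggle from the last bullet would then just flip the same backend selector at runtime instead of reading it from the command line.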

Labels

enhancement (New feature or request)
