Reeyuki/PromptLama

Ollama AI Chat/Voice Frontend

Features

Connect to an Ollama server and send messages to open-source LLMs

Store message history

Speech recognition from microphone input

AI voice generation via EdgeTTS
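The core chat flow above can be sketched with a small stdlib-only client for Ollama's /api/chat endpoint. This is an illustrative sketch, not PromptLama's actual code; the model name is a placeholder and the endpoint shape follows Ollama's documented API.

```python
import json
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # Ollama's default address


def build_chat_payload(model, messages):
    # Assemble the request body for Ollama's /api/chat endpoint.
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "messages": messages, "stream": False}


def chat(model, messages, host=OLLAMA_HOST):
    # Send the conversation so far and return the assistant's reply text.
    payload = json.dumps(build_chat_payload(model, messages)).encode()
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Message history can then be kept by appending each user prompt and assistant reply to the `messages` list before the next call.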

Requirements:

Ollama server running on your system

FFmpeg installed on your system (for speech recognition)

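Both prerequisites can be checked from Python before starting the app. A minimal sketch, assuming the default Ollama address; the function name is illustrative and not part of the project:

```python
import shutil
import urllib.error
import urllib.request


def check_requirements(ollama_host="http://127.0.0.1:11434"):
    """Report whether the prerequisites above look satisfied."""
    # FFmpeg must be on PATH for speech recognition.
    status = {"ffmpeg": shutil.which("ffmpeg") is not None}
    try:
        # A running Ollama server answers plain GET requests on its root URL.
        with urllib.request.urlopen(ollama_host, timeout=2) as resp:
            status["ollama"] = resp.status == 200
    except (urllib.error.URLError, OSError):
        status["ollama"] = False
    return status
```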

Quickstart

For End Users

Install binary from https://github.com/Reeyuki/PromptLama/releases

For Developers

To set up a virtual environment and install dependencies:

make install

To run the server:

make run

Configuration options:

Available options for .env:

HOST

Hostname to bind the server to.

PORT

Port number to bind the server to.

OLLAMA_HOST

URL of the Ollama API. Defaults to http://127.0.0.1:11434
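A minimal .env could look like the following. The HOST and PORT values are illustrative assumptions; only the OLLAMA_HOST default is documented above.

```
HOST=127.0.0.1
PORT=8000
OLLAMA_HOST=http://127.0.0.1:11434
```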

Todo:

Language dropdown to override recognized language

Set temperature of model

Implement model management UI

Export chat history button

Switch to vite/npm

Configure eslint/typescript

Get rid of ffmpeg dependency
