Ollama AI Chat/Voice Frontend

Python · FastAPI · Ollama

Features

Connect to an Ollama server and send messages to open-source LLMs

Store message history

Speech recognition from microphone input

AI voice generation via EdgeTTS
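As a rough sketch of the first feature, the snippet below sends a chat message to an Ollama server over its HTTP API using only the standard library. The model name `llama3` and the helper names (`build_chat_payload`, `send_chat`) are illustrative, not part of this project's code; the endpoint and default address come from Ollama's API.

```python
import json
import urllib.request

# Default Ollama address, matching the README's OLLAMA_HOST default.
OLLAMA_HOST = "http://127.0.0.1:11434"


def build_chat_payload(model: str, messages: list[dict]) -> dict:
    """Build the JSON body for a non-streaming /api/chat request."""
    return {"model": model, "messages": messages, "stream": False}


def send_chat(model: str, messages: list[dict]) -> str:
    """POST the conversation to Ollama and return the assistant's reply text."""
    body = json.dumps(build_chat_payload(model, messages)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


if __name__ == "__main__":
    # Requires a running Ollama server with the model pulled.
    history = [{"role": "user", "content": "Hello!"}]
    print(send_chat("llama3", history))
```

Appending each reply back onto `history` before the next call is how the stored message history would carry context between turns.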

Requirements:

An Ollama server running on your system

FFmpeg installed on your system (for speech recognition)

Quickstart

For End Users

Install the binary from https://github.com/Reeyuki/YukiAI/releases

For Developers

To set up a virtual environment and install dependencies:

make install

To run the server:

make run

Configuration options:

Available options for the .env file:

HOST

Hostname to bind the server to.

PORT

Port number to bind the server to.

OLLAMA_HOST

URL of the Ollama API. Defaults to http://127.0.0.1:11434
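A minimal .env using these options might look like the following; the values shown are illustrative defaults, not required settings:

```
HOST=127.0.0.1
PORT=8000
OLLAMA_HOST=http://127.0.0.1:11434
```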

Todo:

  • Set the model temperature
  • Implement a model management UI
  • Add a chat history export button and functionality
  • Remove the FFmpeg dependency
