This repository is a fork created for publication, to distinguish it from the original thesis repository. All code changes made after the thesis live here.
# An Edge-AI Chatbot for Symptom Extraction and Appointment Pre-Fill
VoiceMedi AI is an open-source healthcare chatbot. It helps users describe symptoms naturally, predicts conditions using machine learning, and shares structured appointment data. This prototype is a core module in a broader Edge-AI pipeline to automate clinical intake workflows.
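To make "structured appointment data" concrete, here is a hypothetical sketch of such a payload. The field names (`symptoms`, `predicted_condition`, `urgency`) are illustrative assumptions, not the repository's actual schema:

```python
# Hypothetical example of the structured appointment data the chatbot could
# hand off to a scheduling system. Field names are assumptions for illustration;
# the real schema in this repo may differ.
from dataclasses import dataclass, field, asdict

@dataclass
class AppointmentPreFill:
    symptoms: list = field(default_factory=list)   # extracted from user input
    predicted_condition: str = ""                  # ML prediction result
    urgency: str = "routine"                       # e.g. "routine" or "urgent"

prefill = AppointmentPreFill(
    symptoms=["headache", "fever"],
    predicted_condition="influenza",
)
payload = asdict(prefill)  # JSON-ready dict for a downstream scheduling API
```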
## Features

- 🔍 Symptom-Based Condition Prediction – Machine learning-based diagnosis using natural language input
- 💬 Conversational Interface – Interprets user input via Ollama's Chat completion API
- 📦 Modular Design – Can be extended to cover other medical domains and scheduling use cases
- ⚙️ Modern Stack – FastAPI backend, React frontend, containerized with Docker
- 🧩 Edge-AI Ready – Runs completely offline and is designed with edge deployment scenarios in mind
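The conversational interface talks to Ollama's chat completion API. A minimal standalone sketch of that request shape (assumptions: Ollama on its default port 11434, the `llama3.2:1b` model, and an illustrative system prompt — the backend's actual prompt and code likely differ):

```python
# Sketch of calling Ollama's /api/chat endpoint, as the backend does.
# Assumptions: Ollama on localhost:11434, llama3.2:1b pulled; the system
# prompt below is purely illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def build_chat_request(symptom_text: str) -> dict:
    """Build an /api/chat payload asking the model to extract symptoms."""
    return {
        "model": "llama3.2:1b",
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "Extract the patient's symptoms as a short comma-separated list."},
            {"role": "user", "content": symptom_text},
        ],
    }

def chat(symptom_text: str) -> str:
    """Send the request and return the model's reply text."""
    data = json.dumps(build_chat_request(symptom_text)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```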
## Prerequisites

- Ollama installed
- Docker & Docker Compose installed
- Environment variables configured:

  `backend/.env`:

  ```
  OLLAMA_URL=http://host.docker.internal:11434  # in case of local (non-Docker) deployment, use http://localhost:11434
  DEBUG=True
  ```

  `frontend/.env`:

  ```
  BACKEND_URL=http://localhost:8000
  ```
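The backend presumably reads these variables at startup; a minimal sketch using `os.environ` with the documented defaults (assumption: the actual code may use `python-dotenv` or `pydantic-settings` instead):

```python
# Sketch of reading the documented environment variables in the backend.
# Assumption: shown with plain os.environ; the real backend may load the
# .env file via python-dotenv or pydantic-settings.
import os

def load_settings(env=os.environ) -> dict:
    """Return the Ollama URL and debug flag, with the documented defaults."""
    return {
        "ollama_url": env.get("OLLAMA_URL", "http://host.docker.internal:11434"),
        "debug": env.get("DEBUG", "False").lower() in ("1", "true", "yes"),
    }
```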
## Setting Up Ollama

VoiceMedi AI uses Ollama to run lightweight LLMs such as LLaMA 3.2 1B (`llama3.2:1b`) locally or on edge devices.
Install Ollama:

```bash
# Linux
curl -fsSL https://ollama.com/install.sh | sh

# macOS (Homebrew)
brew install ollama
```

Or download the official installer: 👉 https://ollama.com
Pull the model:

```bash
ollama pull llama3.2:1b
```

This will download and configure the model locally for inference.
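To confirm the pull succeeded programmatically, you can query Ollama's `/api/tags` endpoint, which lists locally available models (assumption: Ollama running on its default port 11434):

```python
# Check that a model appears in Ollama's local model list via /api/tags.
# Assumption: Ollama is running on localhost:11434.
import json
import urllib.request

def has_model(tags: dict, name: str) -> bool:
    """True if `name` matches a model entry in an /api/tags response."""
    return any(m.get("name", "").startswith(name) for m in tags.get("models", []))

def model_available(name: str = "llama3.2:1b") -> bool:
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        return has_model(json.loads(resp.read()), name)
```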
Run the model:

```bash
ollama run llama3.2:1b
```

## Quick Start with Docker

```bash
# Clone the repository
git clone https://github.com/Prof-it/voice_chatbot.git
cd voice_chatbot

# Build and launch containers
docker-compose up --build
```

## Manual Setup

Backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --reload
```

Frontend:

```bash
cd frontend
npm install
npm start
```