An open-source healthcare chatbot that captures natural symptom descriptions, predicts likely conditions with ML, and outputs structured intake data. Built as a core module for an Edge-AI clinical-intake pipeline. Tech: Ollama, Llama, FastAPI, React.

VoiceMedi AI 🩺🤖

This repository is a fork created for publication, to keep it distinct from the original thesis repository.
All code changes after the thesis are made here.

An Edge-AI Chatbot for Symptom Extraction and Appointment Pre-Fill

VoiceMedi AI is an open-source healthcare chatbot. It helps users describe symptoms naturally, predicts conditions using machine learning, and shares structured appointment data. This prototype is a core module in a broader Edge-AI pipeline to automate clinical intake workflows.
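
To illustrate what "structured appointment data" could look like, here is a hypothetical intake record. The field names and values are illustrative only, not the project's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class IntakeRecord:
    """Hypothetical structured intake built from a free-text symptom description."""
    raw_text: str
    symptoms: list = field(default_factory=list)
    predicted_conditions: list = field(default_factory=list)  # (condition, score) pairs

    def to_dict(self) -> dict:
        # Plain dict, ready to serialize as JSON for an appointment pre-fill payload
        return asdict(self)

record = IntakeRecord(
    raw_text="I've had a sore throat and mild fever since yesterday",
    symptoms=["sore throat", "mild fever"],
    predicted_conditions=[("common cold", 0.62), ("flu", 0.21)],
)
```

A record like this is what a downstream scheduling system would consume instead of raw chat text.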


🧠 Key Features

  • 🔍 Symptom-Based Condition Prediction – Machine learning-based diagnosis using natural language input
  • 💬 Conversational Interface – Interprets user input via Ollama's Chat completion API
  • 📦 Modular Design – Can be extended to cover other medical domains and scheduling use cases
  • ⚙️ Modern Stack – FastAPI backend, React frontend, containerized with Docker
  • 🧩 Edge-AI Ready – Runs completely offline; designed with edge deployment scenarios in mind
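
The conversational interface talks to Ollama's chat completion endpoint (`POST /api/chat`). A minimal sketch of such a call, using only the standard library (the system prompt and helper names here are illustrative, not the project's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # see backend/.env

def build_chat_payload(user_text: str, model: str = "llama3.2:1b") -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Extract the patient's symptoms."},
            {"role": "user", "content": user_text},
        ],
        "stream": False,  # request one JSON response instead of a token stream
    }

def chat(user_text: str) -> str:
    """Send one chat turn to a locally running Ollama server."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(build_chat_payload(user_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

With `"stream": False`, Ollama returns a single JSON object whose `message.content` field holds the model's reply.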

🚀 Getting Started (with Docker)

Prerequisites:

  • Ollama installed

  • Docker & Docker Compose installed

  • Environment variables configured:

    • backend/.env:

      OLLAMA_URL=http://host.docker.internal:11434  # in case of a local (non-Docker) run, use http://localhost:11434
      DEBUG=True
      
    • frontend/.env:

      BACKEND_URL=http://localhost:8000  
      
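On the backend side, these variables might be read with a small helper like the following. This is only a sketch with local-friendly defaults; the actual backend may use python-dotenv or Pydantic settings instead:

```python
import os

def load_settings(env: dict = os.environ) -> dict:
    """Read backend settings from the environment, with local defaults."""
    return {
        # Inside Docker, the host's Ollama is reached via host.docker.internal;
        # for a plain local run the default below is enough.
        "ollama_url": env.get("OLLAMA_URL", "http://localhost:11434"),
        "debug": env.get("DEBUG", "False").lower() == "true",
    }
```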

🦙 Ollama Setup (for LLaMA 3.2:1b Model Inference)

VoiceMedi AI uses Ollama to run lightweight LLMs like LLaMA 3.2:1b locally or on edge devices.

🛠️ Step 1: Install Ollama

Linux

curl -fsSL https://ollama.com/install.sh | sh

Mac

brew install ollama

Windows

Download the official installer: 👉 https://ollama.com/download

📥 Step 2: Download the LLaMA 3.2:1b Model

ollama pull llama3.2:1b

This will download and configure the model locally for inference.

🚀 Step 3: Run the Model

ollama run llama3.2:1b

Run the Project:

# Clone the repository
git clone https://github.com/Prof-it/voice_chatbot.git
cd voice_chatbot

# Build and launch containers
docker-compose up --build

🧪 Local Development Setup

Backend (FastAPI)

cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --reload

Frontend (React)

cd frontend
npm install
npm start

🖥️ System Architecture

(Image: VoiceMedi AI system architecture diagram)

