PrepAI is a comprehensive AI-powered platform designed to help users prepare for interviews, generate personalized quizzes, manage study notes, and even participate in real-time mock interviews using conversational AI. It blends multiple modern technologies such as FastAPI, React, ChromaDB, PostgreSQL, Vapi, Groq, and LlamaIndex into a cohesive, production-ready system.
▶ Watch the demo: https://youtu.be/WFycdZQijyA?si=aumRW2hEyBNleGGI
▶ Try the project here: https://huggingface.co/spaces/aki-008/prepAI
PrepAI enables users to:
- Conduct AI-powered mock interviews with real-time voice interaction.
- Upload PDFs and chat with notes using vector search + RAG.
- Generate MCQ quizzes from resumes or study material.
- Maintain persistent chat sessions, interview transcripts, and summaries.
- Work inside a fully containerized architecture with isolated services.
This system is ideal for interview preparation platforms, ed-tech tools, or personal study automation.
PrepAI is built with a modular full-stack architecture:
- JWT Authentication
- Upload & process PDFs
- ChromaDB vector store integration
- Streaming LLM responses
- Vapi-powered live interview assistant
- Quiz generation using Groq/OpenAI
- User onboarding (Sign In / Sign Up)
- PDF upload, preview, rename, deletion
- Interactive note-chat interface
- Dashboard with metrics
- Seamless Vapi interview client
PostgreSQL stores:
- Users
- PDFs
- Chat sessions
- Messages
- Metadata

ChromaDB stores:
- Chunked PDF embeddings
- User-ingested knowledge

AI integrations:
- Groq/OpenAI for LLM responses (see the sketch below)
- SentenceTransformers for embeddings
- Vapi for real-time voice conversations
- LlamaIndex for PDF parsing & chunking
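The LLM calls follow the standard chat-completions pattern. Here is a minimal sketch using the Groq Python SDK; the model name and prompts are illustrative assumptions, not the project's exact values:

```python
# Minimal sketch: calling Groq for a chat response (model and prompts are illustrative).
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def ask_llm(question: str, context: str) -> str:
    """Send retrieved note context plus the user question to the LLM."""
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # assumed model; PrepAI's actual choice may differ
        messages=[
            {"role": "system", "content": "Answer using only the provided notes."},
            {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```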
aki-008-prepai/
├── Backend/
│   ├── app/
│   ├── Dockerfile
│   └── requirements.txt
│
├── Frontend/
│   ├── src/
│   ├── Dockerfile
│   └── vite.config.ts
│
├── docker-compose.yml
└── RUN.md
git clone <repo-url>
cd aki-008-prepai
Provide:
- PostgreSQL credentials
- Groq API key
- OpenAI key (if needed)
- VAPI_PRIVATE_KEY
- VAPI_PUBLIC_KEY
- VAPI_ASSISTANT_ID
Example:
DATABASE_URL=postgresql+asyncpg://postgres:password@db:5432/studentdb
GROQ_API_KEY=your-key
VAPI_PRIVATE_KEY=your-key
VAPI_PUBLIC_KEY=your-key
VAPI_ASSISTANT_ID=your-assistant-id
docker-compose up --build
Services started:
- Frontend → http://localhost:5173
- Backend → http://localhost:8000
- ChromaDB → http://localhost:8080
- PostgreSQL → localhost:5432
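Once the stack is up, a quick sanity check can confirm the backend and vector store respond. This sketch uses httpx and the ChromaDB client; FastAPI's default `/docs` page and Chroma's heartbeat are standard defaults assumed here:

```python
# Optional sanity check after `docker-compose up --build`.
import chromadb
import httpx

assert httpx.get("http://localhost:8000/docs").status_code == 200    # backend is serving
print(chromadb.HttpClient(host="localhost", port=8080).heartbeat())  # Chroma heartbeat timestamp
```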
Backend:
cd Backend
python run.py
Frontend:
cd Frontend
npm install
npm run dev
ChromaDB:
chroma run --host 0.0.0.0 --port 8080 --path ./chroma_store
- PDFs are chunked using PyMuPDF + LlamaIndex.
- Embeddings are generated via MiniLM.
- Chunks are stored in ChromaDB (see the ingestion sketch below).
- Users can open chats tied to each PDF with full chat history.
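A minimal sketch of that ingestion flow; the collection name, chunk sizes, and model identifier are illustrative assumptions, not the backend's exact configuration:

```python
# Sketch of the PDF -> chunks -> embeddings -> ChromaDB flow (names are illustrative).
import fitz  # PyMuPDF
import chromadb
from llama_index.core.node_parser import SentenceSplitter
from sentence_transformers import SentenceTransformer

def ingest_pdf(path: str, pdf_id: str) -> None:
    # 1. Extract raw text with PyMuPDF.
    text = "\n".join(page.get_text() for page in fitz.open(path))

    # 2. Chunk with LlamaIndex's sentence splitter.
    chunks = SentenceSplitter(chunk_size=512, chunk_overlap=64).split_text(text)

    # 3. Embed chunks with a MiniLM model from SentenceTransformers.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = embedder.encode(chunks).tolist()

    # 4. Store chunks + embeddings in the ChromaDB service.
    client = chromadb.HttpClient(host="localhost", port=8080)
    collection = client.get_or_create_collection("prepai_notes")
    collection.add(
        ids=[f"{pdf_id}-{i}" for i in range(len(chunks))],
        documents=chunks,
        embeddings=embeddings,
        metadatas=[{"pdf_id": pdf_id} for _ in chunks],
    )
```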
- Dynamic prompt generation based on job role, experience, and difficulty (see the sketch after this list).
- Real-time Vapi-based interview with:
  - Emotion recognition
  - Adjustable voice
  - Adaptive follow-ups
  - Strict 5-minute flow
- Transcripts saved automatically.
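An illustrative sketch of the kind of prompt builder described above; the function name and wording are assumptions, not the project's actual template:

```python
# Illustrative interview-prompt builder; the real template lives in the backend.
def build_interview_prompt(job_role: str, experience_years: int, difficulty: str) -> str:
    return (
        f"You are a technical interviewer for a {job_role} position. "
        f"The candidate has {experience_years} years of experience. "
        f"Ask {difficulty}-level questions, adapt follow-ups to their answers, "
        f"and keep the whole interview within 5 minutes."
    )

print(build_interview_prompt("backend engineer", 3, "medium"))
```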
Quizzes are generated from resumes or notes:
- Strict rules enforced by the SYSTEM_PROMPT
- Always 10 MCQs with 4 options
- JSON-structured output
- Options + explanations
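A hedged sketch of the output shape those rules imply, expressed as Pydantic models; the field names are assumptions, and the authoritative contract is the SYSTEM_PROMPT in the backend:

```python
# Assumed JSON shape for a generated quiz, validated with Pydantic.
from pydantic import BaseModel, field_validator

class MCQ(BaseModel):
    question: str
    options: list[str]   # exactly 4 options
    answer: str          # one of the options
    explanation: str

    @field_validator("options")
    @classmethod
    def four_options(cls, v: list[str]) -> list[str]:
        assert len(v) == 4, "each question needs exactly 4 options"
        return v

class Quiz(BaseModel):
    questions: list[MCQ]  # SYSTEM_PROMPT enforces exactly 10
```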
- Secure hashing using Argon2
- JWT tokens
- Protected routes for all user-specific actions
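A minimal sketch of that flow, assuming argon2-cffi for hashing and PyJWT for tokens; the backend may use different wrappers, and the secret key would come from the environment:

```python
# Sketch: Argon2 password hashing + JWT issuance (library choice is an assumption).
from datetime import datetime, timedelta, timezone
import jwt  # PyJWT
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()
SECRET_KEY = "change-me"  # read from the environment in practice

def hash_password(password: str) -> str:
    return ph.hash(password)

def verify_password(password: str, hashed: str) -> bool:
    try:
        return ph.verify(hashed, password)
    except VerifyMismatchError:
        return False

def create_access_token(user_id: int, minutes: int = 30) -> str:
    payload = {"sub": str(user_id), "exp": datetime.now(timezone.utc) + timedelta(minutes=minutes)}
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")
```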
- Powered by FastAPI with async SQLAlchemy.
- Auto-table creation on startup.
- Organized into clear routers: Auth, Notes, Interview, Quiz.
- Streaming responses for chat.
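A minimal sketch of how such a streaming chat route can be wired in FastAPI; the path, parameters, and token generator are illustrative assumptions (auth dependencies omitted):

```python
# Sketch of a streaming chat endpoint; route path and generator are illustrative.
from typing import AsyncIterator
from fastapi import APIRouter
from fastapi.responses import StreamingResponse

router = APIRouter(prefix="/notes", tags=["notes"])

async def generate_answer(session_id: int, question: str) -> AsyncIterator[str]:
    # In the real app this would retrieve chunks from ChromaDB and stream LLM tokens.
    for token in ["This ", "is ", "a ", "streamed ", "answer."]:
        yield token

@router.post("/{session_id}/chat")
async def chat(session_id: int, question: str) -> StreamingResponse:
    return StreamingResponse(generate_answer(session_id, question), media_type="text/plain")
```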
- Built on React + TypeScript.
- Modern UI with Tailwind.
- Routes include: Home, Dashboard, Notes, Interview.
- ProtectedRoute ensures authentication.
docker-compose.yml orchestrates:
- PostgreSQL database
- ChromaDB vector server
- Backend (Python)
- Frontend (Nginx)
During every interview:
- All real-time transcripts are appended to Backend/transcripts/<call_id>.txt.
- A summary is appended at the end of the call.
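An illustrative sketch of that file layout; the helper names and summary delimiter are assumptions:

```python
# Illustrative transcript writer; the actual backend code may differ.
from pathlib import Path

TRANSCRIPT_DIR = Path("Backend/transcripts")

def append_transcript_line(call_id: str, speaker: str, text: str) -> None:
    TRANSCRIPT_DIR.mkdir(parents=True, exist_ok=True)
    with open(TRANSCRIPT_DIR / f"{call_id}.txt", "a", encoding="utf-8") as f:
        f.write(f"{speaker}: {text}\n")

def append_summary(call_id: str, summary: str) -> None:
    with open(TRANSCRIPT_DIR / f"{call_id}.txt", "a", encoding="utf-8") as f:
        f.write(f"\n--- SUMMARY ---\n{summary}\n")
```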
Future improvements:
- User analytics dashboard
- Multi-file knowledge merging
- Advanced scoring for interview responses
- Multi-voice model selector
- Mobile-friendly front-end layout
Feel free to open issues or submit pull requests. Contributions are welcome for both frontend and backend.
Use the license of your choice for this project (MIT recommended).
If you'd like additional documentation (API reference, UML diagrams, onboarding guide), just ask!
