Edapt: AI-Powered Visual Textbook Companion

Empowering all learners through adaptive, multi-modal educational content

Edapt is an innovative AI-powered learning platform that transforms traditional textbook content into engaging, visual, and auditory learning experiences. By leveraging Google's Generative AI and Agent Development Kit (ADK), Edapt creates personalized learning paths with interactive mindmaps, animated explanations, and narrated content.

Demo

Watch the full demo video →

Features

Core Capabilities

  • 📚 Intelligent Content Processing: Upload or paste textbook content for AI-powered analysis
  • 🧠 Interactive Mindmaps: Auto-generated visual knowledge maps using Mermaid diagrams
  • 🎬 Manim Animations: Mathematical and scientific concept visualizations using Manim
  • 🔊 Text-to-Speech Narration: Google Cloud TTS for clear, educational audio explanations
  • 🎯 Personalized Learning: Adaptive content generation based on user queries and learning style
  • 📊 Progress Tracking: Monitor your learning journey through multiple content modules


AI Agent Architecture

  • Orchestrator Agent: Coordinates the entire content generation workflow
  • Content Generation Agent: Creates structured learning materials from raw input
  • Animation Agent: Generates dynamic visualizations using Manim
  • TTS Agent: Converts content to natural-sounding speech
  • RAG Service: Retrieval-augmented generation for enhanced context understanding
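
The agent pipeline above can be sketched as a plain-Python workflow — a hypothetical illustration of the coordination pattern only, since the real project wires these agents together with Google's ADK:

```python
from dataclasses import dataclass, field


@dataclass
class Session:
    """Shared state passed through the sub-agents (illustrative)."""
    query: str
    artifacts: dict = field(default_factory=dict)


def content_agent(session: Session) -> None:
    # Would call Gemini to draft structured learning material.
    session.artifacts["content"] = f"Outline for: {session.query}"


def animation_agent(session: Session) -> None:
    # Would render a Manim scene from the drafted content.
    session.artifacts["video"] = "scene.mp4"


def tts_agent(session: Session) -> None:
    # Would synthesize narration audio from the drafted content.
    session.artifacts["audio"] = "narration.mp3"


def orchestrate(query: str) -> Session:
    """Run the sub-agents in dependency order over one shared session."""
    session = Session(query=query)
    for agent in (content_agent, animation_agent, tts_agent):
        agent(session)
    return session
```

The key design point is that the orchestrator owns the session and the sub-agents only read and write artifacts, so new agents (e.g. the RAG service) can be slotted into the sequence without touching the others.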

🏗️ Architecture

Frontend (React + TypeScript)

src/
├── components/Edapt/       # Core learning interface components
│   ├── TextbookInput.tsx   # Content input interface
│   ├── MindmapViewer.tsx   # Interactive mindmap display
│   ├── AudioControls.tsx   # Audio playback controls
│   ├── AnimatedVideoViewer.tsx  # Manim animation viewer
│   └── LearningSidebar.tsx # Navigation and progress
├── pages/
│   ├── Index.tsx           # Main landing page
│   ├── LearningPage.tsx    # Learning interface
│   └── UploadPage.tsx      # Content upload
└── hooks/
    └── use-content-generation.ts  # Content generation logic

Backend (Python + FastAPI)

server/
├── main.py                 # FastAPI application entry
├── agents/
│   ├── orchestrator_agent.py      # Main workflow coordinator
│   ├── content_generation_agent.py # Content creator
│   └── animation_agent.py         # Animation generator
├── tts_agent/             # Text-to-speech service
│   ├── google_tts_agent.py
│   └── audio_manager.py
├── mindmap/               # Knowledge graph services
│   ├── workflow_service.py
│   └── rag_service.py
└── animation_manim/       # Manim animation scripts

🚀 Getting Started

Prerequisites

  • Node.js (v18+)
  • Python (3.9+)
  • Google Cloud Account (for TTS and Generative AI APIs)
  • Manim (for animation generation)

Installation

1. Clone the Repository

git clone <repository-url>
cd Edapt

2. Frontend Setup

# Install dependencies
npm install

# Start development server
npm run dev

3. Backend Setup

# Navigate to server directory
cd server

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt

# Install Manim
pip install manim

# Start FastAPI server
uvicorn main:app --reload --port 8000

4. Configure Environment Variables

Create a .env file in the project root:

# Google Cloud Configuration
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
GOOGLE_CLOUD_PROJECT=your-project-id

# API Keys
GEMINI_API_KEY=your-gemini-api-key

# Server Configuration
BACKEND_URL=http://localhost:8000
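
On the server side, these variables might be read along these lines (a minimal stdlib sketch — the actual server may load its configuration differently, e.g. via python-dotenv):

```python
import os


def load_settings() -> dict:
    """Collect the environment variables listed above, with the
    documented localhost default for BACKEND_URL."""
    return {
        "credentials": os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"),
        "project": os.environ.get("GOOGLE_CLOUD_PROJECT"),
        "gemini_key": os.environ.get("GEMINI_API_KEY"),
        "backend_url": os.environ.get("BACKEND_URL", "http://localhost:8000"),
    }
```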

Google Cloud Setup

  1. Enable Required APIs:

    • Cloud Text-to-Speech API
    • Vertex AI API (for Gemini models)
  2. Create Service Account:

    gcloud iam service-accounts create edapt-service \
        --display-name="Edapt AI Service Account"
    
    gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
        --member="serviceAccount:edapt-service@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/aiplatform.user"
    
    gcloud iam service-accounts keys create key.json \
        --iam-account=edapt-service@YOUR_PROJECT_ID.iam.gserviceaccount.com
  3. Set Credentials:

    export GOOGLE_APPLICATION_CREDENTIALS="/path/to/key.json"

📖 Usage

Basic Workflow

  1. Input Content: Paste or upload textbook content
  2. Generate Mindmap: AI creates an interactive knowledge graph
  3. Request Explanations: Ask questions about specific concepts
  4. View Animations: Watch Manim-generated visual explanations
  5. Listen to Narration: Audio explanations for auditory learners

API Endpoints

Generate Learning Content

POST /api/generate-content

Request body:

{
  "query": "Explain the derivative of x^2",
  "narrative_style": "intuitive",
  "target_duration": 120,
  "include_video": true
}

Response:

{
  "session_id": "uuid",
  "status": "processing",
  "message": "Content generation started"
}

Check Generation Status

GET /api/status/{session_id}

Get Generated Content

GET /api/content/{session_id}
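
A minimal Python client for this start/poll/fetch flow could look like the following (endpoint paths and payload fields are taken from above; the polling interval and base URL are illustrative):

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8000"  # matches BACKEND_URL from the setup section


def build_request(query: str, include_video: bool = True) -> dict:
    """Assemble the /api/generate-content payload shown above."""
    return {
        "query": query,
        "narrative_style": "intuitive",
        "target_duration": 120,
        "include_video": include_video,
    }


def _get_json(path: str) -> dict:
    with urllib.request.urlopen(BASE_URL + path) as resp:
        return json.load(resp)


def generate(query: str, poll_seconds: float = 2.0) -> dict:
    """Start generation, poll until status leaves 'processing',
    then fetch the finished content."""
    req = urllib.request.Request(
        BASE_URL + "/api/generate-content",
        data=json.dumps(build_request(query)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        session_id = json.load(resp)["session_id"]
    while _get_json(f"/api/status/{session_id}").get("status") == "processing":
        time.sleep(poll_seconds)
    return _get_json(f"/api/content/{session_id}")
```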

🛠️ Technology Stack

Frontend

  • React 18 with TypeScript
  • Vite for build tooling
  • Tailwind CSS for styling
  • shadcn/ui component library
  • Radix UI primitives
  • TanStack Query for data fetching
  • Mermaid for mindmap visualization
  • Lucide React icons

Backend

  • FastAPI web framework
  • Google Generative AI (Gemini)
  • Google ADK (Agent Development Kit)
  • Google Cloud TTS
  • Manim for mathematical animations
  • Pydantic for data validation

AI/ML

  • Google Vertex AI (Gemini 1.5 Pro)
  • RAG (Retrieval-Augmented Generation)
  • Multi-agent orchestration
  • Neural TTS voices

📁 Project Structure

Edapt/
├── src/                    # Frontend React application
├── server/                 # Backend Python/FastAPI server
├── public/                 # Static assets
├── Data/                   # Training/reference data
├── media/                  # Generated media content
│   ├── audio/             # TTS audio files
│   ├── videos/            # Manim animations
│   └── images/            # Generated images
├── components.json         # shadcn/ui configuration
├── tailwind.config.ts     # Tailwind CSS configuration
├── vite.config.ts         # Vite configuration
└── package.json           # Frontend dependencies

🎯 Key Components

Text-to-Speech Agent

Located in server/tts_agent/, this module handles all audio generation:

  • Multiple voice options (male/female, various accents)
  • Customizable speaking rate and pitch
  • SSML support for natural speech patterns
  • Segment-based or full narration generation

See TTS Agent Documentation for details.
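
For reference, a narration call with the google-cloud-texttospeech client might be sketched like this (the default voice name and the request shape are illustrative assumptions, not the project's actual configuration; the library import is deferred so the snippet only needs Google credentials when synthesis actually runs):

```python
def narration_request(text: str, speaking_rate: float = 1.0,
                      pitch: float = 0.0) -> dict:
    """Collect narration parameters; field names mirror the
    google.cloud.texttospeech request objects. The voice is a
    hypothetical default."""
    return {
        "language_code": "en-US",
        "voice_name": "en-US-Neural2-F",
        "speaking_rate": speaking_rate,
        "pitch": pitch,
        "text": text,
    }


def synthesize(params: dict, out_path: str = "narration.mp3") -> None:
    """Requires `pip install google-cloud-texttospeech` and
    GOOGLE_APPLICATION_CREDENTIALS to be set."""
    from google.cloud import texttospeech

    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=params["text"]),
        voice=texttospeech.VoiceSelectionParams(
            language_code=params["language_code"], name=params["voice_name"]
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3,
            speaking_rate=params["speaking_rate"],
            pitch=params["pitch"],
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)  # MP3 bytes from the API
```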

Animation Generator

Uses Manim to create educational animations:

  • Mathematical concept visualizations
  • Step-by-step explanations
  • Custom scene generation based on content

Example animations live in server/animation_manim/.
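
A scene in this spirit — hypothetical, not one of the project's actual scripts — would follow the standard Manim pattern of a `Scene` subclass whose `construct` builds and plays animations:

```python
from manim import BLUE, UP, Axes, Create, MathTex, Scene, Write


class DerivativeOfXSquared(Scene):
    """Illustrative scene; render with:
    manim -pql derivative_scene.py DerivativeOfXSquared
    """

    def construct(self):
        axes = Axes(x_range=[-3, 3], y_range=[-1, 9])
        curve = axes.plot(lambda x: x ** 2, color=BLUE)
        label = MathTex(r"\frac{d}{dx}\,x^2 = 2x").to_edge(UP)
        self.play(Create(axes), Create(curve))  # draw axes and parabola
        self.play(Write(label))                 # write the derivative formula
        self.wait()
```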

Mindmap Service

RAG-powered knowledge graph generation:

  • Automatic concept extraction
  • Relationship mapping
  • Interactive Mermaid diagrams
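
Once concepts and relationships are extracted, emitting a Mermaid `mindmap` diagram is mostly string assembly. A minimal sketch, assuming a hypothetical root-plus-children input shape (the real service derives this structure via RAG):

```python
def to_mermaid(root: str, children: dict[str, list[str]]) -> str:
    """Render a two-level concept map as Mermaid `mindmap` source.
    Indentation depth encodes the hierarchy in Mermaid's syntax."""
    lines = ["mindmap", f"  root(({root}))"]
    for concept in children.get(root, []):
        lines.append(f"    {concept}")
        for sub in children.get(concept, []):
            lines.append(f"      {sub}")
    return "\n".join(lines)
```

The resulting string can be handed directly to the frontend's Mermaid renderer.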

🔧 Development

Running Tests

# Frontend tests
npm run test

# Backend tests
cd server
pytest

Building for Production

Frontend

npm run build

Backend

# Production-ready FastAPI
uvicorn main:app --host 0.0.0.0 --port 8000

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

🙏 Acknowledgments

  • Google Cloud for Generative AI and TTS services
  • Manim Community for the animation engine
  • shadcn/ui for the beautiful component library
  • FastAPI for the excellent Python web framework

Built with ❤️ for learners everywhere
