Video Helper (Video Analysis Assistant)

🌐 Language: 中文 | English

📖 Introduction

Video Helper is an AI-powered smart video learning assistant designed to significantly improve the efficiency of video learning and knowledge review.

This project adopts a full-stack Monorepo architecture and integrates advanced LLM analysis pipelines. Users simply provide a video link (e.g., Bilibili, YouTube) or upload a local video, and the system automatically extracts core content, generating structured Mind Maps and Key Summaries.

Its core strength is interactive linkage: clicking a mind map node navigates precisely to the corresponding key content module, and clicking a content module jumps the player to the matching video segment. The built-in AI assistant also supports multi-turn Q&A and can generate practice questions from the video's knowledge points to help users consolidate what they have learned.

✨ Key Features

  • Smart Pipeline Analysis: Automated handling of video downloading, audio transcription, content extraction, and structured analysis. It supports LLM-guided keyframe extraction via FFmpeg to provide visual context alongside key summaries.
  • Dynamic Mind Map: Generates visual knowledge structure maps supporting zooming, dragging, and adding/deleting/editing nodes.
  • Bi-directional Interaction:
    • Mind Map -> Content: Click a map node to automatically locate the corresponding key content module.
    • Content -> Video: Click summary highlights to jump the video stream to the corresponding timestamp.
  • AI Q&A: Supports multi-turn dialogue with the user based on video context, explaining difficult points in depth.
  • Quiz Canvas: AI automatically generates questions based on video knowledge points, providing targeted practice and feedback to form a learning loop.
  • Flexible Editing: Supports manual adjustment of mind map logic and summary content to customize personalized learning notes.
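The smart pipeline described above can be sketched as a sequence of stages. The function names and return shapes below are hypothetical stand-ins, not the project's actual API — in the real service these stages are wired to video downloading, whisper transcription, FFmpeg keyframe extraction, and LLM summarization:

```python
# Illustrative sketch of the analysis pipeline; every function body is a
# placeholder standing in for the real downloader/transcriber/LLM calls.

def download_video(url: str) -> str:
    # Placeholder: would download the video and return a local file path.
    return f"/tmp/{url.rsplit('/', 1)[-1]}.mp4"

def transcribe(video_path: str) -> str:
    # Placeholder: would run speech-to-text (e.g. whisper) on the audio track.
    return "transcript of " + video_path

def extract_keyframes(video_path: str) -> list[str]:
    # Placeholder: would invoke FFmpeg to grab LLM-selected keyframes.
    return [f"{video_path}.frame1.jpg"]

def summarize(transcript: str) -> dict:
    # Placeholder: would ask an LLM for a mind map and key summaries.
    return {"mind_map": ["root"], "summaries": [transcript[:20]]}

def analyze(url: str) -> dict:
    # Download -> transcribe -> summarize, with keyframes attached for
    # visual context alongside the key summaries.
    path = download_video(url)
    result = summarize(transcribe(path))
    result["keyframes"] = extract_keyframes(path)
    return result

result = analyze("https://example.com/watch/abc123")
print(sorted(result))  # ['keyframes', 'mind_map', 'summaries']
```

The sketch only shows the shape of the data flow between stages, not the real implementation.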

πŸ—οΈ Architecture

This project uses a Monorepo architecture to manage the frontend and backend, keeping code maintenance efficient and the system scalable.

  • Frontend: apps/web
    • Framework: Next.js 16 (App Router)
    • Language: TypeScript, React 19
    • Styling: Tailwind CSS v4
    • Visualization: ReactFlow (Mind Map), Tiptap (Rich Text Notes)
  • Backend: services/core
    • Framework: FastAPI
    • Language: Python 3.12+
    • Database: SQLite + SQLAlchemy (ORM) + Alembic (Migrations)
    • Package Management: uv
    • AI Pipeline: Integrates whisper (transcription), LLM (analysis/summarization)
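To make the bi-directional interaction concrete, here is a hypothetical sketch of the kind of data that would need to be stored: summary segments that link a mind map node to a video timestamp. The schema and column names are illustrative assumptions — the real models live in services/core and use SQLAlchemy with Alembic migrations:

```python
# Hypothetical schema linking mind map nodes -> summary segments -> video
# timestamps; column names are illustrative, not the project's real models.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE segment (
    id INTEGER PRIMARY KEY,
    node_id TEXT NOT NULL,      -- mind map node this segment belongs to
    summary TEXT NOT NULL,      -- key summary text shown in the module
    start_sec REAL NOT NULL     -- where the video player should seek
);
""")
conn.executemany(
    "INSERT INTO segment (node_id, summary, start_sec) VALUES (?, ?, ?)",
    [("intro", "What the talk covers", 0.0),
     ("demo", "Live walkthrough", 312.5)],
)

# Mind map -> content -> video: resolve a clicked node to its segment
# and the timestamp the player should jump to.
row = conn.execute(
    "SELECT summary, start_sec FROM segment WHERE node_id = ?", ("demo",)
).fetchone()
print(row)  # ('Live walkthrough', 312.5)
```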

Architecture diagrams

Figure: System architecture overview.

Figure: Core video analysis flow.

🚀 Getting Started

Choose one of three options based on your use case:


🖥️ Option 1: Download the Client

No environment setup required. Download the pre-built installer for your platform and run it directly:

  • Windows: Setup.exe
  • macOS: dmg/zip
  • Linux: AppImage

🐳 Option 2: Deploy with Docker

Ideal for server deployment, or for anyone who wants a running instance without setting up a local dev environment.

1. Clone the repository

git clone https://github.com/LDJ-creat/video-helper.git
cd video-helper

2. Start services

docker compose up -d

3. Open the app

Visit http://localhost:3000 in your browser. Data is persisted to the ./data folder in the project root.

Port conflicts (if 8000 or 3000 is already in use)

To resolve port conflicts, switch to different ports:

# Linux / macOS
CORE_HOST_PORT=8001 WEB_HOST_PORT=3001 docker compose up -d
# Windows (PowerShell)
$env:CORE_HOST_PORT="8001"; $env:WEB_HOST_PORT="3001"; docker compose up -d
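
If you would rather not set the variables on every invocation, Docker Compose also reads a .env file placed next to docker-compose.yml for variable substitution (this is separate from the backend's services/core/.env). The variable names are the same ones used in the commands above:

```
# .env in the project root (read automatically by docker compose)
CORE_HOST_PORT=8001
WEB_HOST_PORT=3001
```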

πŸ› οΈ Option 3: Build from Source (For developers)

For contributors, developers who want to modify the code, or those running the full stack locally.

Prerequisites

  • Node.js >= 20.x
  • pnpm (install: npm install -g pnpm)
  • Python >= 3.12
  • uv (Python package manager, install: pip install uv)
  • FFmpeg (must be on the system PATH)

1. Clone the repository

git clone https://github.com/LDJ-creat/video-helper.git
cd video-helper

2. Start the backend

cd services/core

# Create config file from template
cp .env.example .env          # Linux/macOS
Copy-Item .env.example .env   # Windows (PowerShell)

# First run automatically creates a virtualenv and installs deps
# Start API service (port 8000)
uv run python main.py

Common command: uv run pytest -q (run tests)

3. Start the frontend

cd apps/web
pnpm install

cp .env.example .env.local          # Linux/macOS
Copy-Item .env.example .env.local   # Windows (PowerShell)

pnpm run dev

Open your browser at http://localhost:3000.

4. Desktop App (Electron) Startup & Build

Development mode (run from project root β€” auto-launches backend, frontend, and Electron):

node apps/desktop/scripts/dev.js

Local packaging test:

cd apps/desktop
pnpm run pack

Build full release installer (Windows only):

# Run from project root in PowerShell
powershell -ExecutionPolicy Bypass -File apps\desktop\scripts\build-all.ps1

To build Docker images locally (developer override):

docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build

⚡ Using as an AI Skill

You can also use the backend service of this project as a skill within AI editors like Claude Code, Antigravity, or GitHub Copilot. In this mode, you don't need to configure LLMs in the backend project itself; instead, the AI editor's LLM handles the analysis.

To use it:

  1. Download the source code and start the backend service.
  2. Download and install the dedicated skill from: video-helper-skill.
  3. Follow the usage guide in the skill repository to perform video analysis using your AI editor, and view the structured results in the web or desktop app.

📂 Directory Structure

video-helper/
├── apps/
│   ├── web/                # Next.js Frontend App
│   └── desktop/            # Electron Desktop App
├── services/
│   └── core/               # Python FastAPI Backend
├── docs/                   # Documentation
├── scripts/                # Automation Scripts (e.g., Smoke Tests)
├── _bmad-output/           # Architecture & Planning Artifacts
├── docker-compose.yml      # (Optional) Docker setup
└── README.md               # Project Documentation

License

This project is licensed under the MIT License – see the LICENSE file for details.

🤝 Contribution

Issues and Pull Requests are welcome! Before submitting code, please ensure it passes the project's Smoke Tests and adheres to code standards.


Created with ❤️ by the Open Source Community
