A comprehensive web-based platform for creating, labeling, training, and deploying YOLO object detection models. Built with FastAPI (Python), PostgreSQL, and Next.js.
- 🎯 Dataset Management: Create and manage custom YOLO datasets
- 🏷️ Web-Based Labeling: Label images directly in the browser with an intuitive interface
- 🚀 One-Click Training: Train YOLO models with customizable parameters
- 🧪 Model Testing: Test your trained models with new images
- 📦 Model Deployment: Deploy models as REST APIs for inference
- 💾 Model Download: Download trained models for local use
- 📤 Custom Model Upload: Load and use your own pre-trained YOLO models
Backend:
- RESTful API with FastAPI
- PostgreSQL database for data persistence
- Redis for caching and task queuing
- Ultralytics YOLO for training and inference
- SQLAlchemy ORM for database operations
- JWT authentication
Frontend:
- React-based UI with Next.js 14
- TypeScript for type safety
- Tailwind CSS for styling
- Axios for API communication
- React Konva for image annotation
Database schema:
- Users and authentication
- Datasets and images
- Annotations and labels
- Models and training jobs
- Training metrics and logs
```
yolo-trainer/
├── backend/
│   ├── app/
│   │   ├── api/                      # API endpoints
│   │   │   ├── auth.py               # Authentication
│   │   │   ├── datasets.py           # Dataset management
│   │   │   ├── models_api.py         # Model management
│   │   │   ├── training.py           # Training jobs
│   │   │   └── predictions.py        # Inference
│   │   ├── core/                     # Core functionality
│   │   │   ├── config.py             # Configuration
│   │   │   └── security.py           # Security utilities
│   │   ├── db/                       # Database
│   │   │   └── session.py            # Database session
│   │   ├── models/                   # Database models
│   │   │   └── models.py             # SQLAlchemy models
│   │   ├── schemas/                  # Pydantic schemas
│   │   │   └── schemas.py            # API schemas
│   │   ├── services/                 # Business logic
│   │   │   └── training_service.py   # Training service
│   │   └── main.py                   # FastAPI application
│   ├── requirements.txt              # Python dependencies
│   ├── Dockerfile                    # Docker configuration
│   └── .env.example                  # Environment variables example
├── frontend/
│   ├── src/
│   │   ├── app/                      # Next.js app directory
│   │   │   ├── layout.tsx            # Root layout
│   │   │   ├── page.tsx              # Home page
│   │   │   └── globals.css           # Global styles
│   │   ├── components/               # React components
│   │   ├── lib/                      # Utilities
│   │   │   └── api.ts                # API client
│   │   └── types/                    # TypeScript types
│   │       └── index.ts              # Type definitions
│   ├── package.json                  # Node dependencies
│   ├── tsconfig.json                 # TypeScript config
│   ├── tailwind.config.js            # Tailwind config
│   └── Dockerfile                    # Docker configuration
├── docker-compose.yml                # Docker Compose configuration
├── uploads/                          # Uploaded files
├── models/                           # Trained models
├── datasets/                         # Dataset storage
└── README.md                         # This file
```
- Docker and Docker Compose (recommended)
- OR Python 3.11+, Node.js 18+, PostgreSQL 15+, Redis
- Clone the repository:
```bash
git clone https://github.com/npsg02/yolo-trainer.git
cd yolo-trainer
```
- Create the environment file:
```bash
cp backend/.env.example backend/.env
# Edit backend/.env with your configuration
```
- Start all services:
```bash
docker-compose up -d
```
- Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/api/v1/docs
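Once the containers are up, a quick way to confirm the backend is reachable is to request the interactive docs endpoint listed above. A minimal sketch using the `requests` package:

```python
import requests

# Hit the FastAPI interactive docs page to confirm the backend is serving requests.
response = requests.get("http://localhost:8000/api/v1/docs")
print(response.status_code)  # 200 means the API is up
```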
- Create and activate a virtual environment:
```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
- Install dependencies:
```bash
pip install -r requirements.txt
```
- Create the `.env` file:
```bash
cp .env.example .env
# Edit .env with your database and configuration
```
- Start PostgreSQL and Redis (if not using Docker)
- Run the application:
```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
- Install dependencies:
```bash
cd frontend
npm install
```
- Create the environment file:
```bash
echo "NEXT_PUBLIC_API_URL=http://localhost:8000" > .env.local
```
- Run the development server:
```bash
npm run dev
```
- Navigate to http://localhost:3000
- Click "Get Started" and register a new account
- Log in with your credentials
- Go to the "Datasets" page
- Click "Create New Dataset"
- Enter dataset name and description
- Upload images to your dataset
- Open your dataset
- Click on an image to start labeling
- Draw bounding boxes around objects
- Assign class labels to each box
- Save annotations
- Go to the "Training" page
- Click "Create Training Job"
- Select your dataset
- Choose model type (YOLOv8n/s/m/l/x)
- Configure training parameters:
- Epochs
- Batch size
- Image size
- Learning rate
- Click "Start Training"
- Monitor training progress in real-time
- Go to the "Testing" page
- Select a trained model
- Upload a test image
- View detection results with bounding boxes
- Go to the "Models" page
- Select a trained model
- Click "Deploy Model"
- Use the provided API endpoint for inference
```python
import requests

# Inference example
with open('image.jpg', 'rb') as f:
    response = requests.post(
        'http://localhost:8000/api/v1/predictions/infer',
        params={'model_id': 1, 'confidence': 0.25},
        files={'file': f},
        headers={'Authorization': 'Bearer YOUR_TOKEN'}
    )

predictions = response.json()
print(predictions)
```

- Go to the "Models" page
- Select a model
- Click "Download Model"
- Save the .pt file for local use
- Go to the "Models" page
- Create a new model entry
- Click "Upload Custom Model"
- Select your .pt or .pth file
- Deploy and use your custom model
- `POST /api/v1/auth/register` - Register new user
- `POST /api/v1/auth/login` - Login user
- `GET /api/v1/auth/me` - Get current user
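A minimal sketch of registering and obtaining a JWT with the `requests` package. The payload field names and the `access_token` response key are assumptions; check the schemas at /api/v1/docs for the real contract.

```python
import requests

BASE = "http://localhost:8000/api/v1"

# Register a user, then log in to obtain a JWT.
# Field names (email/username/password) are assumptions -- verify against /api/v1/docs.
requests.post(f"{BASE}/auth/register",
              json={"email": "me@example.com", "username": "me", "password": "changeme"})

login = requests.post(f"{BASE}/auth/login",
                      data={"username": "me@example.com", "password": "changeme"})
token = login.json().get("access_token")  # assumed response key

headers = {"Authorization": f"Bearer {token}"}  # reuse on authenticated endpoints
```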
- `GET /api/v1/datasets/` - List datasets
- `POST /api/v1/datasets/` - Create dataset
- `GET /api/v1/datasets/{id}` - Get dataset
- `PUT /api/v1/datasets/{id}` - Update dataset
- `DELETE /api/v1/datasets/{id}` - Delete dataset
- `POST /api/v1/datasets/{id}/images` - Upload image
- `GET /api/v1/datasets/{id}/images` - List images
- `POST /api/v1/datasets/{id}/images/{image_id}/annotations` - Create annotation
- `GET /api/v1/datasets/{id}/images/{image_id}/annotations` - Get annotations
- `GET /api/v1/datasets/{id}/statistics` - Get dataset statistics
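A hedged sketch of creating a dataset and uploading an image through these endpoints. The JSON field names and the `id` response key are assumptions; the multipart `file` field mirrors the inference example above.

```python
import requests

BASE = "http://localhost:8000/api/v1"
headers = {"Authorization": "Bearer YOUR_TOKEN"}

# Create a dataset ("name"/"description" and the returned "id" are assumed -- see /api/v1/docs).
dataset = requests.post(f"{BASE}/datasets/",
                        json={"name": "traffic-signs", "description": "Demo dataset"},
                        headers=headers).json()

# Upload an image to the new dataset as multipart form data.
with open("sign.jpg", "rb") as f:
    requests.post(f"{BASE}/datasets/{dataset['id']}/images",
                  files={"file": f},
                  headers=headers)
```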
- `GET /api/v1/models/` - List models
- `POST /api/v1/models/` - Create model
- `GET /api/v1/models/{id}` - Get model
- `PUT /api/v1/models/{id}` - Update model
- `DELETE /api/v1/models/{id}` - Delete model
- `POST /api/v1/models/{id}/upload` - Upload model file
- `GET /api/v1/models/{id}/download` - Download model
- `POST /api/v1/models/{id}/deploy` - Deploy model
- `POST /api/v1/models/{id}/undeploy` - Undeploy model
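As a sketch, the download endpoint can be streamed straight to a local `.pt` file (model id 1 and the output filename are placeholders):

```python
import requests

BASE = "http://localhost:8000/api/v1"
headers = {"Authorization": "Bearer YOUR_TOKEN"}

# Stream the trained weights for model id 1 to a local .pt file.
with requests.get(f"{BASE}/models/1/download", headers=headers, stream=True) as r:
    r.raise_for_status()
    with open("model.pt", "wb") as out:
        for chunk in r.iter_content(chunk_size=8192):
            out.write(chunk)
```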
- `GET /api/v1/training/` - List training jobs
- `POST /api/v1/training/` - Create training job
- `GET /api/v1/training/{id}` - Get training job
- `DELETE /api/v1/training/{id}` - Cancel/delete training job
- `GET /api/v1/training/{id}/logs` - Get training logs
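A sketch of creating a training job over the API with the parameters exposed in the UI (epochs, batch size, image size, learning rate). The JSON field names and the `id` response key are assumptions; verify them against /api/v1/docs.

```python
import requests

BASE = "http://localhost:8000/api/v1"
headers = {"Authorization": "Bearer YOUR_TOKEN"}

# Field names below are assumptions based on the UI parameters -- check the schema in /api/v1/docs.
job = requests.post(f"{BASE}/training/",
                    json={
                        "dataset_id": 1,
                        "model_type": "yolov8n",
                        "epochs": 50,
                        "batch_size": 16,
                        "image_size": 640,
                        "learning_rate": 0.01,
                    },
                    headers=headers).json()

# Poll the job status and fetch its logs.
print(requests.get(f"{BASE}/training/{job['id']}", headers=headers).json())
print(requests.get(f"{BASE}/training/{job['id']}/logs", headers=headers).json())
```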
- `POST /api/v1/predictions/infer` - Run inference
- `POST /api/v1/predictions/test/{model_id}` - Test model
Backend (.env):
```env
DATABASE_URL=postgresql://yolouser:yolopass@localhost:5432/yolodb
REDIS_URL=redis://localhost:6379
SECRET_KEY=your-secret-key-change-in-production
API_V1_STR=/api/v1
UPLOAD_DIR=./uploads
MODEL_DIR=./models
DATASET_DIR=./datasets
```

Frontend (.env.local):
```env
NEXT_PUBLIC_API_URL=http://localhost:8000
```
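SECRET_KEY is typically what signs the JWTs, so replace the placeholder with a long random value in production. One way to generate one with the Python standard library:

```python
# Generate a random value suitable for SECRET_KEY.
import secrets
print(secrets.token_urlsafe(32))
```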
Backend:
```bash
cd backend
pytest
```

Frontend:
```bash
cd frontend
npm test
```
Backend:
```bash
black app/
isort app/
```

Frontend:
```bash
npm run lint
npm run format
```

- Update environment variables for production
- Use production-ready database and Redis instances
- Enable HTTPS
- Configure proper CORS settings
- Set up monitoring and logging
```bash
docker-compose -f docker-compose.prod.yml up -d
```

- YOLOv8n (Nano) - Fastest, smallest
- YOLOv8s (Small) - Balanced
- YOLOv8m (Medium) - Good accuracy
- YOLOv8l (Large) - High accuracy
- YOLOv8x (Extra Large) - Best accuracy
- Database connection errors: Ensure PostgreSQL is running and credentials are correct
- Redis connection errors: Ensure Redis is running
- Training failures: Check GPU availability and memory
- Upload failures: Check file size limits and permissions
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
MIT License
For issues and questions, please open an issue on GitHub.
- Ultralytics YOLO for the object detection framework
- FastAPI for the excellent web framework
- Next.js for the React framework