High-performance people-counting system in C++ (OpenCV, ONNX Runtime) with YOLOv8. Designed for edge devices, with a real-time analytics dashboard.
Most "AI video projects" stop at a notebook demo. Smart-Counter is built for production: an optimized C++ inference engine, persistent analytics storage, and Dockerized deployment.
- ⚡ Real-Time Detection: YOLOv8 on GPU via ONNX Runtime (C++), achieving ~100 FPS on an RTX 3060
- 🎯 Bi-Directional Counting: tracks both entry (IN) and exit (OUT) flows with virtual counting lines
- 💾 Data Persistence: SQLite database logs all analytics with automatic drift protection
- 📊 Live Dashboard: Streamlit-based real-time visualization with historical analytics
- 🐳 Fully Dockerized: one-command deployment with Docker Compose (CPU/GPU support)
- 🔧 Production Ready: modular architecture, error handling, and comprehensive logging
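The bi-directional counting feature relies on a virtual counting line. As a rough sketch of the idea (hypothetical names, not the project's actual C++ implementation), an object is counted the moment its tracked centroid changes sides of the line between consecutive frames:

```python
# Sketch of virtual-line crossing detection (illustrative only; the real
# logic lives in the C++ engine and docs/BI_DIRECTIONAL_COUNTING.md).

def line_side(p, a, b):
    """Return +1, -1, or 0 for which side of line a->b point p falls on."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (cross > 0) - (cross < 0)

class LineCounter:
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.last_side = {}   # track_id -> side of the line on the last frame
        self.count_in = 0
        self.count_out = 0

    def update(self, track_id, centroid):
        side = line_side(centroid, self.a, self.b)
        prev = self.last_side.get(track_id)
        if prev is not None and side != 0 and side != prev:
            if side > prev:
                self.count_in += 1    # crossed in one direction
            else:
                self.count_out += 1   # crossed in the other
        if side != 0:
            self.last_side[track_id] = side
```

Which sign maps to IN versus OUT depends on the line's orientation; the real detector presumably makes that configurable.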
| Component | Technology |
|---|---|
| Core Engine | C++17, OpenCV 4.8+ |
| AI Inference | ONNX Runtime (CUDA Execution Provider) |
| Detection Model | YOLOv8 (Ultralytics) |
| Tracking | Custom Centroid Tracker with state memory |
| Database | SQLite3 with analytics schema |
| Dashboard | Python 3.9+, Streamlit, Pandas |
| Build System | CMake 3.10+ |
| DevOps | Docker, Docker Compose |
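The "Custom Centroid Tracker" row can be approximated as greedy nearest-neighbour matching between frames. This is only a sketch with assumed names and thresholds; the production algorithm, including the state memory that keeps briefly occluded tracks alive, lives in src/tracker.cpp:

```python
# Illustrative centroid tracker: match each detection to the nearest
# existing track, register new tracks for unmatched detections.
import math

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.tracks = {}                 # id -> last known centroid
        self.max_distance = max_distance # hypothetical matching threshold

    def update(self, centroids):
        assigned = {}
        unclaimed = dict(self.tracks)
        for c in centroids:
            # Greedy nearest-neighbour match against unclaimed tracks.
            best_id, best_d = None, self.max_distance
            for tid, prev in unclaimed.items():
                d = math.dist(c, prev)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:
                best_id = self.next_id   # no close track: start a new one
                self.next_id += 1
            else:
                del unclaimed[best_id]
            assigned[best_id] = c
        self.tracks = assigned           # unmatched tracks are dropped here
        return assigned
```

Note one deliberate simplification: this sketch drops a track the first frame it goes unmatched, whereas a tracker with state memory would tolerate a few missed frames before deregistering.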
```
┌──────────────┐
│ Video Source │
└──────┬───────┘
       │
       ▼
┌───────────────────────────────────────────┐
│       C++ Detector (YOLO + Tracker)       │
│   ┌────────┐    ┌────────┐    ┌────────┐  │
│   │ Detect │───▶│ Track  │───▶│ Count  │  │
│   └────────┘    └────────┘    └────────┘  │
└──────────┬────────────────────────────────┘
           │
           ▼
    ┌─────────────┐
    │  SQLite DB  │ ← Persistent Analytics
    └──────┬──────┘
           │
           ▼
 ┌────────────────────┐
 │  Python Dashboard  │ ← Real-time Visualization
 └────────────────────┘
```
Detailed documentation: docs/ARCHITECTURE.md | docs/BI_DIRECTIONAL_COUNTING.md
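The SQLite stage is what makes counts survive restarts. A minimal sketch of that idea in Python, with a hypothetical table name and schema (the project's actual schema is described in its Database Schema doc):

```python
# Illustrative persistence layer: log each crossing event, then aggregate.
# Table and column names here are assumptions, not the project's real schema.
import sqlite3

def open_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS counts (
               ts        TEXT DEFAULT CURRENT_TIMESTAMP,
               direction TEXT CHECK (direction IN ('IN', 'OUT'))
           )"""
    )
    return conn

def log_crossing(conn, direction):
    # Parameterized insert; one row per counted crossing.
    conn.execute("INSERT INTO counts (direction) VALUES (?)", (direction,))
    conn.commit()

def totals(conn):
    # Aggregate IN/OUT totals for the dashboard to display.
    rows = conn.execute(
        "SELECT direction, COUNT(*) FROM counts GROUP BY direction"
    ).fetchall()
    return dict(rows)
```

Because every event is a row rather than an in-memory counter, totals can always be recomputed from the log, which is one way a system can protect against count drift.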
- Docker & Docker Compose
- NVIDIA GPU + NVIDIA Container Toolkit (for GPU acceleration)
- Linux (Ubuntu 20.04+, or similar)
```bash
# 1. Clone repository
git clone https://github.com/bigalex95/Smart-Counter.git
cd Smart-Counter

# 2. Allow X11 forwarding (for visualization)
xhost +local:docker

# 3. Build and run
docker compose up --build
```

The dashboard will be available at http://localhost:8501
GPU Support: See docs/DOCKER_COMPOSE_GPU.md
Configuration: Edit .env file or use environment variables
For development or edge deployment without Docker:
```bash
# 1. Build C++ engine
./scripts/build.sh

# 2. Run detector
./build/SmartCounter --model models/yolov8s.onnx \
                     --input data/videos/video.mp4 \
                     --db logs/analytics.db

# Or use the combined script
./scripts/build_and_run.sh
```

Available scripts:
- `build.sh`: Build the C++ project
- `run.sh`: Run the detector
- `build_and_run.sh`: Build and run in one step
- `check_cuda.sh`: Check CUDA availability
- `test_database.sh`: Test the database connection
See docs/CLI_USAGE.md for all CLI options.
```
Smart-Counter/
├── src/                  # C++ source code (Detector, Tracker, Database)
│   ├── main.cpp          # Main application entry
│   ├── detector.cpp      # YOLO inference engine
│   ├── tracker.cpp       # Centroid tracking algorithm
│   └── database.cpp      # SQLite analytics logger
├── include/              # C++ headers
├── dashboard/            # Python Streamlit analytics dashboard
│   ├── app.py            # Real-time dashboard UI
│   └── Dockerfile        # Dashboard container
├── python/               # Python utilities
│   ├── prototype.py      # Python prototype (testing)
│   └── convert.py        # ONNX model conversion
├── scripts/              # Build and deployment automation
├── models/               # ONNX models (YOLOv8)
├── data/                 # Videos and output
├── logs/                 # SQLite database (analytics.db)
├── docs/                 # Comprehensive documentation
├── docker-compose.yml    # Multi-container orchestration
├── Dockerfile            # C++ backend container
└── CMakeLists.txt        # Build configuration
```
```bash
# Run with custom video and settings
MODEL_PATH=models/yolov8s.onnx \
INPUT_VIDEO=data/videos/my_video.mp4 \
HEADLESS_MODE=true \
docker compose up
```

Run the detector directly:

```bash
./build/SmartCounter \
  --model models/yolov8s.onnx \
  --input data/videos/video.mp4 \
  --output data/output/result.mp4 \
  --db logs/analytics.db \
  --headless \
  --loop
```

Launch the dashboard:

```bash
cd dashboard
streamlit run app.py -- --db ../logs/analytics.db
```

Run the Python prototype:

```bash
source venv/bin/activate
python python/prototype.py
```

Supported environment variables:

```bash
MODEL_PATH=models/yolov8s.onnx     # Model path
INPUT_VIDEO=data/videos/video.mp4  # Input video
OUTPUT_VIDEO=data/output/out.mp4   # Output video
DB_PATH=logs/analytics.db          # Database path
HEADLESS_MODE=true                 # No GUI display
LOOP_VIDEO=true                    # Loop video playback
USE_CPU=false                      # Force CPU inference
```

CLI reference:

```bash
./build/SmartCounter --help

Options:
  --model PATH    Path to ONNX model
  --input PATH    Input video file
  --output PATH   Output video file (optional)
  --db PATH       SQLite database path
  --headless      Run without GUI
  --loop          Loop video playback
  --cpu           Use CPU instead of GPU
```

See docs/CLI_USAGE.md for advanced configuration.
| Component | Performance |
|---|---|
| Inference (GPU) | ~100 FPS (YOLOv8s) |
| Full Pipeline | ~50-80 FPS |
| Latency | < 20ms per frame |
| Memory | ~2GB GPU / ~500MB CPU |
Tested on NVIDIA GeForce RTX 3060 Laptop GPU with YOLOv8s
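The FPS and latency rows are two views of the same quantity: per-frame latency is the reciprocal of throughput, so the quoted 50-80 FPS pipeline range corresponds to roughly 12.5-20 ms per frame, consistent with the "< 20 ms" row.

```python
# Per-frame latency implied by a throughput figure.
def latency_ms(fps):
    return 1000.0 / fps

# 100 FPS inference -> 10 ms; 50-80 FPS pipeline -> 12.5-20 ms per frame.
```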
| Model | Speed | Accuracy | Recommended For |
|---|---|---|---|
| yolov8n | ⚡⚡⚡⚡⚡ | ⭐⭐⭐ | Edge devices, high FPS |
| yolov8s | ⚡⚡⚡⚡ | ⭐⭐⭐⭐ | Balanced (default) |
| yolov8m | ⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Higher accuracy |
| yolov8l | ⚡⚡ | ⭐⭐⭐⭐⭐⭐ | Maximum accuracy |
```bash
# CPU-only deployment
docker compose up

# GPU deployment
docker compose -f docker-compose-gpu.yml up
```

Cloud targets:
- AWS: ECS/EKS with GPU instances
- GCP: Cloud Run / GKE with T4/V100
- Azure: Container Instances with GPU

Edge targets:
- NVIDIA Jetson (Nano, Xavier, Orin): optimized for edge AI
- Intel NUC: CPU inference mode
- Custom hardware: via ONNX Runtime compatibility
See docs/DEPLOYMENT.md for detailed deployment guides.
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Ultralytics for YOLOv8
- Microsoft for ONNX Runtime
- OpenCV community
- All open-source contributors
Alibek Erkabayev - @bigalex95
Project Link: https://github.com/bigalex95/Smart-Counter
- Quick Start Guide: get running in 5 minutes
- Architecture Overview: system design and data flow
- Bi-Directional Counting: how counting works
- Database Schema: analytics storage structure
- CLI Usage: command-line reference
- Docker Compose Guide: container deployment
- Deployment Guide: production deployment strategies
- Tech Stack Details: technology deep dive
Most computer vision projects are:
- ❌ Python-only (slow, not production-ready)
- ❌ No tracking (just detection)
- ❌ No persistence (analytics lost on restart)
- ❌ No deployment story (hard to run)

Smart-Counter is:
- ✅ Production C++: optimized for real-world performance
- ✅ Complete Pipeline: Detection → Tracking → Counting → Analytics
- ✅ Data Persistence: SQLite with automatic logging
- ✅ Deploy Anywhere: Docker, cloud, edge devices
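That Detection → Tracking → Counting → Analytics pipeline boils down to a per-frame loop. A hedged skeleton with every stage stubbed (the production stages are the C++ Detector, Tracker, and Database classes in src/; all names here are illustrative):

```python
# Per-frame pipeline skeleton: detect -> track -> count -> persist.
# Each stage is passed in as a callable so the stand-ins below can be
# swapped for real implementations.

def run_pipeline(frames, detect, tracker, counter, log):
    for frame in frames:
        detections = detect(frame)           # stage 1: object detection
        tracks = tracker(detections)         # stage 2: id assignment
        for tid, centroid in tracks.items():
            event = counter(tid, centroid)   # stage 3: line-crossing test
            if event:
                log(event)                   # stage 4: analytics persistence
```

Keeping the stages decoupled like this is what lets the roadmap items below (ByteTrack, multi-zone counting) slot in without rewriting the loop.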
What's Working Now:
- ✅ C++ Inference Engine: YOLOv8 via ONNX Runtime with GPU/CPU support
- ✅ Custom Centroid Tracker: simple, fast tracking algorithm
- ✅ Bi-Directional Counting: tracks IN/OUT flows across the counting line
- ✅ SQLite Database: persistent analytics storage with drift protection
- ✅ Streamlit Dashboard: real-time visualization and historical data
- ✅ Docker Deployment: multi-container setup with docker-compose
- ✅ CLI Interface: full command-line control with multiple options
- ✅ Video Recording: outputs processed video with annotations
Planned Improvements:
- Advanced Tracking: replace the simple tracker with BoT-SORT/ByteTrack
- Multi-Zone Support: define multiple counting zones
- Heatmap Generation: visualize traffic patterns
- REST API: HTTP API for integration with other systems
- WebSocket Streaming: real-time video feed to the dashboard
- Model Optimization: TensorRT support for even faster inference
- Multi-Camera Support: process multiple video streams
- Alert System: notifications for crowd thresholds
- Time-Series Analytics: advanced statistical analysis
- Cloud Storage Integration: S3/GCS for video archival
Contributions welcome! See Contributing section.
Built with ❤️ for production ML deployment