Multiple Video Editing OpenCV is a specialized tool for synchronized multi-camera video editing and annotation, tailored to human activity recognition research. It enables researchers to efficiently process, synchronize, and annotate video data from multiple camera perspectives simultaneously.
This is a Python-based video editing and annotation platform that allows researchers to:
- Synchronize multiple camera views from different perspectives (robot view, RGB cameras, omnidirectional cameras)
- Interactively navigate and edit video sequences with frame-level precision
- Create annotated video segments for machine learning datasets
- Export synchronized clips for training activity recognition models
- Manage action classes with automatic folder organization
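The actual editing loop lives in the tool's core module; purely as an illustration of what lockstep multi-camera reading involves, here is a minimal sketch. The function name and the duck-typed capture interface are assumptions for illustration, not the tool's API:

```python
def read_synchronized(captures):
    """Read one frame from every camera; return None once any stream ends.

    `captures` maps a view name to any object with a cv2.VideoCapture-style
    read() -> (ok, frame) method, so the logic can be exercised without
    real video files.
    """
    frames = {}
    for view, cap in captures.items():
        ok, frame = cap.read()
        if not ok:
            return None  # stop at the end of the shortest stream
        frames[view] = frame
    return frames
```

In practice each value would be a `cv2.VideoCapture` opened on one camera view's file, and the returned dictionary holds one synchronized frame per view.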
Primary Use Cases:
- Dataset Creation: Generate annotated video datasets for human activity recognition research
- Robotics Research: Process multi-perspective robot house recordings for behavior analysis
- Human Activity Analysis: Study and annotate human activities from multiple viewpoints
- Ambient Assisted Living: Analyze daily activities in smart home environments
- Multi-camera Synchronization: Align and process videos from different camera systems
- Computer Vision Research: Create ground truth data for action recognition algorithms
Research Applications:
- Human-robot interaction studies
- Activity recognition in assisted living scenarios
- Multi-view behavior analysis
- Smart home monitoring systems
- Eldercare technology research
- Surveillance and security applications
Tip
This tool was specifically developed for the RHM (Robot House Multi-view) dataset but can be adapted for any multi-camera video synchronization and annotation tasks.
Author: Mohammad Hossein Bamorovat Abadi
This work is part of the Human Activity Recognition project at Robot House.
Project Page: RHM Dataset
This code is implemented in Python using the OpenCV library.
Important
If you use the RHM dataset or this code in your research, please cite the relevant papers below.
Mohammad Hossein Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, and Farshid Amirabdollahian. "RHM: Robot House Multi-view Human Activity Recognition Dataset." ACHI 2023, Venice, Italy, IARIA.
BibTeX:
@inproceedings{bamorovat2023rhm,
title={RHM: Robot House Multi-view Human Activity Recognition Dataset},
author={Bamorovat Abadi, Mohammad Hossein and Shahabian Alashti, Mohamad Reza and Holthaus, Patrick and Menon, Catherine and Amirabdollahian, Farshid},
booktitle={ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions},
year={2023},
organization={IARIA},
address={Venice, Italy}
}

Mohammad Hossein Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, and Farshid Amirabdollahian. "Robot house human activity recognition dataset." 4th UK-RAS Conference: Robotics at Home (UKRAS21), 19–20. Hatfield, UK, 2021.
BibTeX:
@inproceedings{bamorovat2021robot,
title={Robot house human activity recognition dataset},
author={Bamorovat Abadi, Mohammad Hossein and Shahabian Alashti, Mohamad Reza and Holthaus, Patrick and Menon, Catherine and Amirabdollahian, Farshid},
booktitle={4th UK-RAS Conference: Robotics at Home (UKRAS21)},
pages={19--20},
year={2021},
address={Hatfield, UK}
}

Mohammad Hossein Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, and Farshid Amirabdollahian. "Robotic Vision and Multi-View Synergy: Action and activity recognition in assisted living scenarios." BioRob 2024, Heidelberg, Germany, IEEE.
BibTeX:
@inproceedings{bamorovat2024robotic,
title={Robotic Vision and Multi-View Synergy: Action and activity recognition in assisted living scenarios},
author={Bamorovat Abadi, Mohammad Hossein and Shahabian Alashti, Mohamad Reza and Holthaus, Patrick and Menon, Catherine and Amirabdollahian, Farshid},
booktitle={2024 10th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob)},
year={2024},
organization={IEEE},
address={Heidelberg, Germany}
}

Important
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
Tip
When contributing, ensure your code follows the existing modular structure and includes appropriate documentation.
Important
This tool requires Python 3.8+ and OpenCV 4.0+.
```shell
# Clone the repository
git clone https://github.com/Bamorovat/MultipleVideoEditingOpenCV.git
cd MultipleVideoEditingOpenCV

# Install dependencies
pip install -r requirements.txt
```

The tool features a modern modular architecture with separate configuration and core logic components.
Tip
Use the command line interface for the best experience with built-in help and configuration options.
Option 1: Command Line Interface (Recommended)
```shell
# Basic usage with default video '10'
python main.py

# Specify a different video
python main.py --video 15

# Custom paths
python main.py --video-path /path/to/videos --action-path /path/to/output

# Show configuration
python main.py --list-config

# Help
python main.py --help
```

Option 2: Programmatic Usage
```python
from src.core import MultipleVideoEditor

# Initialize the editor
editor = MultipleVideoEditor()

# Run with default video '10'
editor.run()

# Or specify custom paths and video
editor = MultipleVideoEditor(
    video_path='/custom/video/path/',
    action_path='/custom/action/path/'
)
editor.run('your_video_name')
```

The project is organized with a clean modular folder structure:
```
MultipleVideoEditingOpenCV/
├── main.py              # Entry point with CLI interface
├── requirements.txt     # Dependencies
├── README.md            # Documentation
└── src/                 # Source code package
    ├── __init__.py
    ├── config/          # Configuration module
    │   ├── __init__.py
    │   └── config.py    # Settings, constants, and color definitions
    └── core/            # Core functionality module
        ├── __init__.py
        └── video_editor.py  # Main MultipleVideoEditor class
```
Key Components:
- main.py: Entry point with command-line interface and argument parsing
- src/core/video_editor.py: Core MultipleVideoEditor class with all video processing logic
- src/config/config.py: Configuration settings, constants, and color definitions
- Modular Architecture: Clean separation of concerns with dedicated config and core logic
- Multi-camera Support: Synchronize and edit videos from 4 different camera views:
- Robot view camera
- RGB living room camera
- Omnidirectional living room camera
- Sofa view camera
- Interactive Controls:
  - W: Play | S: Stay/Pause | A: Previous Frame | D: Next Frame
  - Z: Set Start Frame | X: Set End Frame & Record | C: Take Snapshot
  - Q: Slower | E: Faster | Esc: Exit
- Action Classification: Organize recorded segments by action classes with automatic folder creation
- Frame Synchronization: Configurable trimming offsets for perfect camera synchronization
- Command Line Interface: Full CLI with help, configuration display, and path customization
- Flexible Configuration: Easily customizable paths, dimensions, codecs, and settings
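The frame-synchronization offsets in the feature list above amount to a per-view shift of the frame index: each camera skips a fixed number of initial frames so that all views line up. A minimal sketch of that idea, with made-up offset values for illustration (the real values live in the configuration file):

```python
# Hypothetical per-view trimming offsets: frames to skip at the start of
# each stream so that all views align. Values here are illustrative only.
TRIM_OFFSETS = {
    "robot": 0,
    "rgbd_livingroom": 12,
    "omni_livingroom": 7,
    "rgbd_sofa": 3,
}

def aligned_index(view, logical_frame, offsets=TRIM_OFFSETS):
    """Map a logical (synchronized) frame number to a raw frame index in one view."""
    return logical_frame + offsets[view]
```

With offsets like these, logical frame 100 would be read as raw frame 100 from the robot view but raw frame 112 from the RGB living room view.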
The tool uses a centralized configuration system in src/config/config.py. Key settings include:
- Camera Views: Robot, RGB livingroom, Omni livingroom, Sofa
- Trimming Offsets: Frame synchronization offsets for each camera
- Video Settings: Codec (MJPG), frame rate (30 FPS), dimensions per camera
- Control Mappings: Keyboard shortcuts and window positions
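A configuration module covering those settings might look like the following sketch. The constant names and the exact key bindings beyond those documented above are assumptions for illustration, not the actual contents of src/config/config.py:

```python
# Illustrative settings in the spirit of src/config/config.py;
# actual names and values in the real file may differ.
CAMERA_VIEWS = ["robot", "rgbd_livingroom", "omni_livingroom", "rgbd_sofa"]

VIDEO_CODEC = "MJPG"   # FourCC string, e.g. for cv2.VideoWriter_fourcc(*VIDEO_CODEC)
FRAME_RATE = 30        # FPS, per the video settings listed above

# Keyboard control mapping (key -> action), following the documented shortcuts
CONTROLS = {
    "w": "play", "s": "pause", "a": "prev_frame", "d": "next_frame",
    "z": "set_start", "x": "set_end_and_record", "c": "snapshot",
    "q": "slower", "e": "faster",
}
```

Keeping these values in one module is what lets you retarget paths, cameras, and offsets without touching the editor logic.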
Tip
You can customize paths, camera settings, and synchronization offsets in the configuration file without modifying the core logic.
Warning
Ensure your video dataset follows the expected directory structure with synchronized multi-camera recordings.
Expected directory structure:
```
video_dataset/
├── robot_view/
│   ├── 10.avi
│   ├── 15.avi
│   └── ...
├── rgbd_livingroom/
│   ├── 10.avi
│   ├── 15.avi
│   └── ...
├── omni_livingroom/
│   ├── 10.avi
│   ├── 15.avi
│   └── ...
└── rgbd_sofa/
    ├── 10.avi
    ├── 15.avi
    └── ...
```
Tip
Place your video files in separate directories for each camera view to maintain organization and enable proper synchronization.
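A quick way to sanity-check a dataset against this layout is a small script like the following. The view folder names follow the tree above; the function name is illustrative, and you should adapt the list if your directories differ:

```python
from pathlib import Path

# View directories per the expected dataset layout
EXPECTED_VIEWS = ["robot_view", "rgbd_livingroom", "omni_livingroom", "rgbd_sofa"]

def missing_clips(dataset_root, video_name):
    """Return the camera views that lack <video_name>.avi under dataset_root."""
    root = Path(dataset_root)
    return [view for view in EXPECTED_VIEWS
            if not (root / view / f"{video_name}.avi").is_file()]
```

Running this for each video name before an editing session catches unsynchronized or incomplete recordings early, instead of failing mid-edit when one capture cannot be opened.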