This Python project provides a class-based implementation for processing videos from the RHM (Robot House Multi-view) dataset. It extracts selected frames, applies several feature extraction techniques, and saves the results in an organized folder structure, aiding the training of deep learning models for human activity recognition.
- Object-oriented design with clean, modular architecture
- Frame extraction from RHM dataset videos
- Seven different feature extraction methods:
  - Motion Aggregation (MAg)
  - Frame Variation Mapping (FVM)
  - Differential Motion Trajectory (DMT)
  - Normal Frame Extraction
  - Subtract Frame Method
  - Optical Flow
  - Motion History Images (MHI)
- Configurable and optional feature extraction
- Flexible frame extraction count and debug options
- Automated folder creation for frame storage
- Easy-to-use API with method-based feature extraction
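Among the methods listed above, Motion History Images are a classic temporal encoding: pixels that moved recently are bright, and older motion fades over time. The following NumPy sketch illustrates only the general MHI idea; it is not this project's `extract_motion_history_images` implementation, and the function name and parameters (`motion_thresh`, `tau`) are hypothetical.

```python
import numpy as np

def motion_history(frames, motion_thresh=30, tau=5.0):
    """Toy MHI: pixels that just moved are set to tau; others decay by 1 per frame."""
    history = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, cur in zip(frames, frames[1:]):
        # Absolute per-pixel difference between consecutive grayscale frames
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        moving = diff > motion_thresh
        history = np.where(moving, tau, np.maximum(history - 1.0, 0.0))
    return history
```

OpenCV-based pipelines often compute the same decay on thresholded frame differences; the NumPy form above just keeps the sketch dependency-light.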
> **Note:** The project follows object-oriented design principles for better maintainability and extensibility.
The project is built around the RHMFeatureExtractor class, which provides:
- Individual methods for each feature extraction technique
- Configurable feature enabling/disabling
- Batch processing of entire datasets or single videos
- Debug and visualization capabilities
> **Tip:** You can easily enable or disable specific feature extraction methods using the `set_feature_flags()` method to optimize processing time and storage requirements.
```python
from rhm_feature_extractor import RHMFeatureExtractor

# Initialize the extractor
extractor = RHMFeatureExtractor(rhm_path="/path/to/dataset")

# Configure which features to extract
extractor.set_feature_flags(
    motion_aggregation=True,
    optical_flow=True,
    motion_history_images=True,
    frame_variation_mapper=False,          # Disable this feature
    differential_motion_trajectory=False,  # Disable this feature
    normal=False,
    subtract=False
)
```

- Python 3.x
- OpenCV (`cv2` library)
- NumPy
- Pandas

1. Ensure Python 3.x is installed.
2. Install the required Python packages:

   ```bash
   pip install opencv-python numpy pandas
   ```
> **Tip:** For first-time users, start with the basic usage pattern and default settings.
```python
from src import RHMFeatureExtractor

# Initialize the feature extractor
extractor = RHMFeatureExtractor(
    rhm_path="/path/to/your/RHM/dataset",
    extract_frame_number=17,
    debug=False
)

# Process the entire dataset with default settings
extractor.process_dataset()
```

> **Important:** For large datasets, consider disabling computationally expensive features to reduce processing time.
```python
from src import RHMFeatureExtractor

# Initialize with custom settings
extractor = RHMFeatureExtractor(
    rhm_path="/path/to/your/RHM/dataset",
    extract_frame_number=20,
    debug=True,
    show_debug=False
)

# Configure specific features
extractor.set_feature_flags(
    motion_aggregation=True,
    optical_flow=True,
    motion_history_images=True,
    # Disable expensive features for faster processing
    differential_motion_trajectory=False,
    frame_variation_mapper=False,
    normal=False,
    subtract=False
)

# Process the dataset
extractor.process_dataset()
```

```python
# Process a specific video file
extractor.process_video(
    video_path="/path/to/video.avi",
    split="train",
    view="FrontView",
    action_class="Walking",
    video_name="1"
)
```

> **Tip:** For quick testing, you can use the command line interface with default settings.
```bash
python main.py
```

```bash
python examples.py
```

- `__init__(rhm_path, extract_frame_number, debug, show_debug)`: Initialize the extractor
- `set_feature_flags(**kwargs)`: Configure which features to extract
- `process_dataset()`: Process the entire RHM dataset
- `process_video(video_path, split, view, action_class, video_name)`: Process a single video
- `extract_motion_aggregation(frames)`: Motion aggregation feature
- `extract_frame_variation_mapping(first_frame, last_frame)`: Frame variation mapping
- `extract_differential_motion_trajectory(frames)`: Differential motion trajectory
- `extract_optical_flow(frames)`: Optical flow computation
- `extract_motion_history_images(frames)`: Motion history images
- `extract_subtract_frames(frames)`: Frame subtraction method
- `get_enabled_features()`: Get list of currently enabled features
- `create_folder_structure()`: Create organized output directories
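The `extract_frame_number` parameter implies that a fixed number of frames is sampled from each video. How the project selects those frames is not documented here; as one plausible approach, evenly spaced sampling could be sketched as follows (a hypothetical helper, not the project's code):

```python
def evenly_spaced_indices(total_frames, n):
    """Pick n frame indices spread evenly across a video of total_frames frames."""
    if total_frames <= 0 or n <= 0:
        return []
    n = min(n, total_frames)
    step = total_frames / n
    # Sample the middle of each of the n equal segments
    return [int(step * i + step / 2) for i in range(n)]
```

For example, `evenly_spaced_indices(100, 4)` selects the midpoints of four equal segments, avoiding a bias toward the start of the clip.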
> **Note:** Ensure all dependencies are properly installed before running the feature extractor.
- Python 3.7+
- OpenCV (`cv2` library)
- NumPy

1. Ensure Python 3.7+ is installed.
2. Install the required Python packages:

   ```bash
   pip install opencv-python numpy
   ```
> **Warning:** Make sure you have sufficient disk space, as feature extraction can generate large amounts of data depending on your video dataset size.
> **Note:** The RHM dataset is required to use this feature extractor. Contact the dataset maintainers for access.
For assistance with obtaining the RHM dataset, send an email to Patrick at p.holthaus@herts.ac.uk. More information about the Robot House, where the dataset was collected, can be found at Robot House Website.
> **Important:** If you use the RHM dataset or this code in your research, please cite the relevant papers below.
Mohammad Hossein Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, and Farshid Amirabdollahian. "RHM: Robot House Multi-view Human Activity Recognition Dataset." ACHI 2023, Venice, Italy, IARIA.
BibTeX:
```bibtex
@inproceedings{bamorovat2023rhm,
  title={RHM: Robot House Multi-view Human Activity Recognition Dataset},
  author={Bamorovat Abadi, Mohammad Hossein and Shahabian Alashti, Mohamad Reza and Holthaus, Patrick and Menon, Catherine and Amirabdollahian, Farshid},
  booktitle={ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions},
  year={2023},
  organization={IARIA},
  address={Venice, Italy}
}
```

Mohammad Hossein Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, and Farshid Amirabdollahian. "Robot house human activity recognition dataset." 4th UK-RAS Conference: Robotics at Home (UKRAS21), 19–20. Hatfield, UK, 2021.
BibTeX:
```bibtex
@inproceedings{bamorovat2021robot,
  title={Robot house human activity recognition dataset},
  author={Bamorovat Abadi, Mohammad Hossein and Shahabian Alashti, Mohamad Reza and Holthaus, Patrick and Menon, Catherine and Amirabdollahian, Farshid},
  booktitle={4th UK-RAS Conference: Robotics at Home (UKRAS21)},
  pages={19--20},
  year={2021},
  address={Hatfield, UK}
}
```

Mohammad Hossein Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, and Farshid Amirabdollahian. "Robotic Vision and Multi-View Synergy: Action and activity recognition in assisted living scenarios." BioRob 2024, Heidelberg, Germany, IEEE.
BibTeX:
```bibtex
@inproceedings{bamorovat2024robotic,
  title={Robotic Vision and Multi-View Synergy: Action and activity recognition in assisted living scenarios},
  author={Bamorovat Abadi, Mohammad Hossein and Shahabian Alashti, Mohamad Reza and Holthaus, Patrick and Menon, Catherine and Amirabdollahian, Farshid},
  booktitle={2024 10th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob)},
  year={2024},
  organization={IEEE},
  address={Heidelberg, Germany}
}
```

📄 Paper Links:
RHM
ββ FrontView
β ββ train
β β ββ 14 classes (folders)
β ββ test
β β ββ 14 ...
β ββ val
β ββ 14 ...
ββ BackView
β ββ ...
ββ OmniView
β ββ ...
ββ RobotView
ββ ...
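The layout above can be mirrored when preparing output directories. As a rough sketch (a hypothetical helper, not the project's `create_folder_structure()` method), the view/split skeleton could be created with `pathlib`:

```python
from pathlib import Path

VIEWS = ["FrontView", "BackView", "OmniView", "RobotView"]
SPLITS = ["train", "test", "val"]

def build_rhm_skeleton(root):
    """Create the empty RHM view/split directory skeleton under root."""
    root = Path(root)
    for view in VIEWS:
        for split in SPLITS:
            # The 14 per-split action-class folders would be added below each split
            (root / view / split).mkdir(parents=True, exist_ok=True)
    return root
```

Using `exist_ok=True` makes the helper idempotent, so re-running it over an existing dataset tree is harmless.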
- Mohammad Hossein Bamorovat Abadi
- Contact: m.bamorovvat@gmail.com
- Personal Webpage: https://bamorovat.com/
- Project Page: https://bamorovat.com/projects/rhm-dataset.html
- 2.0.0 (2025/09/08): Refactored to class-based architecture with modular design
- 1.0.0 (2024/01/05): Initial procedural implementation
