Code for the Multi-Modal Recyclable Waste Management Dataset workflow, using synchronized RGB, depth, video, and impact audio.
Temporary / complementary repository: this repository complements the dataset repository and the related publication material. It will be updated and further cleaned as time permits.
- Dataset pipeline: tools for dataset creation/refinement/annotation/validation (scripts + helpers)
- Classification pipeline: experiments, training utilities, and inference scripts for multimodal MSW classification
To keep this GitHub repository lightweight and reproducible:
- The dataset itself is not included (see the Zenodo link below).
- Model weights/checkpoints are not included (`*.keras`, `*.h5`, `*.pt`, `*.pth`, `*.ckpt`, etc.).
- Local data folders such as `dataset/` and `predict_data/` are ignored.
- Paper DOI: [DOI_PLACEHOLDER_TO_BE_ADDED_AFTER_PUBLICATION]
- Zenodo dataset link: https://zenodo.org/records/18364275
- `DatasetPipeline/` — dataset curation, annotation tools, validators, utilities
- `Capture/` — capture scripts and helper utilities
- `DataExploration/` — exploratory notebooks (image/audio analysis)
- `NeuralNetwork/` — training code + environment files
- `OtherClassifiers/` — additional classifier experiments
- `Prediction/` — inference / prediction scripts
Environment references are provided in:
- `NeuralNetwork/nnenv.yml`
- `NeuralNetwork/gpu_nnenv.yml`
Example (Conda):

- Create the environment (CPU example):

  ```bash
  conda env create -f NeuralNetwork/nnenv.yml
  conda activate nnenv
  ```

- Run scripts from the relevant submodule (`DatasetPipeline/`, `NeuralNetwork/`, `Prediction/`).
Notes:
- Some scripts expect local paths to the dataset; those inputs are intended to be provided via CLI args and/or config variables (dataset not shipped here).
- If you encounter missing-package errors, use the environment YAMLs as the source of truth.
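As an illustration of the CLI-args pattern mentioned in the notes, a script in this repository might accept a local dataset path roughly as follows. This is a hedged sketch: `--dataset-dir`, `--modalities`, and the defaults shown are assumed names for illustration, not the actual interfaces of any script here, so check each script's `--help` for the real arguments.

```python
# Hypothetical sketch of CLI-based dataset-path handling;
# argument names here are illustrative, not the repository's actual interface.
import argparse
from pathlib import Path


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Multimodal MSW classification script (illustrative)"
    )
    parser.add_argument(
        "--dataset-dir",
        type=Path,
        required=True,
        help="Path to a local copy of the Zenodo dataset (not shipped with this repo)",
    )
    parser.add_argument(
        "--modalities",
        nargs="+",
        default=["rgb", "depth", "audio"],
        help="Which synchronized modalities to load",
    )
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()
    print(f"Loading {', '.join(args.modalities)} data from {args.dataset_dir}")
```

Passing the dataset location this way keeps the repository itself free of hard-coded local paths.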
Please cite the paper and the dataset/repository resources:
- Paper DOI:
[DOI_PLACEHOLDER_TO_BE_ADDED_AFTER_PUBLICATION] - Dataset (Zenodo):
https://zenodo.org/records/18364275
- This is a temporary, complementary code repository for the dataset repository and will be updated.
- Final code retouching/restructuring and repository preparation were completed with assistance from GitHub Copilot (GPT-5.2).