SaiKiran Tedla*, Junyong Lee*, Beixuan Yang, Mahmoud Afifi, Michael S. Brown
Samsung AI Center Toronto & York University
📄 Paper (PDF)
📄 Project Page & Dataset
If you use our dataset or code, please cite:
```bibtex
@inproceedings{TedlaLee2025Multispectral,
  title={{Multispectral Demosaicing via Dual Cameras}},
  author={Tedla, SaiKiran and Lee, Junyong and Yang, Beixuan and Afifi, Mahmoud and Brown, Michael S.},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}
```

This section describes how to train and test our unified demosaicing and denoising model.
```bash
conda env create -f p312.yml
conda activate p312
```

- Copy the DNG provided in the dataset link into `PyTorch/utilities/`. This DNG is used for visualization and for computing the DeltaE metric.
- Create a Weights & Biases (wandb) account for experiment tracking, and update the `wandb` settings in the YAML config files accordingly (see the sketch below).
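As a minimal sketch, the `wandb` settings in a config might look like the following; the key names here are assumptions for illustration, so match them against the fields already present in the provided YAML files:

```yaml
# Hypothetical excerpt -- key names are illustrative, not verbatim from the repo.
wandb:
  project: multispectral-demosaicing   # your wandb project name
  entity: your-username                # your wandb account or team name
```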
- Download the dataset and visualization DNG from our project page.
- Unzip all dataset files into the same directory.
- Update the config files with the correct dataset paths (see the sketch after this list).
- The train/val/test split is specified in `PyTorch/utilities/multi_spectral_dataset.py`.
- Use `PyTorch/utilities/color_space_converter` for visualization.
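Similarly, a dataset path entry in a config might look like this; the key name is again an assumption, so use whatever fields the actual configs define:

```yaml
# Hypothetical excerpt -- adapt the key name to the provided configs.
dataset_root: /path/to/unzipped/dataset   # directory containing all unzipped dataset files
```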
We provide a sample DNG used to extract metadata for rendering.
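As a rough illustration of working with the sample DNG, the sketch below loads it with `rawpy` and computes a mean CIEDE2000 DeltaE between two renderings using `scikit-image`; the file name `sample.dng` and this workflow are illustrative assumptions, not the repository's actual visualization pipeline:

```python
import numpy as np
import rawpy
from skimage.color import rgb2lab, deltaE_ciede2000

# Load the sample DNG shipped with the dataset (hypothetical file name).
with rawpy.imread("PyTorch/utilities/sample.dng") as raw:
    # Metadata useful when rendering multispectral data for display.
    print("Color matrix:", raw.color_matrix)
    print("Camera white balance:", raw.camera_whitebalance)
    rendered = raw.postprocess()  # demosaiced sRGB image, uint8, HxWx3

def mean_delta_e(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Mean CIEDE2000 DeltaE between two 8-bit sRGB images of the same size."""
    lab_a = rgb2lab(img_a.astype(np.float64) / 255.0)
    lab_b = rgb2lab(img_b.astype(np.float64) / 255.0)
    return float(deltaE_ciede2000(lab_a, lab_b).mean())
```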
Before training, set `test: false` in the corresponding config file, update the dataset paths, and configure your wandb project/entity. We provide configs for both the full-size and super-resolution experiments from our paper; a minimal config excerpt is sketched below.
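A hypothetical training-config excerpt, consolidating the sketches above (verify every field name against the provided configs):

```yaml
# Hypothetical excerpt -- field names are assumptions, not verbatim from the repo.
test: false                               # false = train mode; set to true for evaluation
dataset_root: /path/to/unzipped/dataset
wandb:
  project: multispectral-demosaicing
  entity: your-username
```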
Stage 1 – Train the RGB and MS networks:

```bash
python PyTorch/runner.py --config PyTorch/configs/stage1_full_ms.yaml
python PyTorch/runner.py --config PyTorch/configs/stage1_full_rgb.yaml
```

Stage 2 – Train the enhancement/fusion network:

```bash
python PyTorch/runner.py --config PyTorch/configs/stage2_naf_full.yaml
```
Before testing, set `test: true` in the config file and provide paths to the pretrained checkpoints (available on the project page). The `Base_models` (stage 1) and `models` (stage 2) folders should be placed inside `PyTorch/`, as in the layout sketched below.
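A plausible directory layout, assuming the checkpoint archives unzip into folders with these names:

```
PyTorch/
├── Base_models/   # stage 1 checkpoints
├── models/        # stage 2 checkpoints
├── configs/
├── runner.py
└── utilities/
```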
For example, you may run:

```bash
python PyTorch/runner.py --config PyTorch/configs/stage2_naf_full.yaml
```

- All config files are initially in test mode; switch to train mode by setting `test: false`. We provide configs for training the stage 1 and stage 2 networks.
- Checkpoints for reproducing the paper's results are available on the project page. The `Base_models` (stage 1) and `models` (stage 2) folders should be placed inside `PyTorch/`.
- For visualization, the provided DNG and color space converter will help render multispectral data.
For questions or issues, please reach out via the project page or open an issue on the repository.