Official PyTorch Implementation for the paper "Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction" (IEEE TCI 2025).
This repository implements the Self-Guided Deep Image Prior (DIP) technique. While standard DIP leverages the inductive bias of Convolutional Neural Networks (CNNs) for unsupervised image restoration, it often requires careful early stopping to avoid overfitting to noise.
Our approach introduces a Self-Guidance mechanism that utilizes the model's own predictions during training to regularize the optimization process. This enables robust image restoration (including denoising, inpainting, and MRI reconstruction) without the need for ground-truth supervision.
- Self-Guided DIP: A novel regularization strategy to prevent overfitting in DIP.
- Versatile Architectures: Implementations of various backbones including standard UNet and custom Deep/Shallow variants.
- Unsupervised Learning: Perform high-quality image restoration without external training datasets.
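To illustrate the core idea (this is a conceptual sketch, not the paper's exact algorithm), the loop below combines a standard DIP data-fidelity term with a self-guidance term that pulls the output toward an exponential moving average of the model's own predictions. The tiny CNN, the guidance weight `lam`, and the EMA rate `beta` are illustrative assumptions; the repository uses UNet-style backbones:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Noisy observation y (a random tensor standing in for a real image)
y = torch.rand(1, 1, 32, 32)

# Tiny CNN standing in for the repo's UNet backbones (illustrative only)
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
z = torch.randn(1, 1, 32, 32)          # fixed noise input, as in standard DIP
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

ema = None                              # running average of the model's own outputs
beta, lam = 0.99, 0.5                   # EMA rate and guidance weight (assumed values)

for step in range(200):
    opt.zero_grad()
    x = net(z)
    # Update the self-guidance target from the model's own (detached) prediction
    ema = x.detach() if ema is None else beta * ema + (1 - beta) * x.detach()
    # Data fidelity plus self-guidance toward the averaged prediction
    loss = ((x - y) ** 2).mean() + lam * ((x - ema) ** 2).mean()
    loss.backward()
    opt.step()

restored = net(z).detach()
```

The EMA target changes slowly relative to the network, which damps the high-frequency fitting that causes DIP to overfit noise late in training.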
📦 Self-Guided-DIP
┣ 📂 models # Core model architectures (Baseline & Proposed)
┣ 📂 unet # UNet implementation details
┣ 📂 utils # Utility functions (Dataloaders, Metrics, Visualization)
┣ 📂 data # (Place your dataset files here)
┣ 📜 self_guided_DIP_demo.ipynb # Main entry point: Jupyter Notebook for training/testing
┣ 📜 two_channel_dataset_DIP_github_demo.ipynb # Handles MRI k-space data loading, VDRS mask generation, and data preprocessing.
┗ 📜 requirements.txt # Python dependencies
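As background for the k-space notebook: a variable-density random sampling (VDRS) mask keeps a fully sampled low-frequency center and samples remaining phase-encode lines with probability decaying away from the center. The sketch below is a generic 1D version with illustrative parameters (`center_frac`, `accel`, `decay`), not the notebook's exact settings:

```python
import numpy as np

def vdrs_mask(n_lines=256, center_frac=0.08, accel=4, decay=2.0, seed=0):
    """1D variable-density random sampling mask over phase-encode lines."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)

    # Fully sampled low-frequency center band
    n_center = max(1, int(round(center_frac * n_lines)))
    c0 = (n_lines - n_center) // 2
    mask[c0:c0 + n_center] = True

    # Sampling probability decays polynomially with distance from the center
    dist = np.abs(np.arange(n_lines) - n_lines / 2) / (n_lines / 2)
    prob = (1.0 - dist) ** decay

    # Scale so the expected line count outside the center matches the target
    outside = ~mask
    target = n_lines / accel - n_center
    prob = prob * (target / prob[outside].sum())
    mask[outside] = rng.random(outside.sum()) < np.clip(prob[outside], 0.0, 1.0)
    return mask
```

In practice such a 1D mask is broadcast across readout to undersample 2D k-space before reconstruction.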
```shell
git clone https://github.com/sjames40/Self-Guided-DIP.git
cd Self-Guided-DIP
```

We recommend using Miniconda to manage dependencies.

```shell
# Create a new conda environment
conda create --name self-guided-dip python=3.9
conda activate self-guided-dip

# Install dependencies
pip install -r requirements.txt
```

To reproduce the results, please download the specific k-space datasets used in our experiments.
Datasets:
- fastMRI Dataset website (the download may take some time)
- Stanford 2D FSE website (or download our copy via Google Drive)

Setup:
We recommend downloading the fastMRI dataset first, as it is the primary dataset used to generate the results in self_guided_DIP_demo.ipynb.

Download instructions:
- For fastMRI: please visit the official website to obtain the license/agreement and then download the data.
- For Stanford 2D FSE: the full dataset is available on the official website. We also provide a partial dataset (subset) via Google Drive for quick testing.

Once downloaded, unzip the files and place them into the project directory (e.g., inside a folder named data, or as specified in the notebook).
The core logic and experiments are contained in the Jupyter Notebook.
- Launch Jupyter Notebook:
jupyter notebook
- Open self_guided_DIP_demo.ipynb.
- Run the cells sequentially to initialize the model, load data, and perform restoration.
If you find this code useful for your research, please cite our paper:
@article{DBLP:journals/tci/LiangBQWR25,
author = {Shijun Liang and
Evan Bell and
Qing Qu and
Rongrong Wang and
Saiprasad Ravishankar},
title = {Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction},
journal = {{IEEE} Trans. Computational Imaging},
volume = {11},
pages = {435--451},
year = {2025},
url = {https://doi.org/10.1109/TCI.2025.3540706},
doi = {10.1109/TCI.2025.3540706}
}

For questions regarding the paper or code, please contact:
- Shijun Liang: liangs16@msu.edu
- Haijie Yuan: yuanhai1@msu.edu
- Prof. Saiprasad Ravishankar: ravisha3@msu.edu