
Example of super-resolution with randomized smoothing vs. with adversarial training, under adversarial attack.
This repository contains the implementation and experiments for the bachelor's thesis "Randomized Smoothing as an Adversarial Defense Mechanism for Inverse Problems" submitted at the Technical University of Munich. The work explores randomized smoothing as a defense against adversarial attacks in image super-resolution and compares it to adversarial training.
Randomized smoothing is a mechanism that can achieve certifiable robustness of neural network-based classifiers against ℓ₂-bounded adversarial perturbations.

Deep neural networks for super-resolution are vulnerable to adversarial attacks: subtle input perturbations that cause dramatic output distortions while remaining imperceptible to humans. Empirical defenses such as adversarial training exist, but they lack theoretical guarantees (especially against unknown attacks) and often degrade output quality. This repository implements randomized smoothing for super-resolution, which offers certifiable robustness within a provable ℓ₂ radius.
randomized-smoothing-adv-sr/
├── data/
│ ├── README.md
│ └── imagenet-mini/
├── models/
│ ├── __init__.py
│ └── unet_sr.py # U-Net for super-resolution
├── notebooks/
│ ├── attacksrs_compare.ipynb # Comparison of attacks
│ ├── optimize_sigma_smoothing.ipynb # Noise parameter optimization
│ ├── plot_presi.ipynb # Presentation plots
│ └── visualize.ipynb # Result visualization
├── paper/
│ ├── bachelor_thesis_presentation.pdf
│ └── randomized_smoothing_inverse_problems_thesis.pdf
├── src/
│ ├── adv.py # Adversarial attack implementation
│ ├── dataset.py # Dataset handling
│ ├── evaluate_adv.py # Adversarial evaluation
│ ├── smoothened_estimate.py # Randomized smoothing implementation
│ ├── train_adv.py # Adversarial training
│ └── train_rs.py # Randomized smoothing training
├── .gitignore
├── README.md
└── requirements.txt
For a classifier f, the smoothed classifier g is:

g(x) = argmax_c P[f(x + δ) = c], δ ∼ N(0, σ²I)
For inverse problems, we use the smoothed estimator:

g(y) = E[f(y + δ)], δ ∼ N(0, σ_f²I)
- Linear estimators gain no robustness from smoothing
- Training on noisy inputs is crucial for tight bounds
- Noise level σ controls robustness/accuracy trade-off
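The smoothed estimate and the linear-estimator observation above can be sketched as follows (a minimal NumPy illustration; the function names and the toy diagonal estimator are illustrative, not from the repository):

```python
import numpy as np

def smoothed_estimate(f, y, sigma, k=50, rng=None):
    """Monte Carlo approximation of g(y) = E[f(y + delta)], delta ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(rng)
    outs = [f(y + rng.normal(0.0, sigma, size=y.shape)) for _ in range(k)]
    return np.mean(outs, axis=0)

# For a linear estimator f(y) = A @ y, E[A @ (y + delta)] = A @ y,
# so smoothing leaves the output unchanged in expectation: no added robustness.
A = np.diag([2.0, 3.0])
f_lin = lambda y: A @ y
y = np.array([1.0, -1.0])
g = smoothed_estimate(f_lin, y, sigma=0.1, k=20000, rng=0)
# g ≈ f_lin(y) up to Monte Carlo error
```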
We evaluate randomized smoothing on single-image super-resolution (SISR), an ill-posed inverse problem where:
- Multiple high-resolution images can match one low-resolution input
- Small input changes cause large reconstruction errors
- Robustness is critical for real-world applications
- Mini-ImageNet-1000 subset
- 34,745 training / 3,923 test images
- 160×160 crops, bilinear downsampling (×2, ×4)
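The pair-generation step can be sketched as below. This is a simplified stand-in: block averaging replaces true bilinear interpolation, and the function names are illustrative, not the repository's `dataset.py` API.

```python
import numpy as np

def random_crop(img, size=160, rng=None):
    # random size x size crop, as used to produce the 160x160 training patches
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    i = int(rng.integers(0, h - size + 1))
    j = int(rng.integers(0, w - size + 1))
    return img[i:i + size, j:j + size]

def downsample(hr, factor=2):
    # simplified downsampling by block averaging (stand-in for bilinear)
    h, w = hr.shape[:2]
    h, w = h // factor * factor, w // factor * factor
    blocks = hr[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

hr = random_crop(np.random.rand(256, 256, 3), size=160, rng=0)
lr2 = downsample(hr, factor=2)   # x2 -> 80x80
lr4 = downsample(hr, factor=4)   # x4 -> 40x40
```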
Custom U-Net with:
- 5 down/up-sampling steps
- Channel sizes: [8,16,32,64,128]
- 3×3 convolutions + ReLU activations
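A minimal PyTorch sketch of such a U-Net is shown below. This is an assumption of how `models/unet_sr.py` might be organized, not its actual code: the channel sizes follow the list above, while the skip-connection layout and output head are illustrative.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions with ReLU activations
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetSR(nn.Module):
    """Illustrative U-Net with channel sizes [8, 16, 32, 64, 128]."""
    def __init__(self, in_ch=3, out_ch=3, chs=(8, 16, 32, 64, 128)):
        super().__init__()
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:                         # encoder path
            self.downs.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups, self.dec = nn.ModuleList(), nn.ModuleList()
        for c in reversed(chs[:-1]):          # decoder path with skips
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.dec.append(conv_block(2 * c, c))
            prev = c
        self.head = nn.Conv2d(prev, out_ch, 1)

    def forward(self, x):
        skips = []
        for i, down in enumerate(self.downs):
            x = down(x)
            if i < len(self.downs) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

x = torch.randn(1, 3, 160, 160)
y = UNetSR()(x)
# output spatial size matches the input; in this sketch, upsampling to the HR
# grid would be done outside the network (e.g. bilinear pre-upsampling of the LR input)
```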

U-Net model architecture visualization
- Train base U-Net on noisy inputs: y + δ, δ ∼ N(0,σ_f²I)
- Approximate g(y) with k=50 noise samples at test time
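The training-time noise injection can be sketched as follows (a NumPy illustration with a hypothetical batch; the repository's `train_rs.py` may organize this differently):

```python
import numpy as np

def rs_training_inputs(y_batch, sigma_f, rng=None):
    """Perturb each low-res input with fresh noise delta ~ N(0, sigma_f^2 I)."""
    rng = np.random.default_rng(rng)
    return y_batch + rng.normal(0.0, sigma_f, size=y_batch.shape)

batch = np.zeros((4, 3, 80, 80))               # hypothetical LR batch
noisy = rs_training_inputs(batch, sigma_f=0.1, rng=0)
# the empirical std of the injected noise is close to sigma_f
```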
- PGD attacks during training
- Perturbation radius ϵ = √d · ϵ_rel
- 5 attack iterations per batch
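A minimal ℓ₂-PGD sketch of the attack loop, with the radius ϵ = √d · ϵ_rel as above. The toy gradient function stands in for backpropagation through the network and loss; names and step-size rule are illustrative:

```python
import numpy as np

def pgd_l2(y, grad_fn, eps, steps=5, step_size=None):
    """Projected gradient ascent on the loss inside the l2 ball ||delta|| <= eps."""
    step_size = step_size if step_size is not None else 2.5 * eps / steps
    delta = np.zeros_like(y)
    for _ in range(steps):
        g = grad_fn(y + delta)
        g = g / (np.linalg.norm(g) + 1e-12)    # normalized gradient step
        delta = delta + step_size * g
        n = np.linalg.norm(delta)
        if n > eps:                            # project back onto the l2 ball
            delta = delta * (eps / n)
    return delta

d = 64
eps_rel = 0.02
eps = np.sqrt(d) * eps_rel                     # perturbation radius sqrt(d) * eps_rel
y = np.zeros(d)
grad_fn = lambda x: x - np.ones(d)             # toy gradient, stands in for autograd
delta = pgd_l2(y, grad_fn, eps)
# the resulting perturbation respects the certified radius: ||delta|| <= eps
```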
- PSNR: Peak Signal-to-Noise Ratio
- SSIM: Structural Similarity Index (better for perceptual quality)
- MSE: Mean Squared Error
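MSE and PSNR are directly computable from pixel values, as sketched below for images scaled to [0, 1]; SSIM involves local luminance/contrast/structure statistics and is typically taken from a library such as scikit-image rather than reimplemented:

```python
import numpy as np

def mse(x, y):
    # Mean Squared Error between two images
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images in [0, max_val]."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

x = np.full((8, 8), 0.5)
y = x + 0.1          # uniform error of 0.1 -> MSE = 0.01
# psnr(x, y) -> 10 * log10(1 / 0.01) = 20 dB
```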
Outputs under adversarial attack with strength up to ϵ_rel = 0.02
- For small attacks (ϵ_rel ≤ 0.02), randomized smoothing matches adversarial training
- For larger attacks, randomized smoothing achieves better SSIM scores
- Optimal σ_f varies by attack strength (see Appendix in thesis)
This work demonstrates that randomized smoothing:
- Can be effectively adapted to inverse problems such as super-resolution
- Provides certifiable robustness against ℓ₂ attacks
- Produces better visual quality than adversarial training
- Offers a flexible robustness/accuracy trade-off via σ_f
Future directions include combining with adversarial training and extending to other inverse problems.
