Remote-Sensing-SRGAN is a research-grade GAN framework for super-resolution of Sentinel-2 and other remote-sensing imagery. It is explicitly not aimed at state-of-the-art benchmark performance, but at quick implementation and experimentation. It supports arbitrary band counts, configurable generator/discriminator designs, scalable depth/width, and a modular loss system designed for stable GAN training on EO data. Check out how inference and training work right now in this interactive example!
Full docs live at srgan.opensr.eu. They cover usage, configuration, training recipes, and deployment tips in depth.
- Flexible models: swap between SRResNet, RCAB, RRDB, LKA, ESRGAN, and stochastic generators with YAML-only changes.
- Remote-sensing aware losses: combine spectral, perceptual, and adversarial objectives with tunable weights.
- Stable training loop: generator pretraining, adversarial ramp-ups, EMA, and multi-GPU Lightning support out of the box.
- PyPI distribution: `pip install opensr-srgan` for ready-to-use presets or custom configs.
- Extensive logging: all important information is logged automatically to WandB for optimal insights.
All key knobs are exposed via YAML in the `opensr_srgan/configs` folder:
- Model: `in_channels`, `n_channels`, `n_blocks`, `scale`, ESRGAN knobs (`growth_channels`, `res_scale`, `out_channels`), `block_type` ∈ {SRResNet, res, rcab, rrdb, lka}
- Losses: `l1_weight`, `sam_weight`, `perceptual_weight`, `tv_weight`, `adv_loss_beta`
- Training: `pretrain_g_only`, `g_pretrain_steps`, `adv_loss_ramp_steps`, `label_smoothing`, generator LR warmup (`Schedulers.g_warmup_steps`, `Schedulers.g_warmup_type`), discriminator cadence controls
- Data: band order, normalization stats, crop sizes, augmentations
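To make the layout concrete, here is a hedged sketch of what such a config might look like. The exact nesting and default values in the shipped configs may differ, so treat key placement and numbers as illustrative:

```yaml
Generator:
  model_type: rrdb        # block/architecture choice
  in_channels: 4          # e.g. Sentinel-2 RGB + NIR
  n_channels: 64
  n_blocks: 16
  scale: 4
Training:
  pretrain_g_only: true
  g_pretrain_steps: 20000
  adv_loss_ramp_steps: 10000
  Losses:
    l1_weight: 1.0
    sam_weight: 0.1
    perceptual_weight: 0.05
    adv_loss_beta: 0.001
  Schedulers:
    g_warmup_steps: 2000
    g_warmup_type: cosine
```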
- G‑only pretraining: Train with content/perceptual losses while the adversarial term is held at zero during the first `g_pretrain_steps`.
- Adversarial ramp‑up: Increase the BCE adversarial weight linearly or smoothly (cosine) over `adv_loss_ramp_steps` until it reaches `adv_loss_beta`.
- Generator LR warmup: Ramp the generator optimiser with a cosine or linear schedule for the first 1–5k steps via `Schedulers.g_warmup_steps`/`g_warmup_type` before switching to plateau-based reductions.
- EMA smoothing: Enable `Training.EMA.enabled` to keep a shadow copy of the generator. Decay values in the 0.995–0.9999 range balance responsiveness with stability and are swapped in automatically for validation/inference.
- Spectral normalization: Optional for the SRGAN discriminator via `Discriminator.use_spectral_norm` to better control its Lipschitz constant and stabilize adversarial updates (Miyato et al., 2018).
- Wasserstein critic + R1 penalty: Switch `Training.Losses.adv_loss_type: wasserstein` to enable a critic objective and pair it with the configurable `Training.Losses.r1_gamma` gradient penalty on real images for smoother discriminator updates (Arjovsky et al., 2017; Mescheder et al., 2018).

The schedules and ramps make training easier, safer, and more reproducible.
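The ramp-up and EMA mechanics above can be sketched in a few lines of plain Python. Function and parameter names here are illustrative, not the package's actual API:

```python
import math

def adv_weight(step, pretrain_steps, ramp_steps, beta, mode="cosine"):
    """Adversarial loss weight: zero during G-only pretraining, then
    ramped up to `beta` over `ramp_steps` (linear or cosine)."""
    if step < pretrain_steps:
        return 0.0
    t = min((step - pretrain_steps) / max(ramp_steps, 1), 1.0)
    if mode == "cosine":
        t = 0.5 * (1.0 - math.cos(math.pi * t))  # smooth 0 -> 1
    return beta * t

class EMA:
    """Exponential moving average of parameters (plain floats for clarity;
    the real thing operates on model tensors)."""
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = dict(params)  # shadow copy of {name: value}

    def update(self, params):
        for name, value in params.items():
            self.shadow[name] = (
                self.decay * self.shadow[name] + (1.0 - self.decay) * value
            )
```

With `pretrain_steps=1000`, `ramp_steps=1000`, and `beta=0.01`, the weight stays at 0 until step 1000, reaches 0.005 at step 1500, and holds at 0.01 from step 2000 onward.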
| Component | Options | Config keys |
|---|---|---|
| Generators | SRResNet, res, rcab, rrdb, lka, esrgan, stochastic_gan | `Generator.model_type`, depth via `Generator.n_blocks`, width via `Generator.n_channels`, kernels/scale plus ESRGAN-specific `growth_channels`, `res_scale`, `out_channels` |
| Discriminators | standard SRGAN CNN, patchgan, esrgan | `Discriminator.model_type`, granularity with `Discriminator.n_blocks`, spectral norm toggle via `Discriminator.use_spectral_norm`, ESRGAN-specific `base_channels`, `linear_size` |
| Content losses | L1, Spectral Angle Mapper, VGG19/LPIPS perceptual metrics, Total Variation | Weighted by `Training.Losses.*` (e.g. `l1_weight`, `sam_weight`, `perceptual_weight`, `perceptual_metric`, `tv_weight`) |
| Adversarial loss | BCE‑with‑logits on real/fake logits | Warmup via `Training.pretrain_g_only`, ramped by `adv_loss_ramp_steps`, capped at `adv_loss_beta`, optional label smoothing |
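The Spectral Angle Mapper term deserves a closer look, since it is what makes the loss mix remote-sensing aware: it compares the *direction* of each pixel's spectrum rather than its magnitude, so it is insensitive to brightness scaling and complements L1. A minimal per-pixel sketch (the package's implementation operates on batched tensors; this scalar version is for illustration only):

```python
import math

def spectral_angle(pred, target):
    """Spectral Angle Mapper between two per-pixel spectra, given as lists
    of band values: the angle (radians) between the spectral vectors."""
    dot = sum(p * t for p, t in zip(pred, target))
    norm = math.sqrt(sum(p * p for p in pred)) * math.sqrt(sum(t * t for t in target))
    # Clamp for numerical safety before acos
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

A uniformly brightened spectrum has angle 0 to the original (`spectral_angle([1, 2, 3], [2, 4, 6]) == 0`), whereas L1 would penalize it heavily; averaged over all pixels and weighted by `sam_weight`, this pushes the generator toward spectrally consistent output.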
The YAML keeps the SRGAN flexible: swap architectures or rebalance perceptual vs. spectral fidelity without touching the code.
Follow the installation instructions for package, source, and dependency setup options.
- To test the package immediately, launch the Google Colab notebook and follow along with the introduction!
- Datasets: Grab the bundled example dataset or learn how to register your own sources in the data guide.
- Training: Launch training with `python -m opensr_srgan.train --config opensr_srgan/configs/config.yaml` or import `train` from the package as described in the training walkthrough.
- Inference: Ready-made presets and large-scene pipelines are described in the inference section.
All tunable knobs—architectures, loss weights, schedulers, and EMA—are exposed via YAML files under opensr_srgan/configs. Strategy tips for warm-ups, adversarial ramps, and EMA usage are summarised in the training concepts chapter.
```
SRGAN/
├── opensr_srgan/     # Library + training code
├── docs/             # MkDocs documentation sources
├── paper/            # Publication, figures, and supporting material
├── pyproject.toml    # Packaging metadata
└── requirements.txt  # Development dependencies
```
If you wish to contribute (such as new models, data sources, or functionality), please review the contribution guidelines and open a PR here on GitHub. If you run into problems or need support, please open an Issue here on GitHub.
- OpenSR-Model – Latent Diffusion SR (LDSR-S2)
- OpenSR-Utils – Large-scale inference & data plumbing
- OpenSR-Test – Benchmarks & metrics
- SEN2NEON – Multispectral HR reference dataset
If you use this work, please cite:
```bibtex
@misc{donike2025opensrsrganflexiblesuperresolutionframework,
  title={OpenSR-SRGAN: A Flexible Super-Resolution Framework for Multispectral Earth Observation Data},
  author={Simon Donike and Cesar Aybar and Julio Contreras and Luis Gómez-Chova},
  year={2025},
  eprint={2511.10461},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.10461},
}
```

Developed within the ESA Φ-lab / OpenSR initiative. Simon Donike is the main contributor and maintainer of the repository. Cesar Aybar and Julio Contreras contributed the datasets as well as implementation, documentation, and publishing support. Prof. Luis Gómez-Chova contributed the remote-sensing-specific perspective and signal processing advice.
Development of this code began in 2020 with the implementation of an SR-GAN for an MSc thesis project. Since then, over several iterations, the codebase has been expanded and many training tweaks implemented, based on the experience gained training SR-GANs for the OpenSR project. The fundamental training outline, training tweaks, normalizations, and inference procedures are built upon that experience.
The added complexity that came with (a) the implementation of many different models and blocks, (b) more data sources, (c) the corresponding normalizations, and (d) complex testing and documentation structures was handled to varying degrees with the help of Codex. Specifically, the docs, the automated testing workflows, and the normalizer class are in part AI-generated. This code and its functionality have been verified and tested to the best of my ability.

