ESAOpenSR/SRGAN
🌍 Single Image Super-Resolution Remote Sensing 'SRGAN'

Remote-Sensing-SRGAN is a research-grade GAN framework for super-resolution of Sentinel-2 and other remote-sensing imagery. It is deliberately not aimed at state-of-the-art benchmark performance, but at quick implementation and experimentation. It supports arbitrary band counts, configurable generator/discriminator designs, scalable depth/width, and a modular loss system designed for stable GAN training on EO data. Check out how inference and training work in this interactive example: Open In Colab


📖 Documentation

Full docs live at srgan.opensr.eu. They cover usage, configuration, training recipes, and deployment tips in depth.

🧠 Highlights

  • Flexible models: swap between SRResNet, RCAB, RRDB, LKA, ESRGAN, and stochastic generators with YAML-only changes.
  • Remote-sensing aware losses: combine spectral, perceptual, and adversarial objectives with tunable weights.
  • Stable training loop: generator pretraining, adversarial ramp-ups, EMA, and multi-GPU Lightning support out of the box.
  • PyPI distribution: pip install opensr-srgan for ready-to-use presets or custom configs.
  • Extensive logging: all important training information is logged automatically to WandB for easy inspection.

🏗️ Configuration Examples

All key knobs are exposed via YAML in the opensr_srgan/configs folder:

  • Model: in_channels, n_channels, n_blocks, scale, ESRGAN knobs (growth_channels, res_scale, out_channels), block_type ∈ {SRResNet, res, rcab, rrdb, lka}
  • Losses: l1_weight, sam_weight, perceptual_weight, tv_weight, adv_loss_beta
  • Training: pretrain_g_only, g_pretrain_steps, adv_loss_ramp_steps, label_smoothing, generator LR warmup (Schedulers.g_warmup_steps, Schedulers.g_warmup_type), discriminator cadence controls
  • Data: band order, normalization stats, crop sizes, augmentations
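
The knobs above can be combined in a single file. The following is an illustrative sketch using the key names listed in this section; the exact nesting and defaults may differ from the shipped files in opensr_srgan/configs, so treat it as a starting point rather than a canonical config:

```yaml
# Illustrative config sketch (key names from the list above;
# check opensr_srgan/configs for the authoritative layout)
Generator:
  model_type: rcab        # one of: SRResNet, res, rcab, rrdb, lka, esrgan, stochastic_gan
  in_channels: 4          # e.g. Sentinel-2 10 m bands
  n_channels: 64          # width
  n_blocks: 16            # depth
  scale: 4                # super-resolution factor
Training:
  pretrain_g_only: true
  g_pretrain_steps: 5000
  adv_loss_ramp_steps: 20000
  Losses:
    l1_weight: 1.0
    sam_weight: 0.1
    perceptual_weight: 0.1
    tv_weight: 0.01
    adv_loss_beta: 0.001
Schedulers:
  g_warmup_steps: 2000
  g_warmup_type: cosine
```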

🎚️ Training Stabilization Strategies

  • G‑only pretraining: Train with content/perceptual losses while the adversarial term is held at zero during the first g_pretrain_steps.
  • Adversarial ramp‑up: Increase the BCE adversarial weight linearly or smoothly (cosine) over adv_loss_ramp_steps until it reaches adv_loss_beta.
  • Generator LR warmup: Ramp the generator optimiser with a cosine or linear schedule for the first 1–5k steps via Schedulers.g_warmup_steps/g_warmup_type before switching to plateau-based reductions.
  • EMA smoothing: Enable Training.EMA.enabled to keep a shadow copy of the generator. Decay values in the 0.995–0.9999 range balance responsiveness with stability and are swapped in automatically for validation/inference.
  • Spectral normalization: Optional for the SRGAN discriminator via Discriminator.use_spectral_norm to better control its Lipschitz constant and stabilize adversarial updates. Miyato et al., 2018
  • Wasserstein critic + R1 penalty: Switch Training.Losses.adv_loss_type: wasserstein to enable a critic objective, and pair it with the configurable Training.Losses.r1_gamma gradient penalty on real images for smoother discriminator updates (Arjovsky et al., 2017; Mescheder et al., 2018).

Together, these schedules and ramps make training easier, safer, and more reproducible.
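
The pretraining and ramp-up logic above amounts to a simple piecewise schedule for the adversarial weight. The sketch below illustrates that schedule in isolation; the function name and signature are hypothetical, not the package's internal implementation:

```python
import math

def adversarial_weight(step, pretrain_steps, ramp_steps, beta, ramp_type="linear"):
    """Illustrative adversarial-loss weight schedule (hypothetical helper):
    zero during G-only pretraining, then ramped (linearly or with a cosine
    ease-in) up to its final value `beta`."""
    if step < pretrain_steps:
        return 0.0  # G-only pretraining: adversarial term held at zero
    # Fraction of the ramp completed, clamped to [0, 1]
    t = min((step - pretrain_steps) / max(ramp_steps, 1), 1.0)
    if ramp_type == "cosine":
        t = 0.5 * (1.0 - math.cos(math.pi * t))  # smooth cosine ramp
    return beta * t
```

With g_pretrain_steps=1000, adv_loss_ramp_steps=5000, and adv_loss_beta=0.01, the weight stays at 0 for the first 1000 steps, reaches half of beta midway through the ramp, and is capped at beta thereafter.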

⚙️ Config‑driven components

| Component | Options | Config keys |
| --- | --- | --- |
| Generators | SRResNet, res, rcab, rrdb, lka, esrgan, stochastic_gan | Generator.model_type; depth via Generator.n_blocks, width via Generator.n_channels, kernels/scale, plus ESRGAN-specific growth_channels, res_scale, out_channels |
| Discriminators | standard SRGAN CNN, patchgan, esrgan | Discriminator.model_type; granularity via Discriminator.n_blocks, spectral-norm toggle via Discriminator.use_spectral_norm, ESRGAN-specific base_channels, linear_size |
| Content losses | L1, Spectral Angle Mapper, VGG19/LPIPS perceptual metrics, Total Variation | Weighted by Training.Losses.* (e.g. l1_weight, sam_weight, perceptual_weight, perceptual_metric, tv_weight) |
| Adversarial loss | BCE-with-logits on real/fake logits | Warmup via Training.pretrain_g_only, ramped by adv_loss_ramp_steps, capped at adv_loss_beta, optional label smoothing |

The YAML keeps the SRGAN flexible: swap architectures or rebalance perceptual vs. spectral fidelity without touching the code.
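
As an example of such a swap, switching the generator and discriminator to their ESRGAN variants is a config-only change. The fragment below is a sketch built from the key names in the table above; exact nesting may differ in the shipped configs:

```yaml
# Hypothetical sketch: swap to ESRGAN-style components without code changes
Generator:
  model_type: esrgan
  growth_channels: 32   # ESRGAN-specific
  res_scale: 0.2        # ESRGAN-specific residual scaling
  out_channels: 4
Discriminator:
  model_type: esrgan
  base_channels: 64     # ESRGAN-specific
  linear_size: 1024     # ESRGAN-specific
  use_spectral_norm: true
```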

🧰 Installation

Follow the installation instructions for package, source, and dependency setup options.


🚀 Quickstart

  • To test the package immediately, launch the Google Colab notebook and follow along with the introduction: Open In Colab
  • Datasets: Grab the bundled example dataset or learn how to register your own sources in the data guide.
  • Training: Launch training with python -m opensr_srgan.train --config opensr_srgan/configs/config.yaml or import train from the package as described in the training walkthrough.
  • Inference: Ready-made presets and large-scene pipelines are described in the inference section.

🏗️ Configuration & Stabilization

All tunable knobs—architectures, loss weights, schedulers, and EMA—are exposed via YAML files under opensr_srgan/configs. Strategy tips for warm-ups, adversarial ramps, and EMA usage are summarised in the training concepts chapter.

📂 Repository Structure

SRGAN/
├── opensr_srgan/         # Library + training code
├── docs/                 # MkDocs documentation sources
├── paper/                # Publication, figures, and supporting material
├── pyproject.toml        # Packaging metadata
└── requirements.txt      # Development dependencies

Contribution and Issues

If you wish to contribute (such as new models, data sources, or functionality), please review the contribution guidelines and open a PR here on GitHub. If you run into problems or need support, please open an Issue here on GitHub.


📚 Related Projects

  • OpenSR-Model – Latent Diffusion SR (LDSR-S2)
  • OpenSR-Utils – Large-scale inference & data plumbing
  • OpenSR-Test – Benchmarks & metrics
  • SEN2NEON – Multispectral HR reference dataset

✍️ Citation

If you use this work, please cite:

@misc{donike2025opensrsrganflexiblesuperresolutionframework,
      title={OpenSR-SRGAN: A Flexible Super-Resolution Framework for Multispectral Earth Observation Data}, 
      author={Simon Donike and Cesar Aybar and Julio Contreras and Luis Gómez-Chova},
      year={2025},
      eprint={2511.10461},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.10461}, 
}

🧑‍🚀 Authors, Acknowledgements, and Notes on AI Usage

Developed within the ESA Φ-lab / OpenSR initiative. Simon Donike is the main contributor and maintainer of the repository. Cesar Aybar and Julio Contreras contributed the datasets as well as implementation, documentation, and publishing support. Prof. Luis Gómez-Chova contributed the remote-sensing-specific perspective and signal-processing advice.

The development history of this code began in 2020 with the implementation of an SR-GAN for an MSc thesis project. Since then, over several iterations, the codebase has been expanded and many training tweaks implemented, based on experience gained training SR-GANs for the OpenSR project. The fundamental training outline, training tweaks, normalizations, and inference procedures are built upon that experience.
The added complexity that came with (a) the implementation of many different models and blocks, (b) more data sources, (c) the corresponding normalizations, and (d) complex testing and documentation structures, was handled to varying degrees with the help of Codex. Specifically, the docs, the automated testing workflows, and the normalizer class are in part AI-generated. This code and its functionalities have been verified and tested to the best of my ability.


About

Flexible implementation of different SR-GANs for Remote Sensing.
