This is the official PyTorch implementation of the paper MedFuncta: A Unified Framework for Learning Efficient Medical Neural Fields by Paul Friedrich, Florentin Bieder, Julian McGinnis, Julia Wolleb, Daniel Rueckert and Philippe C. Cattin.
If you find our work useful, please consider ⭐ starring this repository and 📝 citing our paper:
```
@article{friedrich2025medfuncta,
  title={MedFuncta: A Unified Framework for Learning Efficient Medical Neural Fields},
  author={Friedrich, Paul and Bieder, Florentin and McGinnis, Julian and Wolleb, Julia and Rueckert, Daniel and Cattin, Philippe C},
  journal={arXiv preprint arXiv:2502.14401},
  year={2025}
}
```

Research in medical imaging primarily focuses on discrete data representations that scale poorly with grid resolution and fail to capture the often continuous nature of the underlying signal.
Neural Fields (NFs) offer a powerful alternative by modeling data as continuous functions.
While single-instance NFs have successfully been applied in medical contexts, extending them to large-scale medical datasets remains an open challenge.
We therefore introduce MedFuncta, a unified framework for large-scale NF training on diverse medical signals.
Building on Functa, our approach encodes data into a unified representation, namely a 1D latent vector, that modulates a shared, meta-learned NF, enabling generalization across a dataset.
We revisit common design choices, introducing a non-constant frequency parameter.
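To make the shared-network / per-sample-latent idea above more concrete, here is a minimal, hypothetical PyTorch sketch of a shift-modulated SIREN-style network in the spirit of Functa: shared, meta-learned layers whose pre-activations are shifted by modulations derived from a 1D latent vector. All names, shapes, and the choice of shift modulation are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class ModulatedSirenLayer(nn.Module):
    """One SIREN-style layer whose pre-activation is shifted by a per-sample modulation."""

    def __init__(self, in_dim: int, out_dim: int, omega_0: float = 30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # shared, meta-learned weights
        self.omega_0 = omega_0                    # frequency parameter (may vary per layer)

    def forward(self, coords: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
        # coords: (num_points, in_dim), shift: (out_dim,) derived from the 1D latent
        return torch.sin(self.omega_0 * (self.linear(coords) + shift))

class ModulatedSiren(nn.Module):
    """Shared network whose hidden layers are shift-modulated by a per-sample 1D latent."""

    def __init__(self, coord_dim=2, hidden_dim=256, out_dim=1,
                 num_layers=4, latent_dim=512, omega_0=30.0):
        super().__init__()
        dims = [coord_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            ModulatedSirenLayer(dims[i], dims[i + 1], omega_0) for i in range(num_layers)
        )
        # Maps the per-sample latent vector to one shift vector per hidden layer.
        self.latent_to_shifts = nn.Linear(latent_dim, num_layers * hidden_dim)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, coords: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        shifts = self.latent_to_shifts(latent).chunk(len(self.layers), dim=-1)
        h = coords
        for layer, shift in zip(self.layers, shifts):
            h = layer(h, shift)
        return self.head(h)  # predicted signal value at each coordinate
```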
We recommend using a conda environment to install the required dependencies.
You can create and activate such an environment called medfuncta by running the following commands:
```
mamba env create -f environment.yaml
mamba activate medfuncta
```
To obtain meta-learned shared model parameters, simply run the following command with the correct config.yaml:

```
python train.py --config ./configs/experiments/DATASET_RESOLUTION.yaml
```
To perform reconstruction experiments (evaluate the reconstruction quality), simply run the following command with the correct config.yaml:

```
python eval.py --config ./configs/eval/experiments/DATASET_RESOLUTION.yaml
```
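Reconstruction quality for signals like these is commonly reported as PSNR. As a small illustration (assuming intensities normalized to [0, 1]; the repository's evaluation code may compute metrics differently):

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB, assuming intensities in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```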
To convert a dataset into our MedFuncta representation, simply run the following command with the correct config.yaml:

```
python fit_NFset.py --config ./configs/fit/DATASET_RESOLUTION.yaml
```
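Conceptually, converting a sample into its MedFuncta representation means freezing the shared, meta-learned network and optimizing only that sample's latent vector on a reconstruction loss for a few gradient steps. The following is a rough, hypothetical sketch of such an inner loop (building on the ModulatedSiren sketch above); the actual fit_NFset.py may differ in optimizer, step count, and loss.

```python
import torch
import torch.nn.functional as F

def fit_latent(shared_model, coords, targets, latent_dim=512,
               inner_steps=3, inner_lr=1e-2):
    """Fit a per-sample 1D latent to one signal, keeping the shared model fixed.

    coords:  (num_points, coord_dim) sampling coordinates
    targets: (num_points, out_dim)   signal values at those coordinates
    """
    latent = torch.zeros(latent_dim, requires_grad=True)
    for _ in range(inner_steps):
        pred = shared_model(coords, latent)
        loss = F.mse_loss(pred, targets)
        # Gradient step on the latent only; the shared weights are not updated.
        (grad,) = torch.autograd.grad(loss, latent)
        latent = (latent - inner_lr * grad).detach().requires_grad_(True)
    return latent.detach()  # this vector is the sample's MedFuncta representation
```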
The source code for reproducing our classification experiments can be found in /downstream_tasks/classification.py. All arguments can be set in the Args class in this script.
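Since each sample is represented by a 1D latent vector, downstream classification reduces to training a small classifier on these vectors. The sketch below only illustrates this idea; the actual Args fields, architecture, and data handling in downstream_tasks/classification.py will differ.

```python
from dataclasses import dataclass
import torch
import torch.nn as nn

@dataclass
class Args:
    # Hypothetical arguments for illustration only.
    latent_dim: int = 512
    num_classes: int = 9
    lr: float = 1e-3
    epochs: int = 50

def build_classifier(args: Args) -> nn.Module:
    # A small MLP operating directly on MedFuncta latent vectors.
    return nn.Sequential(
        nn.Linear(args.latent_dim, 256),
        nn.ReLU(),
        nn.Linear(256, args.num_classes),
    )

args = Args()
clf = build_classifier(args)
logits = clf(torch.randn(8, args.latent_dim))  # batch of 8 latent vectors
print(logits.shape)  # torch.Size([8, 9])
```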
We release MedNF, a large-scale dataset containing more than 500k medical NFs. More information on the dataset can be found in our paper (Appendix D). The dataset can be accessed here: https://doi.org/10.5281/zenodo.14898708.
The dataset consists of 7 sub-datasets.
To ensure good reproducibility, we trained and evaluated our network on publicly available datasets:
- MedMNIST, a large-scale MNIST-like collection of standardized biomedical images. More information is available here.
- MIT-BIH Arrhythmia, a heartbeat classification dataset. We use a preprocessed version that is available here.
- BRATS 2023: Adult Glioma, a dataset containing routine clinically-acquired, multi-site, multiparametric magnetic resonance imaging (MRI) scans of brain tumor patients. We just used the T1-weighted images for training. The data is available here.
- LIDC-IDRI, a dataset containing multi-site thoracic computed tomography (CT) scans of lung cancer patients. The data is available here.
The provided code works for the following data structure (you might need to adapt the directories in data/dataset.py):
```
data
└───BRATS
    └───BraTS-GLI-00000-000
        └───BraTS-GLI-00000-000-seg.nii.gz
        └───BraTS-GLI-00000-000-t1c.nii.gz
        └───BraTS-GLI-00000-000-t1n.nii.gz
        └───BraTS-GLI-00000-000-t2f.nii.gz
        └───BraTS-GLI-00000-000-t2w.nii.gz
    └───BraTS-GLI-00001-000
    └───BraTS-GLI-00002-000
    ...
└───LIDC-IDRI
    └───LIDC-IDRI-0001
        └───preprocessed.nii.gz
    └───LIDC-IDRI-0002
    └───LIDC-IDRI-0003
    ...
└───MIT-BIH
    └───mitbih_test.csv
    └───mitbih_train.csv
...
```
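As a rough illustration of how such a layout can be indexed, a hypothetical loader for the BRATS T1 volumes might look like the sketch below. The repository's data/dataset.py is the authoritative implementation and will differ; file naming, the chosen modality, and the normalization here are assumptions.

```python
from pathlib import Path
import nibabel as nib
import numpy as np
from torch.utils.data import Dataset

class BratsT1Dataset(Dataset):
    """Hypothetical sketch: indexes data/BRATS/<case>/<case>-t1n.nii.gz volumes."""

    def __init__(self, root: str = "data/BRATS"):
        self.cases = sorted(Path(root).glob("BraTS-GLI-*"))

    def __len__(self) -> int:
        return len(self.cases)

    def __getitem__(self, idx: int) -> np.ndarray:
        case = self.cases[idx]
        volume = nib.load(str(case / f"{case.name}-t1n.nii.gz")).get_fdata()
        # Normalize intensities to [0, 1]; the actual preprocessing may differ.
        volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
        return volume.astype(np.float32)
```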
We provide a script for preprocessing LIDC-IDRI. Simply run the following command, where DICOM_PATH is the path to the downloaded DICOM files and NIFTI_PATH is the directory where the processed NIfTI files should be stored:

```
python data/preproc_lidc-idri.py --dicom_dir DICOM_PATH --nifti_dir NIFTI_PATH
```
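The internals of the preprocessing are not shown here; as a minimal, hypothetical sketch, reading a DICOM series and writing it as a NIfTI volume with SimpleITK could look like the following (not necessarily what data/preproc_lidc-idri.py actually does):

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, nifti_path: str) -> None:
    """Read one DICOM series from a directory and save it as a NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)
    reader.SetFileNames(series_files)
    image = reader.Execute()
    sitk.WriteImage(image, nifti_path)

# Hypothetical usage:
# dicom_series_to_nifti("DICOM_PATH/LIDC-IDRI-0001", "NIFTI_PATH/LIDC-IDRI-0001/preprocessed.nii.gz")
```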
Our code is based on / inspired by the following repositories:

