VIEW THE PRESENTATION HERE
VIEW THE REPORT HERE
- visualizations_and_testing.ipynb: notebook that generates all visualizations, evaluations, etc., and tests the dataloader utilities
- util/dataloader: the dataloader I use for cell segmentation
- model/equivariant.py: the file I sourced from previous research for implementing equivariant models
- model/equivariantUnet.py: the equivariant U-Net model
- model/unet.py: the standard (non-equivariant) U-Net
- model/escnnUnet.py: escnn-based U-Net (currently broken)
- visualization/: directory for all generated visualizations
- all .sh files are used either for running Jupyter notebooks on the cluster (and tunneling in) or for submitting Slurm jobs for training
A conda environment with the following packages (a quick import check to verify the setup is sketched after this list):
pytorch
seaborn
escnn
e2cnn
numpy
torchvision
torchmetrics
monai
albumentations
scikit-learn
pandas
matplotlib
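If you want to confirm the environment before running anything, a hypothetical helper like the one below (not part of the repo) just tries to import each package; note that pytorch and scikit-learn import as torch and sklearn.

```python
# Hypothetical environment check (not part of the repo): confirm each required
# package is importable from the activated conda environment.
import importlib

packages = [
    "torch", "torchvision", "torchmetrics", "escnn", "e2cnn",
    "monai", "albumentations", "sklearn", "numpy", "pandas",
    "seaborn", "matplotlib",
]

for pkg in packages:
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg:15s} {getattr(mod, '__version__', 'installed')}")
    except ImportError as err:
        print(f"{pkg:15s} MISSING ({err})")
```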
NOTE: Unless you have cluster access, I don't recommend training anything.
- Install all requirements in a Python environment
- Run setup.sh (I need to double check this)
- Activate your environment
- Run the v1_0.ipynb notebook to generate the remaining important files and visualizations
- To train, configure the appropriate .sh script for training (according to your cluster and resources)
- Set up a comet_ml account for training tracking, and pass your API key to the training script as an environment variable along with the other required info (a minimal logging sketch follows this list)
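As a rough sketch of what the Comet hookup inside a training script can look like (the project and workspace names below are placeholders, not the repo's actual values, and it assumes the .sh script exports COMET_API_KEY):

```python
import os
from comet_ml import Experiment

# Assumes the training .sh script exported the key, e.g.
#   export COMET_API_KEY="your-key-here"
experiment = Experiment(
    api_key=os.environ["COMET_API_KEY"],
    project_name="deep-learning",   # placeholder project name
    workspace="your-workspace",     # placeholder workspace
)
experiment.log_parameters({"lr": 1e-3, "batch_size": 8})  # example hyperparameters

# Inside the training loop, each epoch's metrics get pushed to the Comet dashboard.
for epoch in range(10):
    train_loss = 0.0  # placeholder; compute the real epoch loss here
    experiment.log_metric("train_loss", train_loss, epoch=epoch)

experiment.end()
```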
Equivariance and invariance, a special case of equivariance, are useful properties when dealing with data containing symmetries. Classical convolutional neural networks are translationally equivariant: if the input image is translated, the feature maps are translated in the same manner, preserving the spatial relationships. This allows features of the image to be recognized in the same way regardless of their position within the image.
However, this does not extend to rotations of the input: a standard convolution applied to a rotated image does not, in general, produce a correspondingly rotated feature map. While rotations can be partially compensated for with data augmentation, augmentation does not build the symmetry into the network itself.
Many data types, medical imaging data among them, contain rotational (e.g. tumors look the same regardless of their orientation in a whole-slide image (WSI)), reflectional (e.g. bilateral structures of the brain or body), and translational symmetries. Firstly, equivariance is useful in a medical imaging context because it improves the generalizability of models regardless of the orientation in which data is presented. Secondly, I claim that equivariance implicitly addresses the need for certain kinds of data augmentations, improving overall training efficiency.
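To make the rotational-equivariance property concrete, here is a minimal sketch (not code from this repo) using the e2cnn library linked below: it builds a single C4-steerable convolution and checks that rotating the input and then convolving matches convolving and then rotating the output.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# Symmetry group: the 4 planar rotations by multiples of 90 degrees (C4)
r2_act = gspaces.Rot2dOnR2(N=4)

# A 1-channel (e.g. grayscale) input transforms under the trivial representation;
# the output uses regular-representation fields, whose channels permute under rotation.
feat_in = enn.FieldType(r2_act, [r2_act.trivial_repr])
feat_out = enn.FieldType(r2_act, 8 * [r2_act.regular_repr])

conv = enn.R2Conv(feat_in, feat_out, kernel_size=3, padding=1)
conv.eval()

x = enn.GeometricTensor(torch.randn(1, 1, 33, 33), feat_in)
y = conv(x)

# Equivariance check: conv(g . x) should equal g . conv(x) for every group element g
with torch.no_grad():
    for g in r2_act.testing_elements:
        y_from_rotated_input = conv(x.transform(g)).tensor
        rotated_output = y.transform(g).tensor
        err = (y_from_rotated_input - rotated_output).abs().max().item()
        print(f"group element {g}: max abs difference = {err:.2e}")
```

The equivariant U-Net variants in this repo follow the same idea, composing layers of this kind in place of ordinary convolutions.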
Tool I used for logging info during training on the cluster: Comet_ML: https://www.comet.com/joeshmoe03/deep-learning/view/new/panels#manage
Good resources on equivariance:
- Interesting repo: https://github.com/QUVA-Lab/e2cnn
- Documentation: https://quva-lab.github.io/e2cnn/
- Steerable Kernels: https://arxiv.org/pdf/1911.08251