This repository contains the implementation of multi-modal networks for real-time monitoring of the intracranial acoustic field during transcranial focused ultrasound (tFUS) therapy.
- Convolutional neural network (CNN) based Autoencoder and U-Net
- Swin Transformer based U-Net
- Python code for training, evaluation, and dataset loading
- Pre-trained model weights
Note: The dataset is not provided due to privacy concerns.
- torch
- tqdm
- timm
- einops
- matplotlib
- numpy
- h5py
- natsort
Install all prerequisites with `pip install -r requirements.txt`.

Clone this repository: `git clone https://github.com/Minjee-Seo/tFUS-Multimodal.git`
You can download the following pre-trained model weights here:
- CT-trained Autoencoder (AECT.pth)
- MR-trained Autoencoder (AEMR.pth)
- CT-trained CNN U-Net (UNetCT.pth)
- MR-trained CNN U-Net (UNetMR.pth)
- CT-trained Swin U-Net (SwinCT.pth)
- MR-trained Swin U-Net (SwinMR.pth)
- Transfer learning CNN U-Net (UNetCTpMR.pth)
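Once downloaded, the checkpoints above can be loaded with the standard PyTorch state-dict pattern. The sketch below uses a tiny stand-in module (not one of this repository's actual architectures) so the save/load round trip is runnable; substitute the real model class and a checkpoint such as `UNetCT.pth` in practice.

```python
import torch
import torch.nn as nn

# Stand-in module for illustration only; the real U-Net / Swin U-Net
# classes are defined in this repository's model code.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

# Save a checkpoint (the provided .pth files follow the same format,
# assuming they are plain state dicts).
torch.save(model.state_dict(), "demo.pth")

# Loading pre-trained weights, e.g. UNetCT.pth, works the same way.
state = torch.load("demo.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()

# Run inference on a dummy single-channel 64x64 input.
with torch.no_grad():
    out = model(torch.zeros(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```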
- Edit `dataset.py` to load your data.
- Run `python train.py --your_project_name` for training.
- Run `python test.py --your_project_name` for evaluation.
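When adapting `dataset.py`, a custom `torch.utils.data.Dataset` is the usual place to plug in your own data. The sketch below is a minimal, hypothetical example: the class name, array shapes, and synthetic NumPy arrays are assumptions standing in for real HDF5 loading (e.g. via `h5py`), not this repository's actual loader.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class SkullAcousticDataset(Dataset):
    """Hypothetical dataset pairing skull images (CT or MR) with
    simulated intracranial pressure fields. Replace the synthetic
    arrays with your own file-loading logic (e.g. h5py reads)."""

    def __init__(self, n=8, size=64):
        # Synthetic stand-ins for (image, field) pairs.
        self.images = np.random.rand(n, 1, size, size).astype(np.float32)
        self.fields = np.random.rand(n, 1, size, size).astype(np.float32)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Return (input, target) tensors for one sample.
        return (torch.from_numpy(self.images[idx]),
                torch.from_numpy(self.fields[idx]))

loader = DataLoader(SkullAcousticDataset(), batch_size=4, shuffle=True)
x, y = next(iter(loader))
print(x.shape, y.shape)  # torch.Size([4, 1, 64, 64]) torch.Size([4, 1, 64, 64])
```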
For any queries, please reach out to Minjee Seo.