# MSANet: Multi-Similarity and Attention Guidance for Boosting Few-Shot Segmentation
This is the official implementation of the paper *Few-Shot Segmentation using Multi-Similarity and Attention Guidance*.
Authors: Ehtesham Iqbal; Sirojbek Safarov; Seongdeok Bang; Sajid Javed; Yahya Zweiri; Yusra Abdulrahman
Abstract: Few-shot segmentation (FSS) methods aim to segment objects of novel classes with relatively few annotated samples. Prototype learning, a popular approach in FSS, employs prototype vectors to transfer information from known classes (support images) to novel classes (query images) for segmentation. However, prototype vectors alone may not be sufficient to represent all features of the support image. To extract abundant features and make more precise predictions, we propose a Multi-Similarity and Attention Network (MSANet) including two novel modules: a multi-similarity module and an attention module. The multi-similarity module exploits multiple feature maps of support images and query images to estimate accurate semantic relationships. The attention module instructs MSANet to concentrate on class-relevant information. We evaluated the proposed network on the standard FSS benchmarks PASCAL-5i (1-shot and 5-shot) and COCO-20i (1-shot and 5-shot). An MSANet model with a ResNet101 backbone achieved state-of-the-art performance on all four benchmarks, with mean intersection-over-union (mIoU) values of 69.13%, 73.99%, 51.09%, and 56.80%, respectively. The code is available at https://github.com/AIVResearch/MSANet.
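To illustrate the multi-similarity idea described in the abstract, here is a minimal PyTorch sketch (our illustration, not the authors' implementation) that computes dense cosine-similarity maps between query features and mask-filtered support features at several backbone layers and fuses them; the plain-average fusion and all tensor shapes are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def similarity_map(q_feat, s_feat, s_mask):
    """Dense cosine similarity between query pixels and masked support pixels.

    q_feat: (B, C, Hq, Wq) query features from one backbone layer
    s_feat: (B, C, Hs, Ws) support features from the same layer
    s_mask: (B, 1, Hs, Ws) binary (float) support mask at feature resolution
    Returns (B, Hq, Wq): each query pixel's best match to the support object.
    """
    B, _, Hq, Wq = q_feat.shape
    q = F.normalize(q_feat.flatten(2), dim=1)              # (B, C, Hq*Wq)
    s = F.normalize((s_feat * s_mask).flatten(2), dim=1)   # (B, C, Hs*Ws)
    corr = torch.bmm(q.transpose(1, 2), s)                 # (B, Hq*Wq, Hs*Ws)
    return corr.max(dim=2).values.view(B, Hq, Wq)

def multi_similarity(q_feats, s_feats, s_mask):
    """Fuse similarity maps from several layers (here: plain averaging)."""
    out_size = q_feats[0].shape[-2:]
    maps = []
    for q_feat, s_feat in zip(q_feats, s_feats):
        m = F.interpolate(s_mask, size=s_feat.shape[-2:], mode="nearest")
        sim = similarity_map(q_feat, s_feat, m).unsqueeze(1)
        maps.append(F.interpolate(sim, size=out_size,
                                  mode="bilinear", align_corners=False))
    return torch.cat(maps, dim=1).mean(dim=1)              # (B, H, W)
```

The fused map can then act as a prior telling the decoder where support-like features appear in the query image.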
### Requirements

- Python 3.9
- PyTorch 1.11.0
- CUDA 11.0
- torchvision 0.8.1
- tensorboardX 2.14
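A quick way to sanity-check that the installed versions match the list above (a convenience snippet, not part of the repo):

```python
import torch
import torchvision
import tensorboardX

print("PyTorch:", torch.__version__)          # expected 1.11.0
print("torchvision:", torchvision.__version__)
print("tensorboardX:", tensorboardX.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)      # expected 11.x
```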
### Datasets

Download the PASCAL, COCO, and base-annotation data and put them in the `MSANet/data` directory.

- PASCAL-5i: VOC2012
- COCO-20i: COCO2014
- Download the base annotations created by BAM from here.
- Download the data lists (.txt files) and put them into the `MSANet/lists` directory.
### Models

- Download the pre-trained backbones from here and put them into the `MSANet/initmodel` directory.
- Download our trained base learners from OneDrive and put them under `initmodel/PSPNet`.
- We provide all trained MSANet models for performance evaluation. Backbone: VGG16 & ResNet50; Dataset: PASCAL-5i & COCO-20i; Setting: 1-shot & 5-shot.
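Since several downloads have to land in specific folders, a small check like the following can confirm the layout described in the two sections above (the paths mirror the instructions; adjust them if your checkout differs):

```python
from pathlib import Path

# Expected folders, taken from the setup instructions above.
REQUIRED_DIRS = [
    "MSANet/data",              # PASCAL / COCO images and base annotations
    "MSANet/lists",             # .txt data lists
    "MSANet/initmodel",         # pre-trained backbones
    "MSANet/initmodel/PSPNet",  # trained base learners
]

for d in REQUIRED_DIRS:
    print(f"{d}: {'ok' if Path(d).is_dir() else 'MISSING'}")
```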
### Testing

- Change the configuration and add the weight path to the `.yaml` files in `MSANet/config`, then run the `test.py` file for testing.
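As an illustration of that step, the sketch below loads a config, sets the checkpoint path, and writes it back; the file name and the `weight` key are assumptions about the config schema, not verified repo details:

```python
import yaml

# Hypothetical config file name; pick the actual one from MSANet/config.
CONFIG_PATH = "MSANet/config/pascal/pascal_split0_resnet50.yaml"

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

# Point evaluation at a downloaded checkpoint (key name is an assumption).
cfg["weight"] = "MSANet/initmodel/PSPNet/example_checkpoint.pth"

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(cfg, f)

# Afterwards, run test.py as described above.
```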
### Performance

Performance comparison with the state-of-the-art approaches (i.e., HSNet, BAM, and VAT) in terms of average mIoU across all folds.
**PASCAL-5i:**

| Backbone | Method | 1-shot | 5-shot |
| --- | --- | --- | --- |
| VGG16 | BAM | 64.41 | 68.76 |
| VGG16 | MSANet (ours) | 65.76 (+1.35) | 70.40 (+1.64) |
| ResNet50 | BAM | 67.81 | 70.91 |
| ResNet50 | MSANet (ours) | 68.52 (+0.71) | 72.60 (+1.69) |
| ResNet101 | VAT | 67.50 | 71.60 |
| ResNet101 | MSANet (ours) | 69.13 (+1.63) | 73.99 (+2.39) |
**COCO-20i:**

| Backbone | Method | 1-shot | 5-shot |
| --- | --- | --- | --- |
| ResNet50 | BAM | 46.23 | 51.16 |
| ResNet50 | MSANet (ours) | 48.03 (+1.80) | 53.67 (+2.51) |
| ResNet101 | HSNet | 41.20 | 49.50 |
| ResNet101 | MSANet (ours) | 51.09 (+9.89) | 56.80 (+7.30) |
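For clarity, the mIoU reported above averages per-class IoU over the classes of each fold and then over the benchmark's folds; a minimal sketch of the per-class computation (illustrative, not the repo's evaluation code):

```python
import torch

def mean_iou(pred: torch.Tensor, target: torch.Tensor,
             num_classes: int, ignore_index: int = 255) -> float:
    """Mean IoU over classes for integer label maps of matching shape."""
    valid = target != ignore_index
    pred, target = pred[valid], target[valid]
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:  # skip classes absent from both prediction and target
            ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)
```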
### Acknowledgment

This repo is mainly built upon PFENet, HSNet, and BAM. Thanks for their great work!
### BibTeX
If you find this research useful, please consider citing:
```bibtex
@ARTICLE{11095423,
author={Iqbal, Ehtesham and Safarov, Sirojbek and Bang, Seongdeok and Javed, Sajid and Zweiri, Yahya and Abdulrahman, Yusra},
journal={IEEE Open Journal of the Computer Society},
title={Few-Shot Segmentation using Multi-Similarity and Attention Guidance},
year={2025},
volume={},
number={},
pages={1-12},
keywords={Prototypes;Visualization;Vectors;Feature extraction;Semantic segmentation;Training;Accuracy;Semantics;Correlation;Convolutional neural networks;Few-shot learning;Image segmentation;Deep Learning},
doi={10.1109/OJCS.2025.3592291}}
```