TigerNet: Individual Identification of Stripe Patterns in Amur Tigers Like Human Fingerprint
Chunmei Shi, Hao Fan, Nathan James Roberts, Wannian Cheng, Chenbing Chu, Yao Ning, Guangshun Jiang
The following are the main runtime environment dependencies for running the repository:
- Linux (we use Ubuntu 22.04)
- cuda 11.8
- python 3.9.15
- pytorch 2.1.0
- torchvision 0.16.0
- numpy 1.26.4
- opencv 4.7.0
- timm 1.0.19
- pillow 10.2.0
You can also view detailed environment information in the file 📜 environment.yaml.
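As a quick sanity check after installing, you can verify that the key packages above are importable. Note that Python import names can differ from the package names listed (e.g. opencv → `cv2`, pillow → `PIL`):

```python
import importlib

# Map Python import names to the dependency names listed above.
packages = {
    "torch": "pytorch",
    "torchvision": "torchvision",
    "numpy": "numpy",
    "cv2": "opencv",
    "timm": "timm",
    "PIL": "pillow",
}

for module_name, dep_name in packages.items():
    try:
        module = importlib.import_module(module_name)
        print(f"{dep_name}: {getattr(module, '__version__', 'unknown')}")
    except ImportError:
        print(f"{dep_name}: NOT INSTALLED")
```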
In this section, you can learn about our project structure.
The directory tree below shows the layout of the repository:
- 📁 TigerNet
  - 📁 data | (Stores dataset files)
    - 📁 origin_data
      - 📁 left body
        - 📁 individual 1
          - 📷 1.png
          - 📷 2.png
          - 📷 ...
        - 📁 individual 2
        - 📁 individual 3
        - 📁 ...
      - 📁 right body
    - 📁 stripe_data_100
    - 📁 zebra_origin
    - 📁 zebra_stripe_data_100
  - 📁 Dataset | (PyTorch Dataset class implementation)
    - 🐍 TigerDataSet.py
  - 📁 misc | (Stores README-related images)
  - 📁 model | (Stores model configuration, weights, and logs)
    - 📁 EfficientNetV2S_TripletAndCosineLoss | (Complete configuration of TigerNet)
      - 📁 left
        - 📁 train_log
        - 📁 weights
        - 📜 config.yaml
      - 📁 right
      - 📁 zebra
    - 📁 MiewID
    - 🐍 tools.py
  - 📁 nets | (PyTorch implementation of the model networks)
    - 📁 backbone
      - 🐍 EfficientNetV2.py
    - 📁 similarity_module
      - 🐍 dist_block.py
      - 🐍 loss_function.py
    - 🐍 MiewID.py
    - 🐍 TigerNet.py
  - 📁 utils | (Utility classes)
  - 📜 environment.yaml
  - 🐍 image_script.py
  - 🐍 predict.py
  - 🐍 test.py
  - 🐍 train.py
  - 🐍 train_miewID.py
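As a rough illustration of how the `individual N` folders under `data/origin_data/left body` map to identity labels, a generic indexing sketch is shown below. This is an assumption for illustration only, not the actual `TigerDataSet.py` implementation:

```python
import os

def index_individuals(root):
    """Walk root/<individual folder>/<image> and return (image_path, label)
    pairs, where the label is the index of the individual's folder.
    Illustrative sketch only; see Dataset/TigerDataSet.py for the real code."""
    samples = []
    for label, individual in enumerate(sorted(os.listdir(root))):
        folder = os.path.join(root, individual)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith((".png", ".jpg", ".jpeg")):
                samples.append((os.path.join(folder, name), label))
    return samples
```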
Due to copyright restrictions, we are unable to publicly release the dataset files mentioned in the paper.
If you have your own data to process, you can use the stripe-processing script we provide to apply the tiger stripe feature enhancement described in our paper.
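For intuition only: the `row` parameter and the `stripe_data_100` folder name suggest each image is reduced to a fixed number of rows of binarised stripe pattern. The toy numpy sketch below illustrates that idea; it is an assumption, not the actual algorithm in `image_script.py`:

```python
import numpy as np

def enhance_stripes(gray, row=100):
    """Toy sketch: downsample a grayscale image to `row` rows and binarise
    it so dark stripes become 1 and background 0. Illustration only; the
    real enhancement in image_script.py is more involved."""
    h, w = gray.shape
    cols = max(1, int(w * row / h))   # roughly preserve the aspect ratio
    ys = np.arange(row) * h // row    # nearest-neighbour row indices
    xs = np.arange(cols) * w // cols  # nearest-neighbour column indices
    small = gray[np.ix_(ys, xs)]
    # Binarise against the mean intensity: stripes (dark) -> 1, else 0.
    return (small < small.mean()).astype(np.uint8)
```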
Specifically, first place the corresponding individual animal image data according to the project structure described in the previous section, then modify the following configuration (lines 58 and 59) in 🐍 image_script.py:

```python
image_to_stripe_image(os.path.join('data', 'origin_data'),
                      os.path.join('data', 'stripe_data_100'), ExistingNotToConvert=True, row=100)
```

After that, simply run:

```shell
python image_script.py
```

PS: Since the processing time is relatively long, the script supports resuming from breakpoints.
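The resume-from-breakpoint behaviour can be pictured as skipping files whose output already exists, in the spirit of the `ExistingNotToConvert=True` flag. This is a simplified sketch, not the script's actual code:

```python
import os

def convert_missing(src_dir, dst_dir, convert_fn, skip_existing=True):
    """Mirror src_dir into dst_dir, calling convert_fn(src, dst) only for
    files whose output does not exist yet, so an interrupted run resumes
    where it left off instead of redoing finished work."""
    for dirpath, _, filenames in os.walk(src_dir):
        out_dir = os.path.join(dst_dir, os.path.relpath(dirpath, src_dir))
        os.makedirs(out_dir, exist_ok=True)
        for name in filenames:
            src, dst = os.path.join(dirpath, name), os.path.join(out_dir, name)
            if skip_existing and os.path.exists(dst):
                continue  # output from a previous run: skip it
            convert_fn(src, dst)
```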
This section introduces how to train the model.
Modify the following configuration in 🐍 train.py according to your actual situation:

```python
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1, 2, 3, 4, 5, 6, 7"
...
config_yaml_file_path = os.path.join('model', 'EfficientNetV2S_TripletAndCosineLoss', 'left', 'config.yaml')
```

Then run it directly:
```shell
python train.py
```

After modifying the relevant configurations, run directly:
```shell
python test.py
```

If you have two images that need to be compared, you can use our pre-written prediction script for similarity discrimination. Modify the image path configuration (lines 108 and 119) in 🐍 predict.py:
```python
image1_path = r'image_1_path'
...
image2_path = r'image_2_path'
```

Then run it directly:
```shell
python predict.py
```

Our implementation is built upon PyTorch and Pillow; we gratefully acknowledge their excellent work.
Our MiewID implementation references the original MiewID project.
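For intuition, similarity discrimination between two images of this kind typically reduces to a cosine score between their embedding vectors (in keeping with the cosine loss named above). The numpy sketch below is a generic illustration with an illustrative threshold, not the actual code in `predict.py`:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Cosine similarity between two embedding vectors: values near 1.0 mean
    the same direction (likely the same individual), near 0 mean unrelated."""
    a, b = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def same_individual(emb1, emb2, threshold=0.8):
    """Decide whether two embeddings belong to the same individual.
    The 0.8 threshold is illustrative, not a value from the paper."""
    return cosine_similarity(emb1, emb2) >= threshold
```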
