Description
Dear authors,
I hope this message finds you well. Following the dataset processing method you provided, I processed the data and trained a model with `image_train.py`. However, when I tried to test the trained model with `image_sample.py`, I got the error below. Note that I followed the training settings from your paper and GitHub repository, and testing `image_sample.py` with your pre-trained model works fine. The error persists despite my adherence to the guidelines, so I would greatly appreciate any insight into the possible cause of this discrepancy.
Thank you in advance for your attention and support. I look forward to hearing from you soon.
Error:
```
Traceback (most recent call last):
File "/newdata/xws/GESCO-main/image_sample.py", line 414, in <module>
main()
File "/newdata/xws/GESCO-main/image_sample.py", line 53, in main
model.load_state_dict(th.load(args.model_path, map_location='cuda:0'))
File "/root/anaconda3/envs/pytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNetModel:
size mismatch for middle_block.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for middle_block.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for middle_block.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for middle_block.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.0.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.0.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.1.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.1.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.3.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.3.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.4.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.4.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.6.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.6.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.7.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.7.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.9.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.9.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.10.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.10.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.1.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.1.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.12.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.12.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.13.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.13.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.1.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.1.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.15.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.15.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.16.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.16.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.17.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.17.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
```
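Every mismatch above is in the second dimension of a `mlp_shared` conv weight (2 in the checkpoint vs. 36 in the model being built), which is the layer's input-channel count; this usually means the number of conditioning/label channels (e.g. a `num_classes`-style flag) differed between the training run and the sampling script. The snippet below is a hedged diagnostic sketch, not GESCO code: `report_shape_mismatches` is a hypothetical helper, and the two `Conv2d` layers only reproduce the 2-vs-36 pattern so you can see what to compare between `th.load(args.model_path)` and `model.state_dict()`.

```python
import torch
import torch.nn as nn

def report_shape_mismatches(checkpoint_sd, model_sd):
    """List (key, checkpoint_shape, model_shape) for every shared key whose shapes differ."""
    mismatches = []
    for key, tensor in checkpoint_sd.items():
        if key in model_sd and tuple(tensor.shape) != tuple(model_sd[key].shape):
            mismatches.append((key, tuple(tensor.shape), tuple(model_sd[key].shape)))
    return mismatches

# Toy reproduction of the error pattern: the conditioning conv's input-channel
# count depends on how many label channels the model was configured with.
trained = nn.Conv2d(2, 128, kernel_size=3)   # stands in for the checkpoint (2 channels)
current = nn.Conv2d(36, 128, kernel_size=3)  # stands in for the freshly built model (36 channels)

for key, ckpt_shape, model_shape in report_shape_mismatches(
    trained.state_dict(), current.state_dict()
):
    print(f"{key}: checkpoint {ckpt_shape} vs model {model_shape}")
```

Running the same comparison on the real checkpoint and the model constructed by `image_sample.py` should show whether only these conditioning convs disagree; if so, aligning the class/label-channel flags used at sampling time with those used in `image_train.py` is the first thing to check.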