Since many medical images contain sensitive information, it is necessary to encrypt them before storing and further analyzing them. Some studies have proposed deep learning-based methods for medical image encryption and decryption, which can provide high security for medical image protection. However, our research shows that deep learning-based encryption models can themselves be threatened by backdoor attacks. In this paper, a backdoor attack paradigm for encryption and decryption networks is proposed, and corresponding attacks are designed for the encryption and decryption scenarios, respectively. To attack the encryption model, a backdoor discriminator, trained in random alternation with the normal discriminator, is adopted to confuse the encryption process. In the decryption scenario, some parameters of a subnetwork are replaced so that the subnetwork is activated when a trigger embedded in the input (encrypted image) is detected, greatly degrading the decryption performance. Moreover, considering the model performance degradation caused by parameter replacement, model pruning is also adopted to further strengthen the attack. Furthermore, image steganography is adopted to generate an invisible trigger for each image, which greatly improves the stealthiness of the backdoor attack. To the best of our knowledge, our work on designing backdoor attacks for encryption and decryption networks can serve as an attack mode against such networks and provides another research direction for improving their security. This research is also one of the earliest to realize a backdoor attack on deep learning-based medical image encryption and decryption networks in order to evaluate their security. Extensive experimental results show that the proposed method can effectively threaten the security of both the encryption and the decryption network.
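The abstract mentions using image steganography to hide an invisible, per-image trigger in the input. The paper's exact steganography scheme is not specified here; the sketch below illustrates the general idea with simple least-significant-bit (LSB) embedding, which perturbs each pixel by at most 1 and is therefore visually imperceptible. The function names and the 8x8 test patch are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def embed_trigger_lsb(image: np.ndarray, trigger_bits: np.ndarray) -> np.ndarray:
    """Embed a binary trigger into the least-significant bits of an image.

    image: uint8 array of shape (H, W); trigger_bits: flat 0/1 array, len <= H*W.
    Each pixel changes by at most 1, so the trigger stays invisible.
    """
    stego = image.copy().reshape(-1)
    n = trigger_bits.size
    # Clear the LSB, then write the trigger bit into it
    stego[:n] = (stego[:n] & 0xFE) | trigger_bits.astype(np.uint8)
    return stego.reshape(image.shape)

def extract_trigger_lsb(stego: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n embedded trigger bits from the stego image."""
    return stego.reshape(-1)[:n] & 1

# Example: hide a 16-bit trigger pattern in an 8x8 image patch
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
trigger = rng.integers(0, 2, size=16, dtype=np.uint8)
stego = embed_trigger_lsb(img, trigger)

# The backdoored network would check for this bit pattern to activate
assert np.array_equal(extract_trigger_lsb(stego, 16), trigger)
# Maximum per-pixel perturbation is 1 (invisible to the eye)
assert int(np.max(np.abs(stego.astype(int) - img.astype(int)))) <= 1
```

In an attack pipeline, the extracted bit pattern (rather than a visible patch) would serve as the condition that activates the replaced subnetwork, which is what makes such triggers hard to detect by visual inspection.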
# miserrman/BEDN