Are vision transformers more robust than CNNs for backdoor attacks?

dc.contributor.author: Subramanya, Akshayvarun
dc.contributor.author: Saha, Aniruddha
dc.contributor.author: Koohpayegani, Soroush Abbasi
dc.contributor.author: Tejankar, Ajinkya
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2023-11-10T14:20:45Z
dc.date.available: 2023-11-10T14:20:45Z
dc.date.issued: 2023-02-13
dc.description: ICLR 2023, Eleventh International Conference on Learning Representations; Kigali, Rwanda; May 1–5, 2023
dc.description.abstract: Transformer architectures are based on a self-attention mechanism that processes images as a sequence of patches. Since their design differs substantially from that of CNNs, it is interesting to study whether transformers are vulnerable to backdoor attacks and how different transformer architectures affect attack success rates. In a backdoor attack, an attacker poisons a small fraction of the training images with a specific trigger (backdoor) that is activated later. The model performs well on clean test images, but the attacker can manipulate its decision by showing the trigger on an image at test time. In this paper, we perform a comparative study of state-of-the-art architectures through the lens of backdoor robustness, specifically how attention mechanisms affect robustness. We show that the popular vision transformer architecture (ViT) is the least robust architecture, and that ResMLP, which belongs to a class called Feed Forward Networks (FFN), is the most robust to backdoor attacks among state-of-the-art architectures. We also find an intriguing difference between transformers and CNNs: interpretation algorithms effectively highlight the trigger on test images for transformers but not for CNNs. Based on this observation, we find that a test-time image blocking defense reduces the attack success rate by a large margin for transformers. We also show that such blocking mechanisms can be incorporated during the training process to improve robustness even further. We believe our experimental findings will encourage the community to better understand the role of building-block components when developing novel architectures that are robust to backdoor attacks.
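The two mechanisms the abstract describes, trigger-based data poisoning and test-time blocking of the most-salient image patch, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation: the `poison_batch` and `block_and_classify` helper names, the corner trigger placement, the 5% poisoning rate, and the 16-pixel patch size are all choices made for this example, and the saliency map is taken as given (e.g., from Grad-CAM for CNNs or attention rollout for ViTs).

```python
import torch

def poison_batch(images, labels, trigger, target_class, poison_frac=0.05):
    """BadNets-style poisoning sketch: stamp a small trigger patch onto a
    random subset of training images and relabel them to the attacker's
    target class. Corner placement and a 5% rate are arbitrary choices."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_frac * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    _, h, w = trigger.shape
    images[idx, :, -h:, -w:] = trigger   # paste trigger in bottom-right corner
    labels[idx] = target_class           # flip the poisoned labels
    return images, labels

def block_and_classify(model, image, saliency, patch=16):
    """Test-time blocking sketch: zero out the patch with the highest mean
    saliency, then classify the blocked image. `saliency` is an (H, W) map
    from any interpretation method; the paper's exact procedure may differ."""
    cells = saliency.unfold(0, patch, patch).unfold(1, patch, patch).mean((-1, -2))
    r, c = divmod(cells.argmax().item(), cells.size(1))
    blocked = image.clone()
    blocked[:, r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    with torch.no_grad():
        return model(blocked.unsqueeze(0)).argmax(dim=1)
```

Per the abstract's observation, such blocking would mainly help transformers, where interpretation maps reliably localize the trigger, and less so CNNs, where they do not.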
dc.description.uri: https://openreview.net/forum?id=7P_yIFi6zaA
dc.format.extent: 12 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2mnnl-dtgu
dc.identifier.uri: http://hdl.handle.net/11603/30672
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Are vision transformers more robust than CNNs for backdoor attacks?
dc.type: Text

Files

Original bundle (2 files)

Name: 2075_are_vision_transformers_more_r.pdf
Size: 3.83 MB
Format: Adobe Portable Document Format

Name: 2075_are_vision_transformers_more_r-Supplementary Material.zip
Size: 4.25 MB
Format: Unknown data format
Description: Supplementary material

License bundle (1 file)

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed to upon submission