Hidden Trigger Backdoor Attacks

dc.contributor.author: Saha, Aniruddha
dc.contributor.author: Subramanya, Akshayvarun
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2020-03-11T18:24:45Z
dc.date.available: 2020-03-11T18:24:45Z
dc.date.issued: 2019-12-21
dc.description: Proceedings of the AAAI Conference on Artificial Intelligence
dc.description.abstract: With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research topic. Backdoor attacks are a form of adversarial attack on deep networks in which the attacker provides poisoned data for the victim to train the model with, and then activates the attack by presenting a small trigger pattern at test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoned data that can be identified by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack in which the poisoned data look natural and carry correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps it secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, even though the model performs well on clean data. We also show that our proposed attack cannot be easily defended against using a state-of-the-art defense algorithm for backdoor attacks. (See the illustrative sketch below the record for one way such a trigger can be hidden.)
dc.description.sponsorship: This work was performed under financial assistance award 60NANB18D279 from the U.S. Department of Commerce, National Institute of Standards and Technology, with additional funding from SAP SE and NSF grant 1845216.
dc.description.uri: https://aaai.org/ojs/index.php/AAAI/article/view/6871
dc.format.extent: 9 pages
dc.genre: conference papers and proceedings preprints
dc.identifier: doi:10.13016/m2vwqj-nkvb
dc.identifier.citation: Saha, Aniruddha; Subramanya, Akshayvarun; Pirsiavash, Hamed. Hidden Trigger Backdoor Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 34 (2020). https://aaai.org/ojs/index.php/AAAI/article/view/6871
dc.identifier.uri: http://hdl.handle.net/11603/17553
dc.identifier.uri: https://doi.org/10.1609/aaai.v34i07.6871
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: © 2019, Association for the Advancement of Artificial Intelligence
dc.subject: deep learning algorithms
dc.subject: adversarial attacks
dc.subject: backdoor attacks
dc.subject: deep networks
dc.subject: poisoned data
dc.title: Hidden Trigger Backdoor Attacks
dc.type: Text
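
The abstract states that the trigger is hidden in the poisoned data, but the record does not spell out the optimization behind it. Below is a minimal, hypothetical sketch of one way such hiding can work: a feature-collision objective in which a poison image stays within a small L-infinity budget of a target-class image (so it looks natural and keeps its correct label) while being pushed toward the features of a source image carrying the trigger. The ResNet-18 backbone, function names, and hyperparameters (eps, steps, lr) are illustrative assumptions, not details taken from this record.

```python
# Hypothetical feature-collision sketch of a hidden-trigger poison.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torchvision.models as models


def paste_trigger(img, trigger, x, y):
    """Return a copy of img with the trigger patch pasted at (x, y)."""
    out = img.clone()
    _, th, tw = trigger.shape
    out[:, y:y + th, x:x + tw] = trigger
    return out


def project(z, anchor, eps):
    """Clamp z into the L-infinity ball of radius eps around anchor."""
    return (anchor + (z - anchor).clamp(-eps, eps)).clamp(0.0, 1.0)


def craft_poison(feats, source, target, trigger, eps=16 / 255, steps=200, lr=0.01):
    """Craft a poison that stays visually close to `target` (so it keeps
    its correct label under inspection) while matching the features of a
    triggered `source` image."""
    patched = paste_trigger(source, trigger, x=0, y=0)
    with torch.no_grad():
        patched_feat = feats(patched.unsqueeze(0))
    z = target.clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the poison toward the triggered source in feature space.
        loss = (feats(z.unsqueeze(0)) - patched_feat).pow(2).sum()
        loss.backward()
        opt.step()
        # Keep the poison visually close to the target image.
        with torch.no_grad():
            z.copy_(project(z, target, eps))
    return z.detach()


# Usage with random tensors standing in for real images.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()  # expose penultimate-layer features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)  # only the poison image is optimized

source = torch.rand(3, 224, 224)   # image from the source class
target = torch.rand(3, 224, 224)   # image from the target class
trigger = torch.rand(3, 30, 30)    # secret trigger patch
poison = craft_poison(backbone, source, target, trigger)
```

In this sketch the trigger never appears in the poisoned image itself; per the abstract, the attacker would reveal it only at test time by pasting it at a random location on an unseen source-class image, while the model continues to behave normally on clean data.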

Files

Original bundle
Name: 1910.00033.pdf
Size: 1.53 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon to submission