Adversarial Patches Exploiting Contextual Reasoning in Object Detection
Date
2019-12-21
Citation of Original Publication
Saha, Aniruddha; Subramanya, Akshayvarun; Patil, Koninika; Adversarial Patches Exploiting Contextual Reasoning in Object Detection; Computer Vision and Pattern Recognition (2019); https://arxiv.org/abs/1910.00068
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Most fast object detection algorithms are well known to use spatial context to improve accuracy. These detectors gain inference speed by making a single forward pass per image, which means they implicitly rely on contextual reasoning for their predictions. We show that an adversary can exploit such contextual reasoning to fool standard detectors. We develop adversarial patches that make an object detector blind to a particular category even though the patch does not overlap with the missed detections. We also study methods to fix this vulnerability and show that limiting the use of contextual reasoning during object detector training acts as a form of defense that makes the detector robust. We believe defending against context-based adversarial attack algorithms is not easy. We take a step in that direction and urge the research community to give attention to this vulnerability.
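The core idea of the abstract — that a patch placed away from an object can still suppress its detection because the detector's score depends on the whole image — can be illustrated with a toy sketch. The setup below is a hypothetical, deliberately simplified stand-in (a linear scoring model optimized with NumPy), not the paper's actual attack or code; names such as `score` and the patch region are assumptions for illustration only.

```python
import numpy as np

# Toy illustration (assumed setup, NOT the paper's method): a linear
# "detector" scores a target class from the whole image, so pixels
# outside the object's region (the "context") still affect the score.

rng = np.random.default_rng(0)

H = W = 8                      # tiny image for the sketch
w = rng.normal(size=(H, W))    # detector weights: score = sum(w * image)
image = rng.normal(size=(H, W))

# Patch region confined to the top-left corner; the "object" is assumed
# to live elsewhere, so the patch never overlaps the detection we suppress.
mask = np.zeros((H, W), dtype=bool)
mask[:3, :3] = True

def score(img):
    """Target-class score of the toy linear detector."""
    return float((w * img).sum())

before = score(image)

# For a linear model the gradient of the score w.r.t. the image is just w,
# so gradient descent on only the patch pixels drives the class score down.
patched = image.copy()
for _ in range(100):
    patched[mask] -= 0.1 * w[mask]

after = score(patched)
```

Because only the masked (contextual) pixels were modified, the drop in `after` relative to `before` mimics how a non-overlapping patch can exploit context to hide a category; a real attack would instead backpropagate a detection loss through a trained detector such as YOLO.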