dc.contributor.author | Subramanya, Akshayvarun | |
dc.contributor.author | Pillai, Vipin | |
dc.contributor.author | Pirsiavash, Hamed | |
dc.date.accessioned | 2019-07-03T17:36:26Z | |
dc.date.available | 2019-07-03T17:36:26Z | |
dc.date.issued | 2018-12-06 | |
dc.description.abstract | Deep networks have been shown to be fooled rather easily using adversarial attack algorithms. Practical methods such as adversarial patches have been shown to be extremely effective in causing misclassification. However, these patches can be highlighted using standard network interpretation algorithms, thus revealing the identity of the adversary. We show that it is possible to create adversarial patches which not only fool the prediction, but also change what we interpret regarding the cause of prediction. We show that our algorithms can empower adversarial patches, by hiding them from network interpretation tools. We believe our algorithms can facilitate developing more robust network interpretation tools that truly explain the network's underlying decision making process. | en_US |
dc.description.sponsorship | This work was performed under the following financial assistance award: 60NANB18D279 from U.S. Department of Commerce, National Institute of Standards and Technology, and also funding from SAP SE. | en_US |
dc.description.uri | https://arxiv.org/abs/1812.02843 | en_US |
dc.format.extent | 10 pages | en_US |
dc.genre | conference papers and proceedings preprints | en_US |
dc.identifier | doi:10.13016/m256in-dfyc | |
dc.identifier.citation | Akshayvarun Subramanya, Vipin Pillai, Hamed Pirsiavash, Towards Hiding Adversarial Examples from Network Interpretation, Computer Vision and Pattern Recognition, 2018, https://arxiv.org/abs/1812.02843 | en_US |
dc.identifier.uri | http://hdl.handle.net/11603/14342 | |
dc.language.iso | en_US | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.subject | adversarial attack algorithms | en_US |
dc.subject | deep networks | en_US |
dc.subject | network interpretation | en_US |
dc.title | Towards Hiding Adversarial Examples from Network Interpretation | en_US |
dc.type | Text | en_US |