Adversarial Attacks for Network Interpretation

Date

2018-01-01

Department

Computer Science and Electrical Engineering

Program

Computer Science

Rights

Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. Item may be obtained via Interlibrary Loan through a local library, pending the author/copyright holder's permission.
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless covered by a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.

Abstract

Adversarial attacks are known to fool deep neural networks into producing incorrect predictions. We introduce adversarial attack algorithms that fool not only the network's prediction but also our interpretation of the cause of that prediction. We show that our algorithms can empower practical adversarial attacks, such as adversarial patches, by hiding them from network interpretation tools. We also introduce adversarial attack algorithms that change the interpretation of the network's decision without changing the network's output, and we show that an attack tuned for Grad-CAM visualization transfers directly to other visualization algorithms such as CAM and occluding patches. We believe our algorithms can facilitate the development of more robust network interpretation tools that truly explain the network's underlying decision-making process.
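
For concreteness, below is a minimal sketch of the second kind of attack described in the abstract: a PGD-style optimization of an L-infinity-bounded perturbation that pushes a differentiable Grad-CAM heatmap toward an arbitrary target region while penalizing any change in the logits, so the predicted class is preserved. This is an illustrative reconstruction, not the thesis's actual algorithm; the PyTorch ResNet-18 backbone, the target region, and the hyperparameters EPS, LR, and LAM are all assumptions made for the example.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def features_and_logits(x):
    # Run ResNet-18 manually so the last conv feature map stays in the graph.
    f = model.conv1(x); f = model.bn1(f); f = model.relu(f); f = model.maxpool(f)
    f = model.layer1(f); f = model.layer2(f); f = model.layer3(f)
    f = model.layer4(f)                            # last conv block: (1, 512, 7, 7)
    logits = model.fc(torch.flatten(model.avgpool(f), 1))
    return f, logits

def grad_cam(feats, logits, cls):
    # Differentiable Grad-CAM: channel weights are the spatially averaged
    # gradients of the class score with respect to the feature maps.
    grads = torch.autograd.grad(logits[0, cls], feats, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)
    return F.relu((weights * feats).sum(dim=1))    # (1, 7, 7) heatmap

x = torch.rand(1, 3, 224, 224)                     # stand-in for a real image
with torch.no_grad():
    _, clean_logits = features_and_logits(x)
cls = clean_logits.argmax(dim=1).item()            # class whose prediction we keep

target = torch.zeros(1, 7, 7)                      # move attention to the top-left corner
target[0, :3, :3] = 1.0

delta = torch.zeros_like(x, requires_grad=True)
EPS, LR, LAM = 8 / 255, 1 / 255, 10.0              # assumed attack hyperparameters
for _ in range(100):
    adv = (x + delta).clamp(0.0, 1.0)              # keep pixels in valid range
    feats, logits = features_and_logits(adv)
    cam = grad_cam(feats, logits, cls)
    cam = cam / (cam.max() + 1e-8)                 # normalize for a stable loss
    # Push the heatmap toward the target region while pinning the logits.
    loss = F.mse_loss(cam, target) + LAM * F.mse_loss(logits, clean_logits)
    loss.backward()
    with torch.no_grad():
        delta -= LR * delta.grad.sign()            # signed-gradient (PGD-style) step
        delta.clamp_(-EPS, EPS)                    # stay inside the L-inf ball
        delta.grad.zero_()

After the loop, x + delta should yield (under these assumptions) roughly the same logits as x while its Grad-CAM heatmap concentrates in the chosen corner, which is the decoupling of output and interpretation the abstract describes.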