NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks

dc.contributor.author: Tan, Wenkai
dc.contributor.author: Renkhoff, Justus
dc.contributor.author: Velasquez, Alvaro
dc.contributor.author: Wang, Ziyu
dc.contributor.author: Li, Lusi
dc.contributor.author: Wang, Jian
dc.contributor.author: Niu, Shuteng
dc.contributor.author: Yang, Fan
dc.contributor.author: Liu, Yongxin
dc.contributor.author: Song, Houbing
dc.date.accessioned: 2023-04-06T18:03:02Z
dc.date.available: 2023-04-06T18:03:02Z
dc.date.issued: 2023-03-09
dc.description.abstract: Deep Learning (DL) and Deep Neural Networks (DNNs) are widely used in various domains. However, adversarial attacks can easily mislead a neural network into wrong decisions, so defense mechanisms are highly desirable in safety-critical applications. In this paper, we first use the gradient-weighted class activation map (GradCAM) to analyze the behavior deviation of the VGG-16 network when its inputs are mixed with adversarial perturbation or Gaussian noise. In particular, our method can locate vulnerable layers that are sensitive to adversarial perturbation and Gaussian noise, and we show that the behavior deviation of these vulnerable layers can be used to detect adversarial examples. Second, we propose a novel NoiseCAM algorithm that integrates information from globally weighted and pixel-level weighted class activation maps. Our algorithm is highly sensitive to adversarial perturbations and does not respond to Gaussian random noise mixed into the inputs. Third, we compare adversarial-example detection using behavior deviation against detection using NoiseCAM, and show that NoiseCAM achieves the better overall performance. Our work could provide a useful tool to defend against certain types of adversarial attacks on deep neural networks.
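The GradCAM heatmaps the abstract relies on can be sketched with the standard gradient-weighted class activation mapping formula: channel weights are the global-average-pooled gradients of the class score, and the map is a ReLU of the weighted sum of feature maps. This is a minimal NumPy illustration of that generic formula, not the authors' NoiseCAM implementation; the function name and array shapes are our own assumptions.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap for one convolutional layer.

    activations: feature maps of shape (K, H, W).
    gradients: gradients of the class score w.r.t. those maps, same shape.
    Returns a non-negative (H, W) heatmap.
    """
    # Channel weights: global-average-pool the gradients over the spatial axes.
    alpha = gradients.mean(axis=(1, 2))                       # shape (K,)
    # Weighted sum of feature maps; ReLU keeps only positive class evidence.
    cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0.0)
    return cam
```

In a full pipeline the activations and gradients would come from a forward and backward pass through a network such as VGG-16; computing this map for clean, noisy, and adversarial versions of an input is what lets layer-wise behavior deviation be measured.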
dc.description.sponsorship: This research was partially supported by the National Science Foundation under Grant No. 2309760.
dc.description.uri: https://arxiv.org/abs/2303.06151
dc.format.extent: 8 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2jnoe-ho7t
dc.identifier.uri: https://doi.org/10.48550/arXiv.2303.06151
dc.identifier.uri: http://hdl.handle.net/11603/27431
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Information Systems Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
dc.rights.uri: https://creativecommons.org/publicdomain/zero/1.0/
dc.title: NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-2631-9223

Files

Original bundle
Name: 2303.06151.pdf
Size: 1.69 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission