Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters

dc.contributor.author: Gu, Shuangchi
dc.contributor.author: Yi, Ping
dc.contributor.author: Zhu, Ting
dc.contributor.author: Yao, Yao
dc.contributor.author: Wang, Wei
dc.date.accessioned: 2019-12-20T16:17:01Z
dc.date.available: 2019-12-20T16:17:01Z
dc.date.issued: 2019
dc.description: In Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART 2019)
dc.description.abstract: Deep neural networks are vulnerable to adversarial examples: inputs modified with unnoticeable but malicious perturbations. Most defense methods focus only on tuning the DNN itself; we propose a novel defense method that instead modifies the input data to detect adversarial examples. We establish a detection framework based on normalizing filters that partially erase those perturbations by smoothing the input image or reducing its color depth. The framework makes its decision by comparing the classification results of the original input and multiple normalized inputs. Using several combinations of a Gaussian blur filter, a median blur filter, and a depth reduction filter, the evaluation reaches a high detection rate and achieves partial restoration of adversarial examples on the MNIST dataset. The whole detection framework is a low-cost, highly extensible strategy for DNN defense work.
dc.description.sponsorship: This work is supported by the National Natural Science Foundation of China (61571290, 61831007, 61431008), the National Key Research and Development Program of China (2017YFB0802900, 2017YFB0802300, 2018YFB0803503), Shanghai Municipal Science and Technology Projects (16511102605, 16DZ1200702), and NSF grants 1652669 and 1539047.
dc.description.uri: http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0007370301640173
dc.format.extent: 10 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2ffkj-sy8k
dc.identifier.citation: Gu, Shuangchi; Yi, Ping; Zhu, Ting; Yao, Yao; Wang, Wei; Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters; In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART, pages 164-173 (2019); http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0007370301640173
dc.identifier.uri: https://doi.org/10.5220/0007370301640173
dc.identifier.uri: http://hdl.handle.net/11603/16930
dc.language.iso: en_US
dc.publisher: ScitePress
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: normalizing filter
dc.subject: adversarial example
dc.subject: detection framework
dc.title: Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters
dc.type: Text
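
The abstract describes a simple decision rule: run the classifier on the original input and on several normalized copies, then flag the input if the predictions disagree. The following is a minimal sketch of that rule, not the authors' released code; classify is a hypothetical stand-in for any trained MNIST classifier, and the filter parameters (sigma, kernel size, bit depth) are illustrative values not taken from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def depth_reduce(image, bits=4):
        # Quantize to 2**bits gray levels; assumes pixel values in [0, 1].
        levels = 2 ** bits - 1
        return np.round(image * levels) / levels

    def normalize_variants(image):
        # The three normalizing filters named in the abstract.
        return [
            gaussian_filter(image, sigma=1.0),  # Gaussian blur (illustrative sigma)
            median_filter(image, size=3),       # median blur (illustrative kernel)
            depth_reduce(image, bits=4),        # depth reduction (illustrative bits)
        ]

    def is_adversarial(image, classify):
        # Flag the input if any normalized copy changes the predicted label.
        original_label = classify(image)
        return any(classify(v) != original_label
                   for v in normalize_variants(image))

Under this rule a clean image should keep its label through mild smoothing, while an adversarial perturbation, once partially erased by a filter, tends to flip the prediction; the same filtered variant can then serve the partial restoration role mentioned in the abstract.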

Files

Original bundle (1 of 1)
Name: ICAART_2019_79.pdf
Size: 4.9 MB
Format: Adobe Portable Document Format

License bundle (1 of 1)
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission