Title: Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters
Authors: Gu, Shuangchi; Yi, Ping; Zhu, Ting; Yao, Yao; Wang, Wei
Date issued: 2019 (deposited 2019-12-20)
Citation: Gu, Shuangchi; Yi, Ping; Zhu, Ting; Yao, Yao; Wang, Wei. "Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters." In Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART 2019), Volume 1, pages 164-173, 2019.
Links:
  http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0007370301640173
  https://doi.org/10.5220/0007370301640173
  http://hdl.handle.net/11603/16930
Abstract: Deep neural networks are vulnerable to adversarial examples, inputs modified with unnoticeable but malicious perturbations. Most defense methods focus only on tuning the DNN itself; we propose a defense that instead modifies the input data to detect adversarial examples. We establish a detection framework based on normalizing filters that partially erase these perturbations by smoothing the input image or reducing its bit depth. The framework makes its decision by comparing the classification results of the original input with those of multiple normalized inputs. Using several combinations of a gaussian blur filter, a median blur filter, and a depth reduction filter, the evaluation reaches a high detection rate and achieves partial restoration of adversarial examples on the MNIST dataset. The detection framework is a low-cost, highly extensible strategy for defending DNNs.
Extent: 10 pages
Language: en-US
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Keywords: normalizing filter; adversarial example; detection framework
Type: Text
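
The abstract's detection idea (classify the original input and several normalized copies, then flag a disagreement as adversarial) can be illustrated with the following minimal sketch. It is not the authors' code: the `classify` callable is a hypothetical placeholder for a trained DNN's prediction function, and the filter parameters (`sigma`, `median_size`, `bits`) are assumed defaults, not values from the paper.

```python
# Sketch of a normalizing-filter detection check, assuming images are
# numpy arrays with pixel values in [0, 1] and `classify(image)` returns
# a class label from some trained DNN (hypothetical placeholder).
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter


def reduce_bit_depth(image, bits=4):
    """Depth reduction filter: quantize pixels to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels


def is_adversarial(image, classify, sigma=1.0, median_size=3, bits=4):
    """Flag the input as adversarial if any normalized copy of it
    changes the classifier's predicted label."""
    original_label = classify(image)
    normalized_inputs = [
        gaussian_filter(image, sigma=sigma),      # gaussian blur filter
        median_filter(image, size=median_size),   # median blur filter
        reduce_bit_depth(image, bits=bits),       # depth reduction filter
    ]
    return any(classify(x) != original_label for x in normalized_inputs)
```

Because each filter only smooths or quantizes the input, the check adds little cost on top of the extra forward passes, which matches the abstract's description of the framework as low-cost and easily extended with additional filters.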