Paper: Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters

Authors: Shuangchi Gu 1, Ping Yi 1, Ting Zhu 2, Yao Yao 2 and Wei Wang 2

Affiliations: 1 School of Cyber Security, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, China; 2 Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, U.S.A.

ISBN: 978-989-758-350-6

Keyword(s): Normalizing Filter, Adversarial Example, Detection Framework.

Abstract: Deep neural networks are vulnerable to adversarial examples: inputs modified with imperceptible but malicious perturbations. Most defenses focus on tuning the DNN itself; we propose a defense that instead modifies the input data to detect adversarial examples. We establish a detection framework based on normalizing filters that partially erase the perturbations by smoothing the input image or reducing its color depth. The framework makes its decision by comparing the classification result of the original input against those of multiple normalized copies. Using several combinations of a gaussian blur filter, a median blur filter and a depth reduction filter, the evaluation reaches a high detection rate and partially restores adversarial examples on the MNIST dataset. The whole detection framework is a low-cost, highly extensible strategy for defending DNNs.

Full Text: PDF (CC BY-NC-ND 4.0)


Paper citation in several formats:
Gu, S.; Yi, P.; Zhu, T.; Yao, Y. and Wang, W. (2019). Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-350-6, pages 164-173. DOI: 10.5220/0007370301640173

@conference{icaart19,
author={Shuangchi Gu and Ping Yi and Ting Zhu and Yao Yao and Wei Wang},
title={Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters},
booktitle={Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2019},
pages={164-173},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007370301640173},
isbn={978-989-758-350-6},
}

TY - CONF

JO - Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters
SN - 978-989-758-350-6
AU - Gu, S.
AU - Yi, P.
AU - Zhu, T.
AU - Yao, Y.
AU - Wang, W.
PY - 2019
SP - 164
EP - 173
DO - 10.5220/0007370301640173
