Authors: Gabriel R. Machado ¹; Ronaldo R. Goldschmidt ¹ and Eugênio Silva ²
Affiliations: ¹ Section of Computer Engineering (SE/8), Military Institute of Engineering (IME), Rio de Janeiro, Brazil; ² Computing Center (UComp), State University of West Zone (UEZO), Rio de Janeiro, Brazil
Keyword(s):
Artificial Intelligence and Decision Support Systems, Advanced Applications of Neural Networks.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Enterprise Information Systems; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neural Network Software and Applications; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Theory and Methods
Abstract:
Deep Neural Networks have been increasingly used in decision support systems, mainly because they are the state-of-the-art algorithms for challenging tasks such as image recognition and classification. However, recent studies have shown that these learning models are vulnerable to adversarial attacks, i.e., attacks conducted with images maliciously modified by an algorithm to induce misclassification. Several works have proposed methods for defending against adversarial images, but these defenses have proven ineffective because they allow attackers to learn their internal operation and evade them. This paper therefore proposes a defense called MultiMagNet, which randomly incorporates multiple defense components at runtime, introducing an expanded form of non-deterministic behavior that hinders evasion by adversarial attacks. Experiments performed on the MNIST and CIFAR-10 datasets show that MultiMagNet can protect classification models from adversarial images generated by the main existing attack algorithms.
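The core idea described in the abstract — choosing a random subset of defense components at each inference so attackers cannot predict which defenses are active — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function and parameter names (`multimagnet_predict`, `detectors`, `k`) are assumptions, and each detector is modeled as a callable that flags an input as adversarial.

```python
import random

def multimagnet_predict(x, detectors, classifier, k=3, seed=None):
    """Classify x, rejecting it if any randomly selected detector flags it.

    detectors  -- pool of callables; each returns True if x looks adversarial
    classifier -- the protected classification model
    k          -- number of components drawn from the pool at runtime
    seed       -- fixed here only for reproducible demonstration
    """
    rng = random.Random(seed)
    # Non-deterministic component selection: a fresh random subset per query
    # is what hinders an attacker from adapting to the active defenses.
    chosen = rng.sample(detectors, k)
    if any(detector(x) for detector in chosen):
        return "rejected"
    return classifier(x)
```

In practice the detectors would be, for example, autoencoder-based components whose reconstruction error exceeds a calibrated threshold on adversarial inputs; here they are stubbed to keep the sketch self-contained.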