
JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks

Authors: N. Benjamin Erichson (1), Zhewei Yao (2) and Michael W. Mahoney (1)

Affiliations: (1) ICSI and Department of Statistics, University of California at Berkeley, U.S.A.; (2) Department of Mathematics, University of California at Berkeley, U.S.A.

Keyword(s): Adversarial Learning, Robust Learning, Deep Neural Networks.

Abstract: It has been demonstrated that very simple attacks can fool highly-sophisticated neural network architectures. In particular, so-called adversarial examples, constructed from perturbations of input data that are small or imperceptible to humans but lead to different predictions, may pose an enormous risk in certain critical applications. In light of this, there has been a great deal of work on developing adversarial training strategies to improve model robustness. These training strategies are very expensive, in both human and computational time. To complement these approaches, we propose a very simple and inexpensive strategy which can be used to “retrofit” a previously-trained network to improve its resilience to adversarial attacks. More concretely, we propose a new activation function—the JumpReLU—which, when used in place of a ReLU in an already-trained model, leads to a trade-off between predictive accuracy and robustness. This trade-off is controlled by the jump size, a hyper-parameter which can be tuned during the validation stage. Our empirical results demonstrate that this increases model robustness, protecting against adversarial attacks with substantially increased levels of perturbations. This is accomplished simply by retrofitting existing networks with our JumpReLU activation function, without the need for retraining the model. Additionally, we demonstrate that adversarially trained (robust) models can greatly benefit from retrofitting.
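The abstract does not give the functional form, but its description (a ReLU whose output is suppressed until the input exceeds a tunable jump size) suggests a thresholded ReLU. Below is a minimal PyTorch sketch under that assumption, JumpReLU(x) = x if x > jump else 0, together with an illustrative retrofit helper that swaps ReLU modules in an already-trained network; the names JumpReLU, retrofit and the parameter jump are placeholders, not taken from the authors' code.

import torch
import torch.nn as nn

class JumpReLU(nn.Module):
    # Thresholded ReLU: pass x through only where it exceeds the jump size.
    def __init__(self, jump: float = 0.0):
        super().__init__()
        self.jump = jump  # hyper-parameter, tuned during the validation stage

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero out activations at or below the jump threshold; with jump = 0
        # this reduces to the ordinary ReLU.
        return x * (x > self.jump).to(x.dtype)

def retrofit(model: nn.Module, jump: float) -> nn.Module:
    # Replace every ReLU in an already-trained model with a JumpReLU,
    # without retraining any weights.
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, JumpReLU(jump))
        else:
            retrofit(child, jump)
    return model

# Example (hypothetical network): retrofit a trained classifier, then tune jump
# on validation data to trade a little accuracy for added robustness.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model = retrofit(model, jump=0.07)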

CC BY-NC-ND 4.0


Paper citation in several formats:
Erichson, N.; Yao, Z. and Mahoney, M. (2020). JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks. In Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-397-1; ISSN 2184-4313, SciTePress, pages 103-114. DOI: 10.5220/0009316401030114

@conference{icpram20,
author={N. Benjamin Erichson and Zhewei Yao and Michael W. Mahoney},
title={JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks},
booktitle={Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
year={2020},
pages={103-114},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0009316401030114},
isbn={978-989-758-397-1},
issn={2184-4313},
}

TY - CONF

JO - Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks
SN - 978-989-758-397-1
IS - 2184-4313
AU - Erichson, N.
AU - Yao, Z.
AU - Mahoney, M.
PY - 2020
SP - 103
EP - 114
DO - 10.5220/0009316401030114
PB - SciTePress