Authors:
Shota Nemoto 1; Subhash Rajapaksha 2 and Despoina Perouli 2
Affiliations:
1 Case Western Reserve University, 10900 Euclid Avenue, Cleveland, Ohio, U.S.A.
2 Marquette University, 1250 West Wisconsin Avenue, Milwaukee, Wisconsin, U.S.A.
Keyword(s):
Neural Networks, Adversarial Examples, Evasion Attacks, Security, Electrocardiogram, ECG.
Abstract:
Evasion attacks produce adversarial examples by adding human-imperceptible perturbations that cause a machine learning model to label the input incorrectly. These black-box attacks do not require knowledge of the model's internal workings or access to its training inputs. Although such adversarial attacks have been shown to be successful in image classification problems, they have not been adequately explored in health care models. In this paper, we produce adversarial examples based on successful algorithms from the literature and attack a deep neural network that classifies heart rhythms in electrocardiograms (ECGs). Several batches of adversarial examples were produced, each batch with a different limit on the number of queries to the model. The adversarial ECGs at the median distance from their original counterparts showed slight but noticeable perturbations when compared side by side with the originals. However, the adversarial ECGs at the minimum distance in each batch were practically indistinguishable from the originals.
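The query-limited, hard-label setting described in the abstract can be illustrated with a minimal boundary-attack-style sketch (not the authors' exact algorithm); the classifier `model_predict`, the toy threshold model, the step sizes, and the query budget below are all illustrative assumptions. Accepted proposals move the adversarial signal toward the original while preserving the misclassification, and every call to the model counts against the query limit.

```python
import numpy as np

def boundary_attack(model_predict, x_orig, y_true, x_adv_init,
                    max_queries=1000, toward=0.1, noise=0.01, rng=None):
    """Hard-label, query-limited evasion attack (illustrative sketch).

    model_predict : callable mapping a 1-D signal to a class label;
                    treated as a black box (only outputs are observed).
    x_adv_init    : any starting signal already misclassified by the model.
    Returns the best adversarial signal found and its L2 distance to x_orig.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_adv = x_adv_init.copy()
    for _ in range(max_queries):  # each proposal costs one model query
        dist = np.linalg.norm(x_orig - x_adv)
        # Propose a point slightly closer to the original, plus small
        # random noise scaled to the current distance.
        candidate = x_adv + toward * (x_orig - x_adv)
        candidate += noise * dist * rng.standard_normal(x_orig.shape)
        if model_predict(candidate) != y_true:
            x_adv = candidate  # still adversarial and (usually) closer
    return x_adv, np.linalg.norm(x_orig - x_adv)

# Toy stand-in for an ECG rhythm classifier: label 1 if the peak exceeds 1.0.
model_predict = lambda sig: int(sig.max() > 1.0)
x_orig = 0.9 * np.sin(np.linspace(0, 4 * np.pi, 200))  # synthetic signal, label 0
x_init = x_orig + 0.5                                  # crude misclassified start
adv, dist = boundary_attack(model_predict, x_orig, y_true=0, x_adv_init=x_init)
print(f"L2 distance of adversarial signal to original: {dist:.4f}")
```

Raising `max_queries` lets the search accept more distance-reducing proposals, which mirrors the paper's observation that the query budget governs how close the adversarial ECGs come to their originals.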