Authors:
Dominique Mercier 1,2; Jwalin Bhatt 2; Andreas Dengel 1,2 and Sheraz Ahmed 1
Affiliations:
1 German Research Center for Artificial Intelligence GmbH (DFKI), Kaiserslautern, Germany
2 Technical University Kaiserslautern (TUK), Kaiserslautern, Germany
Keyword(s):
Deep Learning, Time Series, Interpretability, Attribution, Benchmarking, Convolutional Neural Network, Artificial Intelligence, Survey.
Abstract:
In the last decade, neural networks have made a huge impact in both industry and research due to their ability to extract meaningful features from imprecise or complex data and to achieve superhuman performance in several domains. However, their lack of transparency hampers their use in safety-critical areas, where interpretability is often required by law. Recently, several methods have been proposed to uncover this black box by providing interpretations of the predictions made by these models. This paper focuses on time series analysis and benchmarks several state-of-the-art attribution methods that compute explanations for convolutional classifiers. The presented experiments cover gradient-based and perturbation-based attribution methods. A detailed analysis shows that perturbation-based approaches are superior with respect to Sensitivity and the occlusion game, and tend to produce explanations with higher continuity. Conversely, gradient-based techniques excel in runtime and Infidelity. In addition, a validation of the methods' dependence on the trained model, their feasible application domains, and their individual characteristics is included. The findings accentuate that choosing the best-suited attribution method strongly depends on the desired use case. Neither category of attribution methods nor any single approach has shown outstanding performance across all aspects.
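To make the two benchmarked method families concrete, the following is a minimal sketch, assuming a toy PyTorch 1D convolutional classifier; the architecture, input length, occlusion window, and zero baseline are illustrative assumptions, not the paper's experimental setup. It contrasts a gradient-based attribution (plain saliency) with a perturbation-based one (occlusion):

import torch
import torch.nn as nn

# Toy 1D convolutional classifier (an illustrative assumption, not the
# architecture benchmarked in the paper).
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

x = torch.randn(1, 1, 128, requires_grad=True)  # one univariate series, 128 steps
target = model(x)[0].argmax().item()            # class to explain

# Gradient-based attribution (saliency): |d score / d input| per time step.
model(x)[0, target].backward()
saliency = x.grad.abs().squeeze()

# Perturbation-based attribution (occlusion): score drop when a window of
# time steps is zeroed out (window size and zero baseline are assumptions).
occlusion = torch.zeros(128)
window = 8
with torch.no_grad():
    base = model(x)[0, target]
    for t in range(0, 128, window):
        x_pert = x.clone()
        x_pert[0, 0, t:t + window] = 0.0
        occlusion[t:t + window] = base - model(x_pert)[0, target]

Note that the occlusion scores require one forward pass per perturbed window, whereas the saliency map needs a single backward pass, which is consistent with the runtime advantage of gradient-based methods reported above.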