Authors: Kaiyu Suzuki 1; Tomofumi Matsuzawa 1; Munehiro Takimoto 1 and Yasushi Kambayashi 2

Affiliations: 1 Department of Information Sciences, Tokyo University of Science, Chiba, Japan; 2 Department of Computer Information Engineering, Nippon Institute of Technology, Saitama, Japan
Keyword(s): Explainable AI (XAI), Machine Learning, Neural Networks, Disaster Countermeasures, Seismic Disaster.
Abstract: One of the most important tasks for drones, which are attracting attention for assisting evacuees of natural disasters, is to make decisions automatically based on images captured by on-board cameras and to provide evacuees with useful information, such as evacuation guidance. For making such decisions automatically from images, deep learning is the most suitable and powerful method. Although deep learning exhibits high performance, presenting the rationale for its decisions remains a challenge. Several existing decision-making methods visualize which parts of an image the model has considered most intensively, but they are insufficient for situations that require urgent and accurate judgments. When we look for the basis of a decision, we need to know not only WHERE the model detects but also HOW it detects. As a first step toward showing HOW deep learning detects in image-based tasks, this study inserts vector quantization (VQ) into an intermediate layer. We propose a method that suppresses accuracy loss while preserving interpretability by applying VQ to the classification problem. Applying the Sinkhorn–Knopp algorithm, a constant embedding space, and a gradient penalty allows us to introduce VQ with high interpretability. These techniques should help us apply the proposed method to real-world tasks where the properties of the dataset are unknown.
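The abstract does not give implementation details, but the core operation it refers to, vector quantization of intermediate-layer activations, can be sketched as a nearest-neighbour lookup into a learned codebook. The following is a minimal illustration under our own assumptions (shapes, names, and the toy data are not from the paper, and the paper's Sinkhorn–Knopp balancing and gradient penalty are omitted):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each latent vector in z with its nearest codebook entry.

    z:        (N, D) array of intermediate-layer activations.
    codebook: (K, D) array of embedding vectors.
    Returns the quantized vectors and the chosen code indices; the
    discrete indices are what make the layer's behaviour inspectable.
    """
    # Squared Euclidean distance from every latent to every code, via broadcasting.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # nearest code per input
    return codebook[idx], idx

# Toy usage: 4 latents quantized against a codebook of 3 two-dimensional codes.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, -1.1], [0.05, 0.0]])
zq, idx = vector_quantize(z, codebook)
print(idx.tolist())  # → [0, 1, 2, 0]
```

In a trained network the codebook would be learned, and gradients would flow through the non-differentiable lookup with a straight-through estimator; this sketch only shows the forward quantization step.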
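The Sinkhorn–Knopp algorithm mentioned in the abstract is, in its basic form, an alternating row/column normalization that drives a positive matrix toward a doubly stochastic one; in VQ settings it is commonly used to balance how often each code is assigned. How exactly the paper applies it is not stated here, so the following standalone sketch shows only the generic algorithm:

```python
import numpy as np

def sinkhorn_knopp(M, n_iters=50):
    """Alternately normalize rows and columns of a positive matrix M
    so that it converges toward a doubly stochastic matrix
    (all row sums and column sums equal to 1)."""
    P = M.copy()
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # make each row sum to 1
        P /= P.sum(axis=0, keepdims=True)  # make each column sum to 1
    return P

# Toy usage: a 2x2 positive matrix becomes (nearly) doubly stochastic.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
P = sinkhorn_knopp(M)
print(P.sum(axis=0), P.sum(axis=1))  # both close to [1. 1.]
```

Interpreted as an assignment matrix between inputs and codes, the doubly stochastic result prevents the quantizer from collapsing onto a few codebook entries, which is one plausible reason the abstract pairs it with VQ.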