Human-agent Explainability: An Experimental Case Study on the Filtering of Explanations

Yazan Mualla, Igor Tchappi, Amro Najjar, Timotheus Kampik, Stéphane Galland, Christophe Nicolle

Abstract

Communication between robots/agents and humans is a challenge, since humans are typically not capable of understanding an agent's state of mind. To overcome this challenge, this paper relies on recent advances in the domain of eXplainable Artificial Intelligence (XAI) to trace the decisions of the agents, increase the human's understanding of the agents' behavior, and hence improve efficiency and user satisfaction. In particular, we propose a Human-Agent EXplainability Architecture (HAEXA) to model human-agent explainability. HAEXA filters the explanations provided by the agents to the human user to reduce the user's cognitive load. To evaluate HAEXA, a human-computer interaction experiment is conducted in which participants watch an agent-based simulation of aerial package delivery and fill in a questionnaire that collects their responses. The questionnaire is built according to XAI metrics established in the literature. The significance of the results is verified using Mann-Whitney U tests. The results show that the explanations increase the understandability of the simulation for human users. However, too many details in the explanations overwhelm them; hence, in many scenarios, it is preferable to filter the explanations.


Paper Citation


in Harvard Style

Mualla Y., Tchappi I., Najjar A., Kampik T., Galland S. and Nicolle C. (2020). Human-agent Explainability: An Experimental Case Study on the Filtering of Explanations. In Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 1: HAMT, ISBN 978-989-758-395-7, pages 378-385. DOI: 10.5220/0009382903780385


in Bibtex Style

@conference{hamt20,
author={Yazan Mualla and Igor Tchappi and Amro Najjar and Timotheus Kampik and Stéphane Galland and Christophe Nicolle},
title={Human-agent Explainability: An Experimental Case Study on the Filtering of Explanations},
booktitle={Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 1: HAMT},
year={2020},
pages={378-385},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0009382903780385},
isbn={978-989-758-395-7},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 1: HAMT
TI - Human-agent Explainability: An Experimental Case Study on the Filtering of Explanations
SN - 978-989-758-395-7
AU - Mualla Y.
AU - Tchappi I.
AU - Najjar A.
AU - Kampik T.
AU - Galland S.
AU - Nicolle C.
PY - 2020
SP - 378
EP - 385
DO - 10.5220/0009382903780385