Authors:
Hiroshi Honda
and
Masafumi Hagiwara
Affiliation:
Faculty of Science and Technology, Keio University, Yokohama, Japan
Keyword(s):
Deep Learning, Explainable Artificial Intelligence, Fuzzy, Prolog, Reinforcement Learning, Symbolic Processing.
Abstract:
The authors propose methods for reproducing deep learning models in a symbolic representation from learned deep reinforcement learning models, and for building agents capable of knowledge communication with humans. It is difficult for humans to understand the behaviour of agents that use deep reinforcement learning, to inform such agents of the state of the environment, and to receive actions from them. In this paper, fuzzified states of the environment and agent actions are represented as rules of first-order predicate logic, and models using a symbolic representation are generated by learning such rules. By replacing deep reinforcement learning models with models that use a symbolic representation, humans can inform agents of the state of the environment and add rules to them. In the experiments, the authors reproduce trained deep reinforcement learning models with a high match rate for two types of reinforcement learning simulation environments. Using the reproduced models, the authors build agents that can communicate with humans, which had not been realized thus far. The proposed method is the first case of building agents capable of knowledge communication with humans using trained reinforcement learning models.
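To illustrate the general idea of mapping fuzzified environment states to symbolic action rules, the following is a minimal sketch. It is not the paper's implementation: the triangular membership functions, the state variable (a pole angle), and the rule table are all hypothetical placeholders standing in for the learned first-order predicate logic rules (e.g. a Prolog clause such as `action(push_left) :- angle(negative)`).

```python
# Hypothetical sketch: fuzzify a continuous observation, then fire a
# symbolic rule analogous to a Prolog clause
#   action(push_left) :- angle(negative).
def fuzzify(value, sets):
    """Return the label of the fuzzy set with the highest membership."""
    best_label, best_mu = None, -1.0
    for label, (a, b, c) in sets.items():   # triangular set (a, b, c)
        if a <= value <= c:
            mu = (value - a) / (b - a) if value <= b else (c - value) / (c - b)
        else:
            mu = 0.0
        if mu > best_mu:
            best_label, best_mu = label, mu
    return best_label

# Illustrative membership functions for one state variable (pole angle).
ANGLE_SETS = {
    "negative": (-0.5, -0.25, 0.0),
    "zero":     (-0.25, 0.0, 0.25),
    "positive": (0.0, 0.25, 0.5),
}

# Rule table mapping fuzzified states to actions (placeholder rules).
RULES = {"negative": "push_left", "zero": "push_left", "positive": "push_right"}

def act(angle):
    """Select an action by firing the rule for the fuzzified state."""
    return RULES[fuzzify(angle, ANGLE_SETS)]

print(act(0.3))   # -> push_right
```

Because the policy is now a readable rule table rather than network weights, a human can inspect it, report a state in symbolic terms, or add a rule directly, which is the kind of knowledge communication the abstract describes.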