Authors: Joseph Groot Kormelink 1 ; Madalina M. Drugan 2 and Marco A. Wiering 1

Affiliations: 1 University of Groningen, Netherlands ; 2 ITLearns.Online, Netherlands

ISBN: 978-989-758-275-2

Keyword(s): Reinforcement Learning, Computer Games, Exploration Methods, Neural Networks.

Related Ontology Subjects/Areas/Topics: Agents ; Artificial Intelligence ; Artificial Intelligence and Decision Support Systems ; Autonomous Systems ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Distributed and Mobile Software Systems ; Enterprise Information Systems ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Knowledge Discovery and Information Retrieval ; Knowledge Engineering and Ontology Development ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Multi-Agent Systems ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Software Engineering ; Symbolic Systems ; Theory and Methods

Abstract: In this paper, we investigate which exploration method yields the best performance in the game Bomberman. In Bomberman the controlled agent has to kill opponents by placing bombs. The agent is represented by a multi-layer perceptron that learns to play the game with the use of Q-learning. We introduce two novel exploration strategies: Error-Driven-ε and Interval-Q, which base their explorative behavior on the temporal-difference error of Q-learning. The learning capabilities of these exploration strategies are compared to five existing methods: Random-Walk, Greedy, ε-Greedy, Diminishing ε-Greedy, and Max-Boltzmann. The results show that the methods that combine exploration with exploitation perform much better than the Random-Walk and Greedy strategies, which only select exploration or exploitation actions. Furthermore, the results show that Max-Boltzmann exploration performs best overall among the different techniques. The Error-Driven-ε exploration strategy also performs very well, but suffers from unstable learning behavior.
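The exact update rules for the compared strategies are given in the full paper; as a minimal sketch of the two main families it discusses, the snippet below shows ε-Greedy and Max-Boltzmann action selection over a vector of Q-values (function names, the temperature parameter, and the example Q-values are illustrative assumptions, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon):
    """ε-Greedy: with probability epsilon pick a uniformly random action,
    otherwise pick the greedy (highest-Q) action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def max_boltzmann(q_values, epsilon, temperature=1.0):
    """Max-Boltzmann: exploit greedily with probability 1 - epsilon;
    otherwise sample an exploration action from a softmax (Boltzmann)
    distribution over the Q-values, so better actions are explored more."""
    if rng.random() >= epsilon:
        return int(np.argmax(q_values))
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                     # shift for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_values), p=probs))

# Example: four actions, action 1 has the highest Q-value.
q = [0.1, 0.5, 0.2, 0.2]
action = max_boltzmann(q, epsilon=0.1)
```

With epsilon set to 0 both strategies reduce to pure Greedy selection; Diminishing ε-Greedy simply decays epsilon over training, and the paper's Error-Driven-ε instead adapts the exploration rate from the temporal-difference error.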


Paper citation in several formats:
Groot Kormelink, J.; Drugan, M. M. and Wiering, M. (2018). Exploration Methods for Connectionist Q-learning in Bomberman. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-275-2, pages 355-362. DOI: 10.5220/0006556403550362

@inproceedings{icaart18grootkormelink,
author={Joseph Groot Kormelink and Madalina M. Drugan and Marco A. Wiering},
title={Exploration Methods for Connectionist Q-learning in Bomberman},
booktitle={Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2018},
pages={355-362},
doi={10.5220/0006556403550362},
isbn={978-989-758-275-2},
}


TY - CONF
JO - Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Exploration Methods for Connectionist Q-learning in Bomberman
SN - 978-989-758-275-2
AU - Groot Kormelink, J.
AU - Drugan, M. M.
AU - Wiering, M.
PY - 2018
SP - 355
EP - 362
DO - 10.5220/0006556403550362
ER -
