Exploration Methods for Connectionist Q-learning in Bomberman

Authors: Joseph Groot Kormelink ¹, Madalina M. Drugan ² and Marco A. Wiering ¹

Affiliations: ¹ University of Groningen, Netherlands; ² ITLearns.Online, Netherlands

Keyword(s): Reinforcement Learning, Computer Games, Exploration Methods, Neural Networks.

Related Ontology Subjects/Areas/Topics: Agents ; Artificial Intelligence ; Artificial Intelligence and Decision Support Systems ; Autonomous Systems ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Distributed and Mobile Software Systems ; Enterprise Information Systems ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Knowledge Discovery and Information Retrieval ; Knowledge Engineering and Ontology Development ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Multi-Agent Systems ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Software Engineering ; Symbolic Systems ; Theory and Methods

Abstract: In this paper, we investigate which exploration method yields the best performance in the game Bomberman. In Bomberman, the controlled agent has to kill opponents by placing bombs. The agent is represented by a multi-layer perceptron that learns to play the game using Q-learning. We introduce two novel exploration strategies, Error-Driven-ε and Interval-Q, which base their explorative behavior on the temporal-difference error of Q-learning. The learning capabilities of these exploration strategies are compared to five existing methods: Random-Walk, Greedy, ε-Greedy, Diminishing ε-Greedy, and Max-Boltzmann. The results show that the methods that combine exploration with exploitation perform much better than the Random-Walk and Greedy strategies, which only select exploration or exploitation actions. Furthermore, the results show that Max-Boltzmann exploration performs best overall among the different techniques. The Error-Driven-ε exploration strategy also performs very well, but suffers from unstable learning behavior.
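For readers unfamiliar with the exploration rules named in the abstract, the following is a minimal Python sketch (using NumPy) of the two standard rules, ε-Greedy and Max-Boltzmann, together with the tabular Q-learning update whose temporal-difference error drives the paper's proposed Error-Driven-ε strategy. The function names and all parameter values here are illustrative assumptions, not taken from the paper, which trains a multi-layer perceptron rather than a table and whose novel strategies are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon=0.1):
    # With probability epsilon pick a uniformly random action; otherwise exploit.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def max_boltzmann(q_values, epsilon=0.1, temperature=1.0):
    # Like epsilon-greedy, but exploration actions are drawn from a
    # Boltzmann (softmax) distribution over the Q-values, so actions
    # that look more promising are explored more often.
    if rng.random() < epsilon:
        prefs = np.exp((q_values - np.max(q_values)) / temperature)
        probs = prefs / prefs.sum()
        return int(rng.choice(len(q_values), p=probs))
    return int(np.argmax(q_values))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # One tabular Q-learning step; delta is the temporal-difference
    # error that Error-Driven-epsilon bases its exploration rate on.
    delta = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * delta
    return delta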

CC BY-NC-ND 4.0


Paper citation in several formats:
Groot Kormelink, J.; Drugan, M. M. and Wiering, M. A. (2018). Exploration Methods for Connectionist Q-learning in Bomberman. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART; ISBN 978-989-758-275-2; ISSN 2184-433X, SciTePress, pages 355-362. DOI: 10.5220/0006556403550362

@conference{icaart18,
author={Groot Kormelink, Joseph and Drugan, Madalina M. and Wiering, Marco A.},
title={Exploration Methods for Connectionist Q-learning in Bomberman},
booktitle={Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2018},
pages={355-362},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006556403550362},
isbn={978-989-758-275-2},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Exploration Methods for Connectionist Q-learning in Bomberman
SN - 978-989-758-275-2
IS - 2184-433X
AU - Groot Kormelink, J.
AU - Drugan, M. M.
AU - Wiering, M. A.
PY - 2018
SP - 355
EP - 362
DO - 10.5220/0006556403550362
PB - SciTePress
ER - 