Paper

Authors: Juan Montoya and Christian Borgelt

Affiliation: Chair for Bioinformatics and Information Mining, University of Konstanz, Germany

ISBN: 978-989-758-350-6

Keyword(s): Wide and Deep Reinforcement Learning, Wide Deep Q-Networks, Value Function Approximation, Reinforcement Learning Agents.

Related Ontology Subjects/Areas/Topics: Artificial Intelligence ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Knowledge Discovery and Information Retrieval ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Symbolic Systems ; Theory and Methods

Abstract: For the last decade, Deep Reinforcement Learning has undergone exponential development; however, comparatively little has been done to integrate linear methods into it. Our Wide and Deep Reinforcement Learning framework provides a tool that combines linear and non-linear methods into one. For practical implementations, our framework can help integrate expert knowledge while improving the performance of existing Deep Reinforcement Learning algorithms. Our research aims to generate a simple, practical framework to extend such algorithms. To test this framework we develop an extension of the popular Deep Q-Networks algorithm, which we name Wide Deep Q-Networks. We analyze its performance compared to Deep Q-Networks and Linear Agents, as well as human players. We apply our new algorithm to Berkeley's Pac-Man environment. Our algorithm considerably outperforms Deep Q-Networks both in terms of learning speed and ultimate performance, showing its potential for boosting existing algorithms.
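The combination described in the abstract — a linear ("wide") value estimate over hand-crafted features summed with a non-linear ("deep") network estimate — can be sketched as follows. This is a minimal illustration under assumed shapes and names (`wide_q`, `deep_q`, `wide_deep_q` are hypothetical), not the authors' implementation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_q(state, W1, W2):
    """Deep component: a small MLP mapping a state vector to one Q-value per action."""
    h = np.maximum(0.0, state @ W1)  # ReLU hidden layer
    return h @ W2

def wide_q(features, w):
    """Wide component: linear Q-values over expert features, one feature column per action."""
    return features.T @ w  # features: (n_features, n_actions), w: (n_features,)

def wide_deep_q(state, features, W1, W2, w):
    """Combined Wide-and-Deep estimate: sum of the linear and deep parts."""
    return wide_q(features, w) + deep_q(state, W1, W2)

# Toy dimensions: 8-dim state, 16 hidden units, 5 expert features, 4 actions.
state = rng.normal(size=8)
W1 = 0.1 * rng.normal(size=(8, 16))
W2 = 0.1 * rng.normal(size=(16, 4))
features = rng.normal(size=(5, 4))
w = 0.1 * rng.normal(size=5)

q_values = wide_deep_q(state, features, W1, W2, w)  # one Q-value per action
greedy_action = int(np.argmax(q_values))
```

Both components could then be trained jointly with the usual Q-learning target; the point of the sketch is only that expert knowledge enters through `features` and `w` while the deep part learns its own representation.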

CC BY-NC-ND 4.0


Paper citation in several formats:
Montoya, J. and Borgelt, C. (2019). Wide and Deep Reinforcement Learning for Grid-based Action Games. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-350-6, pages 50-59. DOI: 10.5220/0007313200500059

@conference{icaart19,
author={Juan M. Montoya and Christian Borgelt},
title={Wide and Deep Reinforcement Learning for Grid-based Action Games},
booktitle={Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2019},
pages={50-59},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007313200500059},
isbn={978-989-758-350-6},
}

TY - CONF

JO - Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Wide and Deep Reinforcement Learning for Grid-based Action Games
SN - 978-989-758-350-6
AU - Montoya, J.
AU - Borgelt, C.
PY - 2019
SP - 50
EP - 59
DO - 10.5220/0007313200500059
ER -
