Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games

Gabriel Leuenberger, Marco A. Wiering

2018

Abstract

Reinforcement learning agents with artificial neural networks have previously been shown to acquire human-level dexterity in discrete video game environments, where only the current state of the game and a reward are given at each time step. A harder problem is posed by continuous environments, where the states, observations, and actions are all continuous; such environments are the focus of this paper. The Continuous Actor-Critic Learning Automaton (CACLA) algorithm is applied to a 2D aerial combat simulation environment with continuous state and action spaces. Both the Actor and the Critic employ multilayer perceptrons. For our game environment it is shown that: 1) the exploration of CACLA's action space improves strongly when Gaussian noise is replaced by an Ornstein-Uhlenbeck process; 2) a novel Monte Carlo variant of CACLA, introduced here, turns out to be inferior to the original CACLA; 3) the latter result yields new insights that lead to a novel algorithm, a modified version of CACLA that relies on a third multilayer perceptron to estimate the absolute error of the Critic, which is then used to correct the learning rule of the Actor. This Corrected CACLA outperforms the original CACLA algorithm.
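
The abstract names two concrete mechanisms: Ornstein-Uhlenbeck (OU) exploration noise and the CACLA actor update. The sketch below is a minimal NumPy illustration of both, assuming linear function approximators in place of the paper's multilayer perceptrons; the hyperparameters (theta, sigma, gamma, the learning rates) and all function names are illustrative assumptions rather than the authors' settings, and the Corrected CACLA rule is only pointed to in a comment since the abstract does not specify it.

import numpy as np

rng = np.random.default_rng(0)

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise,
    x <- x + theta*(mu - x)*dt + sigma*sqrt(dt)*N(0, 1).
    theta, sigma, and dt are illustrative values, not the paper's settings."""
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.3, dt=1.0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = np.full(dim, mu)

    def sample(self):
        self.x = (self.x + self.theta * (self.mu - self.x) * self.dt
                  + self.sigma * np.sqrt(self.dt) * rng.standard_normal(self.x.shape))
        return self.x

# Linear stand-ins for the paper's multilayer perceptrons, kept deliberately simple.
state_dim, action_dim = 4, 2
W_actor = np.zeros((action_dim, state_dim))   # actor:  a    = W_actor @ s
w_critic = np.zeros(state_dim)                # critic: V(s) = w_critic @ s
alpha_actor, alpha_critic, gamma = 1e-3, 1e-2, 0.99
noise = OUNoise(action_dim)

def cacla_step(s, r, s_next, a_explored):
    """One CACLA update after executing a_explored = actor(s) + OU noise."""
    global W_actor, w_critic
    td_error = r + gamma * (w_critic @ s_next) - (w_critic @ s)
    # Critic: ordinary TD(0) update toward the bootstrapped return.
    w_critic = w_critic + alpha_critic * td_error * s
    # Actor: only when the explored action did better than expected
    # (positive TD error) is the actor's output pulled toward that action.
    if td_error > 0:
        a_pred = W_actor @ s
        W_actor = W_actor + alpha_actor * np.outer(a_explored - a_pred, s)
    # Corrected CACLA (this paper) additionally trains a third network to
    # estimate |td_error| and uses that estimate to correct the actor's
    # learning rule; the exact rule is given in the paper, not here.

# Illustrative usage; the environment step itself is omitted:
s = rng.standard_normal(state_dim)
a_explored = W_actor @ s + noise.sample()
# r, s_next = env.step(a_explored)   # hypothetical environment call
# cacla_step(s, r, s_next, a_explored)

The OU process matters here because successive noise samples are correlated, so exploration drifts coherently through the continuous action space rather than jittering around the actor's output the way independent Gaussian samples do.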

Paper Citation


in Harvard Style

Leuenberger, G. and Wiering, M. (2018). Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-275-2, pages 53-60. DOI: 10.5220/0006556500530060


in Bibtex Style

@conference{icaart18,
  author={Gabriel Leuenberger and Marco A. Wiering},
  title={Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games},
  booktitle={Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
  year={2018},
  pages={53-60},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0006556500530060},
  isbn={978-989-758-275-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Actor-Critic Reinforcement Learning with Neural Networks in Continuous Games
SN - 978-989-758-275-2
AU - Leuenberger G.
AU - Wiering M.
PY - 2018
SP - 53
EP - 60
DO - 10.5220/0006556500530060
ER -