
Paper: Approximate Bayes Optimal Policy Search using Neural Networks

Authors: Michael Castronovo, Vincent François-Lavet, Raphaël Fonteneau, Damien Ernst and Adrien Couëtoux

Affiliation: Montefiore Institute, Université de Liège, Belgium

ISBN: 978-989-758-220-2

Keyword(s): Bayesian Reinforcement Learning, Artificial Neural Networks, Offline Policy Search.

Related Ontology Subjects/Areas/Topics: Artificial Intelligence ; Artificial Intelligence and Decision Support Systems ; Bayesian Networks ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Enterprise Information Systems ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Knowledge Discovery and Information Retrieval ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Symbolic Systems ; Theory and Methods

Abstract: Bayesian Reinforcement Learning (BRL) agents aim to maximise the expected rewards collected when interacting with an unknown Markov Decision Process (MDP) while exploiting prior knowledge. State-of-the-art BRL agents rely on frequent updates of the belief over the MDP as new observations of the environment are made. This offers theoretical guarantees of convergence to an optimum, but is computationally intractable, even on small-scale problems. In this paper, we present a method that circumvents this issue by training a parametric policy able to recommend an action directly from raw observations. Artificial Neural Networks (ANNs) are used to represent this policy, and are trained on trajectories sampled from the prior. The trained model is then used online, and is able to act on the real MDP at very low computational cost. Our new algorithm shows strong empirical performance on a wide range of test problems and is robust to inaccuracies of the prior distribution.
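The offline procedure described in the abstract (sample MDPs from the prior, solve them offline, and train a network to map history features to good actions) can be sketched in Python with PyTorch as below. This is a minimal illustration, not the authors' exact algorithm: the Dirichlet prior, the transition-count history features, and the use of value iteration on each sampled MDP to produce action targets are all assumptions made for the example.

import numpy as np
import torch
from torch import nn

N_STATES, N_ACTIONS, HORIZON = 5, 3, 20

def sample_mdp_from_prior(rng):
    # Hypothetical prior: Dirichlet transition kernels and uniform rewards.
    P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
    R = rng.uniform(0.0, 1.0, size=(N_STATES, N_ACTIONS))
    return P, R

def value_iteration(P, R, gamma=0.95, iters=200):
    # Offline solver: each sampled MDP is fully known, so its optimal
    # Q-function supplies the training targets for the policy network.
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(iters):
        Q = R + gamma * np.einsum("san,n->sa", P, Q.max(axis=1))
    return Q

def make_dataset(n_mdps, rng):
    # Roll out a random behaviour policy on each sampled MDP. The input is
    # a transition-count summary of the history plus the current state; the
    # target is the optimal action in that (known) sampled MDP.
    X, y = [], []
    for _ in range(n_mdps):
        P, R = sample_mdp_from_prior(rng)
        Q = value_iteration(P, R)
        counts, s = np.zeros((N_STATES, N_ACTIONS, N_STATES)), 0
        for _ in range(HORIZON):
            X.append(np.concatenate([counts.ravel(), np.eye(N_STATES)[s]]))
            y.append(int(Q[s].argmax()))
            a = rng.integers(N_ACTIONS)
            s2 = rng.choice(N_STATES, p=P[s, a])
            counts[s, a, s2] += 1
            s = s2
    return (torch.tensor(np.array(X), dtype=torch.float32),
            torch.tensor(y, dtype=torch.long))

rng = np.random.default_rng(0)
X, y = make_dataset(200, rng)
net = nn.Sequential(
    nn.Linear(N_STATES * N_ACTIONS * N_STATES + N_STATES, 64),
    nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.cross_entropy(net(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Online use: recommending an action is a single forward pass over the
# current history features; no belief update or planning is required.

All the cost of this scheme is paid offline: once trained, the network replaces the per-step belief update and planning that make exact Bayes-optimal agents intractable online.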


Paper citation in several formats:
Castronovo M., François-Lavet V., Fonteneau R., Ernst D. and Couëtoux A. (2017). Approximate Bayes Optimal Policy Search using Neural Networks. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-220-2, pages 142-153. DOI: 10.5220/0006191701420153

@conference{icaart17,
author={Michael Castronovo and Vincent François-Lavet and Raphaël Fonteneau and Damien Ernst and Adrien Couëtoux},
title={Approximate Bayes Optimal Policy Search using Neural Networks},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2017},
pages={142-153},
doi={10.5220/0006191701420153},
isbn={978-989-758-220-2},
}

TY - CONF

JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Approximate Bayes Optimal Policy Search using Neural Networks
SN - 978-989-758-220-2
AU - Castronovo M.
AU - François-Lavet V.
AU - Fonteneau R.
AU - Ernst D.
AU - Couëtoux A.
PY - 2017
SP - 142
EP - 153
DO - 10.5220/0006191701420153
ER -
