Authors: Phong Nguyen ¹; Takayuki Akiyama ¹ and Hiroki Ohashi ²

Affiliations: ¹ Hitachi Ltd, Japan; ² Hitachi Europe GmbH, Germany

Keyword(s): Reinforcement Learning, Experience Replay, Similarity, Distance, Experience Filtering.

Related Ontology Subjects/Areas/Topics: Agents ; AI and Creativity ; Artificial Intelligence ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Industrial Applications of AI ; Knowledge Discovery and Information Retrieval ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Robot and Multi-Robot Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Symbolic Systems ; Theory and Methods

Abstract: We propose a stochastic method of storing a new experience into replay memory to increase the performance of the Deep Q-learning (DQL) algorithm, especially when the memory is small. The standard DQL algorithm combined with Prioritized Experience Replay attempts to use the experiences in the replay memory efficiently; however, it does not guarantee the diversity of the experiences stored there. Our method calculates the similarity of a new experience to the experiences already in memory, based on a distance function, and stochastically decides whether to store the new experience. This improves experience diversity in the replay memory and makes better use of rare experiences during training. In an experiment training a mobile robot, our proposed method improved the performance of the standard DQL algorithm with a memory buffer of fewer than 10,000 stored experiences.
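To illustrate the idea described in the abstract, the sketch below shows a replay buffer that filters incoming experiences by their distance to the nearest stored experience. The paper's exact distance function and acceptance rule are not given on this page, so this is a minimal sketch under assumptions: the class name `FilteredReplayBuffer`, the Euclidean distance over state vectors, the exponential acceptance probability, and the `accept_scale` parameter are all illustrative choices, not the authors' published method.

```python
import random
import numpy as np

class FilteredReplayBuffer:
    """Replay buffer that stochastically rejects experiences too similar
    to ones already stored, to preserve diversity in a small memory."""

    def __init__(self, capacity, accept_scale=1.0):
        self.capacity = capacity
        self.accept_scale = accept_scale  # hypothetical scale for the acceptance probability
        self.buffer = []

    def _min_distance(self, state):
        # Euclidean distance to the nearest stored state
        # (an assumed distance function; the paper leaves this open here).
        return min(np.linalg.norm(state - s) for (s, a, r, s2, d) in self.buffer)

    def add(self, state, action, reward, next_state, done):
        experience = (state, action, reward, next_state, done)
        if len(self.buffer) < self.capacity:
            self.buffer.append(experience)
            return
        # Acceptance probability grows with distance to the nearest neighbour:
        # novel experiences are almost always kept, near-duplicates usually dropped.
        dist = self._min_distance(state)
        p_accept = 1.0 - np.exp(-dist / self.accept_scale)
        if random.random() < p_accept:
            # Overwrite a randomly chosen old experience to stay within capacity.
            self.buffer[random.randrange(self.capacity)] = experience

    def sample(self, batch_size):
        # Uniform sampling; prioritized sampling could be layered on top.
        return random.sample(self.buffer, batch_size)
```

With this filtering rule, a near-duplicate experience (distance close to zero) is almost always rejected once the buffer is full, which is one plausible way to realize the diversity-preserving behavior the abstract describes.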

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Nguyen, P.; Akiyama, T. and Ohashi, H. (2018). Experience Filtering for Robot Navigation using Deep Reinforcement Learning. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART; ISBN 978-989-758-275-2; ISSN 2184-433X, SciTePress, pages 243-249. DOI: 10.5220/0006671802430249

@conference{icaart18,
author={Nguyen, Phong and Akiyama, Takayuki and Ohashi, Hiroki},
title={Experience Filtering for Robot Navigation using Deep Reinforcement Learning},
booktitle={Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2018},
pages={243-249},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006671802430249},
isbn={978-989-758-275-2},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 10th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Experience Filtering for Robot Navigation using Deep Reinforcement Learning
SN - 978-989-758-275-2
IS - 2184-433X
AU - Nguyen, P.
AU - Akiyama, T.
AU - Ohashi, H.
PY - 2018
SP - 243
EP - 249
DO - 10.5220/0006671802430249
PB - SciTePress
ER -