Bootstrapping a DQN Replay Memory with Synthetic Experiences

Wenzel von Pilchau, Anthony Stein, Jörg Hähner

Abstract

An important component of many Deep Reinforcement Learning algorithms is the Experience Replay, which serves as a storage mechanism or memory of experienced transitions. These experiences are used for training and help the agent to converge stably toward an optimal trajectory through the problem space. The classic Experience Replay, however, makes use only of the experiences the agent has actually encountered, although the stored transitions bear great potential in the form of extractable knowledge about the problem. This knowledge comprises state transitions and received rewards that can be utilized to approximate a model of the environment. We present an algorithm that creates synthetic experiences in a nondeterministic discrete environment to assist the learner with augmented training data. The Interpolated Experience Replay is evaluated on the FrozenLake environment, and we show that it can achieve a 17% higher mean reward compared to the classic version.
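To make the idea of augmenting a replay memory with synthetic experiences more concrete, the sketch below shows a minimal replay buffer that, in addition to storing real transitions, derives synthetic ones by averaging the rewards observed so far for a given state-action pair. The class name `InterpolatedReplayBuffer`, the `synthetic_ratio` parameter, and the specific averaging rule are illustrative assumptions and not the exact algorithm described in the paper.

```python
import random
from collections import defaultdict, deque


class InterpolatedReplayBuffer:
    """Sketch of a replay memory for a discrete, nondeterministic
    environment (e.g. FrozenLake) that augments real transitions with
    synthetic ones interpolated from previously stored experiences.
    The interpolation rule (average reward per state-action pair, one
    synthetic transition per observed follow-up state) is an assumption
    for illustration only."""

    def __init__(self, capacity=10000):
        self.real = deque(maxlen=capacity)       # actually experienced transitions
        self.synthetic = deque(maxlen=capacity)  # interpolated transitions
        # per (state, action): list of (reward, next_state, done) observations
        self.stats = defaultdict(list)

    def store(self, state, action, reward, next_state, done):
        """Store a real transition and refresh the synthetic experiences
        derived from all observations of this state-action pair."""
        self.real.append((state, action, reward, next_state, done))
        self.stats[(state, action)].append((reward, next_state, done))
        self._interpolate(state, action)

    def _interpolate(self, state, action):
        # Average the rewards seen for this state-action pair and emit one
        # synthetic transition per distinct follow-up state observed so far.
        samples = self.stats[(state, action)]
        avg_reward = sum(r for r, _, _ in samples) / len(samples)
        for next_state, done in {(ns, d) for _, ns, d in samples}:
            self.synthetic.append((state, action, avg_reward, next_state, done))

    def sample(self, batch_size, synthetic_ratio=0.25):
        """Draw a training batch that mixes real and synthetic experiences."""
        n_syn = min(int(batch_size * synthetic_ratio), len(self.synthetic))
        n_real = min(batch_size - n_syn, len(self.real))
        return random.sample(list(self.real), n_real) + \
               random.sample(list(self.synthetic), n_syn)
```

In a DQN training loop, such a buffer would replace the standard Experience Replay: transitions are stored with `store(...)` after each environment step, and minibatches drawn with `sample(...)` contain both real and interpolated experiences.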
