
Beneficial Effect of Combined Replay for Continual Learning

Authors: M. Solinas 1 ; S. Rousset 2 ; R. Cohendet 1 ; Y. Bourrier 2 ; M. Mainsant 1 ; A. Molnos 1 ; M. Reyboz 1 and M. Mermillod 2

Affiliations: 1 Univ. Grenoble Alpes, CEA, LIST, F-38000 Grenoble, France ; 2 Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France

Keyword(s): Incremental Learning, Lifelong Learning, Continual Learning, Sequential Learning, Pseudo-rehearsal, Rehearsal.

Abstract: While deep learning has yielded remarkable results in a wide range of applications, artificial neural networks suffer from catastrophic forgetting of old knowledge as new knowledge is learned. Rehearsal methods overcome catastrophic forgetting by replaying a subset of previously learned data stored in dedicated memory buffers. Alternatively, pseudo-rehearsal methods generate pseudo-samples to emulate the previously learned data, thus alleviating the need for dedicated buffers. Unfortunately, up to now, these methods have shown limited accuracy. In this work, we combine these two approaches and employ the data stored in tiny memory buffers as seeds to enhance the pseudo-sample generation process. We then show that pseudo-rehearsal can outperform rehearsal methods for small buffer sizes, owing to an improved retrieval process for previously learned information. Our combined replay approach consists of a hybrid architecture that generates pseudo-samples through a reinjection sampling procedure (i.e. iterative sampling). The generated pseudo-samples are then interlaced with the new data to acquire new knowledge without forgetting the previous one. We evaluate our method extensively on the MNIST, CIFAR-10 and CIFAR-100 image classification datasets, and present state-of-the-art performance using tiny memory buffers.
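The mechanism described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the "autoencoder" is a toy stand-in (a fixed random linear map), and the function names (`reinjection_sampling`, `combined_replay_batch`) and step count are assumptions made for illustration. It only shows the data flow: buffer samples seed an iterative reinjection loop, and the resulting pseudo-samples are interlaced with the new task's data.

```python
# Illustrative sketch of combined replay (not the authors' code).
# Assumptions: a toy "autoencoder" stands in for a trained network,
# and the number of reinjection steps is arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def make_autoencoder(dim, seed=0):
    """Toy stand-in for a trained autoencoder: a fixed nonlinear map."""
    r = np.random.default_rng(seed)
    w = r.standard_normal((dim, dim)) / np.sqrt(dim)
    return lambda x: np.tanh(x @ w)

def reinjection_sampling(model, seeds, n_steps=5):
    """Iterative sampling: reinject the model's output as its next input."""
    x = seeds
    for _ in range(n_steps):
        x = model(x)
    return x

def combined_replay_batch(buffer, new_batch, model):
    """Interlace pseudo-samples (generated from buffer seeds) with new data."""
    pseudo = reinjection_sampling(model, buffer)
    return np.concatenate([new_batch, pseudo], axis=0)

dim = 8
model = make_autoencoder(dim)
buffer = rng.standard_normal((4, dim))      # tiny memory buffer (seeds)
new_batch = rng.standard_normal((16, dim))  # incoming task data
batch = combined_replay_batch(buffer, new_batch, model)
print(batch.shape)  # (20, 8)
```

Training would then proceed on `batch`, so gradients from the pseudo-samples counteract forgetting while the new samples carry the new task.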

CC BY-NC-ND 4.0


Paper citation in several formats:
Solinas, M.; Rousset, S.; Cohendet, R.; Bourrier, Y.; Mainsant, M.; Molnos, A.; Reyboz, M. and Mermillod, M. (2021). Beneficial Effect of Combined Replay for Continual Learning. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-484-8; ISSN 2184-433X, SciTePress, pages 205-217. DOI: 10.5220/0010251202050217

@conference{icaart21,
author={M. Solinas and S. Rousset and R. Cohendet and Y. Bourrier and M. Mainsant and A. Molnos and M. Reyboz and M. Mermillod},
title={Beneficial Effect of Combined Replay for Continual Learning},
booktitle={Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2021},
pages={205-217},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010251202050217},
isbn={978-989-758-484-8},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Beneficial Effect of Combined Replay for Continual Learning
SN - 978-989-758-484-8
IS - 2184-433X
AU - Solinas, M.
AU - Rousset, S.
AU - Cohendet, R.
AU - Bourrier, Y.
AU - Mainsant, M.
AU - Molnos, A.
AU - Reyboz, M.
AU - Mermillod, M.
PY - 2021
SP - 205
EP - 217
DO - 10.5220/0010251202050217
PB - SciTePress