Progressive Training in Recurrent Neural Networks for Chord Progression Modeling

Trung-Kien Vu, Teeradaj Racharak, Satoshi Tojo, Nguyen Thanh, Nguyen Minh

2020

Abstract

Recurrent neural networks (RNNs) can be trained to process sequences of tokens and have shown impressive results in several sequence prediction tasks. In general, when RNNs are trained, their goal is to maximize the likelihood of each token in the sequence, where each token is typically represented as a one-hot vector. That is, the model learns its sequence prediction from the true class labels alone. However, this creates a potential drawback: the model cannot learn from its mistakes. In this work, we propose a progressive learning strategy that mitigates this drawback by using domain knowledge. Our strategy gradually shifts the training targets from hard class labels to a similarity distribution over class labels. Our experiments on chord progression modeling show that this training paradigm yields significant improvements.
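At a high level, the strategy can be read as blending one-hot targets with a soft distribution derived from domain-knowledge similarities between chord classes, with the blend weight increasing as training proceeds. The Python sketch below is only an illustration under that reading; the function name, the linear schedule, the toy similarity matrix, and the row-normalization are our assumptions, not the authors' exact formulation.

import numpy as np

def progressive_targets(labels, similarity, alpha, num_classes):
    # Blend hard one-hot labels with a soft distribution taken from the
    # similarity row of each true class (hypothetical formulation).
    # alpha = 0.0 -> pure one-hot labels (start of training)
    # alpha = 1.0 -> pure similarity distribution (end of the schedule)
    one_hot = np.eye(num_classes)[labels]
    soft = similarity[labels]
    soft = soft / soft.sum(axis=-1, keepdims=True)  # row-normalize to a distribution
    return (1.0 - alpha) * one_hot + alpha * soft

# Hypothetical usage: increase alpha linearly over the epochs and train the
# RNN with a cross-entropy loss against the blended targets.
num_classes, num_epochs = 24, 50
similarity = np.eye(num_classes) + 0.1      # toy similarity: 1.1 on the diagonal, 0.1 elsewhere
labels = np.array([3, 7, 0])                # toy chord-class indices for one batch
for epoch in range(num_epochs):
    alpha = epoch / (num_epochs - 1)        # 0 -> 1 linearly across training
    targets = progressive_targets(labels, similarity, alpha, num_classes)
    # loss = -(targets * rnn_log_probs).sum(-1).mean()   # plug into the RNN training loop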

Paper Citation


in Harvard Style

Vu T., Racharak T., Tojo S., Thanh N. and Minh N. (2020). Progressive Training in Recurrent Neural Networks for Chord Progression Modeling. In Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-395-7, pages 89-98. DOI: 10.5220/0008951500890098


in Bibtex Style

@conference{icaart20,
author={Trung-Kien Vu and Teeradaj Racharak and Satoshi Tojo and Nguyen Thanh and Nguyen Minh},
title={Progressive Training in Recurrent Neural Networks for Chord Progression Modeling},
booktitle={Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2020},
pages={89-98},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008951500890098},
isbn={978-989-758-395-7},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Progressive Training in Recurrent Neural Networks for Chord Progression Modeling
SN - 978-989-758-395-7
AU - Vu T.
AU - Racharak T.
AU - Tojo S.
AU - Thanh N.
AU - Minh N.
PY - 2020
SP - 89
EP - 98
DO - 10.5220/0008951500890098