# Accelerating Matrix Factorization by Overparameterization

### Pu Chen, Hung-Hsuan Chen

#### 2020

#### Abstract

This paper studies overparameterization of the matrix factorization (MF) model. We confirm that overparameterization can significantly accelerate the optimization of MF with no change in the expressiveness of the learning model. Consequently, modern recommendation applications based on MF or its variants can benefit substantially from our discovery. Specifically, we derive theoretically that applying vanilla stochastic gradient descent (SGD) to the overparameterized MF model is equivalent to employing gradient descent with momentum and an adaptive learning rate on the standard MF model. We empirically compare the overparameterized MF model with the standard MF model under various optimizers, including vanilla SGD, AdaGrad, Adadelta, RMSprop, and Adam, on several public datasets. The experimental results agree with our analysis: overparameterization converges faster. The overparameterization technique can be applied to various learning-based recommendation models, including MF variants such as SVD++, nonnegative matrix factorization (NMF), and the factorization machine (FM), as well as deep learning-based models such as NeuralCF, Wide&Deep, and DeepFM. We therefore suggest applying the overparameterization technique to accelerate the training of learning-based recommendation models whenever possible, especially when the training dataset is large.
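To make the idea concrete, here is a minimal NumPy sketch (not the authors' implementation) of the technique the abstract describes: each factor matrix of the standard model R ≈ U Vᵀ is replaced by a product of two matrices, U ≈ U1 U2 and V ≈ V1 V2, whose products have the same shapes as U and V, so the model's expressiveness is unchanged while vanilla SGD on the deeper parameterization behaves like momentum with an adaptive learning rate. The toy matrix, dimensions, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully observed rating matrix (illustrative data, not from the paper).
m, n, k, d = 20, 15, 5, 8      # with d >= k, U1 @ U2 spans the same rank-k family as U
R = rng.random((m, n))
lr = 0.05

# Standard MF:          R ≈ U @ V.T              with U: (m, k), V: (n, k)
# Overparameterized MF: R ≈ (U1 @ U2) @ (V1 @ V2).T
U1 = rng.normal(0.0, 0.2, (m, d))
U2 = rng.normal(0.0, 0.2, (d, k))
V1 = rng.normal(0.0, 0.2, (n, d))
V2 = rng.normal(0.0, 0.2, (d, k))

losses = []
for epoch in range(30):
    for i in range(m):
        for j in range(n):
            # Vanilla SGD on the squared error of one observed entry (i, j).
            u = U1[i] @ U2                  # effective row factor, shape (k,)
            v = V1[j] @ V2                  # effective column factor, shape (k,)
            err = R[i, j] - u @ v
            # Chain rule through the matrix products; compute all gradients
            # before applying any update.
            g_u1, g_u2 = err * (U2 @ v), err * np.outer(U1[i], v)
            g_v1, g_v2 = err * (V2 @ u), err * np.outer(V1[j], u)
            U1[i] += lr * g_u1
            U2 += lr * g_u2
            V1[j] += lr * g_v1
            V2 += lr * g_v2
    pred = (U1 @ U2) @ (V1 @ V2).T
    losses.append(float(np.mean((R - pred) ** 2)))
```

Tracking `losses` per epoch lets one compare the convergence curve against a standard k-dimensional MF trained the same way, which is the comparison the paper makes across optimizers.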

#### Paper Citation

#### in Harvard Style

Chen P. and Chen H. (2020). **Accelerating Matrix Factorization by Overparameterization**. In *Proceedings of the 1st International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA*, ISBN 978-989-758-441-1, pages 89-97. DOI: 10.5220/0009885600890097

#### in Bibtex Style

@conference{delta20,
  author={Pu Chen and Hung-Hsuan Chen},
  title={Accelerating Matrix Factorization by Overparameterization},
  booktitle={Proceedings of the 1st International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA},
  year={2020},
  pages={89-97},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0009885600890097},
  isbn={978-989-758-441-1},
}

#### in EndNote Style

TY - CONF
JO - Proceedings of the 1st International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA
TI - Accelerating Matrix Factorization by Overparameterization
SN - 978-989-758-441-1
AU - Chen P.
AU - Chen H.
PY - 2020
SP - 89
EP - 97
DO - 10.5220/0009885600890097
ER -