Authors: Bihi Sabiri ¹; Bouchra El Asri ¹ and Maryem Rhanoui ²
Affiliations:
¹ IMS Team, ADMIR Laboratory, Rabat IT Center, ENSIAS, Mohammed V University in Rabat, Morocco
² Meridian Team, LYRICA Laboratory, School of Information Sciences, Rabat, Morocco
Keyword(s):
Recommendation Systems, Machine Learning, Deep Learning, Neural Network, Generative Adversarial Networks.
Abstract:
Generative adversarial networks (GANs) have become a full-fledged branch of the most important neural network models for unsupervised machine learning. A multitude of loss functions have been developed to train GAN discriminators, and they all share a common structure: a sum of real and fake losses that depend only on the real and generated data, respectively. A challenge with an equally weighted sum of two losses is that training can benefit one loss while harming the other, which we show causes instability and mode collapse. In this article, we introduce a new family of discriminator loss functions that adopts a weighted sum of real and fake parts. Using the gradients of the real and fake parts of the loss, we can adaptively choose the weights so that the discriminator is trained in a direction that benefits the stability of the GAN model. Our method can potentially be applied to any discriminator model whose loss is a sum of real and fake parts. It consists in adjusting the hyper-parameters appropriately in order to improve the training of the two adversarial models. Experiments validated the effectiveness of our loss functions on image generation tasks, improving the baseline results by a significant margin on the Celebdata dataset.
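The adaptive weighting idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the real/fake loss parts use the common non-saturating formulation, and the specific weighting rule (down-weighting whichever part currently has the larger gradient norm, so neither part dominates the update) is a hypothetical choice made for this sketch.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy discriminator: a single linear layer stands in for a real model.
disc = torch.nn.Linear(8, 1)

def grad_norm(loss, params):
    """L2 norm of the gradient of `loss` w.r.t. `params`."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.sqrt(sum(g.pow(2).sum() for g in grads))

def adaptive_disc_loss(real_batch, fake_batch):
    # Real and fake parts of the discriminator loss (non-saturating form).
    loss_real = F.softplus(-disc(real_batch)).mean()
    loss_fake = F.softplus(disc(fake_batch)).mean()
    params = list(disc.parameters())
    # Hypothetical adaptive rule: weight each part by the other part's
    # gradient norm, so the part with the dominant gradient is damped.
    n_real = grad_norm(loss_real, params)
    n_fake = grad_norm(loss_fake, params)
    total = n_real + n_fake + 1e-8
    w_real, w_fake = n_fake / total, n_real / total
    # Rescale by 2 so equal gradient norms recover the usual unweighted sum.
    return 2.0 * (w_real * loss_real + w_fake * loss_fake)

real = torch.randn(16, 8)   # batch of real samples (features)
fake = torch.randn(16, 8)   # batch of generator outputs
loss = adaptive_disc_loss(real, fake)
loss.backward()             # gradients flow through both weighted parts
```

The weights are computed from gradient norms that are detached from the graph, so they act as per-step constants; only the real and fake loss terms themselves are differentiated during the backward pass.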