Improving training of generative adversarial networks

Authors: Ali, Y. and Wani, M.A.

Journal: Proceedings of the 2021 8th International Conference on Computing for Sustainable Global Development, INDIACom 2021

Pages: 81-86

ISBN: 9789380544434

DOI: 10.1109/INDIACom51348.2021.00016

Abstract:

Optimization algorithms and objective functions play an important role in the training of deep learning networks. This paper explores the impact of different optimization algorithms and objective functions on the training of Generative Adversarial Networks. The paper first summarizes the Generative Adversarial Network variants available in the literature and then evaluates them under different objective functions and optimization algorithms. The networks are analyzed empirically in terms of generator loss, discriminator loss, and accuracy, with training carried out on the MNIST dataset. The results indicate that the Adam optimization algorithm combined with the conditional objective function is a good choice for improved training of Generative Adversarial Networks.
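
The abstract does not include code, so the following is only an illustrative sketch of the recommended combination: a conditional GAN trained on MNIST with the Adam optimizer. It assumes PyTorch; the network sizes, learning rate, and epoch count are assumptions, not values taken from the paper.

```python
# Minimal conditional-GAN sketch (not the authors' code): MNIST digits are
# generated conditioned on class labels, and both networks use Adam, mirroring
# the optimizer/objective pairing the abstract recommends. Sizes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

LATENT, CLASSES, IMG = 100, 10, 28 * 28
device = "cuda" if torch.cuda.is_available() else "cpu"

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(CLASSES, CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT + CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, IMG), nn.Tanh())
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(CLASSES, CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG + CLASSES, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid())
    def forward(self, x, y):
        return self.net(torch.cat([x, self.embed(y)], dim=1))

G, D = Generator().to(device), Discriminator().to(device)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

tfm = transforms.Compose([transforms.ToTensor(),
                          transforms.Normalize((0.5,), (0.5,))])
loader = DataLoader(
    datasets.MNIST(".", train=True, download=True, transform=tfm),
    batch_size=128, shuffle=True)

for epoch in range(5):  # epoch count is illustrative
    for imgs, labels in loader:
        real = imgs.view(imgs.size(0), -1).to(device)
        labels = labels.to(device)
        ones = torch.ones(real.size(0), 1, device=device)
        zeros = torch.zeros(real.size(0), 1, device=device)

        # Discriminator step: real images labelled 1, generated images 0.
        z = torch.randn(real.size(0), LATENT, device=device)
        fake = G(z, labels)
        d_loss = bce(D(real, labels), ones) + bce(D(fake.detach(), labels), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make the discriminator output 1 on fakes.
        g_loss = bce(D(fake, labels), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"epoch {epoch}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

The per-epoch generator and discriminator losses printed above correspond to the kind of metrics the paper reports, though the exact architectures and evaluation protocol used by the authors may differ.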

Source: Scopus