Optimization Algorithms in Generative AI for Enhanced GAN Stability and Performance

Authors

  • Reem Al Maawali, Sohar University
  • Amna AL-Shidi

DOI:

https://doi.org/10.52098/acj.20244225

Keywords:

Optimization Algorithms, Generative AI, GAN Stability, Neural Networks, Model Training Efficiency

Abstract

Generative Adversarial Networks (GANs) have been a game-changer in generative modelling, enabling the generation of high-quality synthetic data across various domains. Training GANs, however, has remained problematic owing to inherent instability and mode collapse. Recent advances in optimization algorithms have greatly improved the stability and performance of GANs by addressing these challenges. This paper reviews the optimization techniques proposed in the context of Generative AI, focusing on their impact on GAN training dynamics, convergence rates, and output quality. Several of these techniques, such as the Wasserstein distance, progressive growing, and attention mechanisms, have already shown their potential for alleviating training instability and mode collapse. Architectural enhancements such as WGAN-GP, RaGAN, and ProGAN introduce gradient penalties, relativistic losses, and progressive training to achieve more stable training. Some methods, such as ProGAN and TTUR, are complex in design and require longer training, while others, such as DCGAN and LSGAN, converge faster but risk losing stability. Moreover, approaches based on InfoGAN and mode regularization yield more diverse samples, while one-sided label smoothing and adaptive learning rates contribute to better generalization and training dynamics. These findings show that the relative strengths of the different optimization algorithms vary sharply, and that the choice among them is highly sensitive to the application and the specific GAN architecture. Successive contributions from training methodologies, regularization techniques, and adaptive strategies have collectively driven GAN research toward greater robustness, diversity, and output quality. Future research should address computational efficiency, scalability, and the ethical considerations of GAN applications in order to refine their capabilities for real-world deployment.
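
To make one of the techniques named above concrete, the sketch below illustrates the gradient penalty used by WGAN-GP (Gulrajani et al., 2017) in PyTorch. This is an illustration written for this page, not code from the reviewed paper; the `critic` argument and the weight `lambda_gp` are assumptions for the sketch.

```python
# Illustrative sketch of the WGAN-GP gradient penalty (Gulrajani et al., 2017).
# Not taken from the reviewed paper; `critic` is any network mapping a batch
# of samples to one unbounded scalar score per sample.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviations of the critic's gradient norm from 1 on random
    interpolations between real and fake samples (soft Lipschitz constraint)."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over remaining dims.
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # the penalty itself must stay differentiable
    )[0]
    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

# Critic loss = Wasserstein estimate + penalty:
#   d_loss = critic(fake).mean() - critic(real).mean()
#            + gradient_penalty(critic, real, fake)
```

Compared with the original WGAN's weight clipping, the penalty enforces the Lipschitz constraint without crippling critic capacity, which is one reason WGAN-GP is credited with more stable training.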
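
One-sided label smoothing and TTUR-style asymmetric learning rates, also mentioned above, are simpler to illustrate. The following is a hedged sketch; the toy `G`/`D` networks and the specific learning rates are placeholders (the larger discriminator rate follows common TTUR practice, not a prescription from the paper).

```python
# Illustrative sketch of one-sided label smoothing (Salimans et al., 2016)
# and TTUR-style asymmetric learning rates (Heusel et al., 2017).
# The networks and hyperparameters are placeholders, not the paper's setup.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # toy generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # toy discriminator

bce = nn.BCEWithLogitsLoss()

# TTUR: give the discriminator a larger learning rate than the generator,
# letting it keep pace without extra inner update steps.
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))

def d_loss(real_logits, fake_logits, smooth=0.9):
    """One-sided smoothing: real targets become 0.9 while fake targets stay 0.0,
    discouraging the discriminator from growing overconfident on real data."""
    real_targets = torch.full_like(real_logits, smooth)
    fake_targets = torch.zeros_like(fake_logits)
    return bce(real_logits, real_targets) + bce(fake_logits, fake_targets)
```

Smoothing only the real targets matters: smoothing fake targets as well would reward the generator for producing samples the discriminator already rejects.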

Published

2025-01-31

Issue

Vol. 4 No. 2

Section

Articles

How to Cite

Al Maawali, R., & AL-Shidi, A. (2025). Optimization Algorithms in Generative AI for Enhanced GAN Stability and Performance. Applied Computing Journal, 4(2), 359-371. https://doi.org/10.52098/acj.20244225