A generative adversarial network is a deep learning framework that generates realistic synthetic data by pitting two neural networks against each other. One network, the generator, creates fake data from random noise. The other, the discriminator, evaluates whether the data it receives is real or generated. Through this back-and-forth process, both networks improve over time until the generator produces outputs realistic enough to fool the discriminator.
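The two-network setup can be sketched with toy numpy MLPs. This is a minimal illustration, not a practical architecture: the layer sizes and random weights are placeholders, and real GANs use deep convolutional or similar networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: noise vector, data sample, and hidden layer.
NOISE_DIM, DATA_DIM, HIDDEN = 8, 4, 16

# Generator: maps random noise to a fake data sample.
G_w1 = rng.normal(scale=0.1, size=(NOISE_DIM, HIDDEN))
G_w2 = rng.normal(scale=0.1, size=(HIDDEN, DATA_DIM))

def generator(z):
    h = np.tanh(z @ G_w1)        # hidden activation
    return np.tanh(h @ G_w2)     # fake sample, values in (-1, 1)

# Discriminator: maps a data sample to P(sample is real).
D_w1 = rng.normal(scale=0.1, size=(DATA_DIM, HIDDEN))
D_w2 = rng.normal(scale=0.1, size=(HIDDEN, 1))

def discriminator(x):
    h = np.tanh(x @ D_w1)
    logit = h @ D_w2
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability

z = rng.normal(size=(1, NOISE_DIM))  # random noise input
fake = generator(z)                  # generator creates a fake sample
p_real = discriminator(fake)         # discriminator scores it
print(fake.shape, float(p_real))
```

During training, the discriminator's score on fakes like this one is what drives both networks' updates.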
The training process is driven by two loss functions, one for each network. The generator's loss measures how convincingly it can fool the discriminator, while the discriminator's loss measures how accurately it can tell real from fake. Backpropagation and gradient descent are used to update both networks based on these losses.
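The two losses are commonly expressed as binary cross-entropy over the discriminator's probability outputs. The sketch below uses the standard non-saturating generator loss; the probability values are illustrative numbers, not outputs of a trained model.

```python
import math

def bce(p, label):
    """Binary cross-entropy for one predicted probability p and a 0/1 label."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

# Suppose the discriminator assigns these probabilities of "real"
# (made-up values for illustration):
p_on_real = 0.9  # discriminator's output on a real sample
p_on_fake = 0.2  # discriminator's output on a generated sample

# Discriminator loss: real samples should score 1, fakes should score 0.
d_loss = bce(p_on_real, 1) + bce(p_on_fake, 0)

# Generator loss (non-saturating form): the generator wants its fakes
# to be labeled real, so it is penalized when p_on_fake is low.
g_loss = bce(p_on_fake, 1)

print(round(d_loss, 3), round(g_loss, 3))
```

In practice each loss is backpropagated through its own network only: the discriminator's parameters are updated to lower `d_loss`, then the generator's to lower `g_loss`, alternating every batch.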
GANs are primarily used in image-related tasks, including image generation, image enhancement, style transfer, and converting images from one domain to another. Beyond visuals, they are applied in data augmentation to expand training data, anomaly detection, and scientific research, where generating synthetic data is more practical than collecting real-world samples.
The main drawbacks of GANs are training instability and mode collapse, where the generator stops producing diverse outputs and instead fixates on a narrow range of results. They also require significant computational resources and large datasets to train effectively.
GANs have increasingly been displaced by newer approaches: variational autoencoders (VAEs), which train more stably, and diffusion models (often built on U-Net or transformer backbones), which have largely taken over image generation.