Generative Adversarial Networks (GANs)
Official Definition
A type of generative AI model that generates realistic data by pitting two neural networks against each other. A generator creates synthetic data, and a discriminator evaluates whether data is real or synthetic.
Source: AIEOG AI Lexicon (Feb 2026), adapted from BIS FSI Insights No. 63
What GANs mean in plain language
A GAN is a pair of neural networks that learn by competing with each other. One network (the generator) creates synthetic content. The other (the discriminator) tries to tell the difference between the generator’s output and real data. As training progresses, the generator gets better at creating realistic content until its output is virtually indistinguishable from real data.
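The adversarial loop described above can be sketched in a few dozen lines. The toy below (illustrative only, not a production architecture) trains a one-parameter-pair linear generator against a logistic discriminator on 1-D Gaussian data; all names, learning rates, and distributions are assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Minimal 1-D GAN sketch. Real data: samples from N(4, 1).
# Generator: g(z) = a*z + b. Discriminator: D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.02

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = rng.normal(4.0, 1.0, size=32)
    x_fake = a * rng.normal(size=32) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    z = rng.normal(size=32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w          # gradient of -log D(x_fake) w.r.t. x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = a * rng.normal(size=1000) + b
print(f"generated mean ~= {samples.mean():.2f} (real data mean is 4.0)")
```

After training, the generator's output distribution has drifted toward the real data's mean, which is the "indistinguishable from real data" dynamic in miniature; real GANs use deep networks and backpropagation rather than hand-derived gradients.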
GANs were one of the first AI architectures capable of generating highly realistic synthetic images, audio, and video. While newer approaches like diffusion models have surpassed GANs for some applications, GANs remain important in financial services for synthetic data generation, data augmentation, and understanding synthetic media threats.
Why it matters in financial services
GANs are relevant both as tools and as threats:
As tools: GANs can generate synthetic data for model training when real data is limited or privacy-restricted, create realistic test scenarios for model validation, and augment datasets to address class imbalance in fraud detection.
As threats: GANs have been used to create deepfake images and videos for identity fraud, generate synthetic identity documents, and produce realistic but fraudulent financial records.
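As a toy illustration of the augmentation use case, the sketch below balances an imbalanced fraud dataset with synthetic minority-class rows. The `trained_generator` function is a hypothetical stand-in for a real trained GAN generator, and all shapes and names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy dataset: many legitimate transactions, few fraud cases.
legit = rng.normal(0.0, 1.0, size=(1000, 4))
fraud = rng.normal(3.0, 1.0, size=(25, 4))

def trained_generator(z):
    """Hypothetical stand-in for a trained GAN generator: maps latent
    noise vectors to fraud-like feature vectors."""
    return 3.0 + z  # placeholder transform, not a learned model

# Generate enough synthetic fraud rows to balance the classes.
n_needed = len(legit) - len(fraud)
synthetic_fraud = trained_generator(rng.normal(size=(n_needed, 4)))

X = np.vstack([legit, fraud, synthetic_fraud])
y = np.concatenate([np.zeros(len(legit)), np.ones(len(fraud) + n_needed)])
print(f"class balance: {int(y.sum())} fraud vs {int(len(y) - y.sum())} legitimate")
```

The design point is that the fraud detector downstream now sees equal class counts without duplicating the 25 real fraud records verbatim, which is the advantage GAN-based augmentation has over naive oversampling.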
Compliance teams should understand GANs because they underpin both synthetic data capabilities that institutions may want to adopt and synthetic media threats that institutions must defend against.
Key considerations for compliance teams
- Govern synthetic data generation. If your institution uses GANs for synthetic data, establish policies on data quality, privacy validation, and acceptable use.
- Assess deepfake risk. GAN-generated content can be used for identity fraud. Evaluate your institution’s vulnerability.
- Include in AI governance. GAN deployments should be documented in the AI use case inventory.
- Validate synthetic data quality. Synthetic data used for model training must be validated to ensure it accurately represents real-world patterns without introducing bias.
- Monitor the threat landscape. Track developments in GAN capabilities to stay ahead of emerging fraud vectors.
- Test fraud controls. Use GAN-generated test content to assess whether your identity verification and fraud detection systems can detect synthetic media.
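The "validate synthetic data quality" point above can be made concrete with simple distributional checks. The sketch below compares per-feature means, standard deviations, and correlation structure between real and synthetic samples; the tolerances and function names are illustrative assumptions, and real validation programs typically add two-sample tests and downstream-model performance checks.

```python
import numpy as np

def validate_synthetic(real, synthetic, mean_tol=0.25, std_tol=0.25, corr_tol=0.2):
    """Flag synthetic data whose marginal statistics or correlation
    structure drift too far from the real data. Tolerances are
    illustrative placeholders, not regulatory thresholds."""
    gaps = {
        "means": np.max(np.abs(real.mean(axis=0) - synthetic.mean(axis=0))),
        "stds": np.max(np.abs(real.std(axis=0) - synthetic.std(axis=0))),
        "correlations": np.max(np.abs(np.corrcoef(real, rowvar=False)
                                      - np.corrcoef(synthetic, rowvar=False))),
    }
    tols = {"means": mean_tol, "stds": std_tol, "correlations": corr_tol}
    return {k: (gap, gap <= tols[k]) for k, gap in gaps.items()}

rng = np.random.default_rng(1)
real = rng.normal(0, 1, size=(5000, 3))
good = rng.normal(0, 1, size=(5000, 3))      # faithful synthetic sample
biased = rng.normal(1.0, 1, size=(5000, 3))  # mean-shifted synthetic sample

print(validate_synthetic(real, good))
print(validate_synthetic(real, biased))
```

A faithful sample passes all three checks while the mean-shifted one fails the means check, which is the kind of evidence a model validation team would record before approving synthetic training data.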
