Diffusion models

Official Definition

A type of generative AI model that produces output to match a prompt by iteratively refining noise. These types of models require substantial computational resources and processing time.

Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-4

What “diffusion models” means in plain language

Diffusion models are a category of generative AI that creates content by starting with random noise and gradually refining it into a coherent output. The process works by learning to reverse a “diffusion” process: the model is trained by progressively adding noise to real data until it becomes pure randomness, then learning to reverse those steps to reconstruct the original data from noise.

When generating new content, the model starts with random noise and applies the learned reversal process step by step, gradually transforming noise into a realistic image, audio clip, or other output that matches the user’s prompt.
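The forward "noising" process described above can be sketched with a toy example. This is a minimal illustration, not any production implementation: the schedule values, step count, and variable names are illustrative assumptions. It shows how repeatedly adding noise shrinks the original signal toward pure randomness, which is exactly what a trained model learns to reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                               # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (a common choice)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)         # cumulative signal retained after t steps

def forward_noise(x0, t):
    """Noise clean data x0 directly to step t in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = np.ones(4)                   # stand-in for "real data": a tiny vector
x_mid = forward_noise(x0, 100)    # early step: mostly signal, a little noise
x_end = forward_noise(x0, T - 1)  # final step: almost pure noise

# Early on, most of the original signal survives; by the last step,
# almost none does. A real diffusion model trains a neural network to
# undo these steps one at a time, starting from random noise.
print(float(np.sqrt(alpha_bar[100])))    # close to 1: signal mostly intact
print(float(np.sqrt(alpha_bar[T - 1])))  # close to 0: signal essentially gone
```

Generation then runs this process in reverse: starting from random noise, the trained network is applied step by step to recover a coherent sample.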

Diffusion models are the technology behind popular image generation systems and are increasingly used for audio, video, and 3D content generation. Their outputs can be remarkably realistic, which is both their value and their risk.

Why it matters in financial services

Diffusion models are relevant to financial services primarily through two lenses: as tools that institutions may adopt and as threats that institutions must defend against.

As tools:

  • Document generation and augmentation for training and testing purposes
  • Synthetic data generation for model development when real data is limited
  • Content creation for customer communications and marketing

As threats:

  • Generation of realistic fake identity documents for fraud
  • Creation of synthetic images for deepfake-based social engineering
  • Production of convincing but fraudulent financial documents

The computational cost of diffusion models is worth noting for governance purposes. Because they require significant processing power, institutions that deploy them typically rely on cloud-based AI-as-a-service (AIaaS) providers, introducing third-party risk considerations.

Key considerations for compliance teams

  1. Assess generative content risks. If your institution uses diffusion models for content generation, establish policies governing what can be generated and how synthetic content is labeled.
  2. Strengthen document verification. Update identity verification and document review processes to account for AI-generated documents and images.
  3. Include in AI inventory. Diffusion model deployments, including those accessed through vendor APIs, should be documented in the AI use case inventory.
  4. Evaluate third-party dependencies. Assess the AIaaS providers supplying diffusion model capabilities for concentration risk and data handling practices.
  5. Monitor for misuse. Establish acceptable-use guidelines for generative AI within the organization and monitor adherence to them.
  6. Stay current on detection capabilities. Monitor developments in synthetic media detection technology and assess applicability to your fraud prevention controls.
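To make step 3 concrete, the sketch below shows what a single AI use case inventory entry might capture for a diffusion model accessed through a vendor API. The record structure and every field name are hypothetical; institutions should adapt this to their own inventory schema and governance framework.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AIUseCaseRecord:
    """Hypothetical inventory entry for one AI deployment.

    Fields reflect the considerations above: how the model is accessed,
    which third-party provider supplies it, what uses are approved, and
    whether synthetic outputs are labeled as such.
    """
    name: str
    model_type: str                          # e.g. "diffusion model"
    access_method: str                       # "self-hosted" or "vendor API"
    provider: Optional[str] = None           # AIaaS provider, if any
    approved_uses: List[str] = field(default_factory=list)
    synthetic_output_labeled: bool = False

# Example entry for a vendor-hosted image generation capability.
record = AIUseCaseRecord(
    name="Marketing image generation",
    model_type="diffusion model",
    access_method="vendor API",
    provider="ExampleCloud (hypothetical)",
    approved_uses=["marketing collateral"],
    synthetic_output_labeled=True,
)
print(record.access_method)  # → vendor API
```

Even vendor-API deployments get a record here, consistent with the point that third-party access does not exempt a use case from the inventory.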
