Generative AI
Official Definition
A class of AI models, often built upon foundation models, that are intended to generate new content. The content a generative AI model produces can include, but is not limited to, text, images, audio, video, and code.
Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1 and EU AI Act
What generative AI means in plain language
Generative AI is the category of AI that creates new content rather than analyzing or classifying existing data. When you ask an AI chatbot to draft an email, summarize a document, write code, or generate an image, you are using generative AI.
This is a fundamental shift from earlier AI applications that focused on classification, prediction, or optimization. Generative AI produces outputs that did not previously exist: text, images, audio, video, and code.
The most widely adopted generative AI models in financial services are large language models used for document drafting, customer communication, regulatory analysis, and internal knowledge management.
Why it matters in financial services
Generative AI adoption in financial services is accelerating rapidly. Governance challenges specific to generative AI include:
- Hallucination. Generative AI models can produce outputs that appear authoritative but are factually incorrect. In financial services, hallucinated compliance guidance or fabricated financial figures could lead to regulatory breaches or customer harm.
- Non-determinism. Many generative AI systems produce different outputs for the same input, complicating reproducibility and audit.
- Data leakage. Generative AI models can inadvertently reproduce information from their training data, raising privacy and confidentiality concerns.
- Content attribution. Determining who is accountable for AI-generated content (the model provider, the deploying institution, or the employee who reviewed it) is a governance question institutions must resolve before deployment.
- Regulatory uncertainty. The regulatory framework for generative AI in financial services is still taking shape, and supervisory expectations differ across jurisdictions and continue to change.
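The non-determinism challenge above can be made concrete with a toy sketch. This is not any vendor's API: it simulates next-token sampling over a made-up score table to show why temperature-based sampling produces varying outputs while greedy decoding (temperature approaching zero) is reproducible, which is the property auditors usually care about.

```python
import math
import random

def sample_next_token(scores, temperature, rng):
    """Pick a token from a toy next-token score table.

    With temperature > 0 the choice is stochastic; at temperature 0
    we fall back to greedy decoding (always the top-scoring token),
    which makes the same input reproduce the same output.
    """
    if temperature == 0:
        return max(scores, key=scores.get)  # greedy: deterministic
    scaled = {tok: s / temperature for tok, s in scores.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled.values()]
    return rng.choices(list(scaled), weights=weights, k=1)[0]

# Hypothetical scores for the next word after "The filing is ..."
scores = {"complete": 2.0, "pending": 1.8, "overdue": 0.5}
rng = random.Random()  # unseeded, as most production sampling is

# Sampling run: repeated calls with the same input can disagree.
sampled = {sample_next_token(scores, 0.9, rng) for _ in range(20)}

# Greedy run: every call returns the same token.
greedy = {sample_next_token(scores, 0.0, rng) for _ in range(20)}
print(sampled)
print(greedy)  # always {'complete'}
```

Pinning temperature to zero (or a fixed random seed, where the provider exposes one) trades output diversity for the reproducibility that audit and validation work typically require.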
Key considerations for compliance teams
- Establish acceptable use policies. Define where generative AI can and cannot be used. Customer-facing content, regulatory filings, and legal documents may require additional review.
- Implement human review processes. Require human review and approval for generative AI outputs used in regulated or high-stakes contexts.
- Test for hallucination. Validate that generative AI systems produce accurate outputs for the specific use cases they serve.
- Address data privacy. Evaluate what data is sent to generative AI systems and whether it creates privacy or regulatory exposure.
- Log inputs and outputs. Maintain records of prompts and responses for audit, compliance, and investigation purposes.
- Include in AI governance. Generative AI deployments should be inventoried, risk-assessed, and governed like any other AI system.
