Hallucination
Official Definition
In the context of AI, instances where an AI model generates fictitious or inaccurate output presented as factual information.
Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1
What hallucination means in plain language
Hallucination occurs when an AI model confidently produces information that is false, fabricated, or unsupported by its training data or the input provided. The term is most commonly associated with large language models and generative AI, which can generate text that reads as authoritative and factual but is entirely made up.
For example, a generative AI model might cite a nonexistent regulatory guidance document, fabricate a court case, or produce a financial figure that has no basis in reality. The output looks and reads like genuine information, making it particularly dangerous in professional contexts.
Hallucinations are not intentional deception. They result from how generative models work: they predict the most likely next word or phrase based on patterns learned during training. When a prompt falls outside the model's training data or demands factual precision the model cannot supply, it fills the gap with plausible-sounding but incorrect information.
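This gap-filling behavior can be pictured as weighted sampling over candidate continuations. The following is a minimal sketch in Python; the candidate strings and probabilities are invented for illustration and are not drawn from any real model.

```python
import random

# Toy illustration only: a real model scores every token in its vocabulary
# with a neural network; these candidate continuations and probabilities are
# invented for this example and do not come from any actual model.
next_token_probs = {
    "12 CFR 1026.43": 0.32,     # a citation pattern seen often in training
    "12 CFR 1026.99": 0.29,     # equally plausible-looking, fabricated here
    "the applicable rule": 0.22,
    "[refuses to answer]": 0.17,
}

def sample_next_token(probs):
    """Pick a continuation in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The sampler has no notion of true vs. false: a fabricated citation that
# happens to score well is emitted with the same apparent confidence.
print(sample_next_token(next_token_probs))
```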
Why it matters in financial services
Hallucination risk is significant in financial services because the consequences of acting on false information are severe:
- Regulatory compliance. An AI system that hallucinates regulatory citations or filing deadlines could lead to compliance failures.
- Customer communications. A chatbot that hallucinates account details or fee information could create customer harm and UDAP/UDAAP exposure.
- Internal decision-making. AI-generated summaries that contain hallucinated facts could lead to incorrect business decisions.
- Legal risk. AI-generated legal analysis that cites nonexistent case law could expose the institution to liability.
Key considerations for compliance teams
- Implement output validation. Establish processes to verify AI-generated outputs against authoritative sources before using them; a minimal citation-check sketch follows this list.
- Require human review. For high-stakes use cases, require human review and approval of all AI-generated content.
- Use retrieval-augmented generation (RAG). When factual accuracy is critical, implement RAG approaches that ground AI outputs in verified source documents; see the grounding sketch after this list.
- Test for hallucination rates. During validation, test generative AI systems for hallucination frequency and severity in your specific use cases.
- Train users on hallucination risk. Ensure staff understand that AI outputs may contain fabricated information and should be verified.
- Document hallucination mitigation measures. Record the steps taken to reduce and detect hallucinations for each deployment.
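To make the output-validation and hallucination-testing points concrete, one simple control is an allow-list check over regulatory citations. The sketch below is illustrative only: KNOWN_CITATIONS, unsupported_citations, and hallucination_rate are hypothetical names, and a production control would check against an authoritative, maintained citation index rather than a hard-coded set.

```python
import re

# Hypothetical allow-list: in practice, an authoritative, maintained index
# of real guidance documents, not a hard-coded set of two entries.
KNOWN_CITATIONS = {
    "12 CFR 1026.43",
    "12 CFR 1002.9",
}

CITATION_PATTERN = re.compile(r"\b12 CFR \d+\.\d+\b")

def unsupported_citations(ai_output: str) -> list[str]:
    """Return regulatory citations in the output that are not on the allow-list."""
    return [c for c in CITATION_PATTERN.findall(ai_output) if c not in KNOWN_CITATIONS]

def hallucination_rate(outputs: list[str]) -> float:
    """Share of outputs containing at least one unsupported citation."""
    flagged = sum(1 for o in outputs if unsupported_citations(o))
    return flagged / len(outputs) if outputs else 0.0

sample_outputs = [
    "Under 12 CFR 1026.43, the creditor must assess repayment ability.",
    "Per 12 CFR 1026.99, no disclosure is required.",  # fabricated citation
]
print(hallucination_rate(sample_outputs))  # 0.5 in this toy example
```

The same pattern extends to other verifiable claims, such as fee amounts or case names, wherever an authoritative reference exists to check against.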
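For the RAG consideration, the core idea is to retrieve passages from verified source documents and instruct the model to answer only from them. The skeleton below is a sketch under stated assumptions: VERIFIED_SOURCES, retrieve, and build_grounded_prompt are invented names, retrieval here is naive keyword overlap rather than the embedding-based search most RAG systems use, and no particular model API is implied.

```python
# Minimal retrieval-augmented generation skeleton. Retrieval here is naive
# keyword overlap, and the verified passages are invented examples; a real
# deployment would search an indexed corpus of approved documents.
VERIFIED_SOURCES = {
    "fee-schedule-2024": "Monthly maintenance fee: $5, waived with a $1,500 minimum balance.",
    "overdraft-policy": "Overdraft fee: $25 per item, with a maximum of three per day.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank verified passages by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        VERIFIED_SOURCES.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the sources below. If the answer is not in the "
        f"sources, say you do not know.\n\nSources:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What is the monthly maintenance fee?"))
```

Grounding narrows, but does not eliminate, hallucination risk, which is why the human-review and testing steps above still apply.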
