Validation
Official Definition
The process of independently evaluating an AI model’s performance, reliability, fairness, and compliance with intended specifications to determine whether it is fit for its intended purpose and meets regulatory requirements.
Source: AIEOG AI Lexicon (Feb 2026), adapted from SR 11-7, OCC 2011-12, and NIST AI 100-1
What validation means in plain language
Validation is the process of checking whether an AI model actually works as intended — and continues to work — before and after it is deployed in a real-world environment. It is distinct from development testing. Validation is typically performed by an independent party who was not involved in building the model, applying rigorous evaluation methods to confirm that the model is accurate, fair, stable, and appropriate for its use case.
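One of the "continues to work" checks commonly used in practice is a drift test on the model's input or score distribution. The sketch below computes a population stability index (PSI), a standard stability metric in model validation; the bin proportions and thresholds are illustrative assumptions, not prescribed values.

```python
import math

def psi(expected, actual):
    """Population Stability Index over matched bins of proportions.

    `expected` is the baseline (validation-time) distribution,
    `actual` is the current production distribution. Both are lists
    of bin proportions that each sum to 1. Larger PSI = more drift.
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Illustrative score distributions across four bins (assumed values)
baseline = [0.25, 0.25, 0.25, 0.25]
current  = [0.30, 0.25, 0.25, 0.20]

drift = psi(baseline, current)
# A common rule of thumb: PSI < 0.10 stable, 0.10-0.25 moderate shift,
# > 0.25 significant shift warranting revalidation.
```

Thresholds like 0.10 and 0.25 are conventions, not regulatory requirements; an institution's validation policy should define its own triggers for revalidation.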
In financial services, validation is not optional. It is a regulatory expectation embedded in model risk management guidance, supervisory standards, and examination procedures. Deploying an AI model without proper validation is one of the fastest paths to regulatory action.
Why it matters in financial services
Validation sits at the intersection of technical rigor and regulatory compliance:
- SR 11-7 and OCC 2011-12. These foundational model risk management guidance documents explicitly require independent model validation as a core element of sound model governance. Validation must cover conceptual soundness, ongoing monitoring, and outcomes analysis.
- Fair lending compliance. Models used in credit decisions must be validated for fairness across protected groups. Regulators expect institutions to test for disparate impact and document the results — even when models do not use protected characteristics as direct inputs.
- Consumer protection. The CFPB increasingly scrutinizes AI models used in consumer-facing applications. Validation must demonstrate that models treat consumers fairly and produce accurate outcomes.
- Operational resilience. Models that fail in production can cause significant financial and reputational damage. Validation helps identify weaknesses before they become real-world failures.
- Audit and examination readiness. Examiners review validation reports as primary evidence of sound model governance. Incomplete or superficial validation is one of the most common findings in regulatory examinations.
Related terms
- Benchmarking — comparing model performance against standards or alternatives
- Capability evaluation — assessing what an AI system can and cannot do
- Performance monitoring — ongoing tracking of model metrics in production
- Model risk — the potential for adverse consequences from model errors or misuse
- Documentation — the artifacts that capture validation methodology and results
