AI model
Official Definition
A component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
Source: AIEOG AI Lexicon (Feb 2026), NIST SP 800-218A
What “AI model” means in plain language
An AI model is the core computational engine inside an AI system. It takes in data (inputs), applies learned patterns and mathematical operations, and produces results (outputs). The model is the part that does the “thinking,” though AI models do not think the way humans do: they apply statistical relationships learned from data.
A model is not a complete system on its own. It is a component that sits within a broader information system that includes data pipelines, user interfaces, business logic, and integration layers. Understanding this distinction matters for governance because the model and the system around it can each introduce risk.
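To make the component-versus-system distinction concrete, here is a minimal, hypothetical Python sketch. The model is a single scoring function; the input validation, audit logging, and decision rule around it belong to the surrounding system, and each layer can introduce risk independently. All names and coefficients here are invented for illustration, not drawn from any real system.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("loan_system")

# --- The MODEL: a stand-alone scoring component (illustrative stub). ---
# In practice this would be a trained statistical or ML model; here it is
# a hard-coded linear score so the example stays self-contained.
def default_risk_model(income: float, debt: float) -> float:
    """Return an estimated probability of default from learned-style weights."""
    score = 0.8 * (debt / max(income, 1.0)) - 0.1   # hypothetical coefficients
    return min(max(score, 0.0), 1.0)                # clamp to a probability

# --- The SYSTEM: everything wrapped around the model. ---
def decide_application(income: float, debt: float) -> str:
    # Data pipeline / input validation layer (a system risk, not a model risk).
    if income < 0 or debt < 0:
        raise ValueError("inputs failed validation before reaching the model")

    p_default = default_risk_model(income, debt)
    log.info("model output: p_default=%.3f", p_default)  # audit trail layer

    # Business-logic layer: turns the model output into a decision.
    return "decline" if p_default > 0.5 else "approve"

print(decide_application(income=60_000.0, debt=15_000.0))  # -> approve
```

A validation finding could sit in any of these layers: the coefficients, the input checks, or the decision threshold, which is why governance looks at both the model and the system around it.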
Examples of AI models in financial services include:
- A credit scoring model that predicts the likelihood a borrower will default based on application data and bureau information.
- A transaction monitoring model that scores transactions for suspicious activity based on patterns learned from historical data.
- A natural language processing model that extracts key information from regulatory filings or customer correspondence.
- A fraud detection model that identifies potentially fraudulent transactions in real time based on device, behavioral, and transactional signals.
Each of these models was trained on historical data, learned patterns from that data, and now applies those patterns to new inputs. The quality of the model depends heavily on the quality of the training data, the appropriateness of the chosen technique, and the rigor of the validation process.
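As a minimal, self-contained sketch of that train-then-apply pattern, the following uses scikit-learn’s LogisticRegression; the feature names, records, and outcomes are synthetic and invented purely for illustration, not taken from any real scoring model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" data: [debt_to_income_ratio, prior_delinquencies]
X_train = np.array([
    [0.10, 0], [0.15, 0], [0.20, 1], [0.55, 2],
    [0.60, 3], [0.70, 1], [0.25, 0], [0.65, 4],
])
y_train = np.array([0, 0, 0, 1, 1, 1, 0, 1])  # 1 = borrower defaulted

# Training: the model learns statistical relationships from past outcomes.
model = LogisticRegression().fit(X_train, y_train)

# Scoring: the learned pattern is applied to a new applicant.
new_applicant = np.array([[0.45, 1]])
p_default = model.predict_proba(new_applicant)[0, 1]
print(f"estimated probability of default: {p_default:.2f}")
```

Everything governance cares about later, from validation to monitoring, traces back to what that training step learned from the historical data.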
Why it matters in financial services
AI models are subject to model risk management requirements. The OCC’s Comptroller’s Handbook on Model Risk Management and the Federal Reserve’s SR 11-7 define a “model” broadly and establish expectations for how models should be developed, validated, and used. AI models fall squarely within this definition.
For compliance teams, the key implication is that every AI model used for a business decision should be subject to the institution’s model risk management framework. This includes:
- Inventory and classification. Every AI model should be registered in the institution’s model inventory with its risk tier, intended use, data dependencies, and responsible owner (a minimal sketch of such a record follows this list).
- Development documentation. The model development process should be documented, including the choice of technique, training data, feature selection, and design decisions.
- Independent validation. Models should be validated by parties independent of the development team before deployment and on a recurring basis.
- Ongoing monitoring. Model performance should be tracked continuously against defined metrics, with alerts when performance degrades.
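Here is one hedged sketch of what a single inventory record might capture, using a hypothetical Python dataclass; the field names and risk tiers are illustrative assumptions, not a regulatory schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One record in a model inventory (illustrative fields only)."""
    model_id: str
    name: str
    risk_tier: str                 # e.g. "high" / "medium" / "low" per internal policy
    intended_use: str
    data_dependencies: list[str] = field(default_factory=list)
    owner: str = ""
    last_validated: str = ""       # ISO date of the most recent independent validation

entry = ModelInventoryEntry(
    model_id="MDL-0042",
    name="Retail credit default scorer",
    risk_tier="high",
    intended_use="Consumer lending decisions",
    data_dependencies=["application data", "bureau data"],
    owner="Credit Risk Analytics",
    last_validated="2025-11-30",
)
print(entry.model_id, entry.risk_tier)
```

A structured record like this also makes it straightforward to report inventory coverage and to flag models overdue for revalidation.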
The AIEOG definition is deliberately broad: “computational, statistical, or machine-learning techniques.” This means the governance expectation applies to everything from simple regression models to complex deep learning systems. Compliance teams should not assume that simpler models are exempt from governance requirements.
Key considerations for compliance teams
- Use a broad definition of “model.” Align your internal model definition with the regulatory standard. If a component applies computational, statistical, or machine-learning techniques to produce outputs that inform business decisions, treat it as a model and govern it accordingly.
- Register all AI models in your model inventory. Include models built in-house, procured from vendors, and embedded in third-party platforms.
- Require model documentation. Every AI model should have a model card or documentation package that describes its purpose, design, training data, performance metrics, limitations, and intended use.
- Establish validation standards. Define what constitutes an acceptable validation for AI models, including performance benchmarks, fairness testing, and stress testing requirements.
- Monitor model performance continuously. Implement dashboards or automated reporting that tracks key model metrics and alerts when performance falls outside acceptable thresholds (see the monitoring sketch after this list).
- Plan for the full model lifecycle. Development, validation, deployment, monitoring, revalidation, retraining, and retirement should all have defined processes and documentation requirements.
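As one hedged example of what a continuous monitoring check can look like in code, the sketch below compares holdout discrimination (AUC) against a floor and computes a Population Stability Index (PSI) between the score distribution seen at validation and the live distribution. The thresholds used (an AUC floor of 0.70, a PSI alert level of 0.25) are common rules of thumb, not regulatory requirements, and all data is synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Assign each score to a bin; clip so out-of-range live scores land in edge bins.
    e_idx = np.clip(np.searchsorted(edges, expected, side="right") - 1, 0, bins - 1)
    a_idx = np.clip(np.searchsorted(edges, actual, side="right") - 1, 0, bins - 1)
    e_pct = np.bincount(e_idx, minlength=bins) / len(expected)
    a_pct = np.bincount(a_idx, minlength=bins) / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 5000)    # score distribution at validation time
live_scores = rng.beta(2.5, 4, 5000)      # shifted distribution in production
holdout_labels = rng.integers(0, 2, 5000)
holdout_scores = np.where(holdout_labels == 1,
                          rng.beta(4, 3, 5000), rng.beta(2, 5, 5000))

auc = roc_auc_score(holdout_labels, holdout_scores)
drift = psi(baseline_scores, live_scores)

if auc < 0.70:
    print(f"ALERT: holdout AUC {auc:.3f} below floor")
if drift > 0.25:
    print(f"ALERT: PSI {drift:.3f} indicates significant population shift")
print(f"auc={auc:.3f} psi={drift:.3f}")
```

In practice, checks like these would run on a schedule against production scoring logs, with alerts routed to the model owner recorded in the inventory.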
