Risk tiering
Official Definition
The process of classifying AI systems into risk categories based on their potential impact, complexity, and regulatory implications, to determine the appropriate level of governance and oversight.
Source: AIEOG AI Lexicon (Feb 2026); adapted from the NIST AI RMF, the EU AI Act, and the OCC's "Model Risk Management" booklet of the Comptroller's Handbook
What risk tiering means in plain language
Risk tiering is the practice of sorting AI systems into categories — typically high, medium, and low risk — based on how much harm they could cause if they fail, produce biased results, or are misused. The tier determines how much governance, oversight, validation, and monitoring is required.
Not all AI systems carry the same risk. A model that decides who gets a mortgage carries far more risk than a model that suggests internal meeting times. Risk tiering ensures that governance resources are concentrated where the stakes are highest, rather than applying the same level of oversight to every AI tool regardless of impact.
Risk tiering typically considers factors like the type of decision the AI influences, who is affected by the output, the regulatory context, the model’s complexity and opacity, the volume of decisions, and the consequences of errors.
The EU AI Act formalizes risk tiering into law, categorizing AI systems as unacceptable risk (banned), high risk (subject to extensive requirements), limited risk (transparency obligations), and minimal risk (no specific obligations).
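The four EU AI Act categories above can be sketched as a simple lookup. This is an illustrative summary only, not legal guidance; the example use-case assignments are hypothetical:

```python
from enum import Enum

class EUAIActTier(Enum):
    """The four risk categories defined by the EU AI Act,
    paired with a shorthand for the obligations each carries."""
    UNACCEPTABLE = "banned"
    HIGH = "extensive requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical example use cases mapped to tiers for illustration.
EXAMPLE_USE_CASES = {
    "social scoring of individuals": EUAIActTier.UNACCEPTABLE,
    "credit / mortgage decisioning": EUAIActTier.HIGH,
    "customer-facing chatbot": EUAIActTier.LIMITED,
    "internal meeting scheduler": EUAIActTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

Note how the mortgage example from earlier lands in the high-risk tier while the meeting scheduler is minimal risk, which is exactly the proportionality that tiering is meant to capture.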
Why it matters in financial services
Risk tiering is foundational to any scalable AI governance program. Without it, institutions face two untenable choices: under-govern everything (accepting unacceptable risk for high-impact systems) or over-govern everything (making AI adoption impractical by applying maximum oversight to every tool).
The OCC’s Model Risk Management guidance and SR 11-7 both endorse a risk-based approach to model governance. Examiners expect institutions to demonstrate that governance intensity is proportional to model risk. The Treasury’s AI guidance for banks reinforces this expectation.
For financial institutions with dozens or hundreds of AI use cases, risk tiering is what makes governance manageable. It creates a framework for prioritizing validation resources, setting monitoring frequency, determining documentation depth, and allocating oversight responsibility.
Key considerations for compliance teams
- Define risk tiers clearly. Establish clear criteria for each risk tier so classification is consistent and defensible.
- Use multiple risk factors. Consider impact on customers, regulatory implications, model complexity, data sensitivity, volume of decisions, and reversibility of outcomes.
- Calibrate governance to tier. Define specific governance requirements for each tier: documentation depth, validation rigor, monitoring frequency, and oversight level.
- Review classifications periodically. As AI systems evolve and business contexts change, reassess risk tier assignments.
- Document tiering decisions. Record the rationale for each AI system’s risk classification.
- Align with regulatory frameworks. Map internal risk tiers to applicable regulatory frameworks (EU AI Act, SR 11-7, etc.) to ensure compliance.
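The considerations above can be combined into a simple multi-factor scoring routine. The sketch below is one possible shape, not a standard methodology: the factor names, 0-3 scoring scale, weights, and tier thresholds are all assumptions an institution would calibrate to its own risk appetite, and the rationale is recorded so each classification stays documented and defensible:

```python
from dataclasses import dataclass, field

# Illustrative risk factors drawn from the considerations above;
# the 0-3 scale per factor is an assumption.
FACTORS = (
    "customer_impact",       # who is affected by the output
    "regulatory_exposure",   # applicable regulatory context
    "model_complexity",      # complexity and opacity of the model
    "data_sensitivity",      # sensitivity of the input data
    "decision_volume",       # number of decisions influenced
    "irreversibility",       # how hard erroneous outcomes are to undo
)

@dataclass
class TieringDecision:
    system: str
    tier: str
    score: int
    rationale: dict = field(default_factory=dict)  # documented per-factor scores

def assign_tier(system: str, scores: dict) -> TieringDecision:
    """Classify a system as high/medium/low risk from 0-3 factor scores,
    recording the per-factor rationale for later review."""
    missing = set(FACTORS) - set(scores)
    if missing:
        raise ValueError(f"unscored factors: {sorted(missing)}")
    total = sum(scores[f] for f in FACTORS)
    # Hypothetical thresholds; a maximal irreversibility score
    # forces the high tier regardless of the total.
    if total >= 12 or scores["irreversibility"] == 3:
        tier = "high"
    elif total >= 6:
        tier = "medium"
    else:
        tier = "low"
    return TieringDecision(system, tier, total, dict(scores))

# Example: the mortgage model vs. the meeting scheduler from earlier.
mortgage = assign_tier("mortgage decisioning", {
    "customer_impact": 3, "regulatory_exposure": 3, "model_complexity": 2,
    "data_sensitivity": 3, "decision_volume": 2, "irreversibility": 2,
})
scheduler = assign_tier("meeting scheduler", {
    "customer_impact": 0, "regulatory_exposure": 0, "model_complexity": 1,
    "data_sensitivity": 0, "decision_volume": 1, "irreversibility": 0,
})
print(mortgage.tier, scheduler.tier)
```

In practice the scoring rubric itself should be versioned and periodically reviewed, so that a system's tier can be reassessed as the model or its business context changes.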
