Robustness
Official Definition
The ability of an AI system to maintain its level of performance under a variety of circumstances, including unexpected inputs, adversarial attacks, and environmental changes.
Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1
What robustness means in plain language
Robustness is the ability of an AI system to perform reliably even when conditions are not ideal. A robust model continues to work well when it encounters data that is different from its training data, when inputs are noisy or incomplete, when the environment changes, or when adversaries try to trick it.
Robustness is often described in contrast to fragility. A fragile model works well under expected conditions but fails unpredictably when anything changes. A robust model degrades gracefully — its performance may decline under challenging conditions, but it does not fail catastrophically.
Robustness testing involves deliberately exposing AI systems to challenging conditions: out-of-distribution data, adversarial inputs, missing values, extreme values, corrupted data, and novel scenarios. The goal is to understand how the model behaves when reality diverges from the conditions it was trained on.
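The kind of stress testing described above can be sketched in a few lines. The snippet below is illustrative only: it uses synthetic two-feature data and a toy nearest-centroid classifier standing in for any scoring model, then compares accuracy on clean data against noisy, partially missing, and distribution-shifted versions of the same inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (purely illustrative, not a real credit dataset).
n = 500
X = np.vstack([
    rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n, 2)),
    rng.normal(loc=[1.0, 1.0], scale=0.7, size=(n, 2)),
])
y = np.array([0] * n + [1] * n)

# Toy "model": classify by nearest class centroid.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(inputs):
    dists = np.linalg.norm(inputs[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def accuracy(inputs, labels):
    return float((predict(inputs) == labels).mean())

baseline = accuracy(X, y)

# Stress conditions: additive noise, "missing" values imputed to zero,
# and a shift in the input distribution.
noisy = X + rng.normal(scale=0.8, size=X.shape)
missing = X.copy()
missing[rng.random(X.shape) < 0.2] = 0.0
shifted = X + np.array([1.5, 1.5])

for name, data in [("baseline", X), ("noise", noisy),
                   ("missing", missing), ("shift", shifted)]:
    print(f"{name:9s} accuracy = {accuracy(data, y):.3f}")
```

A robust model shows modest degradation under the noise and missing-data conditions; a sharp drop under the shift condition signals the kind of fragility the testing is meant to surface.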
Why it matters in financial services
Financial services environments are dynamic. Economic conditions change, customer populations shift, fraud tactics evolve, and regulatory requirements update. AI models must be robust enough to maintain acceptable performance across these changing conditions.
Lack of robustness can lead to model failures during economic stress (when models are needed most), vulnerability to adversarial attacks (fraud, manipulation), cascading errors when upstream data quality degrades, and unexpected behavior in novel situations.
Regulators expect institutions to understand how their models perform under stress. Stress testing, sensitivity analysis, and scenario testing are all forms of robustness assessment.
Key considerations for compliance teams
- Test under diverse conditions. Validate AI systems with out-of-distribution data, edge cases, and stress scenarios.
- Conduct adversarial testing. Test robustness against adversarial inputs and manipulation attempts.
- Assess sensitivity. Understand how sensitive model outputs are to changes in inputs and environmental conditions.
- Document robustness limitations. Record the conditions under which the model may not perform as expected.
- Monitor for environmental changes. Track whether the operating environment has changed in ways that may affect model robustness.
- Plan for degradation. Establish fallback procedures for when model performance degrades below acceptable levels.
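The sensitivity assessment mentioned above can be done with a simple one-at-a-time perturbation: nudge each input slightly and measure how much the model's output moves. The sketch below assumes a hypothetical logistic scoring function with illustrative weights; the `sensitivity` helper is generic and would work against any callable score.

```python
import numpy as np

def score(features):
    # Hypothetical scoring model; the weights are illustrative only.
    w = np.array([0.4, -0.25, 0.15, 0.2])
    return float(1 / (1 + np.exp(-features @ w)))

def sensitivity(features, delta=0.01):
    """One-at-a-time sensitivity: change in score per unit change in each input."""
    base = score(features)
    out = []
    for i in range(len(features)):
        bumped = features.copy()
        bumped[i] += delta
        out.append((score(bumped) - base) / delta)
    return np.array(out)

applicant = np.array([0.6, 0.3, 0.8, 0.5])
print(sensitivity(applicant))
```

Inputs with large sensitivity values are the ones to watch in monitoring: small data-quality problems in those fields translate into large swings in the model's output.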