Explainability

Official Definition

The ability to express the functional relationship between inputs and outputs that an AI system uses to generate an output. The level of detail, and the language used for explanation, can be tailored to the role, knowledge, or skill level of the explainee.

Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1

What explainability means in plain language

Explainability is the ability to describe, in understandable terms, how an AI system arrived at a particular output. It answers the question: “Why did the model produce this result?”

The AIEOG definition makes an important point: explanations should be tailored to the audience. A data scientist needs a technical explanation of feature importance and model weights. A compliance officer needs an explanation of which factors drove a risk score. A customer denied credit needs a clear, plain-language reason. Explainability is not one-size-fits-all.

Explainability exists on a spectrum. Some models are inherently explainable (decision trees, logistic regression). Others require post-hoc explainability techniques (SHAP values, LIME, feature attribution) to provide approximate explanations. The choice of model and explainability approach should match the requirements of the use case.
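
To make the spectrum concrete, the sketch below, a minimal illustration assuming Python with scikit-learn and synthetic placeholder data, contrasts an inherently interpretable logistic regression, whose fitted coefficients are the explanation, with a gradient-boosted model that needs a post-hoc, model-agnostic technique. Permutation importance stands in here for heavier post-hoc tools such as SHAP or LIME.

```python
# Minimal sketch: interpretable model vs. post-hoc explanation of a complex model.
# Feature names and data are synthetic placeholders, not a real credit dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["utilization", "delinquencies", "income", "tenure"]
X = rng.normal(size=(500, 4))
# Synthetic target: higher utilization/delinquencies raise risk, higher income lowers it.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Inherently interpretable: the fitted coefficients *are* the explanation.
lr = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, lr.coef_[0]):
    print(f"{name:>14}: {coef:+.2f}  (log-odds impact per unit)")

# Complex model: needs a post-hoc technique (permutation importance here,
# as a lightweight stand-in for SHAP- or LIME-style attribution).
gbm = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(gbm, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name:>14}: importance {imp:.3f}")
```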

Why it matters in financial services

Explainability is a regulatory requirement in multiple financial services contexts:

  • Adverse action notices. Under ECOA, lenders must provide specific reasons when denying credit, and those reasons must be derived from the model’s actual decision logic (a sketch of deriving reason codes from feature attributions follows this list).
  • BSA/AML. Suspicious activity report (SAR) narratives must articulate why flagged activity is suspicious, and the underlying detection model must be explainable enough to support those narratives.
  • Fair lending. Examiners assess whether institutions can explain how models make decisions and whether those decisions are fair across protected classes.
  • Model risk management. SR 11-7 requires understanding of model limitations. Models that cannot be explained represent a known limitation that must be managed.
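
For adverse action notices in particular, one common pattern is to rank the per-applicant feature attributions that pushed the score toward denial and map the strongest ones to plain-language reasons. The sketch below is illustrative only: the attribution values, feature names, and reason wording are hypothetical, and in practice the attributions would come from the model’s actual decision logic (coefficients, SHAP values, or similar), with wording reviewed by compliance.

```python
# Hedged sketch: turning per-applicant feature attributions into adverse-action reasons.
# All values and wording below are hypothetical placeholders.

# Hypothetical attributions for one denied applicant: negative values
# pushed the score toward denial.
attributions = {
    "credit_utilization": -0.42,
    "recent_delinquencies": -0.31,
    "income": +0.10,
    "account_tenure": -0.05,
}

# Hypothetical mapping from model features to plain-language reasons.
reason_text = {
    "credit_utilization": "Proportion of revolving credit in use is too high",
    "recent_delinquencies": "Recent delinquent payment history",
    "account_tenure": "Limited length of credit history",
}

def adverse_action_reasons(attributions, reason_text, max_reasons=4):
    """Return plain-language reasons for the features that most strongly
    pushed the decision toward denial (most negative attributions)."""
    negative = [(f, v) for f, v in attributions.items() if v < 0]
    negative.sort(key=lambda item: item[1])  # most negative first
    return [reason_text.get(f, f) for f, _ in negative[:max_reasons]]

print(adverse_action_reasons(attributions, reason_text))
# -> reasons for utilization, delinquencies, and tenure, in that order
```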

Key considerations for compliance teams

  1. Match explainability to the use case. Customer-facing decisions require higher explainability standards than internal operational tools.
  2. Select explainability techniques appropriate to the model. Use inherently interpretable models where possible. Where complex models are necessary, deploy appropriate post-hoc explainability tools.
  3. Test explanations for accuracy and consistency. Ensure that the explanations generated actually reflect the model’s decision logic (one simple fidelity check is sketched after this list).
  4. Document explainability capabilities and limitations. For each model, document what can be explained, to what level of detail, and what limitations exist.
  5. Train staff on explainability requirements. Model developers, validators, and compliance teams should understand explainability concepts and regulatory requirements.
  6. Include explainability in vendor requirements. For third-party models, require vendors to provide explainability tools and documentation.
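
On point 3, one simple way to test whether explanations reflect the model’s decision logic is a surrogate fidelity check: fit an interpretable surrogate to the complex model’s own predictions and measure how often the two agree. The sketch below assumes Python with scikit-learn and synthetic data; the 90% agreement threshold is an illustrative assumption, not a regulatory standard.

```python
# Minimal surrogate fidelity check on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

complex_model = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the *model's* outputs, not the true labels,
# because the goal is to explain the model, not the world.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

fidelity = accuracy_score(complex_model.predict(X), surrogate.predict(X))
print(f"Surrogate/model agreement: {fidelity:.1%}")
if fidelity < 0.90:  # illustrative threshold, not a standard
    print("Surrogate explanations may not reflect the model's decision logic.")
```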

