Black box
Official Definition
The nature of some AI techniques whereby the inferential operations are complex, hidden, or otherwise opaque to their developers and end users in terms of providing an understanding of how classifications, recommendations, or actions are generated and what overall performance will be.
Source: AIEOG AI Lexicon (Feb 2026), NSCAI Appendix A Technical Glossary
What black box means in plain language
A black box AI system is one where the internal workings are not transparent. You can see what goes in (the inputs) and what comes out (the outputs), but the process in between is opaque. Even the developers who built the system may not be able to fully explain why a specific input produced a specific output.
This opacity is inherent to certain AI techniques, particularly deep learning models with millions or billions of parameters. The model has learned complex, non-linear relationships in the data, but those relationships are encoded in numerical weights that do not translate easily into human-understandable explanations.
Not all AI models are black boxes. Linear regression models, decision trees, and rule-based systems are generally transparent and explainable. The black box challenge arises specifically with complex models where the trade-off between predictive power and interpretability becomes significant.
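The contrast can be made concrete with a toy linear scoring model. The feature names and weights below are hypothetical illustrations, not a real credit model; the point is that in a transparent model, each feature's contribution to the output is directly readable.

```python
# Minimal sketch of why a linear model is "transparent":
# each feature's contribution to the score can be read off directly.
# Feature names and weights are hypothetical illustrations.

weights = {
    "debt_to_income": -2.0,    # higher DTI lowers the score
    "years_of_history": 0.5,   # longer history raises it
    "recent_delinquencies": -3.0,
}
bias = 1.0

def score(applicant):
    """Return the total score and each feature's exact contribution."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    return bias + sum(contributions.values()), contributions

applicant = {"debt_to_income": 0.4, "years_of_history": 6, "recent_delinquencies": 1}
total, contribs = score(applicant)
# Each entry in `contribs` states exactly how much that feature moved
# the score. A deep network's millions of weights offer no such direct
# reading: the input-output mapping exists, but no per-feature story does.
```

A decision tree or rule-based system is transparent for the same reason: the path from input to output can be written down and checked.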
For financial institutions, this trade-off has real consequences. A deep learning model might produce more accurate fraud detection than a simpler model, but if you cannot explain why it flagged (or did not flag) a specific transaction, you may face regulatory and legal challenges.
Why it matters in financial services
Black box AI is one of the most scrutinized issues in financial services AI governance. Regulators, examiners, and courts expect institutions to explain their decisions, especially those that affect customers.
- Adverse action notices. Under ECOA and Regulation B, lenders must provide specific reasons when they deny credit. A black box model that cannot identify the factors driving a denial creates compliance risk.
- Fair lending examinations. Examiners assess whether institutions can explain how their models make decisions and demonstrate those decisions do not discriminate. Black box opacity complicates both.
- SR 11-7 requirements. Model risk management guidance requires institutions to understand model limitations. A model whose decision logic cannot be explained represents a fundamental limitation.
- BSA/AML expectations. When filing SARs, investigators need to articulate why activity is suspicious. If the underlying detection model is a black box, articulating the basis for suspicion becomes more difficult.
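The adverse action requirement above can be sketched mechanically: given per-feature contributions to a denial decision, the candidate reasons are the features that pushed the score furthest toward denial. The contribution values here are hypothetical; in practice they might come from a scorecard or from an explainability technique such as SHAP.

```python
# Hedged sketch: deriving adverse action reason candidates from
# per-feature contributions to a denial. Values are hypothetical.

def top_adverse_action_reasons(contributions, n=2):
    """Return the n features with the most negative contributions,
    i.e. those that pushed the decision furthest toward denial."""
    negative = [(f, c) for f, c in contributions.items() if c < 0]
    negative.sort(key=lambda fc: fc[1])  # most negative first
    return [f for f, _ in negative[:n]]

contribs = {
    "debt_to_income": -1.8,
    "recent_delinquencies": -2.5,
    "years_of_history": 0.9,
}
reasons = top_adverse_action_reasons(contribs)
# → ["recent_delinquencies", "debt_to_income"]
```

The compliance risk with a black box model is precisely that no reliable `contributions` mapping may exist to feed a routine like this.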
Key considerations for compliance teams
- Assess explainability requirements for each use case. Not every AI application requires the same level of explainability. Customer-facing decisions (credit, pricing, access) require more transparency than internal operational tools.
- Consider simpler models first. If a transparent model achieves acceptable performance, it may be preferable to a black box model with marginally better accuracy.
- Use explainability tools. Techniques like SHAP values, LIME, and feature importance analysis can provide partial explanations for black box model decisions.
- Document the explainability trade-off. When deploying a black box model, document why the added complexity is necessary and what explainability measures are in place.
- Test for compliance with disclosure requirements. Verify that the institution can generate the specific explanations required by applicable regulations (adverse action reasons, SAR narratives).
- Require vendor explainability. For third-party black box models, require the vendor to provide interpretability tools and documentation.
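One of the techniques mentioned above, feature importance analysis, can be illustrated with a self-contained permutation test: shuffle one feature's values and measure how much accuracy drops. The model and data below are toy stand-ins; the technique itself is model-agnostic, which is why it applies to black boxes.

```python
# Hedged sketch of permutation feature importance: shuffle one feature,
# measure the accuracy drop. The "model" and data are toy illustrations;
# the model is treated as a black box (we only call it, never inspect it).
import random

random.seed(0)

def model(row):
    # Stand-in black box: in reality this would be an opaque learned model.
    return 1 if row[0] > 0.5 else 0

data = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if r[0] > 0.5 else 0 for r in data]

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(feature_idx):
    """Accuracy lost when feature_idx is shuffled across rows."""
    shuffled = [r[:] for r in data]
    col = [r[feature_idx] for r in shuffled]
    random.shuffle(col)
    for r, v in zip(shuffled, col):
        r[feature_idx] = v
    return accuracy(data) - accuracy(shuffled)

imp0 = permutation_importance(0)  # feature 0 drives the model: large drop
imp1 = permutation_importance(1)  # feature 1 is unused: no drop at all
```

Note the limitation this illustrates: permutation importance says which features matter overall, not why a specific transaction was flagged. Per-decision techniques such as SHAP or LIME are needed for the latter, and even those yield approximations rather than the model's actual reasoning.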
