Bias
Official Definition
A systematic distortion, as opposed to random error, that reduces the representativeness or accuracy of an AI system’s outputs or performance for its intended purposes and operating conditions. Bias may be introduced inadvertently or purposely, and may also emerge as the AI system is used in an application.
Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST SP 1270
What bias means in plain language
Bias in AI refers to systematic errors that push an AI system’s outputs in a particular direction, reducing accuracy or fairness. Unlike random errors, which tend to cancel out in aggregate, bias is consistent and directional: it causes the system to reliably get things wrong in a specific way.
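The distinction is easy to see numerically. In this toy sketch (assumed Gaussian noise and an arbitrary +3 shift, purely for illustration), random error averages out toward the true value while a systematic shift persists no matter how many observations you collect:

```python
import random

random.seed(0)

true_value = 100.0
n = 10_000

# Random error: symmetric noise centered on the true value.
random_estimates = [true_value + random.gauss(0, 5) for _ in range(n)]

# Systematic bias: the same noise plus a constant directional shift.
biased_estimates = [true_value + random.gauss(0, 5) + 3.0 for _ in range(n)]

mean_random = sum(random_estimates) / n
mean_biased = sum(biased_estimates) / n

print(f"mean with random error only: {mean_random:.2f}")  # near the true value
print(f"mean with systematic bias:   {mean_biased:.2f}")  # stays shifted
```

More data shrinks the first gap but never the second; that is why bias must be found and corrected rather than averaged away.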
Bias can enter an AI system at multiple points:
- Training data bias. If the data used to train a model does not represent the population the model will serve, the model learns distorted patterns. A credit model trained primarily on data from one demographic group may perform poorly for others.
- Feature selection bias. If the variables chosen as model inputs correlate with protected characteristics (race, gender, age), the model can produce discriminatory outcomes even if those protected characteristics are not used directly.
- Algorithmic bias. The mathematical techniques used to build the model can introduce bias through assumptions about data distributions, optimization objectives, or regularization choices.
- Deployment bias. Bias can emerge after deployment when the model is applied to populations or conditions that differ from its training environment.
- Feedback loop bias. When model outputs influence future training data (a fraud model that flags certain groups more frequently, generating more training examples from those groups), bias can compound over time.
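The feedback-loop mechanism above can be sketched in a few lines. This is a deliberately simplified model with hypothetical numbers, not a real fraud system: each retraining cycle nudges a group's flag rate upward in proportion to its share of last cycle's flags, so a small initial gap between groups compounds:

```python
# Hypothetical starting flag rates for two groups (assumed values).
flag_rate = {"A": 0.10, "B": 0.12}

for cycle in range(4):
    # Flagged cases become labeled training examples, so each group's rate
    # grows in proportion to its share of the flags it generated last cycle.
    total = flag_rate["A"] + flag_rate["B"]
    flag_rate = {g: r * (1 + r / total) for g, r in flag_rate.items()}
    gap = flag_rate["B"] - flag_rate["A"]
    print(f"cycle {cycle + 1}: flag-rate gap between groups = {gap:.3f}")
```

Under this toy update rule the gap doubles every cycle (0.02 → 0.04 → 0.08 → …). Real systems are messier, but the qualitative point holds: without intervention, model outputs that shape future training data amplify whatever disparity they start with.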
Why it matters in financial services
Bias in AI is a primary regulatory concern in financial services. The CFPB, OCC, Federal Reserve, FDIC, and DOJ have all signaled that AI-driven bias, particularly in lending, is a top enforcement priority.
The regulatory framework is clear:
- Fair lending laws apply to AI. The Equal Credit Opportunity Act (ECOA) and Fair Housing Act prohibit discrimination in lending, regardless of whether the discrimination is intentional or results from a facially neutral model. AI models that produce disparate impact are subject to enforcement.
- UDAP/UDAAP exposure. AI systems that produce unfair outcomes for consumers, even outside of lending, can trigger Unfair, Deceptive, or Abusive Acts or Practices enforcement.
- Model risk management. SR 11-7 and OCC guidance require institutions to assess model limitations, which includes bias. Bias testing should be part of model validation.
- Adverse action requirements. When AI models are used for credit decisions, institutions must provide specific and accurate reasons for adverse actions. Black-box models that cannot explain their decisions create compliance risk.
Bias is not limited to consumer-facing models. Internal AI systems (employee screening, resource allocation, risk scoring) can also produce biased outcomes that create legal, ethical, and reputational risk.
Key considerations for compliance teams
- Test for bias before deployment. Every AI model that affects customers or employees should undergo bias testing as part of validation. Test for disparate impact across protected classes.
- Use representative training data. Assess whether training data represents the full population the model will serve. Document data composition and any known limitations.
- Monitor for bias in production. Bias can emerge or change after deployment. Implement ongoing monitoring that tracks model outcomes across demographic groups.
- Document bias testing methodology. Maintain detailed records of how bias was assessed, what metrics were used, what was found, and what actions were taken.
- Establish remediation procedures. Define what happens when bias is detected: who is notified, what the investigation process is, what corrective actions are available, and what thresholds trigger model shutdown.
- Train model developers and validators. Ensure the people building and validating AI models understand bias concepts, testing techniques, and regulatory requirements.
- Include bias in vendor due diligence. For third-party AI models, request bias testing documentation and conduct independent bias assessments where possible.
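One common starting point for the disparate-impact testing described above is the four-fifths (80%) rule from the EEOC's Uniform Guidelines: flag any group whose selection rate falls below 80% of the highest-rated group's. It is a screening heuristic, not a complete fairness analysis, and the group names and approval counts below are hypothetical:

```python
# Hypothetical (approved, total applicants) counts per group.
approvals = {"group_x": (480, 1000), "group_y": (300, 800)}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
reference = max(rates.values())  # highest approval rate as the benchmark

for group, rate in rates.items():
    ratio = rate / reference  # "adverse impact ratio" against the benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.1%}, impact ratio {ratio:.2f} [{flag}]")
```

A full validation would go further (statistical significance tests, additional fairness metrics, intersectional groups), but a simple ratio screen like this is a reasonable first gate in model validation and ongoing monitoring.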