AI drift/decay
Official Definition
The tendency for an AI model’s performance to degrade over time when deployed in a real-world setting whose conditions differ from those present during training and testing.
Source: The Treasury’s AIEOG Lexicon (Feb 2026), adapted from ISO/IEC 12792:2025 and NIST AI 100-1
What AI drift/decay means in plain language
AI drift (also called model decay or model degradation) is what happens when an AI model that performed well during development starts producing less accurate or less reliable results after deployment. The model itself has not changed, but the world around it has.
Consider a fraud detection model trained on transaction patterns from 2023. By 2025, customer behavior has shifted, new payment methods have emerged, and fraudsters have adapted their techniques. The model is still applying 2023 logic to 2025 reality. Its accuracy degrades not because anything broke, but because the conditions it was trained for no longer match the conditions it operates in.
Drift can be gradual or sudden. A slow shift in customer demographics over months is gradual drift. A rapid change in market conditions (a new payment rail launching, a regulatory change affecting transaction flows) can cause sudden drift. Both are problems, and they evade detection in different ways: gradual drift can creep past threshold-based alerts because each incremental change is small, while sudden drift can do real damage in the window before monitoring flags it or the next review cycle catches it.
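To make the timing point concrete, here is a minimal sketch of a rolling-window performance check and how it responds to the two drift patterns. It assumes a daily accuracy score is already being logged; the function name, window size, and thresholds are illustrative, not recommendations.

```python
import numpy as np

def rolling_drift_alerts(daily_accuracy, baseline, window=14, max_drop=0.05):
    """Flag days where the rolling-mean accuracy falls more than
    `max_drop` below the accuracy measured at validation time."""
    scores = np.asarray(daily_accuracy, dtype=float)
    alerts = []
    for day in range(window, len(scores) + 1):
        rolling_mean = scores[day - window:day].mean()
        if baseline - rolling_mean > max_drop:
            alerts.append((day, round(float(rolling_mean), 3)))
    return alerts

rng = np.random.default_rng(0)
# Gradual drift: accuracy erodes ~0.1 percentage points per day from day 0.
gradual = 0.95 - 0.001 * np.arange(180) + rng.normal(0, 0.005, 180)
# Sudden drift: a step change at day 120 (e.g. a new payment rail launches).
sudden = np.where(np.arange(180) < 120, 0.95, 0.88) + rng.normal(0, 0.005, 180)

print(rolling_drift_alerts(gradual, baseline=0.95)[:1])  # fires ~day 60, weeks after onset
print(rolling_drift_alerts(sudden, baseline=0.95)[:1])   # fires ~one window after day 120
```

Either way, the alert arrives after the harm has started; the monitoring design determines how long that gap is.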
There are two primary types of drift that compliance teams should understand:
- Data drift. The statistical properties of the input data change over time. The model receives data that looks different from what it was trained on (see the detection sketch after this list).
- Concept drift. The relationship between inputs and the correct output changes. What constituted a “suspicious” transaction pattern in 2023 may look different in 2025.
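A minimal data drift check compares a live feature’s distribution against its training-time baseline with a standard two-sample test. The sketch below assumes the training data is still accessible; the feature, function name, and p-value cutoff are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, live_values, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test: a small p-value means the live
    values are unlikely to come from the training-era distribution."""
    result = ks_2samp(train_values, live_values)
    return {"ks_stat": result.statistic,
            "p_value": result.pvalue,
            "drifted": result.pvalue < p_threshold}

rng = np.random.default_rng(42)
# Training-era transaction amounts vs. live amounts after a behavioral shift.
train_amounts = rng.lognormal(mean=3.0, sigma=1.0, size=5000)
live_amounts = rng.lognormal(mean=3.4, sigma=1.1, size=5000)
print(check_feature_drift(train_amounts, live_amounts))  # flags the shift

# Concept drift, by contrast, is invisible from inputs alone: detecting it
# requires labeled outcomes, e.g. precision/recall on confirmed fraud cases.
```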
Why it matters in financial services
AI drift is one of the most common and underappreciated risks in financial services AI deployment. Models that drift silently can produce real harm: missed fraud, incorrect risk ratings, biased lending decisions, or failed compliance obligations.
Regulators expect institutions to actively monitor model performance after deployment. The OCC’s Model Risk Management guidance explicitly requires ongoing monitoring to confirm that models continue to perform as intended. The Federal Reserve’s SR 11-7 sets similar expectations. Drift that goes undetected and uncorrected is a governance failure.
Specific regulatory and operational risks include:
- BSA/AML exposure. A transaction monitoring model experiencing drift may fail to detect suspicious activity, exposing the institution to regulatory enforcement and financial crime risk.
- Fair lending violations. A credit model that drifts may begin producing disparate outcomes across protected classes, creating fair lending exposure that did not exist at the time of initial validation.
- Exam findings. Examiners routinely ask about model performance monitoring. Institutions that cannot demonstrate they track drift metrics and have defined thresholds for action will face criticism.
- Operational losses. Beyond regulatory risk, drifting models can lead to incorrect decisions that directly impact the business: approving loans that should be denied, missing fraud that should be caught, or flagging legitimate activity that wastes investigator time.
Key considerations for compliance teams
- Establish drift monitoring from day one. Every deployed AI model should have defined performance metrics that are tracked continuously. Do not wait for a validation cycle to discover drift.
- Set performance thresholds. Define quantitative thresholds that trigger review, retraining, or model replacement, and record them in the model’s governance documentation (the sketch after this list shows one way to encode them).
- Monitor input data distributions. Track the statistical properties of incoming data and compare them to training data baselines. Significant shifts in data distributions are early indicators of drift.
- Schedule regular revalidation. Annual model validation is the minimum. High-risk models or models operating in rapidly changing environments may need more frequent validation cycles.
- Document drift incidents. When drift is detected, document the finding, root cause, impact assessment, and corrective action. This creates an audit trail that demonstrates active governance.
- Plan for model retraining. Have a defined process for retraining or replacing models when drift exceeds acceptable thresholds. This process should include updated validation, testing, and approval before redeployment.
- Report on drift to governance committees. Model performance metrics, including drift indicators, should be part of regular reporting to risk committees, compliance committees, and the board as appropriate.
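To show how such thresholds can be made operational, here is a minimal sketch pairing a Population Stability Index (PSI) check on input data with a documented action mapping. The 0.10/0.25 cutoffs are a common industry rule of thumb, not regulatory guidance, and every name and value here is illustrative.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training baseline ("expected") and live production
    data ("actual") for a single numeric feature."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip live values into the baseline range so every value lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)  # guard against empty bins
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def drift_action(psi, review_at=0.10, act_at=0.25):
    """Map a PSI score to a documented governance response
    (rule of thumb: <0.10 stable, 0.10-0.25 investigate, >0.25 act)."""
    if psi >= act_at:
        return "act: trigger the retraining/replacement workflow"
    if psi >= review_at:
        return "review: open a drift investigation and document it"
    return "ok: continue routine monitoring"

rng = np.random.default_rng(7)
baseline = rng.normal(650, 60, 10_000)  # e.g. credit scores at training time
live = rng.normal(620, 75, 10_000)      # live distribution has shifted
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f} -> {drift_action(psi)}")
```

In practice a check like this would run per feature on a schedule, and the returned actions would map directly to the review, retraining, and documentation steps above.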