The Treasury just released its first AI guidance for financial services. Here is what it means for your compliance program.
On February 23, 2026, the U.S. Treasury published the first two resources in a six-part suite designed to help financial institutions deploy AI securely and in line with regulatory expectations. The release includes an AI lexicon tailored to financial services and a financial-services-specific adaptation of the National Institute of Standards and Technology (NIST) AI Risk Management Framework, complete with a self-assessment questionnaire and a risk-to-controls mapping matrix.
This is a meaningful development. Not because it introduces new rules, but because it signals how regulators expect banks, fintechs, and other financial institutions to think about AI risk, and it gives compliance teams a practical starting point.
Here is what you need to know, and what you should do about it.
What the Treasury actually released
Two documents are out now, with four more expected by the end of February 2026:
- An AI lexicon for financial services. This defines commonly used AI and machine learning terms with specific relevance to banking and financial regulation. Think of it as a shared vocabulary so compliance, risk, technology, and business teams can stop talking past each other.
- A financial services adaptation of the NIST AI Risk Management Framework. This is the more substantial piece. It includes:
  - A self-assessment questionnaire to help institutions gauge their AI maturity
  - A matrix mapping AI-related risks to potential security controls
  - Practical guidance on implementing those controls
The Treasury was explicit: these resources focus on practical implementation, not prescriptive rules. The intent is to give institutions, especially small and mid-sized ones, clearer pathways to adopt AI technologies securely.
Why this matters for compliance teams
If you lead compliance, risk, or governance at a bank or fintech, this release is important for three reasons.
1. It creates a common language.
One of the biggest friction points in AI governance is that compliance, engineering, product, and executive teams often use the same terms to mean different things. “Model,” “validation,” “bias,” and “explainability” carry different weight depending on who is in the room. A standardized lexicon backed by the Treasury gives compliance officers a reference point when engaging with technical teams or board members.
2. It connects AI risk to existing risk management frameworks.
The NIST AI RMF is not new. What is new is a version specifically mapped to financial services risk and control structures. For institutions already managing model risk under SR 11-7 or OCC 2011-12, this adaptation provides a bridge between traditional model risk management and the broader category of AI governance, which includes machine learning models, large language models, and agentic systems that may not fit neatly into legacy MRM programs.
3. It sets the tone for what examiners will expect.
Guidance documents like these have a way of becoming the baseline for supervisory expectations. Even though the Treasury framed these as voluntary resources, compliance teams should treat them as a preview of what examiners will reference. If your institution is deploying AI in any capacity (fraud detection, credit decisioning, customer service automation, compliance monitoring), having a documented assessment against this framework strengthens your position in future exams.
What compliance teams should do now
You do not need to overhaul your program overnight. But there are concrete steps worth taking before the remaining four resources drop.
Benchmark your current state. Use the self-assessment questionnaire to evaluate where your institution stands. Be honest. The point is not to score well. The point is to identify gaps before an examiner does.
Align your internal vocabulary. Circulate the AI lexicon to your risk, technology, and product teams. If your organization has been using informal or inconsistent definitions for AI-related terms, adopt the Treasury’s definitions as a baseline. Consistent language reduces miscommunication and strengthens documentation.
Map AI use cases to the risk-controls matrix. Start by inventorying every AI or ML tool your institution currently uses or plans to deploy. Then map each use case against the risk-controls matrix in the NIST adaptation. This exercise alone will surface gaps in oversight, documentation, or control design.
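For teams that want to track this exercise in code rather than a spreadsheet, the inventory-and-mapping step can be sketched as a small script. Everything below is a hypothetical illustration: the risk categories, control IDs, and use cases are placeholders, not taken from the Treasury's actual matrix.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One AI/ML deployment in the institution's inventory."""
    name: str
    owner: str                  # accountable business unit
    risks: list[str]            # risk categories this use case touches
    controls: list[str] = field(default_factory=list)  # mapped control IDs

# Hypothetical risk-to-controls mapping, standing in for the
# Treasury/NIST matrix (real categories and control IDs will differ).
RISK_CONTROLS = {
    "data_leakage": ["CTRL-01: data classification", "CTRL-02: access logging"],
    "model_drift":  ["CTRL-03: ongoing performance monitoring"],
    "vendor_risk":  ["CTRL-04: third-party due diligence"],
}

def map_controls(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Attach expected controls to each use case; unmapped risks surface as gaps."""
    for uc in inventory:
        for risk in uc.risks:
            uc.controls.extend(
                RISK_CONTROLS.get(risk, [f"GAP: no control mapped for '{risk}'"])
            )
    return inventory

inventory = [
    AIUseCase("Fraud detection model", "Payments", ["model_drift", "data_leakage"]),
    AIUseCase("Chatbot (vendor LLM)", "Customer Service", ["vendor_risk", "hallucination"]),
]

for uc in map_controls(inventory):
    print(f"{uc.name} -> {uc.controls}")
```

The point of the sketch is the gap line: any risk in the inventory with no corresponding control prints as an explicit `GAP` entry, which is exactly the output this exercise should produce for your remediation plan.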
Connect AI governance to your broader CMS. AI governance does not exist in a vacuum. It touches data governance, third-party risk management (if you are using vendor-built AI tools), testing and monitoring, and board oversight. If your Compliance Management System does not already include AI and model governance as a defined pillar, now is the time to add it.
Document everything. The Treasury’s focus on practical implementation means examiners will look for evidence of action, not just awareness. Meeting minutes, risk assessments, control testing results, and remediation plans tied to AI deployments all matter.
What this does not cover (yet)
The Treasury’s guidance is a strong starting point, but it is not comprehensive. A few areas that compliance teams should continue to monitor:
- Consumer protection implications. The guidance focuses on cybersecurity and secure deployment. It does not directly address UDAAP risks, fair lending implications, or adverse action requirements related to AI-driven decisioning. Those obligations remain governed by existing regulations and supervisory expectations.
- Third-party AI risk. Many institutions rely on vendor-built AI tools. The guidance does not yet address how to assess, monitor, or manage AI risk introduced through third-party relationships. Your third-party risk management (TPRM) program needs to account for this independently.
- Agentic AI and autonomous systems. The current framework is oriented toward traditional ML models and large language models. As agentic AI systems become more prevalent in financial services, expect additional guidance on oversight, auditability, and accountability for autonomous decision-making.
The bigger picture
This release is part of a broader pattern. Regulators across agencies are moving from abstract AI principles toward operational guidance. The institutions that will be best positioned are the ones building AI governance into their compliance infrastructure now, not waiting for a formal rule.
The Treasury’s resources lower the barrier. They give compliance teams a framework, a vocabulary, and a self-assessment tool. The work that remains is making it real inside your organization: aligning teams, documenting controls, testing outcomes, and building the evidence trail that demonstrates your program works in practice, not just on paper.
AI adoption in financial services is accelerating. The compliance function that keeps pace is the one that treats AI governance as a core operational discipline, not a side project.

