AI Agent
Official Definition
A system that autonomously perceives its environment, decides what to do, and takes actions to achieve its goals.
Source: AIEOG AI Lexicon (Feb 2026), adapted from doi.org/10.3390/fi17090404
What AI agent means in plain language
An AI agent is a software system that operates with a degree of independence. It observes what is happening around it (its environment), decides on a course of action, and then executes that action to achieve a defined objective. The key word is “autonomously.” An AI agent does not wait for step-by-step instructions from a human operator. It makes decisions on its own within the boundaries it has been given.
The concept of an AI agent is closely related to agentic AI, but the distinction matters. “AI agent” refers to a specific system or instance, while “agentic AI” describes the broader category of technology. An organization might deploy multiple AI agents, each designed for a specific purpose, and collectively they represent the organization’s use of agentic AI.
In financial services, AI agents are beginning to appear in areas like alert triage (an agent that reviews transaction monitoring alerts and prepares initial case summaries), customer onboarding (an agent that collects and verifies identity documents), and regulatory reporting (an agent that pulls data from multiple systems and prepares draft filings).
Why it matters in financial services
AI agents represent a shift from AI as a tool that assists human decision-making to AI as an actor that participates in decision-making processes. This shift has significant implications for regulated institutions.
Regulatory frameworks were designed around human decision-makers. When an AI agent takes an action, such as flagging a transaction, recommending a risk rating, or generating a compliance report, the institution remains fully responsible for that action. The agent’s decision is the institution’s decision in the eyes of regulators.
This creates several challenges:
- Explainability requirements. Institutions must be able to explain why an AI agent took a specific action. If an examiner asks why a particular alert was dispositioned a certain way, “the AI agent decided” is not an acceptable answer.
- Testing and validation. AI agents need to be validated just like any other model used in a decision-making capacity. This means pre-deployment testing, ongoing performance monitoring, and periodic revalidation.
- Access control. AI agents that interact with sensitive systems (customer data, regulatory filing platforms, transaction systems) need the same access controls and permissions governance that would apply to a human employee in a similar role.
- Change management. When an AI agent’s behavior changes (due to model updates, data shifts, or configuration changes), those changes need to be tracked, documented, and assessed for impact.
Key considerations for compliance teams
- Inventory all AI agents. Maintain a current list of every AI agent operating in your organization, including its purpose, data access, decision authority, and the human responsible for its oversight.
- Classify agents by risk tier. Not all AI agents carry the same risk. An agent that summarizes meeting notes is fundamentally different from one that triages BSA alerts. Risk-tier each agent and apply governance proportionate to the risk.
- Define the agent’s decision authority. For each AI agent, clearly document what decisions it can make independently, what requires human review, and what actions are prohibited.
- Require audit-ready logging. Every AI agent should generate logs that capture inputs received, decisions made, actions taken, and outcomes produced. These logs should be retained according to your organization’s record retention policies.
- Assign a human owner. Every AI agent needs a named individual who is responsible for its performance, compliance, and governance. This person should review agent activity regularly and have the authority to modify or shut down the agent.
- Include agents in your incident response plan. If an AI agent malfunctions, makes an incorrect decision, or is compromised, your incident response plan should include procedures for identifying, containing, and remediating the issue.
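The inventory and logging practices above can be sketched as simple data structures. This is a minimal illustration only; the field names (`agent_id`, `risk_tier`, `human_owner`, and so on) are assumptions chosen to mirror the bullet points, not an industry standard schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical inventory record capturing the governance fields described
# above: purpose, data access, decision authority, risk tier, human owner.
@dataclass
class AgentRecord:
    agent_id: str
    purpose: str
    risk_tier: str           # e.g. "low", "medium", "high"
    data_access: list        # systems the agent may read or write
    decision_authority: str  # what the agent may decide without review
    human_owner: str         # named individual accountable for the agent

# Hypothetical audit-log entry: inputs received, decision made, action
# taken, outcome produced, with a UTC timestamp for retention purposes.
@dataclass
class AuditLogEntry:
    agent_id: str
    inputs: dict
    decision: str
    action_taken: str
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

agent = AgentRecord(
    agent_id="alert-triage-01",
    purpose="Reviews transaction monitoring alerts and drafts case summaries",
    risk_tier="high",
    data_access=["transaction_monitoring", "customer_profiles"],
    decision_authority="May summarize and prioritize; disposition requires human review",
    human_owner="J. Smith, BSA Officer",
)

entry = AuditLogEntry(
    agent_id=agent.agent_id,
    inputs={"alert_id": "A-1001"},
    decision="Prioritized as high urgency",
    action_taken="Drafted case summary for analyst review",
    outcome="Pending human disposition",
)

# Serialize for retention under the organization's record-retention policy.
print(json.dumps(asdict(entry)))
```

Even a lightweight structure like this makes it straightforward to answer an examiner's question about why a specific alert was handled a certain way, because each log entry ties inputs, decision, and outcome back to a specific inventoried agent and its accountable owner.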
