Artificial general intelligence (AGI)

Official Definition

The currently hypothetical level of AI capability able to understand or learn any intellectual task that a human being can. It is an AI system that can perform across diverse cognitive domains with versatility and proficiency, rather than being limited to a narrow task or domain.

Source: AIEOG AI Lexicon (Feb 2026), adapted from arXiv:2510.18212 and DOI:10.2478/jagi-2014-0001

What artificial general intelligence means in plain language

Artificial general intelligence (AGI) is the idea of an AI system that can learn and perform any intellectual task a human can. Unlike today’s AI systems, which excel at specific tasks (translating text, scoring transactions, generating images) but struggle outside their training domain, AGI would be able to apply intelligence flexibly across any domain.

AGI does not exist today. The definition explicitly states it is “currently hypothetical.” This is an important point for compliance and risk professionals: while AGI is a concept worth understanding, it should not be confused with the AI systems that financial institutions are actually deploying.

The distinction matters because the governance challenges of current AI systems (narrow AI) are different from the theoretical challenges of AGI. Today’s compliance work focuses on governing specific, bounded AI applications. AGI, if it ever arrives, would raise fundamentally different questions about control, accountability, and oversight.

Why it matters in financial services

AGI is included in the AIEOG Lexicon not because financial institutions are deploying it, but because the term appears frequently in industry discussions, vendor marketing, and public discourse about AI. Compliance professionals need to understand what AGI is (and is not) to:

  • Evaluate vendor claims accurately. No vendor is selling AGI today. Claims that suggest AGI-level capabilities should be scrutinized carefully. Understanding the difference helps compliance and procurement teams ask better questions.
  • Frame risk appropriately. The risks of narrow AI (bias, drift, explainability, adversarial attacks) are concrete and actionable. The risks of AGI are speculative. Governance resources should be directed toward the risks that exist today.
  • Participate in strategic conversations. Board members, executives, and technology leaders may raise questions about AGI. Compliance professionals who understand the concept can provide informed perspective on what it means (or does not mean) for the institution’s risk posture.
  • Monitor regulatory developments. Some regulatory and policy discussions reference AGI, particularly in the context of international AI governance frameworks (EU AI Act, G7 Hiroshima Process). Awareness of the concept helps compliance teams follow these developments.

Key considerations for compliance teams

  1. Focus governance on current AI capabilities. Direct governance resources toward the narrow AI systems your institution is actually using, not toward hypothetical AGI scenarios.
  2. Scrutinize AGI-adjacent vendor claims. When vendors describe their products using language like “general intelligence” or “human-level reasoning,” push for specifics about what the system actually does and its documented limitations.
  3. Stay informed. Monitor developments in AGI research and policy discussions as background knowledge, but do not build governance frameworks around capabilities that do not yet exist.
  4. Educate stakeholders. Help board members and executives understand the difference between current AI capabilities and AGI to support informed decision-making.
  5. Prepare for capability evolution. While AGI may not exist today, AI capabilities are advancing rapidly. Build governance frameworks that can scale and adapt as AI systems become more capable.
