General purpose AI

Official Definition

An AI model that, with or without modification, can be used for a wide range of applications, for which it was not intentionally and specifically designed.

Source: AIEOG AI Lexicon (Feb 2026), EU Artificial Intelligence Act, Article 3

What general purpose AI means in plain language

General purpose AI (GPAI) is an AI model designed to perform many different tasks rather than being built for a single, specific application. Unlike a fraud detection model or a credit scoring model built for one purpose, a GPAI model can be applied to text generation, summarization, classification, translation, code writing, and many other tasks.

The EU AI Act definition is particularly relevant because it establishes a regulatory category for these models. Under the EU framework, providers of GPAI models face specific obligations around transparency, documentation, and risk management that apply regardless of how the models are ultimately used.

The term is related to but distinct from “foundation model” and “artificial general intelligence.” A GPAI model is versatile across tasks but still operates within the bounds of its training. It is not AGI, which implies human-level intelligence across all domains.

Why it matters in financial services

GPAI models are increasingly used in financial services for tasks like document processing, customer service, regulatory analysis, and report generation. Governing them raises challenges that single-purpose models do not:

  • Use case proliferation. A single GPAI model can be applied to dozens of use cases across the organization, each carrying different risk profiles.
  • Scope ambiguity. Because GPAI models can do many things, defining the boundaries of acceptable use requires explicit policy and enforcement.
  • EU regulatory obligations. For institutions with EU operations, the EU AI Act’s GPAI provisions may apply, requiring specific documentation and risk management.

Key considerations for compliance teams

  1. Inventory all GPAI use cases. A single GPAI model may support multiple use cases. Document each one individually in your AI inventory.
  2. Apply use-case-specific governance. Govern each application based on its specific risk profile, not the model’s general capabilities.
  3. Establish acceptable use policies. Define what the GPAI model can and cannot be used for within the organization.
  4. Assess EU AI Act applicability. Determine whether EU GPAI obligations apply to your institution.
  5. Monitor for unauthorized use. GPAI models can be used in ways not anticipated by governance. Implement monitoring for unapproved applications.
  6. Require vendor compliance. For third-party GPAI models, ensure the provider meets applicable documentation and transparency requirements.
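
The inventory and use-case-specific governance steps above can be sketched as a minimal data structure. This is an illustrative assumption, not a regulatory schema: the field names, risk tiers, and class names are invented for the example, and a real AI inventory would follow the institution's own risk taxonomy.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers; a real inventory should use the
# institution's own risk taxonomy.
RISK_TIERS = ("low", "medium", "high", "prohibited")

@dataclass
class GpaiUseCase:
    """One documented application of a GPAI model (hypothetical schema)."""
    name: str
    business_owner: str
    risk_tier: str
    approved: bool = False

@dataclass
class GpaiModelRecord:
    """Inventory entry for a single GPAI model and its use cases."""
    model_name: str
    provider: str
    use_cases: list = field(default_factory=list)

    def register_use_case(self, use_case: GpaiUseCase) -> None:
        # Each use case is documented individually (step 1) with its
        # own risk tier (step 2).
        if use_case.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {use_case.risk_tier}")
        self.use_cases.append(use_case)

    def unapproved(self) -> list:
        # Supports monitoring for applications not yet cleared by
        # governance (step 5).
        return [uc for uc in self.use_cases if not uc.approved]

# Example: one model supporting two use cases with different risk profiles.
record = GpaiModelRecord("example-llm", "Example Vendor Inc.")
record.register_use_case(
    GpaiUseCase("document summarization", "Operations", "low", approved=True))
record.register_use_case(
    GpaiUseCase("credit decision drafting", "Lending", "high"))
print([uc.name for uc in record.unapproved()])  # prints ['credit decision drafting']
```

The point of the sketch is that governance attaches to each use case, not to the model: the same model record carries a low-risk approved application and a high-risk pending one.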

