Foundation models

Official Definition

Models trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks.

Source: AIEOG AI Lexicon (Feb 2026), adapted from arXiv:2108.07258

What foundation models mean in plain language

Foundation models are large-scale AI models trained on vast amounts of general data that serve as a base for many different applications. Instead of training a separate model from scratch for each task, organizations start with a foundation model and adapt it to their specific needs through fine-tuning or prompting.

The most well-known examples are large language models (GPT, Claude, Gemini) trained on broad text data, but foundation models also exist for images, code, audio, and multimodal inputs. Their defining characteristic is generality: they learn broad representations that transfer across many tasks.

For financial institutions, foundation models create a new governance dynamic. The institution may fine-tune or prompt a foundation model for a specific use case, but the base model was built by a third party using data and methods the institution has limited visibility into.

Why it matters in financial services

Foundation models are entering financial services rapidly through vendor products, cloud APIs, and internal experimentation. Their governance challenges differ from those of traditional models:

  • Limited transparency. Institutions often have limited insight into how foundation models were trained, what data was used, and what biases or limitations exist.
  • Third-party dependency. Foundation models are typically provided by a small number of large technology companies, creating concentration risk.
  • Dual risk surface. Risk exists in both the foundation model itself and in how the institution adapts and deploys it.
  • Emergent capabilities. Foundation models can exhibit unexpected capabilities that create both opportunity and risk.

Key considerations for compliance teams

  1. Assess foundation model risk. Evaluate the risks specific to each foundation model in use, including data provenance, known biases, and limitations.
  2. Validate fine-tuned models. When adapting foundation models for specific use cases, validate the fine-tuned version as you would any other model.
  3. Require provider documentation. Request model cards, safety evaluations, and known limitation documentation from foundation model providers.
  4. Monitor for provider changes. Foundation model providers frequently update their models. Establish processes to detect and assess provider-side changes.
  5. Assess concentration risk. Track your institution’s dependency on specific foundation model providers and develop contingency plans.
  6. Include in AI governance. Foundation models and their adaptations should be documented in your AI use case inventory.
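Steps 4 and 6 above can be combined in practice: if the AI use case inventory records the model version that was approved at validation time, provider-side changes can be flagged automatically. The sketch below is a minimal illustration of that idea; the record schema, field names, and the source of the "currently served" model IDs are all hypothetical assumptions, not a reference to any real provider API.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One record in an AI use case inventory (hypothetical schema)."""
    use_case: str
    provider: str
    pinned_model_id: str  # model version approved at validation time

def detect_provider_drift(inventory, observed_ids):
    """Flag use cases whose live model ID no longer matches the
    validated version. `observed_ids` maps provider -> the model ID
    currently served (however the institution obtains it)."""
    drifted = []
    for entry in inventory:
        live = observed_ids.get(entry.provider)
        if live is not None and live != entry.pinned_model_id:
            drifted.append((entry.use_case, entry.pinned_model_id, live))
    return drifted

# Example: one provider has silently moved to a new model version.
inventory = [
    InventoryEntry("kyc-screening", "vendor-a", "model-v1.2"),
    InventoryEntry("fraud-scoring", "vendor-b", "model-2024-01"),
]
observed = {"vendor-a": "model-v1.3", "vendor-b": "model-2024-01"}
print(detect_provider_drift(inventory, observed))
# -> [('kyc-screening', 'model-v1.2', 'model-v1.3')]
```

A drift hit like the one above would typically trigger the step-2 workflow: revalidate the adapted model against the new base version before continuing to rely on it.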

