Transfer learning

Official Definition

A machine learning technique where a model trained on one task or domain is adapted for use on a different but related task or domain, leveraging previously learned knowledge.

Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1 and arXiv:1911.02685

What transfer learning means in plain language

Transfer learning is the practice of taking an AI model that was trained for one purpose and adapting it for a different, related purpose. Instead of training a new model from scratch, you start with a model that has already learned useful patterns from a large dataset and fine-tune it for your specific task.

For example, a language model trained on general internet text can be fine-tuned on regulatory documents to create a specialized model that understands compliance terminology. A computer vision model trained on general image recognition can be adapted for document verification or check image processing.

Transfer learning is the mechanism behind the widespread adoption of foundation models. Most organizations do not train foundation models from scratch — they take existing models and adapt them through fine-tuning, prompt engineering, or other transfer learning techniques.
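The core idea — reuse what a source model has already learned and train only a small task-specific component — can be sketched in a few lines. The example below is a minimal illustration, not a production method: the "pretrained" feature extractor, the synthetic data, and all names are hypothetical stand-ins. The frozen extractor plays the role of the source model; only the new head is updated during adaptation.

```python
# Minimal sketch of transfer learning: reuse a "pretrained" feature
# extractor unchanged and train only a small new head on task data.
# The extractor, data, and all names here are hypothetical.
import math
import random

random.seed(0)

def pretrained_features(x):
    # Stand-in for the source model's frozen layers: maps raw input
    # to features and is NOT updated during adaptation.
    return [math.tanh(x), math.tanh(2.0 * x)]

# Tiny synthetic task: label is 1 when x > 0, else 0.
data = [(x / 10.0, 1 if x > 0 else 0) for x in range(-10, 11)]

# New task-specific head: logistic regression trained from scratch.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        f = pretrained_features(x)  # frozen: no update flows into it
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y
        # Gradient step on the head parameters only.
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]
        b -= lr * err

def predict(x):
    f = pretrained_features(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"head-only accuracy: {accuracy:.2f}")
```

In practice the frozen extractor would be a large pretrained network and the head a new output layer, but the governance point is the same: everything the extractor learned, including its flaws, carries over into the adapted model.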

Why it matters in financial services

Transfer learning makes advanced AI capabilities accessible to financial institutions without requiring massive training datasets or computational resources. However, it introduces governance considerations:

  • Inherited characteristics. The adapted model inherits the biases, limitations, and knowledge of the source model. Transfer learning does not eliminate problems present in the original model.
  • Domain gap. The source model was trained on different data for a different purpose. The transfer to financial services may not be seamless.
  • Validation requirements. The adapted model must be validated for its specific use case, not just assumed to perform well because the source model was capable.
  • Provenance tracking. Institutions must document the chain from source model to adapted model, including what was changed and why.

Key considerations for compliance teams

  1. Validate the adapted model. Do not assume that a capable source model produces a capable adapted model. Validate for your specific use case.
  2. Assess inherited biases. Evaluate whether biases from the source model persist in the adapted version.
  3. Document the adaptation process. Record the source model, the adaptation method, the adaptation data, and the validation results.
  4. Test domain-specific performance. Verify that the adapted model performs well on financial services terminology, concepts, and scenarios.
  5. Monitor post-deployment. Track the adapted model's performance over time, as it may drift or degrade differently from a model trained from scratch.
  6. Assess provider dependencies. If the source model is updated by its provider, understand how those changes affect your adapted model.
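The documentation items above can be captured in a structured provenance record. The sketch below is illustrative only: the field names and example values are hypothetical, not a regulatory standard, and an institution would align them with its own model risk documentation requirements.

```python
# Hypothetical provenance record for an adapted model, capturing the
# documentation items listed above. Field names and values are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AdaptationRecord:
    source_model: str        # model and version the adaptation started from
    adaptation_method: str   # e.g. "full fine-tuning", "LoRA", "prompting"
    adaptation_data: str     # description or reference of the tuning dataset
    validation_results: dict # use-case-specific validation metrics
    bias_assessment: str     # outcome of the inherited-bias evaluation
    provider_dependency: str # how upstream source-model updates are handled

record = AdaptationRecord(
    source_model="example-base-llm v2.1",  # hypothetical source model
    adaptation_method="full fine-tuning",
    adaptation_data="internal compliance corpus, 2025-Q4 snapshot",
    validation_results={"accuracy": 0.94, "fairness_gap": 0.02},
    bias_assessment="no material disparity on tested cohorts",
    provider_dependency="pinned to provider release v2.1; reviewed quarterly",
)

# Serialize for an audit trail or model inventory entry.
print(json.dumps(asdict(record), indent=2))
```

Keeping this record alongside the model inventory makes the chain from source model to adapted model auditable, which is the point of consideration 3 above.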

