Federated learning
Official Definition
A machine learning approach where a model is trained across multiple decentralized devices or servers, each holding local data samples, without exchanging the raw data itself.
Source: AIEOG AI Lexicon (Feb 2026), adapted from doi.org/10.1016/j.cosrev.2021.100380
What federated learning means in plain language
Federated learning is a method for training AI models across multiple organizations or locations without requiring any of them to share their raw data. Instead of collecting all data into a central location, the model travels to where the data is. Each participant trains the model on their local data, and only the model updates (not the data) are shared and combined.
This approach solves a fundamental tension in financial services: institutions want to benefit from AI models trained on large, diverse datasets, but regulatory, privacy, and competitive constraints prevent them from sharing sensitive data.
In federated learning, no participant sees another's data. Each institution trains a local copy of the model on its own data and sends only the model parameter updates to a central server. The server combines the updates into an improved global model and distributes it back to all participants, and the cycle repeats.
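The round described above can be sketched in a few lines. This is a minimal illustration, assuming the simplest common aggregation rule (federated averaging, where client weights are averaged in proportion to local dataset size); the clients, data, and single-parameter model are all hypothetical.

```python
# Minimal federated-averaging sketch: each client fits y = w * x on its
# own private data, and only the updated weight (never the data) is sent
# to the server. All names and data are illustrative.

def local_train(w, data, lr=0.01, epochs=5):
    """One client's local step: gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(updates, sizes):
    """Server step: combine client weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total

# Two institutions with private datasets drawn from roughly y = 3x
client_data = [
    [(1.0, 3.1), (2.0, 5.9)],                 # institution A
    [(1.5, 4.4), (3.0, 9.2), (2.5, 7.4)],     # institution B
]

w_global = 0.0
for _ in range(20):  # federated rounds: local training, then aggregation
    updates = [local_train(w_global, d) for d in client_data]
    w_global = fed_avg(updates, [len(d) for d in client_data])

print(round(w_global, 2))  # learned slope, close to the true value 3
```

Note that the server never touches either institution's `(x, y)` pairs; it only ever sees the trained weight each client returns.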
Why it matters in financial services
Federated learning is gaining attention in financial services for applications where data sharing is restricted but collective intelligence is valuable:
- Fraud detection consortiums. Multiple institutions can collectively train a fraud model without sharing customer transaction data, improving detection accuracy across the consortium.
- AML collaboration. Financial institutions can improve transaction monitoring without sharing suspicious activity report (SAR) data or customer information.
- Credit risk modeling. Lenders can build models on broader, more representative data without exposing individual borrower information.
Governance considerations unique to federated learning include the integrity of the aggregation step, detection of free riders (participants who benefit from the global model without contributing useful updates), verification that privacy guarantees actually hold, and intellectual property questions around the jointly trained model.
Key considerations for compliance teams
- Assess privacy guarantees. Model updates can still leak information about the underlying training data, so evaluate whether the implementation adds protections such as differential privacy or secure aggregation, and whether those are sufficient for the data involved.
- Establish governance agreements. Multi-party federated learning requires clear agreements on data handling, model ownership, liability, and exit provisions.
- Validate the aggregated model. The combined model should be validated as rigorously as any internally developed model.
- Monitor for poisoning attacks. Federated learning is vulnerable to participants submitting malicious model updates. Implement detection mechanisms.
- Document the architecture. Maintain detailed documentation of the federated learning setup, including participants, data types, model architecture, and aggregation method.
- Assess regulatory requirements. Evaluate whether federated learning satisfies applicable data privacy, sharing, and governance regulations.
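The poisoning concern above can be illustrated with one common mitigation: replacing the server's plain average with a coordinate-wise median, which is resistant to a minority of malicious updates. This is a hedged sketch of one robust-aggregation technique, not a complete defense; the client updates shown are invented for illustration.

```python
# Robust aggregation sketch: a coordinate-wise median ignores extreme
# values submitted by a minority of poisoned clients, where a plain
# mean would be pulled far off. Illustrative only.
import statistics

def median_aggregate(updates):
    """Combine per-client weight vectors by the median of each coordinate."""
    return [statistics.median(coords) for coords in zip(*updates)]

honest = [[0.9, 2.1], [1.1, 1.9], [1.0, 2.0]]   # three honest clients
poisoned = [[50.0, -50.0]]                       # one malicious update

combined = median_aggregate(honest + poisoned)
print(combined)  # stays near [1.0, 2.0] despite the outlier
```

A mean over the same four updates would land near `[13.25, -11.0]`, so even this simple change materially limits what a single bad participant can do; production systems typically layer on update norm clipping and anomaly detection as well.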