Large language model
Official Definition
A class of large, pre-trained AI models, typically based on the transformer architecture, designed to understand and generate text-based content.
Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1 and NIST AI 100-4
What large language model means in plain language
A large language model (LLM) is a type of AI model trained on vast amounts of text data to understand and generate human language. LLMs are the technology behind AI chatbots, text generation tools, and many other natural language applications.
The “large” refers to both the amount of data used for training and the number of parameters in the model. Modern LLMs have billions of parameters and are trained on datasets that encompass a significant portion of publicly available text.
LLMs work by predicting the most likely next word (or token) in a sequence. Through training on massive text corpora, they develop an ability to generate coherent, contextually appropriate text across a wide range of topics and tasks.
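As a toy illustration of next-token prediction, the sketch below uses a simple bigram counter. This is a drastic simplification, not how an LLM actually works internally, and the "training corpus" is invented; but it shows the same core task of learning which token is most likely to come next.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram counter standing in for a real LLM.
# The "training corpus" below is invented; real models learn the same
# next-token task with billions of parameters and vast text corpora.
corpus = "the model and the model and the model or the bank".split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str):
    """Return the most frequently observed token after `token`, if any."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" follows "the" most often in this corpus
```

A real LLM replaces the frequency table with a neural network that conditions on the entire preceding context, but the prediction target is the same.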
In financial services, LLMs are being deployed for customer service chatbots, document summarization, regulatory text analysis, contract review, compliance report drafting, knowledge management, and code generation.
Why it matters in financial services
LLMs present a distinctive governance profile, combining broad capability with specific risk factors:
- Hallucination. LLMs can generate plausible but false information, which is particularly dangerous in regulatory, legal, and financial contexts.
- Non-determinism. The same prompt can produce different responses, complicating reproducibility and audit.
- Data privacy. Sensitive information sent to LLM APIs may be processed or stored by third parties.
- Prompt injection. Adversaries can craft inputs that cause LLMs to bypass their intended constraints.
- Bias. LLMs can reflect and amplify biases present in their training data.
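The non-determinism risk above can be sketched in a few lines. Deployed LLMs usually sample from the predicted next-token distribution rather than always taking the single most likely token, so the same prompt can yield different outputs. The token names and scores below are invented for illustration.

```python
import math
import random

# Hypothetical next-token scores for some prompt (invented values).
logits = {"approved": 2.0, "denied": 1.5, "pending": 0.5}

def sample_token(logits, temperature=1.0, rng=random):
    # Softmax with temperature, shifted by the max logit for numerical
    # stability. Higher temperature flattens the distribution; as
    # temperature approaches 0, sampling collapses to the argmax.
    top = max(logits.values())
    weights = {t: math.exp((v - top) / temperature) for t, v in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, weight in weights.items():
        r -= weight
        if r <= 0:
            return token
    return token  # guard against floating-point rounding

# Same "prompt" (same logits), potentially different answers each run.
print([sample_token(logits) for _ in range(5)])

# Near-zero temperature is effectively deterministic (greedy decoding).
print(sample_token(logits, temperature=1e-6))  # "approved"
```

This is why reproducibility controls for LLMs often involve pinning the model version, logging prompts and outputs, and (where the provider supports it) fixing the temperature and random seed.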
Key considerations for compliance teams
- Establish LLM-specific policies. Define acceptable use cases, prohibited applications, and review requirements for LLM deployments.
- Implement output review. Require human review of LLM-generated content used in regulated or customer-facing contexts.
- Protect sensitive data. Establish policies governing what data can be sent to LLM providers.
- Test for hallucination. Validate LLM accuracy in your specific use cases and implement guardrails.
- Monitor for prompt injection. For LLMs exposed to external inputs, implement defenses against prompt injection.
- Include in AI governance. LLM deployments should be inventoried, risk-assessed, and monitored like any other AI system.
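To make the "protect sensitive data" point concrete, here is a minimal sketch of a pre-submission guard. This is an assumption about one possible control, not a standard or complete solution: real deployments would use dedicated PII/PCI detection tooling rather than a few regular expressions, and the patterns below are illustrative only.

```python
import re

# Illustrative redaction rules (hypothetical, not exhaustive). Order
# matters: the SSN pattern runs before the looser card-number pattern.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),     # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),   # card-like number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(prompt: str) -> str:
    """Return the prompt with obvious sensitive patterns replaced."""
    for pattern, replacement in REDACTION_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Customer 123-45-6789 (jane@example.com) disputed a charge."))
```

A guard like this would sit between internal users and the LLM provider, with redaction events logged so compliance teams can see what would otherwise have left the organization.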
