AI lifecycle

Official Definition

The set of phases an AI system goes through. These are plan and design, collect and process data, build and use model, verify and validate, deploy and use, and operate and monitor. These phases are often iterative and not necessarily sequential.

Source: AIEOG AI Lexicon (Feb 2026), adapted from NIST AI 100-1 and the OECD Framework for the Classification of AI Systems

What AI lifecycle means in plain language

The AI lifecycle is the complete journey an AI system takes from initial concept to retirement. It provides a structured way to think about every stage of an AI system’s existence, from the moment someone identifies a need for AI through ongoing operation and eventual decommissioning.

The AIEOG defines six phases:

  1. Plan and design. Identify the business problem, define objectives, determine whether AI is the appropriate solution, and design the system architecture.
  2. Collect and process data. Gather the data needed to build the model, clean and prepare it, assess data quality, and document data sources and lineage.
  3. Build and use model. Develop the AI model, select algorithms, train the model on prepared data, and tune parameters for optimal performance.
  4. Verify and validate. Test the model against defined requirements, assess accuracy and fairness, conduct independent validation, and document results.
  5. Deploy and use. Move the model into production, integrate it with operational systems, establish access controls, and begin using it for its intended purpose.
  6. Operate and monitor. Track model performance in production, monitor for drift, conduct periodic revalidation, and manage changes and updates.

The definition emphasizes that these phases are “often iterative and not necessarily sequential.” In practice, teams frequently cycle back through earlier phases. A model in production that experiences drift may require new data collection, retraining, and revalidation before returning to operation.
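The iterative structure can be made concrete as a small state machine. The sketch below is illustrative only: the phase names come from the AIEOG definition, but the specific transition map (which loops back are allowed) is an assumption for demonstration, not part of the definition.

```python
from enum import Enum

class Phase(Enum):
    PLAN_AND_DESIGN = 1
    COLLECT_AND_PROCESS_DATA = 2
    BUILD_AND_USE_MODEL = 3
    VERIFY_AND_VALIDATE = 4
    DEPLOY_AND_USE = 5
    OPERATE_AND_MONITOR = 6

# Forward transitions plus illustrative iterative loops back to
# earlier phases (an assumption; the AIEOG does not prescribe these).
ALLOWED_TRANSITIONS = {
    Phase.PLAN_AND_DESIGN: {Phase.COLLECT_AND_PROCESS_DATA},
    Phase.COLLECT_AND_PROCESS_DATA: {Phase.BUILD_AND_USE_MODEL,
                                     Phase.PLAN_AND_DESIGN},
    Phase.BUILD_AND_USE_MODEL: {Phase.VERIFY_AND_VALIDATE,
                                Phase.COLLECT_AND_PROCESS_DATA},
    Phase.VERIFY_AND_VALIDATE: {Phase.DEPLOY_AND_USE,
                                Phase.BUILD_AND_USE_MODEL},
    Phase.DEPLOY_AND_USE: {Phase.OPERATE_AND_MONITOR},
    # A drifting production model may cycle back to data collection
    # (retraining) or straight to revalidation.
    Phase.OPERATE_AND_MONITOR: {Phase.COLLECT_AND_PROCESS_DATA,
                                Phase.VERIFY_AND_VALIDATE},
}

def can_transition(current: Phase, nxt: Phase) -> bool:
    """True if moving from `current` to `nxt` is an allowed lifecycle step."""
    return nxt in ALLOWED_TRANSITIONS[current]
```

Note that the drift scenario described above corresponds to the loop from operate and monitor back to collect and process data.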

Why it matters in financial services

The AI lifecycle provides the structural foundation for AI governance. Each phase carries specific risks and requires specific controls. Financial institutions that govern AI on a phase-by-phase basis can apply proportionate oversight at each stage rather than treating governance as a single checkpoint.

Regulatory frameworks align closely with lifecycle thinking. The OCC’s Model Risk Management guidance addresses model development, validation, and ongoing use. The NIST AI RMF maps its functions (Govern, Map, Measure, Manage) across the lifecycle. Examiners evaluate whether institutions have controls at each stage, not just at deployment.

Common lifecycle governance gaps in financial institutions include:

  • Missing documentation in early phases. Design decisions and data selection rationale are often undocumented, making later validation and examination responses more difficult.
  • Insufficient pre-deployment validation. Models are deployed without independent validation or with validation that does not cover all intended use cases and populations.
  • Weak post-deployment monitoring. Institutions invest heavily in development but underinvest in the “operate and monitor” phase, allowing drift and degradation to go undetected.
  • No defined retirement process. Models are replaced or deactivated without formal decommissioning documentation, creating gaps in the audit trail.
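Closing the "operate and monitor" gap often starts with even a basic statistical drift check. A minimal sketch using the population stability index (PSI), a common drift metric in financial model monitoring; the bin values and the 0.25 threshold are illustrative assumptions, not requirements from any framework cited here:

```python
import math

def population_stability_index(expected: list[float],
                               actual: list[float]) -> float:
    """PSI between two binned score distributions (bin proportions)."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # clamp to avoid log(0)
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Illustrative numbers: score distribution at validation vs. in production.
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.05, 0.20, 0.30, 0.45]

# A common rule of thumb treats PSI > 0.25 as significant drift.
if population_stability_index(baseline, current) > 0.25:
    print("drift alert: trigger revalidation")
```

A check like this, run on a schedule against production scoring data, gives the "operate and monitor" phase a concrete, auditable output rather than relying on ad hoc review.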

Key considerations for compliance teams

  1. Map your governance controls to each lifecycle phase. Create a matrix that identifies the specific policies, procedures, roles, and artifacts required at each stage of the AI lifecycle.
  2. Require documentation at every phase. Each phase should produce defined documentation: design documents, data quality assessments, validation reports, deployment checklists, and monitoring dashboards.
  3. Establish phase gates. Define criteria that must be met before an AI system can advance to the next lifecycle phase. For example, a model cannot be deployed without an approved validation report.
  4. Assign ownership at each phase. Different teams may be responsible at different phases (data teams, model developers, validators, operations). Ensure handoffs are documented and accountability is clear.
  5. Include lifecycle requirements in vendor contracts. For third-party AI models, require vendors to provide lifecycle documentation and support ongoing monitoring and revalidation.
  6. Plan for iteration. Build processes that accommodate the iterative nature of the lifecycle. Retraining, revalidation, and redeployment should have defined, repeatable procedures.
  7. Define end-of-life criteria. Establish conditions under which an AI system should be retired or replaced, and document the decommissioning process.
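Considerations 1 through 3 above (a control matrix, per-phase documentation, and phase gates) can be combined into one simple check. The artifact names below are hypothetical examples, not a prescribed taxonomy; each institution would substitute its own required documents:

```python
# Hypothetical mapping of lifecycle phase -> required governance artifacts.
REQUIRED_ARTIFACTS = {
    "plan_and_design": {"design_document", "ai_suitability_assessment"},
    "collect_and_process_data": {"data_quality_assessment",
                                 "data_lineage_record"},
    "build_and_use_model": {"model_development_report"},
    "verify_and_validate": {"validation_report"},
    "deploy_and_use": {"deployment_checklist", "approved_validation_report"},
    "operate_and_monitor": {"monitoring_dashboard", "revalidation_schedule"},
}

def missing_artifacts(phase: str, produced: set[str]) -> set[str]:
    """Artifacts still required before this phase gate can be passed."""
    return REQUIRED_ARTIFACTS[phase] - produced

def gate_passed(phase: str, produced: set[str]) -> bool:
    """A system advances only when every required artifact exists."""
    return not missing_artifacts(phase, produced)
```

For example, `gate_passed("deploy_and_use", {"deployment_checklist"})` returns `False` until an approved validation report is also on file, which enforces the phase-gate rule that a model cannot be deployed without one.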

