Deepfake

Official Definition

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.

Source: AIEOG AI Lexicon (Feb 2026), EU Artificial Intelligence Act (Regulation (EU) 2024/1689), Article 3(60)

What deepfake means in plain language

A deepfake is synthetic media (an image, video, or audio recording) that has been created or manipulated using AI so that it looks or sounds authentic when it is not. The term combines “deep learning” (the AI technique used) and “fake” (the result).

Deepfake technology has advanced rapidly. AI-generated faces, voices, and videos can now be produced at quality levels that are difficult for humans to distinguish from real content. This creates significant risks for financial institutions, particularly in areas that rely on identity verification and the authenticity of communications.

Common deepfake techniques include face swapping (replacing one person’s face with another in a video), voice cloning (generating synthetic speech that mimics a specific person), and full synthetic generation (creating entirely fictional but realistic-looking people, documents, or scenes).

Why it matters in financial services

Deepfakes represent a growing fraud vector for financial institutions. Specific threat scenarios include:

  • Identity verification bypass. Deepfake images or videos used during KYC onboarding to impersonate a real person or create a synthetic identity. This threatens the integrity of customer due diligence processes.
  • Authorized push payment fraud. Deepfake audio or video of a company executive used to authorize wire transfers or payments. Several high-profile cases have involved deepfake voice calls instructing finance personnel to move funds.
  • Account takeover. Deepfake voice used to pass voice-based authentication systems and gain access to customer accounts.
  • Social engineering. Deepfake video calls used to impersonate trusted individuals and extract sensitive information from employees.
  • Document forgery. AI-generated documents (IDs, bank statements, pay stubs) used to support fraudulent applications.

Key considerations for compliance teams

  1. Assess deepfake risk in your identity verification processes. Evaluate whether your KYC and onboarding systems can detect AI-generated or manipulated identity documents and biometric submissions.
  2. Implement liveness detection. For biometric verification, deploy liveness detection technology that can distinguish between a real person and a presentation attack (photo, video, mask, or deepfake).
  3. Establish multi-factor verification for high-value transactions. Do not rely solely on voice or video verification for wire transfers and large payments. Require additional authentication factors.
  4. Train staff to recognize deepfake risks. Employees involved in customer onboarding, transaction authorization, and customer support should understand the threat and know escalation procedures.
  5. Monitor for deepfake-enabled fraud patterns. Work with your fraud detection team to identify and monitor for transaction patterns associated with deepfake-enabled fraud.
  6. Evaluate vendor detection capabilities. Assess whether your identity verification and fraud detection vendors offer deepfake detection features and how effective they are.
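The multi-factor rule in point 3 can be sketched as a simple policy check. The threshold, factor names, and escalation logic below are illustrative assumptions for this sketch, not regulatory requirements or any institution's actual rules:

```python
from dataclasses import dataclass

# Illustrative threshold, in account currency; real limits are
# set by each institution's risk policy.
HIGH_VALUE_THRESHOLD = 10_000

# Factors that a deepfake could plausibly defeat on their own.
DEEPFAKE_SUSCEPTIBLE = {"voice", "video"}

@dataclass
class PaymentRequest:
    amount: float
    verified_factors: set  # e.g. {"voice"} or {"voice", "hardware_token"}

def requires_escalation(request: PaymentRequest) -> bool:
    """Return True if the payment needs an additional authentication step.

    A high-value payment verified only by deepfake-susceptible factors
    (voice or video) is escalated so an independent factor, such as a
    hardware token or a callback on a known number, can be obtained.
    """
    if request.amount < HIGH_VALUE_THRESHOLD:
        return False
    independent_factors = request.verified_factors - DEEPFAKE_SUSCEPTIBLE
    return len(independent_factors) == 0

# Usage:
print(requires_escalation(PaymentRequest(50_000, {"voice"})))                    # True
print(requires_escalation(PaymentRequest(50_000, {"voice", "hardware_token"})))  # False
print(requires_escalation(PaymentRequest(500, {"voice"})))                       # False
```

The design point is that voice or video evidence never satisfies the check by itself above the threshold; only a factor outside the deepfake-susceptible set does.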

