Deepfake CEO Fraud: How Voice Cloning Targets US Executives

Nearly 60% of US companies reported an increase in fraud losses from 2024 to 2025, driven largely by AI-powered deepfakes, according to Keepnet Labs. The FBI’s 2025 Internet Crime Report logged more than 22,000 AI-related fraud complaints with losses exceeding $893 million, and Congressional researchers estimate that fewer than 5% of voice clone scam victims ever report their losses, making the official figures a significant undercount of what US organisations are actually experiencing on the ground. Deepfake-enabled vishing attacks surged by over 1,600% in the first quarter of 2025 compared to the fourth quarter of 2024 in the US alone, and Deloitte’s Center for Financial Services projects that AI fraud losses in the United States could reach $40 billion annually by 2027.

The technology driving this surge requires no specialist expertise and no significant financial investment. A voice can be cloned from as little as three seconds of publicly available audio, and every earnings call, conference keynote, podcast appearance and investor presentation your executives have ever recorded is sitting on the open internet, available as training data for anyone who wants to use it. This guide covers how deepfake CEO fraud works in 2026, the documented US cases that illustrate what is at stake, and the specific controls that security teams and finance leaders need to implement before the next attempt is made against their organisation.

What is deepfake CEO fraud?

Deepfake CEO fraud is the use of AI-generated voice or video to impersonate a senior executive and manipulate employees into authorising fraudulent financial transactions, sharing sensitive credentials, or bypassing normal security procedures. Unlike traditional business email compromise, which relies on spoofed email domains and written communication, deepfake CEO fraud exploits the human tendency to trust familiar voices and faces, making it significantly harder to detect with conventional security awareness training or technical filters. The FBI classifies it as one of the fastest-growing and highest-value fraud categories targeting US enterprises in 2026, with AI-powered BEC generating $2.77 billion in losses across 21,442 incidents in 2024 and deepfake audio and video elements increasingly layered into what were previously text-only campaigns.

How attackers build a deepfake of your executive

The preparation phase of a deepfake CEO fraud attack begins weeks before any fraudulent call is made, and it relies almost entirely on information that organisations have made publicly available themselves without realising the risk it creates. Attackers begin by identifying target executives from LinkedIn profiles, company websites, SEC filings and press releases, building a detailed picture of who holds authority over wire transfer approvals, which vendors the company works with regularly, and what a financially plausible and time-sensitive pretext would look like given the organisation’s current publicly visible business activity.

Once the organisational map is complete, attackers harvest audio and video from every available public source, including earnings calls, investor day recordings, conference keynotes, podcast interviews and internal webinar recordings that were posted publicly without a second thought about how they might later be weaponised. The 2026 International AI Safety Report confirmed that the tools powering these scams are free, require no technical expertise, and can be used anonymously, which is precisely why AI voice fraud is now growing faster than any other fraud category in the United States. With a convincing voice clone assembled and a company-specific scenario constructed around real business intelligence, the attacker identifies the specific employee most susceptible to executive authority pressure and the campaign moves into its operational phase.

The three attack formats targeting US organisations in 2026

  • Voice cloning calls remain the most common format and the one generating the largest volume of reported US losses. A finance team member or operations manager receives a call that sounds exactly like their CEO, CFO or General Counsel, complete with familiar speech patterns, verbal mannerisms and contextually accurate references to real company business. The caller creates urgency around a time-sensitive payment, requests that the employee not discuss the matter with colleagues until it is resolved, and applies the kind of authority pressure that most employees are psychologically conditioned to comply with quickly and without challenge. The FBI explicitly warned in its 2024 public service announcement that generative AI enables criminals to create content that bypasses traditional detection methods entirely, with law enforcement struggling to keep pace with the velocity of new campaigns.
  • Deepfake video calls have moved from theoretical risk to operational reality for US-targeted campaigns. The most prominently documented case involved a finance employee who transferred 15 separate transactions totalling $25.6 million after a video conference in which every participant, including the apparent CFO, was an AI-generated deepfake. The employee had initially suspected phishing but the live video call with convincing AI-generated colleagues, synchronised facial movements and realistic voice replication overcame his scepticism entirely, and the fraud was discovered only through manual verification with corporate headquarters some time later. This case shattered the assumption that video calls are inherently trustworthy and established the operational template for the deepfake video campaigns that followed throughout 2025 and into 2026.
  • Multimodal campaigns combine email, voice and video sequentially in order to build cumulative credibility across multiple communication channels simultaneously. An email from a spoofed executive domain establishes the initial request and creates a paper trail that feels administratively legitimate. A voice call follows to confirm the instruction and add personal authority. A short video message or a brief Teams or Zoom appearance adds a final layer of visual confidence that overwhelms the normal verification instincts that might otherwise slow the process down. Each channel independently might trigger scepticism in a well-trained employee, but the combination of all three communicates through multiple trust pathways at once, and the 2025 IRONSCALES report found that over half of US organisations reported financial losses tied to deepfake or AI voice fraud in the past year, with average losses exceeding $280,000 per incident and nearly 20% of affected organisations reporting losses of $500,000 or more.

The US regulatory picture in 2026

For enterprise security teams, the regulatory picture matters because it directly shapes institutional liability in the event of a successful attack. When a US company loses millions to a deepfake video call and regulators investigate, the question is not simply whether the attack was sophisticated but whether the organisation had adequate, documented controls in place at the time. The FBI’s explicit public warnings, combined with the documented frequency and dollar value of these attacks throughout 2025 and into 2026, mean that US organisations can no longer credibly claim they were unaware of the threat when setting their internal fraud prevention standards.

Why traditional verification controls are failing US finance teams

Most US organisations still rely on controls that were designed for a threat environment that no longer exists in 2026. Caller ID verification is trivially bypassed because attackers spoof numbers matching the executive’s known mobile or office line with tools that cost nothing and require no technical knowledge. Email confirmation of verbal instructions provides no meaningful protection when the email account has been compromised or a convincing lookalike domain passes casual visual inspection. Video call verification has been definitively broken by the documented $25.6 million case and the multiple similar attacks that followed it throughout 2025. Annual security awareness training has shown no measurable effect on voice clone susceptibility specifically because the psychological mechanism being exploited is authority compliance rather than ignorance, and 24% of employees say they are not confident they could distinguish a deepfake voice from a real one, with that figure rising significantly when the voice belongs to someone the employee knows personally and works with on a daily basis.

Controls that work in 2026

So what can you actually do?

Pre-agreed out-of-band verification codes are the most effective single control currently available to US organisations and the one most consistently recommended by security practitioners who have investigated successful deepfake fraud cases. Before any voice clone attack can succeed, it needs to answer a challenge the attacker cannot have prepared for, and a pre-established verbal passphrase or personal challenge question agreed upon face-to-face between finance team members and senior executives, and never shared digitally in any form, provides exactly that barrier. Any instruction to authorise an out-of-process payment requires the executive to provide this code before the instruction is acted upon, and this control cannot be bypassed by even the most sophisticated voice clone because the attacker has no way of knowing a code that was never recorded, transmitted or published in any accessible form.

Dual authorisation thresholds on all high-value wire transfers eliminate the entire category of single-voice-call fraud by ensuring that no individual employee can authorise a significant payment based on one communication channel alone. Any transfer above a defined dollar threshold should require independent sign-off from two separate individuals through two independent communication channels that are both verified out-of-band before the transfer is executed.
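As an illustration, the dual-authorisation rule above can be expressed as a simple policy check in a payment workflow. This is a minimal sketch, not a reference implementation: the threshold value, the data model and the channel labels are all assumptions, and in practice the approvals themselves must come from independently verified out-of-band confirmations.

```python
from dataclasses import dataclass, field

# Example threshold in USD; set this per your organisation's policy.
DUAL_AUTH_THRESHOLD = 50_000

@dataclass
class WireTransfer:
    amount: float
    # Set of (approver_id, channel) pairs, e.g. ("controller", "in_person").
    approvals: set = field(default_factory=set)

def can_execute(transfer: WireTransfer) -> bool:
    """Allow execution only if policy is met: transfers above the threshold
    need two distinct approvers across two distinct communication channels."""
    if transfer.amount <= DUAL_AUTH_THRESHOLD:
        return len(transfer.approvals) >= 1
    approvers = {approver for approver, _ in transfer.approvals}
    channels = {channel for _, channel in transfer.approvals}
    return len(approvers) >= 2 and len(channels) >= 2
```

The key property is that a single convincing voice call, no matter how sophisticated the clone, can never satisfy the two-approver, two-channel condition on its own.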

Executive digital footprint auditing raises the cost and complexity of the voice cloning task significantly, even if it cannot eliminate the risk entirely. Organisations should conduct a quarterly audit of what public audio and video is available for each member of their executive team, covering old webinar recordings, conference panel appearances, investor call archives, media interview recordings and any other publicly accessible content featuring the executive’s voice or image. Where recordings serve no current business purpose, removing them from public access meaningfully reduces the quality and availability of training data for a potential attacker.

Continuous dark web and social media monitoring for executive names provides the earliest available warning that a targeting campaign is being assembled against your leadership team. Deepfake fraud campaigns targeting US executives are frequently coordinated in closed Telegram channels and dark web forums before they surface as live attacks, with targeting packages assembled from breached credential data, social media profiles and public records over a period of weeks. Monitoring for your executives’ names, roles and personal contact details in these closed channels gives security teams a meaningful window to brief finance teams, implement additional verification controls and alert law enforcement before the first fraudulent call is placed. For a full breakdown of how executive targeting campaigns are assembled in practice, read our guide to executive cyber threats in 2026.
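At its simplest, this kind of monitoring is keyword matching over collected posts from monitored sources. The sketch below shows the core idea only, with a hypothetical hard-coded watchlist; a production system would pull the roster from HR data, handle transliterations and misspellings, and ingest feeds from commercial dark web collection rather than a plain list of posts.

```python
import re

# Hypothetical watchlist entries for one executive; in practice this would
# be generated from an HR-maintained roster, not hard-coded.
WATCHLIST = ["Jane Doe", "jdoe@example.com"]

def build_pattern(terms: list[str]) -> re.Pattern:
    # Case-insensitive alternation over escaped watchlist terms.
    return re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)

def scan(posts: list[dict], pattern: re.Pattern) -> list[dict]:
    """Return posts from a monitored feed that mention any watchlisted term."""
    return [post for post in posts if pattern.search(post["text"])]

pattern = build_pattern(WATCHLIST)
feed = [
    {"source": "forum", "text": "selling voice samples of jane doe, dm me"},
    {"source": "forum", "text": "unrelated credential dump for sale"},
]
hits = scan(feed, pattern)
```

Exact-string matching like this produces a high-recall, low-precision first pass; the value comes from routing the hits to an analyst quickly enough to act before the first fraudulent call is placed.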

AI detection tooling used as one layer within a multi-layer approach can add meaningful value when it is not treated as a primary or standalone control. Deepfake detection software is improving rapidly but continues to lag behind the generation quality of the best available AI tools, and Gartner projects that by 2026, 30% of enterprises will no longer consider standalone identity verification solutions reliable in isolation, which is precisely why the procedural and organisational controls described above matter more than any single technology solution.

Conclusion

Deepfake CEO fraud is the fastest-growing financial crime targeting US enterprises in 2026, and the FBI’s own data confirms that reported losses represent only a small fraction of actual incidents given the chronic underreporting that Congressional researchers have documented. The $40 billion projection from Deloitte is not a distant hypothetical scenario but the trajectory of a threat that tripled its US losses in a single year and that is becoming cheaper, faster and more convincing with every month that passes as the underlying technology improves.

CybelAngel monitors dark web forums, closed Telegram channels, paste sites and social platforms continuously, alerting US security teams when their executives’ names, voices or personal data appear in contexts that indicate a targeting campaign is being assembled against them.
