Scale: How Many Deepfake Voice Calls Are Happening?

The scale of AI voice cloning attacks has reached a point that would have seemed implausible just three years ago. In 2024, over 3.1 billion deepfake voice calls were placed globally. That is roughly 8.5 million AI-generated voice calls per day — or approximately 100 deepfake calls every second, continuously, 24 hours a day.
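The per-day, per-hour, and per-second figures follow directly from the annual total by simple division. A quick sanity check of the arithmetic (the 3.1 billion figure is the article's; everything else is derived):

```python
ANNUAL_CALLS = 3.1e9  # deepfake voice calls in 2024, per the article

per_day = ANNUAL_CALLS / 365        # ~8.5 million per day
per_hour = per_day / 24             # ~354,000 per hour
per_second = per_day / (24 * 3600)  # ~98 per second

print(f"{per_day:,.0f} per day")      # 8,493,151 per day
print(f"{per_hour:,.0f} per hour")    # 353,881 per hour
print(f"{per_second:.0f} per second")  # 98 per second
```

The numbers round to the figures quoted above: roughly 8.5 million calls per day and just under 100 per second, around the clock.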

3.1B
Deepfake voice calls in 2024. That is 8.5 million per day, 350,000 per hour, nearly 100 per second — and growing rapidly.

This figure represents a dramatic acceleration from prior years. The primary driver is the falling cost of AI voice cloning technology. What required specialized machine learning infrastructure in 2021 now runs on a consumer smartphone in 2025. Dozens of commercial and open-source tools offer real-time voice cloning for free or for as little as a few dollars per month.

Financial Impact: $25 Billion Lost Annually

Voice fraud — a category that includes AI voice cloning scams, caller ID spoofing attacks, social engineering by voice, and vishing (voice phishing) — costs individuals and businesses $25 billion annually. This figure spans consumer losses from AI voice cloning scams, corporate losses from CEO fraud by voice, institutional losses from voice-based identity verification bypass, and recovery and response costs.

$25B
Lost to voice fraud annually. AI voice cloning is accelerating these losses by making impersonation attacks dramatically more convincing and scalable.

Growth Rate: 2,400% Year-Over-Year

The single most important AI voice cloning statistic for understanding the threat trajectory is its growth rate: a 2,400% year-over-year increase in AI voice cloning attacks. This is not a rounding error or a statistical anomaly — it reflects genuinely exponential adoption of voice cloning technology by fraudsters.
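A percentage increase of this size is easier to grasp as a multiplier: 100% growth doubles volume, so 2,400% growth means 25 times the prior year's volume. A minimal conversion, with the baseline figure purely hypothetical for illustration:

```python
growth_pct = 2400  # year-over-year growth, per the article

# Convert a percentage increase into a year-over-year multiplier.
multiplier = 1 + growth_pct / 100
print(multiplier)  # 25.0 — attack volume is 25x the previous year's

# Hypothetical baseline of 1,000 attacks last year, for scale.
baseline = 1_000
this_year = baseline * multiplier
print(int(this_year))  # 25000
```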

The growth is being driven by several compounding factors: the collapsing cost of cloning tools, zero-shot models that need only seconds of source audio, and the abundance of public voice samples online.

2,400%
Year-over-year growth in AI voice cloning attacks. No other fraud vector is growing at this rate. Voice social engineering is up 890%. Caller ID spoofing is up 340%.

Attack Vector Breakdown

Attack Type | Growth Rate | Primary Target
AI Voice Cloning (real-time) | ↑ 2,400% | All demographics — individuals and businesses
Voice Social Engineering | ↑ 890% | Executives, finance teams, legal professionals
Caller ID Spoofing | ↑ 340% | All demographics — any phone number can be faked
Grandparent Voice Scam | ↑ High | Elderly individuals via family voice impersonation
CEO / BEC by Voice | ↑ High | Finance employees, accounts payable, wire transfers

Who Is Most Targeted by AI Voice Cloning Scams?

Elderly Individuals

Older adults are disproportionately targeted by AI voice cloning scams, particularly the grandparent scam. The combination of strong family trust, less familiarity with AI capabilities, and greater likelihood of having liquid assets makes elderly individuals high-value targets. The AI voice clone of a grandchild is highly effective precisely because grandparents are deeply emotionally invested in recognizing their grandchildren's voices.

Business Executives and Finance Teams

Executives are high-value targets because their voices are often publicly available (earnings calls, conference presentations, media interviews) and because impersonating them can authorize large financial transactions. A single successful CEO voice fraud attack can yield millions of dollars. Finance and accounts payable teams are the secondary targets — the recipients of fraudulent voice-authorized transfer requests.

Anyone With Public Audio Online

Because AI voice cloning has been so thoroughly democratized, anyone with publicly accessible audio is at risk. Social media users who post videos, podcast hosts, YouTubers, journalists, and anyone who has spoken on a recorded public call are all potential cloning targets. The threshold for a functional voice clone is now just 3 seconds of audio.

The Detection Gap: Why No Existing App Can Stop This

0
Existing calling apps that can detect AI voice cloning. Every phone, every app, and every calling platform in use today is completely blind to deepfake voice attacks. VeriCall is the first solution.

The detection gap is the most critical AI voice cloning statistic for consumers and businesses to understand. Despite 3.1 billion deepfake calls, $25 billion in losses, and 2,400% growth — not a single mainstream calling application offers AI voice clone detection.

This is not an oversight by Apple, Google, or calling app developers. Voice authentication at the level required to detect AI clones in real time is an extremely hard technical problem.

This is exactly what VeriCall was built to solve — and why it runs on Apple's Neural Engine on-device, with zero cloud infrastructure required.


Frequently Asked Questions

How many deepfake voice calls were placed in 2024?

Over 3.1 billion deepfake voice calls were placed in 2024. This represents approximately 8.5 million AI-generated calls per day — a figure driven by the rapid commoditization of AI voice cloning technology, which is now available for free or at minimal cost.

How much does voice fraud cost each year?

Voice fraud costs individuals and businesses $25 billion annually worldwide. This figure spans consumer losses from AI voice cloning scams, corporate losses from CEO fraud via voice, institutional losses from voice-based identity verification bypass, and recovery and response costs.

How fast are AI voice cloning attacks growing?

AI voice cloning attacks grew 2,400% year-over-year. Voice social engineering attacks are up 890% and caller ID spoofing is up 340%. The growth is driven by dramatically falling costs and new zero-shot cloning models that require only 3 seconds of audio to generate a convincing voice clone.


The First App That
Detects Deepfake Calls.

3.1 billion deepfake calls happened in 2024. Zero existing apps can detect them. VeriCall is the first — on-device, real-time, zero cloud.

Private beta · No spam · Founding members only