## Scale: How Many Deepfake Voice Calls Are Happening?
The scale of AI voice cloning attacks has reached a point that would have seemed implausible just three years ago. In 2024, over 3.1 billion deepfake voice calls were placed globally. That is roughly 8.5 million AI-generated voice calls per day — or approximately 100 deepfake calls every second, continuously, 24 hours a day.
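As a sanity check, the per-day and per-second rates follow directly from the annual total (assuming a 365-day year):

```python
calls_per_year = 3.1e9                      # 3.1 billion deepfake calls in 2024
calls_per_day = calls_per_year / 365        # ~8.5 million per day
calls_per_second = calls_per_day / 86_400   # ~98, i.e. roughly 100 per second

print(f"{calls_per_day / 1e6:.1f} million calls/day")   # 8.5 million calls/day
print(f"{calls_per_second:.0f} calls/second")           # 98 calls/second
```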
This figure represents a dramatic acceleration from prior years. The primary driver is the falling cost of AI voice cloning technology. What required specialized machine learning infrastructure in 2021 now runs on a consumer smartphone in 2025. Dozens of commercial and open-source tools offer real-time voice cloning for free or for as little as a few dollars per month.
## Financial Impact: $25 Billion Lost Annually
Voice fraud — a category that includes AI voice cloning scams, caller ID spoofing attacks, social engineering by voice, and vishing (voice phishing) — costs individuals and businesses $25 billion annually. This figure spans:
- Consumer losses — individuals scammed into wiring money, purchasing gift cards, or providing financial account access to AI-voiced impersonators
- Business losses — CEO fraud via voice, unauthorized wire transfers authorized by finance teams tricked by AI-cloned executive voices
- Institutional losses — banks, insurance companies, and financial services firms defrauded through voice-based identity verification bypass
- Recovery and response costs — legal fees, investigation costs, regulatory penalties, and reputational damage following voice fraud incidents
## Growth Rate: 2,400% Year-Over-Year
The single most important AI voice cloning statistic for understanding the threat trajectory is its growth rate: a 2,400% year-over-year increase in AI voice cloning attacks. This is not a rounding error or a statistical anomaly; it reflects genuinely exponential adoption of voice cloning technology by fraudsters.
The growth is being driven by several compounding factors:
- Falling compute costs — AI inference is dramatically cheaper in 2025 than in 2022; attacks that once required expensive cloud compute now run locally
- Zero-shot cloning — new model architectures require only seconds of audio vs. the minutes required by earlier systems
- Commoditization — voice cloning tools are now available as consumer apps, browser plugins, and open-source models accessible to anyone
- Criminal adoption — fraud-as-a-service ecosystems now include voice cloning as a commodity capability
## Attack Vector Breakdown
| Attack Type | Growth Rate | Primary Target |
|---|---|---|
| AI Voice Cloning (real-time) | ↑ 2,400% | All demographics — individuals and businesses |
| Voice Social Engineering | ↑ 890% | Executives, finance teams, legal professionals |
| Caller ID Spoofing | ↑ 340% | All demographics — any phone number can be faked |
| Grandparent Voice Scam | ↑ High | Elderly individuals via family voice impersonation |
| CEO / BEC by Voice | ↑ High | Finance employees, accounts payable, wire transfers |
## Who Is Most Targeted by AI Voice Cloning Scams?
### Elderly Individuals
Older adults are disproportionately targeted by AI voice cloning scams, particularly the grandparent scam. The combination of strong family trust, less familiarity with AI capabilities, and a greater likelihood of holding liquid assets makes elderly individuals high-value targets. An AI clone of a grandchild's voice is effective precisely because grandparents trust their own ability to recognize that voice; a convincing clone turns their most reliable instinct against them.
### Business Executives and Finance Teams
Executives are high-value targets because their voices are often publicly available (earnings calls, conference presentations, media interviews) and because impersonating them can authorize large financial transactions. A single successful CEO voice fraud attack can yield millions of dollars. Finance and accounts payable teams are the secondary targets — the recipients of fraudulent voice-authorized transfer requests.
### Anyone With Public Audio Online
AI voice cloning has been democratized to the point that anyone with publicly accessible audio is at risk. Social media users who post videos, podcast hosts, YouTubers, journalists, and anyone who has spoken on a recorded public call are all potential cloning targets. The threshold for a functional voice clone is now just 3 seconds of audio.
## The Detection Gap: Why No Existing App Can Stop This
The detection gap is the most critical AI voice cloning statistic for consumers and businesses to understand. Despite 3.1 billion deepfake calls, $25 billion in losses, and 2,400% growth — not a single mainstream calling application offers AI voice clone detection.
This is not an oversight by Apple, Google, or calling app developers. Voice authentication at the level required to detect AI clones in real time is an extremely hard technical problem. It requires:
- Building per-contact biometric voiceprints from real call audio
- Running speaker verification AI models during live calls — with sub-second latency
- Doing all of this without sending audio to a server (which would create a privacy disaster)
- Differentiating between a voice clone and a genuine speaker across varying call quality, microphones, and environments
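To make the speaker-verification requirement concrete, here is a minimal sketch of the core decision step: comparing a live-call voice embedding against an enrolled per-contact voiceprint using cosine similarity. The embedding dimension, the 0.75 threshold, and the synthetic vectors below are illustrative assumptions, not a description of VeriCall's actual implementation:

```python
import numpy as np

EMBEDDING_DIM = 256   # assumed embedding size; real models vary
THRESHOLD = 0.75      # illustrative decision threshold; tuned per model in practice

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_voiceprint(enrolled: np.ndarray, live: np.ndarray,
                       threshold: float = THRESHOLD) -> bool:
    """Return True if the live embedding is close enough to the enrolled
    voiceprint. A real pipeline would first run call audio through an
    on-device speaker-embedding model to produce these vectors."""
    return cosine_similarity(enrolled, live) >= threshold

# Synthetic embeddings stand in for model output:
rng = np.random.default_rng(0)
enrolled = rng.standard_normal(EMBEDDING_DIM)
same_speaker = enrolled + 0.1 * rng.standard_normal(EMBEDDING_DIM)  # small drift
different = rng.standard_normal(EMBEDDING_DIM)                      # unrelated voice

print(matches_voiceprint(enrolled, same_speaker))  # True
print(matches_voiceprint(enrolled, different))     # False
```

The hard part is not this comparison, which is trivial, but producing robust embeddings from noisy call audio inside a sub-second latency budget, entirely on-device.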
This is exactly what VeriCall was built to solve — and why it runs on Apple's Neural Engine on-device, with zero cloud infrastructure required.
## Frequently Asked Questions
**How many deepfake voice calls are happening?**

Over 3.1 billion deepfake voice calls were placed in 2024. This represents approximately 8.5 million AI-generated calls per day, a figure driven by the rapid commoditization of AI voice cloning technology, which is now available for free or at minimal cost.

**How much does voice fraud cost?**

Voice fraud costs individuals and businesses $25 billion annually worldwide. This figure spans consumer losses from AI voice cloning scams, corporate losses from CEO fraud via voice, institutional losses from voice-based identity verification bypass, and recovery and response costs.

**How fast are AI voice cloning attacks growing?**

AI voice cloning attacks grew 2,400% year-over-year. Voice social engineering attacks are up 890% and caller ID spoofing is up 340%. The growth is driven by dramatically falling costs and new zero-shot cloning models that require only 3 seconds of audio to generate a convincing voice clone.
## The First App That Detects Deepfake Calls
3.1 billion deepfake calls happened in 2024. Zero existing apps can detect them. VeriCall is the first — on-device, real-time, zero cloud.