The Asia-Pacific region is seeing a sharp escalation in AI-powered fraud, particularly deepfake-enabled voice phishing, or "vishing." Cybersecurity firm Group‑IB reports that fraud attempts leveraging artificial intelligence rose 194% in 2024 compared with the previous year, with deepfake vishing identified as one of the predominant scam methods.
According to Yuan Huang, Group‑IB's senior fraud analyst for the region, these scams exploit the sophistication of AI voice-cloning technology to deceive victims. Scammers can now mimic voices with alarming realism, using recorded or publicly available audio snippets to replicate the speech patterns of the people they impersonate.
This is not an isolated phenomenon. In recent years, voice-cloning tools have become widely accessible, with some requiring as little as a few seconds of audio to generate a convincing impersonation. Such deepfake voices have been used in high-stakes scams, including cases in which employees were tricked into transferring substantial sums of money.
Formal research underscores the depth of the threat. Experimental studies simulating real-world conditions have found that AI-generated voice phishing attacks can be remarkably persuasive, even against individuals who were warned about such scams in advance.
Summary of Key Insights
- Explosive growth: AI-related fraud attempts in the Asia-Pacific region rose by 194% in 2024, highlighting an urgent cybersecurity concern.
- Deepfake vishing on the rise: Scammers are increasingly using synthetic voice technology to impersonate trusted individuals and organizations.
- Ease of exploitation: Voice samples from social media or voicemails can be leveraged to craft realistic deepfake calls.
- Susceptibility persists: Studies show that AI voice phishing remains effective, even against those informed about the risks.
Source: CNA