- The vast majority of BEC attacks will be linked to artificial intelligence and deepfakes
Business Email Compromise (BEC) attacks that use deepfakes are set to become a significant threat by 2025. Earlier this year in Hong Kong, for instance, scammers used video and audio deepfakes to impersonate company executives on a video conference call, convincing a finance employee to transfer roughly $25 million.
According to Medius, around 53% of accountants in the U.S. encountered deepfake attacks in 2023. Additionally, VIPRE Security Group reports that 40% of BEC emails are now entirely generated by AI.
“As technology advances, risks are increasing exponentially,” said Usman Choudhary, Chief Product and Technology Officer at VIPRE Security Group, as quoted by Security Magazine.
- Romance scams using AI chatbots will proliferate
Recently, a notorious Nigerian cybercriminal released a video showing an AI chatbot conversing with a victim while posing as a "loving" military doctor. Such chatbots write and speak without a telltale accent, which makes victims more trusting. Experts warn that autonomous chatbots of this kind will become a widespread scam tool by 2025.
- "Pig butchering" schemes will shift to AI-driven models
Scammers are actively using AI to scale so-called "pig butchering" schemes, long-con frauds in which victims are groomed ("fattened up") over weeks or months before being pressured into handing over large sums of money. For example, the software "Instagram Automatic Fans" blasts out thousands of opening messages per minute, such as "My friend recommended you. How are you?" Deepfakes, voice clones, and chatbots are expected to expand the scale of these operations even further.