18.02.2025

CEO fraud via email is yesterday’s news. Easy access to artificial intelligence makes it simple for cybercriminals to imitate voices and faces to obtain money or information. Today, such deepfake attacks occur every five minutes.

AI-based deepfakes are the reason companies and their employees fall into traps. Increasingly, criminals hide behind seemingly familiar voices.

Such deepfake attacks and forgeries of digital documents surged by 244 percent in 2024 and are becoming an ever-greater threat to businesses. AI-driven vishing in particular – a portmanteau of voice and phishing – poses a growing danger to companies and private individuals alike. According to the consulting firm Deloitte, financial damages from AI-driven deepfake attacks are expected to rise to 40 billion dollars by 2027 – more than three times the 12.3 billion dollars recorded in 2023.

As it-daily.net reports, there has already been a deepfake attack every five minutes this year, with AI-driven fraud attempts on the rise and becoming increasingly sophisticated. Cybercriminals continue to adapt their techniques to bypass defensive measures.

EMEA better prepared than APAC and the Americas

Attackers are currently targeting onboarding processes. Fraud attempts during this particularly vulnerable phase have risen from 3.1 to 3.4 percent in the EMEA region; the figures are even more alarming in the APAC region (6.8 percent) and the Americas (6.2 percent). The comparatively low EMEA figure is partly due to stricter KYC (Know Your Customer) and onboarding regulations in Europe.

Digital identity verification should be a crucial component of every onboarding process to prevent fraud and financial crime before they occur.

Despite all security measures, there are numerous examples of attempted and successful deepfake attacks on well-known companies. A major Italian luxury car manufacturer recently narrowly avoided a deepfake scam when a manager grew suspicious and exposed the alleged CEO as a fraudster by asking targeted questions.

When the manager did not respond to the initial email requests, a call followed with a convincingly authentic imitation of the CEO’s voice, complete with his southern Italian accent.

Potential for Millions in Damages

This attempt was unsuccessful, but it could have ended with more than just a black eye. That was the case for a bank in Hong Kong about a year ago, when fraudsters used a video deepfake of the chief financial officer to steal 200 million Hong Kong dollars, or nearly 25 million euros – the largest AI-driven financial fraud worldwide to date. Such incidents are rapidly increasing.

The financial sector and its customers are particularly vulnerable to deepfakes or AI-driven phone scams. However, other industries are increasingly affected as well. Recently, a British energy supplier, along with its German parent company, experienced this firsthand. Cybercriminals used vishing to trick a top manager in the UK into transferring 243,000 US dollars, or 217,000 euros, to an alleged supplier in Hungary.

The most important countermeasures

From the attackers’ methods and the usual reactions of affected companies, the following three countermeasures can be derived:

  1. Establish clear rules for communication protocols and ensure that all internal processes involving executives are standardized and always verifiable. It should be clear that a CEO, for example, never initiates large money transfers via email or phone or makes other unusual requests.
  2. Implement a multi-channel verification system and ensure that every important communication occurs through at least two channels, such as email and messaging services. If an important instruction comes through only one channel, employees should ignore it and request confirmation via a second communication channel.
  3. Regular employee training is key to overall IT security and specifically to combating deepfakes. This starts with educating employees about what deepfakes are: according to a Bitkom survey, 30 percent of Germans do not know what a deepfake is. Training should then show which tactics fraudsters use, what vishing entails, and how to respond appropriately.
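The multi-channel rule from point 2 can be sketched in code. The following Python snippet is only a minimal illustration of the idea – the names (`PaymentRequest`, `confirm`, `is_approved`) are hypothetical and do not refer to any real system:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: approve a sensitive request (e.g. a money
# transfer) once it has been confirmed on at least two independent channels.

@dataclass
class PaymentRequest:
    requester: str
    amount_eur: int
    confirmed_channels: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        # Record a confirmation received on one channel (email, call-back, ...).
        self.confirmed_channels.add(channel)

    def is_approved(self) -> bool:
        # Rule 2 above: a single channel is never sufficient.
        return len(self.confirmed_channels) >= 2

request = PaymentRequest(requester="CEO", amount_eur=250_000)
request.confirm("email")            # instruction arrives by email only
print(request.is_approved())        # False: still needs a second channel
request.confirm("callback_phone")   # call-back on a known, trusted number
print(request.is_approved())        # True
```

The key design point is that the second confirmation must come over a channel the employee initiates, such as a call-back to a number already on file, so an attacker who controls the first channel cannot also supply the second.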
Multi-channel verification is an important countermeasure against deepfake fraud. (Image source: Adobe Stock / Chaiwat)

To keep financial damage to a minimum or prevent it altogether, companies should adopt a strict zero-trust security model in which every communication is trusted only after thorough verification.

Additionally, new EU regulations oblige companies to implement better security measures.

Source header image: Adobe Stock / WrightStudio

A magazine by Evernine Media GmbH