CEO fraud via email is yesterday’s news. Easy access to artificial intelligence makes it simple for cybercriminals to imitate voices and faces to obtain money or information. Today, such deepfake attacks occur every five minutes.
AI-generated deepfakes are what lure companies and their employees into these traps: increasingly, criminals hide behind seemingly familiar voices.
Such deepfake attacks, together with forgeries of digital documents, surged by 244 percent in 2024 and are becoming an ever-greater threat to businesses. AI-driven vishing in particular – a portmanteau of "voice" and "phishing" – poses a growing danger to companies and private individuals alike. According to the consulting firm Deloitte, financial damage from AI-driven deepfake attacks is expected to reach 40 billion US dollars by 2027, more than three times the 12.3 billion dollars recorded in 2023.
As it-daily.net reports, there has already been a deepfake attack every five minutes this year, with AI-driven fraud attempts on the rise and becoming increasingly sophisticated. Cybercriminals continue to adapt their techniques to bypass defensive measures.
Attackers are currently targeting onboarding processes. Fraud attempts during this particularly vulnerable phase have risen from 3.1 to 3.4 percent in the EMEA region, and the figures are even more alarming in the APAC region (6.8 percent) and the Americas (6.2 percent). The comparatively low EMEA rate is partly due to stricter KYC (Know Your Customer) and onboarding regulations in Europe.
Digital identity verification should be a crucial component of every onboarding process to prevent fraud and financial crime before they occur.
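To make the idea concrete, here is a minimal sketch of how an onboarding flow might gate account activation on completed identity checks. The check names (document authenticity, face match, liveness) and the IdentityChecks structure are illustrative assumptions, not the API of any particular verification product.

```python
from dataclasses import dataclass

@dataclass
class IdentityChecks:
    """Results of the identity checks run during onboarding (illustrative names)."""
    document_authentic: bool   # ID document passed forgery/tamper analysis
    face_match: bool           # selfie matches the photo on the document
    liveness_passed: bool      # liveness detection ruled out a replayed or deepfaked video

def may_activate_account(checks: IdentityChecks) -> bool:
    """Activate the account only if every verification step succeeded.

    A single failed check blocks onboarding and should route the case
    to manual review instead of letting it proceed silently.
    """
    return all((checks.document_authentic, checks.face_match, checks.liveness_passed))

# Example: a liveness failure (e.g. a suspected deepfake video) blocks activation.
case = IdentityChecks(document_authentic=True, face_match=True, liveness_passed=False)
print("activate account:", may_activate_account(case))  # -> activate account: False
```

In a real onboarding flow each of these booleans would come from a dedicated verification step; the decisive point is that any single failure blocks activation rather than being waved through.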
Despite all security measures, there are numerous examples of attempted and successful deepfake attacks on well-known companies. A major Italian luxury car manufacturer recently narrowly avoided a deepfake scam when a manager grew suspicious and exposed the alleged CEO as a fraudster by asking targeted questions.
When the manager did not respond to the initial email requests, a call followed from what sounded convincingly like the CEO's voice, complete with his southern Italian accent.
This attempt was unsuccessful, but it could have ended with more than just a black eye. That was the case for a bank in Hong Kong about a year ago, when fraudsters used a video deepfake of the chief financial officer to steal 200 million Hong Kong dollars, or nearly 25 million euros – the largest AI-driven financial fraud worldwide to date. Such incidents are rapidly increasing.
The financial sector and its customers are particularly vulnerable to deepfakes or AI-driven phone scams. However, other industries are increasingly affected as well. Recently, a British energy supplier, along with its German parent company, experienced this firsthand. Cybercriminals used vishing to trick a top manager in the UK into transferring 243,000 US dollars, or 217,000 euros, to an alleged supplier in Hungary.
From the attackers’ methods and the usual reactions of affected companies, the following three countermeasures can be derived:

To keep financial damage to a minimum or prevent it altogether, companies should:

1. Adopt a strict zero-trust security model in which no request is trusted by default, however familiar the sender or caller seems.
2. Ensure that every communication – especially payment and data requests – is acted on only after thorough verification of the requester's identity (a minimal sketch of such a check follows this list).
3. Implement the stronger security measures that new EU regulations now oblige companies to put in place.
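The second countermeasure is essentially an out-of-band confirmation rule: a payment request is never approved over the channel it arrived on, only after confirmation via independently registered contact details. The sketch below illustrates that rule in Python; the PaymentRequest fields, the REGISTERED_CONTACTS directory and the confirm_via_registered_channel callback are assumptions chosen for illustration, not part of any specific product or standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PaymentRequest:
    requester: str        # claimed identity, e.g. "CFO"
    channel: str          # channel the request arrived on, e.g. "phone call"
    amount_eur: float
    beneficiary_iban: str

# Contact data known *before* the request arrived (HR directory, signed address book, ...).
REGISTERED_CONTACTS = {"CFO": "+49-89-0000000"}  # illustrative placeholder number

def approve_transfer(req: PaymentRequest,
                     confirm_via_registered_channel: Callable[[str, PaymentRequest], bool]) -> bool:
    """Zero-trust rule: never approve a transfer on the strength of the inbound channel alone.

    The payment is released only if the alleged requester confirms it over a
    contact path that was registered independently of the current request.
    """
    registered_number = REGISTERED_CONTACTS.get(req.requester)
    if registered_number is None:
        return False  # unknown requester: reject and escalate
    # Call back on the pre-registered number, not the number the request came from.
    return confirm_via_registered_channel(registered_number, req)

# Example: the confirmation callback is a stub that always fails, so a
# deepfaked "CFO call" alone never triggers a payment.
request = PaymentRequest("CFO", "phone call", 250_000.0, "HU00 0000 0000 0000")
print(approve_transfer(request, lambda number, r: False))  # -> False
```

In practice the callback would be an actual return call or a second approver in a four-eyes process; the point is that the verification path is independent of the inbound request.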
Source header image: Adobe Stock / WrightStudio