Could you be fooled by a deepfake? Why CFOs need to boost their cyber awareness

Over the past few years, the social engineering threat has stepped up a gear, with criminals putting AI to work to mount spectacularly convincing fraud scams. As keepers of the purse strings, corporate finance professionals are prime targets in this new wave of cybercrime.

Here’s a closer look at the evolving social engineering landscape and the guardrails you should have in place to keep your business safe.

Deepfakes: no longer the stuff of sci-fi

Last year, CFO Dive reported how the UK-headquartered engineering group Arup lost approximately $25 million after scammers used AI-generated “deepfakes” to pose as the company’s CFO and dupe an employee into transferring cash to a number of bank accounts. An Arup staff member received a message, purportedly from the company’s CFO, regarding a “confidential transaction”. Following a video conference with the fake CFO and other AI-generated “employees”, the staff member executed transactions to five different Hong Kong-based banks.

In July, a Ferrari executive received a series of messages from an unknown number, purporting to be from the company’s CEO, Benedetto Vigna. The messages described a company acquisition, urging the executive to maintain discretion and noting that an NDA would be required. The texts were followed by a phone call from the same number, with the caller mimicking Vigna’s distinctive voice (subsequent inquiries revealed it had been created with an AI voice clone). Despite the scenario’s plausibility, the executive grew suspicious of certain aspects of the caller’s intonation, so he asked the caller to name a book Vigna had recommended to him just a few days earlier. The call suddenly ended.

How common is the deepfake problem?

Evidence suggests that sophisticated AI-driven fraud is now part of the mainstream.

A 2024 Medius poll of 1,533 US and UK finance professionals suggested that just over half (53%) of corporations have been targeted with financial scams using deepfake technology, with 43% falling victim to such attacks. Another report suggested that in 2023, deepfake incidents targeting the fintech sector increased by 700%.

According to Deloitte’s Center for Financial Services, total losses from financial fraud in 2023 amounted to $12.3 billion. By 2027, generative AI is predicted to enable losses to reach $40 billion.

The most popular deepfake techniques include the following:

  • Deepfake voice scams. With just a few seconds of audio, scammers use AI tools to clone a person’s voice. They then make fake phone calls or submit voicemails impersonating key company insiders
  • Deepfake video scams. Based on snippets from, for example, YouTube or corporate webinars, AI can generate fake video footage of a real person. The level of sophistication in this area has now reached a point where it is possible to clone a company executive’s face and voice to conduct entire real-time video conference calls
  • Fake ID and social media scams. AI can generate realistic fake profile pictures for scam accounts. These profiles can then be used in broader corporate fraud attempts
  • Fake AI-generated documents. AI can create counterfeit passports, IDs and other documents. These forgeries can help scammers bypass security checks in financial fraud and identity theft scams

A big part of the problem is that deepfake technology is available to pretty much anyone – if you know where to look. A booming dark web cottage industry sells scamming software of various levels of sophistication from $20 to thousands of dollars. You do not need advanced AI skills to mount an attack; you need cash.

Deepfakes and the wider social engineering landscape

Phishing – i.e. scammers attempting to trick people into revealing sensitive information – has long been part of the cyber threat landscape. Cybercriminals send an estimated 3.4 billion emails a day designed to look like they come from trusted senders, which amounts to over a trillion phishing emails per year.

Finance leaders must be especially wary of ‘bespoke’, personalised attacks. Spear phishing is where messages – ostensibly from a known or trusted party, such as a supplier or customer – are sent to targeted individuals to induce them to reveal information or take specific actions. A highly refined version of this is known as “whaling”, where precisely engineered spoofing messages are created to trick CEOs, CFOs, and other senior stakeholders.

According to Barracuda, spear phishing campaigns make up only 0.1% of all email-based phishing attacks but are responsible for 66% of all breaches. The more targeted and personalised the social engineering campaign, the more likely it is to be successful.

Against this backdrop, deepfake technology extends the fraudster’s toolkit. It sits alongside existing phishing techniques, injecting them with an extra layer of realism.

For example, an attacker does some homework on your company. They obtain the details of your CEO and finance team members via LinkedIn. They home in on an individual whose job title suggests the authority to execute substantial cash transfers. Looking at the company news section of your website, they also notice details of a couple of new deals or partnerships they can use as backdrops for a concocted story.

From a webinar on your site, they lift a clean audio sample of your CEO’s voice. Using a replication tool purchased on the dark web, they can clone it for live voice manipulation.

Using a spoof email, they message the finance team member, pretending to be the CEO. The message requests a quick chat to discuss an urgent transaction. A call is arranged, and the fake CEO tells the employee where to send the cash.

How to prevent deepfake attacks

Good practice includes the following:

Raise awareness 

Most cyber attack techniques rely on human vulnerability, carelessness, and/or a lack of awareness on the victim’s part. This applies to all types of social engineering, including deepfakes.

As we’ve seen, there’s nothing new about phishing, so you should have awareness training in place already (if not, this needs to be addressed). Training should cover the telltale signs of phishing, such as irregular email domains or attachments, display name mismatch, or an undue sense of urgency within requests.
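Some of these email checks can even be automated as a first line of defence. The sketch below flags two of the signs mentioned above – a display-name mismatch and a lookalike sender domain. All names, domains, and heuristics here are illustrative assumptions, not any particular security tool’s API:

```python
import re

# Assumption: your organisation's genuine sending domains.
TRUSTED_DOMAINS = {"acme.com"}

def phishing_signals(display_name: str, sender_address: str) -> list[str]:
    """Return a list of telltale phishing signs found in an email header."""
    signals = []
    domain = sender_address.rsplit("@", 1)[-1].lower()

    # Display-name mismatch: the display name shows an address whose
    # domain differs from the actual sender domain.
    claimed = re.findall(r"[\w.-]+@([\w.-]+)", display_name)
    if claimed and claimed[0].lower() != domain:
        signals.append("display name mismatch")

    # Lookalike domain: resembles a trusted domain without matching it
    # exactly (e.g. "acme-payments.com" impersonating "acme.com").
    if domain not in TRUSTED_DOMAINS:
        for trusted in TRUSTED_DOMAINS:
            label = trusted.split(".")[0]  # e.g. "acme"
            if label in domain:
                signals.append(f"lookalike domain: {domain}")
                break
    return signals
```

A real deployment would sit inside your mail gateway and use far richer heuristics (SPF/DKIM results, sender history), but even a simple rule set like this can surface the red flags that training asks employees to watch for.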

Update your training to cover the threats raised by advanced AI. This should include the signs of deepfake attacks, such as unnatural facial movements and expressions, slight visual distortions and imperfect lip-syncing in video, and slightly robotic speech patterns and inconsistent tone in audio.

Have set procedures in place 

Scammers thrive on inconsistency. If payments are routinely made in an ad-hoc way within a company, it becomes much harder for employees to distinguish a fake email, call, or Zoom session from a genuine one.

Especially for financial transactions, ensure you have set, consistent protocols in place. Examples include:

  • A requirement that every request be accompanied by an invoice and submitted in a set format
  • Multi-factor authentication, e.g. requiring at least two levels of authentication for payment approvals
  • One-time passwords for high-value transactions
  • Multi-person approval systems
  • A strict call-back policy for unusual requests

Ensure that the rules apply to everyone 

Threat actors also prey on the fact that a kind of “Do you know who I am?” culture exists in many organisations. It’s easy to pull rank if you’re a senior exec. So, if you need a payment to be actioned, instead of following the relevant workflow, you telephone someone in accounts and tell them to make it happen.

This is precisely why highly personalised attacks involving senior stakeholders are so popular with fraudsters. If people at the top routinely bypass your safeguards, those safeguards quickly lose their effectiveness.

What next? Future-proofing your information security stance

The rapid rise of deepfake attack techniques in such a short time highlights that no company can afford to stand still when it comes to cyber security. Risk analysis, awareness training, multi-layer defence strategies, and optimisation of transaction procedures all demand regular reappraisal and constant vigilance.

So are you currently making the right choices regarding keeping your business safe? Where are your vulnerabilities, and how can they be addressed? This is where Millennium Consulting can help. Combining expertise in cybersecurity and optimisation of office of finance processes, our consultants can review your existing processes, identify your weak points, fill in any skills gaps you may have, and ensure the implementation of an information security framework tailored to your specific needs.

Find Out More Here
