Cyber scammers are distorting what we see and putting words into our mouths — and these deepfakes could end up costing companies millions, writes John M Green FAICD.
What do a dead celebrity chef, Salvador Dalí, Tom Cruise and Australian boardrooms have in common? The answer is “deepfakes”. Deepfakes use a form of artificial intelligence (AI) called “deep learning” to fabricate convincing images, video and audio of events that never happened. The technology is increasingly pervasive, which means we can no longer simply trust what we see or hear. Trust, but verify.
What would you think if... you saw a video on social media of your CFO drunk at a work Christmas party; you heard a sound bite of your board chair phoning a mate and leaking an upcoming M&A deal; or heard your CEO phoning your head of accounts to follow up an urgent email asking for a multimillion-dollar payment to be made to a supplier?
Today, those videos or audios could easily be deepfakes, which do for video and audio what Photoshop first did for photographs in the 1990s. A deepfake is a doctored video or audio realistically portraying a person doing or saying something they never actually did or said.
Deepfakes are literally changing the face of cybersecurity. They’re called deepfakes because they use deep learning technology — a branch of machine learning that applies simulated neural networks to massive data sets — and they are becoming more and more convincing. To make a deepfake, you pit two AI algorithms against each other: one creates the fake, while the other rates its efforts, which teaches the fabrication engine to make better forgeries. These paired algorithms are called generative adversarial networks, or GANs.
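For technically curious readers, the adversarial loop described above can be sketched in a few lines. This is a deliberately tiny, hypothetical illustration, not any real deepfake tool: each “model” is a single linear unit, and the generator learns to produce numbers matching a target distribution rather than faces. The generator-versus-discriminator dynamic, however, is the same one real deepfake software scales up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to keep np.exp numerically stable.
    x = np.clip(x, -30.0, 30.0)
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centred on 4.
# Generator: fake = w_g * z + b_g (maps random noise z to a number).
# Discriminator: score = sigmoid(w_d * x + b_d), its estimate that x is real.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr = 0.01

for step in range(2000):
    # --- Discriminator update: push real scores toward 1, fakes toward 0 ---
    real = rng.normal(4.0, 1.25)
    z = rng.normal()
    fake = w_g * z + b_g
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w_d * x + b_d)
        grad = p - label            # gradient of cross-entropy w.r.t. the logit
        w_d -= lr * grad * x
        b_d -= lr * grad

    # --- Generator update: try to make the discriminator say "real" (label 1) ---
    z = rng.normal()
    fake = w_g * z + b_g
    p = sigmoid(w_d * fake + b_d)
    grad = (p - 1.0) * w_d          # chain rule back through the discriminator
    w_g -= lr * grad * z
    b_g -= lr * grad

# The generator's output should have drifted toward the real mean of 4.
samples = w_g * rng.normal(size=1000) + b_g
print(samples.mean())
```

The key point the article makes is visible in the loop: the discriminator’s only job is to catch fakes, and every time it succeeds, its gradients tell the generator exactly how to fake better next time.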
Many deepfakes are fairly innocent, created for fun or profit. An entire TikTok channel is devoted to Tom Cruise deepfakes. More creepily, a genealogy website offers deepfakes so customers can bring photos of their deceased ancestors back to life. And in Florida, a museum has an interactive deepfake of Salvador Dalí welcoming visitors.
In August, howls of outrage hit a documentary about a deceased celebrity chef. In Roadrunner: A Film About Anthony Bourdain, the filmmaker literally put words in Bourdain’s mouth using deepfake technology. In the documentary, Bourdain “said” things on screen that he’d never spoken, although he had written them. Artistic licence simply didn’t cut it for many critics — not in a documentary without any disclaimer.
Deepfakes are popping up in thrillers, onscreen and in books. Fiction is just a mirror to real life. US House Speaker Nancy Pelosi featured in a notorious deepfake, apparently making a speech while drunk.
More relevant for boards, the first two cases of cybercriminals using deepfakes to rip off corporate targets have only recently become public. Neither involved ransomware, the hot cyber topic in many boardrooms. These criminals twinned deepfakes with the most prevalent and costly cyber risk on the planet — business email compromise (BEC). BEC is a specialist type of spear phishing, impersonating senior executives to trick employees, customers or suppliers into wiring payments to dodgy bank accounts. The FBI says BEC scams were its costliest cybercrime three years running, stealing more than US$1.8b in 2020 alone.
According to the 2021 Verizon Data Breach Investigations Report, 58 per cent of BECs successfully stole money. The median loss was US$30,000, with 95 per cent of BECs costing between US$250 and US$984,855. The Australian average BEC theft is similar ($50,600) according to the 2021 Annual Cyber Threat Report from the Australian Cyber Security Centre (ACSC). Not bad for a day’s work.
Companies I’ve been involved with have been victims of BEC scams. This was years ago, before BECs were widely known.
In one, the chair’s email account was hacked and taken over. “He” emailed the CEO on Christmas Eve saying that, with his wallet stolen, he was stuck in Malaysia, and asked the CEO to urgently wire many thousands of dollars to “his” bank account. In the second case, the CFO was on vacation. An email supposedly from him — “I should’ve sorted this out before I left for hols, sorry!” — arrived in his deputy’s inbox with an urgent request to pay a supplier’s attached invoice for a sum in the hundreds of thousands of dollars.
In both cases, the emails looked genuine. Money was paid, and only some of it was recovered. In neither case did the recipient take the sensible precaution of picking up the phone to check with the sender that the unusual email was actually genuine.
Back then, these scams got most companies promoting a “trust, but verify” stance. The ACSC has developed excellent resources on how to avoid BEC scams.
Deepfakes are now being used to weaponise BEC scams. Imagine an accounts executive gets the now-classic urgent email from the CEO or CFO demanding a large payment. But before they get a chance to pick up the phone to verify it, the CEO or CFO calls them to stress the urgency of the matter. Hearing the voice of your boss is a compelling convincer.
That call can easily be a deepfake. To synthesise the voice of your CEO or CFO, all scammers have to do is apply their AI to the gigabytes of audiovisual content sitting on your company website — think AGM addresses, investor day presentations or media interviews.
So far, there have been just two publicly reported cases of deepfake BEC stings. In 2019, the CEO of the UK subsidiary of a German energy company took a call from “his boss” in Germany, who asked him to urgently send €220,000 to a Hungarian supplier “within the hour”. Except it was not his boss, even though it sounded like him right down to his accent and the cadence of his voice.
In 2020 — but only reported last October, when Forbes magazine revealed the details — the game got hotter. A Hong Kong bank manager took a call from a director of a client company in the UAE, someone he’d spoken to before. The client said they were making an acquisition and needed the bank to transfer US$35m. Legitimate-looking emails backed up the request, and the money was sent.
What do we do?
Like these two cases, the vast majority of cybersecurity breaches are caused by human error, whether mistakes (such as cutting corners), ignorance or negligence. Deepfake BEC scams are no longer fiction. To stop our companies getting sucked in, we need to remain alert and prepare our people for this new risk in our cyber education programs. If your employees don’t know what to look or listen for, they’ll be that much more easily fooled by a deepfake scam — and it could cost your organisation dearly.
John M Green FAICD is deputy chair of QBE Insurance Group, a director of Challenger and Cyber Security Cooperative Research Centre, co-founder of Pantera Press and author of Double Deal.