GenAI is changing the cybersecurity threat landscape – but boards that tackle these issues head-on can reduce the risks and ensure their organisations emerge even stronger.
Generative AI offers transformative potential in the boardroom and beyond, but it also leaves organisations open to new and troubling vulnerabilities. Amplifying traditional cyber threats and creating new ones, GenAI underpins novel dimensions of risk that are more than enough to keep directors up at night, notes Nick Abrahams, technology partner at Norton Rose Fulbright and an adjunct professor of AI at Bond University.
“There should be a degree of insomnia for all directors,” he says.
GenAI is bringing a host of new cybersecurity threats including deepfakes and voice-based social engineering, which are faster, more scalable problems and more plausible than ever.
AI deceptions have become so convincing that no-one can easily distinguish between genuine and fraudulent communications, even at the highest levels of an organisation. For instance, in 2019, criminals used AI tech to impersonate the voice of a UK energy company CEO and steal $243,000.
Abrahams points out this was small change compared to a more recent case in which scammers cloned a CFO’s voice to authorise a US$25m bank transfer in Hong Kong. “When you’re losing $25m, that’s a bad day at the office,” he says.
In the first quarter of 2025 alone, documented financial losses from deepfake-enabled fraud exceeded US$200m, according to a report by Resemble AI.
Contemporary generative models also enabled mass personalisation of phishing.
AI-enhanced ransomware and malware built using dark web tools like WormGPT and FraudGPT were also being used to create more persuasive scam emails and extortion threats, or to exfiltrate documents, he adds.
In one case, a hacker showed how ChatGPT could recreate malware by generating a Python Infostealer script that surreptitiously searched for, copied and exfiltrated documents, PDFs and images from an infected system.
Insider information
Boards also need to view GenAI as a potential data privacy and intellectual property (IP) exposure risk. For instance, the growing use of shadow AI, which involves the uncontrolled and unsanctioned employee use of AI tools, could also lead to data leakage or IP loss, says Abrahams.
In one case, Samsung engineers inadvertently leaked source code into ChatGPT, prompting a company-wide ban on GenAI. It followed an earlier internal warning issued by Amazon in 2023 after ChatGPT responses seemed to mirror internally held data.
To combat such actions, organisations need to ensure appropriate policies and controls are in place, such as AI usage guidelines or data loss prevention tools.
More chilling is the 2024 case of a North Korean hacker who used a counterfeit persona to get hired remotely at a cybersecurity firm, before loading malware onto internal systems.
According to a prediction by market research firm Gartner, one quarter of all job applicant profiles on the global market will be fake by 2028.
As anyone who has ever experimented with chatbots will know, there are many instances in which AI simply gets it wrong — such as when a Chevrolet chatbot “sold” a US$76,000 car to a cunning hacker for the bargain price of US$1.
Then there was the Air Canada chatbot that gave false refund advice, leading to the airline losing a small claims court case.
Abrahams notes that while board-level cybersecurity drills have typically focused on phishing, malware outbreaks, ransomware and third-party breach scenarios, directors need to step it up to test both technical and governance readiness.
Deepfake drills
Drills that involve simulated fake CEO calls along the lines of scams that have already unfolded can ensure directors have the “muscle memory” to react to such challenges in a meaningful way. “We’ve not dealt with deep fakes to this degree before,” says Abrahams.
New drills should focus on building awareness and developing new protocols like callback or multi-channel verification — and ensuring that at least one out-of-band check (a known personal number or face-to-face confirmation) is required for sensitive requests.
AI incident drills also need to address other emerging challenges, such as indirect prompt injection, described by the UK’s Centre for Emerging Technology and Security (CETaS) as “one of the most urgent issues facing state-of-the-art GenAI models”.
CETaS defines indirect prompt injection as the insertion of malicious information into the data sources of a GenAI system by hiding instructions in the data it accesses.
Boards may also need to consider how to manage market value wipeouts, such as that seen in 2023 when Alphabet Inc lost US$100b in market value after chatbot errors dented investor confidence.
Abrahams says boards need to rehearse their responses to AI-specific crises. For instance, if a GenAI system “goes rogue” or is exploited, how quickly can the team disable it and who communicates to customers or the public?
Odd one out
On the flip side, directors need to understand that GenAI can also help their companies strengthen their cybersecurity posture. “We must use AI to protect ourselves against AI,” says Abrahams.
GenAI can rapidly analyse mountains of security data such as logs, alerts and incident reports — and provide faster detection and response times than human security teams can.
For instance, Microsoft’s Security Copilot (an AI assistant for cyber defence) reportedly helped to cut incident analysis times by up to 40 per cent.
GenAI isn’t only reacting to known threats — it is helping security teams anticipate and pre-empt new threats by recognising early indicators.
For example, at McLaren Racing, self-learning AI Darktrace detected and autonomously halted a sophisticated email impersonation attack during the “very distracting time” of a busy Formula 1 weekend, says Abrahams. In that case, the AI recognised unusual sending patterns and blocked malicious emails imitating a trusted person, long before staff became aware of them.
GenAI can also boost human preparedness by generating custom phishing simulations and security drills that incorporated company context, making training exercises even more effective for employees.
“This is an arms race,” says Abrahams. “We have AI coming at us and we need to make sure we make decisions with AI technology in mind.”
Latest news
Already a member?
Login to view this content