6 GenAI risks for directors to consider

Wednesday, 01 October 2025

Jane Nicholls
Journalist

    What are the main threats of GenAI that directors need to understand to ensure their companies are embracing its opportunities safely? Here are six to watch for.


    As the use of GenAI grows, directors need to deepen and expand their understanding of this emerging and pervasive technology. 

    Roboticist Dr Sue Keay is director of the UNSW AI Institute and founder and chair of peak body Robotics Australia Group. As a board member who has specialised in guiding companies through the strategic adoption of AI, robotics and automation, she’s acutely aware of the risks directors need to take into account. 

    She urges directors to consider that the unprecedented access that technology providers have to company data may carry commercial risks. “The growing use of AI and the market dominance of many tech companies means we rely on this software so much. But they’re not just providing a service, they have insights into your unique value proposition. And as we’ve seen with the evolution of tech, sometimes that can indirectly benefit competitors.”

    Directors must understand the core value of their businesses. “It’s surprising how many companies don’t quite know. Map it out and think about what information you’re exposing to your tech providers. Could it be a risk to your company if another business finds out your secret sauce in that data? Are there other ways to share it?”

    Here, Keay shares six overall risks of GenAI.

    Key points:

    • Unauthorised AI use is creeping into your systems
    • Breaches can lurk in Terms and Conditions
    • Predictive can be perilous
    • GenAI isn’t a calculator
    • Today’s stolen training data is tomorrow’s governance headache
    • Even trusted tech providers bring risks

    1. Unauthorised AI use creeping into your systems

    Shadow AI — tools and applications brought into your operation by employees — is a huge risk and, as Keay explains, tricky to mitigate. “AI is a general-purpose technology, but with such technologies of the past, such as electricity, you didn’t have your staff coming into work with power in their pockets,” says Keay. “Now, you don’t know what company data people are putting on their smartphones and devices. If those tools aren’t available to you in your workplace, it’s very tempting to just avail yourself of them.”

    Continuous education is a must. “People don’t realise they’re breaching privacy by inputting data into an unapproved tool without being cognisant of how it will be used,” says Keay. “There has to be a constant push from companies to educate the entire workforce around what’s appropriate and what’s not.”

    2. Breaches can lurk in Terms and Conditions

    Once everyone’s on board and using only company-sanctioned tools you can relax, right? Wrong. “Even with the approved tools, you really want to make sure you’ve looked at the terms and conditions around where the data is stored,” says Keay. “In many cases, the default position of some of these technology companies is that they can use your data to train their models. You’re putting a lot of faith into your procurement people, who might not have a lot of experience in this. Even your IT people might not necessarily be AI specialists because, to some extent, it’s new territory for everybody.”

    In short, you need humans, not bots, scouring the terms and conditions (Ts&Cs). “While procurement people might not have AI backgrounds, they have good knowledge of the right questions to ask,” says Keay. She attended a procurement event this year that addressed how every piece of software coming into a company can be traced back to the AI models it is built with and the data used to train them. “That’s becoming much more standard in procurement for big companies, but it is something SMEs need to start thinking about.” All companies need to ensure that customer data is not stored offshore and that the inputs and outputs of GenAI use are not being used to train models — unless they’re your own models.

    3. Predictive can be perilous

    “AI is a predictive tool and the answer you get today might not be the answer you get tomorrow,” says Keay. “When answers can change, then GenAI requires human verification, especially in highly regulated sectors.” 

    It’s vital employees realise how GenAI operates to deliver its answers. “It’s not an encyclopedia, it’s a predictive tool, so often it’s telling you the most likely thing that needs to be said next, rather than the most accurate thing,” says Keay. “Fact-checking answers is also tricky, because the sources it cites can be made up.”

    In 2025, a Victorian lawyer was stripped of his ability to practise as a principal solicitor after using AI-powered legal software that presented false citations, that is, made-up prior cases. He admitted he didn’t understand how the software worked, nor did he verify the “cases” it spat out, which didn’t exist beyond the AI hallucination.

    4. GenAI is no calculator

    “A lot of people think you can use GenAI for mathematical calculations, but LLMs (large language models) generate the most likely answer based on patterns,” says Keay. “Which means they can make basic errors, especially with multi-step problems. They don’t reason in the way humans do.”

    While the big players in GenAI are working on adding reasoning to their models, Keay is sceptical. Models like ChatGPT-5 or Gemini show you their step-by-step ‘reasoning’, but this is still just statistical prediction, not genuine reasoning.

    She says this is why keeping humans in the loop is so essential. Entrusting, say, company forecasting to a tool that may simply be making an educated guess, without ensuring a human expert scrutinises the results, is foolhardy at best and negligent at worst.

    5. Today’s stolen training data is tomorrow’s governance headache

    At some point, there will be a reckoning, says Keay. “From a corporate governance perspective, how would you like to have to justify your company’s use of AI tools built on open-source models that have likely used copyright data without permission?” 

    She believes it will only be a year or two before this issue starts biting. Companies will be challenged about how they’re building their AI agents, because most of the foundation model providers they build on, such as OpenAI, are already facing lawsuits over the legality of how they acquired their training data.

    Maincode is soon to release Matilda, Australia’s first national LLM, using local sovereign data and infrastructure. Other consortia are looking to develop local LLMs. “Companies will have a hard time arguing they need to build off models like OpenAI if there are ethically sourced AI models available to train from,” says Keay. 

    6. Even trusted tech providers pose risks

    A very important risk-mitigation strategy is diversifying the suppliers of your tech stack, says Keay. In Taiwan, for example, the government took tech supply-chain resilience into account when building its tech stack. The stack had to be interoperable, which gave it the flexibility to pull an application out and plug something else in if it lost faith in any supplier.
