
    Directors must urgently consider the ethics of AI, says Dr Catriona Wallace, founder of the Responsible Metaverse Alliance.


    What is responsible AI?

    RAI is an extension of a field known as ethical AI. It is a strategy where organisations and their leaders take a responsible approach to using AI. It includes implementing systems and processes that reduce discrimination, unfairness, safety issues, fraud and exploitation. Responsible AI also seeks to reduce the chances of unintended consequences arising. It covers governance, monitoring, auditing and reporting activities.

    What are the key concerns for directors to be aware of?

    Even though no specific laws in Australia govern the use of AI, there is still legal risk associated with it. Data privacy laws can be extended to cover AI as a technology. There are also reputational and financial risks, and the risk of doing harm to other companies or individuals. Employee risk exists as well. For example, employees may feel their values are misaligned with their organisation's if it is not using the technology responsibly.

    What do boards commonly get wrong about AI adoption?

    One of the biggest mistakes is boards delegating decisions about AI to frontline engineers and coders. In Australia, boards are often run by people with backgrounds in finance, law, marketing or sales. Very rarely do you see a chief technology officer (CTO) step up into a chair role. If the board is not particularly AI-literate, members tend to delegate decision-making down to the engineers, who don’t understand the implications of an absence of responsible AI. Therein lies the disconnect.

    Governance and the strategic decisions related to AI must be made at the board level. By extension, investors also need to be aware of the importance of a responsible AI strategy so they're not putting pressure on boards to save money through dangerous shortcuts.

    I’m not suggesting boards get into the weeds of AI use, but members need education on how to set a responsible AI strategy. They need experts to come in and present the risks, opportunities and latest developments, so that when the CTO comes and says, “We need money to invest in this strategy”, the board feels comfortable it knows enough to make the right call.

    Why is this issue a moral imperative?

    I’m keen for directors to understand that AI poses an existential risk to humanity. According to the Future of Humanity Institute at Oxford University, AI poses a greater existential risk than climate change, nuclear war or pandemics. It is critical we step up to meet the challenge.

    It’s not only about the risk to humanity. AI is also a huge polluter, and its environmental impact is rarely spoken about. AI produces more carbon emissions than the aviation sector. Training just one generative AI model produces the equivalent of the lifetime carbon emissions of five cars, and requires the extraction of 20 different minerals.

    There is also a societal risk that it can propagate discrimination and unfairness. Any historical data set used to train an AI model will have inherent biases, which leads to discrimination against a large segment of the community. We’ve seen this at play within the banking sector, when women were given lower credit limits than men. We’ve also seen it in border control and in justice systems, where AI has discriminated against people of colour.

    How is good governance central to not only mitigating risk, but building trust — and how can it impact the bottom line?

    Trust needs to be redefined from the way organisations have traditionally thought about it. For example, Optus experienced a cybersecurity breach that resulted in its brand being significantly damaged. There’s no doubt the cyberattack would have been supported by AI. I believe trust has a new definition, which relates to character and competence. Digital competency needs to be demonstrated. It cannot simply be a case of saying, “we’re good people, we tell the truth and we’ve got a shiny brand”.

    Trust is about how you keep customer details safe and how the company demonstrates its ethical use of AI. When it comes to the ethics of AI, there are core principles that serve as guidelines, as they are not enshrined in law. They include using AI responsibly to minimise harm and unintended consequences. I believe if companies adopt these core principles, it will help build trust with customers and other stakeholders.

    What are some risks relating to cybersecurity stemming from AI misuse?

    Just as we see AI being used to monitor and improve cybersecurity, we’re also seeing it as one of the core ways cybersecurity issues are occurring. We see this a lot in identity theft, phishing and trolling, and using AI and synthesised voices to con people out of money. We’re also seeing AI used to breach cybersecurity in large corporations. What’s really interesting is that AI is being used by criminals to infiltrate cybersecurity measures — and we’re using AI to fight against it. I hope we will win the battle.

    What are emerging government regulations around AI that directors should be aware of?

    The role model internationally is the European Union AI Act (2024). It’s the only significant piece of AI legislation in the world and sets out five key categories of AI risk, classified from extreme down to limited risk, with mitigation strategies for each. The legislation spells out how the risks need to be audited and reported, and the implications if a company violates those risk categories. In Australia, we’re in the early stages of forming such laws. Draft guidelines came out last year, but they were just an abbreviated version of the EU law, covering only extreme and high-risk situations. I was hard on the government when they came out, because I believe a moderate risk from AI will still do a lot of damage and cause hurt to people. Australia is not going far enough.

    Directors can start to put frameworks and systems in place that anticipate the coming legislation, because it will eventually arrive. The World Economic Forum has brilliant guidelines and free tools, while the eSafety Commissioner and the Human Rights Commission are actively helping organisations to implement these processes ahead of time.

    What is your advice for directors seeking continuous education about AI?

    There’s a huge amount of information available online, including short courses and real-life case studies. I have a new book coming out in August titled Rapid Transformation. There is a section dedicated to the transformative use of AI.

    Directors need to play with AI. I did a presentation last week to very senior business leaders, and someone there was telling the audience they shouldn’t be using ChatGPT. I disagree. You should all be downloading various AI applications and seeing what AI can do well, and not so well.

    The future leader and board will be AI-enabled. It’s likely the board will have an AI robot sitting at the table — as numerous organisations already do — with AI-based avatar directors giving advice to the human directors.

    This article first appeared under the headline 'Safeguarding AI' in the July 2025 issue of Company Director magazine.
