
    A new suite of AICD resources, developed with the Human Technology Institute (HTI) at the University of Technology Sydney, helps boards harness the power of AI responsibly.


    Professor Nicholas Davis is co-director of the Human Technology Institute (HTI) and Industry Professor of Emerging Technology at the University of Technology Sydney. He told an AICD webinar held on 12 June that research shows about a third of organisations report no AI governance at all, while a further third rely on existing governance arrangements.

    “Very, very few organisations are actually at the point where they say, ‘We have a specific AI governance approach set up’ and that’s exactly why we put these resources together,” he said.

    Boards need to start with the questions, “What is AI and where is it being used in my organisation?” said Davis. “If you take an expansive view on this, AI systems are doing impressive things and they’re not explicitly programmed to do those things.”
    Davis also noted that it is often very hard to know when and where AI is being used.

    Opacity confounds the AI governance challenge

    The use of AI is “in shadow” where your employees or contractors are using AI systems without revealing that to managers or to other systems. That opacity is a big governance challenge, as is the fact that it is often hard to know how an AI system has come up with a particular prediction, inference or outcome.

    “That is challenging, because it leads to questions about provenance,” said Davis, adding that security and data governance become “absolutely critical questions when you’re talking about how to ensure that AI systems are safe and responsible”.

    Some of the latest generative AI tools can increase efficiency and productivity, and with careful application and implementation, they are able to unlock a lot of value in businesses, he added. However, he acknowledged, “they do pose commercial risks, reputational risks and regulatory risks”.

    “In Australia, we have a very broad set of tech-agnostic laws that make up the legal and regulatory environment,” he said. “It doesn’t matter whether you’re using an AI system or not. If that system produces an output that contravenes the law, your business will be subject to those regulatory risks, that reputational damage and, of course, the commercial hit you might take as a result.”

    AI Governance Guide for Directors

    As stewards of organisational strategy and risk management, directors should seek to seize the opportunities and mitigate the risks of AI, with ethical use in the interests of customers being paramount. Trying to fit AI within existing IT governance frameworks is problematic: HTI’s research finds that existing IT risk management frameworks and systems are largely unsuited to AI governance.

    Addressing the unique characteristics of AI systems requires a robust governance framework, which incorporates eight elements of effective, safe and responsible AI use, as detailed in the Director’s Guide to AI Governance.

    The guide details eight elements boards should consider for AI governance:

    1. Roles and responsibilities
    2. People, skills and culture
    3. Governance structures
    4. Principles, policies and strategy
    5. Practices, processes and controls
    6. Stakeholder engagement and impact assessment
    7. Supporting infrastructure
    8. Monitoring, reporting and evaluation.

    Download the Snapshot of the eight elements of effective, safe and responsible AI governance here.

    Access the main Governance Resource here.

    People, skills and culture

    Wendy Stops GAICD, a non-executive director at Coles Group who has served on many boards throughout her career, said that skills and understanding are important to building a strong governance framework.

    “A lot of boards feel they don’t understand AI,” she said. “Getting the board educated is a good place to start rather than relying on one person on the board who might understand it. It needs to be very business-driven to develop the skills and the culture around the decisions and use of AI.”

    Establishing a committee to “ask the hard questions” around ethical use and reputational risks, and to make sure that all the responsibilities are in place, is a consideration for boards. Policies and strategies will help to develop guardrails around what the organisation can and can’t do, and how it will deal with privacy, cyber and ethics, said Stops.

    “AI and analytics come with a whole different level of understanding, so having that type of skill and culture among the organisation and among the board is quite important.”

    The board needs to be the leader in creating the right culture about the use of AI, and executives need to be with them, said Stops. “If you come from an organisation that is constantly innovating, you will lean in to AI. Having that culture will naturally draw out the use of AI at the organisation. The culture of innovation starts from the top.”

    Monitoring, reporting and evaluation

    Davis acknowledged that monitoring, reporting and evaluation can be tough, but said it is essential to verify that management has a risk-based monitoring and reporting system, with metrics and regular reporting, to give the board oversight and some assurance.

    Principles, policies and strategy

    Boards need to ask how the current and intended use of AI supports the overall strategy.

    “It’s not the first question people ask,” said Davis. “It’s assumed — or people don’t feel skilled enough to ask — what those opportunities look like.” However, it is important for the board to drill down into why the organisation is using AI.

    A lot of organisations are using AI because their competitors are, but Davis cautioned directors to ask the hard questions and consider whether there is a better, cheaper or more reliable solution to whatever issue the organisation is trying to solve.

    Webinar facilitator and AICD senior policy adviser Anna Gudkov noted the resource makes the point that AI use should come back to strategy and what the organisation is trying to achieve. “AI for AI’s sake” should be avoided.

    Evaluating management’s responses

    The guide provides an amber/red-light approach to risk considerations. Gudkov explained that amber implies there may be some risk and that directors should probe further; the guide sets out a list of management responses that might be amber flags. A red light indicates potential high risks, where directors should be on guard, probe much further and consider how to address those risks.

    Stops said that if management is pushing questions to lower levels and having “to get back to you”, it implies the use of AI is not business-led, and that should be a red flag.

    “Ask management to ‘explain how that works’, and if they can’t answer those questions, that might suggest something’s not in place around the governance,” suggested Stops. “An internal or external audit can review how data governance, analytics and AI are being used to see if the right sort of structures are in place for governance. That can be a great resource for the board to gain a level of confidence in things that are, or are not, in place.”

    This is an edited version of the discussion from the webinar held 12 June 2024. The full recording can be accessed here until 14 June 2025.

    This webinar is part of the AI webinar series. To learn more, click here.
