
    New resources from the AICD and the UTS Human Technology Institute (HTI) are designed to help boards understand the governance required to make the most of the opportunities and manage the risks of artificial intelligence.


    Organisations in Australia and around the world are enthusiastically adopting artificial intelligence (AI) in many parts of their operations. The uptake of the new technology presents unique challenges for directors, who will need to integrate AI into their governance framework.

    The new suite of resources — A director’s introduction to AI (“introductory guide”), A director’s guide to AI governance (“governance guide”) and the AI Governance Checklist for SME and NFP directors — helps to bridge the gap, providing structured and practical guidance to enhance directors’ understanding and oversight of AI technologies within their organisations.

    As AICD CEO Mark Rigotti and Professor Nicholas Davis from HTI point out in their foreword to the resources, domestic and foreign governments are on a path to figuring out the right calibration of policy levers to support the uptake of safe and responsible AI.

    “These include consideration of the introduction of mandatory guardrails for AI deployment in high-risk settings, consideration of labelling and watermarking of AI in high-risk settings, and clarifying and strengthening existing laws to address AI harms. Internationally, we are seeing jurisdictions attempt to walk the policy tightrope between regulating high-risk AI uses to avoid the most significant AI harms, and ensuring innovation continues to flourish by tapping into this transformational technology.”

    In our June issue, AI experts and leading directors discussed coming to grips with the indisputable governance imperative AI creates for directors. Here, we continue our coverage with a snapshot of key excerpts from this new thought leadership. In particular, the guide details the eight elements of safe and responsible AI governance, showing how directors can guide organisations to deploy AI systems safely and responsibly for maximum strategic and competitive advantage.

    The governance imperative

    Traditional governance may need to be adapted to be fit for purpose in an AI-driven world, the guide states. Research by HTI suggests that existing IT risk management frameworks and systems are largely unsuited to AI governance. Traditional approaches may not fit because of the speed and rate of change of AI; its opacity, meaning the challenge of testing, validating, explaining and reproducing AI system outputs; and the difficulty of identifying AI use within an organisation and its value chain. Additionally, AI use crosses organisational boundaries and reporting lines, and the technology exists in an uncertain policy, regulatory and technological environment.

    To meet these challenges, directors need to adopt an iterative, flexible and adaptive governance approach. It must be human-centred, with governance mechanisms transparently tracking and reporting how AI systems are impacting key stakeholders including consumers, employees, suppliers and contracting parties.

    The approach also needs to be cross-functional, the guide states. “AI governance cannot be achieved through the establishment of separate, disconnected roles or policies and procedures. AI governance needs to span various departments and roles, including those responsible for privacy, IT, product design and development, procurement, HR, risk and strategy.”

    Finally, given the speed of technological transformation, organisations should not rely on a “set-and-forget” approach to AI governance. Governance systems and processes should be subject to regular review to ensure that targets and outcomes are being achieved. The use of AI should be aligned to the broader organisational strategy, and that strategy should then be regularly reviewed to clarify and adjust the role of AI and emerging technologies.

    Key questions for directors to ask themselves and senior management include: How is AI currently being used to deliver business goals? What sorts of problems and challenges can or should AI systems be used to solve? What is our overall assessment of the evolving balance between the risks and benefits of AI systems to drive business value?

    At the same time, directors should manage AI risks by reviewing the organisational risk framework to test its application to AI use, noting the increased scrutiny from stakeholders over how AI risks are being managed, and defining and reviewing the organisation’s risk appetite and risk statement to cover AI use.

    In many cases, responsibility for this oversight will sit with the board risk committee. Delegation, however, does not absolve the board of overall responsibility for effective oversight.

    Practical tips

    The eight elements of effective, safe and responsible AI governance are detailed in the guide as follows.

    Roles & responsibilities:

    The first element of AI governance stresses the necessity for clarity in roles and responsibilities. Directors must identify who within management and the board is accountable for AI decision-making. This includes documenting the individuals responsible for AI system procurement, development and use. Furthermore, decision-making processes should incorporate considerations of AI risks and opportunities. This clarity ensures that AI governance is not overly reliant on a few individuals, but distributed across relevant stakeholders, mitigating key-person risk and enhancing overall oversight.

    Governance structures:

    Effective AI governance necessitates robust structures at both the board and management levels. Boards must determine the appropriate governance structures, such as committees, that support AI oversight. Reviewing committee charters to ensure AI issues are incorporated and leveraging external experts for periodic briefings can enhance governance. Additionally, the frequency and nature of management reporting to the board should be carefully considered to ensure timely and informed oversight of AI activities within the organisation.

    People, skills & culture:

    Building the right capabilities is crucial for AI governance. Directors should ensure management assesses the organisation’s AI skills and training needs, implementing upskilling programs where necessary. Discussions should also address AI’s impact on workforce planning and how governance structures can incorporate diverse perspectives to avoid groupthink. A culture that values continuous learning and adaptation to new AI developments is essential for maintaining a competitive edge and managing AI risks effectively.

    Principles, policies & strategy:

    Guiding principles and clear policies form the backbone of AI governance. Directors should embed AI considerations within the broader organisational strategy, ensuring AI initiatives align with business objectives, avoiding the pitfalls of “AI for AI’s sake”. Engagement with management to translate high-level AI principles into actionable policies is vital. These policies should integrate with existing privacy, data governance, cybersecurity and procurement frameworks to provide a holistic approach to managing AI risks and opportunities.

    Practices, processes & controls:

    Principles and policies need to be underpinned by robust processes and controls. Directors should work with management to ensure there are adequate controls for AI use, including risk management frameworks, AI impact assessments and compliance policies. Regular reviews and updates to these controls in line with best practices are essential. This element also emphasises the importance of processes for assessing supplier and vendor risks, ensuring comprehensive oversight across the AI value chain.

    Supporting infrastructure:

    A solid infrastructure supports effective AI governance. Directors should ensure management has a clear understanding of where AI is being used within the organisation, often facilitated by an AI inventory. Robust data governance frameworks are critical, given the data-intensive nature of AI systems. Increasing transparency about how AI systems use data can build trust among stakeholders. This infrastructure ensures the organisation can manage data effectively and comply with regulatory requirements.
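
    As a rough illustration of what one entry in such an AI inventory might capture, the Python sketch below records a single hypothetical system. The field names and example values are assumptions for illustration only, not a schema prescribed by the guide.

        from dataclasses import dataclass
        from datetime import date
        from typing import Optional

        @dataclass
        class AIInventoryEntry:
            """Illustrative record for one AI system in an organisation-wide inventory."""
            system_name: str                 # internal identifier for the AI system
            business_owner: str              # accountable individual or role
            use_case: str                    # what the system is used for
            data_sources: list[str]          # datasets the system draws on
            contains_personal_data: bool     # flags privacy and data governance obligations
            supplier: Optional[str] = None   # third-party vendor, where the system is procured
            risk_rating: str = "unassessed"  # e.g. "low", "medium", "high"
            last_reviewed: Optional[date] = None  # supports regular re-review

        # Hypothetical entry showing how one system might be recorded
        entry = AIInventoryEntry(
            system_name="resume-screening-assistant",
            business_owner="Head of Talent Acquisition",
            use_case="Shortlisting job applications",
            data_sources=["applicant CVs", "historical hiring outcomes"],
            contains_personal_data=True,
            supplier="ExampleVendor Pty Ltd",
            risk_rating="high",
            last_reviewed=date(2024, 6, 30),
        )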

    Stakeholder engagement & impact assessment:

    Engaging with stakeholders and conducting thorough impact assessments are key to understanding AI’s broader implications. Directors should identify and engage stakeholders to understand their expectations and the impact of AI use. Ensuring AI systems are designed and assessed with accessibility and inclusion in mind is crucial. This element highlights the importance of explaining AI-generated results to stakeholders and providing appeal processes to address any concerns or issues.

    Monitoring, reporting & evaluation:

    Given AI systems’ ability to learn and adapt, ongoing monitoring, reporting and evaluation are critical. Directors should confirm that management has implemented risk-based monitoring and reporting systems for mission-critical AI systems. Developing clear metrics and outcomes to track AI governance framework performance is essential. Regular reassessment against these metrics and seeking internal and external assurance help to maintain the integrity and effectiveness of AI systems over time.

    The guide also outlines key questions for directors to ask across the eight governance elements, together with a traffic light system for assessing the responses: “amber” indicates some risk and advises further probing, while “red” signals high risk and requires directors to work with management to implement safe and responsible AI governance practices as outlined.
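
    Purely as a hypothetical sketch (the guide’s actual questions and assessment criteria are more detailed), the traffic light logic might be expressed as follows; the “green” tier is an assumption implied by the metaphor rather than spelled out in the excerpt above.

        from enum import Enum

        class TrafficLight(Enum):
            GREEN = "green"  # assumed 'no concern' tier, implied by the traffic light metaphor
            AMBER = "amber"  # some risk identified in the response
            RED = "red"      # high risk identified in the response

        def suggested_follow_up(rating: TrafficLight) -> str:
            """Map a director's assessment of a management response to a follow-up action."""
            if rating is TrafficLight.RED:
                return ("Work with management to implement safe and responsible "
                        "AI governance practices as outlined in the guide.")
            if rating is TrafficLight.AMBER:
                return "Probe further and seek additional detail or assurance from management."
            return "No immediate action; revisit at the next scheduled review."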

    Governance structures: Telstra

    Telstra has implemented a tiered governance framework to address the complexities of AI oversight. Its components include an AI Model Register and a Risk Council for AI and Data (RCAID) to oversee high-impact AI use cases. At the operational level, AI use cases undergo initial review, with high-impact cases escalated to the Executive Data and AI Council, which includes executives from across the business. This council ensures comprehensive oversight of AI implementations and manages escalations. Significant AI risks are reported to the audit and risk committee, which briefs the board biannually, ensuring strategic alignment and robust oversight. This multi-layered approach ensures that AI governance at Telstra is thorough, responsive and integrated into the company’s broader risk management framework.
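
    Expressed as a hypothetical sketch, the escalation path described above might look like the following; this is an illustration of the tiers named in the article, not Telstra’s actual implementation.

        def route_ai_use_case(high_impact: bool, significant_risk: bool) -> list[str]:
            """Illustrative escalation path for an AI use case through tiered governance."""
            path = ["Initial operational review", "AI Model Register"]  # every use case recorded
            if high_impact:
                # High-impact cases receive dedicated risk oversight and executive escalation
                path += ["Risk Council for AI and Data (RCAID)", "Executive Data and AI Council"]
            if significant_risk:
                # Significant risks are reported upward; the committee briefs the board biannually
                path.append("Audit and Risk Committee")
            return path

        print(route_ai_use_case(high_impact=True, significant_risk=True))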

    Monitoring, reporting & evaluation: Microsoft

    Microsoft’s Responsible AI Governance Framework centres on six principles: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security. Governance operates through a three-tiered system: Aether, which offers research and expertise; the Office of Responsible AI (ORA), which sets policies and ensures readiness; and the Responsible AI Strategy in Engineering (RAISE) group, which aids engineering teams. The environmental, social and public policy committee provides board-level oversight. This multidisciplinary approach ensures responsible AI practices are integrated within Microsoft and its value chain, supported by policies, standards, training, monitoring and transparency.

    This article first appeared under the headline ‘The AI Tightrope’ in the July 2024 issue of Company Director magazine.
