Strong governance foundations are critical to navigating artificial intelligence responsibly and safely.
Because of its broad application and its unique capabilities and risks, artificial intelligence (AI) requires an integrated and flexible governance approach, supported by strategic cross-functional collaboration.
Generative AI (GenAI) user interfaces have democratised access to powerful AI systems. GenAI will unlock a wave of automation and fundamentally alter how we interact with machines. This shift will change how we work, learn and relate to information and each other.
Importantly for boards, GenAI has triggered a step change in human expectations. It is reshaping our perceptions of what technology can do, challenging the status quo of existing workflows and accelerating digital transformation. This goes beyond individual curiosity, extending to organisations that are seeking efficiencies and augmented outcomes.
As a general-purpose technology, GenAI can be used across organisations in a wide variety of ways, increasingly woven into business processes and potentially circumventing traditional procurement. However, its potential for unintended, unpredictable or shadow use creates unique governance challenges.
Boards should ensure safe and responsible AI development and deployment from the start, given the difficulty of retroactively implementing controls. Many harms arising from AI are already covered by existing technology-neutral laws, including those governing intellectual property, privacy, confidentiality, data use, consumer protection, cyber security, anti-discrimination and workplace health and safety.
As with many board matters, culture plays its part in responsible AI. AI is not a single static desktop application; governing it calls for a different mindset, one that accounts for the increasingly dynamic interplay between humans and machines.
AI-specific guardrails and standards
On 5 September 2024, the Australian government released two key documents as part of its broader agenda to promote safe and responsible use of AI in Australia — the Proposals paper for introducing mandatory guardrails for AI in high-risk settings (Proposals) and the Voluntary AI Safety Standard (Voluntary Standards). The government then held a public consultation on the proposal to introduce mandatory guardrails for high-risk AI systems and models.
The Voluntary Standards largely mirror the proposed mandatory guardrails, guiding organisations to develop and deploy AI systems safely and responsibly in anticipation of a transition to mandatory requirements. They provide practical guidance to all Australian organisations on how to use and innovate with AI. In an official statement, Minister for Industry and Science Ed Husic indicated the standards may be used immediately and are intended to “give businesses certainty ahead of implementing mandatory guardrails”.
These documents follow the government’s January 2024 interim response to the Supporting Responsible AI discussion paper, which called for a regulatory environment that builds community trust and promotes AI adoption. Separate frameworks apply to the public sector.
Developers and deployers
The government observes that both AI developers and deployers will need to adhere to the guardrails. It notes that responsibility for each guardrail should be assigned to the parties best placed to manage the relevant risks at each stage of development, considering factors such as access to vital information (including training data) and the ability to effectively intervene in and modify an AI system. For entities deploying AI sourced from a supplier, the Voluntary Standards include high-level procurement advice to help deployers align with the standards.
Regulatory options
The Proposals set out three options for implementing the mandatory guardrails, discussing the advantages and disadvantages of each option and inviting public commentary on them:
Domain-specific approach — adapting existing regulatory frameworks to include the proposed guardrails.
Framework approach — introducing new framework legislation with amendments to existing laws.
Whole of economy approach — enacting a new cross-economy Australia AI Act.
Separate to these options, the Proposals state the government will “continue to strengthen and clarify existing laws so it is clearer how they apply to AI systems and models” — for example, privacy, consumer protection, intellectual property, anti-discrimination and competition.
What this means for boards
Together, the Proposals and Voluntary Standards signal the government’s intention to provide regulatory clarity and certainty, both for those developing AI models and systems and for organisations seeking to manage their use of AI safely. Although the mandatory guardrails are still under consultation, organisations should strongly consider adopting the standards now to gain a head start in building the internal capability to manage AI-driven innovation responsibly.
Any updates made to the Voluntary Standards during consultation will likely be mirrored in the mandatory guardrails. Both broadly reflect established international practice for the safe and responsible development and deployment of AI, alongside robust data governance, privacy measures and cybersecurity protocols. Adopting such practices builds consumer trust and confers a competitive advantage in the market.
This article first appeared under the headline ‘The New Working Partnership’ in the December 2024/January 2025 issue of Company Director magazine.
Susannah Wilkinson is Herbert Smith Freehills director of Generative AI (Digital Change) and leads the firm’s adoption of GenAI.