AI-focused governance helps organisations secure the technology's benefits while managing its risks. So, what are organisations doing right now to seize the opportunities of AI and mitigate potential problems? In a recent AICD webinar, Prof. Nicholas Davis, co-Director of the Human Technology Institute (HTI) and Industry Professor at the University of Technology Sydney, spoke to a panel of experts about the risks and harms AI systems pose and the governance approaches that can thwart potential pitfalls. Go to our website to watch the full webinar.
How Australian organisations are governing AI
The first question to ask when putting AI governance into action is: how much control do you have over your AI systems?
“There is a range of control that your organisation will have from a senior management perspective, and it’s important to know what AI systems your R&D, engineering and data science teams are producing and what you need to govern,” says Prof. Nicholas Davis, co-Director of the Human Technology Institute. “Are you governing third-party provided AI systems — systems provided by your CRM, cloud accounting or recruitment service? Are you governing shadow IT, which is when your employees are using systems without your knowledge? As a spectrum, this is important.”
Other considerations for organisations relate to their ethical principles and ethics statements around AI. Once determined, these statements will shape their AI policies and allow reflection, engagement and accountability among data teams, stakeholders, compliance teams, the board and the public. Documenting clear guidelines, thoughtful processes and strong incentives across an organisation is also necessary, if sometimes challenging.
“Compliance with standards and regulation is absolutely critical, and this does come up to you as a director,” says Stela Solar MAICD, director of the National Artificial Intelligence Centre.
Given the incredible pace of technology today, the challenge of designing a governance compliance system lies in the day-to-day management of risks. Directors must learn and evolve along with the AI itself.
“Building a process which can evolve with the AI, which is essentially ‘alive’, and checking it on a regular basis is your best defence to the upfront risks and uncertainty of designing a governance compliance system,” says Peter Waters, consultant at Gilbert + Tobin.
What risks do AI systems pose that need to be governed?
Before a governance compliance system can be designed, directors must know what concerns they should be looking for by identifying the source(s) of harm relating to their AI systems.
Based on work undertaken at the Human Technology Institute, Davis and his team have outlined three potential sources of harm that might affect an organisation and its stakeholders:
- AI system failures
- Malicious or misleading deployment
- Overuse, inappropriate or reckless use
Every system has the potential to fail, and AI is no different. You may have systems whose performance simply isn’t good enough, or systems with biased performance that fail only for a subset of people, such as those in a protected category. Some systems may be fragile and fail at the moments when you need them most, or may be insecure, allowing for data breaches.
“These are all sources of harm from AI system failures,” says Davis. “You need to be reassured on the board that you’ve got robust systems.”
Even if an AI system is working well, there may be instances where it is being used in malicious or misleading ways. One example is the Trivago case, brought before the Federal Court in 2018, in which an AI system misled Australian consumers by presenting the “best” hotel recommendation based on the revenue it earned for the platform rather than the quality of the accommodation. Other malicious uses of AI include AI-powered cyber-attacks and even the weaponisation of AI systems.
Davis says these types of systemic ‘dark patterns’ with the use of AI are illegal and expose organisations to reputational and commercial risks, so it is important for boards to keep an eye on the misuse of their own systems.
Inappropriate or reckless use of systems may also result in regulatory risks, such as the unlawful infringement of IP and privacy rights, in addition to sociopolitical, economic and environmental externalities.
While AI systems do require more maintenance and oversight than other systems, there’s no questioning their place in organisations as an essential technological tool.
Why is AI essential to organisations today?
Data gathered by the HTI revealed that approximately two-thirds of Australian businesses are currently using or planning to use AI systems this year. The true figure may be higher, given that many employees are using AI platforms without the knowledge of senior management.
“Thirty to 40 per cent of employees are using generative AI. But more importantly, 68 per cent are not telling anyone about it,” says Solar.
These shadow IT users have come to realise that AI systems add value by making predictions, optimising and inferring, and by generating content.
Company directors wondering where AI systems might fit into their organisations could consider chatbots, predictive text, language translation or any recommender system that delivers value to their customers or clients.
“It’s so important right now for organisations to be implementing a generative AI policy,” says Solar, “so that you can fully embrace this workforce transformation that’s going on.”
This webinar is part of the AI webinar series. You may also be interested in Understanding AI Regulations: A director’s guide and AI governance implementation: The role of the board.