Developing an effective and realistic AI strategy is vital at a time when many are polarised, seeing the potentially transformational technology as either an exciting opportunity or a dangerous risk.
“AI has reached peak hype,” says Sandra Peter, associate professor at the University of Sydney Business School and facilitator of the USyd-AICD AI Fluency sprint. Expectations are soaring, giving many senior leaders pause for thought. Notably, the president of a leading global tech company recently admitted to Peter that he knew how to apply AI to a problem, but not how to reorganise his company to benefit from the technology.
“How can our organisation take advantage of AI?” is a question she’s frequently asked. It’s a strategy question, not a technology question, insists Peter, and one that’s increasingly pressing.
As co-director of Executive Plus, the university’s digital-first learning initiative, Peter and her team have upskilled some 2000 business leaders, including many board directors, from more than 30 industries across Australia and globally, in AI Fluency sprints. She says all directors and other senior leaders need a baseline understanding of the probabilistic technology — which can reason and wrangle uncertainty — and of what it can do differently.
In the organisational quest to find strategic direction with AI, it’s important to think beyond how it applies to the way a business currently operates. Peter says leaders must consider how the most transformative technology of our lifetime might change the shape of their entire organisation and potentially shift their market position. The big challenge is not knowing where they’ll end up.
One of the most profound shifts for business leaders engaged in the AI strategy process, for instance, is the need to take a two-lane approach to return on investment (ROI). “First, there is AI for productivity and efficiency. This is where AI adds value and we can easily measure the ROI,” says Peter, noting these are also the popular kick-off points for Australian businesses.
“The second is AI for transformation, where we reimagine products or services, or embark on AI-first market disruption. This is where AI can create new value and where we need to redefine ROI. It’s much harder to devise long-term strategy when you don’t know upfront what to measure.”
A well-reported confidence gap exists in Australia, substantially driven by uncertainties over the risks — from data quality, biases and hallucinations to company reputational damage and the long-mooted threats of AI replacing jobs.
A new global study on trust in AI has found half of Australians use it regularly, but only 36 per cent are willing to trust it, with 78 per cent concerned about negative outcomes. The KPMG–Melbourne Business School survey shows only 30 per cent of Australians believe AI benefits outweigh the risks, the lowest ranking of any country.
Have we moved on? Peter anticipates transformation will be in sharp focus in coming years as organisations that have already piloted AI via small proofs of concept begin scaling it. Scalability is an imperative, with high risk from inaction, emphasises Peter, as AI uptake proliferates among business competitors, suppliers, customers, government and broader society. “If you don’t do it, it will be done to you.”
Critical to every business strategy — and board members finding their AI mojo — is prioritising continuous education and getting hands-on. Peter recommends starting at the top. “Firstly, directors, executives and leaders need to master AI, not only for individual productivity, but also for value-based use cases and to reimagine processes and services. Secondly, product and project leads, and unit managers need individual productivity and value-based use cases. Everyone else needs to upskill around personal productivity.”
Responsible AI: the catalyst for innovation
Fostering business confidence starts with adopting a safe and responsible AI governance framework as the essential handbrake on the risks worrying many. But will it be enough?
Responsible AI defines how the technology should be designed, developed, deployed and used in a way that’s human-centred, trustworthy and accountable. AI systems should provide benefits and minimise the risk of negative impact to people, groups and wider society.
Useful guidance on the fundamentals of responsible AI and related governance for Australian businesses is now widespread, notes Professor Jeannie Paterson of the University of Melbourne Law School and director of the Centre for AI and Digital Ethics. On her list: the federal government’s first Voluntary AI Safety Standard, encompassing 10 guardrails for all organisations covering governance across the AI lifecycle, plus more specifics in its Policy for Responsible Use of AI in Government. Standards Australia has also issued technical guidance on AI safety systems.
Regulators ASIC, APRA and the ACCC, along with the Privacy Commissioner, have all effectively said what’s needed, notes Paterson. “Directors can now make the available guidance bespoke for their businesses and look to embedding responsible AI practices organisation-wide.”
Hanging out for a hard AI law that goes beyond today’s voluntary guardrails may be time wasted, she says, not because it won’t happen, but because “hard law doesn’t always bring compliance”. Ensuring everyone keeps AI safe more likely depends on organisational culture, she suggests. “All the outputs we want from AI — compliance, productivity, creativity and innovation — come from having a shared culture and understanding of what the technology can do, why we’re using it and how to manage the risks.”
Ethics frameworks and popular organisational values — inclusion, equity, fairness and privacy — inform the development of the technology. Transparency is crucial, says Paterson. “We talk about AI being a black box because we don’t know how it makes outputs using thousands of data points and neural networks, but we do know we’ve deployed it, how it’s being used, we’re overseeing and testing it.”
AI literacy is a “no-brainer” across the workforce, both to drive innovation and to avoid boards being overly prescriptive on governance. Be sure to choose a course that matches your values, she adds. Some AI-leading corporations now have responsible AI board committees focused on policy, on how and where systems and processes are implemented, and on what risk assessment tools are being used.
While the board puts AI safety in the frame, responsibility traverses the organisation. “We talk about having a human in the loop or a designated person responsible for the AI system,” says Paterson. “When everyone has AI in their phone or computer, that’s like putting a finger in the dam.”
Since the release of ChatGPT in 2022, people have been using GenAI at work, with or without permission — 25 per cent of respondents to a recent Microsoft survey said they used public GenAI at work. “There’s also a contingent who don’t want to use it at all,” says Paterson. “These sceptics will be helpful in driving responsible AI in organisations.”
She believes a diverse mix of enthusiasts and those with fears or concerns is needed on teams and committees, or involved in the increasingly prevalent “communities of practice” (CoPs) where people share AI experiences, insights and tips.
Questions to ask on business risk
- What business problem are we solving and how will AI specifically help? Is there a clear, valuable use case? Why is AI the right tool over traditional analytics or automation?
- What data do we need and is our data ready for AI? Do we have enough clean, relevant and unbiased data? Who owns the data and are there privacy, security or regulatory concerns? How will the data be collected, maintained and governed?
- Do we have the right people, skills and leadership in place to deliver and govern AI effectively? Is there a cross-division team with business, data and AI expertise? Who is accountable for the project’s success? How will we ensure ongoing governance, oversight and continuous improvement?
Questions to ask on technical risk
- What kind of AI model are we using and why was it chosen? Is it a machine learning model, a large language model or a rules-based system? Why was this architecture or approach chosen for the problem?
- How will the model be monitored, maintained and updated? Who is responsible for model performance over time? How will we detect performance drift or model degradation? Is there a plan for retraining the model as data or business conditions change? (A brief illustrative sketch of what such monitoring can look like follows this list.)
- How are we validating the model’s performance? What metrics are we using to test accuracy, reliability or fairness? Are we compliant with relevant standards? Have we conducted an AI ethics or bias audit?
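For boards that want a concrete sense of what the drift and validation questions above translate into operationally, the sketch below shows one minimal way a data team might report on a deployed model. It is illustrative only: the metrics, the baseline figure, the tolerance threshold and the model object are all assumptions for this example, not prescriptions, and real monitoring regimes will be shaped by the organisation’s own risk appetite and standards.

```python
# Minimal, illustrative sketch of ongoing model validation and drift checking.
# All figures and names here are assumptions for illustration, not recommendations.
from sklearn.metrics import accuracy_score, f1_score

BASELINE_ACCURACY = 0.92  # accuracy recorded when the model was approved (assumed)
DRIFT_TOLERANCE = 0.05    # acceptable drop in accuracy before escalation (assumed)

def evaluate_model(model, X_recent, y_recent):
    """Score the deployed model on a recent, labelled sample of production data."""
    predictions = model.predict(X_recent)
    return {
        "accuracy": accuracy_score(y_recent, predictions),
        "weighted_f1": f1_score(y_recent, predictions, average="weighted"),
    }

def check_for_drift(metrics):
    """Compare current accuracy against the approved baseline and flag degradation."""
    drop = BASELINE_ACCURACY - metrics["accuracy"]
    if drop > DRIFT_TOLERANCE:
        return f"ALERT: accuracy down {drop:.2%} from baseline; review or retrain"
    return "OK: performance within the agreed tolerance"
```

For directors, the value is less in the code itself than in what it makes visible: a named baseline, an agreed tolerance and a defined escalation path when performance degrades.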
GenAI: the ubiquitous transformer
GenAI completely changes the nature of how we develop skills and capability as human beings, says Lee Hickin, the recently appointed executive director of the Australian National AI Centre (NAIC) and former Microsoft ANZ CTO. “It’s an entirely new paradigm of communication with technology. I no longer have to figure out how something works, instead I can talk to GenAI and ask it to tell me.”
From inside the Department of Industry, Science and Resources, the NAIC is engaging with Australian industry and looking to seize AI opportunities. This includes rethinking old ways of working, with some using off-the-shelf AI products to give employees access to quick accelerators. Then there are the increasing numbers striving to think big to realise the transformational promise of AI.
From Hickin’s vantage point, arguably the more impactful and overarchingly transformative outcomes for organisations deploying GenAI will happen incrementally over time, from individual use. “It’s the self-improvement outcomes — the change in the way humans can skill themselves to do new, better, more productive things in their jobs and lives — that are so powerful,” he says. “As humans, we operate in chains of process and it’s not always easy for individuals to see the end-to-end system. AI helps me get better at contributing my piece of the process, by bringing more to the thinking and conversations in my industry. I get more skilled in the end-to-end, more able to navigate knowledge and make better decisions.”
While many use cases for GenAI to date have involved time-saving efficiencies, Hickin sees the future big-bang effect coming from the quality uplift in broader knowledge work, which applies equally for directors, executives and employees.
Traditionally a tech frontrunner, Australia has been slow out of the blocks with its “considered approach” to AI. But Hickin believes it will keep showing the world how to adopt AI in impactful ways, noting the success of AI innovators such as ToothFairy.AI, with its agentic business process tools, and Leonardo.AI, which creates AI art assets.
NAIC’s remit is to spur the uptake of AI for productivity and growth. AI developments are “moving fast like a bullet train” and many may think it’s impossible to jump aboard, says Hickin. “What’s critical is not to get stuck on finding a problem to solve with AI. Focus on existing challenges and market opportunities, and look to see how AI can help.”
To directors, he says, “Don’t get caught up in the narrative that AI is one thing. There’s a rapidly evolving AI ecosystem of possibilities to learn more about, from generative, machine learning and statistical-based systems to new products and research. The important takeaway is to be engaged and ask questions.”
This article first appeared under the headline 'Wrangling future uncertainty' in the July 2025 issue of Company Director magazine.
Contemporary governance resources
AICD’s Policy team supports members with guidance on governance issues, including:
- AI Fluency for Directors Sprint
- Directors’ Guide to AI Governance