What directors need to know about governance challenges in Australian SaaS


    If SaaS startups are to emulate home-grown success stories such as Canva and Atlassian, Australia should be more willing to embrace risk. Boards need to be mindful of restrictive boundaries while adhering to good governance practice.


    Atlassian, Canva, Culture Amp, Employment Hero — some of Australia’s best-known software-as-a-service (SaaS) companies have achieved international success. Others focus on niche domestic markets such as government, education or healthcare. Along with the hurdles confronting every startup, they have all faced uniquely Australian challenges.

    Jared Hill, vice-president and head of cloud and custom applications at Capgemini Australia and New Zealand, identifies scarcity of talent, limited resources, the comparatively small size of the domestic market and fewer research and development (R&D) incentives. In 2021–22, Australia’s gross expenditure on R&D was 1.68 per cent of gross domestic product, compared with the OECD average of 2.7 per cent.

    Local SaaS startups can also find it particularly difficult to fund customer acquisition, product development and international expansion.

    “Some angel investors are sending money offshore due to a lack of high-potential local opportunities, or less attractive tax incentives in Australia,” says Hill. “More generally, Australian investors tend to favour proven and revenue-generating business models, which makes it harder for local startups to access capital than those based in less risk-averse environments, such as Silicon Valley.”

    He believes that culturally, Australia should be more willing to embrace risk.

    “In the US, startup failure is considered a badge of honour, because you tried,” he says. “Here, it’s still taboo. But if you don’t take risks, the downside is clear — you get left behind.”

    In the meantime, many SaaS founders are forced to bootstrap their launch and initial growth by drawing on their savings, personal equity and lines of credit.

    “From there, they might move on to venture capital for scaling and non-dilutive R&D tax incentives for innovation,” says Carl Prins, co-founder and CEO of SaaS provider Pathzero. “Revenue-based financing aids cash flow, so directors should prioritise annual recurring revenue metrics to attract investment.”

    Competition and regulation

    For startups outside major cities, uneven regional infrastructure can add the risk of service disruptions.

    “Unlike sectors focused on physical assets, SaaS boards face direct risks like cybersecurity breaches and downtime,” says Prins. “Indirect risks arise when clients rely on SaaS for compliance, such as accurate, auditable emissions data. Third-party data dependency and regulations like data sovereignty demand agile governance to avoid penalties and uphold client trust.”

    Across the sector, there is also fierce competition from overseas.

    “While international companies bring benefits such as job creation, tech advancements and a stronger ecosystem, boards must counter competition for talent and other pressures,” says Prins. “For example, global giants can squeeze local market share by leveraging their scale for the rapid rollout of new features and aggressive pricing.”

    Technology is borderless and legacy platforms are vulnerable.

    “It’s now cheaper and faster than ever for competitors — or their agentic AI platforms — to build better, more efficient alternatives,” says Hill. “If you’re not iterating and adapting quickly, you’re at risk of being outpaced.”

    Boards must also contend with SaaS regulation that is both global and fragmented.

    “While the EU is tightening rules with, for example, the EU AI Act, other jurisdictions — especially the US — are lighter-touch,” says Hill. “Consent and data use are also increasingly complex, especially when AI interfaces with customers. Boards need to think beyond basic privacy and ask when customer data is being used to train models, and what consent is required.”

    Governing agentic AI

    George Khreish MAICD, now managing partner APAC at global IT research and advisory firm Info-Tech Research Group, explored early automation and machine learning to optimise workforce management a few years ago, when he was CEO of a SaaS workforce management company.

    “At that time, the autonomous decision-making systems we now know as agentic AI weren’t commercially viable,” he says. “They were costly, brittle and hard to scale beyond narrow use cases. Today, they’re transforming the landscape as AI agents can operate independently, adapt to changing inputs and make decisions in complex workflows.”

    They’re also transforming the way SaaS companies are governed.

    “Agentic AI’s autonomy shifts accountability to the very top of the organisation,” says Khreish. “CEOs and boards can no longer treat AI governance as a purely technical issue. They must own it as a strategic and ethical responsibility.”

    Risks and opportunities

    Guided well, agentic AI is creating unprecedented opportunities for faster decision cycles, richer insights and entirely new product models, as well as improved efficiency, innovation and customer value. The trade-off is unprecedented risk.

    “Agentic AI can have a black box problem, where internal decision-making processes are obscure and unintelligible, even to their creators,” says Khreish. “It can also magnify errors at speed, expose sensitive data and cause over-reliance on a few providers. Quantum computing will accelerate both the potential and the complexity, making agentic AI even harder to oversee and control. That’s why governance needs to be proactive, built into leadership decisions from day one and designed to capture the upside while managing the downside.”

    For Valence Howden, advisory fellow at Info-Tech Research Group, accountability is at the heart of emerging governance issues. “Deciding whether issues arising from the use of agentic AI are tied to the developer, the provider, the consumer — or a combination of all three — will remain a challenge,” he says.

    Khreish considers ethical boundaries to be shared, but not equal. “Providers must build systems with safeguards, transparency and bias mitigation from the ground up, while customers must apply these tools within legal, ethical and values-based frameworks,” he says. “For CEOs and boards, the task is twofold — harness the technology to create competitive advantage and societal value, and ensure its use reflects the organisation’s duty of care, including concrete risk controls such as safety-by-design, human-in-the-loop for material decisions, monitoring for drift and abuse, incident disclosure and clear liability allocation with vendors.”

    Dynamic, adaptive governance

    To stay effective, the different aspects and layers of governance must remain aligned.

    “That means governance has to be dynamic and adaptive,” says Howden. “There are weak points across the entire landscape, often at the human layer. This will become performative as scaling up increases pressure on individuals, organisational velocity maintains the need to move fast and overwork leaves people too tired to pay attention. As an accelerator, quantum will force governance to be woven and embedded into all work. It will not be effective if it’s cobbled together after the fact.”

    Howden sees ethical use as particularly complex because ethics are far from standard. “They vary by society, country and belief system,” he says. “We don’t know which ethical perspectives will be incorporated into AI, or how these will be impacted by real-world inputs, given that AI simulates care and empathy without the ability to care or have empathy. There’s a likelihood that ethical values will drift and a re-injection of values may not hit the same belief system.”

    If they are to work, the regulatory drivers influencing both governance and ethics need to be more cohesive.

    “A lack of global agreement on the baseline ethics of AI will make ethical use harder to govern, as will ethical biases that come from language, which are difficult to identify and exclude,” says Howden.

    “Customers will be accountable for the ethical use of the agentic AI products they use, but will not necessarily have the ability to impact what providers have injected into the system. They will also have limited control of how agentic AI responds when they inject their own ethics into the mix.”

    This article first appeared as 'SaaS risk return' in the November 2025 issue of Company Director Magazine.
