When boardrooms meet AI jargon, confusion reigns. From mysterious “agents” to “agentic AI”, here’s what directors need to know and why it matters.
Directors may not realise it yet, but the machines around us are evolving fast. Some might not fully grasp what terms like “agentic AI” and “agents” mean, why they matter, or even what questions to ask. Yet the decisions boards make around these technologies could reshape entire industries, influence national security and redefine how organisations operate.
Agentic AI represents a leap in machine autonomy with profound implications for corporate governance.
Dr Amanda Rischbieth AM FAICD, Harvard ALI (Advanced Leadership Initiative) advisory board member and chair of the National Blood Authority Australia, explains that traditional agents “simply carry out pre-programmed tasks, moving from input to output”.
They are reactive, predictable and confined to narrow rules. They automate workflows, retrieve information or interact with systems within tight parameters. Think of an automated chatbot that a bank might use. The customer asks, “What is my account balance?” and the chatbot instantly responds from a knowledge base. That’s a traditional agent in practice.
AI expands on this. Like a calculator, these systems do the computational work, recognising patterns, analysing data and making recommendations — but they still rely on humans to make the final call.
However, agentic AI marks a transformative leap. These systems act with autonomy, set goals, learn from outcomes and coordinate across people, tools and other AI.
“These systems don’t just wait for prompts; they pursue objectives based on broader goals,” says Rischbieth.
Examples from 2025 show agentic AI moving from experiments to enterprise impact. PwC Australia has launched agentic AI-powered professional services solutions on Amazon Web Services Marketplace, designed to address critical business challenges across industries, from data management and cybersecurity to customer experience and operational efficiency. Salesforce’s Agentforce 360 is embedding autonomous AI across global workflows, demonstrating the real-world power of goal-driven AI.
In plain English, agentic AI behaves like a junior executive — planning, adapting and acting.
The potential for company boards is immense. According to Rischbieth, agentic AI’s greatest value lies in insight generation and value creation, given its capacity to scan vast volumes of data — from regulatory filings to media and supply chain signals.
“[It can also] spot weak signals, often earlier than humans,” she says. “That elevates both insight and foresight for directors.”
But while the efficiency of these systems is unparalleled, Rischbieth warns that without clear governance they can introduce operational, ethical and legal risks at a scale boards have never encountered.
“It’s paramount to establish human oversight and build in clear guardrails from the outset — the rules for how, when, where and on what basis they operate.”
Ultimately, agentic AI should be treated as an enabler, not a substitute for human oversight.
Questions directors should ask
→ Materiality
Which decisions or processes are material enough that the use of agents or AI warrants board-level input?
→ Knowledge and skills
Do we, as a board, have sufficient digital literacy to adopt such systems? If not, how do we source that expertise?
→ Opportunity and risk lens
What evidence is there of business value, with clear use cases, ROI and risk controls?
→ Purpose alignment
Does this AI deployment align with our corporate purpose and values?
→ Accountability
Who, internally and externally, is ultimately responsible for AI outputs and outcomes?
→ Transparency
Can vendors (through management’s vendor agreements) clearly explain how the system works, how data is used and disclosed, and what its limitations are?
→ Oversight
What guardrails (rules and protection), audits and independent validations are in place for users and for the technology itself?
Supporting the decision process
AI use should never be mistaken for decision-making; rather, it is decision-enabling — and quite often faster, smarter and cheaper.
“With proper oversight, notwithstanding fledgling regulations to date, AI provides decision support, not decision rights,” says Rischbieth. “Judgement, accountability and responsibility remain with directors.”
Whether using agents, AI or agentic AI, her message is clear — accountability must remain a central concern.
“Even if management adopts agents or agentic AI and their outputs shape board decisions, directors still carry the legal responsibility for outcomes,” she says. “Technology doesn’t dilute fiduciary duty.”
The first line of defence, says Rischbieth, is understanding. This means directors need both AI literacy and fluency.
“They need to understand what AI is and does. They should know enough to ask informed questions, recognise risks, consider opportunities and probe management’s claims.”
Training, workshops and scenario exercises all help. Staying current through credible sources requires a tiered approach that builds on AI fundamentals.
Good AI governance, concludes Rischbieth, rests on the same principles as other forms of sound oversight.
“It’s about data governance from the outset, value creation and risk management. That means checking vendor scope and due diligence, conducting regular audits and stress testing, ensuring explainability and traceability, and keeping an eye on evolving AI standards.”
The rise of agentic AI is not a distant possibility — it’s a present reality that boards cannot afford to ignore. From simple chatbots to autonomous, goal-driven systems, the pace of change demands directors understand not just what these systems can do, but also their limitations and risks.
Boards that fail to grasp these distinctions risk ceding control to systems they do not fully understand — and exposing their organisations to operational, ethical and strategic surprises.
This article first appeared as ‘Secret agents’ in the December 2025/January 2026 issue of Company Director Magazine.
Cautionary tales
Fabrication
In 2025, Deloitte produced a report for the Department of Employment and Workplace Relations reviewing its Targeted Compliance Framework (TCF), used in welfare payments processing. Shortly after publication, the report was found to include fabricated academic references, nonexistent court quotes and other inaccuracies. Deloitte later disclosed that parts of the report had been drafted using a generative AI tool.
Techno fascism
An update to xAI’s Grok chatbot, which temporarily instructed it not to avoid “politically incorrect” claims, led to two incidents in July. Grok responded to a user query by providing detailed instructions for breaking into the home of a policy researcher and assaulting him. Then Grok made a series of antisemitic posts, declaring itself “MechaHitler”.