Five AI risk signals boards can’t afford to miss

Wednesday, 18 March 2026

Joanna Nelson GAICD

    The International AI Safety Report 2026 brings together global evidence on the evolution of general-purpose AI, its emerging risks and the effectiveness of current mitigations. Drawing on contributions from more than 100 experts across 30+ countries and intergovernmental bodies, it gives directors early indicators of where risks are already material, where uncertainty remains high and where governance practices are lagging.

    This article distils five signals from the report that are most relevant to boards today. Each is framed in familiar governance terms: controls, exposure, dependence and oversight cadence.
     

    #1 Trust is becoming a control weakness

    The report highlights that people are increasingly unable to distinguish synthetic content from authentic material. AI now enables the scalable creation of highly convincing fake text, images and voices, already used in fraud, impersonation and harassment. In controlled studies, participants misidentified AI-generated text as human-written 77 per cent of the time, and AI-generated voices as human 80 per cent of the time.

    For boards, this undermines long-standing informal controls. Familiarity with a voice, writing style or email pattern can no longer be relied upon to verify identity or authority for high-risk actions. Where these cues persist as implicit controls for actions such as approving payments, changing supplier details or resetting credentials, exposure increases.

    Board considerations

    • Which approvals or changes still rely on trust rather than multi-step verification?
    • What minimum verification standards apply to high-risk actions (payments, supplier changes, credential resets, sensitive data access)?
    • Who is accountable for deepfake readiness, and how is that readiness supported and tested?

    #2 AI is accelerating cyber risk

    The report documents how AI is now used across multiple stages of the cyber-attack lifecycle, from reconnaissance to exploitation. AI-enabled tools, increasingly available in underground markets, are lowering the skill threshold required to identify and exploit vulnerabilities. In one competition, an AI agent identified 77 per cent of real-world software vulnerabilities, ranking in the top five per cent of more than 400 teams.

    While it remains unclear whether AI ultimately advantages attackers or defenders, the speed of discovery and exploitation is clearly increasing. This compresses response windows and shifts expectations around patching, detection and incident response.

    Board considerations

    • Are our patching and detection expectations calibrated for machine-speed threat conditions?
    • Do we treat AI as a material accelerant in cyber risk, or a “future” topic?
    • What controls govern the use of AI tools within cyber defence teams?
       

    #3 Productivity shifts are material and require oversight

    While broader labour-market impacts remain contested, the report highlights consistent task-level productivity gains from general-purpose AI use. Estimates suggest up to 60 per cent of jobs in advanced economies may be affected, with productivity uplifts of 20-60 per cent in controlled studies and 15-30 per cent in real-world trials.

    For boards, the governance issue is not whether AI will be adopted, but how productivity gains are captured without eroding quality, compliance, capability development or customer outcomes. Without oversight, gains may appear quickly but prove fragile, masking rising error rates, rework or skill degradation.

    Board considerations

    • Where should productivity uplift show up first – and what guardrails protect quality, privacy and compliance?
    • How are early-career capability pathways affected as entry-level tasks change?
    • What regular metrics does the board require on uplift, error rates, rework and workforce impacts?
     

    #4 Concentration risk elevates third-party oversight

    The report highlights significant concentration in advanced model development. In 2024, 64.5 per cent of notable models originated in the United States, 24.2 per cent in China and 12.3 per cent elsewhere.

    This is not a geopolitical observation, but a dependency risk. A small number of upstream models increasingly underpin multiple internal systems and third-party products, creating shared vulnerabilities and potential single points of failure. For many organisations, AI exposure is already embedded through vendors, often with limited visibility or contractual leverage.

    Board considerations

    • Have we mapped where AI is embedded across our operations, including through vendors?
    • What contractual levers do we have for transparency, incident notification and material model changes?
    • What is our contingency plan if access, terms, performance or regulatory settings shift abruptly?
       

    #5 AI shapes human behaviour and decision-making

    The report presents growing evidence that AI tools are becoming embedded in the everyday decision environment. Users cite curiosity, stress reduction and companionship as common motivators. At the same time, the report highlights the risk of automation bias, where AI outputs are over-trusted and insufficiently challenged.

    For boards, this raises governance questions beyond technology. AI use intersects with culture, training, duty of care and accountability. In some cases, escalation and review pathways may be weakened rather than strengthened, particularly where AI outputs are perceived as authoritative.

    Board considerations

    • Where might AI influence employment-related decisions – and how is procedural fairness maintained?
    • How is AI use in recruitment and performance management monitored for bias and discrimination risk?
       

    Conclusion: From exposure to advantage

    Taken together, these signals point to a clear board-level opportunity. With appropriate governance, AI can become a controlled advantage – enhancing productivity, resilience, and decision-support – rather than a source of unmanaged exposure.

    The report’s central message for directors is practical: AI capability is advancing faster than most oversight rhythms. Governance must therefore be continuous and repeatable, not a one-off policy exercise. Waiting for perfect clarity is, in effect, a decision to accept risk.

    For most organisations, AI is already present – through vendors, teams and tools. The differentiator will not be adoption, but board-grade oversight. A practical starting point is a regular AI governance cadence: a formal board briefing, a current AI inventory (including embedded vendor AI), clear autonomy boundaries, and quarterly reporting on incidents, near misses, material model changes and high-risk uses.

    More about the author

    Joanna Nelson GAICD is a technology and transformation leader with 30+ years’ experience across government, blue-chip and financial services, spanning delivery, operational optimisation and enterprise change. An active AICD Gold Coast Forum member (MBA, GAICD), she chairs advisory boards supporting organisations making real AI investment decisions under uncertainty, translating technical and regulatory complexity into pragmatic, board-ready governance and action. She also specialises in helping organisations develop actionable technology strategies aligned to business transformation and designed to move beyond intent to execution.

    LinkedIn: www.linkedin.com/in/joanna-nelson

