
    Government agencies here and overseas are recognising the regulatory potential of AI, but the necessary safeguards are proving slow to emerge.

    Late in October 2023, US President Joe Biden issued an executive order (EO) on Safe, Secure and Trustworthy Artificial Intelligence. While most of the order concerned the development and use of artificial intelligence (AI) by the private sector, it included a section on ensuring “responsible and effective government use of AI”.

    Two days later, Australia was one of 28 countries to sign the Bletchley Declaration on AI Safety in the UK, affirming that “AI should be designed, developed, deployed and used in a manner that is safe... human-centric, trustworthy and responsible”. While recognising that “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed” as AI continues to develop, the Bletchley Declaration did not single out the risks associated with the use of AI by government agencies, including for regulation.

    For several years, government agencies in Australia have talked up the benefits AI might bring in streamlining and improving regulatory decision-making across a range of sectors. The use cases range from processing applications for licences and approvals, to targeting and conducting compliance monitoring more efficiently, to “predictive policing” that identifies and disrupts potential law-breaking. Regulators could also use AI to analyse market information and the mandatory reports or returns filed by regulated entities, identifying patterns of behaviour within or across a sector that justify a policy intervention or enforcement response. But as the EO recognises, individual regulators cannot move forward without clear, robust guidance to ensure that both they and the checks and balances under which they operate adequately protect the rights of regulated entities and individuals.

    Assisted decision-making

    In Australia, many Commonwealth regulators across different sectors are empowered by their legislation to use assisted decision-making (ADM), including with AI. In 2004, the Administrative Review Council formulated best-practice principles on automated assistance in administrative decision-making for the Commonwealth. The Commonwealth Ombudsman developed a “better practice guide” in response, released in 2007 and updated in 2019.

    Explicit statutory recognition of the use of ADM in government soon followed. For example, the Therapeutic Goods Act 1989 (Cth) was amended in 2009 to allow the departmental secretary to “arrange for the use, under the secretary’s control, of computer programs for any purposes for which the secretary may make decisions under this Act or the regulations”, with those decisions treated as decisions of the secretary. The National Consumer Credit Protection Act 2009 (Cth) similarly allowed ASIC to make decisions in its credit jurisdiction by computer program.

    The idea spread and the language modernised. The new business register legislation, inserted into the Corporations Act 2001 (Cth) in 2020, allows the registrar to “arrange for the use, under the registrar’s control, of processes to assist decision making (such as computer applications and systems) for any purposes for which the registrar may make decisions in the performance or exercise of the registrar’s functions or powers under this Act, other than decisions reviewing other decisions.”

    However, the introduction of similar provisions for AUSTRAC in October 2023 points to growing recognition of the complexities inherent in this form of regulatory decision-making and its interaction with administrative law principles. The explanatory memorandum to the Crimes and Other Legislation Amendment (Omnibus) Act 2023 (Cth) included a statement that “the operationalisation of the provisions is intended to be accompanied by sophisticated internal business rules and quality assurance processes to ensure computer assistance is not used to make discretionary decisions, and to ensure that high-risk decisions (such as those that could conceivably lead to an adverse outcome by a person affected by the decision) will continue to be made in the first instance by the AUSTRAC CEO or delegated officers.”

    Stalled process?

    As legislative facilitation of regulatory ADM increases, the need for clear guidelines becomes more pressing. But a comprehensive response to the use of AI and ADM in government seems to have stalled. In 2021, a legal audit of AI in the public sector by the ANU Humanising Machine Intelligence project concluded that the “legal rules that currently apply to government use of AI lag behind technical advancements in AI, fail to explicitly regulate the potential harms of AI, and use ‘soft’ rather than ‘hard’ law”. Also in 2021, the Digital Technology Taskforce in the Department of the Prime Minister and Cabinet (PM&C) published an issues paper entitled Positioning Australia as a leader in digital economy regulation: Automated decision making and AI regulation. Several submissions in response drew attention to the potential benefits of improved efficiency, consistency and accountability. However, the Law Council cautioned that public sector use of AI and ADM should be properly regulated to “ensure that it is employed consistently with administrative law principles, which underpin lawful decision-making — lawfulness, fairness, rationality and transparency”.

    Policy responsibility for the digital economy was then transferred from PM&C to the Department of Industry, Science and Resources, and the process restarted. The new department produced a discussion paper, Safe and Responsible AI in Australia, in June 2023. The discussion paper referred to the use of AI and ADM by government and noted the Digital Transformation Agency’s October 2022 work on public sector adoption of AI. The new AI in Government Taskforce is now undertaking a survey across the Commonwealth “to identify and map the extent to which automated decision-making... is relied on in the delivery of government services and payments, and the supporting legislative basis”, with results due in 2024.

    Proper guidelines for government agencies’ use of AI and ADM in regulatory decision-making are urgently needed. Developing them will help protect the rights of regulated entities and individuals, including by ensuring administrative law controls operate effectively, and will demonstrate best practice for private-sector entities that use new technologies to make decisions directly affecting others.
