US President Joe Biden has issued the country’s first executive order (EO) designed to impose national rules on the fast-moving technology of AI and to ensure “safe, secure and trustworthy” artificial intelligence across 11 crucial sectors. The EO is the biggest single effort to regulate a technology that has seen exponential growth in recent times.
With powerful generative AI models making headlines across the globe, as millions of people tap into the myriad uses to which they may be applied, cyber security and privacy have been flagged by the Biden administration as requiring governmental attention.
The order will create several taskforces and new government offices that will set standards to ensure data privacy and cybersecurity are applied to nearly every area of life related to the federal government, such as health, education, housing and labour.
“The focus on generative AI shows that even the most innovation-focused governments are willing to put additional requirements on technologies that can produce significant risks,” says Nicholas Davis, NSW-based Industry Professor, Emerging Technology, and Co-Director of the Human Technology Institute.
Davis says the executive order shows that governments are increasingly looking to set firm standards for their own use of AI systems, which will greatly influence industry via procurement rules and set a high bar of good practice.
Under the EO, developers of powerful AI models will provide regular reports to the Commerce Department outlining how they plan to protect their technology from espionage or digital subversion.
Professor Edward Santow, Co-Director of the UTS Human Technology Institute in Sydney adds, “President Biden’s Executive Order recognises that it’s not enough for an organisation just to commit to ‘behaving ethically’ in how they develop and use artificial intelligence. Organisations need to take a rigorous approach.”
Though they cannot be enforced by law, the new guidelines are validated by the recently announced launch of a set of International Guiding Principles for AI and an International Code of Conduct produced by the governments of the Group of Seven (G7).
Principles to guide good conduct
Days prior to Biden’s announcement, the UN established its multi-stakeholder Advisory Body on AI Governance, designed to encourage international cooperation in the effective governance of artificial intelligence. The principles accompany a Code of Conduct that can help governments and businesses navigate the AI landscape on the road to the safe and responsible use of artificial intelligence models. From this perspective, Biden’s executive order may act as a blueprint from which other countries can draw in setting their own agendas for best practice in the responsible and ethical use of AI.
Opportunity for directors
Every organisation relies on different AI models, from generative AI to automated responders for online customer support and automated insights for data-driven industries. In all cases, the onus is on directors to create risk management frameworks to mitigate issues that could easily spiral out of control if not given ample attention.
“As Australian firms increasingly use AI to both innovate and drive efficiencies, directors should see these regulatory signals as motivation to ensure their AI governance systems are fit-for-purpose and that their use of AI systems is compliant with the full range of existing obligations and customer expectations,” says Davis.
“The way the executive order uses standards to impose obligations is an important signpost. Locally, Standards Australia is closely involved in developing forthcoming ISO AI standards that will influence Australian regulation.”
Pamela Hanrahan, Professor of Commercial Law and Regulation at UNSW Business School, refers to the US policy development in her latest Directors Counsel column for the December 2023 issue of Company Director magazine. “Clearly, as legislative facilitation of regulatory ADM increases, the need for guidelines becomes more pressing. But a comprehensive response to the use of AI/ADM in government seems to have stalled.”
Policy responsibility for the digital economy now sits with the Australian Department of Industry, Science and Resources, which produced a discussion paper called Safe and Responsible AI in Australia in June 2023. The new AI in Government Taskforce is now undertaking a survey across the Commonwealth to map the extent of automated decision-making in government, with results due in 2024. “Proper guidelines for government agencies’ use of AI and ADM in regulatory decision-making are urgently needed,” writes Hanrahan.
“Their development will help protect the rights of regulated entities and individuals – including by ensuring administrative law controls operate effectively – and demonstrate best practice for private sector entities that use new technologies to make decisions that impact directly on others.”
For all intents and purposes, the principles and standards laid out by the G7 are a non-exhaustive, evolving framework that will develop as businesses and the world learn more about AI and its potential risks. They will encourage organisations and governments to consider best practices in their use of AI to ensure safe, secure and trustworthy systems that change with the tide of technological advancement in the field.
“The lesson for Australian company directors is that leading organisations need to understand the risks associated with AI and put in place effective safeguards to address those risks,” says Santow. “But they should also take an extra step—they should assume that those safeguards will be imperfect, and having effective accountability measures that protect their internal and external stakeholders is critical.”
For more on AI in a governance and risk management context, visit our Innovative Technology page.