A toolkit released by the World Economic Forum helps directors develop an ethical approach to using AI, writes Kay Firth-Butterfield.
Global spending on artificial intelligence (AI) is forecast to hit US$98b over the next three years, but only a handful of companies have policies for managing the potential risks.
The World Economic Forum (WEF) worked with more than 100 companies and technology experts over the course of a year to develop its AI toolkit — Empowering AI Leadership: An Oversight Toolkit for Boards of Directors. It was launched at a WEF meeting in Davos, Switzerland, earlier this year.
Companies will play a significant role in how AI impacts society. Yet, our research found that many executives and investors do not understand the full scope of what AI can do for them. Nor do they understand what parameters they can set to ensure their use of the technology is ethical and responsible.
AI requires boards’ attention because it affects every aspect of their oversight duties. For example:
- Strategy is often influenced and executed by AI technologies. AI's impact on strategy will grow as it shapes lives, customer expectations, markets and the supply chain.
- AI will affect financial reporting as it is put to work to process financial data.
- AI amplifies existing ethical issues and creates new ones that boards should heed.
The management of the data, algorithms and people involved in AI requires governance mechanisms for decision-making that are consistent with and assist the organisation’s overall governance.
Picking up the tools
The WEF toolkit warns that any failure to consider and address these issues and concerns could drive away clients, partners and employees — and that there may be legal and regulatory consequences. The toolkit recommends:
- Boards ensure that ethics matter by hiring ethical executives and holding them accountable.
- Boards set ethical standards, without which they cannot discharge their responsibilities.
- Boards protect whistleblowers.
The ethics section offers five tools, including the AI principles development tool, which helps directors and AI ethics boards develop an AI ethics code.
Built with the structure of the board meeting in mind, the toolkit aligns 12 learning modules with traditional board committees and working groups, including audit, strategy, cybersecurity, people and culture, and risk. It aims to help companies make informed decisions about AI solutions that protect customers and shareholders.
The toolkit was co-created by the Centre for the Fourth Industrial Revolution network fellows, with contributions from the AICD. The AICD has been an active contributor to WEF artificial intelligence toolkits, in line with its view that directors can be challenged by AI and its impact on business, industry, the economy and society. This toolkit provides a global perspective that will help Australian directors learn from, and share experiences with, the global best practice the WEF can provide.
Seven modules focus on strategy oversight and the responsibilities connected with it. They cover: brand, competition, customers, operating model, people and culture, technology and cybersecurity. The other five modules cover additional board oversight topics: ethics, governance, risk, audit and board responsibilities.
Kay Firth-Butterfield is head of AI and machine learning at the World Economic Forum.