    Dr Mahendra Samarawickrama GAICD details innovative contributions by the Australian Red Cross to AI governance — uplifting ESG strategies while driving AI for sustainability.


    AI can contribute more than US$15 trillion to the world economy by 2030, estimates PwC. It’s clear that the proper and ethical governance of this rapidly evolving technology is fundamental to a board’s role, given its strategic significance for economic prosperity. However, despite the tremendous opportunity, AI creates high risks for humanity, including autonomous weapons, automation-spurred job losses, socio-economic inequality, privacy violations, deepfakes and bias caused by data and algorithms. According to Gartner, 85 per cent of AI projects fail due to bias in data, algorithms or the teams responsible for managing them.

    From an ethical perspective, AI can support 79 per cent of the United Nations Sustainable Development Goals (SDGs). These goals are targeted for achievement by 2030, driving the predicted economic prosperity through sustainability. To support organisational responsibility towards these goals, a set of standards, frameworks and metrics known as environmental, social and governance (ESG) has emerged as the new strategy for organisational responsibility.

    As part of the world’s largest humanitarian movement, operating in more than 190 countries, the Red Cross mission is to prevent or reduce human suffering wherever it is found. Naturally, the Red Cross is concerned about how AI should be governed to enhance social justice by mitigating AI risks and driving its benefits.

    AI governance toolkit

    AI’s complexity can seem like a roadblock to developing effective AI governance. Its autonomous decision-making capability and its influence on people’s decision-making are unlike any other technology. Hence, AI governance is entangled with human ethics, which must be recognised wherever AI is applied or exerts influence.

    At the Red Cross, we investigated a framework to effectively address the “why/how/what” questions of AI governance — an approach similar to the Golden Circle model (which explains how leaders can inspire cooperation, trust and change in a business).

    This framework helps directors find answers to the questions they should ask when governing AI. The Australian Red Cross Data Science and Analytics (DS&A) team initially designed these schematics to enhance International Federation of Red Cross and Red Crescent Societies (IFRC) and Australian Red Cross volunteering, partnering and business initiatives, aligning them with AI, ESG and the SDGs.

    Firstly, the Red Cross KITE abstraction framework helps directors define the purpose of AI initiatives and address key success factors. Secondly, supporting the KITE abstraction framework, the wind-turbine conceptualised model helps organisations develop a comprehensive AI strategy. Together, they help drive AI for sustainability in a more structured, systematic, transparent and collaborative way.

    KITE abstraction framework

    The KITE framework helps directors govern AI by aligning it with the broader ESG purpose — fundamentally, the “why” aspect of the Golden Circle. Irrespective of the complexity of the AI application, the KITE framework analyses four key dimensions: AI, the organisation, society and sustainability.

    Wind-turbine conceptualised model

    The front-facing, multi-blade rotor represents the values and policies (the seven fundamental principles of the Red Cross — humanity, impartiality, neutrality, independence, voluntary service, unity, universality) which ethically and efficiently address humanitarian needs, risks and suffering. The wheels in the gearbox represent the community, partners and volunteers who continually contribute to diversity, equity and inclusion. Finally, the generator represents the data and AI capabilities that drive AI innovation and transformation for sustainability. In summary, directors can oversee the full spectrum of AI processes, stakeholders and management.

    The KITE framework helps directors address the “why” aspect, while the wind-turbine model helps them address the “how” and “what” aspects of AI governance. Together, they help directors oversee AI processes that support social justice through social diversity, equity and inclusion. From a Red Cross perspective, this model directs AI initiatives towards humanity and the SDGs to minimise human suffering. Further, the model helps oversee Red Cross leadership and guidelines — represented by the tail of the turbine.

    Case studies

    Through the KITE framework and the wind-turbine conceptualised model, the Australian Red Cross enhances diversity, equity and inclusion in AI initiatives by engaging with partners, volunteers and the community. Our initiatives aim to help society while minimising stakeholder bias. We established the volunteer DS&A team with industry collaborators focused on AI for sustainability and transformation, which helped us build the resources and capabilities needed to support ESG goals. Citizen scientist programs enable social justice by bringing AI and technology equity to multicultural communities. In the following case studies, we used our framework to engage with the community, volunteers and partners for social justice and sustainability.

    Building climate awareness and resilience to mitigate social risks and vulnerabilities is an important research focus within the AI governance framework. We focused on climate change as a major threat to the health of First Nations peoples and their ability to sustain cultural heritage. We adapted the AI governance framework to mitigate their climate change risks by mobilising them as citizen scientists — delivering tech equity. Our program enabled partnerships with universities to develop solutions supporting citizen scientists, and we mentored future leaders to drive innovation aligned with national reconciliation.

    Guided by the AI governance framework, we strategically enhanced social diversity, equity and inclusion in AI processes by positioning some of our volunteers in AI projects.

    We established the DS&A team to enable the community, volunteers and partners to collaborate with the Australian Red Cross on AI-for-good programs. With this social diversity, our AI projects became more representative, diverse, equitable and inclusive, complementing women-in-AI initiatives. Diversity mitigates bias in decision-making and AI development processes.

    Our framework contributed to unifying social justice initiatives across emerging data-centric technologies and the SDGs. The framework was recommended at the UN World Data Forum as a solution for realising the SDGs by leveraging data and AI capabilities. Our climate-smart initiative, driven by the AI governance framework, was a finalist in the Australian IoT Awards 2021 (Diversity, Equity & Inclusion in Action).

    The IFRC has adapted this AI governance framework to enhance data literacy and ethics, govern data-centric emerging technologies and mitigate data risks. The framework aligns IFRC Strategy 2030, which addresses 21st-century challenges, with the SDGs, ESG and the Red Cross’ seven fundamental principles. This drives AI innovation and transformation for social justice and sustainability, and enhances customer experience and trust.

    In recognition of this, the Australian Red Cross DS&A team has been nominated for the ACS Digital Disruptors Awards 2022 and was a finalist in the Ashton Media Best Use of Technology to Revolutionise CX award 2021. It also contributed to winning the Ashton Media CX Team of the Year Award in 2021.
