A great deal has changed since ‘Humanitarian AI: the hype, the hope and the future’ was published in November 2021. At that time, only a handful of humanitarian actors were exploring the potential of artificial intelligence (AI) and machine learning (ML), and these experiments were largely led by agencies that had the requisite staff, data, cash, and/or relationships with technology firms to do so. Indeed, identifying enough individuals to interview for that paper who had both an interest in and a basic understanding of AI and humanitarian action proved challenging.

Then, in November 2022, OpenAI released ChatGPT, arguably triggering the latest AI boom. Big Tech firms raced to release their own generative AI models. McKinsey estimated that generative AI ‘could add trillions of US dollars in value to the global economy’. Comedians, writers and media companies took to the picket lines and the courts. Some of the world’s leading AI engineers and pioneers became doomsayers. And it became seemingly impossible to read, listen to or watch something about AI without hearing the words ‘inflection point’.

As Bill Gates suggested, it seems that ‘the Age of AI has begun’. And humanitarian actors have taken note.

This paper attempts to cut through some of the recent AI hype and offers suggestions on how AI can be safely and ethically adopted and deployed in support of humanitarian action. It focuses solely on the deployment or use of AI systems rather than on other parts of the AI life cycle, such as the ethical design and development of AI algorithms and systems; the latter has been well covered in a range of industry and academic papers and sits outside the scope of this paper.

It also provides insights into how humanitarian actors are using AI and algorithmic tools, as well as the risks of doing so, many of which are similar to, if not the same as, those highlighted in the 2021 Network Paper.

It focuses primarily on how AI is being integrated within existing humanitarian systems and adopted by large, multinational humanitarian agencies. The conclusions and recommendations in this paper may therefore prove less relevant for smaller organisations or new market entrants, such as public–private ventures or social enterprises that offer so-called ‘tech for good’ services. This specific focus is not intended to imply a value judgement; rather, it reflects the aggregate fiscal and political power that a dozen or so of the largest aid agencies wield, and thus the collective weight of their decisions related to AI.

Finally, the paper offers reflections on the role of international AI standards, as well as recommendations on AI procurement and governance, including assurance tools to establish trustworthy AI – that is, AI systems that generate reliably accurate results.
