The World Food Programme (WFP) has for several years been exploring AI applications including computer vision, machine learning, natural language processing, chatbots and robotics to better predict, assess and respond to emergencies.
Using AI to augment how WFP conducts needs assessments is one area with many use cases, given the availability of data that can be used to train algorithms. For example, WFP has partnered with Google Research to create SKAI, an AI-powered tool that automatically analyses satellite images to assess disaster damage within 24 hours, a task that would have taken weeks with manual annotation methods. The advent of micro-satellites means that large volumes of imagery are now available at low cost, enabling AI-driven computer vision applications that can rapidly ingest raw imagery and produce accurate disaster maps. SKAI has been tested on earthquakes, wildfires, tropical cyclones and conflict zones, with an average accuracy of 85%. DEEP, another WFP AI tool, does the same with drone imagery.
WFP has also used AI to bring together existing datasets on food prices, vegetation, and crops to assess food insecurity at sub-national levels, even in inaccessible areas. HungerMapLive, a publicly accessible data platform, delivers granular, AI-driven estimates of household food security. These estimates have been used by WFP country offices, resulting in increased coverage and rations in response to emerging needs detected by the platform's algorithms.
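HungerMapLive's real models are far richer than anything that fits here, but the underlying idea of fusing heterogeneous indicators into a single food-security estimate can be sketched in a few lines. Everything below (the indicator names, weights and linear form) is a hypothetical illustration, not WFP's actual methodology.

```python
# Toy illustration of indicator fusion for a food-security estimate.
# Every indicator, weight and value here is hypothetical; real systems
# such as HungerMapLive use far more data and more sophisticated models.

def insufficient_food_estimate(price_anomaly, vegetation_deficit, conflict_index):
    """Combine normalized risk indicators (each in [0, 1]) into a single
    estimated share of households with insufficient food consumption."""
    weights = {"prices": 0.4, "vegetation": 0.35, "conflict": 0.25}  # hypothetical
    return (weights["prices"] * price_anomaly
            + weights["vegetation"] * vegetation_deficit
            + weights["conflict"] * conflict_index)

# Example: a high food-price anomaly, a moderate vegetation deficit
# and low conflict intensity in one sub-national area.
print(insufficient_food_estimate(0.8, 0.5, 0.2))
```

The value of this kind of fusion is that each data stream covers gaps in the others: market prices can be collected remotely even where satellite vegetation data is cloud-obscured, and vice versa.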
Stumbling blocks: bias, data access and engagement with the field
The very nature of AI tools – their ability to codify and reproduce patterns – raises significant questions alongside the promises they offer. Of particular concern is the use of such tools for racial profiling, surveillance or the perpetuation of stereotypes. Algorithms may be used in ways that produce unfair outcomes for minority populations. People facing humanitarian hardship are often also excluded from the global digital economy, and as a result are under-represented in datasets. Additionally, the complexity of AI models can make it difficult to assign accountability when models make mistakes.
It is critical that the humanitarian sector address AI-driven inequity in its tools and identify innovative, creative approaches that help decision-makers (designers, buyers, sellers, regulators, donors) and users of AI technology identify and address actual and potential biases. This could be achieved by adhering to the soon-to-be-released Global Index on Responsible AI assessment, by adapting existing private-sector tools (e.g. Fairness 360), and by engaging in UN processes on responsible AI.
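As a concrete illustration of the kind of check that bias-auditing toolkits such as Fairness 360 automate, the sketch below computes a "disparate impact" ratio: the rate of favourable model outcomes for an under-represented group divided by the rate for everyone else. The function and data are hypothetical, not drawn from any WFP system.

```python
# Minimal sketch of one fairness metric (disparate impact), of the kind
# that toolkits such as IBM's AI Fairness 360 compute automatically.
# All names and data below are hypothetical.

def disparate_impact(outcomes, groups, protected_group):
    """Ratio of favourable-outcome rates: protected group vs. all others.

    outcomes: list of 1 (favourable, e.g. selected for assistance) or 0.
    groups:   list of group labels, parallel to outcomes.
    """
    protected = [o for o, g in zip(outcomes, groups) if g == protected_group]
    others = [o for o, g in zip(outcomes, groups) if g != protected_group]
    return (sum(protected) / len(protected)) / (sum(others) / len(others))

# Hypothetical model decisions across two population groups.
outcomes = [1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, protected_group="A")
# A common rule of thumb flags ratios below 0.8 for human review.
print(f"disparate impact: {ratio:.2f}")
```

A metric like this is only a starting point: it can flag a skewed outcome for review, but deciding whether the skew is unjust, and what to change, still requires human judgement and context.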
Access to data could become the largest practical bottleneck to developing humanitarian AI applications, which require a continuous flow of annotated data to train new models and update old ones. Many of the most valuable and relevant data streams are closely held by large companies that are hesitant to grant humanitarian agencies access. There are exceptions: we are working with Google Research to build our AI, and they are helping us turn SKAI into an open-source product that anyone can use anywhere. But an ad hoc approach to co-development and data access won't be enough. We are looking towards standing partnerships for data access, such as the World Bank's data partnership, in which technology companies make data available to third parties. This could be a way for humanitarian agencies to access this data in a way that is legal, secure, responsible and respectful of privacy.
No AI will help save lives unless it delivers value to humanitarian teams on the ground. This means we will need to continue building capacity in data and analytics among humanitarian personnel at the operational level. Humanitarian responders will need to be able to manage situations in which different organizations present different AI-based recommendations. AI products are often seen as the mysterious outputs of a black box, which can alienate the people they are meant to help and weaken their sense of ownership of the tools and their utility.
The solution to these obstacles is close engagement with colleagues on the front line, as well as with the people accessing WFP services. We have had some success involving first responders such as Mozambique's National Disaster Management Agency, which maps flood risk across the country using drones and an AI image-processing application developed by WFP. Deputy National Director Antonio Beleza tells his staff 'not to fear AI' and to treat it as one of the tools of their daily work.
Building a humanitarian AI space
To responsibly leverage AI in the humanitarian sector, we need to invest in partnerships that bring together data, expertise and a diversity of perspectives. WFP’s experience with AI suggests the following steps are especially important:
- Humanitarians will need to partner with private data companies to find responsible and secure ways of accessing data and co-creating AI solutions that anyone can use. WFP’s experience with Google Research shows that ‘safe spaces’ like the WFP Innovation Accelerator can play a strong role in fostering these tech partnerships.
- We need humanitarian agencies to continue working together in communities of practice to manage AI bias and ethical issues.
- Collaboration should also aim to build bridges with the open AI movement – including universities and non-profits – to create a space where data libraries can be shared as public goods.
- Our colleagues on the front line, and the populations WFP serves, must be closely involved in humanitarian AI solutions from the outset, with their feedback informing the design and outputs of any AI algorithm.
- We must continue to improve data and AI literacy within and across UN agencies and humanitarian organizations.
Building a humanitarian AI space will take vision and time, and it will fundamentally require the participation of more than a single agency to make it work effectively and responsibly. Developing key partnerships will be essential to achieve the greatest social impact from this technical innovation.
At WFP’s Innovation Accelerator, Dr. Kyriacos Koupparis is the Head of Frontier Innovations, Amine Baha is an AI Technical Solutions Consultant, Fiona Huang is an AI Consultant and Jean-Martin Bauer is WFP’s Senior Advisor Digital, Data and Innovation.