2023 was a watershed year in many respects. Humanitarian need reached record levels, with more than 360 million people around the world requiring lifesaving assistance. Yet only 35% of the 2023 Global Humanitarian Overview (GHO) was funded, the largest shortfall in more than a decade. This year's GHO aims to meet the needs of just 60% of the 299.4 million people the UN estimates will require humanitarian aid. And need continues to rise, with growing numbers of people on the move, displaced by conflict and the effects of the climate crisis.
Against this backdrop of increasingly stretched resources, humanitarian actors are hoping artificial intelligence (AI) will help them do more with less. Dozens of agencies are trialling a wide range of AI use cases, including running rapid assessments to identify where need is greatest; improving the management of supply chains and critical infrastructure; tracing missing family members; writing grant reports and funding proposals; predicting the future movements of displaced populations; and powering virtual assistants and chatbots.
As AI continues to change humanitarian action and as humanitarians design more AI pilot projects, some top tips are worth considering:
- Avoid Maslow’s hammer. As the adage goes, “If the only tool you have is a hammer, it’s tempting to treat everything as a nail.” Not every humanitarian problem has an AI solution. Remain centred on the problem (not the AI) and explore the full range of tech and non-tech solutions available to address it.
- Establish the right AI and data governance mechanisms. To date, little attention has been given to how AI systems should be integrated and managed within agencies’ organisational structures and existing operations. This means having the data your AI project requires and permission to use it, developing a data governance strategy or framework that covers AI, and naming an individual within the organisation who is accountable for the governance and risk management of the AI system. A number of international AI standards set out the key actions required for AI quality management and risk management and, whilst applicable to all industries, offer a handrail for organisations and teams designing AI pilot projects.
- Prioritise accountability to people affected by crisis. Accountability is a key ethical principle for the development and deployment of safe and responsible AI. Accountable AI in humanitarian contexts means, at a minimum, communicating with populations who will be impacted by AI and establishing methods of recourse if things go wrong. Where feasible, individuals should be given the opportunity to opt out of any AI-informed decision-making. This supports some of the commitments captured in the recently updated Core Humanitarian Standard.
- Consider the impact on localisation. AI may well deliver productivity and efficiency gains for aid agencies. But it’s important to consider where these gains will be realised and who will be affected. For example, resources (time as well as money) could be directed back to the Global North if AI systems are procured from name-brand firms in North America and Europe. Additionally, AI efficiencies could reduce the number of locally recruited staff whilst increasing roles in regional capitals or headquarters.
- Time to upskill. AI has already permeated the everyday lives of billions, and its reach is only expected to grow. Foundational knowledge of AI, along with digital and data literacy, is increasingly important for all humanitarian workers and no longer solely the domain of IT or innovation teams. Humanitarian staff should build a basic understanding of what AI is, how it works, how it’s being used in humanitarian contexts and conflict settings, and what risks and opportunities it presents.
Decades of experience have shown that efficient, cost-effective, and high-impact humanitarian operations require good coordination. This is no less true for humanitarian AI. As the number of use cases and pilot projects grows, so too must coordination and collaboration between humanitarian actors. This could improve problem exploration, lesson sharing and learning, and make better use of stretched resources. A joint project launched by the UK Humanitarian Innovation Hub (UKHIH) and Elrha’s Humanitarian Innovation Fund (HIF) aims to respond to this need by scoping out the emerging humanitarian AI landscape and supporting humanitarian practitioners to add their voice and influence its future. Perhaps more importantly, it might advance wider, cross-sector efforts to establish specific norms and standards that enable the safe and ethical uptake of AI across the sector. Only then will AI be able to live up to humanitarians’ aspirations to do more with less.
At the same time, AI is changing the contexts in which humanitarians operate. Parties to conflict are using AI to support battlefield decision-making and identify targets, and the world’s major militaries are scaling AI integration at all levels. But the extent to which these systems comply with international humanitarian law remains unclear. For example, how accurately can they distinguish between civilian objects and military objectives? And how will humanitarian actors adapt to environments where AI is changing the means and methods of warfare?