A newsletter on Responsible AI and Emerging Tech for Humanitarians
This edition of Humanitarian AI Unpacked is about AI safety and governance: what responsible governance looks like in practice, who in the sector is building it, and why the stakes of getting it wrong have never been clearer.
Following this edition, we will be taking a break to reassess the newsletter's content; please see the information below on how you can continue shaping our journey. But for now, let's dive in.
On 27 February 2026, the Trump administration ordered all US government agencies to stop using Anthropic's AI systems [1]. The reason was not a security breach, a data scandal, or a failure of the technology: it was a red line. Anthropic had declined to allow its model, Claude, to be used for fully autonomous weapons systems or the mass surveillance of American citizens. It stated publicly that today's frontier AI models are simply not reliable enough for those applications and that mass domestic surveillance constitutes a violation of fundamental rights. Within hours, OpenAI had signed a deal with the Pentagon to fill the gap [2].
It is worth pausing on what this moment reveals. One of the world's leading AI companies drew a safety red line and was penalised for doing so by the very government that the rest of us rely on to create guardrails for these technologies. At the same time, a competitor, eager for the contract, stepped in without the same hesitation or safety concerns.
This incident did not happen in isolation: it unfolded against the backdrop of the Israeli and US campaign against Iran, where AI targeting systems (including Claude) have been used to generate thousands of airstrike targets at speeds no human team could match [3], where AI-generated deepfakes are distorting what civilians understand about where it is safe to move [4], and where questions remain about how much human oversight governed the strike on a girls' primary school in Minab that killed at least 165 children and staff [5].
For the humanitarian sector, the Anthropic story is about what happens to safety governance when commercial incentives and political pressure push in the opposite direction. And it raises a direct question for every organisation in this sector that is deploying, procuring, or planning to use AI: what are your red lines, who knows about them, and what happens when they are tested?
Call to action I The final edition for now
After eighteen editions, we are pressing pause on Humanitarian AI Unpacked.
As Elrha enters a moment of transition, we are taking a step back to reflect carefully on whether and how this newsletter should continue. The humanitarian sector is navigating one of the most consequential moments in the development of AI, and we remain committed to supporting the people working to get it right.
Before we go, we would love to hear from you. Your feedback will directly shape what comes next.
Has Humanitarian AI Unpacked been useful to you in your work?
If yes, we would love to hear what would be most useful for you. Take 3 minutes to fill in this short survey.
Podcast Spotlight
Voices from the sector on emerging tech deployment in humanitarian response.
Humanitarian AI Today features an episode of a new podcast ‘The Inference Layer’. In this crossover episode, guest host Patrick Hassan, former USAID disaster response lead and now AI policy lead at the International Association of Firefighters, interviews Federico Pierucci, Scientific Director of Icaro Lab in Rome. The conversation cuts to a risk that most humanitarian AI safety discussions overlook: what happens when multiple AI agents interact with each other, rather than with a human? Individually safe agents can produce dangerous outcomes in combination; a toy sketch of this failure mode follows the episode details below.
Hassan pushes hard on the humanitarian stakes: if multi-agent systems determine who receives food, shelter, or medical care, how would a field analyst detect bias introduced through agent interaction rather than through human decision-making? And what about AI safety benchmarks built almost entirely in English, applied to tools used across dozens of underrepresented languages in crisis contexts? Both guests are candid that governance frameworks, including the EU AI Act, are not yet built for the world they are describing.
🕐 Run time: ~45 mins
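To make the episode's central claim concrete, here is a deliberately simplified sketch. It is our own invention, not anything discussed in the episode, and the agent names, demands, and stock figures are all hypothetical. It shows how two allocation agents that each obey their own safety rule can still jointly breach a system-level constraint when neither sees the other's actions:

```python
# Toy illustration (hypothetical; names and numbers invented for this
# newsletter): two aid-allocation agents that each pass their own safety
# check, yet together breach a system-level constraint because neither
# sees the other's allocation.

SAFETY_RESERVE = 200  # units that must remain in stock (system-level rule)


def make_agent(name: str, demand: int):
    """Return an agent that never allocates more than half the stock it
    can see - its 'individually safe' rule."""
    def act(stock_seen: int) -> int:
        allocation = min(demand, stock_seen // 2)
        assert allocation <= stock_seen // 2, f"{name} broke its own rule"
        return allocation
    return act


stock = 1000
food_agent = make_agent("food", demand=450)
shelter_agent = make_agent("shelter", demand=450)

# Both agents act on the same stock snapshot, without coordination.
snapshot = stock
total_allocated = food_agent(snapshot) + shelter_agent(snapshot)
stock -= total_allocated

print(f"Allocated {total_allocated} units; {stock} remain")
print(f"System-level reserve respected? {stock >= SAFETY_RESERVE}")
# Each agent stayed within 50% of the stock it could see, yet together
# they allocated 900 of 1,000 units, leaving 100 < 200 reserve.
```

Each agent's own check passes; the violation only exists at the level of their interaction, which is exactly the blind spot the episode points to.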
Who’s Doing What
Other examples of AI tools being used across the humanitarian sector.
Signpost AI — AI-powered information for displaced people at scale
Signpost AI, housed within the IRC, delivers trusted, localised information to people affected by humanitarian crises by connecting them to health, legal, and financial services across 30+ countries and reaching roughly 20 million registered users. Its AI Information Assistant uses generative AI to respond to inbound queries based on verified content, with a clear commitment to enhancing, not replacing, frontline human teams. The platform treats safety governance as a design requirement: every tool is purpose-built, monitored, and tested before deployment. Contact: signpostai.org
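As a rough illustration of the design pattern described above (a sketch under our own assumptions, not Signpost's actual implementation; the content entries, matching method, and threshold are invented), an assistant that answers only from a verified content base and escalates to a human when nothing vetted matches might look like this:

```python
import re

# Hypothetical vetted entries that a moderation team maintains; a real
# system would hold localised, verified service information.
VERIFIED_CONTENT = {
    "where can i register for legal aid": "Legal aid registration is at ...",
    "which clinics offer free vaccinations": "Free vaccinations are available at ...",
}


def overlap(query: str, key: str) -> float:
    """Crude word-overlap score, standing in for real retrieval."""
    qw = set(re.findall(r"[a-z0-9]+", query.lower()))
    kw = set(re.findall(r"[a-z0-9]+", key.lower()))
    return len(qw & kw) / max(len(qw), 1)


def answer(query: str, threshold: float = 0.5) -> str:
    best_key = max(VERIFIED_CONTENT, key=lambda k: overlap(query, k))
    if overlap(query, best_key) < threshold:
        # Safety by design: never generate beyond the vetted base.
        return "Escalated to a human responder."
    return VERIFIED_CONTENT[best_key]


print(answer("Where can I register for legal aid?"))   # vetted answer
print(answer("Is the border crossing open today?"))    # -> human handoff
```

The point is the shape of the guardrail, not the retrieval method: the generative component is constrained to verified content, and uncertainty routes to a person rather than to a guess.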
Stop Killer Robots
The international civil society coalition Stop Killer Robots is actively engaged in the UN Group of Governmental Experts (GGE) on lethal autonomous weapons, pushing for a legally binding treaty before the GGE's mandate expires at the end of 2026. Their 2026 advocacy sheet provides a clear, accessible account of where negotiations stand and what civil society is calling for. For humanitarian organisations that want to understand how international governance of military AI is developing and what it means for the contexts they operate in, it is an important reference point. Contact: stopkillerrobots.org
Spotlight I ACAPS – Syria Area Based Analysis (SABA) - Artificial Intelligence for Syria
Artificial Intelligence for Syria: Applications within the Syrian Humanitarian Response. This new landscaping assessment published by UKHIH presents research led by ACAPS. It provides a comprehensive overview of the current ecosystem of AI use within Syria’s humanitarian response, examining where and how AI tools are being deployed, what gaps they aim to address, and why their impact has remained limited or uneven.
Editor’s picks
Curated reads and resources our team found especially insightful this year.
SAFE AI: Standards and Assurance Framework for Ethical AI, CDAC Network (2025/26). A practical governance framework developed specifically for the humanitarian sector, offering tools to assess whether AI systems are fair, trustworthy, and accountable. It includes a participatory model that gives crisis-affected communities (see the Skill Up section) a role in shaping the AI deployed in their name. For organisations navigating procurement decisions without a clear internal policy, this is the most directly usable safety resource in the sector.
From Promise to Practice: How We're Scaling AI for Humanitarian Good, International Rescue Committee (2025). The IRC shares what it has learned from deploying purpose-built AI agents across education, resettlement, and anti-trafficking programmes. The IRC emphasises that safety must be designed in from the start, not retrofitted, with every deployment purpose-built, monitored, and evaluated against real-world benefit. A grounded counterpoint to high-level AI governance documents: this is what responsible practice looks like in an organisation operating at scale in crisis settings.
Deciding Under Algorithms: Artificial Intelligence and the Protection of Civilian Infrastructure in Armed Conflict, ICRC Law and Policy Blog (March 2026). Published this month, this piece examines how AI decision-support systems are shaping military targeting of civilian infrastructure, and why the most severe humanitarian consequences arise not from initial strikes but from cascading disruptions to services like power, water, and hospitals that algorithms systematically underestimate. Essential reading for anyone trying to understand the legal and ethical gap between what AI can calculate and what International Humanitarian Law actually requires.
UNHCR AI Approach, UNHCR (September 2025). UNHCR's strategic framework for responsible AI sets out how the agency approaches AI across refugee status determination, displacement forecasting, and detection of online harms. It grounds every application in human rights principles and clear accountability mechanisms. One of the most substantive governance frameworks published by a major humanitarian organisation to date, and a useful benchmark for others developing their own.
Global Call for AI Red Lines, The Future Society & co-signatories (September 2025). Endorsed by over 300 prominent figures, this call urges governments to agree binding international prohibitions on the most dangerous AI applications by the end of 2026, including autonomous weapons, mass surveillance, and AI impersonation without disclosure. For the humanitarian sector, it reframes safety governance not as a technical question but as a political one: are the red lines that matter the ones states are willing to enforce?
Skill Up
Short, practical learning picks for practitioners - no tech background needed.
AI for Humanitarian Practice (MOOC)
Developed by Elrha, this course introduces humanitarian practitioners to the practical use of artificial intelligence in crisis settings, with a strong focus on safety, governance, and responsible use. Participants learn how to identify appropriate AI use cases, understand data flows and risks, and apply ethical and humanitarian principles to ensure AI systems protect communities and do no harm. The course also guides learners through developing context-appropriate plans for deploying, or choosing not to deploy, AI in humanitarian operations. Access the course
How-to note: Co-designing AI solutions with crisis-affected communities
A practical guide for humanitarian practitioners on how to meaningfully involve crisis-affected communities in the design of AI tools. The note covers when to engage, how to structure participation across the AI lifecycle, and how to avoid tokenistic consultation. Draws on the SAFE AI Glossary from CDAC Network for key terms. Access the how-to note
Upcoming Opportunities
Stay ahead of events and funding calls.
AI in Humanitarian Response - AI for Good
When: 1 May 2026, 16:00 - 16:20 (20 mins, recorded session)
A practical AI for Good session showcasing how AI is being applied in humanitarian response, including crisis prediction, field operations, communications, and coordination. Features case studies and expert insights relevant to organisations exploring responsible, operational AI use in emergencies. More info
ICT4D Conference 2026
When: 20 - 22 May 2026, Nairobi
A leading global conference bringing together technologists, policymakers, researchers, and humanitarian practitioners to explore how digital innovation — including AI, data, and connectivity — can support inclusive development and crisis response. Offers humanitarian organisations a space to share AI-enabled use cases, learn from peers, and connect with tech partners working on resilience, service delivery, and emergency response. More info
AI for Good Global Summit 2026 (ITU)
When: 7 - 10 July 2026, Geneva & online
The UN-backed flagship summit on AI for global challenges, featuring humanitarian-focused sessions on AI in emergencies, health, climate risk, food security, and ethical governance. A key convening space for NGOs to engage with UN agencies, governments, and industry on responsible AI pathways and real-world deployments. More info
Climate Change Summit 2026
When: 30 - 31 July 2026, Paris, France
An international summit exploring innovations, policies, and solutions for sustainable futures, with a focus on climate action. It includes a dedicated AI track: digital technologies are transforming how climate data is collected, analysed, and applied, and the track explores climate data science, artificial intelligence, and machine learning tools for modelling, forecasting, and decision support. Applications include emissions tracking, climate risk assessment, and optimisation of mitigation and adaptation strategies. More info
The $3 Million Conrad N. Hilton Humanitarian Prize
Deadline: 30 April 2026
Every year, the Conrad N. Hilton Humanitarian Prize honours a non-profit leading efforts to alleviate human suffering. At $3 million, the Prize is the world’s largest annual humanitarian award presented to non-profit organisations. More info
The Fund for Innovation in Development (FID)
Deadline: rolling
The FID’s ambition is to contribute, in the long term, to the transformation of public policies by supporting the scaling up of proven innovations in the fight against poverty and inequality. Previously funded projects include a sustainable housing solution for climate-displaced communities in Ethiopia. The call for proposals is open all year round to all types of teams for solutions targeting low- and middle-income countries. Amount: €50,000 to €4,000,000, depending on stage. More info
WFP Innovation Accelerator
Deadline: rolling (cohorts announced periodically)
The World Food Programme’s innovation arm supports early-stage and scaling solutions that strengthen humanitarian operations, including AI-driven forecasting, logistics optimisation, and digital cash assistance. It offers grants, technical support, and access to WFP’s global operational network. More info
DRK Foundation – Early-Stage Social Impact Funding
Deadline: rolling
Provides up to $300,000 in unrestricted grants to early-stage organisations addressing an urgent or critical social or environmental problem in an innovative fashion and in a way that directly benefits underserved populations. More info
Aurora Prize for Awakening Humanity
Deadline: rolling
The US $1,000,000 Aurora Prize for Awakening Humanity is a global humanitarian award. Its mission is to recognise and support those who risk their own lives to save the lives of others suffering due to violent conflict or atrocity crimes. More info
"We do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s war-fighters and civilians."
Anthropic, on refusing the Pentagon autonomous weapons contract, February 2026
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations or donors.