A newsletter on Responsible AI and Emerging Tech for Humanitarians
As organisations across the humanitarian sector set priorities for the year ahead – or begin translating them into action – we wanted to start 2026 by asking two simple but important questions: how is the humanitarian AI narrative evolving, and what are organisations actually focusing on now?
While AI development is not shaped by the turn of the calendar year, January often creates space for reflection and agenda-setting. Looking back at 2025, one shift is clear: for humanitarian actors, the question is no longer whether to engage with AI, but how to do so responsibly and in ways that strengthen rather than strain the system. AI can no longer be treated as a novelty or a future concept. It is becoming part of the essential infrastructure that people will increasingly rely on and interact with in their daily lives.
This shift is unfolding within a global context. As the year opens with Davos [1], public debate around AI appears to be moving away from “what AI can do” towards “what AI is doing to the world” [2]. Alongside ongoing concerns about weaponisation, bias, surveillance and other misuse, as well as questions of transparency, one dilemma dominated the discussion about AI at Davos: the possible gap between the economic gains promised by AI and the risk of large-scale job displacement, driving new forms of inequality [3].
These debates are no longer abstract. They are increasingly shaping geopolitical visions and post-conflict futures, as reflected in the prominence of high-tech infrastructure including data centres and advanced manufacturing within the US-proposed “Board of Peace” framework for Gaza [4].
For the humanitarian sector, this raises urgent questions. If AI reshapes labour markets, access to services, and public trust at scale, what does preparedness look like – not only for sudden shocks and disasters, but for systemic change?
At the same time, we know that AI holds real potential to help address some of the sector’s most pressing challenges, from early warning and anticipatory action [5] to health access [6] and resource allocation [7]. As always, the task ahead is not to choose between optimism and caution, but to navigate the tension carefully.
This January edition explores where humanitarian AI stands as we enter 2026 – how the narrative is shifting, where attention is concentrating, and what this means for organisations wanting to act responsibly in this rapidly changing landscape.
Spotlight | NetHope: Humanitarian AI – What to Expect in 2026
In a recent reflection, Daniela Weber, Director of NetHope, outlines how humanitarian AI is entering a more sober phase. After years of pilots and proof-of-concepts, the emphasis is shifting toward integration, governance, and operational relevance.
Daniela highlights several trends likely to define 2026 and key areas for organisations to focus on this year, such as:
- Fewer standalone AI experiments, and more focus on embedding AI into core systems
- Increased attention to responsible AI frameworks, human-in-the-loop design, and trust
- Greater demand for shared infrastructure, partnerships, and interoperability
- A growing recognition that AI capacity gaps, not technology, are now the main constraint
Rather than framing AI as a silver bullet, the piece argues that 2026 will be about choices: where AI genuinely adds value, where it introduces unacceptable risk, and where restraint is the more ethical option.
Who’s Doing What
Other examples of AI tools being used across the humanitarian sector.
Dataminr & Ushahidi - AI partnership to improve data management
Ushahidi worked with Dataminr to use AI to sort, map, and analyse large volumes of crowdsourced information more quickly, helping the organisation turn real-time reports from crises and elections into clear, usable insight.
Contact: contact form
University College London - EthosKit
EthosKit is a guided toolkit designed to help non-technical humanitarian practitioners safely explore, design, and prototype AI solutions for real-world challenges. Through hands-on learning, users build a practical understanding of AI fundamentals while being supported to reflect on ethics, suitability, and impact. The toolkit enables experimentation in a safe environment, lowering barriers to AI adoption without requiring a technical background.
Contact: t.bhatnagar@ucl.ac.uk, hamdan.albishi.24@ucl.ac.uk
World Food Programme (WFP) - Optimus
Optimus is an online tool used by the World Food Programme through its partnership with Palantir to help plan the best and most cost-effective way to deliver food to people in need. By combining data from many sources, it helps teams decide what food to provide, where to source it, and how to deliver it efficiently in different emergency settings. However, the partnership has been controversial, with civil society raising concerns about data governance, beneficiary protection, and humanitarian neutrality given Palantir’s work with defence and intelligence actors (e.g. Israel’s Defence Ministry).
Contact: global.innovation@wfp.org
Editor’s picks
Curated reads and resources our team found especially insightful this month
What’s shaping aid policy in 2026, The New Humanitarian (2026). Analyses how the humanitarian system is struggling with deep funding cuts and shifting political will. It flags emerging technological risks, including the increasing use of drones, sophisticated cyber-attacks, and AI-linked threats to aid worker security and information integrity, and underscores the need for AI strategies that protect both people and principled action in crisis settings.
Local, Everywhere: The Blueprint for a Humanitarian AI Transformation, DiploFoundation (2025). Argues that for AI to genuinely support locally led humanitarian action, systems must be designed and governed with local cultural, societal, and policy contexts in mind, highlighting localisation and accountability as ethical prerequisites for equitable AI uptake in crisis responses.
World Food Programme Challenges Business Leaders at Davos, WFP (2026). Shows how AI-driven transformation, from supply chain optimisation to advanced early warning models, is already improving efficiency and predictive capacity in food security operations. At the same time, it urges the private sector to invest in scaling these technologies where humanitarian needs exist and funding gaps are deep. Missing from much of this conversation, however, is a clearer articulation of how community and beneficiary data are being protected as these systems scale.
Podcast Spotlight
Voices from the sector on emerging tech deployment in humanitarian response.
How are Humanitarians using AI in 2026?
In this episode, the Humanitarian Leadership Academy launches the next phase of its sector-wide research into AI use. The conversation reflects on lessons from early adopters, highlights persistent barriers to scale, and explores how humanitarian organisations are recalibrating expectations around the role of AI.
Rather than focusing on specific tools, the discussion centres on organisational readiness, ethics, and long-term impact, making it a timely listen for the year ahead.
The episode also explains how your participation in the new 5-minute pulse survey, part of the next phase of the research (open until 31 January 2026), can help the sector understand and navigate AI adoption during this period of unprecedented change. Take the survey here.
🕐 Run time: ~30 mins
Skill Up
Short, practical learning picks for practitioners - no tech background needed.
NetHope, Gender Equitable AI Toolkit (free, online)
A practical, self-paced toolkit designed by and for the humanitarian sector to help organisations implement AI and machine-learning solutions that actively address gender bias. It offers a step-by-step guide across the full AI lifecycle, from problem design and data collection to deployment and continuous learning, and uses an ethical, gender-equitable lens.
ISO 42001 - AI Management Systems Standard (paid)
An international standard from ISO providing a framework for establishing, implementing, and maintaining an AI management system that supports ethical, accountable, and transparent use of AI. It covers risk assessment, governance, and continual improvement, and is relevant for humanitarian organisations that want to build or scale responsible AI practices.
Upcoming Opportunities
Stay ahead of funding calls and events.
AI in Humanitarian Response - AI for Good
When: 1 May 2026, 16:00 - 16:20 (20 mins, recorded session)
A practical AI for Good session showcasing how AI is being applied in humanitarian response, including crisis prediction, field operations, communications, and coordination. Features case studies and expert insights relevant to organisations exploring responsible, operational AI use in emergencies. More info
ICT4D Conference 2026
When: 20-22 May 2026, Nairobi
A leading global conference bringing together technologists, policymakers, researchers, and humanitarian practitioners to explore how digital innovation — including AI, data, and connectivity — can support inclusive development and crisis response. Offers humanitarian organisations a space to share AI-enabled use cases, learn from peers, and connect with tech partners working on resilience, service delivery, and emergency response. More info
AI for Good Global Summit 2026 (ITU)
When: 7-10 July 2026, Geneva & online
The UN-backed flagship summit on AI for global challenges, featuring humanitarian-focused sessions on AI in emergencies, health, climate risk, food security, and ethical governance. A key convening space for NGOs to engage with UN agencies, governments, and industry on responsible AI pathways and real-world deployments. More info
The $3 Million Conrad N. Hilton Humanitarian Prize
Deadline: 30 April 2026
Every year, the Conrad N. Hilton Humanitarian Prize honours a non-profit leading efforts to alleviate human suffering. At $3 million, the Prize is the world’s largest annual humanitarian award presented to non-profit organisations. More info
WFP Innovation Accelerator
Deadline: rolling (cohorts announced periodically)
The World Food Programme’s innovation arm supports early-stage and scaling solutions that strengthen humanitarian operations, including AI-driven forecasting, logistics optimisation, and digital cash assistance. It offers grants, technical support, and access to WFP’s global operational network. More info
DRK Foundation – Early-Stage Social Impact Funding
Deadline: rolling
Provides up to $300,000 in unrestricted grants to early-stage organisations addressing an urgent social or environmental problem in an innovative way that directly benefits underserved populations. More info
Aurora Prize for Awakening Humanity
Deadline: rolling
The US $1,000,000 Aurora Prize for Awakening Humanity is a global humanitarian award. Its mission is to recognise and support those who risk their own lives to save the lives of others suffering due to violent conflict or atrocity crimes. More info
“The rapid rise of AI is set to deeply transform the fabric of our societies. The changes it brings will be visible at every turn and in every area of human life… they present both risks and opportunities.”
– Guy Parmelin, President of the Swiss Confederation, opening remarks at the World Economic Forum Annual Meeting 2026
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations, or donors.