A newsletter on Responsible AI and Emerging Tech for Humanitarians
Artificial intelligence has often been described as a double-edged sword - like fire, it can illuminate or destroy. It accelerates the speed at which information is created, accessed, and shared, but it also raises the stakes when that information is false.
The humanitarian sector has long grappled with the damage caused by misinformation (false information shared without intent to mislead) and disinformation (false information shared deliberately to cause harm). [1] In crisis contexts, the stakes are painfully high. Misleading messages about where displaced people can access food, shelter, or medical aid can send vulnerable families into danger or cause them to miss lifesaving services.
The consequences can also ripple far beyond individual lives. Just last year, Israel ran a disinformation campaign through Google ads (which use AI to target advertising more effectively), [2] steering people searching for UNRWA to government pages claiming the UN refugee agency is linked to Hamas. [3] This case shows how disinformation campaigns can undermine trust in humanitarian agencies, disrupt aid delivery, and put already vulnerable communities at greater risk.
In 2020, the International Committee of the Red Cross (ICRC) in Burkina Faso was the target of a disinformation campaign accusing the organisation of working with terrorists. [4] This false narrative eroded public trust and hampered the ICRC's ability to operate effectively.
Researchers have documented over 80 Russia-linked disinformation campaigns in 22 African countries since 2022, many of which undermine humanitarian and peacebuilding efforts, [5] alongside deepfakes about the war in Ukraine. [6] Social media is often the main battlefield: false claims can reach millions before fact-checks catch up, while automated accounts exacerbate the problem. Other forms of AI-enabled disinformation include fake news websites, generative-AI images, and deepfakes.
AI is certainly part of the problem, but it can also be part of the solution. Advanced AI can analyse patterns, language, and context to fact-check, detect false information, and craft rapid, targeted counter-messages. [7] Initiatives like AIRA (see case study below) show how AI can be adapted for humanitarian contexts, while organisations like Meedan provide practical tools for combating the “infodemic.”
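For readers curious what “detecting false information” looks like in code: real systems use trained language models, but the core pattern - score a message against known manipulation cues, then flag it above a threshold - can be sketched in a few lines. The patterns and weights below are illustrative assumptions, not any organisation's actual rules.

```python
import re

# Toy illustration only: production fact-checking pipelines use trained
# language models, but the basic shape (score a message, flag it above a
# threshold for human review) looks like this. Patterns and weights are
# invented for the example.
RISK_PATTERNS = {
    r"\bmiracle cure\b": 0.6,
    r"\bthey don't want you to know\b": 0.5,
    r"\bshare before (it's|it is) deleted\b": 0.7,
    r"\bsecret (plan|agenda)\b": 0.4,
}

def risk_score(message: str) -> float:
    """Sum the weights of suspicious patterns found in a message."""
    text = message.lower()
    return sum(w for p, w in RISK_PATTERNS.items() if re.search(p, text))

def flag_for_review(message: str, threshold: float = 0.5) -> bool:
    """Flag a message for human review when its score exceeds the threshold."""
    return risk_score(message) > threshold
```

A message like "Share before it's deleted: a miracle cure for cholera!" trips two patterns and is flagged, while a routine aid announcement is not. Real detectors go far beyond keyword matching, but the flag-then-review structure carries over.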
This month, we explore the tensions, risks, and opportunities in using AI to tackle disinformation and misinformation in humanitarian contexts.
The Africa Infodemic Response Alliance (AIRA)
Hosted by the World Health Organization
Managing disinformation, especially about health, is a critical challenge in the humanitarian sector. False information during health crises can severely impact public safety and well-being, particularly where reliable information is scarce.
For example, the COVID-19 pandemic showed how quickly false narratives can spread, creating fear, confusion, and undermining effective public health responses. This erosion of trust in health systems can lead to dangerous behaviours, such as rejecting vaccinations or endorsing unproven treatments.
To address this, the Africa Infodemic Response Alliance (AIRA), hosted by WHO, uses AI to monitor online and offline health conversations across Africa in over 200 languages. AIRA detects harmful misinformation in real time and provides verified, accessible information through its Africa Misinformation Portal, enabling partners like UNICEF and Africa CDC to respond swiftly.
AI greatly improves the speed and scale of identifying misinformation trends; however, the human element remains essential to ensure cultural and contextual accuracy. Local partnerships and manual reviews ensure that responses are trusted and effective.
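The human-in-the-loop pattern described above - automate only the high-confidence cases and route borderline ones to local reviewers - is a common design, and can be sketched as follows. The thresholds, field names, and action labels are assumptions for illustration, not AIRA's actual workflow.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    claim: str
    confidence: float  # model's confidence that the claim is misinformation
    language: str

def triage(d: Detection, auto_threshold: float = 0.9,
           review_threshold: float = 0.5) -> str:
    """Route a detection: act automatically only when the model is very
    confident; send borderline cases to local human reviewers for cultural
    and contextual checks; otherwise just keep monitoring."""
    if d.confidence >= auto_threshold:
        return "publish_verified_correction"
    if d.confidence >= review_threshold:
        return "route_to_local_reviewer"
    return "monitor_only"
```

The design choice worth noting: the thresholds encode how much the organisation trusts the model, and lowering `auto_threshold` trades reviewer workload against the risk of publishing a culturally tone-deaf or incorrect correction.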
📎 Get in touch: Elodie Ho, AIRA Coordinator, hoelodie@who.int
Editor’s Choice
Curated reads and resources our team found especially insightful this month.
📖 AI-Generated Disinformation in Europe and Africa: Use Cases, Solutions and Transnational Learning, Konrad-Adenauer-Stiftung (2025) Examines how AI-driven disinformation impacts elections and crisis-affected areas, while highlighting solutions and lessons for protecting trust and humanitarian action across regions.
📺 Navigating truth in crisis – The dual role of AI in harmful information and humanitarian action, AI for Good (2025) This video conversation explores the dual role of AI in crisis settings, how it both contributes to spreading misinformation and can be harnessed to detect and counter false narratives effectively and highlights the key challenges and opportunities in humanitarian communication.
📺 How the UN is combating disinformation in the age of AI, GZERO. In this 7-minute video conversation, Melissa Fleming (UN) shares how the UN is strengthening its response to rampant disinformation by leveraging AI tools, strategic communications, and digital literacy campaigns to rebuild trust and counter misleading narratives across social media and online ecosystems.
📖 The Crucial Role of Humanitarian Communication in the Fake News and Infoglut Era, Alternatives Humanitaires (2025) This article shows how humanitarian NGOs can counter fake news and information overload by investing in principled communications, joint advocacy campaigns, grassroots storytelling, and innovative platforms to sustain credibility and amplify affected communities’ voices.
📺 How disinformation works, a 3-minute video from the European Parliament (2024), warns that disinformation campaigns pushed through social media and accelerated by emerging technologies often aim not to persuade but to confuse, especially during disasters.
Who’s Doing What
Examples of AI tools being used across the humanitarian sector.
International Committee of the Red Cross (ICRC) - Real-time social insights
The Sprinklr team helped the ICRC develop customised dashboards powered by AI to collect and analyse real-time conversations happening throughout Afghanistan. AI-powered reporting also helped detect anomalies in unstructured data, so the team could filter out false or misleading claims, including identifying fake Red Cross accounts, and avoid transmitting misinformation in their own messages.
📎 Contact: https://www.icrc.org/en/contact
Skill Up
Short, practical learning picks for practitioners - no tech background needed.
UNHCR’s Information Integrity Toolkit (free) is a practical resource that helps humanitarian actors understand, prevent, and respond to misinformation, disinformation, and hate speech, particularly in digital spaces that impact forcibly displaced and stateless communities.
RedR UK & Australia’s AI for Humanitarians (paid, online) is an interactive course (Sep–Oct 2025) that introduces key AI concepts, risks, and applications in humanitarian settings to help participants critically assess tools while upholding humanitarian principles.
Podcast Spotlight
Voices from the sector on emerging tech deployment in humanitarian response.
Countering Disinformation in the War in Ukraine
Humanitarian AI Today, hosted by journalist Jodi Hilton, with Larissa Doroshenko, Visiting Lecturer of Communication Studies at Northeastern University.
In this Humanitarian AI Today episode, Larissa Doroshenko discusses her computational research on disinformation and misinformation campaigns during the war in Ukraine. She explores how online propaganda shapes conflict narratives, the risks of the “dark side” of digital media, and how emerging technologies can be adapted to help communities amplify their voices and resist manipulation.
🕐 Run time: ~45 min
Upcoming Opportunities
Stay ahead of funding calls and events.
💰 AI for Climate Resilience Program – Klarna & Milkywire - Deadline: 31 August 2025
Grants of up to $300,000, plus mentorship, training, and community support, for AI-driven, community-led climate adaptation projects in underserved, vulnerable regions - such as tools for farmers, health systems, or coastal resilience - with an emphasis on responsible innovation and local ownership.
🔗 More info
💰 JPI Climate – Climate Services for Risk Reduction in West Africa (CS4RRA) - Deadline: Pre-proposals 11 September 2025
Up to €2 million per project (duration up to 36 months) to support interdisciplinary, co-designed innovation in West Africa - spanning early warning systems, climate security assessments, finance integration, and capacity development through African-European partnerships.
🔗 More info
💰 Eureka Network – Disaster Resilience, Response & Recovery Projects - Deadline: 31 October 2025
International R&D funding (€150K–€5M) for collaborative projects (minimum two organisations across Eureka member countries) developing AI, digital tech, materials, or devices to enhance disaster resilience, emergency response, and post-crisis recovery.
🔗 More info
“With COVID-19, we realized very quickly we were in a communication crisis unlike any that we had ever been in before.”
- Melissa Fleming, UN Under-Secretary-General for Global Communications
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations or donors.