Building a Responsible AI Framework for Humanitarian Action in a Rapidly Changing Landscape (FCDO Roundtable)
Date and time: 17 March 2025, 1630-1750, followed by drinks reception
Location: Durbar Court Conference Room, FCDO, King Charles Street
The FCDO hosted a roundtable and soft launch of the FCDO-funded SAFE AI project, delivered by a consortium consisting of the CDAC Network, The Alan Turing Institute, and Humanitarian AI Advisory. Over 40 experts and leaders in AI, humanitarian action, and public policy attended in person, with another 50 participating online.
Matthew Wyatt, the FCDO's Director for Humanitarian, Food Security and Resilience, opened the event, underscoring the pressure the humanitarian system is under: while 300 million people need lifesaving support, the system will only meet the needs of 100 million, given cuts to funding this year. He noted that AI has a significant role to play in identifying and providing support to those in need, and called for greater collaboration and cooperation on AI, particularly between humanitarian actors and AI developers, arguing this was the most effective way to mitigate AI's potential harms and ensure its safe and responsible use in high-stakes contexts such as humanitarian crises.
Three short discussions followed, on AI governance, policies, and principles; trustworthy partnerships; and power and participatory AI. The following points were raised:
- Transformative potential – if guardrails are in place. As AI capabilities grow and costs decline, illustrated by innovations like DeepSeek, there is a growing opportunity to rethink how institutions use AI to improve performance and reach. Lower costs can enable broader access, but only if paired with responsible design and inclusive implementation. This raises a key question: how can emerging AI capabilities be applied in ways that meaningfully improve institutional performance for humanitarian actors and the communities they serve? Rapid improvements in AI have significantly reduced costs, but they also risk a race to the bottom in which firms and agencies rushing to adopt AI cut corners on governance and responsible use.
- We need to get the incentives right. Incentives to develop safe and responsible AI solutions are currently misaligned; they should instead reinforce commitments to localisation, community participation, and accountability.
- Navigating private sector collaboration requires vigilance. Working with AI firms and developers brings significant opportunities and risks. Many big tech firms have rolled back responsible AI principles and ethics commitments, as well as policies related to equity and inclusion, for political reasons. Humanitarians will have to ensure that the values, principles, and guidelines relevant to humanitarianism, including do-no-harm, are kept at the forefront of AI development and deployment. There is also tension with firms delivering AI along multiple lines of effort, for defence and national security on the one hand and for development and humanitarian effect on the other.
- We can grow trust by establishing common standards. Common standards are needed for evaluating AI solutions in humanitarian use, including standards for explainability, transparency, accountability, and participation.
- AI assurance and participatory AI cost time and money, neither of which is being invested in humanitarian AI at the levels required to ensure responsible use.
- Create the conditions for an evidence-based future. Given the importance of evidence in a context of rapidly emerging capabilities, we need to think carefully about how evidence and information on AI is shared, and about how donors can help foster an environment that enables greater transparency.
- Foster collaboration, not competition, but be realistic about the challenges right now. Sharing on AI between humanitarian actors is limited, and is compounded by heightened competition for decreasing funding. This creates barriers to transparency and to learning what works and what does not in developing and deploying responsible AI in humanitarian contexts, yet networked learning is key to unlocking the potential of AI.
- Strengthen existing Communities of Practice. There is a need to build on, bring together, and disseminate more widely the lessons and information shared within existing Communities of Practice. This might include a platform where those designing and developing AI solutions for humanitarian contexts can exchange ideas, share resources, and co-create common solutions.
- Existing work to test and validate the performance of AI systems and their governance mechanisms against internationally recognised principles (e.g. those of the Singaporean government) offers an opportunity to benchmark good practice and adapt it for humanitarian use cases.
- Do not overlook less advanced AI tools, including robust machine learning models, which may offer as much, if not more, value in some contexts than generative AI tools and LLM-powered chatbots.
- Responsible AI starts with good data and good data stewardship. Investments in data architecture, protection, ownership, and ethical use are foundational. A broader conversation on data sovereignty and readiness is needed across the sector. One participant flagged: "When you debate who owns data you get to the real power dynamics."
- We need a full AI life-cycle approach, multidisciplinary and multilateral, spanning design, deployment, and evaluation. This is key for community partners, UN agencies, and INGOs.
- Communication with the communities who will be affected is currently an important gap. Humanitarian organisations must remember that communities deserve agency and voice in the development and deployment of AI solutions, which will lead to better design and impact. But participatory AI costs time and money.
- Critical elements of effective participation when building and deploying AI solutions include: analysing power and incentives; disrupting power structures; changing organisational and economic consequences; building receptiveness to input; enhancing community capacity to push for change; and focusing on the most affected.
- Language AI has huge potential but lacks diversity. Ninety percent of the training data used for language AI comes from just 17 dominant languages, yet the technology needs to serve the experiences and priorities of most of the world's population.
- Empower local AI innovation. We need to enable a future where impacted communities are themselves developing low-cost AI (such as TinyML models) for their own humanitarian response, and to consider how we can support that.
- Deal with the present reality of AI. We must focus on those already affected by AI who are not themselves users or decision-makers, and build into the emerging humanitarian system the ability to enable and sustain partnership on this.
- The detail matters. We need to explain and understand thoroughly what kind of AI is being deployed, where, with whom, and for what purpose, in order to assess the potential benefits, capabilities, drawbacks, and risks.