Deadline: February 20, 2025
Applications are open for the UNDP AI Trust and Safety Re-imagination Programme 2025. The programme invites innovators across the public and private sectors to collectively re-imagine Trust and Safety (T&S) through measures that prioritize equitable, practical approaches and foster shared public–private responsibility. It aims to advance T&S beyond reactive measures by creating practices that actively anticipate and prevent harm and that create safer development environments. These practices ought to be tailored to the sensitivities of local contexts and the needs of impacted communities.
More specifically, the programme seeks to:
- Gather practical experiences and data on how AI products and systems manifest new risks that create harm in local contexts;
- Lay the foundations for equitable and safe AI development ecosystems that support local startups and the safe application of AI in developing countries; and
- Explore innovative partnerships that ensure safety-by-design in the early stage of AI development and deployment.
Benefits
Applicants with successful submissions will have an opportunity to:
- Present their ideas in a multi-stakeholder forum consisting of leading AI researchers, experts, and innovators who can validate and potentially support implementation of their ideas at scale.
- Engage with government stakeholders and experts to co-design and test collaborative strategies towards re-imagining AI Trust and Safety in different regional contexts.
- Join UNDP’s AI Trust and Safety community, contribute to thought leadership publications and participate in programming development initiatives through the AI Hub for Sustainable Development and other related activities.
Eligibility
- Submissions are encouraged from innovators working at the intersection of AI and T&S. Submissions may range from prototypes and unpublished or recently published research to a fully fledged product or solution that has been deployed. Successful submissions must be beyond the ideation stage and demonstrate substantive outputs or findings.
- Submissions are also welcome from individuals with significant expertise from private sector companies, startups, industry leaders integrating with AI in their business sectors, relevant civil society groups, research institutions, universities and other similar organizations.
- Applying teams may comprise a university department, a corporate R&D team, a non-governmental organization working in the field of T&S, an industry alliance, a think tank, etc. Note that no more than three people may present for a team, and at least one team member must be an English speaker.
- Applicants should currently be working on impactful research, interventions or solutions in one of the following areas:
- Scalable local insights and solutions: Approaches to T&S that are sensitive and adaptive to local contexts and industries. Examples may include a locally specific taxonomy of AI harms; red-teaming for language-specific vulnerabilities; and interventions specific to auditing for AI fraud in the finance sector.
- Local risk forecasting and prevention: Projects that forecast AI risk by leveraging local knowledge on the digital risk and vulnerabilities landscape, or projects that seek to develop locally focused, proactive strategies (e.g. technical AI auditing systems for business-to-business (B2B) financial industry scams in local languages).
- Shared responsibility: Public–private collaborative approaches to AI escalations and risk management practices that impact developing countries and distribute risk and responsibility (e.g. product-feedback mechanisms).
Submission Requirements
Submissions should be in the form of a slide presentation (e.g. PowerPoint or PDF) with no more than 10 slides, or an abstract paper (e.g. in Microsoft Word or PDF) with no more than three pages, single-spaced.
The submissions should demonstrate:
- Alignment with the objectives of the programme and a proven ability to deliver impact;
- Awareness of the AI and T&S landscape with demonstrable understanding of local AI Trust and Safety challenges;
- Proven skills, experience, relationships and expertise needed to re-imagine AI Trust and Safety in developing countries. Submissions addressing AI Trust and Safety for a specific region should be able to demonstrate an existing equitable partnership that validates the local relevancy of their work;
- Clarity of focus and a realistic understanding of regional challenges and potential impact, including limitations. Note: Effectiveness in a specific industry or with a specific population is highly valued;
- A collaborative approach, strong cross-functional thinking and strategies that enable partnership-driven impact; and
- A high potential for practical implementation or meaningful impact. Submissions may include evidence or research that brings attention to the nuances of how AI harms manifest locally among specific vulnerable populations, or a series of technical, practical, or educational interventions that enable the detection of, or protection against, AI risks and harms. Solution-based submissions must at least be in the early stages of being field-tested and validated by the impacted populations.
At least one member of each team must possess professional-level proficiency in English. Teams will be expected to present online and be interviewed in English.
Application
For more information, visit UNDP.