The 2024 election year began by highlighting fears of AI’s profound societal impact on information ecosystems and ended with post-election narratives dismissing those concerns as exaggerated. Both framings missed a key truth: those most affected by AI’s shortcomings and harms, particularly in underserved regions and among critical frontline information actors, were overlooked, and their opportunities to drive, adapt, and lead in an emerging AI ecosystem were limited.
At WITNESS, our Deepfakes Rapid Response Force (DRRF) revealed the global repercussions of AI’s role in the 2024 election cycle: weak AI detection compounded by systemic inequities, fragile media ecosystems, a lack of diverse linguistic and regional representation in training datasets, insufficient skills development, and growing threats to safety, human rights, and democracy. Meanwhile, opportunities for AI to enhance public interest information-gathering and sharing remain underexplored.
WITNESS is attending the official AI Action Summit in France in February 2025, where Public Interest AI will be a critical focus. To build a resilient, safe, and inclusive information ecosystem, we must urgently invest in Public Interest AI that serves frontline journalism and human rights. These investments must ensure equitable access to AI tools, governance, innovation, and information integrity.
We Advocate for Four Key Investment Areas in Public Interest AI for a Resilient Global Information Ecosystem
Strengthening Policy and Governance Through Frontline Expertise
Frontline actors—fact-checkers, journalists, human rights defenders, and civil society organizations—are crucial in shaping and governing Public Interest AI. Positioned on the frontlines of AI’s societal impacts, they witness firsthand the consequences of AI-driven decisions and are uniquely equipped to identify gaps and mitigate immediate and emerging harms as well as systemic risks. Yet, their expertise is often marginalized, deepening regional disparities and perpetuating inequalities.
To ensure AI serves the public good, these actors must play a central role in shaping tools, policies, and governance frameworks, and must receive equitable investments in compute, talent, and data so they can participate actively in AI governance in a true multistakeholder fashion.
WITNESS’s DRRF initiative has demonstrated the transformative impact of empowering frontline actors. Collaborations with fact-checkers, journalists, and civil society groups in India, Mexico, Ghana, Sudan, Ukraine, and Georgia have addressed challenges such as the lack of training data for underrepresented languages, limited access to detection tools, and the weaponization of misinformation. Strengthening these partnerships and ensuring ongoing global consultations will create an essential, continuous feedback loop—allowing journalists, technologists, and human rights defenders with lived experience to shape AI development effectively.
Building Capacity to Resist and Engage with AI Advances
Frontline information actors face persistent gaps in AI literacy, access to detection technology, and capabilities to reinforce the credibility of trustworthy information. These challenges are exacerbated by the rapid pace of AI advancements. Furthermore, AI’s potential to enhance journalistic and human rights practices—such as streamlining workflows, anonymizing sources, and refining OSINT approaches—remains largely untapped.
Investment is needed in the capabilities of low-resourced, at-risk frontline information actors, including journalists and civil society, to:
- Detect and mitigate malicious and deceptive uses of AI, and strengthen the credibility and resilience of trustworthy information.
- Develop AI-driven solutions to enhance investigative reporting, protect sources, ensure accurate information, provide access to information, and support new forms of storytelling and reporting.
- Drive proactive engagement to fully leverage the potential of frontline media actors by fostering partnerships between local actors and AI specialists, expanding AI literacy programs, and providing access to tools and datasets tailored to their needs.
- Implement multistakeholder approaches and effective grassroots participation in AI regulation and standardization.
Scaling Global and Regional Support Mechanisms
The growing demand for escalation responses to high-profile AI fakes (and claims of fakery) requires expanding initiatives like the DRRF at both the global and regional levels.
Incident-based collaborations with frontline information actors serve critical public interest goals: they support sociotechnical advocacy for vulnerable communities and information defenders, and they generate feedback from real cases on how detection and media transparency technologies perform in practice. This feedback can sustain a vibrant public interest detection community, including the media forensics experts who volunteer as members of the DRRF (a key cross-section of the detection community), as well as other relevant technical experts and policymakers globally.
By scaling this model, we can ensure that lessons learned from real-world AI threats directly shape the development of more effective detection tools and policy responses (read more here about our benchmark work on the Equitable Effectiveness of AI Detection Tools).
Advancing Innovation Through Inclusion and Equitable Access to Talent, Data, and Resources
To build a truly inclusive Public Interest AI ecosystem, frontline actors must be integrated into collaborative innovation processes. AI-driven solutions must not be imposed top-down but instead emerge from the needs of those most affected to drive scalable, sustainable impact.
Over the next two to three years, frontline information actors—including civil society, journalists, and human rights defenders—must make critical decisions about their engagement with AI. Their responses will encompass strategies for:
- Tactical adaptation, adoption, and resistance: leveraging how AI changes efficiency, speed, or access to enhance impact, rights, trust, and voice, for example by embracing AI’s potential to widen access to their information, to communicate more effectively to multiple audiences, and to facilitate localization and translation. They will also have to make choices about resisting the epistemic impacts of AI, for example by forcefully defending the integrity of the ‘real’ and the notion of singular documentation of evidence or investigation of wrongdoing.
- Paradigmatic adaptation: addressing the fundamental changes AI brings to knowledge and communications environments; for example, in the context of AI’s epistemic threat, ensuring that ethical, authentic narratives, both human and AI-facilitated, can compete with high-volume, hyper-realistic, personalized AI content and AI slop.
The future and resilience of global information ecosystems depend on how effectively frontline actors are resourced, empowered, and integrated into AI governance and innovation. Public Interest AI cannot be an afterthought for these communities. It must be a core investment priority, ensuring that those defending facticity, safety, rights, and democracy remain at the forefront of AI’s evolution.