Deepfakes and Digital Abuse: Dismantling Technology-Facilitated Gender-Based Violence


As we celebrate the achievements of women globally on International Women’s Day, we must also acknowledge the ongoing digital threats that disproportionately target them. From non-consensual intimate imagery (NCII) and deepfake abuse to algorithmically amplified harassment, technology-facilitated gender-based violence (TFGBV) threatens gender equality, free expression, and personal safety.

To address the pressing problem of TFGBV, WITNESS has submitted recommendations to the United Nations Human Rights Council Advisory Committee as part of its study on TFGBV. Drawing on years of work on technology with vulnerable communities globally, our submission highlights gaps in the current international human rights framework and offers concrete strategies to strengthen global responses to digital gender-based violence.

Since 2018, WITNESS has conducted global in-person consultations with journalists, technologists, community leaders, and other human rights actors. During these conversations, AI-driven sexual and gender-based violence (SGBV) was identified as one of the most pressing threats across regions, including South-East Asia, Africa, and Latin America, particularly Brazil. The misuse of generative artificial intelligence (AI) to create non-consensual sexual deepfakes of ordinary citizens is already a reality. Deepfakes depicting female leaders and activists in compromising or false situations could be used to discredit their authority, undermine their influence, erode public trust, and lead to physical harm. This could have a chilling effect, discouraging women from participating in political, social, and economic arenas, thereby reinforcing gender inequality.

AI-Driven Gender-Based Violence

Years of WITNESS’ Technology Threats and Opportunities (TTO) program underscore how AI, particularly generative AI and deepfake technology, is being weaponized to silence and discredit vulnerable communities. Since 2018, WITNESS has led Prepare, Don’t Panic, the first civil society-led initiative tackling synthetic media threats to the information ecosystem. As part of this work, WITNESS has conducted extensive risk and harm assessments of provenance and authenticity standards, alongside advocacy for equitable and effective detection tools. Our experience has revealed the significant limitations of these technologies in responding to TFGBV.

Since 2023, WITNESS’s Deepfake Rapid Response Force (DRRF) has also provided real-time deepfake analysis in election and conflict contexts globally, highlighting the impact of AI-generated falsehoods on the information ecosystem, democracy, and human rights. Lessons learnt from the DRRF directly illuminate the challenges of dealing with non-consensual sexual deepfake content, which is increasingly used as a tool of intimidation, extortion, and misinformation. This has grave implications for democracy, free expression, and the personal safety of women and gender minorities:

  • Women in politics, media, and activism are being disproportionately targeted with AI-generated disinformation and online harassment, which could discourage them from participating in public life.
  • AI is amplifying existing mis/disinformation risks and patterns of abuse which disproportionately impact already vulnerable groups, including women of colour, women with disabilities, migrant women, indigenous women, gender minorities and the broader LGBTQ+ community. 
  • Current AI detection tools remain inadequate, particularly for non-English languages and low-resource communities, making justice inaccessible for many survivors. And in many cases, whether the content is AI-generated or manipulated does not change the harm inflicted—damage to reputation, credibility, safety, and personal security is already done, with little recourse for those affected.

Gaps in the International Human Rights Frameworks

Despite the growing prevalence of TFGBV, current frameworks are ill-equipped to address emerging digital threats, leaving survivors with limited legal recourse. Key shortcomings include:

  • While international human rights treaties address gender-based violence broadly, they fail to account for the digital dimensions of abuse, including AI-driven threats, and the disproportionate impact of TFGBV on marginalized groups. 
  • There is a lack of enforceable global standards and mechanisms for holding social media companies and AI developers accountable in responding to TFGBV.
  • The cross-border nature of online abuse complicates legal enforcement and survivor protection, requiring stronger international cooperation.

Strengthening Protections

In our submission, WITNESS calls for urgent reforms to ensure digital spaces are not weaponized against marginalized communities. Our key recommendations include:

  • Establishing international mechanisms and commitments with companies that ensure accountability for the misuse of AI-generated content to create and disseminate non-consensual intimate images.
  • Ensuring equitable access to detection solutions and strengthening accountability for tech platforms, app creators, and app stores.
  • Creating survivor-centered policies that provide interoperability, swift removal of harmful content, and accessible legal recourse.
  • Equipping activists, journalists, and civil society organizations with tools and knowledge to navigate and counter TFGBV.
  • Implementing binding global regulations that require transparency from tech companies in takedown mechanisms and AI development practices.

As WITNESS continues to engage with the UN and other international bodies, we emphasize the urgent need for stronger global protections against TFGBV. On this International Women’s Day, we call on governments, technology companies, and civil society to take meaningful action in dismantling the structures that enable digital violence.

Published 6th March, 2025
