Monday, January 27, 2025

EU’s Code of Conduct Countering Online Hate Speech Becomes Part of the DSA


On January 20, in a long-awaited move, the European Commission (EC) announced the integration of the revised Code of Conduct on Countering Illegal Hate Speech Online (the Hate Speech Code) into the regulatory framework of the Digital Services Act (DSA). 

Article 45 of the DSA provides for the formation of voluntary codes of conduct at EU level to "contribute to the proper application of" the DSA. The Hate Speech Code marks the first code to be integrated into the DSA framework via the Article 45 process.

Background

The Hate Speech Code revises and builds on the EU’s 2016 Code of Conduct on Countering Illegal Hate Speech Online (the 2016 Code). The 2016 Code, which predates the DSA, was an EU-led co-regulatory initiative whereby a group of leading online platform providers committed to a voluntary set of general practices in respect of combatting illegal hate speech on their platforms. The initial signatories of the code were Facebook, Microsoft, Twitter (now X) and YouTube. This was expanded over time to also include Instagram, Snapchat, Dailymotion, Jeuxvideo.com, TikTok, LinkedIn, Rakuten Viber and Twitch.

A centerpiece of the 2016 Code was a regular monitoring exercise, whereby the EC engaged with a network of largely non-profit organisations located in the different EU countries. Using an agreed methodology, these organisations tested how effectively and efficiently the online platforms responded to reports of illegal hate speech submitted by the organisations.

The New Hate Speech Code 

The updated Hate Speech Code aims to enhance the response of online platforms to content deemed to be illegal hate speech under EU law and Member State legislation. A total of 12 service providers have committed to the Code: Dailymotion; Facebook; Instagram; Jeuxvideo.com; LinkedIn; Microsoft-hosted consumer services; Snapchat; Rakuten Viber; TikTok; Twitch; X; and YouTube (collectively, the Signatories) – the same group that had previously committed to the 2016 Code.

In substance, the new Code is similar to the 2016 Code but provides greater detail on the obligations of Signatories. The Signatories have committed to a range of practices for the effective enforcement of content moderation rules against illegal hate speech. Some are quite broad, general commitments, while others are specific requirements. Key commitments include:

  • Terms and conditions for addressing illegal hate speech: Signatories must ensure their terms and conditions prohibit illegal hate speech and provide clear information on their policies on illegal hate speech, along with the measures that will be taken where these policies are breached.
  • Intra-industry and multi-stakeholder cooperation to address and prevent risks of the spread of illegal hate speech: Signatories will participate in regular forums to share best practices, expertise and tools on measures to address the dissemination of illegal hate speech content.
  • Awareness raising and counter narrative initiatives: Signatories commit to supporting new ideas, initiatives and educational programmes that encourage civility online and critical thinking.
  • Transparency, accountability and monitoring: Signatories will assess their adherence to their commitments regarding the review of notices of illegal hate speech on their platforms and will monitor these trends over time. Signatories must also document a summary of the measures taken to address illegal hate speech as part of their content moderation policies – the "Summary Document".
  • Review and possible removal of, or disabling access to, illegal hate speech content: Signatories have committed to reviewing the majority of notices they receive under Articles 16 and 22 of the DSA from "Monitoring Reporters" (explained below) within 24 hours.

Impact of the Hate Speech Code and the Monitoring Exercise

In considering the likely practical impact of the new Code, there are a few elements to consider.

As noted above, even though the Hate Speech Code has been integrated into the DSA framework, it is voluntary. Subscribing to the Code is not mandatory, and a Signatory's failure to comply with the Hate Speech Code will not, of itself, constitute an infringement of the DSA. However, Recital 104 of the DSA does state that "refusal without proper explanations […] of the Commission's invitation to participate in a code could be taken into account" when determining whether an infringement of the DSA has taken place. Furthermore, Article 35(1)(h) of the DSA lists compliance with relevant Article 45 codes among the suggested ways for VLOPs/VLOSEs to discharge their risk mitigation obligations. As such, there is some impetus for service providers to comply with such voluntary codes.

Some commentators have noted significant overlap between certain of the commitments and the obligations that already exist under the DSA. For example, much of the information that Signatories must provide in their Summary Documents under the Hate Speech Code must also be provided under the DSA's transparency obligations.

One of the most impactful parts of the new Code is likely to be its reintroduction of the hate speech Monitoring Exercise. This is a process that aims to evaluate the performance of the Signatories in responding to reports of illegal hate speech. The exercise involves an approved list of not-for-profit or public entities reporting what they consider to be illegal content over a six-week period and monitoring how much of the reported content is actioned within 24 hours. The performance of each of the Signatories will then be published by the EC.

A similar monitoring exercise had taken place annually under the 2016 Code. However, the exercise had been paused since 2022, as planning for the revised, DSA-integrated code began. The monitoring exercise under the 2016 Code had attracted criticism on several fronts. For example, the 2016 Code made no provision for disagreements between the Monitoring Reporters and the Signatories as to the legality of the reported content. Critics also questioned whether some of the organisations selected to act as Monitoring Reporters had sufficient expertise or resources to properly assess the platforms' performance. The new Code has attempted to address a number of these issues by providing greater detail on the monitoring methodology, introducing a "Disputed Case" mechanism and providing a system for Signatories to object to certain organisations being granted the status of Monitoring Reporters.

If you would like any further information on the Hate Speech Code, please contact Stephen King, Eoghan O’Keeffe, Rosalyn English or your usual contact on A&L Goodbody’s Technology team.

