Three years on from its introduction in UK Parliament, the Online Safety Act comes into force today, meaning communications regulator Ofcom can now take action against tech companies that fail to tackle illegal content on their platforms. Under the law, which handed online safety powers to Ofcom, the watchdog can fine businesses up to £18 million or 10 percent of their worldwide revenue, whichever is greater. The regulator can also apply for a court order to block sites in the UK in the most serious cases.
The Online Safety Act comes into effect following yesterday’s deadline for companies within the scope of the law (including social media, video sharing and messaging apps) to submit a risk assessment of the illegal harms posed by their services. The deadline triggers the next set of illegal harms duties, according to Ofcom, whereby tech firms must begin implementing measures to promptly remove illegal material and to reduce the risk of criminal content appearing in the first place.
The law’s remit covers more than 100,000 services, and Ofcom will be assessing their compliance with the new illegal harms obligations over the coming weeks and months. The watchdog said it would take enforcement action where concerns are uncovered. To aid compliance, the regulator has published codes of practice for tech companies to follow, covering more than 130 “priority offences” they must tackle under the legislation, such as terrorism, intimate image abuse, and child sexual exploitation.
Suzanne Cater, Enforcement Director at Ofcom, said the regulator’s “first priority” was to stop the spread of online child sexual abuse material (CSAM). Following consultation with law enforcement agencies and child protection experts, the regulator has launched an enforcement programme to assess safety measures being taken by file-sharing and file-storage services, which are “too often used to share this horrific material,” according to Cater.
“The Online Safety Act has the potential to be transformational in protecting children from online exploitation,” added Derek Ray-Hill, Interim CEO at the Internet Watch Foundation. “Now is the time for online platforms to join the fight and make sure they are doing everything they can to stop the spread of this dangerous and devastating material.”
Firing the starting gun
The Online Safety Act became law in November 2023 in an effort to prevent online harms and update the legislation governing the practices of tech companies in the space. Previously, these firms were largely seen as providing the pipes through which users post content, with content moderation policies and age limits expected to allow social media firms to self-police their services.
But with the thresholds for harmful content and the staffing of content moderation teams left to the discretion of social media companies, Ofcom has found growing instances of consumers encountering online harms on Facebook, Instagram and X, as well as on TikTok for younger users. The law therefore aims to make tech companies legally responsible for the content that appears on their services.
And child protection has been a major focus for the legislation, given the prevalence of underage users accessing social media by falsifying their date of birth. In May 2024, Ofcom instructed tech firms to introduce robust age-checks and stop their algorithms recommending harmful content to children, as part of its draft Children’s Safety codes of practice under the Online Safety Act.
The first-edition codes of practice were then published in December 2024, with Ofcom saying it was “firing the starting gun on new duties for tech firms.” The guidelines set out over 40 safety measures which tech companies must introduce starting today. These include appointing a senior person accountable for compliance, ensuring moderation teams are appropriately resourced and trained in preventing online harms, and hiding children’s profiles and locations from other users.
“Platforms must now act quickly to come into compliance with their legal duties, and our codes are designed to help them do that,” said Ofcom’s Suzanne Cater. “But, make no mistake, any provider who fails to introduce the necessary protections can expect to face the full force of our enforcement action.”