
Tech Companies Face Tough Regulation as UK Enforces the Online Safety Act
The British government has begun implementing the UK Online Safety Act, a new law that requires tech companies to do more to combat illegal content online. According to CNBC, this paves the way for the government to regulate harmful online content more strictly and to impose heavy fines on tech giants that violate the law.
Ofcom Regulations
As the new UK online regulations take effect, the British media and telecommunications watchdog Ofcom published guidelines for tech firms. The guidelines explain what companies like TikTok, Meta, and Google need to do to address harmful content, including hate speech, terrorism, fraud, and child sexual abuse, on their digital platforms.
The Online Safety Act imposes duties of care on tech companies, compelling them to take responsibility for harmful content published and shared on their platforms. According to Ofcom, tech companies must conduct risk assessments on illegal harms by March 16, 2025, giving them three months to comply with the new law.
“Ofcom’s illegal content codes are a material step change in online safety meaning that from March, platforms will have to proactively take down terrorist material, child and intimate image abuse, and a host of other illegal content, bridging the gap between the laws which protect us in the offline and the online world,” British Technology Minister Peter Kyle said in a statement.
After the assessment deadline, tech companies must begin implementing online safety measures in Britain to mitigate the risks of illegal harms. Such measures include better content moderation, built-in safety tests, and easier reporting.
“We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year,” Ofcom CEO Melanie Dawes said.
The UK Online Safety Act was passed in October 2023 but had not yet come into full force; the start of its implementation marks its official entry into force.
Penalties for Violation
Violating the Online Safety Act will attract hefty fines for tech companies. Under the law, Ofcom may impose fines of up to 10% of a company’s global annual revenue for breaching its provisions. Repeat violations could lead to jail time for senior managers, suspension of services in the UK, or restricted access to advertisers and payment providers.
“If platforms fail to step up, the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites,” Kyle added.
Earlier this year, Ofcom came under pressure to toughen internet regulation in the UK following riots that were instigated, in part, by disinformation on social media. The watchdog says the duties of care provided in the Online Safety Act cover a wide range of entities, including search engines, social media companies, messaging, dating, and gaming apps, and file-sharing and adult content sites.
Compliance Requirements
Ofcom outlined various requirements in its first-edition codes published online. The watchdog’s regulations require digital platforms to make reporting and complaint features easy to locate and use.
High-risk platforms are expected to use hash-matching technology to detect and remove child sexual abuse material. Hash matching compares the digital fingerprints (hashes) of uploaded images against databases of known abuse imagery compiled with police, enabling automated filtering systems on social media sites to delete matches.
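At its simplest, hash matching works by fingerprinting each upload and checking that fingerprint against a set of known hashes. The sketch below illustrates the idea with a cryptographic hash; real deployments typically use perceptual hashing (such as PhotoDNA) so that resized or re-encoded copies still match. The `known_hashes` set and function names here are illustrative, not from any real database or vendor API.

```python
import hashlib

# Illustrative stand-in for a database of fingerprints of known illegal
# images (in practice supplied via police and industry databases).
# This entry is the SHA-256 hash of the bytes b"test", used only for demo.
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a digital fingerprint (SHA-256 hash) of an uploaded file."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_remove(image_bytes: bytes) -> bool:
    """Flag an upload for automated removal if its fingerprint is known."""
    return fingerprint(image_bytes) in known_hashes
```

A cryptographic hash like this only catches byte-identical copies; the perceptual hashes used in production tolerate small visual changes, which is why they are preferred for this task.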
The watchdog emphasized that these codes are only the first set. The regulator will publish additional codes in 2025, covering measures such as blocking accounts that share child sexual abuse content and using AI to tackle illegal harms.