What’s the Online Safety Act and what do you need to do to comply?

The Online Safety Act is a new set of laws designed to protect UK children and adults online. Enforced by communications regulator Ofcom and introduced in a phased approach, it puts a range of new duties on social media companies and search services, making tech firms more responsible for user safety on their platforms.

What does the Online Safety Act cover and what does your business have to do to comply?

Why is the Online Safety Act being introduced?

The Online Safety Act (2023) has actually been in the works since 2017, and was argued over extensively in Parliament along the way as campaigners highlighted the increasing risks of harmful content online – particularly to children.


The tragic case of Molly Russell – the 14-year-old girl who took her own life after watching a stream of dark content relating to suicide, self-harm and depression on Pinterest and Instagram – was one factor in highlighting the urgent need for regulation of the content young people are exposed to on the internet and social media platforms.

But of course, it’s very hard to regulate the internet.

If you think back to when print first came into being as a medium of communication, it took decades – centuries even – to become as widespread as it is now. Laws around libel and copyright were introduced gradually as books, newspapers and other printed publications flourished over time.

The internet, by contrast, became mainstream in the late 1990s, with Facebook really taking hold in the mid-2000s and Twitter not far behind. So in the space of around just 20 years we have seen an enormous proliferation of social media platforms and, of course, Google itself. The number of internet users now stands at around 5.5bn, 68% of the global population.

And this global element is a core challenge when it comes to regulation to ensure that the content we see is informative, accurate, and of good quality. The clue is in the name: world wide web. Apart from in places like China, where the majority of Western platforms are blocked, web content crosses boundaries in seconds; it’s not something you can restrict like visas and passports, or goods crossing from one country to another.

And so if the regulation is made in the UK, you might argue that it’s not going to make a difference.

People will just set up their company from an offshore HQ, safe in the knowledge that they can escape regulation. And with things like generative AI galloping into view – leading to deliberate disinformation designed to distort elections, and fake images created for the purposes of sextortion and abuse – you could argue that the law will never be able to keep up.

It's not perfect, but you have to start somewhere. And do we want a world where young people (or anyone) can access horrific content on the device in their pocket, or not?!

And what we’ve seen with GDPR – the 2018 EU law governing how organisations collect and use people’s personal data – is that other countries have fallen into line. The UK chose to abide by it from day one, despite Brexit. And although many of the email marketing platforms that store this personal data, like Mailchimp, are US-based, the customers being targeted are in Europe.

So they have to sharpen up their act too, and collectively, it does make a positive impact.

What does the Online Safety Act cover?

The Act is centred on two types of content:

  1. Illegal content. Examples listed include child sexual abuse, intimate image abuse, extreme sexual violence and extreme pornography, as well as fraud, people smuggling, terrorism, and the sale of illegal drugs or weapons.

  2. Content that is harmful to children. The primary priority content is pornography and anything that encourages self-harm, eating disorders or suicide. Then there’s bullying, abusive or hateful content, and content that encourages serious violence, dangerous stunts, or exposure to harmful substances.

New offences have been introduced too, in line with changing features and behaviour on the internet, including cyberflashing, threatening communications and epilepsy trolling.

The most harmful illegal online content disproportionately affects women and girls, and the Act requires platforms to proactively tackle this. It also requires providers to run risk assessments which specifically consider how algorithms affect users’ exposure to illegal content and content that is harmful to children.

Who does the Online Safety Act apply to?

  • Search services

  • Any service that allows users to post content online or to interact with each other

This includes a range of websites, apps and other services, including social media services, consumer cloud storage and file-sharing sites, video-sharing platforms, online forums, dating services, and online instant messaging services.

The Act applies even if the companies providing the service are based outside the UK, as long as the service has links to the UK: a significant number of UK users, the UK as a target market, or a material risk of significant harm to UK users.

Every site and app in scope of the new laws has until 16 March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on its platform.

What's the penalty if you don’t comply with the Online Safety Act?

You’re looking at a fine of up to £18m or 10% of your qualifying worldwide revenue – whichever is greater.

Ofcom can also apply to the courts for business disruption measures. In the most serious cases these can require payment providers or advertisers to withdraw their services from a site, or have access to it blocked in the UK altogether – leading to lost customers, lost revenue, and a damaged reputation.

How do you comply with the Online Safety Act?

Tech firms and established organisations managing and developing existing online platforms

This Online Safety Act checklist has been created from a variety of official sources, which I’ve listed above. Please follow those links for far more detailed information on the official codes of practice.

  1. Read the Ofcom summary of new rules for online services, follow the links, and carry out an illegal content risk assessment and a children’s risk assessment by 16 March 2025.

  2. Publish annual transparency reports containing online safety-related information, such as information about the algorithms you use and their effect on users’ experience, including children’s.

  3. Introduce age verification tools and show that you are enforcing age limits.

  4. Ensure accountability. Each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties.

  5. Bring in better moderation, easier reporting and built-in safety tests. You’ll need to make sure your moderation teams are appropriately resourced and trained and are set robust performance targets, so they can remove illegal material quickly.

  6. Make it easier for users to report and complain about illegal content (see the sketch after this list for one way a basic reporting flow might look).

  7. Introduce empowerment tools to give users more control over what they see.

  8. Improve the testing of your algorithms to make illegal content harder to disseminate.
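
To make point 6 a little more concrete, here’s a minimal sketch of a user reporting flow: a complaint is recorded, categorised, and routed to human moderators, with the most serious categories escalated first. All the names here are hypothetical, and neither the Act nor Ofcom’s codes prescribe any particular implementation – treat it as an illustration only.

```python
# Minimal sketch of a user reporting flow. All names are hypothetical;
# the Online Safety Act does not mandate any specific implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from uuid import uuid4


class ReportReason(Enum):
    CSAM = "child sexual abuse material"
    TERRORISM = "terrorism"
    FRAUD = "fraud"
    INTIMATE_IMAGE_ABUSE = "intimate image abuse"
    HARMFUL_TO_CHILDREN = "harmful to children"
    OTHER = "other"


# Categories that should jump to the front of the moderation queue.
URGENT_REASONS = {ReportReason.CSAM, ReportReason.TERRORISM}


@dataclass
class ContentReport:
    content_id: str
    reporter_id: str
    reason: ReportReason
    details: str = ""
    report_id: str = field(default_factory=lambda: str(uuid4()))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


moderation_queue: list[ContentReport] = []  # stand-in for a real queue or database


def submit_report(content_id: str, reporter_id: str,
                  reason: ReportReason, details: str = "") -> ContentReport:
    """Record a user complaint and route it to the moderation queue."""
    report = ContentReport(content_id, reporter_id, reason, details)
    if reason in URGENT_REASONS:
        moderation_queue.insert(0, report)  # escalate for immediate human review
    else:
        moderation_queue.append(report)
    return report


# Example: a user reports a listing they believe is fraudulent.
receipt = submit_report("post-123", "user-456", ReportReason.FRAUD,
                        "This listing looks like a scam.")
print(f"Report {receipt.report_id} received – a moderator will review it.")
```

In a real service the queue would be a proper datastore with audit trails, response-time targets and a route back to the user, but the underlying point is simply that reporting should be easy to find and quick to act on.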

Startups and businesses developing new apps, tools, forums and platforms

  1. Look at the above list of Online Safety Act requirements and build the required online safety features into your product and processes so you are compliant from the start – particularly if your product will be used by young people or involves generative AI that creates image or video content.

  2. Get up to speed with best practice and encourage ethical product development to support the UK in leading the way in online safety.

  3. As part of your development processes, get into the habit of asking the following questions (especially if your team is drawn from a narrow demographic who may not consider potential harms to women and children):

  • Could our algorithm nudge people towards harmful content? How can we avoid this?

  • Have we built in user empowerment tools to give users more control over who they engage with and what content they see?

  • Do we have age verification in place? (A rough sketch of an age gate follows this list.)

  • Could visual content be created with the deliberate intention of causing harm?

  • Plus anything else particularly relevant to your sector and audience
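
On the age verification question, here’s a very rough sketch of an age gate, with entirely hypothetical names. Note that for the riskiest content Ofcom expects “highly effective” age assurance – in practice that means plugging in a proper verification method rather than relying on a self-declared date of birth, so treat this as a placeholder for wherever that check happens in your product.

```python
# Rough sketch of an age gate. Names are hypothetical; for the riskiest content
# Ofcom expects "highly effective" age assurance, not a self-declared date of birth.
from dataclasses import dataclass
from datetime import date


@dataclass
class AgeCheckResult:
    """Outcome returned by whatever age-assurance method you plug in."""
    verified: bool                      # did the check itself succeed?
    date_of_birth: date | None = None   # only trusted if verified is True


def age_in_years(dob: date, today: date | None = None) -> int:
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


def can_view_age_restricted_content(check: AgeCheckResult, minimum_age: int = 18) -> bool:
    """Default to blocking: unverified users never see age-restricted content."""
    if not check.verified or check.date_of_birth is None:
        return False
    return age_in_years(check.date_of_birth) >= minimum_age


# Example usage
adult = AgeCheckResult(verified=True, date_of_birth=date(1990, 5, 1))
unknown = AgeCheckResult(verified=False)
print(can_view_age_restricted_content(adult))    # True
print(can_view_age_restricted_content(unknown))  # False
```

The design choice worth copying is the default: if the check fails or hasn’t happened, the answer is “no”, rather than letting unverified users through.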

Use common sense when building online safety features

It’s important to keep it all in context, and remember that the absolute focus of the Act is to protect children. I saw a guy making a big noise about shutting down his cycling forum because the new rules felt so cumbersome; but (strictly in my opinion, and I definitely am not a lawyer) unless you’re enabling discussions around terrorism, suicide or sexual abuse of children, I don’t think you’re going to be high on Ofcom’s hitlist. If you’re Reddit or OnlyFans you have a lot more to worry about.

However, Meta’s plan to bring in fake AI-generated users on its platforms could very easily fall foul of the Act. It takes only a couple of seconds’ thought to consider how fake profiles interacting with young people could lead to them being exposed to harmful content if safeguards aren’t put in place. It feels wilfully naive of them to ignore this – so this is where the Online Safety Act can make them think again.

Whatever the size of your organisation, it’s important to familiarise yourself with the new rules to protect users, and also to protect your business from eye-wateringly high fines, reputational damage, and the developer costs of fixing features to ensure Online Safety Act compliance.

Stay compliant with consultancy from Sookio

We follow digital trends, techniques and regulations closely to support our clients in staying compliant.

Chat to us now about consultancy services as you develop your product and take it to market.

Sue Keogh

Director, Sookio. Confident communication through digital content
