With ‘Legal But Harmful’ Gone, Will Terms of Service Protect Social Media Users?

Sarah Reynolds and Caroline Marshall from our Legal team investigate what the Online Safety Bill says about social media platforms’ Terms of Service now that the ‘legal but harmful’ provision has been dropped. This article follows Part 1, ‘Where Does the Online Safety Bill Stand on Age Verification’, in this two-part series analysing the protections offered by the Online Safety Bill.

The Online Safety Bill has rarely been out of the news since the Online Harms White Paper, from which it originated, was published in 2019.

It’s now nearly four years since discussions about regulating big tech companies and social media platforms began – and with mixed reactions from online safety organisations, MPs, tech companies and academics, the Bill looks unlikely to be a panacea for the myriad issues we’re seeing when it comes to protecting users online. However, the general consensus seems to be that some regulation, albeit with imperfections, is better than none. 

Out with ‘legal but harmful’, in with Terms of Service

Although still in draft form, the Online Safety Bill is making progress. The Bill received its second reading in the House of Lords on 1 February, with a number of changes having been made to the most recent version.

Arguably the most significant change in recent months is the removal of proposed duties in respect of content which, while not illegal, could be harmful to adults. This requirement for operators to address “legal but harmful” content has been replaced with a duty on providers to remove content which is prohibited by a platform’s own Terms of Service. Social media companies will also be required to give adult users the ability to hide certain potentially harmful content that, although not illegal, they do not wish to see.

While the Bill takes meaningful steps to empower adult users to opt out of seeing harmful content, users can still share and access harmful content by default. In the absence of effective age verification systems, there is a real risk that the audience for this harmful content will include child users.

In addressing harmful content which is not illegal, the Bill puts the decision firmly in the hands of tech companies. It does so by requiring that content be removed if it is prohibited by a platform’s Terms of Service. The onus is therefore on social media platforms to set their own Terms of Service; the Bill neither prescribes detailed minimum standards that those Terms must include nor sets out how they must be enforced.

Are Terms of Service currently effective?

Terms of Service are nothing new. The majority of websites hosting user-generated content maintain their own Terms of Service, which act as a set of rules setting out what is and is not allowed on the platform. However, it is no secret that, at present, the Terms of Service put in place by social media companies are often vague and enforced inconsistently.

There is a clear lack of transparency when it comes to understanding how content is moderated across platforms which host user-generated content. Some platforms undertake automated moderation using AI, while others employ teams of human content moderators. As recent reports regarding changes at Twitter have highlighted, standards of moderation differ even between the most popular and well-funded platforms.

The Bill now goes some way to addressing these inconsistencies, by requiring that platforms enforce their Terms of Service consistently and give users transparency about how this is being done. However, beyond illegal content, the protections for users remain only as good as the standards platforms choose to set for themselves.

When looking at the current experiences of social media users, it’s clear that consistent application of the Terms of Service is a point of contention. This isn’t just a lawyer’s view: a cursory internet search brings up examples of users stuck in a tug of war with social media platforms regarding where the line is drawn on harmful content and how and when the Terms of Service should be interpreted and enforced. The resounding message is that even where Terms of Service demonstrate a commitment to removing harmful content, they are applied inconsistently.

There are instances of social media platforms failing to remove photographs of children which have been reposted by adult users alongside inappropriate captions, contrary to the platform’s Terms of Service. In cases like this, there is an apparent failure to consider the overall context and the harm likely to be caused by content on a case-by-case basis.

In the same vein, The Guardian highlighted concerns in 2022 about both Instagram’s and Twitter’s in-app reporting tools, in relation to their failure to remove sexualised images of children. The Guardian also covered a separate report which found that Facebook, Twitter, Instagram, YouTube and TikTok failed to act on 84% of posts spreading anti-Jewish hatred and propaganda reported via the platforms’ official complaints system.

Similarly, while a platform’s Terms of Service may prohibit content that is of a bullying or harassing nature, this standard is meaningless if the moderation procedures are not fit to enforce it. The Guardian reported that Twitter failed to delete 99% of racist Tweets aimed at footballers in the run-up to the World Cup last year, and that a quarter of the Tweets reported used emojis, rather than words, to direct abuse at players.

The lack of clarity around how to report harmful content effectively (other than simply pressing “Report” on the posts in question), and the variety of mechanisms for doing so, are problems that the provisions of the Online Safety Bill may not overcome. Platforms seem to assume an unrealistic degree of legal knowledge on the part of their users, rather than maintaining a process that is accessible, and this has to change.

Beyond a requirement to enforce Terms of Service, these examples highlight the need for careful, case-by-case analysis of content flagged as being in breach of those Terms, if the internet really is going to be a safer place.

Will enforcing Terms of Service make the internet safer?

Given that each technology company will have scope to self-regulate, in part by setting its own Terms of Service, a universal standard of internet safety will not exist under the Online Safety Bill, causing some to question the effectiveness of the upcoming regulation.

It cannot be denied that some social media companies are already trying to tackle some of the harm that takes place online. Searching for posts on Twitter using the #suicide hashtag brings up a banner informing users that help is available and providing contact details for Samaritans. But scroll down, and graphic content is still available to users of any age. Many platforms were also quick to introduce policies aimed at preventing misinformation about Covid-19 vaccinations.

While social media companies are putting in some effort, these actions are nothing more than baby steps taken by giants with deep pockets; they can – and should – be doing far more so that harmful content doesn’t fall through the cracks.

As currently drafted, the Online Safety Bill is undoubtedly a significant step towards ensuring that both children and adults have a safer experience online. However, in its current form, the legislation leaves considerable scope for big tech companies to set the pace of progress towards a safer future.