The New Additions To The Online Safety Bill Are Welcome – But Are They Enough?

Allan Dunlavy 10 May 2022

Partner Allan Dunlavy takes a deep dive into the revised Online Safety Bill, looking at what’s changed since the draft, and what loopholes remain.

Today, issues such as fake news, hate speech, and harmful content seem an inevitable part of life online – but they should not be something we are willing to accept. To tackle these problems, we need proper and proportionate regulation, which the government has recognised with the publication of its revised Online Safety Bill. This new version, delivering on a manifesto commitment, was published on 17 March 2022 and updates the government’s draft Bill from May 2021.

Aiming to make the UK the ‘safest place in the world to be online’, the Bill crucially seeks to maintain free speech whilst protecting users from harmful content – two entirely compatible aims. Since the release of the draft Online Safety Bill, several changes have been made, primarily in response to the Law Commission’s recommendations. Whilst many of these additions, such as the inclusion of fraudulent ads and financial crime, represent positive steps, the Bill still contains vague wording, unclear definitions, and gaps that may mean it is not as effective as it could be in achieving its aims.

Key points in the Bill

The Online Safety Bill provides a new set of rules and regulations for services which host user-generated content (such as social media platforms) and for search engines. Under the Bill, these services will have a duty of care towards users – and will have to answer to the regulator, Ofcom, facing fines of up to 10% of annual global turnover (potentially a whopping $25 billion, in Google’s case) if they fail to do so.

In addition to removing illegal content, the Bill introduces a duty to tackle ‘legal but harmful’ material, such as content about abuse, self-harm, eating disorders, or harassment. Content will be deemed harmful if there is a “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. If applied properly, this definition could work well – as with all of this, the test will be in how these terms are applied and interpreted by the relevant regulatory bodies and the Courts.

What’s changed since the draft Online Safety Bill?

Several amendments have been made to the draft Bill – and whilst these are all improvements, they are not necessarily the best solutions to the issues they tackle. One major change is where the decision-making lies. Whilst the draft Bill allowed each platform or service to decide what counted as ‘harmful’, the government will now decide which content meets this threshold. The categories of harmful content will be set out in separate legislation to be voted on by Parliament, giving Parliament an ongoing role in the regulation. This should also allow the categories to be kept under review and updated as necessary to remain relevant and comprehensive.

‘Harmful’ content

The decision to regularly review the ‘harmful’ categories is a good future-proofing measure – although having this process led by Parliament and not the regulator risks overly politicising the issues. These categories could easily be impacted by the ‘issue of the day’, leading to the list of harmful content being less representative of what is actually causing harm online and more representative of election issues. This ongoing regulatory role would be better suited to an independent expert or regulatory body, which could ensure impartiality and expertise.

How far these categories of harmful content will extend is also up for debate. Content referencing these harmful topics is often nuanced, and a general ban may capture content that should not be removed whilst missing content that should be. For example, would any content discussing self-harm be removed, even in the context of a self-help group or fundraiser? More detail on content which falls into a ‘harmful’ category, but is not in fact harmful in substance, is clearly needed.

Fraudulent ads

One particularly welcome addition in this new version of the Bill is the inclusion of fraudulent adverts. This new standalone duty requires the biggest user-to-user companies, such as Facebook and Twitter, to ‘take action to minimise the likelihood of fraudulent adverts being published, shared or hosted online.’ Key here is that, amongst other things, an advert must be paid for to be deemed fraudulent. At Schillings, my colleagues and I have seen the damage that these ads can cause. The impact is twofold: firstly, an increasing number of social media users have been left financially damaged by fraudulent ads; secondly, high-profile individuals are often unknowingly used as the face of these ads, harming their privacy and reputation. Regulation in this area is urgently needed, and the inclusion of this duty in the Bill is a positive move.

Online anonymity

Anonymity plays a major part in the freedom the online world offers – but it becomes a problem when trying to address harmful content and pursue users who cannot be identified. We have been talking about the problems absolute anonymity engenders for years, calling for user-to-user companies to be required to hold proper, vetted identification for their account holders. This is not a real name policy: users could remain publicly anonymous, but the police would be able to find out who was behind an account in cases of harassment or other illegal behaviour. Whilst the new Bill does not go so far, anonymity is touched upon through a requirement that platforms introduce ‘user empowerment tools’. These tools are intended to give users more control over who they interact with and which (legal) content they see, and will also give them the option to verify their identity. Users can then opt to interact only with, and see content posted only by, verified users.

This recognition of the harm that unverified and completely anonymous users can cause is important, and it is a good first step. It will be interesting to see what percentage of users opt in to verification and what impact this has on their online experience. However, the verification process itself is vague: it can ‘be of any kind (and in particular, it need not require documentation to be provided)’. With no requirement for documentation, the process lacks substance. Making user verification – through official ID – a universal requirement would be a far more effective way to tackle anonymous abuse. As noted, this is different from a real name policy: users would not have to reveal their true identity to other users; platforms would simply have to maintain an accurate record of who is behind each username, available in cases of abuse.

Who will the Bill affect?

The Bill, although tackling the major platforms, does not apply to everything we see online. Several key services are exempt, including, notably, news websites. News websites are defined as such if they are subject to editorial control and if their principal purpose is ‘the publication of news-related material’. It is not just the articles on news websites, but also their comments sections, which fall outside the scope of the Bill. Arguably, a comments section on a news website is no less prone to abuse than any other forum covered by the Bill, particularly as not all comments sections are moderated.

Journalists are offered specific safeguards under the new Bill, and as such, platforms are required to create expedited routes of appeal for them if their content is removed. Again, the definition of ‘journalist’ is difficult to nail down: is there a minimum audience required for a user or site to be deemed a journalist or to be producing journalism? Could any blogger or social media user claim to be a journalist and seek to benefit from the journalist exemption? These questions need to be answered, or the Bill risks creating large loopholes that are all too easy to slip through.

Is the Online Safety Bill the solution we’ve been looking for?

As advocates of online safety and privacy, we at Schillings believe the Online Safety Bill marks a significant milestone and is a great first step towards tackling harmful and damaging content online – but there are issues which still need to be thought through and addressed. Given the rate at which technology evolves and new online platforms develop, the Bill has the potential to become outdated fairly rapidly, so a fast-track avenue for adding guidelines covering new services would be welcome.

The new version of the Bill does make good progress on tackling some previously disregarded problems – indeed, it looks to be significantly more comprehensive and effective than the draft version – but some sections still need more thorough evaluation. In places, it feels as if the Bill is paying lip service to certain problems without properly ensuring that the rules will work in practice. It remains to be seen whether platforms adhere to the guidelines, whether fines are really handed out, and what loopholes are discovered that may require further amendments to the regulation.