A joint Parliamentary committee will report on the draft Online Safety Bill on 14 December, approximately six weeks after whistleblower Frances Haugen voiced her concerns about social media safety to government committees in the US and UK. Ahead of the report, Allan Dunlavy looks at how the Online Safety Bill would work – and where it could be improved.
Currently, social media – and the internet more generally – doesn’t prioritise users’ safety. At Schillings, we’ve been concerned about this for years – and now, finally, the UK government has also recognised that something needs to be done about it. The Online Safety Bill, currently in draft form, could be the answer. It’s presented as the solution to ‘making the UK the safest place to go online’. But will the bill really solve the problems it aims to address?
The Online Safety Bill started life as the Online Harms White Paper back in 2019, a policy proposal aimed primarily at making the internet a safer place for children. The bill creates a duty of care between social media companies and their users. In practical terms, these companies would be required to remove harmful content quickly from their platforms or face fines from Ofcom. If approved, the Online Safety Bill would be a landmark piece of legislation.
The draft Bill has already attracted some complaints about its supposed negative impact on freedom of speech and its potential for ‘over-policing’. Both are genuinely important issues, and any regulation of the internet needs to take very careful account of them. In reality, though, the draft Bill neither restricts free speech nor over-polices. Quite the contrary: on a close read, it has a number of significant gaps which mean many businesses and service providers will slip through the net.
By the government’s own estimate, only 3% of digital businesses actually fall under the scope of the Online Safety Bill. For example, ‘internal business services’ – which include file transfer services, corporate email services, cyberlockers and webhosts – are excluded from the Bill’s remit. Further, under the current draft, most of the largest pornography websites are already outside the Bill’s scope or could easily put themselves outside it. This severely limited scope calls into question the Bill’s effectiveness in achieving its central aim of protecting children from harmful content.
‘Recognised news publishers’ are also exempt. Which publishers are deemed ‘recognised’ is, of course, subjective. This is likely to be hotly debated, and the exemption lumps genuine and legitimate publishers together with all sorts of purveyors of fake news, paid-for news and the like. Ultimately, it is too broad and vague a categorisation to make much of an impact.
Naturally, there needs to be a balance between freedom of speech and proper, proportionate regulation to address the massive issues that our society is currently grappling with. Many news publications are already subject to regulation, but the ones which aren’t should not be excluded from the Bill. In a world where the very definition of news is up in the air and where anyone can set up a website and ‘report’ on the news, this loophole requires reviewing.
As it stands, in my eyes, the Bill’s narrow focus, vague definitions and loopholes mean that many services, platforms and publishers will escape accountability, continue to contribute to misinformation and fake news, and remain complicit in facilitating harmful content online. To make the internet a truly safe place, the scope of the Online Safety Bill needs a rethink.