The reality is that the internet today is simply not synonymous with the word “safe”, particularly for children and young people. Indeed, last year, the NSPCC reported that online grooming crimes have risen by more than 80% in four years, and in 2021 the Internet Watch Foundation found fifteen times more child sexual abuse material online than there was 10 years ago.
Against this backdrop, and with the tragic death of Molly Russell still gripping the headlines, the lack of adequate steps taken by social media companies over the past two decades to protect users means that it has fallen upon the legislature, by way of the Online Safety Bill, to try to prevent further harm being done to children and young people.
In order to protect young people and children from harm online, what constitutes adequate age verification is a central issue which needs to be tackled. Currently, the age verification processes used by various platforms are inconsistent and seemingly ineffective.
The question on people’s minds, however, is whether the Online Safety Bill will actually make a difference when it comes to the seemingly all-powerful social media giants and the way they ensure young people are not accessing content which is aimed at adults. Do the requirements of the Bill and the threat of substantial fines go far enough to ensure that children are adequately protected online, or is the Bill’s bark likely to be worse than its bite?
The Revised Bill
The Online Safety Bill purports to be world-leading legislation which aims to make the UK “the safest place in the world to be online”. It certainly has the potential to achieve that aim, not least because the UK is ahead of any other nation in recognising the need for Government intervention and is proactively (albeit slowly) pushing the Bill through Parliament.
The Bill returned to the House of Commons at the start of 2023, nearly four years after it started life as the Online Harms White Paper. The Bill has undergone further changes to the scope, narrowing in some areas while expanding in others.
Arguably the most significant change is the removal of proposed duties in respect of content which, while not illegal, could be harmful to adults.
Protections for children who use social media have survived the latest round of amendments. There remains a duty on platform providers to mitigate and manage the risks of children encountering content that is harmful to them.
However, as with many of the obligations on social media companies proposed by the Bill, providers are only required to take “proportionate” measures to restrict child users from accessing harmful content. The types of content that will fall within this category are by no means settled, with the Bill leaving it to the Secretary of State to set this out in regulations at a later date. The Bill also makes provision for Ofcom to publish guidance on what content should be considered harmful to children.
What does the Online Safety Bill say about age verification?
While social media companies are encouraged to use age verification in order to identify child users, this is not required by the Bill. Instead, providers are required to put in place age assurance processes which ensure that children do not normally have access to the part of the service intended for adult users. Social media companies will have to demonstrate to Ofcom how children are being prevented from accessing harmful content, with the possibility of steep fines for non-compliance.
The majority of platforms have always had a minimum age for users built into their Terms of Service – but it’s no secret that these are regularly bypassed by underage users. Research by Ofcom suggests that a third of children aged 8-17 have signed up to social media platforms with a false date of birth; an easy enough way to circumvent the age checks put in place by various platforms.
Although the Bill takes promising steps to require that age limits be enforced, its success will depend on the suitability of the measures adopted by social media companies and the willingness of Ofcom to penalise non-compliance.
Simply put, the Bill puts the onus on providers to choose how they ensure children are kept from content intended for adults, rather than mandating the universal use of a single, effective system for age verification. In doing so, the Bill prioritises convenience for tech companies, rather than what is most likely to ensure that children cannot access harmful content intended for adults.
While tech companies have yet to show their hand as to how they will approach user age confirmation, modern technology offers a range of options, which provide varying levels of certainty. These include age verification using facial recognition, peer verification or even hand measuring technology.
If tech companies fail to put in place adequate processes, they may face steep fines down the line. However, it’s unclear how much of a deterrent this will be. The high financial penalties introduced by the GDPR in 2018 made headlines for being ground-breaking, but to date there is little evidence to suggest that they have led to larger organisations, such as social media platforms, becoming compliant.
Facebook’s parent company, Meta, was recently fined a total of £345 million after European regulators found it had unlawfully processed customers’ personal data to deliver targeted advertising since 2018 (i.e. after the GDPR entered into force). Meta is appealing this but, even if it is forced to pay, the fine would equate to around 0.2% of Meta’s annual profits. This is unlikely to have a significant impact on the organisation’s operations.
If Big Tech companies show the same disregard for the duties proposed by the Online Safety Bill, the immediate impact will be that children continue to be exposed to harmful content intended for an adult audience. Whether the internet will be a safer place come Safer Internet Day 2024 will largely depend on whether social media companies commit to upholding the standards the Online Safety Bill aims to set.
Take a look at the second article in this two-part series looking at protections offered by the Online Safety Bill, which focuses on social media terms of service.