Around The World (Of Big Tech Regulation) In 8 Minutes

Allan Dunlavy 14 May 2021

The ripple effect leading to Big Tech regulation

Over the last few years, we have seen various trends and issues emerge online, particularly on social media services and amongst Big Tech companies. By and large these have not been positive: a sharp rise in fake news, bullying, hate speech, user isolation, invasion of privacy and loss of control over personal data. Trump's use of social media in the run-up to the 2016 election and during his presidency demonstrated clearly the power and importance of these services to real and central societal concerns, like elections, truth and democracy, and not just to what was perhaps previously dismissed as unimportant tittle-tattle, such as the now notorious 'pictures of what someone is having for lunch'. When Trump was ultimately banned from most mainstream social media services during his final month in office, many questioned whether Big Tech should have the power to silence a sitting President in this way and whether they had a sound rationale for doing so. Trump has now handed over @POTUS to Joe Biden, and the whole world is talking about what this transformed understanding may mean for Big Tech regulation.

Changes afoot in the US

Starting in the United States, we regularly see one of the country's most prized constitutional provisions, the First Amendment, invoked to counter proposals to manage and control content on social media websites. In fact, First Amendment rights are not implicated, and are not being violated, when Big Tech companies exercise editorial control over their own websites. While the First Amendment provides that "Congress shall make no law… abridging the freedom of speech, or of the press…", private companies are free to manage and control their own property, digital or real, as they see fit. The First Amendment prevents the government from restricting free speech; it does not apply to Big Tech companies, or any company, seeking to exercise editorial control over what is said, written or shared on their privately owned websites. The result is that Big Tech companies are increasingly introducing rules about what can and cannot be said or shared on their websites. These rules are necessary because the internet has not only enabled but supercharged and anonymised cyber-bullying, hate speech, racial attacks, conspiracy theories, fake news, revenge porn and access to terrorist handbooks, and even helped fuel the Capitol riots, to name just a few.

Plainly these are troubling developments in our interconnected world; yet it was all the way back in 1996 that the rule governing Big Tech came into force. Section 230 of the Communications Decency Act established that an interactive computer service cannot be treated as the publisher or speaker of third-party content. Remarkably, 25 years later, this is still the governing legislation, and it largely protects Big Tech companies from liability when a user posts something illegal or improper, because the companies are deemed to be acting only as a service, akin to a telephone company. Consumers and Congress are growing weary of this protection and of the freedom it gives Big Tech to take or not take action as it alone sees fit, and they are trying to put the genie that Big Tech has unleashed back in the bottle. As the Biden Administration considers options for stricter regulation of the social media giants, we anticipate that Section 230 will survive this round of debate, but likely only with new limits and exclusions that reflect the current state of affairs. It's a shift in the right direction in the US, but it is unlikely to go far enough for the rest of the world.

Tighter controls in Europe

The UK and EU have historically been more aggressive on issues such as data privacy, and we expect they will push Big Tech regulatory requirements further. The EU's proposed Digital Services Act (DSA), for example, seeks to improve the editorial control exercised by social media platforms and to address concerns about illegal content. While the DSA preserves the current rule that social media companies are not automatically liable for third-party statements, it adds a significant requirement: once illegal content is flagged, social media companies must remove it. The DSA also requires social media companies to disclose how decisions to remove content are taken, and it applies to websites with more than 45m users in the EU. Even though the DSA is reactive, it is a big step towards ensuring that social media companies at least have to take action to remove offending content that is notified to them. It remains unclear how much proactive work they will do to independently uncover offending and illegal content and remove it without a complaint being made.

Also in the EU, the Digital Markets Act (DMA) targets a similarly sized set of companies and designates the likes of Google, Facebook and Amazon as 'gatekeepers'. The DMA aims to limit abuse of a dominant market position, allowing smaller, newer companies into the market. It will put an end to 'self-preferencing', whereby, for example, Google can display its own products more prominently in its search results. Gatekeepers would also be prohibited from re-using customers' personal data across products, meaning Facebook could not use data obtained from its subsidiary WhatsApp. Companies that fall foul of the DMA face fines of up to 10% of their worldwide turnover.

The UK's Online Harms White Paper takes a different approach, calling for an Online Safety Bill that would create a general duty of care intended to help prevent physical or psychological harm resulting from inflammatory content on social media sites, their closed groups and instant messaging services. It proposes that companies complete risk assessments and take action where necessary. Failure to do so could result in fines of up to £18m or 10% of annual turnover, whichever is higher. The appointed regulator, Ofcom, will also have the power to block UK access to non-compliant services, and the government would reserve the right to introduce criminal sanctions for senior managers who do not answer information requests from Ofcom.

These are all steps in the right direction in demanding transparency and accountability from Big Tech companies in the UK and EU. But if we are truly nearing the end of the 'just pay the fine to make it okay' culture historically seen in data protection regulation, companies must be compelled to act to support users, or face being shut down.

History-making moves in Australia

Further afield, Australia is also taking steps to provide a framework for regulating Big Tech. The Australian government has taken direct action to address what it perceives to be market access and revenue sharing concerns with Google and Facebook, introducing a law requiring Big Tech companies to pay for news that appears on their websites or in their search results. In response, Google threatened to pull out of Australia and Facebook temporarily banned Australian news on its platform. Ultimately the legislation was passed with some concessions to Big Tech, and both Google and Facebook have entered into revenue sharing agreements with large Australian media outlets. Newspapers and other content providers, in desperate need of new revenue streams, are now being compensated for generating the content that Google, Facebook and other websites rely on to drive their services and revenue. Ultimately, as Netflix found out in a different market, content is king, and so the companies that produce new, original and in-demand content (often an expensive enterprise) should at least be able to share in the revenue and profit generated by that content, for example through advertising.

Big picture trends

This whistle-stop tour of some key regulatory proposals in train around the world shows that tangible measures are being discussed, but it also reveals significant gaps, differences in approach and undefined areas of interest to watch as regulators seek to clip the wings of the Big Tech giants. The three things I see as necessary going forward are:

Consistency – Concerns about Big Tech's power ultimately come down to how the companies apply their rules and to whom. To date this has been ad hoc and has varied widely across the different services. There must be a single, consistent and transparent approach to what the rules are, how they are enforced and how decisions can be appealed. Only proportionate government regulation can achieve this, ideally (though this is unlikely) agreed across jurisdictions to create a global framework.

Consumer-centricity – Remember 'data is the new oil'? Consumers still hand it over in exchange for 'free' access to their favourite websites. But as consumers come to value their privacy more highly, as recognition of data's value grows, and as Apple starts to sit between users and apps to prevent services from harvesting data, consumers may demand much more than free access in exchange for this valuable resource, or may want the option to pay for a service and not hand over their data.

Commercial reciprocity – As always, money talks. It is not in Big Tech's best interest to limit its users or to remove large amounts of contentious or outrageous content, as this type of content, and engagement with it (whether positive or negative), is at the centre of their business. And while advertisers don't want their content appearing next to anything that would damage their brand's reputation, they all continue to advertise with these companies because they have the most users and the best demographic breakdowns and access.

We cannot pretend that these commercial interests don’t exist or are suddenly going to disappear. If regulation can solve societal issues while also allowing commercial activity to continue, we should have a winning strategy.