Partner Allan Dunlavy explains why Big Tech needs to look at the bigger picture when it comes to addressing issues of harmful content online.
Last year, The New York Times published an article exposing the online slander industry. Nobody in our field of privacy and reputation was surprised. It’s an issue we’ve been grappling with for years – a murky world of pay-to-defame websites offering a straightforward extortion service. Users pay to post defamatory comments and pictures on websites, and the target then has to suffer the damage this causes or pay to have these removed. These sites are often dressed up as review websites and frequently rank well on Google: Big Tech companies have, as yet, not made sufficient headway in tackling this.
Despite this being widely known, senior Google executives were apparently taken by surprise: two of them told the newspaper that they “had been unaware of the extent of this problem until the Times articles highlighted it…”.
At best this appears to me to be wilful blindness to a serious, long-standing and well-known issue: Google’s response to this problem is a clear illustration of why technology giants are failing to limit the reach of fake news and other harmful material.
It’s important to recognise that Big Tech companies have created huge opportunities and advancements. However, at this point, the cost of this to society is simply too high. Read the news on any given day, and no doubt you’ll find examples of the plethora of problems that have originated from, or been made worse by, the services that Big Tech has developed. These include the growth of fake news, harassment, extremism, racist attacks, cyber bullying, invasion of privacy, revenge porn, slander for profit, identity theft and doxxing.
Why are Big Tech giants continuing to refuse to comprehensively combat fake news and other harmful content available on their services? The reasons are, in my view, simple. Firstly, they make huge profits and require exactly this type of negative, contentious content to drive the growth of their services (think of social media like reality TV – nobody tunes in to watch everyone get along and work together in harmony).
Secondly, they are (rightly) concerned that if they start voluntarily editing content, regulators will realise they have significant editorial capability and control, and they will lose the protection of Section 230 of the US Communications Decency Act.
The result is a slow, disjointed process of incremental change. Attempts by Big Tech to limit the reach of damaging content have almost universally been too little and too late. After an issue has arisen and a problem is exposed, it seems Big Tech companies take a few, limited steps to resolve that single narrow issue and move on.
The (Apparent) Solution
Google’s recent announcement of changes to address the online slander business is a case in point. Rather than taking this opportunity to surface other issues they may be unaware of, Google decided to take only narrow steps targeting this single issue. These primarily include changes to Google’s algorithm and the creation of a ‘known victims’ classification. The algorithm changes are intended to reduce the visibility of these attack websites, but it remains to be seen how readily these changes can be circumvented by better camouflaging these websites or creating new ones.
The ‘known victims’ category, under which victims of these attacks can have their name added to a list by Google, which will then automatically remove this content from its search results, looks to be much more powerful. This gives victims another avenue of recourse and will hopefully remove the websites’ incentive.
Yet there are still significant concerns with this proposed solution. For example, it is unclear how you get classified as a ‘known victim’ or if you can be removed from the list by Google or another party. Again, I think this is a narrow, non-technical, labour-intensive solution to a single problem that does nothing to address wider issues with fake news, harassment, cyber bullying, invasion of privacy and all of the other challenges we are facing.
The reality is that Google remains more concerned with websites gaming its search results than with supporting victims of attacks and actually looking for meaningful solutions to these issues. While any action to combat these attacks is welcome, this approach of doing as little as possible, as late as possible, is not enough. Similarly, unless pressure is also put on social media companies to address this issue, these actions by Google are unlikely to prompt action from others. I believe that more needs to be done, urgently, to fight online harm.
Government regulation can create a universal standard that all companies have to meet and, amongst other things, can require Big Tech companies to:
- Validate users and hold real user identification;
- Give users the opportunity to stand by their content, so that either the content is removed or the company provides the user’s information to complainants;
- Take responsibility for the content that they publish on their services;
- Meet a duty of care to their users;
- Monitor and remove content pro-actively and reactively following complaints; and
- Make it easier to flag damaging content and have it removed quickly.
Any regulation will need to include appropriate carve-outs to protect legitimate free speech. This would create shared responsibility between the author of the content and the Big Tech companies as its publishers, and would give victims real recourse.
As part of their response to the New York Times, Google stated: “we can’t police the web, but we can be responsible citizens.” This, in my eyes, completely misunderstands the issue. This has nothing to do with Google policing “the web”; it is about Google policing Google – the product and service that they designed and built and from which they make a king’s ransom every year. The fact that Google itself cannot tell the difference between “the web” and Google’s own product tells you all you need to know about their monopoly position.
Google cannot, in my eyes, both have this level of control, power and revenue and also have absolutely no liability or responsibility for ensuring that the content that they publish on their service is true, fair, accurate, balanced and up to date – requirements that are very familiar to even the smallest newspaper publisher.