Web Giants Under Pressure
19 April 2017
Traditionally, web giants such as Facebook and Google have been subject to light-touch regulation that does not reflect their vast resources and influence.
In Europe, online platforms benefit from being treated as hosts or intermediaries. Despite making public statements committing to combating extreme content such as fake news and hate speech, online platforms have fought tooth and nail to defend this status and to avoid accountability for the content from which they profit.
For example, in March Facebook successfully resisted a claim brought in Germany by a Syrian refugee who had been targeted by racist trolls and who claimed that Facebook had failed to take sufficient action against the defamatory posts he flagged. Facebook allegedly failed to counter this harassment despite the fact that its terms of service prohibit users from posting hate speech and illegal content, and that it recently signed up to EU rules requiring the deletion of hate speech within 24 hours of it being reported. However, the Court held that Facebook was neither a “perpetrator nor a participant”, and as such was not liable.
In the US, online platforms often benefit from protection under the First Amendment as the information published on their websites can be considered to be protected speech. Online companies also have additional protection from Section 230 of the Communications Decency Act 1996, which gives them broad immunity from liability for content posted by users.
However, the tide may be turning in light of a series of high-profile incidents highlighting the failure of web giants to regulate themselves effectively.
Last year, Facebook took the controversial decision to censor an iconic, Pulitzer prize-winning photo depicting the naked 9-year-old Kim Phúc running away from a napalm attack during the Vietnam War. Initially Facebook defended its decision on the basis that “any photographs of people displaying fully nude genitalia or buttocks, or fully nude female breast, will be removed”, and implied that the photograph was akin to child pornography. The decision drew widespread criticism, and Facebook eventually relented and reinstated the photo.
Six months later, however, Facebook lurched in the other direction when it chose to delete only 18 of 100 inappropriate images of children reported to it by the BBC. Initially Facebook claimed that the 82 remaining images did not breach its community standards, but later blamed a faulty moderation tool and removed all of the images reported by the BBC.
A separate investigation by The Times also revealed that Facebook was failing to remove ISIS propaganda videos. Again, Facebook apologised, but its failure to remove the content when it was reported calls into question the effectiveness of its reporting tools.
Then over the Easter weekend, Facebook’s monitoring systems were again called into question when a video depicting the fatal shooting of a man in Cleveland remained online for two hours, during which it was widely republished and viewed millions of times, before being removed.
Likewise, Google has come under fire following another investigation by The Times, which found that it had not only failed to remove extremist content but had placed advertisements alongside it.
Under the advertising model of YouTube, a subsidiary of Google, those who post content earn about $7.60 for every 1,000 times that an advert is viewed on their ‘channel’, with advertisers paying for placement on a per-view basis. This means that advertisers, including the UK government, were inadvertently funding the extremists who posted such content. Several major companies have since pulled their YouTube advertising in light of the revelations.
These developments point to the dual pressures which could force online platforms to take more responsibility for the content from which they profit.
First, the UK government appears more prepared to legislate as high-profile examples of the failure of web giants to self-regulate come to light. Google has been summoned for discussions at the Cabinet Office to explain why taxpayer funding was inadvertently used to promote extremism due to Google’s lax advertisement placement policy. And while this issue has not attracted a similar response from the US government, Google faces considerable pressures from key advertisers. Should these advertisers or the US government seek redress for the publication of this content, Google may not be able to rely on its First Amendment defence, as this could arguably be deemed to be commercial speech which does not receive the same level of protection.
A UK-based inquiry into fake news has also announced that it will investigate Facebook’s complaints-handling procedure in light of its failure to remove sexualised images of children reported by the BBC, with the chair of the Culture, Media and Sport Committee speculating that creating an offence for failure to act on a referral would “create a massive incentive” for social media organisations to act on complaints. Such an offence would certainly give Facebook a strong incentive to improve its reporting tools.
Second, online platforms appear to be willing to reform their policies in the face of public scrutiny, particularly when this affects their advertising revenues. A number of the world’s largest companies have withdrawn advertisements from Google in the wake of recent disclosures, and if the boycott continues, it is estimated that it will cost Google more than $750m a year. In response, Google has already published a statement promising an overhaul of its advertising policies in order to increase brand safety levels and controls for advertisers.
Ultimately, the dual threats of declining advertising revenues and government intervention could push web giants to take greater responsibility for the content from which they profit. For companies and individuals subject to negative online campaigns, keeping abreast of these developments and the changing regulatory landscape is essential.