Deepfake: The Rising Risk And How To Protect Yourself

Rachel Atkins 14 Apr 2021

If you thought the phenomenon of fake news couldn’t get any worse, the arrival of the so-called ‘deepfake’ looks set to keep us all on our toes. So, what is a deepfake and how can we protect ourselves?

Manipulation at its finest

A deepfake is a video that has been cleverly manipulated to replace the person on screen with someone else – in most cases a celebrity or politician. Deepfakes are usually incredibly realistic and can be hard to spot as fake. Tom Cruise was a recent victim of what the media billed as ‘the most realistic yet’, while US congresswoman Alexandria Ocasio-Cortez has been subjected to AI manipulation that appeared to ‘show’ her in a bikini.

The most troubling element of the deepfake is that it is fuelling very high rates of non-consensual deepfake porn – primarily targeting women. In fact, in 2020 the deepfake-monitoring firm Sensity discovered that the messaging app Telegram had been used as a platform to release thousands of non-consensual nude images of women – and, in some cases, underage girls.

Real or fake?

Detecting an amateurish deepfake is relatively easy – the edges may be blurry, the audio inconsistent, or the video will simply look unnatural. But a more credible deepfake – one created with neural networks – will be much harder to spot and may require machine analysis or digital forensics to identify. And therein lies the problem: if it looks entirely credible, how will the victim be able to prove it isn’t real?

A rising risk

The last couple of years have seen a rise in apps such as FakeApp that enable anyone with reasonable tech skills to create a deepfake. Faceswap is another popular example which, as the name suggests, lets users swap one person’s face onto another.

App creators market these as a fun and light-hearted way to spend your time, and for most people that’s exactly what they are. Still, for those with malicious intent, there is the potential to cause real harm.

As the tools to create such videos become easier to access, and the output becomes more realistic, the level of risk rises accordingly. This is now a genuine concern for any woman with images in the public domain, not just celebrities and influencers – though they may be hit first.

Detection and prevention

This is where it gets tricky: although some tech companies are looking to tackle the issue by developing solutions that recognise manipulated content, reliably detecting deepfakes remains very difficult.

What’s more, the law around deepfakes is difficult to pin down. The main issue concerns image ownership and rights: here in the UK, if images are in the public domain, it becomes much harder to fight a case legally.

The nature of the image also matters. The Tom Cruise example isn’t overly defamatory, but the AOC example is – and it is for the victim to judge whether the deepfake affects their livelihood and whether they want to take legal action.

In the UK, the Domestic Abuse Act has recently introduced a sentence of up to two years in jail for those who threaten “to disclose intimate images with the intention to cause distress”. This is a step in the right direction, but more undoubtedly needs to follow.

In the meantime, there are some simple steps you can take to lower your risk level:

  • Stop posting photos of yourself or your family on unprotected social media accounts
  • Make sure you maintain a high level of security on all your electronic devices 
  • Talk to your family and friends about the risks of posting images on social media 
  • Always use caution when uploading photos of yourself to public sites

And if you find yourself the victim of a deepfake, remember that speaking out is incredibly important:

  • Report images to the platforms they’re being hosted on
  • Collect evidence of the images on those platforms
  • Engage with the police
  • Speak to legal professionals to find out whether you have a case to make
  • Don’t be afraid to reach out to mental health charities or the medical profession if you are struggling to cope

Halting the tide

Non-consensual deepfake porn remains a distressing problem: according to AI firm Sensity, 96% of deepfakes are pornographic and used to target women, with the number of deepfake porn clips doubling every six months. At that rate, by summer 2021 there could be as many as 180,000 porn videos “starring” innocent people online – double the number already in circulation at the start of the year.

Halting the tide of deepfakes will require a combination of investment and ongoing effort from tech companies if we are to see any real progress.

As well as making the detection of deepfakes a priority, tech companies must start removing them from their platforms and holding perpetrators to account for their actions (with a lifetime ban, for instance).

From a legal perspective, change is needed to stop technology companies from washing their hands of the problems that arise when platform users are free to share any content they wish.