Deepfakes: why we should no longer believe our eyes
12 November 2019
Welcome to the alarming world of the ‘deepfake’. A world where simulations are so real that anyone can be shown doing or saying anything.
To date these have been used for ‘entertainment’ value – but a recent example of a synthesised voice being used to defraud a company of hundreds of thousands of dollars demonstrates the potential sinister uses for this technology. Deepfakes pose a serious threat not only to individuals, but also businesses and their employees.
But first, what exactly is a deepfake? Simply put, it’s the use of artificial intelligence (AI) based technology to produce or alter both video and audio content so that it presents something that didn’t, in fact, occur. Scary, right? Especially if you can manipulate one of the most powerful men in the free world.
If you are familiar with deepfakes, it’s likely you have seen video examples – such as the widely viewed video of President Obama alluded to above. In that illusion, President Obama’s jawline was replaced with one that followed an actor’s mouth movements, and the footage was then refined through more than 50 hours of automatic processing using FakeApp. This built upon the work of the University of Washington, where computer scientists used neural network AI to model the shape of President Obama’s mouth and make it lip sync to a new audio input.
How? Here we come to the ‘science bit’. Deepfakes of this kind rely on a machine learning technique known as a generative adversarial network (GAN). One system, called ‘the generator’, creates the fake footage, while a second system, named ‘the discriminator’, examines that footage and tries to judge whether it is fake. On each iteration, the discriminator’s verdict gives the generator clues about what not to do when creating the next clip – and the process is then repeated. Because thousands of iterations can be run very quickly against a training dataset of genuine footage, the generator produces ever more convincing fakes while the discriminator gets ever better at identifying them – each system constantly pushing the other to improve.
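The generator-versus-discriminator loop described above can be sketched in miniature. The toy example below is not how deepfake video tools are built – real systems use deep neural networks on images – but it is the same adversarial idea on the simplest possible data: the generator (a linear map on random noise) learns to imitate a target Gaussian distribution, while a logistic-regression discriminator learns to tell real samples from fakes. All names, values and the choice of models are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples the generator must learn to imitate (toy stand-in
# for a dataset of genuine footage).
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator: g(z) = a*z + b applied to random noise z.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Train the discriminator: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the loss -log D(real) - log(1 - D(fake)) w.r.t. w, c
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Train the generator: try to fool the discriminator (D(fake) -> 1) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient of -log D(fake) flows back through D into the generator's a, b
    dl_dx = -(1 - d_fake) * w
    a -= lr * np.mean(dl_dx * z)
    b -= lr * np.mean(dl_dx)

# After training, the generator's output distribution should have drifted
# towards the real data's statistics.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print("generated mean/std:", samples.mean(), samples.std())
```

Each pass through the loop is one round of the cat-and-mouse game the article describes: the discriminator's feedback is exactly the "clue" that tells the generator how to do better next time.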
While this might sound like the preserve of computer scientists, there are now many seemingly harmless apps in existence that do this work. ZAO, for example, is a free deepfake face-swapping app enabling you to ‘appear’ in scenes from hundreds of movies and TV shows. Innocent enough, sure, but the potential to use this technology maliciously is enormous. At the macro level, deepfakes represent the next level of fake news. Counterfeit video clips could be used to manipulate elections, to misrepresent a person or company, to slander and to shame.
In June, Facebook’s chief executive, Mark Zuckerberg, said the social network was struggling to find ways to deal with deepfake videos, saying they may constitute “a completely different category” of misinformation than anything faced before.
It is also not a stretch to imagine this video technology being used to extort individuals or businesses. Criminals could manipulate video to depict any scenario, whether that be ‘you’ committing an illegal or embarrassing act, or a family member being held against their will. In the latter instance, how much time would you spend questioning the authenticity of that video?
Video, however, is not the only challenge thrown up by deepfakery – voice faking for the purposes of fraud is on the rise too. The Wall Street Journal recently reported the first ever case of AI-based voice fraud, also known as vishing (short for “voice phishing”), costing a company $243,000.
In this instance the attacker synthesised the voice of the parent company’s CEO to approve the transfer of funds. The vocal content used by the criminals was all sourced from genuine speeches by the CEO that were publicly available online.
The crux of the problem is that deepfakes are, by their nature, extremely difficult to detect. While more amateur videos can be identified by something being a bit “off” – for example a lack of blinking, shadows in the wrong place or slightly out of sync audio – digital forensic techniques are the only reliable way of ascertaining if a video is real. And with machine learning improving every day, there’s no reason ‘the generator’ couldn’t learn how to circumvent digital forensics.
In 2018, MIT researchers – analysing Twitter posts by three million people over the preceding 11 years – confirmed that fake information was 70% more likely to be retweeted than facts, and that truthful tweets took six times as long as fake ones to spread across Twitter. In this media environment, how can we protect ourselves from the threat of deepfakes? Scenarios will vary, but speed of response is critical. A multi-disciplinary team, including legal and forensic experts, should be engaged as soon as possible to investigate and correct inaccurate content. A practical preventative measure, particularly against “vishing”, is for businesses and individuals to put a two-factor verification process in place for transferring funds. However, as deepfake technology develops, the greatest defence is to be aware and remain vigilant at all times.
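The two-factor verification idea above can be made concrete as a simple gating rule: a payment instruction is executed only once it has been confirmed over more than one independent channel, so a single convincing (but faked) voice call is never enough on its own. The sketch below is a minimal illustration of that rule – the class names, channel names and figures are all hypothetical, not a real payments system.

```python
from dataclasses import dataclass, field

# Assumption for illustration: a transfer must be confirmed over BOTH of
# these independent channels before any money moves.
REQUIRED_CHANNELS = {"email", "phone_callback"}

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    approved_channels: set = field(default_factory=set)

def confirm(req: TransferRequest, channel: str) -> None:
    """Record a confirmation received over one channel."""
    if channel in REQUIRED_CHANNELS:
        req.approved_channels.add(channel)

def can_execute(req: TransferRequest) -> bool:
    """Funds move only when every required channel has independently confirmed."""
    return REQUIRED_CHANNELS <= req.approved_channels

# A lone instruction over one channel - e.g. a (possibly synthesised) voice
# call - is not sufficient on its own.
req = TransferRequest(243000.0, "ACME Supplier Ltd")
confirm(req, "phone_callback")
print(can_execute(req))   # → False
confirm(req, "email")
print(can_execute(req))   # → True
```

The design point is that the second channel must be genuinely independent of the first: a call-back to a number already on file, or a confirmation to a known email address, rather than anything supplied within the suspicious request itself.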
To learn more about this topic, as well as incidents of stalking and fixated behaviour, please read our latest Schillings Critical Risk Brief available to download here: https://www.schillingspartners.com/info/critical-risk-brief-november