The Hyper-Growth Of Synthetic Media: All Is Not As It Seems

Lauren Barr 21 Apr 2022

What is synthetic media? Lauren Barr, Head of Research in our Intelligence and Investigations team, explains why synthetic media is so difficult to detect, and what implications it has for privacy and reputation.

It’s late on a Friday afternoon and your phone rings with an unknown number. It’s your boss, asking if you can process an extra payment to a supplier that recently provided services for the company. Could you get it sent before the weekend, they ask, hurriedly listing the payee details and reference? This might seem a familiar scenario to many, but how sure are you that the call was legitimate?

The caller certainly sounded like they normally do. The amount owed seemed in the right range. The supplier is known to the company already. The money is sent. Unfortunately, the call was a scam, facilitated by a new age of AI: synthetic media.

All is not as it seems

AI image generation is not new: thispersondoesnotexist.com has captivated audiences since its launch in February 2019, when, at the click of a button, a human face could suddenly be randomly generated. These generated images have been used in the mass production of fake social media profiles, notably during the Hong Kong protests in 2019, to thwart information sharing amongst the protesters.

This technology has since developed further through a process of deep learning known as Generative Adversarial Networks (GANs), in which two algorithms compete: a generator produces synthetic material while a discriminator tries to distinguish it from real examples, with each round of competition sharpening both.
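As an illustrative sketch of that adversarial loop only (this is not code from any tool mentioned in this article; the network sizes, the placeholder data and the choice of the PyTorch library are assumptions made for demonstration):

```python
import torch
import torch.nn as nn

# Toy generator/discriminator pair showing the adversarial loop described
# above. Dimensions and data are placeholders, not a real deepfake model.
latent_dim, data_dim, batch = 16, 64, 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.randn(batch, data_dim)  # stand-in for real samples

# Discriminator step: learn to score real samples high and fakes low.
fake_batch = generator(torch.randn(batch, latent_dim)).detach()
d_loss = (loss_fn(discriminator(real_batch), torch.ones(batch, 1)) +
          loss_fn(discriminator(fake_batch), torch.zeros(batch, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: learn to produce samples the discriminator accepts as real.
fake_batch = generator(torch.randn(batch, latent_dim))
g_loss = loss_fn(discriminator(fake_batch), torch.ones(batch, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

Production deepfake tools wrap vastly larger versions of this same loop behind the point-and-click interfaces described below.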

Crucially for the unsuspecting finance department or private client, this technology is now available to the lay person. Tools once wrapped in code and requiring specialist developer knowledge are now cleanly presented with intuitive user interfaces, accessible on the surface web alongside video tutorials. It takes only a small amount of target data: short video footage or a handful of photos to generate deepfake images or, as in the example above, mere minutes of audio recordings to produce synthetic voice re-enactment. These resources are often easily harvested from social media or from interviews given to news sites. More recently, Ukraine’s President Volodymyr Zelensky was impersonated in a deepfake video calling on people to surrender and put down their arms. The broadcast was quickly and widely debunked, but fake news, including the use of synthetic media, remains a large part of the conflict. Even senior HM Government ministers have been targeted with deepfake video calls.

Is it all bad news?

There are, however, many credible uses for synthetic media. One notable example is the film Gladiator (2000), where a digital body double and a computer-generated imagery mask of Oliver Reed were used after he died during filming. This was done painstakingly, frame by frame, at a cost of $3.2 million for approximately two minutes of footage, a stark contrast to the rapid, low-cost software used today. Radio and podcast producers also use voice re-enactment audio: should a presenter muddle their words, short replacement pieces can be generated to fill the gap, saving the studio the time and cost of re-recording. The use of synthetic media for these purposes is rarely highlighted to audiences, nor are the related ethical considerations of delivering AI-generated content to unknowing consumers.

Implications on privacy and reputation

The example given at the start of this article was all too real for one UK CEO in March 2019, when a call from someone they believed to be their chief executive resulted in a transfer of €220,000 to a Hungarian bank account. No suspects have yet been identified in that case; meanwhile, police in the United Arab Emirates are investigating a fraud in which voice re-enactment audio was allegedly used to steal $35 million.

There are currently no laws in the UK that deal directly with deepfakes or synthetic media. Some existing tort laws can be applied, such as defamation or infliction of emotional distress; privacy and harassment laws have also been used in the past. In 2018, a man was found guilty of harassment at Westminster Magistrates’ Court after posting fake explicit images of a colleague in an attempt to discredit her. He was sentenced to sixteen weeks in jail and ordered to pay £5,000 in compensation. Despite such successes in court, the damage inflicted through ‘trial by (social) media’ is often far greater and has a protracted impact on individuals and organisations. As Schillings’ Investigative Partner Juliet Young discussed in her recent article, social media is a primary conduit for fake news and rumours, with many platforms and parties driving agendas and manipulating content exposure. More than ever, it is therefore critical to take swift action should you or your organisation become the subject of unwanted media attention.

Digital forensics

So how easy is it to detect synthetic media? Increasingly difficult, in fact, as AI advances, although detection software exists and is becoming progressively more accurate. To guard against scams and frauds, there are methods that organisations of any size, as well as family offices, can employ. Robust policies and procedures mean that all employees are clear about how sensitive requests, such as payment instructions, are passed in-house: any deviation from that process should raise a flag. Verification systems can also be set up, such as asking known personal questions, which could be as simple as ‘how do you take your coffee?’ If there is any doubt about the origin of a call, arrange to call the person back on their known work mobile or, better still, over a video call; a sketch of such a callback routine follows.
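For illustration only, here is a minimal sketch of that callback rule expressed as a procedure. The directory, names and prompts are hypothetical; this is not a real tool or a policy template:

```python
# Illustrative sketch of an out-of-band verification routine for payment
# requests. All identifiers and numbers below are made up for this example.

KNOWN_NUMBERS = {"finance.director": "+44 7700 900123"}  # held on file in-house

def confirm_out_of_band(known_number: str) -> bool:
    """Operator rings back on the number already on file (or starts a video
    call) and asks a shared personal question, e.g. 'how do you take your
    coffee?' before approving anything."""
    answer = input(f"Call {known_number} back. Request confirmed? (y/n) ")
    return answer.strip().lower() == "y"

def verify_payment_request(requester_id: str) -> bool:
    known_number = KNOWN_NUMBERS.get(requester_id)
    if known_number is None:
        return False  # requester not in the directory: escalate, never pay
    # Never trust caller ID or a familiar-sounding voice alone; always ring
    # back on the number held in-house, not one supplied by the caller.
    return confirm_out_of_band(known_number)

if __name__ == "__main__":
    approved = verify_payment_request("finance.director")
    print("Payment approved" if approved else "Payment blocked, escalate")
```

The design point is that the confirmation channel is chosen by the recipient from records held in-house, so a cloned voice on an inbound call can never satisfy the check on its own.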

The rapid development of AI increasingly assaults the senses and demands constant questioning of what we see and hear. Any source of information requires thorough assessment and corroboration before it can credibly be used, and synthetic media adds yet another level of complexity. The damage it can inflict is deep and wide-ranging, making it more important than ever to establish the true narrative.