Whether we like it or not, the age of Artificial Intelligence (AI) is here and is set to shape all aspects of our lives: education, health, leisure, politics, employment, as well as security. Technology by its nature is constantly evolving, and AI is no different; however, the speed at which it is changing makes it difficult to keep up with developments, and advice is often out of date as soon as it is published.
Talks among governments and providers on how to regulate it and harness its benefits are still in their infancy. The potential to enhance various aspects of our lives cannot be denied, but there is also a real threat, as with all new technologies, that ultimately it will be exploited by criminals for financial gain or by competitors for advantage. These threats are heightened for high-profile individuals, whose privacy, reputation, and security could be compromised.
Therefore, having a solid understanding of new technologies – and how they can be corrupted and used by criminals – is the first step in proactively protecting yourself or your organisation.
AI-generated voice technology is one such development: it has vast potential but brings with it new threats. To date, it has primarily been used for ‘entertainment’ purposes: for example, AI-generated voices have already been used in films to replicate actors’ voices, and could soon be utilised for other means, such as audio books.
At present, a mere three-second audio sample – not hard to come by in our digitised world – is all that is needed to generate volumes of synthetic material.
However, beyond entertainment, the potential criminal uses of AI-generated voice technology are endless; as a result, the technology has been the subject of increasing interest and scrutiny worldwide.
Recent AI voice technology cases
The impact of AI generated voice tech on privacy and security was brought to the fore in a number of public instances, which also showcase how developed the technology already is.
In April, a German magazine, Die Aktuelle, published an artificial intelligence-generated ‘interview’ with the former Formula 1 driver Michael Schumacher. The article was produced using an AI programme called character.ai, which artificially generated Schumacher ‘quotes’ about his health and family. The editor has since been fired and Michael Schumacher’s family are taking legal action against the magazine.
At around the same time, a song created using artificial intelligence cloned the voices of Drake and The Weeknd. The song was quickly removed from all streaming services, and Universal Music Group said it violated copyright law. The music group criticised current legislation as being “nowhere near adequate to address deepfakes and the potential issues in terms of IP and other rights” after the creator, known as @ghostwriter, was able to use software trained on the musicians’ voices.
It again demonstrates the extent to which convincing material can be created with very little original content. In this instance it was a song, but it doesn’t take much to imagine the same technique being used to convey fake instructions or inflammatory quotes – both of which are security and reputation concerns.
AI generated voice technology can also be misused by criminals and scammers – and it’s not just high-profile individuals who are being targeted. A mother in the US claimed she was “100 per cent” convinced by an AI voice clone of her daughter that scammers used in a faked kidnapping attempt. The kidnapper allowed her to speak to her daughter briefly before making threats, with the mother commenting “I never doubted for one second it was her.”
The ‘kidnapper’ demanded US$1 million for the daughter’s release, before lowering the figure to $50,000. The mother only realised the call was a hoax after a friend contacted her husband and confirmed that her daughter was safe.
What are the risks of AI voice technology?
Clearly, as seen in recent news, the issues with AI voice tech are multiple, from violations of privacy to infringements of copyright law, as well as uses in criminal activity such as virtual kidnap and fraud. Geoffrey Hinton has recently quit his role at Google, citing fears that AI is growing too powerful too quickly. He flagged concerns about “bad actors” who would try to use AI for “bad things”.
As with the majority of technology, most AI tools are easy to access and use, and are often freely available on the internet. This means there are few barriers between criminals and AI. A threat from AI could arise in any number of situations.
For high-profile individuals, there is the risk that their voice could be cloned and recorded to say something controversial, which could be detrimental to their reputation. It’s not only those in the public eye who are at threat, though: the general public is also at risk.
AI voice tech could be used in any situation where an individual is required to provide information or instructions over the phone, such as banking, leading to scams and fraud. We have already seen instances of a synthesised voice being used to defraud a company of hundreds of thousands of dollars, highlighting the potentially sinister uses of this technology.
Individuals in the public eye could face reputational damage by false voice calls in which they say something harmful or ‘admit’ committing an illegal act. Virtual kidnap, extortion and sextortion are also real risks which could be amplified by AI voice technology, as is blackmail – and with the availability of voice recordings on streaming platforms, catch up TV and social media, three seconds of material is very easy to acquire.
How can we protect ourselves?
The crux of the problem is that AI voice tech is, by its nature, extremely difficult to detect. In his resignation letter Geoffrey Hinton stated that the public might soon be unable to tell the difference between fact and fiction. As technology improves, the discrepancies between real and fake content will likely become harder to identify.
Tech companies are beginning to develop methods for spotting deepfakes; however, as these methods develop, so does the AI technology, becoming ever more sophisticated. It is becoming a race between the increasing speed of AI evolution and the technology needed to keep us safe from it.
Clearly, keeping up with constantly evolving AI developments is not an easy task; however, there are steps that can be taken to prevent exploitation by AI voice technology. Businesses, for example, should employ a two-factor verification process for transferring funds to minimise the risk of synthetic voice calls leading to fraud. When challenged, an AI voice will only ever be able to answer with information it has been given, which makes it fallible when questioned on private or personal details.
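As a rough illustration of that two-step control, the sketch below shows how a payment request arriving by voice call might be gated on two independent checks: a callback to a number held on file, and a pre-agreed private challenge answer. All names, values, and the challenge mechanism here are hypothetical, not a prescribed implementation.

```python
# Hypothetical sketch: out-of-band verification for payment requests
# received by voice call. A cloned voice can reproduce speech, but it
# cannot pick up a callback on a number the scammer does not control,
# and it can only answer with information it has been trained on -- so
# a private, pre-agreed challenge answer is likely to defeat it.

from dataclasses import dataclass


@dataclass
class PaymentRequest:
    requester: str
    amount: float
    channel: str  # how the request arrived, e.g. "voice_call"


def verify_request(request: PaymentRequest,
                   callback_confirmed: bool,
                   challenge_answer: str,
                   expected_answer: str) -> bool:
    """Approve a voice-call request only if both independent checks pass."""
    if request.channel == "voice_call":
        # Check 1: call back on the number held on file, never the
        # inbound number, and require the real person to confirm.
        if not callback_confirmed:
            return False
        # Check 2: a private challenge only the genuine caller can answer.
        if challenge_answer != expected_answer:
            return False
    return True


request = PaymentRequest(requester="CFO", amount=50_000.0, channel="voice_call")

# Both checks pass: the request may proceed.
print(verify_request(request, True, "blue kayak", "blue kayak"))   # True
# Callback confirmed but the challenge answer is wrong: refuse.
print(verify_request(request, True, "guessed", "blue kayak"))      # False
```

The point of the sketch is that neither check alone is enough: the callback defeats spoofed caller IDs, while the challenge defeats a convincing cloned voice.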
Of course, AI has the potential to present an enormous upside, but it also may present one of the greatest threats in our lifetime as we grapple with how it may be deployed. This technology is the subject of a new ‘space race’ between countries and companies and there is no putting the genie back in the bottle. As the technology develops, increasing your awareness around the subject and trusting your suspicions will be crucial.