Smart Speakers are too smart to be used for conference calls
20 April 2020
Do you – like at least 1 in 5 households in the UK – have a Smart Speaker in your home? For increasing numbers of us, Alexa is part of the family – and “Hey Siri” is a legitimate way of starting a conversation. Now we’re working from home, you might be tempted to replace the office star phone with your home smart speaker. I’m here to strongly urge you not to. And here’s why.
Using a smart speaker for your conference calls – or even holding a work call in the same room as a smart speaker – could, I believe, put your security at risk. If the speaker believes a keyword has been spoken, it may well record your call. It’s an extremely sobering thought if your work involves any kind of confidential information.
This is a widespread issue. Smart speakers have grown hugely popular in recent years: in 2019, an estimated 119 to 130 million were sold worldwide, with the largest share – around 41 million units – shipped in the United States. That's up from 86.2 million units in 2018 and 32 million in 2017 – roughly a fourfold increase in just two years.
There are plenty of worrying stories, in the press and online, about smart speakers. While the devices can often make life easier for us, they can also, in the same way computers and smartphones can, be hacked.
Alexa, what’s the issue here?
Smart speakers use a whole range of personal information – which makes them a natural target for cyber crime. In many cases, our speakers have access to personal accounts like Amazon, eBay, email and Spotify. Now we’re working from home, they can also open a door to critical business information.
Smart speakers are activated with a "wake word". For the Amazon Echo it's "Alexa", for the Google Home it's "OK Google" and for Apple's devices it is, of course, "Hey Siri". If the speaker thinks it has detected its wake word, it springs to life. "Seriously", for example, sounds enough like "Siri" that it often causes Apple's Siri-enabled devices to start listening. Once triggered, the smart speaker starts analysing whatever is said next. And don't forget: in order to catch the wake word, smart speakers must keep their microphone active at all times – they are quite literally "always listening".
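The distinction between "always listening" and "always recording" can be sketched in a few lines of code. This is a toy illustration, not any vendor's real pipeline: real devices run a small on-device keyword-spotting model over raw audio, whereas simple text matching stands in for it here, and all the names are made up.

```python
from collections import deque

WAKE_WORD = "alexa"
BUFFER_CHUNKS = 2  # short rolling buffer held only in device memory

def sounds_like_wake_word(chunk: str) -> bool:
    # Stand-in for an on-device keyword spotter. Note how crude matching
    # like this is also why near-misses ("seriously" vs "Siri") can
    # trigger a device by accident.
    return WAKE_WORD in chunk.lower()

def listen(audio_chunks):
    """Yield only the chunks that would be streamed to the cloud."""
    rolling = deque(maxlen=BUFFER_CHUNKS)
    triggered = False
    for chunk in audio_chunks:
        rolling.append(chunk)  # pre-trigger audio never leaves the device
        if not triggered and sounds_like_wake_word(chunk):
            triggered = True  # start streaming from here on
        if triggered:
            yield chunk

# Everything before the wake word stays in the rolling buffer and is
# discarded; everything after it is sent for cloud processing.
stream = ["private chat", "more chat", "Alexa play music", "stop"]
sent = list(listen(stream))  # → ["Alexa play music", "stop"]
```

The point of the sketch: the microphone is always on, but in normal operation only post-trigger audio is transmitted – which is also why a false activation means a snippet of private conversation can leave the device.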
This has rightly raised privacy concerns about firms listening to and storing our conversations. The firms behind these devices have to send voice commands to their cloud servers: the machine learning algorithms that analyse and process them need computing power the speakers themselves simply don't have. The good news is that the device sends nothing to the cloud until the wake word triggers it – firms would be overwhelmed with useless data if their speakers recorded everything, all the time.
That said, a smart speaker is fundamentally a computer with a microphone and an internet connection, and it certainly has the capability to record your conversations and store them in the cloud. That can happen if it's hacked – or sometimes even if it simply malfunctions.
A recent study from researchers at Northeastern University in the USA and Imperial College London found that smart speakers from Google, Apple and Amazon can accidentally activate as many as 19 times a day and record snippets of more than 40 seconds without users knowing. It's vital that, if you choose to buy and use a smart speaker, you understand you are agreeing to the manufacturer recording and processing your voice.
Doesn’t my phone do that anyway?
The same threats apply to your phone, which is also a computer with a microphone (and a camera, plus potentially many other sensors) connected to the internet. The difference – and what makes your phone an even more valuable target – is that you carry it with you everywhere instead of leaving it in your kitchen or living room. A follow-up article will address mobile phone security.
If someone gets a hold of your phone, they could go through your recorded conversations by accessing the account associated with your smart speaker. Or if law enforcement serves a warrant, your country’s law and the manufacturer’s approach to privacy will determine whether they’ll get access to your voice recordings stored in the cloud.
However, this is only like someone getting access to your email account and reading through your emails. As with all online accounts, strong passwords and multi- or two-factor authentication are effective measures against unwanted access to your recordings. Unlike email and messaging services, though, which offer a range of privacy settings and even end-to-end encryption, a smart speaker gives you far less control. You can go through the account linked to your speaker and delete your recording history, but doing so may affect the performance of the device.
Ok Google, should I avoid smart speakers altogether?
Advances in machine learning and natural language processing (NLP) mean smart speakers give us a hands-free, easy-to-use way to interact with computers and carry out tasks that previously needed a screen, mouse and keyboard. There is always a risk that these speakers have security bugs, but their software is updated regularly – so long as you keep applying patches when they're released, your speaker should stay relatively secure.
The good news is that speakers from the three leading brands are all considered relatively secure. Each supports WPA2 encryption for a secure WiFi connection, which helps prevent data and instructions being intercepted by hackers.
They also support two-factor authentication. Whenever you try to make a purchase using your smart speaker, you will be asked to speak a four- or six-digit code. This code is sent by the manufacturer to your smartphone – a trusted device that only you should have access to. If you – or an unauthorised user – cannot speak the correct code, the purchase is aborted.
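The purchase flow above boils down to a short one-time code with an expiry. Here is a minimal sketch of that logic – the function names and the five-minute window are my assumptions for illustration, not any manufacturer's actual API:

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed expiry window, not a documented value

def issue_code(n_digits: int = 4) -> tuple[str, float]:
    """Generate the one-time code texted to the trusted phone."""
    code = "".join(secrets.choice("0123456789") for _ in range(n_digits))
    return code, time.time() + CODE_TTL_SECONDS

def confirm_purchase(spoken: str, code: str, expires_at: float) -> bool:
    """Approve only if the spoken digits match the code in time."""
    if time.time() > expires_at:
        return False  # code expired, purchase aborted
    # Constant-time comparison avoids leaking how many digits matched.
    return secrets.compare_digest(spoken, code)

code, expires_at = issue_code()
# Whoever is in the room must read the code back from your phone:
ok = confirm_purchase(code, code, expires_at)        # correct code
blocked = confirm_purchase("12345", code, expires_at)  # wrong code
```

The security rests on the phone, not the speaker: the code never exists on the speaker's side until someone in the room speaks it aloud, so possession of your trusted phone is what authorises the purchase.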
Hey Siri, how can I stay safe?
As with any technology, if you own a smart speaker, keep in mind the information stored and used by the device, and consider what some nefarious person could do if they got hold of this personal – or now potentially business – information. If you're having any kind of confidential conversation – particularly about work – do so out of the speaker's 'earshot'.
If you do choose to get a smart speaker, take the time to set up your security settings, be careful of less secure default settings and allow access only to people and companies you trust.
Our homes are our “castles” and increasingly represent our last private space. So be careful before you let a Smart Speaker into your home and now (for many) office. Do your research, turn on security settings, change defaults and keep your software up to date. Be careful where you place your device and be aware enough to keep work and home life separate.