Researchers at Zhejiang University have developed a way to activate and use speech recognition systems using frequencies humans can’t hear, TechCrunch reported.
The attack works by modulating voice commands onto ultrasonic carriers above 20 kHz and exploiting the nonlinearity of microphone hardware: the microphone's circuitry demodulates the signal back into the audible band, so the assistant hears a command that its owner cannot.
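A minimal sketch of that demodulation effect, using NumPy. The specific numbers here (a 1 kHz tone standing in for speech, a 25 kHz carrier, a small quadratic term to model microphone nonlinearity) are illustrative assumptions, not values from the research:

```python
import numpy as np

fs = 192_000                  # sample rate high enough to represent ultrasound
t = np.arange(0, 0.1, 1 / fs)

# Hypothetical "voice command": a 1 kHz tone standing in for speech.
baseband = np.sin(2 * np.pi * 1_000 * t)

# Amplitude-modulate it onto a 25 kHz carrier, above human hearing.
transmitted = (1 + baseband) * np.sin(2 * np.pi * 25_000 * t)

# A perfectly linear microphone would capture only ultrasound, but real
# microphone circuits are slightly nonlinear; model that with a small
# quadratic term. Squaring an AM signal regenerates a component at the
# original baseband frequency (self-demodulation).
received = transmitted + 0.1 * transmitted ** 2

# Find the strongest component of the received signal in the audible band.
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
audible = (freqs > 100) & (freqs < 20_000)
peak_hz = freqs[audible][np.argmax(spectrum[audible])]
print(round(peak_hz))  # → 1000: the inaudible transmission yields an audible 1 kHz tone
```

The transmitted signal itself contains energy only around 24 to 26 kHz, yet after the nonlinear stage the dominant audible component sits exactly at the 1 kHz "command" frequency, which is the core of the exploit.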
Dubbed “DolphinAttack”, the team tested the hack on Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, and Alexa.
They could execute wake phrases such as “OK Google”, as well as dial numbers and make other multi-word requests like “unlock the back door”.
Success rates varied by phone, phrase, and distance: beyond roughly 1.5 m between the ultrasonic transducer and the target device, the attack stopped working reliably.
As a defense, users can disable wake phrases, and many phones restrict access to sensitive functions until the device has been unlocked.