By Calypso Team, March 11, 2019
Researchers manipulated an Amazon Alexa using voice commands hidden within recordings that were imperceptible to the human ear. This hack has implications for nearly all voice applications in consumers’ homes.
How They Hacked is a series detailing how adversaries were able to successfully hack machine learning and artificial intelligence applications.
In 2018, researchers at a lab in Germany managed to steal highly personal information by playing an audio recording of birds chirping within “earshot” of an Amazon smart device equipped with the company’s culturally ubiquitous voice assistant, Alexa.
To the researchers, the recording sounded indistinguishable from real songbirds. Hidden within it, however, was data that the human ear did not register but the voice assistant in the room did. By playing the recording for the device, the researchers were able to steal the device owner’s personal banking and financial details and make a string of unauthorized purchases — all without the human observers in the vicinity becoming any the wiser.
This experiment was designed to test Alexa’s vulnerability to “adversarial attacks,” an emerging class of hacks aimed at compromising machine learning and artificial intelligence (AI) systems.
The process for hacking an AI departs significantly from the process for executing a traditional cyberintrusion. Instead of gaining unauthorized access to a network or stealing protected data, hacking an AI involves using slight changes — or perturbations — in data inputs to fool the AI system. When leveraged effectively, these data perturbations can help an attacker accomplish a range of malicious acts, including fooling malware classifiers, breaking military systems, and, yes, compromising voice assistants.
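The core idea of a perturbation attack can be sketched in a few lines. The toy model below is purely illustrative (the weights, input, and epsilon are invented for this example, not drawn from any real system): a linear classifier confidently labels an input, and a small, bounded nudge to every feature — imperceptible in each individual dimension — flips its decision.

```python
import numpy as np

# Toy linear "classifier" on a 20-dimensional input. Everything here
# (weights, input, epsilon) is illustrative, not a real model.
w = np.linspace(-1.0, 1.0, 20)        # fixed, known model weights
x = 0.05 * np.sign(w)                 # an input the model labels positive

def predict(v):
    return 1 if w @ v > 0 else -1

# For a linear model, the gradient of the score with respect to the
# input is just w, so the worst-case bounded perturbation steps each
# feature by -eps * sign(w) -- the idea behind fast gradient attacks.
eps = 0.06
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))     # tiny per-feature change flips the label
```

The perturbation changes no feature by more than 0.06, yet the label flips — the same principle, scaled up to audio samples or image pixels, underlies the attacks described here.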
The researchers hacked the Alexa (and several other voice assistants) by tricking two systems: the voice assistant’s AI and the human ear. Tricking the former was a matter of understanding the mathematical process (namely, a Fourier transform) the voice assistant uses to convert raw audio into machine-readable features. Once they understood this process, the researchers were able to craft audio that, after the transform, read just like a human voice command.
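To make this concrete, here is a minimal sketch of the first step a speech pipeline typically applies to raw audio: slice the waveform into short frames and run a Fourier transform on each, producing a spectrogram of feature vectors. The frame and hop sizes are typical values, and the pure tone stands in for a voice recording; none of this reproduces Amazon's actual pipeline.

```python
import numpy as np

sr = 16000                                  # sample rate (Hz)
t = np.arange(sr) / sr                      # one second of audio
audio = np.sin(2 * np.pi * 440 * t)         # stand-in "voice": a 440 Hz tone

frame, hop = 400, 160                       # 25 ms frames, 10 ms hop
window = np.hanning(frame)
frames = [audio[i:i + frame] * window
          for i in range(0, len(audio) - frame + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrogram

# Each row is now a machine-readable feature vector; the peak FFT bin
# maps back to the dominant frequency in that frame.
bin_hz = sr / frame                         # 40 Hz per bin
peak = spec[0].argmax() * bin_hz
print(peak)                                 # ~440 Hz, as expected
```

It is this stage the attackers reverse-engineered: knowing exactly how audio becomes features, they could search for waveforms whose features match a spoken command.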
The researchers also hacked the human ear — after all, the experimental attack technique would be fairly useless if any person within earshot was able to pick up on it immediately. According to Fast Company, “their method, called ‘psychoacoustic hiding,’ shows how hackers [can] manipulate any type of audio wave…to include words that only the machine can hear, allowing them to give commands without nearby people noticing.”
In short, when humans hear a loud sound at a certain frequency, our ears briefly stop registering quieter sounds at and around that frequency — an effect known as auditory masking. This window provides just enough time to sneak through commands that machines will hear but humans will not.
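The level relationship that makes this possible can be illustrated with a short sketch. A quiet component placed near a much louder one sits tens of decibels below the masker, which — according to a psychoacoustic model — keeps it below what a listener would notice. Real attacks compute a full MPEG-style masking threshold; the tones and amplitudes below are only an illustration of the principle.

```python
import numpy as np

sr = 16000
t = np.arange(sr) / sr
masker = 1.0 * np.sin(2 * np.pi * 1000 * t)    # loud, clearly audible tone
hidden = 0.01 * np.sin(2 * np.pi * 1100 * t)   # quiet "payload" tone nearby
mix = masker + hidden

# Look at the mixed signal's spectrum (1 Hz per bin for a 1 s signal).
spectrum = np.abs(np.fft.rfft(mix * np.hanning(len(mix))))
db = 20 * np.log10(spectrum + 1e-12)

gap_db = db[1000] - db[1100]    # masker level minus payload level
print(round(gap_db))            # the payload sits ~40 dB below the masker
```

An attacker's job is then an optimization problem: shape the payload so it stays under the masking threshold everywhere for human ears, while still decoding to the desired command after the Fourier transform.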
Once they had hacked the Alexa’s sound-to-code mathematical process and the human ear, the researchers were able to deliver a series of commands that enabled them to access and exploit the device owner’s personal financial information in a variety of ways.
AI-powered voice assistants are becoming increasingly common in homes and businesses alike, and it is hard to overstate just how important it has become to equip them with sufficient security protocols.
As things stand, security is little more than an afterthought in the development of these tools — if it is thought of at all. Consequently, as the hack detailed above demonstrated, it is remarkably easy to hack an AI-powered voice assistant and use it for a range of nefarious ends.
Ultimately, it is incumbent upon manufacturers like Amazon to keep security in mind from day one when designing and developing products like Alexa. There is no question that AI holds immense promise, but until AI security is taken seriously, the costs of this cutting-edge technology will likely outweigh its benefits.
At Calypso AI, we recognize this imperative and help forward-thinking companies develop robust, trusted, and explainable AI solutions. Get in touch to learn how we can identify your systems’ vulnerabilities and keep them secure.