Researchers have discovered a new way for AI tools to steal your data: keystrokes. A new research paper, published via Cornell University's arXiv, details an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard.
The researchers accomplished this by training an AI model on the sound of keystrokes and deploying it on a nearby phone. The integrated microphone listened for keystrokes on a MacBook Pro and was able to reproduce them with 95% accuracy — the highest accuracy the researchers have seen without the use of a large language model.
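At a high level, that kind of pipeline turns each recorded key press into a spectrogram-style image and hands it to an image classifier. The snippet below is a minimal sketch of that first step, assuming the librosa audio library; the file name and parameters are illustrative rather than values from the paper.

```python
# Minimal sketch: convert a short keystroke recording into a mel-spectrogram
# "image" that an off-the-shelf image classifier could be trained on.
import librosa
import numpy as np

def keystroke_to_spectrogram(path: str, sr: int = 44100) -> np.ndarray:
    """Load one keystroke clip and convert it to a log-scaled mel-spectrogram."""
    audio, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    # Log-scale the energies so quiet and loud presses land in a similar range.
    return librosa.power_to_db(mel, ref=np.max)

spec = keystroke_to_spectrogram("keystroke_a_001.wav")  # hypothetical clip
print(spec.shape)  # (64, frames) -- one "image" per keystroke
```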
The team also tested accuracy during a Zoom call, in which the keystrokes were recorded with the laptop’s microphone during a meeting. In this test, the AI was 93% accurate in reproducing the keystrokes. In Skype, the model was 91.7% accurate.
Before you throw away your loud mechanical keyboard, it's worth noting that the volume of the keyboard had little to do with the accuracy of the attack. Instead, the AI model was trained on the waveform, intensity, and timing of each keystroke to identify them. For instance, you may press one key a fraction of a second later than others due to your typing style, and the model takes that into account.
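To work with those features, an attacker first has to pick individual key presses out of a longer recording. Here is a rough sketch of how that isolation step might look, using simple energy thresholding over a NumPy audio buffer; the threshold and window sizes are illustrative, not values from the paper.

```python
# Minimal sketch, assuming a mono recording as a NumPy array: find the
# approximate start time of each key press by looking for bursts of energy.
import numpy as np

def find_keystrokes(audio: np.ndarray, sr: int, frame: int = 512,
                    threshold: float = 0.02) -> list[float]:
    """Return approximate onset times (in seconds) of keystroke-like bursts."""
    onsets = []
    armed = True
    for i in range(0, len(audio) - frame, frame):
        energy = float(np.sqrt(np.mean(audio[i:i + frame] ** 2)))  # RMS energy
        if armed and energy > threshold:
            onsets.append(i / sr)   # keep the timing -- typing rhythm matters
            armed = False           # ignore the rest of this burst
        elif energy <= threshold:
            armed = True            # re-arm once the burst dies down
    return onsets
```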
In the wild, this attack would take the form of malware installed on your phone or another nearby device with a microphone. The malware then just needs to listen through that microphone, gather your keystrokes, and feed them into the AI model. The researchers used CoAtNet, an AI image classifier, for the attack, and trained the model on 36 of the MacBook Pro's keys, each pressed 25 times.
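The training stage itself is a standard image-classification setup. The sketch below illustrates it; note that the researchers used CoAtNet, while a torchvision ResNet-18 stands in here, and the spectrogram/label tensors are assumed to come from the 36-key, 25-presses-per-key recordings described above.

```python
# Minimal training sketch for the classification step (ResNet-18 as a
# stand-in for CoAtNet; inputs are (N, 1, H, W) spectrogram "images").
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=36)                          # one class per key
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,   # accept 1-channel
                        padding=3, bias=False)            # spectrogram input
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(spectrograms: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one gradient step on a batch of spectrograms and key labels."""
    optimizer.zero_grad()
    loss = loss_fn(model(spectrograms), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```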
There are some ways around this kind of attack, as reported by Bleeping Computer. The first is to avoid typing your password at all by leveraging features like Windows Hello and Touch ID. You can also invest in a good password manager, which not only spares you from typing your password but also lets you use unique, random passwords for all of your accounts.
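If you want a sense of what those random passwords look like, Python's standard-library secrets module can generate one in a couple of lines; a quick sketch:

```python
# Generate the kind of long, random password a password manager would store.
import secrets
import string

def random_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # unique per account, and never typed by hand
```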
What won’t help is a new keyboard. Even the best keyboards can fall victim to the attack due to its method, so quieter keyboards won’t make a difference.
Unfortunately, this is just the latest in a string of new attack vectors enabled by AI tools, including ChatGPT. Just a week ago, the FBI warned about the dangers of ChatGPT and how it's being used to launch criminal campaigns. Security researchers have also seen new challenges, such as adaptive malware that can rapidly change its code through tools like ChatGPT.