JUL 14, 2023 10:00 AM PDT

Voice Authentication May Not Be As Secure As We Think

WRITTEN BY: Ryan Vingum

In a digital age, security has become an increasingly prominent focus. You’ve probably heard about massive security breaches in which people’s personal information is stolen, putting them in harm’s way. That’s why new ways of protecting personal and sensitive data are crucial at a time when attackers pursue it with such determination.

One layer of security that has been added in various sectors (largely banking and other call-center-oriented industries) is voice recognition. Thought of as a kind of “audio fingerprint,” voice recognition is offered as a way to ensure that only an account holder can access sensitive information. And it makes sense: no one else has your voice or sounds exactly like you, so it should be a relatively safe way to keep your information out of harm’s way. It’s also seen as safer than simple password protection, because an audio fingerprint can’t be readily “stolen.” Voice authentication often works by having you repeat a phrase so a computer can analyze your voice and identify its unique features. When you later speak a different phrase to access your account, the system compares that new phrase against the unique features it has on file for you.
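
To make the idea concrete, here is a minimal Python sketch of how such a comparison might work. It is an illustration only, not any vendor’s real system: the embed() stand-in and the 0.8 threshold are assumptions made for the example.

# Illustrative sketch of a voice-authentication check that compares a new
# utterance against a stored "audio fingerprint." The embed() function and
# the 0.8 threshold are assumptions for illustration, not a real product.
import numpy as np

def embed(audio_samples: np.ndarray) -> np.ndarray:
    """Stand-in for a model that turns raw audio into a fixed-length
    voiceprint vector (real systems use trained neural networks)."""
    # Toy placeholder: summarize the signal with a handful of statistics.
    return np.array([
        audio_samples.mean(),
        audio_samples.std(),
        np.abs(np.diff(audio_samples)).mean(),
    ])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def is_same_speaker(enrolled_print: np.ndarray, new_audio: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Accept the caller if the new utterance's voiceprint is close enough
    to the voiceprint captured at enrollment."""
    return cosine_similarity(enrolled_print, embed(new_audio)) >= threshold

# Usage: enroll once, then verify a later phrase.
rng = np.random.default_rng(0)
enrolled = embed(rng.normal(size=16000))   # pretend enrollment recording
print(is_same_speaker(enrolled, rng.normal(size=16000)))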

A team of computer scientists at the University of Waterloo, however, highlights that voice authentication may not be as secure as it seems. In fact, the researchers may have discovered a way that a hacker could break through voice-recognition protection with surprising ease.

The technique allows hackers to break through voice-recognition systems at a surprising rate: a 99% success rate after just six attempts. Hackers often turn to AI to generate convincing replications of a person’s voice, which is why many institutions have implemented anti-spoofing measures to help distinguish real voices from AI-generated ones.

To test the true security of voice recognition, the University of Waterloo researchers created a way to evade these anti-spoofing measures. Specifically, they identified the markers in AI-generated audio that spoofing protection uses to recognize a sound as AI-generated. By writing a program that removes these markers, they enabled AI-generated audio to bypass the countermeasures, making voice authentication far less secure.
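
As a rough illustration only (the researchers’ actual technique is more sophisticated), the toy Python sketch below shows the general idea: a naive spoofing check keyed to an assumed tell-tale artifact, and an attacker filtering that artifact out so the same check no longer flags the audio. The 7,000 Hz “artifact” is purely hypothetical.

# Toy illustration of the general idea, not the Waterloo team's method:
# a naive spoofing detector keyed to a known artifact, and an attacker
# removing that artifact so the detector no longer flags the audio.
import numpy as np

SAMPLE_RATE = 16000
ARTIFACT_HZ = 7000  # hypothetical tell-tale frequency left by a TTS system

def has_artifact(audio: np.ndarray) -> bool:
    """Naive spoofing check: flag audio with strong energy near ARTIFACT_HZ."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(audio.size, d=1 / SAMPLE_RATE)
    band = np.abs(freqs - ARTIFACT_HZ) < 50
    return spectrum[band].mean() > 5 * spectrum.mean()

def remove_artifact(audio: np.ndarray) -> np.ndarray:
    """Attacker's step: notch out the frequencies the detector looks for."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(audio.size, d=1 / SAMPLE_RATE)
    spectrum[np.abs(freqs - ARTIFACT_HZ) < 50] = 0
    return np.fft.irfft(spectrum, n=audio.size)

# A fake "AI-generated" clip: speech-like noise plus the tell-tale tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
fake_voice = np.random.default_rng(1).normal(size=SAMPLE_RATE) * 0.1
fake_voice += np.sin(2 * np.pi * ARTIFACT_HZ * t)

print(has_artifact(fake_voice))                   # True  -> flagged as spoofed
print(has_artifact(remove_artifact(fake_voice)))  # False -> slips past the check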

Sources: Science Daily; Gnani

About the Author
Science writer and editor, with a focus on simplifying complex information about health, medicine, technology, and clinical drug development for a general audience.