8th October 2018

3 ways your voice biometrics system is letting fraudsters in, and how to prevent it

By Grant White

Ensuring that only legitimate customers can access their accounts is essential to preventing fraud.

However, authentication using knowledge-based questions is full of issues: 10-15% of legitimate customers cannot remember the correct answers and so fail to authenticate, causing frustration. Meanwhile, 17% of acquaintances can correctly guess the answers to standard knowledge-based questions.

It’s no wonder that knowledge-based authentication (KBA) is being replaced by new solutions such as voice biometrics. But is your implementation of voice biometrics enough to keep customers’ accounts safe?

1. Fraudsters enrolling their own voiceprint

Voice biometric systems add another level of authentication to the call by moving away from “something you know” to “something you are”. An active authentication implementation requires customers to repeat a series of passphrases to create a unique voiceprint for their account. Passive authentication, on the other hand, analyses the conversation itself to create a unique voiceprint for each caller. Subsequent calls are then compared to the stored voiceprint to verify that the caller is who they say they are.

While this simplifies the identification and verification (ID&V) process, the challenge is ensuring that the right voiceprint is linked to the right person. Do you have adequate controls to ensure that fraudsters cannot register their own voice as the voiceprint for a customer’s account? Or does enrolment rely on unreliable static data to validate the caller? And in a passive authentication process, do you have enough contact with your customers to build a large enough sample to compare against?
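
To make the enrol-then-verify flow described above concrete, here is a minimal sketch in Python. The toy spectral-average embedding, the cosine-similarity comparison and the 0.8 threshold are all assumptions for illustration; a real deployment would use a trained speaker-recognition engine and calibrated thresholds.

```python
# Minimal sketch of the enrol-then-verify flow described above (illustrative only).
# Assumptions, not details from the article: the voiceprint is a fixed-length
# embedding, extracted here by a toy spectral-average function standing in for a
# real speaker-recognition model, and verification is a cosine-similarity threshold.
import numpy as np

def extract_voiceprint(audio: np.ndarray, frame_len: int = 512) -> np.ndarray:
    """Toy stand-in for a speaker-embedding model: unit-normalised average spectrum."""
    frames = [audio[i:i + frame_len] for i in range(0, len(audio) - frame_len, frame_len)]
    spectra = [np.abs(np.fft.rfft(frame)) for frame in frames]
    embedding = np.mean(spectra, axis=0)
    return embedding / (np.linalg.norm(embedding) + 1e-9)

def enroll(audio: np.ndarray) -> np.ndarray:
    """Active enrolment: create the reference voiceprint stored for the customer."""
    return extract_voiceprint(audio)

def verify(audio: np.ndarray, stored_voiceprint: np.ndarray, threshold: float = 0.8) -> bool:
    """Compare a later call against the stored voiceprint; accept only above the threshold."""
    score = float(np.dot(extract_voiceprint(audio), stored_voiceprint))  # cosine similarity
    return score >= threshold

# Example: enrol on one recording, then verify a later call (dummy audio data)
rng = np.random.default_rng(0)
enrolment_audio = rng.standard_normal(16000)                # ~1 second at 16 kHz
later_call_audio = enrolment_audio + 0.1 * rng.standard_normal(16000)
reference = enroll(enrolment_audio)
print("Caller accepted:", verify(later_call_audio, reference))
```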

2. Not enough conversation to confidently validate the caller

Validating a caller, whether through a passive or an active model, requires enough recorded speech from that caller. The challenge is that fraudsters try to avoid detection: they may stay largely silent, forcing additional clearance with the agent, during which they introduce background noise, or they may give single-syllable answers to avoid triggering passive detection. Either way, voice biometrics on its own isn’t enough of a defence.
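
As a rough illustration of that constraint, the sketch below gates biometric scoring on the amount of net speech captured. The crude energy-based voice activity detection and the eight-second minimum are placeholder assumptions, not figures from any real system.

```python
# Illustrative check on net speech before attempting passive verification.
# Both the crude energy-based voice activity detection and the eight-second
# minimum are assumptions for the sketch, not figures from any vendor.
import numpy as np

SAMPLE_RATE = 16000
MIN_NET_SPEECH_SECONDS = 8.0   # placeholder: below this, don't trust a biometric score

def net_speech_seconds(audio: np.ndarray, frame_len: int = 400,
                       energy_threshold: float = 0.01) -> float:
    """Estimate how many seconds of actual speech the recording contains."""
    frames = [audio[i:i + frame_len] for i in range(0, len(audio) - frame_len, frame_len)]
    voiced_frames = sum(1 for frame in frames if np.mean(frame ** 2) > energy_threshold)
    return voiced_frames * frame_len / SAMPLE_RATE

def can_verify(audio: np.ndarray) -> bool:
    """Only attempt biometric scoring once enough speech has accumulated;
    otherwise fall back to other controls rather than trusting a weak score."""
    return net_speech_seconds(audio) >= MIN_NET_SPEECH_SECONDS

# Example: a mostly silent call with only brief answers fails this gate
call_audio = np.zeros(30 * SAMPLE_RATE)                     # 30 seconds, mostly silence
call_audio[:2 * SAMPLE_RATE] = np.random.default_rng(1).standard_normal(2 * SAMPLE_RATE)
print("Enough speech to score:", can_verify(call_audio))    # False: only ~2 s of speech
```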

3. Synthetic voice spoofing genuine customers’ speech patterns

New threats are emerging from advances in technology, such as synthetic voice, where computers are used to create a human-sounding voice. Using just a fraction of a recorded conversation, it is possible to spoof a legitimate customer’s voice well enough to fool a voice biometric system. How do you protect against that?

The solution: validating the caller

Resilient’s smartnumbers for fraud prevention product analyses the call’s DNA to identify suspicious callers to your UK contact centres before the call is even answered. This means the caller’s true identity is verified before their voiceprint is enrolled or they speak with an agent. Suspicious calls can be routed to specialist teams, or agents can be alerted to the call’s risk level for additional security.

Using smartnumbers for fraud prevention removes callers’ cloak of anonymity, so you can validate their true identity.