As organisations strengthen their defences against AI attacks, fraudsters are turning their focus to tricking consumers. They are increasingly adopting AI tools to make scams more convincing, leaving organisations in need of new ways to spot customers acting under their spell, writes Christopher Wade.
Organisations are aware of the imminent threat from fraudsters using AI tools such as deepfakes and voice cloning to target contact centres. And although they may not be fully protected yet, many have some kind of plan in place to spot fake callers and other automated scams. Consumers, however, remain largely unprotected – and that is a problem not just for them, but for financial institutions too.
Consumers are increasingly vulnerable to fraudsters targeting them on their phones and through social media, using various tricks and scams. Through what is known as social engineering, fraudsters are able to convince their victims to part with their money. Techniques include the ‘safe account’ scam, where fraudsters convince you they are calling from your bank, tell you your account has been compromised and offer to help you move your money to a different account.
Another common example is the ‘Hi Mum’ con, where scammers posing as the children of their potential victims send hundreds of targets a text or WhatsApp message claiming that they have a new number because they have lost or broken their phone. Their goal is to convince you to send some money to help your ‘child’ – and the approach is increasingly successful.
Unfortunately, examples like this are becoming increasingly common, with AI making these scams easier and more convincing. Indeed, generative AI helps fraudsters find your personal information, photos and videos and use them to create fake messages and even clone your voice or create ‘deepfakes’ designed to trick even the most savvy people into handing over their money. Imagine thinking you’re hearing your child’s voice on the phone!
Very little can easily be done to protect consumers directly, and when they do fall victim to these kinds of scams, financial institutions are usually held liable, both financially and reputationally. Indeed, in a step to protect consumers, the UK Payment Systems Regulator (PSR) is putting in place a 50-50 shared liability split between sending and receiving institutions, expected to come into effect in October 2024. The pressure is therefore on organisations to take the lead in protecting themselves and their customers, and to prevent further losses from this rise in AI-powered scams.
Here are two key areas, however, where increasing contact centre defences can help.
First, when you consider the fraudster’s activity lifecycle, there’s an opportunity to limit the risk by looking upstream at the stages before a scam is carried out and spotting unusual activity. As fraudsters prepare to carry out scams using stolen personal data, they first need to gather as much information as possible about their targets and test which data works, and at which organisations. One way they do this is to set up bots to repeatedly attack contact centre IVR systems, calling multiple contact centres over and over again and using trial and error with different data combinations to validate and enhance the data they hold. With the ability to spot this, at-risk accounts can be identified early in the cycle and customers can be protected later in the attack.
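The IVR probing pattern described above – the same number calling over and over in a short period – lends itself to a simple velocity check. The sketch below is purely illustrative (the class name, window and threshold are assumptions, not part of any named product): it keeps a sliding window of recent call timestamps per caller and flags a caller once their call rate exceeds a limit.

```python
from collections import defaultdict, deque

class IvrProbeDetector:
    """Illustrative sliding-window velocity check for repeated IVR calls.

    The window length and call threshold are assumed values; a real
    deployment would tune these against observed traffic.
    """

    def __init__(self, window_seconds=3600, max_calls=5):
        self.window = window_seconds
        self.max_calls = max_calls
        self.calls = defaultdict(deque)  # caller number -> recent timestamps

    def record_call(self, caller_id, timestamp):
        """Record a call; return True if the caller now looks automated."""
        q = self.calls[caller_id]
        q.append(timestamp)
        # Discard timestamps that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_calls
```

With these assumed settings, a number making a sixth call within an hour would be flagged, at which point the accounts it probed could be marked as at-risk for closer monitoring.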
The second opportunity is at the moment the fraud is taking place – the point at which your customer calls ‘under the spell’ of the fraudster to initiate a transaction. This requires an understanding of your customers and their typical calling patterns and behaviour: how often do they usually call, at what time of day, and what kind of transactions do they normally carry out? Putting controls in place and ‘breaking’ the fraud attack can provide the vital time needed to stop money leaving the account and to help the customer realise what has happened.
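The checks above can be sketched as a simple rule-based score comparing a call against the customer's baseline. This is only an illustration of the idea, not any vendor's actual logic; the field names, thresholds and scoring weights are all assumptions.

```python
def call_risk_score(call, profile):
    """Score a call (0-3) against a customer's usual behaviour.

    `call` and `profile` are hypothetical dictionaries; real systems
    would draw these features from call history and account data.
    """
    score = 0
    # Is the call outside the customer's usual calling hours?
    start, end = profile["usual_hours"]
    if not start <= call["hour"] <= end:
        score += 1
    # Is this a transaction type the customer has never carried out?
    if call["transaction"] not in profile["usual_transactions"]:
        score += 1
    # Is the amount far above the customer's historical maximum?
    if call["amount"] > 2 * profile["max_amount"]:
        score += 1
    return score

def should_hold_transfer(call, profile, threshold=2):
    """'Break' the attack: pause the transfer when the call is unusual enough."""
    return call_risk_score(call, profile) >= threshold
```

Holding the transfer when the score crosses the threshold buys the vital time mentioned above: the payment is paused for review rather than leaving the account immediately.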
Solutions like Smartnumbers, which analyse calls into the contact centre and use machine learning to spot and flag these kinds of anomalies, are vitally important for organisations as the fight against AI-powered fraud steps up. Likewise, the Smartnumbers Consortium, which flags and shares caller data on known fraudsters, is an essential way to combat fraud across multiple organisations and sectors, increasing effectiveness in detecting and preventing these kinds of scams even as fraudsters’ methods change.