By: Seth Ruden
If you want to start getting your financial institution’s fraud team anxious about the future, you only have to say two letters: AI.
Generative artificial intelligence (GenAI) is a – if not the – top-tier concern for most fraud team managers. Nearly all expect this new technology to massively disrupt their field. And for good reason: The most recent report from the mass-market creators of AI demonstrates that bad actors are already using it, quite successfully, for nefarious purposes.
Consider the poor spelling and lousy grammar that typified earlier phishing campaigns and how those campaigns have matured into far more effective attacks. In a decade, we went from Nigerian-prince emails to the precise, sophisticated bank-impersonation schemes many of us now encounter every day. You may even have received a recent SMS message claiming you owe an unpaid toll-road fine.
The fact is: AI is a game-changer for scammers. Already we’re seeing bad actors thriving in this new normal, and the Hong Kong deepfake case extensively covered by news outlets all around the world tells the $25 million story. The emergence of AI in enabling fraud is so compelling that BioCatch, which prevents financial crime by recognizing patterns in human behavior, has published a whitepaper on how we see this space evolving. I’d encourage anyone with a strong stomach to have a read here. If you’d prefer a light summary, I’ll outline a few of the finer points below:
- AI allows scammers to craft flawless (in grammar, punctuation, syntax, etc.) messages in any language they choose, targeting victims anywhere in the world. Deepfake technology also improves every day, letting bad actors impersonate the voices and even the appearance of a victim’s loved ones, bank representatives, officials, etc. This is happening right now, and there are cases of it defeating the very thing we once felt was the best control for preventing scams: out-of-channel verification, in which a suspicious contact is confirmed through a separate, trusted channel.
- AI will help fraudsters automate everything criminal they do: identifying gaps in their targets’ defenses, exploiting those gaps at speed, and vastly improving their time-to-market. AI can review and debug code. AI can write its own scripts. AI can automate all of this and then course-correct, and it can and will do so while we sleep. This empowers fraudsters to open and take over accounts, scam victims, and move money far more efficiently than the manual processes of the pre-AI world ever allowed.
- AI will accelerate how quickly fraudsters work, allowing them to perform the research, weaponization, delivery, and exploitation stages of a fraud event and cash out illicit funds more rapidly than at any point in history. Assume a fraudster must identify a target, find that person’s weakness, and leverage multiple databases of stolen personal information to exploit that victim. AI bots will be able to find targets with a process or control gap, build the data lookups to fit their modus operandi, orchestrate the scam, and move the stolen funds faster than we have ever seen.
In the whitepaper referenced above, we forecast much of what recent OpenAI reports suggest is already occurring in the wild. So, knowing this, what can your financial institution do as a countermeasure? Well, AI and machine-learning models can be used for good as well, and one use case is measuring legitimate human responses to stimuli. Behavioral biometrics is a natural fit in this AI world: it is very challenging for an AI model to emulate legitimate human behavior, while bot behavior is not terribly challenging to identify.
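To make that last point concrete, here is a deliberately simplified sketch of one behavioral signal such systems might look at: the variability of a user’s keystroke timing. Real typists produce bursts, pauses, and corrections; a naive script fires keystrokes at near-uniform intervals. The function name, the coefficient-of-variation heuristic, and the threshold below are my own illustrative assumptions, not BioCatch’s production model, which draws on thousands of behavioral and device signals.

```python
import statistics

def looks_scripted(inter_key_ms, cv_threshold=0.15):
    """Flag a session whose inter-keystroke intervals are suspiciously
    uniform. Illustrative heuristic only: human typing rhythm varies
    widely; a simple bot's does not."""
    if len(inter_key_ms) < 2:
        return False  # not enough signal to judge either way
    mean = statistics.mean(inter_key_ms)
    stdev = statistics.stdev(inter_key_ms)
    cv = stdev / mean  # coefficient of variation: spread relative to pace
    return cv < cv_threshold

# A scripted session: intervals within a millisecond or two of each other.
bot_session = [100, 101, 100, 99, 100, 100, 101]
# A human session: bursts, hesitations, and a long pause mid-entry.
human_session = [85, 210, 120, 640, 95, 300, 150]

print(looks_scripted(bot_session))    # → True
print(looks_scripted(human_session))  # → False
```

Real deployments would of course combine many such signals (mouse trajectories, touch pressure, device attributes) in a trained model rather than rely on any single hand-set threshold, but even this toy example shows why uniform, machine-paced behavior stands out.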
Finding mechanisms that allow us to distinguish legitimate sessions from fraudulent ones, and then using those mechanisms in fraud detection, can now be considered table stakes in a modern financial-crime deterrence capability. So, while AI is now entrenched in our lives (and in the attacks of scammers) in a growing number of ways, we should also feel confident in its ability to effectively deter and mitigate AI-powered fraud.
About BioCatch:
BioCatch prevents financial crime by recognizing patterns in human behavior, continuously collecting more than 3,000 anonymized data points – keystroke and mouse activity, touch screen behavior, physical device attributes, and more – as people interact with their digital banking platforms. With these inputs, BioCatch’s machine-learning models reveal patterns in user behavior and provide device intelligence that, together, distinguish the criminal from the legitimate. The company’s Client Innovation Board – an industry-led initiative in partnership with American Express, Barclays, Citi Ventures, HSBC, National Australia Bank, and others – collaborates to pioneer innovative ways of leveraging customer relationships for improved fraud detection. Today, 34 of the world’s largest 100 banks and 257 total financial institutions deploy BioCatch solutions, analyzing 14 billion user sessions per month and protecting 447 million people around the world from fraud and financial crime.
About the author:

Seth Ruden
Seth Ruden is BioCatch’s director of global advisory for the U.S. and Canada. He ran one of the nation’s top credit union fraud teams and, before that, was a global fraud consultant with a top payments technology provider.