
AI Voice Cloning Pushes 91% of Banks to Rethink Verification

BioCatch Survey Report Focuses on New AI-Based Risks and Fraud-Fighting Solutions

Banks are concerned about the latest advancements in voice-cloning technology and the threat it poses to the authentication process. The failure of identity-centric solutions to combat synthetic identity fraud has convinced 91% of U.S. banks to reconsider their use of voice verification for major customers, according to a BioCatch report based on a survey of 600 fraud fighters in 11 countries.


The report says that AI-based attacks are growing. "While AI can be useful for financial institutions in fraud detection and response, AI is being used by bad actors to power increasingly advanced threats. AI allows these threat actors to automate tactics and scale attacks beyond traditional limitations. AI and large language models are also being used to create believable messages for social engineering attacks, power voice scams and fuel deepfake videos," the report says.

Over the past year, generative AI companies have released a number of tools that fraud investigators warn are helping criminals - including instantaneous language translation, speech therapy, reading assistance and voice-cloning technology that can copy an account holder's voice patterns from just three seconds of recorded audio. BioCatch said voice cloning can potentially defeat the voice recognition verification technology used by many banks and financial services firms, and 91% of respondents said they are looking for new verification methods (see: Cloned Voice Tech Is Coming for Bank Accounts).

"While once considered cutting-edge and a promising answer to complex threats, voice verification will no longer be adequate for financial institutions to protect their customers. As such, financial institutions will need to use a strategic combination of authentication methods to minimize user frustration while maximizing protection," BioCatch said.

The Federal Reserve in 2019 said synthetic identities were the fastest-growing type of fraud and that traditional fraud models failed to flag up to 95% of synthetic identities used in new account applications. The BioCatch survey found that 72% of financial institutions are encountering synthetic identity fraud during the client onboarding process.

Behavioral Biometrics and Information Sharing

Banks have primarily relied on information sharing among financial institutions, law enforcement and regulatory authorities to spot synthetic identities, the report says. While these efforts are commendable, the report says, financial institutions should also invest in newer authentication methods that are less susceptible to AI-driven attacks.

Behavioral analysis and anomaly detection tools have proved successful in spotting synthetic identities. The evolution of behavioral analysis to incorporate both expertise in online user behavior and the psychology of cybercrime and social engineering has resulted in 41% of financial institutions relying on the technology, the report says.

"What we can observe is what their behavior looks like, and if somebody is pushing through a lot of applications from the same device, from the same IP address, from the same background, from the same location, making very small changes ... there's a lot of behavioral elements that can be leveraged in this capacity along with the device and the network, and that allows you to get a much clearer picture of what the individual's intent is," Seth Ruden, director of global advisory for the Americas at BioCatch, told Information Security Media Group.

Increase in Global Crime to Continue

The report says that fraud management and AML teams are looking to AI to counter fraud threats. About 69% of financial institutions surveyed believe AI will lead to more revenue, improved customer interactions and less time spent investigating false positives.

Global fines for AML and other financial crimes grew by 50% in 2022 to almost $5 billion, according to the report. Financial institutions are increasingly investing in and deploying technologies such as AI-based detection and behavioral biometric intelligence solutions, but experts and practitioners in the industry expect financial crime and fraud activity to continue to increase in the next year, the report states.

At the same time, respondents expect management to improve coordination between fraud and financial crime departments. In the survey, 90% of the respondents said financial institutions and government authorities need to share more information to combat fraud and financial crime.


About the Author

Suparna Goswami


Associate Editor, ISMG

Goswami has more than 10 years of experience in the field of journalism. She has covered a variety of beats, including the global macro economy, fintech, startups and other business trends. Before joining ISMG, she contributed to Forbes Asia, where she wrote about the Indian startup ecosystem. She has also worked with UK-based International Finance Magazine and leading Indian newspapers, such as DNA and Times of India.



