AI-powered financial risk models have prevented over $1 billion in fraud, but can AI models match human expertise?
Financial crime and banking fraud are always evolving, with fraudsters constantly finding new ways to outsmart financial institutions and slip past regulations. Traditional investigative methods for mitigating these crimes require sifting through endless spreadsheets and transaction records—a labor-intensive process prone to human error. Now, with the rise of agentic artificial intelligence (AI), banks are looking to turn the tide, using smart automation to detect threats more accurately than ever before.
Beyond enabling more personalized service, trend prediction, and an overall flashier customer experience, fraud detection is also a “prime application” for AI and machine learning (ML). “AI algorithms can identify suspicious activities with unparalleled accuracy and speed,” according to a blog post by William Harmony, financial practice lead at compliance and risk management company Founder Shield.
Unlike conventional AI chatbots that require investigators to ask the right questions, AI agents autonomously collect evidence and recommend decisions.
Yet, while AI has become an indispensable tool in fraud detection and anti-money laundering (AML) efforts, it is only as effective as the data it’s trained on. Joe Biddle, UK market director at Trapets, cautions that over-reliance on AI could lead to a dangerous false sense of security. The company positions itself at a midway point between traditional manual processes and fully AI-driven ones.
“We don’t (currently) have AI models that can detect novel threats that aren’t part of their training data. Criminals would inevitably develop new tactics that fall outside its scope,” Biddle told me.
“If we were to rely solely on AI for financial crime prevention, we’d have to constantly retrain these systems just to keep up. Even then, AI models operate based on patterns, meaning they might miss outlier cases or subtle threats that a human investigator would catch.”
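The blind spot Biddle describes is easy to demonstrate. The sketch below is a minimal illustration in Python, using synthetic data and invented features rather than any vendor’s production model: a standard anomaly detector trained on historical transactions flags an obvious outlier instantly, while a novel tactic engineered to resemble routine activity passes as normal.

```python
# Illustrative only: synthetic data, invented features, not a real bank's model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "normal" transactions: [amount, hour_of_day]
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=5000),  # typical small amounts
    rng.normal(loc=14, scale=3, size=5000),         # mostly daytime activity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# An obvious outlier (a huge 3 a.m. transfer) is flagged...
print(model.predict([[50_000, 3]]))  # [-1] => anomalous

# ...but a novel tactic engineered to look routine (ordinary amount,
# ordinary hour) scores as normal: the blind spot Biddle describes.
print(model.predict([[60, 14]]))     # [1] => looks legitimate
```

Retraining on newer data shifts the boundary of “normal,” but the model can still only recognize variations on patterns it has already seen.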
The Good and Bad of AI in Financial Crime Prevention
Between 2023 and 2024, the
However, many AI models lack transparency and operate as “black boxes,” making it difficult for institutions to explain to regulatory bodies or audit how compliance-related decisions were made.
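The difference between a black box and an auditable decision shows up even in a toy example. The hedged sketch below uses invented data and feature names with a deliberately interpretable model, so that every flag decomposes into per-feature contributions a compliance team can cite; a black-box ensemble offers no comparable per-decision breakdown without extra explainability tooling.

```python
# Illustrative only: invented data and feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["amount_zscore", "new_beneficiary", "foreign_ip"]
X = np.array([[0.1, 0, 0], [2.5, 1, 1], [0.3, 0, 1], [3.0, 1, 0]] * 50)
y = np.array([0, 1, 0, 1] * 50)

clf = LogisticRegression().fit(X, y)

tx = np.array([2.8, 1, 0])         # the transaction under review
contributions = clf.coef_[0] * tx  # per-feature log-odds contribution

for name, value in zip(features, contributions):
    print(f"{name}: {value:+.2f}")
print(f"intercept: {clf.intercept_[0]:+.2f}")
# The terms sum (with the intercept) to the model's score, so every flag
# comes with reasons a reviewer can quote back to a regulator.
```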
“Financial crime experts can look closely at things like criminal motives, global economic shifts, geopolitical risks, and so on, to analyze trends and predict emerging threats. AI doesn’t have this level of insight,” argues Biddle. “If banks become over-reliant on AI, it can create huge issues when it comes to compliance, since regulators need to see clear reasoning for every action a bank takes.”
Additionally, as AI-driven frameworks become more prevalent, human investigators could lose their ability to independently recognize fraudulent behavior, creating a long-term knowledge gap.
“A sensible starting point lies with high-performing employees with a strong mastery of cross-functional processes. These individuals can use these to create pilot projects that allow AI agents to learn how to handle complex organizational workflows and the tasks within that deliver toward the set goals,”
“AI simply follows pre-established ‘rules’ based on the data it was trained on, so it is unable to explain each factor that led to its ultimate decision, which is important for regulators. So if an AI flags a transaction as suspicious, a human reviewer might not always be able to see the full logic chain behind it,” Biddle told me. “It’s critical to have humans in the loop at every stage of the process.”
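In practice, keeping humans in the loop often takes the shape of a triage layer: the model scores a transaction and assembles the evidence, but only a person can make the call. The sketch below illustrates that pattern under stated assumptions; the threshold, the Alert structure, and the reason strings are all invented for the example.

```python
# Illustrative triage sketch: the model recommends, a human decides.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Alert:
    tx_id: str
    score: float               # model's suspicion score in [0, 1]
    reasons: list = field(default_factory=list)
    decision: str = "pending"  # set by a human analyst, never by the model

REVIEW_THRESHOLD = 0.6  # assumed value; real tuning is a compliance decision

def triage(tx_id: str, score: float, reasons: list) -> Optional[Alert]:
    """Route suspicious activity to people instead of auto-blocking it."""
    if score >= REVIEW_THRESHOLD:
        return Alert(tx_id, score, reasons)  # enqueue for human review
    return None  # below threshold: proceed, but retain the score for audit

alert = triage("TX-1042", 0.83, ["amount 12x customer average", "new payee"])
if alert:
    print(f"{alert.tx_id} queued for review: {alert.reasons}")
```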
Financial Institutions Must Stay Smarter Than Their AI
Biddle emphasizes that AI should serve as an enhancement to human expertise rather than a complete replacement.
“Institutions should have the expertise to critically evaluate the AI’s decisions instead of being satisfied by its confident responses. Ultimately, AI should be an extra layer of protection in the fight against financial crime, not the entire defense,” Biddle said. “It’s about ensuring that institutions don’t become so reliant on it that they lose sight of the bigger picture.”
Keeping a close eye on compliance in a fast-evolving sector is also important, according to Ruban Phukan, CEO at GoodGist, a no-code platform that uses agentic AI for workplace productivity.
“AI is here, and it’s evolving fast. With the rise of generative AI (genAI), it’s critical that financial institutions continually update their compliance frameworks—not only to stay ahead of regulatory shifts but to clearly track and explain how their AI systems operate,” Phukan told me.
“For example, ensuring chatbot responses are accurate and legally sound means banks must review the language models being trained, and implement guidelines on how AI-generated content is communicated to clients.”
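One simple form such a guideline can take is a pre-send compliance gate on AI-generated client messages. The sketch below is purely illustrative: the prohibited-phrase list is an invented placeholder, not a real regulatory ruleset, and a production system would pair it with human sign-off.

```python
# Illustrative compliance gate for drafted chatbot replies.
import re

# Invented placeholder rules, not a real regulatory ruleset.
PROHIBITED_PATTERNS = [
    r"guaranteed returns?",
    r"risk[- ]free",
    r"cannot lose",
]

def review_reply(draft: str) -> tuple:
    """Return (safe_to_send, matched_rules) for a drafted chatbot reply."""
    hits = [p for p in PROHIBITED_PATTERNS
            if re.search(p, draft, re.IGNORECASE)]
    return (not hits, hits)

ok, hits = review_reply("This fund offers guaranteed returns with no risk.")
print(ok, hits)  # False ['guaranteed returns?'] => hold for human sign-off
```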
For his part, Biddle asserts that financial institutions need to stay one step ahead by continuously updating AI defenses and ensuring human oversight remains in place.
“The best approach is to adopt a hybrid model that combines traditional rules-based systems with AI tools – and always with strong human oversight. Any AI model used by banks should be continuously tested, refined, and most importantly, guided by human input,” Biddle added. “Banks’ risk and compliance teams should be trained in AI, or better yet, have AI experts embedded within them, so they understand how these models work and where they might fail.”
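Taken together, the hybrid model Biddle describes can be sketched as two independent layers, where agreement, partial agreement, or silence each routes a transaction differently and a human owns every flagged case. The rules, thresholds, and country codes below are illustrative assumptions only.

```python
# Illustrative hybrid routing: explicit rules plus a model score.
def rule_hits(tx: dict) -> list:
    """The traditional layer: explicit, auditable, rules-based conditions."""
    hits = []
    if tx["amount"] > 10_000:
        hits.append("amount over reporting threshold")
    if tx["country"] in {"XX", "YY"}:  # placeholder watchlist codes
        hits.append("high-risk jurisdiction")
    return hits

def route(tx: dict, model_score: float) -> str:
    """Combine both layers; a person owns every flagged or ambiguous case."""
    hits = rule_hits(tx)
    if hits and model_score >= 0.5:
        return "escalate: rules and model agree -> senior investigator"
    if hits or model_score >= 0.8:
        return "review: one layer fired -> analyst examines the disagreement"
    return "pass: periodically sampled for human audit anyway"

print(route({"amount": 15_000, "country": "XX"}, model_score=0.2))
```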