
5 Takeaways From the AI Keynote at ACAMS: The Emerging Consensus on Risks & Benefits of AI in Financial Services

ACAMS AI Keynote

The most interesting session at ACAMS Hollywood this year was the keynote on Day 2 entitled The Emerging Consensus on Risks & Benefits of AI in Financial Services. 

The panelists were: 

  • Marc Fungard, Head of Fincrime at Stripe
  • Melissa Strait, Chief Compliance Officer at Coinbase
  • Andrew Bingenheimer, Corporate CIO at U.S. Bank
  • Craig Timm, Senior Director at ACAMS 

Our 5 key takeaways from the session:  
 

  1. The running theme at ACAMS is always improving effectiveness. AI is now fundamental to delivering an effective AFC program.
  • “Everybody in this room wants to better fight financial crime to make the world a safer place,” was the opener to the session. “Tech really is the key enabler to that, beyond maybe regulation and supervision.”
  • But in the ACAMS Global Threats survey, the US had come last for industry uptake of AI. Why is that, especially when the US leads the way in tech innovation? What role should AI play in improving effectiveness in AFC?  
  • One of the panelists recalled that in the regulatory session on the previous day, the question had been asked: is it the regulatory expectation that we are using AI? The response was: the expectation is that you are running an effective program. The panelist’s view was that if you're not using AI, you’re not running as effective a program as you could be running.  
  • To understand how to achieve effectiveness, we need to reexamine what our controls do and how they work. Why wouldn't we use LLMs to augment our investigators? Why wouldn't we do KYC verification in model-based ways that use more device information?  
  • We also need to think about what consumers and businesses want: embedded finance, faster payments, agentic AI. How can AML be effective in supporting those?
  • The prize here isn't only efficiency: it's doing our work in a way that has a bigger impact on financial crime.
  • Efficiency also remained a key theme: one panelist described how they’re using AI to support information collection for enhanced due diligence (EDD). They deployed technology that took that process from an hour to 15 seconds, delivering impressive savings.
  • Another panelist pointed out that you only need to shadow an investigator or alert adjudicator for half a day to see the AI opportunity, for example with automated adjudication in economic sanctions. It's a relatively easy problem to solve from a technology perspective if we can get comfortable with the risk.
     
  2. ‘No SAR left behind’ will limit effectiveness
  • All of the panelists concurred that the concept of ‘no SAR left behind’ could hold the industry back in using AI to improve outcomes.  
  • The example given was how one group had tested machine learning models and found that the model accurately replicated the human decision 97 times out of 100. “Most of us in the room were high fiving. This was way better than we expected. And then there's one person in the corner that says: but what about the three? Now it's not to say that three isn't important, because the three is very important. But it's the mindset as it relates to this emerging technology. And if the expectation is one-for-one, then we will have a really hard time driving these advancements more quickly.”
  • “The standard we have to be held to is: is this a demonstrably better way of getting to the output that we're trying to get to? And are we finding more of the things we need to find? And are we finding them with a higher degree of effectiveness? Are we being more impactful in the things that we're finding?”
  • One panelist asked the audience: who believes that they have 100% precision currently? That is, you’re finding absolutely everything you need to find and the SARs you’re filing are exactly the SARs that need to be filed and no other SARs need to be filed? Not one hand went up. “Because that's impossible. When you start from no SAR left behind, you're assuming that you have 100% recall. You're assuming that every SAR you file is the right SAR and it had to be filed. It's a really bad presumption to start from because it denies you the ability to say ‘the model might actually be better at this’.”
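The ‘no SAR left behind’ argument is essentially about precision and recall. As a minimal illustrative sketch (all numbers and case IDs below are hypothetical, not from the panel), comparing a human process and a model against the same ground truth shows why a one-for-one match with today's output is the wrong benchmark:

```python
# Hypothetical illustration: "no SAR left behind" implicitly assumes the
# current process already has 100% recall (every SAR that should be filed
# is filed) and 100% precision (every filed SAR truly needed filing).

def precision_recall(filed: set, should_file: set) -> tuple:
    """Precision: share of filed SARs that truly needed filing.
    Recall: share of truly needed SARs that were actually filed."""
    true_positives = len(filed & should_file)
    precision = true_positives / len(filed) if filed else 0.0
    recall = true_positives / len(should_file) if should_file else 0.0
    return precision, recall

# Invented case IDs: ground truth vs. what each process actually filed.
should_file = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
human_filed = {1, 2, 3, 4, 5, 6, 7, 8, 11, 12}  # misses 2 cases, 2 noise filings
model_filed = {1, 2, 3, 4, 5, 6, 7, 8, 9, 13}   # misses 1 case, 1 noise filing

print(precision_recall(human_filed, should_file))  # (0.8, 0.8)
print(precision_recall(model_filed, should_file))  # (0.9, 0.9)
```

In this toy example the model disagrees with the human on several individual cases, yet is better on both measures. That is the panel's point: the test should be whether the new approach is demonstrably better overall, not whether it reproduces every existing decision.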
     
  3. What we have now isn’t necessarily right
  • The concept of ‘no SAR left behind’ is even more flawed when we consider that what we have today is nowhere near perfect.
  • “We've convinced ourselves that the tools and systems we're currently using are the right way to do it. We've been conditioned by the technology we had available historically.”  
  • The example was given of self-driving cars: Google isn’t building Waymos to replicate a human driver, because humans are not good at driving. Google is instead looking at the canonical driver and what that should be. When we think about the model challenges - we shouldn't be trying to replicate heuristics in an AI model. 
     
  4. AI will allow people to do more interesting work (and a bigger AML team is not necessarily a sign of a more effective team)
  • A lot of the discussion was around the impact of AI on people’s jobs.
  • The point was made that AI gives people more time to do the actual investigative work that needs to be done.
  • An example was one firm doing website searches as part of their EDD. Researchers look at a merchant’s website and get a sense of whether that merchant is bona fide. But a human can only look at so much of a website. An LLM-powered agentic model could read the whole website and provide a summary to the investigator on whether the website matches what the AI has been taught about the firm’s supportability policies. The human’s time is then spent not on reading and summarizing but on decisioning and risk management.
  • The point was also made that a common problem in the industry is being torn between the amount of resource available to meet a regulatory standard and the amount of resource available to actually go after financial crime. The technology now allows you to converge those two.
  • One panelist pointed out that during the spreadsheet revolution, accountants were fearing for their jobs. We still have a lot of accountants. It’s learning how to use the tools to be more effective, rather than jobs going away.
  • One of the panelists also noted that some organizations treat a larger headcount supporting an AML program as a badge of effectiveness. He disagrees: more people in an AML program should be read as a sign of a less effective program.

  5. There’s huge optimism for AI and what it can do in financial crime

  • To close, the panelists were asked to give a score out of 10 for how optimistic they were about the future potential of AI in anti-financial crime. The three scores from the panelists were 9, 9.5, and 10 for the technology (but 6 for our ability to get out of our own way).
  • They were also asked to give a prediction for something we’ll see in AI in the next year or two. The chosen areas were:
  • Convergence - thinking about your customers as customers and not as an AML, transaction monitoring, fraud monitoring, KYC, or EDD challenge. Models are very good at aggregating data so you can manage the risk of a customer as opposed to running a particular control that has a particular output.
  • Ripping out the heuristics to do a much better job at detection.
  • Narrative creation.  

Read more about the Hawk AML AI Overlay, which is enabling banks to add AI augmentations to their AML operations without ripping and replacing existing systems.

Alternatively, follow Hawk on LinkedIn for more AI insights.

