Practical Advice for Implementing AI for AML Detection

Artificial Intelligence (AI) improves AML detection processes and operations by reducing false positives, among other use cases. But how should financial institutions (FIs) implement AI? What pitfalls should they avoid?

At a recent webinar with ACAMS, we highlighted the tradeoff between AML risk coverage and alert efficiency, discussed how to overcome the challenges inherent to implementing AI for AML detection, and shared practical steps FIs can take along the way.

Watch the full webinar. (Access with free ACAMS registration.)

Key Topics

  1. Making Tradeoffs: Risk Coverage vs. Alert Efficiency 
  2. Avoiding Common AI Implementation Errors
  3. Employing Practical AI Tips & Tricks

The Expert Panel

  1. Moderator: Tobias Schweiger, Co-founder & CEO, Hawk AI
  2. Speaker: Michael Shearer, Chief Solutions Officer, Hawk AI & Former Group Head of Compliance Product Management, HSBC
  3. Facilitator: Sarah Runge, Executive Managing Director, K2 Integrity

Making Tradeoffs: AML Risk Coverage vs. Alert Efficiency 

AI is not a silver bullet; put simply, it just generates a better set of rules than what you’ve used before. The same tradeoff between AML risk coverage and alert efficiency inherent to rules-based technology also applies to AI. 

High Thresholds, High Alert Efficiency

If we set high thresholds on our rules, we only alert on very egregious behavior. The result? All our alerts are truly suspicious, because a customer must demonstrate extremely suspicious behavior to get above the threshold. The side effect, however, is that risk coverage suffers: the threshold is so high that we miss behavior that is genuinely suspicious but not extreme, opening ourselves to regulatory and reputational risk. In the chart below, this puts us in the bottom-right quadrant.

Low Thresholds, High Risk Coverage

Alternatively, we can significantly reduce thresholds. This improves risk coverage because we catch pretty much every customer who behaves suspiciously. The problem with this method is that many low-risk customer behaviors also sit above the lower threshold. The result? We get poor alert efficiency, i.e., large volumes of false positive alerts. This would put us in the top left quadrant of the graph below.

[Chart: the difference between applying rules only versus rules plus AI to detect financial crime]
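
To make this tradeoff concrete, below is a minimal, purely illustrative sketch in Python; the risk scores and labels are invented for the example, not real AML data. Treating alert efficiency as precision and risk coverage as recall, moving the threshold trades one for the other:

```python
# Purely illustrative: risk scores and labels are invented for this sketch,
# not real AML data.
risk_scores   = [0.2, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95]
is_suspicious = [False, False, True, False, True, False, True, True]

def coverage_and_efficiency(threshold):
    # Alert on every customer whose score clears the threshold.
    alerts = [label for score, label in zip(risk_scores, is_suspicious)
              if score >= threshold]
    true_alerts = sum(alerts)
    risk_coverage = true_alerts / sum(is_suspicious)                 # recall
    alert_efficiency = true_alerts / len(alerts) if alerts else 0.0  # precision
    return risk_coverage, alert_efficiency

for t in (0.5, 0.9):
    cov, eff = coverage_and_efficiency(t)
    print(f"threshold={t}: risk coverage={cov:.0%}, alert efficiency={eff:.0%}")
```

With the low threshold, every suspicious customer is caught but only two thirds of the alerts are genuine; with the high threshold, every alert is genuine but half the suspicious customers are missed.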

The more you de-risk your AML program, the more difficult detection becomes. This remains true when you use AI. What is also true, however, is that AI detects more financial crime while generating fewer false positive alerts, wherever you draw the line between risk coverage and alert efficiency. It’s up to every FI to determine an appropriate balance based on its unique customer portfolio.

Avoiding Common AI Implementation Errors

Even when you know that AI is not a silver bullet, it can be easy to jump to conclusions about the technology’s effectiveness. Avoiding these common errors will help you get the most out of your AI technology and have an accurate view of whether it’s working or not:

  1. Don’t rush the process. Getting data takes time. It’s not just “wrangling” the data; you need to approve and develop it as well. Getting your data ducks in a row before implementation will help smooth the rest of the process.
  2. Don’t judge too soon. AI usually creates a sizeable uplift at implementation, but the measurable rate of improvement can plateau or even dip. However, even after a dip, AI is still much more effective and efficient than using legacy rules-based technology. Make sure to control for any leveling off in your analysis of AI effectiveness.
  3. Don’t train too soon. If you train an AI model on cases you haven’t worked to conclusion, you may find that the results get skewed. When you train on unclosed cases, the machine has only seen half the story. Waiting until you have a robust set of completed cases before training should prevent this issue.
  4. Don’t train on insufficient casework. If a case was detected for one reason but escalated or filed for another, don't train an AI model on that data. If the reason for the escalation is something the machine doesn't have a data point for, it will flag incorrectly. Weed these cases out and only train on cases where the machine has all the data it needs to learn. (See the sketch after this list.)
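
As a rough illustration of points 3 and 4, here is a hedged sketch using pandas; the table, the column names (status, alert_reason, escalation_reason, label), and the values are all hypothetical stand-ins for an FI's real case data:

```python
import pandas as pd

# Hypothetical case data; every column name and value here is an
# illustrative stand-in for an FI's real casework.
cases = pd.DataFrame({
    "case_id":           [1, 2, 3, 4],
    "status":            ["closed", "open", "closed", "closed"],
    "alert_reason":      ["structuring", "structuring", "velocity", "structuring"],
    "escalation_reason": ["structuring", None, "adverse_media", None],
    "label":             [1, None, 1, 0],
})

# Don't train too soon: keep only cases worked to conclusion.
closed = cases[cases["status"] == "closed"]

# Don't train on insufficient casework: drop cases escalated for a reason
# the model has no data point for (here, case 3 was detected for velocity
# but filed because of adverse media found during the investigation).
consistent = closed[
    closed["escalation_reason"].isna()
    | (closed["escalation_reason"] == closed["alert_reason"])
]

# Only labeled, closed, consistent cases remain (cases 1 and 4).
train_set = consistent.dropna(subset=["label"])
```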

These common errors all stem from not training your AI models on quality data. When you do train your AI systems with quality data, you can rest assured that your AI technology will work as intended. You’ll also be more likely to avoid headaches at implementation and beyond. 

“The facts speak for themselves,” said Shearer. “Facts are the best advocates.”

Employing Practical AI Tips & Tricks

  1. Use AI for any task you perform repeatedly based on the same inputs. AI excels at tasks of this nature.
  2. Before you predict with AI, make sure you don’t already have the data you’re looking for. Data can often get buried in an organization, so doing a search can prevent you from wasting time trying to predict something you already have. 
  3. Invest in data first and case outcome labelling second. What did your investigators find? Did they find suspicion? What sort of suspicion? That information is gold for a machine.
  4. Differentiate rule parameters and AI features. You’re likely used to rules having certain parameters. AI models have features, which are similar, but there's a crucial difference: an AI algorithm may ignore a feature because it doesn't correlate with the outcome it’s trying to predict. 
  5. Beware of time travel. It’s possible to train a machine learning algorithm on data that spans multiple time periods, effectively letting it see into the future. If you do this, your machine looks very clever on your test data, but it will perform poorly in production because it no longer has the future view it had in training. (See the sketch after this list.)
  6. Demonstrate risk coverage equivalence at the aggregate level, rather than rule-by-rule. A side-by-side comparison of individual rules and AI can lead to a dead end. Instead, evaluate risk coverage equivalence: are you catching more financial crime overall?
  7. Be clear about material change. AI is still risk-based detection. Your model will go stale if you don’t retrain it regularly. On the other hand, you don’t want to lose control of what your model is doing, so you need a level of model governance. It’s about finding the sweet spot between control and innovation.
  8. Evaluate performance fairly. Human investigators make mistakes, and machines do too. Take care not to compare your AI’s performance against an unachievable ideal. 
  9. Use rules. Don't throw the baby out with the bathwater. There is still a place (albeit a smaller place) for rules. You may want to alert on a particular type of behavior, regardless of customer identity, context, or any other factor. A rule is still the best way to do this. Be mindful of this approach, as it can cause an increase in false positives.
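
On point 5, the simplest safeguard is to split training and test data on time rather than at random. A minimal sketch, again with hypothetical column names and values:

```python
import pandas as pd

# Hypothetical alert history; column names and values are illustrative.
alerts = pd.DataFrame({
    "event_time": pd.to_datetime(
        ["2023-01-05", "2023-02-10", "2023-03-15", "2023-04-20", "2023-05-25"]
    ),
    "amount": [900.0, 15000.0, 250.0, 9800.0, 120000.0],
    "label":  [0, 1, 0, 1, 1],
})

# No time travel: train only on events that happened before everything
# in the test set. A random (shuffled) split would leak future behavior
# into training and inflate test performance.
cutoff = pd.Timestamp("2023-04-01")
train = alerts[alerts["event_time"] < cutoff]
test  = alerts[alerts["event_time"] >= cutoff]
```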

Hawk AI’s AML Technology

Hawk AI’s AML technology empowers FIs to detect more suspicious activity using fewer resources. We support FIs in their pursuit of AML compliance and risk management goals. Hawk AI’s technology combines data science and AML expertise to maximize its fitness for financial crime detection. Our AI is fully explainable, providing natural language explanations and probabilities based on specific risk factors.

Want to see a demo? Contact us today.

