
Implementing AI for AML: Tips for Success and Pitfalls to Avoid


Our Chief Solution Officer at Hawk has extensive experience implementing AI for anti-financial crime.

During his time as Group Head of Compliance Product Management at HSBC, Michael Shearer led the creation of an industry-first cloud-based machine learning platform for financial crime detection.

Here he shares some of his lessons on how to implement AI within your AML and fraud prevention operation – and what you should avoid:

  1. DO use AI for any task you perform repeatedly on the same inputs. AI excels at tasks of this nature.
  2. DO check, before you predict with AI, that you don’t already hold the data you’re looking for. Data often gets buried in an organization, so a search can save you from wasting time predicting something you already have.
  3. DO invest in data first and case outcome labelling second. What did your investigators find? Did they find suspicion? What sort of suspicion? That information is gold for a machine.
  4. DO differentiate between rule parameters and AI features. You’re probably used to rules having certain parameters. AI models have features, which are similar, but there's a crucial difference: an AI algorithm may ignore a feature because it doesn't correlate with the outcome it’s trying to predict.
  5. DO beware of time travel. It’s possible to train a machine learning algorithm on data that spans multiple time periods, effectively letting it see into the future. If you do this, your model looks very clever on your test data. However, it will perform poorly in production because it won’t have the future view it had in training.
  6. DO demonstrate risk coverage equivalence at the aggregate level, rather than rule-by-rule. A side-by-side comparison of rules and AI can lead to a dead end. Instead, evaluate risk coverage in aggregate: are you catching more financial crime at the global level?
  7. DO be clear about material change. AI is still risk-based detection. Your model will go stale if you don’t retrain it regularly. On the other hand, you don’t want to lose control of what your model is doing. You need a level of model governance. It’s about finding the sweet spot between control and innovation.
  8. DO evaluate performance fairly. Human investigators make mistakes, and machines do too. Take care not to compare your AI’s performance against an unachievable ideal.
  9. DO use rules. Don't throw the baby out with the bathwater. There is still a place for rules. You may want to alert on a particular type of behavior, regardless of customer identity, context, or any other factor. A rule is still the best way to do this. Be mindful of this approach, as it can cause an increase in false positives. 
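The “time travel” warning in point 5 comes down to splitting training and test data by time, not at random. Here is a minimal sketch in Python; the record fields and the cutoff date are illustrative assumptions, not Hawk’s implementation:

```python
from datetime import date

# Hypothetical alert records: (alert_date, features, label).
alerts = [
    (date(2023, 1, 5),  {"amount": 900},   0),
    (date(2023, 2, 11), {"amount": 15000}, 1),
    (date(2023, 3, 2),  {"amount": 300},   0),
    (date(2023, 4, 20), {"amount": 9800},  1),
]

# Split by a time cutoff rather than at random, so the model is
# trained only on data that predates everything it is tested on.
cutoff = date(2023, 3, 1)  # assumed cutoff; align with your review cycle
train = [a for a in alerts if a[0] < cutoff]
test = [a for a in alerts if a[0] >= cutoff]

# Sanity check: no training record may postdate the earliest test record,
# otherwise the model "sees the future" and test metrics are inflated.
assert max(a[0] for a in train) < min(a[0] for a in test)
```

A random shuffle of the same records would scatter future alerts into the training set, which is exactly the leakage that makes a model look deceptively strong in testing.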

On the other hand:

  1. DON’T rush the process. Getting data takes time. It’s not just “wrangling” the data; you need to approve and develop it as well. Getting your data ducks in a row before implementation will help smooth the rest of the process.
  2. DON’T judge too soon. AI usually creates a sizeable uplift at implementation, but the measurable rate of improvement can plateau or even dip. However, even after a dip, AI is still much more effective and efficient than using legacy rules-based technology. Make sure to control for any leveling off in your analysis of AI effectiveness.
  3. DON’T train too soon. If you train an AI model on cases you haven’t worked to conclusion, you may find that the results get skewed. When you train on unclosed cases, the machine has only seen half the story. Waiting until you have a robust set of completed cases before training should prevent this issue.
  4. DON’T train on insufficient casework. If a case has been detected for one reason, and then gets escalated or filed for another reason, don't try to train an AI model on that data. If the reason for the escalation is something the machine doesn't have a data point for, then it will flag incorrectly. Weed these cases out and only train on the cases where the machine has all the data that it needs to learn.
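Points 3 and 4 above amount to a filtering step before training: keep only closed cases whose outcome the model can actually learn from. A minimal sketch, where the field names and the closed/escalation checks are illustrative assumptions:

```python
# Hypothetical case records; field names are assumptions for illustration.
cases = [
    {"id": 1, "status": "closed", "detected_for": "structuring",
     "escalated_for": "structuring", "label": 1},
    {"id": 2, "status": "open", "detected_for": "velocity",
     "escalated_for": None, "label": None},            # unclosed: half the story
    {"id": 3, "status": "closed", "detected_for": "velocity",
     "escalated_for": "sanctions", "label": 1},        # escalated for another reason
    {"id": 4, "status": "closed", "detected_for": "structuring",
     "escalated_for": None, "label": 0},
]

def trainable(case):
    # DON'T train too soon: only fully worked cases carry a reliable label.
    if case["status"] != "closed" or case["label"] is None:
        return False
    # DON'T train on insufficient casework: drop cases escalated for a
    # reason the model has no data point for (here, an escalation reason
    # different from the detection reason).
    if case["escalated_for"] is not None and case["escalated_for"] != case["detected_for"]:
        return False
    return True

training_set = [c for c in cases if trainable(c)]
```

Cases 1 and 4 survive the filter; case 2 is still open and case 3 was escalated for a reason outside the detection data, so both are weeded out before training.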

AI can deliver significant benefits in increasing the efficiency and effectiveness of your AML and fraud prevention activities. If you’d like to know more about how Hawk can help you on your AI journey, request a demo.

Hawk’s AML Technology

Hawk’s AML technology empowers FIs to detect more suspicious activity using fewer resources. We support FIs in their pursuit of AML compliance and risk management goals. Hawk’s technology combines data science and AML expertise to maximize its fitness for financial crime detection. Our AI technology is fully explainable, providing natural language explanations and probabilities based on specific risk factors.

Want to see a demo? Contact us today.
