
Finding FinCrime When We Don't Know What to Look For

Finding FinCrime with Hawk AI

Financial criminals are becoming increasingly sophisticated, and so are the tools needed to catch them. Trust and transparency are key to successful deployment. AI models that emphasize explainability and auditability act as virtual teammates that work in partnership with their human operators.

Artificial intelligence is an essential tool in the fight against financial crime, but how do we learn to trust the AI super sleuth?

Understanding is the key. AI is a crime-fighting partner, not a magic bullet. Knowing how AI models work to detect FinCrime can help to demystify some common misconceptions and build trust.

Domain Knowledge

The starting point for any good model is an extensive understanding of the model’s context. To ensure quality outputs, we need to make sure that the model is working with good input data. Domain knowledge is critical to this process.

“Today’s approaches to ML and AI are generally domain-agnostic, ignoring domain knowledge that extends far beyond the raw data itself,” said the authors of the Argonne National Laboratory paper AI for Science. “Improving our ability to systematically incorporate diverse forms of domain knowledge can impact every aspect of AI.”

Anomaly detection is at the core of FinCrime models, and it relies on the incorporation of such contextual knowledge. This ranges from the basic characteristics of financial data, such as transaction size, currency, time, and location, to more detailed characteristics and statistical relationships. With an understanding of this contextual information, the AI and FinCrime experts at Hawk AI can prepare the data for analysis by AI models.

Many familiar with data science may know the term “garbage in, garbage out.” This means that the quality of the output of any model is dependent on the quality of the input data. Data scientists decide what features to input into a machine learning model, but they do not decide which features are important to the model. Anomaly detection models determine the important features without data scientists directly steering them to do so. Therefore, preparing the input data with applicable domain knowledge is key.
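
As a minimal sketch of the idea, the example below uses Python with scikit-learn's IsolationForest and a few hypothetical transaction features; it illustrates the general technique of unsupervised anomaly detection over prepared data, not Hawk AI's actual models or feature set.

    # Illustrative only: unsupervised anomaly detection over prepared
    # transaction features. The feature names and model choice here are
    # assumptions for this sketch, not a description of Hawk AI's models.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Hypothetical prepared data: domain knowledge decides which raw fields
    # become model features (amount, time of day, cross-border flag, ...).
    transactions = pd.DataFrame({
        "amount_usd":         [120.0, 87.5, 15000.0, 43.2, 9800.0],
        "hour_of_day":        [14, 9, 3, 16, 2],
        "is_cross_border":    [0, 0, 1, 0, 1],
        "days_since_last_tx": [1, 3, 0, 2, 0],
    })

    # The model determines which feature combinations are anomalous; the
    # data scientist only chooses and prepares the inputs.
    model = IsolationForest(contamination=0.2, random_state=42)
    model.fit(transactions)

    # predict() returns -1 for flagged (anomalous) rows and 1 for inliers.
    transactions["anomaly_flag"] = model.predict(transactions)
    print(transactions[transactions["anomaly_flag"] == -1])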

Knowing that the model has been trained on high-quality data that has been prepared appropriately gives operators the confidence they need to trust its outputs.

Positive Feedback Loops

Once the AI models have been trained, they can learn as they operate. As more cases are reviewed by operators, the model gets better at detecting FinCrime. In this sense, AI isn’t replacing human operators but working in partnership with them — a virtual teammate.

This is fundamentally different from traditional rules-based screening and transaction monitoring, where human operators repeatedly correct the same mistakes with little or no improvement in the overall result. As well as being fruitless, it can be boring, repetitive work.

With an AI model, the human operators are constantly improving the quality and accuracy of the output. The decisions that operators make are incorporated into the model itself. This doesn’t just produce better results for the model, but also helps the human operators to trust the results of AI and machine learning models. They know that their subject matter expertise played an important role in the results.
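
A minimal sketch of such a feedback loop, assuming Python with scikit-learn and hypothetical column names, might look like the following; it shows the general pattern of folding analyst decisions back into the training data, not Hawk AI's specific pipeline.

    # Illustrative operator-feedback loop: decisions on reviewed alerts are
    # appended to the training data before the next retraining cycle.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    def retrain_with_feedback(features: pd.DataFrame,
                              labels: pd.Series,
                              reviewed_alerts: pd.DataFrame) -> RandomForestClassifier:
        """Fold operator-confirmed outcomes into the data and retrain."""
        # reviewed_alerts carries the same feature columns plus the analyst's
        # decision (1 = confirmed suspicious, 0 = false positive).
        updated_features = pd.concat(
            [features, reviewed_alerts.drop(columns=["analyst_decision"])],
            ignore_index=True,
        )
        updated_labels = pd.concat(
            [labels, reviewed_alerts["analyst_decision"]],
            ignore_index=True,
        )
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(updated_features, updated_labels)
        return model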

Proprietary Pattern and Risk Factor Repositories

The positive feedback loops resulting from operator decisions are not exclusive to one financial institution. The more financial crime patterns and risk factors our models detect, the more effective they become across the board. We can use the same risk factors and patterns with multiple financial institutions, so our proprietary repositories grow over time.

Our primary method for sharing and improving patterns across institutions is federated learning. With federated learning, we can train algorithms on multiple servers without sharing any sensitive data between institutions. Here’s how it works:

  1. We develop the model in our central server
  2. Multiple institutions receive a copy of the model
  3. We briefly train the copies of the model on each institution’s data
  4. We send trained models back to the central server
  5. We merge the trained models

As with incorporating operator decision feedback, we can initiate this process again and again, resulting in continuous improvement of our models. These models, once optimized, are stored in a proprietary pattern repository. They can then be used with additional financial institutions, even if they did not participate in the training. This ensures that all the institutions using our models get the best results possible.
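
As a rough sketch of one federated round, assuming simple weight vectors and hypothetical helper names, the cycle above could be expressed as follows; only model weights move between institutions and the central server, never the underlying transaction data.

    # Illustrative federated averaging: institutions train local copies of
    # the central model, and only the trained weights are merged centrally.
    import numpy as np

    def train_locally(global_weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
        """Stand-in for a brief local training pass at one institution."""
        # A single illustrative update step; a real deployment would run the
        # institution's own training loop here.
        return global_weights - 0.01 * np.mean(local_data, axis=0)

    def federated_round(global_weights: np.ndarray,
                        institution_datasets: list) -> np.ndarray:
        # Steps 2-5: distribute the model, train each copy on local data,
        # send the trained copies back, and merge them by averaging.
        local_models = [train_locally(global_weights, data)
                        for data in institution_datasets]
        return np.mean(local_models, axis=0)

    # Step 1: a model developed on the central server (toy weights here).
    global_weights = np.zeros(4)
    datasets = [np.random.rand(100, 4) for _ in range(3)]  # three institutions
    for _ in range(5):  # repeating the cycle yields continuous improvement
        global_weights = federated_round(global_weights, datasets)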

Explainability

Even with appropriate preparation and feedback, it’s difficult to trust the results of an AI model if it’s a black box. Financial regulators increasingly stress the importance of explainable AI for the sake of transparency, and rightly so. Users need to understand how and why an AI model makes decisions in order to trust its outputs. As AI becomes more complex, rules around transparency and responsible development are a topic of growing significance.

At Hawk AI, our models produce natural language explanations for each risk factor whenever an anomaly is detected. In its role as a virtual team member, the model supports the investigation by allowing human operators to see specific transaction and risk factor groupings. The model generates these explanations in plain language that anyone can understand. Operators don’t need to be specialists in data science to understand why the AI identified specific activities as suspicious.
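
A toy sketch of such an explanation, with hypothetical risk factor names, wording, and output format, could look like the following; it is not Hawk AI's actual output.

    # Illustrative only: composing a plain-language alert explanation from
    # detected risk factors. Factor names and wording are hypothetical.
    def explain_alert(transaction_id, risk_factors):
        lines = [f"Transaction {transaction_id} was flagged for review because:"]
        for factor, detail in risk_factors.items():
            lines.append(f"  - {factor}: {detail}")
        return "\n".join(lines)

    print(explain_alert(
        "TX-20931",
        {
            "Unusual amount": "EUR 14,500 is 12x this customer's 90-day average.",
            "Unusual time": "Initiated at 03:12, outside the customer's usual hours.",
            "New counterparty": "First transfer to this beneficiary account.",
        },
    ))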

Auditability

Auditability works together with explainability to promote transparency in FinCrime AI models. Financial institutions need explicit audit trails to show that their models are functioning as intended. If a financial institution can’t rely on an AI solution to provide results that regulators can audit, the solution isn’t useful to them at all.

Auditability is a powerful and important component of trust. Not only should a good model explain itself in plain language, but every decision it makes should also be logged and reproducible. To ensure our models do this, we follow established principles of model governance in the model training process. Based on these principles, we employ the following model governance practices:

  1. Experiment tracking & reproducibility: We automatically track, reference, and index performed actions while producing a model. This documentation of what has been done and when allows us to reproduce models (and their output) for any third party.
  2. Versioning: Each model we produce receives a version number. We do not overwrite old models, and we can reactivate them as needed.
  3. User acceptance: We constantly keep users informed about core metrics and relevant information. This enables the user to make informed decisions about when to activate, deactivate, or re-train a model.

Employing these practices results in models that are open and transparent, promoting a proactive approach to oversight and accountability. In the event of a regulatory audit, clear documentation of how a model was trained and why it made certain decisions will be readily available.
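
As a simplified sketch of what such an audit trail can capture, assuming a plain JSON-lines registry and hypothetical field names, a training run might be logged like this:

    # Illustrative experiment-tracking / versioning entry: each training run
    # is appended to an immutable registry so the model and its outputs can
    # be reproduced and audited later. Field names are hypothetical.
    import hashlib
    import json
    from datetime import datetime, timezone

    def register_model(registry_path, training_config, metrics):
        """Append an audit-trail entry for a newly trained model version."""
        entry = {
            "version": datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S"),
            "trained_at": datetime.now(timezone.utc).isoformat(),
            "training_config": training_config,  # what was done, and how
            "metrics": metrics,                  # core metrics shown to users
            "config_hash": hashlib.sha256(
                json.dumps(training_config, sort_keys=True).encode()
            ).hexdigest(),                       # supports reproducibility checks
        }
        # Old versions are never overwritten; every entry is appended.
        with open(registry_path, "a") as registry:
            registry.write(json.dumps(entry) + "\n")
        return entry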

Model Validation

Before any model is deployed, it goes through a validation process, which looks like this:

  • Stage 1: Set strict criteria for model deployment.
  • Stage 2: Set performance benchmarks for models.
  • Stage 3: Compare new model iterations to previous models.
  • Stage 4: Adjust based on the model’s performance.

When FinCrime investigators know that a model was developed and validated with expertise, they can rest assured that it will generate reliable results.
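
A minimal sketch of such a validation gate, with hypothetical metric names and thresholds, might look like this:

    # Illustrative validation gate: a new model iteration is deployed only if
    # it meets fixed benchmarks and does not regress against the previous
    # model. Metric names and thresholds are assumptions for this sketch.
    def passes_validation(candidate_metrics, champion_metrics,
                          min_recall=0.90, max_false_positive_rate=0.05):
        # Stages 1-2: strict deployment criteria and performance benchmarks.
        meets_benchmarks = (
            candidate_metrics["recall"] >= min_recall
            and candidate_metrics["false_positive_rate"] <= max_false_positive_rate
        )
        # Stage 3: compare the new iteration to the previous (champion) model.
        no_regression = candidate_metrics["recall"] >= champion_metrics["recall"]
        # Stage 4 (adjustment) happens outside this check if either test fails.
        return meets_benchmarks and no_regression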

The Wolfsberg Principles

The Wolfsberg Group, an association of global banks that develops frameworks and guidance for the management of financial crime risks, recently published Principles for Using Artificial Intelligence and Machine Learning in Financial Crime Compliance. These principles aim to help financial institutions achieve fair, effective, and explainable outcomes with their use of AI and Machine Learning.

The principles are as follows:

  1. Legitimate Purpose: Financial Institutions (FIs) should use AI and machine learning for the legitimate purpose of financial crimes compliance, guarding against the potential for data misuse, misrepresentation, or bias.
  2. Proportionate Use: FIs should balance the benefits of AI and machine learning with appropriate management of the risks that may arise from these technologies. FIs should regularly validate the use and configuration of AI and machine learning and ensure that data use is proportionate to the legitimate, intended financial crimes compliance purpose.
  3. Design and Technical Expertise: Teams involved in the creation, monitoring, and control of AI and machine learning technology should be composed of staff with appropriate skills and diverse experiences needed to identify bias in the results. AI and machine learning systems should be driven by a clear definition of the intended outcomes and ensure that results can be adequately explained or proven given the data inputs.
  4. Accountability and Oversight: FIs are responsible for their use of AI and machine learning technology, including for decisions that rely on their analysis, regardless of whether the systems are developed in-house or sourced externally.
  5. Openness and Transparency: FIs should be open and transparent about the use of AI and machine learning, consistent with legal and regulatory requirements. FIs should ensure that this transparency does not facilitate evasion of the industry’s financial crime capabilities, or inadvertently breach reporting confidentiality requirements or other data protection obligations.

By putting these principles into practice, compliance teams can ensure the effective and explainable use of AI and machine learning to detect financial crime.

Building Trust

The effort that goes into developing and validating models creates the foundation for strong and effective partnerships with operators. Together, human investigators and their AI super sleuth teammates can detect anomalies with unmatched speed, accuracy, and transparency.

With criminals now using their own AI, financial institutions of all sizes need to take the threat seriously. Rules-based transaction monitoring systems can’t keep pace with the development of modern FinCrime. Catching these criminals requires a sophisticated suite of tools to detect anomalies as they happen. AI models can learn these techniques and spot them in the future, as well as detect previously unknown risks, all while explaining what they’re doing and why. All of this results in AI models that operators can trust to detect financial crime.

