
Takeaways on Building Better AI Models: A Framework for Responsible Governance

ACAMS Webinar: Governing and Optimizing Models in Financial Crime

Build better AI, faster — without sacrificing the governance to back it up. That was the theme of Hawk's recent webinar with ACAMS, which brought together a panel of experts to explore the intersection of responsible model development and governance:

  • Moderator: Erica Brackman, Senior Product Marketing Manager at Hawk 
  • Panelist: Adrianna Fabijanska, Global Head of Financial Crime Compliance for Investment Banking at ING 
  • Panelist: Michael Morrison, VP of Compliance Technology Product Management at Wintrust Financial Corporation 
  • Panelist: Kyle Daddio, Partner in Risk Advisory Services at Grant Thornton (US) 

During the discussion, the panelists advocated for a framework of organizational literacy, defensible governance, and adaptive resilience. They also gave concrete examples of how to embed AI models into the very fabric of risk management. 

The following are our key takeaways from the conversation.

What "Good" AI Looks Like: Lifecycle, Ownership, and Clean Data

Michael Morrison, VP of Compliance Technology Product Management at Wintrust Financial Corporation, emphasized that good AI isn't just a model: "It's a lifecycle with clear ownership. If you can't explain how your model is performing today, and not just six months ago, it's not good, no matter how sophisticated it is. Good AI isn't just accurate, it's operationally embedded and defensible. This starts at the point of selecting the right AI model by establishing what problems you're trying to solve with it." 

Adrianna Fabijanska, Global Head of Financial Crime Compliance for Investment Banking at ING, agreed that clarity on 'problems to solve' was critical to success: "We have seen cases, not just from our own organization but from peer learning as well, where organizations are learning the hard way that AI without a defined problem to solve doesn't go far. No clarity, no capability." 

Adrianna also highlighted that successful AI relies on data: “I strongly believe that data quality is the bedrock for deploying AI solutions in an organization. Poor data equals poor AI. If you want to successfully deploy AI, structure your data and work on your data lineage. That will save you from having to explain false positives that wouldn't have occurred had the data been cleansed at the start of the process." 

Erica Brackman, Senior Product Marketing Manager at Hawk, added that while adoption is high, the path to "good" AI is often blocked by governance hurdles: "One of the biggest obstacles to getting a good AI model is governance, understanding and interpreting the model, and the quality of data used to train it. Our research found that these three concerns increase as you gain more experience with AI, because you start to understand where the risks lie and how much they can vary depending on the model type, your systems, your implementation approach, and vendor practices.”

The "Copycat League" and Customizing AI for Risk Appetite

Kyle Daddio, Partner, Risk Advisory Services at Grant Thornton (US), warned against following the crowd: "Everybody is talking about what good AI looks like, but we're also seeing what bad AI looks like. It has almost become a copycat league. People hear that another institution has implemented AI for transaction monitoring or sanctions screening, and suddenly they need to figure out what that institution did and replicate it — or hire someone from their model governance or compliance technology team to come in and do the same thing. What really ends up happening is you're doing what was good for somebody else, not what's good for your organization." 

He suggested a strategic approach: "Take a step back, set your goals, get the board involved, and understand where you want to be in three or five years' time. It's far more valuable than reacting out of fear that other institutions are implementing AI and you'll miss the boat. If you're adopting AI because you want it to have a real impact on your program and you have a clear outcome you want to achieve, then you're more likely to take your time and end up on the right path." 

Erica noted that finding that ‘right path’ often comes down to the quality of the partnership when working with vendors: "There's no point in developing an AI model if you can't integrate it into your own system. From a technology perspective, every vendor claims to reduce false positives. But it's important that the solution is tailored to your organization and your risks. Getting results you can expect and defend is fundamental to both choosing the right vendor and successfully getting AI live."

Developing a Defensible Model Governance Framework

Michael argued that governance is actually a tool for long-term success: "Governance often gives the impression that it slows things down. However, it also makes sure things are sustainable and manageable long term so you're not flying too close to the sun. One typical piece of documentation to include is a clear purpose statement for the model: why are we doing it and what problem is it solving?" 

He noted that a defensible framework must also account for the data's journey and how its success is measured: "Other things to document are data lineage and controls. AI models consume data and they have outputs: where are they getting the data from and how do you trace it? You also need to establish and document performance metrics. How are you certain the model is working as expected? You want to assess: is it accomplishing what you want it to do, and how do you know that?" 

Michael also pointed to change management tracking as the final safeguard: "If things are changing from a business decision perspective, or the model is going through new versioning, you need to document that. Understanding what's changing in your model is critical. And with all of this in mind, tailor your model governance rigor to the risk profile of your policy." 
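The documentation Michael describes (purpose statement, data lineage, performance metrics, and change tracking) can live as structured metadata kept alongside the model itself. A minimal sketch of what such a record might look like; the class, field names, and example values here are illustrative, not something discussed in the webinar:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelCard:
    """A minimal governance record: purpose, lineage, metrics, changes."""
    name: str
    purpose: str                                             # why the model exists, what problem it solves
    data_sources: list[str] = field(default_factory=list)    # lineage: where inputs come from
    metrics: dict[str, float] = field(default_factory=dict)  # how success is measured
    changelog: list[str] = field(default_factory=list)       # versioning and business-decision changes

    def record_change(self, version: str, reason: str) -> None:
        """Append a dated entry so every model change is traceable."""
        self.changelog.append(f"{date.today().isoformat()} {version}: {reason}")

# Hypothetical example entry for a transaction-monitoring model.
card = ModelCard(
    name="tm-anomaly-v2",
    purpose="Reduce false positives in wire-transfer monitoring",
    data_sources=["core_banking.transactions", "kyc.customer_profiles"],
    metrics={"precision": 0.81, "alert_reduction": 0.35},
)
card.record_change("2.1.0", "Retrained on Q3 data after drift review")
```

Even this skeletal form answers the examiner's first questions directly: what the model is for, where its data comes from, how it is measured, and what changed when.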

Erica highlighted that once this strategic foundation is in place, the right technology can accelerate governance requirements: “There are many tools out there to govern AI, but without knowing what you want your model to accomplish, the tool itself can't help. Once that foundation is in place, the right technology can automatically develop documentation, provide performance insights, and surface what’s happening ‘under the hood’ to accelerate the model risk management process.”

Creating Regulator-Ready Internal Capabilities

Adrianna made the case that a "regulator-ready" model isn't just a technical achievement but an organizational one rooted in internal literacy: "Work on building the internal capability to build that understanding and encourage teams to work cross-disciplinary. Without that internal model literacy, it's very difficult to challenge the decisions and to detect any drift or adapt to regulatory change. We are facing unprecedented events that require quick adaptability... and fast adaptation has to be pre-empted by understanding and knowledge." 

She warned that relying on a single technical expert is a significant governance risk during an audit: "Just as much as the person who designed the model knows how it works, if an analyst can't explain why they're making the decision they are — or if an examiner comes and asks a question and there's only one person who can answer it — the AI you've designed is flawed. It lacks the right explainability and documentation to effectively communicate that organizational literacy." 

Michael suggested that "One of the things to pursue is identifying AI solutions with specific, smaller business cases. So, solving a specific challenge at lower complexity and lower risk. I feel that working through smaller cases first is where you'll find all the landmines, and find them on a smaller scale, so you can apply those lessons as you take on larger initiatives. It helps to build confidence with regulators and auditors since you’re being cautious and reasonable, rather than diving in headfirst."

Final Takeaways: Building Resilience and AI Literacy

Adrianna then talked about the importance of pivoting quickly when things go wrong: "Modern risk management is shifting from predictive control to adaptive resilience. It’s no longer just about building the better models. It’s about asking how quickly we can recognize when our assumptions are wrong. We need to align the technical language within the organization to make sure that when we say 'A,' we all mean 'A.' Building that internal capability will scale up your adaptability in this fast-paced environment." 

Michael reminded everyone that AI is a high-stakes tool: "Literacy is crucial both for the people operating the tool and those using the day-to-day output. AI is really just a tool, but all tools require training. To give a dramatic example: a chainsaw can be really helpful, but it can also be really dangerous if you don’t know how to use it effectively. All the stakeholders involved need to understand what the tool is and why it’s being used, otherwise you simply won't be effective at governing it." 

Putting Governance into Practice with Technology

Throughout the discussion, we explored AI governance and internal model literacy. But how do financial crime & compliance teams put it into practice? 

FIs need technology that bridges the gap between complex data science and compliance strategy, to create a shared environment that is accessible to both sides. 

Crucially, this technology must automate the "technical heavy lifting," so financial crime & compliance teams can more independently manage the model lifecycle: 

  • Direct Integration to Existing Systems: Technology should handle the heavy lifting of data pipelines and feature engineering so teams can easily create models using their organization’s own data and deploy seamlessly. 
  • Streamlined Model Development: Tools should be capable of automatically selecting the model architecture and hyperparameters using domain expertise, and conducting initial training where appropriate to ensure consistency and speed. 
  • Exploration and Readiness: Insight graphs for data exploration are crucial for assessing model quality and understanding results. Data exploration helps teams spot initial patterns, data-quality flaws, and distributions, and guides the initial model development strategy. 
  • Automated Documentation: Feature documentation and change tracking should be automated to ensure decisions are documented and readily available during an audit or regulatory review. 
  • Explainable Alert Samples: Development tools should allow users to click into alert samples with clear model explanations, rather than presenting an opaque mathematical equation. 
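The last bullet comes down to decomposing a risk score into named feature contributions an analyst can read. A deliberately toy sketch of the idea, using a linear model so each feature's share of the score is exact; all feature names, weights, and values are invented, and production systems typically use attribution methods such as SHAP rather than hand-rolled weights:

```python
# Invented alert features and model weights for illustration only.
features = {"amount_zscore": 3.2, "new_counterparty": 1.0, "night_hours": 0.0}
weights  = {"amount_zscore": 0.9, "new_counterparty": 1.5, "night_hours": 0.4}

# In a linear model, each feature's contribution is simply weight * value.
contributions = {f: weights[f] * v for f, v in features.items()}
score = sum(contributions.values())

# Present the alert as a ranked explanation rather than a bare number.
explanation = sorted(contributions.items(), key=lambda kv: -kv[1])
print(f"risk score {score:.2f}")
for feature, contrib in explanation:
    print(f"  {feature}: {contrib:+.2f}")
```

The point is the output shape, not the model: an analyst clicking into the alert sees "unusually large amount" and "new counterparty" driving the score, which is the organizational literacy the panelists called for.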

Want to see how Hawk helps FIs achieve all of the above? Join our Product Deep Dive Webinar on March 3, where we'll show you how Hawk’s Analytics Studio helps financial crime and compliance teams solve the challenges of AI model management. 

Register for the Product Deep Dive here. 
