By Foundry Labs
Yesterday at AML Edge, Mark Monfort, Director of AI & Technology Strategy at Foundry Labs, delivered a clear and confronting message: financial crime has already changed, and most compliance frameworks have not kept pace.
What institutions are now facing is not simply an increase in fraud volume, but a structural shift in how financial crime is executed. We have entered what can only be described as an AI vs AI environment, where criminal capability is scaling faster than institutional defence, and where traditional assumptions about risk, identity and trust are being tested in real time.
Fraud No Longer Scales with People
Historically, financial crime was constrained by human effort. Expanding a fraud operation required more people, more coordination, and more time. That limitation has now disappeared.
AI has introduced automated crime infrastructure, allowing a single actor to replicate what previously required entire teams. Scam messages can be generated at scale, synthetic identity documents can be produced instantly, and deepfake voice and video tools can convincingly impersonate trusted individuals. The defining shift is that fraud now scales with compute rather than headcount, fundamentally changing its cost, speed and accessibility.
This transition has lowered the barrier to entry while accelerating the pace of iteration. Criminals no longer need to be highly sophisticated; they simply need access to increasingly capable tools.
The Emergence of an AI vs AI Arms Race
The result is not just more fraud, but a continuous arms race between offensive and defensive capabilities. On one side, AI is being used to generate scams, construct identities and replicate human behaviour with increasing precision. On the other, institutions are attempting to respond using systems that were largely designed for a pre-AI environment.
This is no longer a dynamic of humans detecting fraud supported by technology. Instead, it is AI systems generating attacks and AI systems attempting to detect them, with human oversight sitting above both. That shift introduces both opportunity and risk, particularly where legacy infrastructure struggles to adapt at the same pace.
When Trust Becomes the Attack Surface
Recent cases demonstrate how materially this landscape has changed. In a widely reported incident, a finance employee joined what appeared to be a legitimate video call with their CFO and senior leadership team. Every participant was a deepfake. The employee, relying on visual and auditory cues, authorised a transfer of $25 million.
No systems were breached and no technical controls were bypassed. The attack succeeded because it replicated trust signals, including authority, familiarity and urgency, with a level of realism that traditional controls are not designed to detect.
This reflects a broader trend in which fraud is shifting away from system vulnerabilities and towards human decision-making, particularly at senior levels where the consequences are highest.
Identity and KYC Under Pressure
At the same time, identity itself is being redefined. Rather than simply impersonating real individuals, AI is enabling the creation of entirely synthetic identities that are internally consistent across documents, behaviours and data points.
This presents a significant challenge for traditional KYC frameworks, which often rely on the assumption that a real person exists behind the data being verified. As synthetic identities become more sophisticated, that assumption becomes increasingly unreliable, exposing gaps in existing verification models.
Why Traditional AML Systems Are Struggling
Many AML systems were designed around static rules, retrospective analysis and high-volume alert generation. While effective in a previous era, these approaches are now showing clear limitations.
High false positive rates, often exceeding 90 per cent, consume significant investigative capacity. Manual workflows slow response times, fragmented data prevents a holistic view of risk, and rules-based systems struggle to detect emerging fraud typologies that do not match historical patterns. The result is that compliance teams spend disproportionate time clearing low-value alerts rather than identifying genuine risk.
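To make the limitation concrete, here is a minimal sketch of the kind of static, threshold-based rule many legacy monitoring systems rely on. The rule names, thresholds and transactions are entirely hypothetical; the point is that a fixed rule flags routine activity while missing behaviour deliberately structured to sit just beneath it.

```python
# Illustrative sketch (not any vendor's engine): static threshold rules
# of the kind legacy transaction monitoring is built on.
# Rule names, thresholds and transaction fields are hypothetical.

RULES = [
    ("large_cash", lambda tx: tx["type"] == "cash" and tx["amount"] >= 10_000),
    ("rapid_movement", lambda tx: tx["hops_24h"] >= 3),
]

def run_rules(tx):
    """Return the names of every rule a transaction trips."""
    return [name for name, check in RULES if check(tx)]

transactions = [
    {"type": "cash", "amount": 10_500, "hops_24h": 0},  # routine business deposit: alerts
    {"type": "wire", "amount": 9_900, "hops_24h": 1},   # structured under the threshold: silent
]

for tx in transactions:
    print(tx["amount"], run_rules(tx))
```

The first transaction generates an alert an investigator must clear; the second, structured to avoid the rule, passes untouched. That asymmetry is exactly the false-positive and blind-spot problem described above.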
In an environment where fraud evolves rapidly and operates at machine speed, this reactive model becomes increasingly difficult to sustain.
Where AI Creates Practical Value
The role of AI in AML is not to replace compliance teams, but to enhance their ability to operate effectively at scale. When deployed correctly, AI acts as an intelligence layer across existing systems, connecting data sources, surfacing patterns and reducing complexity.
This allows investigators to move away from alert clearing and towards meaningful risk analysis. AI can prioritise high-risk cases, identify hidden relationships between entities, detect anomalies beyond predefined rules and assist in summarising complex transaction histories.
One of the most immediate and measurable benefits is the reduction of false positives. By triaging alerts and clustering related activity, AI enables teams to focus on a smaller number of higher-quality cases, with organisations already seeing significant reductions in manual review time.
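The triage-and-cluster pattern can be sketched in a few lines. This is a simplified illustration, not a production model: the risk features, weights and threshold are placeholder values, and a real system would learn them from labelled outcomes rather than hard-code them.

```python
# Hypothetical triage sketch: score each alert on a few risk features,
# cluster alerts that share an entity, and surface one case per cluster
# so investigators review one case instead of many duplicates.
# Feature names, weights and the threshold are illustrative only.
from collections import defaultdict

def score(alert):
    weights = {"new_counterparty": 2.0, "high_risk_geo": 3.0, "velocity_spike": 2.5}
    return sum(w for feat, w in weights.items() if alert.get(feat))

def triage(alerts, threshold=2.5):
    clusters = defaultdict(list)
    for a in alerts:
        clusters[a["entity"]].append(a)
    cases = []
    for entity, group in clusters.items():
        best = max(score(a) for a in group)
        if best >= threshold:  # low-scoring clusters are deprioritised
            cases.append({"entity": entity, "score": best, "alerts": len(group)})
    return sorted(cases, key=lambda c: -c["score"])

alerts = [
    {"entity": "acct-1", "velocity_spike": True},
    {"entity": "acct-1", "velocity_spike": True, "high_risk_geo": True},
    {"entity": "acct-2", "new_counterparty": True},  # scores 2.0, below threshold
]
cases = triage(alerts)
print(cases)  # one case for acct-1 covering both of its alerts
```

Three raw alerts collapse into a single prioritised case, which is the mechanism behind the reduction in manual review time described above.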
Integration, Not Replacement
A common misconception is that adopting AI requires a complete rebuild of existing systems. In practice, most successful implementations involve layering AI on top of existing infrastructure, rather than replacing it.
This intelligence layer ingests data from transaction monitoring systems, KYC records and behavioural signals, applies advanced analysis, and feeds insights back into investigator workflows. The strategic question is therefore not whether to replace systems, but where intelligence can be introduced to create the greatest impact.
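The layering pattern can be sketched as a thin join over systems that already exist. All source names and fields below are placeholders for whatever transaction monitoring, KYC and behavioural systems an organisation runs; nothing here describes a specific product.

```python
# Hedged sketch of the "intelligence layer" pattern: read from existing
# systems, join signals per entity, and feed an enriched insight back
# into the investigator queue. Sources and field names are placeholders.

def intelligence_layer(tm_alerts, kyc_records, behaviour_signals):
    kyc = {r["entity"]: r for r in kyc_records}
    behaviour = {b["entity"]: b for b in behaviour_signals}
    insights = []
    for alert in tm_alerts:  # existing monitoring stays the system of record
        e = alert["entity"]
        insights.append({
            "entity": e,
            "alert": alert["reason"],
            "kyc_risk": kyc.get(e, {}).get("risk", "unknown"),
            "behaviour_shift": behaviour.get(e, {}).get("shift", False),
        })
    return insights

queue = intelligence_layer(
    [{"entity": "acct-7", "reason": "threshold breach"}],
    [{"entity": "acct-7", "risk": "high"}],
    [{"entity": "acct-7", "shift": True}],
)
print(queue[0])
```

Nothing upstream is replaced: the monitoring system still raises the alert, and the layer simply enriches it before an investigator sees it.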
Governance, Trust and the Next Phase of Compliance
As AI becomes more embedded in decision-making processes, governance becomes central. Regulators are increasingly focused on explainability, auditability and accountability, requiring organisations to demonstrate how decisions are made and who is responsible for them.
Emerging approaches, including the use of zero-knowledge proofs, are beginning to address these challenges by enabling organisations to verify that checks have been performed correctly without exposing sensitive data. This represents a shift towards a model of compliance based not on assumed trust, but on verifiable outcomes.
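A toy example conveys the shape of "verify without exposing". The snippet below uses a salted hash commitment, which is emphatically not a zero-knowledge proof (a real deployment would use a proper ZK proving system); it only illustrates the idea that a reviewer can confirm a recorded check matches the data an institution holds without that data being published. The record text and identifiers are invented for illustration.

```python
# Toy stand-in for the verify-without-revealing idea using a salted
# hash commitment. NOT a real zero-knowledge proof -- production
# systems use dedicated ZK proving libraries. Record contents are
# hypothetical.
import hashlib
import secrets

def commit(record: str) -> tuple[str, str]:
    """Publish the digest; keep the salt and record private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + record).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, record: str) -> bool:
    """Later, reveal salt + record to a reviewer to prove the commitment."""
    return hashlib.sha256((salt + record).encode()).hexdigest() == digest

record = "KYC check passed for customer 4711 on 2024-05-01"
digest, salt = commit(record)
print(verify(digest, salt, record))          # matches the committed record
print(verify(digest, salt, record + " (edited)"))  # any tampering fails
```

The crucial difference in a genuine zero-knowledge scheme is that the verifier is convinced the check was performed correctly without the record ever being revealed at all, which is what makes the approach attractive for sensitive compliance data.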
Where to Start
For most organisations, the challenge is not recognising the opportunity but knowing where to begin. The most effective approach is to focus on high-impact pain points, such as investigator workload, document verification and the detection of missed patterns, and to build capability incrementally.
Targeted deployments, including investigator copilots, AI-driven document verification and anomaly detection overlays, can deliver immediate value while creating a foundation for broader transformation over time.
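As a flavour of the anomaly-detection overlay, here is a minimal sketch that flags transactions deviating sharply from an account's own history, with no predefined rule naming the pattern. The z-score cutoff and amounts are illustrative; real overlays use richer models over many features.

```python
# Minimal anomaly-detection overlay (illustrative): flag amounts that
# deviate sharply from an account's own history. The cutoff and the
# sample amounts are hypothetical.
from statistics import mean, stdev

def flag_anomalies(history, recent, z_cutoff=3.0):
    mu, sigma = mean(history), stdev(history)
    # Flag any recent amount more than z_cutoff standard deviations from the mean.
    return [amt for amt in recent if sigma and abs(amt - mu) / sigma > z_cutoff]

history = [120, 95, 130, 110, 105, 125, 90, 115]  # the account's usual range
print(flag_anomalies(history, [118, 4_800]))  # -> [4800]
```

The ordinary payment passes silently while the outlier is surfaced, even though no rule was ever written for it. That is the sense in which such overlays catch typologies that do not match historical patterns.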
The Future of Financial Crime Detection
Financial crime detection is moving towards a model that is adaptive, verifiable and collaborative. AI will continue to evolve alongside emerging threats, but its effectiveness will depend on how well it is integrated with human expertise and supported by systems that can be trusted.
The future is unlikely to be defined by AI alone, but by the combination of human judgement, machine capability and cryptographic assurance. Organisations that recognise and act on this shift early will be better positioned to respond to a threat landscape that is already moving at a fundamentally different pace.
As highlighted at AML Edge, the objective is not to replace compliance teams, but to enable them to operate effectively in an environment that is evolving faster than traditional systems were ever designed to handle.