Why the Fight Against Fraud Must Shift from a Control-Based Approach to a Detection-Based One
Artificial intelligence is profoundly transforming identity fraud. Technologies for generating images, videos, and voices now allow fraudsters to create credible identities, manipulate identity documents, and bypass biometric controls with increasing ease.

This phenomenon is accelerating rapidly. According to the 2025 Identity Fraud Report (Entrust), a deepfake attempt occurs every five minutes, while digital document forgeries have increased by 244% in one year.
These figures illustrate a major shift: identity fraud is becoming an industrialized phenomenon, fueled by AI and organized on a large scale.
In this context, the central question for financial institutions and digital platforms is no longer simply how to strengthen controls, but how to detect fraudulent behavior in an environment where identity artifacts are becoming increasingly credible.
The pitfall of visible checks
Historically, the fight against fraud has been structured around visible checkpoints:
- ID checks
- facial recognition
- KYC procedures during onboarding
These mechanisms remain essential. But they share a fundamental characteristic: they are visible to those undergoing them. In other words, fraudsters can analyse them, test them and gradually learn to circumvent them. A fraudster regularly confronted with a face-matching system will quickly seek to understand how it works. They can then experiment with different methods: deepfakes, face swapping, video injection or biometric replication.
This phenomenon is nothing new. In many areas of security, visible mechanisms are eventually analysed and circumvented. It is precisely for this reason that the fight against fraud cannot rely solely on what might be called the ‘right hand’: visible controls.
The ‘left hand’: invisible detection
An effective anti-fraud strategy actually relies on two complementary aspects:
- The ‘right hand’: verification. These are the visible checks that enable information or an identity to be confirmed.
- The ‘left hand’: detection. This involves analysing signals that are invisible or difficult for fraudsters to interpret.
The difference is crucial. Controls can be observed and circumvented. Detection relies on data patterns and correlations that are difficult to anticipate. In this model, controls play an important but secondary role: they serve to reinforce or confirm a signal detected elsewhere. This approach helps to avoid a constant technological arms race between fraudsters and control systems.
Deepfakes: the new frontier in biometric fraud
The emergence of deepfakes perfectly illustrates this trend. Modern biometric systems now incorporate liveness detection mechanisms, requiring the user to perform an action in real time. These checks make fraud more complex, but they also drive attackers towards more advanced techniques. According to Entrust, deepfakes now account for around 40% of fraud attempts on video biometric systems.
Fraudsters exploit, in particular:
- AI-generated deepfakes
- injection attacks using virtual cameras
- manipulated video streams inserted into capture systems
These techniques enable falsified biometric data to be fed directly into digital identity systems. Fraud is therefore no longer simply a matter of forging a document. It involves manipulating the data streams themselves.
The rise of synthetic identities
At the same time, AI is accelerating the creation of synthetic identities. Rather than stealing an existing identity, fraudsters create a new identity by combining:
- real personal data
- fabricated information
- manipulated or generated documents
AI tools enable these identities to be produced quickly, cheaply and on a large scale. This type of fraud is particularly dangerous because these identities can remain active within an organisation’s systems for a long time before being exploited. Losses associated with synthetic identity fraud are expected to reach tens of billions of dollars in the coming years.
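One practical way to surface synthetic identities is to look for real data points reused across supposedly unrelated records. The sketch below, a minimal illustration with an invented schema (field names like `phone` and `address` are assumptions, not a real data model), groups identity records that share the same attribute value:

```python
from collections import defaultdict

def find_shared_attributes(identities, keys=("phone", "address")):
    """Group identity records that reuse the same attribute value.

    Synthetic identities often recombine a small pool of real data
    points, so heavy reuse of one value across otherwise unrelated
    records is a useful weak signal.
    """
    index = defaultdict(set)
    for record in identities:
        for key in keys:
            value = record.get(key)
            if value:
                index[(key, value)].add(record["id"])
    # Keep only values shared by more than one identity.
    return {k: ids for k, ids in index.items() if len(ids) > 1}

accounts = [
    {"id": "A1", "phone": "555-0100", "address": "12 Oak St"},
    {"id": "A2", "phone": "555-0100", "address": "98 Elm Ave"},
    {"id": "A3", "phone": "555-0199", "address": "12 Oak St"},
]
shared = find_shared_attributes(accounts)
# A1 and A2 share a phone number; A1 and A3 share an address.
```

In production this kind of correlation would run over much larger datasets, typically with fuzzy matching rather than exact equality, but the principle is the same: the signal lies in the links between identities, not in any single record.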
AI for detection
In light of these developments, the challenge is not simply to use AI to strengthen visible controls. A more effective strategy involves deploying AI at the heart of detection mechanisms. Fraud does not usually manifest itself as a single anomaly. Rather, it emerges through an accumulation of weak signals:
- inconsistencies in data
- atypical behaviour
- correlations between accounts or identities
- similar activity patterns
Analysing these signals requires processing large amounts of data and detecting patterns invisible to the human eye. It is precisely in this area that artificial intelligence can deliver the greatest value. This approach fits within the DIKW (Data – Information – Knowledge – Wisdom) conceptual framework, where data analysis gradually transforms raw data into actionable knowledge.
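The accumulation of weak signals can be sketched as a simple weighted score, where no single signal is conclusive but their combination crosses a review threshold. The signal names, weights, and threshold below are purely illustrative, not a production model:

```python
def risk_score(signals, weights):
    """Sum the weights of the weak signals that fired.

    Each signal alone is inconclusive; it is the accumulation
    that pushes an identity above the review threshold.
    """
    return sum(weights[name] for name, fired in signals.items()
               if fired and name in weights)

# Illustrative weights for the weak signals listed above.
WEIGHTS = {
    "data_inconsistency": 0.30,
    "atypical_behaviour": 0.25,
    "shared_device": 0.35,
    "similar_activity_pattern": 0.20,
}

signals = {
    "data_inconsistency": True,
    "atypical_behaviour": False,
    "shared_device": True,
    "similar_activity_pattern": True,
}

score = risk_score(signals, WEIGHTS)   # 0.30 + 0.35 + 0.20
REVIEW_THRESHOLD = 0.60
needs_review = score >= REVIEW_THRESHOLD
```

Real systems typically replace the fixed weights with a trained model, but the design point survives: the scoring logic and thresholds stay server-side and opaque, so a fraudster probing the visible controls learns little about why an attempt was flagged.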
A multi-layered anti-fraud architecture
To be effective, a modern anti-fraud strategy must combine several layers of protection:
- document analysis
- facial biometrics
- device and behavioural intelligence
- geolocation and velocity analysis
- identity correlation
- repeat fraud detection
The aim is not to increase the number of visible checks, but to combine multiple signals that make fraud detectable without being easily observable. This approach helps maintain a vital balance: protecting organisations against fraud whilst minimising friction for legitimate users.
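Of the layers listed above, velocity analysis is perhaps the easiest to make concrete. A minimal sketch, assuming a per-device sliding window with an invented class name and illustrative thresholds, could look like this:

```python
from collections import defaultdict, deque

class VelocityMonitor:
    """Sliding-window velocity check: flag a device (or IP) that
    submits too many onboarding attempts in a short period.

    Window size and attempt threshold are illustrative values.
    """

    def __init__(self, window_seconds=3600, max_attempts=3):
        self.window = window_seconds
        self.max_attempts = max_attempts
        self.events = defaultdict(deque)

    def record(self, device_id, timestamp):
        """Register an attempt; return True if velocity is exceeded."""
        q = self.events[device_id]
        q.append(timestamp)
        # Drop attempts that have fallen out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_attempts

monitor = VelocityMonitor(window_seconds=3600, max_attempts=3)
# Timestamps in seconds: four attempts from one device within an hour.
flags = [monitor.record("device-42", t) for t in (0, 600, 1200, 1800)]
# flags -> [False, False, False, True]: the fourth attempt is flagged.
```

The point of combining such layers is that each one is cheap to evade in isolation, but evading all of them at once forces the fraudster into patterns that are themselves detectable.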
Building trust in digital identity
In an increasingly digital world, identity verification remains a critical stage in the user journey. Onboarding often presents the first opportunity to build trust in an identity. Modern digital identity verification solutions now combine multiple sources of information and detection mechanisms to assess the overall risk associated with an identity.
Platforms such as Oneytrust’s D-Risk Commerce and D-Risk ID follow this approach by orchestrating various risk signals to enhance fraud detection without compromising the user experience. The aim is not to replace controls, but to integrate them into a broader detection strategy that is less predictable for fraudsters.
Conclusion
The rise of AI-generated deepfakes marks a new stage in the evolution of identity fraud. Fraud is no longer just a matter of forged documents or stolen data. It is becoming a systemic phenomenon combining synthetic identities, AI-generated content and automated attack infrastructures. In this context, organisations must move beyond an approach focused solely on visible controls.
True effectiveness lies in the ability to detect fraudulent patterns through data analysis and the identification of weak signals.
In other words, the fight against fraud is not waged with the right hand of control alone. It is waged above all with the left hand of detection.

