27 February 2026

AI Act and Fraud Prevention: why the market must prepare

Regulation (EU) 2024/1689, known as the ‘AI Act’, entered into force on 1 August 2024; its requirements will be phased in through 2027. The good news is that financial fraud detection is not classified as ‘high risk’ in the final text (Annex III provides an explicit exception). Ignoring the AI Act would nevertheless be a strategic mistake: our regulated clients (banks, fintechs, and supervised e-merchants) will still demand traceability, model governance and team training (‘AI literacy’).

[Illustration: a shield with a padlock and a gavel, symbolising security and regulation, beside a circle of European stars surrounding the letters ‘AI’, on a purple background.]


Find out how Oneytrust is preparing to comply with this new text, which is just as crucial as the GDPR was in its day!

  1. The new broad definition of an ‘AI system’

The text defines an AI system very broadly, as ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)).

This broad technological scope is complex to implement operationally, but the market agrees that it encompasses sophisticated approaches such as machine learning (ML), hybrid expert rule + machine learning approaches, and advanced optimisation.

As fraudsters increasingly weaponise AI, trusted anti-fraud companies such as Oneytrust must keep expanding their own range of AI solutions, the scoring, behavioural alerting and anomaly analysis engines that identify ever more sophisticated fraud patterns.
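To make the idea of an anomaly analysis engine concrete, here is a minimal sketch of statistical anomaly flagging on transaction amounts. It is purely illustrative: the z-score method, the threshold and the sample data are our own assumptions, not Oneytrust’s actual scoring logic (which the article does not describe).

```python
# Minimal sketch: flagging anomalous transaction amounts with a z-score.
# The method, threshold and data are illustrative assumptions only.
from statistics import mean, stdev


def anomaly_scores(amounts: list[float]) -> list[float]:
    """Return the absolute z-score of each transaction amount."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [abs(a - mu) / sigma for a in amounts]


def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[bool]:
    """Flag amounts whose z-score exceeds the threshold."""
    return [score > threshold for score in anomaly_scores(amounts)]


if __name__ == "__main__":
    history = [12.0, 15.0, 11.0, 14.0, 13.0, 950.0]  # one obvious outlier
    print(flag_anomalies(history))  # only the last amount is flagged
```

Production engines rely on far richer features (device, behaviour, velocity) and learned models rather than a single statistic, but the principle of scoring deviations from a baseline is the same.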

  2. Risks: where does fraud detection fit in?

The AI Act classifies certain uses as ‘high risk’ (critical biometrics, employment, education, access to essential services such as credit scoring, health insurance pricing, etc.). These systems remain authorised, but must be accompanied by a robust risk management system and undergo a conformity assessment before being placed on the market.

One notable exception is that AI used to detect financial fraud is not automatically considered ‘high risk’. This reduces the direct regulatory burden, but does not remove the expectations of transparency, data quality and human oversight that regulated institutions contractually pass on to their subsidiaries and suppliers (such as Oneytrust).

  3. What are the key dates to remember?

The main requirements of the AI Act will come into force in stages. Here are the key dates to remember:

  • 2 February 2025: entry into force of the ‘unacceptable risk’ prohibitions + AI literacy requirement (awareness/training for staff involved in AI).
  • 2 August 2025: obligations for general-purpose AI models (GPAI) begin to apply (generative AI).
  • 2 February 2026: European deadline for certain implementing acts (post-market surveillance plans).
  • 2 August 2026: general application of the regulation, including obligations for high-risk AI systems listed in Annex III.
  • 2 August 2027: obligations for high-risk AI systems embedded in regulated products (Annex I), completing the phase-in.

  4. Why prepare even if you are not ‘high risk’?

Fraudsters are industrialising their operations with AI. Deepfakes, forged documents, automated attack scripts, synthetic identities deployed at scale: fraud is scaling up, and those fighting it must adapt just as quickly.

Although fraud prevention is not explicitly classified as high risk, solutions must still respect the fundamental rights and privacy of customers and end users, under both the GDPR and the AI Act. Expectations from regulated partners (mainly banks) are also rising: their own sector-specific requirements demand traceability, thorough documentation and heightened vigilance over the quality of the data used, in order to satisfy their supervisors.

  5. The Oneytrust response

Oneytrust provides identity verification and fraud detection solutions for e-merchants, fintechs and banks, combining AI with 25 years of human fraud-prevention expertise to detect synthetic identities, transactional anomalies and risk signals in real time. As a member of the BPCE Group, we are held to high standards of compliance and model governance.

As with the GDPR, we did not wait for the deadlines to build our expertise on the AI Act: we have been raising awareness and training our staff on these new compliance issues for several years. Contact us to learn more about Oneytrust’s vision for AI compliance and risk management!