22 August 2025

AI at Oneytrust: technology at the heart of our DNA!

In the fight against fraud, the playing field is constantly changing. Fraudsters are now exploiting generative AI to industrialise phishing, create credible impersonations and automate attack scenarios. The result is a constant game of cat and mouse, where speed of adaptation makes all the difference. At Oneytrust, we have a simple belief: to counter AI-powered attackers, we must use AI better than fraudsters — without sacrificing customer experience or regulatory compliance.

[Image: a brain split in two, one half human and one half digital, linked to data. Caption: "AI detects. Humans understand. Together, they secure."]

1) Why AI has become essential in the fight against fraud


The boom in language models and content generation tools has lowered the barriers to fraud: flawless phishing emails, carefully crafted fake profiles, synthetic identities that are harder to detect, industrialised fraud scenarios and reusable attack scripts. Static controls are no longer sufficient. We need systems that can learn, detect weak signals and evolve continuously. AI is not a gadget: it is essential for detecting anomalies, linking scattered events, and responding in real time without hindering legitimate processes.

2) Oneytrust’s AI toolkit


Over the years, Oneytrust has built a range of scores to analyse digital events, identify fraud patterns and prioritise actions. Our scores aggregate more than 200 signals and are based on a shared database covering more than 15 million consumers, authorised by the CNIL (French Data Protection Authority) in 2013 — well before the GDPR.
Beyond fraud at a given moment, we observe velocity: how a user, payment method, device (phone, PC, etc.) or email address behaves over time. This intelligent memory makes it possible to streamline the journey of good customers (controlled reuse of acquired trust) and tighten the noose on risks (accumulation of clues, inconsistencies, anomalies). Our rule engines remain adaptive: we add, adjust and remove rules as trends evolve, to keep pace with changing attacks without compromising the customer experience.
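The velocity idea described above can be sketched in a few lines. This is a minimal illustration, not Oneytrust's actual implementation: the class name, window size and identifiers are assumptions for the example.

```python
from collections import defaultdict, deque

class VelocityTracker:
    """Track how often an identifier (email, device, payment method)
    appears within a sliding time window - a classic velocity signal."""

    def __init__(self, window_seconds: int = 3600):
        self.window = window_seconds
        self.events = defaultdict(deque)  # identifier -> event timestamps

    def record(self, identifier: str, timestamp: float) -> int:
        """Record an event and return how many events fall in the window."""
        q = self.events[identifier]
        q.append(timestamp)
        # Drop events that have aged out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q)

# Three orders from the same email within an hour raise the count;
# older events stop counting once they leave the window.
tracker = VelocityTracker(window_seconds=3600)
tracker.record("user@example.com", 0)
tracker.record("user@example.com", 600)
print(tracker.record("user@example.com", 1200))  # 3
```

In practice such counters feed the scoring models as features: a low, stable velocity supports reusing acquired trust, while a sudden spike is one of the accumulated clues that tightens the noose on risk.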

3) Humans + AI: augmented expertise, not automated


AI identifies, humans understand. Our investigators and fraud experts play a crucial role in challenging alerts, contextualising cases, and revealing emerging patterns that models have not yet learned (low volumes, ambiguous signals, attacker innovations). This feedback loop feeds the models, improves the rules, and keeps decisions explainable. In particularly sensitive situations, human arbitration protects the customer relationship, avoids false positives, and maintains trust.
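One common way to implement this division of labour is score-band triage: clear-cut cases are automated, and only the ambiguous middle band is routed to an investigator. The function name and thresholds below are illustrative assumptions, not Oneytrust's actual decision logic.

```python
def triage(score: float,
           accept_below: float = 0.2,
           reject_above: float = 0.9) -> str:
    """Route a fraud score to an outcome.

    Low scores are accepted and high scores rejected automatically;
    the uncertain middle band goes to a human investigator, whose
    verdict can later feed back into the models and rules.
    """
    if score < accept_below:
        return "accept"
    if score > reject_above:
        return "reject"
    return "human_review"

print(triage(0.05))  # accept
print(triage(0.55))  # human_review
print(triage(0.95))  # reject
```

Widening or narrowing the review band is the operational lever: a wider band means more human workload but fewer automated false positives on sensitive cases.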

4) A strategy to acculturate our employees to Generative AI


When ChatGPT was released in 2022, we moved quickly to train our teams in the use of generative AI to increase speed, operational efficiency and quality on non-sensitive tasks: writing, summarising, analysis assistance, preparing deliverables, etc. As part of the BPCE Group, we have access to the secure MAIA portal, which allows our employees to use reference models (e.g. GPT-4o, Mistral, Gemini) in a safe and ethical environment. In our training courses and with the help of our Generative AI Ambassadors, we emphasise the importance of maintaining a critical perspective on AI-generated results (systematic verification, non-disclosure of sensitive information, traceability of uses): the aim is to accelerate useful production, not to automate indiscriminately.

5) Governance & compliance: already prepared for the AI Act!


Because many of our customers are regulated (banks, fintechs, e-merchants), we have long since implemented the safeguards expected by sector regulators: model documentation, decision logging and auditability, data quality, continuous monitoring, and human supervision. The AI Act, which has just come into force, reinforces these requirements: we anticipated this framework and are aligning our work with the BPCE Group’s compliance approach. In concrete terms, this means: system mapping, model risk management, periodic controls, proportionate explainability and team acculturation. Our ambition is to meet deadlines without slowing down useful innovation.
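Decision logging and auditability, mentioned above, boil down to recording, for every automated decision, the inputs, model version and outcome in an append-only form. The record schema below is a generic sketch under that assumption, not Oneytrust's actual audit format.

```python
import json
import time

def log_decision(decision: str,
                 score: float,
                 signals: dict,
                 model_version: str) -> str:
    """Serialise one automated decision as an audit record.

    Capturing the signals seen, the model version and the outcome
    together is what makes the decision explainable and auditable
    later, e.g. for a regulator or a disputed case.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "signals": signals,       # the inputs the score was based on
        "score": score,
        "decision": decision,
    }
    return json.dumps(record, sort_keys=True)

entry = log_decision("human_review", 0.55,
                     {"email_velocity_1h": 3, "device_seen_before": False},
                     "fraud-score-v1")
```

Each line of such a log can then be shipped to write-once storage; periodic controls replay samples of records to check that logged inputs still reproduce the logged decision.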


In conclusion, the fight against fraud is a moving target, not a static one. AI enables earlier detection, better explanation and faster action — provided it is governed, complemented by humans and integrated without disrupting the user experience. At Oneytrust, it is this winning trio — high-performance models, committed experts and solid compliance — that transforms a shifting threat into a sustainable competitive advantage.

Would you like to learn more about our models, which combine more than 200 data types and velocity signals, and benefit from Oneytrust’s Data Consortium of more than 15 million digital identities to secure your customer journeys without losing conversions? Contact us.