27 January 2026

Agentic AI – what you need to know in 2026

As France settles into its traditional winter sales period, professionals may wonder: the tradition itself is bound to stay, but what about the way people shop during it? Customers long ago switched from the burden of in-store sales to the ease of their online counterpart, and it's fair to say the future points towards ever greater ease in the customer experience. Enter Agentic AI… but just as the sales brought more traffic into physical shops, they also brought their fair share of shoplifters…

AI robot analysing holographic digital interfaces, symbolising Agentic AI and process automation in the fight against online fraud.

Agentic AI – autonomous systems that can observe, decide and act across multiple tools – is about to collide hard with European fraud and payments – on both sides of the fraud fight. For merchants and banks, the real story in 2026 won’t be “magic AI agents that make fraud disappear”, but a messy mix of tougher regulation, smarter attackers, and more opaque vendor black boxes. 

Across Europe, according to the MRC, online merchants are already losing roughly 2.8% of revenue to fraud, with fraud representing about 3% of all orders. Identity fraud and account takeover are surging, fuelled in part by AI that can generate convincing fake identities and social-engineering scripts at scale, according to UK fraud agency CIFAS. Agentic AI simply supercharges that trend on both sides of the fence. 

For merchants and finance companies, though, there is a dilemma: increasingly, buyers and new customers will use AI to seek out and purchase goods and services. 

From now on, you've basically got two new “customer segments” turning up in your data: 

  • Legitimate agentic AI – shopping assistants, payment agents and aggregators acting on behalf of real people. 
  • Malicious agentic AI – computer-using agents hammering your sign-up flows, ID checks and checkouts at scale. 

The job for merchants and financial services providers is to work with their fraud detection companies to pick their way through this minefield. At Oneytrust we have been seeing this emerge over 2025 and like the rest of the world we anticipate a huge uptick in 2026.  

So, what do you not want to block? Agentic AI is already being used as a consumer interface for payments and shopping: 

  • A number of agentic AI options are emerging in Europe, such as Mastercard “Agent Pay”, Visa “Intelligent Commerce”, Amazon “Buy for Me” and Google “Shop with AI”, where agents initiate payments within parameters set by the consumer. 
  • Payments players describe “agentic commerce” as AI agents that hold conditional permissions to shop and pay on a user’s behalf, functionally similar to cards-on-file or recurring payments but with a conversational UX and more decision-making, according to Visa.  

Regulators are behind the curve: PSD2 doesn’t mention AI, and even PSD3/PSR only references AI once for fraud prevention. But consumer agents are happening anyway. 

For a merchant, that means some “bot traffic” is now high-value, compliant traffic (AI doing what a loyal customer asked it to do). These agents will often come from data-centre IPs, headless browsers or shared devices, and will reuse stored credentials and payment instruments – i.e. very similar to the bots you’ve spent so much time trying to deter.  

If you block everything that looks like an agent, you’ll break this emerging channel and annoy the schemes, PSPs and big-tech partners driving it. 

How fraudsters are using agentic AI against merchants and banks 

Several trends are showing up in European and global reporting: 

Credential stuffing & ATO with Computer-Using Agents (CUAs). 

According to a report by Push Security, OpenAI “Operator”-style CUAs can log in to arbitrary web apps, read pages, click buttons and handle full flows like a human – but at bot scale. That lets attackers spray stolen credentials across thousands of sites and then perform in-app actions once they get in. 

AI-scaled phishing and social engineering feeding into payments. 

The European Payments Council’s 2025 threats report flags AI-generated phishing and deepfakes as a key enabler of APP fraud and impersonation scams, making language barriers vanish. 

Account opening and synthetic ID at industrial scale. 

Financial-crime specialists warn that agentic AI will be used to flood banks’ online onboarding with highly consistent, multi-step new account applications, reusing and recombining stolen or synthetic identity elements. 

Payment fraud “speedruns”. 

Arkose Labs talk about agents that skip normal browsing, go straight to high-value endpoints (card testing, gift cards, BNPL, high-ticket items), and adapt in real time if they hit friction. 

Your ID and fraud stack is going to see AI agents trying to open accounts and make purchases 24/7, some benign, some absolutely not. 

Why classic bot defences aren’t enough 

The nasty twist, well-summed up by one recent fraud blog, is that beneficial and malicious agentic AI are technically indistinguishable at first glance. Both: 

  • Run in browsers or CUAs that move the mouse, scroll and click like humans. 
  • Can introduce jitter into timings and keystrokes. 
  • Can respect (or deliberately emulate) your UX flows. 

Old-school bot rules – “data-centre IP = block”, “too fast = bot”, “no mouse = bot” – are blunt instruments here and will kill legitimate payment agents along with attackers. You have to stop thinking “bot vs human” and start thinking in terms of intent and pattern at the identity, device and journey level. 

How to tell good agents from bad ones 

For merchants and banks using an ID+fraud provider like Oneytrust, the detection strategy for this use case looks roughly like this. 

Treat “agent” as its own identity class 

The first step is to explicitly model agent traffic:

  • Tag sessions where the user agent, device behaviour or integration pattern clearly indicates an AI or automation layer (see the sketch after this list).
  • Maintain separate risk baselines for:
      – Human sessions
      – First-party agents (your own app’s automation)
      – Trusted third-party agents (big-tech payments, official partners and the new AI purchase agents listed above)
      – Unknown / suspicious agents
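To make this concrete, here is a minimal sketch of that classification step. The field names, the trusted-agent signature list and the conditions are illustrative assumptions, not a description of any vendor's implementation – in practice the signals would come from your device-intelligence and partner-onboarding layers.

```python
from dataclasses import dataclass
from enum import Enum

class AgentClass(Enum):
    HUMAN = "human"
    FIRST_PARTY_AGENT = "first_party_agent"                  # your own app's automation
    TRUSTED_THIRD_PARTY_AGENT = "trusted_third_party_agent"  # contracted payment / shopping agents
    UNKNOWN_AGENT = "unknown_agent"                           # automation with no declared, trusted origin

# Hypothetical allowlist of agent signatures you have contractually onboarded
# (e.g. declared user-agent strings or signed headers from payment-agent partners).
TRUSTED_AGENT_SIGNATURES = {"examplepay-agent/1.0", "example-shopping-assistant/2.3"}

@dataclass
class Session:
    user_agent: str
    is_first_party_api: bool   # traffic generated by your own app's automation
    is_headless: bool          # headless-browser / CUA indicators from device intelligence
    datacentre_ip: bool

def classify_session(s: Session) -> AgentClass:
    """Assign each session its own identity class instead of a binary bot/human flag."""
    if s.is_first_party_api:
        return AgentClass.FIRST_PARTY_AGENT
    if s.user_agent.lower() in TRUSTED_AGENT_SIGNATURES:
        return AgentClass.TRUSTED_THIRD_PARTY_AGENT
    if s.is_headless or s.datacentre_ip:
        return AgentClass.UNKNOWN_AGENT
    return AgentClass.HUMAN
```

Each class then gets its own risk baseline, so a trusted payment agent is never judged against human browsing norms.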

Legitimate agents tend to be stable over time – same provider, similar IP ranges, same small set of identities, and predictable timing – the same as legitimate people. Malicious agents show sprawl across identities, merchants and institutions – mimicking their fraudster creators.  
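One way to operationalise that contrast is a simple sprawl measure per agent fingerprint. The sketch below counts distinct identities and merchants seen in a window; the event shape is an assumption for illustration.

```python
from collections import defaultdict

def sprawl_metrics(events):
    """events: iterable of (agent_fingerprint, identity_id, merchant_id) tuples
    observed over a fixed window (say, 24 hours)."""
    identities, merchants = defaultdict(set), defaultdict(set)
    for fingerprint, identity_id, merchant_id in events:
        identities[fingerprint].add(identity_id)
        merchants[fingerprint].add(merchant_id)
    # Legitimate agent fleets stay narrow and stable; malicious ones sprawl.
    return {fp: {"distinct_identities": len(identities[fp]),
                 "distinct_merchants": len(merchants[fp])}
            for fp in identities}
```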

Identity and data-level coherence 

This is where your digital-identity layer comes into its own, and Oneytrust’s positioning is crystal clear: as generative AI, deepfakes and synthetic ID fraud rise, static KYC checks are turning into security liabilities. D-Risk ID focuses on the contextual coherence of identity data (phone, email, device, address, etc.) and can cross-validate against a consortium of live, validated identities from major European retailers and banks. 

Against agentic AI, this plays out as follows: 

Legitimate agents will mostly reuse known, well-behaved identities: long history, normal spend patterns, consistent device history, strong matches in consortia data. 

Malicious agents trying to mass-open accounts or test stolen identities will produce: 

  • Many first-seen identities in a short window. 
  • Weak or no matches to real identity graphs. 
  • Synthetic patterns (odd name/email combos, phone/email geography mismatch, disposable infrastructure). 

Your scoring should treat “new identity + agent session” as high-risk by default, unless the identity is clearly rooted in your consortium / historical data. The truth is that legitimate agents actually resemble the patterns and identities of legitimate users. Phew! 
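As a rough illustration of that default, here is a sketch of how such a rule could sit on top of an existing risk score; the weights, cut-offs and field names are assumptions, not Oneytrust's actual model.

```python
def agent_session_risk(agent_class: str, identity_age_days: int,
                       consortium_match: bool, base_score: float) -> float:
    """'New identity + agent session' is high-risk by default, unless the identity
    is clearly anchored in consortium or historical data. Values are illustrative."""
    score = base_score
    is_agent = agent_class != "human"
    is_new_identity = identity_age_days < 30           # illustrative cut-off
    if is_agent and is_new_identity and not consortium_match:
        score += 0.5                                    # push towards step-up or manual review
    elif is_agent and consortium_match:
        score -= 0.1                                    # well-anchored identities earn trust
    return min(max(score, 0.0), 1.0)
```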

Additionally, legitimate payment agents will often come from a small, stable fleet of devices with clear, contractual relationships. You can whitelist those patterns progressively once you’re confident they’re clean. 
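A progressive whitelisting rule might look like the following sketch; all thresholds are invented for illustration and would normally be tuned against your own clean-history data.

```python
def promote_to_trusted(fleet: dict) -> bool:
    """Promote an agent fleet (a stable device/IP set with a contractual relationship)
    to the trusted class only once its observed history is clearly clean."""
    return (fleet["has_contract"]
            and fleet["days_observed"] >= 90            # illustrative minimum history
            and fleet["chargeback_rate"] < 0.001
            and fleet["distinct_identities"] <= fleet["expected_identities"])
```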

On-site behaviour 

This is where malicious agentic AI really gives itself away. 

Tell-tale patterns for malicious agents: they jump straight from entry to login/signup/payment without any browsing or hesitation – no content exploration, just form-filling. They over-index on voucher/code entry, BNPL, gift cards and high-limit products – anything with a better payout per second. Again, they behave in very similar patterns to human fraudsters – just with greater velocity.  

By contrast, legitimate consumer agents do read product content, compare options and respect user preferences set upstream. They often return to the same merchants and categories repeatedly, with low dispute/chargeback rates over time. You can create a pattern that recognises and allows good behaviour and agents just as you do for good buyers.  
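Those contrasting journeys can be captured as simple features over a session's page events, as in the sketch below; the event fields and page-type labels are assumptions for illustration.

```python
def journey_features(events: list[dict]) -> dict:
    """Summarise a session's journey so scoring can separate 'reads and compares'
    from 'jumps straight to the payout'. events: [{"page_type": ..., "t": seconds}, ...]"""
    pages = [e["page_type"] for e in events]
    time_to_checkout = next((e["t"] for e in events if e["page_type"] == "checkout"), None)
    return {
        "product_pages_viewed": pages.count("product"),
        "went_straight_to_target": bool(pages) and pages[0] in ("login", "signup", "checkout"),
        "seconds_to_checkout": time_to_checkout,
        "high_payout_events": sum(p in ("voucher", "gift_card", "bnpl") for p in pages),
    }
```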

In short, your fraud stack should be able to distinguish “agent that behaves like a long-term customer proxy” versus “agent that behaves like a credential-stuffing script with a UI”. 

So what should merchants and banks actually do?

1. Don’t outlaw automation; classify it. 

Build explicit support for “trusted agent” traffic in your risk models instead of treating all non-human sessions as hostile. 

2. Tie everything back to a real identity. 

Lean hard on identity-graph and consortium data: if an agent is acting for identities you can’t anchor in the real world, raise the bar sharply. Talk to Oneytrust about our graph networks.  

3. Use adaptive friction, not blanket blocks. 

For “new identity + agent + high-risk product”, default to extra verification: stronger SCA, additional ID checks, or out-of-band confirmation. EU rules under PSD3/PSR are anyway pushing PSPs towards stronger screening and liability for impersonation fraud – merchants can ride that wave. 
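In code, that adaptive-friction decision can be as simple as the sketch below; the action names and conditions are illustrative, and the real decision would feed your SCA and ID-check orchestration.

```python
def decide_friction(agent_class: str, identity_is_new: bool, product_risk: str) -> str:
    """Escalate verification instead of blocking outright."""
    if agent_class != "human" and identity_is_new and product_risk == "high":
        return "step_up"        # stronger SCA, extra ID checks or out-of-band confirmation
    if agent_class == "unknown_agent":
        return "challenge"      # lighter verification for unvetted automation
    return "allow"
```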

4. Align with EBA remote-onboarding guidance 

The EBA’s remote onboarding guidelines demand robust, risk-sensitive processes for online account opening, including impersonation and ID-forgery controls. That’s your mandate to deploy deeper behavioural and identity checks on agent-driven account creation without breaking legitimate customers. 

5. Instrument and rate-limit flows that agents love 

Sign-up, login, password reset, payment-instrument addition and high-risk product applications should have fine-grained rate limits and anomaly detection specifically tuned for agent-driven traffic, both benign and malicious.
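A fine-grained, per-flow and per-agent-class rate limit could be sketched like this; the limits and key structure are illustrative assumptions rather than recommended values.

```python
import time
from collections import defaultdict, deque

# Illustrative requests-per-hour limits by (flow, agent class); tune against your own baselines.
LIMITS = {
    ("signup", "unknown_agent"): 5,
    ("password_reset", "unknown_agent"): 3,
    ("add_payment_instrument", "unknown_agent"): 2,
    ("signup", "trusted_third_party_agent"): 200,
}

_events = defaultdict(deque)   # (flow, agent_class, source_key) -> request timestamps

def allow_request(flow: str, agent_class: str, source_key: str,
                  now: float | None = None) -> bool:
    """Sliding one-hour window, keyed by flow, agent class and source (IP, fleet or identity)."""
    now = time.time() if now is None else now
    window = _events[(flow, agent_class, source_key)]
    while window and now - window[0] > 3600:
        window.popleft()                               # drop events outside the window
    limit = LIMITS.get((flow, agent_class), 50)        # permissive default for human traffic
    if len(window) >= limit:
        return False                                   # rate-limited: route to review or challenge
    window.append(now)
    return True
```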