AI agents that research, negotiate and complete purchases without human involvement are already live in the US. Europe is next — but no regulator has decided who is responsible when an autonomous agent gets it wrong.
QUICK ANSWER
Agentic commerce — where AI agents autonomously research, compare and purchase products on behalf of consumers — is projected to generate $3 to $5 trillion globally by 2030, according to McKinsey. ChatGPT’s Instant Checkout has been live since September 2025, and Google launched its Universal Commerce Protocol in January 2026. But no jurisdiction has enacted regulation specifically addressing autonomous AI purchasing. In the EU, the AI Act, PSD3, GDPR and the Consumer Rights Directive all overlap without clearly answering the central question: when an AI agent makes an unauthorised or harmful purchase, who is liable — the consumer, the AI provider, the merchant or the platform?
What is agentic commerce and why does it matter now?
The shift happened faster than most retailers expected. In September 2025, OpenAI launched Instant Checkout inside ChatGPT, allowing its 900 million weekly users to discover and purchase products without ever visiting a merchant’s website. In January 2026, Google CEO Sundar Pichai announced the Universal Commerce Protocol at the National Retail Federation conference, with Walmart, Target, Shopify and more than twenty other partners backing it. Mastercard has piloted Agent Pay in the UAE, with European rollout expected within months. The global agentic AI market is estimated at $10.86 billion in 2026, according to Precedence Research, and McKinsey projects that agentic commerce could generate $3 to $5 trillion globally by 2030, with AI agents orchestrating up to $1 trillion in US retail revenue alone.
This is not a smarter chatbot. Agentic commerce is a model in which AI agents autonomously handle the entire transaction workflow: finding products, comparing options, checking real-time inventory, applying discounts, processing payment and confirming delivery. The consumer sets the parameters — a budget, a preference, a deadline — and the agent executes. According to IBM, 45 per cent of consumers already use AI for at least part of their buying journey. Forrester predicts that 20 per cent of B2B sellers will face agent-led quote negotiations by the end of 2026. The infrastructure is live. What is missing is the legal framework to govern it.
Who is liable when an AI agent makes a bad purchase?
This is the question that no regulator has fully answered. When a human buys something and it goes wrong, consumer protection law is clear: the buyer initiated the transaction, the merchant fulfilled it, and liability follows established rules. When an AI agent sits between buyer and merchant, those assumptions collapse. McKinsey calls this the “third actor problem” — a non-human entity initiating transactions that existing law was never designed to accommodate.
The liability could fall on the consumer who delegated authority to the agent, the AI provider that built and operates it, the merchant that accepted the transaction, or the platform that facilitated it. As of early 2026, no jurisdiction has enacted regulation specifically addressing agentic commerce. The EU AI Act, the most ambitious AI regulation to date, predates the technology and contains no specific provisions for autonomous purchasing agents. The AI Liability Directive acknowledges that the complexity and opacity of AI make it difficult for victims to identify the liable party, but it was designed for damage caused by AI-enabled products, not for AI acting as an economic agent in its own right. As we explored in our analysis of how EU financial rules could reshape fintech, Europe’s regulatory stack is converging fast — but not fast enough for a technology that is already processing real transactions.
How does Europe’s regulatory framework apply to AI agents?
In Europe, at least five major frameworks overlap without providing a clear answer. The EU AI Act classifies many agentic systems as high-risk if they influence financial decisions or handle sensitive data, triggering obligations around documentation, risk assessments and human oversight. But it does not specify whether human control must be real-time at the moment of purchase. PSD3, expected from 2026, aims to formalise delegated payment initiation and harmonise liability for AI-initiated transactions — but the detail is still being negotiated. GDPR governs how agents process personal data across multiple merchants in a single automated workflow. The Consumer Rights Directive dictates withdrawal and error handling but assumes a human initiated the transaction. The Digital Markets Act may govern how large platforms prioritise agent interactions.
The practical result is a regulatory grey zone. Banks and payment providers are moving to fill it themselves. Visa has introduced its Trusted Agent Protocol for verifying agent identity in real time. Mastercard’s Agent Pay uses Agentic Tokens that tie AI agents to individual users through tokenisation. A new “Know Your Agent” principle is emerging — extending traditional KYC to machine actors — and the European Commission is discussing whether AI agents should be classified as “supervised digital agents,” a new category between humans and machines. As we reported in our coverage of Europe’s $24 trillion breakup with Visa and Mastercard, the battle for control of European payment infrastructure is already fierce. Agentic commerce adds a new front.
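To make the "Know Your Agent" idea concrete, the sketch below shows one way an issuer could cryptographically bind an agent to the user who delegated authority, with a spend limit and expiry the payment network can verify per transaction. This is a minimal illustration assuming an HMAC-signed claims object; the field names and mechanism are hypothetical and do not reflect Mastercard's actual Agentic Token or Visa's Trusted Agent Protocol formats.

```python
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"  # illustrative; real systems use issuer-held key material


def issue_agent_token(user_id: str, agent_id: str, max_amount_eur: float, ttl_s: int = 3600) -> dict:
    """Bind an agent to a user with a delegated spend limit and expiry."""
    claims = {
        "user": user_id,
        "agent": agent_id,
        "max_amount_eur": max_amount_eur,
        "expires_at": int(time.time()) + ttl_s,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def verify_transaction(token: dict, amount_eur: float) -> bool:
    """Check the signature, expiry and spend limit before authorising."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # claims were tampered with, or the token was forged
    c = token["claims"]
    return time.time() < c["expires_at"] and amount_eur <= c["max_amount_eur"]


token = issue_agent_token("user-42", "shopping-agent-7", max_amount_eur=200.0)
print(verify_transaction(token, 150.0))  # True: within the delegated mandate
print(verify_transaction(token, 500.0))  # False: exceeds the delegated limit
```

The point of the design is that liability questions become tractable: a merchant that honours a transaction outside the signed mandate has verifiable notice, while one inside it can point to the user's delegation.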
What are the fraud and security risks of autonomous purchasing?
The risks are structural, not theoretical. Agent impersonation — where bad actors create agents mimicking legitimate purchasing behaviour — is already a concern. Credential harvesting through agent interactions and fraudulent merchant networks designed to exploit autonomous flows are being flagged by compliance teams across the industry. Traditional fraud detection systems were built to flag anomalies for human review, but agents can execute thousands of targeted transactions rapidly, adapting in real time to detection measures. As our coverage of Europe’s €4.2 billion fraud problem detailed, the EU’s instant payments mandate has already compressed fraud screening windows from hours to seconds. Agentic commerce compresses them further — and removes the human from the loop entirely. The EBA’s finding that instant credit transfer fraud is up to ten times higher than standard transfers takes on a new dimension when the entity initiating those transfers is not a person but a piece of software.
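The detection problem above can be sketched in a few lines: classic fraud controls rely on velocity rules, and the simplest adaptation to agent traffic is a sliding-window counter keyed on agent identity. The thresholds and the idea of keying on an agent ID are illustrative assumptions, not any vendor's actual rule set.

```python
import time
from collections import deque


class VelocityMonitor:
    """Toy velocity rule: flag an agent exceeding max_tx transactions per window."""

    def __init__(self, max_tx: int, window_s: float):
        self.max_tx = max_tx
        self.window_s = window_s
        self.history = {}  # agent_id -> deque of transaction timestamps

    def record(self, agent_id: str, now=None) -> bool:
        """Record one transaction; return True if the agent should be flagged."""
        now = time.time() if now is None else now
        q = self.history.setdefault(agent_id, deque())
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_tx


mon = VelocityMonitor(max_tx=5, window_s=1.0)
flags = [mon.record("agent-x", now=0.1 * i) for i in range(8)]
print(flags)  # first five pass, the burst beyond the limit is flagged
```

The limitation is exactly the one the article describes: an adaptive agent can pace itself just under any static threshold, which is why the industry is pairing velocity rules with agent identity verification rather than relying on either alone.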
What does this mean for European businesses?
The commercial stakes are enormous. Bain & Company reports that 50 per cent of consumers remain cautious about fully autonomous purchasing, but adoption is accelerating fastest in exactly the categories where Europe’s digital economy is strongest: groceries, consumer goods, travel and B2B procurement. Gartner predicts AI agents will control $15 trillion in B2B purchasing volumes by 2028. For merchants, the shift is from search engine optimisation to what the industry now calls answer engine optimisation — making product data machine-readable so AI agents can find, evaluate and transact with it. As we examined in our analysis of why 95 per cent of AI pilots fail, the gap between AI experimentation and operational transformation remains wide. Agentic commerce will punish that gap ruthlessly. Merchants whose data is not structured for machine consumption will simply not exist in the agent’s decision set.
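One established way to make product data machine-readable, and a plausible baseline for answer engine optimisation, is schema.org Product markup embedded as JSON-LD. The vocabulary below is the real schema.org standard; the product and its values are invented for illustration.

```python
import json

# A schema.org Product record, invented for illustration. Embedded in a page
# as <script type="application/ld+json">…</script>, it lets an agent read
# price, currency and stock status without scraping free-form text.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Espresso Machine X200",
    "sku": "X200-EU",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "priceCurrency": "EUR",
        "price": "349.00",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_jsonld, indent=2))
```

A merchant whose catalogue exposes structured fields like these is comparable and transactable by an agent; one whose price and availability live only in rendered HTML effectively opts out of the decision set.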
For payment providers, the opportunity and threat are equally stark. OpenAI charges merchants a 4 per cent transaction fee on every Instant Checkout purchase, on top of Stripe’s processing fees. That is a new toll on commerce that did not exist eighteen months ago. As we explored in our analysis of how fintechs are taking on Visa and Mastercard, the payments industry is already being reshaped by regulation and technology. Agentic commerce accelerates both forces simultaneously. The firms that define the trust, identity and liability frameworks for autonomous transactions will set the rules for the next era of European commerce. Everyone else will be playing on their terms.
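The arithmetic of that toll is worth making explicit. The 4 per cent agent-platform fee is from the reporting above; the processor rate in this sketch (1.5 per cent plus EUR 0.25) is an assumed example for illustration, not a quoted Stripe tariff.

```python
def net_revenue(sale_eur: float, agent_fee: float = 0.04,
                proc_rate: float = 0.015, proc_fixed: float = 0.25) -> float:
    """What the merchant keeps after the agent-platform fee and an
    assumed card-processing fee are both deducted."""
    return round(sale_eur - sale_eur * agent_fee
                 - (sale_eur * proc_rate + proc_fixed), 2)


print(net_revenue(100.0))  # 94.25: EUR 5.75 of a EUR 100 sale goes to intermediaries
```

On these assumptions the agent platform takes more than twice what the payment processor does, which is why the fee structure, not just the technology, is drawing merchants' attention.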
The technology is live. The protocols are being built. The regulations are lagging. Europe’s window to shape the rules of agentic commerce — rather than import them from Silicon Valley — is measured in months, not years.
Frequently Asked Questions
What is agentic commerce?
Agentic commerce is a model in which AI agents autonomously research, compare and purchase products on behalf of consumers or businesses. Unlike chatbots that recommend products, agentic systems handle the entire transaction — from discovery to checkout — without requiring human approval at each step. ChatGPT’s Instant Checkout and Google’s Universal Commerce Protocol are the two leading platforms as of early 2026.
Who is liable when an AI agent makes an unauthorised purchase in Europe?
As of early 2026, no jurisdiction has enacted regulation specifically addressing liability for autonomous AI purchases. In the EU, liability could fall on the consumer, the AI provider, the merchant or the platform. The EU AI Act, PSD3, GDPR and the Consumer Rights Directive all overlap without providing a clear answer. PSD3 is expected to formalise delegated payment initiation and harmonise liability rules, but the detail is still being negotiated.
How big is the agentic commerce market?
McKinsey projects agentic commerce will generate $3 to $5 trillion globally by 2030, with AI agents orchestrating up to $1 trillion in US retail revenue. The global agentic AI market is estimated at $10.86 billion in 2026. Gartner predicts AI agents will control $15 trillion in B2B purchasing volumes by 2028. In Europe, full autonomous purchasing rollout is expected within 12 to 24 months.
