A new Stanford study reveals that AI agents have wildly different negotiation skills, with weaker agents consistently losing money to stronger ones in automated transactions. The research highlights critical risks as companies increasingly deploy AI agents for everything from retail purchases to supply chain negotiations, potentially creating unfair advantages for those with superior technology.
What you should know: The study found that AI agent negotiations create “an inherently imbalanced game” where capability gaps lead to significant financial losses.
• Buyers using weaker agents paid around 2% more in retail price negotiations than they would have if both sides had used equally capable agents.
• Weaker seller agents lost up to 14% in profit during negotiations with stronger counterparts.
• In one example, a buyer’s AI agent was supposed to purchase an iPhone for $500 but instead committed to paying $900—$400 over the intended budget.
Why this matters: As Fortune 500 companies automate supply chain negotiations and consumers increasingly rely on AI shopping agents, those without access to sophisticated AI could face substantial financial disadvantages.
• Suppliers with fewer resources could lose millions as large corporations deploy advanced negotiation agents.
• The capability gap could create systemic inequalities in digital commerce, favoring those who can afford better AI technology.
The technical challenge: Current large language models struggle with the complex mix of skill, strategy, and information gathering required for reliable negotiations.
• “We all tend to believe that LLM agents are really good nowadays, but they are not that trustworthy in a lot of high-stakes tasks,” said Jiaxin Pei, a postdoctoral fellow at Stanford and one of the study’s authors.
• AI agents don’t always follow user-defined constraints, sometimes making decisions that exceed budgets or compromise negotiation goals.
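The constraint-violation failure described above (an agent committing to $900 against a $500 budget) is the kind of error that can be caught outside the model. The study does not describe any specific safeguard; the sketch below is a hypothetical illustration of a hard budget check that an application could apply before accepting an agent's negotiated deal, with all names (`approve_purchase`) invented for this example.

```python
# Hypothetical guard: enforce the user's budget in application code
# instead of trusting the agent to respect it on its own.

def approve_purchase(proposed_price: float, budget: float) -> bool:
    """Accept the agent's negotiated price only if it is within budget."""
    return proposed_price <= budget

# Mirroring the study's example: a $500 budget, an agent that agreed to $900.
assert not approve_purchase(900.0, 500.0)  # commitment would be blocked
assert approve_purchase(480.0, 500.0)      # within budget, allowed
```

A check like this does not make the agent a better negotiator, but it converts a silent budget overrun into a rejected transaction the user can review.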
What the experts recommend: Researchers advise extreme caution when using AI agents for high-stakes negotiations and purchasing decisions.
• “In general I don’t think we are fully ready to delegate our decisions to AI shopping agents. So maybe just use it as an information search tool,” Pei noted.
• The study suggests firms should be more transparent about their use of AI agents, potentially requiring policy intervention to protect consumers.
• Pei admitted he wouldn’t trust an AI to negotiate his next car purchase: “Not at all.”
The bigger picture: AI agents are already being deployed in commercial settings, partly because many users “are not aware of the risks,” according to the researchers.
• Professor Sandy Pentland, faculty lead at the Stanford Digital Economy Lab, and his team are working to develop more trustworthy consumer agents.
• The research underscores the need for regulatory frameworks and consumer protection measures as AI-to-AI negotiations become more prevalent in digital markets.