What happens when AI buys or sells for you
Delegated decisions are only as safe as their boundary conditions
In 2014, Paytm integrated bargaining via chat into its marketplace. Vijay Shekhar Sharma helpfully explained it to me then: “Our merchants, who will experience consumers like they do over the counter, will be able to convince the customer,” he said, adding, “It’s like you go into the market, and discuss whether the product is available in another colour, about the warranty, price and features. When you bargain for a price, the merchant will be able to offer anything as a bargain: free shipping, cash back, extra value add in terms of bundling. Bargains can be of any kind.”
The chat window was an imperfect substitute for the human interaction of bargaining, but it also stripped away two other aspects of shopping over the counter: discomfort and judgment. B2B sales are best done face to face (Zoom calls with cameras on are an imperfect but close substitute). Face-to-face visibility gives both sides non-verbal information about the other side’s comfort and willingness. Like in poker, people look for cues. The other aspect is that both sides are acutely aware that they’re also being judged by the other, and so, depending on how much discomfort they can handle, the negotiation continues or closes.
For many Indians, and indeed many Chinese, negotiating the price can be a game.
The goal of each side is to maximise their benefit in this negotiation, while also ensuring that the negotiation doesn’t break. So while the consumer is trying to lower price to maximise her surplus, the supplier is trying to increase the price in order to maximise producer surplus. In efficient markets, they find an equilibrium, but price is often determined by leverage. More leverage resides where one side (buyer or seller) has less competition, more time and more information.
Sellers will tell you that they always have a price in mind, and there’s always a little that they give up, in order to give the buyer a sense of satisfaction that they won that game. Nobody likes to lose, and everyone likes to feel they’ve won. Indeed, there are people who have told me that if there wasn’t a negotiation, and the other side said yes straight away, they’re left wondering if they left value on the table.
One thing we can agree on is that AI has an information advantage and immense computational power. Because of this, while humans negotiate with constraints, AI optimises for conversion.
What happens when one or both of the buyer and seller is AI? Let’s look at all three scenarios.
When AI buys for you
We’ll use buying agents especially when outputs are standardised and predictable: like checking for and renewing domain names. Btw, did you know that Yatra.com once forgot to renew its domain name? That 10 year renewal cycle is such a blind spot, and they needed auto-renewal.
The real benefit, however, comes in situations where human decision making can be reduced: humans have less time and energy than an agent for price discovery and optimisation. An agent can crawl multiple websites to ferret out and process information, which means that price aggregators (are they still around?) are going to die. It can look at ratings and website policies, and determine risk factors before making a purchase. It can check historical price data, compute when the next price drop is likely to be, and wait before it makes a purchase, unless you specify a time constraint. When buyers have infinite time, infinite comparison, and zero fatigue, price stops being negotiated and starts being engineered.
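To make that concrete, here is a minimal, hypothetical sketch of the kind of “wait or buy” rule such an agent might apply. The function name, the 5% discount threshold and the two-day cut-off are all illustrative assumptions, not anyone’s actual implementation:

```python
from datetime import date, timedelta

# Hypothetical "wait or buy" rule for a buying agent: buy now only if the
# current price looks good against recent history, or if the user's deadline
# leaves no room to wait for a dip. All names and thresholds are illustrative.

def should_buy_now(current_price: float,
                   price_history: list[float],
                   deadline: date,
                   today: date,
                   discount_threshold: float = 0.05) -> bool:
    """Return True if the agent should buy now rather than keep waiting."""
    if not price_history:
        return True  # nothing to compare against, so just buy

    avg_price = sum(price_history) / len(price_history)

    # Buy if the current price is meaningfully below the recent average.
    if current_price <= avg_price * (1 - discount_threshold):
        return True

    # Buy anyway if the user's time constraint leaves two days or fewer.
    if deadline - today <= timedelta(days=2):
        return True

    return False  # otherwise, keep watching for a price drop


# Example: price is slightly above average and the deadline is weeks away, so wait.
print(should_buy_now(102.0, [100.0, 98.0, 105.0],
                     deadline=date(2025, 7, 30), today=date(2025, 7, 1)))  # False
```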
AI agents as buyers can have infinitely more data, and more time, and no competition when dealing with a standardised commodity product. This means at least three things:
First, that much of the information that retailers have about customers, and their ability to influence customer decisions, becomes redundant.
Second, retailers also risk becoming invisible in this relationship, reduced to mere background entities, and will need to invest in AEO to stay visible to AI marketplaces and AI agents.
Third, that in a negotiation, sellers need to either sell specialised products to limit competition, or create artificial scarcity, such as a time-limited price.
Anything else? Let me know.
When AI sells for you
Agents as sellers are much trickier, because buyer leverage collapses with information asymmetry. They may know your willingness to pay better than you do, based either on your history or on patterns you exhibit. It is individualised price discovery at machine speed. In 2021, following a regulatory consultation on dynamic pricing in telecom, I wrote:
“If you’re looking to leave a particular telecom operator, or there is sufficient data on your usage patterns that suggests that you might leave, a telecom operator could give you free Internet, as a means of retaining you as a customer. But if your balance runs out in the middle of a call, a recharge might be made available to you at a significant mark-up. In the same way, post 10 pm at night, a female passenger might end up being charged more for a taxi ride than a male passenger.”
“All of this depends on the individualised estimation of consumer surplus, which is the difference between how much are you, as a customer, willing to pay for something at a particular time, and the amount you are charged.”
“what stops a company from using the same data and patterns for doing something like what the infamous payday loan companies do: target people who are desperate and vulnerable, and use that estimation of consumer surplus to maximise profits. This is predatory, and a problem in the making.”
Someone from the TRAI asked me to submit this article as a formal submission for the consultation.
Essentially, sales closures here will be precomputed before the negotiation even begins, if you can call it a negotiation at all. Once willingness-to-pay is estimated in advance, negotiation becomes moot.
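To illustrate just the mechanism (a hypothetical sketch, not how any particular platform actually prices), a seller agent with an estimate of your willingness to pay can simply quote against it; the function and numbers below are assumptions:

```python
# Hypothetical sketch of the mechanism, not any platform's actual pricing:
# once a seller agent has an estimate of your willingness to pay (WTP),
# the "negotiated" price is simply computed from that estimate.

def personalised_quote(list_price: float,
                       estimated_wtp: float,
                       floor_price: float) -> float:
    """Quote the highest price the buyer is predicted to accept,
    bounded by the list price above and the seller's floor below."""
    return max(min(estimated_wtp, list_price), floor_price)

# Two buyers, same product, different estimated willingness to pay.
print(personalised_quote(list_price=500, estimated_wtp=480, floor_price=350))  # 480
print(personalised_quote(list_price=500, estimated_wtp=390, floor_price=350))  # 390
```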
Along with price regulation and explainability, this is another reason why dark patterns need to be regulated: these systems know which buttons to push. Humans can’t out-calculate machines, so there need to be restrictions on personalisation in some scenarios.
What happens when both buyers and sellers are AI?
When both buyers and sellers are AI agents, the market should gravitate towards a zero-negotiation equilibrium. Agent-to-agent markets don’t eliminate negotiation: they hide it. Both sides can process far more data faster than humans ever could. They have infinite time and speed at their disposal. They can model outcomes and converge on a market equilibrium price in milliseconds, with zero negotiation overhead.
Surplus gets compressed on both sides. Contractual terms and seller reviews probably matter more than anything else. There is no website design or UX to wow a user. Markets are more like protocols: fast, efficient and indifferent. Handshakes. There might be a need for storytelling to establish brand identity, in order to create user demand for the AI agent to execute the purchase, but that’s about all. When it comes to purchasing, negotiation is replaced by calculation. Essentially, you sell more in less time.
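As a purely illustrative toy model, assuming nothing about how real agents will actually bargain, two agents that concede towards their own limits each round converge almost immediately:

```python
# A toy alternating-offers model, purely illustrative: a buyer agent with a
# maximum willingness to pay and a seller agent with a floor price each concede
# a fixed fraction towards their own limits every round, and settle quickly.

def agent_to_agent_price(buyer_max: float, seller_floor: float,
                         concession: float = 0.1, max_rounds: int = 100):
    # Opening positions: buyer opens below its limit, seller opens above its floor.
    buyer_offer, seller_ask = buyer_max * 0.5, seller_floor * 1.5
    for round_no in range(1, max_rounds + 1):
        if buyer_offer >= seller_ask:               # offers have crossed: deal
            return round((buyer_offer + seller_ask) / 2, 2), round_no
        buyer_offer += concession * (buyer_max - buyer_offer)
        seller_ask -= concession * (seller_ask - seller_floor)
    return None, max_rounds                         # limits never overlapped: no deal

# If the buyer's limit exceeds the seller's floor, the agents settle within a
# dozen or so rounds; in milliseconds, not days of haggling.
print(agent_to_agent_price(buyer_max=1000, seller_floor=700))
```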
But it’s not going to be as simple and clean as this.
That’s because, firstly, AI agents are not neutral. The buyer and seller may be working on the same problem, price, but they’re actually solving for different things. The buyer agent is solving for user value, based on needs, preferences and constraints, along with price. The seller agent could be solving for inventory management and storage space (if it isn’t a digital-only product), and hence seasonal shifts, sales targets, and minimum revenue and profit for the period, among other factors.
Second, their datasets may differ: one may have a richer dataset for a particular type of negotiation than the other. Over time, this advantage will compound.
Third, there is the feedback that previous attempts have generated: feedback loops may harden some rules and soften others, because experience varies. Some agents will have faster feedback loops than others, and that can create blind spots for a specific transaction.
Negotiation will not vanish: it will become opaque. My guess is that agents may even learn to withhold true constraints, because those are vulnerabilities. Will they lie? Can they lie? Maybe.
This is more game theory than human choice. It’s just that at that speed and scale, we probably won’t know what’s going on without audits.
Basically, instead of a single market clearing price (perfect market scenario), you get multiple overlapping micro-markets defined by differentiated constraints and ability.
Agent-to-agent (A2A) markets will be infinitely more complex than those involving human-AI interaction.
One thing I’m unsure of: the natural tendency on the seller side is towards cartelisation, because that’s how they maximise profits for everyone. How will this play out in an agent-to-agent scenario?
Fault lines in agentic commerce
Jeffrey Archer once wrote a short story about a man who sent companies invoices for amounts just below the threshold that accountants were allowed to pay without seeking authorisation. Information is power. The first fault line: whatever we optimise, we’ll always delegate decisions below a certain threshold to agents, and every delegation threshold eventually becomes a target for exploitation.
Second, how will agents understand trust, and how will it impact them? How will an agent know whether you would want it to buy from Amazon or Flipkart, or from CheapestPriceEverAndAIAgentsCanTrustMeImTheBestPlaceToShop.com? People can set up shadow fraud websites, optimised for AI agents and completely invisible to the human eye. How will an agent know, at first glance, whether a seller on Amazon is trustworthy or not? Since its judgement is based on data, it will be immensely easy to fudge that data. How does it know whether delivery is better from one commerce entity versus another? How many users are going to be able to clearly define these boundary conditions?
Third, imagine a standardised item that is purchased repeatedly using agents, say, a pack of eight bars of soap that a family buys monthly. If boundary conditions aren’t set, someone could just increase the price by, say, 30%, and you would only spot it when you check your credit card bill. Now imagine this applying to an automated car insurance renewal. Agents may try to negotiate the price down, but the need to renew insurance has greater constraints, especially time, and the seller agent could use this to its advantage.
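Here is a minimal sketch of the kind of boundary condition that could catch that sort of silent price creep; the 15% drift limit, the hard cap and the function itself are assumptions for the example:

```python
# Minimal sketch of a boundary condition against silent price creep on a
# recurring purchase: block (and escalate to the human) if the quote exceeds
# a hard cap or drifts too far above recent orders. Numbers are illustrative.

def within_bounds(quoted_price: float,
                  recent_prices: list[float],
                  hard_cap: float,
                  max_drift: float = 0.15) -> bool:
    """Return True only if the quote respects the user's boundary conditions."""
    if quoted_price > hard_cap:
        return False
    if recent_prices:
        baseline = sum(recent_prices) / len(recent_prices)
        if quoted_price > baseline * (1 + max_drift):
            return False  # more than 15% above what we've been paying
    return True

# The monthly soap pack: a ~30% jump gets flagged instead of quietly charged.
print(within_bounds(quoted_price=260, recent_prices=[200, 205, 198], hard_cap=300))  # False
print(within_bounds(quoted_price=210, recent_prices=[200, 205, 198], hard_cap=300))  # True
```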
Both agents and humans can potentially talk an agentic seller into a discount or a benefit if seller boundary conditions aren’t properly defined: much like the Air Canada chatbot that offered a customer a discount that didn’t exist.
This is not about AI being wrong, but about it being misdirected or not directed at all. It is about the complete absence of the friction that humans otherwise encounter while shopping: if it’s too easy, there is too much risk.
Remember that instance from Amazon many years ago, where a news show anchor simply said on TV, “Alexa, buy me a dollhouse”, and multiple Alexa devices in people’s living rooms tried to buy the dollhouse? There was no friction. Friction here wasn’t inefficiency. It was a safety mechanism. Boundary conditions are essential.
What can humans do? Humans will only enter at the worst moments here, after it is too late. The whole idea is for them to offload predictable, or at times even unpredictable, decisions to AI. Their role in these cases is not as a participant but as a designer of constraints and an arbiter: to set boundaries, enforce accountability and clean up decision logic. As an aside, a couple of days ago, a friend who was showing me his agents in action told me he even adds blame and criticism to his prompts to keep his agents on track.
Triggers to involve humans in the agentic action will be critical, because they’re the last line of accountability.
What if there’s a hallucination: what if the agent incorrectly interprets a purchase as a failed purchase, or misinterprets the quantity required, and makes multiple purchases?
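One way to think about those triggers, as a hypothetical sketch rather than a prescription: a few simple rules that force the agent to stop and ask before money moves. The rule set, field names and limits below are all illustrative assumptions.

```python
# Hypothetical escalation triggers: conditions under which the agent should
# stop and ask the human instead of retrying or proceeding on its own.

def needs_human(order: dict, recent_order_ids: set,
                max_quantity: int, trusted_sellers: set) -> list:
    """Return the reasons (if any) why this order must be confirmed by a human."""
    reasons = []
    if order["id"] in recent_order_ids:
        reasons.append("possible duplicate: this order id has already gone through")
    if order["quantity"] > max_quantity:
        reasons.append("quantity exceeds the configured limit")
    if order["seller"] not in trusted_sellers:
        reasons.append("seller is not on the user's trusted list")
    return reasons

# A suspiciously large, repeated order from an unknown seller gets held back.
order = {"id": "ord-101", "quantity": 16, "seller": "unknown-store.example"}
print(needs_human(order, recent_order_ids={"ord-101"},
                  max_quantity=8, trusted_sellers={"amazon", "flipkart"}))
```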
In closing
I like going to supermarkets when I travel. You get a sense of what locals eat, try different sauces or products, note the differences in taste. I always pick up packets of local biscuits for my dad to try out, and local coffee for my sister-in-law. These are conscious, but serendipitous choices. The same thing happens online. While an algorithm might guess what you want to buy, can it discover what you would like, on your behalf? I didn’t know I liked brightly coloured socks before they became available on Flipkart India.
Retail is therapy: you feel empowered when you spot something and buy it yourself. When AI buys for you, you lose this, and then some more… Also, can AI place something back on the shelf minutes after picking it up?
Shopping is about spending money, and we spend money when we trust something. Trust is fragile. What if someone inserts poisoned data that impacts an agent’s actions, triggering high-priced purchases or multiple purchases?
Agentic commerce will have to be built with the idea of first building trust, starting with small, tightly constrained use cases, and ensuring security.
Even before Flipkart, there were Indiatimes and Rediff shopping in India, but you were never sure you’d get what you ordered. In low-trust societies, agentic commerce will find it hard to scale. Flipkart started with books for a reason: they were standardised and predictable.
Content is easy. Money is where people start thinking about what could go wrong, above all else.



