Reasoned Insights: 16 to 30
Distrust Is a Feature, Not a Bug
This is the last batch of insights from the first 10 essays. As I’d mentioned earlier, from here on, I’ll do this five essays at a time, especially given overlaps. In case you missed them, here is the first set of 15.
How to read these: The insights are numbered, and the number of the last insight in the post is on the featured image, so it’s easy to locate when you’re scanning posts. This is a slow read: you might want to read the original article an insight is drawn from before returning to this. Every read may surface something new for you: something you missed, or a disagreement. Especially when you disagree, please write to me. These are lines on the beach, not something set in stone, so you may wash them away. :)
Insights will eventually become a paid feature, but it’s free for now.
If you like Reasoned, please do consider supporting it: INR / USD
16. Context selection determines outcomes once memory is abundant: Once a system can remember everything, the real decision is not just what it knows but what it chooses to treat as causal. If a system has eight years of biometrics, incidents, and routines, it still has to decide which history is “alive” and which is effectively archival. Context also gets lost in chat compression, and prioritisation takes place during that compression. These choices directly shape outputs, yet users typically cannot see or interrogate them. Memory introduces a new kind of governance problem: not whether the system recalls, but how it forgets, reinterprets, prioritises, or downgrades. Based on: The product challenges that ChatGPT Health will have to navigate
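To make the governance point concrete, here is a deliberately naive sketch of what any context selector has to do once history exceeds the window. Everything in it is an assumption for illustration: the scoring blend, the 90-day half-life, and all the names are invented, and none of it describes how ChatGPT Health actually works.

```python
# Hypothetical sketch of insight 16: given more history than fits in a
# context window, *something* must score and select. Every constant below
# is an editorial assumption, invisible to the user.
from dataclasses import dataclass
import math, time

@dataclass
class MemoryItem:
    text: str
    timestamp: float   # when it was recorded (epoch seconds)
    salience: float    # 0..1, e.g. flagged incidents score higher
    tokens: int        # cost of including it in the context

def select_context(items: list[MemoryItem], budget: int,
                   half_life_days: float = 90.0) -> list[MemoryItem]:
    """Greedily pack the highest-scoring memories into a token budget.

    Recency decays exponentially, so an eight-year-old reading is
    effectively 'archival' unless its salience keeps it 'alive'.
    """
    now = time.time()

    def score(m: MemoryItem) -> float:
        age_days = (now - m.timestamp) / 86400
        recency = math.exp(-age_days * math.log(2) / half_life_days)
        return 0.5 * recency + 0.5 * m.salience  # arbitrary blend

    chosen, used = [], 0
    for m in sorted(items, key=score, reverse=True):
        if used + m.tokens <= budget:
            chosen.append(m)
            used += m.tokens
    return chosen
```

Every constant in that function is a judgement about whose history stays “alive”, and nothing in the product surface exposes those judgements to the user.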
17. Distrust Is a Feature, Not a Bug: ChatGPT Health, and by extension AI, is most valuable to users who know how to distrust it. This inverts conventional product design logic, where trust is something systems try to maximise unconditionally. Here, trust must be calibrated, not maximised. AI operates in probabilistic space, but user behaviour often treats outputs as deterministic. Power users compensate by challenging assumptions, requesting corroboration, and validating with doctors. Mainstream users will not. This creates a structural risk: the system’s safety depends on user literacy rather than system guarantees. From a product standpoint, this is precarious. A health system cannot rely on adversarial users to function safely at scale. Yet removing the need for scepticism entirely risks overconfidence. Designing for productive distrust, without overwhelming or alienating users, may be the central challenge of AI health products. Based on: The product challenges that ChatGPT Health will have to navigate
18. Orchestration layers create a new kind of monopoly power: control over the sequencing of actions, not the services themselves. When an agent decides “what happens next,” it becomes the real interface between intent and the entities that provide the services or tools. The Manus description emphasises internal planning, tool selection, and intermediate-state memory. Structurally, this means leverage migrates to the layer that sequences actions: it can reorder steps, choose which vendors are called, and decide when to stop. It can consequently decide who not to call. That’s power without ownership: the orchestrator can extract value while keeping the underlying services commoditised. Based on: AI Agents, and why Meta acquired Manus
19. Humans become arbiters of agentic action: In agentic commerce, humans no longer act as participants in everyday decisions. They become designers of constraints and arbiters of failure. Instead of choosing products, comparing prices, or negotiating terms, people define boundary conditions, set escalation triggers, and step in only when systems break. This is a fundamental shift in agency. Decision-making moves out of the moment and into configuration. The risk: intervention typically happens only after damage is visible, after a bill spikes, a renewal goes wrong, or fraud is discovered. By then, the system has already acted. This demands a new kind of literacy from users and organisations alike: not how to shop better, but how to articulate limits, acceptable loss, and failure conditions. In agentic systems, safety needs to be pre-engineered. Based on: When AI buys and sells for you
20. Outcome ownership is an unresolved issue: If AI advice shapes routines, medication adherence, or lifestyle changes, it participates causally in outcomes. Existing liability frameworks are poorly equipped for this grey zone between tool and advisor. The same applies to agentic e-commerce purchases, where users are not necessarily involved in decision-making, and where hallucination can be expensive. Health and money (commerce/payments) create risks that largely do not exist for content-based outputs. Until liability is clarified, the system will remain both powerful and precarious. Based on: The product challenges that ChatGPT Health will have to navigate and When AI buys and sells for you
21. Serendipity is the hidden casualty of agentic commerce: Agentic commerce struggles to replicate the serendipitous discovery of purchases. Humans stumble into preferences through exploration, not optimisation. Agents are excellent at buying what you already want. They are poor at discovering what you might love. Over-optimisation risks narrowing taste, reducing experimentation, and flattening demand. Retail is not just a transaction; it is an experience as well as an act of discovery. Based on: When AI buys and sells for you
22. Friction is a safety feature, not inefficiency: The recurring feature of AI systems is the collapse of friction and the absence of boundary conditions. AI doesn’t put a product back on the shelf after picking it up, and agentic commerce operates without human hesitation, second thoughts, or effort; in fact, it removes them by design. What feels like convenience is also the removal of resistance. Whether it is the Alexa dollhouse incident, automated renewals, or unexpected chatbot discounts, each indicates that friction is a safeguard. Safer systems may need to be deliberately slower, noisier, or more interruptive, as the sketch below illustrates. Based on: When AI buys and sells for you
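As a purely illustrative sketch of what “pre-engineered safety” (insight 19) and deliberate friction could look like in code: the limits, names, and policy here are invented, not drawn from any real agent framework.

```python
# Illustrative only: thresholds and policies are made up for this sketch.
from dataclasses import dataclass

@dataclass
class PurchaseGuardrail:
    per_item_limit: float = 50.0   # auto-approve below this
    daily_limit: float = 200.0     # hard stop above this
    spent_today: float = 0.0

    def review(self, item: str, price: float) -> str:
        """Return 'approve', 'escalate' (deliberate friction), or 'block'."""
        if self.spent_today + price > self.daily_limit:
            return "block"      # failure condition defined up front, no silent retry
        if price > self.per_item_limit:
            return "escalate"   # slow path: a human arbitrates before the agent acts
        self.spent_today += price
        return "approve"

guard = PurchaseGuardrail()
print(guard.review("printer ink", 32.0))  # approve
print(guard.review("dollhouse", 160.0))   # escalate: friction by design
```

The “escalate” branch is the deliberately slower, noisier path: the boundary conditions and acceptable loss are articulated in configuration, before the agent ever acts.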
23. Training turns piracy from a distribution shortcut into a low-cost input source: When models can be trained on pirated material without needing to distribute it, piracy stops being about free access for users and becomes a way to cheaply acquire high-value inputs. The economic value is captured at ingestion, not at publication. At scale, this favours actors who can ingest large volumes of content at near-zero acquisition cost. Those who rely on licensed or permitted data face structurally higher input costs, even though the training benefit is similar once the data is absorbed.
The implication is that piracy no longer competes with legitimate markets downstream: it undercuts them upstream, reshaping cost structures before any product reaches users. Based on: Theft and Data Mining in AI
24. Free inputs entrench incumbents, not democratise innovation: The claim that Text and Data Mining exceptions are necessary to preserve innovation masks a deeper structural effect: these exceptions advantage incumbents by reducing their marginal input costs, thereby reinforcing their market power. If data is free and the primary constraint is compute, only firms with massive infrastructure and distribution reach can fully capitalise. Far from democratising innovation, a Text and Data Mining exemption creates a high fixed-cost, low marginal-cost regime. This is exactly the kind of structure that enables monopolistic behaviour. Startups and smaller firms, unable to afford that level of compute, are effectively locked out. Contrary to rhetoric, paid access to data could restore competitive balance by forcing all players to prioritise quality over quantity, and encourage innovation in smaller, domain-specific models. The so-called “collateral damage” of enforcing copyright is not damage; it’s friction that prevents extractive scale and allows diverse participation. TDM doesn’t fuel innovation. It fuels consolidation. Based on: Theft and Data Mining in AI
25. Training turns diverse sources into interchangeable inputs: When models are trained on millions of texts, images, and videos at once, the system stops “seeing” where anything came from. Distinct voices, editorial choices, and cultural contexts are absorbed as raw material for pattern learning, not as differentiated sources. At scale, this destroys the value of diversity. A carefully produced article and a low-effort post both contribute signal, but neither retains its identity once training is complete. What matters is coverage and volume, not viewpoint or intent. The implication is that while diverse content still feeds the system, it no longer earns a premium. All content is the same when it has to be chewed up and shat out. Based on: Theft and Data Mining in AI and AI and fragility of creation
26. At scale, AI is imitation that starts replacing creators: Individual imitation has always existed, but it was limited by time, effort, and reach. AI removes those limits, allowing the same voice, look, or skill to be reproduced everywhere at once, cheaply and continuously. At scale, this changes how markets respond. Users gravitate toward what is most available and cheapest, not what is most original. The cumulative effect is replacement, even when no single output feels decisive on its own. The implication is that scale transforms imitation from a niche behaviour into a market-wide substitute. Based on: AI and fragility of creation
27. AI replaces creators by intercepting demand, not just by copying outputs: AI systems don’t need to reproduce content to undermine it. By answering questions, summarising, or completing tasks directly, they capture user attention before it ever reaches the original source. This shifts value and power away from producers and toward whoever controls the interaction layer. Even accurate summaries and clear attribution fail to restore the lost relationship, because the user’s need is already satisfied. The implication is that demand flows to whoever sets the defaults, not necessarily to whoever creates the value. Based on: Theft and Data Mining in AI, AI and fragility of creation and AI and the right to say no
28. Once value is absorbed into a model, it can’t be separated or returned: Training permanently embeds information into a system, with no practical way to trace or remove specific contributions later. What is taken stays taken. At scale, this changes incentives. There is little downside to capturing first and resolving questions later, because reversal is effectively impossible. The implication is that irreversibility acts as a shield, making early extraction more valuable than long-term restraint. Based on: Theft and Data Mining in AI and AI and the right to say no
29. Economic damage appears only after the system has already shifted: Early on, creators may see little immediate impact, which makes disruption easy to dismiss. But substitution builds quietly as habits, workflows, and demand reroute through AI systems. At scale, the damage shows up only once production weakens or stops entirely. By then, the new system has become normal and difficult to undo. The implication is that waiting for clear market signals often means waiting until recovery is no longer possible. Based on: Theft and Data Mining in AI, AI and fragility of creation and AI and the right to say no
30. AI looks cheap because someone else is taking on the cost: AI systems appear efficient because the most expensive parts of the process sit outside the AI firm’s balance sheet. Articles, images, music, video, and expertise are costly to produce; these are now the inputs that make cheap generation possible, and their costs were paid by humans long before the AI model existed. Once the system is live, a second layer of cost is externalised to creators: monitoring use, opting out, enforcing rights, and managing any liability. Value is captured once by AI, while costs are paid twice: first in creation, then in compliance and defence. The system looks cheap not because it is efficient, but because the true expenses are borne elsewhere. Based on: Theft and Data Mining in AI



