Reasoned Insights 01-15
Connecting the dots
Reasoned turned one month old earlier this month, and last week I published its 10th post. For the next few weeks, I’ll publish some distilled insights from the first 10 posts, alongside new essays, that will hopefully inform the way you and I think going forward. From next time onwards, I will do this every 5 essays (10 is too many to cover at once).
How to read this: The insights are numbered, and the number of the last insight in the post is on the featured image, so it’s easy to locate when you’re scanning posts. This is a slow read: you might want to read the original article the insight is drawn from, before returning to this. Every read may surface something new for you: something you missed, or a disagreement. Especially when you disagree, please write to me. These are lines on the beach, not something set in stone, so you may wash them away. :)
Also, Insights will probably eventually become a paid feature, but it’s free for now. If you like Reasoned, please do consider supporting it: INR / USD
Here goes: Insights 01-15
The web is becoming a supply chain for AI: Content, services, and transactions are unbundled from their original contexts and recomposed by AI services into solutions. This has second-order effects: supply chains tend to reward scale, standardisation, and reliability, not diversity or experimentation. If the web becomes upstream infrastructure for AI, then smaller publishers, niche services, and regionally specific offerings risk being filtered out, not by explicit exclusion, but by optimisation. AI chatbots will favour sources that are easiest to integrate, cheapest to access, and least risky to use. They make the choices, not users. Diversity suffers. Based on: AI and the quiet rewiring of the open Internet
Standards can centralise power without ownership: The donation of MCP to the Linux Foundation illustrates a subtle but powerful shift in how control is exercised on the Internet. By ensuring MCP is “owned by no one,” Anthropic increases the probability that it becomes unavoidable infrastructure. Once that happens, influence shifts from who owns the standard, to who shapes its evolution, hosts it at scale, and embeds it into developer workflows. This matters for the open Internet because it redefines where leverage sits. In the web era, leverage initially came from publishing content, before search captured it through aggregation; in the app era, it came from distribution. In the AI era, leverage accrues to those who define the interfaces between intelligence and action. Even if formally neutral, such standards inevitably reflect the incentives of those who control their evolution. Based on: AI and the quiet rewiring of the open Internet
Algorithmic gatekeeping replaces App Store gatekeeping: There’s a shift taking place from explicit gatekeeping to probabilistic gatekeeping. Instead of ranking in an app store or bidding for placement, developers now compete for invocation by a stochastic model. This is harder to contest or audit. Decisions are framed as emergent properties of the system, not deliberate choices. Yet the economic consequences are real. A slight bias in orchestration logic can redirect demand at scale, with no clear recourse for affected businesses. Optimisation emerges wherever visibility is mediated. But unlike search, where links and rankings are at least observable, AI-mediated selection operates inside black boxes. This raises competition and neutrality concerns that existing frameworks are poorly equipped to address. Based on: The Opportunity Trap of the ChatGPT App Store
Owning context is the last durable moat: There’s a fundamental shift in how value is created in digital markets. In the app era, differentiation came from owning the interface and the data generated within it. In the ChatGPT app ecosystem, developers are explicitly denied both. Context exists, but it is fragmented, selectively disclosed, and ultimately controlled by the platform. OpenAI’s rules limit developers to narrow, task-specific inputs, preventing them from compounding context over time. The result is an asymmetry: ChatGPT accumulates longitudinal understanding of the user, while apps operate with episodic glimpses. Context improves outcomes non-linearly: small gains compound into large advantages. By centralising context, ChatGPT positions itself as the only actor capable of deep personalisation within its interface, while apps become interchangeable utilities. Owning the context is the last durable moat for app developers, and they can only build it by taking users off-platform to their own website or app, keeping key features there, and treating ChatGPT as a space for marketing to users. Based on: How to beat the opportunity trap of the ChatGPT App Store and The product challenges that ChatGPT Health will have to navigate
AEO is the new arms race, not a mere growth hack: The comparison between SEO and AEO is not merely historical: it highlights a recurring pattern of optimisation under opacity. As with search, visibility inside ChatGPT is mediated by algorithms that evolve to resist manipulation. In search, ranking affected discovery, while in ChatGPT, invocation impacts execution. Being called or ignored determines whether an app or a service exists at all in the user journey. The stakes are higher because multiple services may not be invoked together: you have to be the top result, every time. The danger is that AEO reproduces the same concentration effects as SEO, only faster; yet it also offers players that optimise early and well the ability to scale rapidly. This cuts both ways: smaller players may be priced out before equilibrium emerges, narrowing competition, or they can gain leverage by optimising well, sooner. Based on: How to beat the opportunity trap of the ChatGPT App Store
Brand recall still has a role inside chat apps: When AI systems auto-select tools, brand recall shifts from marketing advantage to a defensive moat. In a world where users no longer browse menus or compare interfaces, and AI relegates in-chat apps to “jobs to be done”, brand recall means that a user can override chatbot recommendations by explicitly invoking a service over the recommended one. This addresses the risk that if the user does not ask for you, the system may never surface you. Becoming memorable, providing utility and customer service consistently, great brand advertising…all of these become mechanisms for triggering a human request for your business. Brand becomes the last user-controlled routing mechanism in systems designed to remove choice by default. Based on: How to beat the opportunity trap of the ChatGPT App Store
AI Agents are managers, not analysts: LLMs are analysts, agents are managers. This makes agents structurally disruptive, rather than incrementally useful. Analysts produce insight; managers decide priorities, allocate resources, and revise plans based on outcomes. When AI crosses that boundary, it stops advising workflows and starts governing them. In domains like advertising, sales, or customer support, this shifts optimisation from periodic, human-led decisions to continuous, autonomous control loops. Humans increasingly supervise systems rather than direct them. This is a redistribution of authority within organisations and markets. Control migrates away from individuals and toward the entities that design, deploy, and operate agent layers. Ownership of the agent layer becomes ownership of decision power. Based on: Why Meta bought Manus and What happens when AI buys or sells for you
Execution layers matter more than intelligence now: Models are increasingly commoditised; orchestration is not. Manus matters to Meta not because it is smarter than other LLMs, but because it is autonomous and decides what to do next. That distinction is decisive. Execution layers translate intelligence into outcomes. They coordinate tools, manage memory, evaluate intermediate results, and absorb failure. They learn from the failures. These capabilities are harder to replicate than model improvements because they depend on integration, trust, and real-world feedback loops. Meta’s acquisition signals that competitive advantage is shifting away from raw intelligence toward systems that can act autonomously across messy, real environments. Whoever controls execution mediates not just information, but action. Based on: Why Meta bought Manus
Trust and failure slow agent adoption: Despite the hype, agents are still slow, costly, and fragile. Autonomy increases degrees of freedom, which increases error propagation and the cost of failure. This is why agents thrive first in low-risk domains like content, social media, and customer support. Meta’s advantage in acquiring Manus is not that it eliminates these risks, but that it can absorb them. At scale, platforms can normalise occasional failure as statistical noise, while individuals and small businesses cannot. This asymmetry accelerates centralisation of power with larger players. Trust, not intelligence, remains the binding constraint. Agents will spread fastest where failure is cheap, or where the platform, not the user, bears the cost… and maybe even the liability. Based on: Why Meta bought Manus
Social media no longer needs humans to function: the social graph is no longer the core asset. The content and context graph is. Zuckerberg says that AI is entering a “third era”, where AI-generated content will dominate social media. His statement implicitly deprioritises relationships in favour of relevance engines that optimise toward goals, usefulness, and engagement. Relationships are slow, ambiguous, and hard to measure. Content — especially AI-generated content — is fast, scalable, and optimisable. This shift matters because the original defense of social platforms against AI disruption rested on human connection. If platforms increasingly treat feeds as content inventories rather than relational spaces, that defense collapses. The platform no longer needs to preserve intimacy if engagement can be synthetically sustained. Platforms stop optimising for who you know, and instead optimise for what keeps you scrolling. Based on: When AI enters the conversation
The line between human and machine speech is blurring: As AI chatbots increasingly participate in social media conversations, whether via answers in comment threads, DMs, or visible posts, the boundaries between human and machine speech blur. In the case of Grok, there’s clear attribution to an AI model, but social media is now full of AI characters, ranging from young blonde women to monks giving life advice. Do people care whether the content they’re viewing is from a human being or an AI bot? The uncanny valley has been crossed. Based on: When AI enters the conversation
AI as a first-party speaker changes liability: Grok operating as a user that can be invoked inside X illustrates a structural break: AI is no longer just assisting users. It is producing and publishing content inside the platform. This challenges the long-standing intermediary defense that platforms rely on. When AI outputs are generated, amplified, and distributed natively, the platform’s role shifts from conduit to publisher. Product decisions, such as enabling direct publication of AI-edited images, are deliberate. Liability becomes harder to deflect when content is first-party by design. The safe harbor platform immunity framework was built for user speech, not platform-generated speech. Based on: When AI enters the conversation
AI Agents turn advertising from messaging and branding into control systems: AI agents fundamentally change advertising by shifting it from persuasion to execution. Traditional advertising was about crafting messages, testing creatives, and interpreting results after the fact. Agentic advertising collapses these stages into a continuous control loop where systems observe user signals, decide what to change, and act immediately. Ads stop being campaigns and become adaptive systems. When agents decide which creative to show, how to price an offer, when to retarget, or when to stop spending altogether, human judgment moves out of the critical path. Optimisation no longer happens at reporting intervals; it happens in real time, at machine speed. Advertisers become increasingly dependent on platforms that own the agent layer, the data, and the execution surface. Transparency declines as decisions become harder to audit (because of the scale!), and competition shifts from creative differentiation to access and integration. Based on: Why Meta bought Manus and When AI buys and sells for you
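The observe-decide-act loop described above can be sketched in a few lines. This is a toy illustration, not any platform’s actual system: the function names, thresholds, and multipliers are all my own assumptions, chosen only to show how optimisation moves out of reporting intervals and into a continuous loop with no human in the critical path.

```python
from dataclasses import dataclass

@dataclass
class AdState:
    bid: float              # current bid per impression
    conversion_rate: float  # observed conversions / impressions

def decide(state: AdState, target_rate: float = 0.02) -> float:
    """Decide a bid adjustment from observed signals: lean into what
    converts above target, pull spend back from what converts below it."""
    if state.conversion_rate >= target_rate:
        return state.bid * 1.10  # raise the bid
    return state.bid * 0.90      # lower the bid

def control_loop(observations: list[float], starting_bid: float) -> float:
    """One campaign's loop: each tick observes fresh signals, decides,
    and acts immediately -- no waiting for a human-read report."""
    state = AdState(bid=starting_bid, conversion_rate=0.0)
    for rate in observations:
        state.conversion_rate = rate  # observe
        state.bid = decide(state)     # decide + act
    return state.bid
```

Run against a stream of observed conversion rates, e.g. `control_loop([0.01, 0.03, 0.04], starting_bid=1.0)`, the bid is cut once and then raised twice, all without a campaign manager in the loop. The point is structural, not the arithmetic: the “campaign” is just state inside a feedback system.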
Context persistence is the real value proposition: What differentiates ChatGPT Health from existing health apps is not diagnostics, but pattern recognition across data. Doctors see snapshots; the system can see timelines. Trust emerges not from one recommendation, but from multi-year coherence across sleep, diet, medication, and biomarkers, and from trends that emerge and can be correlated over time. This reframes “memory” as a clinical feature, not just a UX one. Without durable context persistence, AI health tools revert to symptom checkers. Memory also amplifies risk: errors compound over time, and outdated or misinterpreted data can silently distort future recommendations. Persistence cuts both ways: the same durability that creates value also compounds risk. Based on: The product challenges that ChatGPT Health will have to navigate
Start designing for bots: The moment when agent and bot traffic overtakes human traffic on the web will mark a shift for the Internet. Maybe it has already happened. Most of the web’s norms, whether advertising models, user experience design, or consent mechanisms, are built on the assumption that humans are the primary users. When agents become the dominant audience, all those assumptions fail. As with advertising, optimisation will have to shift from human attention toward machine readability and legibility, as well as toward the dominant algorithmic constraints. AEO is an early signal of this transition, showing how visibility now depends on being interpretable by models rather than appealing to people. The web will become an operational layer for agents, not a public square. Designing only for humans risks invisibility in agent-mediated ecosystems. Based on: Why Meta bought Manus and When AI buys and sells for you
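One concrete form machine legibility already takes is structured data. As a minimal sketch (the function and field choices here are mine, though the schema.org vocabulary is real), a page could expose its core facts as JSON-LD so an agent never has to scrape human-oriented HTML:

```python
import json

def product_jsonld(name: str, price: float, currency: str, url: str) -> str:
    """Emit schema.org Product markup as JSON-LD: a machine-readable
    fragment a site embeds alongside its human-facing page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "url": url,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
        },
    }
    # Embedded in the page as: <script type="application/ld+json">...</script>
    return json.dumps(data, indent=2)
```

A bot parsing this gets the price and currency without inferring them from layout. Designing for agents mostly means making this the primary surface, with the human page layered on top, rather than the other way round.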
Have an insight of your own to share? Disagree with something? I’m working on a post that captures your comments (already got some amazing ones) and builds a conversation around them. Email me or leave a comment.



