AI and the quiet rewiring of the open Internet
How AI is changing the way we use the Internet
While there are interesting battles at play in AI, the future of the web is already decided: it will be a supplier of content and context to AI services.
I wrote about this in July on MediaNama (Read: a world without apps and the web), and I’ll explain my thesis again, in the context of two big moves that have only affirmed that thesis.
The next part is a bit technical, though I’ve simplified it as much as I could; if you’d rather not wade through it, skip ahead to the section titled “What does this mean for the open web?” Here goes:
When standards become strategy
This week saw two developments that will shape the future of how the open web interacts with AI chatbots. First, Anthropic, the maker of Claude, donated its Model Context Protocol (MCP) standard to the Linux Foundation; it will be managed by the Agentic AI Foundation (AAIF), which has been created by Anthropic, Block and OpenAI, with support from Google, Microsoft, AWS, Cloudflare and (surprise, surprise) Bloomberg. Second, Google launched MCP support for Google services, including Google Maps.
I want to avoid getting too technical here, but MCP enables chatbots and AI agents to interact with external datasets and execute actions. For example, Claude today allows you to connect with your Zerodha trading account, and an AI agent can potentially execute a trade for you. These are early days for MCP, but think of it as key underlying infrastructure that allows AI tools to interact with services.
The alternative here has typically been to chain different AI (and non-AI) services together using tools like Zapier, or to run actions via a browser agent connected to a chatbot, which can be very clunky and prone to failure. Google explains how MCP works for laypersons:
“Anthropic’s Model Context Protocol (MCP), often likened to a “USB-C for AI”, has quickly become a common standard to connect AI models with data and tools. MCP enables AI applications to execute the complex multi-step tasks it takes to solve real world problems.”
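To make the “USB-C for AI” analogy concrete: MCP is built on JSON-RPC 2.0, and at its core a client (the chatbot) asks a server what tools it offers, then invokes one. Here’s a toy, self-contained sketch of that request/response flow; the `get_quote` tool, its stubbed price, and the in-memory registry are all hypothetical stand-ins, not the real MCP SDK, which speaks the protocol over stdio or HTTP.

```python
import json

# Hypothetical in-memory "MCP-style" server: a registry of named tools.
# Real MCP servers implement the same two core methods, tools/list and
# tools/call, as JSON-RPC 2.0 over stdio or HTTP; this only mimics them.
TOOLS = {
    "get_quote": {
        "description": "Fetch the latest price for a stock symbol (stubbed).",
        "handler": lambda args: {"symbol": args["symbol"], "price": 101.5},
    },
}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC request string and return a JSON-RPC response."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [
            {"name": name, "description": t["description"]}
            for name, t in TOOLS.items()
        ]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A chatbot acting as the MCP client: discover the tools, then invoke one.
listing = handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
quote = handle(json.dumps({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_quote", "arguments": {"symbol": "INFY"}},
}))
print(listing)
print(quote)
```

The point of the standard is that the *shape* of this conversation is identical for every tool and every AI app, which is exactly what makes a single winning protocol so valuable.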
What Anthropic is doing by open-sourcing MCP is ensuring that it becomes the default standard for how AI agents talk to the web. Developers don’t want different standards for different AI services, and eventually one standard wins. Think of Blu-ray vs HD DVD a decade and a half ago.
Before you read further, do consider supporting my work by making a payment here, if you’re in India, and here if you’re not from India.
Standards win only if no one owns them. Anthropic is not the biggest game in town in AI, and it doesn’t have enough heft to scale MCP on its own. Open-sourcing it AND placing it with the Linux Foundation, a highly respected organisation that stewards Linux, itself underlying infrastructure for the web, means your standard wins: OpenAI can’t ignore it, Google can’t fork it, Microsoft can’t replace it with a proprietary alternative, and AWS has to support it for enterprise customers.
Everyone moves to the next stage, while Anthropic continues to have a major say in how MCP develops.
What is Google’s game here?
Here’s what Google has to say about its MCP implementation:
“…implementing Google’s existing community-built servers often requires developers to identify, install, and manage individual, local MCP servers or deploy open-source solutions–placing the burden on developers, and often leading to fragile implementations. “
To translate this, Google is merely saying that everyone wants AI apps to talk to different tools and data sources easily. Right now, that process is messy, and everything has to be hacked together. So it’s saying that it’s stepping in to clean it up, standardise it, and make it reliable, so developers don’t struggle, and users get smoother experiences.
Your laptop has many plug-and-play parts, but how many people actually know how to assemble them, want to, or trust themselves to do it? This addresses that anxiety for those who want the capability but also want reliable tooling: large enterprises.
What this also means is that Google wants developers to build more on Google Cloud, and given that MCP is going to become the underlying infrastructure for AI’s interaction with external services, whoever hosts MCP servers becomes the backbone of the AI tools economy. If your app uses Google-hosted MCP servers, it’s easier to stay on Google, harder to switch, and your tools depend on Google’s uptime, permissions and pricing.
This is an AI infrastructure battle being played out: if Google becomes “the default MCP host,” it becomes the default AI integration hub. This battle will play out between Google Cloud, Microsoft and AWS, and Google probably has the most leverage here, but it will lead to the commodification of MCP hosting (and of pricing), which can’t possibly be a bad thing, can it?
What does this mean for the open web?
To recap: MCP enables AI tools to talk to each other and to the open web, and can be used to carry out actions (buying stocks; shopping for groceries; pulling in data from a website, creating a table, performing an analysis and generating charts). MCP is moving towards universal usage and commodification, so everyone will be able to build with it, and those tools will be connected to all AI chatbots.
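The multi-step example above (pull in data, create a table, perform an analysis) can be sketched as an agent executing an ordered plan of tool calls. Everything here is a hypothetical stub: `fetch_data`, the example URL, and the revenue figures are invented for illustration, and a real agent would be invoking remote MCP servers rather than local functions.

```python
# Hypothetical sketch of an agent chaining MCP-style tools to complete a
# multi-step task. Each function stands in for a tool a real agent would
# call over the protocol.

def fetch_data(source: str):
    # Stub for "pull in data from a website"; returns (year, revenue) rows.
    return [("2023", 120), ("2024", 150), ("2025", 180)]

def make_table(rows):
    # Stub for "create a table": render rows as fixed-width text columns.
    header = f"{'Year':<6}{'Revenue':>8}"
    body = "\n".join(f"{y:<6}{v:>8}" for y, v in rows)
    return header + "\n" + body

def analyse(rows):
    # Stub for "perform an analysis": growth from first to last period.
    values = [v for _, v in rows]
    growth = (values[-1] - values[0]) / values[0] * 100
    return f"Growth over period: {growth:.0f}%"

# The "agent" is just an ordered plan of tool calls over shared results.
rows = fetch_data("https://example.com/revenue")  # hypothetical URL
print(make_table(rows))
print(analyse(rows))
```

The user asks once; the agent sequences the calls. That is the experience that makes visiting the individual websites behind each step feel unnecessary.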
This changes how the Internet works.
Today the web (not the Internet) works like this:
Websites publish content
People visit websites
Traffic monetises the web
Search engines rank pages
The web stays open because value flows back to publishers
But the Web Era was followed by the App Era after the launch of the iPhone. We’ve already accepted some amount of gatekeeping with apps.
Websites publish content / services. They also have apps.
Apps form an easy-to-use interface on mobile devices
Google and Apple gatekeep apps
People use the app
Traffic monetises the app
The web acts as a supplier to apps
The browser went from being a gateway to becoming just an app on your phone that opens links you click on WhatsApp.
The difference between the App Era and the AI Era is that AI agents and chatbots are not created by website owners: in the case of content, they merely steal it from them (more on this next week).
If AI agents become the main way people interact with information, the browser (or the proprietary app) stops being the primary interface, the AI app does.
Anthropic donating MCP to the Linux Foundation ensures that all major AI players will use the same connector framework, tools and APIs will become interoperable, and AI agents can call websites directly, skipping users, to give them a solution.
Great for users — and I accept this, and this is why AI will win against the web and apps unless copyright is respected — but it is terrible for the open web.
The web basically becomes raw material for AI. Monetisation then moves from the web to AI apps. Eventually, as the utility of AI improves, the open web will become invisible to humans.
Think about it: how many people in the world type a URL to visit a website, instead of going through search, social media (including messaging) or apps? If AI can do all this, and connect your services together, why do you need to use multiple websites and apps? This is also why Google Search is switching to AI Mode.
In all of this, what’s going to happen to ecommerce?
OpenAI launched “agentic commerce” in September (I had predicted this in July), and created an Agentic Commerce Protocol along with Stripe. PayPal is also on board for payments within OpenAI. Instacart, a few days ago, became the first to enable direct checkout within ChatGPT.
That AI is eating the web is now common knowledge. First content, now commerce.
The thing to watch out for will be the battles that follow in transactions: will OpenAI and Stripe do what Anthropic has done, and open up their Agentic Commerce Protocol, to enable all AI chatbots to incorporate purchases? If they don’t, then Google and Anthropic will be forced to develop their own protocols.
My guess is that the Linux Foundation will probably have more work to do.
What I’m watching out for
First: The inflection point will be when bot and agent traffic on the web exceeds human traffic. I’m not sure if it already has…that’s something Cloudflare can tell you. If it has, the question becomes when bot-originated (or bot-triggered) traffic will exceed human-originated traffic on the open web. That will be the moment of reckoning for the open web.
Second: Full standardisation, with MCP becoming the default interface layer for AI, and more and more integrations with chatbots. It’s akin to how Facebook added third-party apps to the social web back in the day, except MCP will work across platforms, so this will be bigger.
Third: The hosting layer gets captured by Google/Microsoft/AWS, and they control the plumbing that connects AI services with the web/app space. Smaller hosting service providers will struggle. I’m watching to see whether we end up with open standards but closed infrastructure.
Fourth: Fragmented agent protocol wars. If OpenAI doesn’t open-source the Agentic Commerce Protocol, or if it doesn’t merge with MCP into a single protocol for the web and commerce, then we’ll have agentic commerce protocol wars: Google and Anthropic will launch their own, and it will be a mess for ecommerce companies to roll out different toolkits for different chatbots.
Fifth: Regulatory intervention in AI agents. Content is more benign than AI actions: AI agents can impact stock markets and dynamic ticket prices (hello, airlines!), and regulators will be wary of a proliferation of agents that disempowers non-agent users…retail users?
*
If you’ve read this far, do consider forwarding this newsletter to someone who might be interested in subscribing.
I will write about India’s tech business ecosystem, as well as my current obsession: the evolving AI Economy. I have spent 19 years looking at the intersection of tech, business and policy, and I have a bias for two lenses: what isn’t happening and why not, and what next.