When AI enters the conversation
What changes when AI becomes a first-party participant inside social media platforms
Welcome to all the new readers at Reasoned. Getting close to 500 in all. In case you haven’t checked it out, you can find the archives in an index format here. I’d recommend the post on how AI is rewriting the web, which has my core thesis. Here’s today’s edition:
I was pretty certain about my theory of how AI will impact social media until Mark Zuckerberg said something that changed it.
My theory was simple:
AI is replacing the world wide web and absorbing the app ecosystem along with it, but it cannot replace how human beings are connected with each other. I’m bullish on Meta’s ability to navigate this shift not so much because of Llama, but because of the social graph and the relationships that extend across WhatsApp, Instagram and Facebook (in that order). Maybe even Threads. LinkedIn is also likely to survive, given that it has survived being inside Microsoft.
For whatever it’s worth, on Social Media, there’s a relationship. You know this person. Kinda. You read them (almost) every day, including what they say in response to other people. You respond, you like, you repost. They do this too. Patterns form, and we get a sense of their values and views. Sometimes. When I met folks at a tweetup in Delhi two years ago, it felt like meeting friends even though we hadn’t met in years. We had points of reference to build upon.
Content can be automated, connection can’t. (Can it?) So in theory, the Social Graph is a defence against an all-consuming AI.
On Meta’s last earnings call, Zuckerberg suggested that things might change with Social Media:
“Social media has gone through two eras so far. First was when all content was from friends, family, and accounts that you followed directly. The second was when we added all the creator content. Now, as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content on top of those. Recommendation systems that understand all of this content more deeply and can show you the right content to help you achieve your goals are going to be increasingly valuable.”
We already see this: Social Media is already less social, more media. We come for the connections and stay for the content. I see AI slop that engages me all the time: prompt suggestions for useful use cases, how to start using Claude Code, how to sleep better (I’m obsessed with sleep data at present), optimising Ubuntu 25.10, and of course, the Zomato and gig workers fight. X/Twitter has thankfully added a “Following” tab, but it is not the default by platform design, and I only click on it after my “For You” feed exhausts me. X probably notes when that happens. Micro behaviour informs macro decisions.
Platforms optimise for engagement and time spent, to keep you on the platform, to keep you doomscrolling. Link in the comments, if you know what I mean.
This is not a natural progression: it is architected. These are product decisions that yield a macro picture from microscopically tracked behaviour. That tracking creates a feedback loop: behaviour is measured, the product is tuned, behaviour shifts, and the cycle repeats. Think of all the changes X has made since Elon Musk took over: each of them optimised for something. Our behaviour can be nudged, and there’s proof of that: in 2012, Facebook ran a controversial experiment in which it manipulated the newsfeeds of 689,000 users to test whether it could shift their emotions.
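To make “micro behaviour informs macro decisions” concrete, here’s a minimal sketch of the kind of scoring function a feed ranker optimises. Every signal name and weight here is my illustration, not any platform’s actual code; the point is simply that the relationship signal can be assigned zero weight:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Per-post predictions, as a feed ranker might estimate them (hypothetical)."""
    p_reply: float         # predicted probability the user replies
    p_repost: float        # predicted probability the user reposts
    expected_dwell: float  # predicted seconds spent on the post
    relationship: float    # strength of tie to the author (0 = stranger)

# Hypothetical weights, tuned for engagement rather than connection.
WEIGHTS = {
    "p_reply": 4.0,
    "p_repost": 3.0,
    "expected_dwell": 0.5,
    "relationship": 0.0,  # relationships don't scale, so they get zero weight
}

def score(post: PostSignals) -> float:
    """Rank by predicted engagement; the 'social' signal carries no weight."""
    return (WEIGHTS["p_reply"] * post.p_reply
            + WEIGHTS["p_repost"] * post.p_repost
            + WEIGHTS["expected_dwell"] * post.expected_dwell
            + WEIGHTS["relationship"] * post.relationship)

# A stranger's engagement bait outranks a friend's quiet update.
friend = PostSignals(p_reply=0.01, p_repost=0.01, expected_dwell=5.0, relationship=0.9)
slop = PostSignals(p_reply=0.20, p_repost=0.10, expected_dwell=30.0, relationship=0.0)
assert score(slop) > score(friend)
```

Under these (made-up) weights, the slop scores 16.1 against the friend’s 2.57. Nobody decided to bury your friends; someone just decided relationships weren’t worth a weight.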
Notice that Zuckerberg said that in the second era, they added all the creator content, which means we had already moved, especially on Instagram, into consumption mode. I’m saying all this to make you realise that your timeline isn’t yours, and that there’s a clear difference between what users value and what platforms optimise for. Relationships are hard to measure and don’t really scale (and are hence low velocity, low engagement), while content is fast, measurable, optimisable and scalable. Content is the easier thing to do.
We engage now with usefulness (or the hope of usefulness) instead of actual people.
Social Media stopped being social well before AI arrived on the scene.
The structural shift when AI enters the conversation
AI enters social media in three ways:
It’s infrastructure, in that it is a recommendation engine that ranks what you see.
It is a tool, like Meta AI in WhatsApp and Grok on X, and lastly,
It is a participant in social media.
We’ve had AI on our MediaNama Zoom calls, and seeing “XYZ’s Fireflies.ai note taker” has always made me uncomfortable: when the person isn’t there, it feels like rejection… it isn’t worth it for you to join this conversation? When the person is there, I feel watched. Documented. It’s why I’m so uncomfortable with Meta AI inside WhatsApp: it feels like it has invaded my personal space. I still haven’t used it on my primary phone. I’m unsure of how much context it captures, and when.
It’s kinda like I have PTSD from reading about how Facebook has done things: the kind of thing that got so many of us to leave (but not really leave) Facebook.
AI as a participant in social media is not new. Reddit, for example, has had bots that auto-summarise comments on posts (here’s a Claude AI summary bot) or set a reminder to check a post in a week’s time, much as people have so far used Grok on X to explain things or attempt to fact-check them. Meta AI summarises messages on Messenger and in group chats on WhatsApp. Discord has had chatbots for ages: in fact, Midjourney is used largely on Discord. These are, if you think about it, largely task-manager roles performed by tools.
At the turn of this year, X made a critical product decision, in line with Elon Musk’s FAFO (erm… Fool Around and Find Out) approach to managing social media: it allowed Grok users to edit other people’s photos and publish them directly in feeds. You don’t need AI to tell you what could go wrong. And it did. People found their photos being sexualised and turned into swimsuit photos. Social media platforms treat replies as an engagement signal, so the content was amplified further.
X essentially shipped the product without three gates (sketched in code after this list):
Consent checkpoints
Post-generation safeguards, which would block an output before it is published; as shipped, reporting after publication was the only recourse.
Throttling of algorithmic amplification for generated images at launch.
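None of these gates is exotic. As a minimal sketch, with every name, flag and threshold hypothetical (this is not X’s actual architecture), the publish path could have looked like this:

```python
from dataclasses import dataclass

@dataclass
class Image:
    nsfw_score: float = 0.0  # output of a (hypothetical) safety classifier

@dataclass
class User:
    allows_ai_edits: bool = False  # hypothetical per-user consent flag

def publish_ai_edit(subject: User, image: Image, feed: list) -> str:
    """The three missing gates, applied in order before anything goes live."""
    # 1. Consent checkpoint: the person in the photo gets a say.
    if not subject.allows_ai_edits:
        return "blocked: no consent"
    # 2. Post-generation safeguard: review before publication,
    #    instead of report-after-the-fact.
    if image.nsfw_score > 0.5:
        return "blocked: failed safety review"
    # 3. Launch throttling: reduced amplification until abuse patterns
    #    at scale are understood.
    feed.append({"image": image, "amplification": 0.1})
    return "published with limited distribution"

feed: list = []
print(publish_ai_edit(User(), Image(nsfw_score=0.9), feed))  # blocked: no consent
```

Each gate is a handful of lines and a product meeting. Their absence is not an oversight.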
These are things that product teams discuss, so these were clearly deliberate product decisions. Platforms that operate at this scale don’t fail to foresee second-order effects. They don’t ship without safeguards unless they are testing the safeguards, running a pilot, or optimising for something else.
Once this happened, people were shocked, scandalised or amused; they reacted, and made more and more people aware of it. X/Grok optimised for velocity of product diffusion, content abundance (go back to Zuckerberg’s thesis), and engagement velocity. Every safeguard X could have added would have reduced exactly the signals its ranking system values most: safety would have introduced friction into generation. These are the tradeoffs product teams consider, and the decision they took means the benefit, in engagement, accrues to X, while the harm is externalised. It’s a feature, not a bug.
A slight regulatory diversion: there are some regulatory issues here as well. Platforms are no longer just “intermediaries” (essentially mere conduits) when they start producing and publishing content, so they can’t absolve themselves of liability for it. I’ve argued previously that Google Bard should not have been held responsible for its response to a question about the Indian PM being a fascist: the user who published that response on X should have been, since it was the user who made a private response public. I had also said then that it’s difficult to affix liability for AI outputs, because they’re shaped by training data, model weights, the prompt, and at times the user’s history or context. But for outputs from Grok published on X, it is Grok, as the publisher, that should be held responsible. I wrote in more detail about the regulatory side of this issue for Quint here.
The direction is clear
If this is what happens when an AI bot enters a social feed, the more important question isn’t “should X have done better?” Of course it should. The real question is: what kind of social media are we building now?
What we might be seeing here is the beginning of a structural change for Social Media, with the addition of AI outputs as first-party content that is generated, distributed and algorithmically amplified, at least for now, without checks.
As Zuckerberg said, “we’re going to add yet another huge corpus of content on top”. Some of that content is going to be created by AI that resides in our feeds, perhaps even automagically. Imagine logging into X and Grok publishing a summary of the messages you’ve missed that it thinks might be relevant to you.
Will we come to a point where Social Media won’t need humans to function?
Facebook’s mission changed in 2017 from “making the world more open and connected” to “give people the power to build community and bring the world closer together”. What will its mission in the AI era be?
Like Flipboard used to organise your news for you, will Social Media now largely consist of AI-generated content meant to engage and entertain, generating content it knows will keep you doomscrolling forever? It has enough training data to become the most meta engagement farmer.
Once engagement no longer depends on humans, platforms will stop defending human-specific protections. What Grok did here won’t look like an exception: it will look like the beginning of this shift.
Update: I thought about it after publishing and realised that there’s a broader set of product decisions that I should think about. Here’s a long list. Tell me if I’ve missed anything:
Product decisions when releasing a chatbot on Social Media (I’ve compressed these into a config sketch after the list):
How is AI treated on the platform: as a background utility that users can query, or can it generate its own outputs into a user’s newsfeed?
Who can use it: Is it opt-in, opt-out or always there? Can anyone use it or only paying customers? Meta AI chose opt-in, Grok is there without consent. Is the invocation private by default or public by default?
Permission for public content: Is public content treated as viewable only, or freely remixable by platform features? Is there a consent model?
Safety at launch: Are known safeguards shipped at launch, or deferred until users complain?
Output blocking: Is AI output blocked by default for certain transformations (for example, NSFW outputs), or allowed unless reported?
Launch throttling: Is AI functionality gradually rolled out, or fully open on day one?
Content propagation on the platform: Can AI output remain confined to the originating thread, or be surfaced elsewhere?
Protected classes for AI targeting: Are minors, politicians, public figures and verified users treated differently?
Are replies amplified: Are AI-generated replies weighted the same as human replies for engagement ranking? Are they throttled? Or are they treated as engagement amplifiers and prioritised? (For example: “Grok responded to @nixxin” shows up in your timeline)
Velocity of output: Is the system tuned to maximise speed of generation and publication, or slowed intentionally?
Visually same or different: Is the AI response rendered in a different font or with a different background shade?
Automation: When is AI allowed to respond? Only when a user tags it, or can it respond automatically in a group?
Where does it operate? Does it appear in DMs and group chats, or directly in feeds without any trigger? Does it appear in search results, maybe an “AI Mode” for a search query? Where does its appearance spook a user?
How does it appear: Does it look like a person in its profile photo, or a robot, or a brand? How will users perceive each differently?
What context does it provide: Does it include context from the message it is referring to, or the messages it draws from? Or is it a bare response?
Querying: Does it allow the user to engage further, and is there a chat window that is available to the user? Does it generate options for “Next question” in order to nudge the user to continue engaging with it?
Labelling: Is the AI output labelled as AI? In Grok’s case, it doesn’t appear to be.
Usage limits: How much should a user’s AI usage be limited, in total outputs over a period of time or in how frequently outputs are generated, before nudging them to buy a subscription or upgrade to a higher tier? This is an expense for the platform; Claude AI users will understand this pain.
Monetization: Subscription tiers, and whether ads can be run before a user sees an output.
Memory and persistence: Does the chatbot remember past interactions with the user, or in a thread or a group?
Human escalation: What kind of AI responses can trigger a human intervention?
Blocking: Can users block the AI from their accounts?
Lifecycle: Do AI responses get deleted automatically after a period of time or do they remain online?
Attribution: Can users identify which model gave an output?
Competition: Can other chatbots exist in the same platform? Remember that WhatsApp has blocked other chatbots.
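Compressed into code, the list above is essentially a launch configuration. Here’s a minimal, entirely hypothetical sketch: every field name is mine, and the defaults reflect what a cautious launch might choose, not what any platform actually does:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Invocation(Enum):
    OPT_IN = auto()
    OPT_OUT = auto()
    ALWAYS_ON = auto()

@dataclass
class ChatbotLaunchConfig:
    """Illustrative launch config for a chatbot on a social platform.
    Field names are hypothetical; defaults model a cautious launch."""
    invocation: Invocation = Invocation.OPT_IN       # who can use it, and how
    public_content_remixable: bool = False           # consent model for remixing
    safeguards_at_launch: bool = True                # not deferred to complaints
    block_nsfw_output: bool = True                   # block vs allow-unless-reported
    gradual_rollout: bool = True                     # launch throttling
    confine_to_origin_thread: bool = True            # content propagation
    protected_classes: tuple = ("minors", "politicians",
                                "public figures", "verified users")
    ai_reply_ranking_weight: float = 0.5             # < 1.0: never outranks humans
    label_output_as_ai: bool = True                  # Grok's appears not to be
    visually_distinct: bool = True                   # different font / background
    respond_only_when_tagged: bool = True            # automation
    surfaces: tuple = ("dm", "group_chat")           # not feeds without a trigger
    include_source_context: bool = True              # what context it provides
    suggest_next_questions: bool = False             # no engagement nudges
    persistent_memory: bool = False                  # memory and persistence
    human_escalation: bool = True                    # some responses go to humans
    user_can_block: bool = True                      # blocking
    daily_output_cap: int = 50                       # usage limits, cost control
    auto_delete_after_days: Optional[int] = None     # lifecycle
    show_model_attribution: bool = True              # attribution
    allow_competing_bots: bool = False               # WhatsApp blocks others
```

Read against this sketch’s defaults, the Grok launch described above flips nearly every flag the other way.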
The bigger question I’m going to tackle some time: What will it be like when interaction and relationships no longer require human beings?
There is interesting emerging litigation on AI note-takers: https://www.ailawandpolicy.com/2026/01/illinois-bipa-suit-targets-ai-note%e2%80%91takers-practical-lessons-for-meeting-transcription/