How algorithms are shaping social media
How social is social media anyway?
A popular YouTuber friend told me a couple of years ago that when he stopped posting for a couple of weeks in order to travel, his traffic dropped drastically. It took him six months to reach that level of traffic again.
This was his livelihood. The algorithm had punished him for not turning up for work.
There’s a pattern: the system never tells you what you did wrong. You only see the outcome. Your reach drops. Your visibility disappears. You’re left guessing. All you can do is go back to the drawing board and try to figure out how to make it work again. You experiment. You infer rules.
People learn very quickly what works and what doesn’t, by watching outcomes change, and trying to hack their way back to relevance and reach. They see what gets engagement now. They see what gets ignored. They see what triggers penalties and shadow bans.
Over time, behaviour adjusts. Creators know which days to post, how “Watch Time” impacts views, how frequency of posting impacts overall traffic. They change tone. They change thumbnails. They ask friends and family (or a Telegram group with other creators) to actively amplify each other’s posts. They avoid things that seem risky. No one forces this. The system nudges it.
Algorithms are infrastructure for behavior modification at scale.
How the machine decides
When platforms are small, decisions can still be made directly by people: in the early days of YouTube, the creator support team actively helped important publishers adapt to changes. Once systems scale to millions of users, that becomes impossible: allocation, ranking, visibility, and reach get handed over to algorithms.
In 2012, Facebook ran an experiment. Its researchers modified the news feeds of almost 700,000 people for a week, and determined that just by changing what people saw, they could change how those people felt.
Algorithms can’t typically measure intent, but they can infer it from signals: engagement, reach, impressions, clicks, responses. How much time you spent not scrolling but also not clicking.
What’s measurable scales. What isn’t, disappears.
When a platform exposes even a sliver of its logic, you can see exactly what it’s optimizing for.
X’s open-sourced algorithm illustrates how its “For You” feed works. It prioritises likes, replies, reposts, quotes, clicks, profile visits, video watch, image expands, shares, dwell time, and then some more. It deprioritises a post if a user blocks the author, mutes them, reports the post, or indicates they’re not interested. Replies carry the highest weight, which means users optimise for eliciting responses in order to increase reach. The system doesn’t just predict what you’ll engage with.
It also predicts what you’ll regret engaging with.
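To make that concrete, here’s a minimal sketch of how a “For You”-style feed might fold those signals into a single ranking score. The signal names and weights below are illustrative assumptions, not the actual values from X’s open-sourced code; the real system learns its weights from data and uses far more signals:

```python
# Toy "For You"-style ranking score. Every weight here is a made-up
# placeholder chosen to mirror the pattern described above: replies
# dominate the positives, and "regret" signals (blocks, reports) are
# heavily negative.

POSITIVE_WEIGHTS = {
    "predicted_reply": 13.5,        # replies weighted highest
    "predicted_like": 0.5,
    "predicted_repost": 1.0,
    "predicted_profile_click": 12.0,
    "predicted_dwell_seconds": 0.005,
}

NEGATIVE_WEIGHTS = {
    "predicted_block": -74.0,
    "predicted_not_interested": -74.0,
    "predicted_report": -369.0,     # one likely report can sink a post
}

def score(post_signals: dict) -> float:
    """Combine predicted engagement probabilities into one ranking score."""
    total = 0.0
    for name, weight in {**POSITIVE_WEIGHTS, **NEGATIVE_WEIGHTS}.items():
        total += weight * post_signals.get(name, 0.0)
    return total
```

Even in a toy version, the shape of the incentive is visible: a predicted reply is worth many likes, and a single predicted report can wipe out an otherwise popular post’s score. That is exactly what pushes creators toward reply-bait and away from anything that risks negative feedback.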
This isn’t unique to X. Across the web, especially on social platforms, algorithms decide what gets seen and what doesn’t. Who gets distribution and who doesn’t. When something works and when it quietly stops working.
The important thing to understand is that none of this requires explicit control. Platforms don’t need to tell people what to do.
Agency doesn’t vanish inside optimised systems: if it did, nobody would post.
The platforms have a sense of which tweaks produce which consequences. You want your content-creating non-employees to think they have control and agency, but within the boundary conditions that you specify.
It’s not just Instagram or YouTube: ask a seller on Amazon. Ask a gig worker. Now I’m wondering: is YouTube a form of gig economy?
Welcome to all the new subscribers at Reasoned.
I’d appreciate your responses to a few short questions.
This will help me plan some subscriber-specific products.
The Optimisation Economy
Across the internet, people sell tips and tricks for gaming the YouTube algorithm: what thumbnails to use, how to reverse engineer faceless channels for making truckloads of money, how to show up in people’s newsfeeds, what time of day and day of week to post. What to avoid. What to repeat. It’s not just YouTube: it’s also Instagram, X/Twitter, even Reddit. People work to figure them out, adapt, before they change again.
The “Link in comments” hack is a function of people figuring out that the algorithm punishes them for sending people out of the platform.
SEO is a multi-billion-dollar industry because websites have to keep adapting to changes in Google’s algorithm in order to show up in results. GEO is becoming a multi-billion-dollar industry because people need traffic from AI. Google’s E-E-A-T guidelines have changed the way people write website copy, just as the infamous “Penguin” update a decade and a half ago — yes, I remember it — led to the demise of many content websites optimised purely to game search for traffic.
When distribution is controlled by opaque systems, an entire economy forms around second-guessing them.
This is why there’s an entire industry selling algorithm hacks: because we want followers, we like seeing engagement metrics, it’s validation, and the algorithm optimises for just the right amount of validation.
How machines determine what becomes culture
Shephali Bhatt wrote in Mint about how Instagram changed its algorithm to highlight content that’s more relatable, shifting away from aesthetic content. The algorithm now “rewards relatability over visual grammar,” she told me while we were discussing this over message. It’s not one-way, top-down decision making, though. She sent this via a voice note:
“It is led by our behavioral change as well. And again, to your point about the fact that there is so much AI-generated content, the aesthetic content still gets as many likes and views as the other.”
“It’s just that the other, the quote-unquote “non-aesthetic content,” initially was only being circulated in a certain kind of audience, but now, the urban, affluent audience is not only liking it, they’re engaging with it, and that is what Instagram cares for.”
“So, we will not particularly repost or share in DMs or otherwise outside of the app, the content that is just aesthetic and posh. That is just for our consumption. But the other kind of content, with the relatable kind of content, is what increases shareability, and that increases the time that people overall spend on the app. So it’s a very deliberate move from their end to, to change the algorithm in a way that we basically get a bit of both.”
Instagram isn’t rewarding taste. It’s rewarding what gets shared. Relationships are slow, messy, and hard to quantify. Content is fast, optimisable, and scalable.
Once people internalise this, instead of asking “What do I want to say?”, they start asking “What will work?”
These rules typically aren’t written down: It’s something people feel their way into. And once that happens, behaviour changes even when no one is explicitly watching.
It changes how I write too.
A friend suggested that I shouldn’t write 3000 word articles on Reasoned because it won’t be read. “Keep it to 1000-1500 words max”. Even without the algorithm, I keep looking at traffic data. I was wondering yesterday about whether the headline for the last post should have been “Why AI Agents need wallets” instead of “AI Agents need wallets”.
When someone points out typos in Reasoned posts, I joke: at least this way people know it hasn’t been written by AI. Sankarshan even quoted something from my first post on AI and Social Media (When AI enters the conversation), with the typo intact.
Tell me I’m write about this ;)
AI is changing the way people write. FFS, I’ve stopped using em-dashes, and I LOVE em-dashes. You avoid nuance because nuance doesn’t scale.
You simplify. You templatise. Templates become culture.
How many people shitpost for fun anymore, as opposed to outraging or engagement farming? It’s inauthentic. It becomes less about saying what you think and more about how it will land.
Same, same, not different
And once everyone is optimizing for the same feedback, the output starts to converge. Performative behaviour is becoming default.
Mark Zuckerberg said about the second era of social media, “First was when all content was from friends, family, and accounts that you followed directly. The second was when we added all the creator content.”
The second era has displaced the first, and there’s no bigger proof of this than LinkedIn, which had the equivalent of performative AI slop before ChatGPT launched to the public.
It’s why our newsfeeds and recommended videos feel so artificial.
It’s not just that the algorithm is surfacing content that engages us—it’s that people are creating content purely for engagement. It almost feels like error propagation, because no one is optimising for diversity.
Thumbnails start looking identical because the algorithm rewards certain visual patterns. YouTubers discovered that shocked faces work. So now every thumbnail has shocked faces.
Titles become formulaic. “How I made $10k in 30 days” beats “Reflections on sustainable business models” even when they’re the same article.
Most successful influencer businesses are about how to become influencers. It’s all optimisation.
The algorithm creates a monoculture, not through censorship but through economics. If relatable content gets shared more than aesthetic content, everyone makes relatable content. If controversy gets engagement, everyone becomes controversial. If simplification travels better than complexity, complexity stops being made.
That same logic applies to relationships as well.
Withdrawal
I was talking to my wife about this and she pointed out something simple but unsettling: You can sit in a room with three people, all of them will intermittently be on their phones, talking to someone else. You’re already absent, even though you’re physically present.
And as this builds up, relationships become harder because people are becoming less tolerant of disagreement. This mimics our behaviour online, especially on Social Media. Over time, people don’t withdraw from platforms completely. They stay present, but largely lurk. They mute conversations. They block people.
You’re technically still there, but you’re less exposed. Less open. Less willing to sit with disagreement. You disengage the moment friction appears. When interaction elsewhere is constantly agreeable and frictionless, real-world disagreement starts to feel avoidable, she said.
Social media began as a way to connect people, but it also trained us to expect interaction to be responsive, personalised, and low friction. Generative AI fits neatly into that expectation. I mean, what’s so enduringly addictive about ChatGPT is that it will never explicitly tell you you’re wrong or screwing up.
Where is all of this going? At some point, the question stops being whether interaction needs to be human at all.
Not whether the people we encounter on social media are real (humans or bots), but whether their being real even matters anymore.
Do consider supporting my work: