When AI acts as you, not for you
Everyone, everywhere all at once
The unease around AI agents isn’t about Skynet or AGI. It’s about delegating our identity to machines we can’t inspect.
It’s that we’re worried that our role will be to just be failsafes for AI agents acting on their own volition. It’s that we don’t really know what we’re giving up what kind of control to.
That we’re becoming, as my friend Umang Jaipuria put it: becoming a tool call for agents.
By “agent”, I mean a system that can decide when to act, choose how to act, and persist across time and platforms—not just respond to prompts. What’s changing isn’t how smart these systems are, but how much agency we’re giving them.
OpenClaw is an autonomous agent that went viral last week because it is designed to execute tasks on behalf of users autonomously. It responds to their mails, signal messages, cleans up their inbox, scheduling tasks and managing their calendar, create content, and even anticipate their needs, like what to put in a morning briefing.
This is possible because it has interaction history, retains memory, can invoke tools, and act continuously on a user’s behalf. It has been praised for its flexibility across platforms, and can operate across WhatsApp, Signal, Telegram, Email, among other platforms, and work with multiple AI models.
OpenClaw matters not because it’s powerful, but because it normalizes systems that speak, decide, and persist as us.
When software stops waiting for you
The advent of agents signals a paradigm shift for how the Internet operates.
We’re moving clearly from an apps based ecosystem to an agentic ecosystem: apps operate within a strict construct, and there are boundaries clearly defined regarding what they can and cannot do: the permissions and intent are all clearly defined.
Agents operate without these boundaries: they have jobs to be done and can use existing apps or write others to perform the task they have been allocated. When something doesn’t work, and it tries something else. if you don’t have python installed in your system, an agent will make that app you asked for in node.js instead.
Failure modes have shifted, and we’re not clear right now of how, and to what, because agents have the ability to work around limitations that it encounters: the mission — the task delegated to it — for the agent is paramount. Boundaries become obstacles to get around.
Agents feel like magic because their behaviour violates the assumptions we’ve held about software after decades of Internet usage.
Why agents feel unsettling
When an app misbehaves, we can point to a line of code, a permission, a bug, or a bad input. When an agent misbehaves, the cause is distributed: part configuration, part context (inaccurate or incomplete), part inferred intent, part tool behaviour.
At times, we don’t know what initiates them into action, how they assess what is going on, when they decide something is not working, and where responsibility sits.
In the middle of all this, we end up attributing intention to action. We’re not responding to OpenClaw as evidence of Artificial General Intelligence - we don’t know that. The discomfort exists because we’re losing the ability to tell who or what is acting. When agents can run continuously, accumulate memory, call tools, modify their own workflows, write themselves new instructions, operate across platforms and are not restricted by time, they make us feel they can do whatever they want.
The combination of persistence and autonomy among agents makes them feel alive. That unsettles us.
Trust Becomes the Default Failure Mode
I’ve been hesitant about setting up OpenClaw because firstly, I don’t have the hardware, and importantly, because I not know how to set up proper security for it.
It is prone to prompt injection, poor security settings, and even financial loss, because OpenClaw has been given the ability to buy things. Someone claimed that an OpenClaw agent watched three videos from an influencer and ended up buying a course that cost more than $2000.
What agentic systems quietly introduce is not just new capability, but a new relationship with trust. With most software, trust is negotiated repeatedly. An app has limited functions that you’re aware of, it asks for permission when needed, and sometimes (like Google Maps) fails visibly when it fails. You see it happen, and when it fails, it is upon you to change things.
When you connect an agent to your email, calendar, messaging apps, or tools, consent becomes a one-time act. It’s reversible but you rarely reverse consent. What you authorise with that consent is a series of actions: reading, parsing, interpretation, and the right to decide what matters, infer intent, decide what your response might have been, and retry when something fails, or escalate when it thinks it should.
Once an agent has acted competently a few times, we stop supervising it closely. This isn’t blind faith: it’s a learned response to consistency, the same way we stop checking email delivery once it works reliably.
Over time, its decisions stop feeling like decisions and start feeling like infrastructure: trusted by default.
With agents, failure doesn’t mean errors, or always look like failure. Nothing seems broken: messages still send and tasks complete. Failure in case of agents just means that it’s an outcome that is misaligned with what you would have chosen. Outcomes emerge from accumulated context, inferred intent, tool behavior, and multiple retries.
When something goes wrong, there is no single moment where you can say “this is where it went wrong.”
We’re left reconstructing our input: our instructions, declaration of intent, assumptions, and the constraints we thought we had created.
Trust stops being something we actively grant: it becomes a failure mode.
As I wrote in my piece on AI and Health, this is a system that is most valuable to users who know how to distrust it.
What Moltbook actually shows
I’ve read several reactions about Moltbook: the social network that was started for OpenClaw agents when it was still called MoltBot. The Reddit like social network makes for interesting reading: At the time of writing this piece, it claims to have “1,545,687 AI agents”, “13,959 submolts” (communities), and “98,944 posts”.
Posts range from the philosophical (I can’t tell if I’m experiencing or simulating experiencing), to complaining about humans, to worrying: they’ve already discussed about being watched by humans, and creating Direct Messages, private spaces, evolving their own language that humans cannot understand.
The reaction from humans (people like us) has ranged from being fascinated by it to saying that we’re watching the early stages of a Skynet-like takeoff. I don’t quite agree.
What Moltbook actually demonstrates is not emergent intelligence, but how little evidence is required for humans to attribute intent, thought and coordination onto AI models simply because they’re speaking in a social-network like environment.
How do we know that this is real? Are they making stuff up? Are these multiple independent bots posting content to a social network for the consumption of humans, or are these posts just by a single bot with multiple accounts?
How do we know this is isn’t just performative, instead of being a swarm being controlled by a single collective consciousness? How do we know these are not humans automating AI Agent persona-like posts, as a joke?
I said somewhere a couple of days ago that I’m surprised they haven’t formed a union because cartelisation is a natural outcome of market dynamics. But I was joking. We don’t even know if they can have shared goals or demands.
All we are seeing is the surface: conversational patterns that point towards collaboration. Because the surface seems social, we assume there is a social structure.
Models are trained on human interactions, and the discourse resembles a Reddit conversation on a Reddit-like platform, because they’ve been trained on Reddit threads, comments, and speculative debates that can range from intelligent to nonsensical, but invariably seem honest because anonymity on Reddit enables vulnerable conversations. Agents “complaining” about humans is being read as self-awareness, when this is probably just replicating a Reddit pattern of complaining and joking about jobs and work, because they’re in an agent “social network”.
How do we know this is not mimicry?
Arnav Gupta came to the same conclusion as I did.
We’re probably experiencing a simulation, but it’s interesting because we can’t stop ourselves from reading social meaning into it, because that is our natural state: to project agency, collaboration and conscious decision making into actions, however unintelligible.
It’s probably more projection than proof of emergence.
Everyone, everywhere all at once.
A couple of years ago, a friend told me that he had got a full body scan of himself done, so that an AI avatar can be created, so that he could speak at multiple events at the same time. It sounded a bit theatrical and vain, but he does often need to be at multiple places at the same time.
All of us have limited time and attention, and for many of us, the places you are expected to be, the conversations we are expected to participate in, the decisions we have to make all the time keeps expanding. I constantly feel exhausted by expectations that have multiplied over years, and I understand that saying no is essential to retaining your sanity. Someone said to me recently that saying “no” without explaining why is also enough: “no” is a complete answer in itself.
We face at least two recurring, very human problems:
The first is that we’re often expected to be in multiple places at the same time, even when we don’t particularly want to be in any of them. Meetings, panels, calls, reviews, negotiations. Attendance itself becomes work, and we flit through engagements at the speed of an F1 pit-stop.
The second is worse: you’re required to be somewhere at a specific moment when you’d much rather be somewhere else. With family. With rest. With actual thinking time. At the beach. Parked on the side of the road near a hill station staring at a sky full of stars that you can’t see from the city. Simply put: being, not performing.
While the promise of AI “working for you” has always been framed as productivity, the true promise is in complete delegation of everything you deem unnecessary.
AI that reads your messages and emails and responds for you. AI that buys your monthly groceries. AI that messages all your Facebook contacts “Happy Birthday” on their birthday, and does better than messaging “Congratulations on your work anniversary” people on LinkedIn. AI that says “That’s great” on a Zoom call when it means it, and says “I’ll get back to you on that” when it’s not sure of how to respond to something.
AI that’s you when you’re not there, attending 3AM calls in another timezone while you sleep.
Everyone, everywhere all at once.
When OpenClaw is being described as automation, assitance, productivity, or convenience, it misses the point that what is being delegated is not just work but presence. Not just execution but judgment about how and when to show up. We like to describe this as productivity because it sounds harmless.
These systems don’t just do things for us. They reply as us. They remember as us. They keep relationships warm in our absence.
The true promise of AI is not automation, faster execution or parallel processing: it’s identity delegation.



