Why AI needs more than just a second brain
AI can access your notes. It still doesn’t know how you decide.
“Forget all that, I’d rather have AI that works on my personal notes.” It was January 2023, a few months after ChatGPT went live, and I was in Hyderabad having dinner with my friend Chaitanya Chokkareddy. While he was talking about all these cool use cases for AI, all I wanted was something that could help me find useful things in my notes.
I had about 5,000 notes at the time (over 10,000 now), and I use notes as memory. I had a physical notebook, one for each month, and digital notes spread across Google Keep and a WhatsApp group with myself.
In 2021, I discovered Tiago Forte’s PARA system and built a proper structure for digital note-taking, which brought a semblance of sanity to the activity. Some people use these systems to review their days, weeks and years. Others, like me, use them to dump truckloads of information, then analyse, think, plan and write with it. For me, a second brain is about knowledge management.
The problems building a second brain creates
There is no perfect mechanism for managing information. As documentation becomes one-click and storage becomes cheap, collection becomes a problem that has to be resolved daily. I missed a month, and now have 1,300 notes that need to be placed somewhere. I’m probably going to create a folder called “dump” or “sort”, and start over.
At a certain scale, organisation stops being a solution. It becomes another layer you have to think through each time: which workspace, which folder, and do I already have a folder for this?
Easy collection compounds another problem: when you have over 10,000 notes to go through, across five workspaces, how do you manage context-aware retrieval of information?
Most of the articles I write about AI are based on dumping large amounts of links, context, papers and articles from the web into a system, segmenting them by topic, and retrieving them when I plan to write. There are separate folders for AI and health, education, hiring, music, video, cybersecurity, gaming, investing, law and so on. I have workspaces for writing, product planning, personal development and learning, one for my kid (who has just started school) and one for general work. Documenting information and retrieving it has become a part of my life.
The moment you’ve got ten thousand notes across fifty topics, finding the right thing is no longer a matter of going to the right folder. All structuring struggles at scale: anyone who has run an online media company knows how difficult it is to tag stories consistently enough to make them usable.
That’s the use case I told Chaitanya about three years ago: AI as a processor, something that could sit on top of accumulated material and retrieve what was relevant.
But AI doesn’t solve this problem
The reason it makes sense to add personal context to your ChatGPT account, or to edit your claude.md file with information about yourself, is that AI tools like Claude, ChatGPT and Gemini are trained on the entire internet and optimised for everyone, which means they’re optimised for no one. Without proper context, they have no memory of your work, who you are, or what you do. They will personalise outputs somewhat based on what they learn from your interactions, but without structured, curated context and deliberate prompt engineering, outputs will be generic, regardless of model capability.
These tools are not limited by intelligence. They are limited by context architecture, and the context gap.
The flip side of the context gap is context abundance. Once you start dumping everything in, AI starts to fail in predictable ways. We’ve identified some of these failure modes in the past:
Context prioritisation: as context windows grow, models compress context, losing nuance and specificity
Context degradation: as context grows, models prioritise which information to retain. The rest of the context fades.
Context pollution: users shift topics mid-conversation instead of opening a fresh chat for fresh context.
Then there are others:
Context distraction: too much irrelevant information makes the model unfocused.
Context confusion: contradictory material produces inconsistent outputs.
Context mistakes: one error early in the conversation poisons everything that follows.
When you add more information hoping to improve outputs, both personal note-taking systems and AI get overloaded at the point of retrieval.
Giving AI a second brain
While the purpose of a personal second brain is to help you think, the purpose of a second brain for AI is to improve retrieval in service of your thinking: to create constraints through identity, task context, and a knowledge graph, so that the AI retrieves precisely and reasons more like you do. All of this becomes the specific context the AI keeps in mind while producing an output, making that output more relevant. The fundamental shift AI brings to knowledge management is this: it moves from storing information to shaping how AI reasons.
The returns on AI come from better context management, not better prompts. The competitive moat is how much relevant, structured context you’ve built up that the model can work with, not which model you chose.
Retrieval doesn’t scale, so memory has to change form
The standard approach to making knowledge available to AI is RAG: Retrieval Augmented Generation. You upload documents, the model retrieves relevant chunks at query time, and generates a response. But as Andrej Karpathy explained in his LLM Wiki concept, RAG means the model is repeatedly recomputing knowledge instead of accumulating it. There is no carry-forward. Ask a question that requires synthesising five documents, and the model has to find and piece together the fragments every time, as if it has never seen them before. Traditional note systems work the same way: a raw repository of inputs that has to be re-interpreted on every use.
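The recomputation problem above can be seen in a toy sketch of query-time RAG. A word-overlap count stands in for embedding similarity here; the function names and notes are illustrative, not any real library’s API. The point is structural: nothing is precomputed or carried forward, so every query re-scores the raw chunks from scratch.

```python
# Toy query-time RAG: every question re-ranks the raw chunks,
# as if the model has never seen them before.

def score(chunk: str, query: str) -> int:
    # Crude relevance proxy: count of shared words.
    # Real systems use embedding similarity instead.
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    # Recomputed on every call: no knowledge accumulates between queries.
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]

notes = [
    "PARA organises notes into Projects, Areas, Resources, Archives",
    "Context pollution: shifting topics mid-conversation degrades outputs",
    "Obsidian stores notes as plain markdown files with backlinks",
]
print(retrieve(notes, "how do backlinks work in Obsidian markdown notes"))
```

However relevant the top chunks are, the synthesis across them is redone on every query; that repeated work is what the wiki approach below compiles away.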
Karpathy’s proposal is a different architecture. Instead of retrieving from raw documents at query time, the AI model incrementally builds and maintains a persistent wiki: a structured, interlinked collection of files sitting between you and the raw sources. When you add a new source, the model indexes it, reads it, extracts key information, and integrates it into the existing wiki, updating entity pages, revising topic summaries, noting where new data contradicts old claims. The knowledge is compiled once and then kept current.
The wiki becomes a persistent, compounding source of context. The cross-references that are typical of Wikipedia, and available as a feature in Obsidian, are already there in this structure.
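A minimal sketch of this “compile once, keep current” idea, assuming a wiki held as topic pages with summaries, provenance, and symmetric cross-links. The structure and field names are my illustration, not Karpathy’s actual design:

```python
# Persistent wiki: topic -> {"summary", "sources", "links"}.
# New sources are integrated once, instead of being re-read at every query.
wiki: dict[str, dict] = {}

def integrate(topic: str, source_id: str, summary: str,
              related_topics: list[str]) -> None:
    page = wiki.setdefault(topic, {"summary": "", "sources": [], "links": set()})
    page["sources"].append(source_id)    # provenance survives compilation
    page["summary"] = summary            # in practice: merged and revised, not replaced
    page["links"].update(related_topics) # cross-references, Obsidian-style
    for t in related_topics:             # keep backlinks symmetric
        other = wiki.setdefault(t, {"summary": "", "sources": [], "links": set()})
        other["links"].add(topic)

integrate("RAG", "karpathy-llm-wiki",
          "Query-time retrieval recomputes knowledge instead of accumulating it.",
          ["context windows"])
integrate("context windows", "note-0042",
          "Long contexts get compressed; nuance fades.",
          ["RAG"])
```

Once compiled, answering a question means reading a few small, already-synthesised pages rather than re-ranking thousands of raw chunks.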
The promise of LLM Wiki
My shift to Obsidian, driven by AI interoperability (which I had previously written about), has sped up because of this. There is a cost angle here as well: as your knowledge base expands, AI has to sift through more material to retrieve context that is useful to you. The more it has to read, the more tokens it consumes, and the more expensive each output or insight becomes. If you structure information so that it can be retrieved more directly, your cost of using AI goes down.
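The cost argument is simple arithmetic. The figures below are round assumptions for illustration (note length and per-token price are not real vendor numbers); only the 10,000-note base comes from this article:

```python
# Illustrative token-cost arithmetic. Both constants are assumptions.
PRICE_PER_MILLION_INPUT_TOKENS = 3.00   # assumed price, USD
TOKENS_PER_NOTE = 500                   # assumed average note length

def cost_per_query(notes_in_context: int) -> float:
    tokens = notes_in_context * TOKENS_PER_NOTE
    return tokens * PRICE_PER_MILLION_INPUT_TOKENS / 1_000_000

# Unstructured: dump a large slice of a 10,000-note base into context.
print(f"${cost_per_query(2000):.2f} per query")   # 2,000 raw notes
# Structured: a compiled wiki lets you pull a handful of relevant pages.
print(f"${cost_per_query(20):.4f} per query")     # 20 targeted pages
```

Under these assumptions the gap is two orders of magnitude per query, and it compounds with every question you ask.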
A traditional note system is a raw code repository, while the LLM Wiki acts like a compiler: it processes raw inputs into structured, optimised outputs that can be reused efficiently without recomputation. It also creates an interconnected network of information and concepts, much as a brain does. This structure is not meant for humans, but for AI to query.
It can also do something very unexpected: surface things you hadn’t considered, because of how concepts connect. This is a very human trait, symptomatic of how the brain works and how memories surface. We connect dots, remember random things, and carry an inference from one domain into another. There are moments of unexpected clarity, whether in solitude, while dreaming, or in the shower.
The LLM Wiki idea is powerful because it offers up the opportunity to replicate this: if we have enough disparate notes, perhaps AI can surface something that we hadn’t considered, and general purpose AI cannot because it doesn’t know which dots to connect.
Lastly, there lies an opportunity to document using AI, something we rarely do. We make thousands of decisions each day, and almost never document any of them. Most second brain systems document findings, information and knowledge, but rarely capture why we chose one option over another. If you ever read, in DeepSeek or Claude, what the model is thinking, you’ll see it trying to decipher what exactly you want. When you respond, it captures what you decided and why. Decision logic, whether personal or organisational, resides in people’s heads: it’s why organisations change when people change.
The concept that addresses this is the context graph: a structured record of how decisions were made, what was chosen over other options and why, what rule was applied or bent, who approved it and under what conditions, and what precedent it creates. A system can give you an output and reason about why you should do one thing over another, until you challenge its assumptions. A context graph helps reduce or remove those assumptions.
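One way to picture a context-graph entry is a structured record per decision, with the fields listed above: options, choice, rationale, rule applied, approver, conditions, precedent. The field names, the example decision, and the retrieval helper below are my assumptions, sketched for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    # One node in a context graph: captures the "why", not just the "what".
    decision_id: str
    question: str
    options: list[str]
    chosen: str
    rationale: str
    rule_applied: str = ""
    approved_by: str = ""
    conditions: list[str] = field(default_factory=list)
    precedent_for: list[str] = field(default_factory=list)

log: list[DecisionRecord] = []
log.append(DecisionRecord(
    decision_id="2024-03-note-tooling",
    question="Which note system should the AI workflow build on?",
    options=["Notion", "Obsidian", "Google Keep"],
    chosen="Obsidian",
    rationale="Plain markdown files are directly readable by AI tools.",
    rule_applied="Prefer open, local-first formats.",
    precedent_for=["any future knowledge-base tooling choice"],
))

def precedents_for(topic: str) -> list[DecisionRecord]:
    # Retrieval by precedent rather than keyword: an AI consulting this graph
    # inherits past reasoning instead of re-deriving it from assumptions.
    return [d for d in log if any(topic in p for p in d.precedent_for)]
```

A flat log like this becomes a graph once `precedent_for` entries point at other decision IDs; the structure, not the storage, is what carries the decision logic out of people’s heads.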
The important part here is that capturing decisions, and structuring and curating the context around them, is an investment of time that human beings rarely make. Now we have systems that can do it for us.
When your system starts capturing why decisions were made, the role of AI changes. It stops being a tool that retrieves information and starts behaving more like a system that decides, decides better, and keeps improving its decisions. That doesn’t exist in most setups, except in some well-made agentic architectures.



