Karpathy doesn't use a fancy app to manage his research. He uses a folder, Obsidian, and an AI — and I want to copy it.
He posted about it last week. The short version: he dumps raw material — articles, notes, papers, images — into a folder, then lets a large language model (LLM — the AI brain behind tools like Claude or ChatGPT) build a wiki from it automatically. The LLM writes the summaries, creates the links between ideas, organizes everything into categories. He barely touches the wiki himself. When it gets big enough, he asks it questions and gets answers pulled from his own research.
I've been sitting with this for a few days, thinking about what it would look like for my work.
---
What My Work Actually Looks Like
I build things. Agents, content apps, Claude Code workflows, automation scripts. A lot of what I do involves figuring something out — what tool does what, how to wire two things together, what prompt pattern produces the right output, what broke last time and why.
Most of that knowledge lives in my head, or in scattered notes, or in past conversations I can't find anymore.
That's the problem. Every time I start something new, I spend time re-learning things I already know. What flags to use in Claude Code. What agent structure works for what kind of task. What API response format caused everything to break last month.
Karpathy's idea is simple: stop keeping that knowledge in your head. Dump it in a folder. Let the AI organize it. Ask it back when you need it.
---
The Specific Thing I Keep Thinking About
He mentioned that his wiki grows and gets more useful with every question he asks. He asks something, the AI goes through his notes and answers it — and then he saves that answer back into the wiki. So every session adds something. Nothing gets lost.
That hit me because the opposite is true for how I work right now. Every build session ends, and most of the small things I figured out just disappear. The next session starts almost from scratch on some of the same ground.
If I had a knowledge base for my Claude Code workflows alone — prompts that worked, structures that didn't, patterns I figured out, error fixes — and an AI that could surface the right piece when I needed it, I'd stop repeating myself.
---
The Part That Actually Excited Me
He also runs "health checks" on his wiki. He asks the AI to find gaps, spot inconsistencies, and find connections between ideas he hadn't noticed yet. The AI suggests new things to look into.
That's the part I can't stop thinking about.
Not just a system that stores what I know. A system that notices what's missing. For someone building content automation apps, that means the system isn't just remembering what tools I've used — it's noticing when two things I built separately could be connected. It's pointing to the next piece.
That changes how building feels. Less like starting from zero every time, more like picking up a thread.
---
What I'm Going to Test
I'm starting with one folder. My Claude Code workflows — the scripts, prompts, notes, fixes, things that broke and how I solved them.
I'll ask Claude to read through everything and build an index: summaries of each file, links between related ideas, a map of what I already know.
From there, I'll ask it questions mid-project. "What pattern did I use last time for a multi-step agent?" "What was the issue I kept hitting with streaming output?" Instead of digging through old files or trying to remember, I just ask.
I'm not building the full Karpathy setup yet. I'm testing whether the core idea holds: does having a searchable, AI-organized version of my own work actually save time and reduce the re-learning?