Anyone played with Andrej Karpathy's "LLM Wiki" idea from the gist he dropped?
Quick version in case you missed it: instead of using RAG to re-chunk your sources every time you ask a question, you compile each source once into a persistent markdown wiki. The LLM extracts concepts, writes entity and concept pages, updates cross-references, flags contradictions, and maintains the whole thing. Future queries read the pre-synthesized wiki.

The part that clicked for me: the reason most of us abandon our second brains is that backlink and cross-reference upkeep is boring. The LLM doesn't care. It's happy to touch fifteen pages in one pass.

I spent a couple of weeks turning Karpathy's pattern into a Claude Code plugin that actually scales (atomic pages, sharded indexes, BM25 fallback past ~300 pages). It also runs in Codex, Cursor, Gemini CLI, Pi, and OpenClaw through the skills CLI.

Install in Claude Code:

/plugin marketplace add praneybehl/llm-wiki-plugin
/plugin install llm-wiki@llm-wiki

Or in any other supported agent:

npx skills add praneybehl/llm-wiki-plugin -a <your-agent>

Five slash commands (init, ingest, query, lint, stats), stdlib-only Python, no dependencies. Plays well with Obsidian if you want the graph view.

Repo: https://github.com/praneybehl/llm-wiki-plugin
Karpathy's gist: https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f

Curious if anyone here has tried the pattern themselves. What did you ingest first, and what broke before it worked?
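For anyone wondering what "BM25 fallback" means in practice: once the wiki grows past the point where you can stuff an index into context, you rank pages lexically and only read the top hits. This is not the plugin's actual code, just a rough stdlib-only sketch of BM25 over a dict of page bodies (page names, contents, and the k1/b parameters are illustrative):

```python
import math
import re
from collections import Counter


def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())


def bm25_rank(query, pages, k1=1.5, b=0.75):
    """Rank wiki pages against a query with BM25, highest score first."""
    docs = {name: tokenize(body) for name, body in pages.items()}
    n = len(docs)
    avgdl = sum(len(toks) for toks in docs.values()) / n

    # Document frequency: how many pages mention each term at least once.
    df = Counter()
    for toks in docs.values():
        df.update(set(toks))

    scores = {}
    for name, toks in docs.items():
        tf = Counter(toks)
        score = 0.0
        for term in tokenize(query):
            f = tf.get(term, 0)
            if f == 0:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * f * (k1 + 1) / (
                f + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The nice part of keeping it lexical and stdlib-only is there's nothing to install and nothing to embed; the LLM just reads the top few pages the ranker returns.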