Both stay fully file-based and transparent, with no black-box vector stores. Both rely on markdown and structured folders, and both treat the LLM as an active maintainer of a compounding knowledge base.
Shared Strengths
- Raw sources compile into a structured, linked wiki that improves over time.
- Obsidian delivers clean graph views, backlinks, and browsing.
- More token-efficient than classic RAG: the LLM synthesizes summaries and cross-references once instead of re-fetching raw chunks on every query.
- Everything remains human-inspectable and future-proof.
Where They Differ in Practice
ICM + Obsidian + Karpathy LLM Wiki
- Setup: Drop files in raw/, use Claude Code with a CLAUDE.md schema to build a wiki/ folder with backlinks and an index. Open it as an Obsidian vault.
- Token handling: Strong compression via the compiled wiki, but agent runs still often load larger wiki sections or raw excerpts. Tokens grow with vault size unless you manually chunk or lean on long context. No native auto-consolidation or layered memory.
- Agent workflow: Excellent for knowledge synthesis and second-brain tasks. Agent edits the wiki live; Obsidian graphs shine for research or one-off compilation.
- Heavy-use limits: Context can bloat on complex multi-step tasks, or as the external corpus and history grow. Relies more on manual discipline or large model context windows.
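To make the folder contract concrete, here is a minimal Python sketch of the raw/-to-wiki/ shape described above. Everything in it is illustrative: the real setup has the LLM write the summaries and cross-links itself, while this only scaffolds the index file and Obsidian-style wikilinks; the function name `scaffold_wiki` is invented for this example.

```python
from pathlib import Path

def scaffold_wiki(vault: Path) -> Path:
    """Build a bare wiki/index.md linking every note in raw/.

    Illustrative only: in the actual workflow the LLM synthesizes
    the summaries and backlinks. This shows just the folder contract
    (raw/ in, wiki/ out) that Obsidian then opens as a vault.
    """
    raw, wiki = vault / "raw", vault / "wiki"
    wiki.mkdir(parents=True, exist_ok=True)
    lines = ["# Index", ""]
    for note in sorted(raw.glob("*.md")):
        # Obsidian-style wikilink, so the graph view picks it up
        lines.append(f"- [[{note.stem}]]")
    index = wiki / "index.md"
    index.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return index
```

Anything the LLM later adds to wiki/ (summaries, cross-references) layers on top of this skeleton without touching raw/.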
Hermes-stack inside ICM
- Setup: Retains full ICM folder discipline (numbered stages, memory, user and soul .md files). Hermes + Cognee add structured relational memory on top. External materials are ingested once into layered memory and skills. Karpathy-style wiki compilation works as a built-in skill; Obsidian remains usable for browsing.
- Token handling: Significantly tighter. Prompt memory stays tiny and cache-friendly. SQLite holds full history but only relevant snippets get summarized and injected. Skills load as short summaries with progressive disclosure. Cognee pulls targeted excerpts. Auto-compression folds insights back into lean files. Tokens stay flat or drop as external knowledge grows.
- Agent workflow: Built for autonomous, repeated execution. Hermes turns reference patterns into patched, reusable skills. Selective tools and self-improvement loop make the agent sharper on your materials over time. Full filesystem transparency with SQLite audit trail.
- Heavy-use strengths: Ideal for ongoing agentic work (coding, research loops, client projects) where external references are frequent. Cross-session memory compounds without cost creep and delivers cleaner context with fewer hallucinations.
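The "SQLite holds full history, only relevant snippets get injected" idea above can be sketched in a few lines of Python. Big hedge: the schema, function names, and keyword-overlap scoring here are all stand-ins I made up for illustration; Cognee's actual retrieval is far richer than this. The point the sketch makes is structural: history grows without bound on disk, but the prompt only ever receives a fixed top-k slice.

```python
import sqlite3

def init_memory(db: sqlite3.Connection) -> None:
    # Full history lives in SQLite; the prompt never sees all of it.
    db.execute(
        "CREATE TABLE IF NOT EXISTS history "
        "(id INTEGER PRIMARY KEY, role TEXT, text TEXT)"
    )

def remember(db: sqlite3.Connection, role: str, text: str) -> None:
    db.execute("INSERT INTO history (role, text) VALUES (?, ?)", (role, text))

def relevant_snippets(db: sqlite3.Connection, query: str, k: int = 3) -> list[str]:
    """Crude stand-in for Cognee-style retrieval: keyword-overlap scoring.

    Only the top-k snippets are injected, so prompt size stays flat
    no matter how large the stored history grows.
    """
    terms = set(query.lower().split())
    rows = db.execute("SELECT text FROM history").fetchall()
    scored = sorted(rows, key=lambda r: -len(terms & set(r[0].lower().split())))
    return [r[0] for r in scored[:k]]

def build_prompt(db: sqlite3.Connection, task: str) -> str:
    # Inject only the selected snippets, never the raw history table.
    context = "\n".join(f"- {s}" for s in relevant_snippets(db, task))
    return f"Relevant memory:\n{context}\n\nTask: {task}"
```

Swap the scoring function for a real retriever and fold summaries back into the table periodically, and you get the flat-token behavior described above.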
Bottom Line
Use ICM + Obsidian + Karpathy Wiki when your focus is building and browsing a rich personal knowledge base: it’s lightweight, visual, and great for synthesis. Use Hermes-stack inside ICM when you need the same markdown foundation plus runtime intelligence for optimized tokens and self-improving behavior at scale; it supercharges ICM by turning external materials into selectively loaded, living knowledge. Many combine both: run Hermes with the Karpathy-style wiki as a skill, then view it in Obsidian. The Hermes layer simply adds the memory management and efficiency that pure compilation setups often miss in long-running workflows.
In the field, Hermes keeps per-task tokens flatter and results sharper when external references are frequent and evolving.
***Disclaimer: This Hermes + Cognee stacking talk is advanced, highly experimental mad-scientist $hit. My setups and ideas are garage tinkering — “duct-tape two beasts and see what explodes” energy. Not production gospel. Can break gloriously.
Pearl of wisdom:
Master Jake’s ICM method first. Nail prompt, context, and harness engineering before touching any of this.
If your agent goes rogue, don’t come looking for me.
Use at your own risk.
***