For any agent, memory is the most fundamental pillar. Yet most users treat the LLM that runs the agent as the source of its memory. (Check our previous post, *What is this all about!*, where we discuss why the LLM is not the agent.) Yes, we know LLMs "know everything": they are trained on the literal text of the internet. But when it comes to actual work, 99% of this knowledge is useless, or even problematic. We value the intelligence and reasoning abilities of the LLM, but to work as an agent, we need a different 'type' of memory. We should learn not to rely on the LLM for every aspect of our agent. The agent must build its practical knowledge from experience.

## 💾 The Concept of "Memory Forms"

In our agent design, we use the term 'Memory Form' to describe anything that helps the agent produce reliable, reproducible behavior. It's not just text in a file; it's structure.

* Knowledge Files (`.claude/knowledge/`): Static, reference memory, along with any other files the agent has access to.
* MCP Tools: Capability memory (remembering 'how' to do something new).
* Hooks: Procedural and reflexive memory (remembering 'when' to do something).

The key idea is simple: use the LLM for what it's good at (intelligence and reasoning), and don't try to address all of the agent's fundamental requirements with the LLM and its pre-trained memory.

## 🏛️ The Three Pillars

Now that we are familiar with the concept of memory forms, we define the pillars of our agent as:

1. Memories: The context-injection layer (Working Memory), specifically the local `CLAUDE.md` layer.
2. Hooks Ecosystem: A growing control layer that remembers to inject hints, directives, and reminders at the best moments during your work.
3. Intentions: An MCP layer that dynamically generates instructions based on pre-set intentions, confining the agent to what it can do in practice.

In this post, we focus exclusively on the first pillar, 'Memories', specifically the Working Memory functionality of `CLAUDE.md` files.
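To make the Working Memory idea concrete, here is a minimal sketch of what a project-local `CLAUDE.md` might contain. The file contents below are hypothetical examples, not taken from any real project; the point is that the agent rebuilds this file from experience rather than relying on pre-trained knowledge.

```markdown
# CLAUDE.md (project working memory, injected into context each session)

## Conventions learned from experience
- Run the project's test suite before declaring a task done; CI mirrors it exactly.
- New endpoints in `api/` use snake_case names; do not introduce camelCase.

## Known pitfalls
- The migration script must run against a local DB snapshot, never production.
- Long builds time out in the sandbox; prefer incremental builds.
```

Notes like these are cheap to maintain and give the agent reproducible behavior that no amount of pre-training can: they encode what is true of *this* project, right now.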