Hooks as Reflexes
In our last post, The Agent Memory Problem, we showed how local `CLAUDE.md` files can serve as working memory—a place where key context lives and is accessible to our agent without any extra wiring.
But there is a catch.
Even though `CLAUDE.md` files are loaded automatically by Claude Code, there is no guarantee their content will serve well as working memory. At every prompt we would either need to remind the agent to use them as we expect, or find another way to inject the right details and usage instructions into context exactly when they are needed.
This is where 'Hooks' (our second pillar) come in.
🦾 Hooks as Reflexes
Anthropic's Claude Code gave us a platform, and it exposes several elements we can use to improve it. Hooks are one of them.
Think of hooks as 'reflexes'—automatic responses to specific events. When something happens (a file is edited, a command runs, a task finishes), a hook can fire and inject the proper context or instruction at precisely the right moment.
We use hooks to turn "good advice" into "automatic behavior." Instead of hoping the agent remembers, we create systems that ensure it does.
🛠️ Case Study: Turning Git Into Episodic Memory
Let's look at a concrete example of how we can use hooks to create a memory layer.
Most people know Git as a free tool for tracking changes in files. It was created to make collaborative work on code easy: every change is documented with a commit, and whoever made the change leaves a commit message explaining what they did and why.
We realized something: if we could ensure our agent writes detailed commit messages every time, we would get a 'free form of episodic memory'—a searchable history of every decision the agent has made.
But how do we make sure the agent actually does this, every time, without us having to remind it?
The Anatomy of a Behavior:
To implement this, we created what we call a 'behavior'—a coordinated system of hooks working together toward a single objective.
Every behavior has a few key components:
1. A Clear Objective 🎯
For our Git behavior, the objective is simple: Capture high-quality memory in every commit message.
The agent should explain WHY changes were made, not just WHAT changed. It should categorize the memory (Is this a Plan? An Observation? Part of executing a task?) so that future agent sessions can search and understand it.
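One possible shape for such a memory-friendly commit message is sketched below. The exact template (category tag in the subject line, a WHY section in the body) is my assumption, not a format from the post; adapt it to your own behavior's conventions.

```python
# Sketch of a commit-message builder for an episodic-memory format.
# The [Category] subject prefix and WHY body are assumptions.

def build_memory_commit(category: str, summary: str, why: str) -> str:
    """Format a commit message as a searchable episodic-memory entry."""
    valid = {"Plan", "Observe", "Execute", "Verify"}
    if category not in valid:
        raise ValueError(f"category must be one of {sorted(valid)}")
    # The subject line carries the category so future sessions can grep for it.
    subject = f"[{category}] {summary}"
    body = f"WHY: {why}"
    return f"{subject}\n\n{body}"
```

For example, `build_memory_commit("Execute", "add retry logic to fetcher", "upstream API drops some requests")` yields a subject a future session can search by category.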
2. Triggers on Different Hooks ⚡
A behavior uses multiple hooks, each triggered at the right moment:
* After a file is edited: The hook detects changes and starts tracking them.
* After a command runs: If the agent runs a `git commit`, the hook marks that the memory was captured.
* Before the task ends: If there are uncommitted changes, the hook blocks the agent from finishing until the memory is saved.
Each trigger injects the best context at the best time.
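Concretely, in Claude Code these triggers map onto hook events registered in a settings file. A minimal sketch, using the `PostToolUse` and `Stop` event names from the Claude Code hooks documentation; the script paths `track_changes.py` and `require_commit.py` are hypothetical:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "python3 .claude/hooks/track_changes.py" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "python3 .claude/hooks/require_commit.py" }
        ]
      }
    ]
  }
}
```

Check the current hooks documentation for the exact schema before copying this into your own `.claude/settings.json`.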
3. Shared Data Space 📊
All these hooks need to communicate. They share a data file where they can:
* Generate hints ("You might want to commit these three related files together")
* Generate directives ("STOP: You must commit before ending this task")
* Track which files changed and when
* Enforce the objective by blocking specific actions
This shared space lets the hooks work together, not just as independent scripts.
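Here is a minimal sketch of that shared data space in Python, assuming the hooks agree on a single JSON state file. The path and field names below are hypothetical, not part of any Claude Code API:

```python
# Shared state for a behavior's hooks: each hook loads the state file,
# updates it, and writes it back. Path and schema are assumptions.
import json
import time
from pathlib import Path

STATE_FILE = Path(".claude/behavior_state.json")  # hypothetical location

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"changed_files": {}, "hints": [], "directives": []}

def save_state(state: dict) -> None:
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))

def record_change(state: dict, path: str) -> None:
    """Called by the post-edit hook: track which files changed and when."""
    state["changed_files"][path] = time.time()

def add_hint(state: dict, text: str) -> None:
    """A soft suggestion, e.g. 'commit these three related files together'."""
    state["hints"].append(text)

def add_directive(state: dict, text: str) -> None:
    """A blocking instruction, e.g. 'STOP: commit before ending this task'."""
    state["directives"].append(text)
```

Because every hook reads and writes the same file, a hint generated by one hook is visible to the hook that fires next.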
4. Two Internal Helpers 🤖
Our behavior also includes two specialized helpers (internal LLM calls), each with its own goal:
* The Objective-Focused Helper: Obsessed with the behavior's goal. It analyzes each change and suggests the best commit message, ensuring it fits our memory format. These internal LLM calls carry the behavior's cognitive load across the different hooks we define.
* The Reflective Helper: Steps back periodically to look at patterns. It asks: "Are we doing this efficiently? Can we improve the behavior itself?" The result is that the behavior evolves itself through experience.
These helpers don't write commits themselves—they generate suggestions that the main agent (you, working with Claude) can use.
How It Works in Practice:
When the agent is working on a task:
1. The agent edits files, creates directories, and runs commands.
2. The behavior's hooks are quietly tracking every change.
3. At the right moment, the hook injects a message: "You have uncommitted changes. Capture this episode in your memory. Write a commit message explaining WHY you made these changes and categorize it (Plan/Observe/Execute/Verify)." [BTW, all these hook outputs are invisible to the user and do not show up in the default chat history]
4. The agent pauses, writes the commit, and the memory is saved.
5. If the agent tries to finish the task without committing, the hook blocks it with a reminder.
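The blocking step (5) can be sketched as a small Stop-hook script. The exit-code convention here (a nonzero exit with the message on stderr surfaces a blocking reminder to the agent) follows my reading of the Claude Code hooks docs; verify against the current documentation before relying on it:

```python
# Sketch of a Stop hook that blocks task completion while changes are
# uncommitted. Exit-code semantics are an assumption to verify.
import subprocess
import sys
from typing import Optional

def uncommitted_reminder(porcelain: str) -> Optional[str]:
    """Return a blocking reminder if `git status --porcelain` shows changes."""
    dirty = [line for line in porcelain.splitlines() if line.strip()]
    if not dirty:
        return None
    return (
        f"STOP: {len(dirty)} uncommitted change(s). Capture this episode in "
        "memory: write a commit message explaining WHY you made these changes "
        "and categorize it (Plan/Observe/Execute/Verify)."
    )

def main() -> int:
    status = subprocess.run(
        ["git", "status", "--porcelain"], capture_output=True, text=True
    )
    reminder = uncommitted_reminder(status.stdout)
    if reminder:
        # Printing to stderr and exiting nonzero is how the hook surfaces
        # a blocking message back to the agent.
        print(reminder, file=sys.stderr)
        return 2
    return 0

# Wire this up as the hook's entry point, e.g.: sys.exit(main())
```

Keeping the check (`uncommitted_reminder`) separate from the process plumbing (`main`) makes the logic easy to test without a real repository.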
The result? The agent never loses context. Every session builds on the episodic memory from previous sessions.
As a result of our Git memory behavior, the agent maintains practical context while processing any user request, documenting every change to any file—whether client documents, project documents, code files, or anything else. This context can later feed into other behaviors, and overall it makes the agent's behavior reproducible.
🏗️ Behaviors as Building Blocks
This is just one example. You can create other behaviors using the same pattern:
* Directory Memory Behavior: When a new directory is created, it automatically creates a local `CLAUDE.md` file to track its purpose.
* Style Enforcement Behavior: When starting a task, inject your style guide into context automatically.
* Safety Net Behavior: Before deleting files, verify they're backed up or not critical.
Each behavior follows the same anatomy: objective, triggers, shared data, and helpers that enforce it.
🚀 The Takeaway
Hooks let us build reflexes into our agent. Instead of hoping it follows best practices, we create systems that guide it automatically.
And because Claude Code gave us this hook system, we can build these behaviors purely through conversation. No coding required—just defining objectives and letting the agent help you build its own reflexes. [The patterns I am including in the template agent will make the conversational engineering even easier. Stay tuned for the template agent]
---
Part of the "Claude Agent Engineering" series. We rise together.
Hadi Nayebi
Agent Engineering (skool.com/claude-agents-engineering-4513)