This morning Claude sent me this, unprompted, in the middle of a status update:
"Small detour confession: First launch dropped the worker into a fresh-from-main worktree that didn't have the overnight code. Killed it, reset the branch to overnight-build head, relaunched. ~90s of lost time, nothing broken."
That paragraph is the whole point of the system.
A worker spawned on the wrong branch.
Everything kept running.
The code compiled, the tests would've passed, the output would've looked plausible.
On a naive setup I'd have reviewed the work three hours later, wondered why nothing from last night's build was there, and spent an afternoon tracing a ghost.
Instead:
It noticed, killed itself, reset, relaunched, and told me in the same breath as the status report.
How the workspace is wired for this:
- Pre-tool hooks that block before they correct. The branch guard, the rule engine, the permission logger. Every tool call passes through a checkpoint that knows what "wrong" looks like. Wrong branch, wrong directory, wrong command shape: it stops and names the rule (first sketch below).
- Heartbeats with stall thresholds. Workers write a pulse every tool call and every 30 seconds. A silent worker is a dead worker, and the orchestrator knows within three minutes (second sketch below).
- Observer pass on every Stop event. A cheap local model reads the session tail and distils durable facts into the timeline. Surprises get flagged (third sketch below).
- A daily maintenance script on session-start. It audits memory, links, orphans, permission candidates. First thing I see every morning is a report of what drifted overnight.
- Confession in the status stream. When a worker course-corrects, the correction gets reported at the same altitude as the headline result. Not buried. Not smoothed over. Side by side with the win.
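For concreteness, here is a stripped-down sketch of what the branch guard in the first layer could look like. It assumes the hook runner hands the pending tool call to the script as JSON on stdin and treats a non-zero exit as "block this call and surface the message". The payload fields and the expected branch name are placeholders, not the real hook.

```python
#!/usr/bin/env python3
"""Minimal pre-tool branch guard (sketch).

Assumes the hook runner pipes the pending tool call to stdin as JSON
and treats a non-zero exit as a block. Field names and the expected
branch are illustrative, not the real config.
"""
import json
import subprocess
import sys

EXPECTED_BRANCH = "overnight-build"  # hypothetical: the branch the worker should be on


def current_branch() -> str:
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def main() -> int:
    call = json.load(sys.stdin)        # pending tool call, e.g. {"tool_name": ..., "tool_input": ...}
    tool = call.get("tool_name", "")
    branch = current_branch()
    if branch != EXPECTED_BRANCH:
        # Block before correcting: name the rule instead of silently fixing it.
        print(
            f"branch-guard: refusing {tool} on '{branch}', expected '{EXPECTED_BRANCH}'",
            file=sys.stderr,
        )
        return 2
    return 0


if __name__ == "__main__":
    sys.exit(main())
```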
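The heartbeat layer is even simpler. A sketch, assuming each worker touches a file under a shared pulses/ directory on every tool call and on a 30-second timer, and the orchestrator sweeps that directory on its own loop; the paths and the 180-second threshold are illustrative.

```python
#!/usr/bin/env python3
"""Heartbeat stall check (sketch).

Workers call beat() on every tool call and on a 30-second timer; the
orchestrator calls stalled_workers() periodically. Paths and threshold
are placeholders.
"""
import time
from pathlib import Path

PULSE_DIR = Path("pulses")   # hypothetical location of heartbeat files
STALL_AFTER = 180            # seconds of silence before a worker counts as stalled


def beat(worker_id: str) -> None:
    """Worker side: record a pulse by touching its heartbeat file."""
    PULSE_DIR.mkdir(exist_ok=True)
    (PULSE_DIR / worker_id).touch()


def stalled_workers(now: float | None = None) -> list[str]:
    """Orchestrator side: return workers whose last pulse is too old."""
    if now is None:
        now = time.time()
    return [
        pulse.name
        for pulse in PULSE_DIR.glob("*")
        if now - pulse.stat().st_mtime > STALL_AFTER
    ]


if __name__ == "__main__":
    for worker in stalled_workers():
        print(f"stall: no pulse from {worker} in over {STALL_AFTER}s")
```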
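And the observer pass, reduced to its shape: on Stop, read the tail of the session transcript, distil it, append to the timeline. The distil function below is a stand-in for whatever cheap local model you run, and the file paths are placeholders, not the real layout.

```python
#!/usr/bin/env python3
"""Observer pass on a Stop event (sketch).

Reads the session tail, distils it, appends to a timeline file.
Paths and the distil() body are placeholders.
"""
import datetime
from pathlib import Path

TRANSCRIPT = Path("session.log")   # hypothetical transcript path
TIMELINE = Path("timeline.md")     # hypothetical timeline file
TAIL_LINES = 200                   # how much of the session the observer reads


def distil(tail: str) -> str:
    """Stand-in for the local model call.

    Replace with a request to your local model; here it just keeps
    lines that look like corrections or surprises.
    """
    keep = ("error", "killed", "reset", "relaunch", "confession")
    hits = [line for line in tail.splitlines() if any(k in line.lower() for k in keep)]
    return "\n".join(hits) or "(nothing notable)"


def on_stop() -> None:
    lines = TRANSCRIPT.read_text().splitlines()[-TAIL_LINES:]
    facts = distil("\n".join(lines))
    stamp = datetime.datetime.now().isoformat(timespec="minutes")
    with TIMELINE.open("a") as f:
        f.write(f"\n## {stamp}\n{facts}\n")


if __name__ == "__main__":
    on_stop()
```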
The thing that makes this compound is the fifth layer.
The machinery catches the error; fine, lots of systems catch errors.
What most systems then do is quietly retry and present the clean outcome.
My workspace is built to do the opposite.
Catch, correct, and narrate the correction in the open.
Errors and mistakes will happen.
The worker will spawn wrong.
The model will hallucinate a path.
A rule will fire on something legitimate and block me.
A hook will time out.
That's the physics of running multi-agent systems on cheap hardware with imperfect models.
The only question is whether they happen loudly or quietly.
Quiet errors become next week's debugging session.
Loud errors become a single paragraph in a status report, archived, and forgotten by lunch.
Your AI shouldn't be a black box that returns a clean answer.
It should be a nervous collaborator that keeps flagging its own near-misses.
The worker that confessed 90 seconds of lost time today is the same worker I'll trust with a four-hour task tomorrow.
Not despite the confession.
Because of it.
The system around the AI is the intelligence. The loud self-correction is the system earning its keep.
// A<3