The model is rented. Your workspace is yours.
Coupling your workflow to one model is a single point of failure. Pricing shifts. Features get deprecated. A better model lands somewhere else. If your data, your protocols, and your methods all live inside one vendor's runtime, every shift becomes a multi-week refactor. The fix isn't picking the "right" model. It's building so the question doesn't matter.

———————————————————————————————————————————————

Three layers, three different jobs

Pull your stack apart and look at what's actually coupled.

Data and protocols. Markdown files, JSON, plain folders. This layer should already be model-agnostic. If you're storing notes, briefs, plans, and memory inside a vendor's chat history, you don't have a workspace, you have a session.

Tooling. Python scripts, CLIs, MCP (Model Context Protocol) servers. Also agnostic by default. The trap is hardcoded paths and runtime assumptions baked into shell commands. Lift those into env vars. Now any worker can run them.

Agent invocation. This is the only layer that's genuinely model-coupled. Skills, hooks, slash commands, dispatch syntax. Don't pretend it isn't. Accept the coupling here, then make it swappable.

What you actually want consistent: not the model, the behaviour around it.

1. Voice rules. Same copy standard regardless of who's drafting.
2. Response format. Same skim-friendly structure across every backend.
3. Methodologies. Brainstorm, brief, handoff, audit. Same shapes.
4. Memory access. One protocol, one source of truth, every model reads it identically.
5. Output contract. Same final message shape so the same audit gate can verify it.

That's the consistency that matters. The reasoning style of Opus vs GPT-5.5 vs Gemini stays distinct on purpose. That's why you have three.

The shared layer

Pull the always-on stuff into a .shared/preferences/ folder. Voice rules, response format, anything load-bearing on every interaction. One source. Generate CLAUDE.md, AGENTS.md, GEMINI.md from it.
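The tooling fix (lift hardcoded paths into env vars) can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed layout: WORKSPACE_ROOT and the memory/ folder are hypothetical names chosen for the example.

```python
import os
from pathlib import Path

# Resolve the workspace from an env var instead of a hardcoded vendor path.
# WORKSPACE_ROOT is a hypothetical variable name; the "." default keeps the
# script runnable when the variable isn't set.
ROOT = Path(os.environ.get("WORKSPACE_ROOT", ".")).resolve()

def memory_path(name: str) -> Path:
    """Locate a memory file in the shared workspace, not a vendor runtime."""
    return ROOT / "memory" / f"{name}.md"
```

Any worker, on any backend, runs the same script; only the environment changes.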
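The output contract (point 5) is what makes one audit gate work across every backend. A sketch of such a gate, assuming a hypothetical contract of three required section headers; the section names are illustrative, not from the post:

```python
# Hypothetical output contract: every final message must carry these
# sections, whichever model produced it.
REQUIRED_SECTIONS = ("## Summary", "## Changes", "## Next steps")

def passes_audit(message: str) -> bool:
    """One gate for every backend: verify the final message shape."""
    return all(section in message for section in REQUIRED_SECTIONS)
```

Because the gate checks shape rather than style, the models' distinct reasoning voices pass through it unchanged.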
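The shared-layer generation step could be as small as this sketch, assuming the preferences live as markdown fragments under .shared/preferences/ and that plain concatenation is enough (a real build might template per model):

```python
from pathlib import Path

SHARED = Path(".shared/preferences")
# One source, three model-specific entry files.
TARGETS = ("CLAUDE.md", "AGENTS.md", "GEMINI.md")

def build(root: Path = Path(".")) -> None:
    # Concatenate every shared fragment in a stable (sorted) order,
    # then write the same body to each model's entry file.
    fragments = sorted((root / SHARED).glob("*.md"))
    body = "\n\n".join(f.read_text().strip() for f in fragments)
    for target in TARGETS:
        (root / target).write_text(body + "\n")
```

Edit a voice rule once in .shared/preferences/, rerun the build, and all three backends pick it up.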