#7dayAISChallenge – Day 2 🏷️
Built a Firecrawl scraper today using its MCP server in Claude Code. The core lesson for me: how to make MCP servers actually reliable.

💡 MCP servers ship "thin" on purpose. Each tool comes with just a one-line description, because those descriptions are loaded into Claude's context on every turn; keeping them short avoids bloating context across the many MCPs you might install. The tradeoff: the server can't ship opinions about your budget, your use case, or your project conventions.

🔧 The fix: a project-local cheatsheet. A markdown file in the repo with an intent → tool decision matrix, cost annotations (cheap vs. expensive calls), anti-patterns ("don't use X for Y"), and gotchas you've hit. Then wire it into CLAUDE.md with a line like "consult firecrawl-cheatsheet.md before calling any mcp__firecrawl__* tool." That trigger is what forces Claude to read it every time, not just when it feels like it.

🎯 Why this matters: LLMs are non-deterministic by default. Give one the same prompt twice and it may make different tool choices. Without rails, Claude might reach for the most powerful-sounding tool when a cheap scrape would do the job at 1/10th the cost. A cheatsheet collapses the decision space: predictable tool choice, predictable cost, fewer credit surprises.

⚡ But don't pre-cache. Not every MCP deserves a cheatsheet; writing and maintaining one has its own cost. My rough test: does it have >5 overlapping tools? A real cost per call? Non-obvious gotchas? Will I reuse it across many sessions? 2+ yes → worth writing. Otherwise skip it and let Claude work from the built-in tool descriptions. If you're unsure, install the MCP, use it raw, and let actual friction tell you what to encode. Don't speculate.

On to Day 3. 🚀
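P.S. A minimal sketch of what such a cheatsheet file could look like. Tool names, cost labels, and the example anti-pattern are illustrative only, not Firecrawl's actual tool list or pricing:

```markdown
# firecrawl-cheatsheet.md (illustrative sketch)

## Intent → tool
| I want to…                   | Use                     | Cost      |
|------------------------------|-------------------------|-----------|
| Fetch one known URL          | mcp__firecrawl__scrape  | cheap     |
| Discover a site's URLs       | mcp__firecrawl__map     | cheap     |
| Pull many pages recursively  | mcp__firecrawl__crawl   | expensive |

## Anti-patterns
- Don't crawl to get a single page; a plain scrape does it for a fraction of the credits.

## Gotchas
- (record real friction here as you hit it, instead of speculating up front)
```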
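P.P.S. The "should I write a cheatsheet?" test above can be sketched as a quick score. A toy illustration only; the thresholds are just the rules of thumb from this post:

```python
def worth_a_cheatsheet(num_overlapping_tools: int,
                       real_cost_per_call: bool,
                       non_obvious_gotchas: bool,
                       reused_across_sessions: bool) -> bool:
    """Rough heuristic: 2+ 'yes' answers -> worth writing a cheatsheet."""
    score = sum([
        num_overlapping_tools > 5,   # many similar tools Claude must pick between
        real_cost_per_call,          # calls burn real credits
        non_obvious_gotchas,         # footguns the one-line descriptions omit
        reused_across_sessions,      # payoff compounds over many sessions
    ])
    return score >= 2

# A Firecrawl-style server: many tools, paid calls, reused often.
print(worth_a_cheatsheet(8, True, True, True))     # True
# A small free utility server used once: let Claude wing it.
print(worth_a_cheatsheet(2, False, False, False))  # False
```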