I use Claude Code + n8n MCP to build full client workflows in under 40 minutes. Here's my exact process.
I run an AI automation freelance business, and I've been using Claude Code combined with the n8n MCP server for the past few months. It completely changed how I deliver to clients.

Before this setup, building a single n8n workflow for a client took me 2-4 hours. Now I describe what I want in plain English and Claude builds the entire workflow: nodes, triggers, data transformations, error handling, everything.

━━━━━━━━━━━━━━━━━━

🛠️ MY EXACT STACK:

• Claude Code (CLI) with Opus as the main model
• n8n self-hosted on Coolify (Docker)
• n8n MCP server, so Claude has direct read/write access to my n8n instance
• Supabase for client data

━━━━━━━━━━━━━━━━━━

📋 REAL EXAMPLE FROM LAST WEEK:

A coaching client needed: lead capture from Typeform → AI qualification with Claude → auto-booking on Google Calendar → personalized welcome email → Slack notification to their team.

Old way: 3+ hours of manual node wiring, testing, debugging.

New way: I described the workflow to Claude Code, it built it in ~15 minutes, and I spent another 20 minutes testing edge cases and adjusting the prompts. Done in under 40 minutes.

━━━━━━━━━━━━━━━━━━

⚠️ WHAT I LEARNED THE HARD WAY:

1. Claude generates workflows that are about 50-60% production-ready. The skeleton is solid, but you NEED to manually check the data mapping and error handling.
2. Always set up an Error Workflow in n8n (Settings → Error Workflow) that alerts you on Slack. Claude sometimes forgets this.
3. Never deploy directly to a client's instance. Build on your staging n8n first, test, then export/import.
4. Claude Code + n8n MCP works best for workflows with 5-15 nodes.
Beyond that, break it into sub-workflows.

━━━━━━━━━━━━━━━━━━

📈 RESULTS SO FAR:

• Handling 3x more clients with the same time investment
• Average delivery time went from 1-2 weeks to 3-5 days
• Client satisfaction went up because I can iterate faster on feedback

━━━━━━━━━━━━━━━━━━

The combo of Claude Code for building + n8n for running in production is honestly the best setup I've found. They're complementary, not competing.
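P.S. For the staging-first tip (#3 in my lessons list): before importing an exported workflow into a client's instance, I strip the instance-specific bits so credentials get re-linked on their side. Here's a minimal Python sketch of that scrubbing step. It assumes n8n's exported workflow JSON carries fields like "credentials" and "webhookId" on nodes; treat the exact field names as assumptions and check your own export.

```python
import json

def scrub_for_import(workflow: dict) -> dict:
    """Return a copy of an exported n8n workflow with instance-specific
    fields removed, ready to import into a different n8n instance.
    Field names here follow what I've seen in exports; verify on yours."""
    clean = json.loads(json.dumps(workflow))  # cheap deep copy
    clean.pop("id", None)  # let the target instance assign a fresh id
    for node in clean.get("nodes", []):
        node.pop("credentials", None)  # re-link credentials client-side
        node.pop("webhookId", None)    # webhook ids don't transfer
    return clean

# Illustrative export fragment (values are made up)
staging_export = {
    "id": "42",
    "name": "Lead capture",
    "nodes": [
        {
            "name": "Typeform Trigger",
            "webhookId": "abc",
            "credentials": {"typeformApi": {"id": "7"}},
        }
    ],
    "connections": {},
}

ready = scrub_for_import(staging_export)
```

The deep copy matters: you want your staging export left untouched so you can keep iterating on it after handing the cleaned copy to the client.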