Why I Stopped Letting AI Drive My Workflows
I’m starting to believe most “AI workflow problems” aren’t AI problems at all. They’re workflow problems.

Over the last few months, I’ve seen (and built) many automations where AI is used everywhere:

- AI decides routing
- AI validates inputs
- AI handles edge cases
- AI becomes the logic

At first, it feels powerful. In practice, it quickly becomes messy. When AI logic is scattered across a workflow:

- Failures are hard to trace
- Decisions are hard to explain
- Debugging turns into guesswork
- Reliability drops under real usage

The system doesn’t become smarter; it becomes fragile.

Lately, I’ve changed how I design automation workflows. Workflow first. AI second. Always.

I now try to design workflows that:

- Make sense even without AI
- Have explicit inputs and outputs
- Use deterministic logic for control flow
- Fail in predictable, debuggable ways

Only after that do I add AI, and only where uncertainty is actually useful. In practice, I prefer AI for:

- Classification (with confidence thresholds)
- Drafting and summarization
- Suggestions, not final decisions
- Assisting humans, not silently replacing logic

Tools like n8n make this painfully obvious. When everything is visible node by node, there’s nowhere to hide bad design.

A thought I keep coming back to: if removing AI completely breaks the workflow, the workflow was never solid. AI doesn’t fix messy systems. It amplifies them.

Curious how others are thinking about this: where do you intentionally refuse to use AI, even though you could?
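To make “classification with confidence thresholds, deterministic logic for control flow” concrete, here is a minimal sketch. Everything in it is my own illustration, not any particular tool’s API: `classify` is a stand-in for a real model call (replaced by a keyword heuristic so the snippet runs), and the threshold value and label names are arbitrary. The point is the shape: the AI only suggests a label, and a plain threshold check decides whether to accept it or route to a predictable review path.

```python
# Sketch: gate an AI classification behind a confidence threshold.
# Below the threshold, fall back to a deterministic "needs_review"
# route instead of letting the model silently make the final call.
CONFIDENCE_THRESHOLD = 0.85  # illustrative value, tune per use case

def classify(text: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (label, confidence)."""
    # Hypothetical keyword heuristic so the sketch is self-contained.
    if "refund" in text.lower():
        return ("billing", 0.95)
    return ("general", 0.40)

def route(text: str) -> str:
    label, confidence = classify(text)
    # Deterministic control flow: the AI suggests, the threshold decides.
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    return "needs_review"  # predictable, debuggable failure path

print(route("I want a refund"))     # billing
print(route("Something odd here"))  # needs_review
```

Note that removing the AI here doesn’t break the workflow: every input still ends up in a named bucket, which is exactly the property argued for above.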