12 Key Lessons From Building 100+ AI Agents
Just watched a talk on the "12 factors of AI agents," and it's packed with practical insights for anyone building AI systems.

1. Small, Focused Agents Work Best
Break work down into micro-agents with clear, short tasks (3-10 steps). Don't force LLMs to handle long, complex workflows in one shot: own the structure, let the agent "propose" the next step, and decide what happens next yourself.

2. Optimize Your Prompts & Context
Every improvement in quality comes from better prompt engineering and careful control over what goes into the context window. If you want reliable output, hand-craft (and constantly test) what you put in.

3. Manage State and Errors Carefully
Don't just dump everything into the agent's context. When there's an error, summarize and clarify it before sending it back in. Build stateful agents by managing state outside the LLM, just like real software.

4. Let Agents & Humans Collaborate
Great agents know when to hand off to a human: trigger human input, notifications, approvals, or clarification at the right time.

5. Multi-Channel Is the Future
Let users interact with agents wherever they already are: email, Slack, SMS, etc. Don't make them open a special chat window.

6. Frameworks Are Helpful, but You Need to Own the Core
Frameworks can save time, but for real reliability you need to own your loop, your prompts, and your control flow.

7. Find the Real Edge
The best teams don't just sprinkle in "magic" AI; they engineer reliability, quality, and a clear user experience. Find the limits of what your agent can reliably do, then optimize within them.

Summary: Building reliable agents isn't about fancy tools or huge frameworks; it's about solid engineering, modular code, and owning your prompts, state, and loops. Start small, keep it focused, and iterate. If you want to dig deeper, check out the "12 Factors of AI Agents" video.
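The "own your loop, let the agent propose" pattern from points 1, 3, and 6 can be sketched in a few lines. This is a minimal illustration, not any framework's actual API: `propose_next_step`, `AgentState`, and the `TOOLS` table are all hypothetical names, and the LLM call is stubbed out with deterministic logic so the sketch is runnable.

```python
from dataclasses import dataclass, field


@dataclass
class AgentState:
    # State lives outside the LLM (point 3): the application decides
    # which subset of it goes into the context window on each call.
    history: list = field(default_factory=list)
    done: bool = False


def propose_next_step(state: AgentState) -> dict:
    """Hypothetical stand-in for an LLM call returning structured output.

    A real implementation would send a curated view of state.history
    to a model and parse its structured reply.
    """
    if not state.history:
        return {"action": "lookup_price", "args": {"item": "widget"}}
    return {"action": "finish", "args": {}}


# The tools the agent may call; here a single hypothetical stub.
TOOLS = {"lookup_price": lambda item: {"item": item, "price": 9.99}}


def run_agent(state: AgentState, max_steps: int = 10) -> AgentState:
    # The application owns the loop (point 6); the model only proposes.
    for _ in range(max_steps):  # short, bounded tasks (point 1)
        step = propose_next_step(state)
        if step["action"] == "finish":
            state.done = True
            return state
        try:
            observation = TOOLS[step["action"]](**step["args"])
            state.history.append({"step": step, "observation": observation})
        except Exception as exc:
            # Summarize the error before feeding it back, rather than
            # dumping a raw traceback into the context (point 3).
            state.history.append(
                {"step": step, "error": f"{type(exc).__name__}: {exc}"}
            )
    return state  # budget exhausted without finishing; escalate to a human


state = run_agent(AgentState())
```

Because the loop, the step budget, and the error handling live in ordinary application code, you can test and debug them like any other software; the model is only asked the question it is good at: "what's the next step?"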
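The human handoff in point 4 can also be made concrete. In this sketch (again with hypothetical names: `notify_human` stands in for whatever Slack/email/ticketing channel you actually use), a step flagged as sensitive pauses the agent and pings a human instead of letting the model guess; the same step runs once it carries an approval.

```python
# Collected approval requests; a real agent would post these to
# Slack, send an email, or open a ticket instead.
PENDING_APPROVALS = []


def notify_human(message: str) -> None:
    # Hypothetical notification channel (point 5: meet users where
    # they already are, e.g. email, Slack, SMS).
    PENDING_APPROVALS.append(message)


def execute_step(step: dict) -> dict:
    # Sensitive steps pause for human approval rather than running.
    if step.get("requires_approval") and not step.get("approved"):
        notify_human(f"Approval needed: {step['description']}")
        return {"status": "paused", "step": step}
    return {"status": "done", "step": step}


# The agent pauses on a sensitive step...
result = execute_step({"description": "refund $500",
                       "requires_approval": True})

# ...and the same step runs once a human has marked it approved.
resumed = execute_step({"description": "refund $500",
                        "requires_approval": True,
                        "approved": True})
```

The key design choice is that "pause and ask" is just another structured outcome of a step, so approvals fit into the same state machine as tool calls instead of being bolted on afterward.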