The LLM Overload: How "AI ADHD" is Draining Developer Productivity
Remember the early days of large language models (LLMs)? It felt like having a single, powerful oracle at our fingertips, ready to answer coding questions and debug tricky problems. Now we're bombarded with a dizzying array of models, each with its own strengths, weaknesses, and quirky personality. While choice is generally good, this explosion of LLMs is starting to feel less like a helpful toolkit and more like… well, a digital form of ADHD for developers. We're calling it "AI ADHD" – the constant distraction and context switching caused by the sheer number of LLMs available and the pressure to know which one is "best" for any given task.

Here's how this overload is quietly hurting the programming experience:

**1. Decision Fatigue Sets In (Before You Even Write Code):** Before you type your first line, you face a choice: which LLM should I ask? Do I need something specifically tuned for Python? Should I use the one known for creative code generation, or the one better at factual explanations? This decision-making process, repeated many times a day, is surprisingly draining. It's like having to choose from a hundred different screwdrivers for a single screw – most of them will *kind of* work, but you waste time hunting for the *optimal* one.

**2. Context Switching Becomes a Constant Headache:** Each LLM has its own prompt-engineering nuances, its own preferred input formats, and its own ways of interpreting requests. Switching between models for different tasks means constantly shifting mental gears. You might have just gotten the hang of crafting prompts for Model A when you realize Model B is better suited to your current problem, forcing you to relearn how to communicate with it effectively. This constant context switching breaks flow and hinders deep work.

**3. The Fear of Missing Out (FOMO) is Real:** There's a nagging feeling that you're not using the "right" tool.
Did that other LLM have a more up-to-date knowledge base? Would it have generated cleaner code? This FOMO can lead to second-guessing, re-running requests in different models, and ultimately, more wasted time chasing an elusive "perfect" answer.