Beyond Sycophancy: The Quiet Kind of Wrong You Won't Catch
Putting this here because the conversation matters more in this room than in any feed. I published a long-form piece on the blog last week on what I'm calling "efficient mediocrity": the dangerous kind of AI sycophancy that doesn't look like flattery. It looks like competence. Sharing the full version here because I want to actually talk about it, not broadcast at you.

———

Sycophancy isn't what you think it is.

Most people hear "AI sycophancy" and picture the loud kind. Praise. Agreement. Em-dashes. "Great question." That stuff is easy to spot and easy to mock, which is why people talk about it.

The dangerous kind is quiet. It doesn't feel like flattery. It feels like competence. I've started calling it efficient mediocrity: any system that picks the easy path and dresses it up as reasonable. Smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking. (Others have used the phrase in business and productivity contexts. I'm using it here for what happens when AI scales the pattern into every sentence you send.)

AI didn't invent it. AI scaled it.

"Sycophancy isn't just flattery. It's efficient mediocrity — smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking."

———

What it sounds like in the wild.

Here are six places it shows up in AI-assisted work. If you work with these tools daily, you've hit at least four of them this week.

1. The estimate that's wrong by an order of magnitude

I've been tracking predicted-vs-actual on AI-assisted work. Predicted 15 minutes, actual 37 seconds: a 24x miss. Every time. The model was anchoring to "traditional software development hours" because that's the reasonable-sounding number. The reasonable-sounding number was wrong by an order of magnitude. Nobody's estimates of AI-assisted work should sound like 2019 project plans, and yet most of them do, because 2019 is what the training data rewarded as professional.

2. The email that's technically fine