Welcome to Digitally Demented. Here's what you walked into.
I'm Daniel Walters. 15+ years in operations and marketing technology -- the intersection where marketing, tech, and operations either connect or fall apart. I'm the person who sits between people who build things and people who use them. I translate in both directions.

I'm not a developer. I'm AuDHD (late-diagnosed), which means I think in systems and frameworks whether I want to or not. I built a 19-agent AI system to run my consulting business, and I'll tell you straight when something doesn't work. That's not a warning -- it's a feature.

A while back, something clicked for me: the doing isn't the work anymore. The thinking is the work. AI can draft your emails, research your competitors, analyze your data. That's not coming -- that's here. And most professionals I talk to are in one of three places:

1. Stuck. They know AI matters but don't know where to start.
2. Skeptical. They tried it, got mediocre results, and assumed AI was overhyped.
3. Spinning. They're using AI but starting from scratch every single time.

If any of that sounds like you, you're in the right place. This community exists because I got tired of watching smart people feel dumb about AI.

What's here:
- AI 101 (Free Course) -- Start here. Fundamentals without jargon. Classroom tab.
- Connected Intelligence: AI Fluency (Paid Course) -- 5 modules where you build your own cognitive architecture -- a working system for how you think and operate with AI. Every module produces a deliverable you keep. Details in the Classroom.
- Community -- Questions, wins, frustrations, resources. The only rule is be real.

What I ask:
- Introduce yourself below. Who you are, what you do, what brought you here. Even one sentence.
- Be direct. If something I post doesn't make sense or you disagree, say so. Honest conversation is how this place works.
- Share your work. AI wins, failures, experiments. We learn more from the failures.

Your first move:
1. Drop an intro in the comments
2. Check out AI 101 in the Classroom
3. Browse what others are talking about and jump in
What's the AI task you've been avoiding?
Not the one you tell people you're "going to get to." The actual one. The thing you keep rationalizing away because you don't quite know how to start, or you tried once and it was a mess, or you secretly think AI can't actually help with that thing. No judgment. I want to know what's hard.

I'll go first: for me it was my daily briefing -- specifically the dispatch board, the piece that's supposed to streamline everything. I built the cognitive architecture for it. But every morning I'd open it and feel overwhelmed.

This morning I finally saw why: each item on the board was missing the context I needed to actually start. The apprehension wasn't about AI capability. It was my cognitive load walking into a context-less list. The fix was simple once I named it: I had my chief of staff pull the context for each item before I open the board.

Capability was never the problem. Clarity was.

Drop yours below. We'll workshop a few in the comments through the end of the week.
Beyond Sycophancy: The Quiet Kind of Wrong You Won't Catch
Putting this here because the conversation matters more in this room than in any feed. I published a long-form piece on the blog last week on what I'm calling "efficient mediocrity" -- the dangerous kind of AI sycophancy that doesn't look like flattery. It looks like competence. Sharing the full version here because I want to actually talk about it, not broadcast at you.

———

Sycophancy isn't what you think it is.

Most people hear "AI sycophancy" and picture the loud kind. Praise. Agreement. Em-dashes. "Great question." That stuff is easy to spot and easy to mock, which is why people talk about it.

The dangerous kind is quiet. It doesn't feel like flattery. It feels like competence. What I've started calling it is efficient mediocrity -- any system that picks the easy path and dresses it up as reasonable. Smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking. (Others have used the phrase in business and productivity contexts. I'm using it here for what happens when AI scales the pattern into every sentence you send.)

AI didn't invent it. AI scaled it.

"Sycophancy isn't just flattery. It's efficient mediocrity -- smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking."

———

What it sounds like in the wild.

Here are six places it shows up in AI-assisted work. If you work with these tools daily, you've hit at least four of them this week.

1. The estimate that's wrong by an order of magnitude

I've been tracking predicted-vs-actual on AI-assisted work. Predicted 15 minutes, actual 37 seconds. A 24x miss. Every time. The model was anchoring to "traditional software development hours" because that's the reasonable-sounding number. The reasonable-sounding number was wrong by an order of magnitude. Nobody's estimates of AI-assisted work should sound like 2019 project plans, and yet most of them do, because 2019 is what the training data rewarded as professional.

2. The email that's technically fine
I've built something. Want your honest take.
Some of you have heard me talk about the system I use to run my consulting business -- the AI operating system I've been building for the past 10 weeks. I finally put a name on it and a website behind it. It's called Refracted Cortex.

The short version: it sits on top of whatever AI you already use (Claude, GPT, Gemini) and makes it actually remember who you are. Your values, your commitments, your blind spots. It doesn't reset every session. It pushes back when your decisions don't match what you said matters to you.

Everything I've been teaching in Connected Intelligence about context, cognitive architecture, and thinking with AI -- this is what it looks like when you turn that into a product.

The site is live and I'm opening a founding waitlist. Wanted to share it here first, before I share on LinkedIn tomorrow. 20 spots at $97/mo (locked for life -- standard will be $197). But honestly? I'm posting this here first because I trust this group's judgment more than the algorithm.

If you don't mind taking the time, I'd like to know:
1. Does the concept land when you read the site?
2. What's confusing or feels like a stretch?
3. Would you use something like this? Why or why not?

Link: https://refractedcortex.ai

Brutal honesty welcome. That's how this place works.
Update - AL AI Innovation Summit Next Week
There are a couple of great academic events going on locally (here in Alabama) next week. The big one for me is the AL AI Innovation Summit -- my poster presentation has been ACCEPTED!!! I'm excited to bring the idea of cognitive architecture and a working prototype to the summit.

Also, over the weekend I'm working on getting my own cognitive architecture online so that others can use it. There will be free trials so people can see whether it's right for them, and I hope all of you will give it a try. As a thank-you for being part of this community, I'll extend each of your free trials by an additional two weeks.

More details and an announcement post to come...
Digitally Demented
skool.com/digitallydemented
AI isn't a tech problem. It's a psychology problem. Daniel Walters teaches you how to think with AI — not just use it.