
Owned by Daniel

Digitally Demented

23 members • Free

AI isn't a tech problem. It's a psychology problem. Daniel Walters teaches you how to think with AI — not just use it.

Memberships

Skoolers

192.9k members • Free

23 contributions to Digitally Demented
What's the AI task you've been avoiding?
Not the one you tell people you're "going to get to." The actual one. The thing you keep rationalizing away because you don't quite know how to start, or you tried once and it was a mess, or you secretly think AI can't actually help with that thing.

No judgment. I want to know what's hard. I'll go first:

For me it was my daily briefing — specifically the dispatch board, the piece that's supposed to streamline everything. I built the cognitive architecture for it. But every morning I'd open it and feel overwhelmed.

This morning I finally saw why: each item on the board was missing the context I needed to actually start. The apprehension wasn't about AI capability. It was my cognitive load walking into a context-less list.

The fix was simple once I named it. I had my chief of staff pull the context per item before I open the board. Capability was never the problem. Clarity was.

Drop yours below. We'll workshop a few in the comments through the end of the week.
Beyond Sycophancy: The Quiet Kind of Wrong You Won't Catch
Putting this here because the conversation matters more in this room than in any feed. I published a long-form piece on the blog last week on what I'm calling "efficient mediocrity" — the dangerous kind of AI sycophancy that doesn't look like flattery. It looks like competence. Sharing the full version here because I want to actually talk about it, not broadcast at you.

———

Sycophancy isn't what you think it is.

Most people hear "AI sycophancy" and picture the loud kind. Praise. Agreement. Em-dashes. "Great question." That stuff is easy to spot and easy to mock, which is why people talk about it.

The dangerous kind is quiet. It doesn't feel like flattery. It feels like competence. What I've started calling it is efficient mediocrity — any system that picks the easy path and dresses it up as reasonable. Smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking. (Others have used the phrase in business and productivity contexts. I'm using it here for what happens when AI scales the pattern into every sentence you send.)

AI didn't invent it. AI scaled it.

"Sycophancy isn't just flattery. It's efficient mediocrity — smooth, fast, plausible, and wrong in ways you won't catch unless you're already looking."

———

What it sounds like in the wild.

Here are six places it shows up in AI-assisted work. If you work with these tools daily, you've hit at least four of them this week.

1. The estimate that's wrong by an order of magnitude

I've been tracking predicted-vs-actual on AI-assisted work. Predicted 15 minutes, actual 37 seconds. A 24x miss. Every time. The model was anchoring to "traditional software development hours" because that's the reasonable-sounding number. The reasonable-sounding number was wrong by an order of magnitude. Nobody's estimates of AI-assisted work should sound like 2019 project plans, and yet most of them do, because 2019 is what the training data rewarded as professional.

2. The email that's technically fine
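The "24x miss" above is just the ratio of predicted to actual time (15 minutes = 900 seconds, and 900 / 37 ≈ 24). A minimal sketch of that tracking math — the task names and numbers here are illustrative, not real measurements from any tool:

```python
# Sketch of predicted-vs-actual tracking for AI-assisted tasks.
# Task names and the second entry's numbers are made up for illustration;
# the first entry matches the 15-minute / 37-second example in the post.
tasks = [
    # (task, predicted_seconds, actual_seconds)
    ("draft briefing template", 15 * 60, 37),
    ("refactor prompt chain", 30 * 60, 95),
]

for name, predicted, actual in tasks:
    miss = predicted / actual  # how many times too long the estimate was
    print(f"{name}: predicted {predicted}s, actual {actual}s, {miss:.0f}x miss")
```

Tracking a handful of these makes the anchoring pattern visible: the misses cluster around the same order of magnitude rather than averaging out.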
I've built something. Want your honest take.
Some of you have heard me talk about the system I use to run my consulting business — the AI operating system I've been building for the past 10 weeks. I finally put a name on it and a website behind it. It's called Refracted Cortex.

The short version: it sits on top of whatever AI you already use (Claude, GPT, Gemini) and makes it actually remember who you are. Your values, your commitments, your blind spots. It doesn't reset every session. It pushes back when your decisions don't match what you said matters to you.

Everything I've been teaching in Connected Intelligence about context, cognitive architecture, and thinking with AI — this is what it looks like when you turn that into a product.

The site is live and I'm opening a founding waitlist. Wanted to share it here first, before I share on LinkedIn tomorrow. 20 spots at $97/mo (locked for life — standard will be $197). But honestly? I'm posting this here first because I trust this group's judgment more than the algorithm.

If you don't mind taking the time, I'd like to know:

1. Does the concept land when you read the site?
2. What's confusing or feels like a stretch?
3. Would you use something like this? Why or why not?

Link: https://refractedcortex.ai

Brutal honesty welcome. That's how this place works.
1 like • 22d
@Paul Harbin Currently no - but HIPAA protection is on the roadmap. It currently does scan for PII, so the ability is already in the system. Just have to re-craft it for HIPAA.
Update - AL AI Innovation Summit Next Week
There are a couple of great academic events going on locally (here in Alabama) next week. The big one for me is the AL AI Innovation Summit - to which my poster presentation has been ACCEPTED!!! I'm excited to bring the idea of cognitive architecture and a working prototype to the summit next week.

Also - over the weekend, I'm working on getting my own cognitive architecture online, with the ability for others to use it! There will be free trials available so people can try it and see if it's right for them. I hope that all of you will be able to try it out. And as a thank-you for being a part of this community, I'd like to extend the free trial for each of you by an additional two weeks.

More details and an announcement post to come...
Your AI is only as honest as your data
I'm prepping a talk for a sales group tomorrow and I keep coming back to the same thing.

Most people think AI's big promise is speed. Get answers faster, automate more, scale everything. And yeah, it does that. But here's what nobody's talking about:

**AI doesn't question your inputs. It amplifies them.**

If your CRM notes are written a day after the conversation — you're not giving AI the truth. You're giving it a reconstruction. If your project tracker says something is "in progress" because nobody updated the status — your AI sees active work. The project's been stalled for two weeks.

This isn't an AI problem. It's a context problem.

I ran into this while building my own system. The AI wasn't wrong — it was confidently right about garbage data. The output looked great. The thinking behind it was hollow.

Here's the test I keep running on myself: What do I know right now that isn't in any system? That gap — between what's in your head and what's in your tools — that's where AI goes blind.

What's something you know about your work right now that isn't written down anywhere? And what would change if your AI actually had access to it?
Daniel Walters
3
42 points to level up
@daniel-walters-4523
Creating learning moments in people's lives, including his own... one Skool course at a time...

Active 2d ago
Joined Aug 21, 2025
INTJ
Birmingham, AL