47 contributions to Clief Notes
🏆 WEEKLY COMP #3: THE SPECIALIST 🏆
💰 $325 CASH PRIZE 💰
That's a full year of Premium. Win this and your membership pays for itself.

📋 THE CHALLENGE
You just got hired again. Different client this time. Meet Sarah, a freelance copywriter who's drowning in context-switching.
📎 Download the full client brief attached to this post. Short version: she works with three types of clients (SaaS founders, ecommerce brands, local service businesses) and starts from scratch every project.
She doesn't need another tool. She needs a system. Your job is to build her a folder-based AI specialist she can drop into any Claude project. The folder IS the deliverable.

🗂️ THIS WEEK YOU LEARN ICM
Up until now, comps have been "build a thing." This week you apply the methodology taught throughout the community.
🧠 Folders as architecture. That's it. That's the whole concept this week.
Your specialist is a folder with five things:
- 📄 identity.md (who they are)
- 📐 rules.md (how they respond)
- 💬 examples.md (what good looks like)
- 📚 reference/ (source material)
- 📖 README.md (how to use it)
Drop the folder into a Claude project. Claude becomes the specialist. Reusable. Shareable. Portable.

🎯 PICK YOUR SPECIALIST
Don't pick copywriting. That's Sarah's example. Pick something YOU would actually use. A few sparks to get you thinking:
- A salary negotiation coach
- A meal planner that knows your dietary restrictions
- A code reviewer for your stack
- A real estate market analyst for your city
- A technical recruiter screener
- A grant writer for nonprofits in your space
The more specific, the better. "Marketing expert" is not a specialist. "B2B email expert for enterprise SaaS targeting CFOs" is.

💼 WHY THIS ONE LANDS ON YOUR RESUME
Real talk: winning a comp in a Skool community doesn't get you a job by itself. But shipping a working folder-based AI specialist with a clean README and a public repo? That's a portfolio piece.
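To make the five-part folder concrete, here is a minimal scaffolding sketch. The file names come from the brief above; the starter contents and the `scaffold` helper are illustrative placeholders, not part of the comp spec.

```python
from pathlib import Path

# Starter contents are placeholders; only the file names come from the brief.
FILES = {
    "identity.md": "# Identity\nWho the specialist is and who they serve.\n",
    "rules.md": "# Rules\nHow the specialist responds; hard constraints.\n",
    "examples.md": "# Examples\nSamples of what good output looks like.\n",
    "README.md": "# README\nDrop this folder into a Claude project.\n",
}

def scaffold(root: str) -> Path:
    """Create the specialist folder: four markdown files plus reference/."""
    base = Path(root)
    (base / "reference").mkdir(parents=True, exist_ok=True)
    for name, body in FILES.items():
        (base / name).write_text(body)
    return base

folder = scaffold("b2b-email-specialist")
print(sorted(p.name for p in folder.iterdir()))
```

From there, the real work is filling in identity.md and examples.md for your niche; the scaffold just guarantees the structure is consistent across specialists.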
1 like • 33m
I built a free AI assistant for real estate agents. Writes listings, ranks buyer fit, and builds defensible price bands — tuned to your local market in about 20 minutes. Three populated examples included: Jerusalem, Northern California, and Khao Lak. https://github.com/NFTYoginis/realtor-copilot.git
See behind the veil - full architecture
This took a few weeks. Not building. Training. Tweaking. Breaking. Locking. Running the same flows over and over until the architecture stopped bending.

Everyone here knows ICM. What this is… is what happens when you actually live inside it long enough. Not theory. Not clean diagrams. Real load.

A few things only showed up under pressure:
- The moment where the orchestrator wants to execute… and you don't let it
- The cost of letting workers "figure things out" vs forcing briefs to be exact
- How fast token bloat creeps in when you don't treat load surface as a constraint
- The difference between a rule you wrote… and a rule you had to write three times

At some point, things flipped. The system stopped feeling like something I was managing… and started feeling like something that was holding shape on its own. That's when the real work began.

What's in here is not "a good setup". It's what survived:
- multiple passes of weekly audits
- repeated cold starts
- real production friction
- and a lot of "this felt right but didn't hold"

A few things I'd pay attention to if you explore it:
- Where boundaries are enforced (not suggested)
- What got locked into rules vs left flexible
- How briefs are treated as contracts, not prompts
- How little the orchestrator actually does

Also interesting: the extraction itself. That process alone shows you what was structural… and what was just personal preference.

→ https://github.com/NFTYoginis/creator-orchestrator-template

If you're deep into ICM already, you'll see where this goes. Curious what breaks for you, or what holds better than expected.
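The two boundaries the post keeps returning to, briefs as contracts and an orchestrator that routes but never executes, can be sketched in a few lines. All names here (`Brief`, `Orchestrator`, the worker registry) are illustrative and not taken from the linked repo.

```python
from dataclasses import dataclass

@dataclass
class Brief:
    task: str
    context: str
    scope: str

    def missing_fields(self) -> list[str]:
        # A contract-grade brief has no empty fields; vague briefs get
        # bounced back instead of letting the worker "figure things out".
        return [f for f in ("task", "context", "scope") if not getattr(self, f).strip()]

class Orchestrator:
    def __init__(self, workers):
        self.workers = workers  # name -> callable(Brief) -> str

    def dispatch(self, worker: str, brief: Brief) -> str:
        missing = brief.missing_fields()
        if missing:
            raise ValueError(f"Brief rejected, empty fields: {missing}")
        # The orchestrator stops here: it validates and routes, never executes.
        return self.workers[worker](brief)

orch = Orchestrator({"writer": lambda b: f"draft for: {b.task}"})
print(orch.dispatch("writer", Brief("landing page copy", "SaaS launch", "hero section only")))
```

The design choice worth noting: rejection happens before dispatch, so a vague brief never reaches a worker at all, which is cheaper than iterating on bad output.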
0 likes • 1h
@Allan Durhuus
1 like • 34m
@Mary Sheilag
Every session is an audition
Most AI sessions complete a task and die. The system doesn't get smarter, it gets used.

I'd been noticing this for months. Every new chat starts cold. Same context re-explained. Same one-off scripts rebuilt by hand. Strong reasoning lost. Repeated failure modes rediscovered. The fix isn't a better memory system. It's a mindset shift, encoded into how every session runs.

The shift
Don't do the task. Build the workflow that does this task and every future one like it. That's it.
Two jobs per session, not one. Job one is the thing I asked for. Task, content, feature, fix. Job two is inheritance. Did the session spot a repeatable pattern? Did it ship the permanent fix, or at least flag the opportunity? Did it leave the system in a better state than it found it?

Two trigger surfaces
1. Repeated tasks. Same operation done three or more times. Stop doing it manually. Ship the workflow.
2. Recurring failure modes. Patterns I keep re-correcting belong in a guardrail, not a re-prompt. The session that finds a new failure mode is the one that encodes it.
The session doesn't have to act on every signal. But it should notice and ask whether it's worth formalizing.

The reversibility floor
Auto-promotion only works if undo is clean. Anything shipped this way lands as one isolated artifact. One commit. If it goes wrong, undoing it is a single operation, and the artifact moves to a trash folder rather than being deleted. Record stays.
Reversibility is what makes aggressive shipping safe. Without it every proposed upgrade needs manual review, which defeats the point.

The success moment
One line back in chat: I noticed X, found a better way. The system just got an upgrade. Not a transcript. Not a report. One line. Green-light or kill.

The takeaway
Sessions don't get re-summoned. But they can leave inheritance. Every session is an audition. Not for the model's job security. For the infrastructure that the next hundred sessions will inherit.

Full breakdown. The mindset shift, the trigger surfaces, the reversibility floor, and what it changes about how I run AI workflows, all live here:
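The reversibility floor described above can be sketched as two one-operation helpers: ship lands an artifact as a single isolated file, undo moves it to a trash folder instead of deleting it. Paths and function names here are illustrative, assumed for the sketch, not taken from the post's full breakdown.

```python
from pathlib import Path
import shutil

def ship(root: Path, name: str, body: str) -> Path:
    """Land one artifact as a single isolated file (one 'commit')."""
    path = root / "workflows" / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(body)
    return path

def undo(root: Path, name: str) -> Path:
    """Reverse a shipped artifact in one operation: move, never delete.

    The record survives in .trash, so the failure mode stays visible."""
    trash = root / ".trash"
    trash.mkdir(parents=True, exist_ok=True)
    dst = trash / name
    shutil.move(str(root / "workflows" / name), str(dst))
    return dst

root = Path("system")
ship(root, "dedupe.md", "Steps to dedupe exports.")
undone = undo(root, "dedupe.md")
print(undone.exists(), (root / "workflows" / "dedupe.md").exists())
```

Because undo is one move, an aggressive auto-promotion policy stays safe: a bad upgrade costs one operation to reverse, not a manual review cycle.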
1 like • 17h
Exactly:)
The Vault will make your Claude efficient
It's not "we cleaned files." It's "we built a system that keeps itself clean."

The first month I was just trying to get my head around the Foundation and Implementation and understand what was happening. Foundation already gave me a Content Provider style architecture, along with the animation and website builder, and working intuitively with both ChatGPT and Claude I started implementing Jake's structure. I watched YouTube videos as well that gave me other insights (and validated that Jake's course is the only place I've seen that talks about ICM, architecture, and the rest).

Either way, I signed up for Premium yesterday and found there was much more to complement the Foundation. As Jake suggested, I slowly took each one of the Vault items and explored with Claude how our architecture compares to GitHub repos. I tapped into some of the members' GitHub repos and had Claude check itself against those insights (super thank you, Community!!). It pointed out a few times that our architecture is more mature, though it always found one or two items that could improve its flow.

We had a glitch this morning when I prompted the execution of today's schedule with all the items to post, create, and monitor. The Orchestrator and Operator (me) had to dissect what it did, make sure it won't follow that course again, and check whether such a flow is efficient in both token costs and workflow. It found ways to optimize: it extracted what it found useful from the Vault and noted that the skills mentioned in the Vault are skills we had already captured (among others).

My very first project with Jake, text-to-video animation, has been improving on its own outside my Brand Sandbox. And as I closed Claude after uploading the content it had structured neatly for me in an HTML file with EVERYTHING I needed for the content posting, I had it summarize its work, which I hope helps if you read this far (thank you for reading my stuff:)). Three-day architecture delta (Apr 30 → May 3).
0 likes • 1d
@Christopher Jeppson super appreciate your comments! Have a great moment :)
1 like • 1d
@Nathan Smith I do. And I set up other "workers" to do tasks that are unrelated to the brand at times. Though I do intend for sales tracking, and accounting ease, to make a "worker" for that as well. Just have to let the system run and get those leads :)
I was asked about my process: I didn't hire 3 teams. I built an architecture :)
On April 10 I was trying to clean up my Instagram. Tighten the cover graphics. Build a repeatable system. Stop redesigning the same template every week. A one-afternoon job.

What actually happened was the first real test of an architecture I had been sketching for writing work, and I tested it on design. I built a sandbox, gave it a governing file, separated references from working material, wrote one clean brief… and let it run. Fifty covers came back in minutes. Same palette. Same typography. Same visual language. None off-brand.

That was the moment something shifted. Not because of the output, but because of what it proved: once an architecture is clear enough, the question is no longer "what can I delegate?" It becomes "what is now worth building?"

In 21 days, that small test turned into:
- Three working teams (orchestrator, content, design)
- Four books shipped or shippable
- A new website
- Two additional teams already scoped
Same operator. Same hours in the day.

For those who asked about mindset and process, this is the real answer:
1. I stopped thinking in prompts and started thinking in systems. The model is not the asset. The structure around it is.
2. I separated thinking from doing. The orchestrator doesn't write. It reads, structures, briefs, and validates. The workers execute. They don't improvise outside their lane.
3. Everything moves through briefs. No direct "do this" requests. Every handoff is: task → context → scope → acceptance → return checklist. That alone removed most iteration cycles.
4. Context is layered, not dumped. Reference material lives separately from working material. The model doesn't have to "figure out what matters"; it's already decided.
5. The human sits outside the system. Not inside, prompting. Outside, validating outputs and deciding what ships.
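The five-field handoff in point 3 can be sketched as a data shape, with the human's outside-the-system validation from point 5 as a check against the acceptance list. Field names mirror the post; the `Handoff` class and `accept` helper are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Handoff:
    task: str                    # what to produce
    context: str                 # layered references, not a dump
    scope: str                   # the worker's lane; no improvising outside it
    acceptance: list             # conditions under which output ships
    return_checklist: list       # what the worker sends back

def accept(output_checks: dict, handoff: Handoff) -> bool:
    # The human validates from outside the system: every acceptance
    # condition must be present and true before anything ships.
    return all(output_checks.get(c, False) for c in handoff.acceptance)

h = Handoff(
    task="50 Instagram cover graphics",
    context="brand palette + typography refs",
    scope="covers only, no feed posts",
    acceptance=["on-palette", "on-typography"],
    return_checklist=["files", "deviations noted"],
)
print(accept({"on-palette": True, "on-typography": True}, h))
```

An unchecked or failed condition defaults to False, so output can't ship by omission; that is the "validating outputs and deciding what ships" step made mechanical.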
The clearest proof this wasn’t theory came from the hardest task I’ve ever tried to coordinate: Mapping TCM meridians, Thai Sen lines, and Anatomy Trains on the human body — in one consistent visual language.
0 likes • 3d
@Carla Bosteder sounds like just a tweak or two if your foundation is there :))
1 like • 3d
@Carla Bosteder of course.
Gabriel Azoulay
Level 5 • 215 points to level up
@gabriel-azoulay-3254
Yoga retreat center in southern Thailand
Active 29m ago
Joined Apr 5, 2026