
Memberships

AchieveGreatness.com (Scaling)

324 members • Free

Clief Notes

27.3k members • Free

CC Strategic AI

3.3k members • $27/month

ACQ VANTAGE

915 members • $1,000/month

Agent Zero

2.6k members • Free

AI Automation Agency Hub

315.3k members • Free

Automated Marketer

3.7k members • Free

Sales Training & Placement

8.1k members • $17/month

Selling Online / Prime Mover

36.7k members • Free

13 contributions to Clief Notes
Stop rebuilding agents: how I’m using “folder as app” + 60/30/10 to make stacks survive model changes
Every time a new model or agent framework drops, I feel the urge to burn everything down and rebuild. New tool, new "agent," same problem: my workflows, prompts, and SOPs end up glued to the tool instead of the work.

What finally started working was flipping the mental model: the folder is the app. The AI is just the worker that knows how to navigate it. Under the hood I'm using a simple architecture that does two things:
- Separates what should be code, what should be rules, and what actually needs AI
- Structures everything so any model can plug in later without a rebuild

I think of it as 60/30/10 on top of a 3‑layer workspace.

Step 1: 60/30/10 – decide where AI actually belongs

For every task, I force it into one of three buckets:
- 60% – Automations (no AI at runtime). Same input → same output every time. Scripts, API calls, cron jobs, database writes. AI can help write the script, but the script runs it.
- 30% – Rule engines (templates / decision trees). Clear if/then logic, multiple branches, but all mappable. Triage, routing, form logic, templated replies. Once the tree exists, it runs without "thinking."
- 10% – Prompts (actual AI work). Summaries, syntheses, ambiguous inputs, creative drafting, nuanced judgment. This is the only layer where I want the model to "think."

If I can't decide, I decompose the task until each step is clearly one of those three. That alone kills a ton of "let's just build an agent" impulses.

Step 2: 3‑layer folder architecture

Once tasks are classified, everything lives in a single workspace.

Layer 1 – The map (router). One root markdown file, e.g. _router.md or WORKSPACE.md:
- Explains the business / project in plain language
- Lists folders and what they're for
- Defines naming conventions (this is your "schema")
- Has a routing table: "If task = X, read these files, skip those, write output here, load these skills."

When I start a session I don't paste giant prompts. I say: "Read _router.md and follow it."

Layer 2 – Task workspaces by layer
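To make the bucketing step concrete, here's a minimal Python sketch of the 60/30/10 classification idea. The `Task` fields and bucket names are my own illustration of the two questions the post implies you ask of every task (is it deterministic? is the logic mappable as if/then rules?), not part of any real framework:

```python
# Hypothetical sketch of the 60/30/10 triage: force every task into
# exactly one bucket before any model is ever invoked.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    deterministic: bool   # same input -> same output, every time?
    mappable_rules: bool  # can the logic be drawn as if/then branches?


def classify(task: Task) -> str:
    """Return which layer should own this task."""
    if task.deterministic:
        return "automation"   # ~60%: scripts, cron jobs, API calls, DB writes
    if task.mappable_rules:
        return "rule_engine"  # ~30%: triage, routing, templated replies
    return "prompt"           # ~10%: ambiguity, synthesis, nuanced judgment


tasks = [
    Task("nightly_db_backup", deterministic=True, mappable_rules=True),
    Task("triage_support_ticket", deterministic=False, mappable_rules=True),
    Task("draft_outreach_email", deterministic=False, mappable_rules=False),
]
for t in tasks:
    print(t.name, "->", classify(t))
# nightly_db_backup -> automation
# triage_support_ticket -> rule_engine
# draft_outreach_email -> prompt
```

The "if I can't decide, decompose" rule falls out naturally: a task that answers neither question cleanly gets split until each piece does.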
MacOS or Windows
Hey guys, complete rookie here. I currently run Windows on my PC but my PC is completely outdated and I'm thinking of purchasing a Mac Mini for now. Any comments on the pros and cons of switching to Mac OS? It seems like Mac is more conducive to AI or maybe that's a false assumption? Feel free to leave your input, thanks!
1 like • Mar 16
Love this thread – lots of good nuance here. From what I'm seeing, you basically have two "good" paths depending on where you're headed:
- Mac (what you chose): Unified memory + M‑series stability is awesome if most of your work is Claude Code, web, and maybe some light local models. 48 GB on a Mac Mini is a beast for that use case, and you get the nice side benefits (iMessage, tight ecosystem, low friction).
- DIY workstation (Windows/Linux + Nvidia/AMD): Starts to shine once you're pushing bigger local LLMs, image/video pipelines, or you want to scale up multiple GPUs. With the right config you can get close to Mac‑level "it just works" stability, but you're trading simplicity for raw GPU horsepower.

The pattern I see with people who go deep into AI is: Mac (or similar) as the stable daily driver, then a separate GPU box or server later if local-heavy workloads actually show up.

Since you've already pulled the trigger on the Mac Mini, I'd ride that hard for Claude Code, experiments, and shipping projects. If you ever find yourself bumping into real limits (context size / model size / latency) you can always bolt on a Linux GPU rig later instead of guessing now.

Curious: what's the heaviest thing you realistically see yourself wanting to run locally in the next 6–12 months – chat models, vision, or more full "AI agent" workflows?
👋🏽 I would love your perspective
I would love to get some feedback on my situation/dilemmas. I had accumulated some know-how in defining and encoding the workflow, context, structure, constraints, etc., for different tasks without using tailored personas/instructions for projects within the main AI chat apps. Then I thought that most employees don't have the time or energy to dive deep and dig through layers of AI influencers, prompt packs, etc., to actually optimize their work beyond one-time conversations or simple tasks, and so I could bring some value. Went all in on finding, learning, and designing the best ways to achieve that. As Opus 4.6 and Codex 5.3 came out, I dove into the "agentic" area and its possibilities, where I feel my experience has whole new potential, as there is only so much that can be done within projects and instructions, and they allow for hardcoding some deterministic parts. That led me to realize that, in the current state, I can leverage my somewhat shallow understanding and insight into different areas, as models can compensate for my lack of technical background. So I was able to create a website with an integrated reservation system for my wife's lessons. And, at least in my opinion, it escapes the generic feel of most AI websites. So I defined some workflows and principles to achieve similar results faster for other potential clients. What I am trying to figure out are 3 things: - Whether to keep exploring more complex solutions or to focus on improving people's workflows in the everyday layer of the main apps through instructions/projects. Since I feel most people are and will continue to use them as their primary layer. - How to deliver the value in the best way, as I am not much of a salesperson and have an ick about selling courses or teaching for the sake of it. - I feel the value is there, but it is hard for me to define it clearly and position myself. 
And it all kind of falls under the generic categories of AI workflows, consulting, and "specialist," which are oversaturated.
1 like • Mar 16
Love this share. You're thinking about the right problems, not just tinkering with prompts. What jumps out:
- You're good at encoding workflows / constraints into something a model can actually use.
- You've already proven it on a real thing (your wife's lesson site) instead of just theory.
- You're very aware that most "normal" employees will never go down the rabbit hole you did.

That's exactly the combo that is valuable if you package it right. My biased take on your 3 questions:

1. Everyday layer vs complex solutions
Most people will live inside the main apps… but the real leverage is when someone like you quietly hides smarter structure underneath. I'd frame your work as: "I turn your recurring work into 3–5 step, reusable AI workspaces inside the tools you already use." You can use agentic / more complex setups "under the hood," but sell it as "your daily workflow, just 10x smoother," not "agents."

2. How to deliver value without becoming a course peddler
If selling courses feels off, don't start there. You can ship value as:
- Done‑for‑you builds: "I'll design and implement 1–3 AI workflows for your role/team."
- Short implementation sprints (2–4 weeks) where you sit with them, extract their real work, and leave behind a concrete workspace + instructions.
- Maybe later: a small library of templates that support your implementations, rather than a big, abstract "AI course."

You don't have to be a salesperson if your offer is specific and concrete.

3. Positioning in a crowded "AI consultant" market
Right now your story is: "I'm good at AI workflows." So are a million others on paper. You stand out when it becomes: "I help [very specific type of person] do [very specific, painful thing] faster with AI, without them having to become 'AI people.'" For example (totally made up):
- "I help solo teachers / coaches turn their booking + intake + follow‑up into one clean AI‑assisted flow."
- "I help small service businesses turn their website + inbox into a simple, sane pipeline that they can actually trust."
ICM Research Paper
For those of you who enjoy the academic side of things, here is the current draft of the research paper I am writing that supports my "Folder" methodology in much greater detail. And for those of you who don't, just copy and paste the paper into AI and have it explain it to you, ha-ha.

The core idea is simple. Instead of building complicated software to coordinate AI agents, you use folders and plain text files. Each folder is a step in your workflow. Inside each folder, a markdown file tells the AI what to do at that step. The AI reads the right folder at the right moment, does its work, and drops the result where the next step can pick it up. You review the output at each step and edit anything that needs fixing before moving on. The whole thing runs on your computer with no special infrastructure.

For the technical readers: the paper traces this back through Unix pipeline design, Parnas's information hiding, multi-pass compilation, and literate programming. It formalizes a five-layer context hierarchy (identity, routing, stage contracts, reference material, working artifacts) and reports on practitioner findings from this community, including the U-shaped intervention pattern several of you have seen in your own workspaces. It also lays out future directions around semantic debugging and output provenance that I think will interest anyone building complex pipelines.

Feedback welcome, especially from those of you running your own workspaces.

[2603.16021] Interpretable Context Methodology: Folder Structure as Agentic Architecture
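The stage-folder loop described above can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (the file names `INSTRUCTIONS.md` and `output.txt` are placeholders, and the "worker" is a plain function standing in for a model call), not the paper's reference implementation:

```python
# Toy sketch of "folders as the pipeline": each stage folder holds an
# instructions file; the runner reads it, applies a worker to the previous
# stage's output, and writes the result where the next stage picks it up.
# The worker is a stub here -- in practice it would be a model call, with
# a human review of each stage's output.txt before moving on.
import tempfile
from pathlib import Path


def run_pipeline(root: Path, worker, initial_input: str) -> str:
    payload = initial_input
    # Stage folders sort lexically: 01_outline, 02_draft, ...
    for stage in sorted(p for p in root.iterdir() if p.is_dir()):
        instructions = (stage / "INSTRUCTIONS.md").read_text()
        payload = worker(instructions, payload)
        (stage / "output.txt").write_text(payload)  # review/edit point
    return payload


# Demo workspace with a stub worker that tags the payload with each task.
root = Path(tempfile.mkdtemp())
for name, task in [("01_outline", "outline"), ("02_draft", "draft")]:
    stage_dir = root / name
    stage_dir.mkdir()
    (stage_dir / "INSTRUCTIONS.md").write_text(task)

result = run_pipeline(root, lambda ins, data: f"{data}->{ins}", "idea")
print(result)  # idea->outline->draft
```

The point of the sketch is that the filesystem itself is the orchestration layer: adding a stage is just adding a numbered folder, and every intermediate artifact is inspectable on disk.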
1 like • Mar 16
This is fantastic. It feels like you wrote the formal paper version of what a bunch of us have been hacking together in the dark. I've been standardizing a "folder-as-app" pattern for client workspaces too (identity / routing file at the root, workspace‑level context, then stage folders with their own CONTEXT + reference + output), but you've actually named the layers and tied them back to Unix, Parnas, compilers, etc. The five‑layer hierarchy (identity, routing, stage contract, reference, working artifacts) maps almost 1:1 to what works in practice.

A few things that really clicked:
- The Layer 3 vs Layer 4 split. In real pipelines, "factory config" vs "run‑specific artifacts" is exactly where people get lost. Making that structural instead of "hope the model figures it out" is huge.
- The U‑shaped intervention pattern. I see the same thing: heavy edits at the "set direction" stage and at the final assembly, fairly light touch in the middle if the stage contracts are tight.
- Treating the filesystem itself as the orchestration layer, not just storage. That's the thing most "agent" frameworks miss.

Curious on two fronts:
1. If you had to give people a minimal, concrete "hello world" workspace to feel this, what pipeline would you pick? (e.g. idea → outline → draft → edit?)
2. On semantic debugging: are you leaning more toward explicit provenance tags in outputs, or toward separate audit stages that re‑trace and verify earlier layers?

Would love to hear where you're experimenting next with this.
Is This Actually An Opportunity?
The AI Automation Agency (AAA) Business Model is very popular amongst a sector of YouTube creators such as Nate Herk, Liam Ottley, etc. They offer courses to learn Claude Code as well as learn how to start your own AAA. These courses are marketed to enable you to make your first $10k with automations and so on. After doing research, there is a common proposition from these creators: Offer 24/7 lead capture (chatbot/AI receptionist), an SMS booking system, a social media DM bot, or a “speed to lead” system (a workflow that responds to customers right away). These are the 4 core offers, find a low tech industry and sell one of these offers. My opinion here is that those 4 offers are not very high value and are most likely offered by many SaaS companies or even website hosts. Which brings me to the question: Using Claude Code, are there opportunities within small to medium businesses to automate high value workflows? Everyone wants to start their own AAA but I am not seeing any high value offers. It’s all just basic add-ons that are already offered. Maybe I am not seeing something here, that’s why I want to get some community input on it. Do you think Claude Code workflow automations are a feasible business?
2 likes • Mar 16
Most of the AAA stuff being sold right now is: "Here's a slightly fancier contact form / chatbot / DM script." That's why it feels low value and crowded: it is. Those 4 offers are basically:
- Lead capture widget
- Calendar widget
- DM widget
- Faster version of the above

All of that is either inside the CRM already or one update away from being native. Where there is real opportunity (and where Claude Code actually matters) is when you:

1. Pick a specific market, not "SMBs." Example: property managers, home services, med spas, niche B2B, treatment centers, etc.
2. Automate a full money-critical workflow, not just the front door. Things like:
3. Own an outcome, not a feature. "We recover X lost leads per month / increase show-up rate / cut admin time in half" beats "we built you a 24/7 chatbot."

Claude Code is just the glue that lets you:
- Read messy emails / forms / PDFs
- Normalize them into structured data
- Call APIs / CRMs / schedulers
- Write back human‑sounding updates and summaries

So is AAA feasible? Yes, but not as "I sell generic chatbots to any low-tech business." It looks more like: "I'm the 'revive dead leads and missed calls' guy for [very specific niche], and I wire Claude Code into their real stack (phones, CRM, billing) so they make more money with zero extra headcount."

Curious: if you had to pick one vertical and one painful workflow to go deep on, what would it be?
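The "glue" step in the middle of that list is the part people underestimate, so here's a hedged sketch of what "read messy input, normalize it into structured data" can look like. The field names and regexes are purely illustrative, not any real CRM's schema:

```python
# Illustrative sketch of the normalization glue: take one messy inbound
# message and turn it into a structured record a CRM could accept.
# Field names and patterns are made up for the example.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")


def normalize_lead(raw: str) -> dict:
    """Extract contact fields from a free-text message."""
    email = EMAIL_RE.search(raw)
    phone = PHONE_RE.search(raw)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        # No contact info at all -> a human (or another step) must chase it.
        "needs_followup": email is None and phone is None,
    }


msg = "hey its dave from the plumbing thing, call me 813-555-0142"
print(normalize_lead(msg))
# {'email': None, 'phone': '813-555-0142', 'needs_followup': False}
```

In a real build the regex layer handles the deterministic 80% and a model call covers the genuinely ambiguous remainder, which is exactly the outcome-over-feature framing above.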
Robert Randall
Level 2 • 4 points to level up
@robert-randall-4231
DFY SaaS for $2M+ niche businesses when off-the-shelf solutions break. Partner & investor.

Active 2d ago
Joined Mar 9, 2026
Tampa FL