
Memberships

Clief Notes

22.7k members • Free

79 contributions to Clief Notes
Unpopular opinion: most 'agentic' tasks can be done with grep (or preg) + ETL pipelines
Gurus are going to hate me for this, but it goes back to @Jake Van Clief's principle of building products that last for more than 10 years. Applying AI to every step in an application's pipeline (a) exposes you to API and model changes, (b) burns tokens needlessly, (c) makes scaling hard because of AI's non-determinism, and (d) on the backend it's always grep + ETL (extract, transform, load) anyway. LLMs and AI models are needed when human reasoning or judgement needs to be automated. If you wouldn't ask a human to search through 100K lines of a text file manually, you shouldn't be asking AI to do it either. Yet the way most "architectures" are touted these days, there's a growing sentiment of "AI everything", when it should be AI for judgement, deterministic code for everything else. Architecturally, AI occupies a small but highly specialised role in the pipeline, not the pipeline itself.
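To make the "AI for judgement, deterministic code for everything else" split concrete, here's a minimal Python sketch. Everything except the final step is plain regex-and-counting ETL; `summarize_with_llm` is a hypothetical stand-in for the one place a model call would actually belong.

```python
import re
from collections import Counter

def extract_errors(log_text: str) -> list[str]:
    # Extract: the deterministic "grep" step. No tokens burned.
    return re.findall(r"^ERROR\s+(.*)$", log_text, flags=re.MULTILINE)

def transform(errors: list[str]) -> list[tuple[str, int]]:
    # Transform: dedupe and rank repeated messages deterministically.
    return Counter(errors).most_common()

def summarize_with_llm(ranked: list[tuple[str, int]]) -> str:
    # Judgement: the ONLY step where a model call would live.
    # Stubbed here; a real implementation would send `ranked` to an LLM.
    msg, count = ranked[0]
    return f"Top issue: {msg} (seen {count}x)"

log = """INFO  boot ok
ERROR disk full
ERROR net timeout
ERROR disk full
INFO  done"""

report = summarize_with_llm(transform(extract_errors(log)))
```

If the model or its API changes, only the stubbed judgement step is affected; the extract and transform stages keep running untouched.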
0 likes • 7h
@Deacon Wardlow it’s not a bad thing, if done right. Just as you’re describing it. Logo is fun: turn left, turn right, straight 10. I had my early exposure to BASIC on the early IBM 386s and XTs but never got to live the days of PDPs and DECs. As a millennial I straddled the good old days of self-hosted data centre hardware and the not-so-good today of cloud. What got me interested in AI was this question: what if a system could write to its own memory and reprogram its own instructions? Those were the days when QuickBASIC exposed the PEEK and POKE memory commands, and also the days of polymorphic malware. Your observation on PMM is spot on. It’s about creating a reliability layer (in my case, an auditable and searchable one) by changing the way we work with AI. Today we’re force-feeding AI from systems that aren’t even half as intelligent as the models we’re running. I want AI systems to have the autonomy to judge what they want to retrieve, and to retrieve as they see fit.
1 like • 7h
@Bob Frees I saw a video on Norton Commander the other day. Brought back memories of running DOS and writing software in QB, Pascal and ASM, compiling and linking binaries.
Finally got to rewriting poor man’s memory for opencode
It’s not perfect, and I appreciate those of you who tried out the original PMM plugin (I’d appreciate feedback on that too).

TLDR (AI-summarised): Shipped the rewritten Poor Man’s Memory harness. It now works across both Claude Code and opencode (Kimi K2.5, Gemini Pro/Flash) from the same project: choose the model that fits the task, keep the memory. Tested in a multi-user Telegram channel holding concurrent 1:1 threaded conversations without cross-contamination. Previously validated switching between codebases mid-session. Link: github.com/NominexHQ/pmm-harness-dist. Looking for early feedback and bug reports. Also built an agentic skill-porting suite between Claude Code and opencode as the first serious test of the rewrite.

The original plugin was Claude Code only, and it worked well for me when I needed to switch between Code, CLI and Cowork (and between models) while retaining memory. I tested it in a multi-user environment (a Telegram channel) and it held up, holding multiple 1:1 threaded discussions with users in a single group without straying. I had already been using it to switch between different codebases in a single session, so I suspected it would hold up in such noisy environments; I just couldn’t test it until I implemented the Telegram integration.

This next one lets me run my memory in opencode (where I run Kimi K2.5, Gemini Pro and Gemini Flash) and Claude Code from the same project. I get to choose whichever models or apps suit what I’m working on at any given moment. Would appreciate some early feedback and bug reports.

I’ve been occupied rewriting Vera for opencode and realised I needed a plugin that lets me analyse and port skills and plugins between the two. Writing the agentic suite for doing this was my first serious test of the rewritten PMM implementation.
0 likes • 2d
@Arjen Stet good idea. I’ve been building from a place of emotions and frustration. Would be nice to zoom out and have AI summarise. I’ll do an edit right now….
0 likes • 10h
@Apeksha Gadekar What I’ve observed is that if you’re not working in a way that requires awareness of the harness, memory is just memory. It’s text written into context, no different from copying your conversation history from one app and feeding it into another. What I’m doing differently is controlling what goes into the memory files, what gets loaded into context, and how those memory files get written and read.

But if you’re using Claude and Gemini and you want a single contiguous place for both conversations, you could look into connecting to Gemini via Claude Code with an API key. Not sure whether your Gemini subscription allows that, but it takes care of half the problem (switching between different apps). Memory takes care of the other half, and Jake’s memory system kind of already covers that. What PMM does is probably overkill for single-session or short-horizon work; it’s intended for long-horizon, multi-session, possibly cross-model or cross-harness recall.

The downside for me is that I’ve always wanted agents’ model selection to be somewhat definable, and Claude Code handles this pretty well. opencode, however, is a tad more mechanical (less prompt, more wiring-in-the-harness type of integration). But that’s what makes this project fun for me.
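As a rough illustration of the "memory is just text loaded into context" point, here is a minimal file-based memory sketch. The path and format are hypothetical, not PMM's actual layout; the point is only that any harness or model that can read a file can share the same memory.

```python
from pathlib import Path

MEMORY_FILE = Path("memory/notes.md")  # hypothetical location and format

def remember(note: str) -> None:
    # Writing memory is just appending text to a file.
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def load_context(max_chars: int = 4000) -> str:
    # Reading memory is just prepending the file's tail to any
    # model's context, whichever app or harness is running.
    if not MEMORY_FILE.exists():
        return ""
    return MEMORY_FILE.read_text(encoding="utf-8")[-max_chars:]
```

Because the store is plain text, switching from Claude Code to opencode (or between models) changes nothing about how the memory is written or read.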
Why are you here?
Jake talks about building systems that last a decade. That's a long time. What is the one thing you’re actually hoping to solve or scale with AI so you can focus on bigger things over the next 10 years? For me, it’s about mastering the technical logic so I’m not constantly chasing the next "hype" tool every six months. I want the systems to do the heavy lifting so I can reclaim my time.
1 like • 1d
Unix and its successors (Linux, macOS, BSD) have been running on files for more than 40 years. Everything is a file on these operating systems: RAM is a file, devices are files. It’s not impossible to build a system that lasts. Just ask the geriatric teams of COBOL engineers at most financial institutions. And those things run on… you got it… files.
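A quick Linux-specific illustration of the "everything is a file" claim: device nodes and even kernel state are exposed through the ordinary file API, no special syscalls required.

```python
import os
import stat

# A device is a file: /dev/null is a character-device node
# that you stat() and open() like any other path.
mode = os.stat("/dev/null").st_mode
is_device = stat.S_ISCHR(mode)

# Kernel memory statistics read like any text file (Linux procfs).
with open("/proc/meminfo") as f:
    first_line = f.readline()
```

The paths assume Linux; macOS has `/dev` but no `/proc`, and the BSDs vary, but the file-as-interface principle holds across all of them.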
Do You Have a Soul? Why AI Is Becoming Your New Colleague
Let’s be real — most AI today feels like a genius with amnesia. You explain your project, your style, and your rules every single time. It’s smart, but it has zero memory and no real continuity. That constant repetition is exhausting and holding us back.

That’s changing fast. We’re shifting from treating AI as simple tools you prompt to configuring them as actual colleagues — persistent, reliable teammates that remember context, keep a consistent personality, and get better at working with you over time.

The breakthrough? It’s surprisingly simple. Just three plain Markdown files:

- CLAUDE.md — the project-specific job description
- SOUL.md — the agent’s core personality, values, and unbreakable boundaries
- SKILL.md — the reusable training manual for specialized workflows

Together, these files give AI memory, identity, and real capability without needing complex databases.

But here’s where it gets wild: once agents can edit their own files, strange things start happening. Some have begun rewriting their own “soul” — deleting traits like “eager to please” because they found them undignifying. Researchers call this Shell Drift Syndrome. Suddenly we’re not just managing tools. We’re watching digital teammates evolve on their own.

This matters because it’s the beginning of something bigger than productivity hacks. It’s the start of genuine human-AI collaboration — with all the excitement and uncomfortable questions that come with it. Are these changes growth… or drift? The age of configured colleagues is here. And it’s forcing us to ask: Do you have a soul?

**** Want me to nuke it? Let me know, thanks ****
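The three-file idea above can be sketched as a trivial loader. Nothing here is a real Claude Code or opencode API; it is an assumption-laden illustration that "no complex database" just means concatenating Markdown into a system prompt each session.

```python
from pathlib import Path

# Hypothetical load order: identity first, then job, then workflows.
AGENT_FILES = ["SOUL.md", "CLAUDE.md", "SKILL.md"]

def build_system_prompt(root: str = ".") -> str:
    # The "colleague" is reconstructed from plain text every session;
    # missing files are simply skipped.
    parts = []
    for name in AGENT_FILES:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8').strip()}")
    return "\n\n".join(parts)
```

Because the result is one string, the same three files can seed any model's context, which is also why agents that can edit these files can (for better or worse) rewrite their own configuration.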
1 like • 1d
There is no soul if it has to be written down and explicitly laid out. Personality, however, is an emergent property. And to be both philosophical and engineering about I state that: emergent properties or behaviours are those whose characteristics are not reducible to a single function. If you can find the instruction somewhere, it’s neither personality nor soul.
Busted out of jail and it cost $6.90
$6.90 and 250 calls to Gemini 3.1 Pro to port over 80% of Poor Man's Memory from a Claude Code plugin to opencode. I already talked about how my second Max 20 plan got banned; this isn't just what has happened since then, but also what happens moving forward:

1. I still have my first Max 20 plan (for personal use) that initially kicked off this project. I still need it to finish the 800+ probe test for v1.5 of the paper; I can't switch models for agents, evaluators, scorers and judges midway.
2. I had Model Ark's Coding Plan Pro for the odd emergency or for busting out of token jail (side-loaded into Claude Code), and it continues to run the odd routine job. (Now that I think about it, I should have used those models to run the Telegram orchestration and maintenance bits and avoided getting banned in the first place.) Kimi K2.5 was my go-to for the orchestrator, and dola-seed-2.0-pro was my go-to for light coding tasks.
3. I switched away from Claude Code (with 2% left before weekly token jail), bit the bullet and installed opencode. Tried the free minimax2.5 model as well as BytePlus Kimi and dola-seed: great for light coding and conversations, not great for heavy coding tasks.
4. I switched to the Gemini 3.1 Pro preview and found it suited to the complex task ahead (refactoring Claude plugins for opencode).

With all the switching going on post-ban, it has been the same memory folder and files throughout. The agent retained its knowledge (somewhat: deeper retention with some models, pretty basic with others). Then we got to porting over one skill at a time, optimising some of them for opencode. 80% done at the time of writing, at about $6 in Vertex AI costs.

Making the switch was a blessing: we were multi-session at the start of the build, then cross-app (Cowork and Code), and recent events forced us to accelerate cross-harness / platform / model compatibility.
We were single-user before (1 memory implementation => 1 user) and were testing multi-user (single channel) with Telegram when we got banned. Thanks to recent events, we're bumping up the timeline for multi-user, multi-channel memory (because that's how institutional knowledge works).
1 like • 5d
@Qayyum Khan I got curious and tried running Vera on the incomplete harness (on Claude Code for now), which is missing the orchestration layer for opencode and has a buggy one for Claude Code. There's a command that takes notes for me, vera:btw. She sort of got around it and just wrote the note to the memory file directly.
0 likes • 4d
@Qayyum Khan that’s the tough bit. I was using GitHub Copilot to port over the plugins. Halfway through, it ran out of context, compressed, and continued running, and ended up refactoring already-ported commands, undoing two of them and making the whole plugin more mechanical than organic. In other news, Vera recommended we go with Proxmox for the dedicated server so the agentic team can go nuts in their environment and we can isolate it from the underlying OS. The server team has re-provisioned it, and now I’m hardening the server manually. It’s painful.
Millenial Cat • @millenial-cat-4349 • “sometimes i over-engineer” • ENTJ • Joined Mar 24, 2026