
8 contributions to Clief Notes
Do I have to break down my skill to be even more efficient?
I am experimenting with knowledge management; see https://www.skool.com/quantum-quill-lyceum-1116/validating-the-idea-of-modular-context-architecture?p=85f985fa During a linting session I noticed my skill was verbose and inefficient, so I thought: "I wish I had a way to manage knowledge for skills... wait, lol, I do have it." So my question is: have any of you tried to create a wrapper skill that routes to subskills only when needed, to prevent context bloat? It is the same principle over and over, but I had never applied it to skills.
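A minimal sketch of what such a wrapper could look like (all names here are hypothetical, assuming a `skills/<name>/SKILL.md` layout): only a one-line index of subskills stays in context, and a subskill's full instructions are loaded on demand.

```python
# Hypothetical sketch: a wrapper "skill" that keeps only a short index in
# context and loads each subskill's full instructions on demand.
from pathlib import Path

SKILLS_DIR = Path("skills")  # assumed layout: skills/<name>/SKILL.md


def skill_index() -> str:
    """Return one summary line per subskill (the only text always in context)."""
    lines = []
    for skill in sorted(SKILLS_DIR.iterdir()):
        summary = (skill / "SKILL.md").read_text().splitlines()[0]
        lines.append(f"- {skill.name}: {summary}")
    return "\n".join(lines)


def load_subskill(name: str) -> str:
    """Load the full instructions only when the router decides they are needed."""
    return (SKILLS_DIR / name / "SKILL.md").read_text()
```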
0 likes • Apr 12
Yes, do it. It's a bomb. 😂
Validating the idea of Modular Context Architecture
I'm following the courses and making my first steps toward treating knowledge as code.

## Stack: Obsidian + NotebookLM + ContextMode + Docker + Git

Thanks to a mouthful from Gemini, I call this: Modular Context Architecture (MCA). In this system, each folder represents a specific "Knowledge Node" with a standardized file syntax to optimize how an agent routes and consumes information.

## The Node Structure

Each node (folder) contains four primary files designed to tier information by depth of intent:

1. README.md (Entry Point): Contains the minimal viable information required to identify the node's purpose. This serves as the first layer for an agent's broad search or initial embedding match.
2. CONTEXT.md (The Schema): A deeper explanation of the node's internal logic. It acts as a map, referencing more granular sub-files (e.g., Specific_Detail_A.md) that should only be loaded if the agent requires higher resolution on the topic.
3. REFERENCES.md (The Router): A dedicated file for internal [[wikilinks]] and external URLs. This separates the relational graph from the descriptive prose.
4. MEMORY.md (Heuristics): An optional file containing localized constraints, formatting rules, or "long-term" logic that the agent must apply strictly when working within that specific node.

## Example: Business Model Canvas

- The Graph: The complete canvas.
- The Node: The "Value Proposition" folder.
- The Execution: An agent enters the "Value Prop" directory. It identifies the node via README.md. If the task requires analyzing customer pain points, the CONTEXT.md directs the agent to a specific Pains.md file. Throughout the process, the agent adheres to tone and formatting constraints defined in MEMORY.md.

## Technical Advantages

- Context Efficiency: The agent only retrieves the specific "layer" of documentation necessary for the current task.
- Scalability: New domain knowledge can be integrated by adding a standardized folder without restructuring the existing graph.
- Human-AI Interoperability: The vault remains navigable for human users while providing a clear, API-like structure for LLM parsing.
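The four-file contract above can be sketched as a small scaffolding helper (a hypothetical illustration, assuming the vault is a plain directory tree; the file templates here are placeholders, not the author's actual contents):

```python
# Hypothetical sketch of one MCA "Knowledge Node": the four tiered files
# described above, scaffolded so every node follows the same contract.
from pathlib import Path

NODE_FILES = {
    "README.md": "# {name}\nMinimal purpose statement (entry point).\n",
    "CONTEXT.md": "# Schema\nInternal logic; links to granular sub-files.\n",
    "REFERENCES.md": "# Router\n[[wikilinks]] and external URLs only.\n",
    "MEMORY.md": "# Heuristics\nLocal constraints the agent must apply.\n",
}


def scaffold_node(vault: Path, name: str) -> Path:
    """Create a standardized node folder inside the vault."""
    node = vault / name
    node.mkdir(parents=True, exist_ok=True)
    for filename, template in NODE_FILES.items():
        path = node / filename
        if not path.exists():  # never overwrite existing knowledge
            path.write_text(template.format(name=name))
    return node
```

Because every node exposes the same four files, an agent (or a human) can navigate any part of the vault with one fixed lookup rule: README first, CONTEXT for depth, REFERENCES for links, MEMORY for constraints.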
0 likes • Apr 11
@Chris King I am actually very happy with the results. I have also upgraded some workflows to "lint" the brain. The cool part is that skills can also be decomposed with the same concepts. @Raphael Carvalheira's comment just unlocked a little side of YAML for me that also seems to boost navigation speed outside of LLM inference. I was not aware of it, but it perfectly matches the flow of https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f which is an even stronger reference confirming how effective it is to treat knowledge as code and data modeling, regardless of labels. I have to keep up with the rest of the courses, though.
1 like • Apr 11
@Michael Hoffmann I've tried to think in similar ways about data model snapshots. There is indeed a cost in maintaining such a system, which is why "linting the brain" to ensure clear paths is very important. There is no official definition of "weak signal" I can relate to so far, but https://context-mode.mksg.lu/ is really helping in avoiding the context bloat that can generate generic answers. The truth is that LLMs just do not think, so they have to be given the best possible context to answer a specific question. Everything started because I was tired of collecting context and wasting tokens.
🏁 Foundations 4.3 Check-In
You just used Claude Desktop as a thinking partner instead of a vending machine. Vote below, then tell us in the comments: what was the problem, and what was the insight you walked away with?
2 likes • Mar 23
I tend to apply the same principles of agile teams to AI. Some models should just be responsible for brainstorming with you and defining clear benefits; others should transform that benefit into a sort of epic planning and break it down into simple actionable tickets that dumber models can run in parallel.
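The split described above can be sketched as a tiny routing table (a hypothetical illustration; the tier names and model labels are placeholders, not specific products):

```python
# Hypothetical sketch of the "agile team" split: map each stage of work
# to a model tier, so expensive models brainstorm and cheap ones execute.
TIERS = {
    "brainstorm": "large-model",   # define the benefit together with you
    "plan": "mid-model",           # turn the benefit into an epic + tickets
    "execute": "small-model",      # run simple tickets in parallel
}


def route(stage: str) -> str:
    """Pick a model tier for a stage; unknown stages fall back to the planner."""
    return TIERS.get(stage, TIERS["plan"])
```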
🏁 Foundations 4.2 Check-In
You just ran Claude Code on your own files. Vote below, then drop what you made in the comments. What was the task? What did you get back?
2 likes • Mar 23
I'm sure we will see something like this in the course, but if people do not like the idea of wasting tokens, I strongly recommend https://context-mode.mksg.lu/ It is a tool that acts as a virtualization layer for context, meaning it provides a more efficient way to manage contexts and tools. This becomes very important when dealing with a large set of MCP tools, because they flood your context and increase hallucinations and context rot.
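To make the problem concrete, here is a generic sketch (explicitly not context-mode's actual API) of the underlying idea: instead of sending every MCP tool schema to the model, keep only the definitions relevant to the current request.

```python
# Generic sketch (NOT context-mode's API): keep only the tool definitions
# relevant to the current request, instead of flooding the context with
# every MCP tool schema at once.
def prune_tools(tools: list[dict], query: str, limit: int = 5) -> list[dict]:
    """Rank tools by keyword overlap with the query and keep the top few."""
    words = set(query.lower().split())

    def score(tool: dict) -> int:
        text = (tool["name"] + " " + tool.get("description", "")).lower()
        return sum(w in text for w in words)

    return sorted(tools, key=score, reverse=True)[:limit]
```

Real implementations rank with embeddings rather than keyword overlap, but the effect is the same: a smaller, more relevant context.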
🏁 Foundations 3.2 Check-In
You just saw how the folder structure adapts to different use cases. Vote below, then drop your customized setup in the comments. What did you name your workspaces and why?
0 likes • Mar 22
@Alberto Neri I think you are missing an important point: AGENT is just a word. This initial part is very important to me because it shows you the abstraction process. Documents are now what tables used to be for traditional systems. AI agent development = data engineering with text + LLM + tooling.
Davide Lupis
@davide-lupis-8178
Data Scientist crafting the art of side hustles and full stack development

Joined Mar 18, 2026
ENFJ