Zero context, and memory works, but how?
Hey everyone! First of all, I just want to say a huge thank you to the Agent Zero community, and especially to Jan as its creator. I've been following this project from the very beginning, and even back then I could feel it had something special. For me personally, nothing else comes close: Agent Zero is in a league of its own.

Now let me tell you about what I've been building. Using Agent Zero as the backbone, I was able to bring to life an idea I'd been sitting on for a while: tackling the zero-context problem from a completely different angle.

Pure Intellect is a local memory server that sits between Agent Zero and your LLM. Instead of letting the context window fill up and die, it intercepts the moment before overflow, creates a compressed snapshot of everything important (we call it a "coordinate"), resets the context, and picks up the conversation as if nothing happened. No lost information. No repetition. The agent just keeps going.

- ~85% fewer tokens per request
- Memory survives restarts and model changes
- Fully local: no cloud, no tracking
- Open source: github.com/Remchik64/pure-intellect

Since the priority was local-first development, I also built a dedicated Agent Zero plugin that handles all communication with the Pure Intellect server. It replaces the built-in _memory plugin and gives Agent Zero two new abilities: recalling relevant facts from long-term memory before each response, and saving new facts after each turn, all routed through the PI server.

github.com/Remchik64/plagin_agent0

Please don't see this as self-promotion! I'm just trying to give something back to this community. Maybe this idea sparks something new for someone, or becomes a building block for the next cool thing. Thanks again for everything you're building here. Seriously.