
Memberships

AI Automation Society

202.6k members • Free

Agent Zero

1.7k members • Free

4 contributions to Agent Zero
Agent Zero + Graphiti (memory) + FalkorDB (hybrid GraphRAG/Vector database)?
Has anyone configured Agent Zero to leverage the automated Graphiti memory (https://help.getzep.com/graphiti/) with the fast FalkorDB (https://www.falkordb.com/) graph database (or another hybrid GraphRAG/vector database)? If so, did you need to make significant changes to Agent Zero's memory YAML/JSON/env files to take full advantage of Graphiti with a GraphRAG/vector database backend, and how much of an improvement was it over Agent Zero's built-in memory system?
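For anyone who wants to experiment before answering, FalkorDB publishes an official Docker image, so a local instance for Graphiti (or any other client) to point at can be stood up in one command. A minimal sketch, assuming the default ports (6379 for the database, 3000 for the bundled browser UI):

```shell
# Start a throwaway local FalkorDB instance
# (--rm means data is not persisted after the container stops)
docker run -it --rm \
  -p 6379:6379 \
  -p 3000:3000 \
  falkordb/falkordb:latest
```

Agent Zero's memory layer would still need to be wired to this endpoint (e.g. via Graphiti), which is exactly the configuration question above.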
0 likes • 13d
@Erich Greenebaum I completely agree with your comment, "I need both "styles" of memory."
0 likes • 12d
@Ken Romero and @Erich Greenebaum You're welcome! I've gone through extensive comparisons of over 30 graph/vector databases, and as far as I can ascertain the best option for most (not all) use cases is FalkorDB. Graphiti can also be very useful as an open-source way to link agents to FalkorDB. I'm hoping there will be a Docker config builder option for Agent Zero in the future, like what the new TrustGraph is doing (seen here: https://config-ui.demo.trustgraph.ai/). Perhaps I should post a separate message on this topic.
Save AI Tokens - Token-Oriented Object Notation (TOON)
Has anyone else seen this? I just saw it in a YouTube video. It looks promising for saving tokens on AI instructions. "Token-Oriented Object Notation (TOON) – Compact, human-readable, schema-aware JSON for LLM prompts. Spec, benchmarks, TypeScript SDK." https://github.com/toon-format/toon
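The core idea in the linked spec is to declare an array's field names once, then emit each item as a single delimiter-separated row instead of repeating JSON keys and braces per object. As a toy illustration of that idea (not the official TypeScript SDK, and only handling flat, uniform objects), the encoding looks roughly like this:

```python
def to_toon(name, rows):
    """Toy TOON-style encoder for a flat, uniform list of dicts.

    Declares the schema once in a header line, then emits one
    compact comma-separated row per item.
    """
    fields = list(rows[0])  # assume every row has the same keys
    header = f"{name}[{len(rows)}]{{{','.join(fields)}}}:"
    body = ["  " + ",".join(str(row[f]) for f in fields) for row in rows]
    return "\n".join([header] + body)


users = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
print(to_toon("users", users))
# users[2]{id,name}:
#   1,Alice
#   2,Bob
```

Compared with the equivalent JSON array, the key names appear once instead of once per object, which is where the token savings on large uniform lists come from.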
Coming soon? Knowledge, RAG, Planning, Scheduling, GraphRAG DB
Question 1: I saw on GitHub that under "Coming Soon" it lists "Knowledge and RAG Tools" and "Planning and Scheduling". Are there any ballpark estimates for these (e.g., second quarter 2025)?
Question 2: Are there any plans to integrate a cutting-edge open-source GraphRAG database (like FalkorDB)?
Question 3: Are there any suggestions on how the AO community can collectively keep improving and refining current and future AO system prompts, and share those updates (with test results showing the improvement) with each other? This could be great for getting the community involved in improving memory-storage processes, especially as they relate to vector and GraphRAG database storage.
0 likes • Apr 7
@Jan Tomášek Thank you!
Auto-Memory: Fine-Tuning in Disguise?
I'm here to share a rather intriguing observation I've made while running problem-solving tests on Agent Zero, and more generally within the realm of physics. There's a particular problem – #7 on page 24 of the Oxford University Physics 1 textbook – that no AI can crack with a zero-shot approach, save for o1-preview, as you can see in the third attached screenshot.

The first video shows Agent Zero equipped with Gemini 1.5 Flash 002 and two subordinate agents, one of which it was told is an expert physicist – but no success: they just overthink while staying biased towards their first assumptions. My "Grass grows, guys!" hint is very clear about that. In the second video, however, Agent Zero successfully solves the problem after I prompted it to memorise the following sentence: "When solving physics problems, don't rely solely on appearances, and utilise predictive models to compensate for any missing features within your problems."

None of the LLMs I tested managed to deduce that the grass continues to grow, which requires a mathematical model accounting for that growth. They all answered 27 days, but the correct answer is 54.

It seems the auto-memory function, when used effectively, is less case-specific than I initially expected, and actually allows A0 to generalise even better without further bloating the system prompt. Thanks a million, Jan. [Oxford's textbook URL: https://www.ox.ac.uk/sites/files/oxford/media_wysiwyg/physics-problems-solutions-1-compressed-1.pdf]
0 likes • Mar 30
Great information @Alessandro Frau! It seems the more we appreciate how humans learn, how they reason, and just how important the right type of memory is, the better we can construct and instruct AI. I'm glad many in the AI community are realizing that AI can do much more than it previously has, even with small context windows. After all, humans have a small context window, and yet look at what mankind has been able to do.
Peter A
@peter-arvo-4187
A human.

Joined Mar 30, 2025
Michigan, USA