Google Claims to Crack "Infinite Memory"
The TL;DR: they found a way to weave memory into the inference steps an LLM goes through, rather than keeping it separate. The model assigns a weight to each memory as it forms: worthwhile memory gets kept, and not-so-important stuff gets purged as its weight decays. As of December 4th, this is another huge leap in AI capability.
Inference points are basically what LLMs use to make small jumps in understanding context, and those jumps make or break the experience for the user. Once memory gets bloated (the thread gets too long), those jumps get harder and harder to make.
Weights act as an extra metric that lets the model decide, in real time, whether or not to forget something.
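To make that concrete, here's a toy Python sketch of the mechanism as described above. This is my own illustration, not anything from Google's paper: every name here (MemoryStore, decay_rate, purge_threshold) is made up, and the decay-then-purge loop is just one simple way the "keep the worthwhile, drop the rest" idea could work.

```python
# Toy sketch (NOT Google's implementation): each memory gets an
# importance weight when it forms, weights decay every step, and
# anything below a threshold gets purged.
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    content: str
    weight: float  # importance assigned when the memory forms


@dataclass
class MemoryStore:
    decay_rate: float = 0.95       # per-step multiplicative decay
    purge_threshold: float = 0.1   # entries below this get dropped
    entries: list[MemoryEntry] = field(default_factory=list)

    def write(self, content: str, importance: float) -> None:
        """Store a new memory with its initial importance weight."""
        self.entries.append(MemoryEntry(content, importance))

    def step(self) -> None:
        """Decay all weights, then purge anything too unimportant."""
        for e in self.entries:
            e.weight *= self.decay_rate
        self.entries = [e for e in self.entries
                        if e.weight >= self.purge_threshold]


store = MemoryStore()
store.write("user's name is Alice", importance=1.0)   # worth keeping
store.write("user said 'hmm'", importance=0.15)       # filler
for _ in range(10):
    store.step()
print([e.content for e in store.entries])  # only the high-weight memory survives
```

The point of the toy: nothing ever has to scan the whole thread, the store just shrinks on its own as low-weight entries fall below the threshold.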
It won't be long until this gets plugged into open-source models, opening up a new wave of vision-capable AI (think Kimi, but on more hardware). String together a bunch of smaller AI agents on top of that, and the implications get scary.
The question is how much of this is marketing vs actual research-backed architecture?
Links to the white papers:
"Instead of compressing information into a static state, this architecture actively learns and updates its own parameters as data streams in."