OpenClaw just dropped two releases in three days: the 2026.4.12 stable release and a 2026.4.15 beta. The headline feature is worth paying attention to if you're building personal AI agents.
Active Memory is a new optional plugin that gives OpenClaw a dedicated memory sub-agent. It runs right before the main reply, automatically pulling in relevant preferences, context, and past conversation details. It can be configured in one of three modes: recent messages only, full context, or search-based recall. The search mode is the interesting one: it queries your stored memories by relevance instead of just recency.
This matters because most personal AI setups lose context between sessions. You tell it something on Monday, it forgets by Wednesday. Active Memory fixes that at the architecture level.
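To make the three modes concrete, here's a minimal sketch of what a pre-reply memory pass could look like. Everything here is a hypothetical stand-in, not OpenClaw's actual plugin API: `MemoryStore`, `build_prompt`, and the keyword scoring are invented for illustration (a real search mode would use embeddings).

```python
# Hypothetical sketch of a memory pre-pass; not OpenClaw's real plugin API.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    score: float = 0.0

class MemoryStore:
    """Toy store standing in for the Active Memory index."""
    def __init__(self, memories):
        self.memories = memories  # assumed to be in chronological order

    def recent(self, n=5):
        # "recent messages only" mode: last n items, newest first
        return self.memories[-n:][::-1]

    def full(self):
        # "full context" mode: everything, in order
        return list(self.memories)

    def search(self, query, n=5):
        # "search-based recall" mode: rank by relevance, not recency.
        # Real implementations would use embeddings; naive keyword overlap here.
        terms = set(query.lower().split())
        scored = [
            Memory(m.text, len(terms & set(m.text.lower().split())))
            for m in self.memories
        ]
        return sorted(scored, key=lambda m: m.score, reverse=True)[:n]

def build_prompt(store, mode, user_message):
    """Pre-reply step: inject recalled memories ahead of the main agent turn."""
    if mode == "recent":
        recalled = store.recent()
    elif mode == "full":
        recalled = store.full()
    else:  # "search"
        recalled = store.search(user_message)
    context = "\n".join(f"- {m.text}" for m in recalled)
    return f"Known about the user:\n{context}\n\nUser: {user_message}"
```

The key architectural point is the same regardless of mode: recall happens as its own step before the main reply, so the primary agent never has to be told to "remember".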
Other notable changes in 2026.4.12:
- LM Studio provider bundled out of the box — runtime model discovery, stream preload, and memory-search embeddings for local/self-hosted models
- Experimental MLX speech provider for macOS Talk Mode — local utterance playback with interruption handling
- Plugin trust boundaries — manifest-declared needs with centralized policy enforcement, safer plugin loading (see the sketch after this list)
- Security fixes: removed busybox/toybox from safe interpreter binaries, blocked env-argv injection, prevented empty approver lists from granting auth
- 35+ contributors on this release
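Here's a rough sketch of the trust-boundary idea: a plugin declares what it needs up front, and a central loader refuses anything outside policy. The field names, capability strings, and `load_plugin` function are all invented for illustration; OpenClaw's actual manifest schema isn't shown here.

```python
# Hypothetical manifest + centralized policy check; not OpenClaw's real schema.
ALLOWED_CAPABILITIES = {"memory.read", "memory.write", "net.fetch"}

manifest = {
    "name": "active-memory",
    "version": "1.0.0",
    "needs": ["memory.read", "memory.write"],  # declared up front, not discovered at runtime
}

def load_plugin(manifest, granted):
    """Refuse to load a plugin that asks for anything unknown or ungranted."""
    requested = set(manifest["needs"])
    unknown = requested - ALLOWED_CAPABILITIES
    denied = requested - granted
    if unknown:
        raise PermissionError(f"unknown capabilities: {sorted(unknown)}")
    if denied:
        raise PermissionError(f"not granted: {sorted(denied)}")
    return {"name": manifest["name"], "capabilities": requested}

plugin = load_plugin(manifest, granted={"memory.read", "memory.write"})
```

The point of centralizing the check is that no individual plugin gets to decide its own boundaries; the host enforces one policy for everything it loads.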
The 2026.4.15-beta.1 pushes further:
- LanceDB now supports cloud object storage for memory indexes — no more local-disk-only requirement (sketch below)
- GitHub Copilot added as an embedding option for memory search
- New local model lean mode that drops heavyweight tools (browser, cron) for weaker hardware setups
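For context on what the LanceDB change enables, this is roughly what pointing a LanceDB index at object storage looks like using the library's own Python client. The bucket path and table contents are made up, and this is not OpenClaw's actual wiring; it just shows that the same table API works whether the index lives on local disk or in S3 (credentials for the bucket are assumed to be configured in the environment).

```python
# Illustration of the underlying LanceDB capability, not OpenClaw's config.
import lancedb

# Point the index at cloud object storage instead of a local directory.
db = lancedb.connect("s3://my-memory-bucket/openclaw-index")

table = db.create_table(
    "memories",
    data=[
        {"vector": [0.1, 0.2, 0.3], "text": "prefers dark mode"},
        {"vector": [0.0, 0.4, 0.1], "text": "lives in Berlin"},
    ],
)

# Relevance search works the same way against local or cloud-backed indexes.
hits = table.search([0.1, 0.2, 0.25]).limit(2).to_pandas()
print(hits["text"].tolist())
```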
If you're running OpenClaw on your own devices, the memory story just got significantly more useful. Local-first, privacy-preserving, and now with cloud-durable storage as an option.