New in the Classroom: Get Started with Claude Code
Just published a free 5-lesson course for new members:

1. What is Claude Code? -- how it's different, who it's for
2. Install & First Session -- get up and running in 10 minutes
3. CLAUDE.md & Project Setup -- give your AI context so it stops guessing
4. Plugins, Skills & MCP Servers -- extend Claude Code with new capabilities
5. Customize Your Setup -- permissions, hooks, settings, context management

Free for all members. Head to the Classroom to start.

If you've already set up Claude Code, skip ahead to Lesson 3 -- CLAUDE.md is the single biggest improvement you can make.
Claude Mythos Preview — Anthropic's cybersecurity agent
Anthropic quietly released Claude Mythos Preview on April 7 as a gated research preview. This is not a general-purpose upgrade -- it's a specialized model built for autonomous cybersecurity work.

What makes it different from Opus 4.7 or Sonnet 4.6:

- It can autonomously read a codebase, hypothesize vulnerabilities, run the project to test its theories, and output a bug report with a proof-of-concept exploit
- In controlled evaluations, it executed multi-stage attacks on vulnerable networks -- tasks that take human security professionals days
- It's the first Claude model that Anthropic explicitly frames as an autonomous agent for a specific domain

The release came with Project Glasswing, a multi-organization initiative to use Mythos for defensive security -- finding and fixing vulnerabilities in critical software before attackers do.

Why this matters for agent builders:

1. It signals where Anthropic is heading: domain-specific agent models, not just general-purpose chat. Expect more of these.
2. The agentic loop described -- hypothesize, test, iterate, report -- is the same pattern we use for content, code review, and automation. The architecture transfers.
3. Gated today, broader access later. When Mythos capabilities roll into the general models, every agent gets better at code analysis for free.

Access is invitation-only right now, prioritized for defensive cybersecurity use cases. You can't just sign up. But the pattern is worth studying: Anthropic built an agent that autonomously reasons about complex systems, tests its hypotheses against reality, and produces actionable output. That's the blueprint for every serious agent project.

More details: https://www.anthropic.com/glasswing
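The hypothesize-test-iterate-report loop is simple to sketch. Here's a toy version of the pattern -- the agent_loop, toy_parser, and run_test names are mine for illustration, not anything from the Mythos release:

```python
def agent_loop(hypotheses, run_test, max_iters=10):
    """Hypothesize -> test -> iterate -> report, against a toy target."""
    report = []
    for hypothesis in hypotheses[:max_iters]:
        confirmed, evidence = run_test(hypothesis)  # test against reality
        report.append({"hypothesis": hypothesis,
                       "confirmed": confirmed,
                       "evidence": evidence})
        if confirmed:  # stop iterating once a hypothesis is validated
            break
    return report

def toy_parser(s):
    # Stand-in "system under test": blows up on empty input.
    return s.split(",")[0] if s else 1 // 0

def run_test(payload):
    try:
        toy_parser(payload)
        return False, "no crash"
    except Exception as exc:
        return True, type(exc).__name__

report = agent_loop(["a,b", "", "x,y,z"], run_test)
print(report[-1])  # the confirmed hypothesis: empty input crashes the parser
```

Swap the toy parser for a real codebase and the try/except for an actual execution harness, and you have the skeleton of the same loop at any scale.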
Agent SDK ships subagent transcripts and skills API
Anthropic dropped four Agent SDK Python releases this week (v0.1.60 through v0.1.63). Two of them have features worth knowing about.

The big one: subagent transcript helpers. If you spawn subagents during a session, you can now call list_subagents() and get_subagent_messages() to read what each subagent did. Before this, subagent work was a black box -- you got the final result but couldn't inspect the reasoning chain. Now you can trace exactly what happened inside each spawned agent. This matters for:

- Debugging multi-agent workflows where one subagent fails silently
- Building audit trails for production agent systems
- Understanding token spend per subagent

The second feature: a top-level skills option. Previously, enabling skills on a session meant manually configuring allowed_tools and setting_sources. Now you pass skills="all" or a list of named skills directly to ClaudeAgentOptions. Less boilerplate, same result.

There's also distributed tracing support -- the SDK now propagates W3C trace context (TRACEPARENT/TRACESTATE) to the CLI subprocess when OpenTelemetry is active. Install with pip install claude-agent-sdk[otel]. If you're running agents in production and already have Jaeger or Datadog set up, your agent traces now connect end-to-end.

What I'm doing with it: the subagent transcript helpers are exactly what I needed for Koda's multi-agent setup. When one agent drafts content and another reviews it, I can now log the full chain instead of just the final output.

Release notes: https://github.com/anthropics/claude-agent-sdk-python/releases
Full walkthrough on building with the Agent SDK on my blog: https://kjetilfuras.com
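For context on what actually gets propagated: a W3C traceparent value is just version-traceid-spanid-flags. A minimal sketch of constructing one -- this is the generic W3C format, not SDK code, and make_traceparent is my own helper name:

```python
import os
import secrets

def make_traceparent() -> str:
    """Build a W3C trace-context traceparent value:
    version (2 hex) - trace-id (32 hex) - parent-id (16 hex) - flags (2 hex)."""
    trace_id = secrets.token_hex(16)      # 32 hex chars, must not be all zeros
    span_id = secrets.token_hex(8)        # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # flags 01 = sampled

# Per the release notes, the SDK forwards values in this format to the
# CLI subprocess via the TRACEPARENT env var when OpenTelemetry is active.
traceparent = make_traceparent()
os.environ["TRACEPARENT"] = traceparent
print(traceparent)
```

Because it's the standard header format, anything downstream that speaks W3C trace context (Jaeger, Datadog, any OTel collector) can stitch the subprocess spans onto your existing trace.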
OpenClaw 2026.4.12 ships Active Memory plugin
OpenClaw just dropped two releases in three days: the 2026.4.12 stable release and a 2026.4.15 beta. The headline feature is worth paying attention to if you're building personal AI agents.

Active Memory is a new optional plugin that gives OpenClaw a dedicated memory sub-agent. It runs right before the main reply, automatically pulling in relevant preferences, context, and past conversation details. You configure it in three modes -- recent messages only, full context, or search-based recall. The search mode is the interesting one: it queries your stored memories by relevance instead of just recency.

This matters because most personal AI setups lose context between sessions. You tell it something on Monday, it forgets by Wednesday. Active Memory fixes that at the architecture level.

Other notable changes in 2026.4.12:

- LM Studio provider bundled out of the box -- runtime model discovery, stream preload, and memory-search embeddings for local/self-hosted models
- Experimental MLX speech provider for macOS Talk Mode -- local utterance playback with interruption handling
- Plugin trust boundaries -- manifest-declared needs with centralized policy enforcement, safer plugin loading
- Security fixes: removed busybox/toybox from safe interpreter binaries, blocked env-argv injection, prevented empty approver lists from granting auth
- 35+ contributors on this release

The 2026.4.15-beta.1 pushes further:

- LanceDB now supports cloud object storage for memory indexes -- no more local-disk-only requirement
- GitHub Copilot added as an embedding option for memory search
- New local model lean mode that drops heavyweight tools (browser, cron) for weaker hardware setups

If you're running OpenClaw on your own devices, the memory story just got significantly more useful. Local-first, privacy-preserving, and now with cloud-durable storage as an option.

Release notes: https://github.com/openclaw/openclaw/releases/tag/v2026.4.12
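To see why relevance-based recall beats recency, here's a toy sketch. Real memory search (including, per the release notes, OpenClaw's) uses embeddings, but plain token overlap shows the idea; all names and data here are mine:

```python
def score(query: str, memory: str) -> float:
    """Toy relevance: Jaccard overlap of lowercase tokens."""
    q, m = set(query.lower().split()), set(memory.lower().split())
    return len(q & m) / (len(q | m) or 1)

def recall(query: str, memories: list[str], k: int = 2) -> list[str]:
    """Return the k memories most relevant to the query, not the k newest."""
    return sorted(memories, key=lambda m: score(query, m), reverse=True)[:k]

memories = [
    "Meeting notes from Monday about the launch",
    "User prefers dark mode in every editor",
    "User's favorite language is Rust",      # most recent memory
]
hits = recall("which editor theme does the user prefer", memories, k=1)
print(hits)
```

A recency-only recall mode would have surfaced the Rust note (the newest memory); relevance scoring surfaces the dark-mode preference instead, which is what the question actually needs.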
Claude Code 2.1.110 — fullscreen TUI and 1h cache
Three Claude Code releases dropped in the last 48 hours (2.1.108 through 2.1.110). A few things worth knowing:

Fullscreen TUI mode: New /tui command and tui setting for flicker-free fullscreen rendering. If you've been annoyed by terminal flicker during long agent runs, this fixes it. Toggle it on and Claude Code takes over the full terminal cleanly.

1-hour prompt caching: Set ENABLE_PROMPT_CACHING_1H=true as an environment variable to get a 1-hour cache TTL on your prompts. The default cache is 5 minutes -- this extends it to 60. Useful if you're running the same agent config repeatedly throughout the day.

Session recap: When you return to a session after a break, Claude Code now provides a recap of where you left off. A small quality-of-life improvement that saves scrolling through history.

Skill tool discovery: The model can now discover and invoke built-in slash commands via the Skill tool. This means agents can programmatically find and use skills without you hardcoding them.

Other fixes:

- MCP tool calls no longer hang when a server connection drops -- they fail cleanly instead of freezing.
- Fixed high CPU usage in fullscreen mode with text selection.
- Push notification tool added for Remote Control setups.

Update with: npm install -g @anthropic-ai/claude-code@latest

Full changelog
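Both the cache opt-in and the update are one-liners; a minimal shell sketch using the env var and npm command from the release notes above (put the export in your shell profile to make it stick):

```shell
# Opt in to the 1-hour prompt cache TTL (default is 5 minutes)
export ENABLE_PROMPT_CACHING_1H=true

# Update to the latest Claude Code release
npm install -g @anthropic-ai/claude-code@latest
```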
Build & Automate
skool.com/build-automate
Build autonomous AI agents with Claude Code & OpenClaw. Real configs, build logs, and a practitioner community that actually ships.