• Anthropic launched a code review tool designed to catch the flood of AI-generated code hitting production systems. If you're shipping AI-assisted code in your automations or products, this kind of quality-gate tooling is becoming essential infrastructure to watch and adopt.

• OpenAI acquired Promptfoo, a security-focused testing framework for AI agents. For builders deploying autonomous agents, this signals that adversarial testing and prompt security are no longer optional; they're becoming standard practice.

• Anthropic sued the Defense Department over a supply-chain risk designation while simultaneously navigating controversy over its Pentagon deal, which prompted OpenAI hardware exec Caitlin Kalinowski to resign in protest. The politics around government AI contracts are heating up fast, and it's worth understanding how these deals may shape which models and APIs remain openly available to builders.

• Tencent's Penguin-VL paper explores pushing vision-language models to their efficiency limits using LLM-based vision encoders. Smaller, faster multimodal models mean cheaper, more practical vision capabilities for real-world automation pipelines.

• A new paper from OpenAI found that reasoning models struggle to control their own chains of thought, producing reasoning steps that don't reliably reflect their internal process. If you're building systems that depend on model reasoning being transparent or auditable, this is a meaningful limitation to design around.

What's on your radar this week? Are you thinking about agent security, multimodal tools, or something else entirely? Drop it in the comments.