📅 Weekly Security Briefing — Feb 23 – Mar 1, 2026
Here’s your clean, high-signal roundup of the most important cybersecurity and AI-security developments from the past week. This cycle highlights AI governance failures, dramatic shifts in attacker velocity, strategic defenses for AI systems, and novel threats targeting developers. 🤖⚠️ Microsoft Copilot Privacy Bug Exposed Confidential Emails What happened: Microsoft confirmed a bug in Microsoft 365 Copilot Chat that caused the AI to summarize emails labeled as “Confidential” even when data loss prevention (DLP) and sensitivity labels were configured to block access. The issue, tracked as CW1226324, affected the Copilot work-tab chat since late January and allowed Copilot to pull content from Sent and Draft email folders that should have been blocked by security policies. 🔗 https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/ 🚀🔥 AI Arms Race: Attack ‘Breakout Time’ Falls to 29 Minutes What happened: The 2026 CrowdStrike Global Threat Report revealed a dramatic acceleration in AI-enabled cyberattacks, with total AI-leveraged adversary activity up 89 % year-over-year. The average breakout time — the interval from initial access to lateral movement — dropped to just 29 minutes, with the fastest observed breakout in under 30 seconds, as AI is employed for reconnaissance, credential theft, and evasion at machine speed. 🔗 https://www.crowdstrike.com/en-us/global-threat-report// 🧠🏗️ Microsoft Releases Threat Modeling Framework for AI Applications What happened: Microsoft published a comprehensive guide to threat modeling for generative and agentic AI systems, tailored for the unique risks of language-model-driven applications. The framework covers adversarial goals like prompt injection, training data poisoning, and model inversion, and is designed to help security architects integrate “secure-by-design” assessments early in the AI development lifecycle.