The Mythos breach has no AI in it. Here's what to do this week.
If you've been on LinkedIn this week, you've seen the Mythos news. Anthropic is investigating unauthorized access to Claude Mythos Preview, the model they capped at about forty partners because they considered it too dangerous to release. The investigation is still ongoing. I want to bring it here because there's a lesson in this one for us specifically, and an action you can take this week.

Here's the chain (no AI in it, except at the destination):

1. Attackers poisoned a Trivy GitHub Action (a security scanner) inside LiteLLM's CI/CD pipeline. They stole credentials and pushed backdoored litellm packages to PyPI. The packages were live for about 40 minutes. LiteLLM has 95M+ downloads.
2. Mercor (an AI training startup) was one of thousands hit. Lapsus$ claims 4TB stolen via Mercor's Tailscale VPN.
3. The dump included Anthropic's internal model naming conventions. A Discord group with an Anthropic contractor in it used them to guess the Mythos deployment endpoint. They got in on launch day.

No zero-day. No novel exploit. No model jailbreak. Just a poisoned dependency, a CI tool nobody was watching, an over-scoped contractor, and a 4TB dump that shouldn't have held those naming conventions in the first place.

Verizon's 2025 DBIR put third-party involvement in breaches at 30%, doubled year over year. Panorays says 85% of CISOs can't see their third-party threats, and only 22% formally vet AI tools. We are getting excited about an AI that can find zero-days while most companies can't see what their vendors are doing on a Tuesday.

The biggest risk in 2026 isn't AI capability. It's production security practices that have been broken so long we stopped flinching.

This week, pick at least one. Drop your result in the comments.

1. Find LiteLLM in your stack. Open Claude Code in your repo and paste this: "Search every package manifest, lockfile, requirements file, Dockerfile, and CI workflow in this repo for litellm. Report the version pinned (or unpinned), where it's used, which environment variables and secrets it has access to, and whether the version falls in the compromised range (1.82.7 / 1.82.8). Then list the credentials you'd need to rotate if this dependency were poisoned." If you'd rather script the search than prompt it, there's a sketch after this post.
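For the script-inclined, here's a minimal sketch of that first drill. It walks a repo for common manifest, lockfile, Dockerfile, and workflow filenames, regex-matches litellm references, and flags the versions named above (1.82.7 / 1.82.8). The filename patterns and the regex are my assumptions and won't catch everything (structured lockfiles that split name and version will show as UNPINNED), and it can't do the secrets-and-rotation analysis; that's where Claude Code, or you, still comes in.

```python
#!/usr/bin/env python3
"""Rough sketch: find litellm references in a repo and flag the
compromised versions named in the post (1.82.7 / 1.82.8).
Filename patterns and the regex are assumptions, not exhaustive."""
import re
import sys
from pathlib import Path

# Files where a litellm dependency commonly shows up (assumed list).
CANDIDATES = (
    "requirements*.txt", "pyproject.toml", "setup.py", "setup.cfg",
    "Pipfile", "Pipfile.lock", "poetry.lock", "uv.lock",
    "Dockerfile*", ".github/workflows/*.yml", ".github/workflows/*.yaml",
)
COMPROMISED = {"1.82.7", "1.82.8"}
# Matches e.g. litellm==1.82.7, "litellm>=1.80", or a bare litellm
PATTERN = re.compile(r'litellm\D{0,4}(\d+\.\d+(?:\.\d+)?)?', re.IGNORECASE)

def scan(repo: Path) -> None:
    hits = 0
    for pattern in CANDIDATES:
        for path in repo.rglob(pattern):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), 1):
                match = PATTERN.search(line)
                if not match:
                    continue
                hits += 1
                version = match.group(1) or "UNPINNED"
                flag = "  <-- COMPROMISED RANGE" if version in COMPROMISED else ""
                print(f"{path}:{lineno}: {line.strip()}  [{version}]{flag}")
    if not hits:
        print("No litellm references found in the scanned file patterns.")

if __name__ == "__main__":
    scan(Path(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Run it from (or point it at) the repo root, e.g. `python find_litellm.py .`, then hand the flagged files to the Claude Code prompt above for the credential analysis.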
Walk into your next security team meeting with something real 👇
Vercel got breached this week. The initial access wasn't even at Vercel; it was at one of their vendors (Context.ai). An employee there got hit with Lumma Stealer malware, attackers grabbed their Google Workspace OAuth tokens, and pivoted straight into Vercel's internals. Two months of dwell time. Customer environment variables exposed. ShinyHunters is now asking $2M for the data. No exploit. No zero-day. Just an OAuth grant nobody was watching. Read the story here.

Here's the thing: your company almost certainly has the same exposure right now. Every AI tool your coworkers have connected to Workspace or M365 is a non-human identity with a scope attached, an account you can't train, fire, or put behind MFA. Most security teams have never taken a hard look at that inventory. Not because they don't care, but because nobody's been asking the question yet. That's the opening.

This is an opportunity to bring the story to your security lead and say: "I saw what happened to Vercel. I want to make sure we're not exposed the same way. Can I run a quick review?" That's how you get pulled into AI security work at your current job: by spotting the thing before someone asks you to.

The drill (30 min, no budget, high visibility):

1. Open Google Workspace or M365 admin → Security → third-party / connected apps (if you'd rather pull this via API, see the sketch after this post)
2. Export or screenshot the list, sorted by how broad each app's access is
3. Flag the three with the widest scopes and note: who approved it, when it was last used, and whether anyone still needs it
4. Write it up as a one-page brief. Reference the Vercel → Context.ai → OAuth pivot story so leadership understands why you looked.

That one page is the deliverable. Send it to your security lead, your manager, or drop it in your team Slack. It doesn't matter if the findings are boring; the act of looking is the value. You just demonstrated threat awareness, business context, and initiative in a single artifact.
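If your org is on Google Workspace and you can get admin API access, step 1 of the drill can be pulled programmatically instead of screenshotted. Here's a rough sketch using the Admin SDK Directory API's tokens endpoint, which lists the third-party OAuth grants on each user's account. It assumes a service account with domain-wide delegation; the scopes, key file path, and admin email below are placeholders you'd swap for your own, and the admin console view from the drill remains the authoritative inventory.

```python
"""Sketch: inventory third-party OAuth grants in Google Workspace.
Assumes a service account with domain-wide delegation; the key file
path and ADMIN_EMAIL are placeholders."""
from collections import defaultdict

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.security",
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
]
ADMIN_EMAIL = "admin@example.com"  # placeholder: a super-admin to impersonate

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject(ADMIN_EMAIL)
directory = build("admin", "directory_v1", credentials=creds)

# Walk every user and collect the OAuth tokens (app grants) on each account.
apps = defaultdict(lambda: {"scopes": set(), "users": set()})
page_token = None
while True:
    resp = directory.users().list(
        customer="my_customer", maxResults=100, pageToken=page_token
    ).execute()
    for user in resp.get("users", []):
        email = user["primaryEmail"]
        tokens = directory.tokens().list(userKey=email).execute()
        for tok in tokens.get("items", []):
            app = apps[tok.get("displayText") or tok["clientId"]]
            app["scopes"].update(tok.get("scopes", []))
            app["users"].add(email)
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

# Widest-scoped apps first: these are the candidates for the brief.
for name, info in sorted(apps.items(), key=lambda kv: -len(kv[1]["scopes"])):
    print(f"{name}: {len(info['scopes'])} scopes, {len(info['users'])} users")
    for scope in sorted(info["scopes"]):
        print(f"  - {scope}")
```

Sorting by scope count is a blunt proxy for "how broad is this app's access," but it surfaces the three apps to dig into for step 3 quickly.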
Canceled my ChatGPT subscription: here's what pushed me to one platform
Quick real-talk post today. I've been a ChatGPT user for years. Paid subscriber since early on. Yesterday I canceled it.

Not because it's bad (it's not). But my workflow has fully shifted to Claude, and I realized I was paying for something I wasn't reaching for anymore.

What did it for me:

- Output quality: for security work, the responses are more precise and more useful on the first pass
- Claude Code: this moved AI from "chat assistant" to actual development tool for me. I've built things I wouldn't have attempted before
- Co-work features: working alongside an AI instead of just prompting it is a different experience

Is Claude Max more expensive? Yeah, five times more. And I still found it worth consolidating there instead of splitting between two platforms.

I'm not saying everyone should do the same. OpenAI just announced a cybersecurity-focused model that might pull me back. The point is to try multiple tools and go where you're most productive. That's what we teach here. For those of you working through the labs and courses, you're already building with Claude Code.

Curious where you've landed: what's your primary AI tool right now, and has that changed in the last 6 months? Drop it in the comments. I want to see where the community is at.