If you've been on LinkedIn this week, you've seen the Mythos news. Anthropic is investigating unauthorized access to Claude Mythos Preview, the model they capped at about forty partners because they considered it too dangerous to release. The investigation is still ongoing.
I want to bring it here because there's a lesson in this one for us specifically — and an action you can take this week.
Here's the chain (no AI in it, except at the destination):
- Attackers poisoned a Trivy GitHub Action — a security scanner — inside LiteLLM's CI/CD pipeline. They stole credentials and pushed backdoored litellm packages to PyPI. Live for about 40 minutes. LiteLLM has 95M+ downloads.
- Mercor (an AI training startup) was one of thousands hit. Lapsus$ claims 4TB stolen via Mercor's Tailscale VPN.
- The dump included Anthropic's internal model naming conventions. A Discord group — with an Anthropic contractor in it — used them to guess the Mythos deployment endpoint. They got in on launch day.
No zero-day. No novel exploit. No model jailbreak. Just a poisoned dependency, a CI tool nobody was watching, an over-scoped contractor, and a 4TB dump that shouldn't have held those naming conventions in the first place.
Verizon's 2025 DBIR put third-party breach involvement at 30% — doubled YoY. Panorays says 85% of CISOs can't see their third-party threats. Only 22% formally vet AI tools.
We are getting excited about an AI that can find zero-days while most companies can't see what their vendors are doing on a Tuesday. The biggest risk in 2026 isn't AI capability. It's production security practices that have been broken so long we stopped flinching.
This week — pick at least one. Drop your result in comments.
1. Find LiteLLM in your stack. Open Claude Code in your repo and paste this:
"Search every package manifest, lockfile, requirements file, Dockerfile, and CI workflow in this repo for litellm. Report the version pinned (or unpinned), where it's used, which environment variables and secrets it has access to, and whether the version falls in the compromised range (1.82.7 / 1.82.8). Then list the credentials you'd need to rotate if this dependency was poisoned."
Then run it across your other repos. If you don't use LiteLLM, swap in whichever LLM gateway you do use. (If you'd rather cross-check with a script than trust a single model pass, there's a sketch below this list.)
2. Pull your contractor / non-employee IAM identities and ask the AI Quick Wins question (from Course 2): for every non-employee identity that touches AI services, when was their access last reviewed, what models and data can they reach, and how would you know if they were in a Discord channel they shouldn't be? (A starter sketch for the last-accessed part is below.)
3. Pick one CI pipeline and audit the "trusted" tooling. Trivy, a security scanner, was the entry point here. What else in your pipeline has elevated permissions because we've all been using it for years? List the actions in one job. If any one of them got popped, what's the blast radius? (A sketch for pulling that list out of a workflow file is below.)
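For step 1, here's a minimal Python sketch of the same search, useful as a cross-check on whatever the model reports. The glob patterns and the regex are assumptions on my part; extend them for your lockfile formats (poetry.lock, uv.lock, etc.), and the two compromised versions are the ones named in the prompt above.

```python
"""Minimal sketch: find litellm pins across manifests, Dockerfiles, and CI files."""
import re
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}  # the two versions from the advisory above
PATTERNS = [
    "requirements*.txt", "pyproject.toml", "Pipfile*", "setup.py",
    "Dockerfile*", ".github/workflows/*.yml", ".github/workflows/*.yaml",
]

def scan(repo: Path) -> None:
    for pattern in PATTERNS:
        for path in repo.rglob(pattern):
            if not path.is_file():
                continue
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if "litellm" not in line.lower():
                    continue
                # Crude: grab a nearby x.y(.z) version if one is pinned on the line.
                # Lockfiles deserve a real parser; this is a first pass.
                pin = re.search(r"litellm\D{0,20}?(\d+\.\d+(?:\.\d+)?)", line, re.I)
                version = pin.group(1) if pin else "UNPINNED"
                flag = "  <-- COMPROMISED" if version in COMPROMISED else ""
                print(f"{path}:{lineno}: {line.strip()} [{version}]{flag}")

if __name__ == "__main__":
    scan(Path("."))  # run from the repo root; repeat across your other repos
```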
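For step 2, a starter sketch under some loud assumptions: you're on AWS IAM users rather than SSO, contractors carry a hypothetical employment=contractor tag (your convention will differ, and with Okta or Entra the idea translates but the API doesn't), and "AI services" means Bedrock and SageMaker. It uses IAM's service-last-accessed report to answer the "when did they last touch an AI service" part. The Discord part you'll have to answer yourselves.

```python
"""Sketch: last-accessed report for contractor identities on AI services."""
import time
import boto3

AI_SERVICES = {"bedrock", "sagemaker"}  # IAM service namespaces to check
iam = boto3.client("iam")

def is_contractor(user_name: str) -> bool:
    # Hypothetical convention: an employment=contractor tag on the user.
    tags = iam.list_user_tags(UserName=user_name)["Tags"]
    return any(t["Key"] == "employment" and t["Value"] == "contractor" for t in tags)

def last_ai_access(user_arn: str):
    # Kick off IAM's service-last-accessed report and poll until it finishes.
    job_id = iam.generate_service_last_accessed_details(Arn=user_arn)["JobId"]
    details = iam.get_service_last_accessed_details(JobId=job_id)
    while details["JobStatus"] == "IN_PROGRESS":
        time.sleep(2)
        details = iam.get_service_last_accessed_details(JobId=job_id)
    for svc in details.get("ServicesLastAccessed", []):
        if svc["ServiceNamespace"] in AI_SERVICES:
            # LastAuthenticated is absent if the service was never touched.
            yield svc["ServiceNamespace"], svc.get("LastAuthenticated", "never")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        if not is_contractor(user["UserName"]):
            continue
        for service, last in last_ai_access(user["Arn"]):
            print(f'{user["UserName"]}: {service} last accessed {last}')
```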
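And for step 3, this sketch lists every `uses:` in one GitHub Actions workflow and flags anything not pinned to a full commit SHA. Tags like @v4 and branches like @main are mutable, and a mutable ref is exactly how a poisoned action like the Trivy one rides in. The default workflow path is a placeholder; pass your own.

```python
"""Sketch: list the actions one workflow pulls in, flagging mutable refs."""
import re
import sys
import yaml  # pip install pyyaml

FULL_SHA = re.compile(r"^[0-9a-f]{40}$")  # the bar for "actually pinned"

def audit(workflow_path: str) -> None:
    with open(workflow_path) as f:
        workflow = yaml.safe_load(f)
    for job_name, job in workflow.get("jobs", {}).items():
        for step in job.get("steps", []):
            uses = step.get("uses")
            if not uses or uses.startswith("./"):
                continue  # plain run: step, or a local action in this repo
            action, _, ref = uses.partition("@")
            verdict = "pinned to SHA" if FULL_SHA.match(ref) else "MUTABLE REF"
            print(f"{job_name}: {action} @ {ref or '(none)'} [{verdict}]")

if __name__ == "__main__":
    # Placeholder default path; point it at any workflow file you like.
    audit(sys.argv[1] if len(sys.argv) > 1 else ".github/workflows/ci.yml")
```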
Discuss in comments:
- Do you have LiteLLM in your stack? (Or: which LLM gateway?)
- Have you ever done a contractor access review specifically for AI services?
- What's the "trusted but unmonitored" tool in your CI that worries you most?
Anthropic will finish investigating. The news will move on. This pattern will not.
See you in the comments.
— Josh