Anthropic is launching a research preview of Claude Code Security, an AI tool designed to think like a human security researcher to find and fix deep-seated vulnerabilities that traditional scanners usually miss.
- Claude Opus 4.6 recently uncovered over 500 vulnerabilities in open-source code that had been hidden for decades.
- The tool moves beyond simple rule-based scanning by reasoning through business logic and data flows, much like a human security analyst would.
- Every finding goes through a multi-stage verification process to filter out false positives and assign a severity rating.
- It suggests targeted software patches for human review, so humans stay in control while the AI does the heavy lifting.
- This shifts the advantage to defenders, allowing teams to patch weaknesses before attackers can use AI to exploit them.
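For a sense of what "reasoning through business logic" means in practice, here's a hypothetical sketch (not from Anthropic's announcement, and the function names are invented): an insecure direct object reference, a class of bug that pattern-based scanners rarely flag because every individual line looks safe in isolation. Only by following the data flow from user input to the record lookup does the missing ownership check become visible.

```python
# Hypothetical example of a business-logic flaw (insecure direct object
# reference). Each line matches no "dangerous" syntactic pattern, so a
# rule-based scanner has nothing to flag -- the bug lives in the logic.

INVOICES = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob", "total": 75},
}

def get_invoice_vulnerable(current_user: str, invoice_id: int) -> dict:
    # BUG: any authenticated user can read any invoice by guessing its ID;
    # current_user is accepted but never checked against the record.
    return INVOICES[invoice_id]

def get_invoice_fixed(current_user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]
    # The fix is an authorization check tied to a business rule (ownership),
    # which requires understanding who is allowed to see what -- not a
    # pattern a signature-based tool can match.
    if invoice["owner"] != current_user:
        raise PermissionError("not your invoice")
    return invoice
```

The point of the example: finding this requires connecting the caller's identity to the record being fetched, which is exactly the kind of cross-function reasoning the announcement claims the tool does.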
Do you guys think AI-driven security will eventually make traditional manual code audits obsolete?