The problem: my Wazuh SIEM fires hundreds of alerts a day across the home lab (Proxmox cluster, ~15 LXCs, all the usual attack surface). Most are noise. Some are real. Reading every one is not happening.
So I built an n8n workflow that pipes every high-severity Wazuh alert through Claude Haiku, enriches it with infrastructure context, and pushes a real risk assessment to Slack, complete with shell commands to investigate. Per-alert cost is about $0.001.
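The cost figure is simple arithmetic. A sketch, assuming Haiku-class pricing of roughly $0.25 per million input tokens and $1.25 per million output tokens, and a typical alert costing ~2,000 prompt tokens and ~400 completion tokens (the prices and token counts here are my assumptions, not measured values):

```python
# Rough per-alert cost model. Prices and token counts are assumptions,
# not measured values -- check your provider's current pricing.
INPUT_PRICE_PER_MTOK = 0.25    # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 1.25   # USD per 1M output tokens (assumed)

def per_alert_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """USD cost of one alert analysis under the assumed pricing."""
    return (prompt_tokens * INPUT_PRICE_PER_MTOK
            + completion_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

cost = per_alert_cost(2_000, 400)  # ~0.001 USD per alert
```

At that rate, even a few hundred alerts a day costs well under a dollar a month.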
Live demo in the video: I had Claude fire a brute-force SSH attack against one of my Proxmox nodes. Wazuh detected it (rule 5712, level 10). The workflow analyzed it. The Slack message correctly flagged it as "likely false positive — internal source, zero-trust network, no external exposure" and gave remediation steps anyway.
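For reference, the webhook receives a JSON alert roughly shaped like this. It's trimmed to the fields the workflow cares about; field names follow Wazuh's JSON alert format, but treat the exact shape (and the host/IP values, which are made up) as assumptions and verify against your own `alerts.json`:

```python
# Trimmed sketch of a Wazuh alert as delivered to the webhook.
# Verify field names against your own /var/ossec/logs/alerts/alerts.json;
# host names and IPs below are hypothetical.
alert = {
    "rule": {
        "id": "5712",
        "level": 10,
        "description": "sshd: brute force trying to get access to the system.",
    },
    "agent": {"name": "proxmox-node1"},   # hypothetical host name
    "data": {"srcip": "192.168.1.50"},    # internal source, as in the demo
    "full_log": "sshd[1234]: Failed password for root from 192.168.1.50 ...",
}
```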
The piece that makes it actually work is the infrastructure context block. You tell the LLM what's normal in your network. Without that, it just summarizes the alert at you.
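A minimal sketch of what that looks like in practice. The context text and the helper function are illustrative, not the workflow's actual node code; the point is that site-specific facts go in front of the alert:

```python
# Illustrative prompt assembly: prepend site-specific context so the
# model can judge severity instead of just restating the alert.
INFRA_CONTEXT = """\
Network: 192.168.1.0/24 is the internal lab; nothing is exposed to the internet.
Hosts: proxmox-node1..3 run ~15 LXCs; SSH between nodes is routine.
Policy: treat anything from outside RFC1918 ranges as high risk.
"""  # hypothetical example -- describe YOUR environment here

def build_prompt(alert_summary: str) -> str:
    return (
        "You are a security analyst. Use the infrastructure context below "
        "to assess whether this alert is a real threat or expected behavior.\n\n"
        f"## Infrastructure context\n{INFRA_CONTEXT}\n"
        f"## Alert\n{alert_summary}\n\n"
        "Respond with: verdict, reasoning, and shell commands to investigate."
    )

prompt = build_prompt("Rule 5712 (level 10): SSH brute force from 192.168.1.50")
```

Swap in whatever is actually normal for your network; the better this block describes your baseline, the fewer false alarms survive the triage.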
What's in the workflow:
- Webhook listener (Wazuh fires it)
- Config node (alert thresholds, model choice, timezone)
- Infrastructure context block ← this is the secret
- LLM analysis with provider swap (Anthropic, OpenAI, or local Ollama)
- Formatted notification (Slack, Telegram, Teams, anything with a webhook)
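Stripped of the n8n plumbing, the notification step reduces to roughly this. Function and field names are mine, not the workflow's; the payload uses the standard Slack incoming-webhook JSON shape (`{"text": ...}`):

```python
import json

def format_slack_message(alert: dict, analysis: str) -> dict:
    """Build a Slack incoming-webhook payload from a Wazuh alert plus
    the LLM's assessment. Field names here are illustrative."""
    rule = alert.get("rule", {})
    agent = alert.get("agent", {}).get("name", "unknown")
    header = (f":rotating_light: Wazuh rule {rule.get('id')} "
              f"(level {rule.get('level')}) on {agent}")
    return {"text": f"{header}\n{analysis}"}

msg = format_slack_message(
    {"rule": {"id": "5712", "level": 10}, "agent": {"name": "proxmox-node1"}},
    "Likely false positive -- internal source, zero-trust network.",
)
payload = json.dumps(msg)  # POST this body to the Slack webhook URL
```

The same dict, reshaped, works for Telegram or Teams; only the payload schema changes.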
Three things to set up: an LLM API key, a Slack webhook, and the Wazuh webhook integration. Everything else is drag-and-drop, with inline docs in the workflow JSON.
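On the Wazuh side, the webhook piece is an `<integration>` block in `ossec.conf`. A sketch, assuming a custom integration script; the name and URL are placeholders, so check the Wazuh integration docs for your version:

```xml
<!-- Sketch of a Wazuh webhook integration in /var/ossec/etc/ossec.conf.
     Name and URL are placeholders; a matching custom-* script must exist
     in /var/ossec/integrations/. -->
<integration>
  <name>custom-n8n</name>
  <hook_url>https://n8n.example.lab/webhook/wazuh-alerts</hook_url>
  <level>10</level>                <!-- only forward level 10+ alerts -->
  <alert_format>json</alert_format>
</integration>
```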
The repo also includes an `AI-SETUP-PROMPT.md`: paste it into Claude or ChatGPT and the model will interview you through deployment, including building the infrastructure-context block for your specific environment.
If you've built something similar or want me to cover a specific security workflow next (phishing triage, EDR enrichment, etc.), drop it in the comments.