Deploy Enterprise n8n in 30 Minutes (Queue Mode + 3 Workers + Task Runners + Backups)
Want a REAL production-ready n8n deployment? In this video we break down the n8n-aiwithapex infrastructure stack and why it's a massive upgrade over a "basic docker-compose n8n" setup.

You'll see how this project implements a full queue-mode architecture with:

- n8n-main (editor/API) separated from execution
- Redis as the queue broker
- Multiple n8n workers for horizontal scaling
- External task runners (isolated JS/Python execution) for safer Code node workloads
- PostgreSQL persistence with tuning + initialization
- ngrok for quick, secure access in WSL2/local dev

We'll also cover the "Ops" side that most tutorials ignore:

- Comprehensive backups (Postgres + Redis + n8n exports + env backups)
- Offsite sync + optional GPG encryption
- Health checks, monitoring, queue-depth, and log-management scripts
- Restore + disaster-recovery testing so you can recover fast
- Dual deployment paths: WSL2 for local + Coolify for cloud/production

If you're building automations for clients, running n8n for a team, or scaling AI workflows, this architecture is the blueprint: separation of concerns, isolation, scaling, and recoverability.

YouTube video: https://youtu.be/HzgrId0kgfw?si=0bzdvDgJW4dLApfi
Repo: https://github.com/moshehbenavraham/n8n-aiwithapex
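To make the queue-mode split concrete, here is a minimal compose sketch of the main/Redis/Postgres/workers topology. This is an illustrative assumption, not the repo's actual files: service names, credentials, the YAML anchor trick, and the replica count are placeholders, and the external task-runner service is omitted for brevity.

```yaml
# Minimal queue-mode sketch (hypothetical names/values, not the repo's config)
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me   # placeholder; use a secret in production
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  redis:
    image: redis:7

  n8n-main:
    image: n8nio/n8n
    environment: &n8n_env
      EXECUTIONS_MODE: queue         # main enqueues; workers execute
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
      N8N_ENCRYPTION_KEY: change-me  # must match on main and every worker
    ports:
      - "5678:5678"                  # editor/API lives only on main
    depends_on: [postgres, redis]

  n8n-worker:
    image: n8nio/n8n
    command: worker                  # worker process: no editor, just job execution
    environment: *n8n_env
    deploy:
      replicas: 3                    # the "3 workers" from the title
    depends_on: [postgres, redis]

volumes:
  pg_data:
```

The key design point: main and workers share the same database and encryption key, but only main exposes a port; scaling is then just a replica count.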
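On the Ops side, the backup idea can be sketched as a small script that writes timestamped, compressed dumps. Everything here is a hedged illustration: the `n8n-postgres` container name, the `backups/` directory, and the helper names are assumptions, not the repo's actual scripts (which also cover Redis, n8n exports, and env files).

```shell
#!/usr/bin/env sh
# Hypothetical backup sketch: timestamped Postgres dump for the n8n database.
# Container name "n8n-postgres" and the backups/ path are illustrative.

# Pure helper: build a UTC-timestamped archive name for a given prefix,
# e.g. n8n_20250101_120000.sql.gz
backup_name() {
  printf '%s_%s.sql.gz' "$1" "$(date -u +%Y%m%d_%H%M%S)"
}

# Dump the database through gzip into backups/ (only if docker is present,
# so the sketch degrades gracefully outside the deployment host).
run_backup() {
  out="$(backup_name n8n)"
  mkdir -p backups
  if command -v docker >/dev/null 2>&1; then
    docker exec n8n-postgres pg_dump -U n8n n8n | gzip > "backups/$out"
  else
    echo "docker not found; would have written backups/$out" >&2
  fi
}
```

A cron entry calling `run_backup` nightly, plus an offsite `rsync` (optionally after `gpg --encrypt`), gives the offsite + encryption step described above.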