A Guide to Scaling your N8N
Setting up scalable hosting for my free n8n hosting (https://aititus.com/free-n8n) has been a lot of fun, and tricky at times!
Here are some tips and tricks that have helped me, and I hope they help you when the time comes to scale your own n8n setup!
Whether you're a seasoned developer or just getting started, this guide is packed with actionable insights to help you!
Key Teachings: The "What" and "Why" of Scaling
The scaling overview explains scaling as enabling a system to handle more work. There are two types: vertical scaling, which means beefing up a single server with more CPU, RAM, and so on, and horizontal scaling, which spreads the workload across multiple machines and is where n8n's latest capabilities shine.
The architecture breakdown covers the key components of n8n. The Editor Interface is where you design workflows. The Internal API saves your work to the database. The Webhook Registration Service manages how n8n listens for external events. Initiators decide when workflows run; they can be webhooks, pollers, or time-based triggers. Workers execute the non-trigger nodes in your workflows, and you can run multiple workers to distribute the load.
The power duo for scaling is Postgres and Redis. Postgres is recommended for large-scale deployments, ideally version 13 or newer. Redis acts as the message broker between initiators and workers.
Actionable Items & Strategies: Your "How-To" for Scaling
Start by creating a Docker network so all your components—main instance, workers, database—can talk to each other. Then run Redis and Postgres containers to set up your message broker and database.
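As a rough sketch, that first step can look like this (container names, the volume name, and the password are placeholders I've made up, so adjust them to your setup):

```shell
# Create a shared network so every n8n component can reach the others by name
docker network create n8n-net

# Redis: the message broker between initiators and workers
docker run -d --name n8n-redis --network n8n-net redis:7

# Postgres (version 13 or newer): stores workflows and execution data
docker run -d --name n8n-postgres --network n8n-net \
  -e POSTGRES_USER=n8n \
  -e POSTGRES_PASSWORD=change-me \
  -e POSTGRES_DB=n8n \
  -v n8n-pgdata:/var/lib/postgresql/data \
  postgres:16
```

The named volume (`n8n-pgdata`) keeps your workflow data around even if the Postgres container is recreated.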
Next, run the main n8n instance. This will serve the editor UI and manage workflow initiation but delegate execution to the workers. You'll need an .env file with the database details, the execution mode set to queue, and the Redis connection.
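A minimal .env for queue mode might look like the following (variable names follow n8n's documented environment variables; the hostnames match the placeholder container names above, and the password and key are obviously placeholders):

```shell
# .env — shared by the main instance, workers, and webhook processors
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=n8n-postgres
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=change-me

# Queue mode: hand executions off to workers via Redis
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=n8n-redis

# Must be identical across ALL instances so credentials can be decrypted
N8N_ENCRYPTION_KEY=pick-one-long-random-key
```

With that in place, the main instance is just:

```shell
docker run -d --name n8n-main --network n8n-net --env-file .env \
  -p 5678:5678 n8nio/n8n
```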
Then, scale your workers. Spin up multiple worker processes or containers—each will listen for jobs on Redis, execute them, and update the database.
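Assuming the shared .env and network sketched above, adding workers is just running the same image with the `worker` subcommand, as many times as you need:

```shell
# Each worker connects to Redis, pulls jobs, executes them,
# and writes results back to Postgres
docker run -d --name n8n-worker-1 --network n8n-net --env-file .env n8nio/n8n worker
docker run -d --name n8n-worker-2 --network n8n-net --env-file .env n8nio/n8n worker
```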
For high-traffic webhooks, scale your webhook processes. These are lightweight and push execution requests to Redis, offloading the main instance.
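Webhook processors follow the same pattern, again assuming the placeholder names from earlier; you'd typically put a load balancer in front of them and route your webhook URLs there:

```shell
# A dedicated webhook processor: receives HTTP calls and
# pushes the resulting jobs onto the Redis queue
docker run -d --name n8n-webhook-1 --network n8n-net --env-file .env n8nio/n8n webhook
```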
Pro-Tips
Use a shared encryption key for all instances—main, workers, and webhooks—so they can access stored credentials.
Ensure database and Redis accessibility across all processes.
Keep environment variables consistent using a shared .env file or equivalent.
Tweak worker concurrency to control how many jobs each worker processes at once.
Benefit from self-healing—if a worker crashes, the job stays in the queue for another to pick up.
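For the concurrency tip specifically, the worker command accepts a flag that caps how many jobs one worker runs at a time (the container name is again a placeholder):

```shell
# Cap this worker at 5 simultaneous jobs; lower it for heavy
# workflows, raise it for lots of small, quick ones
docker run -d --name n8n-worker-1 --network n8n-net --env-file .env \
  n8nio/n8n worker --concurrency=5
```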
Let me know your thoughts in the comments below, and don't forget to bookmark this post for future reference!
Check out the full n8n live stream at https://www.youtube.com/watch?v=PnoE0xV8BX8
Titus Blair
AI Automation Society
skool.com/ai-automation-society