⚡️ n8n speed tips wanted — how would you cut latency here?
I’m trying to get this flow consistently fast (aiming for sub-10s per run).
Current setup:
  • Webhook → Code (preprocess)
  • AI Agent (OpenAI Chat) with Pinecone Vector Store + OpenAI Embeddings
  • BuildPayload → ElevenLabs (TTS) → rename → Supabase Storage upload
  • In parallel: Supabase DB update + Analytics (Mixpanel)
  • Final Respond to Webhook returning audio URL
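For the "in parallel" branch, here's the rough shape I mean in a Code node, with the real Supabase/Mixpanel calls stubbed as timed stand-ins (all the names here are placeholders, not the actual SDKs):

```javascript
// Sketch of the fan-out: both side-effect calls fire concurrently.
// Stand-ins simulate ~50 ms of network latency each.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Placeholder for the Supabase row update.
async function updateSupabase(row) {
  await sleep(50);
  return { table: 'runs', row };
}

// Placeholder for the Mixpanel track call.
async function trackMixpanel(event) {
  await sleep(50);
  return { event };
}

// allSettled means a failed analytics call can't block
// the Respond to Webhook node downstream.
async function sideEffects(run) {
  return Promise.allSettled([updateSupabase(run), trackMixpanel(run)]);
}
```

This drops the branch from latency-sum to roughly latency-max; curious whether Queue Mode workers beat doing it inline like this.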
Questions for the pros:
  • Best way to parallelize TTS/upload/analytics—Queue Mode + workers or sub-workflows?
  • Any wins from HTTP keep-alive, batching, or reducing hops (replace multiple Set/If with one Function)?
  • Tips for caching/pre-warming (tokens, Pinecone lookups) and passing binary data to avoid base64 bloat?
  • Anyone using streaming (LLM partials → start TTS early) to overlap steps?
  • Execution-data/DB settings you tweak to lower overhead?
Short node patterns or screenshots would be awesome—thanks! 🙏
J Patrick Magadia