Deploy Enterprise n8n in 30 Minutes (Queue Mode + 3 Workers + Task Runners + Backups)
Want a REAL production-ready n8n deployment? In this video we break down the n8n-aiwithapex infrastructure stack and why it's a massive upgrade over a "basic docker-compose n8n" setup. You'll see how this project implements a full queue-mode architecture with:

- n8n-main (Editor/API) separated from execution
- Redis as the queue broker
- Multiple n8n workers for horizontal scaling
- External task runners (isolated JS/Python execution) for safer Code node workloads
- PostgreSQL persistence with tuning + initialization
- ngrok for quick secure access in WSL2/local dev

We'll also cover the "Ops" side that most tutorials ignore:

- Comprehensive backups (Postgres + Redis + n8n exports + env backups)
- Offsite sync + optional GPG encryption
- Health checks, monitoring, queue depth, and log management scripts
- Restore + disaster recovery testing so you can recover fast
- Dual deployment paths: WSL2 for local + Coolify for cloud/production

If you're building automations for clients, running n8n for a team, or scaling AI workflows, this architecture is the blueprint: separation of concerns, isolation, scaling, and recoverability.

YouTube video: https://youtu.be/HzgrId0kgfw?si=0bzdvDgJW4dLApfi
Repo: https://github.com/moshehbenavraham/n8n-aiwithapex
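As a taste of the monitoring side: in queue mode, pending executions sit in Redis as Bull lists, so a queue-depth check is only a few lines of Python. A minimal sketch with the redis-py client; the `bull:jobs:*` key names are an assumption about n8n's default queue naming, so verify the actual keys on your instance first (e.g. `redis-cli keys 'bull:*'`):

```python
# Hypothetical queue-depth check for an n8n queue-mode stack.
# Assumes n8n's Bull queue lives under "bull:jobs:*" in Redis --
# verify the real key prefix on your instance before alerting on this.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
waiting = r.llen("bull:jobs:wait")    # executions waiting for a worker
active = r.llen("bull:jobs:active")   # executions currently running
print(f"waiting={waiting} active={active}")
```

If `waiting` keeps growing while `active` stays flat, you need more workers (or your workers are stuck).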
New NVIDIA open model for voice agents: Nemotron Speech ASR
NVIDIA released a new open-source speech-to-text model designed from the ground up for low-latency use cases like voice agents. This is part of NVIDIA's new focus on open models, which I'm excited about. The new Nemotron family includes STT and TTS models, specialized models like guardrail models, and LLMs. And they are completely open: open weights, training code, training datasets, and inference tooling.

This new STT model is very fast. Here's a voice agent running locally on my RTX 5090 with sub-500ms voice-to-voice inference.

Technical write-up and link to GitHub repo: https://www.daily.co/blog/building-voice-agents-with-nvidia-open-models/

Also, Twitter and LinkedIn if either of those platforms is your thing. (I post a lot about voice agents on both.)
https://x.com/kwindla/status/2008601714392514722
https://www.linkedin.com/posts/kwkramer_nvidia-just-released-a-new-open-source-transcription-activity-7414368349905821696-ufuy/
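If you just want to try the checkpoint outside a full voice-agent pipeline, NeMo can load and run an open ASR model in a few lines. A minimal sketch, assuming `nemo_toolkit` is installed; the model id below is a placeholder, so grab the actual Nemotron Speech ASR checkpoint name from the write-up above:

```python
# Minimal offline transcription sketch using NVIDIA NeMo.
# NOTE: the model id is a placeholder -- substitute the real
# Nemotron Speech ASR checkpoint name from the blog/repo above.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/nemotron-speech-asr-placeholder"
)

# Transcribe a local 16 kHz mono WAV file.
transcripts = asr_model.transcribe(["utterance.wav"])
print(transcripts[0])
```

For the sub-500ms voice-to-voice numbers you'll want the streaming setup from the blog post, not this offline call.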
Best Observability Tools for Voice AI Frameworks?
What observability tools are others using with Pipecat or similar voice AI frameworks? I've built a voice agent using Pipecat and currently track basic metrics (call duration, sentiment, summary, transcripts) in a custom dashboard. It goes to production tomorrow, and the problem I expect to face is debugging: when errors occur, my current logging approach produces massive log files that are nearly impossible to analyze efficiently when tracking down issues.
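One cheap first step, short of a full observability stack, is structured logging with a per-call correlation id: one JSON object per line, every event tagged with the call it belongs to, so a single call's trace can be pulled out of a huge log file with grep or jq. A minimal stdlib-only sketch (the field names and `call_id` convention are illustrative, not Pipecat APIs):

```python
# Structured JSON logging with a per-call correlation id (stdlib only).
import json
import logging
import time
import uuid

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "msg": record.getMessage(),
            "call_id": getattr(record, "call_id", None),  # correlation id
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("voice_agent")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Tag every event in a call with the same id, then filter later with
# e.g.: grep '"call_id": "<id>"' agent.log | jq .
call_id = str(uuid.uuid4())
logger.info("call started", extra={"call_id": call_id})
logger.info("stt chunk transcribed", extra={"call_id": call_id})
```

Whatever hosted tracing backend you end up on, structured logs like these make the export and the debugging a lot less painful than grepping free-form text.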
Musings about Vibe Coding, Pipecat, LiveKit and more
So, over the past few weeks I've been neck-deep in working with Pipecat, LiveKit, and vibe coding. Mainly, I wanted to see what kind of mileage I could get from vibe-coding tools, and what better way to test them than to build a Pipecat/LiveKit implementation? I decided to examine 3 primary tools:

- Claude Code - using Sonnet 3.5 (via CLI)
- OpenCode - Grok Code Fast 1
- Google Antigravity - using Gemini 2.5

Below are my conclusions, split into several categories.

💵 Financials:
Most expensive to use - Claude Code
Least expensive to use - OpenCode

😡 Developer Experience:
Best experience - Google Antigravity
Worst experience - Claude Code

💪 Reliability:
Most reliable - Claude Code
Least reliable - OpenCode

🚅 Performance:
Fastest planning and building - Google Antigravity
Slowest planning and building - OpenCode

So, overall there is no "one tool to rule them all" here; what I found is that each tool is really good at specific tasks. Here is what I've learned about how to leverage these tools to build something successful:

- Planning can be done with either OpenCode or Google Antigravity. Google provides free developer credits for Antigravity, and its deep-thinking and reasoning engine works very well when applied to software architecture and design.
- Backend development: either Claude Code or Google Antigravity. When coupled with proper topic sub-agents, these are really powerful tools. For some odd reason, Claude Code is far more capable at handling complex architectures, while Google Antigravity leans toward "hacker-style" coding.
- UI/UX development: without any question, OpenCode did a better job. It was far more capable at spitting out hundreds of lines of working UI/UX code, even faster than Claude. However, if it gets stuck on a specific UI component package, it may need Claude to show it the light, so pay attention to what it's doing.
- Code review, security, and privacy: without any question, Claude is the winner here, with potentially the most extensive availability of sub-agent topic experts.