From SIP to AI: A Real Call Finally Worked
Sharing a small but exciting milestone from my learning journey 🚀

Over the last few weeks, I've been deep into voice infrastructure and SIP, and I finally have a full working setup:

📞 Local Phone → SIP (from SignalWire) → FreeSWITCH → Voice Agent → Response back to the caller

The FreeSWITCH server is running on a Debian server on DigitalOcean, and everything is now talking to everything else smoothly — SIP, RTP, real-time audio, and AI responses.

I'm currently working with a client, and initially we're aiming to scale this setup to handle ~100 concurrent calls, which is pushing me to really understand:
- SIP call flows
- Audio streaming
- Server performance & scaling
- Latency trade-offs vs managed platforms

Honestly, this stuff is challenging but insanely exciting. Every time a real phone call hits the server and the agent responds correctly, it feels like magic — but the earned kind 😄

Just wanted to share this win and keep building. If anyone here is working on voice agents, SIP, or FreeSWITCH — would love to connect and exchange notes 🤝
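For anyone wiring up something similar: the "FreeSWITCH → Voice Agent" hop in a flow like this is often done with a dialplan extension that answers the call and hands its audio/events to an external process over FreeSWITCH's event socket. The sketch below is illustrative, not the poster's actual config — the extension name, number pattern, and agent port are all assumptions.

```xml
<!-- Hypothetical dialplan extension (e.g. in conf/dialplan/public/).
     Routes any inbound DID from the SIP trunk to an external agent. -->
<extension name="inbound_voice_agent">
  <condition field="destination_number" expression="^\+?\d+$">
    <!-- Answer the call so RTP media starts flowing -->
    <action application="answer"/>
    <!-- Hand the call to an external voice agent listening on an
         event-socket outbound connection (port is illustrative) -->
    <action application="socket" data="127.0.0.1:8084 async full"/>
  </condition>
</extension>
```

The `async full` flags let the external agent control the call asynchronously and receive all events for it, which is the usual mode for a voice-agent bridge.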
Built a Full AI Pipeline on One Laptop — Voice Is Next
Hey everyone — been building local-first AI infrastructure, and this community is exactly my vibe.

I run a full AI pipeline from a single laptop (RTX 5080 16GB, 32GB DDR5) — Ollama in Docker with GPU passthrough, PostgreSQL, Redis. The philosophy: 80% of the AI workload runs on free local models; only the 20% that needs frontier reasoning hits a cloud API. Cost per pipeline run dropped from $8-15 to $0.15-0.40.

I've shipped a few tools with this setup — market scanners, a knowledge retention engine with RAG, and a live SaaS API product. All from the same machine.

What brought me here: I want to add a voice layer. Seeing folks run Pipecat with local STT/TTS on consumer GPUs is exactly the direction I'm heading. My Ollama stack already handles LLM inference — pairing that with local Whisper or the new NVIDIA Nemotron STT model on the same GPU seems like the natural next step.

A few things from the recent threads caught my eye:
- @Kwindla's sub-500ms voice-to-voice on an RTX 5090 with Nemotron — curious how that scales down to a 5080 with 16GB VRAM when the LLM is also loaded
- @Jin Park's custom orchestration engine replacing Vapi/Retell — that modular approach maps directly to how I route pipeline stages between local and cloud models
- The latency discussion around local vs cloud STT — has anyone benchmarked Whisper locally against Deepgram for voice agent round-trip times?

Looking forward to learning from this group and sharing what I build.
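The 80/20 local-vs-cloud split described above boils down to a routing decision per task. Here's a minimal sketch of that idea — the complexity heuristic, threshold, and model names are all illustrative assumptions, not the poster's actual implementation:

```python
# Local-first routing sketch: cheap tasks go to a free local model,
# only high-complexity tasks escalate to a paid frontier API.

def estimate_complexity(task: str) -> float:
    """Crude stand-in heuristic: longer, question-dense tasks are
    treated as needing more reasoning. Capped at 1.0."""
    return min(1.0, len(task) / 2000 + task.count("?") * 0.1)

def route(task: str, threshold: float = 0.8) -> str:
    """Return a target for this task: local Ollama model or cloud API.
    Both target strings are hypothetical labels."""
    if estimate_complexity(task) < threshold:
        return "local:ollama/llama3"   # hypothetical local model tag
    return "cloud:frontier-api"        # hypothetical cloud endpoint

print(route("Summarize this log line."))  # → local:ollama/llama3
```

In practice the heuristic would be replaced by something task-aware (stage of the pipeline, token budget, required tool use), but the shape — score, threshold, route — stays the same.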
NVIDIA PersonaPlex?
Has anyone integrated it or built an agent on this yet? I would LOVE to test it out.
Llama 4 Scout
Got my adventure with LiveKit underway last week. After testing @John George 's demo, my demo felt underwhelming. I tried the Gemini Live API, but my agent still felt sluggish, and the data supported that. I had tried switching out different TTS providers and voices the day before, so yesterday I focused on LLMs.

I found Llama 4 Scout and set it up with the Groq API. Admittedly, I only made one call, but what a difference! A lot more testing to do, but finally, there's hope on the horizon. Latency is such a killer (those Cal.com tool calls still take forever). Anyone have any thoughts on Llama 4 Scout?
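When comparing LLM providers for perceived sluggishness, the number that usually matters for voice agents is time-to-first-token (TTFT), since the TTS can start speaking as soon as the first tokens arrive. A tiny provider-agnostic harness for measuring it might look like this — everything here is an illustrative sketch, not LiveKit's or Groq's API:

```python
import time
from typing import Iterable, Tuple

def measure_ttft(token_stream: Iterable[str]) -> Tuple[str, float]:
    """Return the first token and the time-to-first-token (seconds)
    for any streaming LLM response, regardless of provider."""
    start = time.perf_counter()
    first_token = next(iter(token_stream))
    return first_token, time.perf_counter() - start

# Stand-in for a real provider stream (e.g. a Groq or Gemini response):
def fake_stream():
    time.sleep(0.05)   # simulated network + inference delay
    yield "Hello"
    yield ", world"

token, ttft = measure_ttft(fake_stream())
print(f"first token {token!r} after {ttft * 1000:.0f} ms")
```

Running the same harness against each provider's stream gives an apples-to-apples TTFT comparison, which is more telling than end-to-end call feel when tool calls (like Cal.com) dominate the tail latency.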
Anthropic is giving away $50 in free credits.
They just launched Opus 4.6, their most powerful AI model to date. And they want you to try it.

Here's how to claim yours:
→ Open Claude
→ Head to Settings > Usage
→ Hit "Claim"
→ Done. $50 in extra usage unlocked.

https://www.linkedin.com/posts/achyuth-kumar-pasnoor_anthropic-is-giving-away-50-in-free-credits-activity-7426576876795596800-QODr?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEyNg5QB0WyASRpx7ykkYjJtIHIMqk_PcKo
Open Source Voice AI Community
skool.com/open-source-voice-ai-community-6088
Voice AI made open: Learn to build voice agents with Livekit & Pipecat and uncover what the closed platforms are hiding.