
Owned by Max

Learn AI! Coding/Deploying with AI and AI Automations - bridging the gap between non-coders and coders⚡

Memberships

AI Automations by Jack

1.7k members • $77/m

Video Makers by AI Agents A-Z

104 members • $69/m

AI Skool with Ken Kai

707 members • $49/m

AI Automation Accelerator AAA

69 members • $59/m

Business AI Alliance Premium

35 members • $49/m

AI Automation Guild

46 members • $39/m

Amplify Voice AI

218 members • $37/m

AI Academy+

125 members • $49/m

AI Automation Mastery

2.5k members • $49/m

12 contributions to Open Source Voice AI Community
Deploy Enterprise n8n in 30 Minutes (Queue Mode + 3 Workers + Task Runners + Backups)
Want a REAL production-ready n8n deployment? In this video we break down the n8n-aiwithapex infrastructure stack and why it’s a massive upgrade over a “basic docker-compose n8n” setup.

You’ll see how this project implements a full queue-mode architecture with:
- n8n-main (Editor/API) separated from execution
- Redis as the queue broker
- Multiple n8n workers for horizontal scaling
- External task runners (isolated JS/Python execution) for safer Code node workloads
- PostgreSQL persistence with tuning + initialization
- ngrok for quick secure access in WSL2/local dev

We’ll also cover the “Ops” side that most tutorials ignore:
- Comprehensive backups (Postgres + Redis + n8n exports + env backups)
- Offsite sync + optional GPG encryption
- Health checks, monitoring, queue depth, and log management scripts
- Restore + disaster recovery testing so you can recover fast
- Dual deployment paths: WSL2 for local + Coolify for cloud/production

If you’re building automations for clients, running n8n for a team, or scaling AI workflows, this architecture is the blueprint: separation of concerns, isolation, scaling, and recoverability.

YouTube video: https://youtu.be/HzgrId0kgfw?si=0bzdvDgJW4dLApfi
Repo: https://github.com/moshehbenavraham/n8n-aiwithapex
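For readers who haven't seen queue mode before, here is a minimal sketch of the main/worker split the post describes, written with BullMQ, a Redis-backed job queue similar to what n8n uses internally. The queue name, payload shape, and connection details are illustrative assumptions, not code from the repo; in an actual n8n deployment the split is enabled through environment variables such as EXECUTIONS_MODE=queue and QUEUE_BULL_REDIS_HOST rather than hand-written code.

```typescript
// Minimal sketch of a queue-mode split (illustrative only, not the n8n-aiwithapex code).
// Assumptions: queue name "jobs", Redis on localhost, and a simplified payload shape.
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// "Main" side: the editor/API only enqueues executions; it never runs them itself.
const executionQueue = new Queue("jobs", { connection });

export async function enqueueExecution(workflowId: string): Promise<void> {
  await executionQueue.add("execute-workflow", { workflowId });
}

// "Worker" side: one process per container, scaled horizontally.
// Each worker pulls jobs off Redis and runs them independently of the editor.
const worker = new Worker(
  "jobs",
  async (job) => {
    console.log(`worker picked up workflow ${job.data.workflowId}`);
    // ...load the workflow from Postgres and execute it here...
  },
  { connection, concurrency: 10 },
);

worker.on("failed", (job, err) => {
  console.error(`workflow ${job?.data.workflowId} failed:`, err.message);
});
```

The point of the separation is operational: the editor stays responsive while workers can be scaled, restarted, or drained without touching the main instance.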
🗺️ Voice AI Conversation w/ 3D Maps - Full FREE App
It's not just answering questions — it's showing you.

"Hey, show me the best coffee shops near the Eiffel Tower"

And it just... does it. Pans the map. Zooms in. Highlights spots. Talks back to you about what it found. This isn't search. This is conversation.

---

What is this thing?
Voice AI Conversation with Google Maps — a voice-powered AI agent that actually controls Google Maps while you talk to it. Ask it anything:
- "What's the fastest route avoiding highways?"
- "Find me beachfront hotels under $200 in Portugal"
- "Show me where all the national parks are in Utah"
- "Zoom out and show me the whole country"

It doesn't just answer. It shows you. In real-time. While talking back.

---

Why this hits different
We've all typed into Google Maps. But talking to a map that responds, moves, and explores WITH you? That's a completely different experience. It's like having a knowledgeable local guide who also happens to control a giant interactive map on your wall.

The Open Source Repo: https://github.com/moshehbenavraham/chat_with_google_maps

---

The nerdy part (for my fellow builders) 🔧
Started with Google's AI Studio demo — cool proof of concept, but basically a toy. Nearly FULLY AUTOMATICALLY built:
- Full authentication system
- Secure backend (no exposed API keys 🔐)
- Database for saving your stuff
- AI monitoring dashboard and logging
- Local/Dev/Production deployment on Vercel
- 18,500+ lines of code

Built the entire thing using the Apex Spec System — an open-source spec-driven Plugin/Skill for Claude Code. Every feature was spec'd out first, then built systematically. Complex project, zero chaos.
🔗 github.com/moshehbenavraham/apex-spec-system

---

The future is conversational
We're moving from: Click → Type → Talk. This is just the beginning 🚀 Voice + AI + Maps is one combination. What about Voice + AI + your industry? The patterns we built here — security, monitoring, auth, database — they're reusable building blocks.
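To make the "controls Google Maps while you talk" part concrete, here is a hypothetical sketch of how a voice agent's function call could be routed to the map in the browser. The tool name, argument shape, and dispatcher are assumptions made for illustration and may differ from the repo; `panTo` and `setZoom` are real Maps JavaScript API methods, mirrored here by a small structural type so the sketch compiles on its own.

```typescript
// Hypothetical dispatcher: the voice model emits a tool call, the browser applies it to the map.
// Tool name and argument shape are illustrative, not taken from chat_with_google_maps.

// Minimal structural type matching the two Maps JavaScript API methods used below,
// so the sketch compiles without pulling in @types/google.maps.
interface MapLike {
  panTo(position: { lat: number; lng: number }): void;
  setZoom(zoom: number): void;
}

interface MoveCameraArgs {
  lat: number;
  lng: number;
  zoom?: number;
}

type ToolCall =
  | { name: "move_camera"; args: MoveCameraArgs }
  | { name: string; args: unknown };

export function handleToolCall(map: MapLike, call: ToolCall): string {
  if (call.name === "move_camera") {
    const { lat, lng, zoom } = call.args as MoveCameraArgs;
    map.panTo({ lat, lng });   // smoothly pans the camera to the requested spot
    if (zoom !== undefined) {
      map.setZoom(zoom);       // adjusts the zoom level if the model asked for one
    }
    return `Camera moved to ${lat}, ${lng}`; // fed back to the model so it can narrate the result
  }
  return `Unknown tool: ${call.name}`;
}
```

The same pattern generalizes: each "show me" capability becomes one tool the model can call, and the return string gives it something to talk about.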
7 Voice AIs. 1 App. 429 Tests. $0
If you've been wanting to add voice AI to your project but can't decide which provider to use... I felt the same way. So I built something using my own 'Apex Spec System' (https://github.com/moshehbenavraham/apex-spec-system).

It's called Voice Agent PuPuPlatter. (Yes, like the appetizer sampler at a Chinese restaurant.)

One app. Seven voice AI providers. All in one tabbed interface. Here's what's included:
→ ElevenLabs (Widget + SDK modes)
→ OpenAI Realtime API
→ xAI Grok
→ Ultravox
→ Vapi
→ Retell

You can switch between them instantly and compare the experience yourself. Each provider has:
• Real-time transcripts
• Audio visualization
• Function calling demos
• Automatic reconnection

The UI is a modern glassmorphism design. Works on mobile. Fully accessible.

If interested, I also recorded a video tutorial series showing exactly how to set up your own ElevenLabs agent from scratch (links in the Repo). 5 videos. Step by step.

Tech stack if you're curious: React 19 + TypeScript + Vite + Tailwind CSS. 429+ tests included. Docker support for easy deployment.

It's 100% free and open source.
GitHub link: https://github.com/moshehbenavraham/Voice-Agent-PuPuPlatter

Let me know which voice provider you've been eyeing!
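Since the whole point is swapping providers behind one UI, here is a hypothetical sketch of the kind of common adapter interface that makes a tabbed multi-provider app workable. The interface name and methods are assumptions for illustration, not the actual API of the Voice-Agent-PuPuPlatter codebase.

```typescript
// Hypothetical common adapter: each voice provider (ElevenLabs, OpenAI Realtime, Vapi, ...)
// is wrapped in the same small surface so the UI can switch tabs without special cases.
// Names and shapes are illustrative, not taken from the repo.
export interface VoiceProviderAdapter {
  readonly id: string;                       // e.g. "elevenlabs-sdk", "openai-realtime"
  connect(): Promise<void>;                  // open the realtime session
  disconnect(): Promise<void>;               // tear it down (also used by reconnection logic)
  onTranscript(cb: (text: string, final: boolean) => void): void; // drives the live transcript pane
  onAudioLevel(cb: (level: number) => void): void;                // drives the audio visualization
}

// The tabbed UI only ever talks to the active adapter.
export class ProviderSwitcher {
  private active?: VoiceProviderAdapter;

  constructor(private readonly adapters: Map<string, VoiceProviderAdapter>) {}

  async switchTo(id: string): Promise<void> {
    const next = this.adapters.get(id);
    if (!next) throw new Error(`No adapter registered for ${id}`);
    await this.active?.disconnect(); // cleanly leave the previous provider's session
    this.active = next;
    await next.connect();
  }
}
```

With this shape, "compare the experience yourself" reduces to registering one adapter per provider and calling `switchTo` when a tab is clicked.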
5 Months. 16 Repos. 1900+ Commits. This Is How.
Think "Spec Kit" finely tuned for startup apps... video below!
1 like • Dec '25
@Nir Simionovich The 6th of January! I just want to do some more fine-tuning before the release and hype it up a bit :)
!!! Spec-Driven AI-First Development (Claude Code, Codex, Gemini, etc)
What is Spec-Driven Dev?
Spec-driven AI development is a method where you write a detailed specification upfront (PRD/requirements/etc. - the "what" and "why" of your software), and AI coding tools then use that spec as the source of truth to generate, test, and validate the actual code in more manageable pieces - so you're steering the AI with clearer intent.

Note - @Doug Montgomery was an advocate for getting me to test these techniques, so a special shout-out is required!

What is AI-First Dev?
This is just terminology I like to use in my own little world. It's what I'd call "Vibe Coding" with much more intention, management, and realistic expectations/understanding of what current AI can actually do.

Why?
Remember the hype around "context engineering"? I like to think of context with AI as how much information the model can manage before it degrades in quality. As it happens, that is the biggest limiting factor when doing full (or nearly full) AI coding. Spec-Driven Dev is just one of many techniques to manage context, i.e. "context engineering."

How does it help with context?
Picture a destination and a choice between a car on a maze of roads/highways/gas stations (Vibe Coding) or a train on tracks with stations (Spec-Driven Dev). Through various techniques based on software-engineering principles, Spec-Driven Dev tries to keep the LLM on the tracks and limit the distance it has to travel before resetting its context - think reducing distractions and increasing focus.

Problems?
I've actually found the platforms I've tested (such as GitHub's Spec Kit) to be over-engineered (too advanced and bulky) for the scale of apps I've had to build for clients and myself. In our destination example, I had a better chance of reaching the destination with the car because the train was going so slow.

Hope?
A simpler Spec-Driven Dev workflow has been serving me amazingly well for the past couple of months of testing. I've found it effective not only in my comfort zone (backend); it has yielded very impressive results in frontend designs as well. There are also some pre-built, lighter-weight solutions out there, I believe.
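As one way to picture "keeping the LLM on tracks", here is a hypothetical, minimal shape for a spec that gets fed to an AI coding tool one phase at a time. The field names and the phased breakdown are assumptions made for this sketch, not the actual format of the Apex Spec System or Spec Kit.

```typescript
// Hypothetical minimal spec shape: the "what" and "why" live here, and an AI coding tool
// is pointed at one phase at a time so its context stays small and focused.
// Field names are illustrative assumptions, not a real schema from any of the tools named above.
interface SpecPhase {
  name: string;                 // e.g. "Auth: email/password login"
  requirements: string[];       // what must be true when this phase is done
  acceptanceCriteria: string[]; // how the AI (or you) verifies it, e.g. "all existing tests still pass"
  outOfScope: string[];         // explicit guardrails that keep the model on the tracks
}

interface ProjectSpec {
  goal: string;                 // the "why" in one or two sentences
  constraints: string[];        // stack, style, non-functional requirements
  phases: SpecPhase[];          // small stations instead of one long journey
}

// Each phase becomes its own short prompt/session, so context resets between "stations".
function phasePrompt(spec: ProjectSpec, index: number): string {
  const phase = spec.phases[index];
  return [
    `Goal: ${spec.goal}`,
    `Constraints: ${spec.constraints.join("; ")}`,
    `Current phase: ${phase.name}`,
    `Requirements:\n- ${phase.requirements.join("\n- ")}`,
    `Acceptance criteria:\n- ${phase.acceptanceCriteria.join("\n- ")}`,
    `Out of scope:\n- ${phase.outOfScope.join("\n- ")}`,
  ].join("\n\n");
}
```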
Max Gibson
Level 2 • 6 points to level up
@max-gibson-3991
IBM Certified AI Developer and Skool Community Owner @ AIwithApex.com | Humanitarian Board President @ RedeemTheOppressed.org 🔹 AKA Mosheh

Active 4h ago
Joined Nov 7, 2025
ISTJ
Phoenix, Arizona, USA