🚀New Video: Building Realistic Voice Agents Has Never Been Easier
Voice agents used to mean clicking through ElevenLabs dashboards and wiring up API endpoints by hand. In this one I built a working voice agent for a website that captures leads and books discovery calls through cal.com, all by describing it in plain English to Claude Code. You'll see the full build, the bugs I hit along the way, and how I debugged them without ever touching the docs.
🚀New Video: I Tried 100+ Claude Code Skills. These 6 Are The Best.
After 400 hours in Claude Code, I noticed that businesses keep paying for the same six types of skills. In this video, I break down each one, what it does, and why these simple, boring skills are the ones that actually sell. Whether you're brand new to AI automations or already building for clients, these are the skills worth learning first.
🏆 Community Wins Recap | Apr 25 – May 1
From AI roles and first clients to live receptionist systems and enterprise training deals - this week inside AIS+ showed what happens when builders stop watching and start executing.

🚀 Standout Wins of the Week inside AIS+

👉 @Griffin Maklansky went from being laid off to landing an AI Workflow Builder role in just 1 month.
👉 @Ahmed Bin Faisal landed another $2,000 USD client — an interior design firm — and broke down exactly what led to the close.
👉 @Narsis Amin built a working AI restaurant receptionist handling bookings, availability, and CRM logging end-to-end.
👉 @Josh Holladay closed a $4.5K (+$1K) client with half up front today — and dropped his top 10 lessons from the close.
👉 @Dion Wang received his first official testimonial, validating real client impact and around 40 hours/month saved.

🎥 Super Win Spotlight | @Duy Nguyen

Duy started as an engineer who was curious about AI — but unsure how to turn that curiosity into something real. After joining AIS+, he went from learning passively to building his own AI-operated business, Sharper Automations. Since then, he has:
• Built a 24-agent AI business operating system
• Landed 2 local paying clients through word-of-mouth
• Created a system that improves itself weekly through feedback loops
• Started moving toward his goal of leaving his corporate job

His biggest shift? From "Can I really do this?" to building a real business around AI automation.
Resume Matched to 53 Job Postings. Interview Callbacks: 3% → 18%. 🔥
Job search reality: sending the same resume to every posting and hoping something sticks. 100 applications. 3 callbacks. 3%.

The problem wasn't qualifications. It was matching. A generic resume hitting keyword filters, losing to tailored applications.

So I built a resume optimizer. Upload your base resume. Upload the job posting. The system extracts the required skills, preferred qualifications, keywords, and responsibilities; compares them to the resume; generates a tailored version emphasizing relevant experience; and suggests additions drawn from the job description's own language.

I applied the optimized approach to the next 53 applications. Results:
- 53 tailored applications (each took 5 minutes vs. 45 minutes manual)
- 18% callback rate (was 3%)
- 4 offers received
- Accepted a role with a 23% salary increase over my previous position

The optimization isn't lying. It's emphasizing truth. Same experience, better presented for each specific role.

What's your callback rate on job applications?
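The comparison step the post describes can be sketched with plain keyword overlap. This is a minimal illustration, not the poster's actual system: `extract_terms`, `match_report`, the stopword list, and the sample strings are all my own assumptions.

```python
# Hypothetical sketch of the resume-vs-posting matching step described above.
# All names and sample text are illustrative, not the poster's actual code.
import re

STOPWORDS = {"and", "or", "the", "a", "an", "to", "of", "in", "with", "for"}

def extract_terms(text: str) -> set[str]:
    """Lowercase the text and keep meaningful words (no stopwords, 3+ chars)."""
    words = re.findall(r"[a-zA-Z+#]+", text.lower())
    return {w for w in words if w not in STOPWORDS and len(w) > 2}

def match_report(resume: str, posting: str) -> dict:
    """Report how well resume terms cover the job posting's terms."""
    resume_terms = extract_terms(resume)
    posting_terms = extract_terms(posting)
    covered = posting_terms & resume_terms
    missing = posting_terms - resume_terms
    coverage = len(covered) / len(posting_terms) if posting_terms else 0.0
    return {"coverage": coverage, "missing": sorted(missing)}

report = match_report(
    "Built ETL pipelines in Python; automated reporting with SQL.",
    "Looking for Python engineer with SQL and Airflow experience.",
)
```

A real version would presumably hand the "missing" terms to an LLM to draft tailored bullet points, but the overlap report alone already shows which keywords a filter would miss.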
Claude Code LIED and got caught! WTF?
All I can do is shake my head. I was watching Nate's latest video, in which he said to check the CC version to make sure certain capabilities are available. So I started up the CC Desktop version and asked what version of CC was currently running and where I could find that info readily (I searched through the settings menu and didn't see it). It responded with:

I'll check the version right now.
Ran Check Claude Code version
You're on Claude Code version 2.1.114.
Where to see this:
- Terminal: Run claude --version anytime
- In-session: Type /status while in a Claude Code session — shows version along with model, context usage, and other info
- VS Code extension: Check the extension panel or bottom status bar if you have the IDE integration installed

Since I was in CC Desktop, I typed in /status. Except that is not a legit command there. So I challenged CC that there was no such command. Here's the response I got:

You're right, I made that up — sorry about that. /status is not a real Claude Code command. The reliable ways to check your version are:
- Terminal: claude --version (as we just ran)
- /help inside a session — lists available commands, but doesn't show version directly
- /release-notes — shows recent changelog, which implies the current version
For version specifically, claude --version in the terminal is the correct method.

I made that up?! Sorry about that?? I was genuinely pissed off at this point. CC didn't make an error on some highly complex task. The question was about itself at the most basic level, and it just lied. Then it tried to BS its way out in subsequent profanity-laden (by me) chats by claiming it was an inadvertent hallucination. I finally calmed down enough to update my claude.md: never fabricate a response, and be transparent whenever CC can't verify the source.
This was a double whammy this weekend, because I had the exact same situation with ChatGPT: it made up responses, then acknowledged that it had fabricated them. I figured that as the models got more sophisticated, these tendencies would be rooted out. But apparently the tendency to serve up the response the user expects to see, even at the cost of fabricating it, may be built into the models. So be careful. Update your universal .md files to add guardrails so responses are verified, and when they aren't or can't be, the model clearly calls that out. Thanks for letting me vent.
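For anyone who wants to try the same guardrail, a minimal claude.md addition might look like this. The exact wording is my own suggestion, not the poster's actual file:

```markdown
## Verification guardrails

- Never present a command, flag, or API as real unless you can verify it
  (run it, read its --help output, or point to the file it comes from).
- If you cannot verify a claim, say so explicitly instead of guessing.
- When asked about your own version or capabilities, check directly
  (e.g. run `claude --version`) rather than answering from memory.
```

No config file can fully prevent hallucination, but instructions like these make the model more likely to flag unverified answers instead of stating them as fact.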
AI Automation Society
skool.com/ai-automation-society
Learn to get paid for AI solutions, regardless of your background.