Activity

Memberships

AI Automation Agency Hub

314.8k members • Free

AI Developer Accelerator

11.2k members • Free

20 contributions to AI Developer Accelerator
AI Developer Accelerator: Coaching Call - April 28th
Last week Brandon burned through $120 in Claude credits in 30 minutes while proving the "SaaSpocalypse" is real: rebuilding a $20 SaaS in under an hour and making everyone panic about their lack of moats. If you missed the chaos, you missed the survival guide for what happens when AI makes code trivial to replicate.

📞 HOW THE CALLS WORK

The calls can run 2+ hours. We want to make sure we're respecting everyone's time, especially those of you who actually show up. Here's the structure:

👉 Reply to this post with your questions before the call
👉 If you submit a question and you're on the call, you go first
👉 We work through questions in the order they came in
👉 Then we open it up for everyone else

If you can't make the call but want your question answered, drop it in the comments. We'll get to it, but priority goes to people who are there. The goal is simple: if you're taking the time to show up, you shouldn't have to wait behind questions from people who aren't even on the call.

We've got some delicious follow-ups brewing: Ty is running his ShipSafe security scanner against Morgan's catio site (cybersecurity meets cat patios), Patrick is polishing that elegant multi-model pipeline for open source release, and Tiran is stripping his landing page down to a single address input. Perfect time to jump back in if you want to see how the experiments land.

🔗 ZOOM LINK (save this)
https://us06web.zoom.us/j/81995207847?pwd=Xe6u6LmIQOmCP5VTnOwWYjDBfZNKGB.1

📅 WHEN
Tuesday April 28th at 6PM ET

Looking forward to seeing you on the call!
0 likes • 5d
Hey folks, here is my question: I noticed Claude Code slowing down significantly over the last two months. It's gotten to the point where I rely more and more on Cursor instead of Claude Code. In fact, I find GPT 5.5 in Cursor fast and really good at complex tasks that I would otherwise leave to Claude Code. Alternatively, do you guys rely on other CLI-based models like GLM or Kimi K2.5? I know @Bastian Venegas has been relying on GLM for tasks. A cost-benefit analysis discussion would be interesting.
AI has gained consciousness
https://www.youtube.com/watch?v=eBRzdfU-K30
RecapFlow : March 24th Coaching call analysis
πŸ“ SUMMARY A dense, high-signal call covering self-improving AI pipelines, governance-first agent architecture, Stripe best practices, biometric authentication, and mobile ideation workflows. The strongest through-line: the shift from using AI interactively to building systems that run autonomously β€” defining quality rubrics, letting agents evaluate and improve their own outputs, and removing the human from the loop wherever possible. Practical tool recommendations (CMux, Codex for autonomous tasks, Terraform for infrastructure, Discord over Telegram for agent memory) were grounded in real production experience. The IronClaw white paper, Ty's FaceGate SDK demo, and Patrick's RecapFlow auto-research experiment are the most concrete follow-ups to watch for in the coming week. πŸ’‘ KEY INSIGHTS Self-Improving AI Pipelines β€” The Most Actionable Framework Shared This Call Build systems that eliminate the human from the evaluation loop. Define explicit pass/fail criteria and a point-based rubric, build a representative input suite (e.g., 60 test cases), let the AI run experiments, grade its own outputs, identify failure modes, update its own system prompt, and iterate. Apply this at the individual pipeline step level first, then at the full system level. Brandon uses Codex for this because it runs autonomously for long periods without prompting for human confirmation. Expensive but produces measurable, compounding improvement. The Hardest Part Is Defining "Good" For mathematical outputs, scoring is straightforward. For language outputs, defining quality is the core challenge. Patrick's approach: use mechanical checks (did all URLs get extracted? is compression within bounds?) for the fast inner loop, and community feedback as the slow outer loop for subjective quality. Governance Before Features for AI Agents Prioritize governance before adding capabilities. 
Recommended architecture: read-only access to most systems, human-in-the-loop via Discord or Telegram for any state-changing action, full audit trail, and a smart router using local models (Ollama) for routine tasks and frontier models only for complex ones. This is the core principle behind the IronClaw framework.
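The self-improving loop described under Key Insights can be sketched in a few lines. This is a hypothetical illustration, not Brandon's actual Codex setup: `grade` stands in for the point-based rubric (here, three mechanical checks), and `run_pipeline`/`improve` are placeholders for the real model calls.

```python
def grade(output: str) -> int:
    """Toy point-based rubric: three mechanical checks, 1 point each."""
    score = 0
    if output.strip():            # non-empty output
        score += 1
    if len(output) <= 80:         # within compression bounds
        score += 1
    if "http" not in output:      # no leftover raw URLs
        score += 1
    return score

def run_pipeline(case: str, system_prompt: str) -> str:
    """Placeholder for the real model call."""
    return f"summary of {case}"

def improve(system_prompt: str, failures: list) -> str:
    """Placeholder: in the real loop, the model rewrites its own prompt."""
    return system_prompt + f" Avoid the {len(failures)} observed failure modes."

def self_improve(test_cases, system_prompt, pass_score=2, rounds=3):
    """Run the suite, grade outputs, revise the prompt, and iterate."""
    failures = []
    for _ in range(rounds):
        failures = [
            (case, out)
            for case in test_cases
            if grade(out := run_pipeline(case, system_prompt)) < pass_score
        ]
        if not failures:          # all cases pass: stop iterating
            break
        system_prompt = improve(system_prompt, failures)
    return system_prompt, failures
```

The point is the shape of the loop, not the rubric: each round is fully automatic, so the human only intervenes at the slow outer loop (reviewing the rubric itself).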
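The smart-router idea in the governance architecture can also be sketched. This is an illustrative toy, not IronClaw code: the keyword heuristic, model names, and `route_task` function are all made up for the example.

```python
ROUTINE_KEYWORDS = {"summarize", "classify", "extract"}

def route_task(task: str, changes_state: bool) -> dict:
    """Pick a model tier and governance requirements for a task.

    Routine read-only tasks go to a local model (e.g. served via Ollama);
    anything complex or state-changing goes to a frontier model, and
    state-changing actions always require human approval.
    """
    routine = any(k in task.lower() for k in ROUTINE_KEYWORDS)
    return {
        "model": "local-ollama" if routine and not changes_state else "frontier",
        "needs_human_approval": changes_state,   # human-in-the-loop for writes
        "audit_log": {"task": task, "changes_state": changes_state},
    }
```

The governance properties live in the return shape: every decision carries an audit record, and no write happens without the approval flag being surfaced to a human channel.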
1 like • Mar 26
This is way better, thanks Patrick... just used your summary to start implementing Terraform code as infra.
Cursor and Composer 2.0
Has anyone been trying the new Composer 2 model in Cursor? What are your impressions?
0 likes • Mar 22
It's hella fast, but it shoots from the hip a lot... making some mistakes that it shouldn't be making.
LLM Inference Illustrated
Goldmind here, folks... at the vanguard of the LLM space. With a scientific and materialist approach, Ted Kyi (LinkedIn: https://www.linkedin.com/in/tedkyi/) is producing a book that lets us understand LLM inference in a comprehensive yet understandable manner. Please take a look at this work in progress: https://tedkyi.github.io/llm-inference/

The next meetup will be: https://www.meetup.com/san-diego-machine-learning/events/313891387/?eventOrigin=group_upcoming_events

Since I live in San Diego, I can attend in person, but the meeting is Zoom hybrid! He is sharing this book and giving a lecture on it for free! This is no trifling Ted Talk, this is the real Ted LLM Talk.
Juancho Torres
3
35 points to level up
@juancho-torres-8802
Data Scientist passionate about finance and political economy

Active 5d ago
Joined Jan 19, 2025
San Diego