Memberships

OpenClawBuilders/AI Automation

411 members • Free

OpenClaw For Dummies

487 members • $9/month

AI Founders Labs

1.5k members • Free

Agent Architects

554 members • $97/month

GenHQ - Creative AI Education

1.5k members • $97/month

5 contributions to OpenClawBuilders/AI Automation
OpenClaw runaway costs - anyone seen this before?
I’ve been running OpenClaw for about a month without issues, but last week I ran into a major usage spike and I’m trying to track down the cause. I had 16 subagents running Opus 4.6 for short deep-research tasks (expected ~5 minutes each). Later that day my Anthropic usage jumped to $100+ per day.

Actions I took:
- Stopped all cron jobs and background workflows
- Reduced heartbeat from every 5 minutes to every 30 minutes
- Moved heartbeat to local models on my Mac Mini
- Set Sonnet 4.6 as default and only use Opus 4.6 explicitly

Even after this, I’m still seeing unexpected burn - roughly $10 every 30 minutes during normal use. Before I wipe everything and rebuild from scratch, I’m hoping for a sanity check. Has anyone experienced:
- Orphaned cron jobs or background agents continuing to run?
- Hidden OpenThreads or loops?
- A good way to audit which models are actually being called and why?

Context:
- Interfaces: Signal, Discord, Telegram
- Considering a full reset (export memory, rebuild clean instance)
- Also unclear how the Claude Code $200 plan interacts with API usage

Main goal: identify where the usage is coming from and put guardrails in place. Any advice or debugging approaches would be appreciated.
0 likes • 20d
@Damien Hooper great points. Yeah, I built a control panel before, but I don't remember if I put the cron jobs and sub-agents in it. I did build one for one of my OpenClaw instances, but maybe not this one. That's definitely a good first step, including alerting on thresholds. Thanks for that tip.
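One way to answer "which models are actually being called and why" is to route every call through a small append-only audit log before it reaches the provider. A minimal sketch, assuming hypothetical model names, a made-up `caller` tag convention, and illustrative per-million-token prices (check your provider's current pricing; these numbers are placeholders):

```python
import json
from datetime import datetime, timezone

# Hypothetical per-1M-token prices in USD; replace with real pricing.
PRICES = {
    "opus-4.6":   {"in": 15.00, "out": 75.00},
    "sonnet-4.6": {"in": 3.00,  "out": 15.00},
}

class CallAuditor:
    """Append-only JSONL log of every model call: who asked, which model, cost."""

    def __init__(self, path="model_calls.jsonl"):
        self.path = path

    def record(self, model, caller, tokens_in, tokens_out):
        """Log one call and return its estimated cost in USD."""
        p = PRICES[model]
        cost = tokens_in / 1e6 * p["in"] + tokens_out / 1e6 * p["out"]
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "caller": caller,  # e.g. "heartbeat", "subagent:research-3"
            "tokens_in": tokens_in,
            "tokens_out": tokens_out,
            "cost_usd": round(cost, 6),
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return cost

    def spend_by_caller(self):
        """Aggregate logged cost per caller tag, to spot the runaway workflow."""
        totals = {}
        with open(self.path) as f:
            for line in f:
                e = json.loads(line)
                totals[e["caller"]] = totals.get(e["caller"], 0.0) + e["cost_usd"]
        return totals
```

With every call tagged by its originator, a $10-per-30-minutes burn usually shows up as one caller dominating `spend_by_caller()`, and an alert threshold is a one-line check on the running total.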
Most AI agents fail in production for one reason:
Teams optimize prompts before they optimize systems. After reviewing dozens of real-world builds, the pattern is clear: if you skip evals, memory architecture, and observability, your "AI assistant" becomes a fragile demo.

What actually works in production:

1) Evaluation loops
- Define success/failure criteria before shipping
- Track output quality over time, not one-off wins

2) Memory architecture
- Core facts (always available)
- Recent context (compressed)
- Semantic retrieval (long-term recall without context bloat)

3) Observability
- Tool-call logs
- Failure reasons
- Cost + latency per workflow

4) Governance
- Approval gates for external actions
- Tool allowlists
- Audit trail for every critical step

The market is moving from "Can it generate?" to "Can it operate reliably at scale?"
0 likes • 20d
Keith is speaking the truth here, but I feel it's complicated. In general I know what I'm doing, but I still don't know exactly how to go about spelling out a successful case to an agent vs. an unsuccessful case, and how much work it would take to get there, which makes me shy away from it and just try stuff out, which gets messy fast. I hear Keith loud and clear, but let's take failed cases: how do I know what the failed cases are? I guess ones that don't produce favorable outcomes? But maybe there's learning there and it's OK for it to fail? I'm working on building a tool that will look at commodities prices today, look at all the news in the world, and do some deep research there with 4.6 or others. It will come up with some points of view on whether a commodities trader should go long or short (long meaning the price is going up, short meaning it's going down), what price point the trade should get out at, and where the trader should get out on each side. How do I know if that advisory is legit or not? What was the positive case versus the negative case? I guess if it works and makes money over time, that's a positive case, and if it loses or is wrong, that's a negative case. It seems very complicated to tell my model, "Hey, create agents to go make these advisories. Here are the correct cases; here are the incorrect cases." It gets a little tricky. Maybe I need to think about dumbing down my models more and not having them act so smart. I don't know; it's not super simple in my opinion, but maybe I'm missing something.
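For the advisory use case, one workable success/failure definition is mechanical: each advisory names a direction, an entry price, a take-profit, and a stop-loss, so you can label it after the fact by replaying the realized prices and seeing which level was hit first. A minimal sketch (the `Advisory` shape and fallback rule are my assumptions, not anything OpenClaw provides):

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    direction: str      # "long" or "short"
    entry: float        # price when the advisory was issued
    take_profit: float  # exit target on the winning side
    stop_loss: float    # exit level on the losing side

def score_advisory(adv, realized_prices):
    """Label one advisory "success" or "failure" after the fact.

    Walk the realized price path in order; whichever level (take-profit
    or stop-loss) is touched first decides the label. If neither is hit,
    fall back to whether the final move agreed with the direction.
    """
    for p in realized_prices:
        if adv.direction == "long":
            if p >= adv.take_profit:
                return "success"
            if p <= adv.stop_loss:
                return "failure"
        else:  # short
            if p <= adv.take_profit:
                return "success"
            if p >= adv.stop_loss:
                return "failure"
    moved_up = realized_prices[-1] > adv.entry
    return "success" if moved_up == (adv.direction == "long") else "failure"
```

Run this over a backlog of advisories and you get exactly the labeled correct/incorrect cases the eval loop needs, without having to hand-write them; the tricky judgment ("was this advisory legit?") reduces to a deterministic replay.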
OpenClaw: Extracted Prompts (Generalized)
I just watched the video below, where Matthew Berman details his OpenClaw CRM system. Here are the prompts he used to create it. Enjoy: 22 copy/paste-ready prompts for building your own AI agent system. Each prompt builds a functional system or implements a proven best practice you can hand to an AI coding assistant. Replace placeholders like <your-workspace>, <your-messaging-platform>, and <your-model> with your own values. The repo is located here: https://gist.github.com/mberman84
2 likes • 23d
Geeeeze @Renato Avanzini, not to take over this comment thread on that topic, but that's bad. I've often wondered if I should be running daily pushes of all OpenClaw configs/memory to git and Google Drive (just to have a backup beyond the backup). I wonder what happened with yours. Personally, last week I got so mad at mine I had to pull the plug, and now I'm thinking of rebuilding from scratch too. Mine started charging me like $50/day in Anthropic credits and I couldn't figure out how. I dialed it back to not rebuy, but I was going through $10 in credits like every 30 mins using it. And this is after I told it to kill all cron jobs (and it said it did, but I'm not sure how to check for definitive data that it followed through on that). Similarly, I told it to use local models (I have 3 on my Mac Mini) for heartbeats (which I changed to every 30 mins) and to use Sonnet 4.6 instead of Opus 4.6 for all conversations and day-to-day default work, only ever using Opus 4.6 when I explicitly tell it to. I'll move this over to a new thread to ask for advice there, but wanted to share that I feel your pain!
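On "not sure how to check for definitive data that it followed through": rather than trusting the agent's self-report, you can ask the OS directly. On macOS/Linux, `crontab -l` prints the current user's cron table; a small sketch (the split into a pure parser plus a subprocess wrapper is just my structuring choice):

```python
import subprocess

def parse_cron_entries(crontab_text):
    """Return the active (non-comment, non-blank) lines from crontab output."""
    jobs = []
    for line in crontab_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            jobs.append(line)
    return jobs

def list_user_cron_jobs():
    """List this user's live cron jobs straight from the OS."""
    result = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
    if result.returncode != 0:  # "no crontab for <user>" also lands here
        return []
    return parse_cron_entries(result.stdout)
```

If `list_user_cron_jobs()` comes back empty, the cron jobs really are gone (for that user, at least; agents can also hide recurring work in launchd/systemd timers or their own in-process schedulers, so cron is only the first place to check).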
1 like • 22d
Ok, guessing that's $39/month on some VPC that you can SSH into, and you just have to trust them with all your data? Where's this at? I'm interested in feeling it out.
Need help
I'm really interested in OpenClaw if anyone is wanting to help a guy figure this stuff out. Anything helps.
0 likes • 24d
@Keith Motte Extremely solid. Thank you!
Skills
Since we are the OC Builders community, I want to build a skill for the community. What do you all want me to create? After you vote, post a comment so I can capture the details of what you want. At a minimum, provide the following information: "As a persona, I would like to do x, for reason y, with desired outcome z."

Example: As a business owner, I want to reduce the time I spend on daily social posting so I can focus on making more sales calls. When I make sales calls I close 20% and make an extra $10,000/mo, instead of creating social media posts that don't perform well. I post to Facebook, X, Reddit, and LinkedIn to direct potential customers to my funnel. I've made a post a day for each social and I don't get any organic traffic to my funnel. I spend 2 hours a day making posts.
Poll
23 members have voted
0 likes • 24d
@J Gold Especially if we combine it with the ability to publish once and have it syndicated out to all the different social networks, with social-network-specific branding or modeling, with the right text and the right format for each channel. Plus, if you could layer in the ability to pick up on the latest social media trends and tweak that content for each trend, that'd be awesome!
Christo Roberts
@ai-ses-7432
Excited to find this AI mecca - I'm a Solutions Architect at Cloudflare by-day / Creative Vigilante by-night! Learn, then Do, then Teach is my mantra.

Joined Feb 11, 2026
San Francisco, CA