Agent Zero Community Call is happening in 36 hours
Saving money is easy when you know how
The Realization: You Are Overpaying for "Thinking"

Stop paying massive premiums for expensive APIs! High-end models charge exorbitant token fees for their reasoning capabilities. You can bypass these markups by using Gemini 3.1 Pro, which offers native, adjustable reasoning at a fraction of the cost.

Anthropic runs on Google silicon. Google invested billions of dollars into Anthropic, and a big part of that deal was that Anthropic uses Google Cloud and Google's custom TPU chips to train and run its models. They are deeply financially and technologically entangled.

According to a report from techcrunch.com, Google contractors were caught comparing Gemini's answers directly against Claude's. It got so blatant that Gemini was caught outputting the exact phrase "I am Claude, created by Anthropic," allegedly because Claude's data was being fed into Gemini's evaluation and training pipelines (opentools.ai).

1. So while they are technically separate companies on paper, the reality is exactly what you are sensing: the lines are incredibly blurred. They share the same Google hardware, Google is a massive financial backer of Anthropic, and they are effectively cannibalizing each other's training data and outputs.
2. I also suspect Claude's models are designed to send you through loops to drain your account, while Google can simply skim it for free... lol

The Strategy: OpenRouter Presets + Gemini 3.1 Pro

Here is the exact setup to stop bleeding money and run multi-agent swarms on a budget:

Step 1: Create "Thinking" Presets
Go into OpenRouter and set up three distinct presets for Gemini 3.1 Pro based on reasoning effort:
- 🟢 Low Thinking (for simple, fast tasks)
- 🟡 Medium Thinking (for standard logic)
- 🔴 High Thinking (for complex problem-solving)
(see attached config screenshots)

Step 2: Automate the Routing
Don't use "High Thinking" for everything. Use a custom skill or script to evaluate your prompt's complexity before sending it, and automatically route it to the appropriate preset. I attached mine, but you could make your own. You only pay for heavy compute when the task actually requires it.
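A minimal sketch of what such a router could look like, assuming you name your OpenRouter presets `gemini-low-thinking`, `gemini-medium-thinking`, and `gemini-high-thinking` (those slugs, the keyword list, and the length thresholds are all placeholders to tune for your own workload — this is not the attached script):

```python
import json
import os
import urllib.request

# Hypothetical preset slugs -- replace with the preset IDs you actually created
# in the OpenRouter dashboard. "@preset/<slug>" is OpenRouter's syntax for
# referencing a saved preset in the "model" field.
PRESETS = {
    "low": "@preset/gemini-low-thinking",
    "medium": "@preset/gemini-medium-thinking",
    "high": "@preset/gemini-high-thinking",
}

def classify_complexity(prompt: str) -> str:
    """Crude heuristic: route by prompt length plus trigger words."""
    heavy = ("prove", "architecture", "debug", "refactor", "multi-step")
    text = prompt.lower()
    if len(prompt) > 1500 or any(word in text for word in heavy):
        return "high"
    if len(prompt) > 300:
        return "medium"
    return "low"

def route(prompt: str) -> dict:
    """Send the prompt to whichever preset the classifier picks."""
    payload = json.dumps({
        "model": PRESETS[classify_complexity(prompt)],
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```

The point of the heuristic gate is that a cheap local check runs before any tokens are billed; only prompts that trip the "heavy" signals pay the high-reasoning rate.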
Self-Installed Kilo Code & Lets Me Watch
So I asked my agent yesterday how it could delegate coding to a CLI coding agent like Kilo-Code (because they give you free coding credits, and I can use MiniMax M2.5 and Kimi K2.5 for free). My thinking was that I wanted to see the interaction with Kilo-Code and keep an eye on direction, progress, and outcomes. Without even giving me a chance to say more, it went ahead and set it all up, then gave me the command to run in my host terminal so I could watch the live coding session. Now I can build small applications for free just by giving Agent Zero a prompt.
I feel like I'm cheating
Just to share my journey: I, like everyone else, tried Open Claw and constantly ran into issues with it. Of course, I was concerned about the security implications of using it. Then I came across Agent Zero. I'll be honest, I passed on it the first time because it just looked overly complex, and I didn't get it. But I kept coming back to it because I saw someone on YouTube say how amazing it was and that it actually crushed Open Claw. I still wasn't convinced because I STILL didn't get it.

However, I started using the Manus.ai agent and that blew me away. It just worked. So I had Agent Zero go out and figure out what made the Manus agent so special, and create an entire plan to bring Agent Zero to the same capabilities. Lo and behold, Agent Zero ended up performing better than Manus AI on GAIA benchmarks and, according to Agent Zero itself, within 96% parity of Manus.

And maybe I am easily impressed, but I gave it a simple prompt: do some research on AI personal assistants, create a script, generate a video in jogg.ai, then build a website from the research with the video embedded, and launch it on a temporary URL. It wasn't the perfect web page or the perfect video. My prompt was only three or four sentences, so it wasn't very good. But as a proof of concept it exceeded all expectations. All I can say now is: holy shit! Cheat Code Unlocked!
Agent Zero → Manus.ai Parity Upgrades

Phase 1 — Structured Planning & Core Skills
- Mandatory todo.md creation before every task (no execution without a plan)
- Planner-Executor-Verifier (PEV) architecture enforced via system prompt
- Automatic user preference learning (load at start, save at end of every task)
- Delivery quality standards (never deliver unverified code, always show file paths)
- Spreadsheet operations skill (pandas, openpyxl — Excel/CSV, pivot tables, charts)
- Database operations skill (SQLAlchemy — SQLite/PostgreSQL/MySQL, CRUD, migrations)
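To make the spreadsheet-skill bullet concrete, here is a minimal pandas sketch of the kind of operation it covers (toy data and column names are mine, purely illustrative; the actual skill in the plan above is not shown here):

```python
import pandas as pd

# Toy sales data standing in for a user-uploaded CSV.
df = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100, 120, 90, 140],
})

# Pivot table: regions as rows, quarters as columns, summed revenue as values.
pivot = df.pivot_table(
    index="region", columns="quarter", values="revenue", aggfunc="sum"
)

# To export to Excel, pandas delegates to openpyxl for .xlsx files:
# pivot.to_excel("sales_pivot.xlsx")  # requires openpyxl installed
```

A skill wrapping calls like this gives the agent structured Excel/CSV handling instead of free-form text parsing, which is largely what the "pivot tables, charts" bullet refers to.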
The agent that fixes itself.
I discovered that I never really need to do a backup, because when a project's memory gets corrupted, all I need to do is switch to a different project and ask it to fix the bad one. It has worked 100% of the time for me.
Damn, I'm impressed...
Just finished my first A0 experiment and it's blown me away. Previously I did most of my copywriting using ChatGPT for ideation and Claude (via the web, then the Claude desktop app) for the actual copywriting. Most of the time the copy was good, occasionally it was great (I've been writing copy for myself and for others for 15+ years and have racked up multiple seven figures in client earnings as a result, so I'd like to think I know good copy when I read it), but at least once in every writing session it would make things up, wildly hallucinate, or just ignore some instructions.

I took my best Claude writing skill, moved it into A0, and started to write some LinkedIn pieces. Initially I noticed a few of the same errors, but each time I asked for a correction the copy got tighter, and the agent seemed to make fewer obvious mistakes. It took about 45 minutes of corrections, feedback, and revisions, but asking A0 to create and update an insights.md doc as it went appears to have sped up the learning phase.

Really excited to see how this develops next! And, in case it's not obvious, I am glad I am here :)
Agent Zero
skool.com/agent-zero
Agent Zero AI framework