agent-browser cuts your AI agent's token use by 93%.
Vercel Labs dropped this on GitHub and it's quietly replacing Playwright MCP for anyone running agents that browse. Native Rust CLI, daemon architecture, accessibility-tree snapshots instead of full DOM dumps. Same browser, less bloat.

What it does differently:
- Native Rust daemon, no Node.js or Playwright dependency
- Semantic element refs (@e1, @e2) from accessibility snapshots, no brittle CSS selectors
- CDP direct, no translation layer
- Multi-session, Chrome profile reuse, auth persistence built in

Install in 30 seconds:
1. 'npm install -g agent-browser'
2. 'agent-browser install'
3. 'agent-browser start'

Then 'navigate', 'snapshot', 'click @e1'. Done.

Repo: github.com/vercel-labs/agent-browser. Apache 2.0 licensed.

Things to know:
- It's pre-1.0 (v0.26), so pin your version
- Chrome-only via CDP, though Lightpanda is supported as a lighter alternative
- The 93% number is for typical browse-and-act flows; your mileage will vary

If you're running agents that browse, this changes the math on your token budget. If you hit anything weird, drop it below. I've probably seen it.
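The steps above as one terminal session. Subcommand names are the ones the post mentions; the URL and the @e1 ref are placeholders, and exact argument syntax may differ, so check the repo's README:

```shell
# Install the CLI, then its browser driver, then start the daemon
# (commands quoted from the post).
npm install -g agent-browser
agent-browser install
agent-browser start

# Typical browse-and-act loop. https://example.com is a placeholder;
# run 'snapshot' first to see which @eN refs the accessibility tree assigns.
agent-browser navigate https://example.com
agent-browser snapshot        # compact accessibility-tree snapshot, not a DOM dump
agent-browser click @e1       # act via a semantic element ref from the snapshot
```

The @eN refs are why token use drops: the agent reasons over a short labeled tree instead of a full HTML dump, and clicks by ref instead of by CSS selector.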
Free open source version of Eleven Labs!
Hey guys! I've come across this really cool GitHub repo called 'VoxCPM' that is basically a free and open-source version of the popular text-to-speech and voice-cloning tool Eleven Labs. It was trending #1 on GitHub and already has 16k stars! I've linked it below, as well as a setup guide you can use to get started! https://github.com/OpenBMB/VoxCPM https://www.notion.so/VoxCPM2-Setup-Guide-Open-Source-TTS-Voice-Cloning-Voice-Design-3508cb0526bb8190a26ee8e22d726813?source=copy_link Let me know if you have any questions!
How to run Claude Code for free using a local Gemma model.
This is the full setup. My reel on this got over 10k views and heaps of people asked for the actual steps, so here they are.

What you need:
- Ollama installed
- A Gemma model pulled locally (Gemma is Google's open-weights relative of Gemini; Gemini itself is API-only, so it can't run on your machine)
- Claude Code CLI
- A config tweak to point Claude Code at your local model instead of the API

Steps:
1. Install Ollama with 'brew install ollama' or from ollama.com
2. Run 'ollama pull gemma2' in the terminal (or whichever Gemma variant fits your machine's RAM)
3. Install Claude Code if you haven't: npm install -g @anthropic-ai/claude-code
4. Set up the proxy/config so Claude Code routes to localhost instead of Anthropic's API. I'll drop the exact config in the comments because formatting here is a pain
5. Run it. No API key needed, no subscription charge

Things to know:
- It's slower than real Claude, obviously. Local models aren't Sonnet
- Works best for small-to-medium tasks, not huge refactors
- Great for learning, side projects, or when you're rate limited

If you hit issues, drop them below. I've probably seen it.
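One common way to wire up the proxy step, as a sketch. This is not necessarily the author's exact config (theirs is in the comments): LiteLLM as the translation proxy, port 4000, and the dummy key are all assumptions here.

```shell
# Pull the local model and put an Anthropic-compatible proxy in front of Ollama.
ollama pull gemma2
pip install 'litellm[proxy]'
litellm --model ollama/gemma2 --port 4000 &   # translates between API formats

# Point Claude Code at the local proxy instead of Anthropic's API.
export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_API_KEY=dummy   # some setups still expect a non-empty key
claude
```

The idea is that Claude Code keeps speaking the Anthropic API shape, and the proxy rewrites each request into something Ollama understands, so no code changes are needed on either side.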
Beads
What is beads? https://github.com/gastownhall/beads

Beads is like Jira for Claude Code (or other coding agents). You can ask your agent to set it up by itself (this is the easiest method) and use it for all task tracking. Not only does this give you a more structured and formatted way of tracking what needs to be done, issues, blockers, etc., it also enables cross-session coding: all you need to do is tell the agent 'read the beads to find out what needs to be done next'. Setting it up might cost you a few extra tokens up front, but in the long run having persistence across coding sessions is a game changer for productivity and longer-horizon tasks, something I'm often doing in the real world. Have you guys used beads yet?
Build With AI
skool.com/build-with-ai-9352
Build With AI teaches you how to get ahead of the competition leveraging AI. Join free for builds, hacks and private coaching.