Agent Zero Community Call is happening in 5 days
Appearance customization for A0
As I now have two instances of A0 running on two different servers, I find myself constantly glancing at the URL to figure out which one I've got pulled up. It would be REALLY nice if there were an easy way to customize the UI and favicon so we can tell them apart at a glance in a crowded browser tab lineup. I tried getting one A0 agent to update the background colors and favicon of the instance on the other server, but it really struggles to pull that off. So I'll probably just resort to locating which CSS does what and doing it manually--which feels a bit ironic, given what A0 is for (semi-autonomous agents). Plus, I'm betting my personalization will get overwritten by the next update. So if I may make a request: could there be an easier way to personalize the UI, with that personalization included in the Backup and Restore feature, pretty please?
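Until something like this is built in, one workaround is a small script that re-applies your own favicon and CSS after each update. This is only a sketch: the /a0/webui path and the target file names are assumptions about where the web UI keeps its static assets, so adjust them to whatever your install actually uses.

```python
# reapply_branding.py - a minimal sketch; the /a0/webui path and the target
# file names are assumptions, not confirmed locations in Agent Zero.
# Keep your custom files outside the app directory so updates can't touch them,
# then re-run this script after every update.
import shutil
from pathlib import Path

CUSTOM_DIR = Path("/root/a0-branding")   # your files, kept outside the app dir
WEBUI_DIR = Path("/a0/webui")            # assumed location of the UI assets

# map of "my file" -> "file inside the web UI to overwrite" (assumed targets)
OVERRIDES = {
    "favicon.svg": "public/favicon.svg",
    "custom.css": "css/custom.css",
}

for src_name, dst_rel in OVERRIDES.items():
    src = CUSTOM_DIR / src_name
    dst = WEBUI_DIR / dst_rel
    if src.exists():
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        print(f"re-applied {src} -> {dst}")
```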
Agent Zero Starter Kit
Current experience:
>> Install Agent Zero with Docker.
>> New Chat
>> Agent Zero: Hello! 👋 I'm Agent Zero, your AI assistant. How can I help you today?
>> User: Hi, [...]
>> Agent Zero: Recall memories extension error: [...]. Error [...]
Looks familiar? Yes, we could just call people stupid or silly and send them off to ask GPT how to solve this, or to watch videos on how to get started with Agent Zero. Or we could bundle the simplest small local LLM setup in the image, one able to run a predefined routine for the onboarding experience. Does this idea sound ridiculous? 🤔
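To make the idea concrete, here is a rough sketch of what such an onboarding routine could look like, talking to a tiny local model through litellm (which the framework already bundles, as the traceback further down shows). Bundling Ollama in the image and the model name used here are assumptions, not an existing A0 feature.

```python
# onboarding_check.py - a rough sketch of the proposal, NOT an existing feature.
# Assumes a small local model is served by Ollama inside the image
# (the model name "qwen2.5:0.5b" is just an example) and that litellm is
# importable, as it already is in the A0 virtualenv.
import litellm

def run_onboarding_turn(user_message: str) -> str:
    """Run one predefined onboarding exchange against the bundled local model."""
    response = litellm.completion(
        model="ollama/qwen2.5:0.5b",  # assumed bundled model
        messages=[
            {"role": "system", "content": "You are the Agent Zero onboarding guide."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # If this answers, the out-of-the-box experience works before any API keys are set.
    print(run_onboarding_turn("Hi, what can you help me with?"))
```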
A0 Website Redesign
We're cooking something special for the Agent Zero website, with a fresh new look that better reflects what our framework has become. We're emphasizing what makes Agent Zero special: complete control over your AI agents, from single tasks to fully automated workflows, with the freedom to use any LLM provider or local model, and the option to access true private and uncensored AI models through Venice AI. This is still a work in progress, and we'd love to hear your thoughts! What would you like to see on the new site? Drop your feedback below 👇 As always, Agent Zero remains free and open-source. We're building this together!
Rate limit error on Agent Zero AI Venice API
Have other people seen errors like this when using the A0T token-staked compute with Venice? It has worked pretty flawlessly until today, but now I'm seeing what look like rate limit errors, even though we have barely used any of our quota. Thanks for the help! :D

Failed to filter relevant memories
Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 969, in async_streaming
    headers, response = await self.make_openai_chat_completion_request(
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 135, in async_wrapper
    result = await func(*args, **kwargs)
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 436, in make_openai_chat_completion_request
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 418, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2589, in create
    return await self._post(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.PermissionDeniedError: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1" />
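Not an official fix, just a sketch for isolating the problem: call the same OpenAI-compatible endpoint directly outside A0 and see whether you get a genuine 429 or the HTML "permission denied" page from the traceback. The base URL, environment variable name, and model id below are assumptions; check them against your own Venice configuration.

```python
# probe_venice.py - a minimal sketch for reproducing the error outside Agent Zero.
# The base URL, env var name, and model id are assumptions; adjust to your setup.
import os
import time
import openai

client = openai.OpenAI(
    base_url="https://api.venice.ai/api/v1",  # assumed Venice endpoint
    api_key=os.environ["VENICE_API_KEY"],     # assumed env var
)

for attempt in range(5):
    try:
        resp = client.chat.completions.create(
            model="llama-3.3-70b",  # assumed model id
            messages=[{"role": "user", "content": "ping"}],
        )
        print(resp.choices[0].message.content)
        break
    except openai.RateLimitError:
        # genuine rate limiting: wait and retry with exponential backoff
        time.sleep(2 ** attempt)
    except openai.PermissionDeniedError as e:
        # the error from the traceback above: an HTML "denied" page usually points
        # at quota/entitlement or gateway blocking rather than a transient limit
        print("permission denied:", e)
        break
```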
Do something else! loop after Message misformat?
Hi, I am trialing GLM 4.6 on the Venice beta. Yesterday it was working better than anything else, but today it seems to be caught in a loop of some sort when I ask it to summarize the community call. See attached video. Anyone have any hacks or workarounds for this situation? Thanks a lot! :D

A0: Message misformat, no valid tool request found.
network_intelligence
A0: Generating...
You have sent the same message again. You have to do something else!
network_intelligence
A0: Generating...
You have sent the same message again. You have to do something else!