Issue with Agent Zero: OpenAI quota error while configured for OpenRouter
Hello everyone,
I’m currently facing an issue with Agent Zero.
My LLM provider is OpenRouter, and I still have available credits on my account. However, I keep getting the following error:
litellm.exceptions.RateLimitError / OpenAIException - You exceeded your current quota, please check your plan and billing details
From my understanding, Agent Zero seems to be calling the OpenAI API directly instead of routing everything through OpenRouter: the insufficient_quota error and the docs link in the message both come from OpenAI, not OpenRouter.
In my configuration:
  • Main model: OpenRouter / anthropic/claude-sonnet-4.6
  • Utility model: OpenRouter / cognitivecomputations/dolphin-mistral-24b-venice-edition:free
Could someone please help me understand why this is happening and how to fix it?
Thanks in advance.
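For context, the traceback shows Agent Zero going through litellm, which picks the backend provider from the prefix of the model string; a model name passed without an explicit openrouter/ prefix can fall back to the default OpenAI provider and fail against the OpenAI quota. Here is a minimal illustrative sketch of that prefix-based resolution (the resolve_provider helper and the provider set are hypothetical simplifications, not litellm's actual code):

```python
def resolve_provider(model: str) -> str:
    """Illustrative sketch of litellm-style provider resolution.

    litellm chooses the backend from the model-string prefix; the
    'known' set and the OpenAI fallback below are simplified assumptions.
    """
    known = {"openrouter", "anthropic", "mistral", "groq", "gemini"}
    prefix = model.split("/", 1)[0]
    if "/" in model and prefix in known:
        return prefix
    # Unrecognized names fall back to OpenAI, which would explain an
    # OpenAI 429 even though the account's OpenRouter credits are fine.
    return "openai"

# With the explicit prefix, the request is routed to OpenRouter:
print(resolve_provider("openrouter/anthropic/claude-sonnet-4.6"))  # openrouter
# Without it, the same model name would be sent to OpenAI:
print(resolve_provider("claude-sonnet-4.6"))  # openai
```

If this is what is happening, checking that the saved model names in Agent Zero's settings actually carry the OpenRouter provider (rather than a bare model id) would be the first thing to verify.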
litellm.exceptions.RateLimitError: litellm.RateLimitError: RateLimitError: OpenAIException - You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.
Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 991, in async_streaming
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 454, in make_openai_chat_completion_request
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 436, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2589, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/main.py", line 598, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1041, in async_streaming
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/a0/helpers/extension.py", line 176, in _run_async
    data["result"] = await data["result"]
    ^^^^^^^^^^^^^^^^^^^^
  File "/a0/agent.py", line 596, in handle_exception
    raise exception # exception handling is done by extensions
    ^^^^^^^^^^^^^^^
  File "/a0/agent.py", line 471, in monologue
    agent_response, _reasoning = await self.call_chat_model(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/a0/helpers/extension.py", line 183, in _run_async
    result = _process_result(data)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/a0/helpers/extension.py", line 143, in _process_result
    raise exc
  File "/a0/helpers/extension.py", line 176, in _run_async
    data["result"] = await data["result"]
    ^^^^^^^^^^^^^^^^^^^^
  File "/a0/agent.py", line 817, in call_chat_model
    response, reasoning = await call_data["model"].unified_call(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/a0/models.py", line 523, in unified_call
    _completion = await acompletion(
                  ^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/utils.py", line 1638, in wrapper_async
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/utils.py", line 1484, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/main.py", line 617, in acompletion
    raise exception_type(
    ^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2323, in exception_type
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 350, in exception_type
    raise RateLimitError(
Assi Stanislas Seka