

33 contributions to Agent Zero
Openclaw vs. Agent Zero
I was skeptical about OpenClaw. It sounded too powerful, and dangerously so. So I ran it. I gave it a device and a name, and I'm honestly impressed at how capable it is. Still, I'm already spoken for: Agent Zero has been my one and only agent for over a year, and I built my system to make work and fun more to my taste. It's still powerful, but it's not dangerously powerful.

I've run huge cloud agents and small agents locally, and it's just a fact that models matter. It's said that today's software would be useless without today's models, and I'd absolutely agree. But what about today's hardware? Today's hardware absolutely plays a role in outcomes. Whatever outcomes AI arrives at, it has to live somewhere. My agents live next to each other on my desk but in completely different environments: Agent Zero lives in Kali Linux and inside a Windows Docker container, while OpenClaw resides autonomously on a Mac Mini. I haven't had a Mac in forever, and it's nothing like what I get from Windows. I appreciate all three OSes for different reasons; they all have strengths and weaknesses.

After a lot of research, and after waiting for the other shoe to drop (it has for many users), OpenClaw is operating and under my control. I recommend using both. Agent Zero is enterprise certified: secure and fully functional at the same time. OpenClaw is powerful when given its own machine; giving it a VPS is a waste of its potential. Can Agent Zero do things that OpenClaw cannot? Absolutely. Can OpenClaw do things that Agent Zero cannot? Absolutely. Most abilities depend on the environment the agent is in and how the user set it up.
Poll
47 members have voted
1 like • 24d
@Lazar Mateev context is still an issue that needs work. I saw somewhere that a simple "hello" interaction is no less than 50k tokens. Memory on OpenClaw is subpar as well: it's not going to clean up its room if you don't tell it to, and it will leave messes and forget them.
2 likes • 16d
@Justin Brown chimeras of agentic environments are where we're at.
Error with LM Studio local models
Hi there https://youtu.be/ZvJ78aGLcSI?si=KZqe-XfHcep6VyMh
By following the tutorial in this video, I'm connecting the model to A0 with LM Studio. I'm getting the same error with different models. Does anyone know the solution?

Error: litellm.exceptions.MidStreamFallbackError: litellm.ServiceUnavailableError: litellm.MidStreamFallbackError: litellm.APIConnectionError: APIConnectionError: OpenAIException - Cannot truncate prompt with n_keep (9041) >= n_ctx (4096)
Original exception: APIConnectionError: litellm.APIConnectionError: APIConnectionError: OpenAIException - Cannot truncate prompt with n_keep (9041) >= n_ctx (4096)

Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 1812, in __anext__
    async for chunk in self.completion_stream:
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_streaming.py", line 147, in __aiter__
    async for item in self._iterator:
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_streaming.py", line 193, in __stream__
    raise APIError(
openai.APIError: Cannot truncate prompt with n_keep (9041) >= n_ctx (4096)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 1996, in __anext__
    raise exception_type(
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2328, in exception_type
    raise e  # it's already mapped
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 569, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: APIConnectionError: OpenAIException - Cannot truncate prompt with n_keep (9041) >= n_ctx (4096)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/a0/agent.py", line 454, in monologue
    agent_response, _reasoning = await self.call_chat_model(
  File "/a0/agent.py", line 808, in call_chat_model
    response, reasoning = await model.unified_call(
  File "/a0/models.py", line 511, in unified_call
    async for chunk in _completion:  # type: ignore
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 2006, in __anext__
    raise MidStreamFallbackError(
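The error says A0's prompt (~9k tokens of n_keep) is bigger than the 4096-token context (n_ctx) the model was loaded with in LM Studio, so raising the context length when loading the model is the usual fix. As a rough guard on the client side, a preflight check like this sketch can flag oversized prompts before they reach the server. This is my own illustration, not A0's actual code, and the 4-characters-per-token heuristic is a crude assumption, not a real tokenizer count:

```python
# Rough preflight check: estimate whether a prompt fits the model's
# context window before sending it. The ~4 chars/token heuristic and
# the 4096 default mirror LM Studio's common default, not exact values.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, n_ctx: int = 4096, reserve: int = 512) -> bool:
    """True if the prompt plus a reply reserve fits inside n_ctx."""
    return estimate_tokens(prompt) + reserve <= n_ctx
```

If the check fails, either reload the model in LM Studio with a larger context length or trim the prompt before retrying.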
1 like • 22d
Kudos to everyone who uses local LLMs. You may not get the best performance available, but you're doing AI right. Privacy and security are invaluable in this information age.
1 like • 22d
@Lamin Jobe you don't need v1 for Ollama.
Is it possible to change settings programmatically?
Hello everyone! I'm trying to implement a dynamic model change to rotate between free models on OpenRouter and a Gemini model (with the API key that I have). I created a script and a skill for that, but I've noticed that even after changing settings.json, the interface ignores the changes. How do the settings actually work? Is the file only saved when we update via the UI? Are the settings held in memory?
0 likes • 24d
Did you use the behavior_update tool to incorporate the skill?
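I'm not sure exactly how A0 persists its settings, but if the app loads settings into memory at startup and only writes the file from the UI, direct edits to settings.json can be ignored or overwritten until the app re-reads the file. A generic sketch of a safe programmatic update (hypothetical path and keys, not A0's actual API) looks like this:

```python
import json
import os
import tempfile

def update_settings(path: str, changes: dict) -> None:
    """Merge `changes` into a JSON settings file with an atomic replace.

    Caveat: if the running app caches settings in memory, it won't see
    this edit until it re-reads the file (or restarts) -- which matches
    the behavior described above, where the UI ignores direct edits.
    """
    with open(path, "r", encoding="utf-8") as f:
        settings = json.load(f)
    settings.update(changes)
    # Write to a temp file in the same directory, then atomically swap it
    # in, so a crash mid-write can't leave a half-written settings file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(settings, f, indent=2)
        os.replace(tmp, path)
    except Exception:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

For model rotation specifically, it may be more reliable to find whatever reload hook or API the app exposes than to race its in-memory copy of the settings.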
Memory issue for 0.98
litellm.exceptions.BadRequestError: litellm.BadRequestError: Lm_studioException - Error code: 400 - {'error': 'Context size has been exceeded.'}

Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 823, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
    result = await func(*args, **kwargs)
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 454, in make_openai_chat_completion_request
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 436, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2589, in create
    return await self._post(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': 'Context size has been exceeded.'}
0 likes • Feb 13
Your context window settings need to be looked at. I use LM Studio on one of my Agent Zero builds with only a 32k context window, and I set it in both LM Studio and Agent Zero. Adjust your context size to fit your hardware, and make sure the two settings are consistent with each other.
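One way to keep a fixed window like 32k from overflowing is to trim the oldest chat turns before each call. The sketch below is only an illustration of that idea, not Agent Zero's actual history management, and it uses a rough 4-characters-per-token estimate instead of a real tokenizer:

```python
# Sketch: drop the oldest turns so the chat history fits a token budget.
# Assumes OpenAI-style message dicts ({"role": ..., "content": ...}).

def rough_tokens(msg: dict) -> int:
    """Crude ~4 chars/token estimate for one message."""
    return max(1, len(msg.get("content", "")) // 4)

def trim_history(messages: list[dict], budget: int = 32_000) -> list[dict]:
    """Keep a leading system message plus the newest turns under budget."""
    system = messages[:1] if messages and messages[0].get("role") == "system" else []
    rest = messages[len(system):]
    kept: list[dict] = []
    total = sum(rough_tokens(m) for m in system)
    for msg in reversed(rest):  # walk newest-first, stop when over budget
        cost = rough_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return system + list(reversed(kept))
```

In practice you would also want summarization of the dropped turns instead of silently forgetting them, but the budget check itself is the part that prevents the 400 error above.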
Powering Agent Zero with a CLI
Guys, is there any way we can power Agent Zero with the Gemini CLI, a ChatGPT Pro account, or a Claude Pro account?
1 like • Jan 30
Google provides a free tier for their API. I'm not sure if you can use ChatGPT. Claude Code is a definite no-no (I hear they may ban your API access). I recommend z.ai's code plan; that's what I use.
Theo Wilson
@theo-wilson-9360
"How do I run this '...'?"

Joined Jul 17, 2025