litellm.exceptions.BadRequestError: litellm.BadRequestError: Lm_studioException - Error code: 400 - {'error': 'Context size has been exceeded.'}

Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 823, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 454, in make_openai_chat_completion_request
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 436, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2589, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': 'Context size has been exceeded.'}
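
The 400 here is LM Studio rejecting a prompt that is longer than the context window the model was loaded with, surfaced through litellm as a BadRequestError. Below is a minimal sketch of one way to recover at the call site, assuming the request goes through litellm.completion; the model id, message history, and the 4096-token budget are placeholders, not values taken from the traceback.

import litellm
from litellm.utils import trim_messages

MODEL = "lm_studio/local-model"  # hypothetical model id, not from the traceback
messages = [
    {"role": "user", "content": "..."},  # hypothetical conversation history
]

try:
    response = litellm.completion(model=MODEL, messages=messages)
except litellm.exceptions.BadRequestError as e:
    # LM Studio reports an overlong prompt as a generic HTTP 400, so match on
    # the error text, trim the oldest turns to fit, and retry once.
    if "Context size has been exceeded" not in str(e):
        raise
    # 4096 is an assumed context budget; set it to the context length the
    # model was actually loaded with in LM Studio.
    messages = trim_messages(messages, max_tokens=4096)
    response = litellm.completion(model=MODEL, messages=messages)

Alternatively, raising the context length in LM Studio's model load settings (if the model and available memory allow it) avoids the trim-and-retry path entirely.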