I'm working on a project where the inputs and outputs are quite long and the use case is complicated. I want to give the LLM examples in the prompt, but I don't think it's a good idea to increase the token usage too much, since that causes its own problems: what I'm seeing is that the more tokens I use, the more mistakes the model makes. Currently I'm giving it just one example, which doesn't seem to be enough, and the total token usage is already around 25k.
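
To make the trade-off concrete, here is a minimal sketch of how one could measure a prompt's token cost and cap the number of few-shot examples against a budget. It assumes an OpenAI-style model and the tiktoken library; the example strings, the 25k ceiling, and the greedy fill strategy are all illustrative placeholders, not a description of my actual setup:

```python
# Minimal sketch: estimate how many tokens a few-shot prompt will use
# before sending it, so the example count can be tuned against a budget.
# Assumes an OpenAI-style model and the tiktoken library; the strings
# and the 25k budget below are illustrative placeholders.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models

def count_tokens(text: str) -> int:
    """Return the number of tokens `text` occupies under this encoding."""
    return len(encoding.encode(text))

system_prompt = "..."                           # the (long) task instructions
examples = ["example 1 ...", "example 2 ..."]   # candidate few-shot examples
budget = 25_000                                 # rough ceiling where mistakes start

total = count_tokens(system_prompt)
included = []
for ex in examples:
    cost = count_tokens(ex)
    if total + cost > budget:   # stop adding examples once the budget is hit
        break
    total += cost
    included.append(ex)

print(f"{len(included)} examples, ~{total} tokens")
```

With something like this, you can at least see exactly how much of the 25k each example costs before deciding whether a second or third one is worth it.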