So what does this mean? Prompting is really important.
The prompt that you construct and pass into the LLM will determine what comes out of the LLM, and that in turn will determine the behavior of the agent.
This prompt isn't just one big string.
It's actually made up of a bunch of different parts. And all these parts come from different places.
They could come from:
- Retrieval steps
- The system message
- Conversation history
So when you construct the context that you pass into the LLM, it's really, really important to be able to control exactly what goes in there, because that will affect what comes out.
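To make that concrete, here's a minimal sketch in plain Python of what assembling that context from its parts might look like. The function name, the part labels, and the formatting are illustrative assumptions, not any particular framework's API.

```python
# A minimal sketch (plain Python, no framework) of assembling one prompt
# from several distinct parts. Names and formatting here are illustrative.

def build_prompt(system_message: str,
                 retrieved_docs: list[str],
                 conversation_history: list[dict],
                 user_input: str) -> str:
    """Concatenate the pieces of context into one string for the LLM."""
    parts = [system_message]

    # Results of retrieval steps (e.g. snippets pulled from a document store)
    if retrieved_docs:
        parts.append("Relevant context:\n" + "\n".join(retrieved_docs))

    # Prior turns of the conversation
    for turn in conversation_history:
        parts.append(f"{turn['role']}: {turn['content']}")

    # The current user input
    parts.append(f"user: {user_input}")
    return "\n\n".join(parts)


prompt = build_prompt(
    system_message="You are a helpful assistant for our docs site.",
    retrieved_docs=["Snippet about pricing...", "Snippet about refunds..."],
    conversation_history=[
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello! How can I help?"},
    ],
    user_input="What is your refund policy?",
)
print(prompt)
```

The point of the sketch is just that each part is an explicit input: if you control what goes into `system_message`, `retrieved_docs`, and `conversation_history`, you control the full context the LLM sees.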