Overview:
The Gemini API on Vertex AI supports two model parameters, response_logprobs and logprobs, that return the log probabilities of the model's output tokens, giving developers a deeper view into the model's decision-making process for each generated token.
When enabled, the response_logprobs parameter instructs the model to return the log probability of each token it generates. The logprobs parameter specifies how many top alternative tokens to include at each step, along with their associated log probabilities.
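As a minimal sketch, the two parameters can be set in the generationConfig of a generateContent request. The field names below follow the REST API's camelCase convention; the model name and prompt are placeholders, not values from this tutorial.

```python
import json

# Hedged sketch: a generateContent request body enabling log probabilities.
# The prompt text is a placeholder for illustration.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "The capital of France is"}]}
    ],
    "generationConfig": {
        # Return the log probability of each chosen output token.
        "responseLogprobs": True,
        # Also include the top 5 alternative tokens at each step.
        "logprobs": 5,
    },
}

print(json.dumps(request_body, indent=2))
```

The same options are typically exposed by the SDKs as response_logprobs and logprobs fields on the generation config object.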
Objectives:
In this tutorial, you will learn how to set and use the response_logprobs and logprobs model parameters in the Gemini API on Vertex AI. You will complete the following tasks:
- Set the response_logprobs and logprobs parameters
- Process and interpret log probabilities output
- Use log probabilities in classification tasks
- Use log probabilities in auto-complete
- Use log probabilities in RAG evaluation
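To preview the "process and interpret" step, the sketch below converts log probabilities back to plain probabilities with exp(). The payload here is mocked to mirror the general shape of a logprobs result (chosen token plus top alternatives per step); it is not an actual API response, and the field names are illustrative.

```python
import math

# Mocked logprobs payload: one generation step with the chosen token
# and two top alternatives, each carrying a log probability.
logprobs_result = {
    "chosen_candidates": [
        {"token": "Paris", "log_probability": -0.01},
    ],
    "top_candidates": [
        {
            "candidates": [
                {"token": "Paris", "log_probability": -0.01},
                {"token": "Lyon", "log_probability": -5.2},
            ]
        },
    ],
}

for step, (chosen, top) in enumerate(
    zip(logprobs_result["chosen_candidates"], logprobs_result["top_candidates"])
):
    # exp(log p) recovers the probability on a 0-1 scale.
    prob = math.exp(chosen["log_probability"])
    print(f"step {step}: chose {chosen['token']!r} with p ~ {prob:.3f}")
    for alt in top["candidates"]:
        alt_prob = math.exp(alt["log_probability"])
        print(f"  alternative {alt['token']!r}: p ~ {alt_prob:.3f}")
```

A chosen-token probability close to 1.0 signals high model confidence at that step, which is the basis for the classification, auto-complete, and RAG-evaluation tasks listed above.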