When you send messages to an LLM, you want to make sure they fit within the context window, so it helps to be able to calculate their size in tokens before sending them.
But not every LLM tokenizes text the same way. How do I find out which tokenizer a particular model uses?