They show this nice Venn diagram illustrating the differences and similarities between the three, which I found instructive. If you want to make the LLM's outputs more predictable, for example by cajoling it into giving structured responses (JSON, CSV, whatever), you want to go for fine-tuning.
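To make that concrete: this isn't from the video, but fine-tuning for structured output boils down to training on pairs of free-form inputs and the exact structure you want back. Here's a minimal sketch of what such a dataset might look like in the JSONL chat format several fine-tuning APIs accept (the examples and exact schema are illustrative; check your provider's docs):

```python
import json

# Hypothetical training examples: each pairs a free-form request
# with the exact JSON structure we want the model to learn to emit.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Summarize: Acme Corp shipped 42 units in March."},
            {"role": "assistant", "content": json.dumps(
                {"company": "Acme Corp", "units": 42, "month": "March"})},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Summarize: Globex sold 7 units in May."},
            {"role": "assistant", "content": json.dumps(
                {"company": "Globex", "units": 7, "month": "May"})},
        ]
    },
]

# Fine-tuning services commonly expect one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```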
If you are prototyping desirable outputs, you will do some prompt engineering to see what works. And if you want the LLM to have access to the latest knowledge, well, you won't get around Retrieval-Augmented Generation (RAG).
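Again not from the video, but the RAG loop itself is short: retrieve the documents most relevant to the question, then stuff them into the prompt as context. Here's a deliberately toy sketch, where retrieval is plain word overlap; a real system would use learned embeddings and a vector store:

```python
from collections import Counter
import math

# Toy corpus standing in for "the latest knowledge" the base model lacks.
documents = [
    "The v2.3 release added streaming responses to the API.",
    "Rate limits were raised to 500 requests per minute in April.",
    "The legacy completions endpoint is deprecated as of June.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

query = "What are the current rate limits?"
context = "\n".join(retrieve(query))

# The retrieved passages are prepended to the question, so the LLM
# answers from up-to-date context instead of stale training data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```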
The video is easy to follow. The presenter does engage in a bit of self-promotion at the end, but that doesn't detract from the knowledge they manage to convey. I feel I understand the nuances of fine-tuning better than before.