Prompt Engineering, RAG and Fine-Tuning
Interesting YouTube video on the differences between Prompt Engineering, RAG and Fine-Tuning, on when to use each, and on how they can be combined.
The presenter shows a Venn diagram illustrating the differences and similarities between the three approaches, which I found instructive. If you want to make the LLM's outputs more predictable, for example coaxing it into giving structured responses (JSON, CSV, whatever), fine-tuning is the way to go.
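To make the fine-tuning option concrete, here is a minimal sketch of what training data for structured output might look like. It assumes an OpenAI-style chat-format JSONL layout; the field names and the JSONL convention are assumptions, not something shown in the video.

```python
import json

# Hypothetical training examples that teach the model to always answer
# with a JSON object (chat-format records; field names are an assumption).
examples = [
    {
        "messages": [
            {"role": "user", "content": "Extract the city and country: 'I live in Oslo, Norway.'"},
            {"role": "assistant", "content": '{"city": "Oslo", "country": "Norway"}'},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Extract the city and country: 'Kyoto is in Japan.'"},
            {"role": "assistant", "content": '{"city": "Kyoto", "country": "Japan"}'},
        ]
    },
]

# Fine-tuning pipelines commonly expect one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(e) for e in examples)
```

With enough examples like these, the fine-tuned model internalises the output format instead of needing it restated in every prompt.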
If you are prototyping desirable outputs, you will do some prompt engineering to see what works. And if you want the LLM to have access to the latest knowledge, well, you won't get around Retrieval Augmented Generation.
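The RAG idea can be sketched in a few lines: retrieve the most relevant document for a query, then stuff it into the prompt. This toy version uses word overlap for retrieval and stops before the LLM call; real systems use vector embeddings and an actual model, both of which are stubbed out here as assumptions.

```python
def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query (toy retrieval)."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Augment the user's question with the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Tiny in-memory "knowledge base" standing in for a vector store.
docs = [
    "The 2024 release added streaming support to the API.",
    "Our office cafeteria serves lunch from noon to two.",
]

query = "What did the 2024 release add?"
prompt = build_prompt(query, retrieve(query, docs))
# `prompt` would now be sent to the LLM, grounding its answer in fresh data.
```

The point is that the model's knowledge cutoff stops mattering: whatever is in the retrieved context is what the model answers from.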
The video is easy to follow. The presenter engages in a bit of self-promotion at the end, but that doesn't detract from the knowledge they manage to convey. I feel I understand the nuances of fine-tuning better than before.