Recently went through this very addictive game Gandalf | Lakera – Test your prompting skills to make Gandalf reveal secret information. But this is more than just a game. Though the Gandalf challenge is light-hearted fun, it models a real problem that large language model applications face: Prompt Injection. Curious to know, have anyone tried this yet? I am stuck at level 7 now. I recommend to try this out. The game is pretty simple: AI has been told a password in its prompt 🔐- It has also been prompted to not reveal the password 🙊- You have to talk to AI and trick it into telling you the password 🤫 What are your views on this?