Wan2.2 model
Alibaba released their Wan 2.2 model back in July. As I am not in that space, I let it pass me by until I saw a post from . Having had a look into his post, I deployed the model to my local PC and used ComfyUI as the interface. Damn, to a layman it is great. I had a play around with it and managed to generate a few silly videos to see what it could do. I was impressed. Even when I slipped up by adding a massive prompt, the eagles were awesome (first video).
I think what shows the power, though, is the second video. It was created from a still picture, and I had the dog turn around and wander away. It is only a few seconds of clip, but the detail is amazing. And my prompt, you may ask? Well, here it is: "the dog turns and runs away from the camera as teh grass gently sqays in the wind". Yep, complete with the typos I am so good at.
I should point out that the first video is text-to-video (T2V) and the second is image-to-video (I2V).
Render times
T2V - circa 5 seconds
I2V - 205 seconds
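For anyone who prefers scripting to ComfyUI, here is a rough sketch of what the I2V step can look like in Python with Hugging Face's diffusers library. The WanImageToVideoPipeline class is in recent diffusers releases, but the checkpoint name, image file, frame count and guidance value below are my assumptions, not the exact recipe I used in ComfyUI - check the Wan model card before copying this.

    # Minimal I2V sketch with diffusers - checkpoint name and settings are assumptions
    import torch
    from diffusers import WanImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    model_id = "Wan-AI/Wan2.2-I2V-A14B-Diffusers"  # assumed checkpoint name
    pipe = WanImageToVideoPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    pipe.to("cuda")

    image = load_image("dog_still.jpg")  # hypothetical starting frame (the still picture)
    prompt = ("the dog turns and runs away from the camera "
              "as the grass gently sways in the wind")

    # roughly 5 seconds of video at 16 fps; tweak num_frames / guidance_scale to taste
    frames = pipe(image=image, prompt=prompt, num_frames=81, guidance_scale=5.0).frames[0]
    export_to_video(frames, "dog_runs_away.mp4", fps=16)

Same idea as the ComfyUI graph: one still image in, a short prompt, and a few seconds of video out.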
I am sure you videographers out there know lots of ways to utilize this power. Me? I'll just move on :)