I tried out Higgsfield’s new Recast feature and wanted to share my thoughts since I know a few of us have been experimenting with AI video tools lately.
Here’s the rundown: Recast lets you take an existing video, like one of you talking, dancing, or doing a product demo, and replace yourself (or the person in the clip) with an AI-generated character. The new character then performs the exact same actions and movements from your original video. It’s similar to what WAN and Runway Act do, but with Higgsfield’s own style of avatars and motion mapping.
Now for my experience… 😅 The output wasn’t quite there yet. It distorted my realistic AI twin’s face, made her body smaller than it actually is, and noticeably lightened her skin tone. So while it’s a cool concept, I’d say it still needs some fine-tuning before it’s ready for realistic content.
For now, I’ll be sticking with WAN, which does a better job of preserving proportions and facial details. But I’ll keep watching Higgsfield’s updates, because this feature could become a game-changer once they refine it.
Has anyone else tested Recast yet? 👀 What kind of results did you get?