Welcome to the Out of the Box series — where I explore what can be built with no-code and low-code AI tools in 30 minutes or less.
No manuals.
No tutorials.
Just curiosity and creation in motion.
This time, I returned to Sora 2 a few months after my first test to see how the experience has evolved.
App: Sora by OpenAI
Time: Under 30 Minutes
Category: AI Video Creation / Prompt-Directed Video
Video Title: Move Over Rover, The Dog Days of Coding Are Over - Claude Code is The Cat's Meow
🎥 What Is Sora?
Sora is an AI video generation platform that transforms a simple text prompt into lifelike, cinematic scenes — complete with motion, lighting, and visual storytelling.
Think of it as having a director, camera crew, and editor… all powered by a prompt.
⚙️ Experience 1 — The First Test
A few months ago, I ran an Out of the Box experiment with Sora using a simple presenter-style scene.
The results were impressive for early generative video, but the workflow still felt experimental. The outputs were interesting, yet they didn't add much practical value beyond demonstrating what the technology could do.
If you’re curious about that original test, you can see the full post here:
That first experiment helped show what was possible, but the bigger question was how quickly the experience would evolve.
⚙️ Experience 2 — Revisiting It Today
For the second experiment, I tried something completely different — a playful, high-motion scene designed to test character behavior and storytelling.
Prompt theme:
A cat driving a quad runner at high speed — Fast & Furious style — with a dog riding on the back howling and clearly terrified.
The twist:
- The cat is labeled “Claude Code.”
- The dog is labeled “ChatGPT.”
Experiment 2 Video:
The scene captured the speed, chaos, and humor of the prompt surprisingly well: the quad rips forward, the dog clings to the back howling, and the cat drives on with total confidence.
More importantly, the workflow itself has improved significantly.
Two features stood out immediately:
🔁 Rework Video
Instead of starting over, you can tweak the prompt and regenerate variations — almost like directing another take.
➕ Extend Video
You can continue the scene beyond the original clip length, letting the story evolve without rebuilding everything.
These two capabilities make the experience feel far less like one-shot generation and much more like iterative video creation.
🔥 Why This Matters
The biggest shift isn’t just video quality — it’s creative control.
Early generative video felt like:
“Let’s see what the AI gives me.”
Now it feels closer to:
“Let’s refine this scene until it matches the vision.”
That said, while the tool is very cool and improving quickly, it is still nowhere near production-ready for most real business applications. The value today lies in experimentation, storytelling, and understanding where the technology is heading, not in replacing professional video production just yet.
💡 Result
Revisiting the same tool just a few months later shows how quickly the experience is evolving.
The technology is moving from impressive demos toward something that feels more like a real creative workflow.
⚡ Takeaway
Generative video is evolving fast.
What felt experimental just months ago now feels closer to directing scenes with prompts — refining, extending, and iterating until the result matches the idea.
If the last few months are any indication, we're still very early on the curve.