
Memberships

AIography: The Pro AI Film Lab

828 members • Free

Master The Workflow

143 members • $9/m

3 contributions to AIography: The Pro AI Film Lab
Adobe Just Built an AI That Does Your First Cut
Here's Why I'm Not Worried.

Adobe just dropped a new Firefly feature called "Quick Cut." You upload raw footage, type a description of what the video should be—interview, product demo, travel vlog—and it automatically produces a rough cut.

Let that sink in for a second. AI is now assembling edits from raw footage based on a text prompt. It pulls from Adobe, Google, OpenAI, and Runway models. It targets product reviewers, podcasters, marketers—anyone who needs a fast edit without hiring an editor.

I can already hear the panic. "They're coming for our jobs." No. They're not. Here's why.

A rough cut is not an edit. Every editor in this community knows the difference. A rough cut is assembly. It's organization. It's the starting point. The CRAFT of editing—pacing, rhythm, emotional timing, knowing what to cut and what to keep, building tension, finding the story inside the footage—that's what happens AFTER the rough cut.

Quick Cut is doing the part of the job that was already the least creative. It's pulling selects and assembling them in order. That's assistant editor work at best—and even assistants bring more judgment to it than an algorithm.

This is actually good news for editors. When the rough assembly takes 5 minutes instead of 5 hours, you get to spend more time on the part that actually matters—the storytelling. The craft. The decisions. This is exactly what I mean when I say everything becomes post. AI is collapsing the mechanical parts of the pipeline so humans can focus on the creative parts.

The question isn't whether AI can assemble footage. It can. The question is: who decides if the assembly is any good? That's you. That's always been you.

What do you think? Are tools like this a threat or an opportunity? Drop your take below.
2 likes • 9d
I can't wait to try it more. I made an attempt just now, but realized that you have to upload the footage, which is not desirable for so many reasons. Still, I'm really hoping this can help me cut down my wife's podcast.
Runway Becomes a Multi-Model Platform
Kling, Sora, WAN, GPT-Image Under One Roof

TL;DR: Runway has integrated third-party AI models directly into its platform, including Kling 3.0, Kling 2.6 Pro, Kling 2.5 Turbo Pro, WAN2.2 Animate, GPT-Image-1.5, and Sora 2 Pro — with more models coming soon. Through Sunday, commenting "MODELS" on their X post gets you 50% off Pro Yearly plans.

Key Takeaways:
- Kling 3.0, Sora 2 Pro, WAN2.2 Animate, and GPT-Image-1.5 are all now accessible directly within Runway's interface
- Single-platform workflow — no more juggling multiple tabs, accounts, and credit systems across different AI video tools
- Runway's own Gen-3 Alpha is still available alongside the third-party models, letting you compare outputs side by side
- WAN2.2 Animate brings the open-source Wan model's animation capabilities into a polished UI for the first time
- 50% off Pro Yearly through Sunday for early adopters

Why It's Important: This is a seismic shift in how AI filmmakers work. Until now, professional workflows meant maintaining separate subscriptions to Runway, Kling, Sora, and others — each with different interfaces, credit systems, and export formats. Runway is positioning itself as the "editing suite" of AI video, not just another model provider.

For filmmakers, this means you can prompt the same scene across Kling 3.0, Sora 2 Pro, and Gen-3 Alpha, compare the results, and pick the best take — all without leaving your timeline. This is the Netflix-of-models approach, and it fundamentally changes the competitive landscape.

Just as a side note, this is exactly how Lumarka is designed: access to all major models in the character, shot, and take rendering interfaces. What do they say about great minds? 😎

Source: r/runwayml — Official Announcement
0 likes • 11d
This is awesome, thanks! For more data and research capabilities I use a similar service called Perplexity. It basically scours a few different models to get you the best answers, and I've also successfully used it for some video work and image generation. I can see how this is a game changer — logging into various models to try and get a result isn't very practical.
The Sky Has Been Falling for 120 Years 🌩️
Hey everyone,

You've probably seen the news: Darren Aronofsky just released "On This Day… 1776," a short-form Revolutionary War series created through his AI studio with Google DeepMind. SAG voice actors, AI visuals.

I haven't watched it yet, so I'm not here to tell you it's good or bad. But I AM here to talk about the reaction — because we've seen this exact movie before. And I mean that literally.

1903 — "The Great Train Robbery" comes out. Audiences panic at the image of a gun pointed at the camera. Some people want films banned entirely.

Late 1920s — Sound arrives. Silent film purists, including legendary filmmakers, declare it a gimmick that will destroy the art form. Chaplin refuses to make a talkie for years.

Then it was color. Television. Home video. CGI. Digital editing. Streaming.

The sky has been falling for 120 years. And yet here we are — with more ways to tell stories than at any point in human history. Now it's AI's turn to be the villain.

Look, I get it. There are real ethical concerns. We should absolutely have conversations about compensation, attribution, and impact on working artists. Those conversations matter, and I'm not dismissing them.

But the instant pile-on? The "AI slop" mockery before most people have even watched it? That's not thoughtful criticism. That's fear wearing the costume of principle.

An Academy Award-nominated filmmaker is experimenting publicly. Taking a risk. Whether this project lands or not, he's pushing into territory most of Hollywood is too scared to touch.

For those of us in this community — many of you would never have had access to traditional production resources. These tools are giving you a voice. That's not a threat to creativity. That's an expansion of it.

So yeah. I'm going to watch Aronofsky's series with an open mind. Maybe it's great. Maybe it's rough around the edges. Either way, I'd rather see someone swinging than an industry paralyzed by the same fears it's had since a train first rolled toward a camera.
2 likes • 15d
Lots of things are great about this. However, it's notable that I didn't feel connected to a single moment of acting performance here. It hasn't changed my impression that generative AI is, and will be, super helpful for creating pitch and spec work. But we'll also see a lot of people try to make whole films and series with way too much AI, and that feel could become saturated and boring very quickly. That being said, we as professionals still definitely need to be learning it.
Eric Kenehan
1
1 point to level up
@eric-kenehan-6682
I'm a TV, film, and YouTube editor. Love playing guitar and KGLW.

Active 9d ago
Joined Feb 19, 2026
INTP
Agoura Hills, CA