Activity

[Contribution heatmap: May through April]

Memberships

Amplify Views • 28.4k members • Free
AI Automation (A-Z) • 152.8k members • Free
Max Business School™ • 264.7k members • Free
The AI Advantage • 118.2k members • Free
Creator Profits • 18.9k members • Free
AIography • 871 members • Free
Ai Filmmaker Lab • 360 members • $12/month
AI Automation Society Plus • 3.5k members • $99/month

4 contributions to AIography
Adobe Just Built an AI That Does Your First Cut
Here's Why I'm Not Worried.

Adobe just dropped a new Firefly feature called "Quick Cut." You upload raw footage, type a description of what the video should be (interview, product demo, travel vlog) and it automatically produces a rough cut. Let that sink in for a second: AI is now assembling edits from raw footage based on a text prompt. It pulls from Adobe, Google, OpenAI, and Runway models. It targets product reviewers, podcasters, marketers, anyone who needs a fast edit without hiring an editor.

I can already hear the panic. "They're coming for our jobs." No. They're not. Here's why.

A rough cut is not an edit. Every editor in this community knows the difference. A rough cut is assembly. It's organization. It's the starting point. The CRAFT of editing (pacing, rhythm, emotional timing, knowing what to cut and what to keep, building tension, finding the story inside the footage) is what happens AFTER the rough cut.

Quick Cut is doing the part of the job that was already the least creative. It's pulling selects and assembling them in order. That's assistant editor work at best, and even assistants bring more judgment to it than an algorithm.

This is actually good news for editors. Here's why: When the rough assembly takes 5 minutes instead of 5 hours, you get to spend more time on the part that actually matters: the storytelling. The craft. The decisions. This is exactly what I mean when I say everything becomes post. AI is collapsing the mechanical parts of the pipeline so humans can focus on the creative parts.

The question isn't whether AI can assemble footage. It can. The question is: who decides if the assembly is any good? That's you. That's always been you.

What do you think? Are tools like this a threat or an opportunity? Drop your take below.
0 likes • Mar 17
@Aeris Thuy Khanh Nguyen 100 percent
Dusting myself off, getting back on the horse.
Hey all, I'm Lawrence (Larry) Jordan from California. I was a professional film & TV editor here in LaLa Land for 30+ years, and with luck and good timing, I became one of the first people to edit a movie digitally on a computer, on a system some of you may have heard of called the Avid Media Composer. That experience got me hooked on tech, which led me to building websites around my area of expertise. In 2018, I launched a very niche site and course that has now put 2,000+ students through it across 40+ countries. The turbulence in the film industry has slowed that biz to a crawl, but about 18 months ago I started immersing myself in AI, and I've been officially obsessed ever since. I've been building all kinds of tools with AI but haven't sold anything yet. However, I'm developing a major SaaS platform for AI filmmaking that will be in beta Q1 2026. I also run a newsletter and free Skool community called AIography for anyone interested in AI filmmaking. I really dug Trevor's intro video and know I can learn a lot from him and his group. So check it out if you're interested.
0 likes • Mar 17
@Monique Johnson Monique OMG how are you? 😅
Quick Tip: Creating Consistent Characters for Video
Hey everyone, I stumbled upon this workflow from @TechHalla on X. He's an amazing resource on all things AI image and video, and he posted this breakdown on how he gets consistent characters, which, as we all know, is one of the big stumbling blocks when trying to put together a video that makes sense. I've attached the workflow here as a PDF. A few notes: He works with Higgsfield, but you can try this out on any decent generative video platform that has Nano Banana, aka Gemini-3-pro-image-preview 2K (either standard or Pro). If you have a Google Gemini account, you can access it there as well. Quick tip: to use his prompts, just drag and drop the image file into ChatGPT and ask it to convert the image to text. Definitely go follow TechHalla on X; he's worth adding to your feed.
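If you'd rather drive the character step from code instead of the Gemini web app, here's a minimal sketch using Google's google-genai Python SDK. To be clear about assumptions: the model ID is the one named above and may differ from what your account exposes, and the reference-image-plus-prompt pattern is my guess at how the workflow maps to the API, not TechHalla's published method.

```python
# Sketch: keep a character consistent by passing a reference image
# alongside the text prompt. Assumes `pip install google-genai pillow`
# and GEMINI_API_KEY set in the environment.
from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

reference = Image.open("character_ref.png")  # your character still or sheet

response = client.models.generate_content(
    # Model ID as named in the post; check your account for the exact string.
    model="gemini-3-pro-image-preview",
    contents=[
        reference,
        "Keep this exact character (face, hair, wardrobe) and place them "
        "in a rain-soaked neon alley at night, medium shot, cinematic lighting.",
    ],
)

# Save any image parts the model returns.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data:  # image bytes come back as inline data
        with open(f"shot_{i}.png", "wb") as f:
            f.write(part.inline_data.data)
```

Same idea as dragging the reference into the web UI; the win is that you can loop it across a shot list.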
1 like • Dec '25
Thank you!
Kling O1 Video Engine Now Available as API via Fal
THIS. IS. HUGE.

TL;DR: Kling's new O1 video engine offers multimodal control (text, images, video) and is now accessible via API on fal.ai, making advanced video generation available for developers and creators building custom workflows.

Key Takeaways:
✨ Multimodal input support: combine text prompts with image references and video context for precise control
🎬 Handles character consistency, outfit references, and location elements, key pain points in video generation
🤖 API-first release enables integration into custom tools and workflows rather than web-only access

Why It's Important: This is a significant distribution play for Kling. While competitors like Runway and Pika focus on proprietary web interfaces, Kling O1's API-exclusive launch via fal signals a developer-first strategy. For creators, this matters less if you're a casual user, but much more if you're building pipelines or need programmatic video generation.

The multimodal approach, especially element and character referencing, addresses one of video AI's biggest challenges: maintaining visual consistency across shots. If Kling O1 delivers on this promise at API speed and scale, it could become the backbone for AI filmmaking tools the same way Stable Diffusion became foundational for image workflows.

The exclusivity with fal is worth noting. Fal has positioned itself as the fast, reliable infrastructure for AI media generation. This partnership suggests Kling is prioritizing performance and developer experience over direct consumer access, at least initially. Expect web interfaces to follow, but this API launch tells you where serious production workflows might head.

My Take: The API-first launch is smart; developers will build the interfaces Kling doesn't have to. Kling O1 is the first multimodal model that lets you create videos with multiple consistent characters and place them in any scene you can imagine. At the end of the day, it's essentially Nano Banana for video.
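For anyone who wants to poke at this from a pipeline, here's a minimal sketch using fal's official fal_client Python package. The endpoint ID, argument names, and result shape are my assumptions based on how other fal video models are typically exposed, not confirmed details of the Kling O1 launch; check the model page on fal.ai for the real values.

```python
# Sketch: calling a Kling video endpoint on fal.
# Assumes `pip install fal-client` and FAL_KEY set in the environment.
import fal_client

result = fal_client.subscribe(
    # Hypothetical endpoint ID; look up the actual Kling O1 path on fal.ai.
    "fal-ai/kling-video/o1",
    arguments={
        # Text prompt plus an image reference: the multimodal combo described above.
        "prompt": "The detective from the reference walks through a rain-soaked alley",
        "image_url": "https://example.com/character_ref.png",  # placeholder URL
    },
    with_logs=True,  # stream queue/progress logs while the job runs
)

# Result shape is model-specific on fal; video models usually return a file object.
print(result["video"]["url"])
```

The point of subscribe() is that it queues the job and blocks until the video is ready, which is exactly the shape you want inside a batch or shot-list script.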
1 like • Dec '25
Whoa
Setty McIntosh
Level 1 • 3 points to level up
@setty-mcintosh-1056
Post Production Supervisor | Line Producer
Online now
Joined Nov 27, 2025
INFJ