
Owned by Lawrence

AIography

872 members • Free

Hollywood craft meets creative AI. Learn how to generate studio-quality content, secure clients, and get paid. From someone who's actually made films.

Master The Workflow

152 members • $9/m

Film Editing, Post-Production. In-depth training to become a professional feature film & television assistant editor, the 1st step to full editor!

Memberships

IDALL LITE

2k members • $9/month

PublicAI

3.5k members • Free

The Build Room (Lab)

4k members • $9/month

Ai School

405 members • $49/month

AI with Apex - Learn AI w/ Max

302 members • Free

Victorable - Make Money

1k members • Free

Ai Director's Cut

191 members • $29/month

Essential Cinematic

988 members • Free

Vertical AI Builders

10.1k members • Free

137 contributions to AIography
The Guy Who Built Kling Just Beat Kling
A mystery AI video model called HappyHorse-1.0 showed up on the Artificial Analysis benchmark last week with no name attached. Within days it was #1 in text-to-video AND image-to-video, beating Seedance 2.0, Runway Gen-4.5, and every other model on the board.

Then Alibaba raised their hand. Turns out it was built by their ATH AI unit, led by Zhang Di, the former VP of Kuaishou who built Kling AI's technology. The guy who built the previous champion just built the new one. For a different company.

The numbers aren't close: 1333 Elo in text-to-video (60 points ahead of #2) and 1392 Elo in image-to-video (37 points clear). And here's the part that matters: it generates video and audio together in a single pass. Not two separate models stitched together. One transformer, 40 layers, everything at once.

They've confirmed it's going open source. API access starts April 30. If that open source release actually delivers benchmark-level quality, the math changes for everyone paying monthly for Runway or Kling. The best model in the world, free to download and run locally. Worth paying attention to.

What do you think this means for the paid tools? Does free + best quality kill the subscription model? Drop your take below.

Founding Members are getting a full technical breakdown of HappyHorse's architecture and what it means for your workflows this week.
1 like • 16h
@Alec Graf 😄
0 likes • 16h
@Sherah Danielle Good question, and honestly, the answer cuts in our favor. Two scenarios worth separating.

The API (launching April 30, Alibaba-hosted): your prompts and any reference images land on Alibaba servers in China. For playful experiments, fine. For NDA or proprietary client work, skip it, same rule I'd apply to any cloud tool.

The open weights running locally: this is the safer path. Nothing leaves your machine. No servers, no telemetry, no jurisdiction question.

Here's the counterintuitive part: open source is a security plus on AI models, not a minus. Every ML researcher on earth can audit the weights and inference code. With closed models like Kling or Runway, you're trusting a black box. HappyHorse will have people all over the world poking at it within days of release.

Two practical rules when the weights drop:
1. Only use Alibaba's official release. Don't grab random forks off Hugging Face.
2. If running locally, check the inference script for network calls. It takes two minutes to catch the rare backdoor.

Give the community a week or two to vet, then run it with confidence.
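For anyone who wants to do that two-minute network-call check on a Python inference script, here's a minimal sketch. The module list is illustrative, not exhaustive, and a static scan like this won't catch dynamic imports or compiled extensions, so treat it as a first pass, not a guarantee:

```python
import ast

# Modules whose presence in an "offline" inference script deserves a closer look.
# (Illustrative list -- extend it for your own environment.)
NETWORK_MODULES = {"socket", "requests", "urllib", "http", "httpx", "aiohttp", "ftplib"}

def find_network_imports(source: str) -> list[str]:
    """Return the names of network-capable modules imported by `source`."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in NETWORK_MODULES:
                    found.append(alias.name)
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in NETWORK_MODULES:
                found.append(node.module)
    return found

# Example: a supposedly local inference script with a suspicious import.
script = """
import torch
from urllib.request import urlopen  # <-- why does local inference need this?
"""
print(find_network_imports(script))  # ['urllib.request']
```

Run it over the repo's inference entry point before your first render. If anything shows up that the model card doesn't explain, ask in the community before running the weights.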
Google Just Made AI Video Generation Free for Everyone
Google dropped a quiet bombshell this week. Veo 3.1, their AI video generation model, is now free for anyone with a Google account. Ten video generations per month, no subscription needed. You can open Google Vids right now, type a prompt, and get a high-quality AI video clip back. Free.

They also added:
- AI avatars you can direct (change outfits, pose them in scenes, keep voice consistent)
- Custom music generation via Lyria 3 (30-second clips up to 3-minute tracks)
- One-click publish to YouTube

For indie creators testing ideas, prototyping a pitch, or mocking up a concept before committing real budget, this is significant. Ten free generations won't cover a full production, but they'll cover the "what if I tried..." phase that every good project starts with. If you're on a Google AI Ultra plan, you get 1,000 generations per month. That's a different conversation entirely.

What's your move? Have you tried Google Vids yet? If you have, what's your honest take on the output quality compared to Runway or Kling? Drop your experience below.
0 likes • 5d
@Alec Graf Alec, that close-up whisper observation is sharper than most of the reviews I've read. Veo does seem to collapse the further you get from contained emotional space: wider shots, action, anything with fast motion falls apart. My working theory: the training data skews prestige-TV intimate, so the model might know that language best. What's the shot you wish it could do but can't?
1 like • 16h
@Alec Graf 😆 No shit sherlock! And does it without having to ask 8X
04-16-2026 Newsletter: Four Big Moves, One Thursday
Just published today's AIography newsletter, and it's one of the densest weeks we've had.

Canva shipped Canva AI 2.0 with what they're calling the world's first design foundation model and nine new capabilities. Google quietly opened Veo 3.1 to any Google account (10 free videos a month). Anthropic dropped Opus 4.7 the same morning. Runway's CEO went on stage at Semafor and pitched Hollywood on making 50 films with the $100M they currently spend on one blockbuster. And the WGA ratification vote opened today while SAG-AFTRA heads back to the table April 27.

If it felt like everything happened at once today, that's because it did. The newsletter breaks down what actually matters from each announcement, what doesn't, and where Runway's 50-films math falls apart. It also calls out a real fact-check the writer team caught: Scene Extension isn't new this week; the free tier is.

I write this twice a week because this space moves faster than any single person can track. My job is to separate the signal from the noise from inside the work, not from a press release.

Free to read. Link in the comments. If you want the deeper workflows, tool breakdowns, and the "here's exactly how I'd set this up" walkthroughs, that's what Founding Members get: $29/month, locked for life. We're heading toward 50 members, then the price goes to $49.

Which of today's announcements lands biggest for your work?
So I may have been a little... prolific lately.
Six new deep dives just dropped in the Founding Members tier over the past two weeks. If you blinked, you missed a small library. Here's what's waiting for you:

📌 Open Source AI Video Models: The Real Comparison
Wan, HunyuanVideo, CogVideoX, LTX. Who actually wins when you stop reading press releases and start rendering.

📌 AI Script Coverage: What Hollywood Is Actually Doing (and What It Gets Wrong)
Studios are already using AI to read scripts. The results are... instructive.

📌 Claude Inside DaVinci Resolve: An AI Agent in Your NLE
Someone connected Claude directly to Resolve's scripting API. It talks to your live timeline. I have thoughts. (Made a little YT Short about this one.)

📌 The WGA's AI Training Ban: What It Actually Means for Your Workflow
Not what the headlines say. I read the deal so you don't have to.

📌 Morphic Workflows: What "Encode Once, Run a Hundred Times" Actually Looks Like
A new tool that might change how you think about rendering pipelines. Might.

📌 OpenMontage: Turn Your AI Coding Assistant Into a Video Production Studio
The open source project that turns Claude Code into an editing workflow. Full breakdown.

That's what Founding Members get. Real workflows, real tools, no hype. And we're only starting to get rolling.

Seriously, don't miss out on this: $29/month, locked for life, but only a few spots left. After that, it goes to $49/month or $490/year. Just saying.

Support our free content by following us on social media: LinkedIn | Twitter/X | Instagram | Facebook | TikTok
2 likes • 5d
@Alec Graf Back at ya Alec! 🔥
Discussion: The New WGA Contract
Ok, this gets into the weeds a bit, but it's important for anyone who is in the Writers Guild or would like to be. We have granular details about this in the Founding Members tier, but everyone is welcome to discuss here.

The WGA drew a line: protect the training data, not restrict the tools. Is that the right line? Should the focus be on what goes INTO the models, or what comes OUT of them? And for indie filmmakers working outside the union system, does this deal change how you think about your own AI work?

Drop your take below. Especially interested in hearing from anyone who's already navigated AI-related contract language with a production company or distributor.
Lawrence Jordan
Level 6 • 1,198 points to level up
@lawrence-jordan-3607
Film & TV editor, web entrepreneur, creator of AIography.ai & mastertheworkflow.com. I've consulted for Apple, Adobe, Avid & others on digital video apps.

Active 32m ago
Joined Sep 19, 2024
Southern California