TechCrunch: OpenAI launches Flex processing
I’ll admit—when I first saw the headline, I braced for yet another clever excuse to charge more for AI. But to my surprise, OpenAI is actually going the other direction (sort of).
Their new “Flex” API pricing is a clear move to onboard more users by trading performance for affordability. After all, their servers buckled under the pressure when they launched their video generator, so they clearly need a way to absorb demand without breaking the bank (or their infrastructure).
Flex cuts token pricing in half for their o3 and o4-mini models, which sounds great... until you hit the fine print: users on the cheaper tier must accept slower response times and “occasional resource unavailability.” Translation? This isn’t for anything mission-critical. It’s a sandbox for experimentation, not production.
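For anyone wanting to kick the tires, here’s a minimal sketch of what a Flex request looks like with the OpenAI Python SDK, using the documented `service_tier="flex"` option. The long timeout, retry loop, and prompt are illustrative choices on my part, not official guidance.

```python
import time

from openai import OpenAI

# Flex requests can run much longer than the default timeout,
# so give the client generous headroom. Assumes OPENAI_API_KEY is set.
client = OpenAI(timeout=900.0)


def flex_request(prompt: str, retries: int = 3) -> str:
    """Send a chat completion on the half-price Flex tier, retrying
    if capacity is temporarily unavailable."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="o4-mini",          # one of the two Flex-eligible models
                service_tier="flex",      # opt in to the cheaper, best-effort tier
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception:
            # "Occasional resource unavailability" surfaces as an API error;
            # back off and try again, or give up after the last attempt.
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)


if __name__ == "__main__":
    print(flex_request("Summarize yesterday's batch of support tickets."))
```

That wait-and-retry pattern fits the kind of asynchronous, non-urgent workloads (evaluations, data enrichment, batch jobs) this tier is pitched at.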
Still, it’s a smart play in the face of rising AI compute costs and competition from Google’s new Gemini Flash model. Just don’t build anything you need to rely on with it.