OpenAI has unveiled o3 and o4-mini
OpenAI has unveiled o3 and o4-mini, its latest reasoning models, alongside Codex CLI, a new open-source coding agent.
o3 is OpenAI’s most advanced reasoning model to date, while o4-mini delivers strong performance at a lower cost. Both models can now reason with images, handling blurry or rotated visuals by adjusting them internally. They can independently use all ChatGPT tools, including web browsing, Python, and image processing.
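Although the post focuses on ChatGPT, the same models are also reachable through OpenAI's API. As a rough sketch (not taken from the announcement; the model name, prompt, and image URL are placeholder assumptions), sending an image to o4-mini with the official `openai` Python SDK might look like this:

```python
# Hedged sketch: image input to o4-mini via the openai Python SDK.
# The model name, prompt, and image URL are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does the handwritten label in this photo say?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```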
Available immediately for Plus, Pro, and Team users, o3 and o4-mini will replace o1, o3-mini, and o3-mini-high. An o3-pro model is slated for release in the coming weeks.
OpenAI reports that o3 achieves state-of-the-art results on coding, real-world software-task, and multimodal understanding (MMMU) benchmarks, with both models showing significant efficiency improvements.
The open-source Codex CLI, launched today, offers:
  • Local terminal operation
  • Integration of models with local code and computing tasks
  • Support for o3 and o4-mini, with GPT-4.1 compatibility coming soon
  • $25K API credit grants for early projects
OpenAI president Greg Brockman described the releases as a GPT-4-level “qualitative step into the future,” noting that top scientists praise the models for generating “legitimately good and useful novel ideas.” o3 can zoom and crop images to read small, handwritten text, and one model executed 600 consecutive tool calls to solve a complex task.
Livestream replay below.
Let us know if you try them out!