📰 AI News: OpenAI Launches GPT-5.4, Built For Real Professional Work
📝 TL;DR
OpenAI just released GPT-5.4, its most capable and token-efficient frontier model for professional work, plus a higher-powered GPT-5.4 Pro tier for maximum performance. The big shift: GPT-5.4 is the first general model from OpenAI with native computer use, meaning agents can actually operate software and websites, not just talk about them.

🧠 Overview
GPT-5.4 is positioned as the “do real work” model, combining stronger reasoning, top-tier coding, and better agent workflows in one place. It is designed to produce higher quality deliverables with less back and forth, especially for documents, spreadsheets, presentations, and tool-driven tasks. This also signals a broader trend: AI is moving from chat responses to full workflow execution across apps and systems.

📜 The Announcement
OpenAI released GPT-5.4 across ChatGPT, the API, and Codex. In ChatGPT it appears as GPT-5.4 Thinking, and there is also GPT-5.4 Pro for users who want maximum performance on complex tasks. On the developer side, GPT-5.4 is available in the API as gpt-5.4, and GPT-5.4 Pro as gpt-5.4-pro. OpenAI also published new pricing and highlighted improvements in accuracy, speed, and tool-use reliability.

⚙️ How It Works
• Native computer use - GPT-5.4 can operate computers and carry out workflows across applications, making it far more “agent ready” than prior general models.
• Massive context for long projects - It supports up to 1M tokens of context, designed for long-horizon tasks where an agent needs to plan, execute, and verify across lots of material.
• Better tool selection - Tool search helps agents find and use the right tools and connectors faster, without losing intelligence.
• More token-efficient reasoning - It uses fewer tokens to solve many problems than prior generations, which helps speed and cost at scale.
• Stronger knowledge-work outputs - OpenAI focused heavily on spreadsheet modeling, document creation, and presentation quality, including better visuals and structure.
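For developers, the model identifiers above (gpt-5.4 and gpt-5.4-pro) are what you would pass in an API request. As a minimal sketch, here is how a chat-style request body for gpt-5.4 might be assembled; the field names follow the widely used chat-completions request shape and are an assumption, since the article does not document the exact schema or endpoint:

```python
import json

def build_request(model: str, prompt: str, max_output_tokens: int = 1024) -> str:
    """Assemble a JSON request body for a chat-style completion call.

    The field names ("model", "messages", "max_tokens") follow the common
    chat-completions convention; the exact schema for gpt-5.4 is an
    assumption, not something confirmed by the announcement.
    """
    payload = {
        "model": model,  # e.g. "gpt-5.4" or "gpt-5.4-pro"
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_output_tokens,
    }
    return json.dumps(payload)

# Build (but do not send) a request targeting the new model.
body = build_request("gpt-5.4", "Summarize this quarterly spreadsheet.")
```

Actually sending the body (with an HTTP client and an API key) is omitted here, since nothing in this sketch depends on a live call.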