OpenAI GPT-5.4 ships: 1.05M context, built-in computer use, Pro tier
OpenAI released GPT-5.4, a unified frontier model that combines reasoning, coding, and computer use with a 1.05M-token context window and an optional Pro tier for heavier workloads. In ChatGPT, the family appears as GPT-5.4 Thinking and GPT-5.4 Pro, while the API exposes gpt-5.4 and gpt-5.4-pro with configurable reasoning effort; Codex now defaults to this line, consolidating coding and reasoning in one family ([structure and specs](https://www.datastudios.org/post/chatgpt-5-4-model-thinking-pro-api-codex-pricing-and-what-it-actually-is), [announcement](https://community.openai.com/t/gpt-5-4-pro-and-thinking-are-here/1375799)).

Pricing published via Puter's API lists $2.50 per million input tokens and $15 per million output tokens, with a context window of up to 1,050,000 tokens and a 128K-token maximum output ([model card and pricing](https://developer.puter.com/ai/openai/gpt-5.4/)).

Computer use is now built in: the model can drive desktop UIs and websites, and it posted 75% on OSWorld-Verified, exceeding a human baseline; it also set new highs on BrowseComp and GDPval, indicating stronger research and professional task performance ([feature and benchmarks](https://www.igorslab.de/en/gpt-5-4-openai-combines-reasoning-coding-and-computer-control-in-one-model/), [capabilities](https://coursiv.io/blog/openai-gpt-5-4)). For vision and document scenarios, OpenAI's cookbook details prompt and workflow tips to maximize accuracy and throughput with the latest model ([developer tips](https://developers.openai.com/cookbook/examples/multimodal/document_and_multimodal_understanding_tips)).
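To put the listed pricing in concrete terms, the per-request cost follows directly from the per-million-token rates. The sketch below uses the rates from Puter's listing ($2.50 input / $15 output per 1M tokens); `estimate_cost` is a hypothetical helper for illustration, not part of any SDK:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 2.50, output_rate: float = 15.00) -> float:
    """Estimate a single request's cost in USD.

    Rates are dollars per 1M tokens, taken from the gpt-5.4
    pricing listed via Puter's API (assumed, not official).
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Worst case: a full 1,050,000-token context plus the 128K max output.
print(round(estimate_cost(1_050_000, 128_000), 3))  # → 4.545
```

So even a maximally sized request lands under $5, with output tokens accounting for the larger share despite being a fraction of the volume.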