GPT-5.3-Codex: 25% faster agentic coding, now in GitHub Copilot
OpenAI’s GPT-5.3-Codex brings 25% faster, steerable agentic coding for long-running, tool-driven workflows and is rolling out across Codex surfaces and GitHub Copilot.
Codex is an AI tool that translates natural language into code.
OpenAI's GPT-5.3-Codex is a 25% faster, more agentic coding model built for long-running, tool-driven workflows, now rolling out across Codex surfaces and GitHub Copilot with stronger cybersecurity guardrails. OpenAI positions the model for multi-step coding and broader "computer use," cites state-of-the-art benchmark results, and notes that early versions helped build and operate the model itself ([Pulse 2.0](https://pulse2.com/openai-reveals-gpt-5-3-codex-a-faster-agentic-coding-model-built-for-long-running-work/)[^1], [AI-360](https://www.ai-360.online/openai-launches-gpt-5-3-codex-extending-agentic-coding-and-real-time-steering/)[^2]). GitHub confirms GPT-5.3-Codex is generally available in Copilot (Pro/Business/Enterprise) across VS Code, web, mobile, CLI, and the Coding Agent, with an admin-enabled policy toggle and a gradual rollout ([GitHub Changelog](https://github.blog/changelog/2026-02-09-gpt-5-3-codex-is-now-generally-available-for-github-copilot/)[^3]). OpenAI's own channels have it now, with API access "soon" and a new Trusted Access for Cyber pilot ([Pulse 2.0](https://pulse2.com/openai-reveals-gpt-5-3-codex-a-faster-agentic-coding-model-built-for-long-running-work/)[^1], [ITP.net](https://www.itp.net/ai-automation/openai-launches-gpt-5-3-codex-the-new-era-of-ai-powered-coding-and-beyond)[^4]).

[^1]: Adds: Core capabilities, benchmark highlights, safety posture, availability across Codex app/CLI/IDE/web, and NVIDIA GB200 NVL72 infra.
[^2]: Adds: Real-time steering in extended runs and cybersecurity classification/pilot context for enterprise adoption.
[^3]: Adds: Concrete Copilot GA details, supported surfaces, plans, rollout, and admin policy enablement.
[^4]: Adds: Additional context on broader professional task coverage and API timing.
OpenAI launched GPT-5.3-Codex, a faster coding model now embedded in IDEs, the terminal, the web, and a macOS app, with early claims that it assisted in building itself. OpenAI details roughly 25% faster runs, stronger SWE-Bench and Terminal-Bench results, and broad distribution via the CLI, IDE extensions, the web, and a new macOS app in the announcement ([Introducing GPT‑5.3‑Codex](https://openai.com/index/introducing-gpt-5-3-codex/)[^5]). Coverage notes that all paid ChatGPT plans can access it now, API access is coming, and the team used Codex to debug, manage deployment, and evaluate results during its own development ([TechRadar report](https://www.techradar.com/pro/openai-unveils-gpt-5-3-codex-which-can-tackle-more-advanced-and-complex-coding-tasks)[^6]), with additional workflow and positioning details on distribution and SDLC scope ([AI News Hub](https://www.chatai.com/posts/openai-pushes-codex-deeper-into-developer-workflows-with-gpt-5-3-codex-release)[^7]).

[^5]: Adds: Official feature, performance, and distribution overview.
[^6]: Adds: Access paths (paid ChatGPT plans), benchmarks, and "built itself" context.
[^7]: Adds: Deeper coverage of IDE/CLI/macOS integration, speedup figure, and API timing.