OpenAI rolls out GPT-5.4 mini in ChatGPT and sunsets legacy deep research
Plan for GPT-5.4 mini as a fallback in ChatGPT and retire dependencies on legacy deep research before March 26.
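A fallback plan is easiest to reason about as a thin routing wrapper. The sketch below is illustrative only: the callables stand in for real model clients, and the function names are hypothetical, not any provider's API.

```python
# Minimal model-fallback sketch (hypothetical helper names; the primary
# and fallback callables stand in for real API client calls).
def call_with_fallback(prompt, primary, fallback):
    """Try the primary model; on any failure, route to the fallback."""
    try:
        return primary(prompt), "primary"
    except Exception:
        return fallback(prompt), "fallback"

# Usage with stubs standing in for real model calls:
def flaky_primary(p):
    raise TimeoutError("primary unavailable")

def mini(p):
    return f"mini answer to: {p}"

result, route = call_with_fallback("summarize the release notes",
                                   flaky_primary, mini)
```

In production you would narrow the `except` to the client library's transient error types and log which route served each request, so you can see how often the fallback actually fires.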
Codex is now widely available; pilot it for code reviews and routine changes, but add guardrails and watch performance on big repos.
MCP in ChatGPT dev mode is ready to trial, but plan for auth quirks and tool-state bugs before production use.
Channels and --bare make Claude Code ready for real agent workflows, while this release tightens the bolts across auth, proxies, and runtime stability.
Composer 2 looks like a strong, cheaper coding model, but the Kimi K2.5 reveal means provenance and governance now matter as much as speed and price.
Use Sonnet 4.6 for daily coding, escalate to GPT-5.4 for gnarly work, and trust your own benchmark over any single leaderboard.
Treat agents like distributed systems with state, retries, and audits—not like chatbots.
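Concretely, "state, retries, and audits" means every agent step should be retried a bounded number of times and every attempt recorded. A minimal sketch, assuming a step is any callable that maps state to new state (an interface invented here for illustration):

```python
import time

def run_step(step_fn, state, audit, max_retries=3, backoff=0.01):
    """Run one agent step with bounded retries and an audit trail.

    step_fn: callable(state) -> new_state (hypothetical interface).
    audit:   list that accumulates (event, step, detail) tuples.
    """
    for attempt in range(1, max_retries + 1):
        audit.append(("attempt", step_fn.__name__, attempt))
        try:
            state = step_fn(state)
            audit.append(("ok", step_fn.__name__, attempt))
            return state
        except Exception as exc:
            audit.append(("error", step_fn.__name__, repr(exc)))
            time.sleep(backoff * attempt)  # back off before retrying
    raise RuntimeError(f"{step_fn.__name__} failed after {max_retries} attempts")

# Usage: a step that fails twice before succeeding.
calls = {"n": 0}
def fetch(state):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return state + ["fetched"]

audit = []
final = run_step(fetch, [], audit)
```

The audit list is the point: when an agent run goes sideways, you replay the trail instead of guessing, exactly as you would with distributed-system tracing.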
Treat inference as an optimization problem—adopt vLLM, KV caches, and modern decoding to cut latency and cost at scale.
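The KV cache is where the memory math bites. Per sequence, a transformer stores two tensors (K and V) per layer, each shaped by KV heads, head dimension, and context length. The standard footprint formula, with illustrative 7B-class shape parameters:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len,
                   batch=1, dtype_bytes=2):
    """Per-request KV-cache footprint: 2 tensors (K and V) per layer,
    each [n_kv_heads, seq_len, head_dim], at dtype_bytes per element."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Example: 32 layers, 32 KV heads, head_dim 128, 4k context, fp16 (2 bytes)
gib = kv_cache_bytes(32, 32, 128, 4096) / 2**30  # = 2.0 GiB per sequence
```

Two gibibytes per 4k-context sequence is why naive per-request allocation caps concurrency so quickly, and why paged KV management (the approach vLLM popularized) and grouped-query attention (fewer KV heads) pay off at scale.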
Ship reliability and ergonomics around ChatGPT now—fold code, structure prompts, and guard agent flows against unpredictable capability errors.
Treat AI like a product with SLOs and budgets—without GPU guardrails and local options, your cloud bill will run the roadmap.
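A budget guardrail can be as small as a spend counter that refuses work once the cap is hit. The class and rates below are invented for illustration; real systems would pull pricing and usage from provider billing APIs.

```python
class GpuBudget:
    """Minimal cost guard: refuse new work once the monthly cap is spent.
    Names and rates are illustrative, not any provider's pricing."""

    def __init__(self, monthly_usd):
        self.monthly_usd = monthly_usd
        self.spent = 0.0

    def try_charge(self, gpu_hours, usd_per_gpu_hour):
        """Return True and record the spend if it fits the budget,
        else return False so the caller can route to a local/cheaper option."""
        cost = gpu_hours * usd_per_gpu_hour
        if self.spent + cost > self.monthly_usd:
            return False
        self.spent += cost
        return True

# Usage: a $100 budget, $2/GPU-hour jobs.
budget = GpuBudget(monthly_usd=100.0)
first = budget.try_charge(gpu_hours=40, usd_per_gpu_hour=2.0)   # fits
second = budget.try_charge(gpu_hours=40, usd_per_gpu_hour=2.0)  # would exceed
```

The `False` branch is where "local options" plug in: instead of failing the request, fall back to an on-prem or smaller model and let the budget, not the cloud bill, decide.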
The OS‑level agent is becoming the new control plane—secure the orchestration layer and keep your model choices flexible.
Don’t pivot on a Reddit claim; run a quick telemetry audit and wait for primary sources.