CODEX-CLI PUB_DATE: 2026.01.22

Codex CLI fails to use Z.AI GLM-4.7 due to role mismatch

OpenAI Codex CLI currently sends a 'developer' role message that Z.AI's Chat Completions (GLM-4.7) rejects, as it only accepts system/user/assistant roles. There is no Codex config to remap roles for custom providers, so integrations remain blocked even when using wire_api="chat".
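The mismatch can be sketched in a few lines (all names here are illustrative, not from Codex itself): the Responses-style prompt Codex emits uses a 'developer' role, while Z.AI's Chat Completions schema only admits system/user/assistant, so any request carrying 'developer' is rejected.

```python
# Hypothetical illustration of the role-set mismatch.
CODEX_ROLES = {"developer", "user", "assistant"}  # Responses-style prompt
ZAI_ROLES = {"system", "user", "assistant"}       # Chat Completions schema

def unsupported_roles(messages, accepted=ZAI_ROLES):
    """Return the roles in a request body that the provider would reject."""
    return {m["role"] for m in messages} - accepted

request = [
    {"role": "developer", "content": "You are a coding agent."},
    {"role": "user", "content": "Refactor this function."},
]
print(unsupported_roles(request))  # {'developer'} -> provider errors out
```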

[ WHY_IT_MATTERS ]
01.

LLM provider heterogeneity can silently break multi-model workflows due to incompatible chat role schemas.

02.

Lack of role mapping forces teams to add adapters or pin tools, impacting velocity and reliability.

[ WHAT_TO_TEST ]
  • 01.

    Add contract tests that validate each provider's role schema and fail fast on unsupported roles.

  • 02.

    Prototype a thin adapter that maps developer->system for Chat Completions providers and run e2e tests against Z.AI GLM-4.7.
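A minimal sketch of such an adapter (function and variable names are hypothetical): rewrite any role the target provider does not accept before sending the request. The developer->system downgrade is an assumption here; the two roles are close but not identical in intent, so the e2e run against the real endpoint is what confirms it.

```python
def remap_roles(messages, mapping=None):
    """Rewrite roles a target provider does not accept.

    Defaults to the developer -> system downgrade for Chat Completions
    providers (an assumption; verify against the real endpoint).
    """
    mapping = mapping or {"developer": "system"}
    return [{**m, "role": mapping.get(m["role"], m["role"])} for m in messages]

msgs = [
    {"role": "developer", "content": "Be terse."},
    {"role": "user", "content": "hi"},
]
print(remap_roles(msgs)[0]["role"])  # system
```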

[ BROWNFIELD_PERSPECTIVE ]

Legacy codebase integration strategies...

  • 01.

    Insert a proxy/gateway to normalize roles before hitting providers and feature-flag the change per environment.

  • 02.

    Pin Codex CLI version and isolate Z.AI usage behind an adapter while updating CI to catch schema drift.
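The proxy approach above boils down to one transform on the forwarded request body. A sketch under stated assumptions (the function name and role map are invented; a real gateway would wrap this around the upstream call, with the `enabled` flag driven by per-environment configuration):

```python
import json

# Hypothetical role map applied by the gateway before hitting the provider.
ROLE_MAP = {"developer": "system"}

def normalize_body(raw: bytes, enabled: bool = True) -> bytes:
    """Rewrite unsupported roles in a Chat Completions request body.

    When the feature flag is off, the body passes through untouched,
    so the change can be rolled out per environment.
    """
    if not enabled:
        return raw
    payload = json.loads(raw)
    for msg in payload.get("messages", []):
        msg["role"] = ROLE_MAP.get(msg["role"], msg["role"])
    return json.dumps(payload).encode()
```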

[ GREENFIELD_PERSPECTIVE ]

Fresh architecture paradigms...

  • 01.

    Adopt a provider-agnostic chat schema internally with explicit role mappers per provider.

  • 02.

    Prefer providers or SDKs that natively support the OpenAI Responses 'developer' role, or that document their exact role compatibility.
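The provider-agnostic schema idea can be sketched as follows (every name here is hypothetical): the application speaks one internal role vocabulary, and each provider gets an explicit mapping table applied at the wire boundary, so a new provider only requires a new entry rather than changes throughout the codebase.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    role: str      # internal roles: "instructions", "user", "assistant"
    content: str

# One explicit role map per provider; unmapped roles pass through unchanged.
PROVIDER_ROLE_MAPS = {
    "openai_responses": {"instructions": "developer"},
    "zai_chat":         {"instructions": "system"},
}

def to_wire(messages, provider):
    """Convert internal messages to a provider's wire-format role set."""
    table = PROVIDER_ROLE_MAPS[provider]
    return [
        {"role": table.get(m.role, m.role), "content": m.content}
        for m in messages
    ]
```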