OPENAI PYTHON SDK V2.15.0 ADDS RESPONSE.COMPLETED_AT
OpenAI's Python client v2.15.0 adds a completed_at property on Response, exposing the server-side finish timestamp for requests. This enables cleaner latency/tracing metrics and easier event ordering. The release also includes internal codegen updates and notes no breaking changes.
A server-side completion time lets you measure model latency and enforce SLAs without relying on client clocks. It also simplifies correlating responses across logs and streams for tracing.
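A minimal sketch of the latency measurement this enables, using a hypothetical stand-in object rather than a live API call, and assuming completed_at is a Unix timestamp in seconds like created_at:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the SDK's Response object; completed_at is
# assumed here to be a Unix timestamp in seconds, like created_at.
@dataclass
class StubResponse:
    created_at: float
    completed_at: float

def server_latency(resp) -> float:
    """Server-side processing time in seconds, independent of client clocks."""
    return resp.completed_at - resp.created_at

resp = StubResponse(created_at=1_700_000_000.0, completed_at=1_700_000_002.5)
print(server_latency(resp))  # 2.5
```

Because both timestamps come from the server, clock skew between your client and the API does not distort the measurement.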
- Verify completed_at is present and correctly populated for both non-streaming and streaming responses, and confirm its type/units before persisting.
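One way to sketch that type/unit check before persisting, assuming the field is a Unix timestamp in seconds (the range guard catches accidental millisecond values, which are roughly 1000x larger):

```python
import numbers
from datetime import datetime, timezone

def validate_completed_at(value):
    """Coerce a completed_at value to an aware datetime, or None if absent.

    Assumes a Unix timestamp in seconds; the plausibility window flags
    millisecond values, which land far outside the seconds range.
    """
    if value is None:
        return None  # tolerate responses that lack the field
    if not isinstance(value, numbers.Real):
        raise TypeError(f"completed_at should be numeric, got {type(value).__name__}")
    if not 1_000_000_000 < value < 10_000_000_000:
        raise ValueError(f"completed_at {value!r} is outside the plausible seconds range")
    return datetime.fromtimestamp(value, tz=timezone.utc)
```

Run a check like this once against real responses in a staging environment before wiring the field into persistent storage.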
- Update metrics/log schemas and parsers to include completed_at while remaining backward compatible with older responses that lack it.
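A sketch of the backward-compatible parsing, with `parse_response_record` as a hypothetical helper name: older records that predate the field simply come through as None.

```python
def parse_response_record(record: dict) -> dict:
    """Normalize a logged response for the metrics pipeline.

    dict.get keeps this compatible with records written before the SDK
    exposed completed_at: those carry None instead of raising KeyError.
    """
    return {
        "id": record["id"],
        "created_at": record.get("created_at"),
        "completed_at": record.get("completed_at"),  # None for legacy records
    }

new = parse_response_record({"id": "r1", "created_at": 1.0, "completed_at": 3.0})
old = parse_response_record({"id": "r0", "created_at": 1.0})
print(new["completed_at"], old["completed_at"])  # 3.0 None
```

Downstream consumers then only need one null check rather than two code paths keyed on schema version.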
Legacy codebase integration strategies
1. If you serialize or log Response objects, add completed_at to schemas and handle nulls for historical data.
2. Audit dashboards/SLIs to safely merge new completed_at-based latency with existing measurements without breaking alerts.
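For the dashboard merge, one hedged approach is to prefer the server-side number and fall back to the legacy measurement, so alert thresholds see a continuous series. `client_latency` here is a hypothetical name for whatever your existing client-side field is called:

```python
def latency_seconds(record: dict):
    """Prefer server-side latency; fall back to the legacy client-side value.

    Returns None when neither measurement is available, so callers can
    drop the record rather than alert on a fabricated zero.
    """
    created, completed = record.get("created_at"), record.get("completed_at")
    if created is not None and completed is not None:
        return completed - created  # server-side, immune to client clock skew
    return record.get("client_latency")  # legacy measurement, may be None

print(latency_seconds({"created_at": 10.0, "completed_at": 12.0}))  # 2.0
print(latency_seconds({"client_latency": 2.3}))  # 2.3
```

Tagging each point with which source it came from also makes it easy to compare the two distributions before retiring the old metric.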
Fresh architecture paradigms
1. Standardize on completed_at for latency/tracing and emit metrics at request start and on response receipt.
2. In streaming flows, capture completed_at from the final event and propagate it through your pipelines.
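The streaming capture can be sketched with a simulated event stream; the event names below follow the Responses streaming convention of a terminal `response.completed` event carrying the full Response object, but are illustrative rather than taken from the release notes:

```python
from types import SimpleNamespace

# Simulated stream: delta events followed by a terminal completed event.
# In real code this iterable would be the SDK's streaming response.
events = [
    SimpleNamespace(type="response.output_text.delta", delta="Hello"),
    SimpleNamespace(
        type="response.completed",
        response=SimpleNamespace(completed_at=1_700_000_002.5),
    ),
]

completed_at = None
for event in events:
    if event.type == "response.completed":
        # The final event carries the finished Response, timestamp included.
        completed_at = event.response.completed_at

print(completed_at)  # 1700000002.5
```

Once captured, attach the value to the trace span or log record for the request so downstream systems see the same timestamp as non-streaming flows.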