gpt-5.x models (via Copilot/OpenAI/Azure) don't support 'minimal' as a reasoning effort level — they only accept 'none', 'low', 'medium', 'high', and 'xhigh'. Setting /thinking minimal with gpt-5.4 therefore causes a 400 error. The openai-codex-responses provider already clamped this value, but the openai-responses and azure-openai-responses providers passed it through unclamped.

Add clampReasoningForModel() to both providers, mapping 'minimal' to 'low' for gpt-5.x models to match the existing behavior in openai-codex-responses.

Fixes the bug portion of #688
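A minimal sketch of the clamping described above. The function name matches the description, but the model-id check and the effort type are assumptions, not the repo's actual code:

```typescript
// Hypothetical effort union; the real provider types may differ.
type ReasoningEffort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

// gpt-5.x models reject 'minimal', so map it to the closest accepted
// level ('low') and pass every other value through unchanged.
// The gpt-5 detection regex is an assumption for illustration.
function clampReasoningForModel(
  modelId: string,
  effort: ReasoningEffort,
): ReasoningEffort {
  const isGpt5 = /^gpt-5(\.|-|$)/.test(modelId);
  if (isGpt5 && effort === "minimal") {
    return "low";
  }
  return effort;
}
```

Both responses providers would call this before building the request body, so 'minimal' never reaches the API for a gpt-5.x model while non-gpt-5 models keep their configured effort.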