Continue a stored OpenAI conversation using server-side state management: the previous response's id is sent along with the new request, so earlier turns do not need to be re-sent. The previous response must have been created with `store = TRUE` in the original request.
Arguments
- prev_resp
Previous response object carrying a `response_id` attribute
- prompt
New user prompt
- ml
Model to use; defaults to the model used for the previous response
- temp
Sampling temperature for the new request
- max_tokens
Optional max_output_tokens cap (passthrough to [openai_responses()])
- timeout
Request timeout in seconds (passthrough to [openai_responses()])
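Examples

A usage sketch under stated assumptions: the continuing function is shown here under the hypothetical name `continue_conversation()` (the real exported name may differ), and `openai_responses()` is assumed to accept `store = TRUE` and return an object carrying a `response_id` attribute, as the description above implies. Both calls require a configured OpenAI API key.

```r
# Start a conversation and ask the server to retain its state.
# `store = TRUE` is required so the follow-up call can reference it.
first <- openai_responses(
  prompt = "Summarise the plot of Hamlet in one sentence.",
  store  = TRUE
)

# Continue the stored conversation; earlier turns are not re-sent.
# `continue_conversation` is a hypothetical name for the function
# documented above; `ml` defaults to the previous response's model.
followup <- continue_conversation(
  prev_resp  = first,
  prompt     = "Now do the same for Macbeth.",
  temp       = 0.2,
  max_tokens = 200,   # passthrough cap on output tokens
  timeout    = 60     # passthrough request timeout in seconds
)
```

Because the conversation state lives server-side, only the new prompt travels over the wire on the follow-up call, which keeps request payloads small for long conversations.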
