Description:

This function retrieves the results of a completed batch and updates the
provided list of LLMMessage objects with the responses. It aligns each
response with the original request using the custom_ids generated in
send_batch().
Usage:

  fetch_batch(
    .llms,
    .provider = getOption("tidyllm_fbatch_default"),
    .dry_run = NULL,
    .max_tries = NULL,
    .timeout = NULL
  )

Value:

A list of updated LLMMessage objects, each with the assistant's response
added if successful.
Arguments:

.llms       A list of LLMMessage objects containing conversation histories.

.provider   A function or function call specifying the language model provider
            and any additional parameters. This should be a call to a provider
            function like openai(), claude(), etc. You can also set a default
            provider function via the tidyllm_fbatch_default option (see the
            sketch after this list).

.dry_run    Logical; if TRUE, returns the constructed request without
            executing it.

.max_tries  Integer; maximum number of retries if the request fails.

.timeout    Integer; request timeout in seconds.
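Because .provider defaults to getOption("tidyllm_fbatch_default"), a
session-wide default provider can be configured once. A minimal sketch,
assuming a provider call such as openai() is an acceptable value to store in
the option (the argument accepts either a function or a function call):

  library(tidyllm)

  # Store a default provider; fetch_batch() falls back to
  # getOption("tidyllm_fbatch_default") when .provider is not supplied.
  options(tidyllm_fbatch_default = openai())

  # Later calls can then omit .provider:
  # updated <- fetch_batch(my_llms)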
Details:

The function routes the input to the appropriate provider-specific batch API
function.
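A minimal end-to-end sketch of where fetch_batch() sits in the batch
workflow. It assumes the companion tidyllm verbs llm_message(), send_batch(),
and check_batch(), a valid API key for the chosen provider in the
environment, and that the provider has finished processing the batch before
the results are fetched:

  library(tidyllm)

  # Build a list of conversation histories to process in one batch.
  msgs <- list(
    llm_message("Summarise the plot of Hamlet in one sentence."),
    llm_message("Summarise the plot of Macbeth in one sentence.")
  )

  # Submit the batch; custom_ids are generated at this step so that
  # responses can later be matched back to the right conversation.
  msgs <- send_batch(msgs, .provider = openai())

  # Check whether the provider has finished processing the batch.
  check_batch(msgs, .provider = openai())

  # Once the batch is complete, attach each assistant response to the
  # corresponding LLMMessage object.
  msgs <- fetch_batch(msgs, .provider = openai())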