Send a Batch of Requests to the Mistral API

Usage
send_mistral_batch(
  .llms,
  .model = "mistral-small-latest",
  .endpoint = "/v1/chat/completions",
  .metadata = NULL,
  .temperature = 0.7,
  .top_p = 1,
  .max_tokens = 1024,
  .min_tokens = NULL,
  .seed = NULL,
  .stop = NULL,
  .dry_run = FALSE,
  .overwrite = FALSE,
  .max_tries = 3,
  .timeout = 60,
  .id_prefix = "tidyllm_mistral_req_"
)

Value

The prepared .llms list with the batch_id attribute attached.

Arguments
.llms: A list of LLMMessage objects containing conversation histories.

.model: The Mistral model version (default: "mistral-small-latest").

.endpoint: The API endpoint (default: "/v1/chat/completions").

.metadata: Optional metadata for the batch.

.temperature: Sampling temperature to use, between 0.0 and 1.5. Higher values make the output more random (default: 0.7).

.top_p: Nucleus sampling parameter, between 0.0 and 1.0 (default: 1).

.max_tokens: The maximum number of tokens to generate in the completion (default: 1024).

.min_tokens: The minimum number of tokens to generate (optional).

.seed: Random seed for deterministic outputs (optional).

.stop: Stop generation at specific tokens or strings (optional).

.dry_run: Logical; if TRUE, returns the prepared request without executing it (default: FALSE).

.overwrite: Logical; if TRUE, allows overwriting existing custom IDs (default: FALSE).

.max_tries: Maximum number of retry attempts for requests (default: 3).

.timeout: Request timeout in seconds (default: 60).

.id_prefix: Prefix for generating custom IDs (default: "tidyllm_mistral_req_").
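
A minimal usage sketch, not taken from the package documentation: it assumes tidyllm's llm_message() constructor for building LLMMessage objects, a MISTRAL_API_KEY set in the environment, and the companion functions check_mistral_batch() and fetch_mistral_batch() for polling and retrieving results.

    # Sketch: send a small batch of prompts to the Mistral batch API.
    # Assumes llm_message() from tidyllm and MISTRAL_API_KEY in the environment.
    library(tidyllm)

    # Build a list of LLMMessage objects, one conversation per batch request
    prompts <- list(
      llm_message("Summarise the plot of Hamlet in two sentences."),
      llm_message("Translate 'good morning' into French."),
      llm_message("Name three common uses of the Poisson distribution.")
    )

    # Inspect the prepared request without sending anything
    send_mistral_batch(prompts, .dry_run = TRUE)

    # Submit the batch; the returned list carries the batch_id as an attribute
    batch <- send_mistral_batch(
      prompts,
      .model       = "mistral-small-latest",
      .temperature = 0.3,
      .max_tokens  = 512
    )
    attr(batch, "batch_id")

    # Later: poll the batch status and fetch the completed conversations
    # (companion functions assumed from the same package)
    check_mistral_batch(batch)
    results <- fetch_mistral_batch(batch)

Because the batch_id travels as an attribute on the returned list, the same object can be passed on to the polling and fetching helpers, assuming they follow the pattern of tidyllm's other batch interfaces.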