This function creates and submits a batch of messages to the Mistral API for asynchronous processing.

Usage
send_mistral_batch(
.llms,
.model = "mistral-small-latest",
.endpoint = "/v1/chat/completions",
.metadata = NULL,
.temperature = 0.7,
.top_p = 1,
.max_tokens = 1024,
.min_tokens = NULL,
.frequency_penalty = NULL,
.logit_bias = NULL,
.presence_penalty = NULL,
.seed = NULL,
.stop = NULL,
.safe_prompt = FALSE,
.json_schema = NULL,
.dry_run = FALSE,
.overwrite = FALSE,
.max_tries = 3,
.timeout = 60,
.id_prefix = "tidyllm_mistral_req_"
)
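A minimal sketch of submitting a batch. The llm_message() constructor for building LLMMessage objects and the downstream batch helpers are assumptions about the package's surrounding API rather than part of this function's documented signature; authentication is assumed to come from a MISTRAL_API_KEY environment variable.

library(tidyllm)

# Two independent conversations to process in one batch
# (llm_message() is assumed here as the LLMMessage constructor).
msgs <- list(
  llm_message("Summarise the plot of Hamlet in one sentence."),
  llm_message("Summarise the plot of Macbeth in one sentence.")
)

# Inspect the prepared request without sending anything.
req <- send_mistral_batch(msgs, .dry_run = TRUE)

# Submit the batch for asynchronous processing; the returned
# list carries the batch_id needed to retrieve results later.
batch <- send_mistral_batch(
  msgs,
  .model      = "mistral-small-latest",
  .max_tokens = 256
)
attr(batch, "batch_id")

Results are retrieved later with the package's batch-checking and fetching helpers (e.g. check_mistral_batch() and fetch_mistral_batch(); these names are assumed here).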
Arguments

.llms: A list of LLMMessage objects containing conversation histories.
.model: The Mistral model version (default: "mistral-small-latest").
.endpoint: The API endpoint (default: "/v1/chat/completions").
.metadata: Optional metadata for the batch.
.temperature: Sampling temperature to use, between 0.0 and 1.5 (default: 0.7).
.top_p: Nucleus sampling parameter, between 0.0 and 1.0 (default: 1).
.max_tokens: The maximum number of tokens to generate in the completion (default: 1024).
.min_tokens: The minimum number of tokens to generate (optional).
.frequency_penalty: Numeric value (or NULL) for frequency penalty.
.logit_bias: A named list modifying the likelihood of specific tokens (or NULL).
.presence_penalty: Numeric value (or NULL) for presence penalty.
.seed: Random seed for deterministic outputs (optional).
.stop: Sequence(s) at which to stop generation (optional).
.safe_prompt: Logical; if TRUE, injects a safety prompt (default: FALSE).
.json_schema: A JSON schema object for structured output (optional); see the sketch after this list.
.dry_run: Logical; if TRUE, returns the prepared request without executing it (default: FALSE).
.overwrite: Logical; if TRUE, allows overwriting existing custom IDs (default: FALSE).
.max_tries: Maximum number of retry attempts for requests (default: 3).
.timeout: Request timeout in seconds (default: 60).
.id_prefix: Prefix for generating custom IDs (default: "tidyllm_mistral_req_").

Value

The prepared list of LLMMessage objects with a batch_id attribute attached.
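A sketch of structured output via .json_schema. The plain-list schema shape below is an assumption about the format this argument expects; consult the package's structured-output documentation for the exact representation.

library(tidyllm)

# A JSON schema constraining each reply to a single sentiment label
# (plain-list representation; the exact expected shape is an assumption).
sentiment_schema <- list(
  name   = "sentiment_label",
  schema = list(
    type       = "object",
    properties = list(
      sentiment = list(
        type = "string",
        enum = c("positive", "neutral", "negative")
      )
    ),
    required   = list("sentiment")
  )
)

reviews <- list(
  llm_message("Classify the sentiment: 'The battery life is fantastic.'"),
  llm_message("Classify the sentiment: 'It broke after two days.'")
)

batch <- send_mistral_batch(
  reviews,
  .json_schema = sentiment_schema,
  .temperature = 0,   # low temperature for stable labels
  .seed        = 42   # pair with a seed for reproducible outputs
)

Passing a schema asks the API to return responses conforming to the given object shape, which makes the batch results straightforward to parse into a data frame afterwards.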