Send LLMMessage to Mistral API
mistral_chat(
  .llm,
  .model = "mistral-large-latest",
  .frequency_penalty = NULL,
  .logit_bias = NULL,
  .presence_penalty = NULL,
  .seed = NULL,
  .stop = NULL,
  .stream = FALSE,
  .temperature = 0.7,
  .top_p = 1,
  .min_tokens = NULL,
  .max_tokens = NULL,
  .json_schema = NULL,
  .safe_prompt = FALSE,
  .timeout = 120,
  .max_tries = 3,
  .dry_run = FALSE,
  .verbose = FALSE,
  .tools = NULL,
  .tool_choice = NULL
)
Returns an updated LLMMessage object.
.llm
An LLMMessage object.

.model
The model identifier to use (default: "mistral-large-latest").
.frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency.

.logit_bias
A named list modifying the likelihood of specified tokens appearing in the completion.

.presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.

.seed
If specified, the system will make a best effort to sample deterministically.

.stop
Up to 4 sequences where the API will stop generating further tokens.

.stream
If set to TRUE, the answer is streamed to the console as it arrives (default: FALSE).

.temperature
What sampling temperature to use, between 0 and 2. Higher values make the output more random.

.top_p
An alternative to sampling with temperature, called nucleus sampling.
.min_tokens
The minimum number of tokens to generate in the completion. Must be >= 0 (optional).

.max_tokens
An upper bound for the number of tokens that can be generated for a completion.

.json_schema
A JSON schema object as provided by tidyllm_schema() or ellmer schemata, used to request structured output (see the structured-output example at the end of this page).
.safe_prompt
Whether to inject a safety prompt before all conversations (default: FALSE).

.timeout
Request timeout in seconds (default: 120).

.max_tries
Maximum number of retries for a failed request (default: 3).

.dry_run
If TRUE, perform a dry run and return the request object instead of sending it (default: FALSE).

.verbose
Should additional information be shown after the API call? (default: FALSE)
.tools
Either a single TOOL object or a list of TOOL objects representing the available functions for tool calls (see the tool-calling example at the end of this page).

.tool_choice
A character string specifying the tool-calling behavior; valid values are "none", "auto", or "required".
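A minimal usage sketch, assuming tidyllm's llm_message() constructor and get_reply() accessor, and a MISTRAL_API_KEY environment variable for authentication:

library(tidyllm)

# Build an LLMMessage and send it to the Mistral API
conversation <- llm_message("Name three R packages for data wrangling.") |>
  mistral_chat(
    .model = "mistral-large-latest",
    .temperature = 0.2  # lower temperature for a more focused answer
  )

# The returned LLMMessage carries the assistant reply
get_reply(conversation)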
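A structured-output sketch for .json_schema. The name = type interface of tidyllm_schema() and the get_reply_data() parser are assumed here; the field names are purely illustrative, so check the package documentation for the exact schema syntax:

library(tidyllm)

# Hypothetical schema: field names and types are illustrative
package_schema <- tidyllm_schema(
  name = "package_info",
  package_name = "character",
  initial_release_year = "numeric"
)

answer <- llm_message("When was dplyr first released on CRAN?") |>
  mistral_chat(.json_schema = package_schema)

# Parse the JSON reply into an R list
get_reply_data(answer)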
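A tool-calling sketch for .tools and .tool_choice. The tidyllm_tool() constructor and the field_chr() argument descriptor are assumed from tidyllm's tool-calling interface and may differ in your installed version:

library(tidyllm)

# A toy R function the model may call
get_local_time <- function(tz) {
  format(Sys.time(), tz = tz, usetz = TRUE)
}

time_tool <- tidyllm_tool(
  get_local_time,
  "Returns the current time in a given time zone",
  tz = field_chr("An IANA time zone name, e.g. 'Europe/Paris'")
)

# "auto" lets the model decide whether to call the tool
llm_message("What time is it in Paris right now?") |>
  mistral_chat(.tools = time_tool, .tool_choice = "auto")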