Send LLMMessage to Mistral API
mistral_chat(
  .llm,
  .model = "mistral-large-latest",
  .stream = FALSE,
  .seed = NULL,
  .json = FALSE,
  .temperature = 0.7,
  .top_p = 1,
  .stop = NULL,
  .safe_prompt = FALSE,
  .timeout = 120,
  .max_tries = 3,
  .max_tokens = 1024,
  .min_tokens = NULL,
  .dry_run = FALSE,
  .verbose = FALSE
)
Returns an updated LLMMessage object.
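For orientation, a minimal usage sketch (assuming the package provides an llm_message() constructor for LLMMessage objects and that a Mistral API key is configured in the environment):

# Build a message and send it to the Mistral API; the result is an
# updated LLMMessage containing the assistant's reply
conversation <- llm_message("What is the capital of France?") |>
  mistral_chat()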
.llm: An LLMMessage object.
.model: The model identifier to use (default: "mistral-large-latest").
.stream: Whether to stream back partial progress to the console (default: FALSE).
.seed: The seed to use for random sampling. If set, different calls will generate deterministic results (optional).
.json: Whether the output should be in JSON mode (default: FALSE).
.temperature: Sampling temperature to use, between 0.0 and 1.5. Higher values make the output more random, while lower values make it more focused and deterministic (default: 0.7).
.top_p: Nucleus sampling parameter, between 0.0 and 1.0. The model considers tokens with top_p probability mass (default: 1).
.stop: Stop generation if this token is detected, or if one of these tokens is detected when a list is provided (optional).
.safe_prompt: Whether to inject a safety prompt before all conversations (default: FALSE).
.timeout: Request timeout in seconds (default: 120).
.max_tries: Maximum number of retries to perform the request (default: 3).
.max_tokens: The maximum number of tokens to generate in the completion. Must be >= 0 (default: 1024).
.min_tokens: The minimum number of tokens to generate in the completion. Must be >= 0 (optional).
.dry_run: If TRUE, perform a dry run and return the request object instead of calling the API (default: FALSE).
.verbose: Whether additional information should be shown after the API call (default: FALSE).