- `.llm`: An `LLMMessage` object containing the message or conversation history to send to the language model.
- `.provider`: A function or function call specifying the language model provider and any additional parameters. This should be a call to a provider function such as `openai()` or `claude()`. You can also set a default provider function via the `tidyllm_chat_default` option; see the sketch after this list.
- `.dry_run`: Logical; if `TRUE`, simulates the request without sending it to the provider. Useful for testing.
- `.stream`: Logical; if `TRUE`, streams the response from the provider in real time.
- `.temperature`: Numeric; controls the randomness of the model's output (0 = deterministic).
- `.timeout`: Numeric; the maximum time, in seconds, to wait for a response.
- `.top_p`: Numeric; nucleus sampling parameter that limits sampling to the set of tokens whose cumulative probability is at most `p`.
- `.max_tries`: Integer; the maximum number of retries for failed requests.
- `.model`: Character; the model identifier to use (e.g., `"gpt-4"`).
- `.verbose`: Logical; if `TRUE`, prints additional information about the request and response.
- `.json_schema`: List; a JSON schema, supplied as a nested R list, used to enforce the structure of the model's output; see the structured-output sketch after this list.
- `.seed`: Integer; sets a random seed for reproducibility.
- `.stop`: Character vector; sequences at which the model should stop generating further tokens.
- `.frequency_penalty`: Numeric; adjusts the likelihood of repeating tokens (positive values decrease repetition).
- `.presence_penalty`: Numeric; adjusts the likelihood of introducing new tokens (positive values encourage novelty).