This function sends a message history to the Perplexity Chat API and returns the assistant's reply.
perplexity_chat(
  .llm,
  .model = "sonar",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_p = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL,
  .stop = NULL,
  .search_domain_filter = NULL,
  .return_images = FALSE,
  .search_recency_filter = NULL,
  .api_url = "https://api.perplexity.ai/",
  .json = FALSE,
  .timeout = 60,
  .verbose = FALSE,
  .stream = FALSE,
  .dry_run = FALSE,
  .max_tries = 3
)
A new LLMMessage object containing the original messages plus the assistant's response.
.llm: An LLMMessage object containing the conversation history.
.model: The identifier of the model to use (default: "sonar").
.max_tokens: The maximum number of tokens that can be generated in the response (default: 1024).
.temperature: Controls the randomness of the model's response. Values strictly between 0 and 2 are allowed; higher values increase randomness (optional).
.top_p: Nucleus sampling parameter controlling the proportion of probability mass considered. Values strictly between 0 and 1 are allowed (optional).
.frequency_penalty: A number greater than 0. Values > 1.0 penalize repeated tokens, reducing the likelihood of repetition (optional).
.presence_penalty: A number between -2.0 and 2.0. Positive values encourage new topics by penalizing tokens that have already appeared (optional).
.stop: One or more sequences at which the API stops generating further tokens. Can be a string or a list of strings (optional).
.search_domain_filter: A vector of domains to limit or exclude from search results; prefix a domain with "-" to exclude it (optional, currently in closed beta).
.return_images: Logical; if TRUE, enables returning images in the model's response (default: FALSE, currently in closed beta).
.search_recency_filter: Limits search results to a specific time interval (e.g., "month", "week", "day", or "hour"). Applies only to online models (optional).
.api_url: Base URL for the Perplexity API (default: "https://api.perplexity.ai/").
.json: Whether the response should be structured as JSON (default: FALSE).
.timeout: Request timeout in seconds (default: 60).
.verbose: If TRUE, displays additional information after the API call, including rate-limit details (default: FALSE).
.stream: Logical; if TRUE, streams the response piece by piece (default: FALSE).
.dry_run: If TRUE, performs a dry run and returns the constructed request object without executing it (default: FALSE).
.max_tries: Maximum number of retries for the request (default: 3).
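A minimal usage sketch. It assumes a helper `llm_message()` exists to construct the LLMMessage conversation object and a `get_reply()` accessor to extract the assistant's text; both names are assumptions based on common conventions in packages of this kind, not confirmed by this page.

```r
library(tidyllm)  # assumed package name

# Build a conversation (llm_message() is an assumed constructor)
conversation <- llm_message("Summarize this week's developments in quantum computing.")

# Send it to the Perplexity API; the result is a new LLMMessage that
# includes the original messages plus the assistant's response
reply <- perplexity_chat(
  conversation,
  .model = "sonar",
  .temperature = 0.7,              # must lie strictly between 0 and 2
  .search_recency_filter = "week"  # restrict search results to the past week
)

# get_reply() is an assumed accessor for the assistant's latest message
get_reply(reply)

# With .dry_run = TRUE, the constructed request object is returned
# without calling the API, which is useful for inspecting the request
req <- perplexity_chat(conversation, .dry_run = TRUE)
```

Because the call returns an LLMMessage rather than bare text, the result can be passed back in as `.llm` to continue a multi-turn conversation.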