This function sends a message history to the Azure OpenAI Chat Completions API and returns the assistant's reply. Note that this function is still a work in progress and has not been fully tested.
Usage

azure_openai(
  .llm,
  .endpoint_url = Sys.getenv("AZURE_ENDPOINT_URL"),
  .deployment = "gpt-4o-mini",
  .api_version = "2024-08-01-preview",
  .max_completion_tokens = NULL,
  .frequency_penalty = NULL,
  .logit_bias = NULL,
  .logprobs = FALSE,
  .top_logprobs = NULL,
  .presence_penalty = NULL,
  .seed = NULL,
  .stop = NULL,
  .stream = FALSE,
  .temperature = NULL,
  .top_p = NULL,
  .timeout = 60,
  .verbose = FALSE,
  .json = FALSE,
  .json_schema = NULL,
  .dry_run = FALSE,
  .max_tries = 3
)
Value

A new LLMMessage object containing the original messages plus the assistant's response.
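Because the returned object carries the full conversation history, calls can be chained into a multi-turn exchange. A minimal sketch, assuming llm_message() can also append a follow-up user turn to an existing LLMMessage and that a get_reply() helper extracts the latest assistant reply (both are assumptions, not documented above):

# First turn: build a message and send it
conversation <- llm_message("Name three R packages for data wrangling.") |>
  azure_openai()
# Second turn: append a follow-up; the history travels with the object
conversation <- conversation |>
  llm_message("Which one suits beginners best?") |>
  azure_openai()
# Assumed helper that pulls out the last assistant reply
get_reply(conversation)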
Arguments

.llm: An LLMMessage object containing the conversation history.
.endpoint_url: Base URL for the API (default: Sys.getenv("AZURE_ENDPOINT_URL")).
.deployment: The identifier of the model that is deployed (default: "gpt-4o-mini").
.api_version: Which version of the API is used (default: "2024-08-01-preview").
.max_completion_tokens: An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
.frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far.
.logit_bias: A named list modifying the likelihood of specified tokens appearing in the completion.
.logprobs: Whether to return log probabilities of the output tokens (default: FALSE).
.top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to return at each token position.
.presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.
.seed: If specified, the system will make a best effort to sample deterministically.
.stop: Up to 4 sequences where the API will stop generating further tokens.
.stream: If TRUE, the answer is streamed to the console as it arrives (default: FALSE).
.temperature: What sampling temperature to use, between 0 and 2. Higher values make the output more random.
.top_p: An alternative to sampling with temperature, called nucleus sampling.
.timeout: Request timeout in seconds (default: 60).
.verbose: If TRUE, additional information is shown after the API call (default: FALSE).
.json: If TRUE, output is requested in JSON mode (default: FALSE).
.json_schema: A JSON schema, supplied as an R list, to enforce the output structure (if defined, it takes precedence over JSON mode); see the sketch after this list.
.dry_run: If TRUE, perform a dry run and return the request object without sending it (default: FALSE); see the dry-run sketch after the examples.
.max_tries: Maximum number of retries to perform for the request (default: 3).
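As a short illustration of .json_schema, the sketch below constrains the reply to a small JSON object. The schema name and fields are hypothetical, and it assumes the schema is supplied as an R list mirroring the JSON schema layout of the OpenAI structured-output API:

# Hypothetical schema: force the reply into a {name, age} object
person_schema <- list(
  name = "person",
  schema = list(
    type = "object",
    properties = list(
      name = list(type = "string"),
      age = list(type = "integer")
    ),
    required = list("name", "age")
  )
)
msg <- llm_message("Extract the person mentioned in: 'Ada Lovelace, aged 36.'")
structured <- azure_openai(msg, .json_schema = person_schema)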
Examples

if (FALSE) {
# Basic usage
msg <- llm_message("What is R programming?")
result <- azure_openai(msg)

# With custom parameters (the token limit argument is .max_completion_tokens)
result2 <- azure_openai(msg,
  .deployment = "gpt-4o-mini",
  .temperature = 0.7,
  .max_completion_tokens = 1000
)
}
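The .dry_run flag can be used to inspect the request before anything is sent. A minimal sketch; what the returned request object prints as (method, URL, headers) is an assumption about its implementation:

# Build, but do not send, the request
msg <- llm_message("What is R programming?")
req <- azure_openai(msg, .dry_run = TRUE)
req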