
promptr (version 1.0.0)

complete_prompt: Complete an LLM Prompt

Description

Submits a text prompt to OpenAI's "Completion" API endpoint and formats the response into a string or tidy dataframe. (Note that, as of 2024, this endpoint is considered "Legacy" by OpenAI and is likely to be deprecated.)

Usage

complete_prompt(
  prompt,
  model = "gpt-3.5-turbo-instruct",
  openai_api_key = Sys.getenv("OPENAI_API_KEY"),
  max_tokens = 1,
  temperature = 0,
  seed = NULL,
  parallel = FALSE
)

Value

If max_tokens = 1, returns a dataframe with the 5 most likely next words and their probabilities. If max_tokens > 1, returns a single string of text generated by the model.
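
A hedged sketch of the two return shapes (these calls require a valid API key, and the exact column names of the dataframe are not specified on this page):

```r
# max_tokens = 1 (the default): a tidy dataframe of the
# five most likely next words and their probabilities
next_words <- complete_prompt('I feel like a')
str(next_words)  # inspect the returned columns

# max_tokens > 1: a single character string of generated text
haiku <- complete_prompt('Here is my haiku about frogs:', max_tokens = 100)
cat(haiku)
```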

Arguments

prompt

The text prompt to complete, as a character string

model

Which OpenAI model to use. Defaults to 'gpt-3.5-turbo-instruct'.

openai_api_key

Your API key. By default, looks for a system environment variable called "OPENAI_API_KEY" (recommended option). Otherwise, it will prompt you to enter the API key as an argument.
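
A minimal sketch of the recommended setup; the key value shown is a placeholder, not a real key:

```r
# Persistent option: add a line like OPENAI_API_KEY=sk-... to your
# .Renviron file (usethis::edit_r_environ() opens it), then restart R.

# Session-only option: set the environment variable directly
Sys.setenv(OPENAI_API_KEY = "sk-placeholder")  # placeholder key

# complete_prompt() then picks the key up automatically from here
Sys.getenv("OPENAI_API_KEY")
```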

max_tokens

The maximum number of tokens (a token is roughly 4 characters of English text) the model should return. Defaults to a single token, i.e. next-word prediction.

temperature

A numeric between 0 and 2. When set to zero, the model always returns the most probable next token. For values greater than zero, the model samples the next token probabilistically, with higher values producing more varied output.
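
For illustration, a hedged sketch of the difference (requires an API key; the sampled outputs will vary):

```r
# temperature = 0: repeated calls return the same most-probable continuation
complete_prompt('My favorite color is', max_tokens = 5, temperature = 0)

# temperature closer to 2: repeated calls sample more varied continuations
complete_prompt('My favorite color is', max_tokens = 5, temperature = 1.5)
```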

seed

An integer. If specified, the OpenAI API will "make a best effort to sample deterministically".

parallel

TRUE to submit API requests in parallel. Defaults to FALSE, which can reduce rate limit errors at the expense of longer runtime.
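
A hedged sketch of batch use with parallel requests; passing a character vector of prompts is an assumption suggested by this argument's presence, so check the package documentation before relying on it:

```r
# Hypothetical batch of classification prompts
prompts <- c('The movie was great. Sentiment:',
             'The movie was awful. Sentiment:')

# parallel = TRUE submits the requests concurrently (faster, but more
# likely to hit rate limits on large batches)
results <- complete_prompt(prompts, parallel = TRUE)
```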

Examples

if (FALSE) { # not run automatically: requires a valid OPENAI_API_KEY
complete_prompt('I feel like a')
complete_prompt('Here is my haiku about frogs:',
                max_tokens = 100)
}
