
batchLLM (version 0.2.0)

claudeR: Interact with Anthropic's Claude API

Description

This function provides an interface for interacting with Claude models via Anthropic's API, allowing flexible text generation based on user input. It was adapted from the claudeR repository by yrvelez on GitHub (MIT License).

Usage

claudeR(
  prompt,
  model = "claude-3-5-sonnet-20240620",
  max_tokens = 500,
  stop_sequences = NULL,
  temperature = 0.7,
  top_k = -1,
  top_p = -1,
  api_key = NULL,
  system_prompt = NULL
)

Value

The text completion returned by the model, up to but excluding any stop sequence.

Arguments

prompt

A character string for Claude-2 models, or a list of messages (each a list with role and content elements) for Claude-3 models, specifying the input to the model.

model

The model to use for the request. Defaults to "claude-3-5-sonnet-20240620".

max_tokens

The maximum number of tokens to generate before stopping.

stop_sequences

Optional. A list of strings; generation stops when any of them is encountered (see the sketch at the end of this section).

temperature

Optional. The amount of randomness injected into the response; higher values produce more varied output.

top_k

Optional. Only sample from the top K options for each subsequent token.

top_p

Optional. Use nucleus sampling: the model considers only the tokens making up the top_p cumulative probability mass.

api_key

Your Anthropic API key for authentication. If NULL, the key is read from the ANTHROPIC_API_KEY environment variable.

system_prompt

Optional. A system prompt that defines the assistant's role or behavior.
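
The sketch below (not run) shows how the optional generation arguments can be combined in a single call. The prompt text, stop sequence, and sampling values are illustrative assumptions, not recommended settings.

# Illustrative sketch: combining the optional generation arguments
response <- claudeR(
  prompt = list(
    list(role = "user", content = "List three French cities.")
  ),
  model = "claude-3-5-sonnet-20240620",
  max_tokens = 100,
  stop_sequences = list("END"),
  temperature = 0.5,
  top_k = 40
)
cat(response)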

Examples

if (FALSE) {
library(batchLLM)

# Set the API key in the environment, or pass it with the api_key parameter in the claudeR call
Sys.setenv(ANTHROPIC_API_KEY = "your_anthropic_api_key")

# Using Claude-2
response <- claudeR(
  prompt = "What is the capital of France?",
  model = "claude-2.1",
  max_tokens = 50
)
cat(response)

# Using Claude-3
response <- claudeR(
  prompt = list(
    list(role = "user", content = "What is the capital of France?")
  ),
  model = "claude-3-5-sonnet-20240620",
  max_tokens = 50,
  temperature = 0.8
)
cat(response)

# Using a system prompt
response <- claudeR(
  prompt = list(
    list(role = "user", content = "Summarize the history of France in one paragraph.")
  ),
  system_prompt = "You are a concise summarization assistant.",
  max_tokens = 500
)
cat(response)
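
# Passing the API key directly via the api_key argument instead of the
# environment variable (illustrative sketch; "your_anthropic_api_key" is a placeholder)
response <- claudeR(
  prompt = list(
    list(role = "user", content = "What is the capital of France?")
  ),
  max_tokens = 50,
  api_key = "your_anthropic_api_key"
)
cat(response)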
}
