- .llm
An LLMMessage object containing the conversation history and system prompt.
- .model
Character string specifying the Claude model version (default: "claude-3-5-sonnet-20241022").
- .max_tokens
Integer specifying the maximum number of tokens in the response (default: 1024).
- .temperature
Numeric between 0 and 1 controlling response randomness; lower values produce more deterministic output.
- .top_k
Integer restricting sampling to the K most probable tokens at each step, limiting output diversity.
- .top_p
Numeric between 0 and 1 for nucleus sampling.
- .metadata
List of additional metadata to include with the request.
- .stop_sequences
Character vector of sequences that will halt response generation.
- .tools
List of additional tools or functions the model can use.
- .json_schema
A JSON schema used to enforce a structured output format.
- .api_url
Base URL for the Anthropic API (default: "https://api.anthropic.com/").
- .verbose
Logical; if TRUE, displays additional information about the API call (default: FALSE).
- .max_tries
Integer specifying the maximum number of retries for a failed request.
- .timeout
Integer specifying the request timeout in seconds (default: 60).
- .stream
Logical; if TRUE, streams the response piece by piece (default: FALSE).
- .dry_run
Logical; if TRUE, returns the prepared request object without executing it (default: FALSE).
- .thinking
Logical; if TRUE, enables Claude's thinking mode for complex reasoning tasks (default: FALSE).
- .thinking_budget
Integer specifying the maximum tokens Claude can spend on thinking (default: 1024). Must be at least 1024.
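
The arguments above are typically combined in a single call. The sketch below assumes this list documents a Claude chat function in the style of the tidyllm package; the names `claude_chat()` and `llm_message()` are assumptions, since the section itself does not name the calling function.

```r
# A minimal usage sketch. The function names claude_chat() and llm_message(),
# and the tidyllm package itself, are assumptions; this section documents
# the arguments but does not name the calling function.
library(tidyllm)

# Build the LLMMessage object that is passed as .llm
conversation <- llm_message("Explain nucleus sampling in one short paragraph.")

# A basic request using several of the arguments described above
reply <- conversation |>
  claude_chat(
    .model          = "claude-3-5-sonnet-20241022",
    .max_tokens     = 1024,
    .temperature    = 0.3,
    .stop_sequences = c("\n\nHuman:"),
    .timeout        = 60,
    .max_tries      = 3,
    .verbose        = TRUE
  )

# .dry_run = TRUE returns the prepared request object without sending it,
# which is handy for inspecting the request before making an API call
req <- conversation |> claude_chat(.dry_run = TRUE)

# Thinking mode: .thinking_budget must be at least 1024; here .max_tokens is
# also raised so the final answer has room beyond the thinking budget
deep_reply <- conversation |>
  claude_chat(
    .thinking        = TRUE,
    .thinking_budget = 2048,
    .max_tokens      = 4096
  )
```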