tidyllm

tidyllm is an R package designed to access various large language model APIs, including Anthropic Claude, OpenAI, Google Gemini, Perplexity, Groq, Mistral, and local models via Ollama or OpenAI-compatible APIs. Built for simplicity and functionality, it helps you generate text, analyze media, and integrate model feedback into your data workflows with ease.

Features

  • Multiple Model Support: Seamlessly switch between model providers and use the best of what each has to offer.
  • Media Handling: Extract and process text from PDFs and capture console output for messages. Upload image files or the last plot pane to multimodal models. The Gemini API even supports video and audio inputs.
  • Interactive Messaging History: Manage an ongoing conversation with models, maintaining a structured history of messages and media interactions that is automatically formatted for each API.
  • Batch Processing: Efficiently handle large workloads with the Anthropic, OpenAI, or Mistral batch processing APIs, reducing costs by up to 50% (see the sketch after this list).
  • Tidy Workflow: Use R's functional programming features for a side-effect-free, pipeline-oriented operation style.
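
For instance, the batch verbs combine with a provider in the same way as chat(). Here is a minimal sketch of the send/check/fetch cycle, assuming an OpenAI API key is configured; the prompts are placeholders:

library("tidyllm")

# A list of messages to process asynchronously via a batch API
prompts <- list(
  llm_message("Summarise the tidyverse in one sentence."),
  llm_message("Summarise data.table in one sentence.")
)

# Send the batch, then poll its status and fetch results once finished
batch   <- prompts |> send_batch(openai(.model = "gpt-4o"))
batch   |> check_batch()
results <- batch |> fetch_batch()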

Installation

To install tidyllm from CRAN, use:

install.packages("tidyllm")

Or for the development version from GitHub:

# Install devtools if not already installed
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
devtools::install_github("edubruell/tidyllm")

Basic Example

Here’s a quick example using tidyllm to describe an image with the Claude model and follow up with a local open-source model:

library("tidyllm")

# Describe an image with  claude
conversation <- llm_message("Describe this image", 
                              .imagefile = here("image.png")) |>
  chat(claude())

# Follow up with a local open-source model via Ollama
conversation |>
  llm_message("Based on the previous description,
  what could the research in the figure be about?") |>
  chat(ollama(.model = "gemma2"))
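
The reply can also be pulled back out of the conversation object for downstream use; get_reply() returns the assistant's latest answer as plain text:

# Extract the assistant's most recent reply as a character vector
conversation |> get_reply()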

For more examples and advanced usage, check the Get Started vignette.

Please note: To use tidyllm, you need either a local Ollama installation or an active API key for one of the supported providers (e.g., Anthropic Claude, OpenAI). See the Get Started vignette for setup instructions.

Interface change in 0.3.0

The CRAN release of tidyllm 0.3.0 introduced a major interface change to provide a more intuitive user experience. Previously, provider-specific functions like claude(), openai(), and others were used directly for chat-based workflows: they both specified an API provider and performed a chat interaction. Now, these functions primarily serve as provider configuration for more general verbs like chat(), embed(), or send_batch(). A combination of a general verb and a provider always routes the request to a provider-specific function such as openai_chat(). Read the Changelog or the package vignette for more information.

For backward compatibility, using functions like openai() or claude() directly for chat requests still works, but now issues deprecation warnings. It is recommended to either use the verb-based interface:

llm_message("Hallo") |> chat(openai(.model="gpt-4o"))

or to use the more verbose provider-specific functions directly:

llm_message("Hallo") |> openai_chat(.model="gpt-4o")

Learn More

For detailed instructions and advanced features, see the Get Started vignette and the package documentation.

Similar packages

There are some similar R packages for working with LLMs:

  • ellmer is especially great for asynchronous chats, chatbots in Shiny and advanced tool-calling capabilities. Its schema functions offer robust support for complex structured data extraction, making it a great choice for applications that require highly interactive or structured LLM interactions. While ellmer’s feature set overlaps with tidyllm in some areas, its interface and design philosophy are very different.
  • rollama is specifically designed to support the Ollama API, enabling seamless interaction with local LLM models. A key strength of rollama lies in its specialized Ollama API functionalities, such as copy and create, which are not currently available in tidyllm. These features make rollama particularly suited for workflows requiring model management or deployment within the Ollama ecosystem.

Contributing

We welcome contributions! Feel free to open issues or submit pull requests on GitHub.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Package Details

  • Version: 0.3.4
  • License: MIT + file LICENSE
  • Maintainer: Eduard Brüll
  • Last Published: March 27th, 2025
  • Monthly Downloads: 631

Functions in tidyllm (0.3.4)

  • azure_openai: Azure OpenAI Endpoint Provider Function
  • azure_openai_chat: Send LLM Messages to an Azure OpenAI Chat Completions Endpoint
  • azure_openai_embedding: Generate Embeddings Using OpenAI API on Azure
  • cancel_openai_batch: Cancel an In-Progress OpenAI Batch
  • chat: Chat with a Language Model
  • chatgpt: Alias for the OpenAI Provider Function
  • check_azure_openai_batch: Check Batch Processing Status for Azure OpenAI Batch API
  • check_batch: Check Batch Processing Status
  • check_claude_batch: Check Batch Processing Status for Claude API
  • check_groq_batch: Check Batch Processing Status for Groq API
  • check_mistral_batch: Check Batch Processing Status for Mistral Batch API
  • check_openai_batch: Check Batch Processing Status for OpenAI Batch API
  • claude: Provider Function for Claude Models on the Anthropic API
  • claude_chat: Interact with Claude AI Models via the Anthropic API
  • claude_list_models: List Available Models from the Anthropic Claude API
  • deepseek: DeepSeek Provider Function
  • deepseek_chat: Send LLM Messages to the DeepSeek Chat API
  • df_llm_message: Convert a Data Frame to an LLMMessage Object
  • embed: Generate Text Embeddings
  • fetch_azure_openai_batch: Fetch Results for an Azure OpenAI Batch
  • fetch_batch: Fetch Results from a Batch API
  • fetch_claude_batch: Fetch Results for a Claude Batch
  • fetch_groq_batch: Fetch Results for a Groq Batch
  • fetch_mistral_batch: Fetch Results for a Mistral Batch
  • fetch_openai_batch: Fetch Results for an OpenAI Batch
  • field_chr: Define Field Descriptors for JSON Schema
  • field_object: Define a Nested Object Field
  • gemini: Google Gemini Provider Function
  • gemini_chat: Send LLMMessage to Gemini API
  • gemini_delete_file: Delete a File from Gemini API
  • gemini_embedding: Generate Embeddings Using the Google Gemini API
  • gemini_file_metadata: Retrieve Metadata for a File from Gemini API
  • gemini_list_files: List Files in Gemini API
  • gemini_upload_file: Upload a File to Gemini API
  • get_logprobs: Retrieve Log Probabilities from Assistant Replies
  • get_metadata: Retrieve Metadata from Assistant Replies
  • get_reply: Retrieve Assistant Reply as Text
  • get_reply_data: Retrieve Assistant Reply as Structured Data
  • get_user_message: Retrieve a User Message by Index
  • groq: Groq API Provider Function
  • groq_chat: Send LLM Messages to the Groq Chat API
  • groq_list_models: List Available Models from the Groq API
  • groq_transcribe: Transcribe an Audio File Using the Groq Transcription API
  • img: Create an Image Object
  • list_azure_openai_batches: List Azure OpenAI Batch Requests
  • list_batches: List All Batch Requests on a Batch API
  • list_claude_batches: List Claude Batch Requests
  • list_groq_batches: List Groq Batch Requests
  • list_mistral_batches: List Mistral Batch Requests
  • list_models: List Available Models for a Provider
  • list_openai_batches: List OpenAI Batch Requests
  • llm_message: Create or Update Large Language Model Message Object
  • LLMMessage: Large Language Model Message Class
  • mistral: Mistral Provider Function
  • mistral_chat: Send LLMMessage to Mistral API
  • mistral_embedding: Generate Embeddings Using Mistral API
  • mistral_list_models: List Available Models from the Mistral API
  • ollama: Ollama API Provider Function
  • ollama_chat: Interact with Local AI Models via the Ollama API
  • ollama_delete_model: Delete a Model from the Ollama API
  • ollama_download_model: Download a Model from the Ollama API
  • ollama_embedding: Generate Embeddings Using Ollama API
  • ollama_list_models: Retrieve and Return Model Information from the Ollama API
  • openai: OpenAI Provider Function
  • openai_chat: Send LLM Messages to the OpenAI Chat Completions API
  • openai_embedding: Generate Embeddings Using OpenAI API
  • openai_list_models: List Available Models from the OpenAI API
  • pdf_page_batch: Batch Process PDF into LLM Messages
  • perplexity: Perplexity Provider Function
  • perplexity_chat: Send LLM Messages to the Perplexity Chat API
  • rate_limit_info: Get the Current Rate Limit Information for All or a Specific API
  • send_azure_openai_batch: Send a Batch of Messages to Azure OpenAI Batch API
  • send_batch: Send a Batch of Messages to a Batch API
  • send_claude_batch: Send a Batch of Messages to Claude API
  • send_groq_batch: Send a Batch of Messages to the Groq API
  • send_mistral_batch: Send a Batch of Requests to the Mistral API
  • send_ollama_batch: Send a Batch of Messages to Ollama API
  • send_openai_batch: Send a Batch of Messages to OpenAI Batch API
  • tidyllm-package: tidyllm: Tidy Integration of Large Language Models
  • tidyllm_schema: Create a JSON Schema for Structured Outputs
  • tidyllm_tool: Create a Tool Definition for tidyllm
  • voyage: Voyage Provider Function
  • voyage_embedding: Generate Embeddings Using Voyage AI API