
tidyllm

tidyllm is an R package designed to access various large language model APIs, including Claude, ChatGPT, Groq, Mistral, and local models via Ollama. Built for simplicity and functionality, it helps you generate text, analyze media, and integrate model feedback into your data workflows with ease.

Features

  • Multiple Model Support: Seamlessly switch between model providers like Claude, ChatGPT, Groq, Mistral, or Ollama, using the best of what each has to offer.
  • Media Handling: Extract and process text from PDFs and capture console outputs for messaging. Upload image files or the last plot pane to multimodal models.
  • Interactive Messaging History: Manage an ongoing conversation with models, maintaining a structured history of messages and media interactions that is automatically formatted for each API.
  • Batch Processing: Efficiently handle large workloads with the Anthropic and OpenAI batch processing APIs, reducing costs by up to 50%.
  • Tidy Workflow: Use R's functional programming features for a side-effect-free, pipeline-oriented operation style.
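The batch-processing workflow follows a send/check/fetch pattern using the functions listed in the package reference below. A minimal sketch (exact argument names may differ from the current API; see the function documentation):

```r
library("tidyllm")

# Prepare several messages as a list and submit them as one batch job
prompts <- c("Summarise the iris dataset in one sentence.",
             "Summarise the mtcars dataset in one sentence.")
batch <- lapply(prompts, llm_message) |>
  send_claude_batch()

# Later: poll the batch status, then fetch the results once finished
check_claude_batch(batch)
results <- fetch_claude_batch(batch)
```

Batch jobs run asynchronously on the provider's side, which is what enables the reduced per-token pricing mentioned above.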

Installation

To install tidyllm from CRAN, use:

install.packages("tidyllm")

Or for the development version from GitHub:

# Install devtools if not already installed
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
devtools::install_github("edubruell/tidyllm")

Basic Example

Here’s a quick example using tidyllm to describe an image with the Claude model and follow up with a local open-source model:

library("tidyllm")

# Describe an image with Claude
conversation <- llm_message("Describe this image", 
                            .imagefile = "image.png") |>
  claude()

# Follow up with a local open-source model via Ollama
conversation |>
  llm_message("Based on the previous description,
  what could the research in the figure be about?") |>
  ollama(.model = "gemma2")
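Replies can also be pulled out of a conversation for downstream use; for example, last_reply() (listed in the function reference below) returns the last assistant answer as text. A minimal sketch:

```r
# Extract the assistant's last answer as plain text
description <- conversation |>
  last_reply()

# e.g. store it in a data frame alongside the image file name
results <- data.frame(image = "image.png", description = description)
```

This keeps model output in ordinary R objects, so it slots directly into tidy data pipelines.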

For more examples and advanced usage, check the Get Started vignette.

Please note: To use tidyllm, you need either a local Ollama installation or an active API key for one of the supported providers (e.g., Claude, ChatGPT). See the Get Started vignette for setup instructions.
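API keys are typically supplied via environment variables; the exact variable name each provider expects is documented in the Get Started vignette. As an illustration (the variable name below is the usual Anthropic convention, not confirmed here):

```r
# Set an API key for the current session
Sys.setenv(ANTHROPIC_API_KEY = "your-api-key-here")

# For persistent use, add the key to your .Renviron file instead,
# e.g. by editing it with usethis::edit_r_environ()
```

Storing keys in .Renviron keeps them out of scripts and version control.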

Learn More

For detailed instructions and advanced features, see the Get Started vignette and the package reference below.

Contributing

We welcome contributions! Feel free to open issues or submit pull requests on GitHub.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Monthly Downloads: 631

Version: 0.2.0

License: MIT + file LICENSE

Maintainer: Eduard Brüll

Last Published: November 7th, 2024

Functions in tidyllm (0.2.0)

  • list_openai_batches: List OpenAI Batch Requests
  • mistral: Send LLMMessage to Mistral API
  • ollama_embedding: Generate Embeddings Using Ollama API
  • ollama_list_models: Retrieve and return model information from the Ollama API
  • openai: Send LLM Messages to the OpenAI Chat Completions API
  • send_openai_batch: Send a Batch of Messages to OpenAI Batch API
  • tidyllm-package: tidyllm: Tidy Integration of Large Language Models
  • openai_embedding: Generate Embeddings Using OpenAI API
  • ratelimit_from_header: Extract rate limit information from API response headers
  • ollama_download_model: Download a model from the Ollama API
  • get_reply_data: Get Data from an Assistant Reply by parsing structured JSON responses
  • get_reply: Get Assistant Reply as Text
  • send_claude_batch: Send a Batch of Messages to Claude API
  • ollama: Interact with local AI models via the Ollama API
  • mistral_embedding: Generate Embeddings Using Mistral API
  • tidyllm_schema: Create a JSON schema for structured outputs
  • llm_message: Create or Update Large Language Model Message Object
  • pdf_page_batch: Batch Process PDF into LLM Messages
  • update_rate_limit: Update the standard API rate limit info in the hidden .tidyllm_rate_limit_env environment
  • perform_api_request: Perform an API request to interact with language models
  • rate_limit_info: Get the current rate limit information for all or a specific API
  • parse_duration_to_seconds: Internal function that parses duration strings as returned by the OpenAI API
  • LLMMessage: Large Language Model Message Class
  • check_openai_batch: Check Batch Processing Status for OpenAI Batch API
  • check_claude_batch: Check Batch Processing Status for Claude API
  • fetch_claude_batch: Fetch Results for a Claude Batch
  • chatgpt: ChatGPT Wrapper (Deprecated)
  • claude: Interact with Claude AI models via the Anthropic API
  • df_llm_message: Convert a Data Frame to an LLMMessage Object
  • generate_callback_function: Generate API-Specific Callback Function for Streaming Responses
  • azure_openai: Send LLM Messages to an OpenAI Chat Completions endpoint on Azure
  • fetch_openai_batch: Fetch Results for an OpenAI Batch
  • last_reply: Get the Last Assistant Reply as Text
  • list_claude_batches: List Claude Batch Requests
  • last_user_message: Retrieve the Last User Message
  • groq: Send LLM Messages to the Groq Chat API
  • get_user_message: Retrieve a User Message by Index
  • last_reply_data: Get the Last Assistant Reply as Text
  • groq_transcribe: Transcribe an Audio File Using Groq transcription API
  • initialize_api_env: Initialize or Retrieve API-specific Environment