The ollama() function acts as an interface for interacting with local AI models via the Ollama API. It integrates seamlessly with the main tidyllm verbs such as chat() and embed().
Usage:

ollama(..., .called_from = NULL)
Value:

The result of the requested action:

For chat(): An updated LLMMessage object containing the model's response.
For embed(): A matrix where each column corresponds to an embedding.
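As an illustration, a minimal sketch of both verbs. It assumes a locally running Ollama server, that the models llama3 and nomic-embed-text have already been pulled, and that .model selects the model (adjust names to your setup):

library(tidyllm)

# chat(): send a message and get back an updated LLMMessage
conversation <- llm_message("Name three R packages for plotting.") |>
  chat(ollama(.model = "llama3"))

# embed(): embed a character vector of input texts
embeddings <- c("first text", "second text") |>
  embed(ollama(.model = "nomic-embed-text"))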
Arguments:

...: Parameters to be passed to the appropriate Ollama-specific function, such as model configuration, input text, or API-specific options.

.called_from: An internal argument specifying the verb (e.g., chat, embed) the function is invoked from. This argument is automatically managed by tidyllm and should not be set by the user.
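For example, model configuration can be passed through ... (a sketch; .temperature is assumed here to be among the Ollama-specific options that are forwarded, so check the ollama_chat() documentation for the exact set):

# pass model configuration through ... to the Ollama-specific function
conversation <- llm_message("Explain vector recycling in R.") |>
  chat(ollama(.model = "llama3", .temperature = 0))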
Details:

Some functionalities, such as ollama_download_model() or ollama_list_models(), are unique to the Ollama API and have no general verb counterpart. These functions can only be called directly.
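A sketch of calling these directly (the argument to ollama_download_model() is shown positionally and is assumed to be the model name; consult that function's help page for the exact signature):

# list the models available on the local Ollama server
ollama_list_models()

# pull a model from the Ollama registry (argument assumed to be the model name)
ollama_download_model("llama3")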
Supported Verbs:

chat(): Sends a message to an Ollama model and retrieves the model's response.

embed(): Generates embeddings for input texts using an Ollama model.

send_batch(): Behaves differently from the other send_batch() verbs, since it immediately processes the requests and returns the answers rather than submitting an asynchronous batch job.
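A sketch under that assumption (the completed messages are assumed to come back directly from send_batch(), with no separate fetch step):

# a list of LLMMessage objects to process
batch <- list(
  llm_message("Summarise the mtcars dataset."),
  llm_message("What does lm() do in R?")
)

# processed immediately; each element is an updated LLMMessage
answers <- send_batch(batch, ollama(.model = "llama3"))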