Send LLMMessage to the Ollama API
ollama(
  .llm,
  .model = "llama3",
  .stream = FALSE,
  .seed = NULL,
  .json = FALSE,
  .temperature = NULL,
  .num_ctx = 2048,
  .ollama_server = "http://localhost:11434",
  .timeout = 120
)
Returns an updated LLMMessage object.
.llm            An existing LLMMessage object or an initial text prompt.
.model          The model identifier (default: "llama3").
.stream         Whether the answer should be streamed to the console as it arrives (default: FALSE).
.seed           Seed for the random number generator, for reproducible output (optional).
.json           Whether the output should be structured as JSON (default: FALSE).
.temperature    Controls randomness in response generation (optional).
.num_ctx        The size of the context window in tokens (default: 2048).
.ollama_server  The URL of the Ollama server to use (default: "http://localhost:11434").
.timeout        Timeout for the API request in seconds (default: 120).
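A minimal usage sketch. It assumes the package provides an llm_message() constructor that builds an LLMMessage from a text prompt; that name is an assumption and is not part of the signature documented above.

# Build a conversation and send it to a local Ollama server.
# llm_message() is assumed here; substitute the package's actual constructor.
conversation <- llm_message("Why is the sky blue?")

reply <- ollama(
  conversation,
  .model = "llama3",
  .temperature = 0.2,  # lower values give more deterministic answers
  .seed = 42           # fix the seed so repeated runs are reproducible
)

# Ask for structured output instead of free-form text
structured <- ollama(conversation, .model = "llama3", .json = TRUE)

Since ollama() takes an LLMMessage and returns an updated one, calls can also be chained with the base pipe: llm_message("Why is the sky blue?") |> ollama(.model = "llama3").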