Sends messages to the specified Amazon Bedrock model and returns the response in a stream. converse_stream
provides a consistent API that works with all Amazon Bedrock models that support messages. This allows you to write code once and use it with different models. Should a model have unique inference parameters, you can also pass those unique parameters to the model.
See https://www.paws-r-sdk.com/docs/bedrockruntime_converse_stream/ for full documentation.
bedrockruntime_converse_stream(
  modelId,
  messages = NULL,
  system = NULL,
  inferenceConfig = NULL,
  toolConfig = NULL,
  guardrailConfig = NULL,
  additionalModelRequestFields = NULL,
  promptVariables = NULL,
  additionalModelResponseFieldPaths = NULL,
  requestMetadata = NULL,
  performanceConfig = NULL
)
[required] Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.
The Converse API doesn't support imported models.
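As a sketch, a minimal streaming call against an on-demand base model might look like the following. The model ID and prompt text are placeholders; substitute any base model your account can access, and assume AWS credentials are already configured for paws.

```r
library(paws)

# Create a Bedrock Runtime client (assumes configured AWS credentials)
svc <- bedrockruntime()

# Minimal streaming request with a placeholder base model ID
resp <- svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",
  messages = list(
    list(
      role = "user",
      content = list(
        list(text = "Write a haiku about rivers.")
      )
    )
  )
)
```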
The messages that you want to send to the model.
A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.
Inference parameters to pass to the model. converse and converse_stream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
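The base inference parameters can be sketched as a named list; the values below are illustrative, not recommendations:

```r
# Base inference parameters supported by converse/converse_stream
inference_config <- list(
  maxTokens = 512,             # cap on generated tokens
  temperature = 0.5,           # sampling temperature
  topP = 0.9,                  # nucleus sampling cutoff
  stopSequences = list("END")  # stop generation at these strings
)
```

Pass the list as the inferenceConfig argument of converse_stream.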
Configuration information for the tools that the model can use when generating a response.
For information about models that support streaming tool use, see Supported models and model features.
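A tool configuration is a nested list of tool specifications. The tool below (a weather lookup named get_weather) is hypothetical and only illustrates the shape:

```r
# Hypothetical tool the model may request during generation
tool_config <- list(
  tools = list(
    list(
      toolSpec = list(
        name = "get_weather",
        description = "Look up the current weather for a city.",
        inputSchema = list(
          json = list(
            type = "object",
            properties = list(
              city = list(type = "string")
            ),
            required = list("city")
          )
        )
      )
    )
  )
)
```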
Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.
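A guardrail configuration for a streaming request might look like the sketch below; the guardrail identifier and version are placeholders for a guardrail you have created in your account:

```r
# Placeholder guardrail identifier and version
guardrail_config <- list(
  guardrailIdentifier = "gr-abcd1234",
  guardrailVersion = "1",
  trace = "enabled",
  # Streaming-specific: "sync" waits for guardrail evaluation before
  # returning events; "async" favors lower latency
  streamProcessingMode = "sync"
)
```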
Additional inference parameters that the model supports, beyond the base set of inference parameters that converse and converse_stream support in the inferenceConfig field. For more information, see Model parameters.
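For example, some model providers accept a top_k sampling parameter that is not part of the base inferenceConfig set; it can be passed through additionalModelRequestFields as a named list (parameter name and value here are illustrative and model-dependent):

```r
# Model-specific parameter outside the base inferenceConfig set
additional_fields <- list(top_k = 250)
```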
Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the modelId field.
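As a sketch, invoking a managed prompt that declares a variable named topic could look like the following; the prompt ARN is a placeholder, and the variable name must match one defined in your prompt:

```r
library(paws)
svc <- bedrockruntime()

# Placeholder prompt version ARN; promptVariables fills {{topic}}
resp <- svc$converse_stream(
  modelId = "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:1",
  promptVariables = list(
    topic = list(text = "renewable energy")
  )
)
```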
Additional model parameters field paths to return in the response. converse and converse_stream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths.
[ "/stop_sequence" ]
For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation.
converse and converse_stream reject an empty JSON Pointer or incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid, but the requested field is not in the model response, it is ignored by converse.
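In R, the field paths are passed as a list of JSON Pointer strings. The sketch below requests the /stop_sequence field shown in the example JSON above; the model ID is a placeholder, and whether the field appears depends on the model's native response:

```r
library(paws)
svc <- bedrockruntime()

# Request a model-specific field back alongside the standard response
resp <- svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",
  messages = list(
    list(role = "user", content = list(list(text = "Hello")))
  ),
  additionalModelResponseFieldPaths = list("/stop_sequence")
)
```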
Key-value pairs that you can use to filter invocation logs.
Model performance settings for the request.