Amazon Personalize is a machine learning service that makes it easy to add individualized recommendations for customers.
personalize(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)
A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
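For example, a minimal sketch of creating a client and calling one of its operations, assuming credentials and a default region are available through the standard AWS provider chain (environment variables, shared config file, or an instance role):

# create a client with default configuration
svc <- personalize()

# call an operation with the svc$operation(...) syntax,
# e.g. list the recipes available in this account and region
recipes <- svc$list_recipes()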
Optional configuration of credentials, endpoint, and/or region.

credentials:
  creds:
    access_key_id: AWS access key ID
    secret_access_key: AWS secret access key
    session_token: AWS temporary session token
  profile: The name of a profile to use. If not given, then the default profile is used.
  anonymous: Set anonymous credentials.
endpoint: The complete URL to use for the constructed client.
region: The AWS Region used in instantiating the client.
close_connection: Immediately close all HTTP connections.
timeout: The time in seconds until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
sts_regional_endpoint: Set the STS regional endpoint resolver to regional or legacy. See https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html.
Optional credentials shorthand for the config parameter.

creds:
  access_key_id: AWS access key ID
  secret_access_key: AWS secret access key
  session_token: AWS temporary session token
profile: The name of a profile to use. If not given, then the default profile is used.
anonymous: Set anonymous credentials.
Optional shorthand for the complete URL to use for the constructed client.
Optional shorthand for the AWS Region used in instantiating the client.
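For instance, a sketch that uses the shorthand arguments to supply explicit credentials and a region (the keys are read from environment variables here only to avoid hard-coding secrets; the region is a placeholder):

svc <- personalize(
  credentials = list(
    creds = list(
      access_key_id = Sys.getenv("AWS_ACCESS_KEY_ID"),
      secret_access_key = Sys.getenv("AWS_SECRET_ACCESS_KEY")
    )
  ),
  region = "eu-west-1"
)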
svc <- personalize(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)
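A concrete sketch of the same structure, filling in only the fields you need (the profile name and region below are placeholders):

svc <- personalize(
  config = list(
    credentials = list(
      profile = "my-profile"
    ),
    region = "us-east-1",
    timeout = 120
  )
)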
create_batch_inference_job | Creates a batch inference job |
create_batch_segment_job | Creates a batch segment job |
create_campaign | Creates a campaign that deploys a solution version |
create_dataset | Creates an empty dataset and adds it to the specified dataset group |
create_dataset_export_job | Creates a job that exports data from your dataset to an Amazon S3 bucket |
create_dataset_group | Creates an empty dataset group |
create_dataset_import_job | Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset |
create_event_tracker | Creates an event tracker that you use when adding event data to a specified dataset group using the PutEvents API |
create_filter | Creates a recommendation filter |
create_metric_attribution | Creates a metric attribution |
create_recommender | Creates a recommender with the recipe (a Domain dataset group use case) you specify |
create_schema | Creates an Amazon Personalize schema from the specified schema string |
create_solution | Creates the configuration for training a model |
create_solution_version | Trains or retrains an active solution in a Custom dataset group |
delete_campaign | Removes a campaign by deleting the solution deployment |
delete_dataset | Deletes a dataset |
delete_dataset_group | Deletes a dataset group |
delete_event_tracker | Deletes the event tracker |
delete_filter | Deletes a filter |
delete_metric_attribution | Deletes a metric attribution |
delete_recommender | Deactivates and removes a recommender |
delete_schema | Deletes a schema |
delete_solution | Deletes all versions of a solution and the Solution object itself |
describe_algorithm | Describes the given algorithm |
describe_batch_inference_job | Gets the properties of a batch inference job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations |
describe_batch_segment_job | Gets the properties of a batch segment job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate segments |
describe_campaign | Describes the given campaign, including its status |
describe_dataset | Describes the given dataset |
describe_dataset_export_job | Describes the dataset export job created by CreateDatasetExportJob, including the export job status |
describe_dataset_group | Describes the given dataset group |
describe_dataset_import_job | Describes the dataset import job created by CreateDatasetImportJob, including the import job status |
describe_event_tracker | Describes an event tracker |
describe_feature_transformation | Describes the given feature transformation |
describe_filter | Describes a filter's properties |
describe_metric_attribution | Describes a metric attribution |
describe_recipe | Describes a recipe |
describe_recommender | Describes the given recommender, including its status |
describe_schema | Describes a schema |
describe_solution | Describes a solution |
describe_solution_version | Describes a specific version of a solution |
get_solution_metrics | Gets the metrics for the specified solution version |
list_batch_inference_jobs | Gets a list of the batch inference jobs that have been performed off of a solution version |
list_batch_segment_jobs | Gets a list of the batch segment jobs that have been performed off of a solution version that you specify |
list_campaigns | Returns a list of campaigns that use the given solution |
list_dataset_export_jobs | Returns a list of dataset export jobs that use the given dataset |
list_dataset_groups | Returns a list of dataset groups |
list_dataset_import_jobs | Returns a list of dataset import jobs that use the given dataset |
list_datasets | Returns the list of datasets contained in the given dataset group |
list_event_trackers | Returns the list of event trackers associated with the account |
list_filters | Lists all filters that belong to a given dataset group |
list_metric_attribution_metrics | Lists the metrics for the metric attribution |
list_metric_attributions | Lists metric attributions |
list_recipes | Returns a list of available recipes |
list_recommenders | Returns a list of recommenders in a given Domain dataset group |
list_schemas | Returns the list of schemas associated with the account |
list_solutions | Returns a list of solutions that use the given dataset group |
list_solution_versions | Returns a list of solution versions for the given solution |
list_tags_for_resource | Get a list of tags attached to a resource |
start_recommender | Starts a recommender that is INACTIVE |
stop_recommender | Stops a recommender that is ACTIVE |
stop_solution_version_creation | Stops creating a solution version that is in a state of CREATE_PENDING or CREATE_IN_PROGRESS |
tag_resource | Add a list of tags to a resource |
untag_resource | Remove tags that are attached to a resource |
update_campaign | Updates a campaign by either deploying a new solution or changing the value of the campaign's minProvisionedTPS parameter |
update_dataset | Update a dataset to replace its schema with a new or existing one |
update_metric_attribution | Updates a metric attribution |
update_recommender | Updates the recommender to modify the recommender configuration |
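As an example of combining these operations, a sketch that lists the dataset groups in the account and then describes the first one (this assumes at least one dataset group already exists):

svc <- personalize()

# list the dataset groups in the account
groups <- svc$list_dataset_groups()

# describe the first dataset group by its ARN
detail <- svc$describe_dataset_group(
  datasetGroupArn = groups$datasetGroups[[1]]$datasetGroupArn
)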
if (FALSE) {
  svc <- personalize()
  svc$create_batch_inference_job(
    Foo = 123
  )
}
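The Foo = 123 argument above is only a placeholder. A more realistic sketch, assuming an existing solution version, an S3 file of input records, and an IAM role that Amazon Personalize can assume (all ARNs, bucket paths, and names below are placeholders; the parameter shapes follow the CreateBatchInferenceJob API):

svc <- personalize()

svc$create_batch_inference_job(
  jobName = "my-batch-job",
  solutionVersionArn = "arn:aws:personalize:us-east-1:123456789012:solution/my-solution/version-id",
  roleArn = "arn:aws:iam::123456789012:role/PersonalizeS3Role",
  jobInput = list(
    s3DataSource = list(path = "s3://my-bucket/batch-input.json")
  ),
  jobOutput = list(
    s3DataDestination = list(path = "s3://my-bucket/batch-output/")
  )
)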