This is the Amazon Rekognition API reference.
rekognition(config = list())
A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
Optional configuration of credentials, endpoint, and/or region.
svc <- rekognition(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string"
    ),
    endpoint = "string",
    region = "string"
  )
)
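As a sketch of a more typical setup, you can omit the explicit credentials and let the client fall back on the default AWS credential chain (environment variables, shared credentials file, or an IAM role), supplying only a region. The region value here is a placeholder:

```r
# Minimal sketch: rely on the default AWS credential chain and
# set only the region; "us-east-1" is a placeholder.
svc <- rekognition(
  config = list(
    region = "us-east-1"
  )
)
```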
compare_faces | Compares a face in the source input image with each of the 100 largest faces detected in the target input image |
create_collection | Creates a collection in an AWS Region |
create_project | Creates a new Amazon Rekognition Custom Labels project |
create_project_version | Creates a new version of a model and begins training |
create_stream_processor | Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces in a streaming video |
delete_collection | Deletes the specified collection |
delete_faces | Deletes faces from a collection |
delete_project | Deletes an Amazon Rekognition Custom Labels project |
delete_project_version | Deletes an Amazon Rekognition Custom Labels model |
delete_stream_processor | Deletes the stream processor identified by Name |
describe_collection | Describes the specified collection |
describe_projects | Lists and gets information about your Amazon Rekognition Custom Labels projects |
describe_project_versions | Lists and describes the models in an Amazon Rekognition Custom Labels project |
describe_stream_processor | Provides information about a stream processor created by CreateStreamProcessor |
detect_custom_labels | Detects custom labels in a supplied image by using an Amazon Rekognition Custom Labels model |
detect_faces | Detects faces within an image that is provided as input |
detect_labels | Detects instances of real-world entities within an image (JPEG or PNG) provided as input |
detect_moderation_labels | Detects unsafe content in a specified JPEG or PNG format image |
detect_protective_equipment | Detects Personal Protective Equipment (PPE) worn by people detected in an image |
detect_text | Detects text in the input image and converts it into machine-readable text |
get_celebrity_info | Gets the name and additional information about a celebrity based on his or her Amazon Rekognition ID |
get_celebrity_recognition | Gets the celebrity recognition results for an Amazon Rekognition Video analysis started by StartCelebrityRecognition |
get_content_moderation | Gets the unsafe content analysis results for an Amazon Rekognition Video analysis started by StartContentModeration |
get_face_detection | Gets face detection results for an Amazon Rekognition Video analysis started by StartFaceDetection |
get_face_search | Gets the face search results for an Amazon Rekognition Video face search started by StartFaceSearch |
get_label_detection | Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection |
get_person_tracking | Gets the path tracking results of an Amazon Rekognition Video analysis started by StartPersonTracking |
get_segment_detection | Gets the segment detection results of an Amazon Rekognition Video analysis started by StartSegmentDetection |
get_text_detection | Gets the text detection results of an Amazon Rekognition Video analysis started by StartTextDetection |
index_faces | Detects faces in the input image and adds them to the specified collection |
list_collections | Returns list of collection IDs in your account |
list_faces | Returns metadata for faces in the specified collection |
list_stream_processors | Gets a list of stream processors that you have created with CreateStreamProcessor |
recognize_celebrities | Returns an array of celebrities recognized in the input image |
search_faces | For a given input face ID, searches for matching faces in the collection the face belongs to |
search_faces_by_image | For a given input image, first detects the largest face in the image, and then searches the specified collection for matching faces |
start_celebrity_recognition | Starts asynchronous recognition of celebrities in a stored video |
start_content_moderation | Starts asynchronous detection of unsafe content in a stored video |
start_face_detection | Starts asynchronous detection of faces in a stored video |
start_face_search | Starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video |
start_label_detection | Starts asynchronous detection of labels in a stored video |
start_person_tracking | Starts the asynchronous tracking of a person's path in a stored video |
start_project_version | Starts the running of the version of a model |
start_segment_detection | Starts asynchronous detection of segments in a stored video |
start_stream_processor | Starts processing a stream processor |
start_text_detection | Starts asynchronous detection of text in a stored video |
stop_project_version | Stops a running model |
stop_stream_processor | Stops a running stream processor that was created by CreateStreamProcessor |
if (FALSE) {
  svc <- rekognition()
  # This operation compares the largest face detected in the source image
  # with each face detected in the target image.
  svc$compare_faces(
    SimilarityThreshold = 90L,
    SourceImage = list(
      S3Object = list(
        Bucket = "mybucket",
        Name = "mysourceimage"
      )
    ),
    TargetImage = list(
      S3Object = list(
        Bucket = "mybucket",
        Name = "mytargetimage"
      )
    )
  )
}
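In the same style, here is a hedged sketch of calling detect_labels on an image stored in S3. The bucket and object names are placeholders, and the call requires valid AWS credentials; the response fields follow the Amazon Rekognition DetectLabels response shape:

```r
if (FALSE) {
  svc <- rekognition()
  # Detect up to 10 real-world labels (objects, scenes, concepts) in a
  # JPEG or PNG stored in S3; only labels with at least 75% confidence
  # are returned. "mybucket" and "myimage.jpg" are placeholder names.
  resp <- svc$detect_labels(
    Image = list(
      S3Object = list(
        Bucket = "mybucket",
        Name = "myimage.jpg"
      )
    ),
    MaxLabels = 10L,
    MinConfidence = 75L
  )
  # Each element of resp$Labels carries a Name and a Confidence score.
  for (label in resp$Labels) {
    cat(label$Name, label$Confidence, "\n")
  }
}
```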