Performs toxicity analysis on the list of text strings that you provide as input. The API response contains a results list that matches the size of the input list. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.
See https://www.paws-r-sdk.com/docs/comprehend_detect_toxic_content/ for full documentation.
comprehend_detect_toxic_content(TextSegments, LanguageCode)
TextSegments: [required] A list of up to 10 text strings. Each string has a maximum size of 1 KB, and the maximum size of the list is 10 KB.
LanguageCode: [required] The language of the input text. Currently, English is the only supported language.
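A minimal sketch of calling this operation from R, assuming the paws package is installed and AWS credentials are already configured; the segment texts here are illustrative only.

```r
library(paws)

# Create a Comprehend client (uses credentials from the environment,
# shared config files, or an IAM role).
comprehend <- paws::comprehend()

# Each element of TextSegments is a list with a Text field; up to 10
# segments, 1 KB each, 10 KB total.
resp <- comprehend$detect_toxic_content(
  TextSegments = list(
    list(Text = "This is a friendly comment."),
    list(Text = "Another harmless sentence.")
  ),
  LanguageCode = "en"
)

# ResultList matches the size and order of the input list; each result
# carries per-category scores and an overall Toxicity score.
resp$ResultList
```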