A regex-based tokenizer that extracts tokens either by using the provided
regex pattern to split the text (the default) or by repeatedly matching the
regex (if gaps is FALSE). An optional parameter also allows filtering out
tokens shorter than a minimal length. The result is an array of strings,
which may be empty.
ft_regex_tokenizer(
  x,
  input_col = NULL,
  output_col = NULL,
  gaps = TRUE,
  min_token_length = 1,
  pattern = "\\s+",
  to_lower_case = TRUE,
  uid = random_string("regex_tokenizer_"),
  ...
)
The object returned depends on the class of x. If it is a spark_connection,
the function returns a ml_transformer or a ml_estimator object. If it is a
ml_pipeline, it will return a pipeline with the transformer or estimator
appended to it. If a tbl_spark, it will return a tbl_spark with the
transformation applied to it.
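A minimal sketch of the tbl_spark case, assuming a local Spark connection and a small hypothetical table (sentences_tbl, with a "text" column); the tokenizer returns the same table with a "tokens" array column appended.

library(sparklyr)

sc <- spark_connect(master = "local")

# Hypothetical two-row text table copied into Spark
sentences <- data.frame(text = c("Spark is fast", "sparklyr wraps Spark ML"))
sentences_tbl <- sdf_copy_to(sc, sentences, overwrite = TRUE)

# Default mode: split the text on runs of whitespace (gaps = TRUE)
sentences_tbl %>%
  ft_regex_tokenizer(input_col = "text", output_col = "tokens")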
x: A spark_connection, ml_pipeline, or a tbl_spark.
input_col: The name of the input column.
output_col: The name of the output column.
gaps: Indicates whether the regex splits on gaps (TRUE) or matches tokens (FALSE).
min_token_length: Minimum token length, greater than or equal to 0.
pattern: The regular expression pattern to be used.
to_lower_case: Indicates whether to convert all characters to lowercase before tokenizing.
uid: A character string used to uniquely identify the feature transformer.
...: Optional arguments; currently unused.
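A hedged illustration of the gaps, pattern, and min_token_length arguments, reusing the hypothetical sentences_tbl from the sketch above: with gaps = FALSE the pattern describes the tokens themselves rather than the separators, and min_token_length drops short matches.

# Match runs of word characters instead of splitting on separators
sentences_tbl %>%
  ft_regex_tokenizer(
    input_col = "text",
    output_col = "tokens",
    pattern = "\\w+",
    gaps = FALSE,
    min_token_length = 2
  )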
Other feature transformers:
ft_binarizer(), ft_bucketizer(), ft_chisq_selector(), ft_count_vectorizer(),
ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_hashing_tf(),
ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh,
ft_max_abs_scaler(), ft_min_max_scaler(), ft_ngram(), ft_normalizer(),
ft_one_hot_encoder_estimator(), ft_one_hot_encoder(), ft_pca(),
ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(),
ft_robust_scaler(), ft_sql_transformer(), ft_standard_scaler(),
ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(),
ft_vector_assembler(), ft_vector_indexer(), ft_vector_slicer(), ft_word2vec()