Indexing categorical feature columns in a dataset of Vector columns.
ft_vector_indexer(
x,
input_col = NULL,
output_col = NULL,
handle_invalid = "error",
max_categories = 20,
uid = random_string("vector_indexer_"),
...
)
The object returned depends on the class of x. If it is a spark_connection, the function returns a ml_transformer or a ml_estimator object. If it is a ml_pipeline, it returns a pipeline with the transformer or estimator appended to it. If a tbl_spark, it returns a tbl_spark with the transformation applied to it.
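For example, the ml_pipeline case appends the indexer as an estimator stage. The following is a minimal sketch; the local connection, the mtcars data, and the chosen columns are illustrative assumptions rather than part of this help page:

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# Assemble a few numeric columns into a single Vector column, then
# append the vector indexer as an estimator stage of the pipeline.
pipeline <- ml_pipeline(sc) %>%
  ft_vector_assembler(
    input_cols = c("cyl", "gear", "carb"),
    output_col = "features"
  ) %>%
  ft_vector_indexer(
    input_col = "features",
    output_col = "features_indexed",
    max_categories = 8
  )

# Fitting the pipeline learns the category maps from the data.
pipeline_model <- ml_fit(pipeline, mtcars_tbl)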
x: A spark_connection, ml_pipeline, or a tbl_spark.
input_col: The name of the input column.
output_col: The name of the output column.
handle_invalid: (Spark 2.1.0+) Param for how to handle invalid entries. Options are 'skip' (filter out rows with invalid values), 'error' (throw an error), or 'keep' (keep invalid values in a special additional bucket). Default: "error".
max_categories: Threshold for the number of values a categorical feature can take. If a feature is found to have more than max_categories values, it is declared continuous. Must be greater than or equal to 2. Defaults to 20.
uid: A character string used to uniquely identify the feature transformer.
...: Optional arguments; currently unused.
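As a hedged illustration of these arguments applied directly to a tbl_spark (the data and column choices below are assumptions made for the example):

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# Columns are first assembled into a Vector column; features with at
# most 8 distinct values are then indexed as categorical, while the
# rest are left as continuous.
mtcars_tbl %>%
  ft_vector_assembler(
    input_cols = c("cyl", "gear", "carb"),
    output_col = "features"
  ) %>%
  ft_vector_indexer(
    input_col = "features",
    output_col = "features_indexed",
    max_categories = 8
  )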
In the case where x is a tbl_spark, the estimator fits against x to obtain a transformer, returning a tbl_spark.
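A sketch of that estimator/transformer split using the connection form (again assuming the illustrative mtcars data): the estimator is constructed first, fit with ml_fit(), and the fitted model is then applied with ml_transform().

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
assembled_tbl <- copy_to(sc, mtcars, overwrite = TRUE) %>%
  ft_vector_assembler(
    input_cols = c("cyl", "gear", "carb"),
    output_col = "features"
  )

# Calling the function on the connection returns an (unfitted) estimator.
indexer <- ft_vector_indexer(
  sc,
  input_col = "features",
  output_col = "features_indexed",
  max_categories = 8
)

# ml_fit() learns the category maps; ml_transform() applies them,
# returning a tbl_spark with the indexed feature column.
indexer_model <- ml_fit(indexer, assembled_tbl)
ml_transform(indexer_model, assembled_tbl)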
Other feature transformers: ft_binarizer(), ft_bucketizer(), ft_chisq_selector(), ft_count_vectorizer(), ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_hashing_tf(), ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh, ft_max_abs_scaler(), ft_min_max_scaler(), ft_ngram(), ft_normalizer(), ft_one_hot_encoder_estimator(), ft_one_hot_encoder(), ft_pca(), ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(), ft_regex_tokenizer(), ft_robust_scaler(), ft_sql_transformer(), ft_standard_scaler(), ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(), ft_vector_assembler(), ft_vector_slicer(), ft_word2vec()