A preprocessing layer which maps string features to integer indices.
layer_string_lookup(
  object,
  max_tokens = NULL,
  num_oov_indices = 1L,
  mask_token = NULL,
  oov_token = "[UNK]",
  vocabulary = NULL,
  encoding = NULL,
  invert = FALSE,
  output_mode = "int",
  sparse = FALSE,
  pad_to_max_tokens = FALSE,
  ...
)
object: What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()).
The return value depends on object. If object is:
missing or NULL, the Layer instance is returned.
a Sequential model, the model with an additional layer is returned.
a Tensor, the output tensor from layer_instance(object) is returned.
max_tokens: The maximum size of the vocabulary for this layer. If NULL, there is no cap on the size of the vocabulary. Note that this size includes the OOV and mask tokens. Defaults to NULL.
num_oov_indices: The number of out-of-vocabulary tokens to use. If this value is more than 1, OOV inputs are hashed to determine their OOV value. If this value is 0, OOV inputs will cause an error when calling the layer. Defaults to 1.
mask_token: A token that represents masked inputs. When output_mode is "int", the token is included in the vocabulary and mapped to index 0. In other output modes, the token will not appear in the vocabulary and instances of the mask token in the input will be dropped. If set to NULL, no mask term will be added. Defaults to NULL.
oov_token: Only used when invert is TRUE. The token to return for OOV indices. Defaults to "[UNK]".
vocabulary: Optional. Either an array of strings or a string path to a text file. If passing an array, can pass a list, 1D numpy array, or 1D tensor containing the string vocabulary terms. If passing a file path, the file should contain one line per term in the vocabulary. If this argument is set, there is no need to adapt() the layer.
encoding: String encoding. The default of NULL is equivalent to "utf-8".
invert: Only valid when output_mode is "int". If TRUE, this layer will map indices to vocabulary items instead of mapping vocabulary items to indices. Defaults to FALSE.
output_mode: Specification for the output of the layer. Values can be "int", "one_hot", "multi_hot", "count", or "tf_idf", configuring the layer as follows. Defaults to "int".
"int": Return the raw integer indices of the input tokens.
"one_hot": Encodes each individual element in the input into an array the same size as the vocabulary, containing a 1 at the element index. If the last dimension is size 1, will encode on that dimension. If the last dimension is not size 1, will append a new dimension for the encoded output.
"multi_hot": Encodes each sample in the input into a single array the same size as the vocabulary, containing a 1 for each vocabulary term present in the sample. Treats the last dimension as the sample dimension; if input shape is (..., sample_length), output shape will be (..., num_tokens).
"count": As "multi_hot", but the int array contains a count of the number of times the token at that index appeared in the sample.
"tf_idf": As "multi_hot", but the TF-IDF algorithm is applied to find the value in each token slot.
For "int" output, any shape of input and output is supported. For all other output modes, currently only output up to rank 2 is supported.
sparse: Boolean. Only applicable when output_mode is "multi_hot", "count", or "tf_idf". If TRUE, returns a SparseTensor instead of a dense Tensor. Defaults to FALSE.
pad_to_max_tokens: Only applicable when output_mode is "multi_hot", "count", or "tf_idf". If TRUE, the output will have its feature axis padded to max_tokens even if the number of unique tokens in the vocabulary is less than max_tokens, resulting in a tensor of shape [batch_size, max_tokens] regardless of vocabulary size. Defaults to FALSE.
...: Standard layer arguments.
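To make the interaction of max_tokens, pad_to_max_tokens, and the non-"int" output modes concrete, here is a small sketch (assuming the keras R package is attached and a TensorFlow backend is available; variable names are illustrative):

# "count" output padded to max_tokens columns, regardless of vocabulary size.
count_layer <- layer_string_lookup(
  vocabulary = c("a", "b", "c", "d"),
  output_mode = "count",
  max_tokens = 10L,
  pad_to_max_tokens = TRUE
)
count_layer(rbind(c("a", "c", "a"), c("d", "z", "b")))
# Shape (2, 10): 5 used slots (OOV token plus 4 terms), padded up to max_tokens.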
This layer translates a set of arbitrary strings into integer output via a table-based vocabulary lookup.
The vocabulary for the layer must be either supplied on construction or learned via adapt(). During adapt(), the layer will analyze a dataset, determine the frequency of individual string tokens, and create a vocabulary from them. If the vocabulary is capped in size, the most frequent tokens will be used to create the vocabulary and all others will be treated as out-of-vocabulary (OOV).
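As a brief illustration (a sketch assuming the keras R package is attached and a TensorFlow backend is available), the vocabulary can either be passed at construction or learned from data with adapt():

library(keras)

# Option 1: supply a fixed vocabulary up front; no adapt() step is needed.
vocab <- c("a", "b", "c", "d")
layer <- layer_string_lookup(vocabulary = vocab)

# Option 2: learn the vocabulary from data with adapt().
data <- rbind(c("a", "c", "d"), c("d", "z", "b"))
layer2 <- layer_string_lookup()
adapt(layer2, data)
get_vocabulary(layer2)
# The most frequent tokens come first, preceded by the OOV token "[UNK]".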
There are two possible output modes for the layer. When output_mode is "int", input strings are converted to their index in the vocabulary (an integer). When output_mode is "multi_hot", "count", or "tf_idf", input strings are encoded into an array where each dimension corresponds to an element in the vocabulary.
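For example (a sketch reusing the small vocabulary from above; the indicated outputs assume the default mask_token = NULL and num_oov_indices = 1):

# "int" output: each string is replaced by its vocabulary index.
int_layer <- layer_string_lookup(vocabulary = c("a", "b", "c", "d"))
int_layer(rbind(c("a", "c", "d"), c("d", "z", "b")))
# 1 3 4 / 4 0 2 -- "z" is out of vocabulary and maps to the OOV index 0.

# "multi_hot" output: one row per sample, one slot per vocabulary term (OOV first).
mh_layer <- layer_string_lookup(
  vocabulary = c("a", "b", "c", "d"),
  output_mode = "multi_hot"
)
mh_layer(rbind(c("a", "c", "d"), c("d", "z", "b")))
# Shape (2, 5): e.g. 0 1 0 1 1 for the first sample.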
The vocabulary can optionally contain a mask token as well as an OOV token (which can optionally occupy multiple indices in the vocabulary, as set by num_oov_indices). The position of these tokens in the vocabulary is fixed. When output_mode is "int", the vocabulary will begin with the mask token (if set), followed by OOV indices, followed by the rest of the vocabulary. When output_mode is "multi_hot", "count", or "tf_idf", the vocabulary will begin with OOV indices and instances of the mask token will be dropped.
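Setting invert = TRUE reverses this mapping; a minimal sketch, continuing the example above:

# Inverse lookup: integer indices map back to vocabulary strings,
# and any OOV index is returned as the oov_token ("[UNK]" by default).
inv_layer <- layer_string_lookup(
  vocabulary = c("a", "b", "c", "d"),
  invert = TRUE
)
inv_layer(rbind(c(1L, 3L, 4L), c(4L, 0L, 2L)))
# "a" "c" "d" / "d" "[UNK]" "b"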
https://www.tensorflow.org/api_docs/python/tf/keras/layers/StringLookup
https://keras.io/api/layers/preprocessing_layers/categorical/string_lookup
Other categorical features preprocessing layers: layer_category_encoding(), layer_hashing(), layer_integer_lookup()
Other preprocessing layers: layer_category_encoding(), layer_center_crop(), layer_discretization(), layer_hashing(), layer_integer_lookup(), layer_normalization(), layer_random_contrast(), layer_random_crop(), layer_random_flip(), layer_random_height(), layer_random_rotation(), layer_random_translation(), layer_random_width(), layer_random_zoom(), layer_rescaling(), layer_resizing(), layer_text_vectorization()