Update the tokenizer's internal vocabulary based on a list of texts or a list of sequences.
fit_text_tokenizer(object, x)
object    Tokenizer returned by text_tokenizer().
x         Vector/list of strings, or a generator of strings (for memory efficiency). Alternatively, a list of "sequences" (a sequence is a list of integer word indices).
Other text tokenization: save_text_tokenizer(), sequences_to_matrix(), text_tokenizer(), texts_to_matrix(), texts_to_sequences_generator(), texts_to_sequences()
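
A minimal usage sketch follows; the sample texts, the num_words value, and the inspection of word_index are illustrative assumptions rather than part of this reference.

library(keras)

# Create a tokenizer and fit it on a small, illustrative set of texts.
texts <- c("The quick brown fox", "jumps over the lazy dog")

tokenizer <- text_tokenizer(num_words = 1000)
tokenizer %>% fit_text_tokenizer(texts)

# After fitting, the vocabulary is populated; word_index (an attribute of
# the underlying Keras Tokenizer) maps words to integer indices.
head(tokenizer$word_index)

# The fitted tokenizer can then be used with texts_to_sequences() and
# related functions listed above.
texts_to_sequences(tokenizer, texts)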