- tbl
A data frame
- output
Output column to be created, as a string or symbol.
- input
Input column that gets split, as a string or symbol.
The output/input arguments are passed by expression and support
quasiquotation; you can unquote strings and symbols.
- token
Unit for tokenizing, or a custom tokenizing function. Built-in
options are "words" (default), "characters", "character_shingles", "ngrams",
"skip_ngrams", "sentences", "lines", "paragraphs", "regex", and "tweets"
(tokenization by word that preserves usernames, hashtags, and URLs), and
"ptb" (Penn Treebank). If a function, it should take a character vector and
return a list of character vectors of the same length.
- format
Either "text", "man", "latex", "html", or "xml". When the
format is "text", this function uses the tokenizers package. If not "text",
it uses the hunspell tokenizer and can tokenize only by "word".
- to_lower
Whether to convert tokens to lowercase. If tokens include
URLs (such as with token = "tweets"), such converted URLs may no
longer be correct.
- drop
Whether the original input column should be dropped. Ignored
if the original input and new output columns have the same name.
- collapse
A character vector of variables to collapse text across, or NULL.
For tokens like n-grams or sentences, text can be collapsed across rows
within variables specified by collapse before tokenization. At tidytext
0.2.7, the default behavior for collapse = NULL changed to be more
consistent; the new behavior is that text is not collapsed for NULL.
Grouping the data specifies variables to collapse across in the same way
as collapse, but you cannot use both the collapse argument and grouped
data. Collapsing applies mostly to token options of "ngrams",
"skip_ngrams", "sentences", "lines", "paragraphs", or "regex".
- ...
Extra arguments passed on to tokenizers, such
as strip_punct for "words" and "tweets", n and k for
"ngrams" and "skip_ngrams", strip_url for "tweets", and
pattern for "regex".
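
As a sketch of how these arguments fit together (the data frame and column
names here are hypothetical example data, not part of the function's API):

```r
library(dplyr)
library(tidytext)

# A small data frame with one row of raw text per line
d <- tibble(line = 1:2,
            txt = c("Because I could not stop for Death,",
                    "He kindly stopped for me;"))

# Default tokenization by "words": each row of `txt` is split into
# one lowercase word per output row, and the input column is dropped
# (drop = TRUE by default)
d %>% unnest_tokens(word, txt)

# Bigrams: n is one of the extra arguments passed through ... to the
# underlying tokenizer
d %>% unnest_tokens(bigram, txt, token = "ngrams", n = 2)

# Keep the original text column and case alongside the tokens
d %>% unnest_tokens(word, txt, to_lower = FALSE, drop = FALSE)
```

Because output and input support quasiquotation, the column names can also
be supplied programmatically (for example, unquoting a string stored in a
variable) rather than typed literally.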