tidytext (version 0.2.6)

unnest_tweets: Wrapper around unnest_tokens for tweets

Description

This function is a wrapper around unnest_tokens(token = "tweets").
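Because it is a thin wrapper, a call to unnest_tweets() should produce the same result as the equivalent unnest_tokens() call. A minimal sketch, assuming tidytext 0.2.6 (where the "tweets" tokenizer is available) and dplyr are installed:

```r
library(dplyr)
library(tidytext)

d <- tibble(id = 1, txt = "@rOpenSci and #rstats")

# These two calls tokenize identically; the tweet tokenizer
# preserves @mentions and #hashtags as single tokens.
unnest_tweets(d, word, txt)
unnest_tokens(d, word, txt, token = "tweets")
```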

Usage

unnest_tweets(
  tbl,
  output,
  input,
  strip_punct = TRUE,
  strip_url = FALSE,
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)

Arguments

tbl

A data frame

output

Output column to be created, as a string or symbol.

input

Input column that gets split, as a string or symbol.

The output/input arguments are passed by expression and support quasiquotation; you can unquote strings and symbols.
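Quasiquotation means the column arguments can be given as bare names, as strings, or unquoted from variables with !!. A short sketch under the same assumptions as above (tidytext 0.2.6 with dplyr; rlang supplies sym()):

```r
library(dplyr)
library(rlang)
library(tidytext)

d <- tibble(txt = "@rOpenSci and #rstats")

# Bare symbols and strings are interchangeable:
unnest_tweets(d, word, txt)
unnest_tweets(d, "word", "txt")

# A column name held in a variable can be unquoted with !!:
input_col <- sym("txt")
unnest_tweets(d, word, !!input_col)
```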

strip_punct

Should punctuation be stripped?

strip_url

Should URLs (starting with http(s)) be stripped? If FALSE (the default), URLs are preserved intact; if TRUE, they are removed entirely.

format

Either "text", "man", "latex", "html", or "xml". If not "text", this uses the hunspell tokenizer and can tokenize only by "word".

to_lower

Whether to convert tokens to lowercase. If tokens include URLs (such as with token = "tweets"), the lowercased URLs may no longer be correct.

drop

Whether the original input column should be dropped. Ignored if the original input and new output columns have the same name.

collapse

Whether to combine text with newlines first in case tokens (such as sentences or paragraphs) span multiple lines. If NULL, collapses when token method is "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", or "regex".

...

Extra arguments passed on to the tokenizer.
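The strip_punct and strip_url arguments control how much of the raw tweet text survives tokenization. A sketch of their effect, again assuming tidytext 0.2.6 with dplyr (exact token output depends on the underlying tweet tokenizer):

```r
library(dplyr)
library(tidytext)

d <- tibble(txt = "See https://cran.r-project.org! #rstats")

# Defaults: punctuation stripped, URLs kept intact
unnest_tweets(d, word, txt)

# strip_url = TRUE removes URLs from the output entirely
unnest_tweets(d, word, txt, strip_url = TRUE)
```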

See Also

unnest_tokens

Examples

library(dplyr)
library(tidytext)

tweets <- tibble(
  id = 1,
  txt = "@rOpenSci and #rstats see: https://cran.r-project.org"
)

tweets %>%
  unnest_tweets(out, txt)
