Learn R Programming

quanteda v0.9.9: Important Changes

This version of the package is a transitional release prior to v1.0. It includes some major API changes (see below), but with most of the older functions retained and deprecated. v0.9.9 also implements many enhancements and performance improvements. See Quanteda Structure and Design for details.

About the package

An R package for managing and analyzing text, created by Kenneth Benoit in collaboration with a team of core contributors: Kohei Watanabe, Paul Nulty, Adam Obeng, Haiyan Wang, Ben Lauderdale, and Will Lowe. Supported by the European Research Council grant ERC-2011-StG 283794-QUANTESS.

For more details, see the package website.

How to cite the package:

To cite package 'quanteda' in publications please use the
following:

  Benoit, Kenneth et al. ().  "quanteda: Quantitative Analysis of
  Textual Data".  R package version 0.9.9-49.
  http://quanteda.io.

A BibTeX entry for LaTeX users is

  @Manual{,
    title = {quanteda: Quantitative Analysis of Textual Data},
    author = {Kenneth Benoit and Kohei Watanabe and Paul Nulty and Adam Obeng and Haiyan Wang and Benjamin Lauderdale and Will Lowe},
    note = {R package version 0.9.9-49},
    url = {http://quanteda.io},
  }

Leave feedback

If you like quanteda, please consider leaving feedback or a testimonial here.

Features

Powerful text analytics

Generalized, flexible corpus management. quanteda provides a comprehensive workflow and ecosystem for managing, processing, and analyzing texts. Documents and associated document- and collection-level metadata are easily loaded and stored as a corpus object, although most of quanteda's operations also work on simple character objects. A corpus is designed to store all of the texts in a collection efficiently, along with metadata for individual documents and for the collection as a whole, making it quick and simple to perform natural language processing tasks such as tokenizing, stemming, or forming ngrams. quanteda's functions for tokenizing texts and forming multiple tokenized documents into a document-feature matrix are both extremely fast and extremely simple to use. quanteda can segment texts easily by words, sentences, paragraphs, or even user-supplied delimiters and tags.
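A minimal sketch of this workflow, with toy texts and a document variable invented for illustration:

```r
library(quanteda)

# a named character vector: names become document names
txts <- c(doc1 = "Spain has a coast. It also has mountains.",
          doc2 = "Portugal has a long coast.")

# build a corpus with a document-level variable
corp <- corpus(txts, docvars = data.frame(country = c("ES", "PT")))

# tokenize, then form bigrams from the tokens
toks <- tokens(corp, remove_punct = TRUE)
tokens_ngrams(toks, n = 2)

# re-segment the corpus into sentence-level documents
corpus_reshape(corp, to = "sentences")
```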

Works nicely with UTF-8. Built on the text processing functions in the stringi package, which is in turn built on C++ implementation of the ICU libraries for Unicode text handling, quanteda pays special attention to fast and correct implementation of Unicode and the handling of text in any character set, following conversion internally to UTF-8.
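For example, case conversion and tokenization behave the same way on non-ASCII input (the sample strings here are invented):

```r
library(quanteda)

# accented characters are lower-cased correctly via stringi/ICU
char_tolower("GRÜSSE AUS ZÜRICH")

# tokenization of non-English UTF-8 text
tokens("Städte wie München und Köln")
```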

Built for efficiency and speed. All of the functions in quanteda are built for maximum performance and scale while still being as R-based as possible. The package makes use of three efficient architectural elements: the stringi package for text processing, the Matrix package for sparse matrix objects, and the data.table package for indexing large documents efficiently. If you can fit it into memory, quanteda will handle it quickly. (And eventually, we will make it possible to process objects even larger than available memory.)

Super-fast conversion of texts into a document-feature matrix. quanteda is principally designed to allow users a fast and convenient method to go from a corpus of texts to a selected matrix of documents by features, after defining and selecting the documents and features. The package makes it easy to redefine documents, for instance by splitting them into sentences or paragraphs, or by tags, as well as to group them into larger documents by document variables, or to subset them based on logical conditions or combinations of document variables. A special variation of the "dfm", a feature co-occurrence matrix, is also implemented, for direct use with embedding and representational models such as text2vec.
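A short sketch of going from raw texts to a dfm and a feature co-occurrence matrix (the toy texts are invented; the fcm call uses an illustrative 3-token window):

```r
library(quanteda)

txts <- c(d1 = "Immigration policy and immigration reform.",
          d2 = "Economic policy and economic reform.")

# documents-by-features sparse matrix
mydfm <- dfm(txts, remove_punct = TRUE)

# feature co-occurrence matrix within a 3-token window,
# suitable as input for embedding models such as text2vec
myfcm <- fcm(txts, context = "window", window = 3)
```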

Extensive feature selection capabilities. The package also implements common NLP feature selection functions, such as removing stopwords and stemming in numerous languages, selecting words found in dictionaries, treating words as equivalent based on a user-defined "thesaurus", and trimming and weighting features based on document frequency, feature frequency, and related measures such as tf-idf.
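These selection steps can be combined, for example (a sketch using the built-in data_char_ukimmig2010 texts and the v0.9.9 argument names):

```r
library(quanteda)

mydfm <- dfm(data_char_ukimmig2010,
             remove = stopwords("english"),  # drop English stopwords
             remove_punct = TRUE,
             stem = TRUE)                    # stem the remaining features

# keep features appearing at least 5 times and in at least 2 documents
dfm_trim(mydfm, min_count = 5, min_docfreq = 2)

# weight features by tf-idf
tfidf(mydfm)
```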

Qualitative exploratory tools. Easily search for and save keywords in context, or identify keywords. Like all of quanteda's pattern-matching functions, these offer the choice of simple "glob" expressions, regular expressions, or fixed pattern matches.
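For instance, the same keyword-in-context search can be expressed as a glob or as a regular expression (a sketch on the built-in data_char_ukimmig2010 texts):

```r
library(quanteda)

# glob matching (the default valuetype)
kwic(data_char_ukimmig2010, "immig*", valuetype = "glob")

# the equivalent regular-expression form
kwic(data_char_ukimmig2010, "^immig", valuetype = "regex")
```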

Dictionary-based analysis. quanteda allows fast and flexible implementation of dictionary methods, including the import and conversion of foreign dictionary formats such as those from Provalis Research's WordStat, the Linguistic Inquiry and Word Count (LIWC), Lexicoder, Yoshikoder, and YAML.
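A minimal sketch of defining a dictionary inline and applying it during dfm construction (the categories and patterns are invented, and the commented import line uses a hypothetical file path):

```r
library(quanteda)

mydict <- dictionary(list(economy = c("tax*", "budget*", "spend*"),
                          borders = c("border*", "visa*", "asylum*")))

# counts of dictionary-category matches per document
dfm(data_char_ukimmig2010, dictionary = mydict)

# importing an external dictionary, e.g. a LIWC-formatted file
# (hypothetical path shown for illustration)
# liwcdict <- dictionary(file = "mydict.dic", format = "LIWC")
```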

Text analytic methods. Once constructed, a dfm can be easily analyzed using quanteda's built-in tools for scaling document positions (the "wordfish" and "Wordscores" models, or direct use with the ca package for correspondence analysis), fitting predictive models using Naive Bayes multinomial and Bernoulli classifiers, computing distance or similarity matrices of features or documents, or computing readability or lexical diversity indexes.
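For example, scaling and a descriptive statistic on a dfm might look like this (a sketch using the built-in Irish budget corpus; the dir = c(6, 5) argument identifying the direction of the wordfish scale is illustrative):

```r
library(quanteda)

iedfm <- dfm(data_corpus_irishbudget2010, remove_punct = TRUE)

# Poisson scaling ("wordfish") of document positions
wf <- textmodel_wordfish(iedfm, dir = c(6, 5))

# lexical diversity (type-token ratio) per document
textstat_lexdiv(iedfm, measure = "TTR")
```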

In addition, a quanteda document-feature matrix is easily used with, or converted for, a number of other text analytic tools, such as:

  • topic models (including converters for direct use with the topicmodels, LDA, and stm packages);

  • machine learning through a variety of other packages that take matrix or matrix-like inputs.

Planned features. Coming soon to quanteda are:

  • Bootstrapping methods for texts that make it easy to resample texts from pre-defined units, to facilitate computing confidence intervals on textual statistics using non-parametric bootstrapping techniques applied to the original texts as data.

  • Additional predictive and analytic methods by expanding the textstat_ and textmodel_ functions. Current textmodel types include correspondence analysis, "Wordscores", "Wordfish", and Naive Bayes; current textstat statistics are readability, lexical diversity, similarity, and distance.

  • Expanded settings for all objects, which will propagate through downstream objects.

  • Object histories, which will propagate through downstream objects, to enhance analytic reproducibility and transparency.

How to Install

  1. From CRAN: Use your GUI's R package installer, or execute:

    install.packages("quanteda") 
  2. From GitHub, using:

    # devtools package required to install quanteda from GitHub
    devtools::install_github("kbenoit/quanteda") 

    Because this compiles some C++ source code, you will need a compiler installed. If you are using a Windows platform, this means you will also need to install the Rtools software available from CRAN. If you are using OS X, you will need to install Xcode, available for free from the App Store, or, if you prefer a lighter footprint, just the Xcode command line tools, installed using the command xcode-select --install from the Terminal.

  3. Additional recommended packages:

    The following packages work well with or extend quanteda and we recommend that you also install them:

Getting Started

See the package website, which includes the Getting Started Vignette.

Demonstration

library(quanteda)

# create a corpus from the immigration texts from UK party platforms
uk2010immigCorpus <- 
    corpus(data_char_ukimmig2010,
           docvars = data.frame(party = names(data_char_ukimmig2010)),
           metacorpus = list(notes = "Immigration-related sections of 2010 UK party manifestos"))
uk2010immigCorpus
## Corpus consisting of 9 documents and 1 docvar.
summary(uk2010immigCorpus)
## Corpus consisting of 9 documents.
## 
##          Text Types Tokens Sentences        party
##           BNP  1126   3330        88          BNP
##     Coalition   144    268         4    Coalition
##  Conservative   252    503        15 Conservative
##        Greens   325    687        21       Greens
##        Labour   296    703        29       Labour
##        LibDem   257    499        14       LibDem
##            PC    80    118         5           PC
##           SNP    90    136         4          SNP
##          UKIP   346    739        27         UKIP
## 
## Source:  /Users/kbenoit/Dropbox (Personal)/GitHub/quanteda/* on x86_64 by kbenoit
## Created: Wed Apr 19 13:16:11 2017
## Notes:   Immigration-related sections of 2010 UK party manifestos

# key words in context for "deport", 3 words of context
kwic(uk2010immigCorpus, "deport", 3)
##                                                                     
##   [BNP, 159]        The BNP will | deport | all foreigners convicted
##  [BNP, 1970]                . 2. | Deport | all illegal immigrants  
##  [BNP, 1976] immigrants We shall | deport | all illegal immigrants  
##  [BNP, 2621]  Criminals We shall | deport | all criminal entrants

# create a dfm, removing stopwords
mydfm <- dfm(uk2010immigCorpus, remove = c("will", stopwords("english")),
             remove_punct = TRUE)
mydfm
## Document-feature matrix of: 9 documents, 1,547 features (83.8% sparse).

topfeatures(mydfm, 20)  # 20 top words
## immigration     british      people      asylum     britain          uk 
##          66          37          35          29          28          27 
##      system  population     country         new  immigrants      ensure 
##          27          21          20          19          17          17 
##       shall citizenship      social    national         bnp     illegal 
##          17          16          14          14          13          13 
##        work     percent 
##          13          12

# plot a word cloud
set.seed(100)
textplot_wordcloud(mydfm, min.freq = 6, random.order = FALSE,
                   rot.per = .25, 
                   colors = RColorBrewer::brewer.pal(8,"Dark2"))

Contributing

Contributions in the form of feedback, comments, code, and bug reports are most welcome. How to contribute:


Install

install.packages('quanteda')

Monthly Downloads

22,050

Version

0.9.9-50

License

GPL-3

Maintainer

Kenneth Benoit

Last Published

April 20th, 2017

Functions in quanteda (0.9.9-50)

as.tokens.collocations

coercion and checking functions for tokens objects
as.dist.dist

coerce a dist into a dist
as.list.dist

coerce a dist object into a list
attributes<-

R-like alternative to reassign_attributes()
bootstrap_dfm

bootstrap a dfm
corpus_subset

extract a subset of a corpus
as.yaml

convert quanteda dictionary objects to the YAML format
data_dfm_LBGexample

dfm from data in Table 1 of Laver, Benoit, and Garry (2003)
deprecate_argument

issue warning for deprecated function arguments
corpus_trimsentences

remove sentences based on their token lengths or a pattern match
dfm_lookup

apply a dictionary to a dfm
dfm_sample

randomly sample documents or features from a dfm
docnames

get or set document names
docvars

get or set for document-level variables
features2list

convert various input as features to a simple list
metadoc

get or set document-level meta-data
ndoc

count the number of documents or features
as.corpus

coerce a compressed corpus to a standard corpus
as.corpus.corpuszip

coerce a compressed corpus to a standard corpus
features

deprecated function name for featnames
dfm2lsa

convert a dfm to an lsa "textmatrix"
dfm_compress

compress a dfm or fcm by combining identical dimension elements
dfm_weight

weight the feature frequencies in a dfm
quanteda-package

An R package for the quantitative analysis of textual data
quanteda_options

get or set package options for quanteda
subset.corpus

deprecated name for corpus_subset
cbind.dfm

Combine dfm objects by Rows or Columns
changeunits

deprecated name for corpus_reshape
compress

compress a dfm by combining similarly named dimensions
View

View methods for quanteda
applyDictionary

apply a dictionary or thesaurus to an object
corpus

construct a corpus object
summary.character

summarize a corpus or a vector of texts
corpus_reshape

recast the document units of a corpus
data_char_sampletext

a paragraph of text for testing various text-based functions
data_char_ukimmig2010

immigration-related sections of 2010 UK party manifestos
dfm_select

select features from a dfm or fcm
dfm_sort

sort a dfm by frequency of one or more margins
fcm_sort

sort an fcm in alphabetical order of the features
convert-wrappers

convenience wrappers for dfm convert
data-deprecated

datasets with deprecated or defunct names
data-internal

internal data sets
featnames

get the feature labels from a dfm
joinTokens

join tokens function
keyness

compute keyness (internal functions)
dictionary-class

print a dictionary object
is.dfm

coercion and checking functions for dfm objects
is.dictionary

check if an object is a dictionary
dfm_tolower

convert the case of the features of a dfm and combine
dfm_trim

trim a dfm using frequency threshold-based feature selection
kwic

locate keywords-in-context
plot-deprecated

deprecated plotting functions
predict.textmodel_NB_fitted

prediction method for Naive Bayes classifier objects
print.dfm

print a dfm object
metacorpus

get or set corpus metadata
scrabble

deprecated name for nscrabble
segment

segment: deprecated function
syllables

deprecated name for nsyllable
textfile

old function to read texts from files
print.dist_selection

print a dist_selection object
nsentence

count the number of sentences
nsyllable

count syllables in a text
selectFeatures

select features from an object
textplot_scale1d

plot a fitted scaling model
textmodel_fitted-class

the fitted textmodel classes
textmodel_wordfish

wordfish text model
textplot_wordcloud

plot features as a wordcloud
textstat_collocations

calculate collocation statistics
textstat_keyness

calculate keyness statistics
selectFeaturesOLD

old version of selectFeatures.tokenizedTexts
similarity

compute similarities between documents and/or features
sort.dfm

sort a dfm by one or more margins
textmodel_wordscores

Wordscores text model
as.matrix.dist_selection

coerce a dist_selection object to a matrix
as.matrix.simil

Coerce a simil object into a matrix
collocations

detect collocations from text
tfidf

compute tf-idf weights from a dfm
toLower

Convert texts to lower (or upper) case
collocations2

detect collocations from text
corpus_sample

randomly sample documents from a corpus
corpus_segment

segment texts into component elements
textmodel_wordshoal

wordshoal text model
deprecated-textstat

deprecated textstat names
dfm-class

Virtual class "dfm" for a document-feature matrix
dictionary

create a dictionary
tokens_compound

convert token sequences into compound tokens
tokens_hash

Function to hash list-of-character tokens
topfeatures

list the most frequent features
docfreq

compute the (weighted) document frequency of a feature
features2vector

convert various input as features to a vector
head.dfm

return the first or last part of a dfm
trim

deprecated name for dfm_trim
ntoken

count the number of tokens or types
phrasetotoken

convert phrases into single tokens
removeFeatures

remove features from an object
textstat_lexdiv

calculate lexical diversity
textstat_readability

calculate readability
tokenize

tokenize a set of texts
sample

randomly sample documents or features
sequences

find variable-length collocations with filtering
settings

Get or set the corpus settings
tokens

tokenize a set of texts
as.list.dist_selection

coerce a dist_selection object into a list
as.matrix.dfm

coerce a dfm to a matrix or data.frame
char_tolower

convert the case of character objects
coef.textmodel

extract text model coefficients
convert

convert a dfm to a non-quanteda format
textmodel_NB

Naive Bayes classifier for texts
textmodel_ca

correspondence analysis of a document-feature matrix
tokens_tolower

convert the case of tokens
tokens_wordstem

stem the terms in an object
corpus-class

base method extensions for corpus objects
data_corpus_inaugural

US presidential inaugural address texts
data_corpus_irishbudget2010

Irish budget speeches from 2010
dfm-internal

internal functions for dfm objects
dfm

create a document-feature matrix
fcm-class

Virtual class "fcm" for a feature co-occurrence matrix
fcm

create a feature co-occurrence matrix
ngrams

deprecated function name for forming ngrams and skipgrams
textstat_dist

Similarity and distance computation between documents or features
tf

compute (weighted) term frequency from a dfm
nscrabble

count the Scrabble letter values of text
sparsity

compute the sparsity of a document-feature matrix
stopwords

access built-in stopwords
tokens_hashed_recompile

recompile a hashed tokens object
tokens_lookup

apply a dictionary to a tokens object
wordstem

stem words
textmodel-internal

internal functions for textmodel objects
textmodel

fit a text model
textplot_xray

plot the dispersion of key word(s)
texts

get or assign corpus texts
tokens_ngrams

create ngrams and skipgrams from tokens
tokens_select

select or remove tokens from a tokens object
valuetype

pattern matching using valuetype
weight

weight or smooth a dfm