

Quantitative Analysis of Textual Data

quanteda v0.9.9 under development

This version of the package is a pre-release of version 0.9.9, which will be a transitional version (with the old functions retained but deprecated) prior to a "1.0" release. v0.9.9 includes some major changes as well as many improvements. See Quanteda Structure and Design for details.

About the package

An R package for managing and analyzing text, created by Kenneth Benoit in collaboration with a team of core contributors: Paul Nulty, Adam Obeng, Kohei Watanabe, Haiyan Wang, Ben Lauderdale, and Will Lowe. Supported by the European Research Council grant ERC-2011-StG 283794-QUANTESS.

For more details, see the package website.

Features

Powerful text analytics

Generalized, flexible corpus management. quanteda provides a comprehensive workflow and ecosystem for the management, processing, and analysis of texts. Documents and their associated document- and collection-level metadata are easily loaded and stored as a corpus object, although most of quanteda's operations also work on simple character objects. A corpus is designed to store all of the texts in a collection efficiently, along with metadata for the documents and for the collection as a whole. This makes common natural language processing steps on the texts in a corpus, such as tokenizing, stemming, or forming ngrams, simple and fast. quanteda's functions for tokenizing texts and forming multiple tokenized documents into a document-feature matrix are both extremely fast and simple to use, and quanteda can segment texts by words, sentences, paragraphs, or user-supplied delimiters and tags.
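
As a minimal sketch of this workflow (function and argument names are those assumed for this release and may change), a corpus can be built from a character vector, tokenized, stemmed, formed into ngrams, and reshaped into sentence-level documents:

library(quanteda)

# build a corpus from a built-in character vector of manifesto sections
corp <- corpus(data_char_ukimmig2010)

# tokenize, then stem and form bigrams
toks <- tokens(corp, removePunct = TRUE)
toks <- tokens_wordstem(toks)
bigrams <- tokens_ngrams(toks, n = 2)

# redefine the documents as sentences
corp_sentences <- corpus_reshape(corp, to = "sentences")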

Works nicely with UTF-8. Built on the text processing functions in the stringi package, which is in turn built on a C++ implementation of the ICU libraries for Unicode text handling, quanteda pays special attention to fast and correct handling of Unicode and of text in any character set, which is converted internally to UTF-8.
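
For example (a small sketch; the sample string is arbitrary), case conversion is Unicode-aware rather than limited to ASCII:

# non-ASCII characters are folded correctly
char_tolower("ZEIT FÜR DIE WENDE")
## [1] "zeit für die wende"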

Built for efficiency and speed. All of the functions in quanteda are built for maximum performance and scale while still being as R-based as possible. The package makes use of three efficient architectural elements: the stringi package for text processing, the Matrix package for sparse matrix objects, and the data.table package for indexing large documents efficiently. If you can fit it into memory, quanteda will handle it quickly. (And eventually, we will make it possible to process objects even larger than available memory.)

Super-fast conversion of texts into a document-feature matrix. quanteda is designed principally to give users a fast and convenient path from a corpus of texts to a selected matrix of documents by features, after defining and selecting the documents and features. The package makes it easy to redefine documents, for instance by splitting them into sentences or paragraphs or by tags, to group them into larger documents by document variables, or to subset them based on logical conditions or combinations of document variables. A special variation of the dfm, the feature co-occurrence matrix, is also implemented for direct use with embedding and representational models such as text2vec.
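
A minimal sketch of this path from texts to matrices (assuming the argument names of this release):

# a document-feature matrix directly from texts (or from a corpus or tokens)
mydfm <- dfm(data_char_ukimmig2010, removePunct = TRUE)

# a feature co-occurrence matrix from the same tokens, for use with embedding models
myfcm <- fcm(tokens(data_char_ukimmig2010, removePunct = TRUE))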

Extensive feature selection capabilities. The package also implements common NLP feature selection functions, such as removing stopwords and stemming in numerous languages, selecting words found in dictionaries, treating words as equivalent based on a user-defined "thesaurus", and trimming and weighting features based on document frequency, feature frequency, and related measures such as tf-idf.
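
For instance (a sketch; the threshold values are arbitrary and the argument names are those assumed for this release):

# remove English stopwords and stem features while building the dfm
mydfm <- dfm(data_char_ukimmig2010, remove = stopwords("english"), stem = TRUE)

# keep only features occurring at least 5 times and in at least 2 documents
mydfm_trimmed <- dfm_trim(mydfm, min_count = 5, min_docfreq = 2)

# apply tf-idf weighting
mydfm_weighted <- tfidf(mydfm_trimmed)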

Qualitative exploratory tools. Easily search for and save keywords in context, for instance, or identify keywords. Like all of quanteda's pattern-matching functions, these tools let users choose among simple "glob" expressions, regular expressions, or fixed pattern matches.
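
For example, keywords in context can be located with any of these pattern types (a sketch; the pattern is arbitrary):

# "glob" pattern matching; valuetype can also be "regex" or "fixed"
kwic(data_char_ukimmig2010, "immig*", window = 4, valuetype = "glob")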

Dictionary-based analysis. quanteda allows fast and flexible implementation of dictionary methods, including the import and conversion of foreign dictionary formats such as those from Provalis's WordStat, the Linguistic Inquiry and Word Count (LIWC), Lexicoder, and Yoshikoder.
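
A small sketch of dictionary use (the dictionary keys and patterns here are made up for illustration):

# a hand-built dictionary applied when constructing a dfm
mydict <- dictionary(list(immigration = c("immig*", "asylum*"),
                          economy = c("tax*", "budget*")))
dfm(data_char_ukimmig2010, dictionary = mydict)

# external formats can be imported, e.g. dictionary(file = "...", format = "LIWC")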

Text analytic methods. Once constructed, a dfm can be analyzed using quanteda's built-in tools for scaling document positions (the "wordfish" and "Wordscores" models, and direct use with the ca package for correspondence analysis), predictive models using multinomial and Bernoulli Naive Bayes classifiers, computing distance or similarity matrices of features or documents, or computing readability or lexical diversity indexes.
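
A brief sketch of these tools (data_dfm_LBGexample ships with the package; the statistics chosen are illustrative):

# scale document positions with a wordfish model
wf <- textmodel_wordfish(data_dfm_LBGexample)

# descriptive statistics on a dfm: lexical diversity and document distances
mydfm <- dfm(data_char_ukimmig2010, remove = stopwords("english"))
textstat_lexdiv(mydfm, measure = "TTR")
textstat_dist(mydfm, method = "euclidean")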

In addition, a quanteda document-feature matrix is easily used with, or converted for, a number of other text analytic tools (see the sketch following this list), such as:

  • topic models (including converters for direct use with the topicmodels, LDA, and stm packages);

  • machine learning through a variety of other packages that take matrix or matrix-like inputs.
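
As a sketch of such a conversion (the convert() function and its "to" values are assumptions based on the package's converters, not verified against this exact release):

# convert a dfm for use with the topicmodels package
mydfm <- dfm(data_char_ukimmig2010, remove = stopwords("english"))
lda_input <- convert(mydfm, to = "topicmodels")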

Planned features. Coming soon to quanteda are:

  • Bootstrapping methods for texts that make it easy to resample texts from pre-defined units, to facilitate computing confidence intervals on textual statistics using non-parametric bootstrapping techniques applied to the original texts as data.

  • Additional predictive and analytic methods by expanding the textstat_ and textmodel_ functions. Current textmodel types include correspondence analysis, "Wordscores", "Wordfish", and Naive Bayes; current textstat statistics are readability, lexical diversity, similarity, and distance.

  • Expanded settings for all objects, which will propagate through downstream objects.

  • Object histories, which will propagate through downstream objects, to enhance analytic reproducibility and transparency.

How to Install

  1. From CRAN: Use your GUI's R package installer, or execute:

    install.packages("quanteda") 
  2. From GitHub, using:

    # the devtools package is required to install quanteda from GitHub 
    devtools::install_github("kbenoit/quanteda") 

    Because this compiles some C++ source code, you will need a compiler installed. On Windows, this means you will also need to install the Rtools software available from CRAN. On OS X, you will need to install Xcode, available for free from the App Store, or, if you prefer a lighter-footprint set of tools, just the Xcode command line tools, using the command xcode-select --install from the Terminal.

  3. Additional recommended packages:

    The following packages work well with quanteda and we recommend that you also install them:

    • readtext: For reading text data into R.

      devtools::install_github("kbenoit/readtext")
    • quantedaData: Additional textual data for use with quanteda.

      devtools::install_github("kbenoit/quantedaData")
    • spacyr: NLP using the spaCy library.

Getting Started

See the package website, which includes the Getting Started Vignette.

Demonstration

library(quanteda)

# create a corpus from the immigration texts from UK party platforms
uk2010immigCorpus <- 
    corpus(data_char_ukimmig2010,
           docvars = data.frame(party = names(data_char_ukimmig2010)),
           metacorpus = list(notes = "Immigration-related sections of 2010 UK party manifestos"))
uk2010immigCorpus
## Corpus consisting of 9 documents and 1 docvar.
summary(uk2010immigCorpus)
## Corpus consisting of 9 documents.
## 
##          Text Types Tokens Sentences        party
##           BNP  1126   3330        88          BNP
##     Coalition   144    268         4    Coalition
##  Conservative   252    503        15 Conservative
##        Greens   325    687        21       Greens
##        Labour   296    703        29       Labour
##        LibDem   257    499        14       LibDem
##            PC    80    118         5           PC
##           SNP    90    136         4          SNP
##          UKIP   346    739        27         UKIP
## 
## Source:  /Users/kbenoit/Dropbox (Personal)/GitHub/quanteda/* on x86_64 by kbenoit
## Created: Mon Jan  9 19:19:48 2017
## Notes:   Immigration-related sections of 2010 UK party manifestos

# key words in context for "deport", 3 words of context
kwic(uk2010immigCorpus, "deport", 3)
##                                                                    
## [BNP, 159]         The BNP will | deport | all foreigners convicted
## [BNP, 1970]                . 2. | Deport | all illegal immigrants  
## [BNP, 1976] immigrants We shall | deport | all illegal immigrants  
## [BNP, 2621]  Criminals We shall | deport | all criminal entrants

# create a dfm, removing stopwords
mydfm <- dfm(uk2010immigCorpus, remove = c("will", stopwords("english")),
             removePunct = TRUE)
mydfm
## Document-feature matrix of: 9 documents, 1,547 features (83.8% sparse).

topfeatures(mydfm, 20)  # 20 top words
## immigration     british      people      asylum     britain          uk 
##          66          37          35          29          28          27 
##      system  population     country         new  immigrants      ensure 
##          27          21          20          19          17          17 
##       shall citizenship      social    national         bnp     illegal 
##          17          16          14          14          13          13 
##        work     percent 
##          13          12

# plot a word cloud
textplot_wordcloud(mydfm, min.freq = 6, random.order = FALSE,
                   rot.per = .25, 
                   colors = RColorBrewer::brewer.pal(8,"Dark2"))

Contributing

Contributions in the form of feedback, comments, code, and bug reports are most welcome.


Version

0.9.9-3

Install

install.packages('quanteda')

Monthly Downloads

22,704

License

GPL-3

Maintainer

Kenneth Benoit

Last Published

January 10th, 2017

Functions in quanteda (0.9.9-3)

changeunits

deprecated name for corpus_reshape
cbind.dfm

Combine dfm objects by Rows or Columns
char_tolower

convert the case of character objects
as.tokens

coercion and checking functions for tokens objects
compress

compress a dfm by combining similarly named dimensions
as.corpus

coerce a compressed corpus to a standard corpus
as.list.dist

coerce a dist object into a list
corpus_reshape

change the document units of a corpus
corpus_sample

randomly sample documents from a corpus
as.matrix.dfm

coerce a dfm to a matrix or data.frame
corpus_subset

extract a subset of a corpus
corpus_segment

segment texts into component elements
data_char_sampletext

a paragraph of text for testing various text-based functions
corpus-class

base method extensions for corpus objects
corpuszip

construct a compressed corpus object
data_char_ukimmig2010

immigration-related sections of 2010 UK party manifestos
data_char_mobydick

text of Herman Melville's Moby Dick
data_corpus_inaugural

US presidential inaugural address texts
data_corpus_irishbudget2010

Irish budget speeches from 2010
dfm_tolower

convert the case of the features of a dfm and combine
dfm_trim

trim a dfm using frequency threshold-based feature selection
is.collocations

check if an object is collocations type
fcm_sort

sort an fcm in alphabetical order of the features
dfm_sample

randomly sample documents or features from a dfm
is.dfm

coercion and checking functions for dfm objects
dfm_lookup

apply a dictionary to a dfm
featnames

get the feature labels from a dfm
kwic

locate keywords-in-context
metacorpus

get or set corpus metadata
nsentence

count the number of sentences
nsyllable

count syllables in a text
reassign_attributes

copy the attributes from one S3 object to another
removeFeatures

remove features from an object
textmodel-internal

internal functions for textmodel objects
summary.character

summarize a corpus or a vector of texts
subset.corpus

deprecated name for corpus_subset
trim

deprecated name for dfm_trim
textplot_scale1d

plot a fitted wordfish model
corpus

construct a corpus object
dfm_select

select features from a dfm or fcm
dfm_sort

sort a dfm by frequency of one or more margins
dfm_weight

weight the feature frequencies in a dfm
dfm

create a document-feature matrix
ntoken

count the number of tokens or types
phrasetotoken

convert phrases into single tokens
as.corpus.corpuszip

coerce a compressed corpus to a standard corpus
applyDictionary

apply a dictionary or thesaurus to an object
is.dictionary

check if an object is a dictionary
joinTokens

join tokens function
plot-deprecated

deprecated plotting functions
predict.textmodel_NB_fitted

prediction method for Naive Bayes classifier objects
deprecated-textstat

deprecated textstat names
dfm_compress

compress a dfm or fcm by combining identical dimension elements
features

deprecated function name for featnames
dfm2lsa

convert a dfm to an lsa "textmatrix"
docfreq

compute the (weighted) document frequency of a feature
head.dfm

Return the first or last part of a dfm
ngrams

deprecated function name for forming ngrams and skipgrams
selectFeaturesOLD

old version of selectFeatures.tokenizedTexts
sequence2list

convert sequences to a simple list
syllables

deprecated name for nsyllable
textfile

old function to read texts from files
textmodel_ca

correspondence analysis of a document-feature matrix
textmodel_NB

Naive Bayes classifier for texts
tokens_hash

Function to hash list-of-character tokens
tokens_lookup

apply a dictionary to a tokens object
data_dfm_LBGexample

dfm from data in Table 1 of Laver, Benoit, and Garry (2003)
data-deprecated

datasets with deprecated or defunct names
docvars

get or set for document-level variables
docnames

get or set document names
kwic_old

locate keywords-in-context (older)
kwic_split

split kwic results
metadoc

get or set document-level meta-data
ndoc

count the number of documents or features
scrabble

deprecated name for nscrabble
sample

randomly sample documents or features
textstat_lexdiv

calculate lexical diversity
texts

get or assign corpus texts
tokens_tolower

convert the case of tokens
print.dfm

print a dfm object
quanteda-package

An R package for the quantitative analysis of textual data
sort.dfm

sort a dfm by one or more margins
similarity

compute similarities between documents and/or features
textplot_wordcloud

plot features as a wordcloud
textstat_readability

calculate readability
textplot_xray

plot the dispersion of key word(s)
textstat_dist

Distance matrix between documents and/or features
nscrabble

count the Scrabble letter values of text
segment

segment: deprecated function
selectFeatures.dfm

select features from an object
sequences

find variable-length collocations with filtering
settings

Get or set the corpus settings
textmodel_wordshoal

wordshoal text model
textmodel_wordfish

wordfish text model
vector2list

convert a vector to a list
View

View methods for quanteda
valuetype

pattern matching using valuetype
tfidf

compute tf-idf weights from a dfm
tokens_compound

convert token sequences into compound tokens
sparsity

compute the sparsity of a document-feature matrix
stopwords

access built-in stopwords
tokens_ngrams

create ngrams and skipgrams from tokens
tokens_select

select or remove tokens from a tokens object
weight

weight or smooth a dfm
tokens_wordstem

stem the terms in an object
wordstem

stem words
topfeatures

list the most frequent features
toLower

Convert texts to lower (or upper) case