
About

quanteda is an R package for managing and analyzing text, created and maintained by Kenneth Benoit and Kohei Watanabe. Its creation was funded by the European Research Council grant ERC-2011-StG 283794-QUANTESS and its continued development is supported by the Quanteda Initiative CIC.

For more details, see https://quanteda.io.

quanteda version 4

quanteda 4.0 is a major release that improves functionality and performance, and further improves function consistency by removing previously deprecated functions. It also includes significant new tokeniser rules that make the default tokeniser smarter than ever, with new Unicode- and ICU-compliant rules enabling it to work more consistently with even more languages.

We describe these significant changes more fully in:

The quanteda family of packages

We completed the trend of splitting quanteda into modular packages with the release of v3. The quanteda family of packages includes the following:

  • quanteda: contains all of the core natural language processing and textual data management functions
  • quanteda.textmodels: contains all of the text models and supporting functions, namely the textmodel_*() functions. This was split from the main package with the v2 release
  • quanteda.textstats: statistics for textual data, namely the textstat_*() functions, split with the v3 release
  • quanteda.textplots: plots for textual data, namely the textplot_*() functions, split with the v3 release

We are working on additional package releases, available in the meantime from our GitHub pages:

  • quanteda.sentiment: Functions and lexicons for sentiment analysis using dictionaries
  • quanteda.tidy: Extensions for manipulating document variables in core quanteda objects using your favourite tidyverse functions

and more to come.
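
To install the CRAN-released companion packages, and (as a sketch, assuming the remotes package and the quanteda/quanteda.sentiment and quanteda/quanteda.tidy GitHub repositories) the in-development ones:

# companion packages released on CRAN
install.packages(c("quanteda.textmodels", "quanteda.textstats", "quanteda.textplots"))

# in-development packages from GitHub
remotes::install_github("quanteda/quanteda.sentiment")
remotes::install_github("quanteda/quanteda.tidy")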

How To…

Install (binaries) from CRAN

Install the released version from CRAN in the normal way, using your R GUI or:

install.packages("quanteda") 

(New for quanteda v4.0) Because all installations on Linux are compiled from source, Linux users will first need to install the Intel oneAPI Threading Building Blocks (TBB) library for parallel computing before installation will work.

To install TBB on Linux:

# Fedora, CentOS, RHEL
sudo yum install tbb-devel

# Debian and Ubuntu
sudo apt install libtbb-dev

Windows or macOS users do not have to install TBB or any other packages to enable parallel computing when installing quanteda from CRAN.
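
After installation, a quick way to confirm that parallel computing is enabled is to check how many threads quanteda will use (a minimal sketch; the number reported depends on your system):

library(quanteda)

# number of threads used for parallelised operations
quanteda_options("threads")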

Compile from source (macOS and Windows)

Because this compiles some C++ and Fortran source code, you will need to have installed the appropriate compilers to build the development version.

You will also need to install TBB:

macOS:

First, you will need to install the Xcode Command Line Tools.

xcode-select --install

Then, after installing Homebrew, install the TBB libraries and the pkg-config utility:

brew install tbb pkg-config

Finally, you will need to install gfortran.

Windows:

Install RTools, which includes the TBB libraries.
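
With the compilers and TBB in place, you can then build and install the development version from source, for example from GitHub (a sketch assuming the sources are at quanteda/quanteda and that the remotes package is installed):

# build and install the development version from source
remotes::install_github("quanteda/quanteda")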

Use quanteda

See the quick start guide to learn how to use quanteda.
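
As a minimal sketch of the basic workflow, using the bundled data_corpus_inaugural corpus:

library(quanteda)

# tokenise the built-in corpus of US presidential inaugural addresses
toks <- tokens(data_corpus_inaugural, remove_punct = TRUE)

# build a document-feature matrix and drop English stopwords
dfmat <- dfm_remove(dfm(toks), stopwords("en"))

# the ten most frequent features across the corpus
topfeatures(dfmat, 10)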

Get Help

Cite the package

Benoit, Kenneth, Kohei Watanabe, Haiyan Wang, Paul Nulty, Adam Obeng, Stefan Müller, and Akitaka Matsuo. (2018) “quanteda: An R package for the quantitative analysis of textual data”. Journal of Open Source Software 3(30), 774. https://doi.org/10.21105/joss.00774.

For a BibTeX entry, use the output from citation(package = "quanteda").
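
For example, to print the entry in BibTeX form directly from R:

# generate a BibTeX entry for citing quanteda
toBibtex(citation(package = "quanteda"))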

Leave Feedback

If you like quanteda, please consider leaving feedback or a testimonial here.

Contribute

Contributions in the form of feedback, comments, code, and bug reports are most welcome. How to contribute:

Version: 4.1.0
Install: install.packages('quanteda')
Monthly Downloads: 21,100
License: GPL-3
Last Published: September 4th, 2024

Functions in quanteda (4.1.0)

apply_if

Modify only documents matching a logical condition
as.data.frame.dfm

Convert a dfm to a data.frame
char_tolower

Convert the case of character objects
char_select

Select or remove elements from a character vector
data_char_ukimmig2010

Immigration-related sections of 2010 UK party manifestos
data_char_sampletext

A paragraph of text for testing various text-based functions
as.character.corpus

Coercion and checking methods for corpus objects
as.dfm

Coercion and checking functions for dfm objects
convert-wrappers

Convenience wrappers for dfm convert
check_integer

Validate input vectors
as.yaml

Convert quanteda dictionary objects to the YAML format
attributes<-

Function extending base::attributes()
dfm_sort

Sort a dfm by frequency of one or more margins
dfm_subset

Extract a subset of a dfm
dfm_trim

Trim a dfm using frequency threshold-based feature selection
as.dictionary

Coercion and checking functions for dictionary objects
fcm_sort

Sort an fcm in alphabetical order of the features
dfm_weight

Weight the feature frequencies in a dfm
fcm

Create a feature co-occurrence matrix
as.fcm

Coercion and checking functions for fcm objects
convert

Convert quanteda objects to non-quanteda formats
check_class

Check object class for functions
index

Locate a pattern in a tokens object
check_dots

Check arguments passed to other functions via ...
info_tbb

Get information on TBB library
bootstrap_dfm

Bootstrap a dfm
concat

Return the concatenator character from an object
messages

Message parameter documentation
corpus_subset

Extract a subset of a corpus
message_tokens

Print messages in tokens methods
cbind.dfm

Combine dfm objects by rows or columns
corpus_trim

Remove sentences based on their token lengths or a pattern match
msg

Conditionally format messages
dfm_match

Match the feature set of a dfm to given feature names
corpus_group

Combine documents in corpus by a grouping variable
dfm_replace

Replace features in dfm
corpus_reshape

Recast the document units of a corpus
data-internal

Internal data sets
data-relocated

Formerly included data objects
names-quanteda

Special handling for names of quanteda objects
pattern

Pattern for feature, token and keyword matching
pattern2id

Match patterns against token types
corpus-class

Base method extensions for corpus objects
corpus

Construct a corpus object
corpus_sample

Randomly sample documents from a corpus
search_glob

Select types without performing slow regex search
search_index

Internal function for select_types to search the index using fastmatch.
data_corpus_inaugural

US presidential inaugural address texts
corpus_segment

Segment texts on a pattern match
data_dfm_lbgexample

dfm from data in Table 1 of Laver, Benoit, and Garry (2003)
summary_metadata

Functions to add or retrieve corpus summary metadata
dfm

Create a document-feature matrix
textmodels

Models for scaling and classification of textual data
dfm-internal

Internal functions for dfm objects
tokens

Construct a tokens object
escape_regex

Internal function for select_types() to escape regular expressions
docvars

Get or set document-level variables
topfeatures

Identify the most frequent features in a dfm
tokens_chunk

Segment tokens object by chunks of a given size
data_dictionary_LSD2015

Lexicoder Sentiment Dictionary (2015)
types

Get word types from a tokens object
get_docvars

Internal function to extract docvars
get_object_version

Get the package version that created an object
dfm-class

Virtual class "dfm" for a document-feature matrix
groups

Grouping variable(s) for various functions
dfm2lsa

Convert a dfm to an lsa "textmatrix"
head.dfm

Return the first or last part of a dfm
dfm_compress

Recombine a dfm or fcm by combining identical dimension elements
featnames

Get the feature labels from a dfm
featfreq

Compute the frequencies of features
message_dfm

Print messages in dfm methods
message_error

Return an error message
dfm_tfidf

Weight a dfm by tf-idf
matrix2fcm

Converts a Matrix to an fcm
dfm_group

Combine documents in a dfm by a grouping variable
merge_dictionary_values

Internal function to merge values of duplicated keys
dfm_tolower

Convert the case of the features of a dfm and combine
nsentence

Count the number of sentences
dfm_sample

Randomly sample documents from a dfm
dfm_lookup

Apply a dictionary to a dfm
ntoken

Count the number of tokens or types
object-builders

Object builders
object2id

Match quanteda objects against token types
expand

Simpler and faster version of expand.grid() in base package
read_dict_functions

Internal functions to import dictionary files
reexports

Objects exported from other packages
dfm_select

Select features from a dfm or fcm
docfreq

Compute the (weighted) document frequency of a feature
print.phrases

Print a phrase object
print-methods

Print methods for quanteda core objects
docnames

Get or set document names
fcm-class

Virtual class "fcm" for a feature co-occurrence matrix
flatten_list

Internal function to flatten a nested list
split_values

Internal function for special handling of multi-word dictionary values
summary.corpus

Summarize a corpus
format_sparsity

Format a sparsity value for printing
resample

Sample a vector
is.collocations

Check if an object is collocations
dictionary2-class

dictionary class objects and functions
is_glob

Check if patterns contain a glob wildcard
dictionary

Create a dictionary
tokens_replace

Replace tokens in a tokens object
tokens_recompile

Recompile a serialized tokens object
tokens_segment

Segment tokens object by patterns
reshape_docvars

Internal function to subset or duplicate docvar rows
field_system

Shortcut functions to access or assign metadata
tokens_select

Select or remove tokens from a tokens object
flatten_dictionary

Flatten a hierarchical dictionary into a list of character vectors
make_meta

Internal functions to create a list of the meta fields
tokenize_internal

quanteda tokenizers
matrix2dfm

Converts a Matrix to a dfm
tokens-class

Base method extensions for tokens objects
tokens_restore

Restore special tokens
is_indexed

Check if a glob pattern is indexed by index_types
ndoc

Count the number of documents or features
is_regex

Check if a string is a regular expression
kwic

Locate keywords-in-context
nest_dictionary

Utility function to generate a nested list
phrase

Declare a pattern to be a sequence of separate patterns
list2dictionary

Internal function to convert a list to a dictionary
tokens_sample

Randomly sample documents from a tokens object
%>%

Pipe operator
meta

Get or set object metadata
meta_system

Internal function to get, set or initialize system metadata
tokens_wordstem

Stem the terms in an object
remove_empty_keys

Utility function to remove empty keys
lowercase_dictionary_values

Internal function to lowercase dictionary values
replace_dictionary_values

Internal function to replace dictionary values
make_docvars

Internal function to make new system-level docvars
tokens_xptr

Methods for tokens_xptr objects
unlist_character

Unlist a list of character vectors safely
textplots

Plots for textual data
quanteda-package

An R package for the quantitative analysis of textual data
unlist_integer

Unlist a list of integer vectors safely
quanteda_options

Get or set package options for quanteda
texts

Get or assign corpus texts [deprecated]
tokens_tolower

Convert the case of tokens
tokens_trim

Trim tokens using frequency threshold-based feature selection
valuetype

Pattern matching using valuetype
serialize_tokens

Function to serialize list-of-character tokens
spacyr-methods

Extensions for and from spacy_parse objects
sparsity

Compute the sparsity of a document-feature matrix
tokens_compound

Convert token sequences into compound tokens
tokens_group

Combine documents in a tokens object by a grouping variable
tokens_split

Split tokens by a separator pattern
set_dfm_dimnames<-

Internal functions to set dimnames
tokens_subset

Extract a subset of a tokens object
textstats

Statistics for textual data
tokenize_custom

Customizable tokenizer
tokens_ngrams

Create n-grams and skip-grams from tokens
tokens_lookup

Apply a dictionary to a tokens object
as.matrix.dfm

Coerce a dfm to a matrix or data.frame
as.list.tokens

Coercion, checking, and combining functions for tokens objects