quanteda (version 0.99.22)

textmodel_nb: Naive Bayes classifier for texts

Description

Fit a multinomial or Bernoulli Naive Bayes model, given a dfm and some training labels.

Usage

textmodel_nb(x, y, smooth = 1, prior = c("uniform", "docfreq", "termfreq"),
  distribution = c("multinomial", "Bernoulli"), ...)

Arguments

x

the dfm on which the model will be fit. Does not need to contain only the training documents.

y

vector of training labels associated with each document in x. (These will be converted to factors if not already factors.)

smooth

smoothing parameter for feature counts by class

prior

prior distribution on texts; one of "uniform", "docfreq", or "termfreq". See Prior Distributions below.

distribution

count model for text features, either "multinomial" (the default) or "Bernoulli". To fit a "binary multinomial" model, first convert the dfm to a binary (boolean) matrix using tf(x, "boolean"); a short sketch follows this list.

...

additional arguments passed through
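
As a minimal sketch of the "binary multinomial" conversion mentioned under distribution (the object names dfmat and y are hypothetical; any dfm and matching label vector would do):

# illustrative only: recode the counts to 0/1, then fit the usual multinomial model
dfmat_bool <- tf(dfmat, "boolean")
mod_binmult <- textmodel_nb(dfmat_bool, y)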

Value

A list of return values, consisting of (where \(I\) is the total number of documents, \(J\) is the total number of features, and \(k\) is the total number of training classes):

call

original function call

PwGc

\(k \times J\); probability of the word given the class (empirical likelihood)

Pc

\(k\)-length named numeric vector of class prior probabilities

PcGw

\(k \times J\); posterior class probability given the word

Pw

\(J \times 1\); baseline probability of the word

data

list consisting of the \(I \times J\) training dfm x and the \(I\)-length training class vector y

distribution

the distribution argument

prior

the prior argument

smooth

the value of the smoothing parameter
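
As a sketch of how these components relate (following the standard textbook estimators, for illustration rather than as a statement of the package internals): for the multinomial distribution, PwGc is the Laplace-smoothed relative frequency \(P(w|c) = (n_{wc} + smooth) / \sum_{w'} (n_{w'c} + smooth)\), where \(n_{wc}\) is the count of feature \(w\) in the training documents of class \(c\); PcGw then follows from Bayes' rule applied feature by feature, \(P(c|w) = P(w|c) P(c) / \sum_{c'} P(w|c') P(c')\).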

Predict Methods

A predict method is also available for a fitted Naive Bayes object; see predict.textmodel_nb_fitted.

Prior distributions

Prior distributions refer to the prior probabilities assigned to the training classes, and the choice of prior distribution affects the calculation of the fitted probabilities. The default is uniform priors, which set the unconditional probability of observing any one class to be the same as that of observing any other class.

"Document frequency" means that the class priors will be taken from the relative proportions of the class documents used in the training set. This approach is so common that it is assumed in many examples, such as the worked example from Manning, Raghavan, and Sch<U+00FC>tze (2008) below. It is not the default in quanteda, however, since there may be nothing informative in the relative numbers of documents used to train a classifier other than the relative availability of the documents. When training classes are balanced in their number of documents (usually advisable), however, then the empirically computed "docfreq" would be equivalent to "uniform" priors.

Setting prior to "termfreq" makes the priors equal to the proportions of total feature counts found in the grouped documents in each training class, so that the classes with the largest number of features are assigned the largest priors. If the total count of features in each training class was the same, then "uniform" and "termfreq" would be the same.

References

Manning, C. D., Raghavan, P., & Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press. https://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf

Jurafsky, D., & Martin, J. H. (2016). Speech and Language Processing. Draft of November 7, 2016. https://web.stanford.edu/~jurafsky/slp3/6.pdf

Examples

## Example from 13.1 of _Introduction to Information Retrieval_
txt <- c(d1 = "Chinese Beijing Chinese",
         d2 = "Chinese Chinese Shanghai",
         d3 = "Chinese Macao",
         d4 = "Tokyo Japan Chinese",
         d5 = "Chinese Chinese Chinese Tokyo Japan")
trainingset <- dfm(txt, tolower = FALSE)
trainingclass <- factor(c("Y", "Y", "Y", "N", NA), ordered = TRUE)
 
## replicate IIR p261 prediction for test set (document 5)
(nb.p261 <- textmodel_nb(trainingset, trainingclass, prior = "docfreq"))
predict(nb.p261, newdata = trainingset[5, ])

# contrast with other priors
predict(textmodel_nb(trainingset, trainingclass, prior = "uniform"))
predict(textmodel_nb(trainingset, trainingclass, prior = "termfreq"))

## replicate IIR p264 Bernoulli Naive Bayes
(nb.p261.bern <- textmodel_nb(trainingset, trainingclass, distribution = "Bernoulli", 
                              prior = "docfreq"))
predict(nb.p261.bern, newdata = trainingset[5, ])
