
quanteda (version 0.99.12)

textmodel_NB: Naive Bayes classifier for texts

Description

Fit a multinomial or Bernoulli Naive Bayes model, given a dfm and some training labels.

Usage

textmodel_NB(x, y, smooth = 1, prior = c("uniform", "docfreq", "termfreq"),
  distribution = c("multinomial", "Bernoulli"), ...)

Arguments

x

the dfm on which the model will be fit. Does not need to contain only the training documents.

y

vector of training labels associated with each document in x. (These will be converted to factors if not already factors.)

smooth

smoothing parameter for feature counts by class

prior

prior distribution on texts; one of "uniform" (equal prior probability for each class), "docfreq" (class priors proportional to the number of training documents in each class), or "termfreq" (class priors proportional to the total feature counts in each class)

distribution

count model for text features; can be multinomial or Bernoulli. To fit a "binary multinomial" model, first convert the dfm to a binary matrix using tf(x, "boolean"); a short sketch follows this argument list.

...

additional arguments passed through
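
A minimal sketch of the "binary multinomial" workflow mentioned under distribution (the objects trainingset and trainingclass are assumed from the Examples below):

dfm_bool <- tf(trainingset, "boolean")              # recode feature counts to 0/1 indicators
nb_binary <- textmodel_NB(dfm_bool, trainingclass)  # multinomial NB fit on the boolean features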

Value

A list of return values, consisting of:

call

original function call

PwGc

probability of the word given the class (empirical likelihood)

Pc

class prior probability

PcGw

posterior class probability given the word

Pw

baseline probability of the word

data

list containing the input dfm (x) and the training labels (y)

distribution

the distribution argument

prior

the value passed as the prior argument

smooth

smoothing parameter
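
As a brief illustration of these components (a hedged sketch, using the nb.p261 object fitted in the Examples below):

nb.p261$Pc    # class prior probabilities
nb.p261$PwGc  # P(word | class): smoothed word likelihoods for each class
nb.p261$PcGw  # P(class | word): posterior class probability given each word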

Predict Methods

A predict method is also available for a fitted Naive Bayes object; see predict.textmodel_NB_fitted.

References

Manning, C. D., Raghavan, P., & Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press. https://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf

Jurafsky, D., & Martin, J. H. (2016). Speech and Language Processing. Draft of November 7, 2016. https://web.stanford.edu/~jurafsky/slp3/6.pdf

Examples

## Example from 13.1 of _Introduction to Information Retrieval_
txt <- c(d1 = "Chinese Beijing Chinese",
         d2 = "Chinese Chinese Shanghai",
         d3 = "Chinese Macao",
         d4 = "Tokyo Japan Chinese",
         d5 = "Chinese Chinese Chinese Tokyo Japan")
trainingset <- dfm(txt, tolower = FALSE)
trainingclass <- factor(c("Y", "Y", "Y", "N", NA), ordered = TRUE)
 
## replicate IIR p261 prediction for test set (document 5)
(nb.p261 <- textmodel_NB(trainingset, trainingclass, prior = "docfreq"))
predict(nb.p261, newdata = trainingset[5, ])

# contrast with other priors
predict(textmodel_NB(trainingset, trainingclass, prior = "uniform"))
predict(textmodel_NB(trainingset, trainingclass, prior = "termfreq"))

## replicate IIR p264 Bernoulli Naive Bayes
(nb.p261.bern <- textmodel_NB(trainingset, trainingclass, distribution = "Bernoulli", 
                              prior = "docfreq"))
predict(nb.p261.bern, newdata = trainingset[5, ])