These generic functions compute the perplexity of a language_model
on a test corpus, which may be either a plain character vector of text
or a connection from which text can be read in batches.
The second option is useful when one wants to avoid loading
the full text into physical memory, and allows processing text from
different sources such as files, compressed files or URLs.
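For instance, a minimal sketch of both call modes (assuming the much_ado
and midsummer example corpora shipped with kgrams; the file path in the
connection case is hypothetical):

    library(kgrams)

    # Train a trigram model with Kneser-Ney smoothing on an example corpus
    freqs <- kgram_freqs(much_ado, N = 3)
    model <- language_model(freqs, smoother = "kn", D = 0.75)

    # Perplexity from a plain character vector
    perplexity(midsummer, model)

    # Perplexity from a connection, read and processed in batches
    # ("test_corpus.txt" is a hypothetical path; gzfile() or url()
    # connections can be used in the same way)
    perplexity(file("test_corpus.txt"), model)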
"Perplexity" is defined here, following Ref.
chen1999empiricalkgrams, as the exponential of the normalized
language model cross-entropy with the test corpus. Cross-entropy is
normalized by the total number of words in the corpus, where we include
the End-Of-Sentence tokens, but not the Begin-Of-Sentence tokens, in the
word count.
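In symbols (a sketch of the above definition, where q denotes the
probabilities assigned by the model, w_1, ..., w_W are the words of the
test corpus, and W is the word count just described):

    H = -\frac{1}{W} \sum_{i=1}^{W} \log q(w_i \mid w_1, \ldots, w_{i-1}),
    \qquad \mathrm{PPL} = e^{H}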
The custom .preprocess and .tknz_sent arguments allow applying
transformations to the text corpus before the perplexity computation takes
place. By default, the same functions used during model building are
employed; cf. kgram_freqs and language_model.
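For example (a sketch: the two transformation functions below are purely
illustrative, and model is a language_model built as above):

    # Override, at perplexity time, the transformations stored in the model
    perplexity(midsummer, model,
               .preprocess = function(x) gsub("[^a-z ]", "", tolower(x)),
               .tknz_sent  = function(x) unlist(strsplit(x, "[.!?]+"))
               )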
A note of caution is in order. Perplexity is not defined for all language
models available in kgrams. For instance, the smoother "sbo"
(i.e. Stupid Backoff, Brants et al. (2007)) does not produce normalized
probabilities, and this is signaled by a warning (shown once per session)
if the user attempts to compute the perplexity of such a model.
In these cases, when possible, the perplexity computation is performed
anyway, as the results might still be useful (e.g. to tune the model's
parameters), even though their probabilistic interpretation no longer
holds.
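As an illustration of the tuning use case, one may sketch a grid search
over the discount of the Kneser-Ney model built above (param() is assumed
here to be the parameter setter exported by kgrams):

    # Pick the discount D minimizing perplexity on a held-out corpus
    D_grid <- seq(0.5, 0.95, by = 0.05)
    ppl <- sapply(D_grid, function(D) {
        param(model, "D") <- D
        perplexity(midsummer, model)
    })
    D_grid[which.min(ppl)]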