Kappa: Calculates the kappa statistic and other classification error statistics
Description
The kappa statistic, along with user and producer error rates, is conventionally
used in remote sensing to describe the effectiveness of ground cover
classifications. Because it simultaneously accounts for both errors of commission
and omission, kappa can be considered a more conservative measure of
classification accuracy than the percentage of correctly classified items.
Usage
Kappa(class1, reference)
Value
Returns a list with five items:
ttl_agreement
The percentage of correctly classified items.
user_accuracy
The user accuracy for each category of the classification.
producer_accuracy
The producer accuracy for each category of the classification.
kappa
The kappa statistic.
table
A two-way contingency table comparing the user-supplied classification to the reference classification.
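The quantities above can be reproduced directly from the contingency table. The sketch below is illustrative only and is not necessarily the code used inside Kappa; the example data and object names (ct, p_o, p_e) are hypothetical, and accuracies are shown as proportions rather than percentages.

    ## Hypothetical user-supplied and reference classifications
    class1    <- c("forest", "forest", "urban", "water", "urban", "forest")
    reference <- c("forest", "urban",  "urban", "water", "urban", "forest")

    ct <- table(class1, reference)       # rows: supplied classification, columns: reference
    n  <- sum(ct)

    ttl_agreement     <- 100 * sum(diag(ct)) / n     # percent correctly classified
    user_accuracy     <- diag(ct) / rowSums(ct)      # 1 - commission error, by category
    producer_accuracy <- diag(ct) / colSums(ct)      # 1 - omission error, by category

    p_o   <- sum(diag(ct)) / n                       # observed agreement
    p_e   <- sum(rowSums(ct) * colSums(ct)) / n^2    # agreement expected by chance
    kappa <- (p_o - p_e) / (1 - p_e)                 # kappa statistic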
Arguments
class1
A vector describing a classification of experimental units.
reference
A vector describing the "correct" classification of the experimental units in class1.
Author
Ken Aho
References
Jensen, J. R. (1996) Introductory Digital Image Processing: A Remote Sensing Perspective, 2nd edition. Prentice-Hall.
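Examples
A minimal usage sketch, assuming the package providing Kappa (e.g., asbio) is attached; the classification vectors are hypothetical.

    ## library(asbio)  # or whichever package provides Kappa

    ## Hypothetical user-supplied and reference classifications
    class1    <- c("conifer", "conifer", "meadow", "meadow", "shrub", "conifer")
    reference <- c("conifer", "meadow",  "meadow", "meadow", "shrub", "conifer")

    Kappa(class1, reference)

    ## Individual components of the returned list can be extracted by name, e.g.:
    ## Kappa(class1, reference)$kappa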