Computes two agreement measures, Cohen's kappa and weighted kappa, along with their confidence intervals.
Usage

Kappa(x, weights = c("Equal-Spacing", "Fleiss-Cohen"))

# S3 method for Kappa
print(x, digits = max(getOption("digits") - 3, 3),
      CI = FALSE, level = 0.95, ...)

# S3 method for Kappa
confint(object, parm, level = 0.95, ...)

# S3 method for Kappa
summary(object, ...)

# S3 method for summary.Kappa
print(x, ...)
Arguments

x: for Kappa, a confusion matrix; for the print methods, an object of class "Kappa" or "summary.Kappa".

weights: either one of the character strings given in the default value, or a user-specified matrix with the same dimensions as x.

digits: minimal number of significant digits.

CI: logical; should confidence limits be added to the output?

level: confidence level between 0 and 1 used for the confidence interval.

object: object of class "Kappa".

parm: currently ignored.

...: further arguments passed to the default print method.
Value

An object of class "Kappa" with three components:

Unweighted: numeric vector of length 2 containing the kappa statistic (value component) along with its approximate standard error (ASE component).

Weighted: the same for the weighted kappa.

Weights: numeric matrix with the weights used.
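For illustration, the estimate and its ASE can be combined into a normal-approximation interval. The sketch below is illustrative only: it assumes the components and element names listed above (Unweighted with entries value and ASE) and is not necessarily the exact computation carried out by the confint() method.

## Illustrative sketch: a Wald-type interval from the unweighted kappa
## estimate and its approximate standard error (assumed element names;
## the package's confint() method may differ in detail).
library(vcd)
data("SexualFun")
K   <- Kappa(SexualFun)
est <- K$Unweighted[["value"]]
ase <- K$Unweighted[["ASE"]]
z   <- qnorm(1 - (1 - 0.95) / 2)   # two-sided 95% level
c(lower = est - z * ase, upper = est + z * ase)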
Details

Cohen's kappa is the diagonal sum of the (possibly weighted) relative frequencies, corrected for the agreement expected by chance and standardized by its maximum value. The equal-spacing weights are defined by \(1 - |i - j| / (r - 1)\), where \(r\) is the number of rows/columns, and the Fleiss-Cohen weights by \(1 - |i - j|^2 / (r - 1)^2\). The latter attach greater importance to near disagreements.
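The two built-in weighting schemes follow directly from these formulas; the snippet below (an illustration, not the package source) constructs the corresponding r x r weight matrices:

## Illustration only: weight matrices implied by the formulas above,
## shown for an r x r table with r = 4.
r <- 4
d <- abs(outer(1:r, 1:r, "-"))      # |i - j| for every cell
w_equal  <- 1 - d / (r - 1)         # "Equal-Spacing" weights
w_fleiss <- 1 - d^2 / (r - 1)^2     # "Fleiss-Cohen" weights
w_equal
w_fleiss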
References

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37--46.

Everitt, B. S. (1968). Moments of the statistics kappa and weighted kappa. The British Journal of Mathematical and Statistical Psychology, 21, 97--103.

Fleiss, J. L., Cohen, J., and Everitt, B. S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72, 323--327.
Examples

library(vcd)   # provides Kappa() and the SexualFun data

data("SexualFun")
K <- Kappa(SexualFun)
K
confint(K)
summary(K)
print(K, CI = TRUE)
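As noted under the weights argument, a user-specified matrix with the same dimensions as x may also be supplied. A hedged sketch, building a matrix that matches the equal-spacing formula (so the result should agree with the default):

## Passing a user-specified weight matrix; constructed here from the
## equal-spacing formula given in the Details section.
r <- nrow(SexualFun)
W <- 1 - abs(outer(1:r, 1:r, "-")) / (r - 1)
Kappa(SexualFun, weights = W)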