pks (version 0.6-1)

slm: Simple Learning Models (SLMs)

Description

Fits a simple learning model (SLM) for probabilistic knowledge structures by minimum discrepancy maximum likelihood estimation.

Usage

slm(K, N.R, method = c("MD", "ML", "MDML"), R = as.binmat(N.R),
    beta = rep(0.1, nitems), eta = rep(0.1, nitems),
    g = rep(0.1, nitems),
    betafix = rep(NA, nitems), etafix = rep(NA, nitems),
    betaequal = NULL, etaequal = NULL,
    randinit = FALSE, incradius = 0,
    tol = 1e-07, maxiter = 10000, zeropad = 16,
    checkK = TRUE)

getSlmPK(g, K, Ko)

# S3 method for slm
print(x, P.Kshow = FALSE, parshow = TRUE,
      digits = max(3, getOption("digits") - 2), ...)

Value

An object of class slm and blim. It contains all components of a blim object. In addition, it includes:

g

the vector of estimates of the solvability parameters.
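
As a minimal sketch of how the fitted components might be accessed (using the DoignonFalmagne7 data from the Examples; the beta and P.K components are assumed to be inherited from blim as documented there):

data(DoignonFalmagne7)
m <- slm(DoignonFalmagne7$K, DoignonFalmagne7$N.R, method = "MD")
m$g       # solvability parameter estimates
m$beta    # careless error estimates (blim component)
m$P.K     # estimated distribution of knowledge states (blim component)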

Arguments

K

a state-by-problem indicator matrix representing the knowledge space. An element is one if the problem is contained in the state, and zero otherwise.

N.R

a (named) vector of absolute frequencies of response patterns.

method

MD for minimum discrepancy estimation, ML for maximum likelihood estimation, MDML for minimum discrepancy maximum likelihood estimation.

R

a person-by-problem indicator matrix of unique response patterns. By default, it is inferred from the names of N.R.

beta, eta, g

vectors of initial values for the error, guessing, and solvability parameters.

betafix, etafix

vectors of fixed error and guessing parameter values; NA indicates a free parameter.

betaequal, etaequal

lists of vectors of problem indices; each vector represents an equivalence class: it contains the indices of problems for which the error or guessing parameters are constrained to be equal. (See Examples.)

randinit

logical, if TRUE then initial parameter values are sampled uniformly with constraints. (See Details.)

incradius

include knowledge states whose distance from the minimum discrepant states is less than or equal to incradius.

tol

tolerance, stopping criterion for iteration.

maxiter

the maximum number of iterations.

zeropad

the maximum number of items for which an incomplete N.R vector is completed and padded with zeros.

checkK

logical, if TRUE K is checked for well-gradedness.

Ko

a state-by-problem indicator matrix representing the outer fringe for each knowledge state in K; typically the result of a call to getKFringe.

x

an object of class slm, typically the result of a call to slm.

P.Kshow

logical, should the estimated distribution of knowledge states be printed?

parshow

logical, should the estimates of error, guessing, and solvability parameters be printed?

digits

a non-null value for digits specifies the minimum number of significant digits to be printed in values.

...

additional arguments passed to other methods.

Details

See Doignon and Falmagne (1999) for details on the simple learning model (SLM) for probabilistic knowledge structures. The model requires a well-graded knowledge space K.
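
The well-gradedness requirement can be examined before fitting. A hedged sketch, assuming is.downgradable (see See Also) takes the knowledge structure as its single argument and tests the downgradability property that a well-graded knowledge space satisfies:

data(DoignonFalmagne7)
is.downgradable(DoignonFalmagne7$K)   # expected TRUE for a well-graded space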

An slm object inherits from class blim. See blim for details on the function arguments. The helper function getSlmPK returns the distribution of knowledge states P.K.
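
The following sketch illustrates a direct call to getSlmPK; it assumes that getKFringe(K) with default arguments returns the outer-fringe indicator matrix described under Ko, and it uses arbitrary illustrative values for g:

data(DoignonFalmagne7)
K  <- DoignonFalmagne7$K
Ko <- getKFringe(K)        # outer fringe of each state in K (assumed default call)
g  <- rep(0.5, ncol(K))    # illustrative solvability parameters
getSlmPK(g, K, Ko)         # distribution of knowledge states P.K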

References

Doignon, J.-P., & Falmagne, J.-C. (1999). Knowledge spaces. Berlin: Springer.

See Also

blim, simulate.blim, getKFringe, is.downgradable

Examples

data(DoignonFalmagne7)
K   <- DoignonFalmagne7$K     # well-graded knowledge space
N.R <- DoignonFalmagne7$N.R   # frequencies of response patterns

## Fit simple learning model (SLM) by different methods
slm(K, N.R, method = "MD")    # minimum discrepancy estimation
slm(K, N.R, method = "ML")    # maximum likelihood estimation by EM
slm(K, N.R, method = "MDML")  # MDML estimation

## Compare SLM and BLIM
m1 <-  slm(K, N.R, method = "ML")
m2 <- blim(K, N.R, method = "ML")
anova(m1, m2)
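
## Illustrative equality constraint (item indices chosen arbitrarily):
## error rates of the first two problems constrained to be equal
slm(K, N.R, method = "ML", betaequal = list(c(1, 2)))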
