
secr (version 2.4.0)

score.test: Score Test for SECR Models

Description

Compute score tests comparing a fitted model and a more general alternative model.

Usage

score.test(secr, ..., betaindex = NULL, trace = FALSE, ncores = 1)

score.table(object, ..., sort = TRUE, dmax = 10)

Arguments

secr
fitted secr model
...
one or more alternative models OR a fitted secr model
trace
logical. If TRUE, a one-line summary is output at each evaluation of the likelihood
ncores
integer number of cores available for parallel processing
betaindex
vector of indices mapping fitted values to parameters in the alternative model
object
score.test object or list of such objects
sort
logical for whether output rows should be in descending order of AICc
dmax
threshold of dAICc for inclusion in model set

Value

  • An object of class 'score.test' that inherits from 'htest', a list with components:
  • statistic: the value of the chi-squared test statistic (score statistic)
  • parameter: degrees of freedom of the approximate chi-squared distribution of the test statistic (the difference in the number of parameters between H0 and H1)
  • p.value: probability of the test statistic, assuming a chi-squared distribution
  • method: a character string indicating the type of test performed
  • data.name: character string giving the null hypothesis, the alternative hypothesis, and the arguments to the function call from the fit of H0
  • H0: simpler model
  • np0: number of parameters in the simpler model
  • H1: alternative model
  • H1.beta: coefficients of the alternative model
  • AIC: Akaike's information criterion, approximated from the score statistic
  • AICc: AIC with the small-sample adjustment of Hurvich & Tsai (1989)

  If ... defines several alternative models, a list of score.test objects is returned. The output from score.table is a dataframe with one row per model, including the reference model.
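
Since the result is an 'htest'-style list, individual components can be extracted in the usual way. A minimal sketch, assuming 'st' holds the result of a score.test call as in the Examples below:

    st$statistic   # score statistic (approximate chi-squared value)
    st$parameter   # degrees of freedom of the test
    st$p.value     # tail probability under the chi-squared distribution
    st$AICc        # approximate AICc for the alternative model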

Details

Score tests allow fast model selection (e.g. Catchpole & Morgan 1996). Only the simpler model need be fitted. This implementation uses the observed information matrix, which may sometimes mislead (Morgan et al. 2007).

The gradient and second derivative of the likelihood function are evaluated numerically at the point in the parameter space of the second model corresponding to the fit of the first model. This operation uses the function fdHess of the nlme package; the likelihood must be evaluated several times, but many fewer times than would be needed to fit the model. The score statistic is an approximation to the likelihood ratio; this allows the difference in AIC to be estimated.

Mapping of parameters between the fitted and alternative models sometimes requires user intervention via the betaindex argument. For example, betaindex = c(1,2,4) is the correct mapping when comparing the null model (D~1, g0~1, sigma~1) to one with a behavioural effect on g0 (D~1, g0~b, sigma~1); see the sketch below.

score.table summarises one or more score tests in the form of a model comparison table. The ... argument here allows the inclusion of additional score test objects (note that its meaning differs from that in score.test). Approximate AICc values are used to compute relative AIC model weights for all models within dmax AICc units of the best model.

Multiple cores provide some speed improvement in score.test when comparing more than two models.
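
The following is a minimal sketch of the explicit betaindex mapping just described, assuming the fitted demonstration model secrdemo.0 (D~1, g0~1, sigma~1) supplied with secr:

    ## the alternative adds a behavioural effect on g0 (D~1, g0~b, sigma~1);
    ## its coefficients are ordered (D, g0, g0.b, sigma), so the three fitted
    ## coefficients of secrdemo.0 map to positions 1, 2 and 4
    st.b <- score.test(secrdemo.0, g0 ~ b, betaindex = c(1, 2, 4))
    st.b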

References

Catchpole, E. A. and Morgan, B. J. T. (1996) Model selection of ring-recovery models using score tests. Biometrics 52, 664--672.

Hurvich, C. M. and Tsai, C. L. (1989) Regression and time series model selection in small samples. Biometrika 76, 297--307.

McCrea, R. S. and Morgan, B. J. T. (2011) Multistate mark-recapture model selection using score tests. Biometrics 67, 234--241.

Morgan, B. J. T., Palmer, K. J. and Ridout, M. S. (2007) Negative score test statistic. American Statistician 61, 285--288.

See Also

AIC, LR.test

Examples

AIC(secrdemo.0, secrdemo.b)
st <- score.test(secrdemo.0, g0 ~ b)
st
score.table(st)
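
The call below is a hedged extension of the example above, comparing two alternative models in a single call and summarising them with score.table; the second formula (g0 ~ t, a time effect on g0) is illustrative only:

    sts <- score.test(secrdemo.0, g0 ~ b, g0 ~ t)
    score.table(sts)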
