
MuMIn (version 1.48.4)

r.squaredLR: Likelihood-ratio based pseudo-R-squared

Description

Calculate a coefficient of determination based on the likelihood-ratio test (\(R_{LR}^{2}\)).

Usage

r.squaredLR(object, null = NULL, null.RE = FALSE, ...)

null.fit(object, evaluate = FALSE, RE.keep = FALSE, envir = NULL, ...)

Value

r.squaredLR returns a value of \(R_{LR}^{2}\), and the attribute "adj.r.squared" gives Nagelkerke's modified statistic. Note that this is not the same as, nor equivalent to, the classical ‘adjusted R squared’.

null.fit returns the fitted null model object (if evaluate = TRUE) or an unevaluated call to fit a null model.
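A minimal sketch of reading both the value and the attribute, using a logistic regression on the built-in mtcars data (the model choice is illustrative only):

library(MuMIn)
fm <- glm(am ~ hp + wt, family = binomial, data = mtcars)
r2 <- r.squaredLR(fm)
r2                          # R^2_LR
attr(r2, "adj.r.squared")   # Nagelkerke's modified statistic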

Arguments

object

a fitted model object.

null

a fitted null model. If not provided, null.fit will be used to construct it. null.fit's capabilities are limited to only a few model classes; for other classes, the null model has to be specified manually (a sketch of doing so follows the Arguments section).

null.RE

logical, should the null model contain the random factors? Used only if no null model is given; otherwise it is ignored, with a warning.

evaluate

if TRUE, evaluate and return the fitted null model object; otherwise return an unevaluated call to fit it.

RE.keep

if TRUE, the random effects of the original model are included in the null model.

envir

the environment in which the null model is to be evaluated, defaults to the environment of the original model's formula.

...

further arguments; of these only x is used, retained for compatibility with older versions (x has been replaced with object).
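As referenced above, a sketch of supplying the null model manually rather than relying on null.fit (here an intercept-only glm built by hand; model and data are illustrative):

library(MuMIn)
fm  <- glm(am ~ hp + wt, family = binomial, data = mtcars)
fm0 <- glm(am ~ 1, family = binomial, data = mtcars)
r.squaredLR(fm, null = fm0)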

Details

This statistic is one of several pseudo-\(R^{2}\)'s proposed for nonlinear regression models. It is based on the improvement of the fitted model over the null (intercept-only) model, and is calculated as

$$ R_{LR}^{2}=1-\exp(-\frac{2}{n}(\log\mathcal{L}(x)-\log\mathcal{L}(0))) $$

where \(\log\mathcal{L}(x)\) and \(\log\mathcal{L}(0)\) are the log-likelihoods of the fitted and the null model, respectively. If the models have been fitted by restricted ML (REML), the ML estimates are used instead (obtained by calling logLik with the argument REML = FALSE). Note that the null model can include the random factors of the original model, in which case the statistic represents the ‘variance explained’ by the fixed effects.
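A sketch of the mixed-model case, assuming the lme4 package and its sleepstudy data are available; with null.RE = TRUE the null model keeps the random effects, so the statistic reflects the fixed effects only:

library(lme4)
library(MuMIn)
fmm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy, REML = FALSE)
r.squaredLR(fmm, null.RE = TRUE)   # 'variance explained' by the fixed effects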

For OLS models the value is consistent with the classical \(R^{2}\). In some cases (e.g. in logistic regression), the maximum \(R_{LR}^{2}\) is less than one. The modification proposed by Nagelkerke (1991) adjusts \(R_{LR}^{2}\) to achieve 1 at its maximum: \(\bar{R}^{2} = R_{LR}^{2} / \max(R_{LR}^{2})\), where \(\max(R_{LR}^{2}) = 1 - \exp(\frac{2}{n}\log\mathcal{L}(0))\).
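Both formulas can be checked by hand from the log-likelihoods; a sketch using an illustrative logistic regression on the built-in mtcars data:

library(MuMIn)
fm  <- glm(am ~ hp + wt, family = binomial, data = mtcars)
fm0 <- glm(am ~ 1, family = binomial, data = mtcars)
n   <- nobs(fm)
r2  <- 1 - exp(-2 / n * (as.numeric(logLik(fm)) - as.numeric(logLik(fm0))))
max_r2 <- 1 - exp(2 / n * as.numeric(logLik(fm0)))
r2            # matches r.squaredLR(fm, null = fm0)
r2 / max_r2   # matches attr(r.squaredLR(fm, null = fm0), "adj.r.squared")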

null.fit tries to guess the null model call, given the provided fitted model object. This is usually a glm call. The function will give an error for an unrecognised class.
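A sketch of inspecting the call that null.fit constructs (same illustrative glm as above):

library(MuMIn)
fm <- glm(am ~ hp + wt, family = binomial, data = mtcars)
null.fit(fm, evaluate = FALSE)   # an unevaluated call, here to glm()
null.fit(fm, evaluate = TRUE)    # the fitted intercept-only model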

References

Cox, D. R. and Snell, E. J. 1989 The analysis of binary data, 2nd ed. London, Chapman and Hall.

Magee, L. 1990 \(R^{2}\) measures based on Wald and likelihood ratio joint significance tests. Amer. Stat. 44, 250--253.

Nagelkerke, N. J. D. 1991 A note on a general definition of the coefficient of determination. Biometrika 78, 691--692.

See Also

summary.lm, r.squaredGLMM

r2 from package performance calculates many different types of \(R^{2}\).