This extracts “leverages” or “hat values” from a fitted-model object. However, there is hidden complexity in what these may mean, so care must be taken in selecting the proper arguments for a given use (see Details). To get the full hat matrix, see get_matrix(., which="hat_matrix").
# S3 method for HLfit
hatvalues(model, type = "projection", which = "resid", force = FALSE, ...)
A list with separate components resid (leverages of the observations) and ranef if which="both", and a vector otherwise.
model: An object of class HLfit, as returned by the fitting functions in spaMM.

type: Character: "projection", "std", or more cryptic values not documented here. See Details.

which: Character: "resid" for the traditional leverages of the observations, "ranef" for random-effect leverages, or "both" for both.

force: Boolean: whether to force recomputation of the leverages even if they are available in the object, for checking purposes.

...: For consistency with the generic.
Leverages may have distinct meanings depending on context. The textbook version for linear models is that the leverages \(q_i\) are the diagonal elements of a projection matrix (the “hat matrix”), and that they may be used to standardize (“studentize”) residuals as follows. If the residual variance \(\phi\) is known, then the variance of each fitted residual \(\hat{e}_i\) is \(\phi(1-q_i)\). Standardized residuals, all with variance 1, are then \(\hat{e}_i/\sqrt{\phi(1-q_i)}\). This standardization of variance no longer holds exactly with estimated \(\phi\), but if an unbiased (REML) estimator of \(\phi\) is used here, the studentized residuals still have, for practical purposes, unit expected variance.
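As an illustration of this textbook case (not specific to spaMM), the following sketch standardizes the residuals of a simple linear model fitted with base R's lm(); the objects fit, q, phi_REML and std_resid are hypothetical names introduced here, and the built-in cars data set is used only for convenience:

fit <- lm(dist ~ speed, data = cars)
q <- hatvalues(fit)                                    # diagonal of the hat matrix
phi_REML <- sum(residuals(fit)^2) / df.residual(fit)   # unbiased (REML-type) estimate of phi
std_resid <- residuals(fit) / sqrt(phi_REML * (1 - q))
all.equal(std_resid, rstandard(fit))                   # matches R's standardized residuals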
When a simple linear model is fitted by ML, the variance of the fitted residuals is less than \(\phi\), but \(\hat{\phi}\) is downward biased, so that residuals standardized only by \(\sqrt{\hat{\phi}}\), without any leverage correction, have an expected variance closer to unity than if corrected by the previous leverages. The ML and REML computations can both be seen as using “standardizing” leverages, defined so that they are zero in the ML case and equal to the “projection” leverages (the above ones, derived from a projection matrix) in the REML case.
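A small numeric check of this point, again a sketch using a base-R lm() fit and hypothetical object names:

fit <- lm(dist ~ speed, data = cars)
e <- residuals(fit)
phi_ML   <- sum(e^2) / nobs(fit)                       # ML estimate of phi, downward biased
phi_REML <- sum(e^2) / df.residual(fit)                # unbiased (REML) estimate
mean((e / sqrt(phi_ML))^2)                             # exactly 1, with no leverage correction
mean((e / sqrt(phi_REML * (1 - hatvalues(fit))))^2)    # close to 1, with leverage correction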
These “standardizing” leverages can themselves be seen as special cases of the leverages that appear in expressions for derivatives, with respect to the dispersion parameters, of the log-determinant of the information matrices considered in the Laplace approximation for marginal or restricted likelihood (Lee et al. 2006). This provides a basis for generalizing the concept of standardizing leverages for ML and REML in mixed-effect models. In particular, in an ML fit, one considers leverages \(q^*_i\) that are no longer the diagonal elements of the projection matrix for the mixed model [and, as hinted above, for a simple linear model the ML \(q^*_i\) are zero]. The generalized standardizing leverages may include corrections for non-Gaussian response, for non-Gaussian random effects, and for taking into account the variation of the GLM weights in the logdet(info.mat) derivatives. Which corrections are included depends on the precise method used to fit the model (e.g., EQL vs. PQL vs. REML). Standardizing leverages are also defined for the random effects.
These distinctions suggest breaking the usual synonymy between “leverages” and “hat values”: the term “hat values” better stands for the diagonal elements of a projection matrix, while “leverages” better stands for the standardizing values.
hatvalues(.,type="std")
returns the standardizing leverages. By contrast, hatvalues(.,type="projection")
will always return hat values from the fitted projection matrix. Note that these values typically differ between ML and REML fit because the fitted projection matrix differs between them.
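For instance, given an ML fit (such as the mlfit object in the Examples below; the name lev is hypothetical), the standardizing leverages for both the observations and the random effects can be requested in one call, with component names as described in the Value section above:

lev <- hatvalues(mlfit, type="std", which="both")
str(lev)   # a list with components $resid and $ranef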
Lee, Y., Nelder, J. A. and Pawitan, Y. (2006) Generalized linear models with random effects: unified analysis via h-likelihood. Chapman & Hall: London.
if (spaMM.getOption("example_maxtime")>0.8) {
  data("Orthodont", package = "nlme")
  rnge <- (107:108)
  # The following three sets of leverages all differ:
  # projection hat values from a REML fit,
  hatvalues(rlfit <- fitme(distance ~ age+(age|Subject),
                           data = Orthodont, method="REML"))[rnge]
  # projection hat values from an ML fit,
  hatvalues(mlfit <- fitme(distance ~ age+(age|Subject),
                           data = Orthodont))[rnge]
  # and standardizing leverages from the same ML fit:
  hatvalues(mlfit, type="std")[rnge]
}