mixedModel2(formula, random, weights=NULL, only.varcomp=FALSE, data=list(), subset=NULL, contrasts=NULL, tol=1e-6, maxit=50, trace=FALSE)
mixedModel2Fit(y, X, Z, w=NULL, only.varcomp=FALSE, tol=1e-6, maxit=50, trace=FALSE)
randomizedBlock(formula, random, weights=NULL, only.varcomp=FALSE, data=list(), subset=NULL, contrasts=NULL, tol=1e-6, maxit=50, trace=FALSE)
randomizedBlockFit(y, X, Z, w=NULL, only.varcomp=FALSE, tol=1e-6, maxit=50, trace=FALSE)
only.varcomp: logical; if TRUE, computation of standard errors and fixed effect coefficients will be skipped.
contrasts: an optional list; see the contrasts.arg argument of model.matrix.default.
tol: small positive numeric tolerance, passed to glmgam.fit.
maxit: maximum number of iterations permitted, passed to glmgam.fit.
trace: logical value, passed to glmgam.fit. If TRUE then working estimates will be printed at each iteration.
If fixed.estimates=TRUE, then the components from the diagonalized weighted least squares fit are also returned.
randomizedBlock and mixedModel2 are alternative names for the same function.

This function fits the model $y=Xb+Zu+e$ where $b$ is a vector of fixed coefficients and $u$ is a vector of random effects. Write $n$ for the length of $y$ and $q$ for the length of $u$. The random effect vector $u$ is assumed to be normal, mean zero, with covariance matrix $\sigma^2_uI_q$, while $e$ is normal, mean zero, with covariance matrix $\sigma^2I_n$. If $Z$ is an indicator matrix, then this model corresponds to a randomized block experiment. The model is fitted using an eigenvalue decomposition which transforms the problem into a Gamma generalized linear model.
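As a minimal sketch of the low-level interface, the same kind of model can be fitted with mixedModel2Fit by supplying the design matrices directly. The simulated data and the object names (block, y, X, Z, fit) below are illustrative assumptions, not part of the original documentation; the indicator matrix $Z$ is built with model.matrix.

library(statmod)
set.seed(1)
block <- gl(3, 4)                         # 3 blocks of 4 observations each
y <- rnorm(12) + rep(rnorm(3), each=4)    # simulated response with a block effect
X <- matrix(1, nrow=12, ncol=1)           # fixed effects: intercept only
Z <- model.matrix(~0 + block)             # indicator matrix Z for the blocks
fit <- mixedModel2Fit(y, X, Z)
fit$varcomp                               # variance components (assuming the Fit
                                          # function returns varcomp as mixedModel2 does)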
Note that the block variance component varcomp[2] is not constrained to be non-negative. It may take negative values, corresponding to negative intra-block correlations. However, the correlation varcomp[2]/sum(varcomp) must lie between -1 and 1.
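As a small worked illustration of that constraint, the intra-block correlation can be computed directly from a fitted object; here out is assumed to be a fit such as the one in the Examples below.

rho <- out$varcomp[2] / sum(out$varcomp)  # intra-block correlation; may be negative
rho                                       # always lies between -1 and 1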
Missing values in the data are not allowed.
This function is equivalent to lme(fixed=formula, random=~1|random), except that the block variance component is not constrained to be non-negative. It is faster and more accurate than lme for small to moderate size data sets, but slower when the number of observations is large.
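The following hedged comparison sketch uses the petrol data from MASS, which also appears in the Examples below; the lme call and the expectation that the estimates roughly agree are illustrative assumptions, not taken from the original page.

library(MASS)
library(nlme)
library(statmod)
data(petrol)
out <- mixedModel2(Y ~ SG + VP + V10 + EP, random=No, data=petrol)
fit.lme <- lme(Y ~ SG + VP + V10 + EP, random=~1|No, data=petrol)
sqrt(out$varcomp)   # standard deviations; only meaningful if both components are non-negative
VarCorr(fit.lme)    # corresponding estimates from lme, for comparison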
This function tends to be fast and reliable, compared to competitor functions that fit randomized block models, when the number of observations is small, say no more than 200. However, it slows down quadratically as the number of observations increases, because of the need to do two eigenvalue decompositions of order nearly equal to the number of observations. So it is a good choice when fitting large numbers of small data sets, but not a good choice for fitting large data sets.
glmgam.fit, lme, lm, lm.fit
# Compare with the first data example from Venables and Ripley (2002),
# Chapter 10, "Linear Models"
library(MASS)
data(petrol)
out <- mixedModel2(Y~SG+VP+V10+EP, random=No, data=petrol)
cbind(varcomp=out$varcomp, se=out$se.varcomp)