This page attempts to summarize some of the common problems with fitting [gn]lmer models and how to troubleshoot them.
Most of the symptoms/diagnoses/workarounds listed below are due to issues in the mixed-model fitting process itself. You may also run into problems caused by multicollinearity or by incorrectly typed variables (e.g. a variable accidentally coded as character or factor rather than numeric). These problems can often be isolated by trying an lm or glm fit, or by constructing the design matrix via model.matrix(), in each case with the random effects excluded from your model. If these tests fail, the problem is likely not specifically an lme4 issue.
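For example, a quick check along these lines might look like the following sketch; dat, y, x, f, and g are hypothetical stand-ins for your own data frame, response, predictors, and grouping factor:

```r
## fit the fixed-effects part on its own, with the random effect (1 | g) dropped
summary(lm(y ~ x + f, data = dat))   # NA coefficients here suggest multicollinearity
                                     # (use glm(..., family = ...) for a GLMM)
## build the fixed-effects design matrix directly; errors here (e.g. about
## contrasts or variable types) are not lme4-specific
X <- model.matrix(~ x + f, data = dat)
```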
failure to converge in (xxxx) evaluations
The optimizer hit its maximum number of function evaluations. To increase the limit, use the optCtrl argument of [g]lmerControl: for Nelder_Mead and bobyqa the relevant parameter is maxfun; for optim- and optimx-wrapped optimizers, including nlminbwrap, it is maxit; for nloptwrap, it is maxeval.
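For example, a sketch using lme4's built-in sleepstudy and cbpp data sets; the optimizer choice and the limit of 1e5 are arbitrary illustrations, not recommendations:

```r
library(lme4)
## explicitly choosing an optimizer makes the relevant limit unambiguous:
## bobyqa and Nelder_Mead both use 'maxfun'
fit <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
            control = lmerControl(optimizer = "bobyqa",
                                  optCtrl = list(maxfun = 1e5)))
gfit <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
              data = cbpp, family = binomial,
              control = glmerControl(optimizer = "bobyqa",
                                     optCtrl = list(maxfun = 1e5)))
```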
Model failed to converge with max|grad| ...
The scaled gradient at the fitted (RE)ML estimates is worryingly large. Try:
- refitting the model starting from the current parameter estimates: getting consistent results (with no warning) suggests a false positive (see the sketch after this list);
- switching optimizers: getting consistent results suggests there is not really a problem, while getting a similar log-likelihood with different parameter estimates suggests that the parameters are poorly determined (possibly the result of a misspecified or overfitted model);
- computing values of the deviance in the neighbourhood of the estimated parameters, to double-check that lme4 has really found a local optimum.
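A minimal sketch of the first two strategies, assuming a fitted lmer model called fit (for a glmer fit, include "fixef" in the getME() call and use glmerControl(); allFit() is available in recent versions of lme4):

```r
library(lme4)
## 1. restart the fit from the current estimates
ss   <- getME(fit, "theta")
fit2 <- update(fit, start = ss)
## 2. switch optimizers, or refit with every available optimizer via allFit()
fit3 <- update(fit, control = lmerControl(optimizer = "bobyqa"))
summary(allFit(fit))   # compare log-likelihoods and estimates across optimizers
```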
Hessian is numerically singular: parameters are not uniquely determined
The Hessian (curvature matrix) at the maximum likelihood or REML estimates has an eigenvalue that is effectively zero, indicating that (within numerical tolerances) the surface is completely flat in some direction. The model may be misspecified, or extremely badly scaled (see "Model is nearly unidentifiable").
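You can inspect the curvature directly. A sketch, assuming a fitted model fit and that the default calc.derivs = TRUE was used (so the derivatives are stored in the optinfo slot):

```r
## eigenvalues of the Hessian at the fitted parameters: values indistinguishable
## from zero correspond to directions in which the deviance surface is flat
H <- fit@optinfo$derivs$Hessian
eigen(H, only.values = TRUE)$values
```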
Model is nearly unidentifiable ... Rescale variables?
The Hessian (curvature matrix) at the maximum likelihood or REML estimates has a very large eigenvalue, meaning the surface is extremely steep in some direction and, by comparison, nearly flat in others; this is typically a symptom of badly scaled predictors. Consider centering and/or scaling continuous predictor variables.
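A sketch with hypothetical names, where dat is your data frame and x is a continuous predictor with a wide or badly centered range:

```r
## center and scale the continuous predictor, then refit the model
dat$x_sc <- drop(scale(dat$x))                  # (x - mean(x)) / sd(x)
fit_sc   <- lmer(y ~ x_sc + (1 | g), data = dat)
## if needed, back-transform: fixef(fit_sc)["x_sc"] / sd(dat$x) recovers the
## slope on the original scale of x
```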
Contrasts can be applied only to factors with 2 or more levels
One or more of the categorical predictors in the model has fewer than two levels. This may be due to user error when converting these predictors to factors prior to modeling, or it may result from some factor levels being eliminated by NAs in other predictors. Double-check the number of data points in each factor level to see which one is the culprit: lapply(na.omit(df[, vars]), table) (where df is the data frame and vars is a character vector of the column names of your predictor variables).
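A toy illustration with made-up data, showing how NAs in one predictor can empty out a factor level:

```r
df <- data.frame(trt = factor(c("x", "x", "y", "y", "x", "x")),
                 cov = c(1.2, 0.8, NA, NA, 2.1, 1.7))
vars <- c("trt", "cov")
lapply(na.omit(df[, vars]), table)
## $trt now shows a zero count for level "y": once the rows with missing 'cov'
## are dropped, 'trt' effectively has only one level, triggering the error
```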