Convergence and log-likelihood
Convergence problems typically arise when the optimizer stops at a point
where the log-likelihood is not at a true maximum. This may result
in unreliable and overly complex (or non-estimable) estimates and standard
errors.
Inspect model convergence
lme4 performs a convergence check (see ?lme4::convergence); however, as
discussed here and suggested by one of the lme4 authors in this comment,
this check can be too strict. check_convergence() thus provides an
alternative convergence test for merMod objects.
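For example, a minimal sketch using the sleepstudy data that ships with lme4 (the model is purely illustrative; the tolerance is shown at its documented default):

```r
library(lme4)
library(performance)

# Fit a mixed model; `sleepstudy` ships with lme4
model <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Returns TRUE if the rescaled gradient is below `tolerance`,
# i.e. the fit can reasonably be considered converged
check_convergence(model, tolerance = 0.001)
```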
Resolving convergence issues
Convergence issues are not easy to diagnose. The help page on
?lme4::convergence provides most of the current advice about how to
resolve them.
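Among that advice is to refit the model with all optimizers available to lme4 and check whether they agree on the estimates; if they do, a convergence warning is likely a false positive. A minimal sketch, reusing the model from above (allFit() picks up additional optimizers if packages such as optimx are installed):

```r
# Refit the model with every available optimizer
all_fits <- lme4::allFit(model)
ss <- summary(all_fits)

# Near-identical log-likelihoods and fixed effects across optimizers
# suggest the convergence warning is a false alarm
ss$llik
ss$fixef
```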
Another clue might be large parameter values: estimates (on the scale of
the linear predictor) larger than about 10 in a (non-identity-link)
generalized linear model may indicate complete separation. Complete
separation can be addressed by regularization, e.g. penalized regression
or Bayesian regression with appropriate priors on the fixed effects.
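As one hedged sketch of the Bayesian route, the blme package wraps lme4 and accepts priors on the fixed effects; the variable names, data set, and prior scale below are illustrative assumptions, not taken from this document:

```r
library(blme)

# Like glmer(), but with a weakly informative normal prior on the two
# fixed effects, pulling extreme estimates towards zero;
# `outcome`, `treatment`, `site`, and `mydata` are placeholders
model_reg <- bglmer(
  outcome ~ treatment + (1 | site),
  data        = mydata,
  family      = binomial,
  fixef.prior = normal(cov = diag(9, 2))  # variance 9, i.e. sd = 3
)
```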
Convergence versus Singularity
Note that singularity and convergence refer to different issues: singularity
concerns the "true" best estimate, i.e. whether the maximum likelihood
estimate of the variance-covariance matrix of the random effects is
positive definite or only semi-definite. Convergence, in contrast, is a
question of whether we can assume that the numerical optimization has
worked correctly or not.
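The two issues are therefore checked with separate functions; a minimal sketch with the performance package, reusing the model from above:

```r
# Singularity: is the random-effects variance-covariance matrix only
# semi-definite, e.g. a variance component estimated as (near) zero?
check_singularity(model)

# Convergence: can we trust that the numerical optimization reached
# the maximum of the log-likelihood?
check_convergence(model)
```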