EXPERIMENTAL. For a given model, this function attempts to isolate potential causes of convergence problems. It checks (1) whether there are any unusually large coefficients; (2) whether there are any unusually scaled predictor variables; (3) whether the Hessian (curvature of the negative log-likelihood surface at the MLE) is positive definite (i.e., whether the MLE really represents an optimum). For each case it tries to isolate the particular parameters that are problematic.
diagnose(
  fit,
  eval_eps = 1e-05,
  evec_eps = 0.01,
  big_coef = 10,
  big_sd_log10 = 3,
  big_zstat = 5,
  check_coefs = TRUE,
  check_zstats = TRUE,
  check_hessian = TRUE,
  check_scales = TRUE
)
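A minimal usage sketch (the model below uses the Salamanders data set shipped with glmmTMB; the formula is illustrative only):

library(glmmTMB)
# Poisson GLMM with a random intercept for site, fitted to the bundled Salamanders data
fit <- glmmTMB(count ~ mined + (1 | site), family = poisson, data = Salamanders)
# run all four checks on the fitted model; returns a logical value (see Value below)
diagnose(fit)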
fit: a glmmTMB fit
eval_eps: numeric tolerance for 'bad' eigenvalues
evec_eps: numeric tolerance for 'bad' eigenvector elements
big_coef: numeric tolerance for large coefficients
big_sd_log10: numeric tolerance for badly scaled parameters (log10 scale); i.e., for the default value of 3, predictor variables with a standard deviation less than 1e-3 or greater than 1e3 will be flagged
big_zstat: numeric tolerance for Z-statistics
check_coefs: identify large-magnitude coefficients? (Only checks conditional-model parameters if a (log, logit, cloglog, probit) link is used. Always checks zero-inflation, dispersion, and random-effects parameters. May produce false positives if predictor variables have extremely large scales.)
check_zstats: identify parameters with unusually large Z-statistics (ratio of estimated coefficient to standard error)? Identifies likely failures of Wald confidence intervals/p-values.
check_hessian: identify non-positive-definite Hessian components?
check_scales: identify predictors with unusually small or large scales? (See the rescaling sketch below.)
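As an illustration of the scale check, the following sketch uses hypothetical objects (a data frame dat with response y, grouping factor subject, and a predictor reaction_ms recorded in milliseconds). With the default big_sd_log10 = 3, a predictor whose standard deviation falls outside [1e-3, 1e3] is flagged; rescaling it typically resolves the warning and also avoids false positives in the coefficient check:

sd(dat$reaction_ms)                        # e.g. ~2.5e4: outside [1e-3, 1e3], so check_scales flags it
dat$reaction_s <- dat$reaction_ms / 1000   # rescale milliseconds to seconds
fit2 <- glmmTMB(y ~ reaction_s + (1 | subject), data = dat)
diagnose(fit2)                             # re-run the diagnostics on the rescaled fit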
a logical value indicating whether anything questionable was found
Problems in one category (e.g. complete separation) will generally also appear in "downstream" categories (e.g. non-positive-definite Hessians). Therefore, it is generally advisable to try to deal with problems in order, e.g. address problems with complete separation first, then re-run the diagnostics to see whether Hessian problems persist.
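A hedged sketch of that workflow, using placeholder names (fit is an existing glmmTMB fit and trt is a hypothetical predictor responsible for complete separation):

diagnose(fit)                     # e.g. flags an unusually large coefficient for trt (possible complete separation)
# address the upstream problem first, e.g. by dropping or collapsing the offending predictor
fit2 <- update(fit, . ~ . - trt)
diagnose(fit2)                    # re-run the diagnostics to see whether Hessian problems persist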