
jtools (version 0.7.3)

summ.merMod: Mixed effects regression summaries with options

Description

summ prints output for a regression model in a fashion similar to summary, but formatted differently and with more options.

Usage

# S3 method for merMod
summ(model, standardize = FALSE, confint = FALSE,
  ci.width = 0.95, digits = getOption("jtools-digits", default = 3),
  model.info = TRUE, model.fit = TRUE, r.squared = FALSE, pvals = NULL,
  n.sd = 1, center = FALSE, standardize.response = FALSE,
  odds.ratio = FALSE, t.df = NULL, ...)

Arguments

model

A merMod object.

standardize

If TRUE, adds a column to output with standardized regression coefficients. Default is FALSE.

confint

Show confidence intervals instead of standard errors? Default is FALSE.

ci.width

A number between 0 and 1 that signifies the width of the desired confidence interval. Default is .95, which corresponds to a 95% confidence interval. Ignored if confint = FALSE.

digits

An integer specifying the number of digits past the decimal to report in the output. Default is 3. You can change the default number of digits for all jtools functions with options("jtools-digits" = digits) where digits is the desired number.
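
For instance, a minimal sketch of changing the package-wide default (the value 2 here is arbitrary):

options("jtools-digits" = 2)  # subsequent summ() calls print 2 digits past the decimal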

model.info

Toggles printing of basic information on sample size, name of the dependent variable, and number of predictors. Default is TRUE.

model.fit

Toggles printing of AIC/BIC (when applicable).

r.squared

Calculate an r-squared model fit statistic? Default is FALSE because it seems to have convergence problems too often.

pvals

Show p values and significance stars? If FALSE, these are not printed. Default is TRUE, except for merMod objects, where the default of NULL is resolved as described in Details.

n.sd

If standardize = TRUE, how many standard deviations should predictors be divided by? Default is 1, though some suggest 2.

center

If you want coefficients for mean-centered variables but don't want to standardize, set this to TRUE.

standardize.response

Should standardization apply to the response variable as well? Default is FALSE.

odds.ratio

If TRUE, reports exponentiated coefficients with confidence intervals for generalized linear (mixed) models such as logit and Poisson models. This quantity is known as an odds ratio for binary outcomes and an incidence rate ratio for count models.
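
As a sketch of how this looks in practice, using lme4's standard cbpp binomial example (not a model from this page; assumes lme4 and jtools are loaded):

data(cbpp, package = "lme4")
gm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            data = cbpp, family = binomial)
summ(gm, odds.ratio = TRUE)  # coefficients and CIs reported as odds ratios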

t.df

For lmerMod models only. The user may set the degrees of freedom used in conducting t-tests. See Details for options.

...

This just captures extra arguments that may only work for other types of models.

Value

If the output is saved to an object, users can access most of the items shown in the printed output (and without rounding).

coeftable

The printed table of variables and coefficients.

model

The model for which statistics are displayed. This would be most useful in cases in which standardize = TRUE.

Much other information can be accessed as attributes.
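
For example, a minimal sketch of retrieving the stored pieces (this assumes list-style $ extraction on the returned object; mv is the lmer fit from the Examples below):

s <- summ(mv)
s$coeftable    # unrounded coefficient table
s$model        # the model the displayed statistics describe
attributes(s)  # additional stored information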

Details

By default, this function will print the following items to the console:

  • The sample size

  • The name of the outcome variable

  • The AIC/BIC and, when r.squared = TRUE, the pseudo-R-squared values.

  • A table with regression coefficients, standard errors, and t-values.

The standardize and center options are implemented by refitting the model with scale_lm and center_lm, respectively. Each of those functions in turn uses gscale for the mean-centering and scaling.
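
For example, a minimal sketch using the sleepstudy model fit in the Examples below (the choice of n.sd = 2 reflects the suggestion noted under the n.sd argument):

summ(mv, standardize = TRUE, n.sd = 2)  # predictors divided by 2 SDs before refitting
summ(mv, center = TRUE)                 # mean-centered but unscaled predictors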

merMod models are a bit different from the others. The lme4 package's developers have, for instance, made a deliberate decision not to report or compute p values for lmer models. There are good reasons for this, most notably that the t-values produced are not "accurate" in the sense of the Type I error rate. For certain large, balanced samples with many groups this is no big deal, but what counts as a "big" or "small" sample? How much balance is necessary? What kind of random effects structure is okay? Good luck getting a statistician to give you clear guidelines on this. Some simulation studies have been done on samples of fewer than 100 observations, so if your sample is around 100 or fewer you certainly should not treat the t-values as accurate. A large number of groups is also crucial for avoiding bias when using t-values. If your groups are nested or crossed in a linear model, it is best to just get the pbkrtest package.

By default, this function follows lme4's lead and does not report the p values for lmer models. If the user has pbkrtest installed, however, p values are reported using the Kenward-Roger d.f. approximation unless pvals = FALSE or t.df is set to something other than NULL.

See pvalues from the lme4 package for more details on this issue. If you're looking for a simple test without installing extra packages, checking whether the confidence intervals exclude zero is better than relying on the t-test. Users of glmer should see some of the advice there as well; lme4 does compute p values for glmer models, and summ reports them, but they are still imperfect.

You have some options to customize the output in this regard with the t.df argument. If NULL, the default, the degrees of freedom used depend on whether the user has pbkrtest installed. If it is installed, the Kenward-Roger approximation is used. If it is not, and the user sets pvals = TRUE, then the residual degrees of freedom are used and a message notes this. If t.df = "residual", the residual d.f. are used without a message. If the user prefers some other method of determining the d.f., any number supplied as the argument will be used.
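
To illustrate, a brief sketch of the t.df options using the model from the Examples below:

summ(mv, t.df = "residual")  # residual d.f., no message printed
summ(mv, t.df = 20)          # any user-supplied number is used as the d.f. (20 is arbitrary)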

References

Johnson, P. C. D. (2014). Extension of Nakagawa & Schielzeth's R^2_GLMM to random slopes models. Methods in Ecology and Evolution, 5, 944–946. https://doi.org/10.1111/2041-210X.12225

Kenward, M. G., & Roger, J. H. (1997). Small sample inference for fixed effects from restricted maximum likelihood. Biometrics, 53, 983. https://doi.org/10.2307/2533558

Luke, S. G. (2017). Evaluating significance in linear mixed-effects models in R. Behavior Research Methods, 49, 1494–1502. https://doi.org/10.3758/s13428-016-0809-y

Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining R^2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4, 133–142. https://doi.org/10.1111/j.2041-210x.2012.00261.x

See Also

scale_lm can simply perform the standardization if preferred.

gscale does the heavy lifting for mean-centering and scaling behind the scenes.

get_ddf_Lb gets the Kenward-Roger degrees of freedom if you have pbkrtest installed.

A tweaked version of r.squaredGLMM is used to generate the pseudo-R-squared estimates for mixed models.

Examples

library(lme4, quietly = TRUE)
data(sleepstudy)
mv <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

summ(mv) # Note lack of p values if you don't have pbkrtest

# Without pbkrtest, you'll get a message about Type I error rates
summ(mv, pvals = TRUE)

# To suppress message, manually specify t.df argument
summ(mv, t.df = "residual")

# Confidence intervals may be a better alternative in the absence of pbkrtest
summ(mv, confint = TRUE)
