
emmeans (version 1.10.6)

joint_tests: Compute joint tests of the terms in a model

Description

This function produces an analysis-of-variance-like table based on linear functions of predictors in a model or emmGrid object. Specifically, the function constructs, for each combination of factors (or covariates reduced to two or more levels), a set of (interaction) contrasts via contrast, and then tests them using test with joint = TRUE. Optionally, one or more of the predictors may be used as by variable(s), so that separate tables of tests are produced for each combination of them.
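For instance, the interaction row of the table could be reproduced by hand along these lines (a rough sketch using the pigs dataset shipped with emmeans; joint_tests performs an analogous construction for every term):

library(emmeans)
pigs.lm <- lm(log(conc) ~ source * factor(percent), data = pigs)
emmeans(pigs.lm, ~ source * percent) |>
    contrast(interaction = c("consec", "consec")) |>
    test(joint = TRUE)    # joint test of the source:percent interaction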

Usage

joint_tests(object, by = NULL, show0df = FALSE, showconf = TRUE,
  cov.reduce = make.meanint(1), ...)

make.meanint(delta)

meanint(x)

make.symmint(ctr, delta)

symmint(ctr)

Value

a summary_emm object (same as is produced by summary.emmGrid). All effects for which there are no estimable contrasts are omitted from the results. There may be an additional row named (confounded) which accounts for additional degrees of freedom for effects not accounted for in the preceding rows.

The returned object also includes an "est.fcns" attribute, which is a named list containing the linear functions associated with each joint test. Each row of these matrices is standardized to have length 1. No estimable functions are included for confounded effects.

make.meanint returns the function function(x) mean(x) + delta * c(-1, 1), and make.symmint(ctr, delta) returns the function function(x) ctr + delta * c(-1, 1) (which does not depend on x). The cases with delta = 1, namely meanint = make.meanint(1) and symmint(ctr) = make.symmint(ctr, 1), are retained for back-compatibility reasons. These functions are available primarily for use with cov.reduce.
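For illustration, these functions can be applied directly (a small sketch based on the definitions above):

library(emmeans)
make.meanint(1)(c(2, 4, 9))                   # mean is 5, so this returns c(4, 6)
make.symmint(ctr = 0, delta = 1)(c(2, 4, 9))  # ignores x; returns c(-1, 1)
symmint(0)(c(2, 4, 9))                        # same as make.symmint(0, 1), also c(-1, 1)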

Arguments

object

a fitted model, emmGrid, or emm_list. If the latter, its first element is used.

by

character names of by variables. Separate sets of tests are run for each combination of these.

show0df

logical value; if TRUE, results with zero numerator degrees of freedom are displayed; if FALSE, they are skipped

showconf

logical value. When we have models with estimability issues (e.g., missing cells), then with showconf = TRUE, we test any remaining effects that are not purely due to contrasts of a single term. If found, they are labeled (confounded). See vignette("xplanations") for more information.

cov.reduce

a function. If object is a fitted model, it is replaced by ref_grid(object, cov.reduce = cov.reduce, ...). For this purpose, the functions meanint and symmint are available for returning an interval around the mean or around a specified center, respectively. See the section below on covariates.

...

additional arguments passed to ref_grid and emmeans

delta, ctr

arguments for make.meanint and make.symmint

x

argument for meanint and symmint

Dealing with covariates

A covariate (or any other predictor) must have more than one value in the reference grid in order to test its effect and be included in the results. Therefore, when object is a model, we default to cov.reduce = meanint which sets each covariate at a symmetric interval about its mean. But when object is an existing reference grid, it often has only one value for covariates, in which case they are excluded from the joint tests.
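For example (a hedged sketch using the fiber dataset included with emmeans):

library(emmeans)
fiber.lm <- lm(strength ~ machine * diameter, data = fiber)
joint_tests(ref_grid(fiber.lm))  # default grid has a single 'diameter' value, so it is excluded
joint_tests(fiber.lm)            # cov.reduce = make.meanint(1) gives two values, so 'diameter' is tested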

Covariates present further complications in that their values in the reference grid can affect the joint tests of other effects. When covariates are centered around their means (the default), then the tests we obtain can be described as joint tests of covariate-adjusted means; and that is our intended use here. However, some software such as SAS and car::Anova adopt the convention of centering covariates around zero; and for that purpose, one can use cov.reduce = symmint(0) when calling with a model object (or in constructing a reference grid). However, adjusted means with covariates set at or around zero do not make much sense in the context of interpreting estimated marginal means, unless the covariate means really are zero.

See the examples below with the toy dataset.

Details

In models with only factors and no covariates, these tests correspond to “type III” tests a la SAS, as long as equal-weighted averaging is used and there are no estimability issues. When covariates are present and they interact with factors, the results depend on how the covariate is handled in constructing the reference grid. See the section on covariates below. The point one must always remember is that joint_tests always tests contrasts among EMMs, in the context of the reference grid, whereas type III tests are tests of model coefficients -- which may or may not have anything to do with EMMs or contrasts.
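To make the factors-only correspondence concrete, here is a sketch not taken from the package examples; it assumes the car package is installed and uses sum-to-zero contrasts so that car's type III tests match SAS's:

library(emmeans)
pigs2 <- transform(pigs, percent = factor(percent))
pigs.lm2 <- lm(log(conc) ~ source * percent, data = pigs2,
               contrasts = list(source = "contr.sum", percent = "contr.sum"))
car::Anova(pigs.lm2, type = 3)   # type III tests of model coefficients
joint_tests(pigs.lm2)            # joint tests of EMM contrasts; F values should agree here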

See Also

test

Examples

pigs.lm <- lm(log(conc) ~ source * factor(percent), data = pigs)

(jt <- joint_tests(pigs.lm))             ## will be same as type III ANOVA

### Estimable functions associated with "percent"
attr(jt, "est.fcns") $ "percent"

joint_tests(pigs.lm, weights = "outer")  ## differently weighted

joint_tests(pigs.lm, by = "source")      ## separate joint tests of 'percent'

### Comparisons with type III tests in SAS
toy = data.frame(
    treat = rep(c("A", "B"), c(4, 6)),
    female = c(1, 0, 0, 1,   0, 0, 0, 1, 1, 0 ),
    resp = c(17, 12, 14, 19, 28, 26, 26, 34, 33, 27))
toy.fac = lm(resp ~ treat * factor(female), data = toy)
toy.cov = lm(resp ~ treat * female, data = toy)
# (These two models have identical fitted values and residuals)

# -- SAS output we'd get with toy.fac --
## Source          DF    Type III SS    Mean Square   F Value   Pr > F
## treat            1    488.8928571    488.8928571    404.60   <.0001
## female           1     78.8928571     78.8928571     65.29   0.0002
## treat*female     1      1.7500000      1.7500000      1.45   0.2741
# 
# -- SAS output we'd get with toy.cov --
## Source          DF    Type III SS    Mean Square   F Value   Pr > F
## treat            1    252.0833333    252.0833333    208.62   <.0001
## female           1     78.8928571     78.8928571     65.29   0.0002
## female*treat     1      1.7500000      1.7500000      1.45   0.2741

joint_tests(toy.fac)
joint_tests(toy.cov)   # female is regarded as a 2-level factor by default

## Treat 'female' as a numeric covariate (via cov.keep = 0)
## ... then tests depend on where we center things

# Center around the mean
joint_tests(toy.cov, cov.keep = 0, cov.reduce = make.meanint(delta = 1))
# Center around zero (like SAS's results for toy.cov)
joint_tests(toy.cov, cov.keep = 0, cov.reduce = make.symmint(ctr = 0, delta = 1))
# Center around 0.5 (like SAS's results for toy.fac)
joint_tests(toy.cov, cov.keep = 0, cov.reduce = range)

### Example with empty cells and confounded effects
low3 <- unlist(attr(ubds, "cells")[1:3]) 
ubds.lm <- lm(y ~ A*B*C, data = ubds, subset = -low3)

# Show overall joint tests by C:
ref_grid(ubds.lm, by = "C") |> contrast("consec") |> test(joint = TRUE)

# Break each of the above into smaller components:
joint_tests(ubds.lm, by = "C")
