This function performs various tests proposed in the context of multigroup analysis.
The following tests are implemented:
.approach_mgd = "Klesel"
: Approach suggested by Klesel2019;textualcSEMThe model-implied variance-covariance matrix (either indicator
(.type_vcv = "indicator"
) or construct (.type_vcv = "construct"
))
is compared across groups. If the model-implied indicator or construct correlation
matrix based on a saturated structural model should be compared, set .saturated = TRUE
.
To measure the distance between the model-implied variance-covariance matrices,
the geodesic distance (dG) and the squared Euclidean distance (dL) are used.
If more than two groups are compared, the average distance over all groups
is used.
.approach_mgd = "Sarstedt"
: Approach suggested by Sarstedt2011;textualcSEMGroups are compared in terms of parameter differences across groups.
Sarstedt2011;textualcSEM tests if parameter k is equal
across all groups. If several parameters are tested simultaneously
it is recommended to adjust the significance level or the p-values (in cSEM correction is
done by p-value). By default
no multiple testing correction is done, however, several common
adjustments are available via .approach_p_adjust
. See
stats::p.adjust()
for details. Note: the
test has some severe shortcomings. Use with caution.
.approach_mgd = "Chin"
: Approach suggested by Chin2010;textualcSEMGroups are compared in terms of parameter differences across groups.
Chin2010;textualcSEM tests if parameter k is equal
between two groups. If more than two groups are tested for equality, parameter
k is compared between all pairs of groups. In this case, it is recommended
to adjust the significance level or the p-values (in cSEM correction is
done by p-value) since this is essentially a multiple testing setup.
If several parameters are tested simultaneously, correction is by group
and number of parameters. By default
no multiple testing correction is done, however, several common
adjustments are available via .approach_p_adjust
. See
stats::p.adjust()
for details.
.approach_mgd = "Keil"
: Approach suggested by Keil2000;textualcSEMGroups are compared in terms of parameter differences across groups.
Keil2000;textualcSEM tests if parameter k is equal
between two groups. It is assumed, that the standard errors of the coefficients are
equal across groups. The calculation of the standard error of the parameter
difference is adjusted as proposed by Henseler2009;textualcSEM.
If more than two groups are tested for equality, parameter k is compared
between all pairs of groups. In this case, it is recommended
to adjust the significance level or the p-values (in cSEM correction is
done by p-value) since this is essentially a multiple testing setup.
If several parameters are tested simultaneously, correction
is by group and number of parameters. By default
no multiple testing correction is done, however, several common
adjustments are available via .approach_p_adjust
. See
stats::p.adjust()
for details.
.approach_mgd = "Nitzl"
: Approach suggested by Nitzl2010;textualcSEMGroups are compared in terms of parameter differences across groups.
Similarly to Keil2000;textualcSEM, a single parameter k is tested
for equality between two groups. In contrast to Keil2000;textualcSEM,
it is assumed, that the standard errors of the coefficients are
unequal across groups Sarstedt2011cSEM.
If more than two groups are tested for equality, parameter k is compared
between all pairs of groups. In this case, it is recommended
to adjust the significance level or the p-values (in cSEM correction is
done by p-value) since this is essentially a multiple testing setup.
If several parameters are tested simultaneously, correction
is by group and number of parameters. By default
no multiple testing correction is done, however, several common
adjustments are available via .approach_p_adjust
. See
stats::p.adjust()
for details.
.approach_mgd = "Henseler"
: Approach suggested by Henseler2007a;textualcSEMGroups are compared in terms of parameter differences across groups.
In doing so, the bootstrap estimates of one parameter are compared across groups.
In the literature, this approach is also known as PLS-MGA.
Originally, this test was proposed as an one-sided test.
In this function we perform a left-sided and a right-sided test
to investigate whether a parameter differs across two groups. In doing so, the significance
level is divided by 2 and compared to p-value of the left and right-sided test.
Moreover, .approach_p_adjust
is ignored and no overall decision
is returned.
For a more detailed description, see also Henseler2009;textualcSEM.
.approach_mgd = "CI_param"
: Approach mentioned in Sarstedt2011;textualcSEMThis approach is based on the confidence intervals constructed around the
parameter estimates of the two groups. If the parameter of one group falls within
the confidence interval of the other group and/or vice versa, it can be concluded
that there is no group difference.
Since it is based on the confidence intervals .approach_p_adjust
is ignored.
.approach_mgd = "CI_overlap"
: Approach mentioned in Sarstedt2011;textualcSEMThis approach is based on the confidence intervals (CIs) constructed around the
parameter estimates of the two groups. If the two CIs overlap, it can be concluded
that there is no group difference.
Since it is based on the confidence intervals .approach_p_adjust
is ignored.
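The two distance measures used by the "Klesel" approach can be sketched in R as
follows. This is a minimal illustration, assuming the common definitions of the
squared Euclidean and geodesic distances; the helper name dist_between_vcv() is
hypothetical and the scaling may differ from cSEM's internal implementation. The
observed (average) distance is then compared against a reference distribution
obtained by permuting group membership.

dist_between_vcv <- function(S1, S2) {
  # Squared Euclidean distance (dL): half the sum of the squared
  # element-wise differences between the two matrices
  dL <- 0.5 * sum((S1 - S2)^2)
  # Geodesic distance (dG): based on the log-eigenvalues of solve(S1) %*% S2
  lambda <- eigen(solve(S1) %*% S2, only.values = TRUE)$values
  dG <- 0.5 * sum(log(Re(lambda))^2)
  c(dG = dG, dL = dL)
}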
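For the "Keil" approach, a commonly cited form of the test statistic for a
parameter difference between two groups (parameter estimates theta1, theta2,
bootstrap standard errors se1, se2, group sizes n1, n2) is sketched below. The
helper name is hypothetical and the exact weighting used by cSEM after the
Henseler et al. (2009) adjustment may differ; the "Nitzl" approach instead drops
the equal-standard-error assumption and uses a Welch-type correction.

keil_t_stat <- function(theta1, theta2, se1, se2, n1, n2) {
  # Pooled standard error of the parameter difference, assuming equal
  # standard errors across groups
  se_diff <- sqrt((n1 - 1)^2 / (n1 + n2 - 2) * se1^2 +
                  (n2 - 1)^2 / (n1 + n2 - 2) * se2^2) *
             sqrt(1 / n1 + 1 / n2)
  t_value <- (theta1 - theta2) / se_diff
  # Two-sided p-value based on a t-distribution with n1 + n2 - 2 degrees of freedom
  p_value <- 2 * stats::pt(abs(t_value), df = n1 + n2 - 2, lower.tail = FALSE)
  c(t = t_value, p = p_value)
}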
Use .approach_mgd to choose the approach. By default, all approaches are computed
(.approach_mgd = "all").
For convenience, two types of output are available. See the "Value" section below.
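A minimal usage sketch is given below. The data set mydata and its grouping
column "group" are hypothetical, the grouping is assumed to be passed via
csem()'s .id argument, model_to_estimate refers to the lavaan-style model string
defined in the example further below, and the argument values are chosen purely
for illustration.

library(cSEM)

# Estimate the model separately for each group and bootstrap the estimates
res <- csem(.data = mydata, .model = model_to_estimate,
            .id = "group", .resample_method = "bootstrap")

# Run the multigroup comparison, here restricted to a single approach
testMGD(.object = res, .approach_mgd = "Klesel")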
By default, approaches based on parameter differences across groups compare
all parameters (.parameters_to_compare = NULL). To compare only
a subset of parameters, provide the parameters in lavaan model syntax, just like
the model to estimate. Take this simple model:
model_to_estimate <- "
Structural model
eta2 ~ eta1
eta3 ~ eta1 + eta2# Each concept os measured by 3 indicators, i.e., modeled as latent variable
eta1 =~ y11 + y12 + y13
eta2 =~ y21 + y22 + y23
eta3 =~ y31 + y32 + y33
"
If only the path from eta1 to eta3 and the loadings of eta1 are to be compared
across groups, write:
to_compare <- "
Structural parameters to compare
eta3 ~ eta1# Loadings to compare
eta1 =~ y11 + y12 + y13
"
Note that the "model" provided to .parameters_to_compare
does not need to be an estimable model!
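Assuming res is the grouped cSEMResults object from the sketch above, the
comparison can then be restricted to these parameters. The p-value adjustment
method is passed on in the style of stats::p.adjust() and is chosen here only
for illustration.

testMGD(.object = res, .approach_mgd = "Chin",
        .parameters_to_compare = to_compare,
        .approach_p_adjust = "bonferroni")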
Note also that, in contrast to all other functions in cSEM that use this argument,
.handle_inadmissibles defaults to "replace"
to accommodate the Sarstedt et al. (2011) approach.
The argument .R_permutation is ignored for the "Nitzl"
and the "Keil" approaches.
.R_bootstrap is ignored if .object already contains resamples,
i.e., has class cSEMResults_resampled, and if only the "Klesel"
or the "Chin" approach is used.
The argument .saturated is used by "Klesel" only. If .saturated = TRUE,
the original structural model is ignored and replaced by a saturated model,
i.e., a model in which all constructs are allowed to correlate freely.
This is useful for testing differences in the measurement models between groups
in isolation.
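For instance, to compare only the measurement models with the Klesel approach
(again assuming res from the earlier sketch):

testMGD(.object = res, .approach_mgd = "Klesel", .saturated = TRUE)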