This function computes a variety of supplemental statistics for meta-analyses. These statistics are included for interested users. It is strongly recommended that heterogeneity in meta-analyses be interpreted using the \(SD_{res}\), \(SD_{\rho}\), and \(SD_{\delta}\) statistics, along with their corresponding credibility intervals, which are reported in the default ma_obj output (Wiernik et al., 2017).
heterogeneity(ma_obj, es_failsafe = NULL,
              conf_level = attributes(ma_obj)$inputs$conf_level,
              var_res_ci_method = c("profile_var_es", "profile_Q", "normal_logQ"),
              ...)
Meta-analysis object.
Failsafe effect-size value for file-drawer analyses.
Confidence level to define the width of confidence intervals (default is the conf_level specified in ma_obj).
Method to use to estimate the confidence limits of the residual variance. Options are profile_var_es for a profile-likelihood interval assuming \(\sigma^{2}_{es} \sim \chi^{2}(k-1)\), profile_Q for a profile-likelihood interval assuming \(Q \sim \chi^{2}(k-1, \lambda)\) with \(\lambda = \sum_{i=1}^{k} w_i(\theta_i - \bar{\theta})^{2}\), and normal_logQ for a delta method assuming that \(\log(Q)\) follows a standard normal distribution (see the sketch following the argument list).
Additional arguments.
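As a rough illustration of the logic behind profile_var_es, a chi-squared pivot for a variance gives a closed-form interval. This is a minimal sketch of the general idea, not psychmeta's exact implementation; var_ci_chisq, var_es, and k are names assumed for the example.

# Minimal sketch of a chi-squared interval for a variance, the idea behind
# "profile_var_es": assumes (k - 1) * var_es / sigma^2 ~ chi-squared(k - 1).
# var_ci_chisq, var_es, and k are hypothetical, not psychmeta internals.
var_ci_chisq <- function(var_es, k, conf_level = 0.95) {
  alpha <- 1 - conf_level
  df <- k - 1
  c(lower = df * var_es / qchisq(1 - alpha / 2, df),
    upper = df * var_es / qchisq(alpha / 2, df))
}
var_ci_chisq(var_es = 0.02, k = 20)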
ma_obj with heterogeneity statistics added. The statistics include:
es_type
The effect size metric used.
percent_var_accounted
Percent variance accounted for statistics (by sampling error, by other artifacts, and in total). These statistics are widely reported but are not recommended, as they tend to be misinterpreted as suggesting that only a small portion of the observed variance is accounted for by sampling error and other artifacts (Schmidt, 2010; Schmidt & Hunter, 2015, pp. 15, 425). The square roots of these values are more interpretable and appropriate indices of the relations between observed effect sizes and statistical artifacts (see cor(es, perturbations)).
cor(es, perturbations)
The correlation between observed effect sizes and statistical artifacts in each sample (with sampling error, with other artifacts, and with artifacts in total), computed as \(\sqrt{percent\;var\;accounted}\). These indices are more interpretable and appropriate than percent_var_accounted as measures of the relations between observed effect sizes and statistical artifacts (see the sketch below).
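For instance, taking square roots converts the percent-variance values into these correlation-metric indices. A toy illustration with assumed values (not output from any real analysis):

# Toy illustration (values assumed): converting percent variance accounted for
# into correlations between observed effect sizes and artifacts.
percent_var_accounted <- c(sampling_error = 70, artifacts = 10, total = 80)
cor_es_perturbations <- sqrt(percent_var_accounted / 100)
round(cor_es_perturbations, 2)
#> sampling_error      artifacts          total
#>           0.84           0.32           0.89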
rel_es_obs
\(1-\frac{var_{pre}}{var_{es}}\), the reliability of observed effect size differences as indicators of true effect size differences in the sampled studies. This value is useful for correcting correlations between moderators and effect sizes in meta-regression (see the sketch below).
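For example, a moderator's correlation with observed effect sizes can be disattenuated by dividing by the square root of this reliability. A minimal sketch with assumed inputs, not psychmeta code:

# Minimal sketch (inputs assumed): rel_es_obs = 1 - var_pre / var_es, then used
# to disattenuate a moderator-effect size correlation in meta-regression.
var_es  <- 0.030  # observed variance of effect sizes (assumed)
var_pre <- 0.012  # variance predicted from sampling error and artifacts (assumed)
rel_es_obs <- 1 - var_pre / var_es           # 0.6
r_mod_obs  <- 0.25                           # observed moderator correlation (assumed)
r_mod_true <- r_mod_obs / sqrt(rel_es_obs)   # approximately 0.32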
H_squared
The ratio of the observed effect size variance to the predicted (error) variance; equivalently, Q divided by its degrees of freedom (Higgins & Thompson, 2002).
H
The ratio of the observed effect size standard deviation to the predicted (error) standard deviation.
I_squared
The estimated percent variance not accounted for by sampling error or other artifacts (attributable to moderators and uncorrected artifacts). This statistic is simply rel_es_obs expressed as a percentage rather than a decimal.
Q
Cochran's \(\chi^{2}\) statistic. Significance tests using this statistic are strongly discouraged; heterogeneity should instead be determined by examining the width of the credibility interval and the practical differences between effect sizes contained within it (Wiernik et al., 2017). This value is not accurate when artifact distribution methods are used for corrections.
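To make the relations among Q, H_squared, H, and I_squared concrete, below is a bare-bones computation from study effect sizes and inverse-variance weights. This is a sketch with made-up data, not psychmeta's internal code; psychmeta computes I_squared from rel_es_obs, whereas the last line below uses the closely related Higgins & Thompson (2002) form.

# Sketch with made-up data: Cochran's Q and the indices derived from it.
yi <- c(0.20, 0.35, 0.10, 0.45, 0.30)  # observed effect sizes (assumed)
wi <- c(50, 80, 40, 60, 70)            # inverse sampling-error variance weights (assumed)
theta_bar <- weighted.mean(yi, wi)
Q  <- sum(wi * (yi - theta_bar)^2)     # Cochran's chi-squared statistic
df <- length(yi) - 1
H_squared <- Q / df                    # observed variance / predicted error variance
H         <- sqrt(H_squared)
I_squared <- 100 * max(0, (Q - df) / Q)  # percent variance beyond sampling error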
tau_squared
\(\tau^{2}\), an estimator of the random effects variance component (analogous to the Hunter-Schmidt \(SD_{res}^{2}\), \(SD_{\rho}^{2}\), or \(SD_{\delta}^{2}\) statistics), with its confidence interval. This value is not accurate when artifact distribution methods are used for corrections.
tau
\(\sqrt{\tau^{2}}\), analogous to the Hunter-Schmidt \(SD_{res}\), \(SD_{\rho}\), and \(SD_{\delta}\) statistics, with its confidence interval. This value is not accurate when artifact distribution methods are used for corrections.
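For orientation, one widely used moment estimator of \(\tau^{2}\), the DerSimonian-Laird estimator, has the closed form below. This is a sketch of a common estimator, not necessarily the estimator psychmeta uses; it reuses yi, wi, Q, and df from the previous sketch.

# DerSimonian-Laird moment estimator of tau^2 (a common estimator, shown for
# orientation; psychmeta's estimator may differ). Reuses yi, wi, Q, df above.
C_w <- sum(wi) - sum(wi^2) / sum(wi)
tau_squared <- max(0, (Q - df) / C_w)
tau <- sqrt(tau_squared)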
Q_r, H_r_squared, H_r, I_r_squared, tau_r_squared, tau_r
Outlier-robust versions of these statistics, computed based on absolute deviations from the weighted mean effect size (see Lin et al., 2017). These values are not accurate when artifact distribution methods are used for corrections.
Q_m, H_m_squared, H_m, I_m_squared, tau_m_squared, tau_m
Outlier-robust versions of these statistics, computed based on absolute deviations from the weighted median effect size (see Lin et al., 2017). These values are not accurate when artifact distribution methods are used for corrections.
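To illustrate the core idea from Lin et al. (2017), the robust statistics build on absolute (rather than squared) deviations from a weighted center. The sketch below shows only that building block, with an assumed scaling; the exact formulas psychmeta applies may differ. It reuses yi and wi from the earlier sketch.

# Illustrative only: absolute-deviation building blocks behind the robust
# statistics (Lin et al., 2017); the exact scaling in psychmeta may differ.
theta_bar <- weighted.mean(yi, wi)
Q_r_core  <- sum(sqrt(wi) * abs(yi - theta_bar))   # deviations from weighted mean
# Lower weighted median: smallest value whose cumulative weight reaches half
o <- order(yi)
theta_med <- yi[o][which(cumsum(wi[o]) >= sum(wi) / 2)[1]]
Q_m_core  <- sum(sqrt(wi) * abs(yi - theta_med))   # deviations from weighted median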
file_drawer
Fail-safe N and k statistics (file-drawer analyses). These statistics should not be used to evaluate publication bias, as they counterintuitively suggest less impact of publication bias when bias is strong (Becker, 2005). However, in the absence of publication bias, they can be used as an index of second-order sampling error (i.e., how likely is it that the mean effect would be reduced to the specified value by the addition of further studies?). The confidence interval around the mean effect serves the same purpose more directly. See the sketch below for the underlying arithmetic.
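As a sense of that arithmetic, a common file-drawer formula asks how many additional null-effect studies would be needed to pull the mean effect size down to the failsafe value. A sketch with assumed values; the exact formulas psychmeta reports may differ from this common form.

# Sketch (values assumed): number of additional null-effect studies needed to
# reduce the mean effect size to es_failsafe. psychmeta's exact file-drawer
# formulas may differ from this common form.
k <- 25; mean_es <- 0.30; es_failsafe <- 0.10
k_failsafe <- k * (mean_es - es_failsafe) / es_failsafe  # 50 additional studies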
Becker, B. J. (2005). Failsafe N or file-drawer number. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 111–125). Hoboken, NJ: Wiley. https://doi.org/10.1002/0470870168.ch7
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21(11), 1539–1558. https://doi.org/10.1002/sim.1186
Lin, L., Chu, H., & Hodges, J. S. (2017). Alternative measures of between-study heterogeneity in meta-analysis: Reducing the impact of outlying studies. Biometrics, 73(1), 156–166. https://doi.org/10.1111/biom.12543
Schmidt, F. (2010). Detecting and correcting the lies that data tell. Perspectives on Psychological Science, 5(3), 233–242. https://doi.org/10.1177/1745691610369339
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed., pp. 15, 414, 426, 533–534). Thousand Oaks, CA: Sage. https://doi.org/10/b6mg
Wiernik, B. M., Kostal, J. W., Wilmot, M. P., Dilchert, S., & Ones, D. S. (2017). Empirical benchmarks for interpreting effect size variability in meta-analysis. Industrial and Organizational Psychology, 10(3). https://doi.org/10.1017/iop.2017.44
# NOT RUN {
## Correlations
# Individual-correction meta-analysis of correlations
ma_obj <- ma_r_ic(rxyi = rxyi, n = n, rxx = rxxi, ryy = ryyi, ux = ux,
                  correct_rr_y = FALSE, data = data_r_uvirr)
# Artifact-distribution meta-analysis based on the same object
ma_obj <- ma_r_ad(ma_obj, correct_rr_y = FALSE)
# Add supplemental heterogeneity statistics
ma_obj <- heterogeneity(ma_obj = ma_obj)
# Heterogeneity output for each type of analysis
ma_obj$heterogeneity[[1]]$barebones
ma_obj$heterogeneity[[1]]$individual_correction$true_score
ma_obj$heterogeneity[[1]]$artifact_distribution$true_score

## d values
# Individual-correction meta-analysis of d values
ma_obj <- ma_d_ic(d = d, n1 = n1, n2 = n2, ryy = ryyi,
                  data = data_d_meas_multi)
ma_obj <- ma_d_ad(ma_obj)
ma_obj <- heterogeneity(ma_obj = ma_obj)
ma_obj$heterogeneity[[1]]$barebones
ma_obj$heterogeneity[[1]]$individual_correction$latentGroup_latentY
ma_obj$heterogeneity[[1]]$artifact_distribution$latentGroup_latentY
# }