altmeta (version 4.1)

pb.hybrid.generic: Hybrid Test for Publication Bias/Small-Study Effects in Meta-Analysis With Generic Outcomes

Description

Performs the hybrid test for publication bias/small-study effects in a meta-analysis with generic outcomes. The hybrid test, introduced in Lin (2020), synthesizes results from multiple popular publication bias tests.

Usage

pb.hybrid.generic(y, s2, n, data, methods,
                  iter.resam = 1000, theo.pval = TRUE)

Value

This function returns a list containing the p-values of the publication bias tests specified in methods as well as that of the hybrid test. Each element of this list is named in the format pval.x, where x is the character string corresponding to a certain publication bias test, such as rank, reg, skew, etc. The hybrid test's p-value has the name pval.hybrid. If theo.pval = TRUE, additional elements giving the p-values of the tests in methods based on their theoretical null distributions are included in the list; their names have the format pval.x.theo. Another p-value of the hybrid test, based on these theoretical p-values, is also produced; its element has the name pval.hybrid.theo.
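
For instance, the p-values can be extracted from the returned list by name. The following minimal illustration uses the dat.plourde dataset from the Examples section with only 10 resampling iterations for speed; the element names shown follow the pval.x pattern described above.

library("altmeta")
data("dat.plourde")
set.seed(1234)
out <- pb.hybrid.generic(y = y, s2 = s2, n = n,
  data = dat.plourde, iter.resam = 10)
out$pval.reg          # resampling-based p-value of Egger's regression test
out$pval.hybrid       # resampling-based p-value of the hybrid test
out$pval.reg.theo     # Egger's test p-value from the theoretical null distribution
out$pval.hybrid.theo  # hybrid test p-value based on the theoretical p-values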

Arguments

y

a numeric vector or the corresponding column name in the argument data, specifying the observed effect sizes in the collected studies.

s2

a numeric vector or the corresponding column name in the argument data, specifying the within-study variances.

n

an optional numeric vector or the corresponding column name in the argument data, specifying the study-specific total sample sizes. This argument is required if the sample-size-based test ("inv.sqrt.n") is included in methods.

data

an optional data frame containing the meta-analysis dataset. If data is specified, the previous arguments, y, s2, and n, should be specified as their corresponding column names in data.

methods

a vector of character strings specifying the publication bias tests to be included in the hybrid test. They can be a subset of "rank" (Begg's rank test; see Begg and Mazumdar, 1994), "reg" (Egger's regression test under the fixed-effect setting; see Egger et al., 1997), "reg.het" (Egger's regression test accounting for additive heterogeneity; see Thompson and Sharp, 1999), "skew" (the skewness-based test under the fixed-effect setting; see Lin and Chu, 2018), "skew.het" (the skewness-based test accounting for additive heterogeneity), "inv.sqrt.n" (the regression test based on sample sizes; see Tang and Liu, 2000), and "trimfill" (the trim-and-fill method; see Duval and Tweedie, 2000). The default is to include all aforementioned tests. An illustration that restricts methods to a subset is given after this argument list.

iter.resam

a positive integer specifying the number of resampling iterations for calculating the p-value of the hybrid test.

theo.pval

a logical value indicating whether to additionally calculate the p-values of the tests specified in methods based on the test statistics' theoretical null distributions. Regardless of this argument, the resampling-based p-values of the tests specified in methods are always produced by this function.
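
As an illustration of the methods and theo.pval arguments, the following sketch (using the dat.plourde dataset from the Examples section; the chosen subset is arbitrary) restricts the hybrid test to three tests and requests only resampling-based p-values. Because "inv.sqrt.n" is not included, n can be omitted.

# hybrid test combining only Begg's rank test, Egger's regression test,
#  and the skewness-based test; resampling-based p-values only
data("dat.plourde")
set.seed(1234)
pb.hybrid.generic(y = y, s2 = s2, data = dat.plourde,
  methods = c("rank", "reg", "skew"),
  iter.resam = 10, theo.pval = FALSE)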

Details

The hybrid test statistic is defined as the minimum p-value among the publication bias tests in the set specified by the argument methods. Note that this minimum p-value is no longer a genuine p-value; using it directly does not control the type I error rate. Its p-value therefore needs to be calculated via a resampling approach. See Lin (2020) for more details.
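
As a rough sketch of the resampling idea (not the exact algorithm implemented in pb.hybrid.generic; see Lin, 2020, for the actual procedure), one could generate effect sizes under a no-bias common-effect null model, recompute the minimum p-value in each resample, and compare the observed minimum against this null distribution. The helper min.pval() below is hypothetical and stands for any routine that returns the smallest p-value among the selected tests.

# Conceptual sketch only; min.pval() is a hypothetical helper that computes
#  the smallest p-value among the chosen publication bias tests.
hybrid.pval.sketch <- function(y, s2, min.pval, iter.resam = 1000) {
  t.obs <- min.pval(y, s2)            # observed hybrid test statistic
  w <- 1 / s2
  mu.hat <- sum(w * y) / sum(w)       # common-effect estimate under no bias
  t.null <- replicate(iter.resam, {
    y.res <- rnorm(length(y), mean = mu.hat, sd = sqrt(s2))  # resample effect sizes under the null
    min.pval(y.res, s2)
  })
  mean(t.null <= t.obs)               # proportion of resampled minima at least as extreme
}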

References

Begg CB, Mazumdar M (1994). "Operating characteristics of a rank correlation test for publication bias." Biometrics, 50(4), 1088--1101. doi:10.2307/2533446

Duval S, Tweedie R (2000). "A nonparametric 'trim and fill' method of accounting for publication bias in meta-analysis." Journal of the American Statistical Association, 95(449), 89--98. doi:10.1080/01621459.2000.10473905

Egger M, Davey Smith G, Schneider M, Minder C (1997). "Bias in meta-analysis detected by a simple, graphical test." BMJ, 315(7109), 629--634. doi:10.1136/bmj.315.7109.629

Lin L (2020). "Hybrid test for publication bias in meta-analysis." Statistical Methods in Medical Research, 29(10), 2881--2899. doi:10.1177/0962280220910172

Lin L, Chu H (2018). "Quantifying publication bias in meta-analysis." Biometrics, 74(3), 785--794. doi:10.1111/biom.12817

Tang J-L, Liu JLY (2000). "Misleading funnel plot for detection of bias in meta-analysis." Journal of Clinical Epidemiology, 53(5), 477--484. doi:10.1016/S0895-4356(99)00204-8

Thompson SG, Sharp SJ (1999). "Explaining heterogeneity in meta-analysis: a comparison of methods." Statistics in Medicine, 18(20), 2693--2708. doi:10.1002/(SICI)1097-0258(19991030)18:20<2693::AID-SIM235>3.0.CO;2-V

See Also

pb.bayesian.binary, pb.hybrid.binary

Examples

## meta-analysis of mean differences
data("dat.plourde")
# based on only 10 resampling iterations
set.seed(1234)
out.plourde <- pb.hybrid.generic(y = y, s2 = s2, n = n,
  data = dat.plourde, iter.resam = 10)
out.plourde
# only produces resampling-based p-values
set.seed(1234)
pb.hybrid.generic(y = y, s2 = s2, n = n,
  data = dat.plourde, iter.resam = 10, theo.pval = FALSE)
# increases the number of resampling iterations to 10000,
#  taking a longer time
out.plourde <- pb.hybrid.generic(y = y, s2 = s2, n = n,
  data = dat.plourde, iter.resam = 10000)

## meta-analysis of standardized mean differences
data("dat.paige")
# based on only 10 resampling iterations
set.seed(1234)
out.paige <- pb.hybrid.generic(y = y, s2 = s2, n = n,
  data = dat.paige, iter.resam = 10)
out.paige
# increases the number of resampling iterations to 10000,
#  taking a longer time
out.paige <- pb.hybrid.generic(y = y, s2 = s2, n = n,
  data = dat.paige, iter.resam = 10000)
