The presence of small-study effects is a common threat to systematic reviews and meta-analyses, especially when
it is due to publication bias, which occurs when small primary studies are more likely to be reported (published)
if their findings were positive. The presence of small-study effects can be verified by visual inspection of
the funnel plot, where for each included study of the meta-analysis, the estimate of the reported effect size is
depicted against a measure of precision or sample size.
The premise is that, in the absence of small-study effects, the scatter of points
should resemble a symmetric funnel. However, when the effects of small studies deviate
predominantly in one direction (usually the direction of larger effect sizes), asymmetry will ensue.
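As a quick illustration (using simulated effect sizes and standard errors rather than data from the package), a basic funnel plot can be drawn in base R by plotting each estimate against its standard error on a reversed vertical axis:

```r
# Simulated meta-analysis: 30 studies with varying precision
set.seed(1)
b.se <- runif(30, min = 0.1, max = 0.6)   # standard errors
b <- rnorm(30, mean = 0.2, sd = b.se)     # effect sizes (e.g. log hazard ratios)

# Funnel plot: effect size against standard error, with the axis reversed
# so that the most precise (largest) studies appear at the top
plot(b, b.se, ylim = rev(range(b.se)),
     xlab = "Effect size", ylab = "Standard error")

# Dashed vertical line at the fixed-effect summary estimate
pooled <- weighted.mean(b, w = 1 / b.se^2)
abline(v = pooled, lty = 2)
```

In the absence of small-study effects, the points scatter symmetrically around the dashed line, with the spread widening towards the bottom of the plot.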
The fat function implements several tests for detecting funnel plot asymmetry,
which are appropriate when between-study heterogeneity in the treatment effect is relatively low.

Usage

fat(b, b.se, n.total, d.total, d1, d2, method = "E-FIV")
Value

A list containing the following entries:

A two-sided P-value indicating the statistical significance of the funnel plot asymmetry test. Values below the significance level (usually defined as 10%) support the presence of funnel plot asymmetry, and thus of small-study effects.

A fitted glm object, representing the estimated regression model used for testing funnel plot asymmetry.
Arguments

b        Vector with the effect size of each study. Examples are the log odds ratio, log hazard ratio and log relative risk.

b.se     Optional vector with the standard error of the effect size of each study.

n.total  Optional vector with the total sample size of each study.

d.total  Optional vector with the total number of observed events for each study.

d1       Optional vector with the total number of observed events in the exposed groups.

d2       Optional vector with the total number of observed events in the unexposed groups.

method   Method for testing funnel plot asymmetry, defaults to "E-FIV" (Egger's test with multiplicative dispersion). Other options are "E-UW", "M-FIV", "M-FPV", "P-FPV", "D-FIV" and "D-FAV". More info in "Details".
Author

Thomas Debray <thomas.debray@gmail.com>
Details

A common approach to test the presence of small-study effects is to
estimate a regression model where the standardized effect estimate
(effect/SE) is regressed on a measure of precision (1/SE)
(method="E-UW", Egger 1997).
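A minimal sketch of this unweighted Egger regression on simulated data (the test evaluates whether the intercept differs from zero):

```r
# Simulated effect sizes and standard errors for 20 studies
set.seed(42)
b.se <- runif(20, 0.1, 0.5)
b <- rnorm(20, mean = 0.3, sd = b.se)

# Regress the standardized effect (b/SE) on precision (1/SE);
# the intercept captures funnel plot asymmetry
fit <- lm(I(b / b.se) ~ I(1 / b.se))
pval <- summary(fit)$coefficients["(Intercept)", "Pr(>|t|)"]
```

With these simulated, asymmetry-free data the intercept should be close to zero; a small P-value in real data would support the presence of small-study effects.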
It is possible to allow for between-study heterogeneity by adopting a
multiplicative overdispersion parameter by which the variance in each
study is multiplied (method="E-FIV", Sterne 2000).
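One common formulation of this weighted variant (a sketch on simulated data, not necessarily the exact model fitted by fat) regresses the effect size on its standard error with inverse-variance weights; in a weighted lm the residual variance already acts as a multiplicative dispersion factor, and the slope on the standard error is tested against zero:

```r
# Simulated effect sizes and standard errors for 25 studies
set.seed(7)
b.se <- runif(25, 0.1, 0.5)
b <- rnorm(25, mean = 0.3, sd = b.se)

# Weighted regression of the effect size on its standard error;
# the estimated residual variance serves as a multiplicative
# overdispersion parameter, and the slope on b.se reflects asymmetry
fit <- lm(b ~ b.se, weights = 1 / b.se^2)
pval <- summary(fit)$coefficients["b.se", "Pr(>|t|)"]
```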
Unfortunately, it has been demonstrated that the aforementioned two tests are biased because: (i) the independent variable is subject to sampling variability; (ii) the standardized treatment effect is correlated with its estimated precision; and (iii) for binary data, the independent regression variable is a biased estimate of the true precision, with larger bias for smaller sample sizes (Macaskill et al. 2001).
To overcome these problems, an alternative approach estimates a regression
model with the effect size as a function of the study size (method="M-FIV",
Macaskill et al. 2001). Each study is weighted by the precision of the
treatment effect estimate to allow for possible heteroscedasticity.
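This regression can be sketched as follows (simulated data; study size as predictor, inverse-variance weights):

```r
# Simulated studies: larger studies have smaller standard errors
set.seed(11)
n.total <- round(runif(20, 50, 500))   # study sizes
b.se <- 2 / sqrt(n.total)              # rough SE, shrinking with study size
b <- rnorm(20, mean = 0.2, sd = b.se)

# Effect size as a function of study size, weighted by precision
fit <- lm(b ~ n.total, weights = 1 / b.se^2)
pval <- summary(fit)$coefficients["n.total", "Pr(>|t|)"]
```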
Alternatively, each study can be weighted by a pooled estimate of the
outcome proportion (method="M-FPV").
For studies with zero events, a continuity correction is applied by adding 0.5 to all cell counts.
A third approach (method="P-FPV") estimates a regression model with the
treatment effect as a function of the inverse of the total sample size
(Peters et al. 2006).
Again, for studies with zero events, a continuity correction is applied by adding 0.5 to all cell counts.
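A sketch of the Peters regression on simulated data; the weights d(n - d)/n used here are one common formulation based on the event counts, assumed for illustration:

```r
# Simulated studies with binary outcomes
set.seed(3)
n.total <- round(runif(20, 100, 1000))
d.total <- rbinom(20, n.total, prob = 0.2)  # observed events
b.se <- sqrt(4 / d.total)                   # crude SE for a log effect size
b <- rnorm(20, mean = 0.1, sd = b.se)

# Treatment effect regressed on 1/n, weighted by d(n - d)/n
w <- d.total * (n.total - d.total) / n.total
fit <- lm(b ~ I(1 / n.total), weights = w)
pval <- summary(fit)$coefficients[2, "Pr(>|t|)"]
```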
Finally, two tests were proposed for survival data; these use the total
number of events as the independent variable in the weighted regression model
(Debray et al. 2017). The study weights are based on the inverse variance
(method="D-FIV") or on an approximation thereof (method="D-FAV").
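A sketch of both variants on simulated survival-type data; the approximated variance 4/d used for the D-FAV weights here is an assumption on my part, chosen as a common rough variance for a log hazard ratio:

```r
# Simulated survival meta-analysis: 20 studies with varying event counts
set.seed(5)
d.total <- round(runif(20, 20, 200))              # events per study
b.se <- sqrt(4 / d.total) * runif(20, 0.8, 1.2)   # SE of log hazard ratio
b <- rnorm(20, mean = 0.1, sd = b.se)

# D-FIV: effect size on 1/d, with inverse-variance weights
fit.fiv <- lm(b ~ I(1 / d.total), weights = 1 / b.se^2)

# D-FAV: same model, with weights from the approximated variance 4/d
fit.fav <- lm(b ~ I(1 / d.total), weights = d.total / 4)

pvals <- c(summary(fit.fiv)$coefficients[2, 4],
           summary(fit.fav)$coefficients[2, 4])
```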
References

Debray TPA, Moons KGM, Riley RD. Detecting small-study effects and funnel plot asymmetry in meta-analysis of
survival data: a comparison of new and existing tests. Res Syn Meth. 2018;9(1):41--50.
Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test.
BMJ. 1997;315(7109):629--34.
Macaskill P, Walter SD, Irwig L. A comparison of methods to detect publication bias in meta-analysis.
Stat Med. 2001;20(4):641--54.
Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L. Comparison of two methods to detect publication bias
in meta-analysis. JAMA. 2006 Feb 8;295(6):676--80.
Sterne JA, Gavaghan D, Egger M. Publication and related bias in meta-analysis: power of statistical tests
and prevalence in the literature. J Clin Epidemiol. 2000;53(11):1119--29.
See Also

plot.fat
Examples

data(Fibrinogen)
b <- log(Fibrinogen$HR)
b.se <- (log(Fibrinogen$HR.975) - log(Fibrinogen$HR.025)) / (2 * qnorm(0.975))
n.total <- Fibrinogen$N.total
d.total <- Fibrinogen$N.events

# Egger's test with multiplicative dispersion (default method "E-FIV")
fat(b = b, b.se = b.se)

# Debray's test for survival data, with inverse variance weights
fat(b = b, b.se = b.se, d.total = d.total, method = "D-FIV")

# Note that many tests are also available via metafor
require(metafor)
fat(b = b, b.se = b.se, n.total = n.total, method = "M-FIV")
regtest(x = b, sei = b.se, ni = n.total, model = "lm", predictor = "ni")