escalc(measure, formula, ...)

## Default method:
escalc(measure, ai, bi, ci, di, n1i, n2i, x1i, x2i, t1i, t2i, m1i, m2i,
       sd1i, sd2i, xi, mi, ri, ti, sdi, ni, yi, vi, sei, data, slab, subset,
       add=1/2, to="only0", drop00=FALSE, vtype="LS",
       var.names=c("yi","vi"), add.measure=FALSE,
       append=TRUE, replace=TRUE, digits=4, ...)

## Method for class 'formula':
escalc(measure, formula, weights, data, add=1/2, to="only0", drop00=FALSE,
       vtype="LS", var.names=c("yi","vi"), digits=4, ...)
add: a non-negative number indicating the amount to add to zero cells, counts, or frequencies (the default is 1/2). See Details.

to: a character string indicating to which cells the value under add should be added (either "all", "only0", "if0all", or "none"). See Details.

vtype: a character string indicating the type of sampling variances to calculate (either "LS", "UB", "HO", "ST", or "CS"). See Details.

var.names: a character vector with two elements specifying the names of the variables for the observed outcomes and the corresponding sampling variances (the defaults are "yi" and "vi").

add.measure: a logical indicating whether a variable should be added to the data frame (with default name "measure") that indicates the type of outcome measure computed. When using this option, var.names can have a third element to change this variable name.

append: a logical indicating whether the data frame specified via the data argument (if one has been specified) should be returned together with the observed outcomes and corresponding sampling variances (the default is TRUE).

replace: a logical indicating whether existing values for yi and vi in the data frame should be replaced or not. Only relevant when append=TRUE and the data frame already contains the yi and vi variables. If replace=TRUE (the default), all of the existing values will be overwritten. If replace=FALSE, only NA values will be replaced. See the Value section below for more details.

The function returns an object of class c("escalc","data.frame"), that is, a data frame containing the observed outcomes and the corresponding sampling variances. If append=TRUE and a data frame was specified via the data argument, then yi and vi are appended to this data frame. Note that the var.names argument actually specifies the names of these two variables.

If the data frame already contains two variables with names as specified by the var.names argument, the values for these two variables will be overwritten when replace=TRUE (which is the default). By setting replace=FALSE, only values that are NA will be replaced.

The object is formatted and printed with the print.escalc function. The summary.escalc function can be used to obtain confidence intervals for the individual outcomes.
The observed effect sizes or outcomes and the corresponding sampling variances can be computed with the escalc function. The measure argument is a character string specifying which outcome measure should be calculated (see below for the various options), arguments ai through ni are then used to specify the information needed to calculate the various measures (depending on the chosen outcome measure, different arguments need to be specified), and data can be used to specify a data frame containing the variables given to the previous arguments. The add, to, and drop00 arguments may be needed when dealing with frequency or count data that require special handling when some of the frequencies or counts are equal to zero (see below for details). Finally, the vtype argument is used to specify how to estimate the sampling variances (again, see below for details).
To provide a structure to the various effect size or outcome measures that can be calculated with the escalc function, we can distinguish between measures used to contrast two groups, measures of the association between two variables, measures that characterize individual groups, and some other measures that do not directly fall into these categories.
Outcome Measures for Two-Group Comparisons
In many meta-analyses, the goal is to synthesize the results from studies that compare or contrast two groups. The groups may be experimentally defined (e.g., a treatment and a control group created via random assignment) or may naturally occur (e.g., men and women, employees working under high- versus low-stress conditions, people exposed to some environmental risk factor versus those not exposed).
Measures for Dichotomous Variables
In various fields (such as the health and medical sciences), the response or outcome variable measured is often dichotomous (binary), so that the data from a study comparing two different groups can be expressed in terms of a $2x2$ table, such as:

            outcome 1   outcome 2   total
  group 1   ai          bi          n1i
  group 2   ci          di          n2i

where ai, bi, ci, and di denote the cell frequencies (i.e., the number of people falling into a particular category) and n1i and n2i the row totals (i.e., the group sizes).

For example, in a set of randomized clinical trials, group 1 and group 2 may refer to the treatment and placebo/control group, respectively, with outcome 1 denoting some event of interest (e.g., death, complications, failure to improve under the treatment) and outcome 2 its complement. Similarly, in a set of cohort studies, group 1 and group 2 may denote those who engage in and those who do not engage in a potentially harmful behavior (e.g., smoking), with outcome 1 denoting the development of a particular disease (e.g., lung cancer) during the follow-up period. Finally, in a set of case-control studies, group 1 and group 2 may refer to those with the disease (i.e., cases) and those free of the disease (i.e., controls), with outcome 1 denoting, for example, exposure to some environmental risk factor in the past and outcome 2 non-exposure. Note that in all of these examples, the stratified sampling scheme fixes the row totals (i.e., the group sizes) by design.
A meta-analysis of studies reporting results in terms of $2x2$ tables can be based on one of several different outcome measures, including the relative risk (risk ratio), the odds ratio, the risk difference, and the arcsine square-root transformed risk difference (e.g., Fleiss & Berlin, 2009; Rücker et al., 2009). For any of these outcome measures, one needs to specify the cell frequencies via the ai, bi, ci, and di arguments (or alternatively, one can use the ai, ci, n1i, and n2i arguments).

The options for the measure argument are then:
"RR"
for the log relative risk.
"OR"
for the log odds ratio.
"RD"
for the risk difference.
"AS"
for the arcsine square-root transformed risk difference (Rücker et al., 2009).
"PETO"
for the log odds ratio estimated with Peto's method (Yusuf et al., 1985).
Cell entries with a zero count can be problematic, especially for the relative risk and the odds ratio. Adding a small constant to the cells of the $2x2$ tables is a common solution to this problem. When to="only0" (the default), the value of add (the default is 1/2) is added to each cell of those $2x2$ tables with at least one cell equal to 0. When to="all", the value of add is added to each cell of all $2x2$ tables. When to="if0all", the value of add is added to each cell of all $2x2$ tables, but only when there is at least one $2x2$ table with a zero cell. Setting to="none" or add=0 has the same effect: no adjustment to the observed table frequencies is made. Depending on the outcome measure and the data, this may lead to division by zero inside of the function (when this occurs, the resulting value is recoded to NA). Also, studies where ai=ci=0 or bi=di=0 may be considered uninformative about the size of the effect, and dropping such studies has sometimes been recommended (Higgins & Green, 2008). This can be done by setting drop00=TRUE. The values for such studies will then be set to NA.
A dataset corresponding to data of this type is provided in dat.bcg.
Assuming that the dichotomous outcome is actually a dichotomized version of the responses on an underlying quantitative scale, it is also possible to estimate the standardized mean difference based on $2x2$ table data, using either the probit transformed risk difference or a transformation of the odds ratio (e.g., Cox & Snell, 1989; Chinn, 2000; Hasselblad & Hedges, 1995; Sanchez-Meca et al., 2003). The options for the measure argument are then:
"PBIT"
for the probit transformed risk difference as an estimate of the standardized mean difference.
"OR2DN"
for the transformed odds ratio as an estimate of the standardized mean difference (normal distributions).
"OR2DL"
for the transformed odds ratio as an estimate of the standardized mean difference (logistic distributions).
A dataset corresponding to data of this type is provided in dat.gibson2002.
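The logistic-distribution conversion behind "OR2DL" is simple to state: the log odds ratio is multiplied by sqrt(3)/pi, and its sampling variance by 3/pi^2 (Chinn, 2000). A base-R sketch with a made-up 2x2 table:

```r
# Made-up 2x2 table: 10/90 events in group 1, 20/80 in group 2
logor   <- log((10 * 80) / (90 * 20))  # log odds ratio
v.logor <- 1/10 + 1/90 + 1/20 + 1/80   # its sampling variance

d.smd <- logor * sqrt(3) / pi          # SMD under the logistic model
v.smd <- v.logor * 3 / pi^2            # corresponding sampling variance
```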
Measures for Event Counts
In medical and epidemiological studies comparing two different groups (e.g., treated versus untreated patients, exposed versus unexposed individuals), results are sometimes reported in terms of event counts (i.e., the number of events, such as strokes or myocardial infarctions) over a certain period of time. Data of this type are also referred to as person-time data. In particular, assume that the studies report data in the form:

            number of events   total person-time
  group 1   x1i                t1i
  group 2   x2i                t2i

where x1i and x2i denote the total number of events in the first and the second group, respectively, and t1i and t2i the corresponding total person-times at risk. Often, the person-time is measured in years, so that t1i and t2i denote the total number of follow-up years in the two groups. This form of data is fundamentally different from what was described in the previous section, since the total follow-up time may differ even for groups of the same size and the individuals studied may experience the event of interest multiple times. Hence, different outcome measures than the ones described in the previous section must be considered when data are reported in this format. These include the incidence rate ratio, the incidence rate difference, and the square-root transformed incidence rate difference (Bagos & Nikolopoulos, 2009; Rothman et al., 2008). For any of these outcome measures, one needs to specify the total number of events via the x1i and x2i arguments and the corresponding total person-time values via the t1i and t2i arguments.
The options for the measure argument are then:

"IRR" for the log incidence rate ratio.
"IRD" for the incidence rate difference.
"IRSD" for the square-root transformed incidence rate difference.
Studies with zero events in one or both groups can be problematic, especially for the incidence rate ratio. Adding a small constant to the number of events is a common solution to this problem. When to="only0" (the default), the value of add (the default is 1/2) is added to x1i and x2i only in the studies that have zero events in one or both groups. When to="all", the value of add is added to x1i and x2i in all studies. When to="if0all", the value of add is added to x1i and x2i in all studies, but only when there is at least one study with zero events in one or both groups. Setting to="none" or add=0 has the same effect: no adjustment to the observed number of events is made. Depending on the outcome measure and the data, this may lead to division by zero inside of the function (when this occurs, the resulting value is recoded to NA). As for $2x2$ table data, studies where x1i=x2i=0 may be considered uninformative about the size of the effect, and dropping such studies has sometimes been recommended. This can be done by setting drop00=TRUE. The values for such studies will then be set to NA.
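The log incidence rate ratio is likewise easy to sketch in base R; the formulas below are the textbook large-sample ones, and the person-time data are hypothetical:

```r
# Hypothetical person-time data (cf. measure="IRR")
x1i <- 20; t1i <- 400  # 20 events over 400 person-years in group 1
x2i <- 30; t2i <- 350  # 30 events over 350 person-years in group 2

yi <- log((x1i / t1i) / (x2i / t2i))  # log incidence rate ratio
vi <- 1 / x1i + 1 / x2i               # large-sample sampling variance
```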
A dataset corresponding to data of this type is provided in dat.hart1999.
Measures for Quantitative Variables
When the response or dependent variable assessed in the individual studies is measured on some quantitative scale, it is customary to report certain summary statistics, such as the mean and standard deviation of the scores. The data layout for a study comparing two groups with respect to such a variable is then of the form:

            mean   standard deviation   group size
  group 1   m1i    sd1i                 n1i
  group 2   m2i    sd2i                 n2i

where m1i and m2i are the observed means of the two groups, sd1i and sd2i the observed standard deviations, and n1i and n2i the number of individuals in each group. Again, the two groups may be experimentally created (e.g., a treatment and control group based on random assignment) or naturally occurring (e.g., men and women). In either case, the raw mean difference, the standardized mean difference, and the (log transformed) ratio of means (also called the log response ratio) are useful outcome measures when meta-analyzing studies of this type (e.g., Borenstein, 2009).

The options for the measure argument are then:
"MD"
for the raw mean difference.
"SMD"
for the standardized mean difference.
"SMDH"
for the standardized mean difference with heteroscedastic population variances in the two groups (Bonett, 2008, 2009).
"ROM"
for the log transformed ratio of means (Hedges et al., 1999; Lajeunesse, 2011).
Note that the log of the ratio of means can only be computed when the ratio is positive; when m1i and m2i have opposite signs, this outcome measure cannot be computed. The positive bias in the standardized mean difference is automatically corrected for within the function, yielding Hedges' g for measure="SMD" (Hedges, 1981). Similarly, the same bias correction is applied for measure="SMDH" (Bonett, 2009). For measure="SMD", one can choose between vtype="LS" (the default) and vtype="UB". The former uses a large-sample approximation to compute the sampling variances. The latter provides unbiased estimates of the sampling variances. Finally, for measure="MD" and measure="ROM", one can choose between vtype="LS" (the default) and vtype="HO". The former computes the sampling variances without assuming homoscedasticity (i.e., that the true variances of the measurements are the same in group 1 and group 2 within each study), while the latter assumes homoscedasticity.
A dataset corresponding to data of this type is provided in dat.normand1999 (for mean differences and standardized mean differences). A dataset showing the use of the ratio of means measure is provided in dat.curtis1998.

It is also possible to transform standardized mean differences into log odds ratios (e.g., Cox & Snell, 1989; Chinn, 2000; Hasselblad & Hedges, 1995; Sanchez-Meca et al., 2003). The options for the measure argument are then:
"D2ORN"
for the transformed standardized mean difference as an estimate of the log odds ratio (normal distributions).
"D2ORL"
for the transformed standardized mean difference as an estimate of the log odds ratio (logistic distributions).
A dataset illustrating the combined analysis of standardized mean differences and probit transformed risk differences is provided in dat.gibson2002.
Outcome Measures for Variable Association
Meta-analyses are often used to synthesize studies that examine the direction and strength of the association between two variables measured concurrently and/or without manipulation by experimenters. In this section, a variety of outcome measures are discussed that may be suitable for meta-analyses with this purpose. We can distinguish between measures that are applicable when both variables are measured on quantitative scales, when both variables are dichotomous, and when the two variables are of mixed types.
Measures for Two Quantitative Variables
The (Pearson or product moment) correlation coefficient quantifies the direction and strength of the (linear) relationship between two quantitative variables and is therefore frequently used as the outcome measure for meta-analyses (e.g., Borenstein, 2009). Two alternative measures are a bias-corrected version of the correlation coefficient and Fisher's r-to-z transformed coefficient.
For these measures, one needs to specify ri, the vector with the raw correlation coefficients, and ni, the corresponding sample sizes. The options for the measure argument are then:
"COR"
for the raw correlation coefficient.
"UCOR"
for the raw correlation coefficient corrected for its slight negative bias (based on equation 2.3 in Olkin & Pratt, 1958).
"ZCOR"
for the Fisher's r-to-z transformed correlation coefficient (Fisher, 1921).
For measure="UCOR", one can choose between vtype="LS" (the default) and vtype="UB". The former uses the standard large-sample approximation to compute the sampling variances. The latter provides unbiased estimates of the sampling variances (see Hedges, 1989, but using the exact equation instead of the approximation). Datasets corresponding to data of this type are provided in dat.mcdaniel1994 and dat.molloy2014.
Measures for Two Dichotomous Variables
When the goal of a meta-analysis is to examine the relationship between two dichotomous variables, the data for each study can again be presented in the form of a $2x2$ table, except that there may not be a clear distinction between the group (i.e., the row) and the outcome (i.e., the column) variable. Moreover, the table may be a result of cross-sectional (i.e., multinomial) sampling, where none of the table margins (except the total sample size) is fixed by the study design.
The phi coefficient and the odds ratio are commonly used measures of association for $2x2$ table data (e.g., Fleiss & Berlin, 2009). The latter is particularly advantageous, as it is directly comparable to values obtained from stratified sampling (as described earlier). Yule's Q and Yule's Y (Yule, 1912) are additional measures of association for $2x2$ table data (although they are not typically used in meta-analyses). Finally, assuming that the two dichotomous variables are actually dichotomized versions of the responses on two underlying quantitative scales (and assuming that the two variables follow a bivariate normal distribution), it is also possible to estimate the correlation between the two variables using the tetrachoric correlation coefficient (Pearson, 1900; Kirk, 1973).
For any of these outcome measures, one needs to specify the cell frequencies via the ai, bi, ci, and di arguments. The options for the measure argument are then:
"OR"
for the log odds ratio.
"PHI"
for the phi coefficient.
"YUQ"
for Yule's Q (Yule, 1912).
"YUY"
for Yule's Y (Yule, 1912).
"RTET"
for the tetrachoric correlation.
Measures for Mixed Variable Types
Finally, we will consider outcome measures that can be used to describe the relationship between two variables, where one variable is dichotomous and the other variable measures some quantitative characteristic. In that case, it is likely that study authors report summary statistics, such as the mean and standard deviation of the scores within the two groups (defined by the dichotomous variable). One can then compute the point-biserial correlation (Tate, 1954) as a measure of association between the two variables. If the dichotomous variable is actually a dichotomized version of the responses on an underlying quantitative scale (and assuming that the two variables follow a bivariate normal distribution), it is also possible to estimate the correlation between the two variables using the biserial correlation coefficient (Pearson, 1909; Soper, 1914).
Here, one again needs to specify m1i and m2i for the observed means of the two groups, sd1i and sd2i for the observed standard deviations, and n1i and n2i for the number of individuals in each group. The options for the measure argument are then:
"RPB"
for the point-biserial correlation.
"RBIS"
for the biserial correlation.
For measure="RPB", one must indicate via vtype="ST" or vtype="CS" whether the data for the studies were obtained using stratified or cross-sectional (i.e., multinomial) sampling, respectively (it is also possible to specify an entire vector for the vtype argument in case the sampling schemes differed across the studies).
Outcome Measures for Individual Groups
In this section, outcome measures will be described which may be useful when the goal of a meta-analysis is to synthesize studies that characterize some property of individual groups. We will again distinguish between measures that are applicable when the characteristic of interest is a dichotomous variable, when the characteristic represents an event count, or when the characteristic assessed is a quantitative variable.
Measures for Dichotomous Variables
A meta-analysis may be conducted to aggregate studies that provide data for individual groups with respect to a dichotomous dependent variable. Here, one needs to specify xi and ni, denoting the number of individuals experiencing the event of interest and the total number of individuals, respectively. Instead of specifying ni, one can use mi to specify the number of individuals that do not experience the event of interest. The options for the measure argument are then:
"PR"
for the raw proportion.
"PLN"
for the log transformed proportion.
"PLO"
for the logit transformed proportion (i.e., log odds).
"PAS"
for the arcsine square-root transformed proportion (i.e., the angular transformation).
"PFT"
for the Freeman-Tukey double arcsine transformed proportion (Freeman & Tukey, 1950).
When to="only0" (the default), the value of add (the default is 1/2) is added to xi and mi only for studies where xi or mi is equal to 0. When to="all", the value of add is added to xi and mi in all studies. When to="if0all", the value of add is added in all studies, but only when there is at least one study with a zero value for xi or mi. Setting to="none" or add=0 has the same effect: no adjustment to the observed values is made. Depending on the outcome measure and the data, this may lead to division by zero inside of the function (when this occurs, the resulting value is recoded to NA). Datasets corresponding to data of this type are provided in dat.pritz1997 and dat.debruin2009.
Measures for Event Counts
Various measures can be used to characterize individual groups when the dependent variable assessed is an event count. Here, one needs to specify xi and ti, denoting the total number of events that occurred and the total person-time at risk, respectively. The options for the measure argument are then:
"IR"
for the raw incidence rate.
"IRLN"
for the log transformed incidence rate.
"IRS"
for the square-root transformed incidence rate.
"IRFT"
for the Freeman-Tukey transformed incidence rate (Freeman & Tukey, 1950).
When to="only0" (the default), the value of add (the default is 1/2) is added to xi only in the studies that have zero events. When to="all", the value of add is added to xi in all studies. When to="if0all", the value of add is added to xi in all studies, but only when there is at least one study with zero events. Setting to="none" or add=0 has the same effect: no adjustment to the observed number of events is made. Depending on the outcome measure and the data, this may lead to division by zero inside of the function (when this occurs, the resulting value is recoded to NA).
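For a single group, the log transformed incidence rate takes an especially simple form; a base-R sketch with made-up data:

```r
# Hypothetical data: 15 events over 500 person-years (cf. measure="IRLN")
xi <- 15; ti <- 500

yi <- log(xi / ti)  # log incidence rate
vi <- 1 / xi        # large-sample sampling variance
```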
Measures for Quantitative Variables
The goal of a meta-analysis may also be to characterize individual groups, where the response, characteristic, or dependent variable assessed in the individual studies is measured on some quantitative scale. In the simplest case, the raw mean for the quantitative variable is reported for each group, which then becomes the observed outcome for the meta-analysis. Here, one needs to specify mi, sdi, and ni for the observed means, the observed standard deviations, and the sample sizes, respectively. The only option for the measure argument is then:

"MN" for the raw mean.
Note that sdi is used to specify the standard deviations of the observed values of the response, characteristic, or dependent variable and not the standard errors of the means. A more complicated situation arises when the purpose of the meta-analysis is to assess the amount of change within individual groups. In that case, either the raw mean change, standardized versions thereof, or the (log transformed) ratio of means (log response ratio) can be used as outcome measures (Becker, 1988; Gibbons et al., 1993; Lajeunesse, 2011; Morris, 2000). Here, one needs to specify m1i and m2i, the observed means at the two measurement occasions, sd1i and sd2i for the corresponding observed standard deviations, ri for the correlation between the scores observed at the two measurement occasions, and ni for the sample size. The options for the measure argument are then:
"MC"
for the raw mean change.
"SMCC"
for the standardized mean change using change score standardization.
"SMCR"
for the standardized mean change using raw score standardization.
"SMCRH"
for the standardized mean change using raw score standardization with heteroscedastic population variances at the two measurement occasions (Bonett, 2008).
"ROMC"
for the log transformed ratio of means (Lajeunesse, 2011).
A few notes about the change score measures. In practice, one often has a mix of information available from the individual studies to compute these measures. In particular, if m1i and m2i are unknown, but the raw mean change is directly reported in a particular study, then you can set m1i to that value and m2i to 0 (making sure that the raw mean change was computed as m1i-m2i within that study and not the other way around). Note that this does not work for ratios of means ("ROMC"). Also, for the raw mean change ("MC") or the standardized mean change using change score standardization ("SMCC"), if sd1i, sd2i, and ri are unknown, but the standard deviation of the change scores is directly reported, then you can set sd1i to that value and both sd2i and ri to 0. For the standardized mean change using raw score standardization ("SMCR"), argument sd2i is actually not needed, as the standardization is only based on sd1i (Becker, 1988; Morris, 2000), which is usually the pre-test standard deviation (if the post-test standard deviation should be used instead, then set sd1i to that). Finally, all of these measures are also applicable for matched-pairs designs (subscripts 1 and 2 then simply denote the first and second group formed by the matching).
Other Outcome Measures for Meta-Analyses
Outcome measures that do not directly fall into the categories above are sometimes used in meta-analyses. These are described in this section.
Cronbach's alpha and Transformations Thereof
Meta-analytic methods can also be used to aggregate Cronbach's alpha values. This is usually referred to as a reliability generalization meta-analysis (Vacha-Haase, 1998). Here, one needs to specify ai, mi, and ni for the observed alpha values, the number of items/replications/parts of the measurement instrument, and the sample sizes, respectively. One can either directly analyze raw Cronbach's alpha values or transformations thereof (Bonett, 2002, 2010; Hakstian & Whalen, 1976). The options for the measure argument are then:
"ARAW"
for raw alpha values.
"AHW"
for transformed alpha values (Hakstian & Whalen, 1976).
"ABT"
for transformed alpha values (Bonett, 2002).
"AHW"
, the transformation $1-(1-\alpha)^(1/3)$ is used, while for "ABT"
, the transformation $-ln(1-\alpha)$ is used. This ensures that the transformed values are monotonically increasing functions of $\alpha$. A dataset corresponding to data of this type is provided in dat.bonett2010
.
Formula Interface
There are two general ways of specifying the data for computing the various effect size or outcome measures with the escalc function: the default interface described above and a formula interface. When using the default interface, the information needed to compute the various outcome measures is passed to the function via the arguments outlined above (i.e., arguments ai through ni).
The formula interface works as follows. As above, the argument measure is a character string specifying which outcome measure should be calculated. The formula argument is then used to specify the data structure as a multipart formula. The data argument can be used to specify a data frame containing the variables in the formula. The add, to, and vtype arguments work as described above.
Outcome Measures for Two-Group Comparisons
Measures for Dichotomous Variables
For $2x2$ table data, the formula argument takes the form outcome ~ group | study, where group is a two-level factor specifying the rows of the tables, outcome is a two-level factor specifying the columns of the tables (the two possible outcomes), and study is a factor specifying the study factor. The weights argument is used to specify the frequencies in the various cells.
Measures for Event Counts
For two-group comparisons with event counts, the formula argument takes the form events/times ~ group | study, where group is a two-level factor specifying the group factor and study is a factor specifying the study factor. The left-hand side of the formula is composed of two parts, with the first variable for the number of events and the second variable for the person-times at risk.
Measures for Quantitative Variables
For two-group comparisons with quantitative variables, the formula argument takes the form means/sds ~ group | study, where group is a two-level factor specifying the group factor and study is a factor specifying the study factor. The left-hand side of the formula is composed of two parts, with the first variable for the means and the second variable for the standard deviations. The weights argument is used to specify the sample sizes in the groups.
Outcome Measures for Variable Association
Measures for Two Quantitative Variables
For these outcome measures, the formula argument takes the form outcome ~ 1 | study, where outcome is used to specify the observed correlations and study is a factor specifying the study factor. The weights argument is used to specify the sample sizes.
Measures for Two Dichotomous Variables
Here, the data layout is assumed to be the same as for two-group comparisons with dichotomous variables. Hence, the formula argument is specified in the same manner.
Measures for Mixed Variable Types
Here, the data layout is assumed to be the same as for two-group comparisons with quantitative variables. Hence, the formula argument is specified in the same manner.
Outcome Measures for Individual Groups
Measures for Dichotomous Variables
For these outcome measures, the formula argument takes the form outcome ~ 1 | study, where outcome is a two-level factor specifying the two possible outcomes and study is a factor specifying the study factor. The weights argument is used to specify the frequencies in the various cells.
Measures for Event Counts
For these outcome measures, the formula argument takes the form events/times ~ 1 | study, where study is a factor specifying the study factor. The left-hand side of the formula is composed of two parts, with the first variable for the number of events and the second variable for the person-times at risk.
Measures for Quantitative Variables
For this outcome measure, the formula argument takes the form means/sds ~ 1 | study, where study is a factor specifying the study factor. The left-hand side of the formula is composed of two parts, with the first variable for the means and the second variable for the standard deviations. The weights argument is used to specify the sample sizes.
Note: The formula interface is (currently) not implemented for the raw mean change and the standardized mean change measures.
Other Outcome Measures for Meta-Analyses
Cronbach's alpha and Transformations Thereof
For these outcome measures, the formula argument takes the form alpha/items ~ 1 | study, where study is a factor specifying the study factor. The left-hand side of the formula is composed of two parts, with the first variable for the Cronbach's alpha values and the second variable for the number of items.
Converting a Data Frame to an 'escalc' Object
The function can also be used to convert a regular data frame to an 'escalc' object. One simply sets the measure argument to one of the options described above (or to measure="GEN" for a generic outcome measure not further specified) and passes the observed effect sizes or outcomes via the yi argument and the corresponding sampling variances via the vi argument (or the standard errors via the sei argument).
Becker, B. J. (1988). Synthesizing standardized mean-change measures. British Journal of Mathematical and Statistical Psychology, 41, 257--278.
Bonett, D. G. (2002). Sample size requirements for testing and estimating coefficient alpha. Journal of Educational and Behavioral Statistics, 27, 335--340.
Bonett, D. G. (2008). Confidence intervals for standardized linear contrasts of means. Psychological Methods, 13, 99--109.
Bonett, D. G. (2009). Meta-analytic interval estimation for standardized and unstandardized mean differences. Psychological Methods, 14, 225--238.
Bonett, D. G. (2010). Varying coefficient meta-analytic methods for alpha reliability. Psychological Methods, 15, 368--385.
Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 221--235). New York: Russell Sage Foundation.
Chinn, S. (2000). A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistics in Medicine, 19, 3127--3131.
Cox, D. R., & Snell, E. J. (1989). Analysis of binary data (2nd ed.). London: Chapman & Hall.
Fisher, R. A. (1921). On the probable error of a coefficient of correlation deduced from a small sample. Metron, 1, 1--32.
Fleiss, J. L., & Berlin, J. (2009). Effect sizes for dichotomous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 237--253). New York: Russell Sage Foundation.
Freeman, M. F., & Tukey, J. W. (1950). Transformations related to the angular and the square root. Annals of Mathematical Statistics, 21, 607--611.
Gibbons, R. D., Hedeker, D. R., & Davis, J. M. (1993). Estimation of effect size from a series of experiments involving paired comparisons. Journal of Educational Statistics, 18, 271--279.
Hakstian, A. R., & Whalen, T. E. (1976). A k-sample significance test for independent alpha coefficients. Psychometrika, 41, 219--231.
Hasselblad, V., & Hedges, L. V. (1995). Meta-analysis of screening and diagnostic tests. Psychological Bulletin, 117(1), 167--178.
Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6, 107--128.
Hedges, L. V. (1989). An unbiased correction for sampling error in validity generalization studies. Journal of Applied Psychology, 74, 469--477.
Hedges, L. V., Gurevitch, J., & Curtis, P. S. (1999). The meta-analysis of response ratios in experimental ecology. Ecology, 80, 1150--1156.
Higgins, J. P. T., & Green, S. (Eds.) (2008). Cochrane handbook for systematic reviews of interventions. Chichester, England: Wiley.
Kirk, D. B. (1973). On the numerical approximation of the bivariate normal (tetrachoric) correlation coefficient. Psychometrika, 38, 259--268.
Lajeunesse, M. J. (2011). On the meta-analysis of response ratios for studies with correlated and multi-group designs. Ecology, 92, 2049--2055.
Morris, S. B. (2000). Distribution of the standardized mean change effect size for meta-analysis on repeated measures. British Journal of Mathematical and Statistical Psychology, 53, 17--29.
Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods, 7, 105--125.
Olkin, I., & Pratt, J. W. (1958). Unbiased estimation of certain correlation coefficients. Annals of Mathematical Statistics, 29, 201--211.
Pearson, K. (1900). Mathematical contributions to the theory of evolution. VII. On the correlation of characters not quantitatively measurable. Philosophical Transactions of the Royal Society of London, Series A, 195, 1--47.
Pearson, K. (1909). On a new method of determining correlation between a measured character A, and a character B, of which only the percentage of cases wherein B exceeds (or falls short of) a given intensity is recorded for each grade of A. Biometrika, 7, 96--105.
Rothman, K. J., Greenland, S., & Lash, T. L. (2008). Modern epidemiology (3rd ed.). Philadelphia: Lippincott Williams & Wilkins.
Rücker, G., Schwarzer, G., Carpenter, J., & Olkin, I. (2009). Why add anything to nothing? The arcsine difference as a measure of treatment effect in meta-analysis with zero cells. Statistics in Medicine, 28, 721--738.
Sánchez-Meca, J., Marín-Martínez, F., & Chacón-Moscoso, S. (2003). Effect-size indices for dichotomized outcomes in meta-analysis. Psychological Methods, 8, 448--467.
Soper, H. E. (1914). On the probable error of the bi-serial expression for the correlation coefficient. Biometrika, 10, 384--390.
Tate, R. F. (1954). Correlation between a discrete and a continuous variable: Point-biserial correlation. Annals of Mathematical Statistics, 25, 603--607.
Vacha-Haase, T. (1998). Reliability generalization: Exploring variance in measurement error affecting score reliability across studies. Educational and Psychological Measurement, 58, 6--20.
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1--48. http://www.jstatsoft.org/v36/i03/.
Yule, G. U. (1912). On the methods of measuring association between two attributes. Journal of the Royal Statistical Society, 75, 579--652.
Yusuf, S., Peto, R., Lewis, J., Collins, R., & Sleight, P. (1985). Beta blockade during and after myocardial infarction: An overview of the randomized trials. Progress in Cardiovascular Diseases, 27, 335--371.
print.escalc
, summary.escalc
, rma.uni
, rma.mh
, rma.peto
, rma.glmm
### load BCG vaccine data
dat <- get(data(dat.bcg))
### calculate log relative risks and corresponding sampling variances
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)
dat
### suppose that for a particular study, yi and vi are known (i.e., have
### already been calculated) but the 2x2 table counts are not known; with
### replace=FALSE, the yi and vi values for that study are not replaced
dat[1:12,10:11] <- NA
dat[13,4:7] <- NA
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat, replace=FALSE)
dat
### using formula interface (first rearrange data into long format)
dat.long <- to.long(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg,
data=dat, append=FALSE, vlong=TRUE)
escalc(measure="RR", outcome ~ group | study, weights=freq, data=dat.long)
### convert a regular data frame to an 'escalc' object
### dataset from Lipsey & Wilson (2001), Table 7.1, page 130
dat <- data.frame(id = c(100, 308, 1596, 2479, 9021, 9028, 161, 172, 537, 7049),
yi = c(-0.33, 0.32, 0.39, 0.31, 0.17, 0.64, -0.33, 0.15, -0.02, 0.00),
vi = c(0.084, 0.035, 0.017, 0.034, 0.072, 0.117, 0.102, 0.093, 0.012, 0.067),
random = c(0, 0, 0, 0, 0, 0, 1, 1, 1, 1),
intensity = c(7, 3, 7, 5, 7, 7, 4, 4, 5, 6))
dat <- escalc(measure="SMD", yi=yi, vi=vi, data=dat, slab=paste("Study ID:", id), digits=3)
dat
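As noted in the Value section, the summary.escalc function can be used to obtain confidence intervals for the individual outcomes. A minimal sketch, reusing the first three yi/vi pairs from the data frame above:

```r
library(metafor)

### the summary method adds standard errors and confidence interval
### bounds for the individual outcomes
summary(escalc(measure="GEN", yi=c(-0.33, 0.32, 0.39),
               vi=c(0.084, 0.035, 0.017)), digits=3)
```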