coin (version 1.4-3)

pvalue-methods: Computation of the \(p\)-Value, Mid-\(p\)-Value, \(p\)-Value Interval and Test Size

Description

Methods for computation of the \(p\)-value, mid-\(p\)-value, \(p\)-value interval and test size.

Usage

# S4 method for PValue
pvalue(object, q, ...)
# S4 method for NullDistribution
pvalue(object, q, ...)
# S4 method for ApproxNullDistribution
pvalue(object, q, ...)
# S4 method for IndependenceTest
pvalue(object, ...)
# S4 method for MaxTypeIndependenceTest
pvalue(object, method = c("global", "single-step",
                          "step-down", "unadjusted"),
       distribution = c("joint", "marginal"),
       type = c("Bonferroni", "Sidak"), ...)

# S4 method for NullDistribution
midpvalue(object, q, ...)
# S4 method for ApproxNullDistribution
midpvalue(object, q, ...)
# S4 method for IndependenceTest
midpvalue(object, ...)

# S4 method for NullDistribution
pvalue_interval(object, q, ...)
# S4 method for IndependenceTest
pvalue_interval(object, ...)

# S4 method for NullDistribution
size(object, alpha, type = c("p-value", "mid-p-value"), ...)
# S4 method for IndependenceTest
size(object, alpha, type = c("p-value", "mid-p-value"), ...)

Value

The \(p\)-value, mid-\(p\)-value, \(p\)-value interval or test size computed from object. A numeric vector or matrix.

Arguments

object

an object from which the \(p\)-value, mid-\(p\)-value, \(p\)-value interval or test size can be computed.

q

a numeric, the quantile for which the \(p\)-value, mid-\(p\)-value or \(p\)-value interval is computed.

method

a character, the method used for the \(p\)-value computation: either "global" (default), "single-step", "step-down" or "unadjusted".

distribution

a character, the distribution used for the computation of adjusted \(p\)-values: either "joint" (default) or "marginal".

type

pvalue(): a character, the type of \(p\)-value adjustment when the marginal distributions are used: either "Bonferroni" (default) or "Sidak".
size(): a character, the type of rejection region used when computing the test size: either "p-value" (default) or "mid-p-value".

alpha

a numeric, the nominal significance level \(\alpha\) at which the test size is computed.

...

further arguments (currently ignored).

Details

The methods pvalue, midpvalue, pvalue_interval and size compute the \(p\)-value, mid-\(p\)-value, \(p\)-value interval and test size, respectively.

For pvalue(), the global \(p\)-value (method = "global") is returned by default and is given with an associated 99% confidence interval when resampling is used to determine the null distribution (which for maximum statistics may be true even in the asymptotic case).
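As a minimal sketch of this behaviour (assuming the coin package is attached; the "conf.int" attribute used below to extract the interval is an assumption about the returned object, not part of the documented interface):

library("coin")

## Hypothetical two-sample data
dta <- data.frame(y = rnorm(20), x = gl(2, 10))

## Resampling-based null distribution, so the global p-value carries a 99% CI
wt <- wilcox_test(y ~ x, data = dta,
                  distribution = approximate(nresample = 10000))
pvalue(wt)                    # prints the p-value together with its confidence interval
attr(pvalue(wt), "conf.int")  # assumed attribute holding the interval; may differ by version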

The familywise error rate (FWER) is always controlled under the global null hypothesis, i.e., in the weak sense, implying that the smallest adjusted \(p\)-value is valid without further assumptions. Control of the FWER under any partial configuration of the null hypotheses, i.e., in the strong sense, as is typically desired for multiple tests and comparisons, requires that the subset pivotality condition holds (Westfall and Young, 1993, pp. 42--43; Bretz, Hothorn and Westfall, 2011, pp. 136--137). In addition, for methods based on the joint distribution of the test statistics, failure of the joint exchangeability assumption (Westfall and Troendle, 2008; Bretz, Hothorn and Westfall, 2011, pp. 129--130) may cause excess Type I errors.

Assuming subset pivotality, single-step or free step-down adjusted \(p\)-values using max-\(T\) procedures are obtained by setting method to "single-step" or "step-down", respectively. In both cases, the distribution argument specifies whether the adjustment is based on the joint distribution ("joint") or the marginal distributions ("marginal") of the test statistics. For procedures based on the marginal distributions, Bonferroni- or Šidák-type adjustment can be specified through the type argument by setting it to "Bonferroni" or "Sidak", respectively.

The \(p\)-value adjustment procedures based on the joint distribution of the test statistics fully utilize distributional characteristics, such as discreteness and dependence structure, whereas procedures using the marginal distributions incorporate discreteness only. Hence, the joint distribution-based procedures are typically more powerful. Details regarding the single-step and free step-down procedures based on the joint distribution can be found in Westfall and Young (1993); in particular, this implementation uses Equation 2.8 with Algorithms 2.5 and 2.8, respectively. Westfall and Wolfinger (1997) provide details of the single-step and free step-down procedures based on the marginal distributions. The generalization of Westfall and Wolfinger (1997) to arbitrary test statistics, as implemented here, is given by Westfall and Troendle (2008).
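As a short, hedged illustration of these choices, the sketch below reuses the bivariate setup from the Examples section (object names are illustrative) and places the joint- and marginal-distribution single-step adjustments side by side:

library("coin")

## Illustrative bivariate two-sample data
dta2 <- data.frame(y1 = rnorm(20) + rep(0:1, each = 10),
                   y2 = rnorm(20),
                   x  = gl(2, 10))
it <- independence_test(y1 + y2 ~ x, data = dta2,
                        distribution = approximate(nresample = 10000))

## Single-step max-T adjustment based on the joint distribution ...
pvalue(it, method = "single-step", distribution = "joint")
## ... versus Bonferroni- and Sidak-type adjustment from the marginal distributions
pvalue(it, method = "single-step", distribution = "marginal", type = "Bonferroni")
pvalue(it, method = "single-step", distribution = "marginal", type = "Sidak")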

Unadjusted \(p\)-values are obtained using method = "unadjusted".

For midpvalue(), the global mid-\(p\)-value is given with an associated 99% mid-\(p\) confidence interval when resampling is used to determine the null distribution. The two-sided mid-\(p\)-value is computed according to the minimum likelihood method (Hirji et al., 1991).

The \(p\)-value interval \((p_0, p_1]\) obtained by pvalue_interval() was proposed by Berger (2000, 2001), where the upper endpoint \(p_1\) is the conventional \(p\)-value and the mid-point, i.e., \(p_{0.5}\), is the mid-\(p\)-value. The lower endpoint \(p_0\) is the smallest \(p\)-value attainable if no conservatism attributable to the discreteness of the null distribution is present. The length of the \(p\)-value interval is the null probability of the observed outcome and provides a data-dependent measure of conservatism that is completely independent of the nominal significance level.
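Under the definitions just given, a brief sketch with the exact Ansari-Bradley test from the Examples section shows how the interval, its length and its midpoint relate (the agreement of the midpoint with midpvalue() is what the description above asserts):

library("coin")

## Hypothetical two-sample data with an exact (discrete) null distribution
dta <- data.frame(y = rnorm(20), x = gl(2, 10))
at <- ansari_test(y ~ x, data = dta, distribution = "exact")

pint <- pvalue_interval(at)
pint           # the interval (p_0, p_1]; p_1 is the conventional p-value
diff(pint)     # null probability of the observed outcome, the measure of conservatism
mean(pint)     # the midpoint p_0.5, i.e., the mid-p-value
midpvalue(at)  # should agree with the midpoint above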

For size(), the test size, i.e., the actual significance level, at the nominal significance level \(\alpha\) is computed using either the rejection region corresponding to the \(p\)-value (type = "p-value", default) or the mid-\(p\)-value (type = "mid-p-value"). The test size is, in contrast to the \(p\)-value interval, a data-independent measure of conservatism that depends on the nominal significance level. A test size smaller or larger than the nominal significance level indicates that the test procedure is conservative or anti-conservative, respectively, at that particular nominal significance level. However, as pointed out by Berger (2001), even when the actual and nominal significance levels are identical, conservatism may still affect the \(p\)-value.
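Continuing the Ansari-Bradley sketch above, the test size can be tabulated over a few nominal levels to gauge conservatism (a value below \(\alpha\) indicates a conservative test at that level):

## Test size at several nominal levels, for p-value and mid-p-value rejection regions
alphas <- c(0.01, 0.05, 0.10)
sapply(alphas, function(a) size(at, alpha = a))
sapply(alphas, function(a) size(at, alpha = a, type = "mid-p-value"))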

References

Berger, V. W. (2000). Pros and cons of permutation tests in clinical trials. Statistics in Medicine 19(10), 1319--1328. doi:10.1002/(SICI)1097-0258(20000530)19:10<1319::aid-sim490>3.0.CO;2-0

Berger, V. W. (2001). The \(p\)-value interval as an inferential tool. The Statistician 50(1), 79--85. doi:10.1111/1467-9884.00262

Bretz, F., Hothorn, T. and Westfall, P. (2011). Multiple Comparisons Using R. Boca Raton: CRC Press.

Hirji, K. F., Tan, S.-J. and Elashoff, R. M. (1991). A quasi-exact test for comparing two binomial proportions. Statistics in Medicine 10(7), 1137--1153. doi:10.1002/sim.4780100713

Westfall, P. H. and Troendle, J. F. (2008). Multiple testing with minimal assumptions. Biometrical Journal 50(5), 745--755. doi:10.1002/bimj.200710456

Westfall, P. H. and Wolfinger, R. D. (1997). Multiple tests with discrete distributions. The American Statistician 51(1), 3--8. doi:10.1080/00031305.1997.10473577

Westfall, P. H. and Young, S. S. (1993). Resampling-Based Multiple Testing: Examples and Methods for \(p\)-Value Adjustment. New York: John Wiley & Sons.

Examples

## Two-sample problem
dta <- data.frame(
    y = rnorm(20),
    x = gl(2, 10)
)

## Exact Ansari-Bradley test
(at <- ansari_test(y ~ x, data = dta, distribution = "exact"))
pvalue(at)
midpvalue(at)
pvalue_interval(at)
size(at, alpha = 0.05)
size(at, alpha = 0.05, type = "mid-p-value")


## Bivariate two-sample problem
dta2 <- data.frame(
    y1 = rnorm(20) + rep(0:1, each = 10),
    y2 = rnorm(20),
    x = gl(2, 10)
)

## Approximative (Monte Carlo) bivariate Fisher-Pitman test
(it <- independence_test(y1 + y2 ~ x, data = dta2,
                         distribution = approximate(nresample = 10000)))

## Global p-value
pvalue(it)

## Joint distribution single-step p-values
pvalue(it, method = "single-step")

## Joint distribution step-down p-values
pvalue(it, method = "step-down")

## Sidak step-down p-values
pvalue(it, method = "step-down", distribution = "marginal", type = "Sidak")

## Unadjusted p-values
pvalue(it, method = "unadjusted")


## Length of YOY Gizzard Shad (Hollander and Wolfe, 1999, p. 200, Tab. 6.3)
yoy <- data.frame(
    length = c(46, 28, 46, 37, 32, 41, 42, 45, 38, 44,
               42, 60, 32, 42, 45, 58, 27, 51, 42, 52,
               38, 33, 26, 25, 28, 28, 26, 27, 27, 27,
               31, 30, 27, 29, 30, 25, 25, 24, 27, 30),
    site = gl(4, 10, labels = as.roman(1:4))
)

## Approximative (Monte Carlo) Fisher-Pitman test with contrasts
## Note: all pairwise comparisons
(it <- independence_test(length ~ site, data = yoy,
                         distribution = approximate(nresample = 10000),
                         xtrafo = mcp_trafo(site = "Tukey")))

## Joint distribution step-down p-values
pvalue(it, method = "step-down") # subset pivotality is violated