All arguments are recycled.
The Wald interval is obtained by inverting the acceptance region of the Wald
large-sample normal test.
The Wald interval with continuity correction is obtained by extending the Wald interval by 1/(2*n) at each end.
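For illustration, a minimal sketch of both Wald variants in plain R; x, n and the confidence level below are placeholder values, not arguments of any particular function here.

```r
## Sketch of the Wald interval, with and without continuity correction
x <- 17; n <- 50; conf.level <- 0.95      # placeholder data
z   <- qnorm(1 - (1 - conf.level) / 2)
est <- x / n
se  <- sqrt(est * (1 - est) / n)

wald    <- c(lower = est - z * se,             upper = est + z * se)
wald_cc <- c(lower = est - z * se - 1/(2*n),   upper = est + z * se + 1/(2*n))
```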
The Wilson interval, which is the default, was introduced by Wilson (1927) and is
the inversion of the CLT approximation to the family of equal tail tests of p = p0.
The Wilson interval is recommended by Agresti and Coull (1998) as well as by
Brown et al (2001).
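A sketch of the closed-form Wilson (score) bounds under the same kind of placeholder setup:

```r
## Sketch of the Wilson (score) interval
x <- 17; n <- 50; z <- qnorm(0.975)       # placeholder data, 95% level
est  <- x / n
term <- z * sqrt(est * (1 - est) / n + z^2 / (4 * n^2))
wilson <- (est + z^2 / (2 * n) + c(-1, 1) * term) / (1 + z^2 / n)
```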
The Agresti-Coull interval was proposed by Agresti and Coull (1998) and is a slight
modification of the Wilson interval. The Agresti-Coull intervals are never shorter
than the Wilson intervals; cf. Brown et al (2001).
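A sketch of the Agresti-Coull bounds, which replace x/n by the adjusted estimate (x + z^2/2)/(n + z^2) and then apply a Wald-type half-width (placeholder values again):

```r
## Sketch of the Agresti-Coull interval (adjusted centre, Wald-type width)
x <- 17; n <- 50; z <- qnorm(0.975)       # placeholder data, 95% level
n_tilde <- n + z^2
p_tilde <- (x + z^2 / 2) / n_tilde
ac <- p_tilde + c(-1, 1) * z * sqrt(p_tilde * (1 - p_tilde) / n_tilde)
```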
The Jeffreys interval is an implementation of the equal-tailed Jeffreys prior interval
as given in Brown et al (2001).
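A sketch of the equal-tailed Jeffreys bounds via beta quantiles; placeholder values, and note that Brown et al (2001) additionally set the lower bound to 0 when x = 0 and the upper bound to 1 when x = n:

```r
## Sketch of the equal-tailed Jeffreys interval (Beta(1/2, 1/2) prior)
x <- 17; n <- 50; alpha <- 0.05           # placeholder data, 95% level
jeffreys <- c(lower = qbeta(alpha / 2,     x + 0.5, n - x + 0.5),
              upper = qbeta(1 - alpha / 2, x + 0.5, n - x + 0.5))
```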
The modified Wilson interval is a modification of the Wilson interval for x close to 0
or n as proposed by Brown et al (2001).
The Wilson cc interval is a modification of the Wilson interval that adds a continuity-correction term.
The modified Jeffreys interval is a modification of the Jeffreys interval for x == 0 | x == 1 and x == n-1 | x == n, as proposed by Brown et al (2001).
The Clopper-Pearson interval is based on quantiles of the corresponding beta
distributions. This is sometimes also called the 'exact' interval.
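A sketch of the Clopper-Pearson bounds via beta quantiles (placeholder values):

```r
## Sketch of the Clopper-Pearson ("exact") interval via beta quantiles
x <- 17; n <- 50; alpha <- 0.05           # placeholder data, 95% level
cp <- c(lower = if (x == 0) 0 else qbeta(alpha / 2,     x,     n - x + 1),
        upper = if (x == n) 1 else qbeta(1 - alpha / 2, x + 1, n - x))
```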
The arcsine interval is based on the variance-stabilizing transformation for the binomial
distribution.
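A sketch of the basic arcsine bounds; note that some implementations first replace x/n by an adjusted estimate such as (x + 3/8)/(n + 3/4), which is not shown here:

```r
## Sketch of the arcsine interval (variance-stabilizing transformation)
x <- 17; n <- 50; z <- qnorm(0.975)       # placeholder data, 95% level
est <- x / n
arcsine <- sin(asin(sqrt(est)) + c(-1, 1) * z / (2 * sqrt(n)))^2
## bounds may still need truncation to [0, 1], see the note below
```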
The logit interval is obtained by inverting the Wald type interval for the log odds.
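A sketch of the logit bounds for 0 < x < n (placeholder values):

```r
## Sketch of the logit interval: Wald interval on the log odds, back-transformed
x <- 17; n <- 50; z <- qnorm(0.975)       # placeholder data, 95% level
lambda   <- log(x / (n - x))              # estimated log odds
se_lam   <- sqrt(n / (x * (n - x)))       # large-sample standard error
logit_ci <- plogis(lambda + c(-1, 1) * z * se_lam)
```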
The Witting interval (cf. Example 2.106 in Witting (1985)) uses randomization to
obtain uniformly optimal lower and upper confidence bounds (cf. Theorem 2.105 in
Witting (1985)) for binomial proportions.
The Pratt interval is obtained by an extremely accurate normal approximation (Pratt, 1968).
The mid-p approach is used to reduce the conservatism of the Clopper-Pearson interval, which is known to be very pronounced. The midp method accumulates the tail areas.
The lower bound \(p_l\) is found as the solution to the equation
$$\frac{1}{2} f(x;n,p_l) + (1-F(x;n,p_l)) = \frac{\alpha}{2}$$
where \(f(x;n,p)\) denotes the probability mass function (pmf) and
\(F(x;n,p)\) the (cumulative) distribution function of the binomial
distribution with size \(n\) and proportion \(p\) evaluated at
\(x\).
The upper bound \(p_u\) is found as the solution to the equation
$$\frac{1}{2} f(x;n,p_u) + F(x-1;n,p_u) = \frac{\alpha}{2}$$
If x = 0, the lower bound is zero, and if x = n, the upper bound is 1.
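A sketch that solves the two mid-p equations above numerically with uniroot (placeholder values):

```r
## Sketch of the mid-p bounds: solve the two tail-area equations numerically
x <- 17; n <- 50; alpha <- 0.05           # placeholder data, 95% level

## lower bound: 1/2 * f(x; n, p) + (1 - F(x; n, p)) = alpha / 2
f_lower <- function(p) 0.5 * dbinom(x, n, p) + (1 - pbinom(x, n, p)) - alpha / 2
## upper bound: 1/2 * f(x; n, p) + F(x - 1; n, p) = alpha / 2
f_upper <- function(p) 0.5 * dbinom(x, n, p) + pbinom(x - 1, n, p) - alpha / 2

p_l <- if (x == 0) 0 else uniroot(f_lower, c(0, x / n))$root
p_u <- if (x == n) 1 else uniroot(f_upper, c(x / n, 1))$root
```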
The likelihood-based approach is said to be theoretically appealing. Confidence intervals are based on profiling the binomial deviance in the neighbourhood of the MLE.
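A sketch of such a profile-deviance interval for 0 < x < n, using the chi-square(1) cutoff; placeholder values, and the exact implementation may differ in details:

```r
## Sketch of a likelihood-based interval: profile the binomial deviance around the MLE
x <- 17; n <- 50; conf.level <- 0.95      # placeholder data
est  <- x / n
logL <- function(p) dbinom(x, n, p, log = TRUE)
## deviance difference compared with the chi-square(1) cutoff
dev  <- function(p) 2 * (logL(est) - logL(p)) - qchisq(conf.level, df = 1)
lik_lower <- uniroot(dev, c(1e-12, est))$root
lik_upper <- uniroot(dev, c(est, 1 - 1e-12))$root
```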
For the Blaker method refer to Blaker (2000).
For more details we refer to Brown et al (2001) as well as Witting (1985).
Some of the methods can yield confidence bounds below 0 or above 1. Such bounds are truncated so that the reported interval stays within [0, 1].
So which interval should we use? The Wald interval often has inadequate coverage, particularly for small n and values of p close to 0 or 1. Conversely, the Clopper-Pearson (exact) interval is very conservative and tends to be wider than necessary. Brown et al. (2001) recommend the Wilson or Jeffreys interval for small n and the Agresti-Coull, Wilson, or Jeffreys interval for larger n, as these provide more reliable coverage than the alternatives. Also note that the point estimate reported for the Agresti-Coull method differs slightly from that of the other methods, because the interval is centred on an adjusted estimate rather than on x/n.
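To make these differences concrete, base R already provides two of the intervals discussed above: prop.test() without continuity correction returns the Wilson score interval and binom.test() the Clopper-Pearson interval. The values of x and n below are placeholders chosen to show the effect of a small sample.

```r
## Quick comparison for a small sample: Wald vs. Wilson vs. Clopper-Pearson
x <- 3; n <- 12; z <- qnorm(0.975)        # placeholder data, 95% level
est <- x / n
wald   <- est + c(-1, 1) * z * sqrt(est * (1 - est) / n)
wilson <- prop.test(x, n, correct = FALSE)$conf.int   # score (Wilson) interval
exact  <- binom.test(x, n)$conf.int                   # Clopper-Pearson interval
rbind(wald = wald, wilson = wilson, clopper_pearson = exact)
```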