For the binomial distribution, the parameter of interest is the probability of success, \(\pi\). The ML estimators for the parameter, \(\pi\), and for the standard error of \(\hat{\pi}\), \(\sigma_{\hat{\pi}}\), are:
$$\hat{\pi}=\frac{x}{n},$$
$$\hat{\sigma}_{\hat{\pi}}=\sqrt{\frac{\hat{\pi}(1-\hat{\pi})}{n}}$$
where \(x\) is the number of successes and \(n\) is the number of observations.
Because the sampling distribution of any ML estimator is asymptotically normal, an "asymptotic" 100(1 - \(\alpha\))% confidence interval for \(\pi\) is found using:
$$\hat{\pi}\pm z_{1-(\alpha/2)}\hat{\sigma}_{\hat{\pi}}.$$
This method has also been called the Wald confidence interval.
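As a minimal base-R sketch of this calculation (the inputs x, n, and alpha below are hypothetical, chosen only for illustration):

    x <- 20; n <- 50; alpha <- 0.05                # hypothetical data and level
    pi.hat <- x / n                                # ML estimate of pi
    se.hat <- sqrt(pi.hat * (1 - pi.hat) / n)      # estimated standard error
    z <- qnorm(1 - alpha / 2)                      # standard normal quantile
    c(lower = pi.hat - z * se.hat, upper = pi.hat + z * se.hat)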
These estimators can produce extremely inaccurate confidence intervals, particularly for small sample sizes or when \(\pi\) is near 0 or 1 (Agresti 2012). A better method is to invert the binomial test statistic, varying the value of \(\pi_0\) in both the test statistic numerator and the standard error. The interval consists of the values of \(\pi_0\)
that result in a failure to reject the null hypothesis at level \(\alpha\). The bounds can be obtained by finding the roots of a quadratic expansion of the binomial likelihood function (see Agresti 2012).
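For illustration only (not necessarily how any particular function implements it), the interval can be computed from the roots of that quadratic; x, n, and alpha are again hypothetical:

    x <- 20; n <- 50; alpha <- 0.05                # hypothetical data and level
    z <- qnorm(1 - alpha / 2)
    p.hat <- x / n
    centre <- (p.hat + z^2 / (2 * n)) / (1 + z^2 / n)
    half <- (z / (1 + z^2 / n)) * sqrt(p.hat * (1 - p.hat) / n + z^2 / (4 * n^2))
    c(lower = centre - half, upper = centre + half)
    # Base R gives the same (Wilson) score interval: prop.test(x, n, correct = FALSE)$conf.int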
This has been called a "score" confidence interval (Agresti 2012). A simple approximation to this method can be obtained by adding \(z_{1-(\alpha/2)}^{2}/2\) (\(\approx 2\) for \(\alpha = 0.05\)) to both the number of successes and the number of failures (Agresti and Coull 1998). The resulting Agresti-Coull estimators for \(\pi\) and \(\sigma_{\hat{\pi}}\) are:
$$\hat{\pi}=\frac{x+z^2/2}{n+z^2},$$
$$\hat{\sigma}_{\hat{\pi}}=\sqrt{\frac{\hat{\pi}(1-\hat{\pi})}{n+z^2}}.$$
where \(z = z_{1-(\alpha/2)}\) is the standard normal inverse cdf at probability \(1 - \alpha/2\).
As above, the 100(1 - \(\alpha\))% confidence interval for the binomial parameter \(\pi\) is found using:
$$\hat{\pi}\pm z_{1-(\alpha/2)}\hat{\sigma}_{\hat{\pi}}.$$
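A corresponding Agresti-Coull sketch, again with hypothetical x, n, and alpha:

    x <- 20; n <- 50; alpha <- 0.05                # hypothetical data and level
    z <- qnorm(1 - alpha / 2)
    pi.tilde <- (x + z^2 / 2) / (n + z^2)          # adjusted estimate of pi
    se.tilde <- sqrt(pi.tilde * (1 - pi.tilde) / (n + z^2))
    c(lower = pi.tilde - z * se.tilde, upper = pi.tilde + z * se.tilde)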
The likelihood ratio method (method = "LR") finds the points in the binomial log-likelihood function at which the difference between the maximized log-likelihood and the log-likelihood evaluated at \(\pi_0\) is closest to \(\chi_1^{2}(1 - \alpha)/2\), for support given in \(\pi_0\). As support the function uses seq(0.00001, 0.99999, by = 0.00001).
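A grid-search sketch of this idea (hypothetical x, n, and alpha; the cutoff and support follow the description above):

    x <- 20; n <- 50; alpha <- 0.05                # hypothetical data and level
    support <- seq(0.00001, 0.99999, by = 0.00001) # candidate values of pi0
    loglik <- dbinom(x, n, support, log = TRUE)    # log-likelihood at each pi0
    cutoff <- qchisq(1 - alpha, df = 1) / 2        # chi-square(1) criterion
    keep <- loglik >= max(loglik) - cutoff         # pi0 not rejected by the LR test
    range(support[keep])                           # approximate LR confidence limits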
The "exact" method of Clopper and Pearson (1934) is bounded at the nominal limits, but actual coverage may be well below this level, particularly when n is small and \(\pi\) is near 0 or 1.
Agresti (2012) recommends the Agresti-Coull method over the normal approximation, the score method over the Agresti-Coull method, and the likelihood ratio method over all others. The Clopper-Pearson method has been repeatedly criticized as being too conservative (Agresti and Coull 1998).