raftery.diag is a run length control diagnostic based on a criterion of accuracy of estimation of the quantile \(q\). It is intended for use on a short pilot run of a Markov chain. The number of iterations required to estimate the quantile \(q\) to within an accuracy of +/- \(r\) with probability \(s\) is calculated. Separate calculations are performed for each variable within each chain.
If the number of iterations in data is too small, an error message is printed indicating the minimum length of pilot run. The minimum length is the required sample size for a chain with no correlation between consecutive samples. Positive autocorrelation will increase the required sample size above this minimum value. An estimate \(I\) (the `dependence factor') of the extent to which autocorrelation inflates the required sample size is also provided. Values of \(I\) larger than 5 indicate strong autocorrelation which may be due to a poor choice of starting value, high posterior correlations or `stickiness' of the MCMC algorithm.

The number of `burn in' iterations to be discarded at the beginning of the chain is also calculated.
raftery.diag(data, q=0.025, r=0.005, s=0.95, converge.eps=0.001)
data: an mcmc object.
q: the quantile to be estimated.
r: the desired margin of error of the estimate.
s: the probability of obtaining an estimate in the interval (q-r, q+r).
converge.eps: precision required for the estimate of time to convergence.
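A minimal usage sketch, assuming the coda package is attached; the AR(1) series below is only a stand-in for a short pilot run from a real sampler.

library(coda)
set.seed(1)
## stand-in for a short pilot run (about 4000 iterations, just above the
## minimum of roughly 3746 required for q = 0.025, r = 0.005, s = 0.95)
pilot <- mcmc(as.numeric(arima.sim(model = list(ar = 0.7), n = 4000)))
raftery.diag(pilot)                     # defaults: q = 0.025, r = 0.005, s = 0.95
raftery.diag(pilot, q = 0.5, r = 0.02)  # estimate the median to within +/- 0.02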
A list with class raftery.diag. A print method is available for objects of this class. The contents of the list are:

The time series parameters of data.
A vector containing the parameters r, s and q.
The number of iterations in data.
A 3-d array containing the results: \(M\), the length of `burn in'; \(N\), the required sample size; \(N_{min}\), the minimum sample size based on zero autocorrelation; and \(I = (M+N)/N_{min}\), the `dependence factor'.
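As an illustrative sketch (using the `pilot' object from the example above), the returned object is usually just printed, but its components can also be inspected directly:

rd <- raftery.diag(pilot)   # a list of class "raftery.diag"
print(rd)                   # print method: reports M, N, Nmin and I per variable
str(rd)                     # inspect the list components directly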
The estimated sample size for variable \(U\) is based on the process \(Z_t = d(U_t \le u)\), where \(d\) is the indicator function and \(u\) is the \(q\)th quantile of \(U\). The process \(Z_t\) is derived from the Markov chain data by marginalization and truncation, but is not itself a Markov chain. However, \(Z_t\) may behave as a Markov chain if it is sufficiently thinned out. raftery.diag calculates the smallest value of thinning interval \(k\) which makes the thinned chain \(Z^k_t\) behave as a Markov chain. The required sample size is calculated from this thinned sequence. Since some data is `thrown away', the sample size estimates are conservative.

The criterion for the number of `burn in' iterations \(m\) to be discarded is that the conditional distribution of \(Z^k_m\) given \(Z_0\) should be within converge.eps of the equilibrium distribution of the chain \(Z^k_t\).
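A rough sketch of this construction (not coda's internal code): build the binary process \(Z_t\) for a single stand-in variable and thin it by an interval \(k\). The value k = 3 here is purely illustrative, whereas raftery.diag searches for the smallest adequate \(k\) itself.

q <- 0.025
U <- as.numeric(arima.sim(model = list(ar = 0.7), n = 5000))  # stand-in chain
u <- quantile(U, probs = q)          # empirical qth quantile of U
Z <- as.integer(U <= u)              # Z_t = d(U_t <= u)
k <- 3                               # illustrative thinning interval
Zk <- Z[seq(1, length(Z), by = k)]   # thinned process Z^k_t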
Raftery, A.E. and Lewis, S.M. (1992). One long run with diagnostics: Implementation strategies for Markov chain Monte Carlo. Statistical Science, 7, 493-497.
Raftery, A.E. and Lewis, S.M. (1995). The number of iterations, convergence diagnostics and generic Metropolis algorithms. In Practical Markov Chain Monte Carlo (W.R. Gilks, D.J. Spiegelhalter and S. Richardson, eds.). London, U.K.: Chapman and Hall.