A Robbins-Monro stochastic approximation update is used to adapt the tuning parameter of the proposal kernel.
The idea is to update the tuning parameter at each iteration of the sampler:
$$h^{(i+1)} = h^{(i)} + \eta^{(i+1)}(\alpha^{(i)} - \alpha_{opt}),$$
where \(h^{(i)}\) and \(\alpha^{(i)}\) are the tuning parameter and acceptance probability at iteration
\(i\), and \(\alpha_{opt}\) is the target acceptance probability. For Gaussian targets, in the limit
as the dimension of the problem tends to infinity, the asymptotically optimal acceptance rate for
MALA algorithms is 0.574 (Roberts and Rosenthal, 2001). The sequence \(\{\eta^{(i)}\}\) is chosen so that
\(\sum_{i=0}^\infty\eta^{(i)}\) is infinite whilst \(\sum_{i=0}^\infty\left(\eta^{(i)}\right)^{1+\epsilon}\) is
finite for some \(\epsilon>0\). These two conditions ensure that any value of \(h\) can be reached, but in a way that
maintains the ergodic behaviour of the chain.
maintains the ergodic behaviour of the chain. One class of sequences with this property is,
$$\eta^{(i)} = \frac{C}{i^\alpha},$$
where \(\alpha\in(0,1]\) and \(C>0\). The scheme is set via the mcmcpars function.
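To make the update concrete, the following is a minimal sketch of the adaptation rule applied to a toy random-walk Metropolis sampler targeting a standard normal; the function name and the toy target are illustrative assumptions rather than package code, and note that 0.574 is the MALA target (the classic random-walk target is 0.234), though the mechanics of the update are identical.

## Minimal sketch (illustrative, not package code): Robbins-Monro adaptation
## of the proposal scale h in a toy random-walk Metropolis sampler whose
## target is a standard normal.
adaptive_rwm_demo <- function(niter = 5000, inith = 1, alpha = 0.5, C = 1,
                              targetacceptance = 0.574) {
    h <- inith
    x <- 0
    for (i in 1:niter) {
        prop <- rnorm(1, mean = x, sd = sqrt(h))  # proposal with scale h
        a <- min(1, dnorm(prop) / dnorm(x))       # acceptance probability alpha^(i)
        if (runif(1) < a) x <- prop
        eta <- C / i^alpha                        # step size eta^(i+1)
        h <- max(h + eta * (a - targetacceptance), 1e-6)  # update, kept positive
    }
    h
}
adaptive_rwm_demo()  # returns the adapted value of h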
andrieuthomsh(inith, alpha, C, targetacceptance = 0.574)

inith: the initial value of the tuning parameter, \(h^{(0)}\)
alpha: the exponent \(\alpha\) of the step-size sequence
C: the constant \(C\) of the step-size sequence
targetacceptance: the target acceptance probability, \(\alpha_{opt}\)

Returns an object of class andrieuthomsh.
Andrieu C, Thoms J (2008). A tutorial on adaptive MCMC. Statistics and Computing, 18(4), 343-373.
Robbins H, Monro S (1951). A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3), 400-407.
Roberts G, Rosenthal J (2001). Optimal Scaling for Various Metropolis-Hastings Algorithms. Statistical Science, 16(4), 351-367.
andrieuthomsh(inith = 1, alpha = 0.5, C = 1, targetacceptance = 0.574)
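The returned object is then supplied to the sampler's parameter constructor. Assuming an mcmcpars interface like the one in the lgcp package (the argument names below, including adaptivescheme, are an assumption and may differ), usage might look like:

## Hypothetical usage: argument names of mcmcpars are assumed, not confirmed
mcmcpars(mala.length = 10000, burnin = 1000, retain = 10,
         adaptivescheme = andrieuthomsh(inith = 1, alpha = 0.5,
                                        C = 1, targetacceptance = 0.574))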