
HelpersMG (version 3.6)

MHalgoGen: Monte-Carlo Markov-chain with Metropolis-Hastings algorithm

Description

The parameters must be stored in a data.frame with one named row per parameter and the following columns (a minimal sketch is given at the end of this description):

  • Density. The name of the density function, for example dnorm, dlnorm or dunif

  • Prior1. The first parameter to send to the Density function

  • Prior2. The second parameter to send to the Density function

  • SDProp. The standard deviation used to propose a new value for this parameter

  • Min. The minimum value for this parameter

  • Max. The maximum value for this parameter

  • Init. The initial value for this parameter

This script is a deeply modified version of an MCMC script provided by Olivier Martin (INRA, Paris-Grignon). The likelihood function must receive the named parameters through an argument whose name matches parameters_name. For adaptive MCMC, see: Rosenthal, J. S. 2011. Optimal Proposal Distributions and Adaptive MCMC. Pages 93-112 in S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, editors. MCMC Handbook. Chapman and Hall/CRC.
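
As a minimal sketch only (the object names priors and minusLnL are illustrative; see the Examples section for a complete run), the priors table and a matching likelihood could look like:

priors <- data.frame(Density=c('dnorm', 'dlnorm'),
                     Prior1=c(10, 0.5), Prior2=c(2, 0.5),
                     SDProp=c(0.35, 0.2),
                     Min=c(-3, 0), Max=c(100, 10),
                     Init=c(10, 2),
                     stringsAsFactors=FALSE,
                     row.names=c('mean', 'sd'))
# The likelihood receives the named parameters through the argument whose
# name matches parameters_name (here the default "x") and returns -ln(likelihood)
minusLnL <- function(data, x) {
  -sum(dnorm(data, mean=x['mean'], sd=x['sd'], log=TRUE))
}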

Usage

MHalgoGen(likelihood = stop("A likelihood function must be supplied"),
  parameters_name = "x", parameters = stop("Priors  must be supplied"),
  ..., n.iter = 10000, n.chains = 1, n.adapt = 100, thin = 30,
  trace = FALSE, adaptive = FALSE, adaptive.lag = 500,
  adaptive.fun = function(x) {     ifelse(x > 0.234, 1.3, 0.7) },
  intermediate = NULL, filename = "intermediate.Rdata",
  previous = NULL)

Arguments

likelihood

The function that returns -ln likelihood using data and parameters

parameters_name

The name of the parameters in the likelihood function, default is "x"

parameters

A data.frame with priors; see description and examples

...

Additional parameters to be transmitted to the likelihood function

n.iter

Number of iterations for each chain

n.chains

Number of chains

n.adapt

Number of iterations used to stabilize the likelihood

thin

Interval used to thin the recorded likelihoods

trace

FALSE, or the period at which to show progress

adaptive

Should an adaptive process be used for SDProp?

adaptive.lag

Lag used to analyze the SDProp value in the adaptive context

adaptive.fun

Function used to change SDProp (a minimal sketch is given after this list of arguments)

intermediate

NULL, or the period at which to save intermediate results

filename

Name of file in which intermediate results are saved

previous

The content of the file in which intermediate results are saved
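
As a minimal sketch of a custom adaptive.fun, assuming (as the default shown in Usage suggests) that its argument is the recent acceptance rate and that its return value is a multiplicative factor applied to SDProp:

# Hypothetical adaptive function: widen the proposal when the acceptance rate
# is above the 0.234 target of Rosenthal (2011), shrink it otherwise
my_adaptive_fun <- function(x) {
  ifelse(x > 0.234, 1.1, 0.9)
}
# mcmc_adapt <- MHalgoGen(..., adaptive=TRUE, adaptive.lag=500,
#                         adaptive.fun=my_adaptive_fun)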

Value

An mcmcComposite object with all characteristics of the model and the MCMC run

Details

MHalgoGen runs a Monte-Carlo Markov-chain (MCMC) with the Metropolis-Hastings algorithm.
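
For orientation, the core Metropolis-Hastings update can be sketched as follows. This is a conceptual illustration only, not the internal code of MHalgoGen: it ignores the prior contribution and the Min/Max bounds, and metropolis_step, current and candidate are hypothetical names; minusLnL stands for a -ln(likelihood) function such as the sketch given in the Description.

# One Metropolis-Hastings iteration with a symmetric Gaussian proposal:
# the candidate is accepted with probability min(1, L(candidate)/L(current)),
# i.e. when log(runif(1)) < minusLnL(data, current) - minusLnL(data, candidate)
metropolis_step <- function(current, minusLnL, SDProp, data) {
  candidate <- rnorm(length(current), mean=current, sd=SDProp)
  names(candidate) <- names(current)
  if (log(runif(1)) < minusLnL(data, current) - minusLnL(data, candidate)) {
    candidate
  } else {
    current
  }
}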

See Also

Other mcmcComposite functions: as.mcmc.mcmcComposite, as.parameters, merge.mcmcComposite, plot.mcmcComposite, summary.mcmcComposite

Examples

# NOT RUN {
library(HelpersMG)
require(coda)
val <- rnorm(30, 10, 2)
dnormx <- function(data, x) {
 data <- unlist(data)
 return(-sum(dnorm(data, mean=x['mean'], sd=x['sd'], log=TRUE)))
}
parameters_mcmc <- data.frame(Density=c('dnorm', 'dlnorm'), 
Prior1=c(10, 0.5), Prior2=c(2, 0.5), SDProp=c(0.35, 0.2), 
Min=c(-3, 0), Max=c(100, 10), Init=c(10, 2), stringsAsFactors = FALSE, 
row.names=c('mean', 'sd'))
mcmc_run <- MHalgoGen(n.iter=50000, parameters=parameters_mcmc, data=val, 
likelihood=dnormx, n.chains=1, n.adapt=100, thin=1, trace=1)
plot(mcmc_run, xlim=c(0, 20))
plot(mcmc_run, xlim=c(0, 10), parameters="sd")
library(graphics)
library(fields)
# show a scatter plot of the result
x <- mcmc_run$resultMCMC[[1]][, 1]
y <- mcmc_run$resultMCMC[[1]][, 2]
marpre <- par(mar=c(4, 4, 2, 6)+0.4)
smoothScatter(x, y)
# show a scale
n <- matrix(0, ncol=128, nrow=128)
xrange <- range(x)
yrange <- range(y)
for (i in 1:length(x)) {
  posx <- 1+floor(127*(x[i]-xrange[1])/(xrange[2]-xrange[1]))
  posy <- 1+floor(127*(y[i]-yrange[1])/(yrange[2]-yrange[1]))
  n[posx, posy] <- n[posx, posy]+1
}
image.plot(legend.only=TRUE, zlim= c(0, max(n)), nlevel=128, 
 col=colorRampPalette(c("white", blues9))(128))
# Compare with a heatmap
x <- seq(from=8, to=12, by=0.2)
y <- seq(from=1, to=4, by=0.2)
df <- expand.grid(mean=x, sd=y)
df <- cbind(df, L=rep(0, nrow(df)))
for (i in 1:nrow(df)) df[i, "L"] <- -sum(dnorm(val, df[i, 1], df[i, 2], log = TRUE))
hm <- matrix(df[, "L"], nrow=length(x))
par(mar = marpre)
image.plot(x=x, y=y, z=hm, las=1)
# Diagnostic function from coda library
mcmcforcoda <- as.mcmc(mcmc_run)
heidel.diag(mcmcforcoda)
raftery.diag(mcmcforcoda)
autocorr.diag(mcmcforcoda)
acf(mcmcforcoda[[1]][,"mean"], lag.max=20, bty="n", las=1)
acf(mcmcforcoda[[1]][,"sd"], lag.max=20, bty="n", las=1)
batchSE(mcmcforcoda, batchSize=100)
# The batch standard error procedure is usually thought to be
# less accurate than the time-series methods used in summary
summary(mcmcforcoda)$statistics[,"Time-series SE"]
summary(mcmc_run)
as.parameters(mcmc_run)
lastp <- as.parameters(mcmc_run, index="last")
parameters_mcmc[,"Init"] <- lastp
# n.adapt is set to 1 so that the first set of parameters is not recorded;
# otherwise it would be duplicated (as it is also the last one stored in
# the object mcmc_run)
mcmc_run2 <- MHalgoGen(n.iter=1000, parameters=parameters_mcmc, data=val, 
likelihood=dnormx, n.chains=1, n.adapt=1, thin=1, trace=1)
mcmc_run3 <- merge(mcmc_run, mcmc_run2)
####### no adaptation phase: n.adapt must be 0
parameters_mcmc[,"Init"] <- c(mean(val), sd(val))
mcmc_run3 <- MHalgoGen(n.iter=1000, parameters=parameters_mcmc, data=val, 
likelihood=dnormx, n.chains=1, n.adapt=0, thin=1, trace=1)
# Here is how to use adaptive mcmc
mcmc_run <- MHalgoGen(n.iter=50000, parameters=parameters_mcmc, data=val, adaptive = FALSE, 
likelihood=dnormx, n.chains=1, n.adapt=100, thin=1, trace=1)
1-rejectionRate(as.mcmc(mcmc_run))
mcmc_run <- MHalgoGen(n.iter=50000, parameters=parameters_mcmc, data=val, adaptive = TRUE,  
likelihood=dnormx, n.chains=1, n.adapt=100, thin=1, trace=1)
1-rejectionRate(as.mcmc(mcmc_run))
# To see the dynamics:
var <- "mean"
par(mar=c(4, 4, 1, 1)+0.4)
plot(1:nrow(mcmc_run$resultMCMC[[1]]), mcmc_run$resultMCMC[[1]][, var], type="l", 
       xlab="Iterations", ylab=var, bty="n", las=1)
# }
