This function generates a sample from the posterior distribution of an ordered probit regression model using the data augmentation approach of Albert and Chib (1993), with cutpoints sampled according to Cowles (1996) or Albert and Chib (2001). The user supplies data and priors, and a sample from the posterior distribution is returned as an mcmc object, which can subsequently be analyzed with functions provided in the coda package.
MCMCoprobit(
formula,
data = parent.frame(),
burnin = 1000,
mcmc = 10000,
thin = 1,
tune = NA,
tdf = 1,
verbose = 0,
seed = NA,
beta.start = NA,
b0 = 0,
B0 = 0,
a0 = 0,
A0 = 0,
mcmc.method = c("Cowles", "AC"),
...
)
An mcmc object that contains the posterior sample. This object can be summarized by functions provided by the coda package.
Model formula.
Data frame.
The number of burn-in iterations for the sampler.
The number of MCMC iterations for the sampler.
The thinning interval used in the simulation. The number of Gibbs iterations must be divisible by this value.
The tuning parameter for the Metropolis-Hastings step. Default of NA corresponds to a choice of 0.05 divided by the number of categories in the response variable.
Degrees of freedom for the multivariate-t proposal distribution when mcmc.method is set to "IndMH". Must be positive.
A switch which determines whether or not the progress of the sampler is printed to the screen. If verbose is greater than 0, the iteration number, the beta vector, and the Metropolis-Hastings acceptance rate are printed to the screen every verboseth iteration.
The seed for the random number generator. If NA, the Mersenne Twister generator is used with default seed 12345; if an integer is passed, it is used to seed the Mersenne Twister. The user can also pass a list of length two to use the L'Ecuyer random number generator, which is suitable for parallel computation. The first element of the list is the L'Ecuyer seed, which is a vector of length six or NA (if NA, a default seed of rep(12345, 6) is used). The second element of the list is a positive substream number. See the MCMCpack specification for more details.
The starting value for the \(\beta\) vector. This can either be a scalar or a column vector with dimension equal to the number of betas. If this takes a scalar value, then that value will serve as the starting value for all of the betas. The default value of NA will use rescaled estimates from an ordered logit model.
The prior mean of \(\beta\). This can either be a scalar or a column vector with dimension equal to the number of betas. If this takes a scalar value, then that value will serve as the prior mean for all of the betas.
The prior precision of \(\beta\). This can either be a scalar or a square matrix with dimensions equal to the number of betas. If this takes a scalar value, then that value times an identity matrix serves as the prior precision of \(\beta\). Default value of 0 is equivalent to an improper uniform prior on \(\beta\).
The prior mean of \(\gamma\). This can either be a scalar or a column vector with dimension equal to the number of cutpoints. If this takes a scalar value, then that value will serve as the prior mean for all of the cutpoints.
The prior precision of \(\gamma\). This can either be a scalar or a square matrix with dimensions equal to the number of cutpoints. If this takes a scalar value, then that value times an identity matrix serves as the prior precision of \(\gamma\). Default value of 0 is equivalent to an improper uniform prior on \(\gamma\).
Can be set to either "Cowles" (default) or "AC" to perform posterior sampling of cutpoints based on Cowles (1996) or Albert and Chib (2001) respectively.
Further arguments to be passed.
MCMCoprobit simulates from the posterior distribution of an ordered probit regression model using data augmentation. The simulation proper is done in compiled C++ code to maximize efficiency. Please consult the coda documentation for a comprehensive list of functions that can be used to analyze the posterior sample.
The observed variable \(y_i\) is ordinal with a total of \(C\) categories, with distribution governed by a latent variable: $$z_i = x_i'\beta + \varepsilon_i$$ The errors are assumed to be from a standard Normal distribution. The probabilities of observing each outcome are governed by this latent variable and \(C-1\) estimable cutpoints, which are denoted \(\gamma_c\). The probability that individual \(i\) is in category \(c\) is computed by:
$$\pi_{ic} = \Phi(\gamma_c - x_i'\beta) - \Phi(\gamma_{c-1} - x_i'\beta)$$
These probabilities are used to form the multinomial distribution that defines the likelihoods.
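As a minimal sketch in base R, the category probabilities above can be computed for a single observation as differences of Normal CDFs evaluated at consecutive cutpoints. The coefficient, covariate, and cutpoint values below are purely illustrative, not MCMCoprobit defaults:

```r
# Illustrative values (hypothetical, not MCMCoprobit output)
beta  <- c(1.0, 0.1, -0.5)        # intercept and two slopes
x     <- c(1, 0.2, -0.3)          # covariate vector, including the intercept
gamma <- c(-Inf, 0, 1, 1.5, Inf)  # cutpoints bounding C = 4 categories

eta   <- sum(x * beta)            # linear predictor x_i' beta
# pi_ic = Phi(gamma_c - eta) - Phi(gamma_{c-1} - eta)
probs <- pnorm(gamma[-1] - eta) - pnorm(gamma[-length(gamma)] - eta)

stopifnot(isTRUE(all.equal(sum(probs), 1)))  # probabilities sum to one
```

Because the cutpoints are strictly increasing, every difference of CDFs is non-negative, so `probs` is a valid multinomial probability vector.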
MCMCoprobit provides two ways to sample the cutpoints. Cowles (1996) proposes a sampling scheme that groups the sampling of the latent variable with the cutpoints. In this case, for identification the first element \(\gamma_1\) is normalized to zero. Albert and Chib (2001) show that the cutpoints can be sampled indirectly, without order constraints, by transforming them into real-valued parameters (\(\alpha\)).
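One standard way to map ordered cutpoints to unconstrained real parameters is a log-difference transformation; the sketch below illustrates the idea only and is not necessarily the exact internal parameterization used by MCMCoprobit:

```r
# Hedged sketch: map strictly increasing finite cutpoints to unconstrained
# reals and back. The log-difference parameterization shown here is one
# common choice; MCMCoprobit's internal transformation may differ.
to_alpha <- function(gamma) {
  # First cutpoint kept as-is; remaining gaps mapped through log()
  c(gamma[1], log(diff(gamma)))
}
to_gamma <- function(alpha) {
  # exp() guarantees positive gaps, so the recovered cutpoints are ordered
  cumsum(c(alpha[1], exp(alpha[-1])))
}

g <- c(0, 1, 1.5)                       # illustrative cutpoints, gamma_1 = 0
a <- to_alpha(g)                        # unconstrained parameters
stopifnot(isTRUE(all.equal(to_gamma(a), g)))  # round trip recovers cutpoints
```

Because the \(\alpha\) parameters are unconstrained, a Metropolis-Hastings proposal on them never violates the ordering of the cutpoints.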
Albert, J. H. and S. Chib. 1993. "Bayesian Analysis of Binary and Polychotomous Response Data." Journal of the American Statistical Association. 88: 669-679.
Cowles, M. K. 1996. "Accelerating Monte Carlo Markov Chain Convergence for Cumulative-link Generalized Linear Models." Statistics and Computing. 6: 101-110.
Martin, Andrew D., Kevin M. Quinn, and Jong Hee Park. 2011. "MCMCpack: Markov Chain Monte Carlo in R." Journal of Statistical Software. 42(9): 1-21. doi:10.18637/jss.v042.i09.
Johnson, Valen E. and James H. Albert. 1999. Ordinal Data Modeling. New York: Springer.
Albert, James and Siddhartha Chib. 2001. "Sequential Ordinal Modeling with Applications to Survival Data." Biometrics. 57: 829-836.
Pemstein, Daniel, Kevin M. Quinn, and Andrew D. Martin. 2007. Scythe Statistical Library 1.0. http://scythe.wustl.edu.s3-website-us-east-1.amazonaws.com/.
Plummer, Martyn, Nicky Best, Kate Cowles, and Karen Vines. 2006. "Output Analysis and Diagnostics for MCMC (CODA)." R News. 6(1): 7-11. https://CRAN.R-project.org/doc/Rnews/Rnews_2006-1.pdf.
plot.mcmc, summary.mcmc
x1 <- rnorm(100)
x2 <- rnorm(100)
z  <- 1.0 + x1 * 0.1 - x2 * 0.5 + rnorm(100)
y  <- z
y[z < 0] <- 0
y[z >= 0 & z < 1] <- 1
y[z >= 1 & z < 1.5] <- 2
y[z >= 1.5] <- 3

out1 <- MCMCoprobit(y ~ x1 + x2, tune = 0.3)
out2 <- MCMCoprobit(y ~ x1 + x2, tune = 0.3, tdf = 3, verbose = 1000,
                    mcmc.method = "AC")
summary(out1)
summary(out2)
plot(out1)
plot(out2)