MSBVAR (version 0.9-2)

gibbs.msbvar: Gibbs sampler for a Markov-switching Bayesian reduced form vector autoregression model

Description

Draws a Bayesian posterior sample for a Markov-switching Bayesian reduced form vector autoregression model based on the setup from the msbvar function.

Usage

gibbs.msbvar(x, N1 = 1000, N2 = 1000, permute = TRUE, Beta.idx = NULL, Sigma.idx = NULL, Q.method="MH")

Arguments

x
MSBVAR setup and posterior mode estimate generated using the msbvar function.
N1
Number of burn-in iterations for the Gibbs sampler (should generally be at least 1000).
N2
Number of iterations in the posterior sample.
permute
Logical (default = TRUE). Should random permutation sampling be used to explore the $h!$ posterior modes?
Beta.idx
A two-element vector indicating the element of the MSBVAR coefficient matrix that is to be ordered for non-permutation sampling, i.e., the ordering of the states. The states will be put into ascending order of the parameter selected. The two elements provided index the two-dimensional array of the VAR coefficients: the first number gives the coefficient, the second the equation number. Coefficients are ordered by lag, then variable. So for an $m$-equation VAR where we want the AR(1) coefficient in the second variable's equation, use c(2,2). The intercept is the last value, or $mp+1$. So the intercept for the first equation in a 4-variable model with two lags is c(9,1) (see the short sketch after this argument list).
Sigma.idx
Scalar integer giving the equation variance that is to be ordered for non-permutation sampling, i.e., the ordering of the states. The states will be put into ascending order of the variance parameter selected. So if you want to identify the results based on equation three, set Sigma.idx=3.
Q.method
Choice of the sampler step for the transition matrix, Q. The default, "MH", uses a Metropolis-Hastings step that assumes a stationary Markov process. The other option, "Gibbs", uses a Gibbs-sampler Dirichlet draw for a non-stationary Markov-switching process. See Fruhwirth-Schnatter (2006: 318, 340--341) for details.
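As a small worked illustration of the Beta.idx indexing rule described above (plain arithmetic, not tied to any fitted model):

# Worked illustration of the Beta.idx indexing rule: within each equation
# the VAR coefficients come first (ordered by lag, then variable) and the
# intercept is last, at position m*p + 1.
m <- 4; p <- 2                               # 4-variable model with two lags
intercept.pos <- m * p + 1                   # 9
Beta.idx.intercept <- c(intercept.pos, 1)    # c(9, 1): intercept of equation 1
Beta.idx.ar1 <- c(2, 2)                      # lag-1 coefficient of variable 2, equation 2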

Value

A list summarizing the reduced form MSBVAR posterior:
Beta.sample
$N2 x h(m^2 p + m)$ matrix of the posterior draws of the BVAR regression coefficients for each regime. The ordering is based on regime, equation, intercept (and, in the future, covariates). So the first $mp$ coefficients are for the first equation in the first regime, ordered by lag, then variable; the next is that equation's intercept. This pattern repeats for the remaining coefficients across the equations and regimes.
Sigma.sample
$N2 x 0.5 h m(m+1)$ matrix of the covariance parameters for the error covariances $\Sigma(h)$. Since these matrices are symmetric and positive definite, only the upper (or lower) portion is stored. The elements in each row are the first, second, etc. columns / rows of the lower / upper triangle of each regime's matrix (see the unpacking sketch after this list).
Q.sample
$N2 x h^2$ matrix of the posterior draws of the Markov transition matrix $Q$; each row is one vectorized $h x h$ draw.
transition.sample
An array of N2 $h x h$ transition matrices.
ss.sample
List of class SS for the N2 estimates of the state-space matrices coded as bit objects for compression / efficiency.
pfit
A list of the posterior fit statistics for the MSBVAR model.
init.model
Initial model -- a varobj from a BVAR like szbvar that sets up the data and priors. See szbvar for a description.
alpha.prior
Prior for the state-space transitions Q. This is set in the call to msbvar and inherited here.
h
Integer; the number of regimes fit in the model.
p
Integer; the lag length.
m
Integer; the number of equations.
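The stored layouts can be unpacked with base R. The sketch below is schematic: it assumes a gibbs.msbvar() result (here called x2, as in the Examples below) and treats the triangle and vectorization orderings exactly as described in this section; verify those assumptions against your own output.

# Schematic unpacking of the posterior sample objects (x2 is a
# gibbs.msbvar() result; the orderings documented above are assumed).
h <- x2$h; m <- x2$m; p <- x2$p

# One draw of the transition matrix from its h^2-length storage
Q.draw <- matrix(x2$Q.sample[1, ], h, h)

# One draw of the regime-1 error covariance from its m(m+1)/2 unique
# elements, assuming column-wise lower-triangle storage
k <- m * (m + 1) / 2
S <- matrix(0, m, m)
S[lower.tri(S, diag = TRUE)] <- x2$Sigma.sample[1, 1:k]
Sigma1 <- S + t(S) - diag(diag(S))

# Regime-1 regression coefficients for one draw: (m*p + 1) rows per
# equation, one column per equation
b.len <- m^2 * p + m
Beta1 <- matrix(x2$Beta.sample[1, 1:b.len], ncol = m)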

Details

This function implements a Gibbs sampler for the posterior of an MSBVAR model set up with msbvar. This is a reduced form MSBVAR model. The estimation is done in a mixture of native R code and Fortran. The sampling of the BVAR coefficients, the transition matrix, and the error covariances for each regime is done in native R code. The forward-filtering-backward-sampling of the Markov-switching process (the most computationally intensive part of the estimation) is handled in compiled Fortran code. As such, this model is reasonably fast for small samples / small numbers of regimes (say, fewer than 5000 observations and 2-4 regimes). The reason for this mixed implementation is that it makes it easier to set up variants of the model (e.g., some coefficients switching and others not, different sampling methods, etc.). Details will come in future versions of the package.

The random permutation of the states is done using a multinomial step: at each draw of the Gibbs sampler, the states are permuted using a multinomial draw. This generates a posterior sample in which the states are unidentified. This makes sense, since the user may have little idea of how to select among the $h!$ posterior modes of the reduced form MSBVAR model (see, e.g., Fruhwirth-Schnatter (2006)). Once a posterior sample has been drawn with random permutation, a clustering algorithm (see plotregimeid) can be used to identify the states, for example by examining the intercepts or covariances across the regimes (see the example below for details).
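As a minimal sketch of the relabeling involved (illustrative only, not the package's internal code):

# Minimal sketch of random permutation of the regime labels: draw a
# permutation uniformly over the h! labelings and relabel everything
# indexed by regime.
h <- 2
Q  <- matrix(c(0.9, 0.2, 0.1, 0.8), h, h)   # toy transition matrix
mu <- c(0.5, 2.0)                           # toy regime-specific intercepts

perm <- sample(1:h)
Q.perm  <- Q[perm, perm]
mu.perm <- mu[perm]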

Only one of Beta.idx or Sigma.idx is used. If Beta.idx is given, Sigma.idx is ignored, so variance ordering for identification can only be used when Beta.idx=NULL. See plotregimeid for plotting and summary methods for the permuted sampler.
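For example, to identify the regimes by the error variance of the third equation rather than by a regression coefficient (assuming xm is an msbvar() setup object, as in the Examples below, for a model with at least three equations):

# Identified (non-permuted) sampling ordered on the variance of equation 3;
# Beta.idx is left NULL so that Sigma.idx is used.
x3 <- gibbs.msbvar(xm, N1 = 1000, N2 = 2000, permute = FALSE, Sigma.idx = 3)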

The Gibbs sampler proceeds in six steps (a self-contained toy sketch of the same logic follows this list of steps):

Drawing the state-space for the Markov process
This step uses compiled code to draw the 0-1 matrix of the regimes. It uses the Baum-Hamilton-Lee-Kim (BHLK) filter and smoother to estimate the regime probabilities. Draws are based on the standard forward-filter-backward-sample algorithm.

Drawing the Markov transition matrix $Q$
Conditional on the other parameters, this takes a draw from a Dirichlet posterior with the alpha.prior prior.

Regression step update
Conditional on the state-space and the Markov-switching process data augmentation steps, estimate a set of $h$ regressions, one for each regime.

Draw the error covariances, $\Sigma(h)$
Conditional on the other steps, compute and draw the error covariances from an inverse Wishart pdf.

Draw the regression coefficients
Conditional on the classification of the observations from the previous steps, draw the BVAR regression coefficients for each regime from their multivariate normal posterior.

Permute the states
If permute = TRUE, then permute the states and the respective coefficients.
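The same six-step logic can be seen in a self-contained toy example: the sketch below implements a univariate two-regime switching mean/variance model entirely in plain R. It is an illustration of the algorithm only, not the MSBVAR implementation (which handles full VAR regressions, uses the szbvar prior setup, and runs the FFBS step in compiled Fortran); the priors and starting values here are arbitrary.

# Toy two-regime Markov-switching mean/variance model, sampled with the
# same six steps described above (illustration only).
set.seed(1)
h <- 2; TT <- 300
s.true <- rep(c(1, 2), each = TT / 2)
y <- rnorm(TT, mean = c(0, 2)[s.true], sd = c(1, 0.5)[s.true])

alpha.prior <- matrix(c(8, 2, 2, 8), h, h)  # Dirichlet prior on the rows of Q
mu <- c(-1, 1); sig2 <- c(1, 1)             # starting values
Q <- matrix(0.5, h, h)
N1 <- 200; N2 <- 500
draws <- matrix(NA, N2, 2 * h + h^2)

rdirich <- function(a) { g <- rgamma(length(a), a); g / sum(g) }

for (it in 1:(N1 + N2)) {

  ## Step 1: forward-filter, backward-sample the regime indicators
  filt <- matrix(0, TT, h)
  for (t in 1:TT) {
    pred <- if (t == 1) rep(1 / h, h) else as.vector(t(Q) %*% filt[t - 1, ])
    lik  <- dnorm(y[t], mu, sqrt(sig2))
    filt[t, ] <- pred * lik / sum(pred * lik)
  }
  s <- integer(TT)
  s[TT] <- sample(1:h, 1, prob = filt[TT, ])
  for (t in (TT - 1):1)
    s[t] <- sample(1:h, 1, prob = filt[t, ] * Q[, s[t + 1]])

  ## Step 2: Dirichlet draw for each row of Q (transition counts + prior)
  counts <- matrix(0, h, h)
  for (t in 2:TT) counts[s[t - 1], s[t]] <- counts[s[t - 1], s[t]] + 1
  for (i in 1:h) Q[i, ] <- rdirich(counts[i, ] + alpha.prior[i, ])

  ## Steps 3-5: regime-by-regime "regression" step -- here just a switching
  ## mean and variance drawn from conjugate-style conditionals
  for (j in 1:h) {
    yj <- y[s == j]; nj <- length(yj)
    if (nj == 0) next                       # keep previous values if empty
    sig2[j] <- 1 / rgamma(1, (nj + 2) / 2, (sum((yj - mu[j])^2) + 2) / 2)
    mu[j]   <- rnorm(1, mean(yj), sqrt(sig2[j] / nj))
  }

  ## Step 6: random permutation of the regime labels (the permute = TRUE case)
  perm <- sample(1:h)
  mu <- mu[perm]; sig2 <- sig2[perm]; Q <- Q[perm, perm]

  if (it > N1) draws[it - N1, ] <- c(mu, sig2, as.vector(Q))
}
# draws now holds a label-unidentified posterior sample, as with permute = TRUE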

The state-space for the MS process is a $T x h$ matrix of zeros and ones. Since this matrix classifies the observations into states for each of the N2 posterior draws, it does not make sense to store it in double precision. We use the bit package to compress this matrix into a 2-bit integer representation for more efficient storage. Functions are provided (see below) for summarizing and plotting the resulting state-space of the MS process.
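As a small illustration of the storage idea (using the bit package directly, independent of the SS class internals):

# Toy illustration: a 0-1 regime indicator stored as a bit object
library(bit)
ss.col <- runif(1000) > 0.5          # toy logical regime indicator
b <- as.bit(ss.col)                  # compact bit-level storage
object.size(ss.col); object.size(b)  # the bit version is much smaller
identical(as.logical(b), ss.col)     # TRUE: the coding is fully recoverable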

References

Brandt, Patrick T. 2009. "Empirical, Regime-Specific Models of International, Inter-group Conflict, and Politics."

Fruhwirth-Schnatter, Sylvia. 2001. "Markov Chain Monte Carlo Estimation of Classical and Dynamic Switching and Mixture Models." Journal of the American Statistical Association 96(453):194--209.

Fruhwirth-Schnatter, Sylvia. 2006. Finite Mixture and Markov Switching Models. Springer Series in Statistics. New York: Springer.

Sims, Christopher A., Daniel F. Waggoner, and Tao Zha. 2008. "Methods for inference in large multiple-equation Markov-switching models." Journal of Econometrics 146(2):255--274.

Krolzig, Hans-Martin. 1997. Markov-Switching Vector Autoregressions: Modeling, Statistical Inference, and Application to Business Cycle Analysis. Berlin: Springer.

See Also

msbvar for initial mode finding, plot.SS for plotting regime probabilities, mean.SS for computing the mean regime probabilities, plotregimeid for identifying the regimes from a permuted sample.

Examples

## Not run: 
# This example can be pasted into a script or copied into R to run.  It
# takes a few minutes, but illustrates how the code can be used

data(IsraelPalestineConflict)

# Find the mode of an msbvar model
# Initial guess is based on random draw, so set seed.
set.seed(123)

xm <- msbvar(IsraelPalestineConflict, p=3, h=2,
             lambda0=0.8, lambda1=0.15,
             lambda3=1, lambda4=1, lambda5=0, mu5=0,
             mu6=0, qm=12,
             alpha.prior=matrix(c(10,5,5,9), 2, 2))

# Plot out the initial mode
plot(ts(xm$fp))
print(xm$Q)

# Now sample the posterior
N1 <- 1000
N2 <- 2000

# First, do this with random permutation sampling
x1 <- gibbs.msbvar(xm, N1=N1, N2=N2, permute=TRUE)

# Identify the regimes using clustering in plotregimeid()
plotregimeid(x1, type="all")

# Now re-estimate based on the desired regime identification seen in the
# plots.  Here we are using the intercept of the first equation, so
# Beta.idx=c(7,1).

x2 <- gibbs.msbvar(xm, N1=N1, N2=N2, permute=FALSE, Beta.idx=c(7,1))

# Plot regimes
plot.SS(x2)

# Summary of transition matrix
summary(x2$Q.sample)

# Plot of the variance elements
plot(x2$Sigma.sample)

## End(Not run)
