
label.switching (version 1.8)

ecr.iterative.2: ECR algorithm (iterative version 2)

Description

This function applies the second iterative version of the Equivalence Classes Representatives (ECR) algorithm (Papastamoulis and Iliopoulos, 2010; Rodriguez and Walker, 2014). The set of all allocation variables is partitioned into equivalence classes and exactly one representative is chosen from each class. In this version the \(m\times n \times K\) array of allocation probabilities should also be given as input.

Usage

ecr.iterative.2(z, K, p, threshold, maxiter)

Arguments

z

\(m\times n\) integer array of the latent allocation vectors generated from an MCMC algorithm.

K

the number of mixture components (at least equal to 2).

p

\(m\times n \times K\) dimensional array of allocation probabilities of the \(n\) observations among the \(K\) mixture components, for each iteration \(t = 1,\ldots,m\) of the MCMC algorithm.

threshold

An (optional) positive number controlling the convergence criterion. Default value: 1e-6.

maxiter

An (optional) integer controlling the max number of iterations. Default value: 100.
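Before calling the function, it can help to check that the inputs match the dimensions described above. A minimal sketch (the sizes m, n, K and the dummy arrays below are illustrative placeholders, not data from the package):

```r
# Sketch: inputs with the shapes ecr.iterative.2 expects.
m <- 1000  # number of MCMC draws
n <- 5     # number of observations
K <- 2     # number of mixture components
# z: m x n integer matrix of simulated allocations in 1..K
z <- matrix(sample(1:K, m * n, replace = TRUE), nrow = m, ncol = n)
# p: m x n x K array of allocation probabilities (uniform dummy values)
p <- array(1 / K, dim = c(m, n, K))
stopifnot(all(dim(z) == c(m, n)), all(dim(p) == c(m, n, K)))
```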

Value

permutations

\(m\times K\) dimensional array of permutations.

iterations

integer denoting the number of iterations until convergence.

status

the exit status of the algorithm.

Details

For a given MCMC iteration \(t=1,\ldots,m\), let \(w_k^{(t)}\) and \(\theta_k^{(t)}\), \(k=1,\ldots,K\), denote the simulated mixture weights and component-specific parameters, respectively. Then, the \((t,i,k)\) element of p corresponds to the conditional probability that observation \(i=1,\ldots,n\) belongs to component \(k\), which is proportional to \(w_k^{(t)} f(x_i|\theta_k^{(t)})\), \(k=1,\ldots,K\), where \(f(x_i|\theta_k)\) denotes the density of component \(k\). This means that: $$p_{tik} = \frac{w_k^{(t)} f(x_i|\theta_k^{(t)})}{w_1^{(t)} f(x_i|\theta_1^{(t)})+\ldots + w_K^{(t)} f(x_i|\theta_K^{(t)})}.$$ In the case of hidden Markov models, the probabilities \(w_k\) should be replaced with the proper left (normalized) eigenvector of the state-transition matrix.
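As a concrete illustration of the formula above, the array p can be filled for a univariate normal mixture as follows. The draws w, mu and s below are toy placeholders generated for this sketch, not the output of any actual sampler:

```r
# Sketch: building the m x n x K array of allocation probabilities
# p[t, i, k] proportional to w[t, k] * dnorm(x[i], mu[t, k], s[t, k]),
# normalized over k, using toy draws in place of real MCMC output.
set.seed(1)
m <- 100; n <- 5; K <- 2
x  <- c(rnorm(3, 0, 1), rnorm(2, 4, 1))          # n observations
w  <- matrix(0.5, m, K)                          # toy simulated weights
mu <- matrix(rnorm(m * K, mean = c(0, 4)), m, K, byrow = TRUE)  # toy means
s  <- matrix(1, m, K)                            # toy standard deviations
p  <- array(dim = c(m, n, K))
for (t in 1:m) {
  for (i in 1:n) {
    num <- w[t, ] * dnorm(x[i], mu[t, ], s[t, ]) # w_k f(x_i | theta_k)
    p[t, i, ] <- num / sum(num)                  # normalize over k
  }
}
```

Each slice p[t, i, ] is a probability vector, as required by the formula.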

References

Papastamoulis P. and Iliopoulos G. (2010). An artificial allocations based solution to the label switching problem in Bayesian analysis of mixtures of distributions. Journal of Computational and Graphical Statistics, 19: 313-331.

Rodriguez C.E. and Walker S. (2014). Label switching in Bayesian mixture models: deterministic relabeling strategies. Journal of Computational and Graphical Statistics, 23(1): 25-45.

See Also

permute.mcmc, label.switching, ecr, ecr.iterative.1, stephens

Examples

# load a toy example: MCMC output consists of the random beta model
# applied to a normal mixture of K=2 components. The number of
# observations is equal to n=5. The number of MCMC samples is
# equal to m=1000. The simulated allocations are stored to
# array z. The array of allocation probabilities is stored to
# array p.
data("mcmc_output")
z<-data_list$"z"
K<-data_list$"K"
p<-data_list$"p"
# MCMC parameters are stored to array mcmc.pars
mcmc.pars<-data_list$"mcmc.pars"
# mcmc.pars[,,1]: simulated means of the two components
# mcmc.pars[,,2]: simulated variances 
# mcmc.pars[,,3]: simulated weights
# the relabelling algorithm will run with the default values of
# threshold and maxiter
run<-ecr.iterative.2(z = z, K = 2, p = p)
# apply the returned permutations to the MCMC output:
reordered.mcmc<-permute.mcmc(mcmc.pars,run$permutations)
# reordered.mcmc[,,1]: reordered means of the two mixture components
# reordered.mcmc[,,2]: reordered variances of the two components
# reordered.mcmc[,,3]: reordered weights of the two components
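To see what the returned permutations represent, here is a minimal standalone illustration; the values are made up, and permute.mcmc performs this kind of reordering across all iterations and parameters:

```r
# Illustrative only: one row of run$permutations reorders the parameters
# simulated at that MCMC iteration. Suppose iteration t produced component
# means (0, 4) and the algorithm returned the permutation (2, 1):
perm  <- c(2, 1)
means <- c(0, 4)
relabelled <- means[perm]  # component 1 receives the draw labelled 2, and vice versa
relabelled
```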
