Fitting bivariate and univariate angular mixture models

Usage
fit_angmix(
model = "vmsin",
data,
ncomp,
cov.restrict = "NONE",
unimodal.component = FALSE,
start_par = NULL,
rand_start = rep(FALSE, n.chains),
method = "hmc",
perm_sampling = FALSE,
n.chains = 3,
chains_parallel = TRUE,
return_llik_contri = FALSE,
int.displ = 3,
epsilon = 0.1,
L = 10,
epsilon.random = TRUE,
L.random = FALSE,
burnin.prop = 0.5,
tune.prop = 1,
thin = 1,
propscale = 0.05,
n.iter = 500,
pmix.alpha = NULL,
norm.var = 1000,
autotune = TRUE,
show.progress = TRUE,
accpt.prob.upper,
accpt.prob.lower,
epsilon.incr = 0.05,
L.incr = 0.075,
tune.incr = 0.05,
tune_ave_size = 100,
kappa_upper = 150,
kappa_lower = 1e-04,
return_tune_param = FALSE,
qrnd = NULL,
n_qrnd = NULL,
...
)
Arguments

model: angular model whose mixtures are to be fitted. Available choices are "vmsin", "vmcos" and "wnorm2" for bivariate data, and "vm" and "wnorm" for univariate data.
data: data matrix (if bivariate, in which case it must have two columns) or vector. If the values lie outside \([0, 2\pi)\), they are transformed into that scale. *Note:* BAMBI cannot handle missing data. Missing values must either be removed or properly imputed.
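As a conceptual illustration, the transformation of out-of-range values amounts to reduction modulo \(2\pi\); a minimal sketch (not BAMBI's internal code):

# minimal sketch: wrap arbitrary angles (in radians) into [0, 2*pi)
wrap_angle <- function(x) x %% (2 * pi)
wrap_angle(c(-pi / 2, 3 * pi, 7))
#> 4.712389 3.141593 0.716815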
ncomp: number of components in the mixture model. Must be a positive integer; vector values are not allowed. If ncomp == 1, a single component model is fitted.
cov.restrict: should there be any restriction on the covariance parameter for a bivariate model? Available choices are "POSITIVE", "NEGATIVE", "ZERO" and "NONE". Note that "ZERO" fits a mixture with product components. Defaults to "NONE".
unimodal.component: logical. Should each component in the mixture model be unimodal? Only used if model is either "vmsin" or "vmcos". Defaults to FALSE.
start_par: list with elements pmix (ignored if ncomp == 1), together with kappa1, kappa2, kappa3, mu1 and mu2 for bivariate models, and kappa and mu for univariate models, all being vectors of length same as ncomp. These provide the starting values for the Markov chain, with the \(j\)-th component of each vector corresponding to the \(j\)-th component of the mixture distribution. If missing, the data is first clustered into ncomp groups, either via k-means (after projecting onto a unit sphere) or randomly, depending on rand_start, and then moment estimators for the components are used as the starting points. Note that a poor starting point can potentially lead the chain to get stuck at a wrong solution for thousands of iterations. As such, we recommend using the default option, which is k-means followed by moment estimation.
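For illustration, a hand-constructed start_par for a two-component bivariate (vmsin) fit might look as follows; the numeric values are arbitrary placeholders, not recommendations:

# hypothetical starting values for a 2-component vmsin mixture;
# each element is a vector with one entry per mixture component
start <- list(
  pmix = c(0.5, 0.5),
  kappa1 = c(2, 2), kappa2 = c(2, 2), kappa3 = c(0, 0),
  mu1 = c(1, 4), mu2 = c(1, 4)
)
# fit.manual <- fit_angmix("vmsin", tim8, ncomp = 2, start_par = start)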
rand_start: logical. Should a random starting clustering be used? Must be either a scalar, or a vector of length n.chains, one for each chain. Ignored if start_par is supplied. See start_par for more details. Defaults to FALSE.
method: MCMC strategy to be used for the model parameters: "hmc" or "rwmh".
perm_sampling: logical. Should the permutation sampling algorithm of Fruhwirth-Schnatter (2001) be used? If TRUE, at every iteration after burn-in, once model parameters and mixing proportions are sampled, a random permutation of 1, ..., ncomp is considered, and the components are relabelled according to this random permutation. These forced random label switchings may improve the mixing rate of the chain. However, (automated) tuning is very difficult with such a scheme, as there is no simple way of keeping track of the "original" component labels. This creates problems when computing standard deviations of the generated model parameters, thus making the scaling step used in tuning epsilon or propscale problematic as well. As such, perm_sampling is always kept off during burn-in (even if autotune = FALSE), and is turned on thereafter if TRUE. Defaults to FALSE.
n.chains: number of chains to run. Must be a positive integer.
chains_parallel: logical. Should the chains be run in parallel? Defaults to TRUE, and ignored if n.chains == 1. Note that parallelization is implemented via future_lapply from the package future.apply, which uses futures and thus provides a convenient way of parallelizing across various OSs and computing environments. However, a proper plan must be set for the parallelization before running the chains; otherwise the chains will run sequentially.
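For example, a multisession plan (a safe choice across OSs, as also noted in the Examples below) can be set before calling fit_angmix:

library(future)
plan(multisession)  # run chains in parallel background R sessions
# ... call fit_angmix() here; chains run in parallel ...
plan(sequential)    # optionally revert to sequential execution afterwards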
return_llik_contri: logical. Should the log likelihood contribution of each data point for each MCMC iteration in each chain be returned? This makes computation of waic.angmcmc and loo.angmcmc much faster. *Warning:* depending on the length of data and n.iter, this can be very memory intensive. We suggest setting return_llik_contri = TRUE only if waic.angmcmc or loo.angmcmc are aimed for. Defaults to FALSE.
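A minimal usage sketch, assuming the generics waic and loo (from the loo package, which waic.angmcmc and loo.angmcmc implement) are available:

fit <- fit_angmix("vmsin", tim8, ncomp = 3, n.iter = 500,
                  return_llik_contri = TRUE)
waic(fit)  # fast, since per-point log likelihood contributions were stored
loo(fit)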
int.displ: absolute integer displacement for each coordinate for wnorm and wnorm2 models (ignored otherwise). Default is 3. Allowed minimum and maximum are 1 and 5 respectively.
epsilon, L: tuning parameters for HMC; ignored if method = "rwmh". epsilon (step-size) is a single number, or a vector of size 2*ncomp for univariate models and 5*ncomp for bivariate models. Note that the "mass matrix" in HMC is assumed to be the identity. As such, the epsilon's corresponding to different model parameters need to be in proper scale for an optimal acceptance rate. Can be autotuned during burn-in; see autotune. L (leapfrog steps) is a positive integer or a vector of positive integers of length n.chains. If multiple chains are used, we suggest the same L values across chains to make the chains as homogeneous as possible.
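For instance, a bivariate 3-component fit could be supplied a length-15 (5*ncomp) epsilon vector and a common L across chains; the values below are arbitrary illustrations, not recommendations:

# hypothetical HMC tuning inputs for a bivariate model with ncomp = 3:
# 5 parameters (kappa1, kappa2, kappa3, mu1, mu2) x 3 components = 15 step-sizes
# fit <- fit_angmix("vmsin", tim8, ncomp = 3,
#                   epsilon = rep(0.1, 15),  # one step-size per parameter
#                   L = rep(10, 3))          # same leapfrog length in all 3 chains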
epsilon.random: logical. Should epsilon*delta, where delta is a random number between (1-epsilon.incr, 1+epsilon.incr), be used instead of epsilon at each iteration? Ignored if method = "rwmh". Defaults to TRUE.
L.random: logical. Should a random integer between L.orig/exp(L.incr) and L.orig*exp(L.incr) be used instead of L at each iteration? Ignored if method = "rwmh". Defaults to FALSE.
burnin.prop: proportion of iterations to be used for burn-in. Must be a number in [0, 1]. Default is 0.5.
tune.prop: proportion of *burnin* used to tune the parameters (epsilon in HMC and propscale in RWMH). Must be a number between 0 and 1; defaults to 1. Ignored if autotune == FALSE.
thin: thinning size to be used. Must be a positive integer. If thin = n, then every nth iteration is retained in the final MCMC sample.
propscale: tuning parameters for RWMH; a vector of size 5 (for bivariate models) or 2 (for univariate models) representing the variances for the proposal normal densities for the model parameters. Ignored if method = "hmc". Can be autotuned during burn-in; see autotune.
n.iter: number of iterations for the Markov Chain.
pmix.alpha: concentration parameter(s) for the Dirichlet prior for pmix. Must either be a positive real number, or a vector with positive entries and of length ncomp. The default is \((r+r(r+1)/2)/2+3\), where \(r\) is 1 or 2 according as whether the model is univariate or bivariate. Note that it is recommended to use larger alpha values to ensure good posterior behavior, especially when fit_incremental_angmix is used for model selection, which handles overfitting in a "let two component-specific parameters be identical" way, and thus penalizes for model complexity. See Fruhwirth-Schnatter (2011) for more details on this.
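For concreteness, the default formula evaluates to 4 for univariate models (\(r = 1\)) and 5.5 for bivariate models (\(r = 2\)):

r <- c(univariate = 1, bivariate = 2)
(r + r * (r + 1) / 2) / 2 + 3
#> univariate  bivariate
#>        4.0        5.5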
norm.var: variance (hyper-) parameters in the normal priors for log(kappa), log(kappa1), log(kappa2) and kappa3. (Prior mean is zero.) Can be a vector. Default is 1000, which makes the prior non-informative.
autotune: logical. Should the Markov chain auto-tune the parameter epsilon (in HMC) or propscale (in RWMH) during burn-in? Set to TRUE by default. An adaptive tuning strategy is implemented: at every 10th iteration during burn-in, the acceptance ratio over the last tune_ave_size iterations is calculated. The tuning parameter is then decreased by a factor of 1-tune.incr if the calculated acceptance rate falls below accpt.prob.lower, or increased by a factor of 1+tune.incr if it rises above accpt.prob.upper. In addition, when the iteration number is a multiple of tune_ave_size, epsilon for each model parameter is rescaled via the standard deviation of the corresponding parameter over the past tune_ave_size iterations.
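A conceptual R sketch of the acceptance-rate-based adjustment described above (not BAMBI's internal implementation; iter, accepted, tune_param and the tuning constants are placeholders):

# conceptual sketch of one tuning update during burn-in
if (iter %% 10 == 0) {
  accpt_rate <- mean(accepted[(iter - tune_ave_size + 1):iter])
  if (accpt_rate < accpt.prob.lower) {
    tune_param <- tune_param * (1 - tune.incr)  # too many rejections: smaller moves
  } else if (accpt_rate > accpt.prob.upper) {
    tune_param <- tune_param * (1 + tune.incr)  # too many acceptances: larger moves
  }
}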
show.progress: logical. Should a progress bar be displayed?
accpt.prob.lower, accpt.prob.upper: lower and upper limits of the acceptance ratio to be maintained while tuning during burn-in. Must be numbers between 0 and 1, with accpt.prob.lower < accpt.prob.upper. See autotune. Defaults to (0.6, 0.9) for HMC and (0.3, 0.5) for RWMH. Ignored if autotune = FALSE.
epsilon.incr: amount of randomness incorporated in epsilon if epsilon.random = TRUE.
L.incr: amount of randomness incorporated in L if L.random = TRUE.
tune.incr: how much should the tuning parameter be increased or decreased at each step while tuning during burn-in? Must be a number between 0 and 1. See autotune. Defaults to 0.05. Ignored if autotune = FALSE.
tune_ave_size: number of previous iterations used to compute the acceptance rate while tuning during burn-in. Must be a positive integer. Defaults to 100.
kappa_upper, kappa_lower: upper and lower bounds for the concentration and (absolute) association parameters. Must be positive numbers. Defaults are 150 and 1e-4; parameter values above or below these limits rarely make sense in practice. Warning: values much larger or smaller than the defaults are not recommended, as they can cause numerical instability.
return_tune_param: logical. Should the values of the tuning parameters used at each iteration in each chain be returned? Defaults to FALSE.
qrnd, n_qrnd: used only if model = "vmcos". See dvmcos for details.
...: unused.
References

Fruhwirth-Schnatter, S. (2011). Label switching under model uncertainty. Mixtures: Estimation and Applications, 213-239.

Fruhwirth-Schnatter, S. (2001). Markov chain Monte Carlo estimation of classical and dynamic switching and mixture models. Journal of the American Statistical Association, 96(453), 194-209.
Examples

# illustration only - more iterations needed for convergence
fit.vmsin.20 <- fit_angmix("vmsin", tim8,
  ncomp = 3, n.iter = 20,
  n.chains = 1
)
fit.vmsin.20
# Parallelization is implemented via future_lapply from the
# package future.apply. To parallelize, first provide a parallel
# plan(); otherwise the chains will run sequentially.
# Note that not every plan() may work on every OS, as they execute
# functions defined internally in fit_angmix. We suggest
# plan(multisession), which works on every OS.
# \donttest{
library(future)
library(parallel)
# plan(multisession, gc = TRUE) # parallelize chains
set.seed(1)
MC.fit <- fit_angmix("vmsin", tim8,
  ncomp = 3, n.iter = 5000,
  n.chains = 3
)
pointest(MC.fit)
MC.fix <- fix_label(MC.fit)
contour(MC.fit)
contour(MC.fix)
lpdtrace(MC.fit)
# }