Description

Fits, using the expectation-conditional maximization (ECM) algorithm, parsimonious mixtures of multivariate contaminated normal distributions (with eigen-decomposed scale matrices) to the given data, within either a clustering paradigm (the default) or a classification paradigm. Fitting can be run in parallel. Likelihood-based model selection criteria are used to select the parsimonious model and the number of groups.
Usage

CNmixt(X, G, contamination = NULL, model = NULL,
  initialization = "mixt", alphafix = NULL, alphamin = 0.5,
  seed = NULL, start.z = NULL, start.v = NULL, start = 0,
  label = NULL, AICcond = FALSE, iter.max = 1000,
  threshold = 1.0e-10, parallel = FALSE, eps = 1e-100,
  verbose = TRUE)

CNmixtCV(X, G, contamination = NULL, model = NULL,
  initialization = "mixt", k = 10, alphafix = NULL,
  alphamin = 0.5, seed = NULL, start.z = NULL, start.v = NULL,
  start = 0, label = NULL, iter.max = 1000, threshold = 1.0e-10,
  parallel = FALSE, eps = 1e-100, verbose = TRUE)
Value

CNmixt returns an object of class ContaminatedMixt.

CNmixtCV returns a list with the cross-validated error rate estimated for each model.
Arguments

X: a matrix with dim = c(n, p), such that the \(n\) rows correspond to observations and the \(p\) columns correspond to variables.
G: a vector containing the numbers of groups to be tried.
contamination: an optional boolean indicating whether the fitted model(s) should be contaminated. If NULL (default), both contaminated and uncontaminated models are fitted.
model: a vector indicating the model(s) to be fitted. In the multivariate case (\(p > 1\)), possible values are "EII", "VII", "EEI", "VEI", "EVI", "VVI", "EEE", "VEE", "EVE", "EEV", "VVE", "VEV", "EVV", and "VVV". If NULL, all 14 models are fitted. In the univariate case (\(p = 1\)), possible values are "E" and "V".
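For instance, a minimal univariate sketch (the data below are simulated purely for illustration):

# Univariate data from two well-separated groups
set.seed(1)
x <- matrix(c(rnorm(100, mean = 0), rnorm(100, mean = 5)), ncol = 1)
resu <- CNmixt(x, G = 2, model = c("E", "V"))  # fit both univariate models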
initialization: initialization strategy for the ECM algorithm. It can be:

- "mixt" (default): the initial (\(n \times G\)) matrix of posterior probabilities of group membership arises from a preliminary run of mixtures of multivariate normal distributions, as fitted by the gpcm() function of the mixture package (see mixture::gpcm for details);
- "kmeans": the initial (\(n \times G\)) hard classification matrix arises from a preliminary run of the \(k\)-means algorithm;
- "random.post": the initial (\(n \times G\)) matrix of posterior probabilities of group membership is randomly generated;
- "random.clas": the initial (\(n \times G\)) classification matrix is randomly generated;
- "manual": the user must specify either the initial (\(n \times G\)) classification matrix or the initial (\(n \times G\)) matrix of posterior probabilities of group membership, via the argument start.z, and, optionally, the initial (\(n \times G\)) matrix of posterior probabilities of being a good observation in each group, via the argument start.v (see the sketch after this list).
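As a minimal sketch of the "manual" strategy (X is assumed to be an \(n \times p\) data matrix with two groups; the \(k\)-means solution is just one way to build start.z):

# Convert a hard k-means classification into an n x G indicator matrix
km <- kmeans(X, centers = 2)
z0 <- model.matrix(~ factor(km$cluster) - 1)
res.man <- CNmixt(X, G = 2, model = "EEI",
                  initialization = "manual", start.z = z0)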
alphafix: a vector of length \(G\) with the proportion of good observations in each group. If length(alphafix) != G, the first element is replicated \(G\) times. Default value is NULL.
alphamin: a vector of length \(G\) with the minimum proportion of good observations in each group. If length(alphamin) != G, the first element is replicated \(G\) times. Default value is 0.5.
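As a sketch of how these two arguments are used (X and G = 2 as elsewhere on this page):

# Fix the proportion of good observations at 0.95 in every group ...
res.fix <- CNmixt(X, G = 2, model = "EEI", alphafix = 0.95)
# ... or only bound it from below, with a group-specific minimum
res.min <- CNmixt(X, G = 2, model = "EEI", alphamin = c(0.7, 0.9))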
seed: the seed for the random number generator, used when random initializations are requested; if NULL (default), the current seed is not changed.
start.z: initial \(n \times G\) matrix of either soft or hard classification. Default value is NULL.
start.v: initial \(n \times G\) matrix of posterior probabilities of being a good observation in each group. Default value is an \(n \times G\) matrix of ones.
start: when initialization = "mixt", the initialization used by the gpcm() function of the mixture package (see mixture::gpcm for details).
label: a vector of integers, of length equal to the number of rows of X, indicating the known group membership of each observation. Use 0 when membership is not known, and NULL when membership is unknown for all observations.
AICcond: when TRUE, the AICcond criterion, an estimate of the predictive ability of a generative model for classification, is computed (Vandewalle et al., 2013).
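A minimal sketch (X and the partially labelled vector lab are assumed to be as in the Examples below):

# Request the AICcond criterion while classifying partially labelled data
res.cond <- CNmixt(X, G = 2, model = "EEI", label = lab, AICcond = TRUE)
summary(res.cond)  # inspect the fitted model and its criteria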
iter.max: maximum number of iterations of the ECM algorithm. Default value is 1000.
threshold: threshold for Aitken's acceleration procedure. Default value is 1.0e-10.
parallel: when TRUE, the parallel package is used for parallel computation; when several models are estimated, this reduces computation time. The number of cores to use may be set with the global option cl.cores; the default value is detected using detectCores().
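A minimal sketch of a parallel run (two available cores are assumed; cl.cores is the global option named above):

options(cl.cores = 2)  # cores used by the parallel backend
res.par <- CNmixt(X, G = 2:3, model = c("EEI", "VVV"), parallel = TRUE)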
eps: an optional scalar; it sets the smallest allowed value for the eigenvalues of the component scale matrices. Default value is 1e-100.
k: number of equal-sized subsamples used in \(k\)-fold cross-validation (CNmixtCV only).
verbose: when TRUE, progress information is written to the console.
Author(s)

Antonio Punzo, Angelo Mazza, Paul D. McNicholas
Details

The multivariate data contained in X are either clustered or classified using parsimonious mixtures of multivariate contaminated normal distributions, with some or all of the 14 parsimonious models described in Punzo and McNicholas (2016). Model specification (via the model argument) follows the nomenclature popularized by other packages such as mixture and mclust. This nomenclature refers to the eigen-decomposition of, and the constraints on, the group scale matrices (see Banfield and Raftery, 1993, Celeux and Govaert, 1995, and Punzo and McNicholas, 2016 for details):
$$\Sigma_g = \lambda_g \Gamma_g \Delta_g \Gamma_g'.$$
The three letters of the nomenclature describe, in order, the volume (\(\lambda_g\)), shape (\(\Delta_g\)), and orientation (\(\Gamma_g\)), each being "V" (variable across groups), "E" (equal across groups), or "I" (the identity matrix). As an example, the string "VEI" refers to the model where \(\Sigma_g = \lambda_g \Delta\).
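As a purely illustrative sketch of this decomposition in R (the numbers are arbitrary):

# A "VEI"-type scale matrix: variable volume, equal shape,
# identity orientation (Gamma_g = I), so Sigma_g = lambda_g * Delta
Delta  <- diag(c(2, 0.5))  # common shape matrix, det(Delta) = 1
lambda <- c(1, 3)          # group-specific volumes
Sigma  <- lapply(lambda, function(l) l * Delta)
Sigma[[2]]                 # scale matrix of the second group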
Note that, for \(G = 1\), several models are equivalent (for example, "EEE" and "VVV"); thus, for \(G = 1\), only one model from each set of equivalent models will be run.
The algorithms detailed in Celeux and Govaert (1995) are used in the first CM-step of the ECM algorithm to update \(\Sigma_g\) for all models apart from "EVE" and "VVE". For "EVE" and "VVE", \(\Sigma_g\) is updated via majorization-minimization (MM) algorithms (Hunter and Lange, 2000) and accelerated line-search algorithms on the Stiefel manifold (Absil, Mahony and Sepulchre, 2009; Browne and McNicholas, 2014), which are especially preferable in higher dimensions (Browne and McNicholas, 2014); the same approach is adopted in the mixture package for those models.
Starting values are very important to the successful operation of these algorithms and so care must be taken in the interpretation of results.
All the initializations considered here provide initial quantities for the first CM-step of the ECM algorithm.
The predictive ability of a model for classification may be estimated using the cross-validated error rate, returned by CNmixtCV, or through the AICcond criterion (Vandewalle et al., 2013).
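As a minimal sketch (here lab is assumed to hold the known group, 1 or 2, of every row of X; k = 5 keeps the run time low):

# Cross-validated error rate for a single candidate model
cv <- CNmixtCV(X, G = 2, model = "EEI", label = lab, k = 5)
cv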
References

Absil P. A., Mahony R. and Sepulchre R. (2009). Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ.
Banfield J. D. and Raftery A. E. (1993). Model-Based Gaussian and Non-Gaussian Clustering. Biometrics, 49(3), 803--821.
Browne R. P. and McNicholas P. D. (2013). Estimating Common Principal Components in High Dimensions. Advances in Data Analysis and Classification, 8(2), 217--226.
Browne R. P. and McNicholas P. D. (2014). Orthogonal Stiefel Manifold Optimization for Eigen-Decomposed Covariance Parameter Estimation in Mixture Models. Statistics and Computing, 24(2), 203--210.
Browne R. P. and McNicholas P. D. (2015). mixture: Mixture Models for Clustering and Classification. R package version 1.4.
Celeux G. and Govaert G. (1995). Gaussian Parsimonious Clustering Models. Pattern Recognition, 28(5), 781--793.
Hunter D. R. and Lange K. (2000). Rejoinder to Discussion of ``Optimization Transfer Using Surrogate Objective Functions''. Journal of Computational and Graphical Statistics, 9(1), 52--59.
Punzo A., Mazza A. and McNicholas P. D. (2018). ContaminatedMixt: An R Package for Fitting Parsimonious Mixtures of Multivariate Contaminated Normal Distributions. Journal of Statistical Software, 85(10), 1--25.
Punzo A. and McNicholas P. D. (2016). Parsimonious mixtures of multivariate contaminated normal distributions. Biometrical Journal, 58(6), 1506--1537.
Vandewalle V., Biernacki C., Celeux G. and Govaert G. (2013). A predictive deviance criterion for selecting a generative model in semi-supervised classification. Computational Statistics and Data Analysis, 64, 220--236.
See Also

ContaminatedMixt-package
Examples

## Note that the example is extremely simplified
## in order to reduce computation time
# Artificial data from an EEI Gaussian mixture with G = 2 components
library("mnormt")
p <- 2
set.seed(12345)
X1 <- rmnorm(n = 200, mean = rep(2, p), varcov = diag(c(5, 0.5)))
X2 <- rmnorm(n = 200, mean = rep(-2, p), varcov = diag(c(5, 0.5)))
noise <- matrix(runif(n = 40, min = -20, max = 20), nrow = 20, ncol = 2)
X <- rbind(X1, X2, noise)
group <- rep(c(1, 2, 3), times = c(200, 200, 20))
plot(X, col = group, pch = c(3, 4, 16)[group], asp = 1, xlab = expression(X[1]),
ylab = expression(X[2]))
# ---------------------- #
# Model-based clustering #
# ---------------------- #
res1 <- CNmixt(X, model = c("EEI", "VVV"), G = 2, parallel = FALSE)
summary(res1)
agree(res1, givgroup = group)
plot(res1, contours = TRUE, asp = 1, xlab = expression(X[1]), ylab = expression(X[2]))
# -------------------------- #
# Model-based classification #
# -------------------------- #
indlab <- sample(1:400, 20)
lab <- rep(0, nrow(X))
lab[indlab] <- group[indlab]
res2 <- CNmixt(X, G = 2, model = "EEI", label = lab)
agree(res2, givgroup = group)