
mclust (version 5.4.5)

MclustDR: Dimension reduction for model-based clustering and classification

Description

A dimension reduction method for visualizing the clustering or classification structure obtained from a finite mixture of Gaussian densities.

Usage

MclustDR(object, lambda = 0.5, normalized = TRUE, Sigma,
         tol = sqrt(.Machine$double.eps))

Arguments

object

An object of class 'Mclust' or 'MclustDA' resulting from a call to, respectively, Mclust or MclustDA.

lambda

A tuning parameter in the range [0,1], as described in Scrucca (2014). The default value 0.5 gives equal importance to differences in means and covariances among clusters/classes. To recover the directions that best separate the estimated clusters or classes, set this parameter to 1 (a brief sketch follows the argument list).

normalized

Logical. If TRUE, directions are normalized to unit norm.

Sigma

The marginal covariance matrix of the data. If not provided, it is estimated by the MLE on the observed data.

tol

A tolerance value.
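
As a brief, illustrative sketch of the effect of lambda (assuming mod is a model fitted by Mclust, as in the Examples section), the eigenvalues obtained with the default value can be compared with those obtained with lambda = 1, which emphasizes separation among group means:

# directions based on both means and covariances (default lambda = 0.5)
dr_both <- MclustDR(mod)
# directions that best separate the estimated clusters/classes (lambda = 1)
dr_sep <- MclustDR(mod, lambda = 1)
round(dr_both$evalues, 3)
round(dr_sep$evalues, 3)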

Value

An object of class 'MclustDR' with the following components:

call

The matched call.

type

A character string specifying the type of model for which the dimension reduction is computed. Currently, possible values are "Mclust" for clustering, and "MclustDA" or "EDDA" for classification.

x

The data matrix.

Sigma

The covariance matrix of the data.

mixcomp

A numeric vector specifying the mixture component of each data observation.

class

A factor specifying the classification of each data observation. For model-based clustering this is equivalent to the corresponding mixture component. For model-based classification this is the known classification.

G

The number of mixture components.

modelName

The name of the parameterization of the estimated mixture model(s). See mclustModelNames.

mu

A matrix of means for each mixture component.

sigma

An array of covariance matrices for each mixture component.

pro

The estimated prior for each mixture component.

M

The kernel matrix.

lambda

The tuning parameter.

evalues

The eigenvalues from the generalized eigen-decomposition of the kernel matrix.

raw.evectors

The raw eigenvectors from the generalized eigen-decomposition of the kernel matrix, ordered according to the eigenvalues.

basis

The basis of the estimated dimension reduction subspace.

std.basis

The basis of the estimated dimension reduction subspace standardized to variables having unit standard deviation.

numdir

The dimension of the projection subspace.

dir

The estimated directions, i.e. the data projected onto the estimated dimension reduction subspace.
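
For illustration, the components above can be accessed directly from the returned object (here dr denotes a fitted 'MclustDR' object, as produced in the Examples section):

dr$numdir             # dimension of the projection subspace
dim(dr$basis)         # one column per estimated direction
head(dr$dir)          # data projected onto the estimated subspace
round(dr$evalues, 3)  # eigenvalues ordering the directions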

Details

The method reduces dimensionality by identifying a set of linear combinations of the original features, ordered by importance as quantified by the associated eigenvalues, which capture most of the clustering or classification structure contained in the data.

Information on the dimension reduction subspace is obtained from the variation in group means and, depending on the estimated mixture model, from the variation in group covariances (see Scrucca, 2010).

Observations may then be projected onto such a reduced subspace, thus providing summary plots which help to visualize the underlying structure.
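
As a minimal sketch of such a summary plot (assuming the fitted object dr has at least two estimated directions), the first two directions can be drawn directly from the returned components; this is a simplified version of what plot.MclustDR produces with what = "scatterplot":

# observations on the first two directions, marked by class/cluster
plot(dr$dir[,1], dr$dir[,2], col = as.numeric(dr$class),
     pch = as.numeric(dr$class), xlab = "Dir1", ylab = "Dir2")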

The method has been extended to the supervised case, i.e. when the true classification is known (see Scrucca, 2014).

This implementation does not provide a formal procedure for selecting the dimensionality of the subspace; a future release will include one or more methods.
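
In the meantime, an informal screening can be based on the evalues component, for instance by inspecting the cumulative proportion of the total accounted for by the estimated directions (a heuristic sketch only, not a formal selection criterion):

# heuristic check of how much structure each direction accounts for
round(cumsum(dr$evalues) / sum(dr$evalues), 3)
plot(dr, what = "evalues")   # graphical counterpart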

References

Scrucca, L. (2010) Dimension reduction for model-based clustering. Statistics and Computing, 20(4), pp. 471-484.

Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. Advances in Data Analysis and Classification, 8(2), pp. 147-165.

See Also

summary.MclustDR, plot.MclustDR, Mclust, MclustDA.

Examples

# clustering
data(diabetes)
mod <- Mclust(diabetes[,-1])
summary(mod)

dr <- MclustDR(mod)
summary(dr)
plot(dr, what = "scatterplot")
plot(dr, what = "evalues")

# adjust the tuning parameter to show the most separating directions
dr1 <- MclustDR(mod, lambda = 1) 
summary(dr1)
plot(dr1, what = "scatterplot")
plot(dr1, what = "evalues")

# classification
data(banknote)

# EDDA model: each class is described by a single Gaussian component
da <- MclustDA(banknote[,2:7], banknote$Status, modelType = "EDDA")
dr <- MclustDR(da)
summary(dr)

# default MclustDA model: a finite mixture of Gaussians within each class
da <- MclustDA(banknote[,2:7], banknote$Status)
dr <- MclustDR(da)
summary(dr)
