The sparse Fisher-EM algorithm is a sparse version of the Fisher-EM algorithm. The sparsity is introduced within the F step, which estimates the discriminative subspace: sparsity on the loading matrix U is obtained by adding an l1 penalty to the optimization problem of the F step.
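As a rough illustration only, the sketch below applies entry-wise soft-thresholding (the proximal operator of the l1 norm) to a toy loading matrix to show how an l1 penalty drives small loadings to exact zeros. It is not the F-step optimization implemented in sfem, and note that sfem's 'l1' argument uses a different scaling (values in [0,1], with small values giving more sparsity; see the 'l1' argument below).

# Conceptual sketch only: soft-thresholding applied entry-wise to a loading matrix.
# It mimics how an l1 term zeroes out small loadings; it is NOT the F step of sfem.
soft_threshold <- function(U, lambda) {
  sign(U) * pmax(abs(U) - lambda, 0)
}
set.seed(1)
U <- matrix(rnorm(20), nrow = 5, ncol = 4)   # a dense 5 x 4 toy loading matrix
U_sparse <- soft_threshold(U, lambda = 0.8)  # many entries become exactly zero
sum(U_sparse == 0)                           # number of zeroed loadings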
sfem(Y,K=2:6,obj=NULL,model='AkjBk',method='reg',crit='icl',maxit=50,eps=1e-6,
init='kmeans',nstart=5,Tinit=c(),kernel='',disp=FALSE,l1=0.1,l2=0,nbit=2)
Y: The data matrix. Categorical variables and missing values are not allowed.
K: An integer vector specifying the numbers of mixture components (clusters) among which the model selection criterion will choose the most appropriate number of groups. Default is 2:6.
obj: An object of class 'fem', previously learned with the 'fem' function, which is used to initialize the sparse Fisher-EM algorithm (see the sketch after this argument list).
model: A vector of discriminative latent mixture (DLM) models to fit. There are 12 different models: "DkBk", "DkB", "DBk", "DB", "AkjBk", "AkjB", "AkBk", "AkB", "AjBk", "AjB", "ABk", "AB". The option "all" runs the Fisher-EM algorithm on all 12 DLM models and selects the best one according to the maximum value of the model selection criterion.
method: The method used to fit the projection matrix associated with the discriminative subspace. Three methods are available: 'svd', 'reg' and 'gs'. The 'reg' method is the default.
crit: The model selection criterion used to select the most appropriate model for the data. There are 3 possibilities: "bic", "aic" or "icl". Default is "icl".
maxit: The maximum number of iterations before the Fisher-EM algorithm stops.
eps: The threshold on likelihood differences below which the Fisher-EM algorithm stops.
init: The initialization method for the Fisher-EM algorithm. There are 4 options: "random" for a random initialization, "kmeans" for an initialization by the k-means algorithm, "hclust" for a hierarchical clustering initialization, or "user" for a user-specified initialization through the "Tinit" parameter. Default is "kmeans". Note that for "kmeans" and "random", several initializations are tried and the one associated with the highest likelihood is kept (see "nstart").
nstart: The number of restarts if the initialization is "kmeans" or "random". In such a case, the initialization associated with the highest likelihood is kept.
Tinit: An n x K matrix of posterior probabilities used to initialize the algorithm when init="user" (each row corresponds to an individual); see the sketch after this argument list.
kernel: The kernel used to deal with the n < p problem. By default, no kernel ('') is used. Three kernels are available: "linear", "sigmoid" or "rbf".
disp: If TRUE, some messages are printed during the clustering. Default is FALSE.
l1: The l1 penalty value (lasso), which must lie in [0,1]. A small value (close to 0) leads to a very sparse loading matrix, whereas a value equal to 1 corresponds to no sparsity. Default is 0.1.
l2: The l2 penalty value (elastic net). Default is 0 (no regularization).
nbit: The number of iterations of the lasso procedure. Default is 2.
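A minimal initialization sketch follows, assuming the FisherEM package is attached. It shows a warm start from a previously fitted 'fem' object via 'obj', and a user-supplied initialization via 'Tinit' with init='user'. The l1 value of 0.5 and the k-means construction of Tinit are arbitrary choices, and passing K alongside 'obj' is an assumption of this sketch.

library(FisherEM)
data(iris)
X <- iris[, -5]
K <- 3

# (a) Warm start: fit a (non-sparse) Fisher-EM model first, then pass it via 'obj'
#     (passing K alongside 'obj' is assumed here to be accepted).
fit <- fem(X, K = K, model = 'AkjBk')
res.warm <- sfem(X, K = K, obj = fit, l1 = 0.5)

# (b) User-supplied initialization: an n x K matrix of posterior probabilities
#     (here a hard 0/1 assignment built from k-means) passed through 'Tinit'.
cl <- kmeans(X, centers = K, nstart = 10)$cluster
T0 <- matrix(0, nrow = nrow(X), ncol = K)
T0[cbind(seq_len(nrow(X)), cl)] <- 1
res.user <- sfem(X, K = K, init = 'user', Tinit = T0, l1 = 0.5)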
A list is returned with the following elements (a short access sketch follows the list):
The number of groups.
The group membership of each individual, as estimated by the Fisher-EM algorithm.
The posterior probabilities of each individual for each group.
The loading matrix which determines the orientation of the discriminative subspace.
The estimated mean in the subspace.
The estimated mean in the observation space.
The estimated mixture proportions.
The covariance matrices in the subspace.
The value of the Akaike information criterion.
The value of the Bayesian information criterion.
The value of the integrated completed likelihood criterion.
The log-likelihood values computed at each iteration of the FEM algorithm.
The log-likelihood value obtained at the last iteration of the FEM algorithm.
The method used.
The call of the function.
Some information to pass to the plot.fem function.
The model selection criterion used.
The l1 value.
The l2 value.
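A short access sketch is given below; the component names used (cls, P, U, icl) follow the usual layout of FisherEM result objects and are assumptions of this sketch rather than guarantees of this page.

library(FisherEM)
data(iris)
res <- sfem(iris[, -5], K = 3, model = 'DkBk', l1 = 0.1)
res$cls   # group membership of each individual (assumed component name)
res$P     # posterior probabilities (assumed component name)
res$U     # loading matrix of the discriminative subspace (assumed component name)
res$icl   # value of the ICL criterion (assumed component name)
plot(res) # uses the stored plotting information (see plot.fem)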
Charles Bouveyron and Camille Brunet (2012), "Simultaneous model-based clustering and visualization in the Fisher discriminative subspace", Statistics and Computing, 22(1), 301-324 <doi:10.1007/s11222-011-9249-9>.
Charles Bouveyron and Camille Brunet (2014), "Discriminative variable selection for clustering with the sparse Fisher-EM algorithm", Computational Statistics, 29(3-4), 489-513 <doi:10.1007/s00180-013-0433-6>.
fem, plot.fem, fem.ari, summary.fem
## Not run:
data(iris)
# Fit a sparse Fisher-EM model with K = 3 groups, trying several l1 values
res = sfem(iris[,-5],K=3,model='DkBk',l1=seq(.01,.3,.05))
res
plot(res)
# Compare the estimated partition with the true species labels (adjusted Rand index)
fem.ari(res,as.numeric(iris[,5]))
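# A further sketch: let sfem choose both K (among 2:6) and the DLM model
# (model='all' fits all 12 models) according to the BIC criterion; this can be slow.
res2 = sfem(iris[,-5],K=2:6,model='all',crit='bic',l1=0.1)
res2
plot(res2)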
## End(Not run)