
Rmixmod (version 2.1.10)

mixmodStrategy: Create an instance of [Strategy] class

Description

This class will contain all the parameters needed by the estimation algorithms.

Usage

mixmodStrategy(...)

Value

a [Strategy] object

Arguments

...

all arguments are transferred to the Strategy constructor. Valid arguments are:

algo:

list of character strings with the estimation algorithm(s). Possible values: "EM", "SEM", "CEM", or a combination such as c("EM","SEM"). Default value: "EM".

nbTry:

integer defining the number of tries. Default value: 1.

initMethod:

a character string with the method of initialization of the algorithm specified in the algo argument. Possible values: "random", "smallEM", "CEM", "SEMMax", "parameter", "label". Default value: "smallEM".

nbTryInInit:

integer defining the number of tries in the initMethod algorithm. Default value: 50.

nbIterationInInit:

integer defining the number of "EM" or "SEM" iterations in initMethod. Default values: 5 if initMethod is "smallEM" and 100 if initMethod is "SEMMax".

nbIterationInAlgo:

list of integers defining the number of iterations when the user wants to use nbIterationInAlgo as the rule to stop the algorithm(s). Default value: 200.

epsilonInInit:

real defining the epsilon value in the initialization step. Only available if initMethod is "smallEM". Default value: 0.001.

epsilonInAlgo:

list of reals defining the epsilon value for the algorithm. Warning: epsilonInAlgo is meaningless when algo is "SEM", so it must be set to NaN in that case. Default value: 0.001.

seed:

integer defining the seed of the random number generator. Setting a particular seed allows the user to (re)-generate a particular sequence of random numbers. Default value is NULL, i.e. a random seed.

parameter:

an instance of a "Parameter" subclass. Required if initMethod is "parameter", forbidden otherwise.

labels:

vector of integers containing the labels. Required if initMethod is "label", forbidden otherwise (a sketch using this argument follows the list).
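
As a minimal sketch using only the arguments documented above (the label vector and the seed value are made-up illustrations), a strategy initialized from user-supplied labels, or made reproducible through a fixed seed, could be built as follows:

# illustrative labels for a hypothetical data set of 6 observations in 2 groups
myLabels <- c(1, 1, 2, 2, 1, 2)
mixmodStrategy(initMethod = "label", labels = myLabels)
# fix the seed of the random number generator so that a run can be reproduced
mixmodStrategy(seed = 2408)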

Author

Florent Langrognet, Remi Lebret, Christian Poli and Serge Iovleff, with contributions from C. Biernacki, G. Celeux and G. Govaert. Contact: contact@mixmod.org

Details

There are different ways to initialize an algorithm (a combined sketch follows these descriptions):

random

Initialization from a random position is a standard way to initialize an algorithm. This random initial position is obtained by choosing centers at random in the data set. This simple strategy is repeated \(5\) times (the user can choose the number of times) from different random positions, and the position that maximises the likelihood is selected.

smallEM

A maximum of \(50\) iterations of the EM algorithm is performed according to the following process: \(n_i\) iterations of EM are run (with random initialization) until the smallEM stop criterion value has been reached. This action is repeated until the sum of the \(n_i\) reaches \(50\) iterations (or until \(50\) iterations are reached within a single action, before the stop criterion value). Repeating runs of EM in this way is generally profitable, since a single run of EM can often lead to suboptimal solutions.

CEM

\(10\) repetitions of \(50\) iterations of the CEM algorithm are done. One advantage of initializing an algorithm with CEM lies in the fact that CEM generally converges in a small number of iterations. Thus, without consuming a large amount of CPU time, several runs of CEM are performed. Then EM is run with the best solution among the \(10\) repetitions.

SEMMax

A run of \(500\) iterations of SEM. The idea is that an SEM sequence is expected to enter rapidly into the neighbourhood of the global maximum of the likelihood function.
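
As a combined sketch, the four initialization methods described above could be requested as follows; the counts simply mirror the descriptions and default values given on this page:

# random initialization repeated 5 times; the position with the highest likelihood is kept
mixmodStrategy(initMethod = "random", nbTryInInit = 5)
# smallEM initialization: short EM runs of 5 iterations each, up to 50 iterations in total
mixmodStrategy(initMethod = "smallEM", nbTryInInit = 50, nbIterationInInit = 5)
# CEM initialization: several short runs of CEM, the best one seeds the main algorithm
mixmodStrategy(initMethod = "CEM", nbTryInInit = 10)
# SEMMax initialization: one long run of SEM
mixmodStrategy(initMethod = "SEMMax", nbIterationInInit = 500)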

The strategy also defines the algorithms to be used and the stopping rules that determine when each one stops (a sketch follows the list below).

  • Algorithms:

    EM

    Expectation Maximisation

    CEM

    Classification EM

    SEM

    Stochastic EM

  • Stopping rules for the algorithm:

    nbIterationInAlgo

    Sets the maximum number of iterations

    epsilonInAlgo

    Sets the threshold on the relative increase of the log-likelihood criterion

  • Default values are \(200\) iterations of EM (nbIterationInAlgo) with an epsilonInAlgo value of \(10^{-3}\).
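
For example (a sketch combining the elements above), one could chain CEM and then EM, with an iteration budget and an epsilon value for each algorithm:

# run CEM then EM; each algorithm stops after its iteration budget is spent
# or when the relative increase of the log-likelihood falls below its epsilon
mixmodStrategy(algo = c("CEM", "EM"),
               nbIterationInAlgo = c(50, 200),
               epsilonInAlgo = c(0.001, 0.000001))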

References

Biernacki, C., Celeux, G., Govaert, G., 2003. "Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models". Computational Statistics and Data Analysis 41, 561-575.

Examples

# default strategy: EM algorithm with smallEM initialization
mixmodStrategy()
# CEM algorithm with random initialization, 10 tries and a tighter epsilon in the initialization step
mixmodStrategy(algo = "CEM", initMethod = "random", nbTry = 10, epsilonInInit = 0.00001)
# chain SEM then EM with per-algorithm iteration counts and epsilon values
# (epsilon is meaningless for SEM, hence the NA for the first algorithm)
mixmodStrategy(
  algo = c("SEM", "EM"), nbIterationInAlgo = c(200, 100),
  epsilonInAlgo = c(NA, 0.000001)
)
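
A Strategy object is typically passed on to a clustering run; as a hedged sketch, assuming the geyser data set shipped with Rmixmod and the strategy argument of mixmodCluster:

# cluster the geyser data with 2 to 4 components, using a custom strategy
data(geyser)
myStrategy <- mixmodStrategy(algo = "CEM", initMethod = "random", nbTry = 10)
mixmodCluster(geyser, nbCluster = 2:4, strategy = myStrategy)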
