For $m = 1$, a generalization of the Lloyd-Forgy variant of the
$k$-means algorithm is used, which iterates between reclassifying
objects to their closest prototypes, and computing new prototypes as
consensus clusterings for the classes. This may result in degenerate
solutions (e.g., empty classes), and will eventually be replaced by a
Hartigan-Wong style algorithm. For $m > 1$, a generalization of the
fuzzy $c$-means recipe
(e.g., Bezdek (1981)) is used, which alternates between computing
optimal memberships for fixed prototypes, and computing new prototypes
as the consensus clusterings for the classes.
This procedure is repeated until convergence occurs, or the maximal
number of iterations is reached.
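The alternation described above for $m = 1$ can be sketched in a few lines. The sketch below is illustrative, not the package's implementation: the names `pclust_hard`, `dissim`, and `consensus` are hypothetical, and the toy instance replaces consensus clusterings of the classes with a plain mean over numbers (the least-squares consensus in that setting).

```python
import random

def pclust_hard(objects, k, dissim, consensus, max_iter=100, seed=0):
    """Lloyd-Forgy style alternation: reassign each object to its closest
    prototype, then recompute each prototype as the consensus of its class.
    `dissim` and `consensus` stand in for the clustering dissimilarity and
    consensus method of the text; both names are illustrative."""
    rng = random.Random(seed)
    prototypes = rng.sample(objects, k)
    labels = None
    for _ in range(max_iter):
        new_labels = [min(range(k), key=lambda j: dissim(x, prototypes[j]))
                      for x in objects]
        if new_labels == labels:          # fixed point reached
            break
        labels = new_labels
        for j in range(k):
            members = [x for x, l in zip(objects, labels) if l == j]
            if members:                   # classes may become empty (degenerate)
                prototypes[j] = consensus(members)
    return labels, prototypes

# Toy instance: objects are numbers, the dissimilarity is squared distance,
# and "consensus" is the mean (the least-squares consensus for numbers).
objs = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
labels, protos = pclust_hard(objs, 2,
                             dissim=lambda x, p: (x - p) ** 2,
                             consensus=lambda ms: sum(ms) / len(ms))
```

In the package itself, the consensus step would of course call `cl_consensus` on the clusterings in each class rather than averaging numbers.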
Consensus clusterings are computed using cl_consensus.
Available control parameters include the maximal number of iterations of
the fixed-point algorithm and the tolerance used for determining
convergence.
The dissimilarities $d$ and exponent $e$ are implied by the
consensus method employed, and inferred via a registration mechanism
currently only made available to built-in consensus methods. The
default methods compute Least Squares Euclidean consensus clusterings,
i.e., use Euclidean dissimilarity $d$ and $e = 2$.
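For $m > 1$, the membership update for fixed prototypes has a closed form under the usual fuzzy $c$-means criterion $\sum_b \sum_j u_{bj}^m \, d(x_b, p_j)^e$: the optimal memberships are proportional to $d_{bj}^{-e/(m-1)}$. The following sketch assumes that criterion; the function name `memberships` is illustrative, not part of the package API.

```python
def memberships(dists, m, e=2):
    """Optimal memberships for fixed prototypes under the criterion
    sum_b sum_j u[b][j]**m * d(x_b, p_j)**e, with m > 1.
    `dists[b][j]` is the dissimilarity between object b and prototype j;
    e = 2 corresponds to the default least-squares Euclidean case."""
    u = []
    for row in dists:
        zeros = sum(1 for d in row if d == 0)
        if zeros:                         # object coincides with a prototype
            u.append([1.0 / zeros if d == 0 else 0.0 for d in row])
            continue
        w = [d ** (-e / (m - 1)) for d in row]
        s = sum(w)
        u.append([wi / s for wi in w])
    return u

# An object equidistant from two prototypes gets memberships 1/2 each;
# a closer prototype receives the larger membership.
u = memberships([[1.0, 1.0], [1.0, 3.0]], m=2, e=2)
```

As $m \to 1$ from above, the memberships approach the hard 0/1 assignments of the $m = 1$ case.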
The fixed-point approach employed is a heuristic and cannot be
guaranteed to find the global minimum (indeed, the same is already true
for the computation of the consensus clusterings themselves). Standard
practice is to use the best solution found in sufficiently many
replications of the base algorithm.
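The best-of-many-replications practice can be sketched as below; `best_of_runs` and `run_once` are hypothetical helper names, not the package's interface, and each replication is assumed to report its criterion value alongside the solution found.

```python
import random

def best_of_runs(run_once, nruns=10, seed=0):
    """Replicate a heuristic from random starts and keep the solution
    with the smallest criterion value.  `run_once(rng)` is assumed to
    return a (criterion_value, solution) pair for one replication."""
    rng = random.Random(seed)
    return min(run_once(rng) for _ in range(nruns))

# Toy stand-in: each "replication" draws a criterion value at random;
# the helper keeps the smallest one seen across the replications.
value, solution = best_of_runs(lambda rng: (rng.uniform(0, 1), None), nruns=5)
```

Because the per-run criterion values are comparable, more replications can only improve (or leave unchanged) the best value found.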