Description:

Returns EM algorithm output for a mixture-of-experts model. Currently, this code only handles a 2-component mixture-of-experts, but will be extended to the general k-component hierarchical mixture-of-experts.
Usage:

hmeEM(y, x, lambda = NULL, beta = NULL, sigma = NULL, w = NULL,
      k = 2, addintercept = TRUE, epsilon = 1e-08,
      maxit = 10000, verb = FALSE)

Value:

hmeEM returns a list of class mixEM with the following items:
x: The set of predictors (which includes a column of 1's if addintercept = TRUE).

y: The response values.

w: The final coefficients for the functional form of the mixing proportions.

lambda: An nxk matrix of the final mixing proportions.

beta: The final regression coefficients.

sigma: The final standard deviations.

loglik: The final log-likelihood.

posterior: An nxk matrix of the posterior probabilities for the observations.

all.loglik: A vector of each iteration's log-likelihood.

restarts: The number of times the algorithm restarted due to an unacceptable choice of initial values.

ft: A character vector giving the name of the function.
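The returned w and lambda are linked through the gating model: the mixing proportions are an inverse logit function of the predictors. A minimal sketch of that relationship for k = 2, using hypothetical values of w and x (not taken from the package):

```r
# Hypothetical gating coefficients and predictors (with an intercept
# column), used only to illustrate the inverse-logit relationship.
w <- c(0.5, -1.2)
x <- cbind(1, seq(0, 3, by = 0.5))

score <- drop(x %*% w)                 # linear score x_i' w
lambda1 <- 1 / (1 + exp(-score))       # component-1 mixing proportion
lambda <- cbind(lambda1, 1 - lambda1)  # nxk matrix; rows sum to 1
rowSums(lambda)                        # each row sums to 1
```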
Arguments:

y: An n-vector of response values.

x: An nxp matrix of predictors. See addintercept below.

lambda: Initial value of the mixing proportions, which are modeled as an inverse logit function of the predictors. Entries should sum to 1. If NULL, then lambda is taken as 1/k for each x.

beta: Initial value of the beta parameters. Should be a pxk matrix, where p is the number of columns of x and k is the number of components. If NULL, then beta has standard normal entries according to a binning method done on the data.

sigma: A vector of standard deviations. If NULL, then 1/sigma^2 has random standard exponential entries according to a binning method done on the data.

w: A p-vector of coefficients for the way the mixing proportions are modeled. See lambda above.

k: Number of components. Currently, only k = 2 is accepted.

addintercept: If TRUE, a column of ones is appended to the x matrix before the value of p is calculated.

epsilon: The convergence criterion.

maxit: The maximum number of iterations.

verb: If TRUE, various updates are printed during each iteration of the algorithm.
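The NULL defaults described for lambda and sigma can be sketched as below. This is an assumed simplification for illustration only; the package's actual data-driven binning method is more involved:

```r
# Assumed sketch of the NULL defaults described above, not the
# package's exact binning scheme.
k <- 2
set.seed(1)

lambda0 <- rep(1 / k, k)   # lambda taken as 1/k for each component
prec0 <- rexp(k)           # 1/sigma^2: standard exponential draws
sigma0 <- 1 / sqrt(prec0)  # implied initial standard deviations

lambda0
sigma0
```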
References:

Jacobs, R. A., Jordan, M. I., Nowlan, S. J. and Hinton, G. E. (1991) Adaptive Mixtures of Local Experts, Neural Computation 3(1), 79--87.

McLachlan, G. J. and Peel, D. (2000) Finite Mixture Models, John Wiley and Sons, Inc.
See Also:

regmixEM
Examples:

## EM output for NOdata.
data(NOdata)
attach(NOdata)
set.seed(100)
em.out <- regmixEM(Equivalence, NO)
hme.out <- hmeEM(Equivalence, NO, beta = em.out$beta)
hme.out[3:7]