Usage

fit.dependency.model(X, Y, zDimension = 1, marginalCovariances = "full",
  epsilon = 1e-3, priors = list(), matched = TRUE, includeData = TRUE,
  calculateZ = TRUE, verbose = FALSE)
ppca(X, Y = NULL, zDimension = NULL, includeData = TRUE, calculateZ = TRUE)
pfa(X, Y = NULL, zDimension = NULL, includeData = TRUE, calculateZ = TRUE, priors = NULL)
pcca(X, Y, zDimension = NULL, includeData = TRUE, calculateZ = TRUE)
Arguments

X, Y: Data sets; the second data set (Y) is optional.

marginalCovariances: Structure of the marginal covariances. The options are "identical isotropic", "isotropic", "diagonal" and "full". The difference between the isotropic and identical isotropic options is that in the isotropic model phi$X != phi$Y in general, whereas in the identical isotropic model phi$X = phi$Y.

priors: Prior parameters for the model. If Nm.wxwy.mean is given as the scalar 1, it will be made an identity matrix of appropriate size; it gives the mean of the matrix normal prior for the transformation matrix T. Nm.wxwy.sigma describes the allowed deviation scale of the transformation matrix T from the mean matrix Nm.wxwy.mean.

includeData: Whether to store the original data in the model object; FALSE can be used to save memory.

calculateZ: Whether to compute the latent variable Z during fitting; it can also be obtained afterwards with getZ or z.expectation. Using FALSE speeds up the calculation of the dependency model.

Details

The fit.dependency.model
function fits the dependency
model X = N(W$X * Z, phi$X); Y = N(W$Y * Z, phi$Y) with the
possibility to tune the model structure and the parameter priors. In particular, the data-set-specific covariance structure phi can be defined, non-negative priors for W can be imposed, and the relation between W$X and W$Y can be tuned. For a comprehensive set of examples, see the example scripts in the tests/ directory of this package.
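As an illustration only (not taken from the package documentation), the following sketch simulates data according to this generative model and fits it with the default settings; the dimensions, the noise level and the samples-in-columns orientation are assumptions made for the example.

set.seed(1)
n <- 100                                 # number of samples (assumed to be in columns)
z <- matrix(rnorm(n), nrow = 1)          # shared latent variable Z (zDimension = 1)
W <- matrix(rnorm(5), ncol = 1)          # shared projection; the default model assumes W$X = W$Y
X <- W %*% z + matrix(rnorm(5 * n, sd = 0.3), nrow = 5)   # X = W$X Z + noise
Y <- W %*% z + matrix(rnorm(5 * n, sd = 0.3), nrow = 5)   # Y = W$Y Z + noise
model <- fit.dependency.model(X, Y, zDimension = 1)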
Special cases of the model, obtained with particular prior assumptions, include probabilistic canonical correlation analysis (pcca; Bach & Jordan 2005), probabilistic principal component analysis (ppca; Tipping & Bishop 1999), probabilistic factor analysis (pfa; Rubin & Thayer 1982), and a regularized version of canonical correlation analysis (pSimCCA; Lahti et al. 2009).
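These special cases are also available directly through the shortcut functions listed in the usage section; for example (a sketch, with X and Y as in the Examples):

model.pcca <- pcca(X, Y, zDimension = 1)   # probabilistic CCA
model.ppca <- ppca(X, Y, zDimension = 1)   # probabilistic PCA
model.pfa  <- pfa(X, Y, zDimension = 1)    # probabilistic factor analysis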
The standard probabilistic PCA and factor analysis are methods for a
single data set (X ~ N(WZ, phi)), with isotropic and diagonal
covariance (phi) for pPCA and pFA, respectively. Analogous models for
two data sets are obtained by concatenating the two data sets, and
performing pPCA or pFA.
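One way to read this (a sketch under the assumption that samples are in columns, so concatenation means stacking the feature rows) is that the two-data-set call corresponds to running the single-data-set method on the concatenated matrix:

# Reusing the simulated X and Y from the sketch above
model.concat <- ppca(rbind(X, Y), zDimension = 2)   # pPCA on the concatenated data
model.pair   <- ppca(X, Y, zDimension = 2)          # two-data-set call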
Such special cases are obtained with the following choices in the fit.dependency.model function (the sketch after this list spells them out as explicit calls):

- pPCA: marginalCovariances = "identical isotropic" (Tipping & Bishop 1999)
- pFA: marginalCovariances = "diagonal" (Rubin & Thayer 1982)
- pCCA: marginalCovariances = "full" (Bach & Jordan 2005)
- pSimCCA: marginalCovariances = "full", priors = list(Nm.wxwy.mean = I, Nm.wxwy.sigma = 0). This is the default method and corresponds to the case W$X = W$Y (Lahti et al. 2009).
- A relaxed variant that allows W$Y to deviate from W$X: marginalCovariances = "isotropic", priors = list(Nm.wxwy.mean = 1, Nm.wxwy.sigma = 1) (Lahti et al. 2009)
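The sketch below spells out these choices as explicit fit.dependency.model calls for data matrices X and Y (for instance from data(modelData) in the Examples); other arguments such as zDimension and matched are left at their defaults here and may need adjusting.

ppca.like <- fit.dependency.model(X, Y, marginalCovariances = "identical isotropic")
pfa.like  <- fit.dependency.model(X, Y, marginalCovariances = "diagonal")
pcca.like <- fit.dependency.model(X, Y, marginalCovariances = "full")
# Default pSimCCA: Nm.wxwy.mean = 1 expands to an identity matrix, forcing W$X = W$Y
psimcca   <- fit.dependency.model(X, Y, marginalCovariances = "full",
                                  priors = list(Nm.wxwy.mean = 1, Nm.wxwy.sigma = 0))
# Relaxed prior: W$Y may deviate from W$X
psimcca.t <- fit.dependency.model(X, Y, marginalCovariances = "isotropic",
                                  priors = list(Nm.wxwy.mean = 1, Nm.wxwy.sigma = 1))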
To avoid computational singularities, the covariance matrix phi is regularised by adding a small constant to the diagonal.
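For illustration only (the package applies this regularisation internally; the constant used there is an implementation detail):

phi     <- matrix(1, nrow = 2, ncol = 2)   # singular covariance estimate
phi.reg <- phi + diag(1e-6, nrow(phi))     # small constant added to the diagonal
solve(phi.reg)                             # the regularised matrix is invertible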
References

Bach, F. R. and Jordan, M. I. (2005). A Probabilistic Interpretation of Canonical Correlation Analysis. Technical Report 688, Department of Statistics, University of California, Berkeley. http://www.di.ens.fr/~fbach/probacca.pdf

Tipping, M. E. and Bishop, C. M. (1999). Probabilistic Principal Component Analysis. Journal of the Royal Statistical Society, Series B, 61(3), 611-622. http://research.microsoft.com/en-us/um/people/cmbishop/downloads/Bishop-PPCA-JRSS.pdf

Rubin, D. and Thayer, D. (1982). EM Algorithms for ML Factor Analysis. Psychometrika, 47(1).
See also

ppca, pfa, pcca

Examples
data(modelData) # Load example data X, Y
# probabilistic CCA
model <- pcca(X, Y)
# dependency model with priors (W>=0; Wx = Wy; full marginal covariances)
model <- fit.dependency.model(X, Y, zDimension = 1,
priors = list(W = 1e-3, Nm.wxwy.sigma = 0),
marginalCovariances = "full")
# Getting the latent variable Z when it has been calculated with the model
#getZ(model)
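# Additional sketch (not part of the original example): skip computing the
# latent variable during fitting to save time, then obtain it afterwards.
# The call signature of z.expectation below is an assumption; see its help page.
fast.model <- fit.dependency.model(X, Y, zDimension = 1, calculateZ = FALSE)
z <- z.expectation(fast.model, X, Y)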