These functions provide the density and random number generation for the multivariate normal distribution, given the precision-Cholesky parameterization.
dmvnpc(x, mu, U, log=FALSE)
rmvnpc(n=1, mu, U)
This is data or parameters in the form of a vector of length \(k\) or a matrix with \(k\) columns.
This is the number of random draws.
This is the mean vector \(\mu\) of length \(k\), or a matrix with \(k\) columns.
This is the \(k \times k\) upper-triangular matrix that is the Cholesky factor \(\textbf{U}\) of the precision matrix \(\Omega\).
Logical. If log=TRUE, then the logarithm of the density is returned.
dmvnpc gives the density and rmvnpc generates random deviates.
Application: Continuous Multivariate
Density: \(p(\theta) = (2\pi)^{-k/2} |\Omega|^{1/2} \exp(-\frac{1}{2} (\theta-\mu)^T \Omega (\theta-\mu))\)
Inventor: Unknown (to me, anyway)
Notation 1: \(\theta \sim \mathcal{MVN}(\mu, \Omega^{-1})\)
Notation 2: \(\theta \sim \mathcal{N}_k(\mu, \Omega^{-1})\)
Notation 3: \(p(\theta) = \mathcal{MVN}(\theta | \mu, \Omega^{-1})\)
Notation 4: \(p(\theta) = \mathcal{N}_k(\theta | \mu, \Omega^{-1})\)
Parameter 1: location vector \(\mu\)
Parameter 2: positive-definite \(k \times k\) precision matrix \(\Omega\)
Mean: \(E(\theta) = \mu\)
Variance: \(var(\theta) = \Omega^{-1}\)
Mode: \(mode(\theta) = \mu\)
The multivariate normal distribution, or multivariate Gaussian
distribution, is a multidimensional extension of the one-dimensional
or univariate normal (or Gaussian) distribution. It is usually
parameterized with mean and a covariance matrix, or in Bayesian
inference, with mean and a precision matrix, where the precision matrix
is the matrix inverse of the covariance matrix. These functions
provide the precision-Cholesky parameterization for convenience and
familiarity. It is easier to calculate a multivariate normal density
with the precision parameterization, because a matrix inversion can be
avoided. The precision matrix is replaced with an upper-triangular \(k \times k\) matrix that is the Cholesky factor \(\textbf{U}\), as per the chol function for Cholesky decomposition.
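As a sketch of why no inversion is needed, the log-density can be evaluated directly from \(\textbf{U}\) in base R. The function below is illustrative only (not a package function), and assumes \(\Omega = \textbf{U}^T \textbf{U}\) as returned by chol:

```r
# Illustrative sketch: MVN log-density from the upper-triangular
# Cholesky factor U of the precision matrix Omega, with no inversion.
logdens.from.U <- function(x, mu, U) {
  k <- length(mu)
  z <- U %*% (x - mu)               # Omega = t(U) %*% U, so the quadratic
                                    # form (x-mu)' Omega (x-mu) = sum(z^2)
  halflogdet <- sum(log(diag(U)))   # log |Omega|^{1/2}
  -0.5 * k * log(2*pi) + halflogdet - 0.5 * sum(z^2)
}

U <- chol(diag(3))                  # identity precision: standard normal
logdens.from.U(c(0,0,0), c(0,0,0), U)   # equals -1.5 * log(2*pi)
```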
A random vector is considered to be multivariate normally distributed if every linear combination of its components has a univariate normal distribution. This distribution has a mean parameter vector \(\mu\) of length \(k\) and a \(k \times k\) precision matrix \(\Omega\), which must be positive-definite.
In practice, \(\textbf{U}\) is fully unconstrained for proposals
when its diagonal is log-transformed. The diagonal is exponentiated
after a proposal and before other calculations. Overall, the Cholesky parameterization is faster than the traditional parameterization.
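A minimal base-R sketch of this unconstrained parameterization (variable names are illustrative, not part of the package):

```r
# Illustrative sketch: propose U on an unconstrained scale by
# log-transforming its diagonal, then exponentiate after the proposal.
k <- 3
U <- chol(diag(k) + 0.5)               # a valid upper-triangular factor
mask <- upper.tri(U, diag=TRUE)        # the k(k+1)/2 free elements
theta <- U[mask]                       # flatten (column-major order)
is.diag <- (row(U) == col(U))[mask]
theta[is.diag] <- log(theta[is.diag])  # theta is now fully unconstrained
# ... a sampler may perturb theta freely here ...
U.new <- matrix(0, k, k)
U.new[mask] <- theta
diag(U.new) <- exp(diag(U.new))        # restore the positive diagonal
stopifnot(isTRUE(all.equal(U, U.new))) # round-trip recovers U
```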
Compared with dmvnp, dmvnpc must additionally matrix-multiply the Cholesky factor back to the precision matrix, but it does not have to check the precision matrix for positive-definiteness or correct it, which is the slower operation overall. Compared with rmvnp, rmvnpc is faster because the Cholesky decomposition has already been performed.
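Assuming the LaplacesDemon package is installed, the equivalence of the two parameterizations can be checked directly: dmvnpc with U = chol(Omega) should agree with dmvnp with Omega.

```r
# Sketch: dmvnpc given U = chol(Omega) matches dmvnp given Omega.
# Assumes the LaplacesDemon package is available.
library(LaplacesDemon)
Omega <- matrix(c(2, 0.5, 0.5, 1), 2, 2)   # positive-definite precision
U <- chol(Omega)
x <- c(0.3, -0.2); mu <- c(0, 0)
d1 <- dmvnp(x, mu, Omega, log=TRUE)
d2 <- dmvnpc(x, mu, U, log=TRUE)
stopifnot(isTRUE(all.equal(d1, d2)))
```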
For models where the dependent variable, Y, is specified to be distributed multivariate normal given the model, the Mardia test (see plot.demonoid.ppc, plot.laplace.ppc, or plot.pmc.ppc) may be used to test the residuals.
chol, dmvn, dmvnc, dmvnp, dnorm, dnormp, dnormv, dwishartc, plot.demonoid.ppc, plot.laplace.ppc, and plot.pmc.ppc.
library(LaplacesDemon)
Omega <- diag(3)                        # precision matrix
U <- chol(Omega)                        # upper-triangular Cholesky factor
x <- dmvnpc(c(1,2,3), c(0,1,2), U)      # density at a single point
X <- rmvnpc(1000, c(0,1,2), U)          # 1000 random deviates
joint.density.plot(X[,1], X[,2], color=TRUE)