These functions provide the density and random number generation for the multivariate normal distribution, given the precision parameterization.
dmvnp(x, mu, Omega, log=FALSE)
rmvnp(n=1, mu, Omega)
x: This is data or parameters in the form of a vector of length \(k\) or a matrix with \(k\) columns.
n: This is the number of random draws.
mu: This is the mean vector \(\mu\) of length \(k\) or a matrix with \(k\) columns.
Omega: This is the \(k \times k\) precision matrix \(\Omega\).
log: Logical. If log=TRUE, then the logarithm of the density is returned.
dmvnp gives the density, and rmvnp generates random deviates.
Application: Continuous Multivariate
Density: \(p(\theta) = (2\pi)^{-k/2} |\Omega|^{1/2} \exp(-\frac{1}{2} (\theta-\mu)^T \Omega (\theta-\mu))\)
Inventor: Unknown (to me, anyway)
Notation 1: \(\theta \sim \mathcal{MVN}(\mu, \Omega^{-1})\)
Notation 2: \(\theta \sim \mathcal{N}_k(\mu, \Omega^{-1})\)
Notation 3: \(p(\theta) = \mathcal{MVN}(\theta | \mu, \Omega^{-1})\)
Notation 4: \(p(\theta) = \mathcal{N}_k(\theta | \mu, \Omega^{-1})\)
Parameter 1: location vector \(\mu\)
Parameter 2: positive-definite \(k \times k\) precision matrix \(\Omega\)
Mean: \(E(\theta) = \mu\)
Variance: \(var(\theta) = \Omega^{-1}\)
Mode: \(mode(\theta) = \mu\)
The multivariate normal distribution, or multivariate Gaussian distribution, is a multidimensional extension of the one-dimensional or univariate normal (or Gaussian) distribution. It is usually parameterized with mean and a covariance matrix, or in Bayesian inference, with mean and a precision matrix, where the precision matrix is the matrix inverse of the covariance matrix. These functions provide the precision parameterization for convenience and familiarity. It is easier to calculate a multivariate normal density with the precision parameterization, because a matrix inversion can be avoided.
A random vector is considered to be multivariate normally distributed if every linear combination of its components has a univariate normal distribution. This distribution has a mean parameter vector \(\mu\) of length \(k\) and a \(k \times k\) precision matrix \(\Omega\), which must be positive-definite.
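As a minimal sketch of this equivalence (assuming only the LaplacesDemon package), the density above can be evaluated by hand and compared with dmvnp and with dmvn applied to the inverted precision matrix:
library(LaplacesDemon)
x     <- c(1,2,3)                # point at which to evaluate the density
mu    <- c(0,1,2)                # mean vector
Omega <- diag(3)                 # precision matrix
k     <- length(mu)
# Closed-form density with the precision parameterization
manual <- (2*pi)^(-k/2) * sqrt(det(Omega)) *
     exp(-0.5 * t(x - mu) %*% Omega %*% (x - mu))
dmvnp(x, mu, Omega)              # precision parameterization
dmvn(x, mu, solve(Omega))        # covariance parameterization
as.numeric(manual)               # all three values should agree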
The conjugate prior of the mean vector is another multivariate normal distribution. The conjugate prior of the precision matrix is the Wishart distribution (see dwishart).
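As a brief sketch of this conjugacy (assuming rwishart(nu, S) in LaplacesDemon, which returns a random \(k \times k\) matrix), a precision matrix may be drawn from a Wishart prior and passed directly to rmvnp:
library(LaplacesDemon)
mu <- c(0,1,2)
nu <- 5                          # Wishart degrees of freedom (nu >= k)
S  <- diag(3)                    # Wishart scale matrix
Omega <- rwishart(nu, S)         # random precision matrix from the prior
theta <- rmvnp(10, mu, Omega)    # 10 draws given the sampled precision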
When applicable, the alternative Cholesky parameterization should be preferred. For more information, see dmvnpc.
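A short sketch of the relationship between the two parameterizations, assuming dmvnpc accepts the upper-triangular Cholesky factor U of the precision matrix (as returned by chol), is:
library(LaplacesDemon)
x     <- c(1,2,3)
mu    <- c(0,1,2)
Omega <- matrix(c(2,0.5,0, 0.5,2,0.5, 0,0.5,2), 3, 3)  # positive-definite precision
U <- chol(Omega)                 # upper-triangular factor, t(U) %*% U = Omega
dmvnp(x, mu, Omega)              # precision parameterization
dmvnpc(x, mu, U)                 # Cholesky parameterization (assumed interface)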
For models where the dependent variable, Y, is specified to be distributed multivariate normal given the model, the Mardia test (see plot.demonoid.ppc, plot.laplace.ppc, or plot.pmc.ppc) may be used to test the residuals.
dmvn, dmvnc, dmvnpc, dnorm, dnormp, dnormv, dwishart, plot.demonoid.ppc, plot.laplace.ppc, and plot.pmc.ppc.
library(LaplacesDemon)
# Density at c(1,2,3) given mean c(0,1,2) and identity precision matrix
x <- dmvnp(c(1,2,3), c(0,1,2), diag(3))
# 1000 random draws from the same distribution
X <- rmvnp(1000, c(0,1,2), diag(3))
# Joint density plot of the first two dimensions
joint.density.plot(X[,1], X[,2], color=TRUE)