eigen(x, symmetric, only.values = FALSE, EISPACK = FALSE)
symmetric: if TRUE, the matrix is assumed to be symmetric (or Hermitian if complex) and only its lower triangle (diagonal included) is used. If symmetric is not specified, the matrix is inspected for symmetry.

only.values: if TRUE, only the eigenvalues are computed and returned, otherwise both eigenvalues and eigenvectors are returned.
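For example (an illustrative sketch, not part of this page's Examples; the matrix S is made up for illustration):

## declare symmetry explicitly and, if the eigenvectors are not needed,
## ask for the eigenvalues only
S <- crossprod(matrix(rnorm(9), 3, 3))          # a symmetric 3 x 3 matrix
eigen(S, symmetric = TRUE)                      # values and vectors
eigen(S, symmetric = TRUE, only.values = TRUE)  # vectors component is NULL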
The spectral decomposition of x is returned as components of a list with components values and vectors.

values: a vector containing the eigenvalues of x, sorted in decreasing order, according to Mod(values) in the asymmetric case when they might be complex (even for real matrices). For real asymmetric matrices the vector will be complex only if complex conjugate pairs of eigenvalues are detected.

vectors: a matrix whose columns contain the eigenvectors of x, or NULL if only.values is TRUE. The vectors are normalized to unit length.

Recall that the eigenvectors are only defined up to a constant: even
when the length is specified they are still only defined up to a
scalar of modulus one (the sign for real matrices).
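For instance (a sketch, not from this page's Examples), the unit-length normalization and the sign ambiguity can be checked directly:

S <- crossprod(matrix(rnorm(16), 4, 4))  # a symmetric 4 x 4 matrix
V <- eigen(S, symmetric = TRUE)$vectors
colSums(V^2)                             # each column has (numerically) unit length
zapsmall(crossprod(V))                   # orthonormal columns: the identity matrix
## flipping the sign of any column of V gives an equally valid answer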
If r <- eigen(A), and V <- r$vectors; lam <- r$values,
then $$A = V \Lambda V^{-1}$$ (up to numerical
fuzz), where $\Lambda =$ diag(lam).
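A quick check of this identity (a sketch; A here is just an arbitrary example matrix):

A   <- cbind(c(2, 1), c(1, 3))
r   <- eigen(A)
V   <- r$vectors
lam <- r$values
all.equal(A, V %*% diag(lam) %*% solve(V))  # TRUE, up to numerical fuzz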
eigen uses the LAPACK routines DSYEVR, DGEEV, ZHEEV and ZGEEV. LAPACK is from http://www.netlib.org/lapack and its guide is listed in the references.

If symmetric
is unspecified, the code attempts to
determine if the matrix is symmetric up to plausible numerical
inaccuracies. It is faster and surer to set the value yourself.

Computing the eigenvectors is the slow part for large matrices.
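As a rough illustration (a sketch; timings are machine-dependent and the size n is chosen arbitrarily):

n <- 500
S <- crossprod(matrix(rnorm(n * n), n, n))                   # a large symmetric matrix
system.time(eigen(S))                                        # symmetry test + vectors
system.time(eigen(S, symmetric = TRUE))                      # skip the symmetry test
system.time(eigen(S, symmetric = TRUE, only.values = TRUE))  # skip the vectors too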
Computing the eigendecomposition of a matrix is subject to errors on a
real-world computer: the definitive analysis is Wilkinson (1965). All
you can hope for is a solution to a problem suitably close to
x
. So even though a real asymmetric x
may have an
algebraic solution with repeated real eigenvalues, the computed
solution may be of a similar matrix with complex conjugate pairs of
eigenvalues.
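A small sketch of this effect (the perturbation 1e-12 is chosen purely for illustration):

J <- rbind(c(1, 1),
           c(0, 1))           # algebraically: the repeated real eigenvalue 1
eigen(J)$values               # 1 1
P <- rbind(c(1,      1),
           c(-1e-12, 1))      # a matrix within rounding distance of J
eigen(P)$values               # approximately 1 + 1e-06i and 1 - 1e-06i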
Unsuccessful results from the underlying LAPACK code will result in an
error giving a positive error code (most often 1
): these can
only be interpreted by detailed study of the FORTRAN code.
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Anderson, E. and ten others (1999) LAPACK Users' Guide. Third Edition. SIAM. Available on-line at http://www.netlib.org/lapack/lug/lapack_lug.html.
Wilkinson, J. H. (1965) The Algebraic Eigenvalue Problem. Clarendon Press, Oxford.
svd, a generalization of eigen; qr, and chol for related decompositions. To compute the determinant of a matrix, the qr decomposition is much more efficient: det.
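For example (a sketch): the product of the eigenvalues equals the determinant, but going through eigen is the expensive way to obtain it:

A <- cbind(c(2, 1), c(1, 3))
prod(eigen(A, only.values = TRUE)$values)  # 5
det(A)                                     # 5, computed far more cheaply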
eigen(cbind(c(1,-1), c(-1,1)))
eigen(cbind(c(1,-1), c(-1,1)), symmetric = FALSE) # same (different algorithm).
eigen(cbind(1, c(1,-1)), only.values = TRUE)
eigen(cbind(-1, 2:1)) # complex values
eigen(print(cbind(c(0, 1i), c(-1i, 0)))) # Hermitian ==> real eigenvalues
## 3 x 3:
eigen(cbind( 1, 3:1, 1:3))
eigen(cbind(-1, c(1:2,0), 0:2)) # complex values