First consider choldrop. If \(R\) is upper triangular then t(R[,-k])%*%R[,-k] == A[-k,-k], but R[,-k] has elements on the first sub-diagonal from its kth column onwards. To get from this to a triangular Cholesky factor of A[-k,-k] we can apply a sequence of Givens rotations from the left to eliminate the sub-diagonal elements; the routine does just this. If \(R\) is instead a lower triangular factor then Givens rotations from the right are needed to remove the extra elements. If \(n\) is the dimension of \(R\) then the update has \(O(n^2)\) computational cost.
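The left-applied Givens sequence can be sketched as follows in Python/numpy (the function name chol_drop and all details are illustrative, not the package's implementation; an upper triangular \(R\) with R^T R = A is assumed):

```python
import numpy as np

def chol_drop(R, k):
    """Illustrative sketch: given upper-triangular R with R.T @ R = A,
    return upper-triangular R1 with R1.T @ R1 = A[-k, -k].

    Deleting column k of R leaves an n x (n-1) matrix with one
    sub-diagonal element in each of columns k..n-2.  A sequence of
    Givens rotations applied from the left zeroes these, after which
    the (now zero) last row is dropped.
    """
    M = np.delete(R.astype(float), k, axis=1)  # n x (n-1)
    n = R.shape[0]
    for j in range(k, n - 1):
        a, b = M[j, j], M[j + 1, j]            # b is the sub-diagonal element
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        # rotate rows j and j+1; only columns j.. are affected
        rj, rj1 = M[j, j:].copy(), M[j + 1, j:].copy()
        M[j, j:] = c * rj + s * rj1
        M[j + 1, j:] = -s * rj + c * rj1
        M[j + 1, j] = 0.0                      # exact zero, by construction
    return M[:-1, :]                           # last row is now zero
```

Each rotation is orthogonal, so M^T M = A[-k,-k] is preserved throughout, and the result is triangular once the sub-diagonal has been swept out.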
cholup (which assumes \(R\) is upper triangular) updates based on the observation that \( R^TR + uu^T = [u,R^T][u,R^T]^T = [u,R^T]Q^TQ[u,R^T]^T\), and therefore we can construct \(Q\) so that \(Q[u,R^T]^T=[0,R_1^T]^T\), where \(R_1\) is the modified factor. \(Q\) is constructed from a sequence of Givens rotations chosen to zero the elements of \(u\). Downdating is similar, except that hyperbolic rotations have to be used in place of Givens rotations --- see Golub and Van Loan (2013, section 6.5.4) for details. Downdating only works if \(A - uu^T\) is positive definite. Again the computational cost is \(O(n^2)\).
Note that the updates are vector oriented, and hence cannot be accelerated by use of an optimized BLAS. The updates are set up to be relatively cache friendly, in that in the upper triangular case successive Givens rotations are stored and then applied sequentially column-wise, rather than being applied row-wise as soon as they are computed. Even so, the upper triangular update is slightly slower than the lower triangular update.