inchol computes the incomplete Cholesky decomposition
  of the kernel matrix from a data matrix.
inchol(x, kernel="rbfdot", kpar=list(sigma=0.1), tol = 0.001, 
            maxiter = dim(x)[1], blocksize = 50, verbose = 0)

An S4 object of class "inchol", which is an extension of the class "matrix". The object is the decomposed kernel matrix along with the slots:
pivots Indices on which pivots were done
diagresidues Residuals left on the diagonal
maxresiduals Residuals picked for pivoting
The slots can be accessed either by object@slot or by accessor functions with the same name (e.g., pivots(object)).
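A minimal sketch of both access styles, assuming the accessor names pivots, diagresidues and maxresiduals of kernlab's "inchol" class and the iris data used in the Examples below:

library(kernlab)
data(iris)
Z <- inchol(as.matrix(iris[,-5]), kernel="rbfdot", kpar=list(sigma=0.1))
identical(pivots(Z), Z@pivots)  # accessor function and slot access give the same result
head(diagresidues(Z))           # residuals left on the diagonal
head(maxresiduals(Z))           # residuals picked for pivoting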
x The data matrix indexed by row
kernel the kernel function used in training and predicting.
    This parameter can be set to any function, of class kernel,
    which computes the inner product in feature space between two
    vector arguments. kernlab provides the most popular kernel functions
    which can be used by setting the kernel parameter to the following
    strings:
rbfdot Radial Basis kernel function "Gaussian"
polydot Polynomial kernel function
vanilladot Linear kernel function
tanhdot Hyperbolic tangent kernel function
laplacedot Laplacian kernel function
besseldot Bessel kernel function
anovadot ANOVA RBF kernel function
splinedot Spline kernel
The kernel parameter can also be set to a user defined function of class kernel by passing the function name as an argument.
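A rough sketch of passing a user defined kernel (the toy linear kernel below is illustrative only and not part of kernlab):

library(kernlab)
data(iris)
x <- as.matrix(iris[,-5])
# an ordinary R function of two vector arguments, tagged with class "kernel"
k <- function(x, y) sum(x * y)
class(k) <- "kernel"
Zk <- inchol(x, kernel=k)
dim(Zk)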
kpar the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:
sigma inverse kernel width for the Radial Basis
      kernel function "rbfdot" and the Laplacian kernel "laplacedot".
degree, scale, offset for the Polynomial kernel "polydot"
scale, offset for the Hyperbolic tangent kernel
      function "tanhdot"
sigma, order, degree for the Bessel kernel "besseldot".
sigma, degree for the ANOVA kernel "anovadot".
Hyper-parameters for user defined kernels can be passed through the kpar parameter as well.
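A brief sketch of the string/kpar calling style (the hyper-parameter values below are arbitrary):

library(kernlab)
data(iris)
x <- as.matrix(iris[,-5])
Zrbf <- inchol(x, kernel="rbfdot", kpar=list(sigma=0.1))
Zpoly <- inchol(x, kernel="polydot", kpar=list(degree=2, scale=1, offset=1))
dim(Zrbf)
dim(Zpoly)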
tol the algorithm stops when the remaining pivots bring less accuracy than tol (default: 0.001)
maxiter maximum number of iterations and columns in \(Z\)
blocksize number of columns added to the matrix per iteration
verbose print info on algorithm convergence
Alexandros Karatzoglou (based on Matlab code by 
  S.V.N. (Vishy) Vishwanathan and Alex Smola)
alexandros.karatzoglou@ci.tuwien.ac.at
An incomplete Cholesky decomposition calculates \(Z\) where \(K = ZZ'\), \(K\) being the kernel matrix. Since the rank of a kernel matrix is usually low, \(Z\) tends to be smaller than the complete kernel matrix. The decomposed matrix can be used to create memory-efficient kernel-based algorithms without the need to compute and store a complete kernel matrix in memory.
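For example, a kernel-vector product \(Kv = Z(Z'v)\) can be computed from \(Z\) alone; a minimal sketch using the iris data from the Examples below (the full kernel matrix is built here only to check the approximation):

library(kernlab)
data(iris)
x <- as.matrix(iris[,-5])
rbf <- rbfdot(sigma=0.1)
Z <- inchol(x, kernel=rbf)
v <- rnorm(nrow(x))
Kv_approx <- Z %*% crossprod(Z, v)        # K v without ever forming K
Kv_exact  <- kernelMatrix(rbf, x) %*% v   # full kernel matrix, for comparison only
max(abs(Kv_approx - Kv_exact))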
Francis R. Bach, Michael I. Jordan
      Kernel Independent Component Analysis
      Journal of Machine Learning Research  3, 1-48
      https://www.jmlr.org/papers/volume3/bach02a/bach02a.pdf
csi, inchol-class, chol
data(iris)
datamatrix <- as.matrix(iris[,-5])
# initialize kernel function
rbf <- rbfdot(sigma=0.1)
rbf
Z <- inchol(datamatrix,kernel=rbf)
dim(Z)
pivots(Z)
# calculate the approximated kernel matrix Z %*% t(Z)
K <- crossprod(t(Z))
# difference between approximated and real kernel matrix
(K - kernelMatrix(kernel=rbf, datamatrix))[6,]
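# overall size of the approximation error (additional check, not part of the original example)
max(abs(K - kernelMatrix(kernel=rbf, datamatrix)))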