Kernel Feature Analysis is an algorithm for extracting structure from possibly high-dimensional data sets.
Similar to kpca, it finds a new basis for the data, onto which the data can then be projected.
# S4 method for formula
kfa(x, data = NULL, na.action = na.omit, ...)

# S4 method for matrix
kfa(x, kernel = "rbfdot", kpar = list(sigma = 0.1),
    features = 0, subset = 59, normalize = TRUE, na.action = na.omit)
kfa returns an object of class kfa containing the features selected by the algorithm.
xmatrix: contains the features selected
alpha: contains the sparse alpha vector
The predict function can be used to embed new data points into the selected feature basis.
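As a minimal, hedged sketch of working with the returned object (the data set and parameter values are illustrative only, and the xmatrix and alpha accessors are the ones documented for kfa-class):

library(kernlab)

x  <- as.matrix(iris[, -5])                    # any numeric matrix will do
kf <- kfa(x, kernel = "rbfdot", kpar = list(sigma = 0.05), features = 3)

xmatrix(kf)                                    # the selected training patterns
alpha(kf)                                      # the sparse alpha vector
emb <- predict(kf, x[1:5, ])                   # embed points into the feature basis
dim(emb)                                       # 5 rows, one column per selected feature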
x: the data matrix indexed by row, or a formula describing the model. Note that an intercept is always included, whether given in the formula or not.
data: an optional data frame containing the variables in the model (when using a formula).
kernel: the kernel function used in training and predicting. This parameter can be set to any function of class kernel which computes an inner product in feature space between two vector arguments. kernlab provides the most popular kernel functions, which can be used by setting the kernel parameter to one of the following strings:
rbfdot: Radial Basis ("Gaussian") kernel function
polydot: Polynomial kernel function
vanilladot: Linear kernel function
tanhdot: Hyperbolic tangent kernel function
laplacedot: Laplacian kernel function
besseldot: Bessel kernel function
anovadot: ANOVA RBF kernel function
splinedot: Spline kernel function
The kernel parameter can also be set to a user-defined function of class kernel by passing the function name as an argument, as in the sketch following this argument list.
kpar: the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:
sigma: inverse kernel width for the Radial Basis kernel function "rbfdot" and the Laplacian kernel "laplacedot"
degree, scale, offset: for the Polynomial kernel "polydot"
scale, offset: for the Hyperbolic tangent kernel function "tanhdot"
sigma, order, degree: for the Bessel kernel "besseldot"
sigma, degree: for the ANOVA kernel "anovadot"
Hyper-parameters for user-defined kernels can be passed through the kpar parameter as well.
features: the number of features (principal components) to return. (default: 0, all)
subset: the number of features sampled (used) from the data set.
normalize: normalize the features selected (default: TRUE)
na.action: a function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.)
...: additional parameters
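A short, hedged sketch of how the kernel and kpar arguments fit together (the data set, parameter values and the custom kernel body are illustrative only; the user-defined kernel follows the class "kernel" convention used elsewhere in kernlab):

library(kernlab)
x <- as.matrix(iris[, -5])                 # any numeric matrix

## built-in kernels: kpar is a named list matching the chosen kernel
f1 <- kfa(x, kernel = "laplacedot", kpar = list(sigma = 0.05), features = 2)
f2 <- kfa(x, kernel = "polydot",
          kpar = list(degree = 2, scale = 1, offset = 1), features = 2)

## user-defined kernel: any function of class "kernel" returning an
## inner product between two vectors (body chosen arbitrarily here)
myk <- function(x, y) (sum(x * y) + 1) * exp(-0.001 * sum((x - y)^2))
class(myk) <- "kernel"
f3 <- kfa(x, kernel = myk, features = 2)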
Alexandros Karatzoglou
alexandros.karatzoglou@ci.tuwien.ac.at
Kernel Feature analysis is similar to Kernel PCA, but instead of extracting eigenvectors of the training dataset in feature space, it approximates the eigenvectors by selecting training patterns which are good basis vectors for the training set. It works by choosing a fixed size subset of the data set and scaling it to unit length (under the kernel). It then chooses the features that maximize the value of the inner product (kernel function) with the rest of the patterns.
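The snippet below is a deliberately simplified toy of that selection idea, not the kernlab implementation: it scales each candidate pattern to unit length under the kernel and scores it by its summed inner products with all patterns.

library(kernlab)

x   <- as.matrix(iris[, -5])
rbf <- rbfdot(sigma = 0.05)

sub    <- x[sample(nrow(x), 30), ]            # fixed-size candidate subset
K      <- kernelMatrix(rbf, sub, x)           # k(candidate_i, pattern_j)
norms  <- sqrt(diag(kernelMatrix(rbf, sub)))  # sqrt(k(x_i, x_i)) for unit-length scaling
score  <- rowSums(K / norms)                  # inner products with all patterns
chosen <- sub[order(score, decreasing = TRUE)[1:2], ]  # keep the 2 best basis patterns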
Alex J. Smola, Olvi L. Mangasarian and Bernhard Schoelkopf
Sparse Kernel Feature Analysis
Data Mining Institute Technical Report 99-04, October 1999
ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/99-04.ps
kpca, kfa-class
data(promotergene)
f <- kfa(~.,data=promotergene,features=2,kernel="rbfdot",
kpar=list(sigma=0.01))
plot(predict(f,promotergene),col=as.numeric(promotergene[,1]))
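For comparison with the related kpca decomposition mentioned in the description, a hedged variant of the same example (rotated is the kpca accessor for the projected data; the sigma value is taken over unchanged):

## analogous projection with kernel PCA on the same data
kpc <- kpca(~., data = promotergene, kernel = "rbfdot",
            kpar = list(sigma = 0.01), features = 2)
plot(rotated(kpc), col = as.numeric(promotergene[, 1]))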