
Support Vector Machines are an excellent tool for classification,
novelty detection, and regression. ksvm supports the
well-known C-svc, nu-svc (classification), one-class-svc (novelty detection),
eps-svr and nu-svr (regression) formulations, along with
native multi-class classification formulations and
the bound-constraint SVM formulations.
ksvm also supports class-probability output and
confidence intervals for regression.
# S4 method for formula
ksvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

# S4 method for vector
ksvm(x, ...)

# S4 method for matrix
ksvm(x, y = NULL, scaled = TRUE, type = NULL,
     kernel = "rbfdot", kpar = "automatic",
     C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE,
     class.weights = NULL, cross = 0, fit = TRUE, cache = 40,
     tol = 0.001, shrinking = TRUE, ...,
     subset, na.action = na.omit)

# S4 method for kernelMatrix
ksvm(x, y = NULL, type = NULL,
     C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE,
     class.weights = NULL, cross = 0, fit = TRUE, cache = 40,
     tol = 0.001, shrinking = TRUE, ...)

# S4 method for list
ksvm(x, y = NULL, type = NULL,
     kernel = "stringdot", kpar = list(length = 4, lambda = 0.5),
     C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE,
     class.weights = NULL, cross = 0, fit = TRUE, cache = 40,
     tol = 0.001, shrinking = TRUE, ...,
     na.action = na.omit)
x: a symbolic description of the model to be fit. When not
using a formula, x can be a matrix or vector containing the training
data, a kernel matrix of class kernelMatrix of the training data,
or a list of character vectors (for use with the string kernel).
Note that the intercept is always excluded, whether
given in the formula or not.

data: an optional data frame containing the training data, when using a formula. By default the data are taken from the environment from which `ksvm' is called.

y: a response vector with one label for each row/component of x. Can be either
a factor (for classification tasks) or a numeric vector (for regression).

scaled: a logical vector indicating the variables to be
scaled. If scaled is of length 1, the value is recycled as
many times as needed and all non-binary variables are scaled.
By default, data are scaled internally (both x and y
variables) to zero mean and unit variance. The center and scale
values are returned and used for later predictions.
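For instance, scaling can be switched off for individual columns by supplying a logical vector; the small sketch below uses simulated data (the column layout is purely illustrative):
## sketch: leave the binary third column unscaled
set.seed(1)
xtr <- cbind(rnorm(100), rnorm(100, mean = 2), rbinom(100, 1, 0.5))
ytr <- factor(xtr[, 1] + xtr[, 2] > 2)
msc <- ksvm(xtr, ytr, type = "C-svc", scaled = c(TRUE, TRUE, FALSE))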
type: ksvm can be used for classification, for regression, or for novelty detection.
Depending on whether y is a factor or not, the default setting for type
is C-svc or eps-svr, respectively, but this can be overridden by setting an explicit value.
Valid options are:

C-svc: C classification
nu-svc: nu classification
C-bsvc: bound-constraint SVM classification
spoc-svc: Crammer, Singer native multi-class
kbb-svc: Weston, Watkins native multi-class
one-svc: novelty detection
eps-svr: epsilon regression
nu-svr: nu regression
eps-bsvr: bound-constraint SVM regression
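For example, the default C-svc for a factor response can be overridden explicitly (a minimal sketch on simulated data):
## sketch: request a nu-SVM classifier instead of the default C-svc
set.seed(2)
xtr <- matrix(rnorm(200), ncol = 2)
ytr <- factor(ifelse(rowSums(xtr) > 0, "a", "b"))
mnu <- ksvm(xtr, ytr, type = "nu-svc", nu = 0.1)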
kernel: the kernel function used in training and predicting.
This parameter can be set to any function, of class kernel, which
computes the inner product in feature space between two
vector arguments (see kernels).
kernlab provides the most popular kernel functions,
which can be used by setting the kernel parameter to the following strings:

rbfdot: Radial Basis kernel "Gaussian"
polydot: Polynomial kernel
vanilladot: Linear kernel
tanhdot: Hyperbolic tangent kernel
laplacedot: Laplacian kernel
besseldot: Bessel kernel
anovadot: ANOVA RBF kernel
splinedot: Spline kernel
stringdot: String kernel

Setting the kernel parameter to "matrix" treats x as a kernel
matrix, calling the kernelMatrix interface.
The kernel parameter can also be set to a user-defined function of class kernel by passing the function name as an argument.
kpar: the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:

sigma: inverse kernel width for the Radial Basis kernel function "rbfdot" and the Laplacian kernel "laplacedot".
degree, scale, offset: for the Polynomial kernel "polydot".
scale, offset: for the Hyperbolic tangent kernel function "tanhdot".
sigma, order, degree: for the Bessel kernel "besseldot".
sigma, degree: for the ANOVA kernel "anovadot".
length, lambda, normalized: for the "stringdot" kernel,
where length is the length of the strings considered, lambda the
decay factor, and normalized a logical parameter determining if the
kernel evaluations should be normalized.

Hyper-parameters for user-defined kernels can be passed through the
kpar parameter as well. In the case of a Radial Basis kernel function (Gaussian),
kpar can also be set to the string "automatic", which uses the heuristics in
sigest to calculate a good sigma value for the
Gaussian RBF or Laplace kernel from the data (default: "automatic").
C: cost of constraints violation (default: 1). This is the `C'-constant of the regularization term in the Lagrange formulation.

nu: parameter needed for nu-svc, one-svc, and nu-svr. The nu
parameter sets the upper bound on the training error and the lower
bound on the fraction of data points that become Support Vectors (default: 0.2).

epsilon: epsilon in the insensitive-loss function used for
eps-svr, nu-svr and eps-bsvr (default: 0.1).
prob.model: if set to TRUE, builds a model for calculating class
probabilities or, in case of regression, calculates the scaling
parameter of the Laplacian distribution fitted on the residuals.
Fitting is done on output data created by performing a
3-fold cross-validation on the training data. For details see
references. (default: FALSE)
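A minimal sketch of requesting the probability model and querying it at prediction time:
## sketch: fit a probability model and predict class probabilities
data(iris)
mprob <- ksvm(Species ~ ., data = iris, prob.model = TRUE)
head(predict(mprob, iris[, -5], type = "probabilities"))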
class.weights: a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named.
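For instance, a minority class can be up-weighted as follows (the weights shown are arbitrary):
## sketch: asymmetric class weights by factor level
data(iris)
mw <- ksvm(Species ~ ., data = iris,
           class.weights = c(setosa = 1, versicolor = 1, virginica = 5))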
cache: cache memory in MB (default: 40).

tol: tolerance of termination criterion (default: 0.001).

shrinking: option whether to use the shrinking heuristics (default: TRUE).

cross: if an integer value k > 0 is specified, a k-fold cross-validation on the training data is performed to assess the quality of the model: the accuracy rate for classification and the Mean Squared Error for regression.

fit: indicates whether the fitted values should be computed and included in the model or not (default: TRUE).

...: additional parameters for the low-level fitting function.

subset: an index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)

na.action: a function to specify the action to be taken if NAs are
found. The default action is na.omit, which leads to rejection of cases
with missing values on any required variable. An alternative
is na.fail, which causes an error if NA cases
are found. (NOTE: If given, this argument must be named.)
An S4 object of class "ksvm" containing the fitted model.
Accessor functions can be used to access the slots of the object (see
examples), which include:

alpha: The resulting support vectors (alpha vector), possibly scaled.
alphaindex: The index of the resulting support vectors in the data
matrix. Note that this index refers to the pre-processed data (after
the possible effect of na.omit and subset).
coef: The corresponding coefficients times the training labels.
b: The negative intercept.
nSV: The number of Support Vectors.
obj: The value of the objective function. In case of one-against-one classification this is a vector of values.
error: Training error.
cross: Cross-validation error (when cross > 0).
prob.model: Contains the width of the Laplacian fitted on the residuals in case of regression, or the parameters of the sigmoid fitted on the decision values in case of classification.
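A minimal sketch of querying a fitted object through the corresponding accessor functions (nSV, b, error, cross and alphaindex are kernlab accessors):
## sketch: inspecting a fitted ksvm object via accessors
data(iris)
mfit <- ksvm(Species ~ ., data = iris, cross = 5)
nSV(mfit)      # number of support vectors
b(mfit)        # negative intercept(s)
error(mfit)    # training error
cross(mfit)    # cross-validation error (since cross > 0)
head(alphaindex(mfit)[[1]])   # support vector indices of the first binary problem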
ksvm uses John Platt's SMO algorithm for solving the SVM QP problem in
most SVM formulations. For the spoc-svc, kbb-svc, C-bsvc and
eps-bsvr formulations a chunking algorithm based on the TRON QP
solver is used.
For multiclass classification with k classes, k > 2, ksvm uses the
`one-against-one' approach, in which k(k-1)/2 binary classifiers are
trained and the class is chosen by a voting scheme. The spoc-svc
and kbb-svc formulations instead deal with the
multiclass classification problem by solving a single quadratic problem involving all the classes.
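A minimal sketch of a native multi-class formulation (Crammer and Singer) on the iris data:
## sketch: single-problem multi-class SVM instead of one-against-one
data(iris)
mspoc <- ksvm(Species ~ ., data = iris, type = "spoc-svc", C = 10)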
If the predictor variables include factors, the formula interface must be used to get a
correct model matrix.
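A short sketch with a factor predictor (the data frame is simulated and purely illustrative):
## sketch: factor predictors handled through the formula interface
set.seed(3)
df <- data.frame(y  = factor(sample(c("yes", "no"), 100, replace = TRUE)),
                 x1 = rnorm(100),
                 x2 = factor(sample(c("u", "v", "w"), 100, replace = TRUE)))
mfac <- ksvm(y ~ ., data = df, type = "C-svc")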
In classification, when prob.model is TRUE, a 3-fold cross-validation is
performed on the data and a sigmoid function is fitted on the
resulting decision values.

The data can be passed to the ksvm function in a matrix or a
data.frame; in addition ksvm also supports input in the form of a
kernel matrix of class kernelMatrix or as a list of character
vectors where a string kernel has to be used.
The plot
function for binary classification ksvm
objects
displays a contour plot of the decision values with the corresponding
support vectors highlighted.
The predict function can return class probabilities for
classification problems by setting the type
parameter to
"probabilities".
The problem of model selection is partially addressed by an empirical
observation for the RBF kernels (Gaussian, Laplace), where the optimal values of the
width parameter sigma are known to lie between the 0.1 and 0.9 quantile of the
distances between the training points. When setting kpar to "automatic", ksvm uses the sigest function
to estimate these quantiles and uses the median of the values.
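The estimate used can also be inspected directly via sigest, which returns three candidate sigma values (a minimal sketch):
## sketch: sigma range suggested by sigest for an RBF kernel
data(iris)
sigest(Species ~ ., data = iris)   # 0.1, 0.5 and 0.9 quantile estimates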
Chih-Chung Chang and Chih-Jen Lin, LIBSVM: a library for Support Vector Machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm
Chih-Wei Hsu and Chih-Jen Lin, BSVM, http://www.csie.ntu.edu.tw/~cjlin/bsvm/
J. Platt, Probabilistic outputs for support vector machines and comparison to regularized likelihood methods, in Advances in Large Margin Classifiers, A. Smola, P. Bartlett, B. Schoelkopf and D. Schuurmans, Eds. Cambridge, MA: MIT Press, 2000. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639
H.-T. Lin, C.-J. Lin and R. C. Weng, A note on Platt's probabilistic outputs for support vector machines, http://www.csie.ntu.edu.tw/~htlin/paper/doc/plattprob.pdf
C.-W. Hsu and C.-J. Lin, A comparison of methods for multi-class support vector machines, IEEE Transactions on Neural Networks, 13 (2002) 415-425. http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.ps.gz
K. Crammer and Y. Singer, On the learnability and design of output codes for multiclass problems, Computational Learning Theory, 35-46, 2000. http://webee.technion.ac.il/people/koby/publications/ecoc-mlj02.pdf
J. Weston and C. Watkins, Multi-class support vector machines, in M. Verleysen, Proceedings of ESANN99, Brussels, 1999. http://citeseer.ist.psu.edu/8884.html
## simple example using the spam data set
data(spam)
## create test and training set
index <- sample(1:dim(spam)[1])
spamtrain <- spam[index[1:floor(dim(spam)[1]/2)], ]
spamtest <- spam[index[((ceiling(dim(spam)[1]/2)) + 1):dim(spam)[1]], ]
## train a support vector machine
filter <- ksvm(type~.,data=spamtrain,kernel="rbfdot",
kpar=list(sigma=0.05),C=5,cross=3)
filter
## predict mail type on the test set
mailtype <- predict(filter,spamtest[,-58])
## Check results
table(mailtype,spamtest[,58])
## Another example with the famous iris data
data(iris)
## Create a kernel function using the build in rbfdot function
rbf <- rbfdot(sigma=0.1)
rbf
## train a bound constraint support vector machine
irismodel <- ksvm(Species~.,data=iris,type="C-bsvc",
kernel=rbf,C=10,prob.model=TRUE)
irismodel
## get fitted values
fitted(irismodel)
## Test on the training set with probabilities as output
predict(irismodel, iris[,-5], type="probabilities")
## Demo of the plot function
x <- rbind(matrix(rnorm(120),,2),matrix(rnorm(120,mean=3),,2))
y <- matrix(c(rep(1,60),rep(-1,60)))
svp <- ksvm(x,y,type="C-svc")
plot(svp,data=x)
### Use kernelMatrix
K <- as.kernelMatrix(crossprod(t(x)))
svp2 <- ksvm(K, y, type="C-svc")
svp2
# test data
xtest <- rbind(matrix(rnorm(20),,2),matrix(rnorm(20,mean=3),,2))
# test kernel matrix i.e. inner/kernel product of test data with
# Support Vectors
Ktest <- as.kernelMatrix(crossprod(t(xtest),t(x[SVindex(svp2), ])))
predict(svp2, Ktest)
#### Use custom kernel
k <- function(x,y) {(sum(x*y) +1)*exp(-0.001*sum((x-y)^2))}
class(k) <- "kernel"
data(promotergene)
## train svm using custom kernel
gene <- ksvm(Class~.,data=promotergene[c(1:20, 80:100),],kernel=k,
C=5,cross=5)
gene
#### Use text with string kernels
data(reuters)
is(reuters)
tsv <- ksvm(reuters,rlabels,kernel="stringdot",
kpar=list(length=5),cross=3,C=10)
tsv
## regression
# create data
x <- seq(-20,20,0.1)
y <- sin(x)/x + rnorm(401,sd=0.03)
# train support vector machine
regm <- ksvm(x,y,epsilon=0.01,kpar=list(sigma=16),cross=3)
plot(x,y,type="l")
lines(x,predict(regm,x),col="red")