
kernlab (version 0.9-24)

ksvm: Support Vector Machines

Description

Support Vector Machines are an excellent tool for classification, novelty detection, and regression. ksvm supports the well-known C-svc and nu-svc (classification), one-class-svc (novelty detection), and eps-svr and nu-svr (regression) formulations, along with native multi-class classification formulations and the bound-constraint SVM formulations. ksvm also supports class-probability output and confidence intervals for regression.

Usage

"ksvm"(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)
"ksvm"(x, ...)
"ksvm"(x, y = NULL, scaled = TRUE, type = NULL, kernel ="rbfdot", kpar = "automatic", C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE, class.weights = NULL, cross = 0, fit = TRUE, cache = 40, tol = 0.001, shrinking = TRUE, ..., subset, na.action = na.omit)
"ksvm"(x, y = NULL, type = NULL, C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE, class.weights = NULL, cross = 0, fit = TRUE, cache = 40, tol = 0.001, shrinking = TRUE, ...)
"ksvm"(x, y = NULL, type = NULL, kernel = "stringdot", kpar = list(length = 4, lambda = 0.5), C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE, class.weights = NULL, cross = 0, fit = TRUE, cache = 40, tol = 0.001, shrinking = TRUE, ..., na.action = na.omit)

Arguments

x
a symbolic description of the model to be fit. When not using a formula, x can be a matrix or vector containing the training data, a kernel matrix of class kernelMatrix of the training data, or a list of character vectors (for use with the string kernel). Note that the intercept is always excluded, whether given in the formula or not.
data
an optional data frame containing the training data, when using a formula. By default the data is taken from the environment from which ksvm is called.
y
a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).
scaled
A logical vector indicating the variables to be scaled. If scaled is of length 1, the value is recycled as many times as needed and all non-binary variables are scaled. By default, data are scaled internally (both x and y variables) to zero mean and unit variance. The center and scale values are returned and used for later predictions.
type
ksvm can be used for classification, for regression, or for novelty detection. Depending on whether y is a factor or not, the default setting for type is C-svc or eps-svr, respectively, but this can be overridden by setting an explicit value (see the sketch after the list below). Valid options are:

  • C-svc C classification
  • nu-svc nu classification
  • C-bsvc bound-constraint SVM classification
  • spoc-svc Crammer, Singer native multi-class
  • kbb-svc Weston, Watkins native multi-class
  • one-svc novelty detection
  • eps-svr epsilon regression
  • nu-svr nu regression
  • eps-bsvr bound-constraint SVM regression
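
For example, a factor response defaults to C-svc while a numeric response defaults to eps-svr, and either can be overridden explicitly. A minimal sketch using the iris data shipped with R (the parameter values are illustrative only):

    ## defaults to C-svc because Species is a factor
    m1 <- ksvm(Species ~ ., data = iris)
    ## request the nu formulation explicitly
    m2 <- ksvm(Species ~ ., data = iris, type = "nu-svc", nu = 0.2)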
kernel
the kernel function used in training and predicting. This parameter can be set to any function of class kernel which computes the inner product in feature space between two vector arguments (see kernels). kernlab provides the most popular kernel functions, which can be used by setting the kernel parameter to one of the following strings:

  • rbfdot Radial Basis kernel "Gaussian"
  • polydot Polynomial kernel
  • vanilladot Linear kernel
  • tanhdot Hyperbolic tangent kernel
  • laplacedot Laplacian kernel
  • besseldot Bessel kernel
  • anovadot ANOVA RBF kernel
  • splinedot Spline kernel
  • stringdot String kernel
Setting the kernel parameter to "matrix" treats x as a kernel matrix, calling the kernelMatrix interface. The kernel parameter can also be set to a user-defined function of class kernel by passing the function name as an argument.

kpar
the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:

  • sigma inverse kernel width for the Radial Basis kernel function "rbfdot" and the Laplacian kernel "laplacedot"
  • degree, scale, offset for the Polynomial kernel "polydot"
  • scale, offset for the Hyperbolic tangent kernel function "tanhdot"
  • sigma, order, degree for the Bessel kernel "besseldot"
  • sigma, degree for the ANOVA kernel "anovadot"
  • length, lambda, normalized for the "stringdot" kernel, where length is the length of the strings considered, lambda the decay factor, and normalized a logical parameter determining if the kernel evaluations should be normalized
Hyper-parameters for user-defined kernels can be passed through the kpar parameter as well. In the case of a Radial Basis kernel function (Gaussian), kpar can also be set to the string "automatic", which uses the heuristics in sigest to calculate a good sigma value for the Gaussian RBF or Laplace kernel from the data (default: "automatic").
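
As an illustration, a minimal sketch of supplying kernel hyper-parameters through kpar (the kernel choices and parameter values here are arbitrary):

    ## polynomial kernel with explicit hyper-parameters
    mpoly <- ksvm(Species ~ ., data = iris, kernel = "polydot",
                  kpar = list(degree = 3, scale = 1, offset = 1))
    ## Gaussian RBF kernel with sigma chosen by the sigest heuristic
    mrbf <- ksvm(Species ~ ., data = iris, kernel = "rbfdot",
                 kpar = "automatic")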

C
cost of constraints violation (default: 1). This is the 'C' constant of the regularization term in the Lagrange formulation.
nu
parameter needed for nu-svc, one-svc, and nu-svr. The nu parameter sets the upper bound on the training error and the lower bound on the fraction of data points that become Support Vectors (default: 0.2).
epsilon
epsilon in the insensitive-loss function used for eps-svr, nu-svr and eps-bsvr (default: 0.1)
prob.model
if set to TRUE, builds a model for calculating class probabilities or, in case of regression, calculates the scaling parameter of the Laplacian distribution fitted on the residuals. Fitting is done on output data created by performing a 3-fold cross-validation on the training data. For details see references. (default: FALSE)
class.weights
a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named.
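
For example, errors on a rare class can be penalized more heavily. A sketch assuming the spam data used in the examples below (the weights and cost are illustrative):

    ## count each error on the "spam" class twice as heavily
    data(spam)
    wmod <- ksvm(type ~ ., data = spam, C = 5,
                 class.weights = c(spam = 2, nonspam = 1))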
cache
cache memory in MB (default: 40)
tol
tolerance of termination criterion (default: 0.001)
shrinking
option whether to use the shrinking heuristics (default: TRUE)
cross
if an integer value k > 0 is specified, a k-fold cross-validation on the training data is performed to assess the quality of the model: the accuracy rate for classification and the Mean Squared Error for regression
fit
indicates whether the fitted values should be computed and included in the model or not (default: TRUE)
...
additional parameters for the low-level fitting function
subset
an index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)
na.action
a function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.)
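
Both subset and na.action must be passed by name; a brief sketch (the subset chosen here is arbitrary):

    ## train on every other row and fail loudly on missing values
    msub <- ksvm(Species ~ ., data = iris,
                 subset = seq(1, 150, by = 2), na.action = na.fail)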

Value

An S4 object of class "ksvm" containing the fitted model. Accessor functions can be used to access the slots of the object (see examples), which include:
alpha
The resulting support vectors (alpha vector), possibly scaled.
alphaindex
The index of the resulting support vectors in the data matrix. Note that this index refers to the pre-processed data (after the possible effect of na.omit and subset).
coef
The corresponding coefficients times the training labels.
b
The negative intercept.
nSV
The number of Support Vectors.
obj
The value of the objective function. In case of one-against-one classification this is a vector of values.
error
Training error.
cross
Cross-validation error (when cross > 0).
prob.model
Contains the width of the Laplacian fitted on the residuals in case of regression, or the parameters of the sigmoid fitted on the decision values in case of classification.
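
The slots are read through these accessor functions rather than by direct slot access; a brief sketch (the model fitted here is illustrative):

    m <- ksvm(Species ~ ., data = iris, cross = 3)
    alpha(m)   ## support vector coefficients
    b(m)       ## negative intercept(s)
    nSV(m)     ## number of support vectors
    error(m)   ## training error
    cross(m)   ## 3-fold cross-validation error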

Details

ksvm uses John Platt's SMO algorithm for solving the SVM QP problem in most SVM formulations. For the spoc-svc, kbb-svc, C-bsvc and eps-bsvr formulations a chunking algorithm based on the TRON QP solver is used.

For multiclass-classification with $k$ classes, $k > 2$, ksvm uses the 'one-against-one' approach, in which $k(k-1)/2$ binary classifiers are trained; the appropriate class is found by a voting scheme. The spoc-svc and kbb-svc formulations deal with the multiclass-classification problem by solving a single quadratic problem involving all the classes.

If the predictor variables include factors, the formula interface must be used to get a correct model matrix.

In classification, when prob.model is TRUE, a 3-fold cross-validation is performed on the data and a sigmoid function is fitted on the resulting decision values $f$. The predict function can then return class probabilities by setting the type parameter to "probabilities".

The data can be passed to the ksvm function in a matrix or a data.frame; in addition, ksvm also supports input in the form of a kernel matrix of class kernelMatrix or as a list of character vectors where a string kernel has to be used.

The plot function for binary classification ksvm objects displays a contour plot of the decision values with the corresponding support vectors highlighted.

The problem of model selection is partially addressed by an empirical observation for the RBF kernels (Gaussian, Laplace), where the optimal values of the $\sigma$ width parameter are shown to lie between the 0.1 and 0.9 quantiles of the $\|x - x'\|$ statistics. When using an RBF kernel and setting kpar to "automatic", ksvm uses the sigest function to estimate the quantiles and uses the median of the values.
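
The quantile heuristic can also be inspected directly; a short sketch using the formula interface of sigest (see its help page for details):

    ## candidate sigma values derived from the quantiles of ||x - x'||^2
    sigest(Species ~ ., data = iris)
    ## equivalent to letting ksvm pick a value itself
    mauto <- ksvm(Species ~ ., data = iris, kpar = "automatic")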

References

See Also

predict.ksvm, ksvm-class, couple

Examples

    
    ## simple example using the spam data set
    data(spam)
    
    ## create test and training set
    index <- sample(1:dim(spam)[1])
    spamtrain <- spam[index[1:floor(dim(spam)[1]/2)], ]
    spamtest <- spam[index[(floor(dim(spam)[1]/2) + 1):dim(spam)[1]], ]
    
    ## train a support vector machine
    filter <- ksvm(type~.,data=spamtrain,kernel="rbfdot",
                   kpar=list(sigma=0.05),C=5,cross=3)
    filter
    
    ## predict mail type on the test set
    mailtype <- predict(filter,spamtest[,-58])
    
    ## Check results
    table(mailtype,spamtest[,58])
    
    
    ## Another example with the famous iris data
    data(iris)
    
    ## Create a kernel function using the built-in rbfdot function
    rbf <- rbfdot(sigma=0.1)
    rbf
    
    ## train a bound constraint support vector machine
    irismodel <- ksvm(Species~.,data=iris,type="C-bsvc",
                      kernel=rbf,C=10,prob.model=TRUE)
    
    irismodel
    
    ## get fitted values
    fitted(irismodel)
    
    ## Test on the training set with probabilities as output
    predict(irismodel, iris[,-5], type="probabilities")
    
    
    ## Demo of the plot function
    x <- rbind(matrix(rnorm(120),,2),matrix(rnorm(120,mean=3),,2))
    y <- matrix(c(rep(1,60),rep(-1,60)))
    
    svp <- ksvm(x,y,type="C-svc")
    plot(svp,data=x)
    
    
    ### Use kernelMatrix
    K <- as.kernelMatrix(crossprod(t(x)))
    
    svp2 <- ksvm(K, y, type="C-svc")
    
    svp2
    
    # test data
    xtest <- rbind(matrix(rnorm(20),,2),matrix(rnorm(20,mean=3),,2))
    # test kernel matrix i.e. inner/kernel product of test data with
    # Support Vectors
    
    Ktest <- as.kernelMatrix(crossprod(t(xtest),t(x[SVindex(svp2), ])))
    
    predict(svp2, Ktest)
    
    
    #### Use custom kernel 
    
    k <- function(x, y) {(sum(x * y) + 1) * exp(-0.001 * sum((x - y)^2))}
    class(k) <- "kernel"
    
    data(promotergene)
    
    ## train svm using custom kernel
    gene <- ksvm(Class~.,data=promotergene[c(1:20, 80:100),],kernel=k,
                 C=5,cross=5)
    
    gene
    
    
    #### Use text with string kernels
    data(reuters)
    is(reuters)
    tsv <- ksvm(reuters,rlabels,kernel="stringdot",
                kpar=list(length=5),cross=3,C=10)
    tsv
    
    
    ## regression
    # create data
    x <- seq(-20, 20, 0.1)
    y <- sin(x)/x + rnorm(length(x), sd = 0.03)
    
    # train support vector machine
    regm <- ksvm(x,y,epsilon=0.01,kpar=list(sigma=16),cross=3)
    plot(x,y,type="l")
    lines(x,predict(regm,x),col="red")
    
