The Relevance Vector Machine is a Bayesian model for regression and
classification of identical functional form to the support vector
machine.
The rvm function currently supports only regression.
# S4 method for formula
rvm(x, data=NULL, ..., subset, na.action = na.omit)

# S4 method for vector
rvm(x, ...)
# S4 method for matrix
rvm(x, y, type="regression",
kernel="rbfdot", kpar="automatic",
alpha= ncol(as.matrix(x)), var=0.1, var.fix=FALSE, iterations=100,
verbosity = 0, tol = .Machine$double.eps, minmaxdiff = 1e-3,
cross = 0, fit = TRUE, ... , subset, na.action = na.omit)
# S4 method for list
rvm(x, y, type = "regression",
kernel = "stringdot", kpar = list(length = 4, lambda = 0.5),
alpha = 5, var = 0.1, var.fix = FALSE, iterations = 100,
verbosity = 0, tol = .Machine$double.eps, minmaxdiff = 1e-3,
cross = 0, fit = TRUE, ..., subset, na.action = na.omit)
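For illustration, here is a minimal sketch of the formula interface; the data frame df and the column names x1, x2 and y are made up for this example:

library(kernlab)
# toy regression data (hypothetical variable names)
df <- data.frame(x1 = runif(100), x2 = runif(100))
df$y <- sin(5 * df$x1) + rnorm(100, sd = 0.1)
# fit with the default Gaussian RBF kernel and automatic sigma estimation
fit <- rvm(y ~ x1 + x2, data = df)
fit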
An S4 object of class "rvm" containing the fitted model. Accessor functions can be used to access the slots of the object, which include:
The resulting relevance vectors
The index of the resulting relevance vectors in the data matrix
Number of relevance vectors
The indexes of the relevance vectors
Training error (if fit = TRUE)
...
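For instance, a fitted "rvm" object can be inspected through the accessor functions used in the Examples section; this is a minimal sketch, where error() is assumed to behave as for the other kernlab model classes:

x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.05)
foo <- rvm(x, y)
alpha(foo)     # coefficients of the relevance vectors
RVindex(foo)   # indexes of the relevance vectors in the training data
error(foo)     # training error (computed because fit = TRUE by default; accessor assumed)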
a symbolic description of the model to be fit. When not using a formula, x can be a matrix or vector containing the training data, a kernel matrix of class kernelMatrix of the training data, or a list of character vectors (for use with the string kernel). Note that the intercept is always excluded, whether given in the formula or not.
an optional data frame containing the variables in the model. By default the variables are taken from the environment from which rvm is called.
a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).
rvm can only be used for regression at the moment.
the kernel function used in training and predicting. This parameter can be set to any function, of class kernel, which computes a dot product between two vector arguments. kernlab provides the most popular kernel functions which can be used by setting the kernel parameter to the following strings:
rbfdot
Radial Basis kernel "Gaussian"
polydot
Polynomial kernel
vanilladot
Linear kernel
tanhdot
Hyperbolic tangent kernel
laplacedot
Laplacian kernel
besseldot
Bessel kernel
anovadot
ANOVA RBF kernel
splinedot
Spline kernel
stringdot
String kernel
The kernel parameter can also be set to a user defined function of class kernel by passing the function name as an argument.
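For example, a kernel object can be constructed explicitly and passed in place of a string; a minimal sketch, with an arbitrary sigma value:

x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.05)
rbf <- rbfdot(sigma = 0.1)       # explicit Gaussian RBF kernel object
foo <- rvm(x, y, kernel = rbf)   # equivalent to kernel = "rbfdot", kpar = list(sigma = 0.1)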
the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:
sigma
inverse kernel width for the Radial Basis
kernel function "rbfdot" and the Laplacian kernel "laplacedot".
degree, scale, offset
for the Polynomial kernel "polydot"
scale, offset
for the Hyperbolic tangent kernel
function "tanhdot"
sigma, order, degree
for the Bessel kernel "besseldot".
sigma, degree
for the ANOVA kernel "anovadot".
length, lambda, normalized
for the "stringdot" kernel
where length is the length of the strings considered, lambda the
decay factor and normalized a logical parameter determining if the
kernel evaluations should be normalized.
Hyper-parameters for user defined kernels can be passed through the kpar parameter as well. In the case of a Radial Basis kernel function (Gaussian), kpar can also be set to the string "automatic", which uses the heuristics in sigest to calculate a good sigma value for the Gaussian RBF or Laplacian kernel from the data (default = "automatic").
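A minimal sketch contrasting an explicit kpar list with the "automatic" heuristic (the sigma value is arbitrary):

x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.05)
foo1 <- rvm(x, y, kernel = "rbfdot", kpar = list(sigma = 0.05))  # fixed kernel width
foo2 <- rvm(x, y, kernel = "rbfdot", kpar = "automatic")         # sigma estimated via sigest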
The initial alpha vector. Can be either a vector of length equal to the number of data points or a single number.
the initial noise variance
Keep the noise variance fixed during iterations (default: FALSE)
Number of iterations allowed (default: 100)
tolerance of termination criterion
termination criterion: stop when the maximum difference is at most this value (default: 1e-3)
print information on algorithm convergence (default: 0, no output)
indicates whether the fitted values should be computed and included in the model or not (default: TRUE)
if an integer value k > 0 is specified, a k-fold cross-validation on the training data is performed to assess the quality of the model: the Mean Squared Error for regression.
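A minimal sketch of k-fold cross-validation; the cross() accessor is assumed here to return the cross-validation error, as it does for the other kernlab model classes:

x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.05)
foo <- rvm(x, y, cross = 5)   # 5-fold cross-validation on the training data
cross(foo)                    # cross-validation Mean Squared Error (accessor assumed)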
An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)
A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.)
additional parameters
Alexandros Karatzoglou
alexandros.karatzoglou@ci.tuwien.ac.at
The Relevance Vector Machine typically leads to sparser models than the SVM. It also performs better in many cases (especially in regression).
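A quick way to see this on the toy regression problem from the Examples section (a minimal sketch; the exact counts depend on the random data and on the ksvm settings):

x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.05)
rfit <- rvm(x, y)
sfit <- ksvm(x, y, type = "eps-svr")
length(RVindex(rfit))   # number of relevance vectors
nSV(sfit)               # number of support vectors, usually considerably larger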
Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, 211-244.
https://www.jmlr.org/papers/volume1/tipping01a/tipping01a.pdf
ksvm
# create data
x <- seq(-20,20,0.1)
y <- sin(x)/x + rnorm(401,sd=0.05)
# train relevance vector machine
foo <- rvm(x, y)
foo
# print relevance vectors
alpha(foo)
RVindex(foo)
# predict and plot
ytest <- predict(foo, x)
plot(x, y, type ="l")
lines(x, ytest, col="red")