VDA (version 1.3)

vda.r: Vertex Discriminant Analysis

Description

Multicategory Vertex Discriminant Analysis (VDA) for classifying an outcome with k possible categories and p features, based on a data set of n cases. The default penalty function is Ridge. Lasso, Euclidean, and a mixture of Lasso and Euclidean penalties are also available; see vda.le.

Usage

vda.r(x, y, lambda)
vda(x, y, lambda)

Arguments

x
n x p matrix or data frame containing the cases for each feature. The rows correspond to cases and the columns to features. An intercept column should not be included.
y
n x 1 vector representing the outcome variable. Each element denotes which of the k classes the case belongs to.
lambda
Tuning constant. The default value is 1/n. It can also be chosen with cv.vda.r, which uses K-fold cross-validation to determine the optimal value.
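As a sketch of how lambda might be tuned before fitting (the argument names k and lam.vec and the output component lam.opt are assumptions about the cv.vda.r interface; verify them against the package documentation):

```r
# Hypothetical sketch: pick lambda by 10-fold cross-validation, then refit.
# k, lam.vec, and lam.opt are assumed names; check ?cv.vda.r.
library(VDA)
data(zoo)
x <- zoo[, 2:17]
y <- zoo[, 18]

cv <- cv.vda.r(x, y, k = 10, lam.vec = 10^seq(-4, 1, by = 0.5))
out <- vda.r(x, y, lambda = cv$lam.opt)
```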

Value

feature
Feature matrix x with an intercept vector added as the first column. All entries in the first column should equal 1.
stand.feature
The feature matrix in which all columns are standardized, with the exception of the intercept column, which is left unstandardized.
class
Class vector y. All elements should be integers between 1 and k.
cases
Number of cases, n.
classes
Number of classes, k.
features
Number of features, p.
lambda
Tuning constant lambda that was used during analysis.
predicted
Vector of predicted category values based on VDA.
coefficient
The estimated coefficient matrix, whose columns hold the coefficients of each predictor variable corresponding to the k-1 outcome coordinates. The coefficient matrix is used to classify new cases.
training_error_rate
The percentage of instances in the training set where the predicted outcome category is not equal to the case's true category.
call
The matched call.
attr(,"class")
The function returns an object of class "vda.r".
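The predicted and coefficient components can be read geometrically: each of the k classes sits at a vertex of a regular simplex in (k-1)-dimensional space, a case's features are mapped through the fitted coefficients, and the case is assigned to the nearest vertex. Below is a minimal base-R sketch of one standard vertex construction and the nearest-vertex rule; the coordinates are illustrative and not necessarily the exact ones VDA uses internally.

```r
# Illustrative sketch: build k equidistant vertices in R^(k-1) by
# centering the k standard basis vectors of R^k and rotating into the
# (k-1)-dimensional subspace they span. Not the package's internal code.
simplex_vertices <- function(k) {
  E <- diag(k)                           # standard basis vectors as rows
  C <- sweep(E, 2, colMeans(E))          # center so rows sum to zero
  V <- (C %*% svd(C)$v)[, 1:(k - 1), drop = FALSE]
  V                                      # k rows, each a vertex in R^(k-1)
}

V <- simplex_vertices(4)
dist(V)                                  # all pairwise distances are equal

# Nearest-vertex assignment for a mapped case z in R^(k-1),
# mirroring what classification with the coefficient matrix amounts to.
classify <- function(z, V) which.min(rowSums(sweep(V, 2, z)^2))
```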

Details

Outcome classification is based on linear discrimination among the vertices of a regular simplex in (k-1)-dimensional Euclidean space, where each vertex represents one of the categories. Discrimination is phrased as a regression problem involving $\epsilon$-insensitive residuals and an L2 quadratic ("ridge") penalty on the coefficients of the linear predictors. The objective function can be minimized by a primal Majorization-Minimization (MM) algorithm that
  1. relies on quadratic majorization and iteratively re-weighted least squares,
  2. is simpler to program than algorithms that pass to the dual of the original optimization problem, and
  3. can be accelerated by step doubling.
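Written out from the description above (the notation $A$, $b$, and $v_{y_i}$ is ours, and the exact form should be checked against the cited reference), the ridge-penalized objective is roughly:

$$
\min_{A,\,b}\;\frac{1}{n}\sum_{i=1}^{n}\bigl\|v_{y_i}-Ax_i-b\bigr\|_{\epsilon}
\;+\;\lambda\sum_{j=1}^{p}\|a_j\|_2^{2},
\qquad
\|u\|_{\epsilon}=\max\{\|u\|_2-\epsilon,\,0\},
$$

where $v_{y_i}$ is the simplex vertex of case $i$'s class, $A$ is the $(k-1)\times p$ coefficient matrix with columns $a_j$, $b$ is the intercept vector, and $\|u\|_{\epsilon}$ is the $\epsilon$-insensitive Euclidean norm.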

Comparisons on real and simulated data suggest that the MM algorithm for VDA is competitive in statistical accuracy and computational speed with the best currently available algorithms for discriminant analysis, such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), k-nearest neighbors, one-vs-rest binary support vector machines, multicategory support vector machines, classification and regression trees (CART), and random forests.

References

Lange, K. and Wu, T.T. (2008) An MM Algorithm for Multicategory Vertex Discriminant Analysis. Journal of Computational and Graphical Statistics, Volume 17, No 3, 527-544.

See Also

For determining the optimal value of lambda, see cv.vda.r.

For high-dimensional settings and variable selection, see vda.le.

Examples

# load zoo data
# column 1 is name, columns 2:17 are features, column 18 is class
data(zoo)

# matrix containing all predictor vectors
x <- zoo[, 2:17]

# outcome class vector
y <- zoo[, 18]

# run VDA (lambda defaults to 1/n)
out <- vda.r(x, y)

# predict five new cases based on the fitted VDA object
fivecases <- matrix(0, 5, 16)
fivecases[1,] <- c(1,0,0,1,0,0,0,1,1,1,0,0,4,0,1,0)
fivecases[2,] <- c(1,0,0,1,0,0,1,1,1,1,0,0,4,1,0,1)
fivecases[3,] <- c(0,1,1,0,1,0,0,0,1,1,0,0,2,1,1,0)
fivecases[4,] <- c(0,0,1,0,0,1,1,1,1,0,0,1,0,1,0,0)
fivecases[5,] <- c(0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0)
predict(out, fivecases)
