
rchemo (version 0.1-3)

lmrda: LMR-DA models

Description

Discrimination (DA) based on linear regression (LMR).

Usage

lmrda(X, y, weights = NULL)

# S3 method for Lmrda
predict(object, X, ...)

Value

For lmrda:

fm

List with the outputs of the linear model fit: (coefficients): coefficient matrix; (residuals): residual matrix; (fitted.values): the fitted mean values; (effects): component relating to the linear fit, for use by extractor functions; (weights): weights (\(n\)) applied to the training observations; (rank): the numeric rank of the fitted linear model; (assign): component relating to the linear fit, for use by extractor functions; (qr): component relating to the linear fit, for use by extractor functions; (df.residual): the residual degrees of freedom; (xlevels): (only where relevant) a record of the levels of the factors used in fitting; (call): the matched call; (terms): the terms object used; (model): the model frame used.

lev

y levels.

ni

number of observations by level of y.

For predict.Lmrda:

pred

predicted classes of observations.

posterior

posterior probability of belonging to a class for each observation.

Arguments

X

For the main function: Training X-data (\(n, p\)). --- For the auxiliary function: New X-data (\(m, p\)) to consider.

y

Training class membership (\(n\)). Note: If y is a factor, it is replaced by a character vector.

weights

Weights (\(n\)) to apply to the training observations for the LMR. Internally, weights are "normalized" to sum to 1 (see the short sketch after this list). Defaults to NULL (weights are set to \(1 / n\)).

object

For the auxiliary function: A fitted model, output of a call to the main functions.

...

For the auxiliary function: Optional arguments. Not used.
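
A minimal sketch of the weighting convention described for the weights argument, assuming the normalization simply rescales the supplied weights to sum to 1 (the variable names are illustrative, not part of the package API):

n <- 10
w <- runif(n)          # user-supplied weights
w <- w / sum(w)        # assumed internal normalization: weights sum to 1
w0 <- rep(1 / n, n)    # assumed default weights when weights = NULL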

Details

The training variable \(y\) (univariate class membership) is transformed to a dummy table containing \(nclas\) columns, where \(nclas\) is the number of classes present in \(y\). Each column is a dummy variable (0/1). Then, a linear regression model (LMR) is run on the \(X\)-data and the dummy table, returning predictions of the dummy variables. For a given observation, the final prediction is the class corresponding to the dummy variable for which the prediction is the highest.
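
As an illustration, here is a minimal base-R sketch of the mechanism described above (this is not the package implementation; the dummy coding and class recovery are written out by hand on simulated data):

set.seed(1)
n <- 20 ; p <- 3
X <- matrix(rnorm(n * p), ncol = p)
y <- sample(c("a", "b", "c"), size = n, replace = TRUE)
Ydummy <- model.matrix(~ y - 1)      # dummy table (0/1), one column per class
fm <- lm(Ydummy ~ X)                 # LMR on the X-data and the dummy table
lev <- sort(unique(y))               # class levels, in the dummy-column order
pred <- lev[max.col(fitted(fm))]     # class with the highest predicted dummy value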

Examples


## Simulated training and test data
n <- 50 ; p <- 8
Xtrain <- matrix(rnorm(n * p), ncol = p)
ytrain <- sample(c(1, 4, 10), size = n, replace = TRUE)
m <- 5
Xtest <- Xtrain[1:m, ] ; ytest <- ytrain[1:m]

## Fit the LMR-DA model and predict the test classes
fm <- lmrda(Xtrain, ytrain)
names(fm)
predict(fm, Xtest)

## Coefficients of the underlying linear model
coef(fm$fm)

## Classification error rate on the test set
pred <- predict(fm, Xtest)$pred
err(pred, ytest)
