Compute least trimmed squares regression with an \(L_{1}\) penalty on the regression coefficients, which allows for sparse model estimates.
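More precisely, for a subset \(H \subseteq \{1, \ldots, n\}\) of size \(h\), sparse LTS minimizes the objective function \(Q(H, \beta) = \sum_{i \in H} (y_{i} - \boldsymbol{x}_{i}^{T} \boldsymbol{\beta})^{2} + h \lambda \sum_{j=1}^{p} |\beta_{j}|\) over all subsets of that size (Alfons, Croux and Gelper, 2013).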
sparseLTS(x, ...)

# S3 method for formula
sparseLTS(formula, data, ...)
# S3 method for default
sparseLTS(
x,
y,
lambda,
mode = c("lambda", "fraction"),
alpha = 0.75,
normalize = TRUE,
intercept = TRUE,
nsamp = c(500, 10),
initial = c("sparse", "hyperplane", "random"),
ncstep = 2,
use.correction = TRUE,
tol = .Machine$double.eps^0.5,
eps = .Machine$double.eps,
use.Gram,
crit = c("BIC", "PE"),
splits = foldControl(),
cost = rtmspe,
costArgs = list(),
selectBest = c("hastie", "min"),
seFactor = 1,
ncores = 1,
cl = NULL,
seed = NULL,
model = TRUE,
...
)
Value

If crit is "PE" and lambda contains more than one value of the penalty parameter, an object of class "perrySparseLTS" (inheriting from class "perryTuning", see perryTuning) is returned. It contains information on the prediction error criterion, and includes the final model with the optimal tuning parameter as component finalModel.

Otherwise an object of class "sparseLTS" with the following components:
lambda
a numeric vector giving the values of the penalty parameter.
best
an integer vector or matrix containing the respective best subsets of \(h\) observations found and used for computing the raw estimates.
objective
a numeric vector giving the respective values of the sparse LTS objective function, i.e., the \(L_{1}\) penalized sums of the \(h\) smallest squared residuals from the raw fits.
coefficients
a numeric vector or matrix containing the respective coefficient estimates from the reweighted fits.
fitted.values
a numeric vector or matrix containing the respective fitted values of the response from the reweighted fits.
residuals
a numeric vector or matrix containing the respective residuals from the reweighted fits.
center
a numeric vector giving the robust center estimates of the corresponding reweighted residuals.
scale
a numeric vector giving the robust scale estimates of the corresponding reweighted residuals.
cnp2
a numeric vector giving the respective consistency factors applied to the scale estimates of the reweighted residuals.
wt
an integer vector or matrix containing binary weights that indicate outliers from the respective reweighted fits, i.e., the weights are \(1\) for observations with reasonably small reweighted residuals and \(0\) for observations with large reweighted residuals.
df
an integer vector giving the respective degrees of freedom of the obtained reweighted model fits, i.e., the number of nonzero coefficient estimates.
intercept
a logical indicating whether the model includes a constant term.
alpha
a numeric value giving the percentage of the residuals for which the \(L_{1}\) penalized sum of squares was minimized.
quan
the number \(h\) of observations used to compute the raw estimates.
raw.coefficients
a numeric vector or matrix containing the respective coefficient estimates from the raw fits.
raw.fitted.values
a numeric vector or matrix containing the respective fitted values of the response from the raw fits.
raw.residuals
a numeric vector or matrix containing the respective residuals from the raw fits.
raw.center
a numeric vector giving the robust center estimates of the corresponding raw residuals.
raw.scale
a numeric vector giving the robust scale estimates of the corresponding raw residuals.
raw.cnp2
a numeric value giving the consistency factor applied to the scale estimate of the raw residuals.
raw.wt
an integer vector or matrix containing binary weights that indicate outliers from the respective raw fits, i.e., the weights used for the reweighted fits.
crit
an object of class "bicSelect" containing the BIC values and indicating the final model (only returned if argument crit is "BIC" and argument lambda contains more than one value for the penalty parameter).
x
the predictor matrix (if model is TRUE).
y
the response variable (if model is TRUE).
call
the matched function call.
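For example, the components of a fit for a single value of the penalty parameter can be inspected as follows (a minimal sketch, assuming x and y as generated in the examples below):

## sketch: inspect the components of a "sparseLTS" object
fit <- sparseLTS(x, y, lambda = 0.05, mode = "fraction")
fit$coefficients    # coefficient estimates from the reweighted fit
fit$best            # best subset of h observations
fit$df              # number of nonzero coefficient estimates
which(fit$wt == 0)  # observations flagged as outliers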
Arguments

x
a numeric matrix containing the predictor variables.
...
additional arguments to be passed down.
formula
a formula describing the model.
data
an optional data frame, list or environment (or object coercible to a data frame by as.data.frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which sparseLTS is called.
y
a numeric vector containing the response variable.
lambda
a numeric vector of non-negative values to be used as penalty parameter.
mode
a character string specifying the type of penalty parameter. If "lambda", lambda gives the grid of values for the penalty parameter directly. If "fraction", the smallest value of the penalty parameter that sets all coefficients to 0 is first estimated based on bivariate winsorization, then lambda gives the fractions of that estimate to be used (hence all values of lambda should be in the interval [0,1] in that case). See the first sketch after this argument list for an example.
alpha
a numeric value giving the percentage of the residuals for which the \(L_{1}\) penalized sum of squares should be minimized (the default is 0.75).
normalize
a logical indicating whether the predictor variables should be normalized to have unit \(L_{2}\) norm (the default is TRUE). Note that normalization is performed on the subsamples rather than the full data set.
intercept
a logical indicating whether a constant term should be included in the model (the default is TRUE).
nsamp
a numeric vector giving the number of subsamples to be used in the two phases of the algorithm. The first element gives the number of initial subsamples to be used. The second element gives the number of subsamples to keep after the first phase of ncstep C-steps. For those remaining subsets, additional C-steps are performed until convergence. The default is to first perform ncstep C-steps on 500 initial subsamples, and then to keep the 10 subsamples with the lowest value of the objective function for additional C-steps until convergence.
initial
a character string specifying the type of initial subsamples to be used. If "sparse", the lasso fit given by three randomly selected data points is first computed. The corresponding initial subsample is then formed by the fraction alpha of data points with the smallest squared residuals. Note that this is optimal from a robustness point of view, as the probability of including an outlier in the initial lasso fit is minimized. If "hyperplane", a hyperplane through \(p\) randomly selected data points is first computed, where \(p\) denotes the number of variables. The corresponding initial subsample is then again formed by the fraction alpha of data points with the smallest squared residuals. Note that this cannot be applied if \(p\) is larger than the number of observations. Nevertheless, the probability of including an outlier increases with increasing dimension \(p\). If "random", the initial subsamples are given by a fraction alpha of randomly selected data points. Note that this leads to the largest probability of including an outlier.
ncstep
a positive integer giving the number of C-steps to perform on all subsamples in the first phase of the algorithm (the default is to perform two C-steps).
use.correction
currently ignored. Small sample correction factors may be added in the future.
tol
a small positive numeric value giving the tolerance for convergence.
eps
a small positive numeric value used to determine whether the variability within a variable is too small (an effective zero).
use.Gram
a logical indicating whether the Gram matrix of the explanatory variables should be precomputed in the lasso fits on the subsamples. If the number of variables is large, computation may be faster when this is set to FALSE. The default is to use TRUE if the number of variables is smaller than the number of observations in the subsamples and smaller than 100, and FALSE otherwise.
crit
a character string specifying the optimality criterion to be used for selecting the final model. Possible values are "BIC" for the Bayes information criterion and "PE" for resampling-based prediction error estimation. This is ignored if lambda contains only one value of the penalty parameter, as selecting the optimal value is trivial in that case. See the sketches after this argument list for an example of prediction error estimation.
splits
an object giving data splits to be used for prediction error estimation (see perryTuning). This is only relevant if selecting the optimal lambda via prediction error estimation.
cost
a cost function measuring prediction loss (see perryTuning for some requirements). The default is to use the root trimmed mean squared prediction error (see cost). This is only relevant if selecting the optimal lambda via prediction error estimation.
costArgs
a list of additional arguments to be passed to the prediction loss function cost. This is only relevant if selecting the optimal lambda via prediction error estimation.
selectBest, seFactor
arguments specifying a criterion for selecting the best model (see perryTuning). The default is to use a one-standard-error rule. This is only relevant if selecting the optimal lambda via prediction error estimation.
ncores
a positive integer giving the number of processor cores to be used for parallel computing (the default is 1 for no parallelization). If this is set to NA, all available processor cores are used. For prediction error estimation, parallel computing is implemented on the R level using package parallel. Otherwise parallel computing is implemented on the C++ level via OpenMP (https://www.openmp.org/). See the last sketch after this argument list for an example.
cl
a parallel cluster for parallel computing as generated by makeCluster. This is preferred over ncores for prediction error estimation, in which case ncores is only used on the C++ level for computing the final model.
seed
optional initial seed for the random number generator (see .Random.seed). On parallel R worker processes for prediction error estimation, random number streams are used and the seed is set via clusterSetRNGStream.
model
a logical indicating whether the data x and y should be added to the return object. If intercept is TRUE, a column of ones is added to x to account for the intercept.
Author

Andreas Alfons
References

Alfons, A., Croux, C. and Gelper, S. (2013) Sparse least trimmed squares regression for analyzing high-dimensional large data sets. The Annals of Applied Statistics, 7(1), 226--248. doi:10.1214/12-AOAS575
See Also

coef, fitted, plot, predict, residuals, rstandard, weights, ltsReg
Examples

## generate data
# example is not high-dimensional to keep computation time low
library("mvtnorm")
set.seed(1234) # for reproducibility
n <- 100 # number of observations
p <- 25 # number of variables
beta <- rep.int(c(1, 0), c(5, p-5)) # coefficients
sigma <- 0.5 # controls signal-to-noise ratio
epsilon <- 0.1 # contamination level
Sigma <- 0.5^t(sapply(1:p, function(i, j) abs(i-j), 1:p))  # correlations 0.5^|i-j|
x <- rmvnorm(n, sigma=Sigma) # predictor matrix
e <- rnorm(n) # error terms
i <- 1:ceiling(epsilon*n) # observations to be contaminated
e[i] <- e[i] + 5 # vertical outliers
y <- c(x %*% beta + sigma * e) # response
x[i,] <- x[i,] + 5 # bad leverage points
## fit sparse LTS model for one value of lambda
sparseLTS(x, y, lambda = 0.05, mode = "fraction")
## fit sparse LTS models over a grid of values for lambda
frac <- seq(0.2, 0.05, by = -0.05)
sparseLTS(x, y, lambda = frac, mode = "fraction")
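## apply some of the methods listed under 'See Also' to a fit
# (a sketch; weights() with type = "robustness" is assumed to return
# the binary outlier weights, and coef() is assumed to default to the
# optimal value of lambda)
fit <- sparseLTS(x, y, lambda = frac, mode = "fraction")
coef(fit)            # coefficient estimates
head(fitted(fit))    # fitted values
head(residuals(fit)) # residuals
weights(fit, type = "robustness")  # binary outlier weights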