MTE (version 1.0.2)

MTElasso: MTE-Lasso estimator

Description

MTElasso is the penalized MTE estimator for robust estimation and variable selection in linear regression. It handles both fixed-dimensional and high-dimensional settings.

Usage

MTElasso(
  X,
  y,
  beta.ini,
  p,
  lambda,
  adaptive = TRUE,
  t,
  method = "MTE",
  intercept = FALSE,
  penalty.factor = rep(1, ncol(X)),
  ...
)

Arguments

X

design matrix; standardization is recommended.

y

response vector.

beta.ini

initial estimate of beta. If not specified, the LAD-Lasso estimate from rq.lasso.fit() in the rqPen package is used. If supplied, a robust estimator is strongly recommended.

p

Taylor expansion order.

lambda

regularization parameter for the LASSO; not required if adaptive = TRUE.

adaptive

logical argument indicating whether the adaptive Lasso is used. Default is TRUE.

t

the tangent point. A sequence of values may be supplied, in which case the function automatically selects the optimal one (see the sketch after this argument list).

method

either "MTE" or "MLE". The default is "MTE"; if "MLE", the classical LASSO is used.

intercept

logical argument indicating whether an intercept should be estimated. Default is FALSE.

penalty.factor

penalty weights for individual coefficients; can be used to force coefficients to be nonzero (unpenalized). Default is rep(1, ncol(X)), as in glmnet.

...

other arguments passed to glmnet.
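
The sketch below is illustrative and not part of the package documentation: it shows how the arguments above fit together, using a standardized design matrix, a sequence of candidate tangent points t from which the function selects the optimal one, and the adaptive Lasso penalty so that lambda need not be supplied. The data are simulated purely for illustration.

library(MTE)

set.seed(1)
n <- 100; d <- 50
X <- scale(matrix(rnorm(n * d), n, d))      # standardization is recommended
beta <- c(rep(1.5, 4), rep(0, d - 4))
y <- X %*% beta + rt(n, df = 2)             # heavy-tailed errors
# Candidate tangent points; the optimal one is selected automatically
fit <- MTElasso(X, y, p = 2, t = c(0.01, 0.05, 0.1), adaptive = TRUE)
fit$t                                       # selected tangent point
fit$beta                                    # sparse coefficient estimates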

Value

Returns a sparse vector of estimated linear regression coefficients. Two penalties are supported, LASSO and adaptive Lasso (AdaLasso). A coordinate descent algorithm is used to iteratively update the coefficients.

beta

sparse vector of regression coefficient estimates

fitted

predicted response

t

optimal tangent point

Examples

library(MTE)

set.seed(2017)
n <- 200; d <- 500
X <- matrix(rnorm(n * d), nrow = n, ncol = d)
beta <- c(rep(2, 6), rep(0, d - 6))
# Responses with contaminated, heavy-tailed errors
y <- X %*% beta + c(rnorm(150), rnorm(30, 10, 10), rnorm(20, 0, 100))
output.MTELasso <- MTElasso(X, y, p = 2, t = 0.05, method = "MTE")
beta.est <- output.MTELasso$beta
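
# Illustrative follow-up (not part of the package example): the remaining
# returned components, named as in the Value section above, can be inspected.
selected <- which(beta.est != 0)     # indices of selected (nonzero) coefficients
y.hat <- output.MTELasso$fitted      # predicted response
t.opt <- output.MTELasso$t           # tangent point used by the fit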
