
camel (version 0.2.0)

camel.slim: Calibrated Linear Regression

Description

The function "camel.slim" implements the LAD Lasso (L1 loss), SQRT Lasso (L2 loss), and the calibrated Dantzig selector, all using L1 regularization.

Usage

camel.slim(X, Y, lambda = NULL, nlambda = NULL, lambda.min.ratio = NULL,
  method = "lq", q = 2, prec = 1e-4, max.ite = 1e4, mu = 0.01,
  intercept = TRUE, verbose = TRUE)

Arguments

Y
The $n$ dimensional response vector.
X
The $n$ by $d$ design matrix.
lambda
A sequence of decreasing positive values to control the regularization. Typical usage is to leave the input lambda = NULL and have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Users can also specify a sequence to override this. The default sequence runs from $lambda.max$ to lambda.min.ratio*lambda.max. For Lq regression, the default value of $lambda.max$ is $\pi\sqrt{\log(d)/n}$. For the Dantzig selector, the default value of $lambda.max$ is the smallest regularization parameter that yields an all-zero estimate.
nlambda
The number of values used in lambda. Default value is 5.
lambda.min.ratio
The smallest value for lambda, as a fraction of the upper bound (MAX) of the regularization parameter. The program automatically generates lambda as a sequence of length nlambda running from MAX to lambda.min.ratio*MAX in log scale (see the sketch after this list). The default value is 0.25 for Lq Lasso and 0.5 for the Dantzig selector.
method
Dantzig selector is applied if method = "dantzig" and $L_q$ Lasso is applied if method = "lq". The default value is "lq".
q
The loss function used in Lq Lasso. It is only applicable when method = "lq" and must be either 1 or 2. The default value is 2.
prec
Stopping criterion. The default value is 1e-4.
max.ite
The iteration limit. The default value is 1e4.
mu
The smoothing parameter. The default value is 0.01.
intercept
Whether the intercept is included in the model. The default value is TRUE.
verbose
Tracing information is disabled if verbose = FALSE. The default value is TRUE.
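
The following sketch shows how a default-style lambda sequence for Lq regression might be constructed from the defaults described above; this is an illustration of the documented scheme, not the package's internal code, and the exact computation may differ.

## Assumed reconstruction of the default lambda grid for Lq regression:
## a log-spaced sequence of length nlambda from lambda.max down to
## lambda.min.ratio * lambda.max.
n <- 200; d <- 400
lambda.max <- pi * sqrt(log(d) / n)   # default upper bound for Lq regression
lambda.min.ratio <- 0.25              # default for Lq Lasso
nlambda <- 5                          # default number of lambda values
lambda <- exp(seq(log(lambda.max), log(lambda.min.ratio * lambda.max),
                  length.out = nlambda))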

Value

An object with S3 class "camel.slim" is returned:
beta
A matrix of regression estimates whose columns correspond to regularization parameters.
intercept
The intercept values corresponding to the regularization parameters.
Y
The value of Y used in the program.
X
The value of X used in the program.
lambda
The sequence of regularization parameters lambda used in the program.
nlambda
The number of values used in lambda.
method
The method from the input.
sparsity
The sparsity levels of the solution path.
ite
A list of vectors where ite[[1]] is the number of external iterations and ite[[2]] is the number of internal iterations, with the i-th entry corresponding to the i-th regularization parameter.
verbose
The verbose from the input.
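
A minimal sketch of inspecting these components on a fitted object follows; the data and call are illustrative only and assume the camel package is attached.

library(camel)
set.seed(1)
n <- 100; d <- 50
X <- matrix(rnorm(n * d), n, d)
Y <- X %*% c(2, 1.5, rep(0, d - 2)) + rnorm(n)
out <- camel.slim(X, Y, nlambda = 5)
out$lambda    # regularization parameters used by the program
out$sparsity  # sparsity level of the solution at each lambda
dim(out$beta) # one column of estimates per regularization parameter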

Details

Calibrated linear regression adjusts the regularization with respect to the noise level, and thus achieves both improved finite-sample performance and insensitivity to tuning.

References

1. A. Belloni, V. Chernozhukov and L. Wang. Pivotal recovery of sparse signals via conic programming. Biometrika, 2012.
2. L. Wang. L1 penalized LAD estimator for high dimensional linear regression. Journal of Multivariate Analysis, 2013.
3. E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2007.

See Also

camel-package.

Examples

library(camel)

## Generate the design matrix and regression coefficient vector
n = 200
d = 400
X = matrix(rnorm(n*d), n, d)
beta = c(3,2,0,1.5,rep(0,d-4))

## Generate response using Gaussian noise, and fit a sparse linear model using SQRT Lasso
eps.sqrt = rnorm(n)
Y.sqrt = X%*%beta + eps.sqrt
out.sqrt = camel.slim(X = X, Y = Y.sqrt, lambda = seq(0.8,0.2,length.out=5))

## Generate response using Cauchy noise, and fit a sparse linear model using LAD Lasso
eps.lad = rt(n = n, df = 1)
Y.lad = X%*%beta + eps.lad
out.lad = camel.slim(X = X, Y = Y.lad, q = 1, lambda = seq(0.5,0.2,length.out=5))

## Visualize the solution path
plot(out.sqrt)
plot(out.lad)
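
## Not part of the original example: a hedged sketch fitting the same
## SQRT-Lasso data with the calibrated Dantzig selector (method = "dantzig");
## the lambda sequence below is chosen for illustration only.
out.dantzig = camel.slim(X = X, Y = Y.sqrt, method = "dantzig",
                         lambda = seq(0.5,0.2,length.out=5))
plot(out.dantzig)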
