l2boost (version 1.0.3)

l2boost: Generic gradient descent boosting method for linear regression.

Description

Efficient implementation of Friedman's boosting algorithm [Friedman (2001)] with L2-loss function and coordinate direction (design matrix columns) basis functions. This includes the elasticNet data augmentation of Ehrlinger and Ishwaran (2012), which adds an L2-penalization (lambda) similar to the elastic net [Zou and Hastie (2005)].

Usage

l2boost(x, ...)

# S3 method for default
l2boost(x, y, M, nu, lambda, trace, type, qr.tolerance, eps.tolerance, ...)

# S3 method for formula
l2boost(formula, data, ...)

Arguments

x

design matrix of dimension n x p

...

other arguments (currently unused)

y

response variable of length n

M

number of steps to run the boosting algorithm (M > 1)

nu

L1 shrinkage parameter (0 < nu <= 1)

lambda

L2 shrinkage parameter used for elastic net boosting (lambda > 0, or lambda = NULL for no L2 penalization)

trace

show runtime messages (default: FALSE)

type

Choice of l2boost algorithm: one of "discrete", "hybrid", "friedman", or "lars". See Details below. (default: "discrete")

qr.tolerance

tolerance limit for use in qr.solve (default: 1e-30)

eps.tolerance

dynamic step size lower limit (default: .Machine$double.eps)

formula

an object of class formula (or one that can be coerced to that class): a symbolic description of the model to be fitted. The details of model specification are given under formula.

data

an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model used in the formula.
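For concreteness, a minimal sketch of both calling conventions; the data frame df and its column names are hypothetical:

library(l2boost)

## Hypothetical data for illustration
set.seed(1)
df <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
df$y <- 2 * df$x1 + rnorm(50)

## Default interface: design matrix plus response vector
fit1 <- l2boost(as.matrix(df[, c("x1", "x2")]), df$y, M = 500, nu = 0.05)

## Formula interface: equivalent model specification
fit2 <- l2boost(y ~ x1 + x2, data = df, M = 500, nu = 0.05)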

Value

A "l2boost" object is returned, for which print, plot, predict, and coef methods exist.

call

the matched call.

type

Choice of l2boost algorithm from "friedman", "discrete", "hybrid", "lars"

nu

The L1 boosting shrinkage parameter value

lambda

The L2 elasticNet shrinkage parameter value

x

The training dataset

x.na

Columns of the original design matrix containing NA values; these columns have been removed from x

x.attr

scale attributes of design matrix

names

Column names of design matrix

y

training response vector associated with x, centered about the mean value ybar

ybar

mean value of training response vector

mjk

measure of favorability. This is a matrix of size p by M; each coordinate j has a measure at each step m

stepSize

vector of step lengths taken (NULL unless type = "lars")

l.crit

vector of column indices of the critical (selected) directions

L.crit

number of steps along each l.crit direction

S.crit

The critical step value where a direction change occurs

path.Fm

estimates of response at each step m

Fm

estimate of response at final step M

rhom.path

boosting parameter estimate at each step m

betam.path

beta parameter estimates at each step m. List of m vectors of length p

betam

beta parameter estimate at final step M

The notation for the return values is described in Ehrlinger and Ishwaran (2012).
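Components of the returned object can be accessed directly. A short sketch using the diabetes data from the Examples below:

library(l2boost)
data(diabetes, package = "l2boost")

fit <- l2boost(diabetes$x, diabetes$y, M = 1000, nu = 0.01)

fit$betam          # beta estimates at the final step M
fit$ybar           # mean of the training response
head(fit$l.crit)   # first few selected coordinate directions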

Details

The l2boost function is an efficient implementation of a generic boosting method [Friedman (2001)] for linear regression using an L2-loss function. The basis functions are the column vectors of the design matrix. l2boost scales the design matrix such that the coordinate columns of the design correspond to the gradient directions for each covariate. The boosting coefficients are equivalent to the gradient-correlation of each covariate. Friedman's gradient descent boosting algorithm proceeds at each step along the covariate direction closest (in L2 distance) to the maximal gradient descent direction.
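To make the mechanics concrete, the following is an illustrative sketch of a single L2-loss boosting step on a column-scaled design. It is not the package's internal implementation, only the update it performs:

## One illustrative boosting step: pick the coordinate whose
## gradient-correlation rho is largest in absolute value, then
## take a shrunken step of size nu along it.
boost_step <- function(x, r, beta, nu = 0.1) {
  rho <- drop(crossprod(x, r)) / colSums(x^2)  # per-column gradient-correlation
  k <- which.max(abs(rho))                     # closest descent direction
  beta[k] <- beta[k] + nu * rho[k]             # shrunken coefficient update
  r <- r - nu * rho[k] * x[, k]                # update residuals
  list(beta = beta, r = r, k = k)
}

## Usage: repeated steps drive the residual toward the least-squares fit
set.seed(42)
x <- scale(matrix(rnorm(200), 50, 4))
y <- x[, 1] + rnorm(50)
r <- y - mean(y)                               # centered response as initial residual
beta <- numeric(4)
for (m in 1:25) {
  s <- boost_step(x, r, beta, nu = 0.1)
  beta <- s$beta
  r <- s$r
}
beta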

We include a series of algorithms to solve the boosting optimization. These are selected through the type argument (a short comparison follows the list):

  • friedman - The original, bare-bones l2boost (Friedman (2001)). This method takes a fixed step size of length nu.

  • lars - The l2boost-lars-limit (see Efron et al. (2004)). This algorithm takes a single step of the optimal length to the critical point at which a new coordinate direction becomes favorable. Although optimal in the number of steps required to reach the OLS solution, this method may be computationally expensive for large p problems, as it requires a matrix inversion to calculate the step length.

  • discrete - Optimized Friedman algorithm to reduce number of evaluations required [Ehrlinger and Ishwaran 2012]. The algorithm dynamically determines the number of steps of length nu to take along a descent direction. The discrete method allows the algorithm to take step sizes of multiples of nu at any evaluation.

  • hybrid - Similar to discrete, but only allows combining steps along the first descent direction. hybrid works best if nu is moderate, but not too small. In this case, Friedman's algorithm would take many steps along the first coordinate direction, and then cycle when multiple coordinates have similar gradient directions (by the L2 measure).
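As a rough comparison, the algorithm variants can be run on the same problem by varying type. A sketch using the diabetes data; step counts here are illustrative:

library(l2boost)
data(diabetes, package = "l2boost")

fit.friedman <- l2boost(diabetes$x, diabetes$y, M = 500, nu = 0.05, type = "friedman")
fit.discrete <- l2boost(diabetes$x, diabetes$y, M = 500, nu = 0.05, type = "discrete")
fit.lars     <- l2boost(diabetes$x, diabetes$y, M = 50, nu = 0.05, type = "lars")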

l2boost keeps track of all gradient-correlation coefficients (rho) at each iteration in addition to the maximal descent direction taken by the method. Visualizing these coefficients can be informative of the inner workings of gradient boosting (see the examples in the plot.l2boost method).
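For instance, mirroring the Examples below:

library(l2boost)
data(diabetes, package = "l2boost")
fit <- l2boost(diabetes$x, diabetes$y, M = 1000, nu = 0.01)
plot(fit)                 # gradient-correlation (rho) trajectories
plot(fit, type = "coef")  # corresponding beta coefficient paths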

The l2boost function uses an arbitrary L1-regularization parameter (nu), and includes the elasticNet data augmentation of Ehrlinger and Ishwaran (2012), which adds an L2-penalization (lambda) similar to the elastic net [Zou and Hastie (2005)]. The L2-regularization reverses repressibility, a condition where one variable acts as a boosting surrogate for other, possibly informative, variables. Along with the decorrelation effect, this elasticBoost regularization circumvents L2Boost deficiencies in correlated settings.
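An elasticBoost fit differs from a plain l2boost call only in supplying a positive lambda; the values here are illustrative:

library(l2boost)
data(diabetes, package = "l2boost")
## lambda = NULL requests plain l2boost (as in Example 2 below);
## lambda > 0 adds the elastic net L2 penalty
en.fit <- l2boost(diabetes$x, diabetes$y, M = 1000, nu = 0.01, lambda = 0.1)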

We include a series of S3 functions for working with l2boost objects: print.l2boost, plot.l2boost, predict.l2boost, coef.l2boost, residuals.l2boost, and fitted.l2boost (see See Also).

A cross-validation method (cv.l2boost) is also included for L2boost and elasticBoost, providing cross-validated error estimates and regularization parameter optimization.
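A sketch of a cross-validated fit; the K fold-count argument follows cv.l2boost's interface (see its help page for the full argument list):

library(l2boost)
data(diabetes, package = "l2boost")
## K = 10 folds; remaining arguments are passed through as in l2boost
cv.fit <- cv.l2boost(diabetes$x, diabetes$y, K = 10, M = 1000, nu = 0.01)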

References

Friedman J. (2001). "Greedy function approximation: A gradient boosting machine." Ann. Statist., 29(5):1189-1232.

Ehrlinger J. and Ishwaran H. (2012). "Characterizing l2boosting." Ann. Statist., 40(2):1074-1101.

Zou H. and Hastie T. (2005). "Regularization and variable selection via the elastic net." J. R. Statist. Soc. B, 67(2):301-320.

Efron B., Hastie T., Johnstone I., and Tibshirani R. (2004). "Least Angle Regression." Ann. Statist., 32(2):407-499.

See Also

print.l2boost, plot.l2boost, predict.l2boost, coef.l2boost, residuals.l2boost, fitted.l2boost methods of l2boost and cv.l2boost for K fold cross-validation of the l2boost method.

Examples

#--------------------------------------------------------------------------
# Example 1: Diabetes data
#  
# See Efron B., Hastie T., Johnstone I., and Tibshirani R. 
# Least angle regression. Ann. Statist., 32:407-499, 2004.
data(diabetes, package="l2boost")

l2.object <- l2boost(diabetes$x, diabetes$y, M = 1000, nu = 0.01)

# Plot the boosting rho, and regression beta coefficients as a function of
# boosting steps m
#
# Note: The selected coordinate trajectories are colored in red after selection, and 
# blue before. Unselected coordinates are colored grey.
#
par(mfrow=c(2,2))
plot(l2.object)
plot(l2.object, type="coef")

# increased shrinkage and number of iterations.
l2.shrink <- l2boost(diabetes$x, diabetes$y, M = 5000, nu = 1e-3)
plot(l2.shrink)
plot(l2.shrink, type="coef")

#--------------------------------------------------------------------------
# Example 2: elasticBoost simulation
# Compare l2boost and elastic net boosting
# 
# See Zou H. and Hastie T. Regularization and variable selection via the 
# elastic net. J. Royal Statist. Soc. B, 67(2):301-320, 2005
set.seed(1025)

# The default simulation uses 40 covariates with signal concentrated on 
# 3 groups of 5 correlated covariates (for 15 signal covariates)
dta <- elasticNetSim(n=100)

# l2boost the simulated data with groups of correlated coordinates
l2.object <- l2boost(dta$x, dta$y, M = 10000, nu = 1e-3, lambda = NULL)

par(mfrow=c(2,2))
# plot the l2boost trajectories over all M
plot(l2.object, main="l2Boost nu=1.e-3")
# Then zoom into the first m=500 steps
plot(l2.object, xlim=c(0,500), ylim=c(.25,.5), main="l2Boost nu=1.e-3")

# elasticNet boost the same data with L2 penalty parameter lambda = 0.1
en.object <- l2boost(dta$x, dta$y, M = 10000, nu = 1e-3, lambda = 0.1)

# plot the elasticNet trajectories over all M
#
# Note: elasticBoost selects all coordinates close to the selection boundary,
# whereas l2boost leaves some unselected (in grey)
plot(en.object, main="elasticBoost nu=1.e-3, lambda=.1")
# Then zoom into the first m=500 steps
plot(en.object, xlim=c(0,500), ylim=c(.25,.5),
  main="elasticBoost nu=1.e-3, lambda=.1")
