
bst (version 0.3-24)

rbstpath: Robust Boosting Path for Nonconvex Loss Functions

Description

Gradient boosting path for optimizing robust loss functions, with componentwise linear models, smoothing splines, or trees as base learners. See Details below before use.

Usage

rbstpath(x, y, rmstop=seq(40, 400, by=20), ctrl=bst_control(), del=1e-16, ...)

Value

A list with length(rmstop) elements, each an object of class rbst fitted with the corresponding number of boosting iterations.

Arguments

x

a data frame containing the variables in the model.

y

vector of responses. y must be in {1, -1}.

rmstop

vector of numbers of boosting iterations.

ctrl

an object of class bst_control.

del

convergence criterion.

...

additional arguments passed to rbst.

Author

Zhu Wang

Details

This function invokes rbst with mstop set to each element of the vector rmstop, producing one robust boosting fit per value; see the sketch below. Thus rmstop serves as another hyper-parameter. However, the most important hyper-parameter is the loss truncation point, i.e., the point that determines the level of nonconvexity. This is an experimental function and may not be needed in practice.
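
A minimal sketch of the idea, where my_rbstpath is a hypothetical stand-in for illustration (not the package source); it assumes the remaining arguments are forwarded to rbst unchanged:

library(bst)
my_rbstpath <- function(x, y, rmstop, ctrl = bst_control(), ...) {
  lapply(rmstop, function(m) {
    ctrl$mstop <- m               # one fit per candidate number of iterations
    rbst(x, y, ctrl = ctrl, ...)  # all other arguments forwarded to rbst
  })
}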

See Also

rbst

Examples

library(bst)                              # provides bst, rbst, rbstpath, bst_control
x <- matrix(rnorm(100 * 5), ncol = 5)
f <- 2 * x[, 1]                           # true linear predictor
p <- exp(f) / (exp(f) + exp(-f))          # class probabilities
y <- rbinom(100, 1, p)
y[y != 1] <- -1                           # recode responses to {1, -1}
y[1:10] <- -y[1:10]                       # flip the first 10 labels to mimic outliers
x <- as.data.frame(x)
# standard hinge-loss boosting with componentwise least squares base learners
dat.m <- bst(x, y, ctrl = bst_control(mstop = 50), family = "hinge", learner = "ls")
predict(dat.m)
# twin boosting initialized with coefficients and selected variables from dat.m
dat.m1 <- bst(x, y, ctrl = bst_control(twinboost = TRUE,
              coefir = coef(dat.m), xselect.init = dat.m$xselect, mstop = 50))
# robust boosting with the truncated hinge loss (truncation point s = 0)
dat.m2 <- rbst(x, y, ctrl = bst_control(mstop = 50, s = 0, trace = TRUE),
               rfamily = "thinge", learner = "ls")
predict(dat.m2)
# robust boosting path over several numbers of boosting iterations
rmstop <- seq(10, 40, by = 10)
dat.m3 <- rbstpath(x, y, rmstop, ctrl = bst_control(s = 0), rfamily = "thinge", learner = "ls")
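
Each element of dat.m3 holds the fit for the corresponding entry of rmstop, so individual fits can be inspected in the usual way (a brief usage sketch, following the Value description above):

length(dat.m3)         # one fit per entry of rmstop
predict(dat.m3[[1]])   # predictions from the fit with the smallest mstop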
