Workhorse function providing the link between R and the C++ gbm engine. gbm is a front-end to gbm.fit that uses the familiar R modeling formulas. However, model.frame is very slow if there are many predictor variables. For power users with many variables, use gbm.fit. For general practice, gbm is preferable.
gbm.fit(
x,
y,
offset = NULL,
misc = NULL,
distribution = "bernoulli",
w = NULL,
var.monotone = NULL,
n.trees = 100,
interaction.depth = 1,
n.minobsinnode = 10,
shrinkage = 0.001,
bag.fraction = 0.5,
nTrain = NULL,
train.fraction = NULL,
keep.data = TRUE,
verbose = TRUE,
var.names = NULL,
response.name = "y",
group = NULL
)
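For concreteness, a minimal sketch of a direct call to gbm.fit on simulated data (the variable names and values below are illustrative, not part of the function's definition):

library(gbm)

# x must be a data frame or matrix of predictors; y a vector of the same length
set.seed(123)   # bag.fraction < 1 adds randomness, so a seed makes the fit reproducible
N <- 1000
x <- data.frame(x1 = runif(N), x2 = runif(N), x3 = rnorm(N))
y <- rbinom(N, 1, plogis(2 * x$x1 - x$x2))

fit <- gbm.fit(x, y,
               distribution      = "bernoulli",
               n.trees           = 500,
               interaction.depth = 2,
               shrinkage         = 0.01,
               n.minobsinnode    = 10,
               bag.fraction      = 0.5,
               nTrain            = 800,  # first 800 rows fit the model, last 200 give out-of-sample loss
               verbose           = FALSE)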
A gbm.object object.
A data frame or matrix containing the predictor variables. The number of rows in x must be the same as the length of y.
A vector of outcomes. The number of rows in x must be the same as the length of y.
A vector of offset values.
An R object that is simply passed on to the gbm engine. It can be used for additional data for the specific distribution. Currently it is only used for passing the censoring indicator for the Cox proportional hazards model.
Either a character string specifying the name of the distribution to use or a list with a component name specifying the distribution and any additional parameters needed. If not specified, gbm will try to guess: if the response has only 2 unique values, bernoulli is assumed; otherwise, if the response is a factor, multinomial is assumed; otherwise, if the response has class "Surv", coxph is assumed; otherwise, gaussian is assumed.
Currently available options are "gaussian" (squared error), "laplace" (absolute loss), "tdist" (t-distribution loss), "bernoulli" (logistic regression for 0-1 outcomes), "huberized" (huberized hinge loss for 0-1 outcomes), "adaboost" (the AdaBoost exponential loss for 0-1 outcomes), "poisson" (count outcomes), "coxph" (right censored observations), "quantile", or "pairwise" (ranking measure using the LambdaMart algorithm).
If quantile regression is specified, distribution must be a list of the form list(name = "quantile", alpha = 0.25), where alpha is the quantile to estimate. The current version's quantile regression method does not handle non-constant weights and will stop.
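For example, a minimal sketch of the list form for quantile regression (the objects xq and yq stand in for the user's predictors and response):

# Estimate the 25th conditional percentile; weights must be constant or omitted
fit.q <- gbm.fit(xq, yq,
                 distribution = list(name = "quantile", alpha = 0.25),
                 n.trees      = 500,
                 shrinkage    = 0.01)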
If "tdist"
is specified, the default degrees of freedom is 4 and
this can be controlled by specifying
distribution = list(name = "tdist", df = DF)
where DF
is your
chosen degrees of freedom.
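For instance, a sketch using 6 degrees of freedom instead of the default (xt and yt are placeholder data):

# t-distribution loss with df = 6 for a continuous response with heavy-tailed errors
fit.t <- gbm.fit(xt, yt,
                 distribution = list(name = "tdist", df = 6),
                 n.trees      = 500)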
If "pairwise" regression is specified, distribution
must be a list of
the form list(name="pairwise",group=...,metric=...,max.rank=...)
(metric
and max.rank
are optional, see below). group
is
a character vector with the column names of data
that jointly
indicate the group an instance belongs to (typically a query in Information
Retrieval applications). For training, only pairs of instances from the same
group and with different target labels can be considered. metric
is
the IR measure to use, one of
conc: Fraction of concordant pairs; for binary labels, this is equivalent to the Area under the ROC Curve.
mrr: Mean reciprocal rank of the highest-ranked positive instance.
map: Mean average precision, a generalization of mrr to multiple positive instances.
ndcg: Normalized discounted cumulative gain. The score is the weighted sum (DCG) of the user-supplied target values, weighted by log(rank+1), and normalized to the maximum achievable value. This is the default if the user did not specify a metric.
ndcg and conc allow arbitrary target values, while binary targets {0,1} are expected for map and mrr. For ndcg and mrr, a cut-off can be chosen using a positive integer parameter max.rank. If left unspecified, all ranks are taken into account.
Note that splitting of instances into training and validation sets follows group boundaries and therefore only approximates the specified train.fraction ratio (the same applies to cross-validation folds). Internally, queries are randomly shuffled before training to avoid bias.
Weights can be used in conjunction with pairwise metrics; however, it is assumed that they are constant for instances from the same group.
For details and background on the algorithm, see e.g. Burges (2010).
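A sketch of a pairwise (LambdaMart) fit; here query.id is an assumed vector of the same length as y identifying the query each row belongs to, supplied through the group argument of gbm.fit:

# Rank instances within queries, optimizing ndcg truncated at rank 10
fit.p <- gbm.fit(x, y,
                 distribution = list(name = "pairwise", metric = "ndcg", max.rank = 10),
                 group        = query.id,
                 n.trees      = 1000,
                 shrinkage    = 0.01)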
A vector of weights of the same length as y.
an optional vector, the same length as the number of predictors, indicating which variables have a monotone increasing (+1), decreasing (-1), or arbitrary (0) relationship with the outcome.
the total number of trees to fit. This is equivalent to the number of iterations and the number of basis functions in the additive expansion.
The maximum depth of variable interactions. A value of 1 implies an additive model, a value of 2 implies a model with up to 2-way interactions, etc. Default is 1.
Integer specifying the minimum number of observations in the trees terminal nodes. Note that this is the actual number of observations not the total weight.
The shrinkage parameter applied to each tree in the expansion. Also known as the learning rate or step-size reduction; 0.001 to 0.1 usually work, but a smaller learning rate typically requires more trees. Default is 0.001.
The fraction of the training set observations randomly selected to propose the next tree in the expansion. This introduces randomness into the model fit. If bag.fraction < 1, then running the same model twice will result in similar but different fits. gbm uses the R random number generator, so set.seed can ensure that the model can be reconstructed. Preferably, the user can save the returned gbm.object using save. Default is 0.5.
An integer representing the number of cases on which to train. This is the preferred way of specification for gbm.fit; the option train.fraction in gbm.fit is deprecated and only maintained for backward compatibility. These two parameters are mutually exclusive. If both are unspecified, all data is used for training.
The first train.fraction * nrows(data) observations are used to fit the gbm and the remainder are used for computing out-of-sample estimates of the loss function.
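An illustrative sketch of the nTrain split (the 80/20 split and the follow-up gbm.perf call are an assumed usage pattern, not part of this argument's definition):

# Fit on the first 80% of rows (rows should already be in random order),
# then use the held-out loss to pick the number of trees
n.train   <- floor(0.8 * nrow(x))
fit       <- gbm.fit(x, y, distribution = "bernoulli",
                     n.trees = 1000, nTrain = n.train)
best.iter <- gbm.perf(fit, method = "test")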
Logical indicating whether or not to keep the data and an index of the data stored with the object. Keeping the data and index makes subsequent calls to gbm.more faster at the cost of storing an extra copy of the dataset.
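For example, a minimal sketch of growing additional trees with gbm.more (the tree counts are illustrative):

# keep.data = TRUE lets gbm.more reuse the stored data to continue the expansion
fit  <- gbm.fit(x, y, distribution = "bernoulli",
                n.trees = 500, keep.data = TRUE)
fit2 <- gbm.more(fit, n.new.trees = 500)   # fit2 now contains 1000 trees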
Logical indicating whether or not to print out progress and performance indicators (TRUE). If this option is left unspecified for gbm.more, then it uses verbose from object. Default is TRUE.
Vector of strings of length equal to the number of columns of x containing the names of the predictor variables.
Character string label for the response variable.
The group to use when distribution = "pairwise".
Greg Ridgeway gregridgeway@gmail.com
Quantile regression code developed by Brian Kriegler bk@stat.ucla.edu
t-distribution, and multinomial code developed by Harry Southworth and Daniel Edwards
Pairwise code developed by Stefan Schroedl schroedl@a9.com
This package implements the generalized boosted modeling framework. Boosting is the process of iteratively adding basis functions in a greedy fashion so that each additional basis function further reduces the selected loss function. This implementation closely follows Friedman's Gradient Boosting Machine (Friedman, 2001).
In addition to many of the features documented in the Gradient Boosting Machine, gbm offers additional features including the out-of-bag estimator for the optimal number of iterations, the ability to store and manipulate the resulting gbm object, and a variety of other loss functions that had not previously had associated boosting algorithms, including the Cox partial likelihood for censored data, the Poisson likelihood for count outcomes, and a gradient boosting implementation to minimize the AdaBoost exponential loss function.
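For instance, a sketch of using the out-of-bag estimator to choose the number of iterations and then predicting on new data (x.new is a hypothetical data frame with the same predictor columns as x):

# OOB estimate of the optimal number of trees (requires bag.fraction < 1)
best.iter <- gbm.perf(fit, method = "OOB")
pred      <- predict(fit, newdata = x.new, n.trees = best.iter, type = "response")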
Y. Freund and R.E. Schapire (1997) “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, 55(1):119-139.
G. Ridgeway (1999). “The state of boosting,” Computing Science and Statistics 31:172-181.
J.H. Friedman, T. Hastie, R. Tibshirani (2000). “Additive Logistic Regression: a Statistical View of Boosting,” Annals of Statistics 28(2):337-374.
J.H. Friedman (2001). “Greedy Function Approximation: A Gradient Boosting Machine,” Annals of Statistics 29(5):1189-1232.
J.H. Friedman (2002). “Stochastic Gradient Boosting,” Computational Statistics and Data Analysis 38(4):367-378.
B. Kriegler (2007). Cost-Sensitive Stochastic Gradient Boosting Within a Quantitative Regression Framework. Ph.D. Dissertation. University of California at Los Angeles, Los Angeles, CA, USA. Advisor(s) Richard A. Berk. https://dl.acm.org/doi/book/10.5555/1354603.
C. Burges (2010). “From RankNet to LambdaRank to LambdaMART: An Overview,” Microsoft Research Technical Report MSR-TR-2010-82.
gbm.object, gbm.perf, plot.gbm, predict.gbm, summary.gbm, and pretty.gbm.tree.