Functions for cross-validating gbm models. These functions are used internally by gbm and are not intended for direct use by end users.
gbmCrossVal(
  cv.folds,
  nTrain,
  n.cores,
  class.stratify.cv,
  data,
  x,
  y,
  offset,
  distribution,
  w,
  var.monotone,
  n.trees,
  interaction.depth,
  n.minobsinnode,
  shrinkage,
  bag.fraction,
  var.names,
  response.name,
  group
)

gbmCrossValErr(cv.models, cv.folds, cv.group, nTrain, n.trees)

gbmCrossValPredictions(
  cv.models,
  cv.folds,
  cv.group,
  best.iter.cv,
  distribution,
  data,
  y
)

gbmCrossValModelBuild(
  cv.folds,
  cv.group,
  n.cores,
  i.train,
  x,
  y,
  offset,
  distribution,
  w,
  var.monotone,
  n.trees,
  interaction.depth,
  n.minobsinnode,
  shrinkage,
  bag.fraction,
  var.names,
  response.name,
  group
)

gbmDoFold(
  X,
  i.train,
  x,
  y,
  offset,
  distribution,
  w,
  var.monotone,
  n.trees,
  interaction.depth,
  n.minobsinnode,
  shrinkage,
  bag.fraction,
  cv.group,
  var.names,
  response.name,
  group,
  s
)
gbmCrossVal returns a list containing the cross-validation error and the cross-validation predictions.
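The per-iteration cross-validation error in that list is what determines the optimal number of trees: the best iteration is the one with the lowest held-out error. A minimal conceptual sketch of that selection, assuming a result shaped like list(error = ..., predictions = ...); the names here are illustrative, not the internal representation:

## Conceptual sketch only: `cv.result` mimics the shape described above,
## with a numeric vector of per-iteration CV error and a vector of
## cross-validated predictions. Names are illustrative, not gbm internals.
cv.result <- list(
  error       = c(1.20, 0.95, 0.81, 0.78, 0.80),  # held-out error per iteration
  predictions = rnorm(10)                         # out-of-fold predictions
)

## The best iteration (cf. best.iter.cv below) minimises the CV error.
best.iter.cv <- which.min(cv.result$error)
best.iter.cv                   # 4 in this toy example
cv.result$error[best.iter.cv]  # the corresponding cross-validation error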
cv.folds: The number of cross-validation folds.
nTrain: The number of training samples.
n.cores: The number of cores to use.
class.stratify.cv: Whether or not stratified cross-validation samples are used.
data: The data.
x: The model matrix.
y: The response variable.
offset: The offset.
distribution: The type of loss function. See gbm.
w: Observation weights.
var.monotone: See gbm.
n.trees: The number of trees to fit.
interaction.depth: The degree of allowed interactions. See gbm.
n.minobsinnode: See gbm.
shrinkage: See gbm.
bag.fraction: See gbm.
var.names: See gbm.
response.name: See gbm.
group: Used when distribution = "pairwise". See gbm.
cv.models: A list containing the models for each fold.
cv.group: A vector indicating the cross-validation fold for each member of the training set (see the sketch following this list).
best.iter.cv: The iteration with the lowest cross-validation error.
i.train: Items in the training set.
X: Index (cross-validation fold) on which to subset.
s: Random seed.
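As an illustration of how a fold-assignment vector like cv.group can be built from cv.folds and nTrain: each training row is randomly assigned to one of the folds, and a stratified variant (cf. class.stratify.cv) makes the assignment within each class. This is a minimal sketch of the idea only; the exact scheme used inside gbm may differ.

## Illustrative sketch of a cv.group-style fold assignment;
## not gbm's internal construction.
nTrain   <- 100
cv.folds <- 5

## Unstratified assignment: shuffle a balanced repetition of fold labels.
cv.group <- sample(rep(1:cv.folds, length.out = nTrain))

## Stratified assignment for a classification response y: assign folds
## within each class so class proportions are similar across folds.
y <- sample(c(0, 1), nTrain, replace = TRUE)
cv.group.strat <- integer(nTrain)
for (cls in unique(y)) {
  idx <- which(y == cls)
  cv.group.strat[idx] <- sample(rep(1:cv.folds, length.out = length(idx)))
}

table(cv.group)           # roughly nTrain / cv.folds rows per fold
table(y, cv.group.strat)  # class balance preserved within folds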
Greg Ridgeway gregridgeway@gmail.com
These functions are not intended for direct use by end users; they are used internally by gbm.
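In practice, the usual way these internals are exercised is by passing cv.folds to gbm itself and then inspecting the cross-validation results on the fitted object. A small end-user example along those lines, with illustrative data and tuning values:

## End-user route that exercises the cross-validation routines above;
## the data set and tuning values are purely illustrative.
library(gbm)

set.seed(1)
dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
dat$y <- with(dat, x1 + 2 * x2 + rnorm(200))

fit <- gbm(
  y ~ x1 + x2,
  data = dat,
  distribution = "gaussian",
  n.trees = 500,
  interaction.depth = 2,
  shrinkage = 0.05,
  cv.folds = 5   # requesting cross-validation invokes the internal routines
)

## Per-iteration cross-validation error and the best iteration it implies.
best.iter <- gbm.perf(fit, method = "cv", plot.it = FALSE)
head(fit$cv.error)
best.iter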
J.H. Friedman (2001). "Greedy Function Approximation: A Gradient Boosting Machine," Annals of Statistics 29(5):1189-1232.
L. Breiman (2001). https://www.stat.berkeley.edu/users/breiman/randomforest2001.pdf.
gbm