Gradient boosting for optimizing arbitrary loss functions, where component-wise smoothing procedures, e.g., P-splines, are utilized as additive base learners.
Usage:

GAMBoostModel(
  family = NULL,
  baselearner = c("bbs", "bols", "btree", "bss", "bns"),
  dfbase = 4,
  mstop = 100,
  nu = 0.1,
  risk = c("inbag", "oobag", "none"),
  stopintern = FALSE,
  trace = FALSE
)
Value: MLModel class object.
Arguments:

family: optional Family object. Set automatically according to the class type of the response variable.

baselearner: character specifying the component-wise base learner to be used.

dfbase: global degrees of freedom for P-spline base learners ("bbs").

mstop: number of initial boosting iterations.

nu: step size or shrinkage parameter between 0 and 1.

risk: method to use in computing the empirical risk for each boosting iteration.

stopintern: logical indicating whether the boosting algorithm stops internally when the out-of-bag risk increases at a subsequent iteration.

trace: logical indicating whether status information is printed during the fitting process.
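As a sketch of how these arguments are supplied in practice (assuming the MachineShop and mboost packages are installed), a model object with non-default values can be constructed and inspected before fitting:

```r
library(MachineShop)

## Specify a GAM boosting model with P-spline base learners,
## more boosting iterations, and a smaller step size than the defaults
model <- GAMBoostModel(
  baselearner = "bbs",
  dfbase = 4,
  mstop = 200,
  nu = 0.05
)

model
```

The constructor returns an MLModel specification; no fitting occurs until the object is passed to fit() or resample().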
Response types: binary factor, BinomialVariate, NegBinomialVariate, numeric, PoissonVariate, Surv

Automatic tuning of parameters: mstop
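Since binary factors are among the supported response types, out-of-sample performance can be estimated with resample(). A minimal sketch, assuming MachineShop and the suggested mboost package are installed and using the Pima.tr dataset from MASS:

```r
library(MachineShop)

data(Pima.tr, package = "MASS")

## Cross-validated estimate of predictive performance for the
## binary factor response `type`
res <- resample(type ~ ., data = Pima.tr, model = GAMBoostModel,
                control = CVControl(folds = 5))

summary(res)
```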
Default argument values and further model details can be found in the source See Also links below.
See also: gamboost, Family, baselearners, fit, resample
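The mstop parameter listed above as automatically tunable can also be tuned explicitly over a user-specified grid. A hedged sketch, assuming MachineShop's TunedModel() and expand_params() interfaces (the grid values here are illustrative only):

```r
library(MachineShop)

data(Pima.tr, package = "MASS")

## Wrap the model so that mstop is selected by resampled performance
## over a small grid of candidate values
tuned <- TunedModel(
  GAMBoostModel,
  grid = expand_params(mstop = c(50, 100, 200))
)

fit(type ~ ., data = Pima.tr, model = tuned)
```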
Examples:

## Requires prior installation of suggested package mboost to run

data(Pima.tr, package = "MASS")

fit(type ~ ., data = Pima.tr, model = GAMBoostModel)