BART (Bayesian additive regression trees) algorithm implemented in C++, but without predict() support.
SL.dbarts(Y, X, newX, family, obsWeights, id, sigest = NA, sigdf = 3,
sigquant = 0.9, k = 2, power = 2, base = 0.95, binaryOffset = 0,
ntree = 200, ndpost = 1000, nskip = 100, printevery = 100,
keepevery = 1, keeptrainfits = TRUE, usequants = FALSE, numcut = 100,
printcutoffs = 0, nthread = 1, keepcall = TRUE, verbose = FALSE, ...)
Y: Outcome variable.

X: Covariate dataframe.

newX: Optional dataframe for which to predict the outcome. dbarts does not support predict(), so any prediction must be requested via newX at model-training time (see the sketch after this list).

family: "gaussian" for regression, "binomial" for binary classification.

obsWeights: Optional observation-level weights.

id: Optional id to group observations from the same unit (not currently used).

sigest: For continuous response models, an estimate of the error variance, \(\sigma^2\), used to calibrate an inverse-chi-squared prior on that parameter. If not supplied, the least-squares estimate is used instead. See sigquant for more information. Not applicable when y is binary.

sigdf: Degrees of freedom for the error variance prior. Not applicable when y is binary.

sigquant: The quantile of the error variance prior at which the rough estimate (sigest) is placed. The closer the quantile is to 1, the more aggressive the fit, since more prior weight is placed on error standard deviations (\(\sigma\)) below the rough estimate. Not applicable when y is binary.

k: For numeric y, k is the number of prior standard deviations E(Y|x) = f(x) is away from +/- 0.5. The response (Y) is internally scaled to range from -0.5 to 0.5. For binary y, k is the number of prior standard deviations f(x) is away from +/- 3. In both cases, the larger k is, the more conservative the fit.

power: Power parameter for the tree prior.

base: Base parameter for the tree prior.

binaryOffset: Used for binary y. When present, the model is P(Y = 1 | x) = \(\Phi\)(f(x) + binaryOffset), allowing fits with probabilities shrunk towards values other than 0.5.

ntree: The number of trees in the sum-of-trees formulation.

ndpost: The number of posterior draws after burn-in; ndpost / keepevery draws will actually be returned.

nskip: Number of MCMC iterations treated as burn-in.

printevery: As the MCMC runs, a message is printed every printevery draws.

keepevery: Every keepevery-th draw is kept and returned to the user. Useful for "thinning" samples.

keeptrainfits: If TRUE, the draws of f(x) for x corresponding to the rows of x.train are returned.

usequants: When TRUE, tree decision rules are determined using estimated quantiles of the x.train variables. When FALSE, splits are placed at values equally spaced across the range of a variable. See details for more information.

numcut: The maximum number of possible values used in decision rules (see usequants, details). If a single number, it is recycled for all variables; otherwise it must be a vector of length ncol(x.train). Fewer rules may be used if a covariate lacks enough unique values.

printcutoffs: The number of cutoff rules to be printed to the screen before the MCMC is run. Given a single integer, the same value is used for all variables. If 0, nothing is printed.

nthread: Integer specifying how many threads to use for rudimentary calculations such as means/variances. Depending on the CPU architecture, using more than one can degrade performance on small/medium data sets, so some calculations may be executed single-threaded regardless.

keepcall: Logical; if FALSE, the returned object has its call set to call("NULL"); otherwise it stores the call used to instantiate BART.

verbose: If TRUE, output additional information during training.

...: Any remaining arguments (unused).
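Because dbarts does not support predict(), predictions for new data must be requested at training time through newX. Below is a minimal sketch of calling the wrapper directly; the Boston train/test split is purely illustrative, and the assumption that the returned list's pred element holds the newX predictions follows the usual SuperLearner wrapper convention.

library(SuperLearner)
data(Boston, package = "MASS")
train = 1:400
test = 401:nrow(Boston)
fit = SL.dbarts(Y = Boston$medv[train],
                X = Boston[train, -14],
                newX = Boston[test, -14],  # rows to predict, supplied at fit time
                family = gaussian(),
                obsWeights = rep(1, length(train)),
                id = NULL)
# Posterior-mean predictions for each row of newX (wrapper convention).
head(fit$pred)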
Chipman, H. A., George, E. I., & McCulloch, R. E. (2010). BART: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1), 266-298. doi:10.1214/09-AOAS285
# NOT RUN {
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Sample rows to speed up example.
row_subset = sample(nrow(X), 30)
sl = SuperLearner(Y[row_subset], X[row_subset, ], family = gaussian(),
cvControl = list(V = 2), SL.library = c("SL.mean", "SL.dbarts"))
print(sl)
# }
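The wrapper also handles binary outcomes via family = binomial(). A hedged sketch, binarizing medv at its median purely for illustration:

data(Boston, package = "MASS")
# Illustrative binary outcome: is the home value above the median?
Y_bin = as.numeric(Boston$medv > median(Boston$medv))
X = Boston[, -14]
set.seed(1)
row_subset = sample(nrow(X), 50)
sl_bin = SuperLearner(Y_bin[row_subset], X[row_subset, ],
                      family = binomial(),
                      cvControl = list(V = 2),
                      SL.library = c("SL.mean", "SL.dbarts"))
print(sl_bin)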