Support for Bayesian additive regression trees (BART) via the bartMachine package.
SL.bartMachine(Y, X, newX, family, obsWeights, id, num_trees = 50,
num_burn_in = 250, verbose = FALSE, alpha = 0.95, beta = 2, k = 2,
q = 0.9, nu = 3, num_iterations_after_burn_in = 1000, ...)
Y: Outcome variable.
X: Covariate dataframe.
newX: Optional dataframe of observations on which to predict the outcome.
family: "gaussian" for regression, "binomial" for binary classification.
obsWeights: Optional observation-level weights (supported but not tested).
id: Optional id to group observations from the same unit (currently unused).
num_trees: Number of trees to grow in the sum-of-trees model.
num_burn_in: Number of MCMC samples to discard as burn-in.
verbose: If TRUE, prints information about the progress of the algorithm to the screen.
alpha: Base hyperparameter in the tree prior for whether a node is nonterminal.
beta: Power hyperparameter in the tree prior for whether a node is nonterminal.
k: For regression, k determines the prior probability that E(Y|X) is contained in the interval (y_min, y_max), based on a normal distribution. For example, when k = 2 the prior probability is 95%. For classification, k determines the prior probability that E(Y|X) lies in (-3, 3). Note that a larger value of k results in more shrinkage and a more conservative fit.
q: Quantile of the prior on the error variance at which the data-based estimate is placed. The larger the value of q, the more aggressive the fit, since more prior weight is placed on values below the data-based estimate. Not used for classification.
nu: Degrees of freedom for the inverse chi^2 prior on the error variance. Not used for classification.
num_iterations_after_burn_in: Number of MCMC samples to draw from the posterior distribution of f(x).
...: Additional arguments (not used).
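A minimal usage sketch, assuming the SuperLearner and bartMachine packages are installed (bartMachine also requires a working Java/rJava setup); the simulated data and learner choices here are illustrative, not from the original documentation:

```r
# Hedged example: calling the wrapper directly, then using it as a
# candidate learner inside SuperLearner().
library(SuperLearner)  # provides SL.bartMachine, which wraps bartMachine

set.seed(1)
n <- 200
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
Y <- X$x1 + sin(X$x2) + rnorm(n)

# Direct call: returns a list with $pred (predictions on newX) and $fit.
fit <- SL.bartMachine(Y = Y, X = X, newX = X, family = gaussian(),
                      obsWeights = rep(1, n), id = NULL,
                      num_trees = 50, verbose = FALSE)
head(fit$pred)

# More commonly, used as one learner in a SuperLearner library:
sl <- SuperLearner(Y = Y, X = X, family = gaussian(),
                   SL.library = c("SL.mean", "SL.bartMachine"))
sl
```

For binary outcomes, pass family = binomial() and a 0/1 outcome vector; the classification-only behavior of k (and the irrelevance of q and nu) described above then applies.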