BART is a Bayesian “sum-of-trees” model.
For a numeric response \(y\), we have
\(y = f(x) + \epsilon\),
where \(\epsilon \sim N(0,\sigma^2)\).
\(f\) is the sum of many tree models. The goal is to have very flexible inference for the unknown function \(f\).
In the spirit of “ensemble models”, each tree is constrained by a prior to be a weak learner so that it contributes a small amount to the overall fit.
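Concretely, with \(m\) trees (the ntree argument below), \(f(x) = \sum_{j=1}^{m} g(x; T_j, M_j)\), where \(T_j\) is the \(j^{th}\) tree structure and \(M_j\) its set of leaf values, in the notation of Chipman, George and McCulloch (2010).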
wbart(
x.train, y.train, x.test=matrix(0.0,0,0),
sparse=FALSE, theta=0, omega=1,
a=0.5, b=1, augment=FALSE, rho=NULL,
xinfo=matrix(0.0,0,0), usequants=FALSE,
cont=FALSE, rm.const=TRUE,
sigest=NA, sigdf=3, sigquant=.90,
k=2.0, power=2.0, base=.95,
sigmaf=NA, lambda=NA,
fmean=mean(y.train), w=rep(1,length(y.train)),
ntree=200L, numcut=100L,
ndpost=1000L, nskip=100L, keepevery=1L,
nkeeptrain=ndpost, nkeeptest=ndpost,
nkeeptestmean=ndpost, nkeeptreedraws=ndpost,
printevery=100L, transposed=FALSE
)
wbart returns an object of type wbart, which is essentially a list. In the numeric \(y\) case, the list has the following components:
yhat.train: A matrix with ndpost rows and nrow(x.train) columns. Each row corresponds to a draw \(f^*\) from the posterior of \(f\), and each column corresponds to a row of x.train. The \((i,j)\) value is \(f^*(x)\) for the \(i^{th}\) kept draw of \(f\) and the \(j^{th}\) row of x.train. Burn-in draws are dropped.
yhat.test: Same as yhat.train, but now the x's are the rows of the test data.
yhat.train.mean: Train data fits = mean of yhat.train columns.
yhat.test.mean: Test data fits = mean of yhat.test columns.
sigma: Post burn-in draws of \(\sigma\); length = ndpost.
first.sigma: Burn-in draws of \(\sigma\).
varcount: A matrix with ndpost rows and ncol(x.train) columns. Each row is for a draw. For each variable (corresponding to the columns), the total count of the number of times that variable is used in a tree decision rule (over all trees) is given.
sigest: The rough error standard deviation (\(\sigma\)) used in the prior.
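As a minimal sketch of working with these components (the fit object here is assumed to come from a call like the one in the examples below):

fit = wbart(x, y)
dim(fit$yhat.train)                ## ndpost by nrow(x)
fit$yhat.train.mean[1:5]           ## posterior mean fits = colMeans(fit$yhat.train)
apply(fit$yhat.train, 2, quantile, probs=c(.025, .975)) ## pointwise 95% posterior intervals
plot(fit$sigma, type='l')          ## trace of the post burn-in sigma draws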
Turning to the arguments of wbart:

x.train: Explanatory variables for training (in sample) data. May be a matrix or a data frame, with (as usual) rows corresponding to observations and columns to variables. If a variable is a factor in a data frame, it is replaced with dummies. Note that q dummies are created if q>2 and one dummy is created if q=2, where q is the number of levels of the factor. wbart will generate draws of \(f(x)\) for each \(x\) which is a row of x.train.
y.train: Continuous dependent variable for training (in sample) data.
x.test: Explanatory variables for test (out of sample) data. Should have the same structure as x.train. wbart will generate draws of \(f(x)\) for each \(x\) which is a row of x.test.
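As an illustrative sketch (the variables here are made up): a two-level factor becomes one dummy and a three-level factor becomes three.

x = data.frame(x1=rnorm(100), x2=factor(sample(c('a','b','c'), 100, replace=TRUE)))
y = rnorm(100)
fit = wbart(x, y)                    ## x2 (q=3 levels) is expanded to 3 dummies
fit2 = wbart(x, y, x.test=x[1:10, ]) ## test data with the same structure as x.train
dim(fit2$yhat.test)                  ## ndpost by 10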
sparse: Whether to perform variable selection based on a sparse Dirichlet prior rather than simply uniform; see Linero 2016.
theta: Set the \(\theta\) parameter; zero means random.
omega: Set the \(\omega\) parameter; zero means random.
a: Sparse parameter for the \(Beta(a, b)\) prior: \(0.5 \le a \le 1\), where lower values induce more sparsity.
b: Sparse parameter for the \(Beta(a, b)\) prior; typically \(b=1\).
rho: Sparse parameter: typically \(\rho=p\), where \(p\) is the number of covariates under consideration.
augment: Whether data augmentation is to be performed in sparse variable selection.
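A hedged sketch of using the sparse prior for variable selection, interpreting varcount as documented above:

fit = wbart(x, y, sparse=TRUE)
sort(colMeans(fit$varcount), decreasing=TRUE) ## average split counts per variable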
xinfo: You can provide the cutpoints to BART or let BART choose them for you. To provide them, use the xinfo argument to specify a list (matrix) where the items (rows) are the covariates and the contents of the items (columns) are the cutpoints.
usequants: If usequants=FALSE, then the cutpoints in xinfo are generated uniformly; otherwise, if TRUE, uniform quantiles are used for the cutpoints.
cont: Whether or not to assume all variables are continuous.
rm.const: Whether or not to remove constant variables.
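A minimal sketch of supplying cutpoints (the grid here is made up): for p covariates with 100 cutpoints each, xinfo is a p-by-100 matrix whose \(i^{th}\) row holds the cutpoints for the \(i^{th}\) covariate.

p = ncol(x)   ## x assumed to be a numeric matrix here
xi = t(sapply(1:p, function(j) seq(min(x[,j]), max(x[,j]), length.out=100)))
fit = wbart(x, y, xinfo=xi)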
sigest: The prior for the error variance (\(\sigma^2\)) is inverted chi-squared (the standard conditionally conjugate prior). The prior is specified by choosing the degrees of freedom, a rough estimate of the corresponding standard deviation, and a quantile to put this rough estimate at. If sigest=NA, then the rough estimate will be the usual least squares estimator; otherwise the supplied value will be used.
sigdf: Degrees of freedom for the error variance prior.
sigquant: The quantile of the prior that the rough estimate (see sigest) is placed at. The closer the quantile is to 1, the more aggressive the fit will be, as you are putting more prior weight on error standard deviations (\(\sigma\)) less than the rough estimate.
k: For numeric y, k is the number of prior standard deviations \(E(Y|x) = f(x)\) is away from +/- 0.5 (y.train is internally scaled to this range); for binary y (as in pbart), k is the number of prior standard deviations \(f(x)\) is away from +/- 3. The bigger k is, the more conservative the fitting will be.
power: Power parameter for the tree prior.
base: Base parameter for the tree prior.
sigmaf: The SD of \(f\).
lambda: The scale of the prior for the variance.
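For reference, the tree prior makes a node at depth \(d\) nonterminal with probability \(base\,(1+d)^{-power}\) (Chipman, George and McCulloch, 2010), so the defaults base=.95 and power=2 keep individual trees small. A minimal sketch of adjusting these priors (values arbitrary):

fit = wbart(x, y, k=3, sigdf=10, sigquant=.75) ## k=3 shrinks f(x) more than the default k=2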
fmean: BART operates on y.train centered by fmean.
w: Vector of weights which multiply the standard deviation.
ntree: The number of trees in the sum.
numcut: The number of possible values of c (see usequants). If a single number is given, this is used for all variables. Otherwise a vector with length equal to ncol(x.train) is required, where the \(i^{th}\) element gives the number of c used for the \(i^{th}\) variable in x.train. If usequants is false, numcut equally spaced cutoffs are used covering the range of values in the corresponding column of x.train. If usequants is true, then min(numcut, the number of unique values in the corresponding column of x.train - 1) values of c are used.
ndpost: The number of posterior draws returned.
nskip: Number of MCMC iterations to be treated as burn-in.
nkeeptrain: Number of MCMC iterations to be returned for the train data.
nkeeptest: Number of MCMC iterations to be returned for the test data.
nkeeptestmean: Number of MCMC iterations to be returned for the test mean.
nkeeptreedraws: Number of MCMC iterations to be returned for the tree draws.
printevery: As the MCMC runs, a message is printed every printevery draws.
keepevery: Every keepevery draw is kept to be returned to the user.
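A hedged sketch of the usual MCMC knobs (the values here are arbitrary):

fit = wbart(x, y,
            ntree=50,       ## smaller ensemble
            nskip=500,      ## longer burn-in
            ndpost=2000,    ## more kept draws
            keepevery=5,    ## thin the chain
            printevery=500) ## fewer progress messages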
transposed: When running wbart in parallel, it is more memory-efficient to transpose x.train and x.test, if any, prior to calling mc.wbart.
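A hedged sketch of a parallel run (mc.wbart is the parallel front end in the same package; the mc.cores and seed arguments here are assumed from its own help page):

fit = mc.wbart(x, y, mc.cores=4, seed=99) ## assumed signature; check ?mc.wbart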
BART is a Bayesian MCMC method. At each MCMC iteration, we produce a draw from the joint posterior \((f,\sigma) | (x,y)\) in the numeric \(y\) case.
Thus, unlike many other modelling methods in R, we do not produce a single model object from which fits and summaries may be extracted. The output consists of values \(f^*(x)\) (and \(\sigma^*\) in the numeric case) where * denotes a particular draw. The \(x\) is either a row from the training data (x.train) or the test data (x.test).
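One practical consequence, sketched here using the yhat.train matrix documented above: any functional of \(f\) has a posterior that can be summarized directly from the draws.

diff.draws = fit$yhat.train[,1] - fit$yhat.train[,2] ## draws of f(x1) - f(x2) for the first two training rows
quantile(diff.draws, probs=c(.025, .5, .975))        ## posterior median and 95% interval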
See also: pbart.
##simulate data (example from Friedman MARS paper)
f = function(x){
10*sin(pi*x[,1]*x[,2]) + 20*(x[,3]-.5)^2+10*x[,4]+5*x[,5]
}
sigma = 1.0 #y = f(x) + sigma*z , z~N(0,1)
n = 100 #number of observations
set.seed(99)
x=matrix(runif(n*10),n,10) #10 variables, only first 5 matter
Ey = f(x)
y=Ey+sigma*rnorm(n)
lmFit = lm(y~.,data.frame(x,y)) #compare lm fit to BART later
##test BART with token run to ensure installation works
set.seed(99)
bartFit = wbart(x,y,nskip=5,ndpost=5)
if (FALSE) {
##run BART
set.seed(99)
bartFit = wbart(x,y)
##compare BART fit to linear model fit and truth = Ey
fitmat = cbind(y,Ey,lmFit$fitted,bartFit$yhat.train.mean)
colnames(fitmat) = c('y','Ey','lm','bart')
print(cor(fitmat))
}
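A further hedged sketch extending the example with out-of-sample draws (xtest is made up here):

if (FALSE) {
set.seed(99)
xtest = matrix(runif(1000*10), 1000, 10)
bartFit2 = wbart(x, y, x.test=xtest)
plot(f(xtest), bartFit2$yhat.test.mean) ##posterior mean at the test rows vs the true f
}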