BART is a Bayesian “sum-of-trees” model.
For numeric response \(y\), we have
\(y = f(x) + \epsilon\),
where \(\epsilon \sim N(0, \sigma^2)\).
For a multinomial response \(y\), \(P(Y=y | x) = F(f(x))\),
where \(F\) denotes the standard Normal CDF (probit link) or the
standard Logistic CDF (logit link).
In both cases, \(f\) is the sum of many tree models. The goal is to have very flexible inference for the unknown function \(f\).
In the spirit of “ensemble models”, each tree is constrained by a prior to be a weak learner so that it contributes a small amount to the overall fit.
mbart2(
x.train, y.train,
x.test=matrix(0,0,0), type='lbart',
ntype=as.integer(
factor(type,
levels=c('wbart', 'pbart', 'lbart'))),
sparse=FALSE, theta=0, omega=1,
a=0.5, b=1, augment=FALSE, rho=NULL,
xinfo=matrix(0,0,0), usequants=FALSE,
rm.const=TRUE,
k=2, power=2, base=0.95,
tau.num=c(NA, 3, 6)[ntype],
offset=NULL,
ntree=c(200L, 50L, 50L)[ntype], numcut=100L,
ndpost=1000L, nskip=100L,
keepevery=c(1L, 10L, 10L)[ntype],
printevery=100L, transposed=FALSE,
hostname=FALSE,
mc.cores = 2L, ## mc.mbart2 only
nice = 19L, ## mc.mbart2 only
seed = 99L ## mc.mbart2 only
)

mc.mbart2(
x.train, y.train,
x.test=matrix(0,0,0), type='lbart',
ntype=as.integer(
factor(type,
levels=c('wbart', 'pbart', 'lbart'))),
sparse=FALSE, theta=0, omega=1,
a=0.5, b=1, augment=FALSE, rho=NULL,
xinfo=matrix(0,0,0), usequants=FALSE,
rm.const=TRUE,
k=2, power=2, base=0.95,
tau.num=c(NA, 3, 6)[ntype],
offset=NULL,
ntree=c(200L, 50L, 50L)[ntype], numcut=100L,
ndpost=1000L, nskip=100L,
keepevery=c(1L, 10L, 10L)[ntype],
printevery=100L, transposed=FALSE,
hostname=FALSE,
mc.cores = 2L, ## mc.mbart2 only
nice = 19L, ## mc.mbart2 only
seed = 99L ## mc.mbart2 only
)
mbart2 returns an object of type mbart2 which is essentially a list.
yhat.train: A matrix with ndpost rows and nrow(x.train)*K columns. Each row corresponds to a draw \(f^*\) from the posterior of \(f\), and each column corresponds to an estimate for a row of x.train. For the i th row of x.train, the corresponding estimates are in the (i-1)*K+j th column of yhat.train, where j=1,...,K indexes the categories. Burn-in is dropped.
yhat.train.mean: train data fits = mean of yhat.train columns.
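For example, a minimal sketch of extracting the draws for one observation and category from a fitted object (the object name post and the indices here are hypothetical):

## posterior draws for observation i, category j, from a fitted
## mbart2 object post with K categories
i <- 10; j <- 2; K <- 3
draws.ij <- post$yhat.train[ , (i-1)*K+j] ## ndpost draws of f_j(x_i)+offset[j]
mean(draws.ij) ## the corresponding train data fit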
varcount: A matrix with ndpost rows and ncol(x.train) columns. Each row is for a draw. For each variable (corresponding to the columns), the total count of the number of times that variable is used in a tree decision rule (over all trees) is given.
In addition, the list has an offset vector giving the value used. Note that in the multinomial \(y\) case, yhat.train is \(f_j(x) + offset[j]\).
x.train: Explanatory variables for training (in sample) data. May be a matrix or a data frame, with (as usual) rows corresponding to observations and columns to variables. If a variable is a factor in a data frame, it is replaced with dummies. Note that q dummies are created if q>2 and one dummy is created if q=2, where q is the number of levels of the factor. mbart2 will generate draws of \(f(x)\) for each \(x\) which is a row of x.train.
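As a sketch of the dummy expansion (performed internally via the package's bartModelMatrix helper; the data here are illustrative):

## a factor with q=3 levels becomes three dummy columns
df <- data.frame(z=rnorm(6), g=factor(c('a','b','c','a','b','c')))
bartModelMatrix(df)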
y.train: Categorical dependent variable for training (in sample) data.
x.test: Explanatory variables for test (out of sample) data. Should have the same structure as x.train. mbart2 will generate draws of \(f(x)\) for each \(x\) which is a row of x.test.
type: You can use this argument to specify the type of fit: 'pbart' for probit BART or 'lbart' for logit BART.
ntype: The integer equivalent of type, where 'pbart' is 2 and 'lbart' is 3.
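For instance, the default type='lbart' maps to ntype=3, exactly as computed in the usage above:

as.integer(factor('lbart', levels=c('wbart', 'pbart', 'lbart'))) ## 3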
sparse: Whether to perform variable selection based on a sparse Dirichlet prior rather than simply uniform; see Linero 2016.
theta: Set the \(\theta\) parameter; zero means random.
omega: Set the \(\omega\) parameter; zero means random.
a: Sparse parameter for the \(Beta(a, b)\) prior: \(0.5 \le a \le 1\), where lower values induce more sparsity.
b: Sparse parameter for the \(Beta(a, b)\) prior; typically, \(b=1\).
rho: Sparse parameter: typically \(\rho=p\), where \(p\) is the number of covariates under consideration.
augment: Whether data augmentation is to be performed in sparse variable selection.
xinfo: You can provide the cutpoints to BART or let BART choose them for you. To provide them, use the xinfo argument to specify a list (matrix) where the items (rows) are the covariates and the contents of the items (columns) are the cutpoints (see the sketch below).
usequants: If usequants=FALSE, then the cutpoints in xinfo are generated uniformly; otherwise, if TRUE, uniform quantiles are used for the cutpoints.
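A sketch of building a cutpoint matrix by hand (the uniform grid here is illustrative):

## one row per covariate, one column per cutpoint
xinfo <- t(apply(x.train, 2, function(x) seq(min(x), max(x), length.out=100)))
## post <- mbart2(x.train, y.train, xinfo=xinfo)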
rm.const: Whether or not to remove constant variables.
k: For categorical y.train, k is the number of prior standard deviations \(f(x)\) is away from \(\pm 3\).
power: Power parameter for tree prior.
base: Base parameter for tree prior.
tau.num: The numerator in the tau definition, i.e., tau=tau.num/(k*sqrt(ntree)).
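For example, with the logit defaults (tau.num=6, k=2, ntree=50):

tau <- 6/(2*sqrt(50)) ## approximately 0.42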
offset: With multinomial BART, the centering is \(P(y_j=1 | x) = F(f_j(x) + offset[j])\), where offset defaults to \(F^{-1}(mean(y.train))\). You can use the offset parameter to over-ride these defaults.
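A sketch of the default centering for logit BART, assuming the per-category reading of mean(y.train) (for probit, replace qlogis with qnorm):

## per-category proportions mapped through the inverse logistic CDF
p <- table(y.train)/length(y.train)
offset.default <- qlogis(as.numeric(p)) ## one entry per category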
ntree: The number of trees in the sum.
numcut: The number of possible values of c (see usequants). If a single number is given, this is used for all variables. Otherwise, a vector with length equal to ncol(x.train) is required, where the \(i^{th}\) element gives the number of c used for the \(i^{th}\) variable in x.train. If usequants is false, numcut equally spaced cutoffs are used covering the range of values in the corresponding column of x.train. If usequants is true, then min(numcut, the number of unique values in the corresponding column of x.train - 1) c values are used.
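For example, a per-variable specification (the counts here are arbitrary):

numcut <- rep(100L, ncol(x.train)) ## same count for every variable
## post <- mbart2(x.train, y.train, numcut=numcut)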
ndpost: The number of posterior draws returned.
nskip: Number of MCMC iterations to be treated as burn in.
keepevery: Every keepevery draw is kept to be returned to the user.
printevery: As the MCMC runs, a message is printed every printevery draws.
transposed: When running mbart2 in parallel, it is more memory-efficient to transpose x.train and x.test, if any, prior to calling mc.mbart2.
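A sketch of that pattern, assuming the same bartModelMatrix pre-processing the serial call performs (kept commented out since it launches parallel jobs):

## xt <- t(bartModelMatrix(x.train))
## post <- mc.mbart2(xt, y.train, transposed=TRUE, mc.cores=4, seed=99)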
hostname: When running on a cluster, it is occasionally useful to track which node each chain is running on; to do so, set this argument to TRUE.
seed: Setting the seed required for reproducible MCMC.
mc.cores: Number of cores to employ in parallel.
nice: Set the job niceness. The default niceness is 19: niceness goes from 0 (highest) to 19 (lowest).
BART is a Bayesian MCMC method. At each MCMC iteration, we produce a draw from \(f\) in the categorical \(y\) case. Thus, unlike many other modeling methods in R, we do not produce a single model object from which fits and summaries may be extracted. The output consists of values \(f^*(x)\), where \(*\) denotes a particular draw. The \(x\) is either a row from the training data (x.train) or the test data (x.test).
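For example, a sketch of summarizing uncertainty directly from the draws (assuming a fitted object post with x.test supplied and a prob.test component, the draw-level counterpart of the prob.test.mean used in the examples below):

## 95% posterior interval for P(y=j | x) at test row i, category j
i <- 1; j <- 2; K <- 3
quantile(post$prob.test[ , (i-1)*K+j], probs=c(0.025, 0.975))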
See also: gbart, alligator.
library(BART)

## simulate covariates that are themselves multinomial probabilities
N=500
set.seed(12)
x1=runif(N)
x2=runif(N, max=1-x1)
x3=1-x1-x2
x.train=cbind(x1, x2, x3)
## draw a categorical response y in {1, 2, 3} with P(y=j)=xj
y.train=0
for(i in 1:N)
    y.train[i]=sum((1:3)*rmultinom(1, 1, x.train[i, ]))
table(y.train)/N

## test mbart2 with token run to ensure installation works
set.seed(99)
post = mbart2(x.train, y.train, nskip=1, ndpost=1)

if (FALSE) {
    set.seed(99)
    post=mbart2(x.train, y.train, x.train)
    ##mc.post=mc.mbart2(x.train, y.train, x.train, mc.cores=8, seed=99)

    ## columns of prob.test.mean come in blocks of K per observation:
    ## compare the estimated P(y=j|x) with the true probability xj
    K=3
    i=seq(1, N*K, K)-1
    for(j in 1:K)
        print(cor(x.train[ , j], post$prob.test.mean[i+j])^2)
}