BayesVarSel (version 1.6.2)

Bvs: Bayesian Variable Selection for linear regression models

Description

Exact computation of summaries of the posterior distribution over the model space, obtained by sequentially evaluating every competing model.

Usage

Bvs(formula, fixed.cov=c("Intercept"), data, prior.betas = "Robust", prior.models = "Constant", n.keep = 10, time.test = TRUE, priorprobs=NULL)

Arguments

formula
Formula defining the most complex regression model in the analysis. See details.
fixed.cov
A character vector with the names of the covariates that are considered fixed (no variable selection is performed over these). This argument implicitly defines the simplest model considered. Default is "Intercept". Use NULL if selection should be performed over all the variables defined by formula (see the sketch after this argument list).
data
data frame containing the data.
prior.betas
Prior distribution for regression parameters within each model. Possible choices include "Robust", "Liangetal", "gZellner", "ZellnerSiow" and "FLS" (see details).
prior.models
Prior distribution over the model space. Possible choices are "Constant", "ScottBerger" and "User" (see details).
n.keep
The number of most probable a posteriori models to keep. The default is 10, which is automatically reduced if it exceeds the total number of models.
time.test
If TRUE and the number of variables is moderately large (>=18), a preliminary test is performed to estimate the computational time of the full analysis.
priorprobs
A vector of dimension p+1 (where p is the number of non-fixed covariates) defining the prior probabilities Pr(M_i). Only used when prior.models = "User"; see details.
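
For instance, a minimal sketch of a call in which one covariate is kept fixed alongside the intercept (using the UScrime data from the MASS package, as in the Examples section; object names are illustrative):

# Keep the intercept and the covariate Ed fixed; select over the rest
library(MASS)
data(UScrime)
crime.fixed <- Bvs(formula = "y~.", data = UScrime,
                   fixed.cov = c("Intercept", "Ed"))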

Value

Bvs returns an object of class Bvs with the following elements:
time
The internal time consumed in solving the problem
lmfull
The lm class object that results when the model defined by formula is fitted by lm
lmnull
The lm class object that results when the model defined by fixed.cov is fitted by lm
variables
The names of all the potential explanatory variables (the set of variables to select from).
n
Number of observations
p
Number of explanatory variables to select from
k
Number of fixed variables
HPMbin
The binary expression of the Highest Posterior Probability model
modelsprob
A data.frame summarizing the n.keep most probable a posteriori models and their associated probabilities.
inclprob
A data.frame with the inclusion probabilities of all the variables.
jointinclprob
A data.frame with the joint inclusion probabilities of all the variables.
postprobdim
Posterior probabilities of the dimension of the true model
call
The call to the function
method
The character string "full", indicating that the exact, sequential algorithm was used.
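
As a quick illustration of how these elements are accessed, here is a minimal sketch (crime.Bvs stands for a Bvs object obtained, for instance, as in the Examples section below):

# Minimal sketch; crime.Bvs is a Bvs object, e.g. from the Examples below
crime.Bvs$inclprob      # marginal inclusion probability of each variable
crime.Bvs$modelsprob    # the n.keep most probable models and their probabilities
crime.Bvs$HPMbin        # binary code of the highest posterior probability model
crime.Bvs$postprobdim   # posterior probabilities of the model dimension
crime.Bvs$time          # computation time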

Details

The model space is the set of all models, M_i, that contain the intercept and are nested in the model specified by formula. The simplest such model, M_0, contains only the intercept. Bvs then provides exact summaries of the posterior distribution over this model space, that is, summaries of the discrete distribution which assigns to each model M_i its probability given the data:

Pr(M_i | data) = Pr(M_i) * B_i / C,

where B_i is the Bayes factor of M_i to M_0, Pr(M_i) is the prior probability of M_i and C is the normalizing constant.
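
As a toy numerical illustration of this formula (the Bayes factors below are invented for exposition and are not output from Bvs):

# Toy illustration of Pr(M_i | data) = Pr(M_i) * B_i / C
B <- c(1, 3.2, 0.5, 12.7)    # invented Bayes factors B_i of M_i to M_0
prior <- rep(1/4, 4)         # "Constant" prior over four models
C <- sum(prior * B)          # normalizing constant
postprob <- prior * B / C    # Pr(M_i | data)
sum(postprob)                # equals 1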

The Bayes factor B_i depends on the prior assigned to the regression parameters in M_i, and Bvs implements a number of popular choices plus the "Robust" prior recently proposed by Bayarri et al. (2012). The "Robust" prior is the default choice for both theoretical (see the reference for details) and computational reasons, since it produces Bayes factors with closed-form expressions. The "gZellner" prior corresponds to the prior in Zellner (1986) with g = n, while the "Liangetal" prior is the hyper-g/n prior with a = 3 (see Liang et al., 2008, for details). "ZellnerSiow" is the multivariate Cauchy prior proposed by Zellner and Siow (1980, 1984) and further studied by Bayarri and Garcia-Donato (2007). Finally, "FLS" is the prior recommended by Fernandez, Ley and Steel (2001), which is the prior in Zellner (1986) with g = max(n, p^2), p being the number of covariates to choose from (the most complex model has p plus the number of fixed covariates).
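
For instance, a quick sensitivity check across these parameter priors could look like the following sketch (using the UScrime data from MASS, as in the Examples section; object names are illustrative):

# Sketch of a sensitivity check across parameter priors
library(MASS)
data(UScrime)
fit.robust <- Bvs(formula = "y~.", data = UScrime, prior.betas = "Robust")
fit.gz <- Bvs(formula = "y~.", data = UScrime, prior.betas = "gZellner")
# compare the resulting inclusion probabilities side by side
cbind(Robust = fit.robust$inclprob, gZellner = fit.gz$inclprob)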

With respect to the prior over the model space, Pr(M_i), three possibilities are implemented: "Constant", under which every model has the same prior probability; "ScottBerger", under which Pr(M_i) is inversely proportional to the number of models of the same dimension; and "User" (see below). The "ScottBerger" prior was studied by Scott and Berger (2010) and controls for multiplicity.

When prior.models = "User", the prior probabilities are defined through the p+1 dimensional parameter vector priorprobs. Let k be the number of explanatory variables in the simplest model (the one defined by fixed.cov). Then, up to the normalizing constant, the first component of priorprobs must contain the probability of each model with k covariates (there is only one); the second component must contain the probability of each model with k+1 covariates, and so on. Finally, the (p+1)-th component of priorprobs defines the probability of the most complex model (the one defined by formula). That is,

priorprobs[j] = C * Pr(M_i such that M_i has j-1+k explanatory variables)

where C is the normalizing constant, i.e. C = 1/sum(priorprobs*choose(p,0:p)).

Note that prior.models = "Constant" is equivalent to the combination prior.models = "User" and priorprobs = rep(1, p+1), but the internal functions are not the same, so small variations in the results may occur due to implementation differences.

Similarly, prior.models = "ScottBerger" is equivalent to the combination prior.models= "User" and priorprobs = 1/choose(p,0:p).
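
For example, with only the intercept fixed and p = 15 non-fixed covariates (as in the UScrime analysis of the Examples section), the two equivalences above can be written explicitly:

# Illustrative sketch of the two equivalences (p = 15 non-fixed
# covariates for UScrime with only the intercept fixed)
library(MASS)
data(UScrime)
p <- 15
# equivalent to prior.models = "Constant":
crime.const <- Bvs(formula = "y~.", data = UScrime,
                   prior.models = "User", priorprobs = rep(1, p + 1))
# equivalent to prior.models = "ScottBerger":
crime.sb <- Bvs(formula = "y~.", data = UScrime,
                prior.models = "User", priorprobs = 1/choose(p, 0:p))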

Limitations (the error "A Bayes factor is infinite."): Bayes factors can be extremely large numbers if i) the sample size is even moderately large or ii) a model fits the data much better than the model taken as the null model. We are currently working on more robust implementations of the functions to handle these problems. In the meantime, you could try using the g-Zellner prior (the simplest one; in these cases results should not vary much with the prior) and/or a more accurate definition of the simplest model (via the fixed.cov argument).
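
A sketch of these workarounds (mydata and the covariate name x1 are placeholders, not objects provided by the package):

# i) switch to the simpler g-Zellner prior:
fit1 <- Bvs(formula = "y~.", data = mydata, prior.betas = "gZellner")
# ii) enlarge the simplest (null) model so that no model fits the data
# dramatically better than it:
fit2 <- Bvs(formula = "y~.", data = mydata,
            fixed.cov = c("Intercept", "x1"))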

References

Bayarri, M.J., Berger, J.O., Forte, A. and Garcia-Donato, G. (2012) Criteria for Bayesian Model Choice with Application to Variable Selection. The Annals of Statistics, 40: 1550-1577.

Bayarri, M.J. and Garcia-Donato, G. (2007) Extending conventional priors for testing general hypotheses in linear models. Biometrika, 94: 135-152.

Barbieri, M. and Berger, J. (2004) Optimal Predictive Model Selection. The Annals of Statistics, 32: 870-897.

Fernandez, C., Ley, E. and Steel, M.F.J. (2001) Benchmark priors for Bayesian model averaging. Journal of Econometrics, 100, 381-427.

Liang, F., Paulo, R., Molina, G., Clyde, M. and Berger, J.O. (2008) Mixtures of g-priors for Bayesian Variable Selection. Journal of the American Statistical Association, 103: 410-423.

Scott, J.G. and Berger, J.O. (2010) Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem. The Annals of Statistics, 38: 2587-2619.

Zellner, A. and Siow, A. (1980) Posterior Odds Ratio for Selected Regression Hypotheses. In Bayesian Statistics 1 (J.M. Bernardo, M.H. DeGroot, D.V. Lindley and A.F.M. Smith, eds.) 585-603. Valencia: University Press.

Zellner, A. and Siow, A. (1984) Basic Issues in Econometrics. Chicago: University of Chicago Press.

Zellner, A. (1986) On Assessing Prior Distributions and Bayesian Regression Analysis with g-prior Distributions. In Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (A. Zellner, ed.) 389-399. Edward Elgar Publishing Limited.

See Also

plotBvs for several plots of the result.

PBvs for a parallelized version of Bvs.

GibbsBvs for a heuristic approximation based on Gibbs sampling (recommended when p > 20; the only available option when p > 31).

Examples

## Not run: 
#Analysis of Crime Data
#load data (UScrime ships with the MASS package)
library(MASS)
data(UScrime)

#Default arguments are Robust prior for the regression parameters
#and constant prior over the model space.
#Here we keep the 1000 most probable models a posteriori:
crime.Bvs <- Bvs(formula = "y~.", data = UScrime, n.keep = 1000)

#A look at the results:
crime.Bvs

summary(crime.Bvs)

#A plot with the posterior probabilities of the dimension of the
#true model:
plotBvs(crime.Bvs, option = "dimension")

#Two image plots of the conditional inclusion probabilities:
plotBvs(crime.Bvs, option = "conditional")
plotBvs(crime.Bvs, option = "not")
## End(Not run)
