Gibbs sampler for linear, generalized linear and survival models under product non-local priors, Zellner's prior and a Normal approximation to the posterior. Both sampling conditional on a model and Bayesian model averaging are implemented (see Details).
If x and y are not specified, samples from non-local priors/posteriors with density proportional to d(theta) N(theta; m, V) are produced, where d(theta) is the non-local penalty term.
rnlp(y, x, m, V, msfit, outcometype, family, priorCoef, priorGroup,
priorVar, isgroup, niter=10^3, burnin=round(niter/10), thinning=1, pp='norm')
Matrix with posterior samples
Vector with observed responses. When class(y)=='Surv', sampling is based on the Cox partial likelihood; otherwise a linear model is assumed.
Design matrix with all potential predictors
Mean for the Normal kernel
Covariance for the Normal kernel
Object of class msfit returned by modelSelection. If specified, Bayesian model averaging posterior samples are returned, according to the posterior model probabilities in msfit, and arguments y, x, m, V etc. are then not required. If msfit is missing, posterior samples under the full model y ~ x are returned.
Type of outcome. Possible values are "Continuous", "glm" or "Survival"
Distribution family assumed for the outcome. Some possible values are "normal", "binomial logit" and "Cox"
Prior distribution for the coefficients. Ignored if msfit is supplied. Must be an object of class msPriorSpec, e.g. created by momprior, emomprior, imomprior or zellnerprior
Prior on grouped coefficients (e.g. categorical predictors with >2 categories, splines), as passed to modelSelection
Prior on residual variance. Ignored if msfit is supplied. Must be an object of class msPriorSpec, e.g. created with igprior
Logical vector where TRUE indicates that the variable is part of a group, e.g. one of several dummy indicators for a discrete covariate
Number of MCMC iterations
Number of burn-in MCMC iterations. Defaults to .1*niter. Set to 0 for no burn-in
MCMC thinning factor, i.e. only one out of every thinning iterations is reported. Defaults to no thinning
When msfit is provided, this is the method used to compute posterior model probabilities, which determine the sampled models. Can be 'norm' or 'exact'; see postProb for details.
David Rossell
The algorithm is implemented for product MOM (pMOM), product iMOM (piMOM) and product eMOM (peMOM) priors. The algorithm combines an orthogonalization that provides low serial correlation with a latent truncation representation that allows fast sampling.
When y and x are specified, sampling is for the linear regression posterior. When argument msfit is left missing, posterior sampling is for the full model regressing y on all covariates in x.
When msfit is specified, each model is drawn with probability given by postProb(msfit). In this case, a Bayesian model averaging estimate of the regression coefficients can be obtained by applying colMeans to the rnlp output matrix.
When y and x are left missing, sampling is from a density proportional to d(theta) N(theta; m, V), where d(theta) is the non-local penalty (e.g. d(theta) = prod(theta^(2r)) for the pMOM prior).
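As a small base-R illustration (not part of mombf itself), the pMOM penalty just described can be computed directly; the commented rnlp call sketches the corresponding prior-sampling usage, where tau=0.348 is an illustrative choice rather than a recommendation:

```r
# pMOM penalty from the Details section: d(theta) = prod(theta^(2r))
pmom_penalty <- function(theta, r = 1) prod(theta^(2 * r))

pmom_penalty(c(2, 3))  # 2^2 * 3^2 = 36

# With mombf loaded, sampling from d(theta) N(theta; m, V) looks like, e.g.:
# library(mombf)
# th <- rnlp(m = 0, V = matrix(1), priorCoef = momprior(tau = 0.348), niter = 1000)
```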
D. Rossell and D. Telesca. Non-local priors for high-dimensional estimation, 2014. http://arxiv.org/pdf/1402.5107v2.pdf
modelSelection to perform model selection and compute posterior model probabilities. For more details on prior specification see msPriorSpec-class.
# Simulate data: two active covariates, one inactive
x <- matrix(rnorm(100*3), nrow=100, ncol=3)
theta <- matrix(c(1,1,0), ncol=1)
y <- x %*% theta + rnorm(100)

# Model selection, then posterior samples via Bayesian model averaging
fit1 <- modelSelection(y=y, x=x, center=FALSE, scale=FALSE)
th <- rnlp(msfit=fit1, niter=100)
colMeans(th)  # BMA estimate of the regression coefficients