Predictive information criteria for Bayesian models fitted in JAGS using the function selection, pattern or hurdle
Efficient approximate leave-one-out cross-validation (LOO), deviance information criterion (DIC) and widely applicable information criterion (WAIC) for Bayesian models, calculated on the observed data.
pic(x, criterion = "dic", module = "total")
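A minimal sketch of the call, assuming fit is a missingHE object previously returned by selection, pattern or hurdle (the object name is illustrative):

# 'fit' is an illustrative missingHE object from selection(), pattern() or hurdle()
pic(fit)                                        # defaults shown in the usage above
pic(fit, criterion = "dic", module = "total")   # equivalent explicit call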
A named list containing different predictive information criteria results and quantities according to the value of criterion. In all cases, the measures are computed on the observed data for the specific modules of the model selected in module. A short sketch of inspecting these elements is given after the list of components.
Posterior mean deviance (only if criterion is 'dic').
Effective number of parameters, calculated with the formula used by JAGS (only if criterion is 'dic').
Deviance Information Criterion, calculated with the formula used by JAGS (only if criterion is 'dic').
Deviance evaluated at the posterior mean of the parameters, calculated with the formula used by JAGS (only if criterion is 'dic').
Expected log pointwise predictive density and standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic' or 'loo').
Effective number of parameters and standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic' or 'loo').
The leave-one-out information criterion and standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'loo').
The widely applicable information criterion and standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic').
A matrix containing the pointwise contributions of each of the above measures, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic' or 'loo').
A vector containing the estimates of the shape parameter \(k\) for the generalised Pareto fit to the importance ratios for each leave-one-out distribution, calculated on the observed data for the model nodes indicated in module (only if criterion is 'loo'). See loo for details about interpreting \(k\).
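A minimal sketch of inspecting the returned list, again assuming an illustrative fitted object fit; no element names are assumed here, they are simply listed with names() and str():

res_dic  <- pic(fit, criterion = "dic")                    # DIC-related quantities
res_waic <- pic(fit, criterion = "waic", module = "both")  # WAIC-related quantities
names(res_dic)   # which of the elements described above were returned
str(res_waic)    # summary measures, standard errors and the pointwise matrix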
A missingHE object containing the results of a Bayesian model fitted in cost-effectiveness analysis using the function selection, pattern or hurdle.
Type of information criterion to be produced. Available choices are 'dic' for the Deviance Information Criterion, 'waic' for the Widely Applicable Information Criterion, and 'looic' for the Leave-One-Out Information Criterion.
The modules with respect to which the information criteria should be computed. Available choices are 'total' for the whole model, 'e' for the effectiveness variables only, 'c' for the cost variables only, and 'both' for both outcome variables.
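A minimal sketch of the remaining criterion and module combinations, following the choices listed above (the fitted object fit is illustrative):

pic(fit, criterion = "waic",  module = "e")     # WAIC for the effectiveness variables only
pic(fit, criterion = "looic", module = "c")     # LOOIC for the cost variables only
pic(fit, criterion = "waic",  module = "both")  # WAIC for both outcome variables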
Andrea Gabrio
The Deviance Information Criterion (DIC), Leave-One-Out Information Criterion (LOOIC) and the Widely Applicable Information Criterion (WAIC) are methods for estimating
out-of-sample predictive accuracy from a Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameters.
DIC is computationally simple to calculate but it is known to have some problems, arising in part from it not being fully Bayesian in that it is based on a point estimate.
LOOIC can be computationally expensive but can be easily approximated using importance weights that are smoothed by fitting a generalised Pareto distribution to the upper tail
of the distribution of the importance weights. For more details about the methods used to compute LOOIC see the PSIS-LOO section in loo-package.
WAIC is fully Bayesian and closely approximates Bayesian cross-validation. Unlike DIC, WAIC is invariant to parameterisation and also works for singular models.
In finite cases, WAIC and LOO give similar estimates, but for influential observations WAIC underestimates the effect of leaving out one observation.
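As an illustration of the quantities described above (a sketch of the standard loo-package calls, not necessarily the internal code of pic), WAIC and PSIS-LOO can be obtained from a matrix of pointwise log-likelihood values with one row per posterior draw and one column per observation; the matrix below is simulated purely for the example:

set.seed(1)
# Simulated S x N log-likelihood matrix (100 posterior draws, 20 observations)
log_lik <- matrix(dnorm(rnorm(100 * 20), log = TRUE), nrow = 100, ncol = 20)
loo::waic(log_lik)  # elpd_waic, p_waic and waic, with standard errors
loo::loo(log_lik)   # elpd_loo, p_loo and looic, plus Pareto k diagnostics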
Plummer, M. (2003) JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling.
Vehtari, A., Gelman, A., Gabry, J. (2016a) Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. Advance online publication.
Vehtari, A., Gelman, A., Gabry, J. (2016b) Pareto smoothed importance sampling. arXiv preprint.
Gelman, A., Hwang, J., Vehtari, A. (2014) Understanding predictive information criteria for Bayesian models. Statistics and Computing 24, 997-1016.
Watanabe, S. (2010) Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11, 3571-3594.
jags, loo, waic
# For examples see the function selection, pattern or hurdle
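A hypothetical end-to-end sketch, not run here; the selection() arguments and the MenSS example data follow the missingHE documentation and may differ across package versions:

# fit <- selection(data = MenSS, model.eff = e ~ 1, model.cost = c ~ 1,
#                  model.me = me ~ 1, model.mc = mc ~ 1,
#                  dist_e = "norm", dist_c = "norm", type = "MAR",
#                  n.chains = 2, n.iter = 1000)
# pic(fit, criterion = "waic", module = "both")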