rethinking (version 1.59)

WAIC: Information Criteria and Pareto-Smoothed Importance Sampling Cross-Validation

Description

Computes WAIC, DIC, and PSIS-LOO for map and map2stan model fits. In addition, WAIC and PSIS-LOO can be calculated for Stan model fits (see details).

Usage

WAIC( object , n=1000 , refresh=0.1 , pointwise=FALSE , ... )
LOO( object , n=1000 , refresh=0.1 , pointwise=FALSE , ... )
DIC( object , ... )

Arguments

object

Object of class map or map2stan

n

Number of posterior samples to use in computing WAIC. Set n=0 to use all samples in a map2stan fit.

refresh

Refresh interval for progress display. Set to refresh=0 to suppress display.

pointwise

If TRUE, return a vector of pointwise WAIC (or LOO) values, one for each observation. This is useful for computing standard errors.

...

Other parameters to pass to specific methods

Value

A numeric estimate of the criterion. For WAIC, the lppd and pWAIC components are reported as attributes (see Details). When pointwise=TRUE, WAIC and LOO return a vector of per-observation values instead.

Details

These functions use the samples and model definition from a map or map2stan fit to compute the Widely Applicable Information Criterion (WAIC), Deviance Information Criterion (DIC), or Pareto-smoothed importance-sampling leave-one-out cross-validation estimate (PSIS-LOO).
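
For map fits the calls are direct; as a minimal sketch (assuming the base R cars dataset and weakly informative priors chosen only for illustration):

m <- map(
    alist(
        dist ~ dnorm( mu , sigma ),
        mu <- a + b*speed,
        a ~ dnorm( 0 , 100 ),
        b ~ dnorm( 0 , 10 ),
        sigma ~ dunif( 0 , 30 )
    ) , data=cars )

WAIC(m)   # out-of-sample deviance estimate
DIC(m)    # Deviance Information Criterion for the same fit
LOO(m)    # PSIS-LOO estimate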

WAIC is an estimate of out-of-sample relative K-L divergence (KLD), and it is defined as:

$$WAIC = -2(lppd - pWAIC)$$

Components lppd (log pointwise predictive density) and pWAIC (the effective number of parameters) are reported as attributes. See Gelman et al 2013 for definitions and formulas. This function uses the variance definition for pWAIC.
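
For illustration, these quantities can be computed directly from a log-likelihood matrix. The sketch below assumes a matrix named ll with posterior samples on rows and observations on columns; it follows the formula above rather than the package's internal code, which typically uses a numerically stable log-sum-exp for the lppd term.

# minimal sketch of the formula above; `ll` is an S x N log-likelihood matrix
lppd  <- apply( ll , 2 , function(x) log( mean( exp(x) ) ) )  # log pointwise predictive density
pWAIC <- apply( ll , 2 , var )                                # variance definition of pWAIC
waic_vec <- -2*( lppd - pWAIC )                               # pointwise values, as with pointwise=TRUE
sum( waic_vec )                                               # WAIC for the whole sample
sqrt( length(waic_vec) * var(waic_vec) )                      # a standard error for WAIC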

PSIS-LOO is another estimate of out-of-sample relative K-L divergence. It is computed by the loo package. See Vehtari et al 2015 for definitions and computation.

In practice, WAIC and PSIS-LOO are extremely similar estimates of KLD.

Both WAIC and LOO have methods for stanfit models, provided the posterior contains a log-likelihood matrix (samples on rows, observations on columns) named log_lik. See example.
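
Because the computation only needs that matrix, PSIS-LOO can also be obtained from the loo package directly. A sketch, assuming a stanfit object like m1s from the example below:

library(loo)
ll <- extract_log_lik( m1s , parameter_name="log_lik" )  # samples x observations matrix
loo( ll )                                                # PSIS-LOO estimate with Pareto k diagnostics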

References

Watanabe, S. 2010. Asymptotic equivalence of Bayes cross validation and Widely Applicable Information Criterion in singular learning theory. Journal of Machine Learning Research 11:3571-3594.

Gelman, A., J. Hwang, and A. Vehtari. 2013. Understanding predictive information criteria for Bayesian models.

Vehtari, A., A. Gelman, and J. Gabry. 2015. Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models.

See Also

map, map2stan, link, loo

Examples

library(rethinking)   # also attaches rstan, which provides stan()
data(chimpanzees)
d <- chimpanzees

# data list for Stan; condition is not used by this model and is ignored by stan()
dat <- list(
    y = d$pulled_left,
    prosoc = d$prosoc_left,
    condition = d$condition,
    N = nrow(d)
)

# Stan code for a logistic regression of pulled_left on prosoc_left.
# Note: this uses older Stan syntax ('<-' assignment, binomial_logit_log);
# current Stan versions use '=' and binomial_logit_lpmf instead.
m1s_code <- '
data{
    int<lower=1> N;
    int y[N];
    int prosoc[N];
}
parameters{
    real a;
    real bP;
}
model{
    vector[N] p;
    bP ~ normal( 0 , 1 );
    a ~ normal( 0 , 10 );
    for ( i in 1:N ) {
        p[i] <- a + bP * prosoc[i];
    }
    y ~ binomial_logit( 1 , p );
}
generated quantities{
    // log_lik holds one pointwise log-likelihood per observation;
    // WAIC() and LOO() on a stanfit require this matrix (see Details)
    vector[N] p;
    vector[N] log_lik;
    for ( i in 1:N ) {
        p[i] <- a + bP * prosoc[i];
        log_lik[i] <- binomial_logit_log( y[i] , 1 , p[i] );
    }
}
'

# compile and sample with rstan
m1s <- stan( model_code=m1s_code , data=dat , chains=2 , iter=2000 )

# information criteria computed from the log_lik matrix in the stanfit
WAIC(m1s)

LOO(m1s)
