
abn: Additive Bayesian Networks

The R package abn is a tool for Bayesian network analysis, a form of probabilistic graphical modeling. It derives from empirical data a directed acyclic graph (DAG) that describes the dependency structure between random variables. The package provides routines for structure learning and parameter estimation of additive Bayesian network models.

Installation

The most recent development version is available from GitHub and can be installed with:

devtools::install_github("furrer-lab/abn")

It is recommended to install abn within a virtual environment, e.g., using renv, which can be done with:

renv::install("bioc::graph")
renv::install("bioc::Rgraphviz")
renv::install("abn", dependencies = c("Depends", "Imports", "LinkingTo", "Suggests"))

Please note that the abn package is currently unavailable on CRAN. We are dedicated to providing a robust and reliable package, and we appreciate your understanding as we work towards making abn available on CRAN soon. [^1]

[^1]: The abn package includes certain features, such as multiprocessing and integration with the INLA package, that are limited or available only on specific CRAN flavors. While it would be possible to relax the testing process by, e.g., excluding tests of these functionalities, we believe that rigorous testing is important for reliable software development, especially for a package like abn that includes complex functionality. We have therefore implemented a testing framework similar to CRAN's to validate these functionalities during development, with the aim of maximizing the reliability of abn under various conditions.

Additional libraries

The following additional libraries are recommended to take full advantage of the abn features.

  • INLA is an R package used for model fitting. It is hosted separately from CRAN and is easy to install on common platforms (see the instructions on the INLA website):

install.packages("INLA", repos = c(getOption("repos"), INLA = "https://inla.r-inla-download.org/R/stable"), dep = TRUE)

  • Rgraphviz is a Bioconductor package used for plotting the network graphs. It can be installed via BiocManager:

if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("Rgraphviz", version = "3.8")

  • JAGS is a program for analyzing Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation. Its installation is platform-dependent and is therefore not covered here.
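
To check from within R which of these optional components are already available, the following minimal sketch uses only base R (JAGS itself is an external program and is therefore not detected this way):

# Report which optional R-side dependencies are installed (TRUE/FALSE)
c(INLA = requireNamespace("INLA", quietly = TRUE),
  Rgraphviz = requireNamespace("Rgraphviz", quietly = TRUE))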

Quickstart

Explore the basics of data analysis with additive Bayesian networks using the abn package through the simple examples below. The data sets required for these examples are included in the abn package.

For a deeper understanding, refer to the manual pages on the abn homepage, which include numerous examples. Key pages to visit are fitAbn(), buildScoreCache(), mostProbable(), and searchHillClimber(). Also, see the examples below for a quick overview of the package's capabilities.

Features

The R package abn provides routines for determining optimal additive Bayesian network models for a given data set. The core functionality is model selection: determining the most likely model for data comprising interdependent variables. The model selection process can incorporate expert knowledge by specifying structural constraints, such as which arcs are banned or retained.

The general workflow with abn follows a three-step process:

  1. Determine the model search space: The function buildScoreCache() builds a cache of pre-computed scores for each node and each permitted set of candidate parents.

For this, it is necessary to specify the data types of the variables in the data set and the structural constraints of the model (e.g., which arcs are banned or retained and the maximum number of parents per node).

  2. Structure learning: abn offers different structure learning algorithms:

    • The exact structure learning algorithm from Koivisto and Sood (2004) is implemented in C and can be called with the function mostProbable(), which finds the most probable DAG for a given data set.

    • The function searchHeuristic() provides a set of heuristic search algorithms implemented in R, including hill climbing, tabu search, and simulated annealing. searchHillClimber() searches for high-scoring DAGs using a random re-start greedy hill-climber heuristic implemented in C; it deviates slightly from the method originally presented by Heckerman et al. (1995) (for details, consult the help page ?abn::searchHillClimber).

  3. Parameter estimation: The function fitAbn() estimates the model's parameters based on the DAG from the previous step.

abn allows for two different model formulations, specified with the argument method:

  • method = "mle" fits a model under the frequentist paradigm using information-theoretic criteria to select the best model.

  • method = "bayes" estimates the posterior distribution of the model parameters based on two Laplace approximation methods, that is, a method for Bayesian inference and an alternative to Markov Chain Monte Carlo (MCMC): A standard Laplace approximation is implemented in the abn source code but switches in specific cases (see help page ?fitAbn) to the Integrated Nested Laplace Approximation from the INLA package requiring the installation thereof.

To generate new observations from a fitted ABN model, the function simulateAbn() simulates data based on the DAG and the parameters estimated in the previous step. simulateAbn() is available for both method = "mle" and method = "bayes" and requires JAGS to be installed.
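
Taken together, the three steps map onto three function calls. The following sketch mirrors Example 1 further down and uses the built-in toy data set g2b2c_data; the final simulateAbn() call is optional and assumes a working JAGS installation (see ?simulateAbn for its full argument list):

library(abn)

# 1. Build the score cache: distributions and constraints define the search space
dists <- list(G1 = "gaussian", B1 = "binomial", B2 = "binomial",
              C = "multinomial", G2 = "gaussian")
cache <- buildScoreCache(data.df = g2b2c_data, data.dists = dists,
                         method = "mle", max.parents = 2)

# 2. Structure learning: exact search for the most probable DAG
dag <- mostProbable(score.cache = cache)

# 3. Parameter estimation on the selected structure
fit <- fitAbn(object = dag, method = "mle")

# Optional: simulate new observations from the fitted model (requires JAGS)
# sim <- simulateAbn(object = fit)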

Supported Data Types

The abn package supports the following distributions for the variables in the network:

  • Gaussian distribution for continuous variables.

  • Binomial distribution for binary variables.

  • Poisson distribution for variables with count data.

  • Multinomial distribution for categorical variables (only available with method = "mle").

Unlike many other packages, abn does not restrict which combinations of parent and child distributions are allowed.
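
For illustration, the distribution list for a data frame with the (hypothetical) columns weight, vaccinated, litter_size, and breed could be declared as follows; the list names must match the column names passed to data.df:

dists <- list(weight = "gaussian",        # continuous
              vaccinated = "binomial",    # binary (two-level factor)
              litter_size = "poisson",    # count
              breed = "multinomial")      # categorical (method = "mle" only)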

Multilevel Models for Grouped Data Structures

The analysis of "hierarchical" or "grouped" data, in which observations are nested within higher-level units, requires statistical models with parameters that vary across groups (e.g. mixed-effect models).

abn can control for one-layer clustering, where observations are grouped into a single layer of clusters that are themselves assumed to be independent, while observations within a cluster may be correlated (e.g., students nested within schools, or repeated measurements over time for each patient). The argument group.var specifies the discrete variable that defines the group structure; the grouping is then accounted for through group-specific (random) intercepts, as described below.

For example, when studying student test scores across different schools, a varying-intercept model allows for the possibility that average test scores (the intercept) are higher in one school than in another due to factors specific to each school. This can be modeled in abn by setting the argument group.var to the variable containing the school names. The model is then fitted as a varying-intercept model, where the intercept is allowed to vary across schools but the slopes are assumed to be the same for all schools.
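
A minimal sketch of how such a school example could be set up; the data frame and variable names below are purely illustrative and not shipped with abn:

library(abn)

set.seed(1)
# Illustrative grouped data: 30 students in each of 10 schools
school_scores <- data.frame(school = factor(rep(paste0("school", 1:10), each = 30)),
                            hours = rnorm(300),                  # weekly study hours
                            lunch = factor(rbinom(300, 1, 0.4)), # subsidised lunch yes/no
                            score = rnorm(300))                  # test score

dists_school <- list(hours = "gaussian", lunch = "binomial", score = "gaussian")

# group.var names the clustering variable; it is not itself a node in the network
cache_grp <- buildScoreCache(data.df = school_scores,
                             data.dists = dists_school,
                             group.var = "school",
                             method = "mle",
                             max.parents = 2)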

Under the frequentist paradigm (method = "mle"), abn relies on the lme4 package to fit generalized linear mixed models (GLMMs) for Binomial, Poisson, and Gaussian distributed variables. For multinomially distributed variables, abn fits a multinomial baseline category logit model with random effects using the mclogit package. Currently, only one-layer clustering is supported (i.e., for method = "mle", this corresponds to a random intercept model).

With a Bayesian approach (method = "bayes"), abn relies on its own implementation of the Laplace approximation and on the INLA package to fit single-level hierarchical models for Binomial, Poisson, and Gaussian distributed variables. Multinomially distributed variables (see the section Supported Data Types) are not yet supported with method = "bayes".

Basic Background

Bayesian network modeling is a data analysis technique ideally suited to messy, highly correlated and complex datasets. This methodology is rather distinct from other forms of statistical modeling in that its focus is on structure discovery—determining an optimal graphical model that describes the interrelationships in the underlying processes that generated the data. It is a multivariate technique and can be used for one or many dependent variables. This is a data-driven approach, as opposed to relying only on subjective expert opinion to determine how variables of interest are interrelated (for example, structural equation modeling).

Below and on the package's website, we provide some cookbook-type examples of how to perform Bayesian network structure discovery analyses with observational data. The particular type of Bayesian network models considered here are additive Bayesian networks. These are rather different, mathematically speaking, from the standard form of Bayesian network models (for binary or categorical data) presented in the academic literature, which typically use a contingency table parametrization that is analytically elegant but arguably opaque to interpret. An additive Bayesian network model is simply a multidimensional regression model, directly analogous to generalized linear modeling but with all variables potentially dependent.

An example can be found in the American Journal of Epidemiology, where this approach was used to investigate risk factors for child diarrhea. A special issue of Preventive Veterinary Medicine on graphical modeling features several articles that use abn to fit epidemiological data. Introductions to this methodology can be found in Emerging Themes in Epidemiology and in Computers in Biology and Medicine where it is compared to other approaches.

What is an additive Bayesian network?

Additive Bayesian network (ABN) models are statistical models that use the principles of Bayesian statistics and graph theory. They provide a framework for representing data with multiple variables, known as multivariate data.

ABN models are a graphical representation of (Bayesian) multivariate regression. This form of statistical analysis enables the prediction of multiple outcomes from a given set of predictors while simultaneously accounting for the relationships between these outcomes.

In other words, additive Bayesian network models extend the concept of generalized linear models (GLMs), which are typically used to predict a single outcome, to scenarios with multiple dependent variables. This makes them a powerful tool for understanding complex, multivariate datasets.
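
In standard notation (LaTeX, not abn-specific code), the joint distribution factorizes over the DAG, and the additive formulation assigns each node a GLM-type model of its parents; for a Gaussian node this reads:

P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\left(X_i \mid \mathrm{pa}(X_i)\right)

X_i \mid \mathrm{pa}(X_i) \sim \mathcal{N}(\mu_i, \sigma_i^2), \qquad \mu_i = \alpha_i + \sum_{X_j \in \mathrm{pa}(X_i)} \beta_{ij} X_j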

The term Bayesian network is interpreted differently across various fields.

Bayesian network models often involve binary nodes, arguably the most frequently used type of Bayesian network. These models typically use a contingency table instead of an additive parameter formulation. This approach allows for mathematical elegance and enables key metrics like model goodness of fit and marginal posterior parameters to be estimated analytically (i.e., from a formula) rather than numerically (an approximation). However, this parametrization may not be parsimonious, and the interpretation of the model parameters is less straightforward than in the usual generalized linear model (GLM) type models, which are prevalent across all scientific disciplines.
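
For two binary nodes joined by an arc X -> Y, the two parametrizations can be sketched as follows (illustrative LaTeX notation, not abn output): the contingency table stores one probability per parent configuration, whereas the additive form uses an intercept and one effect per parent on the logit scale:

P(Y = 1 \mid X = 0) = \theta_0, \qquad P(Y = 1 \mid X = 1) = \theta_1

\operatorname{logit} P(Y = 1 \mid X) = \alpha + \beta X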

While this is a crucial practical distinction, it’s a relatively low-level technical one, as the primary aspect of BN modeling is that it’s a form of graphical modeling – a model of the data’s joint probability distribution. This joint – multidimensional – aspect makes this methodology highly attractive for complex data analysis and sets it apart from more standard regression techniques, such as GLMs, GLMMs, etc., which are only one-dimensional as they assume all covariates are independent. While this assumption is entirely reasonable in a classical experimental design scenario, it’s unrealistic for many observational studies in fields like medicine, veterinary science, ecology, and biology.

Examples

Example 1: Basic Usage

This basic example shows the overall workflow:

library(abn)

# Built-in toy dataset with two Gaussian variables G1 and G2, two Binomial variables B1 and B2, and one multinomial variable C
str(g2b2c_data)

# Define the distributions of the variables
dists <- list(G1 = "gaussian",
              B1 = "binomial",
              B2 = "binomial",
              C = "multinomial",
              G2 = "gaussian")


# Build the score cache
cacheMLE <- buildScoreCache(data.df = g2b2c_data,
                            data.dists = dists,
                            method = "mle",
                            max.parents = 2)

# Find the most probable DAG
dagMP <- mostProbable(score.cache = cacheMLE)

# Print the most probable DAG
print(dagMP)

# Plot the most probable DAG
plot(dagMP)

# Fit the most probable DAG
myfit <- fitAbn(object = dagMP,
                method = "mle")

# Print the fitted DAG
print(myfit)

Example 2: Restrict Model Search Space

Based on Example 1, we may know that the arc G1 -> G2 is not possible and that the arc C -> G2 must be present. This "expert knowledge" can be included in the model by banning the arc from G1 to G2 and retaining the arc from C to G2.

The ban and retain matrices are specified as adjacency matrices of 0/1 entries, where a 1 indicates that the corresponding arc is banned or retained, respectively. Row and column names must match the variable names in the data set. Each row corresponds to a child and each column to a parent: a 1 in row i and column j means that variable j is banned (or retained) as a parent of variable i. For example, in the ban matrix below, the 1 in the row of G2 and the column of G1 bans the arc G1 -> G2.

Further, we can restrict the maximum number of parents per node to 2.


# Ban the arc G1 -> G2 (row = child G2, column = parent G1)
banmat <- matrix(0, nrow = 5, ncol = 5, dimnames = list(names(dists), names(dists)))
banmat[5, 1] <- 1

# Always retain the arc C -> G2 (row = child G2, column = parent C)
retainmat <- matrix(0, nrow = 5, ncol = 5, dimnames = list(names(dists), names(dists)))
retainmat[5, 4] <- 1

# Limit the maximum number of parents to 2
max.par <- 2

# Build the score cache
cacheMLE_small <- buildScoreCache(data.df = g2b2c_data,
                                  data.dists = dists,
                                  method = "mle",
                                  dag.banned = banmat,
                                  dag.retained = retainmat,
                                  max.parents = max.par)
print(paste("Without restrictions from example 1: ", nrow(cacheMLE$node.defn)))
print(paste("With restrictions as in example 2: ", nrow(cacheMLE_small$node.defn)))

Example 3: Grouped Data Structures

Depending on the data structure, we may want to control for one-layer clustering, where observations are grouped into a single layer of clusters that are themselves assumed to be independent, but observations within the clusters may be correlated (e.g., students nested within schools, measurements over time for each patient, etc.).

Currently, abn supports only one-layer clustering.


# Built-in toy data set
str(g2pbcgrp)

# Define the distributions of the variables
dists <- list(G1 = "gaussian",
              P = "poisson",
              B = "binomial",
              C = "multinomial",
              G2 = "gaussian") # group is not among the list of variable distributions

# Ban arcs such that C has only B and P as parents
ban.mat <- matrix(0, nrow = 5, ncol = 5, dimnames = list(names(dists), names(dists)))
ban.mat[4, 1] <- 1
ban.mat[4, 4] <- 1
ban.mat[4, 5] <- 1

# Build the score cache
cache <- buildScoreCache(data.df = g2pbcgrp,
                         data.dists = dists,
                         group.var = "group",
                         dag.banned = ban.mat,
                         method = "mle",
                         max.parents = 2)

# Find the most probable DAG
dag <- mostProbable(score.cache = cache)

# Plot the most probable DAG
plot(dag)

# Fit the most probable DAG
fit <- fitAbn(object = dag,
              method = "mle")

# Plot the fitted DAG
plot(fit)

# Print the fitted DAG
print(fit)

Example 4: Using INLA vs internal Laplace approximation

Under a Bayesian approach, abn automatically switches to the Integrated Nested Laplace Approximation from the INLA package if the internal Laplace approximation fails to converge. However, we can also force the use of INLA by setting the argument control=list(max.mode.error=100).

The following example shows that the results are very similar. It also shows how to constrain arcs as formula objects and how to specify different parent limits for each node separately.

library(abn)

# Subset of the built-in dataset, see ?ex0.dag.data
mydat <- ex0.dag.data[,c("b1","b2","g1","g2","b3","g3")] ## take a subset of cols

# setup distribution list for each node
mydists <- list(b1="binomial", b2="binomial", g1="gaussian",
                g2="gaussian", b3="binomial", g3="gaussian")

# Structural constraints
## ban arc from b2 to b1
## always retain arc from g2 to g1
## parent limits - can be specified for each node separately
max.par <- list("b1"=2, "b2"=2, "g1"=2, "g2"=2, "b3"=2, "g3"=2)

# now build the cache of pre-computed scores according to the structural constraints
res.c <- buildScoreCache(data.df=mydat, data.dists=mydists,
                         dag.banned= ~b1|b2, 
                         dag.retained= ~g1|g2, 
                         max.parents=max.par)


# Repeat, but using R-INLA. The mlik values should be virtually identical.
if(requireNamespace("INLA", quietly = TRUE)){
  res.inla <- buildScoreCache(data.df=mydat, data.dists=mydists,
                              dag.banned= ~b1|b2, # ban arc from b2 to b1
                              dag.retained= ~g1|g2, # always retain arc from g2 to g1
                              max.parents=max.par,
                              control=list(max.mode.error=100)) # force the use of INLA
  
  ## comparison - very similar
  difference <- res.c$mlik - res.inla$mlik
  summary(difference)
}

Contributing

We greatly appreciate contributions from the community and are excited to welcome you to the development process of the abn package. Here are some guidelines to help you get started:

  1. Seeking Support:

If you need help with using the abn package, you can seek support by creating a new issue on our GitHub repository. Please describe your problem in detail and include a minimal reproducible example if possible.

  2. Reporting Issues or Problems:

If you encounter any issues or problems with the software, please report them by creating a new issue on our GitHub repository. When reporting an issue, try to include as much detail as possible, including steps to reproduce the issue, your operating system and R version, and any error messages you received.

  3. Software Contributions:

We encourage contributions directly via pull requests on our GitHub repository. Before starting your work, please first create an issue describing the contribution you wish to make. This allows us to discuss and agree on the best way to integrate your contribution into the package.

By participating in this project, you agree to abide by our code of conduct. We are committed to making participation in this project a respectful and harassment-free experience for everyone.

Citation

If you use abn in your research, please cite it as follows:

> citation("abn")
To cite the methodology of the R package 'abn' use:

  Kratzer G, Lewis F, Comin A, Pittavino M, Furrer R (2023). “Additive Bayesian Network Modeling with the R Package abn.” _Journal of Statistical Software_,
  *105*(8), 1-41. doi:10.18637/jss.v105.i08 <https://doi.org/10.18637/jss.v105.i08>.

To cite an example of a typical ABN analysis use:

  Kratzer, G., Lewis, F.I., Willi, B., Meli, M.L., Boretti, F.S., Hofmann-Lehmann, R., Torgerson, P., Furrer, R. and Hartnack, S. (2020). Bayesian Network
  Modeling Applied to Feline Calicivirus Infection Among Cats in Switzerland. Frontiers in Veterinary Science, 7, 73

To cite the software implementation of the R package 'abn' use:

  Furrer, R., Kratzer, G. and Lewis, F.I. (2023). abn: Modelling Multivariate Data with Additive Bayesian Networks. R package version 2.7-2.
  https://CRAN.R-project.org/package=abn

License

The abn package is licensed under the GNU General Public License v3.0.

Code of Conduct

Please note that the abn project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Applications

The abn website provides a comprehensive set of documented case studies, numerical accuracy/quality assurance exercises, and additional documentation.

Technical articles

Application articles

Workshops

Causality:

  • 4 December 2018, Beate Sick & Gilles Kratzer, talk at the 1st Causality workshop: Bayesian Networks meet Observational data. (UZH, Switzerland)

ABN modeling

Presentations

  • 4 October 2018, talk at Nutricia (Danone). Multivariable analysis: variable and model selection in system epidemiology. (Utrecht, Netherlands)

  • 30 May 2018, Brown Bag Seminar at ZHAW. Presentation: Bayesian Networks Learning in a Nutshell. (Winterthur, Switzerland)


Functions in abn (3.1.1)

  • build.control: Control the iterations in buildScoreCache
  • abn-package: abn Package
  • AIC.abnFit: Print AIC of objects of class abnFit
  • BIC.abnFit: Print BIC of objects of class abnFit
  • check.which.valid.nodes: Set of simple checks on the list given as parent limits
  • Cfunctions: Documentation of C Functions
  • check.valid.buildControls: Simple check on the control parameters
  • FCV: Dataset related to Feline calicivirus infection among cats in Switzerland
  • adg: Dataset related to average daily growth performance and abattoir findings in commercial pig production
  • abn.version: abn Version Information
  • check.valid.fitControls: Simple check on the control parameters
  • expit_cpp: expit function
  • bern_bugs: BUGS code for Bernoulli response
  • eval.across.grid: Function to get marginal across an equal grid
  • check.valid.data: Set of simple commonsense validity checks on the data.df and data.dists arguments
  • calc.node.inla.glm: Fit a given regression using INLA
  • check.valid.groups: Simple check on the grouping variable
  • compareEG: Compare two DAGs or EGs
  • ex0.dag.data: Synthetic validation data set for use with abn library examples
  • buildScoreCache: Build a cache of goodness of fit metrics for each node in a DAG, possibly subject to user-defined restrictions
  • categorical_bugs: BUGS code for Categorical response
  • entropyData: Computes an Empirical Estimation of the Entropy from a Table of Counts
  • check.valid.dag: Set of simple commonsense validity checks on the directed acyclic graph definition matrix
  • check.valid.parents: Set of simple checks on the given parent limits
  • factorial: Factorial
  • discretization: Discretization of a Possibly Continuous Data Frame of Random Variables based on their distribution
  • forLoopContentFitBayes: Regress each node on its parents
  • expit: expit of proportions
  • calc.node.inla.glmm: Fit a given regression using INLA
  • .onAttach: Prints start-up message
  • createAbnDag: Make DAG of class "abnDag"
  • ex7.dag.data: Validation data set for use with abn library examples
  • essentialGraph: Construct the essential graph
  • ex5.dag.data: Validation data set for use with abn library examples
  • getMSEfromModes: Extract Standard Deviations from all Gaussian Nodes
  • gauss_bugs: BUGS code for Gaussian response
  • ex6.dag.data: Validation data set for use with abn library examples
  • fitAbn: Fit an additive Bayesian network model
  • g2pbcgrp: Toy Data Set for Examples in README
  • getMargsINLA: Function to extract marginals from INLA output
  • get.quantiles: Function to extract quantiles from INLA output
  • logLik.abnFit: Print logLik of objects of class abnFit
  • ex1.dag.data: Synthetic validation data set for use with abn library examples
  • fit.control: Control the iterations in fitAbn
  • coef.abnFit: Print coefficients of objects of class abnFit
  • nobs.abnFit: Print number of observations of objects of class abnFit
  • logit_cpp: logit functions
  • get.var.types: Create ordered vector with integers denoting the distribution
  • logit: Logit of proportions
  • find.next.left.x: Find next X evaluation point
  • irls_binomial_cpp_fast: Fast Iterative Reweighed Least Square algorithm for Binomials
  • linkStrength: Returns the strengths of the edge connections in a Bayesian network learned from observational data
  • irls_binomial_cpp_br: BR Iterative Reweighed Least Square algorithm for Binomials
  • or: Odds Ratio from a matrix
  • irls_poisson_cpp_fast: Fast Iterative Reweighed Least Square algorithm for Poissons
  • factorial_fast: Fast Factorial
  • ex2.dag.data: Synthetic validation data set for use with abn library examples
  • miData: Empirical Estimation of the Entropy from a Table of Counts
  • mi_cpp: Mutual Information
  • plot.abnDag: Plots DAG from an object of class abnDag
  • plot.abnFit: Plot objects of class abnFit
  • pigs.vienna: Dataset related to diseases present in 'finishing pigs', animals about to enter the human food chain at an abattoir
  • print.abnMostprobable: Print objects of class abnMostprobable
  • family.abnFit: Print family of objects of class abnFit
  • strsplits: Recursive string splitting
  • std.area.under.grid: Standard Area Under the Marginal
  • odds: Probability to odds
  • compareDag: Compare two DAGs or EGs
  • rank_cpp: Rank of a matrix
  • print.abnDag: Print objects of class abnDag
  • ex4.dag.data: Validation data set for use with abn library examples
  • ex3.dag.data: Validation data set for use with abn library examples
  • print.abnFit: Print objects of class abnFit
  • getModeVector: Function to extract the mode from INLA output
  • print.abnHeuristic: Print objects of class abnHeuristic
  • makebugs: Make BUGS model from fitted DAG
  • irls_binomial_cpp_fast_br: Fast BR Iterative Reweighed Least Square algorithm for Binomials
  • irls_binomial_cpp: Iterative Reweighed Least Square algorithm for Binomials
  • infoDag: Compute standard information for a DAG
  • formula_abn: Formula to adjacency matrix
  • print.abnCache: Print objects of class abnCache
  • searchHillClimber: Find high-scoring directed acyclic graphs using heuristic search
  • simulateAbn: Simulate data from a fitted additive Bayesian network
  • g2b2c_data: Toy Data Set for Examples in README
  • print.abnHillClimber: Print objects of class abnHillClimber
  • summary.abnDag: Prints summary statistics from an object of class abnDag
  • simulateDag: Simulate a DAG with arbitrary arc density
  • summary.abnFit: Print summary of objects of class abnFit
  • irls_gaussian_cpp: Iterative Reweighed Least Square algorithm for Gaussians
  • pois_bugs: BUGS code for Poisson response
  • summary.abnMostprobable: Print summary of objects of class abnMostprobable
  • skewness: Computes skewness of a distribution
  • getmarginals: Internal function called by fitAbn.bayes
  • irls_poisson_cpp: Iterative Reweighed Least Square algorithm for Poissons
  • irls_gaussian_cpp_fast: Fast Iterative Reweighed Least Square algorithm for Gaussians
  • plot.abnHillClimber: Plot objects of class abnHillClimber
  • plot.abnHeuristic: Plot objects of class abnHeuristic
  • makebugsGroup: Make BUGS model from fitted DAG with grouping
  • modes2coefs: Convert modes to fitAbn.mle$coefs structure
  • mb: Compute the Markov blanket
  • plot.abnMostprobable: Plot objects of class abnMostprobable
  • scoreContribution: Compute the score contribution of each observation in a network
  • tidy.cache: Tidy up cache
  • mostProbable: Find most probable DAG structure
  • toGraphviz: Convert a DAG into graphviz format
  • validate_dists: Check for valid distribution
  • plotAbn: Plot an ABN graphic
  • searchHeuristic: A family of heuristic algorithms that aims at finding high-scoring directed acyclic graphs
  • var33: Simulated dataset from a DAG comprising 33 variables
  • validate_abnDag: Check for valid DAG of class abnDag