fairml (version 0.8)

fairml-package: Fair models in machine learning

Description

Fair machine learning models: estimation, tuning and prediction.

Author

Marco Scutari
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA)

Maintainer: Marco Scutari scutari@bnlearn.com

Details

fairml implements key algorithms for estimating machine learning models while enforcing fairness with respect to a set of observed sensitive (or protected) attributes.

Currently fairml implements the following algorithms (references below; a brief usage sketch follows the list):

  • nclm(): the non-convex formulation of fair linear regression model from Komiyama et al. (2018).

  • frrm(): the fair (linear) ridge regression model from Scutari, Panero and Proissl (2022).

  • fgrrm(): the fair generalized (linear) ridge regression model from Scutari, Panero and Proissl (2022), supporting the Gaussian, binomial, Poisson, multinomial and Cox (proportional hazards) families.

  • zlrm(): the fair logistic regression with covariance constraints from Zafar et al. (2019).

  • zlm(): the fair linear regression with covariance constraints from Zafar et al. (2019).
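
As an informal sketch of how these estimation functions are typically called (the communities.and.crime data set and the variable names below are taken from the package's examples and are only illustrative; argument names follow the frrm() documentation):

  library(fairml)

  # one of the example data sets shipped with fairml, used here for illustration.
  data(communities.and.crime)
  cc = communities.and.crime[complete.cases(communities.and.crime), ]

  # response, sensitive attributes and (remaining) predictors.
  r = cc[, "ViolentCrimesPerPop"]
  s = cc[, c("racepctblack", "PctForeignBorn")]
  p = cc[, setdiff(names(cc), c("ViolentCrimesPerPop", "racepctblack", "PctForeignBorn"))]

  # fair ridge regression, allowing the sensitive attributes to account for at
  # most 5% of the fit (the unfairness argument bounds the constraint).
  m = frrm(response = r, predictors = p, sensitive = s, unfairness = 0.05)
  coef(m)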

Furthermore, different fairness definitions can be used in frrm() and fgrrm() (a short example follows the list):

  • "sp-komiyama": the statistical parity fairness constraint from Komiyama et al. (2018);

  • "eo-komiyama": the analogous equality of opportunity constraint built on the proportion of variance (or deviance) explained by sensitive attributes;

  • "if-berk": the individual fairness constraint from Berk et al. (2017) adapted in Scutari, Panero and Proissl (2022);

  • user-provided functions for custom definitions.
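
A minimal sketch of switching fairness definitions, assuming the definition and family arguments from the frrm() and fgrrm() documentation and reusing r, p and s from the sketch above:

  # equality of opportunity instead of the default statistical parity.
  m.eo = frrm(response = r, predictors = p, sensitive = s,
              unfairness = 0.05, definition = "eo-komiyama")

  # the generalized model with an explicit family; "gaussian" matches the
  # continuous response used above.
  m.if = fgrrm(response = r, predictors = p, sensitive = s,
               unfairness = 0.05, definition = "if-berk", family = "gaussian")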

In addition, fairml implements diagnostic plots, cross-validation, prediction, and methods for most of the generics available for linear models fitted with lm() and glm(). Profile plots that trace key model and goodness-of-fit indicators at varying levels of fairness are available from fairness.profile.plot().
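
A hedged sketch of the profiling and cross-validation helpers (fairness.profile.plot() is named above; fairml.cv() and the argument names used here are assumptions based on the package documentation):

  # trace the regression coefficients as the fairness constraint is relaxed.
  fairness.profile.plot(response = r, predictors = p, sensitive = s,
                        unfairness = seq(0.05, 0.95, by = 0.05),
                        model = "frrm", type = "coefficients")

  # 10-fold cross-validation of the fair ridge regression at a fixed
  # unfairness level.
  fairml.cv(response = r, predictors = p, sensitive = s, method = "k-fold",
            k = 10, unfairness = 0.05, model = "frrm")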

References

Berk R, Heidari H, Jabbari S, Joseph M, Kearns M, Morgenstern J, Neel S, Roth A (2017). "A Convex Framework for Fair Regression". FATML.
https://www.fatml.org/media/documents/convex_framework_for_fair_regression.pdf

Komiyama J, Takeda A, Honda J, Shimao H (2018). "Nonconvex Optimization for Regression with Fairness Constraints". Proceedings of the 35th International Conference on Machine Learning (ICML), PMLR 80:2737--2746.
http://proceedings.mlr.press/v80/komiyama18a/komiyama18a.pdf

Scutari M, Panero F, Proissl M (2022). "Achieving Fairness with a Simple Ridge Penalty". Statistics and Computing, 32, 77.
https://link.springer.com/content/pdf/10.1007/s11222-022-10143-w.pdf

Zafar MB, Valera I, Gomez-Rodriguez M, Gummadi KP (2019). "Fairness Constraints: A Flexible Approach for Fair Classification". Journal of Machine Learning Research, 20(75):1--42.
https://www.jmlr.org/papers/volume20/18-262/18-262.pdf