Regression models enforcing fairness with a ridge penalty: a fair ridge regression and its extension to generalized linear models.
# a fair ridge regression model.
frrm(response, predictors, sensitive, unfairness,
definition = "sp-komiyama", lambda = 0, save.auxiliary = FALSE)
# a fair generalized ridge regression model.
fgrrm(response, predictors, sensitive, unfairness,
definition = "sp-komiyama", family = "binomial", lambda = 0,
save.auxiliary = FALSE)
response: a numeric vector, the response variable.

predictors: a numeric matrix or a data frame containing numeric and factor columns; the predictors.

sensitive: a numeric matrix or a data frame containing numeric and factor columns; the sensitive attributes.

unfairness: a positive number in [0, 1], how unfair the model is allowed to be. A value of 0 means the model is completely fair, while a value of 1 means the model is not constrained to be fair at all.

definition: a character string, the label of the definition of fairness used in fitting the model. Currently either "sp-komiyama" or "eo-komiyama". See below for details.

family (fgrrm() only): a character string, either "binomial" to fit a logistic regression or "gaussian" to fit a linear regression.

lambda: a non-negative number, a ridge-regression penalty coefficient. It defaults to zero.

save.auxiliary: a logical value, whether to save the fitted values and the residuals of the auxiliary model that constructs the decorrelated predictors. The default value is FALSE.
frrm() returns an object of class c("frrm", "fair.model"). fgrrm() returns an object of class c("fgrrm", "fair.model").
frrm() can accommodate different definitions of fairness, which can be selected via the definition argument.

"sp-komiyama" uses the same definition of fairness as nclm(): the model bounds the proportion of the variance that is explained by the sensitive attributes over the total explained variance. This falls within the definition of statistical parity.

"eo-komiyama" enforces equality of opportunity in a similar way: it regresses the fitted values against the sensitive attributes and the response, and it bounds the proportion of the variance explained by the sensitive attributes over the total explained variance in that model.
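The bounded quantity can be illustrated with base R: using the sequential sums of squares reported by anova(), the share of explained variance attributable to the sensitive attribute can be computed directly. This is a simplified sketch of the proportion being constrained, not the package's internal computation, and the simulated data are illustrative.

```r
set.seed(1)
n <- 500
s <- rnorm(n)                  # sensitive attribute
x <- 0.5 * s + rnorm(n)        # predictor, correlated with s
y <- x + 0.3 * s + rnorm(n)    # response

# sequential sums of squares, entering s first and x second.
fit <- lm(y ~ s + x)
ss  <- anova(fit)[["Sum Sq"]]

# proportion of the explained variance attributable to s; the fairness
# constraint bounds a quantity of this form by the unfairness argument.
r <- ss[1] / (ss[1] + ss[2])
r
```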
The algorithm works like this:

1. regress the predictors against the sensitive attributes;
2. construct a new set of predictors, decorrelated from the sensitive attributes, using the residuals of this regression;
3. regress the response against the decorrelated predictors and the sensitive attributes, using a ridge penalty to control the proportion of variance the sensitive attributes can explain with respect to the overall explained variance of the model.
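The decorrelation step above can be sketched in base R: the residuals of regressing a predictor on the sensitive attributes are, by construction, orthogonal to them. This is a simplified illustration of the auxiliary model with one predictor and one sensitive attribute, not the package's implementation.

```r
set.seed(2)
n <- 300
s <- rnorm(n)              # sensitive attribute
x <- 1.5 * s + rnorm(n)    # predictor, correlated with s
y <- x + rnorm(n)          # response

# step 1: regress the predictor on the sensitive attribute ...
aux <- lm(x ~ s)
# step 2: ... and keep the residuals as the decorrelated predictor.
x.dec <- residuals(aux)

# the decorrelated predictor is uncorrelated with the sensitive attribute,
cor(x.dec, s)    # essentially zero
# so in step 3 the sensitive attribute carries only the variance that the
# original predictor shared with it.
fit <- lm(y ~ x.dec + s)
```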
Both sensitive and predictors are standardized internally before estimating the regression coefficients, which are then rescaled back to match the original scales of the variables.
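The rescaling can be illustrated with a plain linear regression: a slope estimated on a standardized predictor maps back to the original scale by dividing by that predictor's standard deviation. This is a base-R sketch of the idea, not the package's code.

```r
set.seed(3)
x <- rnorm(100, mean = 5, sd = 2)
y <- 1 + 3 * x + rnorm(100)

# slope estimated on the standardized predictor ...
b.std <- coef(lm(y ~ scale(x)))[2]
# ... rescaled back to the original scale of x.
b.orig <- b.std / sd(x)

# matches the slope of the regression on the raw predictor.
coef(lm(y ~ x))[2]
```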
fgrrm() is the extension of frrm() to generalized linear models, currently implementing linear (family = "gaussian") and logistic (family = "binomial") regressions. fgrrm() is equivalent to frrm() with family = "gaussian". The definitions of fairness are identical between frrm() and fgrrm().
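A minimal usage sketch, assuming the fairml package providing these functions is installed; the simulated data and the unfairness level of 0.05 are illustrative choices, not defaults.

```r
library(fairml)

# simulate a numeric response, two predictors and one sensitive attribute.
set.seed(42)
n <- 200
s <- data.frame(s1 = rnorm(n))                   # sensitive attribute
p <- data.frame(x1 = rnorm(n), x2 = rnorm(n))    # predictors
y <- 2 * p$x1 - p$x2 + 0.5 * s$s1 + rnorm(n)     # response

# a fair ridge regression, allowing the sensitive attribute to account for
# at most 5% of the explained variance.
model <- frrm(response = y, predictors = p, sensitive = s,
              unfairness = 0.05)

# fgrrm() with family = "gaussian" is equivalent to frrm().
gmodel <- fgrrm(response = y, predictors = p, sensitive = s,
                unfairness = 0.05, family = "gaussian")
```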