iml (version 0.11.1)

Shapley: Prediction explanations with game theory

Description

Shapley computes feature contributions for single predictions with the Shapley value, an approach from cooperative game theory. The feature values of an instance cooperate to achieve the prediction. The Shapley value fairly distributes the difference between the instance's prediction and the dataset's average prediction among the features.
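
This fair distribution implies the efficiency (additivity) property: the phi values of all features sum, up to sampling error, to the difference between the instance's prediction and the average prediction. A minimal check of this property, assuming a fitted regression Shapley object `shapley` as in the Examples below:

# Shapley values are additive: the phis sum (up to sampling error) to the
# difference between the instance's prediction and the average prediction.
sum(shapley$results$phi)
shapley$y.hat.interest - shapley$y.hat.average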

Super class

iml::InterpretationMethod -> Shapley

Public fields

x.interest

data.frame
Single row with the instance to be explained.

y.hat.interest

numeric
Predicted value for instance of interest.

y.hat.average

numeric(1)
Average predicted value for data X.

sample.size

numeric(1)
The number of times coalitions/marginals are sampled from data X. The higher the value, the more accurate the explanations become.
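
A minimal sketch of inspecting these fields, assuming a fitted Shapley object `shapley` as in the Examples below:

shapley$x.interest      # single-row data.frame with the explained instance
shapley$y.hat.interest  # its predicted value
shapley$y.hat.average   # average prediction over the data
shapley$sample.size     # number of Monte Carlo samples used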

Methods

Inherited methods

iml::InterpretationMethod$plot()
iml::InterpretationMethod$print()

Method new()

Create a Shapley object.

Usage

Shapley$new(predictor, x.interest = NULL, sample.size = 100)

Arguments

predictor

Predictor
The object (created with Predictor$new()) holding the machine learning model and the data.

x.interest

data.frame
Single row with the instance to be explained.

sample.size

numeric(1)
The number of Monte Carlo samples for estimating the Shapley value.

Returns

data.frame
data.frame with the Shapley values (phi) per feature.
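
A minimal usage sketch, with `mod` and `X` as in the Examples below; a larger sample.size reduces the Monte Carlo error of the phi estimates at the cost of runtime:

# 500 instead of the default 100 samples for more stable estimates
shapley <- Shapley$new(mod, x.interest = X[1, ], sample.size = 500)
head(shapley$results)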


Method explain()

Set a new data point to explain.

Usage

Shapley$explain(x.interest)

Arguments

x.interest

data.frame
Single row with the instance to be explained.
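
A minimal usage sketch, with `shapley` and `X` as in the Examples below; explain() replaces x.interest and recomputes the results in place:

shapley$explain(X[10, ])
shapley$results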


Method clone()

The objects of this class are cloneable with this method.

Usage

Shapley$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
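
A minimal sketch, with `shapley` and `X` as in the Examples below; cloning keeps the original explanation intact while the copy is re-used:

shapley2 <- shapley$clone()
shapley2$explain(X[3, ])
# `shapley` still holds the explanation of the first instance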

Details

For more details on the algorithm, see https://christophm.github.io/interpretable-ml-book/shapley.html
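
The estimator is the sampling algorithm of Strumbelj and Kononenko (2014). Below is a compact, self-contained sketch of the idea for a single feature; the names (predict_fun, X, x, feature) are hypothetical, and this is not the iml-internal implementation:

shapley_mc <- function(predict_fun, X, x, feature, m = 100) {
  phi <- numeric(m)
  for (i in seq_len(m)) {
    z <- X[sample(nrow(X), 1), ]  # draw a random instance z from the data
    ord <- sample(names(X))       # draw a random feature order
    pos <- match(feature, ord)
    from.x <- ord[seq_len(pos)]   # features up to and including `feature`
    b1 <- z
    b1[from.x] <- x[from.x]       # these features come from x, the rest from z
    b2 <- b1
    b2[feature] <- z[feature]     # identical coalition, but `feature` from z
    phi[i] <- predict_fun(b1) - predict_fun(b2)  # marginal contribution
  }
  mean(phi)  # Monte Carlo estimate of the Shapley value for `feature`
}

Averaging over m such draws per feature is what the sample.size argument controls in this class.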

References

Strumbelj, E., Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3), 647-665. https://doi.org/10.1007/s10115-013-0679-x

See Also

Shapley

A different way to explain predictions: LocalModel

Examples

library("rpart")
# First we fit a machine learning model on the Boston housing data
data("Boston", package = "MASS")
rf <- rpart(medv ~ ., data = Boston)
X <- Boston[-which(names(Boston) == "medv")]
mod <- Predictor$new(rf, data = X)

# Then we explain the first instance of the dataset with the Shapley method:
x.interest <- X[1, ]
shapley <- Shapley$new(mod, x.interest = x.interest)
shapley

# Look at the results in a table
shapley$results
# Or as a plot
plot(shapley)

# Explain another instance
shapley$explain(X[2, ])
plot(shapley)
if (FALSE) {
# Shapley() also works with multiclass classification
rf <- rpart(Species ~ ., data = iris)
X <- iris[-which(names(iris) == "Species")]
mod <- Predictor$new(rf, data = X, type = "prob")

# Then we explain the first instance of the dataset with the Shapley() method:
shapley <- Shapley$new(mod, x.interest = X[1, ])
shapley$results
plot(shapley)

# You can also focus on one class
mod <- Predictor$new(rf, data = X, type = "prob", class = "setosa")
shapley <- Shapley$new(mod, x.interest = X[1, ])
shapley$results
plot(shapley)
}