
iml (version 0.9.0)

FeatureEffect: Effect of a feature on predictions

Description

FeatureEffect computes and plots (individual) feature effects of prediction models.

Format

R6Class object.

Usage

effect = FeatureEffect$new(predictor, feature, method = "ale",
    grid.size = 20, center.at = NULL, run = TRUE)

plot(effect)
effect$results
print(effect)
effect$set.feature("x2")

Arguments

For FeatureEffect$new():

predictor:

(Predictor) The object (created with Predictor$new()) holding the machine learning model and the data.

feature:

(`character(1)` | `character(2)` | `numeric(1)` | `numeric(2)`) The feature name or index for which to compute the effects.

method:

(`character(1)`) 'ale' for accumulated local effects (the default), 'pdp' for partial dependence plot, 'ice' for individual conditional expectation curves, 'pdp+ice' for partial dependence plot and ice curves within the same plot.

center.at:

(`numeric(1)`) Value at which the plot should be centered. Ignored in the case of two features.

grid.size:

(`numeric(1)` | `numeric(2)`) The size of the grid for evaluating the predictions.
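
A minimal sketch of how these constructor arguments fit together, assuming a Predictor object `mod` has already been created (e.g. from a random forest on the Boston data, as in the Examples below):

# Not run: assumes `mod` is a Predictor built as in the Examples below.
eff.ale = FeatureEffect$new(mod, feature = "rm")        # ALE, the default method
eff.pdp = FeatureEffect$new(mod, feature = "rm", method = "pdp",
    grid.size = 30)                                     # PDP on a finer grid
eff.pdp$plot()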

Fields

method:

(`character(1)`) 'ale' for accumulated local effects, 'pdp' for partial dependence plot, 'ice' for individual conditional expectation curves, 'pdp+ice' for partial dependence plot and ice curves within the same plot.

feature.name:

(`character(1)` | `character(2)`) The names of the features for which the partial dependence was computed.

feature.type:

(`character(1)` | `character(2)`) The detected types of the features, either "categorical" or "numerical".

grid.size:

(`numeric(1)` | `numeric(2)`) The size of the grid.

center.at:

(`numeric(1)` | `character(1)`) The value for the centering of the plot. Numeric for numeric features, and the level name for factors.

n.features:

(`numeric(1)`) The number of features (either 1 or 2)

predictor:

(Predictor) The prediction model that was analysed.

results:

(data.frame) data.frame with the grid of the feature of interest and the predicted \(\hat{y}\). Can be used for creating custom effect plots.

Methods

center()

method to set the value at which the ice computations are centered. See examples.

set.feature()

method to get/set the feature(s) for which to compute the effect. See examples for usage.

plot()

method to plot the effects. See plot.FeatureEffect

predict()

method to predict the marginal outcome given a feature. Accepts a data.frame with the feature or a vector. Returns the values of the effect curves at the given values.

clone()

[internal] method to clone the R6 object.

initialize()

[internal] method to initialize the R6 object.
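
A minimal sketch (not part of the original usage) of how the non-internal methods can be combined, assuming `eff` is a FeatureEffect with method "pdp+ice" for the Boston feature "rm", as created in the Examples below:

# Not run: assumes `eff` uses method "pdp+ice" on the Boston data.
eff$set.feature("lstat")          # reuse the object for another feature
eff$center(min(Boston$lstat))     # center the ICE/PDP curves at a reference value
plot(eff)                         # dispatches to eff$plot()
eff$predict(c(5, 10, 15))         # effect values at the given feature values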

Details

The FeatureEffect class computes the effect a feature has on the prediction. Different methods are implemented:

  • Accumulated Local Effect (ALE) plots

  • Partial Dependence Plots (PDPs)

  • Individual Conditional Expectation (ICE) curves

Accumulated local effects and partial dependence plots both show the average model prediction over the feature. The difference is that ALE plots are computed as accumulated differences over the conditional distribution, while partial dependence plots average over the marginal distribution. ALE plots are preferable to PDPs, because they are faster to compute and unbiased when features are correlated.
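
For reference (notation roughly follows the references below), the partial dependence function of Friedman (2001) is estimated by averaging the prediction \(\hat{f}\) over the observed values of the remaining features \(x_C\):

\[ \hat{f}_S(x_S) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}\left(x_S, x_C^{(i)}\right) \]

The (uncentered) ALE estimate of Apley (2016) instead splits the range of feature \(j\) into intervals \(N_j(k)\) containing \(n_j(k)\) observations and accumulates the average prediction differences within each interval:

\[ \hat{\tilde{f}}_j(x) = \sum_{k=1}^{k_j(x)} \frac{1}{n_j(k)} \sum_{i:\, x_j^{(i)} \in N_j(k)} \left[ \hat{f}\left(z_{k,j}, x^{(i)}_{\setminus j}\right) - \hat{f}\left(z_{k-1,j}, x^{(i)}_{\setminus j}\right) \right] \]

The ALE curve is then centered so that it averages to zero over the data.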

ALE plots for categorical features are automatically ordered by the similarity of the categories based on the distribution of the other features for instances in a category. When the feature is an ordered factor, the ALE plot leaves the order as is.
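
A minimal sketch (not part of the package examples) of an ALE plot for a categorical feature, here with Boston$chas recoded as a factor:

# Not run: ALE for a categorical feature.
if (require("randomForest")) {
  data("Boston", package = "MASS")
  Boston$chas = factor(Boston$chas)    # treat chas as categorical
  rf = randomForest(medv ~ ., data = Boston, ntree = 50)
  mod = Predictor$new(rf, data = Boston)
  eff = FeatureEffect$new(mod, feature = "chas", method = "ale")
  plot(eff)                            # levels are ordered by similarity
}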

Individual conditional expectation curves describe how, for a single observation, the prediction changes when the feature changes and can be combined with partial dependence plots.
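
A minimal sketch (assuming the Boston Predictor `mod` from the Examples below): centering the ICE curves at the smallest observed value of "rm" makes the individual curves directly comparable:

# Not run: centered ICE curves on top of the partial dependence curve.
eff = FeatureEffect$new(mod, feature = "rm", method = "pdp+ice",
    center.at = min(Boston$rm))
plot(eff)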

To learn more about accumulated local effects, read the Interpretable Machine Learning book: https://christophm.github.io/interpretable-ml-book/ale.html

And for the partial dependence plot: https://christophm.github.io/interpretable-ml-book/pdp.html

And for individual conditional expectation: https://christophm.github.io/interpretable-ml-book/ice.html

References

Apley, D. W. 2016. "Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models." ArXiv Preprint.

Friedman, J.H. 2001. "Greedy Function Approximation: A Gradient Boosting Machine." Annals of Statistics 29: 1189-1232.

Goldstein, A., Kapelner, A., Bleich, J., and Pitkin, E. (2013). Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation, 1-22. https://doi.org/10.1080/10618600.2014.907095

See Also

plot.FeatureEffect

Examples

# NOT RUN {
# We train a random forest on the Boston dataset:
if (require("randomForest")) {
data("Boston", package  = "MASS")
rf = randomForest(medv ~ ., data = Boston, ntree = 50)
mod = Predictor$new(rf, data = Boston)

# Compute the accumulated local effects for the first feature
eff = FeatureEffect$new(mod, feature = "rm", grid.size = 30)
eff$plot()

# Again, but this time with a partial dependence plot and ice curves
eff = FeatureEffect$new(mod, feature = "rm", method = "pdp+ice", grid.size = 30)
plot(eff)

# Since the result is a ggplot object, you can extend it: 
if (require("ggplot2")) {
 plot(eff) + 
 # Adds a title
 ggtitle("Partial dependence") + 
 # Adds original predictions
 geom_point(data = Boston, aes(y = mod$predict(Boston)[[1]], x = rm), 
 color =  "pink", size = 0.5)
}

# If you want to do your own thing, just extract the data: 
eff.dat = eff$results
head(eff.dat)

# You can also use the object to "predict" the marginal values.
eff$predict(Boston[1:3,])
# Instead of the entire data.frame, you can also use feature values:
eff$predict(c(5,6,7))

# You can reuse the pdp object for other features: 
eff$set.feature("lstat")
plot(eff)

# Only plotting the aggregated partial dependence:  
eff = FeatureEffect$new(mod, feature = "crim", method = "pdp")
eff$plot() 

# Only plotting the individual conditional expectation:  
eff = FeatureEffect$new(mod, feature = "crim", method = "ice")
eff$plot() 
  
# Accumulated local effects and partial dependence plots support up to two features: 
eff = FeatureEffect$new(mod, feature = c("crim", "lstat"))  
plot(eff)


# FeatureEffect plots also works with multiclass classification
rf = randomForest(Species ~ ., data = iris, ntree=50)
mod = Predictor$new(rf, data = iris, type = "prob")

# For some models we have to specify additional arguments for the predict function
plot(FeatureEffect$new(mod, feature = "Petal.Width"))

# FeatureEffect plots support up to two features: 
eff = FeatureEffect$new(mod, feature = c("Sepal.Length", "Petal.Length"))
eff$plot()   

# show where the actual data lies
eff$plot(show.data = TRUE)   

# For multiclass classification models, you can choose to only show one class:
mod = Predictor$new(rf, data = iris, type = "prob", class = 1)
plot(FeatureEffect$new(mod, feature = "Sepal.Length"))
}
# }
