iml (version 0.6.0)
Interpretable Machine Learning
Description
Interpretability methods to analyze the behavior and predictions of
any machine learning model.
Implemented methods are:
feature importance described by Fisher et al. (2018),
partial dependence plots described by Friedman (2001),
individual conditional expectation ('ice') plots described by Goldstein et al. (2013),
local models (a variant of 'lime') described by Ribeiro et al. (2016),
the Shapley value described by Strumbelj et al. (2014),
feature interactions described by Friedman et al., and
tree surrogate models.
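To illustrate the first method in the list above, here is a minimal, self-contained sketch of permutation feature importance in the spirit of Fisher et al. (2018): shuffle one feature column at a time and measure how much the model's loss increases. This is a conceptual example in Python with numpy only; it is not the iml package's R implementation, and the function and variable names (`permutation_importance`, `predict`, `mse`) are hypothetical.

```python
import numpy as np

def permutation_importance(predict, X, y, loss, n_repeats=5, seed=0):
    # Permutation importance sketch (Fisher et al., 2018):
    # importance of feature j = increase in loss after shuffling column j.
    rng = np.random.default_rng(seed)
    baseline = loss(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
            scores.append(loss(y, predict(Xp)))
        importances[j] = np.mean(scores) - baseline
    return importances

# Toy data: the target depends only on the first of three features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0]
predict = lambda X: 3.0 * X[:, 0]  # a model that recovered the true relation
mse = lambda y_true, y_pred: float(np.mean((y_true - y_pred) ** 2))
imp = permutation_importance(predict, X, y, mse)
# imp[0] is large; imp[1] and imp[2] are zero, since shuffling
# features the model ignores leaves its predictions unchanged.
```

The same idea is model-agnostic: only a prediction function, data, and a loss are needed, which is why such methods apply to any machine learning model.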