Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), accumulated local effects plots described by Apley (2018), partial dependence plots described by Friedman (2001), individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al., and tree surrogate models.
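To illustrate the flavor of these model-agnostic methods, here is a minimal sketch of permutation feature importance in the spirit of Fisher et al. (2018): a feature's importance is estimated as the increase in model error when that feature's values are randomly shuffled. This is an illustrative standalone example, not the package's implementation; the toy data, the least-squares "black box" model, and the helper names (`predict`, `permutation_importance`) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit ordinary least squares as a stand-in "black box" model.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

def permutation_importance(X, y, predict, feature, n_repeats=5):
    """Average increase in MSE when one feature's column is shuffled."""
    increases = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        increases.append(mse(y, predict(Xp)) - baseline)
    return float(np.mean(increases))

importances = [permutation_importance(X, y, predict, j) for j in range(3)]
```

Because shuffling breaks the association between a feature and the target, the strongly used feature 0 should show the largest error increase and the unused feature 2 an increase near zero.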