Feature Impact is computed for each column by creating new data in which that column is randomly
permuted (while all other columns are left unchanged) and measuring how the error metric for the
predictions is affected. The 'impactUnnormalized' value is how much worse the error metric score is
when making predictions on this modified data. The 'impactNormalized' value is scaled so that the
largest value is 1. In both cases, larger values indicate more important features. Elsewhere this
technique is sometimes called 'Permutation Importance'.
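For illustration, the sketch below shows the general permutation-importance computation in plain
Python, assuming a fitted scikit-learn-style model, a pandas validation frame, and mean squared
error as the error metric. The function name, arguments, and return shape are hypothetical and are
not the library's actual implementation.

    import numpy as np
    from sklearn.metrics import mean_squared_error

    def permutation_impact(model, X_val, y_val, random_state=0):
        """Sketch of permutation importance; not the library's internal code."""
        rng = np.random.default_rng(random_state)
        baseline = mean_squared_error(y_val, model.predict(X_val))
        unnormalized = {}
        for col in X_val.columns:
            X_perm = X_val.copy()
            # Permute only this column; all other columns stay unchanged.
            X_perm[col] = rng.permutation(X_perm[col].to_numpy())
            permuted_error = mean_squared_error(y_val, model.predict(X_perm))
            # Unnormalized impact: how much worse the error metric score becomes.
            unnormalized[col] = permuted_error - baseline
        # Normalized impact: scale so that the largest impact equals 1
        # (assumes at least one permutation increases the error).
        largest = max(unnormalized.values())
        normalized = {col: value / largest for col, value in unnormalized.items()}
        return unnormalized, normalized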
Feature Impact also runs redundancy detection, which identifies features that are redundant with
higher-importance features. Note that some types of projects, such as multiclass, do not run
redundancy detection. This function will generate a warning if redundancy detection was not run.