Chapter 9 Local Model-Agnostic Methods
Local interpretation methods explain individual predictions. In this chapter, you will learn about the following local explanation methods:
- Individual conditional expectation (ICE) curves are the building blocks of partial dependence plots and describe how changing a feature changes the prediction for a single instance (a minimal sketch follows this list).
- Local surrogate models (LIME) explain a prediction by approximating the complex model locally with an interpretable surrogate model.
- Scoped rules (anchors) are rules that describe which feature values anchor a prediction, in the sense that they lock the prediction in place.
- Counterfactual explanations explain a prediction by examining which feature values would have to change to obtain a desired prediction.
- Shapley values are an attribution method that fairly assigns the prediction to individual features.
- SHAP is another computation method for Shapley values that also proposes global interpretation methods based on aggregating Shapley values across the data.
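To make the first item concrete, here is a minimal sketch of how an ICE curve can be computed by hand. The model, dataset, instance index, and grid size are all illustrative assumptions, not part of any fixed ICE API:

```python
# A minimal ICE sketch: vary one feature of one instance, keep the rest fixed,
# and record the model's prediction at each grid value. All concrete choices
# (random forest, feature 0, instance 7, 20 grid points) are assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

feature = 0  # the feature to vary
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)

x = X[7].copy()  # one instance to explain
ice = []
for value in grid:
    x[feature] = value  # change only this feature
    ice.append(model.predict(x.reshape(1, -1))[0])
# `ice` traces how the prediction for this one instance responds to the feature;
# averaging such curves over many instances yields a partial dependence plot.
```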
LIME and Shapley values are attribution methods, meaning that the prediction of a single instance is described as a sum of feature effects. Other methods, such as counterfactual explanations, are example-based. The additivity behind attribution methods is easiest to see for a linear model, as the sketch below shows.
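For a linear model, Shapley values have a closed form: the value of feature j is its coefficient times the feature's deviation from the mean. This sketch, with made-up coefficients and data, verifies that the feature effects sum to the prediction minus the average prediction:

```python
# Shapley additivity for a linear model, a hedged sketch.
# Coefficients, intercept, and data are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # background data
beta = np.array([2.0, -1.0, 0.5])  # made-up coefficients
intercept = 4.0
predict = lambda X: X @ beta + intercept

x = X[0]  # instance to explain
# For a linear model, the Shapley value of feature j is
# beta_j * (x_j - mean(X_j)).
phi = beta * (x - X.mean(axis=0))

# Additivity: feature effects sum to prediction minus average prediction.
print(phi.sum())
print(predict(x[None, :])[0] - predict(X).mean())
```

For nonlinear models no such closed form exists, which is why the Shapley value and SHAP sections develop sampling-based approximations.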