Chapter 6 Example-based explanations

Example-based explanation methods select particular instances of the dataset to explain the behavior of machine learning models or to explain the underlying data distribution.

Keywords: example-based explanations, case-based reasoning (CBR), solving by analogy

Example-based explanation methods are mostly model-agnostic, because they make any machine learning model more interpretable. The difference to other model-agnostic methods is that example-based methods explain a model by selecting instances of the dataset, not by creating summaries of features (such as feature importance or partial dependence). Example-based explanations only make sense if we can represent an instance of the data in a humanly understandable way. This works well for images, because we can view them directly. In general, example-based methods work well when the feature values of an instance carry more context, meaning the data has a structure, as images or texts do. Representing tabular data in a meaningful way is more challenging, because an instance can consist of hundreds or thousands of (less structured) features. Listing all feature values to describe an instance is usually not useful. It works well if there are only a handful of features or if we have a way to summarize an instance.

Example-based explanations help humans construct mental models of the machine learning model and of the data it has been trained on. They are especially helpful for understanding complex data distributions.

But what do I mean by example-based explanations? We often use them in our jobs and daily lives. Let's start with some examples [1]:

A physician sees a patient with an unusual cough and a mild fever. The patient’s symptoms remind her of another patient she had years ago with similar symptoms. She suspects that her current patient could have the same disease and she takes a blood sample to test for this specific disease.

A data scientist is working on a new project for one of his clients: Analysis of the risk factors that lead to the failure of production machines for keyboards. The data scientist remembers a similar project he worked on and reuses parts of the code from the old project because he thinks the client wants the same analysis.

A kitten sits on the window ledge of a burning and uninhabited house. The fire department has already arrived and one of the firefighters ponders for a second whether he can risk going into the building to save the kitten. He remembers similar cases in his life as a firefighter: Old wooden houses that have been burning slowly for some time were often unstable and eventually collapsed. Because of the similarity of this case, he decides not to enter, because the risk of the house collapsing is too great. Fortunately, the kitty jumps out of the window, lands safely and nobody is harmed in the fire (Happy end!).

These stories illustrate how we humans think in examples or analogies. The blueprint of example-based explanations is: Thing B is similar to thing A, and A caused Y, so I predict that B will cause Y as well. Some machine learning approaches implicitly work example-based. Decision trees partition the data into nodes based on the similarities of the data points in the features that are important for predicting the target. A decision tree gets the prediction for a new data instance by finding the instances that are similar (i.e. in the same terminal node) and returning the average of the outcomes of those instances as the prediction. The k-nearest neighbours (knn) model works explicitly with example-based predictions. For a new instance, a knn model locates the k nearest neighbours (e.g. the k=3 closest instances) and returns the average of the outcomes of those neighbours as the prediction. The prediction of a knn can be explained by returning the k neighbours, which, again, is only meaningful if we have a good way to represent a single instance.
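The knn procedure described above can be sketched in a few lines: predict the average outcome of the k closest training instances, and return those instances as the example-based explanation. This is an illustrative sketch, not a library implementation; the function name and the tiny dataset are my own.

```python
import numpy as np

def knn_predict_with_explanation(X_train, y_train, x_new, k=3):
    """Predict the outcome for x_new as the average outcome of its k
    nearest training instances, and return those neighbours as the
    example-based explanation. (Hypothetical helper for illustration.)"""
    # Euclidean distance from x_new to every training instance
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k closest instances
    neighbour_idx = np.argsort(distances)[:k]
    # Prediction: average of the neighbours' outcomes
    prediction = y_train[neighbour_idx].mean()
    # Explanation: the neighbours themselves
    return prediction, X_train[neighbour_idx]

# Tiny one-feature example
X_train = np.array([[1.0], [2.0], [3.0], [10.0]])
y_train = np.array([1.0, 2.0, 3.0, 10.0])
pred, neighbours = knn_predict_with_explanation(
    X_train, y_train, np.array([2.1]), k=3)
print(pred)  # average of the outcomes 2, 1, 3 -> 2.0
```

Whether this explanation is useful depends entirely on how well we can present the returned neighbours to a human, which is the central caveat of the chapter.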

This chapter covers the following example-based interpretability methods:

  • k-nearest neighbours model: An (interpretable) machine learning model based on examples.
  • Counterfactuals and adversarial examples: Counterfactuals tell us how an instance has to change to flip its prediction; the focus is on explaining a single prediction. Adversarial examples are counterfactuals used to fool machine learning models; the emphasis is on flipping the prediction, not on explaining it. (WORK IN PROGRESS)
  • Prototypes and criticisms: Prototypes are a selection of representative instances from the data, and criticisms are instances that are not well represented by those prototypes. [2]
  • Archetypes: The most extreme instances in the data based on the features. (WORK IN PROGRESS)
  • Influence functions: A method to identify the training instances that were most influential for a prediction model. [3] (WORK IN PROGRESS)

  1. Aamodt, A., & Plaza, E. (1994). Case-based reasoning: Foundational issues, methodological variations, and system approaches. AI Communications, 7(1), 39–59.

  2. Kim, B., Khanna, R., & Koyejo, O. O. (2016). Examples are not enough, learn to criticize! Criticism for interpretability. Advances in Neural Information Processing Systems.

  3. Koh, P. W., & Liang, P. (2017). Understanding black-box predictions via influence functions. http://arxiv.org/abs/1703.04730