Let us dig deeper and discover what we humans see as “good” explanations and what the implications are for interpretable machine learning. Humanities research can help us find out. Miller (2017) has conducted a huge survey of publications on explanations, and this chapter builds on his summary.
In this chapter, I want to convince you of the following: As an explanation for an event, humans prefer short explanations (only 1 or 2 causes) that contrast the current situation with a situation in which the event would not have occurred. Especially abnormal causes provide good explanations. Explanations are social interactions between the explainer and the explainee (recipient of the explanation) and therefore the social context has a great influence on the actual content of the explanation.
When you need explanations with ALL factors for a particular prediction or behavior, you do not want a human-friendly explanation, but a complete causal attribution. You probably want a causal attribution if you are legally required to specify all influencing features or if you debug the machine learning model. In this case, ignore the following points. In all other cases, where lay people or people with little time are the recipients of the explanation, the following sections should be interesting to you.
An explanation is the answer to a why-question (Miller 2017).
- Why did the treatment not work on the patient?
- Why was my loan rejected?
- Why have we not been contacted by alien life yet?
The first two questions can be answered with an “everyday”-explanation, while the third one comes from the category “More general scientific phenomena and philosophical questions”. We focus on the “everyday”-type explanations, because those are relevant to interpretable machine learning. Questions that start with “how” can usually be rephrased as “why” questions: “How was my loan rejected?” can be turned into “Why was my loan rejected?”.
In the following, the term “explanation” refers to the social and cognitive process of explaining, but also to the product of these processes. The explainer can be a human being or a machine.
This section further condenses Miller’s summary on “good” explanations and adds concrete implications for interpretable machine learning.
Explanations are contrastive (Lipton 1990).
Humans usually do not ask why a certain prediction was made, but why this prediction was made instead of another prediction.
We tend to think in counterfactual cases, i.e. “How would the prediction have been if input X had been different?”.
For a house price prediction, the house owner might be interested in why the predicted price was high compared to the lower price they had expected.
If my loan application is rejected, I do not care to hear all the factors that generally speak for or against a rejection.
I am interested in the factors in my application that would need to change to get the loan.
I want to know the contrast between my application and the would-be-accepted version of my application.
The recognition that contrastive explanations matter is an important finding for explainable machine learning.
From most interpretable models, you can extract an explanation that implicitly contrasts a prediction of an instance with the prediction of an artificial data instance or an average of instances.
Physicians might ask: “Why did the drug not work for my patient?”.
And they might want an explanation that contrasts their patient with a patient for whom the drug worked and who is similar to the non-responding patient.
Contrastive explanations are easier to understand than complete explanations.
A complete explanation of the physician’s question why the drug does not work might include:
The patient has had the disease for 10 years, 11 genes are over-expressed, the patient's body breaks the drug down into ineffective chemicals very quickly, …
A contrastive explanation might be much simpler:
In contrast to the responding patient, the non-responding patient has a certain combination of genes that make the drug less effective.
The best explanation is the one that highlights the greatest difference between the object of interest and the reference object.
What it means for interpretable machine learning: Humans do not want a complete explanation for a prediction, but want to compare what the differences were to another instance’s prediction (can be an artificial one). Creating contrastive explanations is application-dependent because it requires a point of reference for comparison. And this may depend on the data point to be explained, but also on the user receiving the explanation. A user of a house price prediction website might want to have an explanation of a house price prediction contrastive to their own house or maybe to another house on the website or maybe to an average house in the neighborhood. The solution for the automated creation of contrastive explanations might also involve finding prototypes or archetypes in the data.
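The idea of a contrastive explanation can be sketched in a few lines: compare the instance to a reference point (here, the average house in the neighborhood) and report only the features that differ most. The feature names and values below are hypothetical.

```python
def contrastive_explanation(instance, reference, top_k=2):
    """Return the top_k features that differ most between instance and reference."""
    diffs = {name: instance[name] - reference[name] for name in instance}
    ranked = sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_k]

# Hypothetical house to be explained vs. the neighborhood average as reference.
house = {"size_m2": 210.0, "balconies": 2.0, "age_years": 5.0}
neighborhood_avg = {"size_m2": 140.0, "balconies": 1.0, "age_years": 30.0}

print(contrastive_explanation(house, neighborhood_avg))
# → [('size_m2', 70.0), ('age_years', -25.0)]
```

Choosing the reference is the application-dependent part: swapping `neighborhood_avg` for the user's own house or a prototype instance changes which differences the explanation highlights.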
Explanations are selected.
People do not expect explanations that cover the actual and complete list of causes of an event.
We are used to selecting one or two causes from a variety of possible causes as THE explanation.
As proof, turn on the TV news:
“The decline in stock prices is blamed on a growing backlash against the company’s product due to problems with the latest software update.”
“Tsubasa and his team lost the match because of a weak defense: they gave their opponents too much room to play out their strategy.”
“The increasing distrust of established institutions and our government are the main factors that have reduced voter turnout.”
The fact that an event can be explained by various causes is called the Rashomon Effect. Rashomon is a Japanese movie that tells alternative, contradictory stories (explanations) about the death of a samurai. For machine learning models, it is advantageous if a good prediction can be made from different features. Ensemble methods that combine multiple models with different features (different explanations) usually perform well because averaging over those “stories” makes the predictions more robust and accurate. But it also means that there is more than one selective explanation why a certain prediction was made.
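The Rashomon Effect for models can be illustrated with a toy ensemble: two stand-in models that each tell a different "story" using different features, averaged into one prediction. Both models and all numbers are illustrative assumptions, not a real pricing model.

```python
# Two stand-in models, each explaining house price with a different feature.
model_a = lambda house: 2000 * house["size_m2"]               # size-based story
model_b = lambda house: 500000 - 4000 * house["age_years"]    # age-based story

def ensemble(house):
    # Averaging over the two "stories" makes the prediction more robust,
    # but there is now more than one selective explanation for it.
    return (model_a(house) + model_b(house)) / 2

house = {"size_m2": 150, "age_years": 20}
print(ensemble(house))  # → 360000.0
```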
What it means for interpretable machine learning: Make the explanation very short, give only 1 to 3 reasons, even if the world is more complex. The LIME method does a good job with this.
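A selective, LIME-style explanation can be sketched as follows. This is not the actual LIME library: the stand-in black-box model, the sampling scale, and the number of perturbed samples are all illustrative assumptions. The core idea is to fit a local linear surrogate around one instance and keep only the k features with the largest weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in model: driven mostly by feature 0 and a little by feature 2.
    return 5.0 * X[:, 0] + 0.5 * X[:, 2]

def selective_explanation(instance, k=1, n_samples=500, scale=0.1):
    # Perturb around the instance, then fit a local linear surrogate
    # (least squares with an intercept column).
    X = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))
    y = black_box(X)
    coefs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n_samples)]), y, rcond=None)
    weights = coefs[:-1]
    top = np.argsort(-np.abs(weights))[:k]  # keep only the k strongest features
    return [(int(i), float(weights[i])) for i in top]

x = np.array([1.0, 2.0, 3.0])
print(selective_explanation(x, k=1))  # feature 0 with weight ≈ 5.0
```

Even though three features exist, the reported explanation contains only one reason, which is exactly the kind of selectivity humans prefer.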
Explanations are social.
They are part of a conversation or interaction between the explainer and the receiver of the explanation.
The social context determines the content and nature of the explanations.
If I wanted to explain to a technical person why digital cryptocurrencies are worth so much, I would say things like:
“The decentralized, distributed, blockchain-based ledger, which cannot be controlled by a central entity, resonates with people who want to secure their wealth, which explains the high demand and price.”
But to my grandmother I would say:
“Look, Grandma: Cryptocurrencies are a bit like computer gold. People like and pay a lot for gold, and young people like and pay a lot for computer gold.”
What it means for interpretable machine learning: Pay attention to the social environment of your machine learning application and the target audience. Getting the social part of the machine learning model right depends entirely on your specific application. Find experts from the humanities (e.g. psychologists and sociologists) to help you.
Explanations focus on the abnormal.
People focus more on abnormal causes to explain events (Kahneman and Tversky 1981).
These are causes that had a small probability but nevertheless happened.
The elimination of these abnormal causes would have greatly changed the outcome (counterfactual explanation).
Humans consider these kinds of “abnormal” causes as good explanations.
An example from Štrumbelj and Kononenko (2011) is:
Assume we have a dataset of test situations between teachers and students.
Students attend a course and pass the course directly after successfully giving a presentation.
The teacher has the option to additionally ask the student questions to test their knowledge.
Students who cannot answer these questions will fail the course.
Students can have different levels of preparation, which translates into different probabilities for correctly answering the teacher’s questions (if they decide to test the student).
We want to predict whether a student will pass the course and explain our prediction.
The chance of passing is 100% if the teacher does not ask any additional questions, otherwise the probability of passing depends on the student’s level of preparation and the resulting probability of answering the questions correctly.
Scenario 1: The teacher usually asks the students additional questions (e.g. 95 out of 100 times). A student who did not study (10% chance to pass the question part) was not one of the lucky ones and gets additional questions that he fails to answer correctly. Why did the student fail the course? I would say that it was the student's own fault for not studying.
Scenario 2: The teacher rarely asks additional questions (e.g. 2 out of 100 times). For a student who has not studied for the questions, we would predict a high probability of passing the course because questions are unlikely. Of course, one of the students did not prepare for the questions, which gives him a 10% chance of passing the questions. He is unlucky and the teacher asks additional questions that the student cannot answer and he fails the course. What is the reason for the failure? I would argue that now, the better explanation is “because the teacher tested the student”. It was unlikely that the teacher would test, so the teacher behaved abnormally.
What it means for interpretable machine learning: If one of the input features for a prediction was abnormal in any sense (like a rare category of a categorical feature) and the feature influenced the prediction, it should be included in an explanation, even if other ‘normal’ features have the same influence on the prediction as the abnormal one. An abnormal feature in our house price prediction example might be that a rather expensive house has two balconies. Even if some attribution method finds that the two balconies contribute as much to the price difference as the above average house size, the good neighborhood or the recent renovation, the abnormal feature “two balconies” might be the best explanation for why the house is so expensive.
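One simple way to operationalize "abnormal" is to flag feature values that are rare in the training data. The sketch below uses a hypothetical housing dataset in which two balconies occur only once; the frequency threshold is an assumption you would tune per application.

```python
def abnormal_features(instance, dataset, threshold=0.1):
    """Flag feature values of `instance` that are rare in `dataset`."""
    flagged = []
    for name, value in instance.items():
        values = [row[name] for row in dataset]
        freq = values.count(value) / len(values)
        if freq < threshold:
            flagged.append((name, value, freq))
    return flagged

# 20 hypothetical houses: a second balcony occurs only once (frequency 0.05).
houses = [{"balconies": 1, "renovated": True} for _ in range(19)]
houses.append({"balconies": 2, "renovated": True})

expensive_house = {"balconies": 2, "renovated": True}
print(abnormal_features(expensive_house, houses))
# → [('balconies', 2, 0.05)]
```

A feature flagged this way is a candidate to lead the explanation, even when an attribution method assigns it no more influence than "normal" features.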
Explanations are truthful.
Good explanations prove to be true in reality (i.e. in other situations).
But disturbingly, this is not the most important factor for a “good” explanation.
For example, selectiveness seems to be more important than truthfulness.
An explanation that selects only one or two possible causes rarely covers the entire list of relevant causes.
Selectivity omits part of the truth.
It is not true, for example, that only one or two factors caused a stock market crash; the truth is that millions of causes influenced millions of people to act in such a way that in the end the crash occurred.
What it means for interpretable machine learning: The explanation should predict the event as truthfully as possible, which in machine learning is sometimes called fidelity. So if we say that a second balcony increases the price of a house, then that also should apply to other houses (or at least to similar houses). For humans, fidelity of an explanation is not as important as its selectivity, its contrast and its social aspect.
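Fidelity can be measured directly: how often does the explanation (here, a simple surrogate rule) agree with the black-box model on a set of instances? Both "models" below are stand-in functions chosen for illustration.

```python
def fidelity(surrogate, model, samples):
    """Fraction of samples on which the surrogate agrees with the model."""
    agree = sum(surrogate(x) == model(x) for x in samples)
    return agree / len(samples)

# Stand-in black box and a slightly simplified surrogate explanation.
black_box = lambda x: int(x > 0.5)
surrogate = lambda x: int(x > 0.4)

samples = [0.0, 0.2, 0.45, 0.6, 0.9]
print(fidelity(surrogate, black_box, samples))  # → 0.8
```

A perfectly faithful surrogate would score 1.0; in practice a selective, human-friendly explanation trades some fidelity for simplicity, which is exactly the tension described above.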
Good explanations are consistent with prior beliefs of the explainee.
Humans tend to ignore information that is inconsistent with their prior beliefs.
This effect is called confirmation bias (Nickerson 1998).
Explanations are not spared by this kind of bias.
People will tend to devalue or ignore explanations that do not agree with their beliefs.
The set of beliefs varies from person to person, but there are also group-based prior beliefs such as political worldviews.
What it means for interpretable machine learning: Good explanations are consistent with prior beliefs. This is difficult to integrate into machine learning and would probably drastically compromise predictive performance. Our prior belief for the effect of house size on predicted price is that the larger the house, the higher the price. Let us assume that a model also shows a negative effect of house size on the predicted price for a few houses. The model has learned this because it improves predictive performance (due to some complex interactions), but this behavior strongly contradicts our prior beliefs. You can enforce monotonicity constraints (a feature can only affect the prediction in one direction) or use something like a linear model that has this property.
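Whether a model respects a monotonicity belief can be checked empirically by probing predictions along a grid for one feature while holding the others fixed. The linear stand-in model below is an assumption for illustration; for real models, gradient boosting libraries typically offer monotonic-constraint options (e.g. a `monotonic_cst`-style parameter) that enforce this by construction.

```python
def is_monotone_increasing(predict, instance, feature, grid):
    """Check that predictions never decrease as `feature` increases along `grid`."""
    preds = []
    for value in sorted(grid):
        x = dict(instance)   # copy, so the original instance stays unchanged
        x[feature] = value
        preds.append(predict(x))
    return all(a <= b for a, b in zip(preds, preds[1:]))

# Stand-in price model: bigger is pricier, older is cheaper.
predict = lambda x: 1000 * x["size_m2"] - 2000 * x["age_years"]
base = {"size_m2": 120, "age_years": 10}

print(is_monotone_increasing(predict, base, "size_m2", range(50, 300, 10)))   # → True
print(is_monotone_increasing(predict, base, "age_years", range(0, 100, 5)))   # → False
```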
Good explanations are general and probable.
A cause that can explain many events is very general and could be considered a good explanation.
Note that this contradicts the claim that abnormal causes make good explanations.
As I see it, abnormal causes beat general causes.
Abnormal causes are by definition rare in the given scenario.
In the absence of an abnormal event, a general explanation is considered a good explanation.
Also remember that people tend to misjudge probabilities of joint events.
(Joe is a librarian. Is he more likely to be a shy person or to be a shy person who likes to read books?)
A good example is “The house is expensive because it is big”, which is a very general, good explanation of why houses are expensive or cheap.
What it means for interpretable machine learning: Generality can easily be measured by the feature’s support, which is the number of instances to which the explanation applies divided by the total number of instances.
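Support, as defined above, is just the fraction of instances to which an explanation rule applies. A minimal sketch, with a hypothetical rule and dataset:

```python
def support(rule, dataset):
    """Fraction of instances in `dataset` to which the explanation rule applies."""
    return sum(1 for row in dataset if rule(row)) / len(dataset)

# Five hypothetical houses and the general explanation "the house is big".
houses = [{"size_m2": s} for s in (80, 120, 150, 200, 95)]
is_big = lambda row: row["size_m2"] > 100

print(support(is_big, houses))  # → 0.6, the rule covers 3 of 5 houses
```

A high-support rule is a general explanation; a low-support rule points at an abnormal cause, which, as argued above, may still be the better explanation for a specific instance.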
Lipton, Peter. "Contrastive explanation." Royal Institute of Philosophy Supplements 27 (1990): 247-266.
Kahneman, Daniel, and Amos Tversky. "The simulation heuristic." Stanford University, Department of Psychology (1981).
Štrumbelj, Erik, and Igor Kononenko. "A general method for visualizing and explaining black-box regression models." In International Conference on Adaptive and Natural Computing Algorithms, 21-30. Springer (2011).
Nickerson, Raymond S. "Confirmation bias: A ubiquitous phenomenon in many guises." Review of General Psychology 2 (2): 175 (1998).