## 5.1 Partial Dependence Plot (PDP)

The partial dependence plot shows the marginal effect of a feature on the predicted outcome of a previously fit model (Friedman 2001). The prediction function is evaluated at fixed values of the chosen features and averaged over the other features.

Other names: marginal means, predictive margins, marginal effects.

A partial dependence plot can show whether the relationship between the target and a feature is linear, monotonic, or more complex. For example, applied to a linear regression model, partial dependence plots always show a linear relationship.

The partial dependence function for regression is defined as:

$\hat{f}_{x_S}(x_S)=E_{x_C}\left[\hat{f}(x_S,x_C)\right]=\int\hat{f}(x_S,x_C)d\mathbb{P}(x_C)$

The term $$x_S$$ is the set of features for which the partial dependence function should be plotted, and $$x_C$$ are the other features used in the machine learning model $$\hat{f}$$. Usually, there are only one or two features in $$x_S$$. Together, $$x_S$$ and $$x_C$$ make up the full feature vector $$x$$. Partial dependence works by marginalizing the machine learning model output $$\hat{f}$$ over the distribution of the features in $$x_C$$, so that the resulting function shows the relationship between the features in $$x_S$$, in which we are interested, and the predicted outcome. By marginalizing over the other features, we get a function that depends only on the features in $$x_S$$, including interactions between $$x_S$$ and the other features.

The partial function $$\hat{f}_{x_S}$$ along $$x_S$$ is estimated by averaging over the training data, which is also known as the Monte Carlo method:

$\hat{f}_{x_S}(x_S)=\frac{1}{n}\sum_{i=1}^n\hat{f}(x_S,x_{Ci})$

In this formula, $$x_{Ci}$$ are the actual feature values from the dataset for the features in which we are not interested, and $$n$$ is the number of instances in the dataset. One assumption made for the PDP is that the features in $$x_C$$ are uncorrelated with the features in $$x_S$$. If this assumption is violated, the averages computed for the partial dependence plot incorporate data points that are very unlikely or even impossible (see disadvantages).
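The Monte Carlo estimate above is straightforward to sketch in code. The following is a minimal, model-agnostic version (the function name `partial_dependence` and the grid are illustrative, not from the text): for each grid value, the chosen feature is overwritten in every instance and the predictions are averaged.

```python
import numpy as np

def partial_dependence(model, X, feature_idx, grid):
    """Monte Carlo estimate of the partial dependence function.

    For each value v in `grid`, feature `feature_idx` of every instance
    is set to v and the model's predictions are averaged over the
    remaining (unchanged) feature values x_C.
    """
    averages = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v  # force all instances to the grid value
        averages.append(model.predict(X_mod).mean())
    return np.array(averages)
```

Any object with a `predict` method works here. Consistent with the remark above that PDPs of a linear regression model are always linear, feeding this function a model of the form $$\hat{f}(x) = 2 x_1 + x_2$$ yields a straight line with slope 2.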

For classification, where the machine learning model outputs probabilities, the partial dependence function displays the probability of a certain class given different values of the features in $$x_S$$. A straightforward way to handle multi-class problems is to plot one line or one plot per class.
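The one-line-per-class idea can be sketched by replacing `predict` with `predict_proba` and averaging per class (function name `class_partial_dependence` is illustrative, assuming a scikit-learn-style classifier interface):

```python
import numpy as np

def class_partial_dependence(model, X, feature_idx, grid):
    """Partial dependence for a probabilistic classifier.

    Returns an array of shape (len(grid), n_classes); entry (g, k) is
    the average predicted probability of class k when the chosen feature
    is fixed to grid[g] for all instances. Plotting one column per class
    gives one PDP line per class.
    """
    rows = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v  # intervene on the feature of interest
        rows.append(model.predict_proba(X_mod).mean(axis=0))
    return np.array(rows)
```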

The partial dependence plot is a global method: The method takes into account all instances and makes a statement about the global relationship of a feature with the predicted outcome.

### 5.1.1 Examples

In practice, the set of features $$x_S$$ usually only contains one feature or a maximum of two, because one feature produces 2D plots and two features produce 3D plots. Everything beyond that is quite tricky. Even 3D on a 2D paper or monitor is already challenging.

Let’s return to the regression example, in which we predict bike rentals. We first fit a machine learning model on the dataset for which we want to analyse the partial dependencies. In this case, we fit a random forest to predict the bike rentals and use the partial dependence plot to visualize the relationships the model learned. The influence of the weather features on the predicted bike counts:

For warm (but not too hot) weather, the model predicts a high number of bike rentals on average. Potential bikers are increasingly deterred from cycling once humidity exceeds 60%. Also, the more wind, the fewer people like to bike, which makes sense. Interestingly, the predicted bike counts don’t drop between 25 and 35 km/h windspeed, but there is little training data in that range, so we can’t be confident about the effect. At least intuitively, I would expect the bike rentals to drop with any increase in windspeed, especially when the windspeed is very high.

We also compute the partial dependence for cervical cancer classification. Again, we fit a random forest to predict whether a woman has cervical cancer given some risk factors. Given the model, we compute and visualize the partial dependence of the cancer probability on different features:

We can also visualize the partial dependence of two features at once:
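The two-feature case works the same way as the one-feature estimate, just over a grid of value pairs. A hypothetical sketch (the function name and grids are illustrative):

```python
import numpy as np

def partial_dependence_2d(model, X, i, j, grid_i, grid_j):
    """Partial dependence surface for a pair of features.

    Entry (a, b) of the returned matrix is the average prediction when
    feature i is fixed to grid_i[a] and feature j to grid_j[b] for every
    instance; the matrix is what a 2D PDP heatmap or contour plot shows.
    """
    surface = np.empty((len(grid_i), len(grid_j)))
    for a, vi in enumerate(grid_i):
        for b, vj in enumerate(grid_j):
            X_mod = X.copy()
            X_mod[:, i] = vi
            X_mod[:, j] = vj
            surface[a, b] = model.predict(X_mod).mean()
    return surface
```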

### 5.1.2 Advantages

• The computation of partial dependence plots is intuitive: The partial dependence curve at a certain feature value represents the average prediction when we force all data points to take on that feature value. In my experience, laypersons usually grasp the idea of PDPs quickly.
• If the feature for which you computed the PDP is uncorrelated with the other model features, then the PDP perfectly represents how the feature influences the prediction on average. In this uncorrelated case, the interpretation is clear: The partial dependence plot shows how the average prediction in your dataset changes when the j-th feature is changed. It is more complicated when features are correlated, see also disadvantages.
• Partial dependence plots are simple to implement.
• Causal interpretation: The calculation for the partial dependence plots has a causal interpretation: We intervene on $$x_j$$ and measure the changes in the predictions. By doing this, we analyse the causal relationship between the feature and the outcome. The relationship is causal for the model - because we explicitly model the outcome as a function of the features - but not necessarily for the real world!

### 5.1.3 Disadvantages

• Heterogeneous effects might be hidden because the PDP only shows the average over the observations. Assume that for feature $$x_j$$ half your data points have a positive association with the outcome - the greater $$x_j$$, the greater $$\hat{y}$$ - and the other half has a negative association - the smaller $$x_j$$, the greater $$\hat{y}$$. The PDP curve might be a straight, horizontal line, because the effects of both halves of the dataset cancel each other out. You would then wrongly conclude that the feature has no effect on the prediction. By plotting the individual conditional expectation (ICE) curves instead of the aggregated line, we can uncover heterogeneous effects.
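The ICE idea can be sketched as follows: instead of averaging, keep one curve per instance (the function name `ice_curves` and the toy model are illustrative). In this toy example the two halves of the data cancel exactly, so the PDP, which is the mean of the ICE curves, is flat while the individual curves are not:

```python
import numpy as np

def ice_curves(model, X, feature_idx, grid):
    """Individual conditional expectation curves.

    Row i of the result traces instance i's prediction as the chosen
    feature sweeps over `grid`; averaging the rows recovers the PDP.
    """
    curves = np.empty((X.shape[0], len(grid)))
    for g, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature_idx] = v
        curves[:, g] = model.predict(X_mod)
    return curves

class _Heterogeneous:
    """Toy model: the effect of x_0 flips sign depending on x_1."""
    def predict(self, X):
        return X[:, 1] * X[:, 0]

X = np.array([[0.0, 1.0], [0.0, -1.0]])  # one instance per "half"
curves = ice_curves(_Heterogeneous(), X, 0, [0.0, 1.0, 2.0])
pdp = curves.mean(axis=0)  # flat line: the opposing effects cancel
```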