Exploring Learner Predictions
Learners use features to make predictions, but how those features are used is often not apparent. mlr can estimate the dependence of a learned function on a subset of the feature space using generatePartialPredictionData.
Partial prediction plots reduce the potentially high dimensional function estimated by the learner, and display a marginalized version of this function in a lower dimensional space. For example suppose $Y = f(X) + \epsilon$, where $\mathbb{E}[\epsilon \mid X] = 0$. With pairs $(x_i, y_i)$ drawn independently from this statistical model, a learner may estimate $\hat{f}$, which, if $X$ is high dimensional, can be uninterpretable. Suppose we want to approximate the relationship between some subset of $X$ and $\hat{f}$. We partition $X$ into two sets, $X_s$ and $X_c$, such that $X = X_s \cup X_c$, where $X_s$ is a subset of $X$ of interest.
The partial dependence of $\hat{f}$ on $X_s$ is

$$\hat{f}_{X_s} = \mathbb{E}_{X_c}\left[\hat{f}(X_s, X_c)\right],$$

i.e., $X_c$ is integrated out. We use the following estimator:

$$\hat{f}_{X_s} = \frac{1}{N} \sum_{i = 1}^{N} \hat{f}(X_s, x_{ic}),$$

where $x_{ic}$ denotes the observed values of the features in $X_c$ for observation $i$.
The individual conditional expectation of an observation can also be estimated using the above algorithm absent the averaging, giving $\hat{f}^{(i)}_{X_s}$. This allows the discovery of features of $\hat{f}$ that may be obscured by an aggregated summary of $\hat{f}$.
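To make the estimator concrete, here is a minimal sketch (not mlr's internal implementation) using the built-in BostonHousing task and a plain linear model; the names bh, ice, and pd are illustrative. Each column of ice fixes lstat at one grid value for every observation, so the rows are individual conditional expectation curves and the column means are the partial dependence estimates.

bh = getTaskData(bh.task)
fit = train(makeLearner("regr.lm"), bh.task)
grid = seq(min(bh$lstat), max(bh$lstat), length.out = 10)  # uniform grid over lstat
ice = sapply(grid, function(v) {
  newdata = bh
  newdata$lstat = v  # fix lstat at the grid value for all observations
  getPredictionResponse(predict(fit, newdata = newdata))
})
pd = colMeans(ice)  # average over observations: the partial dependence estimate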
The partial derivative of the partial prediction function, $\frac{\partial \hat{f}_{X_s}}{\partial X_s}$, and of the individual conditional expectation function, $\frac{\partial \hat{f}^{(i)}_{X_s}}{\partial X_s}$, can also be computed. For regression and survival tasks the partial derivative of a single feature is the gradient of the partial prediction function, and for classification tasks where the learner can output class probabilities it is the Jacobian. Note that if the learner produces discontinuous partial predictions (e.g., piecewise constant functions such as decision trees, ensembles of decision trees, etc.), the derivative will be 0 (where the function is not changing) or will trend towards positive or negative infinity (at the discontinuities, where the derivative is undefined). Plotting the partial prediction function of such learners may give the impression that the function is continuous, because the prediction grid does not contain all of the discontinuity points in the predictor space: the plotted line interpolates between grid points, making the function appear piecewise linear (where the derivative would be defined except at the boundaries of each piece).
The partial derivative can be informative regarding the additivity of the learned function in certain features. If $\hat{f}$ is an additive function in a feature $X_s$, then its partial derivative will not depend on any other features ($X_c$) that may have been used by the learner. Variation in the estimated partial derivative indicates that there is a region of interaction between $X_s$ and $X_c$ in $\hat{f}$.
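Concretely, if the learned function decomposes additively as $\hat{f}(X_s, X_c) = g(X_s) + h(X_c)$, then

$$\frac{\partial \hat{f}(X_s, X_c)}{\partial X_s} = g'(X_s),$$

which does not vary with $X_c$; any dependence of the estimated partial derivative on $X_c$ therefore signals an interaction between $X_s$ and $X_c$.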
See Goldstein, Kapelner, Bleich, and Pitkin (2014) for more details and their package ICEbox for the original implementation. The algorithm works for any supervised learner with classification, regression, and survival tasks.
Generating partial predictions
Our implementation, following mlr's visualization pattern, consists of the above-mentioned function generatePartialPredictionData, as well as two visualization functions, plotPartialPrediction and plotPartialPredictionGGVIS. The former generates input (objects of class PartialPredictionData) for the latter two.
The first step executed by generatePartialPredictionData is to generate a feature grid for every element of the character vector features passed. The data are given by the input argument, which can be a Task or a data.frame. The feature grid can be generated in several ways. A uniformly spaced grid of length gridsize (default 10) from the empirical minimum to the empirical maximum is created by default, but the arguments fmin and fmax may be used to override the empirical defaults (the lengths of fmin and fmax must match the length of features). Alternatively, the feature data can be resampled, either by using a bootstrap or by subsampling.
lrn.classif = makeLearner("classif.ksvm", predict.type = "prob")
fit.classif = train(lrn.classif, iris.task)
pd = generatePartialPredictionData(fit.classif, iris.task, "Petal.Width")
pd
#> PartialPredictionData
#> Task: iris-example
#> Features: Petal.Width
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#> Class Probability Petal.Width
#> 1 setosa 0.1133617 2.500000
#> 2 setosa 0.1016932 2.233333
#> 3 setosa 0.1000598 1.966667
#> 4 setosa 0.1091532 1.700000
#> 5 setosa 0.1406860 1.433333
#> 6 setosa 0.2131172 1.166667
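The grid can also be made denser or coarser via the gridsize argument; the following sketch reuses the model trained above and changes only the grid length (the object name pd.fine is illustrative).

# request 25 uniformly spaced grid points for Petal.Width instead of the default 10
pd.fine = generatePartialPredictionData(fit.classif, iris.task, "Petal.Width", gridsize = 25)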
As noted above, $X_s$ does not have to be unidimensional. If it is not, the interaction flag must be set to TRUE. Then the individual feature grids are combined using the Cartesian product, and the estimator above is applied, producing a partial prediction for every combination of unique feature values. If the interaction flag is FALSE (the default), then $X_s$ is assumed to be unidimensional, and partial predictions are generated for each feature separately. The resulting output when interaction = FALSE has a column for each feature, with NA where the feature was not used in generating the partial predictions.
pd.lst = generatePartialPredictionData(fit.classif, iris.task, c("Petal.Width", "Petal.Length"), FALSE)
head(pd.lst$data)
#> Class Probability Petal.Width Petal.Length
#> 1 setosa 0.1133617 2.500000 NA
#> 2 setosa 0.1016932 2.233333 NA
#> 3 setosa 0.1000598 1.966667 NA
#> 4 setosa 0.1091532 1.700000 NA
#> 5 setosa 0.1406860 1.433333 NA
#> 6 setosa 0.2131172 1.166667 NA
tail(pd.lst$data)
#> Class Probability Petal.Width Petal.Length
#> 55 virginica 0.3386905 NA 4.277778
#> 56 virginica 0.2364844 NA 3.622222
#> 57 virginica 0.1700154 NA 2.966667
#> 58 virginica 0.1774907 NA 2.311111
#> 59 virginica 0.2287907 NA 1.655556
#> 60 virginica 0.2683431 NA 1.000000
pd.int = generatePartialPredictionData(fit.classif, iris.task, c("Petal.Width", "Petal.Length"), TRUE)
pd.int
#> PartialPredictionData
#> Task: iris-example
#> Features: Petal.Width, Petal.Length
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: TRUE
#> Individual: FALSE
#> Class Probability Petal.Width Petal.Length
#> 1 setosa 0.1307126 2.500000 6.9
#> 2 setosa 0.1158181 2.233333 6.9
#> 3 setosa 0.1110669 1.966667 6.9
#> 4 setosa 0.1160515 1.700000 6.9
#> 5 setosa 0.1316584 1.433333 6.9
#> 6 setosa 0.1575610 1.166667 6.9
At each step in the estimation of $\hat{f}_{X_s}$, a set of predictions of length $N$ is generated. By default the mean prediction is used. For classification, where predict.type = "prob", this entails the mean class probabilities. However, other summaries of the predictions may be used. For regression and survival tasks the function used here must return either one number or three, and, if the latter, the numbers must be sorted lowest to highest. For classification tasks the function must return a number for each level of the target feature.
As noted, the fun argument can be a function which returns three numbers (sorted low to high) for a regression task. This allows further exploration of relative feature importance. If a feature is relatively important, the bounds are necessarily tighter because the feature accounts for more of the variance of the predictions, i.e., it is "used" more by the learner.
lrn.regr = makeLearner("regr.ksvm")
fit.regr = train(lrn.regr, bh.task)
pd.regr = generatePartialPredictionData(fit.regr, bh.task, "lstat", fun = median)
pd.regr
#> PartialPredictionData
#> Task: BostonHousing-example
#> Features: lstat
#> Target: medv
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#> medv lstat
#> 1 18.31668 37.97000
#> 2 18.17512 33.94333
#> 3 18.25889 29.91667
#> 4 18.57818 25.89000
#> 5 18.98526 21.86333
#> 6 19.59239 17.83667
pd.ci = generatePartialPredictionData(fit.regr, bh.task, "lstat",
fun = function(x) quantile(x, c(.25, .5, .75)))
pd.ci
#> PartialPredictionData
#> Task: BostonHousing-example
#> Features: lstat
#> Target: medv
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#> medv lstat lower upper
#> 1 18.31668 37.97000 15.17984 20.60821
#> 2 18.17512 33.94333 14.07372 20.54043
#> 3 18.25889 29.91667 13.60811 20.93051
#> 4 18.57818 25.89000 13.81609 21.43523
#> 5 18.98526 21.86333 14.80808 22.23535
#> 6 19.59239 17.83667 16.52891 22.96918
pd.classif = generatePartialPredictionData(fit.classif, iris.task, "Petal.Length", fun = median)
pd.classif
#> PartialPredictionData
#> Task: iris-example
#> Features: Petal.Length
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#> Class Probability Petal.Length
#> 1 setosa 0.10847632 6.900000
#> 2 setosa 0.05687223 6.244444
#> 3 setosa 0.03133824 5.588889
#> 4 setosa 0.02133358 4.933333
#> 5 setosa 0.03139629 4.277778
#> 6 setosa 0.06787746 3.622222
In addition to bounds based on a summary of the distribution of the conditional expectation of each observation, learners which can estimate the variance of their predictions can also be used. The argument bounds is a numeric vector of length two which is added (so the first number should be negative) to the point prediction to produce a confidence interval for the partial prediction. The default is the .025 and .975 quantiles of the Gaussian distribution.
fit.se = train(makeLearner("regr.randomForest", predict.type = "se"), bh.task)
pd.se = generatePartialPredictionData(fit.se, bh.task, c("lstat", "crim"))
head(pd.se$data)
#> medv lstat crim lower upper
#> 1 19.43694 37.97000 NA 17.58548 21.28839
#> 2 19.41572 33.94333 NA 17.55798 21.27345
#> 3 19.40258 29.91667 NA 17.54418 21.26098
#> 4 19.51110 25.89000 NA 17.66800 21.35421
#> 5 19.68226 21.86333 NA 17.88024 21.48427
#> 6 20.36382 17.83667 NA 18.63275 22.09489
tail(pd.se$data)
#> medv lstat crim lower upper
#> 15 21.65121 NA 49.434031 19.39979 23.90263
#> 16 21.66094 NA 39.548489 19.41322 23.90866
#> 17 21.75073 NA 29.662947 19.52723 23.97423
#> 18 21.86104 NA 19.777404 19.67790 24.04418
#> 19 22.32959 NA 9.891862 20.13048 24.52869
#> 20 22.96415 NA 0.006320 21.11620 24.81210
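A wider interval can be requested by overriding bounds. The call below is a sketch under the description above, i.e., assuming the two numbers are added directly to the point prediction (pd.se.wide is an illustrative name).

# roughly +/- 2 standard errors instead of the default c(qnorm(.025), qnorm(.975))
pd.se.wide = generatePartialPredictionData(fit.se, bh.task, "lstat", bounds = c(-2, 2))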
As previously mentioned, if the aggregation function is not used, i.e., it is the identity, then the conditional expectation of each individual observation, $\hat{f}^{(i)}_{X_s}$, is estimated. If individual = TRUE, then generatePartialPredictionData returns the partial predictions made at each point in the prediction grid constructed from the features.
pd.ind.regr = generatePartialPredictionData(fit.regr, bh.task, "lstat", individual = TRUE)
pd.ind.regr
#> PartialPredictionData
#> Task: BostonHousing-example
#> Features: lstat
#> Target: medv
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: TRUE
#> Predictions centered: FALSE
#> medv lstat idx
#> 1 19.94794 37.97000 1
#> 2 20.00442 33.94333 1
#> 3 20.28895 29.91667 1
#> 4 20.79977 25.89000 1
#> 5 21.52534 21.86333 1
#> 6 22.45576 17.83667 1
The resulting output, particularly the element data in the returned object, has an additional column idx which gives the index of the observation to which the row pertains. For classification tasks this index references both the class and the observation index.
pd.ind.classif = generatePartialPredictionData(fit.classif, iris.task, "Petal.Length", individual = TRUE)
pd.ind.classif
#> PartialPredictionData
#> Task: iris-example
#> Features: Petal.Length
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: TRUE
#> Predictions centered: FALSE
#> Class Probability Petal.Length idx
#> 1 setosa 0.2526891 6.9 1.setosa
#> 2 setosa 0.2503856 6.9 2.setosa
#> 3 setosa 0.2524189 6.9 3.setosa
#> 4 setosa 0.2522449 6.9 4.setosa
#> 5 setosa 0.2531258 6.9 5.setosa
#> 6 setosa 0.2529763 6.9 6.setosa
Individual partial predictions can also be centered by the predictions made at all observations for a particular point in the prediction grid created by the features. This is controlled by the argument center, which is a list of the same length as the features argument containing the desired values of those features.
iris = getTaskData(iris.task)
pd.ind.classif = generatePartialPredictionData(fit.classif, iris.task, "Petal.Length", individual = TRUE,
center = list("Petal.Length" = min(iris$Petal.Length)))
Partial derivatives can also be computed for individual partial predictions and aggregate partial predictions. This is restricted to a single feature at a time. The derivatives of individual partial predictions can be useful in finding regions of interaction between the feature for which the derivative is estimated and the remaining features.
pd.regr.der = generatePartialPredictionData(fit.regr, bh.task, "lstat", derivative = TRUE)
head(pd.regr.der$data)
#> medv lstat
#> 1 0.13406892 37.97000
#> 2 0.05090044 33.94333
#> 3 -0.05778299 29.91667
#> 4 -0.17728166 25.89000
#> 5 -0.28990445 21.86333
#> 6 -0.37760851 17.83667
pd.regr.der.ind = generatePartialPredictionData(fit.regr, bh.task, "lstat", derivative = TRUE,
individual = TRUE)
head(pd.regr.der.ind$data)
#> medv lstat idx
#> 1 0.01294649 37.97000 1
#> 2 -0.04198499 33.94333 1
#> 3 -0.09918516 29.91667 1
#> 4 -0.15399777 25.89000 1
#> 5 -0.20601386 21.86333 1
#> 6 -0.25553074 17.83667 1
pd.classif.der = generatePartialPredictionData(fit.classif, iris.task, "Petal.Width", derivative = TRUE)
head(pd.classif.der$data)
#> Class Probability Petal.Width
#> 1 setosa 0.06479796 2.500000
#> 2 setosa 0.02417364 2.233333
#> 3 setosa -0.01188692 1.966667
#> 4 setosa -0.06364970 1.700000
#> 5 setosa -0.18686143 1.433333
#> 6 setosa -0.34859672 1.166667
pd.classif.der.ind = generatePartialPredictionData(fit.classif, iris.task, "Petal.Width", derivative = TRUE,
individual = TRUE)
head(pd.classif.der.ind$data)
#> Class Probability Petal.Width idx
#> 1 setosa -0.001631997 2.5 1.setosa
#> 2 setosa 0.013456089 2.5 2.setosa
#> 3 setosa 0.001246305 2.5 3.setosa
#> 4 setosa 0.006129489 2.5 4.setosa
#> 5 setosa -0.003554730 2.5 5.setosa
#> 6 setosa -0.002439572 2.5 6.setosa
Plotting partial predictions
Results from generatePartialPredictionData can be visualized with plotPartialPrediction and plotPartialPredictionGGVIS.
With one feature and a regression task, the output is a line plot with a point at each value in the feature's grid.
plotPartialPrediction(pd.regr)
With a classification task, a line is drawn for each class, which gives the estimated partial probability of that class for a particular point in the feature grid.
plotPartialPrediction(pd.classif)
For regression tasks, when the fun argument of generatePartialPredictionData is used, the bounds will automatically be displayed using a gray ribbon.
plotPartialPrediction(pd.ci)
The same goes for plots of partial predictions where the learner has predict.type = "se".
plotPartialPrediction(pd.se)
When multiple features are passed to generatePartialPredictionData but interaction = FALSE, faceting is used to display each estimated bivariate relationship.
plotPartialPrediction(pd.lst)
When interaction = TRUE in the call to generatePartialPredictionData, one variable must be chosen for faceting: a subplot is created for each value in the chosen feature's grid, showing the other feature's partial predictions within that facet value. Note that this type of plot is limited to two features.
plotPartialPrediction(pd.int, facet = "Petal.Length")
plotPartialPredictionGGVIS can be used similarly; however, since ggvis currently lacks subplotting/faceting capabilities, the argument interact maps one feature to an interactive sidebar where the user can select its value.
plotPartialPredictionGGVIS(pd.int, interact = "Petal.Length")
When individual = TRUE, each individual conditional expectation curve is plotted.
plotPartialPrediction(pd.ind.regr)
When the individual curves are centered by subtracting the individual conditional expectations estimated at a particular value of $X_s$, this results in a fixed intercept, which aids in visualizing variation in the predictions made by $\hat{f}^{(i)}_{X_s}$.
plotPartialPrediction(pd.ind.classif)
Plotting partial derivative functions works just like plotting partial predictions. Below are estimates of the derivative of the mean aggregated partial prediction function, and of the individual partial prediction functions, for a regression and a classification task respectively.
plotPartialPrediction(pd.regr.der)
This suggests that $\hat{f}$ is not additive in lstat, except in a narrow neighborhood of lstat values.
plotPartialPrediction(pd.regr.der.ind)
The plot below suggests that Petal.Width interacts with some other feature in a neighborhood of intermediate Petal.Width values for the classes "virginica" and "versicolor".
plotPartialPrediction(pd.classif.der.ind)