Configure mlr
mlr is designed to make usage errors due to typos or invalid parameter values as unlikely as possible. But if you really know what you are doing, these safeguards can feel limiting, and sometimes you want to break those barriers and get full access. For all available options, simply refer to the documentation of configureMlr.
Example: Reduce the output on the console
Suppose you are bothered by all the output on the console, as in this example:
## Perform a 3-fold cross-validation
rdesc = makeResampleDesc("CV", iters = 3)
r = resample("classif.ksvm", iris.task, rdesc)
#> [Resample] cross-validation iter: 1
#> Using automatic sigma estimation (sigest) for RBF or laplace kernel
#> [Resample] cross-validation iter: 2
#> Using automatic sigma estimation (sigest) for RBF or laplace kernel
#> [Resample] cross-validation iter: 3
#> Using automatic sigma estimation (sigest) for RBF or laplace kernel
#> [Resample] Result: mmce.test.mean=0.06
Just try the following:
configureMlr(show.learner.output = FALSE)
rdesc = makeResampleDesc("CV", iters = 3)
r = resample("classif.ksvm", iris.task, rdesc)
#> [Resample] cross-validation iter: 1
#> [Resample] cross-validation iter: 2
#> [Resample] cross-validation iter: 3
#> [Resample] Result: mmce.test.mean=0.0533
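The remaining [Resample] progress messages come from mlr itself. If you want a completely silent run, you can additionally set show.info = FALSE. A minimal sketch, assuming mlr is attached and the session from above:

```r
library(mlr)

## Silence both the learner's output and mlr's own progress messages
configureMlr(show.learner.output = FALSE, show.info = FALSE)

rdesc = makeResampleDesc("CV", iters = 3)
r = resample("classif.ksvm", iris.task, rdesc)

## Restore the defaults when you are done
configureMlr(show.learner.output = TRUE, show.info = TRUE)
```

show.info can also be set for an individual call, e.g. resample(..., show.info = FALSE), leaving the global configuration untouched.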
Access the current configuration
Function getMlrOptions returns a list with the current configuration:
getMlrOptions()
#> $on.learner.error
#> [1] "stop"
#>
#> $on.learner.warning
#> [1] "warn"
#>
#> $on.par.out.of.bounds
#> [1] "stop"
#>
#> $on.par.without.desc
#> [1] "stop"
#>
#> $show.info
#> [1] TRUE
#>
#> $show.learner.output
#> [1] FALSE
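Since the entries of this list carry the same names as configureMlr's arguments, you can use it to save the configuration and restore it later. A small sketch, assuming mlr is attached; it relies on the option names matching configureMlr's argument names, which holds for the entries shown above:

```r
library(mlr)

## Save the current configuration
old.opts = getMlrOptions()

## Temporarily change some options
configureMlr(show.info = FALSE, on.learner.error = "warn")

## ... run experiments ...

## Restore the saved configuration
do.call(configureMlr, old.opts)
```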
Example: Turn off parameter checking
Or maybe you want to set a parameter of a Learner that is supported by the underlying learning method, but is not "registered" in the learner's parameter set in mlr yet. If this is the case, you might want to contact us or open an issue as well! Until then you can turn off mlr's parameter checking like this:
## newPar is not a parameter of the underlying learning method at all
lrn = makeLearner("classif.ksvm", newPar = 3)
#> Error in setHyperPars2.Learner(learner, insert(par.vals, args)): classif.ksvm: Setting parameter newPar without available description object!
#> You can switch off this check by using configureMlr!
## epsilon exists in kernlab's ksvm, but is not registered in classif.ksvm's parameter set
lrn = makeLearner("classif.ksvm", epsilon = -3)
#> Error in setHyperPars2.Learner(learner, insert(par.vals, args)): classif.ksvm: Setting parameter epsilon without available description object!
#> You can switch off this check by using configureMlr!
configureMlr(on.par.without.desc = "quiet")
lrn = makeLearner("classif.ksvm", newPar = 3)
lrn = makeLearner("classif.ksvm", epsilon = -3)
The parameter setting is then passed to the underlying function without further checks.
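Besides "stop" and "quiet", on.par.without.desc also accepts "warn", which passes the value through but still keeps you informed. A sketch, assuming the learner from above:

```r
library(mlr)

configureMlr(on.par.without.desc = "warn")

## A warning is issued, but newPar is stored as a hyperparameter
lrn = makeLearner("classif.ksvm", newPar = 3)
getHyperPars(lrn)

## Restore the default, strict behavior
configureMlr(on.par.without.desc = "stop")
```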
Example: Handle errors in an underlying learning method
Another common situation is that a particular learning method throws an error. mlr's default behavior is to raise an exception as well. However, in some situations, for example when you conduct a benchmark study with multiple data sets and learners, you usually do not want the whole experiment to terminate because of a single error. The following example shows how to prevent this:
## This call gives an error caused by the low number of observations in class `virginica`
train("classif.qda", task = iris.task, subset = 1:104)
#> Error in qda.default(x, grouping, ...): some group is too small for 'qda'
#> Timing stopped at: 0.005 0 0.005
configureMlr(on.learner.error = "warn")
mod = train("classif.qda", task = iris.task, subset = 1:104)
#> Warning in train("classif.qda", task = iris.task, subset = 1:104): Could not train learner classif.qda: Error in qda.default(x, grouping, ...) :
#> some group is too small for 'qda'
mod
#> Model for learner.id=classif.qda; learner.class=classif.qda
#> Trained on: task.id = iris-example; obs = 104; features = 4
#> Hyperparameters:
#> Training failed: Error in qda.default(x, grouping, ...) :
#> some group is too small for 'qda'
#>
#> Training failed: Error in qda.default(x, grouping, ...) :
#> some group is too small for 'qda'
## mod is an object of class FailureModel
isFailureModel(mod)
#> [1] TRUE
## Get the error message
getFailureModelMsg(mod)
#> [1] "Error in qda.default(x, grouping, ...) : \n some group is too small for 'qda'\n"
## NAs are predicted
predict(mod, iris.task)
#> Prediction: 150 observations
#> predict.type: response
#> threshold:
#> time: NA
#> id truth response
#> 1 1 setosa <NA>
#> 2 2 setosa <NA>
#> 3 3 setosa <NA>
#> 4 4 setosa <NA>
#> 5 5 setosa <NA>
#> 6 6 setosa <NA>
Instead of an exception, a warning is issued and a FailureModel is created that predicts NAs for all new observations. Function getFailureModelMsg extracts the error message.
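This also means that in a resampling or benchmark experiment a single failing learner no longer aborts the whole run: performance measures computed from the all-NA predictions simply come out as NA. A sketch of checking this for the model above, assuming the session from before:

```r
library(mlr)

configureMlr(on.learner.error = "warn")
mod = train("classif.qda", task = iris.task, subset = 1:104)

## Predictions of a FailureModel are all NA ...
pred = predict(mod, iris.task)

## ... so the resulting performance value is NA as well
perf = performance(pred)
perf

## Restore the default behavior
configureMlr(on.learner.error = "stop")
```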