
mlr (version 2.17.0)

makeLearner: Create learner object.

Description

For a classification learner, the predict.type can be set to “prob” to predict probabilities; the label is then assigned by selecting the class with the maximal probability. The threshold used to assign the label can later be changed using the setThreshold function.
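For example (a minimal sketch using mlr's built-in sonar.task; the 0.9 cutoff is purely illustrative):

lrn = makeLearner("classif.lda", predict.type = "prob")
mod = train(lrn, sonar.task)
pred = predict(mod, sonar.task)
# move the cutoff for the positive class from the default 0.5 to 0.9
pred = setThreshold(pred, 0.9)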

All possible properties of a learner are listed at LearnerProperties.

Usage

makeLearner(
  cl,
  id = cl,
  predict.type = "response",
  predict.threshold = NULL,
  fix.factors.prediction = FALSE,
  ...,
  par.vals = list(),
  config = list()
)

Arguments

cl

(character(1)) Class of learner. By convention, all classification learners start with “classif.”, all regression learners with “regr.”, all survival learners with “surv.”, all clustering learners with “cluster.”, and all multilabel classification learners with “multilabel.”. A list of all integrated learners is available on the learners help page.
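For example, a few illustrative class names (assuming the corresponding learner packages are installed):

makeLearner("classif.rpart")   # classification
makeLearner("regr.lm")         # regression
makeLearner("surv.coxph")      # survival analysis
makeLearner("cluster.kmeans")  # clustering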

id

(character(1)) Id string for object. Used to display object. Default is cl.

predict.type

(character(1)) Classification: “response” (= labels) or “prob” (= probabilities and labels by selecting the ones with maximal probability). Regression: “response” (= mean response) or “se” (= standard errors and mean response). Survival: “response” (= some sort of orderable risk) or “prob” (= time-dependent probabilities). Clustering: “response” (= cluster IDs) or “prob” (= fuzzy cluster membership probabilities). Multilabel: “response” (= logical matrix indicating the predicted class labels) or “prob” (= probabilities and corresponding logical matrix indicating class labels). Default is “response”.
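As a sketch, the same switch for two other task types (the learner choices are illustrative):

makeLearner("regr.lm", predict.type = "se")           # standard errors plus mean response
makeLearner("cluster.cmeans", predict.type = "prob")  # fuzzy cluster membership probabilities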

predict.threshold

(numeric) Threshold to produce class labels. Has to be a named vector, where names correspond to class labels. Only for binary classification can it be a single numerical threshold for the positive class. See setThreshold for details on how it is applied. Default is NULL, which means 0.5 in the binary case and an equal threshold for each class otherwise.
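A binary classification sketch (the 0.7 cutoff is arbitrary; note that predict.type must be “prob” for a threshold to be applicable):

makeLearner("classif.rpart", predict.type = "prob", predict.threshold = 0.7)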

fix.factors.prediction

(logical(1)) In some cases, problems occur in underlying learners for factor features during prediction. If the new features have fewer factor levels than during training (a strict subset), the learner might produce an error like “type of predictors in new data do not match that of the training data”. Setting this option to TRUE repairs the problem: the factor levels missing from the test feature (but present in training) are simply added to that feature. Default is FALSE.
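For example, for a learner whose underlying package is known to be picky about factor levels:

makeLearner("classif.randomForest", fix.factors.prediction = TRUE)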

...

(any) Optional named (hyper)parameters. If you want to set specific hyperparameters for a learner during model creation, they should go here. You can get a list of available hyperparameters using getParamSet(<learner>). Alternatively, hyperparameters can be given via the par.vals argument, but ... should be preferred!

par.vals

(list) Optional list of named (hyper)parameters. The arguments in ... take precedence over values in this list. We strongly encourage you to use ... for passing hyperparameters.
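Both calls below create identical learners (minsplit is a hyperparameter of rpart):

makeLearner("classif.rpart", minsplit = 10)                   # preferred
makeLearner("classif.rpart", par.vals = list(minsplit = 10))  # equivalent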

config

(named list) Named list of config options to overwrite global settings set via configureMlr for this specific learner.
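For instance, to downgrade learner errors to warnings for this learner only (on.learner.error is one of the options accepted by configureMlr):

makeLearner("classif.rpart", config = list(on.learner.error = "warn"))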

Value

(Learner).

par.vals vs. ...

The former aims at specifying default hyperparameter settings from mlr which differ from the actual defaults in the underlying learner. For example, respect.unordered.factors is set to “order” in mlr, while the default in ranger::ranger depends on the argument splitrule. getHyperPars(<learner>) can be used to query hyperparameter defaults that differ from the underlying learner. This function also shows all hyperparameters set by the user during learner creation (if these differ from the learner defaults).
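A sketch, assuming the ranger package is installed (the exact set of changed defaults may vary between mlr versions):

lrn = makeLearner("regr.ranger")
# shows defaults changed by mlr, e.g. respect.unordered.factors = "order"
getHyperPars(lrn)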

regr.randomForest

For this learner we added additional uncertainty estimation functionality (predict.type = "se") that is not provided by the underlying randomForest package.

Currently implemented methods are:

  • If se.method = "jackknife" the standard error of a prediction is estimated by computing the jackknife-after-bootstrap, the mean-squared difference between the prediction made by only using trees which did not contain said observation and the ensemble prediction.

  • If se.method = "bootstrap" the standard error of a prediction is estimated by bootstrapping the random forest, where the number of bootstrap replicates and the number of trees in the ensemble are controlled by se.boot and se.ntree respectively, and then taking the standard deviation of the bootstrap predictions. The "brute force" bootstrap is executed when ntree = se.ntree, the latter of which controls the number of trees in the individual random forests which are bootstrapped. The "noisy bootstrap" is executed when se.ntree < ntree, which is less computationally expensive. A Monte-Carlo bias correction may make the latter option preferable in many cases. Defaults are se.boot = 50 and se.ntree = 100.

  • If se.method = "sd", the default, the standard deviation of the predictions across trees is returned as the variance estimate. This can be computed quickly but is also a very naive estimator.

For both “jackknife” and “bootstrap”, a Monte-Carlo bias correction is applied and, in the case that this results in a negative variance estimate, the values are truncated at 0.

Note that when using the “jackknife” procedure for se estimation, using a small number of trees can lead to training data observations that are never out-of-bag. The current implementation ignores these observations, but in the original definition, the resulting se estimation would be undefined.

Please note that none of the mentioned se.method variants affects the computation of the posterior mean “response” value; this is always the same as from the underlying randomForest.
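A construction sketch for the noisy bootstrap variant described above (all parameter values are illustrative):

lrn = makeLearner("regr.randomForest", predict.type = "se",
  se.method = "bootstrap", se.boot = 50, se.ntree = 100, ntree = 500)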

regr.featureless

A very basic baseline method which is useful for model comparisons (if you don't beat this, you very likely have a problem). Does not consider any features of the task and only uses the target feature of the training data to make predictions. Using observation weights is currently not supported.

Methods “mean” and “median” always predict a constant value for each new observation which corresponds to the observed mean or median of the target feature in training data, respectively.

The default method is “mean” which corresponds to the ZeroR algorithm from WEKA, see https://weka.wikispaces.com/ZeroR.
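For example, a median-predicting baseline:

makeLearner("regr.featureless", method = "median")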

classif.featureless

Method “majority” always predicts the majority class for each new observation. In the case of ties, one randomly sampled, constant class is predicted for all observations in the test set. This method is the default. It is very similar to the ZeroR classifier from WEKA (see https://weka.wikispaces.com/ZeroR); the only difference is that ZeroR always predicts the first class of the tied class values instead of sampling them randomly.

Method “sample-prior” always samples a random class for each individual test observation according to the prior probabilities observed in the training data.

If you opt to predict probabilities, the class probabilities always correspond to the prior probabilities observed in the training data.
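For example:

makeLearner("classif.featureless", method = "sample-prior", predict.type = "prob")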

See Also

Other learner: LearnerProperties, getClassWeightParam(), getHyperPars(), getLearnerId(), getLearnerNote(), getLearnerPackages(), getLearnerParVals(), getLearnerParamSet(), getLearnerPredictType(), getLearnerShortName(), getLearnerType(), getParamSet(), helpLearnerParam(), helpLearner(), makeLearners(), removeHyperPars(), setHyperPars(), setId(), setLearnerId(), setPredictThreshold(), setPredictType()

Examples

## Not run:
makeLearner("classif.rpart")
makeLearner("classif.lda", predict.type = "prob")
# hyperparameters of the underlying MASS::lda are passed via ...
lrn = makeLearner("classif.lda", method = "t", nu = 10)
getHyperPars(lrn)
## End(Not run)
