A measure object encapsulates a function to evaluate the performance of a prediction. A list of already implemented measures can be found under measures.

A learner is trained on a training set d1, resulting in a model m, which predicts another set d2 (which may be a different one or the training set) resulting in the prediction. The performance measure can now be defined using all of the information of the original task, the fitted model and the prediction.

Object slots:
id [character(1)]
minimize [logical(1)]
properties [character]
fun [function]
extra.args [list]
aggr [Aggregation]
best [numeric(1)]
worst [numeric(1)]
name [character(1)]
note [character(1)]

Usage:

makeMeasure(id, minimize, properties = character(0L), fun,
  extra.args = list(), aggr = test.mean, best = NULL, worst = NULL,
  name = id, note = "")
Arguments:

id [character(1)]
Name of measure.

minimize [logical(1)]
Should the measure be minimized?
Default is TRUE.

properties [character]
Set of measure properties.
Default is character(0).
fun [function(task, model, pred, feats, extra.args)]
Calculates the performance value. Usually you will only need the prediction object pred.

task [Task]
model [WrappedModel]
pred [Prediction]
feats [data.frame]
extra.args [list]

extra.args [list]
List of extra arguments which will always be passed to fun.
Default is empty list.

aggr [Aggregation]
Aggregation function, which is used to aggregate the values measured
on test / training sets of the measure to a single value.
Default is test.mean.
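The aggregation can be changed from the default test.mean; for example, test.median (another aggregation function shipped with mlr) aggregates the per-iteration values by their median, which is less sensitive to outlier folds. A minimal sketch, assuming the mlr package is attached:

```r
library(mlr)

# Sketch: a squared-error measure whose per-iteration values are
# aggregated by their median across resampling iterations.
my.sse.med = makeMeasure(
  id = "my.sse.med", minimize = TRUE,
  properties = c("regr", "response"),
  fun = function(task, model, pred, feats, extra.args)
    sum((pred$data$response - pred$data$truth)^2),
  aggr = test.median  # median instead of the default test.mean
)
```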
best [numeric(1)]
Best obtainable value for measure.
Default is -Inf or Inf, depending on minimize.

worst [numeric(1)]
Worst obtainable value for measure.
Default is Inf or -Inf, depending on minimize.

name [character]
Name of the measure. Default is id.

note [character]
Description and additional notes for the measure. Default is "".

Value: Measure.

See Also: ConfusionMatrix, calculateConfusionMatrix, calculateROCMeasures,
estimateRelativeOverfitting, makeCostMeasure, makeCustomResampledMeasure,
measures, performance
# Define a custom sum-of-squared-errors measure for regression
f = function(task, model, pred, feats, extra.args)
  sum((pred$data$response - pred$data$truth)^2)
makeMeasure(id = "my.sse", minimize = TRUE, properties = c("regr", "response"), fun = f)
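The resulting measure can then be passed to performance. A minimal usage sketch, assuming the mlr package and its built-in bh.task (the Boston Housing regression task):

```r
library(mlr)

# Sum-of-squared-errors measure, written with the full fun signature
f = function(task, model, pred, feats, extra.args)
  sum((pred$data$response - pred$data$truth)^2)
my.sse = makeMeasure(id = "my.sse", minimize = TRUE,
                     properties = c("regr", "response"), fun = f)

mod = train(makeLearner("regr.lm"), bh.task)  # fit a linear model
pred = predict(mod, bh.task)                  # predict on the training data
performance(pred, measures = my.sse)          # evaluate the custom measure
```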