Construct your own performance measure, to be used after resampling. Note that
individual training / test set performance values will be set to NA; you
only calculate an aggregated value. If you can define a function that makes
sense for every single training / test set, implement your own Measure.
makeCustomResampledMeasure(
measure.id,
aggregation.id,
minimize = TRUE,
properties = character(0L),
fun,
extra.args = list(),
best = NULL,
worst = NULL,
measure.name = measure.id,
aggregation.name = aggregation.id,
note = ""
)
Arguments:

measure.id (character(1))
Short name of measure.

aggregation.id (character(1))
Short name of aggregation.
minimize (logical(1))
Should the measure be minimized? Default is TRUE.
properties (character)
Set of measure properties. For a list of values see Measure.
Default is character(0).
fun (function(task, group, pred, extra.args))
Calculates the performance value from a ResamplePrediction object. In rare
cases you can also use the task, the grouping or the extra arguments
extra.args.
- task (Task): The task.
- group (factor): Grouping of resampling iterations. This encodes whether
  specific iterations 'belong together' (e.g. repeated CV).
- pred (Prediction): Prediction object.
- extra.args (list): See below.

extra.args (list)
List of extra arguments which will always be passed to fun.
Default is empty list.
best (numeric(1))
Best obtainable value for measure.
Default is -Inf or Inf, depending on minimize.
worst (numeric(1))
Worst obtainable value for measure.
Default is Inf or -Inf, depending on minimize.
measure.name (character(1))
Long name of measure.
Default is measure.id.
aggregation.name (character(1))
Long name of the aggregation.
Default is aggregation.id.
note (character)
Description and additional notes for the measure. Default is "".

Value: Measure.
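
Examples: below is a minimal sketch of constructing and using such a measure.
The measure id mmce.pooled, its aggregation id "pooled" and the pooled error
computation are illustrative choices, not built-in measures; the sketch assumes
the mlr package with its bundled iris.task and the classif.rpart learner.

library(mlr)

# Hypothetical measure: misclassification rate computed on the pooled
# predictions of all resampling iterations, rather than averaged per fold.
mmce.pooled = makeCustomResampledMeasure(
  measure.id = "mmce.pooled",
  aggregation.id = "pooled",
  minimize = TRUE,
  properties = c("classif", "classif.multi", "req.pred", "req.truth"),
  fun = function(task, group, pred, extra.args) {
    # pred is a ResamplePrediction; pool all iterations and compute the error
    mean(pred$data$response != pred$data$truth)
  },
  best = 0,
  worst = 1
)

rdesc = makeResampleDesc("CV", iters = 3)
r = resample("classif.rpart", iris.task, rdesc, measures = mmce.pooled)
# Per-iteration values are NA; only the aggregated value is reported.
r$aggr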
Other performance: ConfusionMatrix, calculateConfusionMatrix(),
calculateROCMeasures(), estimateRelativeOverfitting(), makeCostMeasure(),
makeMeasure(), measures, performance(), setAggregation(), setMeasurePars()