NA, you only calculate an aggregated value. If you can define a function that makes sense for every single training / test set, implement your own Measure.
makeCustomResampledMeasure(
  measure.id,
  aggregation.id,
  minimize = TRUE,
  properties = character(0L),
  fun,
  extra.args = list(),
  best = NULL,
  worst = NULL,
  measure.name = measure.id,
  aggregation.name = aggregation.id,
  note = ""
)
measure.id [character(1)]
Short name of measure.

aggregation.id [character(1)]
Short name of aggregation.

minimize [logical(1)]
Should the measure be minimized?
Default is TRUE.

properties [character]
Set of measure properties. Some standard property names include:
Default is character(0).

fun [function(task, group, pred, extra.args)]
Calculates the performance value from a ResamplePrediction object.
In rare cases you can also use the task, the grouping, or the extra arguments extra.args.

extra.args [list]
List of extra arguments which will always be passed to fun.
Default is empty list.

best [numeric(1)]
Best obtainable value for the measure.
Default is -Inf or Inf, depending on minimize.

worst [numeric(1)]
Worst obtainable value for the measure.
Default is Inf or -Inf, depending on minimize.

measure.name [character(1)]
Long name of measure.
Default is measure.id.

aggregation.name [character(1)]
Long name of the aggregation.
Default is aggregation.id.

note [character]
Description and additional notes for the measure. Default is "".

Value: [Measure].
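As a sketch of how the pieces above fit together, the following defines a custom resampled measure whose fun pools predictions across all test sets and computes a single aggregated mean absolute error; the measure id, names, and the choice of learner/task are illustrative assumptions, not canonical values:

```r
library(mlr)

# Hypothetical custom measure: mean absolute error pooled over the
# whole ResamplePrediction, i.e. over all resampling iterations at once.
pooled.mae = makeCustomResampledMeasure(
  measure.id = "pmae",
  aggregation.id = "pooled",
  minimize = TRUE,
  properties = c("regr", "req.pred", "req.truth"),
  fun = function(task, group, pred, extra.args) {
    # pred is the ResamplePrediction; truth/response cover all test sets
    mean(abs(pred$data$truth - pred$data$response))
  },
  best = 0,
  worst = Inf,
  measure.name = "Pooled mean absolute error",
  aggregation.name = "Pooled over all test sets"
)

# Usage sketch: per-iteration performance values will be NA, only the
# aggregated value is computed.
rdesc = makeResampleDesc("CV", iters = 3)
r = resample("regr.lm", bh.task, rdesc, measures = pooled.mae)
r$aggr
```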
See also: estimateRelativeOverfitting, makeCostMeasure, makeMeasure, measures, performance.