
mlr (version 2.17.0)

friedmanPostHocTestBMR: Perform a posthoc Friedman-Nemenyi test.

Description

Performs a [PMCMR::posthoc.friedman.nemenyi.test] for a [BenchmarkResult] and a selected measure. This means *all pairwise comparisons* of `learners` are performed. The null hypothesis of the post hoc test is that each pair of learners is equal. If the null hypothesis of the preceding [stats::friedman.test] can be rejected, an object of class `pairwise.htest` is returned. If not, the function returns the corresponding `friedman.test` result instead. Note that benchmark results for at least two learners on at least two tasks are required.
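
A minimal sketch of the intended workflow, not part of the original page: it assumes the standard mlr calls `makeLearner()`, `makeResampleDesc()`, and `benchmark()`, the bundled example tasks `iris.task` and `sonar.task`, and that the rpart and MASS packages are installed for the two example learners.

library(mlr)

# Two learners evaluated on two tasks -- the minimum required for the test.
lrns = list(makeLearner("classif.rpart"), makeLearner("classif.lda"))
tasks = list(iris.task, sonar.task)
rdesc = makeResampleDesc("CV", iters = 2L)
bmr = benchmark(lrns, tasks, rdesc, measures = acc)

# All pairwise comparisons of the learners at the 0.05 level.
friedmanPostHocTestBMR(bmr, measure = acc, p.value = 0.05)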

Usage

friedmanPostHocTestBMR(
  bmr,
  measure = NULL,
  p.value = 0.05,
  aggregation = "default"
)

Arguments

bmr

(BenchmarkResult) Benchmark result.

measure

(Measure) Performance measure. Default is the first measure used in the benchmark experiment.

p.value

(`numeric(1)`) Significance level used for the tests. Default: 0.05.

aggregation

(`character(1)`) “mean” or “default”. See getBMRAggrPerformances for details on “default”.

Value

([pairwise.htest]): See [PMCMR::posthoc.friedman.nemenyi.test] for details. Additionally, two components are added to the list:

f.rejnull (`logical(1)`)

Whether the corresponding `friedman.test` rejects the null hypothesis at the selected `p.value`.

crit.difference (`list(2)`)

The minimal difference the mean ranks of two learners must show in order to be significantly different.
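
A short sketch of inspecting the returned object, assuming `bmr` is a BenchmarkResult such as the one sketched in the Description above; `f.rejnull` and `crit.difference` are the components documented here, and `p.value` is the standard component of a `pairwise.htest`.

res = friedmanPostHocTestBMR(bmr, p.value = 0.05)

res$f.rejnull        # did the preceding Friedman test reject its null hypothesis?
res$crit.difference  # minimal rank difference needed for significance

# Pairwise Nemenyi p-values are only available if the Friedman null was rejected.
if (inherits(res, "pairwise.htest")) {
  print(res$p.value)
}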

See Also

Other benchmark: BenchmarkResult, batchmark(), benchmark(), convertBMRToRankMatrix(), friedmanTestBMR(), generateCritDifferencesData(), getBMRAggrPerformances(), getBMRFeatSelResults(), getBMRFilteredFeatures(), getBMRLearnerIds(), getBMRLearnerShortNames(), getBMRLearners(), getBMRMeasureIds(), getBMRMeasures(), getBMRModels(), getBMRPerformances(), getBMRPredictions(), getBMRTaskDescs(), getBMRTaskIds(), getBMRTuneResults(), plotBMRBoxplots(), plotBMRRanksAsBarChart(), plotBMRSummary(), plotCritDifferences(), reduceBatchmarkResults()

Examples

# see benchmark