mlr3 (version 0.14.1)

benchmark_grid: Generate a Benchmark Grid Design

Description

Takes a list of Task, a list of Learner, and a list of Resampling to generate a design in an expand.grid() fashion (a.k.a. cross join or Cartesian product).

Resampling strategies must not be instantiated when passed as an argument; instead, they are instantiated per task internally. The only exception to this rule is when all tasks have exactly the same number of rows and the resamplings are already instantiated for such tasks.
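A minimal sketch of this exception, assuming the built-in "penguins" task is available: the two tasks below wrap the same data and therefore have identical row counts, so a resampling instantiated on one of them may be passed directly.

```r
library(mlr3)

# Two tasks with the same number of rows (same underlying data)
task1 = tsk("penguins")
task2 = tsk("penguins")

# Instantiate the resampling once; benchmark_grid() accepts it
# because the row counts of all tasks match
resampling = rsmp("holdout")
resampling$instantiate(task1)

# One learner, two tasks, one instantiated resampling: 2 design rows
design = benchmark_grid(list(task1, task2), list(lrn("classif.rpart")),
  list(resampling))
```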

Usage

benchmark_grid(tasks, learners, resamplings)

Value

(data.table::data.table()) with the cross product of the input lists, one row per combination of task, learner, and resampling.

Arguments

tasks

(list of Task).

learners

(list of Learner).

resamplings

(list of Resampling).

See Also

Other benchmark: BenchmarkResult, benchmark()

Examples

library(mlr3)

tasks = list(tsk("penguins"), tsk("sonar"))
learners = list(lrn("classif.featureless"), lrn("classif.rpart"))
resamplings = list(rsmp("cv"), rsmp("subsampling"))

grid = benchmark_grid(tasks, learners, resamplings)
print(grid)
if (FALSE) {
benchmark(grid)
}

# manual construction of the grid with data.table::CJ()
grid = data.table::CJ(task = tasks, learner = learners,
  resampling = resamplings, sorted = FALSE)

# manual instantiation (not suited for a fair comparison of learners!)
Map(function(task, resampling) {
  resampling$instantiate(task)
}, task = grid$task, resampling = grid$resampling)
if (FALSE) {
benchmark(grid)
}
