Given a data.frame of Pareto-front approximations for different sets of problems, algorithms and replications, the function computes sets of unary and binary EMOA performance indicators. The function makes use of parallelMap to parallelize the computation of the indicators.
computeIndicators(
df,
obj.cols = c("f1", "f2"),
unary.inds = NULL,
binary.inds = NULL,
normalize = FALSE,
offset = 0,
ref.points = NULL,
ref.sets = NULL
)
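
A minimal usage sketch with a synthetic data set, assuming the package exporting computeIndicators is attached; the problem and algorithm names below are placeholders, not part of the package:

set.seed(1)
# synthetic bi-objective results: two algorithms, one problem, three replications
df = data.frame(
  f1 = runif(60), f2 = runif(60),
  prob = "ZDT1-like",
  algorithm = rep(c("NSGA-II", "SMS-EMOA"), each = 30),
  repl = rep(rep(1:3, each = 10), times = 2)
)

# defaults: dominated hypervolume (unary) and epsilon indicator (binary);
# reference points and sets are estimated per problem since none are supplied
res = computeIndicators(df)
res$unary       # data frame of unary indicator values
res$binary      # list of matrices of pairwise binary indicator values
res$ref.points  # reference points that were used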
[list]
List with components “unary” (data frame of unary indicators), “binary” (list of matrices of binary indicators), “ref.points” (list of reference points used) and “ref.sets” (list of reference sets used).
df [data.frame]
Data frame with columns obj.cols, “prob”, “algorithm” and “repl”.
obj.cols [character(>= 2)]
Column names of the objective functions. Default is c("f1", "f2"), i.e., the bi-objective case is assumed.
unary.inds [list]
Named list of unary indicators which shall be calculated. Each component must itself be a list with mandatory argument fun (the function which computes the indicator) and optional argument pars (a named list of parameters for fun). Function fun must have the signature “function(points, arg1, ..., argk, ...)”. The arguments “points” and “...” are mandatory; the remaining ones are optional. The names of the components on the first level are used as the column names of the output data.frame. Default is list(HV = list(fun = computeHV)), i.e., the dominated hypervolume indicator.
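
The following sketch shows how custom unary indicators might be plugged in, continuing with the synthetic df from the usage example above. The toy indicators and the assumption that “points” holds one objective vector per column are illustrative only, not prescribed by the package:

# cardinality of the approximation set (toy indicator, no extra parameters)
myCard = function(points, ...) {
  ncol(points)
}

# minimal Euclidean distance to a user-supplied ideal point (uses pars)
myMinDist = function(points, ideal, ...) {
  min(sqrt(colSums((points - ideal)^2)))
}

res = computeIndicators(
  df,
  unary.inds = list(
    HV   = list(fun = computeHV),
    CARD = list(fun = myCard),
    DIST = list(fun = myMinDist, pars = list(ideal = c(0, 0)))
  )
)
head(res$unary)  # one column per list name: HV, CARD, DIST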
binary.inds [list]
Named list of binary indicators which shall be applied for each combination of algorithms. Parameter binary.inds requires the same structure as unary.inds. However, the function signature of fun is slightly different: “function(points1, points2, arg1, ..., argk, ...)”. See function emoaIndEps for an example. Default is list(EPS = list(fun = emoaIndEps)).
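
A hedged sketch of a custom binary indicator, again reusing the synthetic df from above and assuming one objective vector per column and minimization of all objectives:

# fraction of points in points1 that are dominated by some point of points2
myDomFrac = function(points1, points2, ...) {
  dominates = function(b, a) all(b <= a) && any(b < a)
  mean(apply(points1, 2, function(a)
    any(apply(points2, 2, function(b) dominates(b, a)))))
}

res = computeIndicators(
  df,
  binary.inds = list(
    EPS = list(fun = emoaIndEps),
    DOM = list(fun = myDomFrac)
  )
)
str(res$binary)  # matrices of pairwise indicator values per algorithm pair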
normalize [logical(1)]
Normalize approximation sets to \([0, 1]^p\), where \(p\) is the number of objectives? Normalization is done on the union of all approximation sets for each problem. Default is FALSE.
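
Conceptually, this normalization is a per-problem min-max rescaling over the union of all approximation sets. The helper below is only an illustration of that idea, not the package's internal code:

normalizeByProblem = function(df, obj.cols = c("f1", "f2")) {
  do.call(rbind, lapply(split(df, df$prob), function(d) {
    # rescale each objective using min/max over all approximation sets of d's problem
    for (oc in obj.cols) {
      rng = range(d[[oc]])
      d[[oc]] = (d[[oc]] - rng[1]) / (rng[2] - rng[1])
    }
    d
  }))
}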
offset [numeric(1)]
Offset added to the reference point estimations. Default is 0.
ref.points [list]
Named list of numeric vectors (the reference points). The names must be the unique problem names in df$prob or a subset of these. If NULL (the default), reference points are estimated from the approximation sets for each problem.
ref.sets [list]
Named list of matrices (the reference sets). The names must be the unique problem names in df$prob or a subset of these. If NULL (the default), reference sets are estimated from the approximation sets for each problem.
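
Reference points and reference sets can also be supplied by hand; the list names must match the problem names in df$prob. In the sketch below the values are placeholders (reusing the synthetic df from above), and the orientation of the reference-set matrix (objective vectors as columns) is an assumption:

# hand-crafted linear reference front and a slightly shifted reference point
ref.front = rbind(f1 = seq(0, 1, by = 0.05),
                  f2 = 1 - seq(0, 1, by = 0.05))

res = computeIndicators(
  df,
  ref.points = list("ZDT1-like" = c(1.1, 1.1)),
  ref.sets   = list("ZDT1-like" = ref.front)
)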
[1] Knowles, J., Thiele, L., & Zitzler, E. (2006). A Tutorial on the Performance Assessment of Stochastic Multiobjective Optimizers. Retrieved from https://sop.tik.ee.ethz.ch/KTZ2005a.pdf
[2] Knowles, J., & Corne, D. (2002). On Metrics for Comparing Non-Dominated Sets. In Proceedings of the 2002 Congress on Evolutionary Computation Conference (CEC02) (pp. 711–716). Honolulu, HI, USA: IEEE.
[3] Okabe, T., Yaochu, Y., & Sendhoff, B. (2003). A Critical Survey of Performance Indices for Multi-Objective Optimisation. In Proceedings of the 2003 Congress on Evolutionary Computation Conference (CEC03) (pp. 878–885). Canberra, ACT, Australia: IEEE.