Ranking is performed by merging all approximation sets over all
algorithms and runs per instance. Next, each approximation set \(C\) is assigned a
rank which is 1 plus the number of approximation sets that are better than
\(C\). A set \(D\) is better than \(C\) if for each point \(x \in C\) there
exists a point \(y \in D\) which weakly dominates \(x\).
Thus, each approximation set is reduced to a single number, its rank. This rank distribution
may serve as a first comparison of multi-objective stochastic optimizers.
See [1] for more details.
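The following base-R sketch illustrates the "better than" relation and the resulting rank. It is not the package's actual implementation: it assumes minimization, stores each approximation set as a matrix with one point per column, and uses hypothetical helper names chosen only for illustration.

# TRUE if point a weakly dominates point b (minimization assumed)
weaklyDominates = function(a, b) all(a <= b)

# D is better than C if every point of C is weakly dominated by some point of D
isBetter = function(D, C)
  all(apply(C, 2L, function(x) any(apply(D, 2L, weaklyDominates, b = x))))

# rank of the i-th set: 1 plus the number of other sets that are better
dominanceRank = function(i, sets)
  1L + sum(vapply(sets[-i], function(D) isBetter(D, sets[[i]]), logical(1L)))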
This function makes use of parallelMap to parallelize the computation of dominance ranks.
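One possible way to activate parallelization is to start a parallelMap backend before the call and stop it afterwards; the multicore backend shown here is just one option and the worker count is arbitrary.

library(parallelMap)
parallelStartMulticore(cpus = 2L)  # start a multicore backend with two workers
# ... call computeDominanceRanking() here ...
parallelStop()                     # shut the backend down again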
Usage:
computeDominanceRanking(df, obj.cols)

Value:
[data.frame] Reduced df with columns "prob", "algorithm", "repl" and "rank".

Arguments:
df [data.frame]
Data frame with columns at least "prob", "algorithm", "repl" and the
column names specified via parameter obj.cols.
obj.cols [character(>= 2)]
Column names in df which store the objective function values.
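A hedged usage example on a small synthetic data set; the problem and algorithm names below are made up for illustration, and each (prob, algorithm, repl) group forms one approximation set.

set.seed(1)
df = data.frame(
  prob      = "ZDT1",
  algorithm = rep(c("NSGA-II", "SMS-EMOA"), each = 4L),
  repl      = rep(1:2, each = 2L),
  f1        = runif(8L),
  f2        = runif(8L)
)
ranks = computeDominanceRanking(df, obj.cols = c("f1", "f2"))
head(ranks)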
[1] Knowles, J., Thiele, L., & Zitzler, E. (2006). A Tutorial on the Performance Assessment of Stochastic Multiobjective Optimizers. Retrieved from https://sop.tik.ee.ethz.ch/KTZ2005a.pdf
Other EMOA performance assessment tools:
approximateNadirPoint(), approximateRefPoints(), approximateRefSets(),
emoaIndEps(), makeEMOAIndicator(), niceCellFormater(), normalize(),
plotDistribution(), plotFront(), plotScatter2d(), plotScatter3d(), toLatex()