Evaluates the provided expressions in a loop and reports the mean evaluation time.
This is inferior to microbenchmark and other benchmarking tools in many
ways, except that it has no dependencies or suggests, which helps keep
package build and test times down. It is used in vignettes.
Usage
bench_mark(..., times = 1000L, deparse.width = 40)
Value
NULL, invisibly; timings are reported as a side effect, printed to the screen.
Arguments
...
expressions to benchmark; captured unevaluated
times
number of times to evaluate each expression in the loop; defaults to 1000
deparse.width
maximum number of characters to use when deparsing expressions for labels
Details
Runs gc() before each expression is evaluated. Expressions are evaluated
in the order provided. Attempts to estimate the overhead of the loop by
running a loop that evaluates NULL 'times' times.
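
A minimal sketch of that timing logic, assuming a simple elapsed-time loop
(the names time_one and bench_sketch are hypothetical; the actual internals
of bench_mark may differ):

# Time one expression: gc() first, then evaluate it 'times' times
time_one <- function(expr, times, env) {
  gc()                                   # gc() before each expression
  start <- proc.time()[["elapsed"]]
  for (i in seq_len(times)) eval(expr, env)
  proc.time()[["elapsed"]] - start
}

bench_sketch <- function(..., times = 1000L) {
  exprs <- as.list(substitute(list(...)))[-1L]  # captured unevaluated
  env <- parent.frame()
  # Estimate loop overhead with a loop that evaluates NULL 'times' times
  overhead <- time_one(quote(NULL), times, env)
  # Mean per-iteration time for each expression, net of loop overhead
  vapply(
    exprs,
    function(e) (time_one(e, times, env) - overhead) / times,
    numeric(1L)
  )
}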
Unfortunately, because this computes the average of all iterations, it is
very susceptible to outliers in small sample runs, particularly with
fast-running code. For that reason the default number of iterations is one
thousand.
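Examples
A hypothetical call (timings are printed to the screen; the exact output
format of bench_mark may differ):

x <- runif(1e4)
bench_mark(sum(x), mean(x), times = 100L)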