Note that arguments after ...
must be matched exactly.
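For instance (an illustrative call, not taken from this page; the quadratic f is an arbitrary choice), maximum is only recognised when spelled out in full, whereas an abbreviation such as max is collected by ... and passed on to f:

    f <- function(x) -(x - 1/3)^2
    optimize(f, c(0, 1), maximum = TRUE)   # 'maximum' matched exactly: finds the maximum near 1/3
    ## optimize(f, c(0, 1), max = TRUE)    # 'max' is NOT matched to 'maximum';
    ##                                     # it is passed to f via ... and gives an error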
The method used is a combination of golden section search and
successive parabolic interpolation, and was designed for use with
continuous functions. Convergence is never much slower
than that for a Fibonacci search. If f has a continuous second derivative which is positive at the minimum (which is not at lower or upper), then convergence is superlinear, and usually of the order of about 1.324.
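A minimal illustration of such a call (the function, interval and tolerance below are arbitrary choices, not taken from this page):

    f <- function(x) (x - 1/3)^2            # smooth, continuous, minimum at x = 1/3
    res <- optimize(f, interval = c(0, 1), tol = 1e-8)
    res$minimum     # close to 1/3
    res$objective   # close to 0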
The function f is never evaluated at two points closer together than \(\epsilon |x_0| + (tol/3)\), where \(\epsilon\) is approximately sqrt(.Machine$double.eps) and \(x_0\) is the final abscissa optimize()$minimum.
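A small sketch of that spacing, using an assumed final abscissa and tolerance (the values are illustrative only):

    eps <- sqrt(.Machine$double.eps)   # about 1.49e-8
    x0  <- 0.5                         # assumed final abscissa
    tol <- 1e-4                        # assumed tolerance
    eps * abs(x0) + tol / 3            # smallest separation of two evaluation points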
If f is a unimodal function and the computed values of f are always unimodal when separated by at least \(\epsilon |x| + (tol/3)\), then \(x_0\) approximates the abscissa of the global minimum of f on the interval (lower, upper) with an error less than \(\epsilon |x_0| + tol\).
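This bound can be checked numerically for a unimodal function whose minimiser is known (the function, interval and tolerance below are arbitrary choices):

    f   <- function(x) (x - 1/3)^2     # unimodal, minimiser known to be 1/3
    tol <- 1e-4
    res <- optimize(f, c(0, 1), tol = tol)
    ## should be TRUE under the guarantee above
    abs(res$minimum - 1/3) <= sqrt(.Machine$double.eps) * abs(res$minimum) + tol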
If f is not unimodal, then optimize() may approximate a local, but perhaps non-global, minimum to the same accuracy.
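For example (an arbitrary non-unimodal polynomial, not from this page), the result can depend on the interval supplied:

    f <- function(x) x^4 - 3*x^2 + x   # local minima near x = -1.30 (global) and x = 1.13
    optimize(f, c(0, 2))$minimum       # only the non-global minimum lies in (0, 2): about 1.13
    optimize(f, c(-2, 2))$minimum      # f is not unimodal on (-2, 2); the result need not be the global minimiser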
The first evaluation of f is always at \(x_1 = a + (1-\phi)(b-a)\), where (a, b) = (lower, upper) and \(\phi = (\sqrt 5 - 1)/2 = 0.61803\ldots\) is the golden section ratio. Almost always, the second evaluation is at \(x_2 = a + \phi(b-a)\).
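Written out in R (the interval below is a placeholder for whatever lower and upper are supplied):

    phi   <- (sqrt(5) - 1) / 2                   # golden section ratio, about 0.618
    lower <- 0; upper <- 1                       # illustrative interval
    x1 <- lower + (1 - phi) * (upper - lower)    # first evaluation point, about 0.382
    x2 <- lower + phi * (upper - lower)          # (almost always) the second point, about 0.618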
Note that a local minimum inside \([x_1, x_2]\) will be found as the solution, even when f is constant on that interval; see the last example.
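The example referred to above is not reproduced here; the following sketch, using an arbitrary piecewise-constant function, makes the same point: both initial evaluation points fall into a flat region, so the search stays there and never sees the lower value near x = 1.

    f <- function(x) ifelse(abs(x - 1) < 0.5, -1, 0)   # dip on (0.5, 1.5), flat (= 0) elsewhere
    ## On (0, 10), x1 ~ 3.82 and x2 ~ 6.18 both lie in the flat part,
    ## so the reported objective is 0 rather than -1.
    optimize(f, c(0, 10))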
f will be called as f(x, ...) for a numeric value of x. The argument passed to f has special semantics and used to be shared between calls; the function should not copy it.
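For example (an arbitrary parametrised function), an additional argument a is simply passed through the dots:

    f <- function(x, a) (x - a)^2
    optimize(f, c(0, 10), a = pi)$minimum   # close to pi; 'a' reaches f via ...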
The implementation is a vectorised version of the optimize
function.
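As a rough sketch of what such a vectorisation could look like (this wrapper, its name, and its argument handling are illustrative assumptions, not the actual implementation), one call of stats::optimize per interval:

    ## Illustrative sketch only, not the actual implementation.
    voptimize_sketch <- function(f, lower, upper, ...) {
      vapply(seq_along(lower), function(i)
        optimize(f, lower = lower[i], upper = upper[i], ...)$minimum,
        numeric(1))
    }
    voptimize_sketch(function(x) (x - 1/3)^2, lower = c(0, -5), upper = c(1, 5))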