Usage:

optim(par, fn, gr = NULL, ...,
      method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN",
                 "Brent"),
      lower = -Inf, upper = Inf,
      control = list(), hessian = FALSE)

optimHess(par, fn, gr = NULL, ..., control = list())
"BFGS"
,
"CG"
and "L-BFGS-B"
methods. If it is NULL
, a
finite-difference approximation will be used. For the "SANN"
method it specifies a function to generate a new
candidate point. If it is NULL
a default Gaussian Markov
kernel is used.
...: Further arguments to be passed to fn and gr.

lower, upper: Bounds on the variables for the "L-BFGS-B" method, or bounds in which to search for method "Brent".
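For concreteness, here is a small sketch of these arguments in use, minimizing the Rosenbrock banana function first without and then with an analytic gradient (the objective and starting value are illustrative choices):

fr <- function(x) {   # Rosenbrock banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) {  # its analytic gradient
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 * (x2 - x1 * x1))
}
optim(c(-1.2, 1), fr)                        # gr = NULL: finite differences if needed
optim(c(-1.2, 1), fr, grr, method = "BFGS")  # analytic gradient supplied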
Value:

For optim, a list with components:

par: The best set of parameters found.

value: The value of fn corresponding to par.

counts: A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.
convergence: An integer code. 0 indicates successful completion (which is always the case for "SANN" and "Brent"). Possible error codes are:

1: indicates that the iteration limit maxit had been reached.

10: indicates degeneracy of the Nelder-Mead simplex.

51: indicates a warning from the "L-BFGS-B" method; see component message for further details.

52: indicates an error from the "L-BFGS-B" method; see component message for further details.

message: A character string giving any additional information returned by the optimizer, or NULL.
hessian: Only if argument hessian is true. A symmetric matrix giving an estimate of the Hessian at the solution found. Note that this is the Hessian of the unconstrained problem even if the box constraints are active.

For optimHess, the description of the hessian component applies.
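As a quick sketch of these components (the quadratic objective here is an arbitrary illustration, and the exact counts will vary):

res <- optim(c(1, 1), function(x) sum(x^2), hessian = TRUE)
res$par          # best parameters found
res$value        # value of fn at res$par
res$counts       # number of calls to fn and gr
res$convergence  # 0 indicates successful completion
res$hessian      # present because hessian = TRUE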
Details:

Note that arguments after ... must be matched exactly. By default optim performs minimization, but it will maximize if control$fnscale is negative. optimHess is an auxiliary function to compute the Hessian at a later stage if hessian = TRUE was forgotten.
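A minimal sketch of both points, using an arbitrary concave function whose maximum is known:

g <- function(x) -(x[1] - 1)^2 - (x[2] + 2)^2   # maximum at (1, -2)
res <- optim(c(0, 0), g, control = list(fnscale = -1))  # maximize, not minimize
res$par

optimHess(res$par, g)  # Hessian of g at the solution, computed after the fact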
The default method is an implementation of that of Nelder and Mead (1965), which uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions.
Method "BFGS" is a quasi-Newton method (also known as a variable metric algorithm), specifically that published simultaneously in 1970 by Broyden, Fletcher, Goldfarb and Shanno. This uses function values and gradients to build up a picture of the surface to be optimized.
Method "CG" is a conjugate gradients method based on that by Fletcher and Reeves (1964) (but with the option of Polak--Ribiere or Beale--Sorenson updates). Conjugate gradient methods will generally be more fragile than the BFGS method, but as they do not store a matrix they may be successful in much larger optimization problems.
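For instance (reusing fr and grr from the earlier sketch; the update choice here is an arbitrary illustration):

optim(c(-1.2, 1), fr, grr, method = "CG",
      control = list(type = 2))  # type = 2 selects Polak--Ribiere updates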
Method "L-BFGS-B" is that of Byrd et al. (1995) which allows box constraints, that is, each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning.
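A box-constrained sketch (again reusing fr and grr; the bounds are arbitrary illustrations):

optim(c(0.5, 0.5), fr, grr, method = "L-BFGS-B",
      lower = c(0, 0), upper = c(2, 2))  # each parameter kept in [0, 2]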
Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.
Method "SANN" is by default a variant of simulated annealing given in Belisle (1992). Simulated annealing belongs to the class of stochastic global optimization methods. It uses only function values but is relatively slow. It will also work for non-differentiable functions. This implementation uses the Metropolis function for the acceptance probability. By default the next candidate point is generated from a Gaussian Markov kernel with scale proportional to the actual temperature. If a function to generate a new candidate point is given, method "SANN" can also be used to solve combinatorial optimization problems. Temperatures are decreased according to the logarithmic cooling schedule as given in Belisle (1992, p. 890); specifically, the temperature is set to temp / log(((t-1) %/% tmax)*tmax + exp(1)), where t is the current iteration step and temp and tmax are specifiable via control, see below.

Note that the "SANN" method depends critically on the settings of the control parameters. It is not a general-purpose method but can be very useful in getting to a good value on a very rough surface.
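A stochastic sketch (reusing fr; the seed, temperature and evaluation budget are illustrative choices only):

set.seed(123)  # "SANN" is stochastic, so fix the seed for a reproducible run
res <- optim(c(-1.2, 1), fr, method = "SANN",
             control = list(maxit = 20000, temp = 20))
res$par  # near, but usually not exactly at, the minimum c(1, 1)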
Method "Brent" is for one-dimensional problems only, using optimize(). It can be useful in cases where optim() is used inside other functions where only method can be specified, such as in mle from package stats4.
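For example (the objective and bounds are illustrative; this is equivalent to optimize(f, lower = 0, upper = 10)):

f <- function(x) (x - 1/3)^2
optim(0, f, method = "Brent", lower = 0, upper = 10)  # minimum at x = 1/3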
Function fn can return NA or Inf if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of fn. (Except for method "L-BFGS-B" where the values should always be finite.)
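A sketch of that convention, with an arbitrary objective that is undefined outside the positive orthant:

f <- function(x) if (any(x <= 0)) Inf else sum(x * log(x))
optim(c(1, 1), f)  # start c(1, 1) is finite; each coordinate is minimized at exp(-1)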
optim can be used recursively, and for a single parameter as well as many. It also accepts a zero-length par, and just evaluates the function with that argument.

The control argument is a list that can supply any of the following components:
trace: Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. (To understand exactly what these do see the source code: higher levels give more detail.)

fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.

parscale: A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value. Not used (nor needed) for method = "Brent".

ndeps: A vector of step sizes for the finite-difference approximation to the gradient, on par/parscale scale. Defaults to 1e-3.

maxit: The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead". For "SANN" maxit gives the total number of function evaluations: there is no other stopping criterion. Defaults to 10000.

abstol: The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.

reltol: Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.

alpha, beta, gamma: Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).

REPORT: The frequency of reports for the "BFGS", "L-BFGS-B" and "SANN" methods if control$trace is positive. Defaults to every 10 iterations for "BFGS" and "L-BFGS-B", or every 100 temperatures for "SANN".

type: for the conjugate-gradients method. Takes value 1 for the Fletcher--Reeves update, 2 for Polak--Ribiere and 3 for Beale--Sorenson.

lmm: is an integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 5.

factr: controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.

pgtol: helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.

temp: controls the "SANN" method. It is the starting temperature for the cooling schedule. Defaults to 10.

tmax: is the number of function evaluations at each temperature for the "SANN" method. Defaults to 10.
.par
will be copied to the vectors passed to
fn
and gr
. Note that no other attributes of par
are copied over. The parameter vector passed to fn
has special semantics and may
be shared between calls: the function should not change or copy it.nlm
, nlminb
. optimize
for one-dimensional minimization and
constrOptim
for constrained optimization.