Usage:

constrOptim(theta, f, grad, ui, ci, mu = 1e-04, control = list(),
            method = if(is.null(grad)) "Nelder-Mead" else "BFGS",
            outer.iterations = 100, outer.eps = 1e-05, ..., hessian = FALSE)
Arguments:

grad: gradient of f (a function as well), or NULL (see below).

control, method, hessian: passed to optim.

...: other named arguments to be passed to f and grad; these need to be
passed through optim and so should not match its argument names.

Details:

The feasible region is defined by ui %*% theta - ci >= 0.
The starting value must be in the interior of the feasible region, but the
minimum may be on the boundary. A logarithmic barrier is added to enforce
the constraints and then optim is called. The barrier function is chosen
so that the objective function should decrease at each outer iteration.
Minima in the interior of the feasible region are typically found quite
quickly, but a substantial number of outer iterations may be needed for a
minimum on the boundary.
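As a concrete sketch (the quadratic objective, the constraints, and the
starting point below are invented for illustration, not taken from this
page), consider minimising a quadratic over the triangle x1 >= 0,
x2 >= 0, x1 + x2 <= 1, where the minimum falls on the boundary:

f  <- function(x) (x[1] - 2)^2 + (x[2] - 2)^2
gr <- function(x) c(2 * (x[1] - 2), 2 * (x[2] - 2))

ui <- rbind(c( 1,  0),   # x1 >= 0
            c( 0,  1),   # x2 >= 0
            c(-1, -1))   # -x1 - x2 >= -1, i.e. x1 + x2 <= 1
ci <- c(0, 0, -1)

## The start c(0.25, 0.25) is strictly interior; the constrained minimum
## (0.5, 0.5) lies on the boundary x1 + x2 = 1, so the outer iterations
## have to push the solution towards that edge.
res <- constrOptim(c(0.25, 0.25), f, gr, ui = ui, ci = ci)
res$par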
The tuning parameter mu multiplies the barrier term. Its precise value is
often relatively unimportant. As mu increases the augmented objective
function becomes closer to the original objective function but also less
smooth near the boundary of the feasible region.
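Continuing the sketch above, this insensitivity can be checked directly;
the comparison itself is illustrative, not taken from this page:

## Re-run with a barrier multiplier 100 times larger; the two objective
## values should be very close.
res_default <- constrOptim(c(0.25, 0.25), f, gr, ui = ui, ci = ci)
res_larger  <- constrOptim(c(0.25, 0.25), f, gr, ui = ui, ci = ci, mu = 1e-02)
c(default = res_default$value, larger_mu = res_larger$value)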
Any optim
method that permits infinite values for the
objective function may be used (currently all but "L-BFGS-B").
The objective function f takes as first argument the vector of parameters
over which minimisation is to take place. It should return a scalar
result. Optional arguments ... will be passed to optim and then (if not
used by optim) to f. As with optim, the default is to minimise, but
maximisation can be performed by setting control$fnscale to a negative
value.
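As a sketch of both points (fmax, gmax and the extra argument a are
invented for this example), the following passes a named argument through
... and maximises by giving control$fnscale a negative value, reusing ui
and ci from the earlier sketch:

## Maximise the concave function -(x1 - a)^2 - (x2 - a)^2; the named
## argument 'a' travels through '...' (via optim) to fmax and gmax.
fmax <- function(x, a) -(x[1] - a)^2 - (x[2] - a)^2
gmax <- function(x, a) c(-2 * (x[1] - a), -2 * (x[2] - a))

constrOptim(c(0.25, 0.25), fmax, gmax, ui = ui, ci = ci,
            control = list(fnscale = -1), a = 2)$par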
The gradient function grad must be supplied except with
method = "Nelder-Mead". It should take arguments matching those of f and
return a vector containing the gradient.
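Using the objective and constraints from the first sketch, a gradient-free
call simply passes grad = NULL, which also makes "Nelder-Mead" the default
method:

## No gradient supplied: grad = NULL selects "Nelder-Mead" by default.
constrOptim(c(0.25, 0.25), f, grad = NULL, ui = ui, ci = ci)$par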
See Also:

optim, especially method = "L-BFGS-B", which does box-constrained
optimisation.
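When the constraints are simple bounds on each parameter, optim can handle
them directly; a sketch with the invented quadratic from above:

## Box constraints 0 <= x1, x2 <= 1 via optim's "L-BFGS-B" method.
optim(c(0.25, 0.25), f, gr, method = "L-BFGS-B",
      lower = c(0, 0), upper = c(1, 1))$par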