Sequential Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.
This implementation is primarily intended to be called from other maximization routines, such as maxNR.
sumt(fn, grad = NULL, hess = NULL,
     start,
     maxRoutine, constraints,
     SUMTTol = sqrt(.Machine$double.eps),
     SUMTPenaltyTol = sqrt(.Machine$double.eps),
     SUMTQ = 10,
     SUMTRho0 = NULL,
     printLevel = print.level, print.level = 0, SUMTMaxIter = 100, ...)
Object of class 'maxim'. In addition, a component constraints is added: a list describing the constrained optimization, with the following components:
type: type of constrained optimization
barrier.value: value of the penalty function at maximum
code: code for the stopping condition
message: a short message, describing the stopping condition
outer.iterations: number of iterations in the SUMT step
fn: function of a (single) vector parameter. The function may have further arguments (passed through ...), but those are not treated as the parameter.
grad: gradient function of fn; NULL if missing.
hess: function, Hessian of fn; NULL if missing.
start: numeric, initial value of the parameter.
maxRoutine: maximization algorithm, such as maxNR.
constraints: list, information for constrained maximization. Currently two components are supported: eqA and eqB for linear equality constraints \(A \beta + B = 0\). The user must ensure that the matrices A and B are conformable; see the construction sketch after this argument list.
SUMTTol: stopping condition. If the estimates at successive outer iterations are close enough, i.e. the maximum absolute difference over the components is smaller than SUMTTol, the algorithm stops. Note that this does not necessarily mean that the constraints are satisfied: if the penalty function is too “weak”, SUMT may repeatedly find the same optimum, in which case a warning is issued. The user may set SUMTTol to a lower value, e.g. to zero.
SUMTPenaltyTol: stopping condition. If the barrier value (also called the penalty) \((A \beta + B)'(A \beta + B)\) is less than SUMTPenaltyTol, the algorithm stops.
SUMTQ: a double greater than one, controlling the growth of rho as described in Details. Defaults to 10.
SUMTRho0: initial value for rho. If not specified, a (possibly) suitable value is selected; see Details. One should consider supplying SUMTRho0 in cases where the unconstrained problem does not have a maximum, or where the maximum is too far from the constrained optimum; otherwise the automatically selected value may not lead to convergence.
printLevel: integer, debugging information; a larger number prints more details.
print.level: same as printLevel, for backward compatibility.
SUMTMaxIter: maximum number of SUMT iterations.
...: other arguments passed to maxRoutine and fn.
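To illustrate the conformability requirement on eqA and eqB, here is a small hedged sketch; the three-parameter problem and both constraints are invented for this illustration:

## Hypothetical example: three parameters beta = (b1, b2, b3) with two
## linear equality constraints, b1 + b2 + b3 = 2 and b1 - b3 = 0,
## written in the required form A beta + B = 0
A <- matrix(c(1, 1,  1,
              1, 0, -1),
            nrow = 2, byrow = TRUE)  # one row per constraint
B <- c(-2, 0)                        # chosen so that A beta + B = 0
## A is 2 x 3 and B has length 2, hence conformable with a 3-vector beta
constraints <- list(eqA = A, eqB = B)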
Ott Toomet, Arne Henningsen
The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function \(f\) subject to constraints, it uses a non-negative penalty function \(P\), such that \(P(x)\) is zero iff \(x\) satisfies the constraints. One iteratively minimizes \(f(x) + \varrho_k P(x)\), where the \(\varrho\) values are increased according to the rule \(\varrho_{k+1} = q \varrho_k\) for some constant \(q > 1\), until convergence is achieved in the sense that the barrier value \(P(x)\) is close to zero. (Here the same idea is applied to maximization: one iteratively maximizes \(f(x) - \varrho_k P(x)\).) Note that there is no guarantee that the global constrained optimum is found. Standard practice recommends using the best solution found in “sufficiently many” replications.
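To make the iteration concrete, the following is a minimal sketch of the SUMT loop for the maximization case, written with the linear-equality penalty \(P(x) = (A x + B)'(A x + B)\). It uses stats::optim for the unconstrained step, and the names sumtSketch, penaltyTol and maxOuter are invented for this illustration; it is not the package's implementation:

## Illustrative SUMT loop (not the maxLik implementation):
## maximize f subject to A x + B = 0 by repeatedly maximizing
## f(x) - rho * P(x) and growing rho by the factor q
sumtSketch <- function(f, A, B, start, rho0 = 1, q = 10,
                       penaltyTol = 1e-8, maxOuter = 100) {
   P <- function(x) { r <- A %*% x + B; sum(r * r) }  # zero iff x is feasible
   x <- start
   rho <- rho0
   for (k in seq_len(maxOuter)) {
      ## unconstrained step: optim() minimizes, so negate the objective
      x <- optim(x, function(z) -(f(z) - rho * P(z)))$par
      if (P(x) < penaltyTol) break  # constraints (approximately) satisfied
      rho <- q * rho                # strengthen the penalty
   }
   x
}

With the constraint x + y = 1 from the example below, this sketch should drive the iterates toward (0.5, 0.5), the constrained maximizer of exp(-(x^2 + y^2)).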
Any of the maximization algorithms in the maxLik package, such as maxNR, can be used for the unconstrained step. Analytic gradient and Hessian are used if provided.
sumt in package clue.
## We maximize exp(-x^2 - y^2) subject to x + y = 1
library(maxLik)
hatf <- function(theta) {
   x <- theta[1]
   y <- theta[2]
   ## note: you may prefer exp(-(theta %*% theta)) instead
   exp(-(x^2 + y^2))
}
## constraints: x + y = 1, i.e. A theta + B = 0 with A = (1, 1), B = -1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- sumt(hatf, start = c(0, 0), maxRoutine = maxNR,
            constraints = list(eqA = A, eqB = B))
print(summary(res))
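Assuming the same example objects, another maxLik maximizer such as maxBFGS can be substituted for maxNR, and the SUMT diagnostics can be read from the constraints component listed under Value above:

## same problem with a different unconstrained maximizer
res2 <- sumt(hatf, start = c(0, 0), maxRoutine = maxBFGS,
             constraints = list(eqA = A, eqB = B))
## SUMT diagnostics from the returned 'constraints' component
res2$constraints$barrier.value     # penalty at the maximum, should be near zero
res2$constraints$outer.iterations  # number of outer SUMT iterations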