stats (version 3.5.3)

nlminb: Optimization using PORT routines

Description

Unconstrained and box-constrained optimization using PORT routines.

Provided mainly for historical compatibility.

Usage

nlminb(start, objective, gradient = NULL, hessian = NULL, ...,
       scale = 1, control = list(), lower = -Inf, upper = Inf)

Arguments

start

numeric vector, initial values for the parameters to be optimized.

objective

Function to be minimized. Must return a scalar value. The first argument to objective is the vector of parameters to be optimized, whose initial values are supplied through start. Further arguments (fixed during the course of the optimization) to objective may be specified as well (see the ... argument below).

gradient

Optional function that takes the same arguments as objective and evaluates the gradient of objective at its first argument. Must return a vector as long as start.

hessian

Optional function that takes the same arguments as objective and evaluates the hessian of objective at its first argument. Must return a square matrix of order length(start). Only the lower triangle is used.

...

Further arguments to be supplied to objective (a brief sketch follows this argument list).

scale

See PORT documentation (or leave alone).

control

A list of control parameters. See below for details.

lower, upper

vectors of lower and upper bounds, replicated to be as long as start. If unspecified, all parameters are assumed to be unconstrained.
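
Illustration only (not part of the original help page): a minimal sketch, with made-up names obj, grad, hess and data vector y, showing a scalar-valued objective, a gradient of length length(start), a hessian of order length(start), and a fixed argument passed through the dots.

obj  <- function(p, y) sum((p - y)^2)            # returns a single numeric value
grad <- function(p, y) 2 * (p - y)               # vector as long as start
hess <- function(p, y) diag(2, nrow = length(p)) # square matrix; only the lower triangle is used
nlminb(start = c(0, 0, 0), objective = obj,
       gradient = grad, hessian = hess, y = c(1, 2, 3))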

Value

A list with components:

par

The best set of parameters found.

objective

The value of objective corresponding to par.

convergence

An integer code. 0 indicates successful convergence.

message

A character string giving any additional information returned by the optimizer, or NULL. For details, see PORT documentation.

iterations

Number of iterations performed.

evaluations

Number of objective function and gradient function evaluations.
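
Illustration only (not part of the original help page): a minimal sketch fitting a simple quadratic and inspecting the returned components.

fit <- nlminb(start = c(0, 0), objective = function(p) sum((p - c(1, 2))^2))
fit$par          # best parameters found, here close to c(1, 2)
fit$objective    # objective value at fit$par
fit$convergence  # 0 on successful convergence
fit$message      # text returned by the PORT optimizer
fit$iterations   # number of iterations performed
fit$evaluations  # counts of objective and gradient evaluations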

Control parameters

Possible names in the control list and their default values are:

eval.max

Maximum number of evaluations of the objective function allowed. Defaults to 200.

iter.max

Maximum number of iterations allowed. Defaults to 150.

trace

The value of the objective function and the parameters is printed every trace'th iteration. Defaults to 0, which indicates that no trace information is to be printed.

abs.tol

Absolute tolerance. Defaults to 0 so the absolute convergence test is not used. If the objective function is known to be non-negative, the previous default of 1e-20 would be more appropriate.

rel.tol

Relative tolerance. Defaults to 1e-10.

x.tol

X tolerance. Defaults to 1.5e-8.

xf.tol

False convergence tolerance. Defaults to 2.2e-14.

step.min, step.max

Minimum and maximum step size. Both default to 1.

sing.tol

Singular convergence tolerance; defaults to rel.tol.

scale.init

...

diff.g

An estimated bound on the relative error in the objective function value.
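
Illustration only (not part of the original help page): a minimal sketch supplying several control parameters at once, using the Rosenbrock function that also appears in the Examples below.

fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2   # Rosenbrock Banana function
nlminb(c(-1.2, 1), fr,
       control = list(eval.max = 500, iter.max = 300,
                      trace = 10, rel.tol = 1e-12))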

Details

Any names of start are passed on to objective and, where applicable, to gradient and hessian. The parameter vector will be coerced to double.

If any of the functions returns NA or NaN, this is an error for the gradient and Hessian; for objective function evaluations, such values are replaced by +Inf, with a warning.
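
Illustration only (not part of the original help page): a minimal sketch showing that names given to start can be used inside objective; the function obj is made up for this example.

obj <- function(p) sum((p["a"] - 1)^2, (p["b"] + 2)^2)  # parameters looked up by name
nlminb(start = c(a = 0, b = 0), objective = obj)$par    # minimum near a = 1, b = -2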

References

David M. Gay (1990), Usage summary for selected optimization routines. Computing Science Technical Report 153, AT&T Bell Laboratories, Murray Hill.

See Also

optim (which is preferred) and nlm.

optimize for one-dimensional minimization and constrOptim for constrained optimization.

Examples

x <- rnbinom(100, mu = 10, size = 10)
hdev <- function(par)
    -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
nlminb(c(9, 12), hdev)
nlminb(c(20, 20), hdev, lower = 0, upper = Inf)
nlminb(c(20, 20), hdev, lower = 0.001, upper = Inf)

## slightly modified from the S-PLUS help page for nlminb
# this example minimizes a sum of squares with known solution y
sumsq <- function(x, y) {sum((x-y)^2)}
y <- rep(1,5)
x0 <- rnorm(length(y))
nlminb(start = x0, sumsq, y = y)
# now use bounds with a y that has some components outside the bounds
y <- c(0, 2, 0, -2, 0)
nlminb(start = x0, sumsq, lower = -1, upper = 1, y = y)
# try using the gradient
sumsq.g <- function(x, y) 2*(x-y)
nlminb(start = x0, sumsq, sumsq.g,
       lower = -1, upper = 1, y = y)
# now use the hessian, too
sumsq.h <- function(x, y) diag(2, nrow = length(x))
nlminb(start = x0, sumsq, sumsq.g, sumsq.h,
       lower = -1, upper = 1, y = y)

## Rest lifted from optim help page

fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) { ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 *      (x2 - x1 * x1))
}
nlminb(c(-1.2,1), fr)
nlminb(c(-1.2,1), fr, grr)


flb <- function(x)
    { p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) }
## 25-dimensional box constrained
## par[24] is *not* at boundary
nlminb(rep(3, 25), flb, lower = rep(2, 25), upper = rep(4, 25))
## trying to use a too small tolerance:
r <- nlminb(rep(3, 25), flb, control = list(rel.tol = 1e-16))
stopifnot(grepl("rel.tol", r$message))