gsl (version 2.1-1)

multimin: Function minimization

Description

Function minimization using the GNU Scientific Library, reference manual section 35. These functions are declared in the header file gsl_multimin.h. Several algorithms for finding (local) minima of functions of one or more variables are provided. All of the algorithms operate locally, in the sense that they maintain a best guess and require the function to be continuous. All algorithms except Nelder-Mead also use the function's derivative.

Usage

multimin(..., prec=0.0001)
multimin.init(x, f, df=NA, fdf=NA, method=NA, step.size=NA, tol=NA)
multimin.iterate(state)
multimin.restart(state)
multimin.fminimizer.size(state)

Arguments

...
In function multimin(), the argument list is passed on to multimin.init()
x
A starting point. These algorithms are faster with better initial guesses
f
The function to minimize. This function must take a single numeric vector as input, and output a numeric scalar
df
The derivative of f. This is required for all algorithms except Nelder-Mead
fdf
A function that evaluates f and df simultaneously. This is optional, and is only useful if simultaneous evaluation is faster
method
The algorithm to use, which is one of “conjugate-fr”, “conjugate-pr”, “bfgs”, “steepest-descent” and “nm” (Nelder-Mead)
step.size
An initial step size, which guides the algorithm in choosing the distance between successive points in its search
tol
This parameter is relevant for the gradient-based methods. It controls how much the gradient should flatten out in each line search. More specifically, let $u(t) = f(x + st)$ be the function restricted to the search ray, where $s$ is the search direction. A point $t$ is then acceptable if $u'(t) < \mathrm{tol} \cdot u'(0)$. Higher values give laxer line searches. This parameter trades off searching intensively in the outer loop (finding search directions) against searching intensively in the inner loop (finding a good point along a particular direction); see the sketch after this argument list
prec
The stopping-rule precision parameter. For the derivative-based methods, a solution is good enough if the norm of the gradient is smaller than prec. For the non-derivative-based method, a solution is good enough if the distance between successive solutions is smaller than prec
state
This stores all information relating to the progress of the optimization problem
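
As a rough illustration of how the tuning arguments fit together, the following sketch passes them through multimin(). It assumes the Rosenbrock f and df defined in the Examples section below; the particular values are illustrative, not recommendations.

state <- multimin(c(-1.2, 1), f, df,
                  method="conjugate-pr",
                  step.size=0.01, # initial trial step for each line search
                  tol=0.1,        # laxer line searches
                  prec=1e-6)      # stop once the gradient norm falls below 1e-6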

Value

multimin, multimin.init and multimin.iterate return the optimizer state: a list whose items include x, the current best estimate of the location of the minimum (used as state$x in the Examples below).

Details

There are two ways to call multimin. The simple way is to call multimin directly. A more flexible way is to call multimin.init first, and then repeatedly call multimin.iterate until the guess is good enough. With the second approach, multimin.restart can be used to discard accumulated information (such as curvature information) if that information turns out to be unhelpful; this is roughly equivalent to calling multimin.init with the starting point set to the current best guess.
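
The following is a minimal sketch of the iterate/restart pattern, assuming the Rosenbrock f and df defined in the Examples section below; the restart schedule is an arbitrary illustration.

state <- multimin.init(c(-1.2, 1), f, df, method="bfgs")
for (i in 1:200) {
	state <- multimin.iterate(state)
	# Periodically discard accumulated curvature information;
	# restarting every 50 iterations is an arbitrary choice.
	if (i %% 50 == 0)
		state <- multimin.restart(state)
}
print(state$x)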

All of the derivative-based methods consist of iterations that pick a descent direction and conduct a line search for a better point along the ray in that direction from the current point. The Fletcher-Reeves and Polak-Ribiere conjugate-gradient algorithms maintain a vector that summarizes the curvature at the current point. They are useful for high-dimensional problems (e.g. more than 100 dimensions) because they do not use matrices, which become expensive to keep track of. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is better for low-dimensional problems, since it also maintains an approximation of the Hessian of the function, which gives better curvature information. The steepest-descent algorithm is a naive algorithm that does not use any curvature information. The Nelder-Mead algorithm does not use derivatives at all.
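
Since Nelder-Mead is the only method that needs no derivative, here is a minimal derivative-free sketch. It assumes the Rosenbrock f from the Examples section below and that the wrapper's default step.size is acceptable; pass step.size explicitly if not.

state <- multimin(c(-1.2, 1), f, method="nm")
print(state$x)
# multimin.fminimizer.size() reports GSL's size measure for the
# Nelder-Mead state (presumably the simplex size); smaller values
# mean the search has contracted further.
print(multimin.fminimizer.size(state))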

References

http://www.gnu.org/software/gsl

See Also

optim and nlm are the standard optimization functions in R.

deriv and D are the standard symbolic differentiation functions in R. Ryacas provides more extensive differentiation support using Yet Another Computer Algebra System.

numericDeriv is the standard numerical differentiation function in R. GSL can also do numerical differentiation, but no one has written an R interface yet.

multimin requires the objective function to take a single (vector) argument. unlist and relist are useful for converting between such a flat vector and more convenient structured forms; see the sketch below.
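
A minimal sketch of that pattern; the skeleton and toy objective here are hypothetical illustrations.

# Structured parameters: a scalar a and a length-2 vector b.
skeleton <- list(a=0, b=c(0, 0))
g <- function(p) {
	p <- relist(p, skeleton)             # rebuild the structured list
	(p$a - 1)^2 + sum((p$b - c(2, 3))^2) # a smooth toy objective
}
state <- multimin(unlist(skeleton), g, method="nm") # derivative-free
relist(state$x, skeleton)                # solution, back in structured form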

Examples

library(gsl)

# The Rosenbrock function, its gradient, and a starting point:
x0 <- c(-1.2, 1)
f <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
df <- function(x) c(-2*(1 - x[1]) + 100 * 2 * (x[2] - x[1]^2) * (-2*x[1]),
                    100 * 2 * (x[2] - x[1]^2))

# The simple way to call multimin.
state <- multimin(x0, f, df)
print(state$x)

# The fine-control way to call multimin.
state <- multimin.init(x0, f, df, method="conjugate-fr")
for (i in 1:200)
	state <- multimin.iterate(state)
print(state$x)
