nloptr (version 2.2.0)

mlsl: Multi-Level Single-Linkage

Description

The “Multi-Level Single-Linkage” (MLSL) algorithm for global optimization searches by starting a sequence of local optimizations from random starting points. A modification of MLSL that uses a low-discrepancy sequence (LDS) instead of pseudorandom numbers is also included.

Usage

mlsl(
  x0,
  fn,
  gr = NULL,
  lower,
  upper,
  local.method = "LBFGS",
  low.discrepancy = TRUE,
  nl.info = FALSE,
  control = list(),
  ...
)

Value

List with components:

par

the optimal solution found so far.

value

the function value corresponding to par.

iter

number of (outer) iterations, see maxeval.

convergence

integer code indicating successful completion (> 0) or a possible error number (< 0).

message

character string produced by NLopt and giving additional information.

Arguments

x0

initial point for searching the optimum.

fn

objective function that is to be minimized.

gr

gradient of function fn; will be calculated numerically if not specified.

lower, upper

lower and upper bound constraints.

local.method

only "LBFGS" is available for the moment.

low.discrepancy

logical; should the low-discrepancy variant be used instead of pseudorandom numbers.

nl.info

logical; should the original NLopt info be shown.

control

list of options, see nl.opts for help.

...

additional arguments passed to fn.
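
The full set of options and their defaults can be inspected with nl.opts(); the short sketch below shows one way to build a control list (the names xtol_rel and maxeval also appear in the example at the bottom of this page).

library(nloptr)

## nl.opts() returns the list of stopping criteria with their
## default values; entries given in 'control' override them.
str(nl.opts())

## e.g. tighten the relative x-tolerance and cap function evaluations,
## then pass this list as control = ctl in the mlsl() call:
ctl <- list(xtol_rel = 1e-8, maxeval = 1000)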

Author

Hans W. Borchers

Details

MLSL is a ‘multistart’ algorithm: it works by running a sequence of local optimizations, using some other local optimization algorithm, from random or low-discrepancy starting points. MLSL is distinguished, however, by a ‘clustering’ heuristic that helps it avoid repeated searches of the same local optima; the algorithm also carries theoretical guarantees of finding all local optima in a finite number of local minimizations.

The local-search portion of MLSL can in principle use any of the other algorithms in NLopt, whether gradient-based or derivative-free. For this wrapper, however, only the gradient-based LBFGS algorithm is available as the local method.
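
For instance, the low-discrepancy and pseudorandom variants can be compared on a multimodal test function. The sketch below uses the Rastrigin function purely for illustration; it is an assumed example, not taken from the package.

library(nloptr)

## Rastrigin function: many regularly spaced local minima,
## global minimum 0 at the origin
rastrigin <- function(x) 10 * length(x) + sum(x^2 - 10 * cos(2 * pi * x))

## default: starting points from a low-discrepancy (Sobol) sequence
sol.lds <- mlsl(x0 = c(2, 2), rastrigin,
                lower = c(-5, -5), upper = c(5, 5),
                low.discrepancy = TRUE)

## the same search with pseudorandom starting points
sol.rnd <- mlsl(x0 = c(2, 2), rastrigin,
                lower = c(-5, -5), upper = c(5, 5),
                low.discrepancy = FALSE)

sol.lds$value  # both runs should get close to the global minimum 0
sol.rnd$value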

References

A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic global optimization methods,” Mathematical Programming, vol. 39, pp. 27-78 (1987).

Sergei Kucherenko and Yury Sytsko, “Application of deterministic low-discrepancy sequences in global optimization,” Computational Optimization and Applications, vol. 30, pp. 297-318 (2005).

See Also

direct

Examples

library(nloptr)

## Minimize the Hartmann 6-Dimensional function
## See https://www.sfu.ca/~ssurjano/hart6.html

a <- c(1.0, 1.2, 3.0, 3.2)

## A and B are filled column-wise (nrow = 4): each line below is one
## column, i.e. one of the six dimensions of the Hartmann function.
A <- matrix(c(10,   0.05, 3,   17,
              3,    10,   3.5, 8,
              17,   17,   1.7, 0.05,
              3.5,  0.1,  10,  10,
              1.7,  8,    17,  0.1,
              8,    14,   8,   14), nrow = 4)

B <- matrix(c(.1312, .2329, .2348, .4047,
              .1696, .4135, .1451, .8828,
              .5569, .8307, .3522, .8732,
              .0124, .3736, .2883, .5743,
              .8283, .1004, .3047, .1091,
              .5886, .9991, .6650, .0381), nrow = 4)

hartmann6 <- function(x, a, A, B) {
  fun <- 0
  for (i in 1:4) {
    fun <- fun - a[i] * exp(-sum(A[i, ] * (x - B[i, ]) ^ 2))
  }

  fun
}

## The function has a global minimum of -3.32237 at
## (0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573)

S <- mlsl(x0 = rep(0, 6), hartmann6, lower = rep(0, 6), upper = rep(1, 6),
          nl.info = TRUE, control = list(xtol_rel = 1e-8, maxeval = 1000),
          a = a, A = A, B = B)
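
The components listed under Value can then be inspected, for example:

S$par          # best parameters found so far
S$value        # objective value at S$par; should be near -3.32237
S$iter         # number of outer iterations
S$convergence  # > 0 indicates successful completion
S$message      # additional information from NLopt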
