
NMOF (version 0.22-0)

GAopt: Optimisation with a Genetic Algorithm

Description

A simple Genetic Algorithm for minimising a function.

Usage

GAopt(OF, algo = list(), ...)

Arguments

OF
The objective function, to be minimised. See Details.
algo
A list with the settings for the algorithm. See Details and Examples.
...
Other pieces of data required to evaluate the objective function. See Details and Examples.

Value

  A list:

  • xbest: the solution (the best member of the population)
  • OFvalue: the objective function value of the best solution
  • popF: a vector with the objective function values in the final population
  • Fmat: if algo$storeF is TRUE, a matrix of size algo$nG times algo$nP containing the objective function values of all solutions over the generations; else NA
  • xlist: if algo$storeSolutions is TRUE, a list that contains a list P of matrices; else NA
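
For orientation, here is a minimal sketch of how the returned list might be inspected. It reuses the string-matching setup from the Examples section below; the particular settings are only illustrative.

library("NMOF")
## a small setup, as in the Examples section below
y <- runif(20L) > 0.5
OF <- function(x, y) sum(x != y)
algo <- list(nB = length(y), nP = 20L, nG = 50L, prob = 0.002,
             printBar = FALSE, storeF = TRUE)
sol <- GAopt(OF, algo = algo, y = y)

sol$xbest           ## best solution: a logical vector of length algo$nB
sol$OFvalue         ## its objective function value
summary(sol$popF)   ## objective function values in the final population
dim(sol$Fmat)       ## algo$nG times algo$nP, since storeF is TRUE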

Details

The function implements a simple Genetic Algorithm (GA). A GA evolves a collection of solutions (the so-called population), all of which are coded as vectors containing only zeros and ones. (In GAopt, solutions are of mode logical.) The algorithm starts with a randomly-chosen or user-supplied population and aims to iteratively improve it by mixing solutions and by switching single bits in solutions, both at random. In each iteration, such randomly-changed solutions are compared with the original population, and better solutions replace inferior ones. In GAopt, the population size is kept constant.

GA terminology: iterations are called generations; new solutions are called offspring or children (and the existing solutions, from which the children are created, are parents); the objective function is called a fitness function; mixing solutions is a crossover; and randomly changing solutions is called mutation. The choice of which solutions remain in the population and which ones are discarded is called selection. In GAopt, selection is pairwise: a given child is compared with a given parent, and the better of the two is kept. In this way, the best solution is automatically retained in the population.

To allow for constraints, the evaluation works as follows: after new solutions are created, they are (i) repaired, (ii) evaluated through the objective function, and (iii) penalised. Step (ii) is done by a call to OF; steps (i) and (iii) by calls to algo$repair and algo$pen. Steps (i) and (iii) are optional, so the respective functions default to NULL. A penalty can also be written directly into the OF, since it amounts to a positive number added to the clean objective function value; but a separate function is often clearer. A separate penalty function is advantageous if either only the objective function or only the penalty function can be vectorised.

Conceptually, a GA consists of two loops: one loop across the generations and, in any given generation, one loop across the solutions. This is the default, controlled by the variables algo$loopOF, algo$loopRepair and algo$loopPen, which all default to TRUE. But it does not matter in what order the solutions are evaluated (or repaired or penalised), so the second loop can be vectorised; the respective algo$loopFun must then be set to FALSE. (See also the examples for DEopt and PSopt, and the sketch at the end of this section.) The evaluation of the objective function in a given generation can even be distributed: for this, the argument algo$methodOF needs to be set (see Schumann, 2011, for examples).

All objects that are passed through ... will be passed to the objective function, to the repair function and to the penalty function.

The list algo collects the settings of the algorithm. Settings used on this page include nB (the number of bits per solution), nP (the population size), nG (the number of generations), prob (the mutation probability), printBar, the functions repair and pen, the switches loopOF, loopRepair and loopPen, storeF, storeSolutions and methodOF; see the NMOF manual for the complete list of settings and their default values.
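
As a rough sketch of vectorised evaluation and of a separate penalty function (reusing the string-matching problem from the Examples section below), the snippet that follows assumes that with algo$loopOF set to FALSE the whole population is passed to OF as a logical matrix with one solution per column, as in the DEopt and PSopt examples, and that a penalty function receives a single solution together with the data passed through ... and returns a nonnegative number that is added to the objective function value.

library("NMOF")
y <- runif(20L) > 0.5                  ## the string to be matched

## vectorised objective function: 'x' is assumed to be a logical
## matrix with one solution per column; returns one value per column
OF_vec <- function(x, y)
    colSums(x != y)

## illustrative penalty: discourage solutions with more than 15 TRUE bits
## (assumed signature: a single solution plus the data passed through '...')
pen_fun <- function(x, y)
    0.5 * max(0, sum(x) - 15L)

algo <- list(nB = length(y), nP = 50L, nG = 100L, prob = 0.002,
             loopOF = FALSE,           ## OF handles the whole population
             pen = pen_fun,            ## penalise after evaluation
             printBar = FALSE)

sol <- GAopt(OF_vec, algo = algo, y = y)
sol$OFvalue                            ## objective value of the best solution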

References

Gilli, M., Maringer, D. and Schumann, E. (2011) Numerical Methods and Optimization in Finance. Elsevier. http://www.elsevierdirect.com/product.jsp?isbn=9780123756626

Schumann, E. (2011) Examples and Extensions for the NMOF Package. http://enricoschumann.net/files/NMOFex.pdf

See Also

DEopt, PSopt

Examples

## a *very* simple problem (why?):
## match a binary (logical) string y

size <- 20L  ### the length of the string
OF <- function(x, y) sum(x != y)
y <- runif(size) > 0.5
x <- runif(size) > 0.5
OF(y, y)     ### the optimum value is zero
OF(x, y)
algo <- list(nB = size, nP = 20L, nG = 100L, prob = 0.002,
             printBar = TRUE)
sol <- GAopt(OF, algo = algo, y = y)

## show differences (if any: marked by a '^')
cat(as.integer(y), "\n", as.integer(sol$xbest), "\n",
    ifelse(y == sol$xbest, " ", "^"), "\n", sep = "")

algo$nP <- 3L  ### that shouldn't work so well
sol2 <- GAopt(OF, algo = algo, y = y)

## show differences (if any: marked by a '^')
cat(as.integer(y), "\n", as.integer(sol2$xbest), "\n",
    ifelse(y == sol2$xbest, " ", "^"), "\n", sep = "")
