TMB (version 1.9.15)

newton: Generalized Newton optimizer.

Description

Generalized Newton optimizer used for the inner optimization problem.

Usage

newton(
  par,
  fn,
  gr,
  he,
  trace = 1,
  maxit = 100,
  tol = 1e-08,
  alpha = 1,
  smartsearch = TRUE,
  mgcmax = 1e+60,
  super = TRUE,
  silent = TRUE,
  ustep = 1,
  power = 0.5,
  u0 = 1e-04,
  grad.tol = tol,
  step.tol = tol,
  tol10 = 0.001,
  env = environment(),
  ...
)

Value

List with the solution, similar to the output of optim.
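
For illustration, a direct call might look like the sketch below. The toy quadratic objective, its gradient, and the sparse Hessian are invented for this example, and the element names of the result assume optim-style naming; treat this as a sketch rather than a definitive usage pattern.

library(TMB)
library(Matrix)

## Hypothetical convex quadratic: f(u) = 0.5 * u' H u - b' u
H <- Matrix(c(4, 1, 0,
              1, 3, 1,
              0, 1, 2), 3, 3, sparse = TRUE)
b <- c(1, 2, 3)

fn <- function(u) as.numeric(0.5 * crossprod(u, H %*% u) - sum(b * u))
gr <- function(u) as.numeric(H %*% u - b)
he <- function(u) H   ## sparse Hessian (constant for a quadratic)

opt <- newton(par = rep(0, 3), fn = fn, gr = gr, he = he, trace = 0)
opt$par   ## should agree with solve(H, b)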

Arguments

par

Initial parameter.

fn

Objective function.

gr

Gradient function.

he

Sparse Hessian function.

trace

Print tracing information?

maxit

Maximum number of iterations.

tol

Convergence tolerance.

alpha

Newton stepsize in the fixed stepsize case.

smartsearch

Turn on adaptive stepsize algorithm for non-convex problems?

mgcmax

Refuse to optimize if the maximum gradient component is too steep.

super

Supernodal Cholesky?

silent

Be silent?

ustep

Adaptive stepsize initial guess between 0 and 1.

power

Parameter controlling adaptive stepsize.

u0

Parameter controlling adaptive stepsize.

grad.tol

Gradient convergence tolerance.

step.tol

Stepsize convergence tolerance.

tol10

Try to exit if the last 10 iterations have not improved by more than this amount.

env

Environment for cached Cholesky factor.

...

Currently unused.

Details

If smartsearch=FALSE this function performs an ordinary Newton optimization on the function fn using an exact sparse Hessian function. A fixed stepsize may be controlled by alpha so that the iterations are given by: $$u_{n+1} = u_n - \alpha f''(u_n)^{-1}f'(u_n)$$
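
The iteration above can be written out as a short conceptual sketch (this illustrates the formula only, not the TMB implementation; the helper plain_newton is invented here):

## Plain Newton iteration with fixed stepsize alpha (conceptual sketch).
plain_newton <- function(u, gr, he, alpha = 1, maxit = 100, tol = 1e-8) {
  for (i in seq_len(maxit)) {
    g <- gr(u)
    if (max(abs(g)) < tol) break                   ## stop when the gradient is (numerically) zero
    u <- u - alpha * as.numeric(solve(he(u), g))   ## u_{n+1} = u_n - alpha * f''(u_n)^{-1} f'(u_n)
  }
  u
}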

If smartsearch=TRUE the Hessian is allowed to become negative definite, which prevents ordinary Newton iterations. In this situation the Newton iterations are performed on a modified objective function obtained by adding a quadratic penalty around the expansion point \(u_0\): $$f_{t}(u) = f(u) + \frac{t}{2} \|u-u_0\|^2$$ The Hessian of this function ( \(f''(u)+t I\) ) is positive definite for \(t\) sufficiently large. The value of \(t\) is updated at every iteration: if the Hessian is positive definite, \(t\) is decreased; otherwise it is increased. Detailed control of the update process can be obtained with the arguments ustep, power and u0.
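
The sketch below illustrates the idea of a single damped step. The rule for updating \(t\) is deliberately simplified (multiply or divide by 10); the actual adaptive scheme in newton is governed by ustep, power and u0, and the helper damped_newton_step is invented for this illustration.

## Conceptual sketch of one Newton step on the penalized objective
## f_t(u) = f(u) + t/2 * ||u - u0||^2, whose Hessian is f''(u) + t*I.
damped_newton_step <- function(u, gr, he, t) {
  Ht <- as.matrix(he(u)) + t * diag(length(u))  ## Hessian of the penalized objective
  R  <- tryCatch(chol(Ht), error = function(e) NULL)
  if (is.null(R))                               ## not positive definite: increase t and retry
    return(list(u = u, t = t * 10))
  d <- backsolve(R, forwardsolve(t(R), gr(u))) ## solve (f''(u) + t*I) d = f'(u)
  list(u = u - d, t = t / 10)                   ## accept the step and relax the damping
}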

See Also

newtonOption