Generalized Newton optimizer used for the inner optimization problem.
newton(
  par,
  fn,
  gr,
  he,
  trace = 1,
  maxit = 100,
  tol = 1e-08,
  alpha = 1,
  smartsearch = TRUE,
  mgcmax = 1e+60,
  super = TRUE,
  silent = TRUE,
  ustep = 1,
  power = 0.5,
  u0 = 1e-04,
  grad.tol = tol,
  step.tol = tol,
  tol10 = 0.001,
  env = environment(),
  ...
)
List with solution similar to optim output.
par: Initial parameter.
fn: Objective function.
gr: Gradient function.
he: Sparse Hessian function.
trace: Print tracing information?
maxit: Maximum number of iterations.
tol: Convergence tolerance.
alpha: Newton stepsize in the fixed-stepsize case.
smartsearch: Turn on the adaptive stepsize algorithm for non-convex problems?
mgcmax: Refuse to optimize if the maximum gradient component exceeds this value.
super: Use supernodal Cholesky factorization?
silent: Be silent?
ustep: Initial guess for the adaptive stepsize, between 0 and 1.
power: Parameter controlling the adaptive stepsize.
u0: Parameter controlling the adaptive stepsize.
grad.tol: Gradient convergence tolerance.
step.tol: Stepsize convergence tolerance.
tol10: Try to exit if the last 10 iterations have not improved by more than this.
env: Environment for the cached Cholesky factor.
...: Currently unused.
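A minimal sketch of calling newton directly on a small quadratic objective with a sparse Hessian, illustrating the arguments above. The objective, gradient and Hessian functions here are invented for illustration; in a TMB model newton is normally invoked internally for the inner optimization.

## Sketch only: minimize f(u) = 0.5*u'Qu - b'u with a constant sparse Hessian.
library(Matrix)
library(TMB)

Q  <- sparseMatrix(i = c(1, 2, 2), j = c(1, 2, 1),
                   x = c(2, 3, 0.5), symmetric = TRUE)   ## sparse SPD matrix
b  <- c(1, -1)
fn <- function(u) 0.5 * sum(u * as.vector(Q %*% u)) - sum(b * u)
gr <- function(u) as.vector(Q %*% u) - b
he <- function(u) Q                                      ## constant sparse Hessian

opt <- newton(par = c(0, 0), fn = fn, gr = gr, he = he, trace = 0)
opt$par    ## approximately solve(Q, b)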
If smartsearch=FALSE this function performs an ordinary Newton optimization on the function fn using an exact sparse Hessian function. A fixed stepsize may be controlled by alpha so that the iterations are given by:
$$u_{n+1} = u_n - \alpha f''(u_n)^{-1}f'(u_n)$$
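A hand-rolled version of this fixed-stepsize iteration (a sketch, not the package implementation) could look like:

## Sketch of the fixed-stepsize Newton iteration above; 'he' is assumed to
## return a (sparse) Hessian that Matrix::solve can handle.
library(Matrix)
newton_fixed <- function(u, fn, gr, he, alpha = 1, maxit = 100, tol = 1e-8) {
  for (i in seq_len(maxit)) {
    g <- gr(u)
    if (max(abs(g)) < tol) break              ## gradient convergence
    step <- as.vector(solve(he(u), g))        ## f''(u)^{-1} f'(u)
    u <- u - alpha * step                     ## u_{n+1} = u_n - alpha * step
  }
  list(par = u, value = fn(u), gradient = gr(u))
}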
If smartsearch=TRUE the Hessian is allowed to become negative definite, preventing ordinary Newton iterations. In this situation the Newton iterations are performed on a modified objective function defined by adding a quadratic penalty around the expansion point \(u_0\):
$$f_{t}(u) = f(u) + \frac{t}{2} \|u-u_0\|^2$$
This function's Hessian ( \(f''(u)+t I\) ) is positive definite for \(t\) sufficiently large. The value \(t\) is updated at every iteration: if the Hessian is positive definite, \(t\) is decreased; otherwise it is increased. Detailed control of the update process can be obtained with the arguments ustep, power and u0.
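A conceptual sketch of one such damped update (not TMB's actual implementation, which caches a sparse Cholesky factorization and refines \(t\) via ustep, power and u0) might be:

## Conceptual sketch only: add t*I to the Hessian, take a Newton step if the
## result is positive definite, and adjust t.  The dense eigenvalue check
## stands in for the sparse Cholesky test used internally.
library(Matrix)
damped_step <- function(u, gr, he, t) {
  Ht <- he(u) + t * Diagonal(length(u))       ## Hessian of f_t: f''(u) + t*I
  pd <- min(eigen(as.matrix(Ht), symmetric = TRUE,
                  only.values = TRUE)$values) > 0
  if (!pd) return(list(u = u, t = t * 10))    ## not positive definite: increase t
  step <- as.vector(solve(Ht, gr(u)))         ## (f''(u) + t*I)^{-1} f'(u)
  list(u = u - step, t = t / 2)               ## positive definite: step, decrease t
}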
See also: newtonOption