
nlmrt (version 2016.3.2)

nlsmnq: Nash variant of Marquardt nonlinear least squares solution via qr linear solver.

Description

Given a nonlinear model expressed as an expression of the form lhs ~ formula_for_rhs and a start vector where parameters used in the model formula are named, attempts to find the minimum of the residual sum of squares using the Nash variant (Nash, 1979) of the Marquardt algorithm, where the linear sub-problem is solved by a qr method.

Usage

nlsmnq(formula, start, trace=FALSE, data, control, ...)

Arguments

formula
A modeling formula of the form (as in nls) lhsvar ~ rhsexpression, for example y ~ b1/(1+b2*exp(-b3*T)). The formula may also be given as a string.
start
A named parameter vector. For our example, we could use start=c(b1=1, b2=2.345, b3=0.123).
trace
Logical: TRUE if intermediate progress should be reported. Default is FALSE.
data
A data frame containing the data for the variables in the formula. The data may, however, be supplied directly in the parent frame.
control
A list of controls for the algorithm.
...
Any data needed for the computation of the residual vector from the expression rhsexpression - lhsvar. Note that this is the negative of the usual residual, but the sum of squares is the same (see the short illustration below).
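
A minimal illustration of this sign convention; the numbers here are arbitrary and purely illustrative:

lhs <- c(5.3, 7.2, 9.6)   # observed values
rhs <- c(5.0, 7.5, 9.4)   # model (fitted) values
# The internal residual is rhs - lhs, the negative of the usual convention,
# but squaring removes the sign, so the sum of squares is identical.
sum((rhs - lhs)^2) == sum((lhs - rhs)^2)   # TRUE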

Value

A list with the following components:
  • coeffs: A named vector giving the parameter values at the supposed solution.
  • ssquares: The sum of squared residuals at this set of parameters.
  • resid: The residual vector at the returned parameters.
  • jacobian: The Jacobian matrix (partial derivatives of residuals w.r.t. the parameters) at the returned parameters.
  • feval: The number of residual evaluations (sum-of-squares computations) used.
  • jeval: The number of Jacobian evaluations used.
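
As a brief sketch of how these components might be inspected, using the logistic model and data from the Examples section below (this assumes the package is attached and that nlsmnq converges from this start):

library(nlmrt)
y <- c(5.308, 7.24, 9.638, 12.866, 17.069, 23.192, 31.443,
       38.558, 50.156, 62.948, 75.995, 91.972)
t <- 1:length(y)
fit <- nlsmnq(y ~ b1/(1+b2*exp(-b3*t)), start=c(b1=1, b2=1, b3=1))
fit$coeffs               # named parameter vector at the supposed solution
fit$ssquares             # sum of squared residuals
length(fit$resid)        # one residual per observation
dim(fit$jacobian)        # observations by parameters
c(fit$feval, fit$jeval)  # residual and Jacobian evaluation counts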

Details

nlsmnq attempts to solve the nonlinear sum-of-squares problem by using a variant of Marquardt's approach to stabilizing the Gauss-Newton method with the Levenberg-Marquardt adjustment. This is explained in Nash (1979, 1990) in the sections that discuss Algorithm 23.

In this code, the (adjusted) Marquardt equations are solved with qr.solve(). Rather than forming the J'J + lambda*D matrix explicitly, the Jacobian J is augmented with extra rows and the right-hand-side vector with zero elements.
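
For readers without access to the book, the following standalone sketch shows the augmented-matrix idea for a single Marquardt step. The function name marquardt_step and the scaling choice D = diag(J'J) are illustrative assumptions, not the exact internals of nlsmnq:

# One Marquardt step via augmented least squares, solved by qr.solve().
# Instead of forming J'J + lambda*D and solving the normal equations
# (J'J + lambda*D) delta = -J' res, stack the square-root damping block
# under J and zeros under the residual vector, then solve in the
# least-squares sense.
marquardt_step <- function(J, res, lambda) {
  dscale <- sqrt(lambda * diag(crossprod(J)))   # assumed scaling D = diag(J'J)
  Daug <- diag(dscale, nrow = length(dscale))   # diagonal damping block
  Jaug <- rbind(J, Daug)                        # augmented Jacobian
  raug <- c(res, rep(0, length(dscale)))        # augmented residual
  qr.solve(Jaug, -raug)                         # proposed increment delta
}

In the full algorithm, lambda is increased and the step recomputed when the trial point does not reduce the sum of squares, and decreased after a successful step; the sketch above covers only the linear sub-problem.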

References

Nash, J. C. (1979, 1990). Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation. Adam Hilger / Institute of Physics Publications.


See Also

Function nls(); also the optim() function and the optimx package for general-purpose optimization.

Examples

library(nlmrt)
ydat<-c(5.308, 7.24, 9.638, 12.866, 17.069, 23.192, 31.443, 
          38.558, 50.156, 62.948, 75.995, 91.972) # for testing
y<-ydat  # for testing
tdat<-1:length(ydat) # for testing
# WARNING -- using T as a variable name can cause confusion with TRUE
t<-tdat
start1<-c(b1=1, b2=1, b3=1)
eunsc<- y ~ b1/(1+b2*exp(-b3*t))

an1<-try(nls(eunsc, start=start1, trace=TRUE))
print(an1)

cat("GLOBAL DATA
")
an1q<-try(nlsmnq(eunsc, start=start1, trace=TRUE))
print(an1q)

rm(y, t)

cat("LOCAL DATA
")
ydata1<-data.frame(y=ydat, t=tdat)
ydata2<-data.frame(y=1.5*ydat, t=tdat)
an1ql1<-try(nlsmnq(eunsc, start=start1, trace=TRUE, data=ydata1))
print(an1ql1)

an1ql2<-try(nlsmnq(eunsc, start=start1, trace=TRUE, data=ydata2))
print(an1ql2)

cat("GLOBAL DATA AGAIN -- should fail due to no data
")
an1q<-try(nlsmnq(eunsc, start=start1, trace=TRUE))
print(an1q)
