The maxLik package is a set of convenience tools and wrappers focusing on Maximum Likelihood (ML) analysis, but it also contains tools for other optimization tasks.
The package includes a) wrappers for several existing optimizers (implemented by optim); b) original optimizers, including Newton-Raphson and Stochastic Gradient Ascent; and c) several convenience tools for using these optimizers from the ML perspective. Examples are BHHH optimization (maxBHHH; see the sketch below) and utilities that extract standard errors from the estimates. Other highlights include a unified interface for all included optimizers, tools to test user-provided analytic derivatives, and constrained optimization.
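As a hedged sketch of BHHH estimation (the simulated normal sample and the start values are invented for illustration): BHHH requires the log-likelihood to be returned by observation, i.e. as a vector with one value per data point, so that the Hessian can be approximated by outer products of observation-level gradients.

    library(maxLik)

    set.seed(3)
    x <- rnorm(100, mean = 1, sd = 2)   # simulated data

    ## log-likelihood by observation: a vector, one value per data point
    loglikVec <- function(param) {
       mu <- param[1]
       sigma <- param[2]
       if (sigma <= 0) return(NA)       # outside the parameter space
       dnorm(x, mean = mu, sd = sigma, log = TRUE)
    }

    fit <- maxBHHH(loglikVec, start = c(mu = 0, sigma = 1))
    summary(fit)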
Good starting points for learning how to use maxLik are the included vignettes “Introduction: what is maximum likelihood”, “Maximum likelihood estimation with maxLik”, and “Stochastic Gradient Ascent in maxLik”. Another good source is Henningsen & Toomet (2011), an introductory paper about the package.
Use vignette(package="maxLik") to see the available vignettes, and vignette("using-maxlik") to read the usage vignette.
From the user's perspective, the central function in the package is maxLik. In its simplest form it takes two arguments: the log-likelihood function and a vector of initial parameter values (see the example below). It returns an object of class ‘maxLik’ with convenient methods such as summary, coef, and stdEr. It also supports many other arguments: for instance, one can supply an analytic gradient and Hessian, select the desired optimizer, and control the optimization in various ways.
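A minimal sketch of such a call (the simulated exponential sample and the start value are invented for illustration):

    library(maxLik)

    set.seed(1)
    x <- rexp(100, rate = 2)            # simulated data; the true rate is 2

    ## log-likelihood of the exponential model as a function of
    ## the parameter vector
    loglik <- function(param) {
       lambda <- param[1]
       if (lambda <= 0) return(NA)      # outside the parameter space
       sum(dexp(x, rate = lambda, log = TRUE))
    }

    fit <- maxLik(loglik, start = c(lambda = 1))
    summary(fit)   # estimate, standard error, z-value
    coef(fit)      # point estimate
    stdEr(fit)     # standard error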
A useful utility function in the package is compareDerivatives, which allows one to compare analytic and numeric derivatives for debugging purposes.
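For instance, one might check an analytic gradient as follows (the quadratic function and its gradient are invented for illustration):

    library(maxLik)

    f    <- function(theta) -sum(theta^2)   # a simple concave function
    grad <- function(theta) -2 * theta      # its analytic gradient

    ## prints the analytic and numeric gradients side by side,
    ## together with their difference, at the point t0
    compareDerivatives(f, grad, t0 = c(1, 2))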
Another useful function is condiNumber, for analyzing multicollinearity problems in estimated models.
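A small sketch of its use on a matrix with a nearly collinear column (the data are invented for illustration):

    library(maxLik)

    set.seed(2)
    X <- cbind(x1 = rnorm(50), x2 = rnorm(50))
    X <- cbind(X, x3 = X[, "x1"] + 1e-6 * rnorm(50))   # near-collinear column

    ## prints the condition number while adding the columns one by one;
    ## a sharp jump when x3 enters flags the (near-)collinearity
    condiNumber(X)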
In the interest of providing a unified user interface, all the optimizers in this package are implemented as maximizers. This includes the optim-based methods such as maxBFGS, as well as maxSGA, the maximizer version of the popular Stochastic Gradient Descent.
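A brief sketch of this unified interface (the concave test function is invented for illustration); the same accessor methods apply to the result as to a maxLik fit:

    library(maxLik)

    ## a concave function with its maximum at theta = c(1, 1)
    f <- function(theta) -sum((theta - 1)^2)

    res <- maxBFGS(f, start = c(0, 0))
    summary(res)
    coef(res)   # close to c(1, 1)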