OpenMx (version 2.21.13)

mxOption: Set or Clear an Optimizer Option

Description

The function sets, shows, or clears an option that is specific to the optimizer in the back-end.

Usage

mxOption(model=NULL, key=NULL, value, reset = FALSE)

Value

If a model is provided, it is returned with the optimizer option either set or cleared. If value is empty, the current value is returned.

Arguments

model

An MxModel object or NULL

key

The name of the option.

value

The value of the option.

reset

If TRUE then reset all options to their defaults.

Details

mxOption is used to set, clear, or query an option (given in the ‘key’ argument) in the back-end optimizer. Valid option keys are listed below.

Use value = NULL to remove an existing option. Leaving value blank will return the current value of the option specified by ‘key’.

To reset all options to their default values, use ‘reset = TRUE’. When reset = TRUE, ‘key’ and ‘value’ are ignored.

If the ‘model’ argument is NULL, the option is set globally (i.e., it applies to all models by default).

To see the defaults, use getOption('mxOptions').
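
For example, the query and reset forms might be used like this (a minimal sketch; the option names come from the tables below):

mxOption(NULL, "Gradient algorithm")         # query a global option (value left blank)
getOption('mxOptions')$"Function precision"  # inspect one stored default
mxOption(reset = TRUE)                       # reset all global options; key and value are ignored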

Before the model is submitted to the back-end, all keys and values are converted into strings using the as.character function.

Optimizer specific options

The “Default optimizer” option can only be set globally (i.e., with model=NULL), and not locally (i.e., specifically to a given MxModel). Although the checkpointing options may be set globally, OpenMx's behavior is only affected by locally set checkpointing options (that is, global checkpointing options are ignored at runtime).
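
For example, switching the global default optimizer (a sketch; note that this only works with model=NULL):

mxOption(NULL, "Default optimizer", "SLSQP")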

Gradient-based optimizers require the gradient of the fit function. When analytic derivatives are not available, the gradient is estimated numerically, and a variety of options control that numerical estimation. One option for CSOLNP and SLSQP is the gradient algorithm: CSOLNP uses the forward method by default, while SLSQP uses the central method. Per free parameter per gradient, the forward method requires “Gradient iterations” function evaluations, while the central method requires twice that many. Users can change the default method for either optimizer by setting the “Gradient algorithm” option. NPSOL usually uses the forward method, but adaptively switches to the central method under certain circumstances.

Options “Gradient step size”, “Gradient iterations”, and “Function precision” have on-load global defaults of "Auto". If value "Auto" is in effect for any of these three options at runtime, then OpenMx selects a reasonable numerical value in its place. These automated numerical values are intended to (1) adjust for the limited precision of the algorithm for computing multivariate-normal probability integrals, and (2) calculate accurate numeric derivatives at the optimizer's solution. If the user replaces "Auto" with a valid numerical value, then OpenMx uses that value as-is.

By default, CSOLNP uses a step size of 10^-7 whereas SLSQP uses 10^-5. The purpose of this difference is to obtain roughly the same accuracy given other differences in numerical procedure. If you set a non-default “Gradient step size”, it will be used as-is. NPSOL ignores “Gradient step size”, and instead uses a function of mxOption “Function precision” to determine its gradient step size.

Option “Analytic Gradients” affects all three optimizers, but some options only affect certain optimizers. Option “Gradient algorithm” is used by CSOLNP and SLSQP, and ignored by NPSOL. Option “Gradient iterations” only affects SLSQP. Option “Gradient step size” is used slightly differently by SLSQP and CSOLNP, and is ignored by NPSOL (see mxComputeGradientDescent() for details).
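
A sketch of adjusting these gradient options globally (the same calls work on a specific model by passing it as the first argument):

mxOption(NULL, "Gradient algorithm", "central")  # switch CSOLNP/SLSQP to central differences
mxOption(NULL, "Gradient iterations", 2)         # Richardson iterations (valid range 1:4; SLSQP only)
mxOption(NULL, "Gradient step size", 1e-7)       # replace the "Auto" step size with an explicit value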

If an mxModel contains mxConstraints, NPSOL is given 0.4 times the value of the option “Feasibility tolerance”. If there are no constraints, NPSOL is given a hard-coded value of 1e-5 (its own native default).

Note: with the on-load default for “Feasibility tolerance”, the value NPSOL receives when constraints are present is about a million times bigger than its native default, so values of “Feasibility tolerance” around 1e-5 may be needed to get constraint performance similar to NPSOL's default. Note also that NPSOL's criterion for returning a status code of 0 versus 1 for a given solution depends partly on “Optimality tolerance”.
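
For example, to move a constrained model's tolerance toward NPSOL's native default ('constrainedModel' is a hypothetical MxModel containing mxConstraints):

constrainedModel <- mxOption(constrainedModel, "Feasibility tolerance", 1e-5)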

For a block of n ordinal variables, the maximum number of integration points that OpenMx may use to calculate multivariate-normal probability integrals is given by mvnMaxPointsA + mvnMaxPointsB*n + mvnMaxPointsC*n*n + exp(mvnMaxPointsD + mvnMaxPointsE * n * log(mvnRelEps)). Integral approximation is stopped once either ‘mvnAbsEps’ or ‘mvnRelEps’ is satisfied. Use of ‘mvnAbsEps’ is deprecated.
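
To see the point budget this formula implies, it can be evaluated directly (a sketch; the block size n is hypothetical, and as.numeric guards against option values stored as strings):

opts <- getOption('mxOptions')
n <- 3  # hypothetical block of 3 ordinal variables
as.numeric(opts$mvnMaxPointsA) +
  as.numeric(opts$mvnMaxPointsB) * n +
  as.numeric(opts$mvnMaxPointsC) * n^2 +
  exp(as.numeric(opts$mvnMaxPointsD) +
      as.numeric(opts$mvnMaxPointsE) * n * log(as.numeric(opts$mvnRelEps)))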

The maximum number of major iterations for NPSOL (the option “Major iterations”) can be specified either as a numeric value (such as 50 or 1000) or as a user-defined function. The user-defined function should accept two arguments as input, the number of parameters and the number of constraints, and return a numeric value as output.
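
For instance, a user-defined limit that grows with problem size might look like this (a sketch; the multiplier 10 is arbitrary and 'model' is a hypothetical MxModel):

model <- mxOption(model, "Major iterations",
                  function(nParams, nConstraints) 10 * (nParams + nConstraints))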

OpenMx options

In the option lists below, i denotes an integer value, r a real value, and bracketed alternatives (e.g., [Yes | No]) give the admissible values.

Calculate Hessian [Yes | No]: calculate the Hessian explicitly after optimization.
Standard Errors [Yes | No]: return standard error estimates from the explicitly calculated Hessian.
Default optimizer [NPSOL | SLSQP | CSOLNP]: the gradient-descent optimizer to use.
Number of Threads [0|1|2|...|10|...]: number of threads used for optimization. The default value is taken from the environment variable OMP_NUM_THREADS or, if that is not set, 1.
Feasibility tolerance (r): the maximum acceptable absolute violations in linear and nonlinear constraints.
Optimality tolerance (r): the accuracy with which the final iterate approximates a solution to the optimization problem; roughly, the number of reliable significant figures that the fit function value should have at the solution.
Gradient algorithm ['forward' | 'central']: the finite difference method.
Gradient iterations (1:4): the number of Richardson extrapolation iterations.
Gradient step size (r): the amount of change made to free parameters when numerically calculating the gradient.
Analytic Gradients [Yes | No]: whether the optimizer should use analytic gradients (if available).
loglikelihoodScale (i): factor by which the log-likelihood is scaled.
Parallel diagnostics [Yes | No]: whether to issue diagnostic messages about the use of multiple threads.
Nudge zero starts [TRUE | FALSE]: whether OpenMx should "nudge" starting values of zero to 0.1 at runtime.
Status OK (character vector): status codes that are considered to indicate a successful optimization.
Max minutes (numeric): maximum backend elapsed time, in minutes.
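
A sketch of setting two of these on a model ('model' is hypothetical; the status-code labels are assumed to match those accepted by as.statusCode):

model <- mxOption(model, "Max minutes", 30)                 # give the backend at most 30 minutes
model <- mxOption(model, "Status OK", c("OK", "OK/green"))  # assumed as.statusCode labels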

NPSOL-specific options

Nolist: suppresses printing of the options.
Print level (i): the value of i controls the amount of printout produced by the major iterations.
Minor print level (i): the value of i controls the amount of printout produced by the minor iterations.
Print file (i): for i > 0, a full log is sent to the file with logical unit number i.
Summary file (i): for i > 0, a brief log is written to file i.
Function precision (r): a measure of the accuracy with which the fit function and constraint functions can be computed.
Infinite bound size (r): if r > 0, defines the "infinite" bound bigbnd.
Major iterations (i or a function): the maximum number of major iterations before termination.
Verify level [-1:3 | Yes | No]: see the NPSOL manual.
Line search tolerance (r): controls the accuracy with which a step is taken.
Derivative level [0-3]: see the NPSOL manual.
Hessian [Yes | No]: return the Hessian (Yes) or the transformed Hessian (No).
Step Limit (r): maximum change in free parameters at the first step of the line search.

Checkpointing options

Always Checkpoint [Yes | No]: whether to checkpoint all models during optimization.
Checkpoint Directory (path): the directory into which checkpoint files are written.
Checkpoint Prefix (string): the string prefix to add to all checkpoint filenames.
Checkpoint Fullpath (path): overrides the directory and prefix (useful to output to /dev/fd/2).
Checkpoint Units ['minutes' | 'iterations' | 'evaluations']: the type of units for checkpointing.
Checkpoint Count (i): the number of units between checkpoint intervals.
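
Because only locally set checkpointing options take effect, a sketch would set them on the model itself ('model' is a hypothetical MxModel):

model <- mxOption(model, "Always Checkpoint", "Yes")
model <- mxOption(model, "Checkpoint Units", "iterations")
model <- mxOption(model, "Checkpoint Count", 10)  # checkpoint every 10 iterations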

Model transformation options

Error Checking [Yes | No]: whether model consistency checks are performed in the OpenMx front-end.
No Sort Data (character vector): model names for which FIML data sorting is not performed.
RAM Inverse Optimization [Yes | No]: whether to enable the solve(I - A) optimization.
RAM Max Depth (i): the maximum depth to be used when the solve(I - A) optimization is enabled.
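
For example, FIML data sorting might be disabled for a particular model by name (a sketch; 'bigModel' is a hypothetical model name, and it is assumed this option is set globally):

mxOption(NULL, "No Sort Data", c("bigModel"))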

Multivariate normal integration parameters

maxOrdinalPerBlock (i): maximum number of ordinal variables to evaluate together.
mvnMaxPointsA (i): base number of integration points.
mvnMaxPointsB (i): number of integration points per ordinal variable.
mvnMaxPointsC (i): number of integration points per squared ordinal variable.
mvnMaxPointsD (i): see Details.
mvnMaxPointsE (i): see Details.
mvnAbsEps (i): absolute error tolerance.
mvnRelEps (i): relative error tolerance.

References

The OpenMx User's guide can be found at https://openmx.ssri.psu.edu/documentation/.

See Also

See mxModel(), as almost all uses of mxOption() are via an mxModel whose options are set or cleared. See mxComputeGradientDescent() for details on how different optimizers are affected by different options. See as.statusCode for information about the Status OK option.

Examples

# set the Number of Threads (cores to use)
mxOption(key="Number of Threads", value=imxGetNumThreads())

testModel <- mxModel(model = "testModel5") # make a model to use for example
testModel$options   # show the model options (none yet)
options()$mxOptions # list all mxOptions (global settings)

testModel <- mxOption(testModel, "Function precision", 1e-5) # set precision
testModel <- mxOption(testModel, "Function precision", NULL) # clear precision
# N.B. This is model-specific precision (defaults to global setting)

# may optimize for speed
# at cost of not getting standard errors
testModel <- mxOption(testModel, "Calculate Hessian", "No")
testModel <- mxOption(testModel, "Standard Errors"  , "No")

testModel$options # see the list of options you set
