OpenMx (version 2.7.9)

mxOption: Set or Clear an Optimizer Option

Description

The function sets, shows, or clears an option that is specific to the optimizer in the back-end.

Usage

mxOption(model, key, value, reset = FALSE)

Arguments

model
An MxModel object or NULL
key
The name of the option.
value
The value of the option.
reset
If TRUE then reset all options to their defaults.

Value

If a model is provided, it is returned with the optimizer option either set or cleared. If value is empty, the current value is returned.
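
For example, an option can be queried without changing it by omitting value; passing NULL as the model queries (or sets) the global default instead. A minimal sketch, using option keys documented below:

mxOption(NULL, "Number of Threads")                 # query the global default
mxOption(mxModel("example"), "Function precision")  # query the value in effect for a model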

Details

mxOption is used to set, clear, or query an option (given in the ‘key’ argument) in the back-end optimizer. Valid option keys are listed below.

Use value = NULL to remove an existing option. Leaving value blank will return the current value of the option specified by ‘key’. To reset all options to their default values, use ‘reset = TRUE’; when reset = TRUE, ‘key’ and ‘value’ are ignored.

If the ‘model’ argument is set to NULL, the default optimizer options (i.e., those applying to all models by default) will be set. To see the defaults, use getOption('mxOptions'). Before the model is submitted to the back-end, all keys and values are converted into strings using the as.character function.

The “Default optimizer” option can only be set globally (i.e., with model = NULL), and not locally (i.e., specifically to a given MxModel). Although the checkpointing options may be set globally, OpenMx's behavior is only affected by locally set checkpointing options (that is, global checkpointing options are ignored at runtime).

Options “Gradient step size”, “Gradient iterations”, and “Function precision” have on-load global defaults of "Auto". If the value "Auto" is in effect for any of these three options at runtime, then OpenMx selects a reasonable numerical value in its place. These automated numerical values are intended to (1) adjust for the limited precision of the algorithm for computing multivariate-normal probability integrals, and (2) calculate accurate numeric derivatives at the optimizer's solution. If the user replaces "Auto" with a valid numerical value, then OpenMx uses that value as-is.

Some options only affect certain optimizers. Currently, options “Gradient algorithm” and “Gradient iterations” are ignored by all optimizers other than SLSQP, and option “Analytic Gradients” only affects SLSQP and NPSOL. Option “Gradient step size” is used slightly differently by SLSQP and CSOLNP, and is ignored by NPSOL (see mxComputeGradientDescent() for details).

If the MxModel contains MxConstraints, OpenMx's interface to NPSOL scales “Feasibility tolerance” by multiplying it by 0.02 and dividing it by 0.05 before passing it to NPSOL; if there are no MxConstraints, OpenMx passes NPSOL a hardcoded value of 1e-5. Note that NPSOL's criterion for returning a status code of 0 versus 1 for a given solution depends partly on “Optimality tolerance”.

For a block of n ordinal variables, the maximum number of integration points that OpenMx may use to calculate multivariate-normal probability integrals is given by

mvnMaxPointsA + mvnMaxPointsB*n + mvnMaxPointsC*n*n

The maximum number of major iterations (the option “Major iterations”) for optimization for NPSOL can be specified either by using a numeric value (such as 50, 1000, etc.) or by specifying a user-defined function. The user-defined function should accept two arguments as input, the number of parameters and the number of constraints, and return a numeric value as output.
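
A minimal sketch of such a user-defined function (the function name and the particular scaling rule are illustrative, not part of OpenMx):

# Illustrative only: accepts the number of free parameters and the number of
# constraints, and returns the maximum number of major iterations as a number.
majorIterFn <- function(nParams, nConstraints) {
  max(1000, 3 * (nParams + nConstraints))
}
exampleModel <- mxOption(mxModel("exampleModel"), "Major iterations", majorIterFn)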

OpenMx options
Calculate Hessian [Yes | No] calculate the Hessian explicitly after optimization.
Standard Errors [Yes | No] return standard error estimates computed from the explicitly calculated Hessian.
Default optimizer [NPSOL | SLSQP | CSOLNP] the gradient-descent optimizer to use.
Number of Threads [0|1|2|...|10|...] number of threads used for optimization. The default value is taken from the environment variable OMP_NUM_THREADS or, if that is not set, 2.
Feasibility tolerance r the maximum acceptable absolute violation in linear and nonlinear constraints.
Optimality tolerance r the accuracy with which the final iterate approximates a solution to the optimization problem; roughly, the number of reliable significant figures that the fitfunction value should have at the solution.
Gradient algorithm ['forward' | 'central'] the finite-difference method used to compute numerical gradients.
Gradient iterations 1:4 the number of Richardson extrapolation iterations.
Gradient step size r the amount of change made to free parameters when numerically calculating the gradient.
Analytic Gradients [Yes | No] should the optimizer use analytic gradients (if available)?
loglikelihoodScale i the factor by which the loglikelihood is scaled.
Parallel diagnostics [Yes | No] whether to issue diagnostic messages about the use of multiple threads.
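
Because “Default optimizer” can only be set globally, it must be changed with model = NULL; a minimal sketch (the choice of SLSQP is only an example):

mxOption(NULL, "Default optimizer", "SLSQP")  # set the global default optimizer
mxOption(NULL, "Default optimizer")           # query the value now in effect
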
NPSOL-specific options
Nolist this option suppresses printing of the options.
Print level i the value of i controls the amount of printout produced by the major iterations.
Minor print level i the value of i controls the amount of printout produced by the minor iterations.
Print file i for i > 0, a full log is sent to the file with logical unit number i.
Summary file i for i > 0, a brief log is sent to the file with logical unit number i.
Function precision r a measure of the accuracy with which the fitfunction and constraint functions can be computed.
Infinite bound size r if r > 0, defines the "infinite" bound bigbnd.
Major iterations i or a function the maximum number of major iterations before termination.
Verify level [-1:3 | Yes | No] see the NPSOL manual.
Line search tolerance r controls the accuracy with which a step is taken.
Derivative level [0-3] see the NPSOL manual.
Hessian [Yes | No] return the Hessian (Yes) or the transformed Hessian (No).
Step Limit r maximum change in free parameters at the first step of the line search.
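
The “Function precision” option (like “Gradient step size” and “Gradient iterations” above) has an on-load global default of "Auto" (see Details); a valid numerical value supplied by the user is used as-is. A minimal sketch of overriding "Auto" locally (the particular values are illustrative):

exampleModel <- mxModel(model = "exampleModel")
exampleModel <- mxOption(exampleModel, "Function precision", 1e-10) # used as-is instead of "Auto"
exampleModel <- mxOption(exampleModel, "Gradient iterations", 2)    # ignored by optimizers other than SLSQP
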
Checkpointing options
Always Checkpoint [Yes | No] whether to checkpoint all models during optimization.
Checkpoint Directory path the directory into which checkpoint files are written.
Checkpoint Prefix string the string prefix to add to all checkpoint filenames.
Checkpoint Fullpath path overrides the directory and prefix (useful to output to /dev/fd/2).
Checkpoint Units see list the type of units for checkpointing: 'minutes', 'iterations', or 'evaluations'.
Checkpoint Count i the number of units between checkpoint intervals.
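
Because only locally set checkpointing options affect OpenMx's behavior at runtime, they are set on the model itself; a minimal sketch (the unit and count chosen here are illustrative):

exampleModel <- mxModel(model = "exampleModel")
exampleModel <- mxOption(exampleModel, "Always Checkpoint", "Yes")
exampleModel <- mxOption(exampleModel, "Checkpoint Units", "iterations")
exampleModel <- mxOption(exampleModel, "Checkpoint Count", 10)
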
Model transformation options
Error Checking [Yes | No] whether model consistency checks are performed in the OpenMx front-end.
No Sort Data character vector of model names for which FIML data sorting is not performed.
RAM Inverse Optimization [Yes | No] whether to enable the solve(I - A) optimization.
RAM Max Depth i the maximum depth to be used when the solve(I - A) optimization is enabled.
Multivariate normal integration parameters
maxOrdinalPerBlock i maximum number of ordinal variables to evaluate together.
mvnMaxPointsA i base number of integration points.
mvnMaxPointsB i number of integration points per row.
mvnMaxPointsC i number of integration points per row^2.
mvnAbsEps i absolute error tolerance.
mvnRelEps i relative error tolerance.
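
As noted in Details, these parameters bound the number of integration points used for a block of n ordinal variables. A minimal sketch of computing that bound from the global defaults, assuming the values are stored in getOption('mxOptions') under the names listed above:

opts <- getOption('mxOptions')   # requires OpenMx to be loaded
n <- 3                           # number of ordinal variables in the block
as.numeric(opts$mvnMaxPointsA) +
  as.numeric(opts$mvnMaxPointsB) * n +
  as.numeric(opts$mvnMaxPointsC) * n^2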

References

The OpenMx User's guide can be found at http://openmx.ssri.psu.edu/documentation.

See Also

See mxModel(), as almost all uses of mxOption() are via an mxModel whose options are set or cleared. See mxComputeGradientDescent() for details on how different optimizers are affected by different options.

Examples

# set the Number of Threads (cores to use)
mxOption(NULL, "Number of Threads", parallel::detectCores() - 1)

testModel <- mxModel(model = "testModel") # make a model to use for example
testModel$options   # show the model options (none yet)
options()$mxOptions # list all mxOptions (global settings)

testModel <- mxOption(testModel, "Function precision", 1e-5) # set precision
testModel <- mxOption(testModel, "Function precision", NULL) # clear precision
# N.B. This is model-specific precision (defaults to global setting)

# may optimize for speed
# at cost of not getting standard errors
testModel <- mxOption(testModel, "Calculate Hessian", "No")
testModel <- mxOption(testModel, "Standard Errors"  , "No")

testModel$options # see the list of options you set
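
# Reset all of the model's options back to their defaults
# (when reset = TRUE, key and value are ignored)
testModel <- mxOption(testModel, reset = TRUE)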
