The following control options can be queried and set with the lp.control function.

anti.degen: a character vector specifying one or more anti-degeneracy measures.
"none": No anti-degeneracy handling.
"fixedvars"
: Check if there are equality slacks in the basis and try to drive them out in order to reduce the chance of degeneracy in Phase 1.
"columncheck"
:
"stalling"
:
"numfailure"
:
"lostfeas"
:
"infeasible"
:
"dynamic"
:
"duringbb"
:
"rhsperturb"
: Perturbation of the working RHS at refactorization.
"boundflip"
: Limit bound flips that can sometimes contribute to degeneracy in some models.
The default is c("infeasible", "stalling", "fixedvars").
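As an illustration, here is a minimal sketch of setting this option; the three-variable model is just a placeholder.

    library(lpSolveAPI)

    # Placeholder model: 0 constraints, 3 decision variables.
    lprec <- make.lp(0, 3)

    # Add RHS perturbation at refactorization on top of the default measures.
    lp.control(lprec, anti.degen = c("infeasible", "stalling", "fixedvars", "rhsperturb"))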
"none"
: No basis crash.
"mostfeasible"
: Most feasible basis.
"leastdegenerate"
: Construct a basis that is in some sense the least degenerate.
The default is "none".
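Continuing with the lprec model from the sketch above, a crash basis could be requested as follows; whether this helps is model dependent.

    # Construct a "most feasible" initial basis instead of the standard slack basis.
    lp.control(lprec, basis.crash = "mostfeasible")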
bb.depthlimit: a single integer value specifying the maximum branch-and-bound depth. A negative value -x results in a maximum depth of x times the order of the MIP problem.
This control option only applies if there are integer, semi-continuous (SC) or special ordered set (SOS) variables in the model, i.e., when the branch-and-bound algorithm is used. The branch-and-bound algorithm will not go deeper than this level. Limiting the depth speeds up solving, but the solution obtained may be sub-optimal; it is also possible that no solution will be found.
The default value is -50; a value of zero implies no limit to the depth.
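For example, reusing the lprec model from the first sketch, a relative depth limit could be set like this.

    # Limit the branch-and-bound depth to 10 times the order of the MIP problem.
    lp.control(lprec, bb.depthlimit = -10)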
"ceiling"
: Take ceiling branch first.
"floor"
: Take floor branch first.
"auto"
: lpSolve decides which branch to take first.
The value of this option can influence solving times considerably. However, the real-world performance is model dependent. The default is "auto".
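A one-line sketch, again on the lprec model from above:

    # Explore the ceiling branch first; "auto" is usually a reasonable default.
    lp.control(lprec, bb.floorfirst = "ceiling")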
"first"
: Select the lowest indexed non-integer column.
"gap"
: Selection based on the distance from the current bounds.
"range"
: Selection based on the largest current bound.
"fraction"
: Selection based on the largest fractional value.
"pseudocost"
: Simple, unweighted pseudo-cost of a variable.
"pseudononint"
: An extended pseudo-costing strategy based on minimizing the number of integer infeasibilities.
"pseudoratio"
: An extended pseudo-costing strategy based on maximizing the normal pseudo-cost divided by the number of infeasibilities. Effectively, it is similar to (the reciprocal of) a cost/benefit ratio.
Additional modes (if any) may be appended to augment the rule specified in the first element of bb.rule.
"weightreverse"
: Select by criterion minimum (worst), rather than by criterion maximum (best).
"branchreverse"
: When bb.floorfirst
is "auto"
, select the direction (lower/upper branch) opposite to that chosen by lpSolve.
"greedy"
:
"pseudocost"
: Toggle between weighting based on pseudocost or objective function value.
"depthfirst"
: Select the node that has been selected before the most number of times.
"randomize"
: Add a randomization factor to the score for all the node candidates.
"gub"
: This option is still in development and should not be used at this time.
"dynamic"
: When "depthfirst"
is selected, switch it off once the first solution is found.
"restart"
: Regularly restart the pseudocost value calculations.
"breadthfirst"
: Select the node that has been selected the fewest number of times (or not at all).
"autoorder"
: Create an optimal branch-and-bound variable ordering. Can speed up branch-and-bound algorithm.
"rcostfixing"
: Do bound tightening during branch-and-bound based on the reduced cost information.
"stronginit"
: Initialize pseudo-costs by strong branching.
The value of this rule can influence solving times considerably. However, the real-world performance is model dependent. The default value is c("pseudononint", "greedy", "dynamic", "rcostfixing").
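As a sketch, a base rule and one or more augmenting modes are combined in a single character vector; the particular combination below is illustrative only.

    # Branch on the largest fractional value, with randomized scoring and
    # reduced-cost bound tightening.
    lp.control(lprec, bb.rule = c("fraction", "randomize", "rcostfixing"))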
break.at.first: a logical value. If TRUE then the branch-and-bound algorithm stops at the first solution found. The default (FALSE) is to continue until an optimal solution is found.
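For example:

    # Stop at the first incumbent; faster, but the solution may be sub-optimal.
    lp.control(lprec, break.at.first = TRUE)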
epslevel: a character string providing a simplified way of setting a group of related tolerance thresholds: epsel, epsb, epsd, epspivot, epsint and mip.gap.
"tight"
: Use tight tolerance values.
"medium"
: Use medium tolerance values.
"loose"
: Use loose tolerance values.
"baggy"
: Use very loose tolerance values.
The default is "tight".
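For example, all of the associated thresholds can be relaxed in one step instead of setting each eps* value individually.

    # Use the medium set of tolerance values.
    lp.control(lprec, epslevel = "medium")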
epsb: a single numeric value specifying the tolerance for the right-hand side (RHS): if an RHS value is smaller (in absolute terms) than epsb then it is treated as zero by the solver. The default value is 1.0e-10.
epsd: a single numeric value specifying the tolerance for reduced costs: if a reduced cost is smaller (in absolute terms) than epsd then it is treated as zero by the solver. The default value is 1.0e-9.
epsel: a single numeric value specifying the general-purpose tolerance: if a value is smaller (in absolute terms) than epsel then it is rounded to zero by the solver. The default value is 1.0e-12. This parameter is used in situations where none of epsint, epsb, epsd, epspivot nor epsperturb apply.
epsint: a single numeric value specifying the tolerance used to determine whether a floating-point number is in fact an integer: if a value is within epsint of an integer then it is considered an integer. The default value is 1.0e-7.
epsperturb: a single numeric value specifying the perturbation scalar used for degenerate problems. The default value is 1.0e-5.
epspivot: a single numeric value specifying the tolerance for pivot elements: if a pivot element is smaller (in absolute terms) than epspivot then it is treated as zero by the solver. Pivots will be performed on elements smaller (in absolute terms) than epspivot when no other larger pivot element can be found. The default value is 2.0e-7.
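Individual tolerances can also be overridden directly; the values below are arbitrary and for illustration only.

    lp.control(lprec,
               epsint = 1.0e-6,  # treat values within 1e-6 of an integer as integer
               epsb   = 1.0e-9)  # treat RHS values below 1e-9 as zero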
"none"
: None.
"solution"
: Running accuracy measurement of solved equations based on $Bx=r$ (primal simplex), remedy is refactorization.
"dualfeas"
: Improve initial dual feasibility by bound flips (highly recommended).
"thetagap"
: Low-cost accuracy monitoring in the dual, remedy is refactorization.
"bbsimplex"
: By default there is a check for primal/dual feasibility at the optimum only for the relaxed problem, this also activates the test at the node level.
The default is c("dualfeas", "thetagap").
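A sketch of adding the primal accuracy check to the default improvement measures:

    # "solution" adds running accuracy measurement of the solved equations.
    lp.control(lprec, improve = c("solution", "dualfeas", "thetagap"))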
infinity: a single numeric value specifying the practical value of infinity: values larger than infinity (in absolute terms) are treated as infinite by the solver. The default value is 1.0e30.
maxpivot: a single integer value specifying the maximum number of pivots between re-inversions of the basis matrix. The default value is 250.
mip.gap: the tolerance (MIP gap) used by the branch-and-bound algorithm when judging whether a solution can be considered optimal. The default value is 1.0e-11.
negrange: a single numeric value specifying the nonpositive limit below which variables are split into negative and positive parts. The default value is -1.0e6.
obj.in.basis: a logical value specifying whether the objective function is stored in the basis matrix. If FALSE then the objective function is moved to separate storage. When the objective function is not stored in the basis the computation of reduced costs is somewhat slower. Later versions of v5.5 offer the option to compute the reduced cost in the textbook way: completely independently of the basis.
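Several of the scalar controls above can be set in one call; the values here are illustrative, not recommendations.

    lp.control(lprec,
               infinity = 1.0e30,  # values beyond this are treated as infinite
               maxpivot = 100)     # re-invert the basis matrix more frequently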
"firstindex"
: Select first.
"dantzig"
: Select according to Dantzig.
"devex"
: Devex pricing from Paula Harris.
"steepestedge"
: Steepest edge.
Additional modes (if any) may be appended to augment the pivoting rule.
"primalfallback": When using the steepest edge rule, fall back to "devex" in the primal.
"multiple"
: A preliminary implementation of the multiple pricing scheme. Attractive candidate columns from one iteration may be used in subsequent iterations thus avoiding full updating of reduced costs. In the current implementation, lpSolve only reuses the second best entering column alternative.
"partial"
: Enables partial pricing.
"adaptive"
: Temporarily use an alternative strategy if cycling is detected.
"randomize"
: Adds a small randomization effect to the selected pricer.
"autopartial"
: Indicates automatic detection of segmented/staged/blocked models. It refers to partial pricing rather than full pricing. With full pricing, all non-basic columns are scanned, but with partial pricing only a subset is scanned for every iteration. This can speed up several models.
"loopleft"
: Scan entering/leaving columns left rather than right.
"loopalternate"
: Scan entering/leaving columns alternating left/right.
"harristwopass"
: Use Harris' primal pivot logic rather than the default.
"truenorminit"
: Use true norms for Devex and steepest edge initializations.
The default is c("devex", "adaptive").
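For example, a pivoting rule and modes are passed together; this combination is illustrative, not a recommendation.

    # Steepest edge, falling back to Devex in the primal, with adaptive
    # cycling protection.
    lp.control(lprec, pivoting = c("steepestedge", "primalfallback", "adaptive"))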
"lindep"
presolve option can result in the deletion of rows (the linear dependent ones). The get.constraints
function will then return only the values of the rows that are kept.
The presolve options are given in the following table. If any element of presolve
is "none"
then no presolving is done.
"none"
: No presolve.
"rows"
: Presolve rows.
"cols"
: Presolve columns.
"lindep"
: Eliminate linearly dependent rows.
"sos"
: Convert constraints to special ordered sets (SOS), only SOS1 is handled.
"reducemip"
: Constraints found redundant in phase 1 are deleted. This is no longer active since it is rarely effective and also because it adds code complications and delayed presolve effects that are not captured properly.
"knapsack"
: Simplification of knapsack-type constraints through the addition of an extra variable. This also helps bound the objective function.
"elimeq2"
: Direct substitution of one variable in 2-element equality constraints; this requires changes to the constraint matrix.
"impliedfree"
: Identify implied free variables (releasing their explicit bounds).
"reducegcd"
: Reduce (tighten) coefficients in integer models based on GCD argument.
"probefix"
: Attempt to fix binary variables at one of their bounds.
"probereduce"
: Attempt to reduce coefficients in binary models.
"rowdominate"
: Identify and delete qualifying constraints that are dominated by others, also fixes variables at a bound.
"coldominate"
: Delete variables (mainly binary) that are dominated by others (only one can be non-zero).
"mergerows"
: Merges neighboring >=
or <=< code=""> constraints when the vectors are otherwise relatively identical into a single ranged constraint.
"impliedslk"
: Converts qualifying equalities to inequalities by converting a column singleton variable to a slack variable. The routine also detects implicit duplicate slacks from inequality constraints and fixes and removes the redundant variable. This removal also tends to reduce the risk of degeneracy. The combined function of this option can have a dramatic simplifying effect on some models.
"colfixdual"
: Variable fixing and removal based on the signs of the associated dual constraint.
"bounds"
: Bound tightening based on full-row constraint information. This can assist in tightening the objective function bound, eliminate variables and constraints. At the end of presolve, it is checked if any variables can be deemed free, thereby reducing any chance that degeneracy is introduced via this presolve option.
"duals"
: Calculate duals.
"sensduals"
: Calculate sensitivity if there are integer variables.
The default is c("none").
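A sketch of enabling a few presolve steps; recall from above that "lindep" may delete rows, after which get.constraints returns values only for the rows that were kept.

    lp.control(lprec, presolve = c("rows", "cols", "lindep"))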
scalelimit: a numeric value that limits the scaling (a value less than 1 is taken as a convergence criterion). The default value is 5.
"none"
: No scaling (not advised).
"extreme"
: Scale to convergence using largest absolute value.
"range"
: Scale based on the simple numerical range.
"mean"
: Numerical range-based scaling.
"geometric"
: Geometric scaling.
"curtisreid"
: Curtis-Reid scaling.
Additional elements (if any) from the following table can be included to augment the scaling algorithm.
"quadratic"
:
"logarithmic"
: Scale to convergence using logarithmic mean of all values.
"power2"
: Power scaling.
"equilibrate"
: Make sure that no scaled number is above 1
.
"integers"
: Scale integer variables.
"dynupdate"
: Recompute scale factors when resolving the model.
"rowsonly"
: Only scale rows.
"colsonly"
: Only scale columns.
By default, lpSolve computes scale factors once for the original model. If a solve is done again (most probably after changing some data in the model), the scaling factors are not recomputed. Instead, the scale factors from the original model are used. This is not always desirable, especially if the data has changed considerably. Including "dynupdate"
among the scale algorithm augmentations instructs lpSolve to recompute the scale factors each time solve
is called. Note that the scaling done by "dynupdate"
is incremental and the resulting scale factors are typically different from those computed from scratch.
The default is c("geometric", "equilibrate", "integers").
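For example, to make lpSolve recompute scale factors on every solve after model edits, append "dynupdate" to the default algorithm and augmentations.

    lp.control(lprec, scaling = c("geometric", "equilibrate", "integers", "dynupdate"))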
"max"
or "min"
specifying whether the model is a maximization or a minimization problem.}
"primal"
and "dual"
. If length two then the first element describes the simplex type used in phase 1 and the second element the simplex type used in phase 2. If length one then that simplex type is used for both phases. The default is c("dual", "primal")
.}
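A final sketch combining the two options above:

    # Maximize, with the dual simplex in Phase 1 and the primal simplex in Phase 2.
    lp.control(lprec, sense = "max", simplextype = c("dual", "primal"))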
See lp.control for setting and querying these options.