SNOPT Solver Properties
Feastol
Feasibility tolerance
Type: numeric
Default:
1.0·10^-6
The solver tries to ensure that all bound and linear constraints are eventually satisfied to within the feasibility tolerance t. (Feasibility with respect to nonlinear constraints is instead judged by the major feasibility tolerance, majfeastol.)
If the bounds and linear constraints cannot be satisfied to within t, the problem is declared infeasible. Let sInf be the corresponding sum of infeasibilities. If sInf is quite small, it might be appropriate to raise t by a factor of 10 or 100. Otherwise you should suspect some error in the data.
Nonlinear functions are evaluated only at points that satisfy the bound and linear constraints. If there are regions where a function is undefined, every attempt should be made to eliminate these regions from the problem. For example, if f(x) = √x1 + log x2, it is essential to place lower bounds on both variables. If t = 10^-6, the bounds x1 ≥ 10^-5 and x2 ≥ 10^-4 might be appropriate. (The log singularity is more serious. In general, keep x as far away from singularities as possible.)
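For illustration only, the following Python sketch (using the example function and bounds above, with made-up variable names) checks that a point which undershoots those bounds by as much as t = 10^-6 still keeps both terms defined; it is not solver code.

    import math

    # With t = 1e-6, a point that satisfies the bounds x1 >= 1e-5 and
    # x2 >= 1e-4 only to within t may undershoot them by up to t, so the
    # smallest values that could actually be evaluated are:
    t = 1.0e-6
    x1_min = 1.0e-5 - t   # 9.0e-6, still positive -> sqrt(x1) is defined
    x2_min = 1.0e-4 - t   # 9.9e-5, still positive -> log(x2) is defined

    def f(x1, x2):
        """Example objective with a sqrt and a log singularity at zero."""
        return math.sqrt(x1) + math.log(x2)

    print(f(x1_min, x2_min))  # evaluates without a domain error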
In practice, the solver uses t as a feasibility tolerance for satisfying the bound and linear constraints in each QP subproblem. If the sum of infeasibilities cannot be reduced to zero, the QP subproblem is declared infeasible. The solver then operates in elastic mode thereafter (with only the linearized nonlinear constraints defined to be elastic).
Funcprec
Function precision
Type: numeric
Default:
ε^0.8 ≈ 3.8·10^-11
The relative function precision is intended to be a measure of the relative accuracy with which the nonlinear functions can be computed. For example, if f(x) is computed as 1000.56789 for some relevant x and if the first 6 significant digits are known to be correct, the appropriate value for the function precision would be 10^-6. (Ideally the functions should have a magnitude of order 1. If all functions are substantially less than 1 in magnitude, the function precision should be the absolute precision. For example, if f(x) = 1.23456789·10^-4 at some point and if the first 6 significant digits are known to be correct, the appropriate precision would be 10^-10.)
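The following Python sketch simply reproduces the arithmetic of the two examples above; the helper name and values are illustrative only and not part of any solver interface.

    # Sketch of the arithmetic in the examples above (illustrative only).
    def relative_precision(correct_digits):
        # If the first k significant digits are trustworthy, the relative
        # error is roughly 10**(-k).
        return 10.0 ** (-correct_digits)

    # f(x) = 1000.56789 with 6 correct significant digits:
    rel = relative_precision(6)          # 1e-6 -> suitable function precision
    abs_err = rel * 1000.56789           # ~1e-3 absolute error in f

    # f(x) = 1.23456789e-4 with 6 correct significant digits: because the
    # magnitude is far below 1, quote the absolute precision instead.
    abs_prec = relative_precision(6) * 1.23456789e-4   # ~1.2e-10, i.e. ~1e-10
    print(rel, abs_err, abs_prec)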
The default value is appropriate for simple analytic functions.
In some cases the function values are the result of extensive computations, possibly involving an iterative procedure that can provide rather few digits of precision at reasonable cost. Specifying an appropriate function precision might lead to savings by allowing the line search procedure to terminate when the difference between function values along the search direction becomes as small as the absolute error in the values.
Hessupd
Hessian updates
Type: integer
Default:
10
When the number of nonlinear variables is large (more than 75) or when the QP problem solver is set to conjugate-gradient, a limited-memory procedure stores a fixed number of BFGS update vectors and a diagonal Hessian approximation. In this case, if hessupd BFGS updates have already been carried out, all but the diagonal elements of the accumulated updates are discarded and the updating process starts again. Broadly speaking, the more updates stored, the better the quality of the approximate Hessian. However, the more vectors stored, the greater the cost of each QP iteration. The default value is likely to give a robust algorithm without significant expense, but faster convergence can sometimes be obtained with significantly fewer updates (for example, hessupd = 5).
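As a rough illustration of the reset policy described above, the sketch below stores a dense approximate Hessian for clarity, whereas a real limited-memory implementation keeps only the update vectors and a diagonal; the class and variable names are ours, not part of the solver.

    import numpy as np

    class ResettingBFGS:
        """Toy sketch of the reset policy described above: after `hessupd`
        BFGS updates, keep only the diagonal of the accumulated approximation
        and start updating again."""

        def __init__(self, n, hessupd=10):
            self.H = np.eye(n)        # current approximate Hessian
            self.hessupd = hessupd
            self.count = 0

        def update(self, s, y):
            # Standard BFGS update of the Hessian approximation; skipped if
            # the curvature condition s'y > 0 fails.
            if s @ y <= 1e-12:
                return
            Hs = self.H @ s
            self.H += np.outer(y, y) / (y @ s) - np.outer(Hs, Hs) / (s @ Hs)
            self.count += 1
            if self.count >= self.hessupd:
                # Discard everything except the diagonal and restart.
                self.H = np.diag(np.diag(self.H))
                self.count = 0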
Majfeastol
Major feasibility tolerance
Type: numeric
Default:
1.0·10^-6
This parameter specifies how accurately the nonlinear constraints should be satisfied. The default value of 1.0·10^-6 is appropriate when the linear and nonlinear constraints contain data to roughly that accuracy.
Let rowerr be the maximum nonlinear constraint violation, normalized by the size of the solution. It is required to satisfy

rowerr = max_i viol_i / ‖x‖ ≤ t,

where viol_i is the violation of the ith nonlinear constraint and t is the major feasibility tolerance. If some of the problem functions are known to be of low accuracy, a larger major feasibility tolerance might be appropriate.
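A minimal Python sketch of this test, assuming the normalization by ‖x‖ shown above and using made-up numbers:

    import numpy as np

    # Illustrative check of the rowerr test (names are for this sketch only).
    def rowerr(violations, x):
        # Maximum nonlinear constraint violation, normalized by the size
        # of the solution vector.
        return max(violations) / np.linalg.norm(x)

    x = np.array([3.0, 4.0])          # ||x|| = 5
    viol = [2.0e-6, 5.0e-7]           # per-constraint violations
    t = 1.0e-6                        # majfeastol
    print(rowerr(viol, x) <= t)       # 4e-7 <= 1e-6 -> True, accepted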
Opttol
Optimality tolerance
Type: numeric
Default:
1.0·10^-3
This is the major optimality tolerance and specifies the final accuracy of the dual variables. On successful termination, the solver computes a solution (x, s, π) such that

maxComp = max_j Comp_j / ‖π‖ ≤ t,

where Comp_j is an estimate of the complementarity slackness for variable j and t is the optimality tolerance. The values Comp_j are computed from the final QP solution using the reduced gradients d_j = g_j − π^T a_j, where g_j is the jth component of the objective gradient and a_j is the associated column of the constraint matrix. Hence you have

Comp_j = d_j · min(x_j − l_j, 1)    if d_j ≥ 0,
Comp_j = −d_j · min(u_j − x_j, 1)   if d_j < 0,

where l_j and u_j are the bounds on variable j.
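The following sketch applies this test to made-up data (ignoring slack variables and using our own function and variable names); it is meant only to illustrate the formulas above, not the solver's internal implementation.

    import numpy as np

    def max_comp(g, A, pi, x, lower, upper):
        d = g - A.T @ pi                               # reduced gradients d_j
        comp = np.where(d >= 0,
                        d * np.minimum(x - lower, 1.0),
                        -d * np.minimum(upper - x, 1.0))
        return comp.max() / np.linalg.norm(pi)

    g = np.array([2.0, 1.0005])       # objective gradient
    A = np.array([[1.0, 1.0]])        # one linearized constraint row
    pi = np.array([1.0])              # dual variable
    x = np.array([0.0, 5.0])          # x1 at its lower bound, x2 interior
    lower = np.array([0.0, 0.0])
    upper = np.array([10.0, 10.0])
    print(max_comp(g, A, pi, x, lower, upper) <= 1.0e-3)   # True: within opttol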
Qpsolver
QP problem solver
Type: string ('cholesky', 'cg', or 'qn')
Default: 'cholesky'
Specifies the active-set algorithm used to solve the QP problem, or in the nonlinear case, the QP subproblem.
'cholesky' holds the full Cholesky factor R of the reduced Hessian Z^T H Z. As the QP iterations proceed, the dimension of R changes with the number of superbasic variables.
'qn' solves the QP subproblem using a quasi-Newton method. In this case, R is the factor of a quasi-Newton approximate reduced Hessian.
'cg' uses the conjugate-gradient method to solve all systems involving the reduced Hessian. No storage needs to be allocated for a Cholesky factor.
The Cholesky QP solver is the most robust but might require a significant amount of computation and memory if the number of superbasics is large.
The quasi-Newton QP solver does not require the computation of the exact R at the start of each QP and might be appropriate when the number of superbasics is large but each QP subproblem requires relatively few minor iterations.
The conjugate-gradient QP solver is appropriate for problems with large numbers of degrees of freedom (many superbasic variables). The Hessian memory option 'hessmem' defaults to 'limited' when this solver is used.