The derivative-free methods Nelder–Mead,
COBYLA, and
EGO can internally handle simple bounds and general constraints on global scalar expressions. In practice this means that in addition to simple bounds and constraints defined in the Optimization study step,
Integral Inequality Constraint nodes and
Global Inequality Constraint nodes in Optimization interfaces are accounted for.
The constraint handling algorithm used in Nelder–Mead is based on filtering out candidate points in the control variable space that fall outside the feasible region, and to some extent adjusting search directions accordingly. The procedure is not guaranteed to find a constrained local minimum fulfilling the KKT conditions.
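The filtering principle can be imitated outside the software by returning an infinite objective value at infeasible trial points, so that the simplex never accepts them. The following is a minimal sketch using SciPy's Nelder–Mead implementation on an illustrative test problem; it mimics the filtering idea only and is not the actual algorithm described above.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (not from the manual): minimize a quadratic
# subject to the inequality constraint g(x) <= 0.
def objective(x):
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def g(x):
    return x[0] + x[1] - 2.0   # feasible region: x0 + x1 <= 2

def filtered_objective(x):
    # Filtering: candidate points outside the feasible region are
    # rejected by returning +inf, so the simplex never moves there.
    if g(x) > 0.0:
        return np.inf
    return objective(x)

res = minimize(filtered_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)   # ends up near the constrained minimum at (1.5, 0.5)
```

Because the simplex can collapse against the constraint boundary, the final point may be feasible without satisfying the KKT conditions, mirroring the caveat above.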
COBYLA, in contrast, approximates the objective function and the constraints in a uniform way. Therefore, provided all functions are sufficiently smooth, it will in general find an approximate constrained local minimum. The returned solution may, however, lie slightly outside the feasible set. This can happen, in particular, if the constraints are nonlinear.
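The same behavior can be reproduced with SciPy's COBYLA implementation, which uses the inequality convention fun(x) ≥ 0; the test problem below is illustrative. Note how the returned point can violate the nonlinear constraint by a small amount.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return x[0]**2 + x[1]**2

# Nonlinear inequality constraint in SciPy's convention
# (feasible when fun(x) >= 0): x0*x1 >= 1.
cons = [{"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}]

res = minimize(objective, x0=[2.0, 2.0], method="COBYLA",
               constraints=cons, tol=1e-8, options={"rhobeg": 0.5})
print(res.x, res.fun)   # near (1, 1) with f = 2

# COBYLA works with linear approximations of the nonlinear
# constraint, so the final point may violate it slightly.
print("constraint value:", res.x[0] * res.x[1] - 1.0)
```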
The constraint handling algorithm in EGO is based on incorporating the constraints directly into the surrogate model using Gaussian process models. The algorithm models both the objective function and the constraints, balancing their improvement through the acquisition function when selecting new sample points. While constraint satisfaction improves as the model evolves, the returned solution may still lie slightly outside the feasible region, especially in the early stages or if the constraints are highly nonlinear.
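A common way to realize this idea, used here as an illustrative stand-in for the actual acquisition function, is constrained expected improvement: separate Gaussian process models are fit to the objective and each constraint, and the expected improvement is multiplied by the predicted probability of feasibility. The sketch below assumes scikit-learn and an illustrative 1D problem.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Illustrative 1D problem: minimize f subject to g(x) <= 0.
f = lambda x: np.sin(3 * x) + x**2
g = lambda x: 0.5 - x                        # feasible for x >= 0.5

X = rng.uniform(-1, 2, size=(8, 1))          # initial design
X = np.vstack([X, [[1.0]]])                  # ensure a feasible sample
gp_f = GaussianProcessRegressor(RBF(0.5)).fit(X, f(X).ravel())
gp_g = GaussianProcessRegressor(RBF(0.5)).fit(X, g(X).ravel())

def constrained_ei(x):
    x = np.atleast_2d(x)
    mu_f, sd_f = gp_f.predict(x, return_std=True)
    mu_g, sd_g = gp_g.predict(x, return_std=True)
    feas = g(X).ravel() <= 0
    f_best = f(X).ravel()[feas].min()        # best feasible sample so far
    z = (f_best - mu_f) / np.maximum(sd_f, 1e-12)
    ei = sd_f * (z * norm.cdf(z) + norm.pdf(z))      # expected improvement
    p_feas = norm.cdf(-mu_g / np.maximum(sd_g, 1e-12))  # P(g <= 0)
    return ei * p_feas    # balances objective improvement vs. feasibility

xs = np.linspace(-1, 2, 400).reshape(-1, 1)
x_next = xs[np.argmax(constrained_ei(xs))]   # next sample point
print("next sample:", x_next)
```

Early on, the constraint model is inaccurate, so sample points (and the incumbent solution) can fall slightly outside the true feasible region, as noted above.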
BOBYQA handles simple bounds internally, but general constraints only via an external iterative procedure based on repeated minimization of an augmented Lagrangian. This augmented Lagrangian method can also be used as an alternative to the internal penalty method in the
Nelder–Mead solver, but is not selected by default.
In the Settings window for the
Optimization study step, you can choose to use an
Automatic or a
Manual definition of the initial
Penalty parameter ρ. The automatic setting is the default and computes an initial value based on the objective and constraint function values at the initial point. You can also limit the
Maximum number of augmented iterations and select a strategy for updating
δ (tolerance for the subsolver). The options
Dynamic I and
Dynamic II both tighten the subsolver tolerance from iteration to iteration, the latter providing some additional control. There is also a
Manual option. Finally, specify the
Constraint tolerance, that is, the maximum allowable constraint violation in the final solution.
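Schematically, the augmented Lagrangian outer loop repeats three steps: minimize the augmented Lagrangian to the current subsolver tolerance δ, update the multiplier estimates, and increase the penalty parameter ρ, until the constraint violation falls below the constraint tolerance. The sketch below shows one standard variant of this loop in Python; the update factors and the Nelder–Mead subsolver stand in for the actual settings and BOBYQA, and the test problem is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, rho=10.0, delta=1e-2,
                         constraint_tol=1e-6, max_outer=20):
    """Minimize f(x) subject to g(x) <= 0 (g returns an array).

    Schematic augmented Lagrangian outer loop around a
    derivative-free subsolver (Nelder-Mead here, since BOBYQA
    is not available in SciPy).
    """
    x, lam = np.asarray(x0, float), np.zeros(len(g(x0)))
    for _ in range(max_outer):
        def L(x):
            # Augmented Lagrangian for inequalities, using the
            # standard shifted quadratic penalty max(0, lam + rho*g).
            t = np.maximum(0.0, lam + rho * g(x))
            return f(x) + (t @ t - lam @ lam) / (2.0 * rho)

        res = minimize(L, x, method="Nelder-Mead",
                       options={"xatol": delta, "fatol": delta})
        x = res.x
        if np.maximum(0.0, g(x)).max() <= constraint_tol:
            break                               # Constraint tolerance met
        lam = np.maximum(0.0, lam + rho * g(x)) # multiplier update
        rho *= 10.0                             # increase penalty parameter
        delta *= 0.3                            # tighten subsolver tolerance
    return x

# Illustrative use: minimize (x0-1)^2 + (x1-1)^2 s.t. x0 + 2*x1 <= 1.
f = lambda x: (x[0] - 1)**2 + (x[1] - 1)**2
g = lambda x: np.array([x[0] + 2 * x[1] - 1.0])
print(augmented_lagrangian(f, g, [0.0, 0.0]))   # near (0.6, 0.2)
```

The geometric shrinking of delta plays the role of the Dynamic strategies: each outer iteration asks the subsolver for a more accurate solution than the previous one.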
The SNOPT algorithm handles constraints of all types efficiently. Constraint handling in this SQP method is based on linearizing the constraints in the outer (major) iteration and using an active-set QP solver in the inner loop to decide which constraints are active and bound the solution at the current iterate. This process requires accurate evaluation of the gradient of the constraints, also known as the
constraint Jacobian.
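The role of the constraint Jacobian can be illustrated with SciPy's SLSQP solver, another SQP method: supplying the analytic Jacobian of each constraint gives the major iterations an accurate linearization to work with. This is a sketch of SQP constraint handling in general, not of SNOPT itself.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize x0^2 + x1^2 subject to x0*x1 >= 1
# (SciPy convention: 'ineq' means fun(x) >= 0).
objective = lambda x: x[0]**2 + x[1]**2
obj_grad  = lambda x: np.array([2 * x[0], 2 * x[1]])

cons = [{
    "type": "ineq",
    "fun":  lambda x: x[0] * x[1] - 1.0,
    # Constraint Jacobian: the gradient of the constraint function.
    # The SQP major iteration linearizes the constraint around the
    # current iterate using exactly this information.
    "jac":  lambda x: np.array([x[1], x[0]]),
}]

res = minimize(objective, x0=[2.0, 2.0], jac=obj_grad,
               method="SLSQP", constraints=cons)
print(res.x, res.fun)   # converges to (1, 1) with f = 2
```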
The IPOPT algorithm supports constraints and bounds by using an interior point line search method; see
IPOPT-Specific Settings.
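The interior point idea can be sketched with a simple logarithmic barrier: the inequality constraints are absorbed into the objective with a barrier weight μ that is driven toward zero, so the iterates remain strictly feasible while approaching the constrained optimum. This toy sketch illustrates the barrier principle only, not IPOPT's actual primal-dual filter line search algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize (x0-2)^2 + (x1-2)^2 subject to x0 + x1 <= 2.
f = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
g = lambda x: x[0] + x[1] - 2.0        # feasible when g(x) <= 0

def barrier(x, mu):
    # Logarithmic barrier: infinite outside the strictly feasible set.
    if g(x) >= 0.0:
        return np.inf
    return f(x) - mu * np.log(-g(x))

x, mu = np.array([0.0, 0.0]), 1.0      # strictly feasible start
while mu > 1e-8:
    x = minimize(lambda y: barrier(y, mu), x, method="Nelder-Mead").x
    mu *= 0.1                          # drive the barrier weight to zero
print(x)                                # approaches (1, 1) on the boundary
```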
The MMA algorithm accepts constraints of the same general type as SNOPT, requiring an accurate constraint Jacobian, but handles them differently. In each outer (major) iteration, linear and linearized constraints are combined with a linearized objective function into a convex smooth approximation whose unique optimum is always feasible unless the feasible set is empty. The globally convergent version of MMA implemented in the Optimization Module is conservative in a way that ensures each major iterate is feasible not only with respect to the linearized constraints, but also with respect to the fully nonlinear constraints.
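The convex approximation at the core of MMA can be written down explicitly: around the current iterate, each variable contributes a rational term anchored at moving asymptotes L_j < x_j < U_j, with coefficients chosen from the sign of the partial derivative so that the approximation is convex and matches the function value and gradient at the iterate. The sketch below builds this standard (Svanberg) approximation for a single function; the asymptote values and the test function are illustrative.

```python
import numpy as np

def mma_approximation(f, grad, xk, L, U):
    """Svanberg's MMA approximation of f around xk.

    Each variable contributes p_j/(U_j - x_j) + q_j/(x_j - L_j),
    with p_j, q_j >= 0 chosen from the sign of df/dx_j so that the
    approximation is convex and matches f and its gradient at xk.
    """
    gk = grad(xk)
    p = (U - xk)**2 * np.maximum(gk, 0.0)
    q = -(xk - L)**2 * np.minimum(gk, 0.0)
    r = f(xk) - np.sum(p / (U - xk) + q / (xk - L))
    def f_tilde(x):
        return r + np.sum(p / (U - x) + q / (x - L))
    return f_tilde

# Illustrative check: approximate f(x) = x0^2 + x1^2 around xk.
f = lambda x: x[0]**2 + x[1]**2
grad = lambda x: 2 * x
xk = np.array([1.0, -0.5])
L, U = xk - 2.0, xk + 2.0          # moving asymptotes (illustrative)
f_tilde = mma_approximation(f, grad, xk, L, U)
print(f(xk), f_tilde(xk))          # the approximation matches f at xk
```

Because each term blows up as x_j approaches an asymptote, the subproblem optimum stays strictly between L and U; tightening the asymptotes is what makes the globally convergent variant conservative.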
The Levenberg–Marquardt solver supports bounds but not constraints.