About Gradient-Based Solvers
The defining characteristic of a gradient-based solver is that it follows a path in the control variable space where each new iterate is based on local derivative information evaluated at previously visited points. The methods implemented in the Optimization Module require the complete vector of first-order derivatives of the objective function with respect to the discrete vector of control variable degrees of freedom. This vector is referred to as the discrete gradient of the objective function in the control variable space.
The gradient can be computed in different ways. In general, the Adjoint method is the most efficient (and also the default), followed by the Forward method. The pure Numeric method is the most expensive as it is based on repeated solution of the multiphysics problem, while the Forward numeric method requires only repeated assembly of the problem residual. The Adjoint method is required for problems with many controls, but it is not supported in all situations, and it can be slower than the Numeric method for transient problems with few controls.
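The cost difference between the gradient methods can be illustrated with a generic forward-difference sketch (this is an illustration of the principle, not the module's implementation): a purely numeric gradient needs one extra solution of the underlying problem per control variable, so its cost grows linearly with the number of controls, whereas an adjoint computation obtains the whole gradient at roughly the cost of one additional linear solve.

```python
import numpy as np

def numeric_gradient(f, x, h=1e-6):
    """Forward-difference gradient: one extra evaluation of f per
    control variable, so n controls cost n + 1 evaluations in total.
    When each evaluation is a full multiphysics solve, this is why
    the pure Numeric method becomes expensive for many controls."""
    f0 = f(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        grad[i] = (f(xp) - f0) / h
    return grad

# Hypothetical objective f(x) = x0^2 + 3*x1; exact gradient is (2*x0, 3)
f = lambda x: x[0]**2 + 3.0 * x[1]
x = np.array([2.0, 1.0])
print(numeric_gradient(f, x))  # approximately [4.0, 3.0]
```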
The Optimization Module provides four different gradient-based algorithms:
The SNOPT solver is a general-purpose solver, based on sequential quadratic programming (SQP), suitable for large-scale problems with many or difficult constraints. See The SNOPT Solver.
The IPOPT solver is a general-purpose solver, based on an interior-point method, suitable for large-scale problems with many or difficult constraints. See The IPOPT Solver.
The MMA solver can handle problems of any form and is especially suitable for problems with a large number of control variables, such as topology optimization. See The MMA Solver.
The Levenberg–Marquardt solver is specifically designed for solving least-squares problems. See The Levenberg–Marquardt Solver.
These methods are each described in more detail below.
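The special structure that the Levenberg–Marquardt solver exploits can be shown with a minimal, self-contained sketch of the classic damped Gauss–Newton iteration for min ‖r(x)‖². All names here are illustrative; this is the textbook algorithm, not the module's implementation.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop for least-squares problems.
    The damping parameter lam blends Gauss-Newton (lam -> 0) with
    gradient descent (large lam)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Solve the damped normal equations for the step
        A = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(A, J.T @ r)
        x_new = x - step
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 2.0                  # reject step, increase damping
    return x

# Illustrative use: fit y = a*t + b to data in the least-squares sense
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # exact line: a = 2, b = 1
res = lambda p: p[0] * t + p[1] - y
jac = lambda p: np.column_stack([t, np.ones_like(t)])
print(levenberg_marquardt(res, jac, [0.0, 0.0]))  # approaches [2.0, 1.0]
```

Because the objective is a sum of squared residuals, the residual Jacobian alone yields a good Hessian approximation, which is why a dedicated least-squares solver typically converges faster on such problems than a general-purpose method.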