The MMA Solver
The MMA implementation in the Optimization Module is the globally convergent version of the method of moving asymptotes, referred to as GCMMA in Ref. 8, but it is also possible to use the classical implementation (without an inner loop) described in Ref. 9.
This is a three-level algorithm:
Outer iteration k uses the current control variable estimate, x_k, to evaluate the objective function, constraints, and their gradients. These are used together with the current asymptote estimates, l_k and u_k, to construct an approximating subproblem. This subproblem, which is guaranteed to be convex and feasible, is passed to the inner iterations.
Each inner iteration m solves the approximating subproblem for its unique optimum x_km and then evaluates the true objective function and constraints at this point. If the approximation is found to be conservative there, meaning that the approximate objective and constraint values are greater than or equal to the true values, the inner iteration is terminated and the point is accepted as the next outer estimate x_k+1. Otherwise, the approximating subproblem is modified to make it more conservative and then passed to the next inner iteration.
Note that gradients of the objective function and constraints are computed exactly once in each outer iteration, while function values must be computed once for each additional inner iteration required. The innermost level sees only an analytical approximation of the subproblem, in which the current function values and gradients appear as coefficients.
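The interplay between the levels can be summarized by the following schematic Python sketch. It is not COMSOL code: all the callables passed to the driver (evaluate, evaluate_with_gradients, build_subproblem, solve_subproblem, is_conservative, make_more_conservative, update_asymptotes) are hypothetical placeholders standing in for the corresponding steps in Ref. 8, and termination tests are omitted for brevity.

def gcmma_outer_loop(x, low, upp,
                     evaluate, evaluate_with_gradients,
                     build_subproblem, solve_subproblem, is_conservative,
                     make_more_conservative, update_asymptotes,
                     max_outer=50, max_inner=20):
    """Schematic GCMMA driver; all callables are hypothetical placeholders."""
    for k in range(max_outer):
        # Gradients of objective and constraints: exactly once per outer iteration.
        f, g, grad_f, grad_g = evaluate_with_gradients(x)

        # Convex, feasible approximating subproblem built from the current
        # values, gradients, and asymptote estimates.
        subproblem = build_subproblem(x, f, g, grad_f, grad_g, low, upp)

        for m in range(max_inner):
            x_trial = solve_subproblem(subproblem)
            # Only function values (no gradients) are needed in the inner loop.
            f_trial, g_trial = evaluate(x_trial)
            if is_conservative(subproblem, x_trial, f_trial, g_trial):
                break  # accept x_trial as the next outer iterate
            subproblem = make_more_conservative(subproblem)

        low, upp = update_asymptotes(k, x, x_trial, low, upp)
        x = x_trial
    return x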
The special structure of the generated approximating subproblems influences the global behavior of the algorithm. In contrast to the SNOPT and Levenberg–Marquardt solvers, which rely on approximating second-order information about the objective function, MMA is essentially a linear method. Its subproblems are linear approximations to the original problem but with barrier-like rational function contributions controlled by the moving asymptotes. No information about the problem is retained between outer iterations except the current position of the asymptotes.
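To make this structure concrete, the following Python sketch builds the separable approximation of a single function (objective or constraint) in the form used by classical MMA (Ref. 9), with an optional rho term of the kind GCMMA uses to control conservativeness. The variable names and the scaling of the rho term are illustrative assumptions; the exact coefficient definitions in Ref. 8 differ in detail.

import numpy as np

def mma_approximation(x_k, f_k, grad_k, low, upp, rho=0.0, span=1.0):
    """Return a callable evaluating an MMA-style approximation around x_k.

    x_k     : current iterate (array)
    f_k     : function value at x_k
    grad_k  : gradient at x_k (array)
    low, upp: moving asymptotes, elementwise low < x_k < upp (arrays)
    rho     : GCMMA-style conservativeness parameter (0 gives the classical MMA form)
    span    : scaling of the rho term, for example the control variable range
    """
    p = (upp - x_k) ** 2 * (np.maximum(grad_k, 0.0) + rho / span)
    q = (x_k - low) ** 2 * (np.maximum(-grad_k, 0.0) + rho / span)
    r = f_k - np.sum(p / (upp - x_k) + q / (x_k - low))

    def approx(x):
        # Barrier-like rational terms: each term grows without bound as a
        # component of x approaches its asymptote, which keeps subproblem
        # solutions strictly between low and upp.
        return r + np.sum(p / (upp - x) + q / (x - low))

    return approx

Because p and q are nonnegative, each rational term is convex between the asymptotes, and the approximation reproduces the true function value and gradient at x_k, which is the first-order character referred to above.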
In practice, this means that MMA does not show the quadratic convergence close to the optimum that is associated with Newton-like methods. In fact, there are very simple problems, dominated by a quadratic term in the objective function, for which MMA converges very slowly or not at all. In particular, for MMA to work efficiently, least-squares problems must be formulated using Least Squares Objective features in an Optimization interface. These features trigger a reformulation of the problem into a form that is more suitable for MMA. Moreover, if the defect is complex valued, only its real part is used internally by MMA.
Because of the linear approximation of the objective function, the first inner iteration in each outer MMA iteration effectively steps into a corner of the feasible set, where the iterate is completely bound by constraints and simple bounds. If this point is found to be nonconservative, as is the case if the objective function is convex with an optimum in the interior of the feasible set, the inner loop generates a series of iterates that gradually move away from the constraints until a conservative point is found. This behavior favors points close to the constraints, in contrast to the line search used in SNOPT and the trust region in Levenberg–Marquardt, which favor points close to the previous iterate. If the objective function has multiple local minima, the different methods can therefore be expected to find different local solutions.
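The following self-contained numerical sketch illustrates this behavior for the convex problem of minimizing f(x) = x^2 on [-1, 1] around the outer iterate x_k = 0.5. The asymptote positions, the sampling-based subproblem solve, and the simple rho-doubling rule are illustrative assumptions rather than the actual rules from Ref. 8. Running it shows the first trial point landing on the lower bound and being rejected as nonconservative, after which the inner iterations move back toward the interior until a conservative point is accepted.

import numpy as np

# Illustrative example only (not COMSOL code): minimize f(x) = x**2 on [-1, 1]
# around the outer iterate x_k = 0.5. Asymptote positions and the rho update
# are simplified assumptions; Ref. 8 gives the actual rules.

f = lambda x: x ** 2
df = lambda x: 2.0 * x

x_min, x_max = -1.0, 1.0
x_k = 0.5
low, upp = x_k - (x_max - x_min), x_k + (x_max - x_min)   # -1.5 and 2.5
span = x_max - x_min

def approximation(rho):
    """Separable MMA/GCMMA-style approximation of f around x_k."""
    p = (upp - x_k) ** 2 * (max(df(x_k), 0.0) + rho / span)
    q = (x_k - low) ** 2 * (max(-df(x_k), 0.0) + rho / span)
    r = f(x_k) - p / (upp - x_k) - q / (x_k - low)
    return lambda x: r + p / (upp - x) + q / (x - low)

def minimize_approximation(approx):
    """Minimize the rational approximation over [x_min, x_max] by dense sampling
    (a stand-in for the actual subproblem solver)."""
    xs = np.linspace(x_min, x_max, 100001)
    return xs[np.argmin(approx(xs))]

rho = 0.0                       # rho = 0 reproduces the classical MMA terms
for inner in range(10):
    approx = approximation(rho)
    x_trial = minimize_approximation(approx)
    conservative = approx(x_trial) >= f(x_trial)
    print(f"inner {inner}: rho = {rho:.1f}, x_trial = {x_trial:+.3f}, "
          f"approx = {approx(x_trial):+.3f}, true = {f(x_trial):+.3f}, "
          f"conservative = {conservative}")
    if conservative:            # accept x_trial as the next outer iterate
        break
    rho = 1.0 if rho == 0.0 else 2.0 * rho   # make the approximation more conservative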
For further details, see Ref. 8, which you can find under <COMSOL_root>/doc/pdf/Optimization_Module/gcmma.pdf, where <COMSOL_root> is the root folder of your COMSOL installation.