The Sensitivity Analysis Algorithm
When you enable Sensitivity analysis, the stationary solvers compute — in addition to the basic forward solution — the sensitivity of a functional
$$Q = Q\bigl(u_p(p),\, u_{\text{flux}}(p),\, p\bigr) \tag{20-17}$$
with respect to the sensitivity variables p. The forward solution u_p is a solution to the parameterized discrete forward problem
$$\begin{aligned} L(u_p, p) - N_F(u_p, p)\,\Lambda_p &= 0 \\ M(u_p, p) &= 0 \end{aligned} \tag{20-18}$$
where Λ_p are the constraint Lagrange multipliers, or (generalized) reaction forces, corresponding to the constraints M. It is assumed that Q does not explicitly depend on Λ_p.
To compute the sensitivity of Q with respect to p, first apply the chain rule:
$$\frac{dQ}{dp} = \frac{\partial Q}{\partial p} + \frac{\partial Q}{\partial u_p}\,\frac{\partial u_p}{\partial p} + \frac{\partial Q}{\partial u_{\text{flux}}}\,\frac{\partial u_{\text{flux}}}{\partial p} \tag{20-19}$$
where u_flux are the accurate boundary flux degrees of freedom. In this expression, the sensitivity of the solution with respect to the sensitivity variables, ∂u_p/∂p, is still an unknown quantity. Therefore, differentiate the forward problem in Equation 20-18 formally with respect to p:
$$\begin{aligned} \frac{\partial L}{\partial p} + \frac{\partial L}{\partial u}\,\frac{\partial u_p}{\partial p} - \frac{\partial N_F}{\partial p}\,\Lambda_p - N_F\,\frac{\partial \Lambda_p}{\partial p} &= 0 \\[4pt] \frac{\partial M}{\partial p} + \frac{\partial M}{\partial u}\,\frac{\partial u_p}{\partial p} &= 0 \end{aligned} \tag{20-20}$$
Here, K = −∂L/∂u and N = −∂M/∂u as usual. Assuming that the constraint force Jacobian N_F is independent of p (that is, ∂N_F/∂p = 0), you can write the above relations in matrix form
$$\begin{bmatrix} K & N_F \\ N & 0 \end{bmatrix} \begin{bmatrix} \dfrac{\partial u_p}{\partial p} \\[6pt] \dfrac{\partial \Lambda_p}{\partial p} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial L}{\partial p} \\[6pt] \dfrac{\partial M}{\partial p} \end{bmatrix} \tag{20-21}$$
Solve for the sensitivities ∂u_p/∂p and ∂Λ_p/∂p, and insert them back into Equation 20-19:
$$\frac{dQ}{dp} = \frac{\partial Q}{\partial p} + \begin{bmatrix} \dfrac{\partial Q}{\partial u_p} & 0 \end{bmatrix} J^{-1} \begin{bmatrix} \dfrac{\partial L}{\partial p} \\[6pt] \dfrac{\partial M}{\partial p} \end{bmatrix}, \qquad J = \begin{bmatrix} K & N_F \\ N & 0 \end{bmatrix} \tag{20-22}$$
This formula gives dQ/dp explicitly in terms of known quantities, but in practice it is too expensive to invert the matrix J.
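As a concrete, non-COMSOL illustration, the following NumPy sketch assembles the matrix J of Equation 20-21 for a small problem with the assumed linear forms L(u_p, p) = F p − A u_p, M(u_p, p) = B p − C u_p, and N_F = C^T, evaluates Equation 20-22 directly, and checks the result against finite differences. All matrices and the functional Q = q·u_p are invented for the example; for a problem of this size, solving with J directly is of course unproblematic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_p = 6, 2, 3            # state DOFs, constraints, sensitivity variables

# Assumed illustrative forms: L(u, p) = F p - A u, M(u, p) = B p - C u, N_F = C^T
A = rng.standard_normal((n, n)) + n * np.eye(n)   # K = -dL/du
C = rng.standard_normal((m, n))                    # N = -dM/du
F = rng.standard_normal((n, n_p))                  # dL/dp, one column per p_j
B = rng.standard_normal((m, n_p))                  # dM/dp, one column per p_j
q = rng.standard_normal(n)                         # Q(u_p) = q . u_p, so dQ/du_p = q and dQ/dp = 0

J = np.block([[A, C.T], [C, np.zeros((m, m))]])    # saddle-point matrix of Equation 20-21

def u_of_p(p):
    """Solve the constrained forward problem of Equation 20-18 and return u_p."""
    sol = np.linalg.solve(J, np.concatenate([F @ p, B @ p]))
    return sol[:n]

# Equation 20-22: dQ/dp = [dQ/du_p, 0] J^{-1} [dL/dp; dM/dp]
dQdp = np.concatenate([q, np.zeros(m)]) @ np.linalg.solve(J, np.vstack([F, B]))

# Central finite-difference check of the same sensitivities
p0, h = rng.standard_normal(n_p), 1e-6
fd = np.array([(q @ u_of_p(p0 + h * e) - q @ u_of_p(p0 - h * e)) / (2 * h)
               for e in np.eye(n_p)])
print(np.allclose(dQdp, fd, atol=1e-5))            # True
```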
The accurate boundary flux degrees of freedom are obtained internally by solving a separate linear system for the flux degrees of freedom:

$$K_{\text{flux}}\, u_{\text{flux}} = L_{\text{flux}}(u_p, p)$$

where K_flux does not depend on either the parameters p or the solution. The load L_flux, on the other hand, can be a solution- and parameter-dependent quantity, and therefore the sensitivity solvers assemble the derivatives ∂L_flux/∂p and ∂L_flux/∂u_p.
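A minimal sketch of how the flux contribution to Equation 20-19 can be propagated, assuming the flux system has the reconstructed form K_flux u_flux = L_flux(u_p, p) given above. All arrays below are invented placeholders; in practice ∂L_flux/∂p and ∂L_flux/∂u_p are the assembled derivatives mentioned in the text, and ∂u_p/∂p comes from the forward sensitivity solve.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_flux, n_p = 6, 3, 2                              # state, flux, and sensitivity dimensions

K_flux = np.eye(n_flux) + 0.1 * rng.standard_normal((n_flux, n_flux))  # independent of p and u_p
dLflux_dp = rng.standard_normal((n_flux, n_p))        # assembled dL_flux/dp
dLflux_du = rng.standard_normal((n_flux, n))          # assembled dL_flux/du_p
dup_dp = rng.standard_normal((n, n_p))                # du_p/dp from the forward sensitivity solve
dQ_duflux = rng.standard_normal(n_flux)               # dQ/du_flux

# Chain rule for the flux DOFs: du_flux/dp = K_flux^{-1} (dL_flux/dp + dL_flux/du_p * du_p/dp)
duflux_dp = np.linalg.solve(K_flux, dLflux_dp + dLflux_du @ dup_dp)

# Flux term of Equation 20-19: (dQ/du_flux) * (du_flux/dp)
print(dQ_duflux @ duflux_dp)
```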
If the number of individual sensitivity variables, p_j, is small, Equation 20-21 can be solved for each right-hand side [∂L/∂p_j, ∂M/∂p_j]^T, and the solution is then inserted into Equation 20-19. This is the forward method, which in addition to the sensitivity dQ/dp returns the sensitivity of the solution, ∂u_p/∂p. The matrix J is in fact the same matrix as in the last linearization of the forward problem. The forward method therefore requires one additional back-substitution for each sensitivity variable.
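The loop structure of the forward method can be sketched as follows, again with invented matrices standing in for the assembled quantities. J is factorized once (in practice the factorization from the last forward linearization is reused), and each sensitivity variable costs one back-substitution.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n, m, n_p = 6, 2, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)    # K
C = rng.standard_normal((m, n))                     # N, with N_F = C^T assumed
F = rng.standard_normal((n, n_p))                   # columns dL/dp_j
B = rng.standard_normal((m, n_p))                   # columns dM/dp_j
q = rng.standard_normal(n)                          # dQ/du_p for Q = q . u_p

J = np.block([[A, C.T], [C, np.zeros((m, m))]])
lu, piv = lu_factor(J)                              # factorization of the last forward linearization

dQdp = np.empty(n_p)
dup_dp = np.empty((n, n_p))
for j in range(n_p):
    sens = lu_solve((lu, piv), np.concatenate([F[:, j], B[:, j]]))  # one back-substitution per p_j
    dup_dp[:, j] = sens[:n]                         # the solution sensitivity du_p/dp_j is a by-product
    dQdp[j] = q @ sens[:n]                          # insert into Equation 20-19 (dQ/dp = 0 here)
print(dQdp)
```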
If there are many sensitivity variables and the sensitivity of the solution itself, ∂u_p/∂p, is not required, the adjoint method is more efficient. It is based on using auxiliary variables u* and Λ*, known as the adjoint solution, to rewrite Equation 20-22:
$$\frac{dQ}{dp} = \frac{\partial Q}{\partial p} + \begin{bmatrix} u^{*\mathsf{T}} & \Lambda^{*\mathsf{T}} \end{bmatrix} \begin{bmatrix} \dfrac{\partial L}{\partial p} \\[6pt] \dfrac{\partial M}{\partial p} \end{bmatrix}, \qquad J^{\mathsf{T}} \begin{bmatrix} u^{*} \\ \Lambda^{*} \end{bmatrix} = \begin{bmatrix} \left(\dfrac{\partial Q}{\partial u_p}\right)^{\mathsf{T}} \\[6pt] 0 \end{bmatrix} \tag{20-23}$$
In this form, only one linear system of equations must be solved regardless of the number of sensitivity variables, followed by a simple scalar product for each variable. This is much faster than the forward method when the number of variables is large. The system matrix to be solved is the transpose of the last linearization of the forward problem. For the iterative linear solvers, this makes no significant difference. For the direct solvers, if J is symmetric or Hermitian, it makes no difference compared to the forward method, and the direct solvers can reuse the factorization. In the nonsymmetric case, MUMPS and PARDISO can reuse the factorization of J, while SPOOLES needs to compute a new factorization of J^T.
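Under the same invented setup as the forward-method sketch, the adjoint method amounts to a single transposed solve followed by one scalar product per sensitivity variable; the trans=1 argument of lu_solve stands in for reusing the factorization of J when solving with J^T.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)                      # same data as the forward-method sketch
n, m, n_p = 6, 2, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((m, n))
F = rng.standard_normal((n, n_p))
B = rng.standard_normal((m, n_p))
q = rng.standard_normal(n)

J = np.block([[A, C.T], [C, np.zeros((m, m))]])
lu, piv = lu_factor(J)

# Single adjoint solve of Equation 20-23: J^T [u*; Lambda*] = [(dQ/du_p)^T; 0]
adjoint = lu_solve((lu, piv), np.concatenate([q, np.zeros(m)]), trans=1)

# One scalar product per sensitivity variable: dQ/dp_j = [u*; Lambda*] . [dL/dp_j; dM/dp_j]
dQdp = adjoint @ np.vstack([F, B])
print(dQdp)                                          # agrees with the forward-method result above
```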
Segregated Sensitivity Solver
When the segregated solver is used together with sensitivity analysis, a segregated approach is also taken for the sensitivity problem. This is important in several respects, most importantly because it avoids increasing the computational requirements.
When using the segregated solver, you need to add the control variables to the correct segregated groups. From Equation 20-20, it is clear that for the forward sensitivity problem to be constrained correctly, the control variables must be added to all segregated groups where they are part of the constraints. For the adjoint method, the relevant equations are those in Equation 20-23, where the control variables are not involved. The correct constraint handling is accounted for after the segregated solver has converged, by applying the formula in Equation 20-23, so there is no need to explicitly add the control variables to any group.
See also Sensitivity in the COMSOL Multiphysics Programming Reference Manual.