About the SOR, SOR Gauge, SOR Line, and SOR Vector Iterative Solver Algorithms
The background information for the SOR, SOR Gauge, SOR Line, and SOR Vector attribute nodes is described in this section.
The SOR Method
The SOR (successive over-relaxation) method provides a simple and memory-efficient solver/preconditioner/smoother based on classical iteration methods for solving a linear system Ax = b. Given a relaxation factor ω (usually between 0 and 2), a sweep of the SOR method transforms an initial guess x0 into an improved approximation x1 = x0 + M⁻¹(b − Ax0), where the preconditioning matrix is M = L + ω⁻¹D. Here D is the diagonal part of A and L is the strictly lower triangular part of A. When ω = 1 (the default), the Gauss-Seidel method is obtained.
In the SORU method, M = U + ω⁻¹D, where U is the strictly upper triangular part of A. The SOR and SORU methods use a more accurate approximation of the matrix, which leads to fewer iterations but slightly more work per iteration than in the Jacobi method.
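To make the sweep concrete, the following is a minimal illustrative sketch in Python/NumPy (not part of the product), assuming a dense system matrix and using SciPy's triangular solver to apply M = L + ω⁻¹D (SOR) and M = U + ω⁻¹D (SORU) by substitution; the function names are hypothetical:

import numpy as np
from scipy.linalg import solve_triangular

def sor_sweep(A, b, x0, omega=1.0):
    # Preconditioning matrix M = L + D/omega (strictly lower triangle plus scaled diagonal)
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    M = L + D / omega
    # x1 = x0 + M^-1 (b - A x0), applied by forward substitution
    return x0 + solve_triangular(M, b - A @ x0, lower=True)

def soru_sweep(A, b, x0, omega=1.0):
    # Preconditioning matrix M = U + D/omega (strictly upper triangle plus scaled diagonal)
    U = np.triu(A, k=1)
    D = np.diag(np.diag(A))
    M = U + D / omega
    # Applied by backward substitution
    return x0 + solve_triangular(M, b - A @ x0, lower=False)

# One Gauss-Seidel sweep (omega = 1) on a small test system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x1 = sor_sweep(A, b, np.zeros(3))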
The SSOR (symmetric successive over-relaxation) method is one SOR sweep followed by a SORU sweep. The output x1 for an input x0 also comes from the above formula but with M = ω(2 − ω)⁻¹(L + ω⁻¹D)D⁻¹(U + ω⁻¹D).
When the system matrix A is symmetric, the SSOR method has the advantage that the preconditioning matrix M is symmetric. Symmetry of the preconditioner matrix is necessary when using the conjugate gradients iterative method. In such cases, the SSOR preconditioner is preferable to the SOR preconditioner.
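Under the same assumptions, and reusing the hypothetical sor_sweep and soru_sweep helpers sketched above, an SSOR sweep is simply a forward sweep followed by a backward sweep:

def ssor_sweep(A, b, x0, omega=1.0):
    # One SOR (forward) sweep followed by one SORU (backward) sweep
    x_half = sor_sweep(A, b, x0, omega)
    return soru_sweep(A, b, x_half, omega)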
The SSOR Gauge, SOR Gauge, and SORU Gauge Algorithms
The SSOR Gauge, SOR Gauge, and SORU Gauge algorithms are described in this section.
Magnetostatic problems are often formulated in terms of a magnetic vector potential. The solution of problems formulated with such a potential is in general not unique. Infinitely many vector potentials result in the same magnetic field, which typically is the quantity of interest. A finite element discretization of such a problem results in a singular linear system of equations, Ax = b. Despite being singular, these systems can be solved using iterative solvers if the right-hand side of the discretized problem is in the range of the matrix A. For discretized magnetostatic problems, the range of A consists of all divergence-free vectors. Even if the right-hand side of the mathematical problem is divergence free, the right-hand side of the finite element discretization might not be numerically divergence free. To ensure that b is in the range of A, SOR Gauge performs a divergence cleaning of the right-hand side by using the matrices T and Tᵀ, similar to the algorithm for the SOR Vector iterative method. To this end, the system TᵀTψ = −Tᵀb is first solved. Adding Tψ to b then makes the numerical divergence of the right-hand side small.
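As an illustrative sketch of this divergence cleaning step (assuming a sparse discrete gradient matrix T is available; the helper name and the toy T below are hypothetical, and the product's internal implementation may differ), the auxiliary system TᵀTψ = −Tᵀb can be solved in a least-squares sense and the correction Tψ added to b:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def clean_divergence(T, b):
    # Solve T^T T psi = -T^T b (here via least squares, which also handles the
    # constant null space of the discrete gradient T), then add T psi to b so
    # that the numerical divergence T^T (b + T psi) becomes small.
    psi = spla.lsqr(T, -b)[0]
    return b + T @ psi

# Toy example: T as a 1D finite-difference "gradient" (illustrative only)
n = 5
T = sp.diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n)).tocsr()
b = np.array([0.3, -0.1, 0.4, 0.2])
b_clean = clean_divergence(T, b)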
The SOR Line Algorithm
In regions where the mesh is sufficiently anisotropic, the algorithm forms lines of nodes (SOR Line) that connect nodes that are relatively close to each other (Ref. 40). Thus, in a boundary layer, a line is a curve along the thin direction of the mesh elements. A smoothing iteration does two things:
Like the SOR and Jacobi smoothers/preconditioners, the algorithm gives an error message if it finds zeros on the diagonal of the system matrix.
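The smoothing iteration is not spelled out in detail here, but one plausible sketch of line-based smoothing, assuming the lines of nodes have already been detected, is a block Gauss-Seidel style update in which each line's unknowns are solved for simultaneously; the helper below is illustrative only and also shows the diagonal-zero check mentioned above:

import numpy as np

def line_smoothing_sweep(A, b, x, lines):
    # One block Gauss-Seidel style sweep over a dense matrix A: for each line
    # (a list of node indices), update that line's unknowns simultaneously,
    # with all other unknowns held fixed.
    x = x.copy()
    for idx in lines:
        idx = np.asarray(idx)
        A_block = A[np.ix_(idx, idx)]
        # Like the SOR and Jacobi smoothers, fail on zeros on the diagonal
        if np.any(np.diag(A_block) == 0.0):
            raise ValueError("zero on the diagonal of the system matrix")
        r = b[idx] - A[idx, :] @ x
        x[idx] += np.linalg.solve(A_block, r)
    return x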
The SOR Vector Algorithm
The SOR Vector algorithm is an implementation of the concepts in Ref. 37 and Ref. 24. The algorithm applies SOR iterations to the main linear equation Ax = b but also makes SOR iterations on a projected linear equation TᵀATy = Tᵀb. Here the projection matrix, T, is the discrete gradient operator, which takes values of a scalar field in the mesh vertices and computes the vector-element representation of its gradient. Loosely speaking, the argument for using this projection is the following: Let the linear equation Ax = b represent the discretization of a PDE problem originating from the vector Helmholtz equation

∇ × (a∇ × E) + cE = F
for the unknown vector field E, where a and c are scalars, and F is some right-hand side vector. Standard preconditioners/smoothers cannot smooth the error in the null space of the operator ∇ × (a∇ × ⋅). This null space is the range of the gradient operator. This algorithm adds a correction E → E + ∇ϕ to the standard SOR smoothed solution (or residual), where it computes ϕ from SOR iterations on a projected auxiliary problem. The projected problem is obtained by taking the divergence (or, discretely, applying Tᵀ) of the Helmholtz equation and plugging in the correction. You then obtain

∇ ⋅ (c(E + ∇ϕ)) = ∇ ⋅ F
(for clarity, boundary constraints are disregarded), which, if c is definite (strictly positive or strictly negative), is a standard elliptic type of equation for the scalar field ϕ. 
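A hedged sketch of how the two sets of SOR iterations might be combined (reusing the hypothetical sor_sweep helper from above; the exact coupling and sweep ordering used by the product are not specified here): apply an SOR sweep to Ax = b, smooth the projected residual equation TᵀATψ = Tᵀ(b − Ax), and add the gradient correction Tψ:

import numpy as np

def sor_vector_smooth(A, b, x, T, omega=1.0):
    # 1) Standard SOR sweep on the main system A x = b
    x = sor_sweep(A, b, x, omega)
    # 2) SOR sweep on the projected system (T^T A T) psi = T^T (b - A x),
    #    starting from psi = 0 (one plausible way to couple the two systems)
    A_proj = T.T @ A @ T
    r_proj = T.T @ (b - A @ x)
    psi = sor_sweep(A_proj, r_proj, np.zeros(A_proj.shape[0]), omega)
    # 3) Gradient correction x -> x + T psi
    return x + T @ psi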
When using this algorithm as a smoother for the multigrid solver/preconditioner, it is important to generate nested meshes so that the projected problem has the correct discrete properties. The algorithm also performs an element assembly on all mesh levels (controlled by the multigrid Assemble on all levels check box). You can generate nested meshes through manual mesh refinements or do so automatically by selecting Refine mesh from the Hierarchy generation method list in the Multigrid node.
The projection matrix T is computed in such a way that nonvector shape functions are disregarded, and you can therefore use it in a multiphysics setting. It can also handle contributions from different geometries. Nonvector shape function variables are not affected by the correction from the projected system, and the effects on them are the same as when you apply the standard SOR algorithm.