Settings for Contact Nodes
The Contact Node
Penalty Factor
An important parameter in the settings for the Contact node is the penalty factor. When running into convergence problems, check the penalty factor settings and consider changing the current value. The penalty factor is used by all contact methods, but its interpretation differs between them.
High values of the penalty factor can lead to an ill-conditioned stiffness matrix and convergence problems in the Newton iterations. This is often indicated by the damping factor reported by the solver staying below 1 for many Newton iterations, or by the structure “jumping” into an unphysical state. Large errors returned from the linear solver are also an indication that the stiffness matrix is ill-conditioned. If this occurs, decrease the penalty factor.
The default value for the penalty factor is based on a characteristic stiffness: an “equivalent” Young’s modulus (Eequ) of the material on the destination side. For linear elastic isotropic materials, Eequ is the actual Young’s modulus. For other types of materials, Eequ is defined by an estimate of a similar stiffness at zero strain. For materials that are user defined or in other ways nonstandard (for example, anisotropic with large differences in stiffness in different directions), Eequ might need to be replaced with another estimate.
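As a schematic illustration of how a characteristic stiffness translates into a penalty factor, the sketch below estimates a penalty stiffness from an equivalent Young’s modulus and a characteristic element size. The scaling used here (Eequ divided by a length scale, times a tuning factor) is an assumption chosen for illustration only, not the exact built-in default.

```python
def estimate_penalty_factor(E_equ, h_elem, scale=1.0):
    """Rough penalty-factor estimate (pressure per unit penetration).

    E_equ  : equivalent Young's modulus of the destination material [Pa]
    h_elem : characteristic element size at the contact boundary [m]
    scale  : dimensionless tuning factor (assumption; decrease it if the
             stiffness matrix becomes ill-conditioned, increase it if the
             penetration is unacceptably large)
    """
    return scale * E_equ / h_elem

# Example: steel-like stiffness and 1 mm elements
pn = estimate_penalty_factor(E_equ=200e9, h_elem=1e-3)
print(f"Estimated penalty factor: {pn:.3e} N/m^3")
```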
When using the augmented Lagrangian formulation with a segregated solution method, having a well-tuned penalty factor is important for the performance of the outer contact iterations. The default value is selected as a compromise between speed and stability, but with more weight on stability. The strategy is for each new step (parametric step or time step) to start with a softened penalty factor, which is then increased over the first four iterations. The purpose is to stabilize the problem in case there are large overclosures during the first iterations. This is called relaxation.
In a situation where the contact is already well established, however, using relaxation costs extra iterations, and it may even lead to a loss of convergence.
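The following minimal sketch illustrates the idea behind penalty relaxation in a segregated augmented Lagrangian loop, using a single degree of freedom pressed against a rigid obstacle. The ramp profile, the 1D update rule, and all numerical values are simplified assumptions chosen for illustration; they do not reproduce the built-in algorithm.

```python
# 1D model problem: a linear spring (stiffness k) loaded by force F
# against a rigid wall located at gap g0. Contact is enforced with the
# augmented Lagrangian method; lam is the contact force multiplier.
k, F, g0 = 1.0e6, 1.0e3, 0.5e-3   # N/m, N, m (illustrative values)
pn_final = 1.0e8                   # target penalty factor (assumption)

lam = 0.0
for it in range(1, 21):
    # Relaxation: soften the penalty factor during the first iterations
    ramp = min(1.0, 0.25 * it)     # reaches the full value after 4 iterations
    pn = ramp * pn_final

    # Inner (displacement) solve with the current multiplier and penalty
    u_trial = (F - lam + pn * g0) / (k + pn)      # assuming active contact
    if lam + pn * (u_trial - g0) <= 0.0:          # contact actually open
        u_trial = F / k
    # Outer (multiplier) update -- the "augmentation"
    lam_new = max(0.0, lam + pn * (u_trial - g0))
    if abs(lam_new - lam) < 1e-6 * max(lam_new, 1.0) and it > 4:
        lam = lam_new
        break
    lam = lam_new

print(f"Converged after {it} outer iterations, contact force = {lam:.1f} N")
```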
For this method, the penalty factor can be tuned in several ways. You have three basic choices, ranging from simple to advanced:
With a Preset penalty factor, you can choose to have it tuned for Stability, Speed, or Bending. With Stability, relaxation is used in every step. With Speed, a constant penalty factor is used, equal to the final value reached when using Stability. For Bending, a constant and low penalty factor is used in all iterations; the value corresponds to the initial value when using Stability. As the name implies, this option is intended for bending-dominated problems where the structural stiffness is much lower than the bulk stiffness of the material.
With Manual tuning, you can adjust the magnitude of the penalty factor, and also the relaxation strategy.
With User defined, you can enter any expression for the penalty factor.
Some hints for selecting the penalty factor for the segregated augmented Lagrangian method:
Trigger Cutback
If, during the iterations, a contact problem comes into a state where it is far from the converged solution, it is unlikely that the solution will ever converge. In such a case, much computing time can be spent before the maximum number of iterations is reached and the solver makes an attempt with a smaller time or parameter step. To speed up this process, you can select the Trigger cutback check box when using the augmented Lagrangian method. You can then enter a logical expression that, when fulfilled, forces the solver to immediately abandon the iterations and try a smaller step. Such an expression can, for example, be a limit on the displacement (like solid.disp > 5[mm]), based on what is physically realizable for the structure. The expression is evaluated in all points on the boundary, but you can also use a global value, like an average or a maximum.
Contact Surface Offset and Adjustment
It is possible to assign an offset to both the source and destination boundaries. When an offset is given, the boundary used in the computations is not the geometrical boundary, but a virtual boundary displaced by the offset value. You can use this option for several purposes:
When the source and destination boundaries are curved, the discretization introduced by the meshing may lead to small variations in the computed distance between the source and destination boundaries, even though the geometrical shapes of the two objects are ideal. When modeling, for example, a shrinkage fit, this effect can cause significant fluctuations in the contact pressure. If you select Force zero initial gap, the computed distance from destination to source is adjusted by the initial gap distance detected by the contact search. Positive gap distances smaller than the tolerance Δgap are adjusted to be zero. By default, Δgap is set to Inf, which means that all gaps and overclosures detected are adjusted to be zero. This adjustment can be combined with an offset; the offset is then applied to the adjusted gap value.
Only the gap computation is affected by this setting; the mesh as such is not adjusted. This type of adjustment is most useful when the sliding is small, so that the gap distance is always computed between the same points on the source and destination.
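A minimal sketch of the gap adjustment logic described above, assuming that the adjustment simply subtracts the initially detected gap when it is smaller than the tolerance, and that offsets shift the adjusted gap. The function and variable names are illustrative, and the exact built-in order of operations may differ.

```python
def effective_gap(raw_gap, initial_gap, dgap_tol=float("inf"),
                  offset_src=0.0, offset_dst=0.0, force_zero_initial_gap=True):
    """Compute the gap used in the contact condition (illustrative sketch).

    raw_gap     : geometric distance from destination to source (negative = overclosure)
    initial_gap : gap detected at the same point by the initial contact search
    dgap_tol    : only initial gaps smaller than this tolerance are removed
    offsets     : virtual displacements of the source/destination boundaries
    """
    gap = raw_gap
    if force_zero_initial_gap and initial_gap < dgap_tol:
        gap -= initial_gap                 # remove the mesh-induced initial gap
    return gap - offset_src - offset_dst   # offsets applied to the adjusted gap

# Example: a 2 um initial gap caused by faceting of a curved boundary
print(effective_gap(raw_gap=2e-6, initial_gap=2e-6))   # -> 0.0
```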
You can only apply an offset to the source boundaries if they belong to the same physics interface as the destination boundaries.
Initial Value
In the augmented Lagrangian method, where the contact pressure is a dependent variable, it can be given an initial value. In force-controlled contact problems where no other stiffness than the contact prohibits the deformation, the initial contact pressure is crucial for convergence. If it is too low, the parts might pass through each other in the first iteration. If it is too high, they will never come into contact.
Discretization
When using the augmented Lagrangian method, it is possible to change the type and order of the shape functions used for the contact pressure and friction force degrees of freedom. The default is linear shape functions, which ensures that there are no over-constraints in the contact interface. It is allowed to use a shape function order that is equal to or lower than the order of the displacement field. Increasing the order of the contact variables from the default can improve how accurately the contact conditions are enforced, but can impair convergence and increase the computational cost.
For a discretization other than Linear, the lumped solver is no longer optimal for the contact pressure update when using a segregated solution method. In such cases, a standard segregated step should be used. The default solver generation takes this into account, but if you later modify the discretization, you should update the solver sequence.
Quadrature Settings
The weak equations set up by the Contact node and its subnodes typically involve discontinuous functions. These originate from the contact mapping, where the source and destination meshes, in most cases, are nonconforming. The default quadrature used in the numerical integration of these expressions is the same as that used by the displacement field. For a quadratic displacement field, this means an integration order of four.
In most situations, the default quadrature provides sufficient accuracy of the numerical integration. However, it is sometimes necessary to increase the integration order for the contact weak equations. This can improve the stability of the contact model since it reduces numerical errors that might occur during assembly of the system matrices. Note that using too high an integration order can significantly increase the cost of assembling the matrices.
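To illustrate why a higher integration order can help when the integrand is discontinuous, as it typically is across nonconforming source and destination meshes, the generic sketch below integrates a step function with Gauss–Legendre quadrature of increasing order. This is a plain numerical illustration, not the contact assembly itself.

```python
import numpy as np

# Integrate a discontinuous (step) function on [-1, 1] with Gauss-Legendre
# quadrature of increasing order. The exact integral is 1.3.
f = lambda x: np.where(x < 0.3, 1.0, 0.0)
exact = 1.3   # length of the subinterval [-1, 0.3] where f = 1

for npts in (2, 3, 5, 10, 20):
    x, w = np.polynomial.legendre.leggauss(npts)
    approx = np.sum(w * f(x))
    print(f"{npts:2d} points: integral = {approx:.4f}, error = {abs(approx - exact):.4f}")
```

The error decreases only slowly with the order for a discontinuous integrand, which is why such integrals sometimes need a higher integration order than the default.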
Jacobian Contribution
In the Advanced section of the Contact node, there is an option to specify the type of Jacobian contribution from the contact equations. The default Automatic option chooses a suitable setting depending on the mapping method used by the contact pair. If controlled manually, the Nonsymmetric option is the preferable choice, especially when the source boundaries undergo large deformations, since it is more robust. The Symmetric option can be attractive for large models since it preserves the symmetry of the global stiffness matrix, as long as no other features cause it to be nonsymmetric. This can decrease the solution time and memory requirements when solving the model.
The Friction Node
When adding a Friction node, you can specify a constitutive model (friction model) for the behavior of the tangential contact. This model includes conditions for switching between sticking and sliding, as well as computation of the current friction forces.
Friction Parameters
The friction model specifies the threshold for the friction force in the contact pair. If the computed (trial) friction force is below this threshold value, the contact is in a stick state; if it is above the threshold, the contact is in a slip (or sliding) state.
Two predefined friction models are available, both based on the classical Coulomb law, where the friction force is proportional to the contact pressure through the friction coefficient. Both Coulomb laws are additionally generalized by allowing the specification of a minimum (cohesion) sliding resistance and a cap that sets the maximum tangential traction.
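A minimal sketch of the generalized Coulomb threshold and the resulting stick/slip decision, assuming that the threshold is the Coulomb term plus a cohesion, capped by a maximum traction; the names and numerical values are illustrative.

```python
def friction_state(trial_traction, contact_pressure, mu,
                   cohesion=0.0, max_traction=float("inf")):
    """Tangential traction and state for a generalized Coulomb law (sketch).

    trial_traction   : magnitude of the trial (stick) friction traction
    contact_pressure : current contact pressure (>= 0)
    mu               : friction coefficient
    cohesion         : minimum sliding resistance at zero pressure
    max_traction     : cap on the tangential traction
    """
    threshold = min(mu * contact_pressure + cohesion, max_traction)
    if trial_traction <= threshold:
        return trial_traction, "stick"
    return threshold, "slip"        # traction limited to the threshold

print(friction_state(trial_traction=40.0, contact_pressure=100.0, mu=0.5))  # -> (40.0, 'stick')
print(friction_state(trial_traction=40.0, contact_pressure=100.0, mu=0.3))  # -> (30.0, 'slip')
```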
It is also possible to define the threshold for the friction force as an arbitrary expression that may depend on any quantity in the model, for example temperature or position. The only limitation is that the expression may not implicitly depend on the current value of the friction force.
Friction Force Penalty Factor Control
This section provides settings similar to those described under Penalty Factor for the Contact node, but the penalty factor is here used to regularize the stick constraint. The same considerations for how to set an appropriate value apply. For convenience, it is also possible to use the penalty factor set in the parent Contact node.
This section is not available when the Nitsche method is used; the penalty factor set in the parent Contact node is then used for friction as well.
The Slip Velocity Node
The Slip Velocity node facilitates a simplified form of slip friction modeling, which can be used when the direction and speed of the sliding are known. The same friction models as for the Friction node are available. However, it is here assumed that the tangential contact is in a sliding state, and that the slip velocity is known beforehand. The slip velocity is supplied as a user input, given in the local coordinate system.
Knowing the slip velocity greatly simplifies the computation of friction forces. There is no need to determine the transition between stick and slip contact, which can be difficult.
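Because the slip direction is prescribed, the friction traction can be evaluated directly from the threshold, as in this simplified two-dimensional sketch (a plain Coulomb threshold is assumed; names and values are illustrative).

```python
import numpy as np

def slip_friction_traction(v_slip, contact_pressure, mu):
    """Friction traction for a prescribed slip velocity (sketch).

    The traction magnitude is the Coulomb threshold, and its direction
    opposes the known slip velocity; no stick/slip switching is needed.
    """
    v = np.asarray(v_slip, dtype=float)
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return np.zeros_like(v)
    return -mu * contact_pressure * v / speed

# Example: sliding at 20 mm/s in the first tangential direction
print(slip_friction_traction([0.02, 0.0], contact_pressure=1e5, mu=0.3))
# traction of magnitude 30 kPa, directed against the sliding direction
```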
The Adhesion Node
When using the penalty method, you can specify that the boundaries in the contact pair should stick to each other after coming into contact by adding an Adhesion node.
The adhesive layer is conceptually without thickness, but by specifying an offset in the Contact node, you can to some extent include the dimensions of the adhesive layer.
The adhesive layer always has a finite stiffness. For a glue layer, this represents the true stiffness. For a more conceptual joining of two boundaries, this stiffness should be considered in the same way as the penalty stiffness in the contact formulation. The stiffness can differ between tension and compression: In compression the stiffness is always taken as the penalty stiffness, whereas you can change the tensile stiffness.
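The asymmetric normal stiffness of the adhesive layer can be sketched as follows, assuming a linear traction–gap relation with the penalty stiffness in compression and a separate, user-set stiffness in tension; the linear relation and the names are illustrative.

```python
def adhesive_normal_traction(gap, k_penalty, k_tension):
    """Normal traction across an adhesive layer (illustrative sketch).

    gap       : separation between the boundaries (negative = overclosure)
    k_penalty : contact penalty stiffness, used in compression
    k_tension : adhesive stiffness, used in tension (may differ)
    """
    if gap < 0.0:
        return k_penalty * gap      # compressive traction (negative sign)
    return k_tension * gap          # tensile traction carried by the adhesive

print(adhesive_normal_traction(-1e-6, k_penalty=1e12, k_tension=1e11))  # -1e6 Pa
print(adhesive_normal_traction(+1e-6, k_penalty=1e12, k_tension=1e11))  # +1e5 Pa
```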
The Decohesion Node
When adhesion is active, it is possible to break the bond between the source and destination boundaries by adding a Decohesion subnode to the Contact node. Modeling decohesion requires that an Adhesion node is present and active under the same parent Contact node.
Decohesion defines a cohesive zone model (CZM) based on interface damage mechanics on the adhesive layer. Damage is assumed to be a scalar variable that starts at 0 and grows to 1 during decohesion, and in principle degrades the stiffness of the adhesive layer. Since damage is a scalar, both the normal and tangential stiffness components degrade simultaneously, irrespective of the actual loading direction. However, the penalty stiffness of the contact condition is not affected by damage.
Two alternative CZMs are available. The Displacement-based damage models define damage growth as a function of a mixed-mode displacement quantity. They come with several traction separation laws that associate the onset of damage with the peak strength of the interface. For some of them, it is possible to choose between different mixed-mode failure criteria. The Energy-based damage models define damage growth as a function of the stored undamaged elastic energy density of the interface. They also come with several different traction separation laws. However, these are more general and define the onset of damage at an arbitrary elastic energy density. In principle, you can define the model so that damage initiates immediately during loading of the adhesive layer, that is, for zero energy density. The strength of the interface is then determined by the critical energy release rate and the shape of the damage evolution function. In this way, the energy-based damage models can be viewed as a regularization of linear elastic fracture mechanics.
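As an illustration of a displacement-based damage model, the sketch below implements a bilinear (linear softening) traction separation law for a single opening mode: damage initiates at the peak-strength displacement and reaches one at the failure displacement given by the critical energy release rate. This is a standard textbook form, not necessarily identical to any of the built-in laws.

```python
def bilinear_czm(delta, K, sigma_max, Gc):
    """Single-mode bilinear cohesive law (illustrative sketch).

    delta     : opening displacement
    K         : initial (undamaged) interface stiffness
    sigma_max : peak strength, defines the onset of damage
    Gc        : critical energy release rate, defines complete failure
    Returns (traction, damage).
    """
    delta0 = sigma_max / K           # damage initiation displacement
    deltaf = 2.0 * Gc / sigma_max    # failure displacement (area under curve = Gc)
    if delta <= delta0:
        d = 0.0
    elif delta >= deltaf:
        d = 1.0
    else:
        # Linear softening: damage chosen so the traction decreases linearly to zero
        d = deltaf * (delta - delta0) / (delta * (deltaf - delta0))
    return (1.0 - d) * K * delta, d

for delta in (0.5e-6, 1e-6, 5e-6, 10e-6):
    t, d = bilinear_czm(delta, K=1e12, sigma_max=1e6, Gc=5.0)
    print(f"delta = {delta:.1e} m: traction = {t:.3e} Pa, damage = {d:.2f}")
```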
Decohesion is an inherently unstable process. The elastic energy in the strained adhesive layer is released during decohesion. Numerically, the decreasing branch of the traction-separation curve manifests itself as a local negative stiffness. Such problems are only possible to solve as long as the surrounding material can absorb the released energy. The numerical stability is, furthermore, closely coupled to the physical stability of the structure. The following points can help to set up a model with decohesion and to overcome problems with convergence.
When possible, drive the structure by prescribed displacements rather than forces, since this stabilizes the decohesion process. Sometimes it is not possible to use prescribed displacements, for example if the load is distributed. You can then add a Global Equation to control the loading rate by some other quantity that increases monotonically. This is the same technique as the one used for postbuckling problems.
To improve the robustness of the solver, it is sometimes beneficial to modify the settings in the Method and Termination section of the Fully Coupled or Segregated nodes in the solver sequence. For example, allow a larger number of iterations or try a different nonlinear method. Often, the Constant (Newton) method can improve the convergence of models with decohesion.
The robustness of the solver can also be improved by modifying the parameter or time-stepping algorithm. For a stationary study, you can tune the step size in the Parametric node, and for a time-dependent study, you can modify the time stepping of the Time-Dependent Solver. It is often a good idea to reduce the maximum allowed step size and to allow a smaller step size than the default. Note that if the maximum allowed step size is too large, the solver might bypass the decohesion process altogether; in other words, even though a converged solution is obtained, it might be invalid.
The solution of the unstable failure due to decohesion is, to some degree, always mesh dependent; see, for example, Ref. 1. It is therefore good practice to make sure that the mesh on the interface and in its vicinity is fine enough to allow the energy released during decohesion to properly redistribute in the structure. This can help avoid solution jumps where several mesh elements are completely damaged in a single step. Such solution jumps can be difficult for the solver to get past, and even if it does, the solution after the jump might be invalid.
For time-dependent studies, it is possible to regularize the CZM with a viscous delay by selecting Delayed damage in the Regularization list. This option adds a delay to the release of energy, which is controlled by the Characteristic time τ. Using this option can help to suppress the instability of the solution when the step size or mesh size is too large. If the viscous damage is used to stabilize a rate-independent decohesion problem, the value of τ must be chosen with care. As a rule of thumb, τ should be at least one or two orders of magnitude smaller than the expected time step.
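The effect of the Delayed damage regularization can be pictured as a first-order delay of the damage variable: the regularized damage relaxes toward the instantaneous damage with the characteristic time τ. The update below is a simple backward Euler discretization of that idea; the exact built-in form may differ.

```python
def delayed_damage_update(d_target, d_visc_old, dt, tau):
    """One backward-Euler step of d_visc' = (d_target - d_visc) / tau."""
    return (d_visc_old + dt / tau * d_target) / (1.0 + dt / tau)

# Example: instantaneous damage jumps to 1; tau is ~100x smaller than the step
d_visc, dt, tau = 0.0, 1e-3, 1e-5
for step in range(3):
    d_visc = delayed_damage_update(1.0, d_visc, dt, tau)
    print(f"step {step + 1}: regularized damage = {d_visc:.4f}")
```

With τ much smaller than the time step, the regularized damage follows the instantaneous damage closely, which is consistent with the rule of thumb above.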
For an example showing a decohesion analysis, including how to use a global equation to control an unstable problem, see Mixed-Mode Debonding of a Laminated Composite: Application Library path Structural_Mechanics_Module/Contact_and_Friction/cohesive_zone_debonding.
The Wear Node
By adding a Wear subnode to a Contact node, it is possible to model adhesive or abrasive wear of the material when the contacting boundaries are sliding along each other. Since wear involves solving evolution equations, the Wear node only adds a contribution for time-dependent studies. Moreover, wear is typically a slow process where dynamic effects are of small significance. You should, therefore, usually set Structural transient behavior to Quasistatic in the Structural Transient Behavior section of the physics interface settings.
The most general technique to model the removal of material during the wear process relies on the deformed geometry concept. When selecting the Deformed geometry formulation, the wear feature adds a (hidden) Deforming Domain feature that controls the material frame through adaptive mesh smoothing. The wear, as computed in the Wear node, is fed as a (hidden) Prescribed Normal Mesh Displacement boundary condition to the deforming domain, and thus describes the actual removal of material from the geometry. When using this formulation, you must be aware that the adaptive mesh means that state variables stored in Gauss points, for example plastic strains or creep strains, will not represent the same material points all the time. Whether or not this effect is acceptable must be judged on a case-by-case basis. Unless the amount of material that is removed is large, or gradients are strong, this is mainly an issue close to the boundary where material is removed by the wear process.
Alternatively, wear can be incorporated as an offset in the contact condition. This formulation is computationally less expensive, and is suitable as long as only small amounts of material are removed, and the wear does not change the orientation of the normal to the boundary significantly.
The slip velocity used for the wear computation can be obtained from either a Friction node or a Slip Velocity node, so one of these two nodes should be present and active under the same Contact parent node. For the Generalized Archard wear model, this is a requirement. In most cases, the orientation of the slip velocity is known a priori in a wear analysis, in which case Slip Velocity provides the more efficient solution.
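A classical Archard-type relation ties the local wear rate to the contact pressure and slip speed, dh/dt = k·p·v_slip. The sketch below accumulates the wear depth from such a relation with explicit time stepping and illustrative parameter values; it is a generic illustration, not necessarily the built-in Generalized Archard model.

```python
def archard_wear_depth(pressures, slip_speeds, dt, k_wear):
    """Accumulate wear depth h from dh/dt = k_wear * p * v_slip (sketch).

    pressures   : contact pressure history at one boundary point [Pa]
    slip_speeds : slip speed history at the same point [m/s]
    dt          : time step [s]
    k_wear      : dimensional wear coefficient [1/Pa]
    """
    h = 0.0
    for p, v in zip(pressures, slip_speeds):
        h += k_wear * p * v * dt    # explicit (forward Euler) accumulation
    return h

# Example: constant 1 MPa pressure, 10 mm/s sliding, 100 steps of 0.1 s
h = archard_wear_depth([1e6] * 100, [0.01] * 100, dt=0.1, k_wear=1e-13)
print(f"wear depth = {h * 1e6:.3f} um")
```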
In general, modeling wear on the destination side is slightly more accurate, since it is there that the contact pressure and slip velocity are originally computed. When modeling wear on the source side, these quantities are mapped from the destination boundary. Multiple Wear nodes under the same Contact node are additive, which means that it is possible to model wear on both the source and destination simultaneously. However, adding multiple wear contributions to either the source or the destination may give unphysical results. On the source side, it is also possible to use the Rigid Material material model; this is not permitted on the destination due to general restrictions of the Contact node.
Contact Analysis Theory in the Structural Mechanics Theory chapter.