(Lecture Notes in Mathematics, 2318) Jean Deteix, Thierno Diop, Michel Fortin - Numerical Methods for Mixed Finite Element Problems: Applications to Incompressible Materials and Contact Problems - Springer
Volume 2318
Editors-in-Chief
Jean-Michel Morel, CMLA, ENS, Cachan, France
Bernard Teissier, IMJ-PRG, Paris, France
Series Editors
Karin Baur, University of Leeds, Leeds, UK
Michel Brion, UGA, Grenoble, France
Annette Huber, Albert Ludwig University, Freiburg, Germany
Davar Khoshnevisan, The University of Utah, Salt Lake City, UT, USA
Ioannis Kontoyiannis, University of Cambridge, Cambridge, UK
Angela Kunoth, University of Cologne, Cologne, Germany
Ariane Mézard, IMJ-PRG, Paris, France
Mark Podolskij, University of Luxembourg, Esch-sur-Alzette, Luxembourg
Mark Pollicott, Mathematics Institute, University of Warwick, Coventry, UK
Sylvia Serfaty, NYU Courant, New York, NY, USA
László Székelyhidi, Institute of Mathematics, Leipzig University, Leipzig, Germany
Gabriele Vezzosi, UniFI, Florence, Italy
Anna Wienhard, Ruprecht Karl University, Heidelberg, Germany
This series reports on new developments in all areas of mathematics and their
applications - quickly, informally and at a high level. Mathematical texts analysing
new developments in modelling and numerical simulation are welcome. The type of
material considered for publication includes:
1. Research monographs
2. Lectures on a new field or presentations of a new angle in a classical field
3. Summer schools and intensive courses on topics of current research.
Texts which are out of print but still in demand may also be considered if they fall
within these categories. The timeliness of a manuscript is sometimes more important
than its form, which may be preliminary or tentative.
Titles from this series are indexed by Scopus, Web of Science, Mathematical
Reviews, and zbMATH.
Jean Deteix • Thierno Diop • Michel Fortin
Numerical Methods
for Mixed Finite Element
Problems
Applications to Incompressible Materials
and Contact Problems
Jean Deteix
GIREF, Département de Mathématiques et de Statistique
Université Laval
Québec, QC, Canada

Thierno Diop
GIREF, Département de Mathématiques et de Statistique
Université Laval
Québec, QC, Canada

Michel Fortin
GIREF, Département de Mathématiques et de Statistique
Université Laval
Québec, QC, Canada
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents
1 Introduction
2 Mixed Problems
  2.1 Some Reminders About Mixed Problems
    2.1.1 The Saddle Point Formulation
    2.1.2 Existence of a Solution
    2.1.3 Dual Problem
    2.1.4 A More General Case: A Regular Perturbation
    2.1.5 The Case b(v, q) = (Bv, q)_Q
  2.2 The Discrete Problem
    2.2.1 Error Estimates
    2.2.2 The Matricial Form of the Discrete Problem
    2.2.3 The Discrete Dual Problem: The Schur Complement
  2.3 Augmented Lagrangian
    2.3.1 Augmented or Regularised Lagrangians
    2.3.2 Discrete Augmented Lagrangian in Matrix Form
    2.3.3 Augmented Lagrangian and the Condition Number of the Dual Problem
    2.3.4 Augmented Lagrangian: An Iterated Penalty
3 Iterative Solvers for Mixed Problems
  3.1 Classical Iterative Methods
    3.1.1 Some General Points
    3.1.2 The Preconditioned Conjugate Gradient Method
    3.1.3 Constrained Problems: Projected Gradient and Variants
    3.1.4 Hierarchical Basis and Multigrid Preconditioning
    3.1.5 Conjugate Residuals, Minres, Gmres and the Generalised Conjugate Residual Algorithm
  3.2 Preconditioners for the Mixed Problem
    3.2.1 Factorisation of the System
Bibliography
Index
Chapter 1
Introduction
Mixed Finite Element Methods are often discarded because they lead to indefinite
systems which are more difficult to solve than the nice positive definite problems of
standard methods. Indeed, solving indefinite systems is a challenge: direct methods
[4, 14, 78] might need renumbering (see [28]) of the equations and standard iterative
methods [22, 81, 86] are likely to stagnate or to diverge as proved in [86]. As an
example, consider the classical conjugate gradient method. Applied to a symmetric
indefinite problem it will generate a diverging sequence. As the conjugate gradient
method is (in exact arithmetic) a direct method, it will yield the exact solution if
the problem is small enough to avoid losing orthogonality. Applying a minimum
residual method to the same problem will in most cases yield stagnation.
These two classical methods are the simplest in a list which grows constantly.
This monograph does not intend to introduce new iteration methods. We shall rely mostly on existing packages, in particular PETSc from Argonne National Laboratory [11].
Our concern is the solution of the algebraic systems associated with mixed discretisations.
Several approaches (see, for example, [8, 15, 43]) exist in the literature to solve
this type of problem but convergence is not always guaranteed. They are indefinite
systems, but also structured systems, associated with matrices of the form

$$\begin{pmatrix} A & B^t \\ B & 0 \end{pmatrix}. \qquad (1.1)$$
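For concreteness, here is a minimal sketch (not from the book) of assembling a matrix with this block structure in SciPy; A and B are small random placeholders standing for the discretised operators:

import numpy as np
import scipy.sparse as sp

n, m = 6, 2                                     # sizes of u and p (illustrative)
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = sp.csr_matrix(M @ M.T + n * np.eye(n))      # SPD block standing for A
B = sp.csr_matrix(rng.standard_normal((m, n)))  # full-rank constraint block

# The structured indefinite matrix of (1.1): [[A, B^t], [B, 0]]
K = sp.bmat([[A, B.T], [B, None]], format="csr")

# The matrix is symmetric but its eigenvalues have both signs:
eigs = np.linalg.eigvalsh(K.toarray())
print(eigs.min() < 0 < eigs.max())              # True: the system is indefinite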
We also want to make clear that our numerical results should be taken as
examples and that we do not claim that they are optimal. Our hope is that they
could be a starting point for further research.
Here is therefore our plan.
• Chapter 2 rapidly recalls the classical theory of mixed problems, including Augmented Lagrangian methods and their matricial form.
• Chapter 3 presents some classical iterative methods and describes the preconditioner which will be central to our development. We come back to the augmented Lagrangian and a mixed form for penalty methods.
• Chapter 4 is devoted to numerical examples. The first one will be the approximation of a Dirichlet problem with Raviart-Thomas elements [20]. This is a simple case which will nevertheless permit us to consider the fundamental issues. We shall see how an Augmented Lagrangian method enables us to circumvent the fact that we do not have coercivity on the whole space.
We shall thereafter consider incompressible elasticity in solid mechanics, first
in the linear case and then for a non linear Mooney-Rivlin model.
In all those problems, the space of multipliers is L²(Ω) and can therefore be
identified with its dual. We also present some ideas for the solution of the Navier-
Stokes equations. In those problems, with the discrete spaces that we employ, we
shall not be able to use a real augmented Lagrangian. However, we shall consider
a regularised formulation which will accelerate the convergence of our iterations.
• Chapter 5: We consider contact problems. In this case, the space of multipliers is not identified with its dual. We shall present some ideas for which we do not have numerical results but which we think could be research avenues for the future. In particular, we present ideas about discrete Steklov-Poincaré operators. Numerical results will be presented in the more classical formulation where the duality product is approximated by the L² scalar product.
• Chapter 6: Finally, we shall consider a case, arising from contact mechanics between incompressible bodies, in which we have two different types of constraints. This will lead us to modify our preconditioners accordingly.
Chapter 2
Mixed Problems
Basically, mixed problems arise from the simple problem of minimising a quadratic functional under linear constraints. Let then V be a function space and a(·, ·) a bilinear form on V × V defining an operator A from V into V′,

$$\langle Au, v\rangle = a(u, v) \le \|a\|\,\|u\|_V\,\|v\|_V, \qquad (2.1)$$

where ‖·‖_V is the norm of V and ‖a‖ is the norm of a(·, ·). In the same way, consider another function space Q and a bilinear form b(·, ·) on V × Q defining an operator B from V into Q′,

$$\langle Bv, q\rangle = b(v, q) \le \|b\|\,\|v\|_V\,\|q\|_Q.$$

Given f ∈ V′, let

$$F(v) := \frac{1}{2}\, a(v, v) - \langle f, v\rangle \qquad (2.4)$$

and consider the minimisation of F(v) under the constraint Bv = g. Introducing a Lagrange multiplier q ∈ Q, this leads to the saddle point problem

$$\inf_{v\in V}\ \sup_{q\in Q}\ \frac{1}{2}\, a(v, v) - b(v, q) - \langle f, v\rangle + \langle g, q\rangle. \qquad (2.6)$$
For problem (2.7) to have a solution, it is clearly necessary that there exists some ug
satisfying Bug = g. Moreover, the lifting from g to ug should be continuous. This
is classically equivalent [20] to the inf-sup condition
$$\inf_{q\in Q}\ \sup_{v\in V}\ \frac{b(v, q)}{\|v\|_V\,\|q\|_Q} \ge k_0. \qquad (2.8)$$
We shall refer to (2.9) as the primal problem. We therefore suppose, as in [20], that the bilinear form a(·, ·) is coercive on Ker B, that is, there exists a constant α₀ such that

$$a(v_0, v_0) \ge \alpha_0\, \|v_0\|_V^2 \qquad \forall\, v_0 \in \operatorname{Ker} B. \qquad (2.10)$$
Remark 2.1 (Coercivity) Unless there is a simple way to build a basis of Ker B or a simple projection operator on Ker B, coercivity on Ker B is not suitable for numerical computations. In our algorithms, we shall need, in most cases, coercivity on the whole of V.
It is always possible to get this by changing a(u, v) (see [43, 46, 79]) into

$$\tilde a(u, v) = a(u, v) + \alpha\, (Bu, Bv)_Q.$$

Such a change will have different consequences in the development of our algorithms and will lead to the augmented and regularised Lagrangians which will be considered in detail in Sect. 2.3.
Remark 2.2 It should also be noted that (2.11) and (2.1) imply that a(v, v)^{1/2} is a norm on V, equivalent to the standard norm.
one falls back on the original problem, the primal problem. Reversing the order of operations (this cannot always be done, but no problems arise in the examples we present) and eliminating v from L(v, q) by defining D(q) := inf_{v∈V} L(v, q), one obtains the dual problem

$$\sup_{q\in Q} D(q).$$
The discrete form of the dual problem and the associated Schur’s complement will
have an important role in the algorithms which we shall introduce.
We will also have to consider a more general form of problem (2.7). Let us suppose that we have a bilinear form c(p, q) on Q × Q. We also suppose that it is coercive on Q, that is

$$c(q, q) \ge \gamma\, |q|_Q^2.$$

One then has the stability estimate

$$\alpha\, \|u\|_V^2 + \gamma\, \|p\|_Q^2 \le \frac{1}{\alpha}\, \|f\|_{V'}^2 + \frac{1}{\gamma}\, \|g\|_{Q'}^2. \qquad (2.13)$$
The bound (2.13) explodes for small γ, which can be a problem. If we still have the inf-sup condition and the coercivity on the kernel, then we can have a bound on the solution independent of γ. We refer to [20] for a proof and the analysis of some more general cases.
more general cases.
Remark 2.3 In this case, the perturbed problem can be seen as a penalty form for the unperturbed problem. We shall come back to this in Sect. 2.3.4. It is then easy to see that we have an O(ε) bound for the difference between the solution of the penalised problem and the standard one. The bound depends on the coercivity constant and the inf-sup constant.
The above presentation is abstract and general. We now consider a special case which will be central to the examples that we shall present later. Indeed, in many cases, it will be more suitable to define the problem through an operator B from V into Q. We suppose that on Q we have a scalar product defining a Ritz operator R from Q′ into Q, and we set

$$\tilde B = R B \qquad (2.14)$$

and

$$\tilde B_h = P_{Q_h} \tilde B.$$

The scalar product (·, ·)_Q defines an operator R_h from Q′_h onto Q_h and we can introduce

$$\tilde B_h = R_h B_h.$$
We want to solve

$$\begin{cases} a(u_h, v_h) + b(v_h, p_h) = \langle f, v_h\rangle & \forall\, v_h \in V_h,\\ b(u_h, q_h) = (g_h, q_h)_Q & \forall\, q_h \in Q_h. \end{cases} \qquad (2.17)$$

The discrete constraint is weaker than Bu_h = g, which in almost all cases would lead to bad convergence or even 'locking', that is, a null solution.
For example, we shall meet later the divergence operator div (which acts on a
space of vector valued functions which we shall then denote by u). When we take
for Qh a space of piecewise constant functions, divh uh is a local average of div uh .
We recall here, in its simplest form the theory developed in [20]. This will be
sufficient for our needs. In the proof of existence of Sect. 2.1.2, we relied on two
conditions: the coercivity on the kernel (2.10) and the inf-sup condition (2.8). We
introduce their discrete counterpart. We thus suppose
$$\exists\, \alpha_0^h > 0 \ \text{ such that }\ a(v_{0h}, v_{0h}) \ge \alpha_0^h\, \|v_{0h}\|_V^2 \quad \forall\, v_{0h} \in \operatorname{Ker} B_h, \qquad (2.18)$$

$$\exists\, \beta_h > 0 \ \text{ such that }\ \sup_{v_h\in V_h} \frac{b(v_h, q_h)}{\|v_h\|_V} \ge \beta_h\, \|q_h\|_Q \quad \forall\, q_h \in Q_h. \qquad (2.19)$$
$$E_u := \inf_{v_h\in V_h} \|u - v_h\|_V, \qquad E_p := \inf_{q_h\in Q_h} \|p - q_h\|_Q.$$
Proposition 2.1 (The Basic Error Estimate) Assume that V_h and Q_h verify (2.18) and (2.19). Let f ∈ V′ and g ∈ Q′. Assume that the continuous problem (2.7) has a solution (u, p) and let (u_h, p_h) be the unique solution of the discretised problem (2.17). If a(·, ·) is symmetric and positive semi-definite we have the estimates

$$\|u_h - u\|_V \le \left(\frac{2\|a\|}{\alpha_0^h} + \frac{2\|a\|^{1/2}\|b\|}{(\alpha_0^h)^{1/2}\beta_h}\right) E_u + \frac{\|b\|}{\alpha_0^h}\, E_p,$$

$$\|p_h - p\|_Q \le \left(\frac{2\|a\|^{3/2}}{(\alpha_0^h)^{1/2}\beta_h} + \frac{\|a\|\,\|b\|}{\beta_h^2}\right) E_u + \frac{3\|a\|^{1/2}\|b\|}{(\alpha_0^h)^{1/2}\beta_h}\, E_p.$$
To make explicit the numerical problem associated with (2.17) we need to introduce basis vectors φ_h^i for V_h and ψ_h^j for Q_h, and the vectors of coefficients u, q which define u_h and q_h with respect to these bases:

$$u_h = \sum_i u_i\, \varphi_h^i, \qquad q_h = \sum_j q_j\, \psi_h^j. \qquad (2.20)$$
Remark 2.5 (Notation) To avoid adding cumbersome notation, unless a real ambi-
guity would arise, we shall denote in the following by u and p either the unknowns
of the continuous problem or the unknowns of the numerical problems arising from
their discretisation.
We shall also denote by the same symbol the operators A and B of the continuous
problem and the matrices associated to the discrete problems. As they are used in
very different contexts, this should not induce confusion.
Denoting ⟨·, ·⟩ the scalar product in R^n, we can now define the matrices associated with the bases (2.20),

$$\langle A u, v\rangle = a(u_h, v_h), \qquad \langle B u, q\rangle = b(u_h, q_h).$$

We also have a matrix R associated with the scalar product in Q_h which represents the discrete Ritz operator. We then have

$$\tilde B u = R^{-1} B u. \qquad (2.21)$$

[Diagram: A maps V_h into itself, B and B^t act between V_h and Q_h, and R, R^{-1} map between Q_h and its dual.]
Remark 2.6 (Q_h ≠ Q′_h) It is important to note that even in the case where Q = Q′ and R = I, this is not the case in the discrete problem: the matrix R defined above is not the identity matrix.
The choice MQ = R is frequent but not optimal in many cases [48]. In Sect. 2.2.3
we shall discuss the choice MQ = MS where MS is some approximation of the
discrete Schur complement.
We now consider the finite dimensional matricial problems associated to our mixed
formulation, in fact the actual problem for which we want to build efficient solvers.
Although this block matrix is non-singular, the numerical solution of (2.22) is not so simple. The main problem is that this matrix is indefinite. If one wanted to employ a direct solver, we might have to introduce a renumbering of the equations [28]. Moreover, our examples will come from the application of mixed finite element methods [20]. Indeed, we shall focus on methods suitable for large systems arising from the discretisation of three-dimensional problems in mechanics which lead to large problems of the form (2.22). Such large systems are not suitable for direct solvers and require iterative methods. However, without preconditioning the system (acting on its eigenvalues), iterative methods, such as Krylov methods, are likely to diverge or stagnate on indefinite problems (see [86]). Therefore the key to obtaining convergence will be the construction of a good preconditioner.
If we eliminate u from (2.22) we obtain the discrete form of the dual problem (2.23). The matrix S = BA^{-1}B^t is often called the Schur complement. The system (2.23) is equivalent to the maximisation problem

$$\sup_q\ \Big\{-\frac{1}{2}\,\langle A^{-1}B^t q,\ B^t q\rangle + \langle g - BA^{-1}f,\ q\rangle\Big\}. \qquad (2.24)$$
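As a sketch (with placeholder matrices, not the book's code), the Schur complement can be applied matrix-free, which is all an iterative solver for the dual problem needs:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_operator(A, B):
    """S = B A^{-1} B^t as a matrix-free LinearOperator (S is never formed)."""
    solve_A = spla.factorized(sp.csc_matrix(A))   # exact inner solver for A
    m = B.shape[0]
    return spla.LinearOperator((m, m), matvec=lambda q: B @ solve_A(B.T @ q),
                               dtype=float)

# Small illustrative data (placeholders, not finite element matrices).
rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Dual problem: S p = B A^{-1} f - g, solved by CG since S is SPD here.
S = schur_operator(sp.csr_matrix(A), B)
rhs = B @ np.linalg.solve(A, f) - g
p, info = spla.cg(S, rhs)
print(info)   # 0 on successful convergence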
When building numerical methods, a crucial point will be the condition number
of S. Indeed the condition number is an important measure for the behaviour
(convergence) of iterative methods. Following Remark 2.7, we have on Qh a metric
defined by the matrix MQ . The standard choice would be to take MQ = R where
R is associated to the scalar product induced on Qh by the scalar product of Q. In
some cases, it will be convenient to change this scalar product, using a well chosen
matrix MS instead of R.
Remark 2.8 We write MS to emphasise that this matrix should be chosen to
approximate the Schur complement.
We can change (2.23) into
In Sect. 3.1, this will correspond to a preconditioning. The idea is that MS should
be an approximation of BA−1 B t improving the condition number of the resulting
system.
To quantify this, we consider the eigenvalue problem (2.26) defining the condition number

$$K_h = \frac{\lambda_{\max}}{\lambda_{\min}},$$

the eigenvalues being those of the Rayleigh quotient

$$RQ(q) = \frac{\langle A^{-1}B^t q,\ B^t q\rangle}{\langle M_S\, q,\ q\rangle}.$$

We also suppose that we have the inf-sup condition (2.19). Then, taking M_S = R, we have in (2.26)

$$\lambda_{\min} \ge \frac{\beta_h^2}{\|A\|}, \qquad \lambda_{\max} \le \frac{\|B\|^2}{\alpha}. \qquad (2.27)$$
We recall that the dual norm associated with the norm defined by A is defined by A^{-1}. We can therefore write the Rayleigh quotient as

$$\frac{\langle A^{-1}B^t q,\ B^t q\rangle}{\langle Rq,\ q\rangle} = \sup_v \frac{\langle v,\ B^t q\rangle^2}{\langle Av,\ v\rangle\,\langle Rq,\ q\rangle}.$$

The inf-sup condition yields the lower bound and the upper bound is direct.
This clearly also holds for another choice of M_S if we have the spectral equivalence

$$c_1\,\langle Rq, q\rangle \le \langle M_S\, q, q\rangle \le c_2\,\langle Rq, q\rangle \qquad \forall\, q,$$

with constants c₁, c₂ > 0 independent of h.
The bounds of (2.27) show that we have a condition number independent of the mesh size if β_h ≥ β₀ > 0. This is an important property if we want to solve large-scale problems. However, if we consider M_S as a preconditioner, we shall also want to build it as a good approximation of S.
The extra term can be written differently using (2.15). We would then have

$$\begin{cases} a(u, v) + \alpha\,(Bu, Bv)_Q + b(v, p) = \langle f, v\rangle + \alpha\,(\tilde g, Bv)_Q & \forall\, v \in V,\\ b(u, q) = (\tilde g, q)_Q & \forall\, q \in Q, \end{cases}$$

with g̃ = R^{-1} g. This does not change the solution of the problem. In the discrete problems, using an augmented Lagrangian is a classical way of accelerating some algorithms and cancelling errors associated with penalty methods. We shall have two different ways of implementing this idea.
Remark 2.11 (MS−1 a Full Matrix?) Another important point is the presence of
MS−1 which is in general a full matrix. This makes (2.34) hardly usable unless MS
is diagonal or block diagonal and leads us to employ the regularised formulation.
Remark 2.12 One could also write (2.34) with the penalty term in mixed form, that is,

$$\begin{pmatrix} A & B^t & B^t \\ B & -M_S & 0 \\ B & 0 & 0 \end{pmatrix}\begin{pmatrix} u \\ \tilde p \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \\ g \end{pmatrix}.$$
The effect of the augmentation can be read on the eigenvalue problem

$$A\varphi_u = \frac{1}{\lambda}\, B^t M_S^{-1} B\, \varphi_u$$

and on its augmented counterpart

$$(A + \alpha B^t M_S^{-1} B)\,\varphi_u = \frac{1}{\lambda_\alpha}\, B^t M_S^{-1} B\, \varphi_u.$$

Denoting λ_α the corresponding eigenvalues, one easily sees that

$$\lambda_\alpha = \frac{\lambda}{1 + \alpha\lambda}.$$

Denoting λ_M and λ_m the largest and the smallest eigenvalues, the condition number of the system is thus

$$K_\alpha = \frac{\lambda_M\,(1 + \alpha\,\lambda_m)}{\lambda_m\,(1 + \alpha\,\lambda_M)},$$
which converges to 1 when α increases. One sees that this holds for α large even if the initial condition number is bad. One also sees that improving K = λ_M/λ_m also improves K_α for a given α. The augmented Lagrangian therefore seems to be the perfect solver (a small numerical illustration follows the list below). However, things are not so simple.
• The first problem is that the condition number of A + α B^t M_S^{-1} B worsens when α increases. As solving systems with this matrix is central to the algorithms that we shall introduce, we lose on one hand what we gain on the other. In practice, this means finding the correct balance between the conflicting effects.
• The other point is the computability of B t MS−1 B. The matrix MS−1 could for
example be a full matrix. Even if we approximate it by a diagonal matrix, the
structure of the resulting matrix could be less manageable. This will be the case
in the examples of Sect. 4.2.
• When the real augmented Lagrangian cannot be employed, the regularised for-
mulation (2.33) might have a positive effect. However, the solution is perturbed
and only small values of α will be admissible.
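Before continuing, here is a small numerical check (an illustration, not taken from the book) of the transformation λ_α = λ/(1 + αλ) and the resulting condition number K_α; the extreme eigenvalues are hypothetical:

lam_m, lam_M = 1e-3, 1.0          # hypothetical extreme eigenvalues, K = 1000

def K_alpha(alpha):
    # K_alpha = lam_M (1 + alpha lam_m) / (lam_m (1 + alpha lam_M))
    return lam_M * (1 + alpha * lam_m) / (lam_m * (1 + alpha * lam_M))

for alpha in (0.0, 1.0, 10.0, 100.0, 1000.0):
    print(f"alpha = {alpha:7.1f}   K_alpha = {K_alpha(alpha):10.2f}")
# alpha = 0 gives K = 1000; already alpha = 1000 brings K_alpha close to 2.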
The dual problem matrix then becomes

$$S = B A^{-1} B^t + \epsilon\, M_S.$$

This makes the condition number of the perturbed problem better than that of the standard one, which makes one hope for a better convergence of the dual variable. One also sees that if M_S is an approximation of S, one should now rather use

$$\tilde M_S = (1 + \epsilon)\, M_S.$$
This means that there will be an optimum value of ε = 1/α. Taking a larger ε for a better convergence enters into conflict with the paradigm of the augmented Lagrangian, which would need α large and thus ε small. We shall present a numerical example in Remark 4.11.
Remark 2.14 (Correcting the Regularised Form) One could use a similar idea to eliminate the perturbation introduced by formulation (2.33), writing

$$\begin{pmatrix} A + \alpha C & B^t \\ B & 0 \end{pmatrix}\begin{pmatrix} \delta u \\ \delta p \end{pmatrix} = \begin{pmatrix} f - A u^n - B^t p^n \\ g - B u^n \end{pmatrix} = \begin{pmatrix} r_u^n \\ r_p^n \end{pmatrix}. \qquad (2.37)$$
We now come to our main issue, the numerical solution of problems (2.22), (2.33) and (2.34), which are indefinite problems, although with a well defined structure,

$$\mathcal{A} = \begin{pmatrix} A & B^t \\ B & 0 \end{pmatrix}. \qquad (3.1)$$

Contact problems will also bring us to consider a more general non-symmetric system

$$\mathcal{A} = \begin{pmatrix} A & B_1^t \\ B_2 & 0 \end{pmatrix}. \qquad (3.2)$$
We intend to solve large problems and iterative methods will be essential. We shall
thus first recall some classical iterative methods and discuss their adequacy to the
problems that we consider. From there we introduce a general procedure to obtain
preconditioners using a factorisation of matrices (3.1) or (3.2).
Iterative methods are a topic in themselves and have been the subject of many books and research articles. For the basic notions, one may consult [47, 50, 86], but this list is clearly not exhaustive. Moreover the field is evolving and new ideas appear constantly. Our presentation will thus be necessarily sketchy and will be restricted to the points directly relevant to our needs.
When considering iterative methods, one may view things from at least two different points of view:
• linear algebra methods,
• optimisation methods.
These are of course intersecting and each one can bring useful information.
When dealing with systems associated to matrix (3.1), from the linear algebra
perspective, we have an indefinite problem and from the optimisation perspective,
we have a saddle point problem.
• We have positive eigenvalues associated with the problem in u and the matrix A
is often symmetric positive definite defining a minimisation problem.
• On the other hand, we have negative eigenvalues associated to the problem in p,
that is the dual problem (2.24) which is a maximisation problem.
This induces a challenge for iterative methods which have to deal with conflicting
goals.
Norms
The linear systems which we consider arise from the discretisation of partial
differential equations, and are therefore special. It is also useful to see if the iteration
considered would make sense in the infinite dimensional case. Ideally, such an
iterative method would have convergence properties independent of the mesh size.
Considering system (3.1), we have a very special block structure, and the variables u and p represent functions u_h ∈ V_h and p_h ∈ Q_h whose norms are not the standard norm of R^n.
• This is an important point: we have a problem in two variables which belong to
spaces with different norms.
In Remark 2.7 we introduced a matrix M_Q associated with a norm in Q_h; by the same reasoning we associate a matrix M_V with a norm in V_h. This also means that residuals must be read in the dual spaces, with the dual norms defined by M_V^{-1} and M_Q^{-1}, and this will have an incidence on the construction of iterative methods.
Krylov Subspace
One should recall that classical iterative methods are based on the Krylov subspace,
looking for an approximate solution in this space. This is made possible by building
an orthogonal basis. If the matrix is symmetric one uses the Lanczos method [60]
to build orthogonal vectors. The important point is that symmetry makes it possible to store only a small and fixed number of vectors. This is the case in the conjugate gradient method
and in the Minres [51, 74, 75, 83] algorithm.
• When A is symmetric positive definite (SPD) it defines a norm and one can look for a solution in Kr(A, b) minimising ‖x − A^{-1}b‖²_A = ‖Ax − b‖²_{A^{-1}}. This is the conjugate gradient method.
• When A is not positive definite, it does not define a norm. One then must choose a norm M and minimise ‖Ax − b‖²_{M^{-1}}. This is the Minres algorithm.
When the matrix is not symmetric, the Arnoldi process [7] can be used to build an
orthogonal basis. This yields the GMRES algorithm [81] and related methods.
Preconditioning

Given a preconditioner P, the basic preconditioned iteration reads

$$u_{i+1} = u_i - \alpha_i\, P^{-1}(A u_i - b),$$

that is, in terms of the residual r_i = b - A u_i,

$$r_{i+1} = r_i - \alpha_i\, A P^{-1} r_i.$$

Minimising ‖r_{i+1}‖²_{A^{-1}} yields, with z_i = P^{-1} r_i,

$$\alpha_i = \frac{\langle P^{-1} r_i,\ r_i\rangle}{\langle P^{-1} r_i,\ A P^{-1} r_i\rangle} = \frac{\langle z_i,\ r_i\rangle}{\langle z_i,\ A z_i\rangle}.$$

When A and P are symmetric positive definite, this iteration is associated with the minimisation problem

$$\inf_v\ \frac{1}{2}\,(Av, v)_{P^{-1}} - (b, v)_{P^{-1}}. \qquad (3.4)$$
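As an illustration, here is a minimal sketch (not from the book) of the preconditioned conjugate gradient method built on these quantities; A_mv, P_solve and the stopping tolerance are assumptions supplied by the caller:

import numpy as np

def pcg(A_mv, b, P_solve, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b - A_mv(x)                  # residual r_0
    z = P_solve(r)                   # preconditioned residual z_0 = P^{-1} r_0
    w = z.copy()                     # first descent direction
    rz = r @ z
    for _ in range(maxit):
        Aw = A_mv(w)
        alpha = rz / (w @ Aw)        # alpha = <z, r> / <w, A w>
        x += alpha * w
        r -= alpha * Aw
        if np.linalg.norm(r) < tol:
            break
        z = P_solve(r)
        rz_new = r @ z
        w = z + (rz_new / rz) * w    # conjugate direction update
        rz = rz_new
    return x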
The projected gradient method is a very classical method for constrained optimisation [18]. In the simplest case, let us suppose that the solution of a minimisation problem must satisfy a linear constraint,

$$Bu = g. \qquad (3.5)$$

Writing u = u_g + v_0 with Bu_g = g, the problem reduces to

$$\inf_{v_0\in \operatorname{Ker} B}\ \frac{1}{2}\,\langle A v_0,\ v_0\rangle - \langle f - A u_g,\ v_0\rangle.$$
This is in fact what we did to prove existence of the mixed problem. We shall
meet this procedure in Sect. 6.2. We can then apply the conjugate gradient method
provided the gradient is projected on Ker B. We shall also consider this method in
Sect. 4.3.
Inequality Constraints
$$B_i u \le g_i, \qquad 1 \le i \le m.$$
The monitoring implies a change of active constraints if one of the following two conditions occurs.
• The solution is modified such that an inactive constraint is violated, one then
projects the solution on the constraint and restarts the iteration, now making this
constraint active.
• On an active constraint, the iteration creates a descent direction that would bring
the solution to the unconstrained region. This constraint is then made inactive.
This method is especially attractive in the following case.
Positivity Constraints
An important special case is when the solution must satisfy uj ≥ 0. The constraint
is active if uj = 0 and inactive if uj > 0. The projection on the active set is readily
computed by putting inactive values to zero (see [3, 54, 55]). The gradient (or more
generally the descent direction) is also easily projected. We shall meet this case in
contact problems (Sect. 5.2.2).
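A minimal sketch (under our own naming, not the book's code) of these componentwise projections:

import numpy as np

def project_solution(u):
    """Project the iterate on the admissible set {u >= 0}."""
    return np.maximum(u, 0.0)

def project_direction(u, z, tol=1e-12):
    """Zero the components of the descent direction z that point outside
    the constraint set at active indices (u_j = 0 and z_j < 0)."""
    z = z.copy()
    active = (u <= tol) & (z < 0.0)
    z[active] = 0.0
    return z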
Convex Constraints
To complete this section, we give a hint on how this can be extended to a convex
constraint [18]. We consider as in Fig. 3.1 a point u0 on the boundary of the convex
set C. If the constraint is active, the gradient (or some descent direction) z is pointing
to the exterior of C. We can then project the gradient on the tangent to C (in red in
Fig. 3.1), search for an optimal point u∗ on the tangent as we now have a linear
constraint, and then project the result on C to obtain u1 . This will converge if the
curvature of the boundary of C is not too large.
The preconditioning consists in solving with a block Gauss-Seidel iteration for the piecewise linear part (A_{11}) and the piecewise quadratic correction on the edges (A_{22}).
• It must be noted that for a typical three dimensional mesh, matrix A11 is about
eight times smaller than the global matrix A. This makes it possible to use a
direct method to solve the associated problem.
• This being a multigrid method, it can also be completed by applying an Algebraic
Multigrid (AMG) method to the problem in A11 .
• The matrix A22 has a small condition number independent of the mesh size [89]
and a SSOR method is quite appropriate.
In our examples, this preconditioner will be applied to the stiffness (rigidity) matrix of elasticity problems. As we shall see, it is quite efficient for large problems; a small sketch of the block sweep is given below.
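Here is a minimal sketch of such a block sweep (a symmetric block Gauss-Seidel step with hypothetical inner solvers, not the implementation used in the book); solve_A11 and solve_A22 stand, e.g., for AMG and SSOR:

import numpy as np

def hierarchical_gs(solve_A11, solve_A22, A12, A21, r1, r2):
    """Approximate A^{-1} r for A = [[A11, A12], [A21, A22]]."""
    x1 = solve_A11(r1)                    # solve on the P1 (vertex) part
    x2 = solve_A22(r2 - A21 @ x1)         # correction on the edge part
    x1 = solve_A11(r1 - A12 @ x2)         # backward sweep on the P1 part
    return x1, x2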
The idea of hierarchical basis could be employed for other discretisations. In
some finite element approximations, degrees of freedom are added on the faces to
better satisfy a constraint.
Internal degrees of freedom are also amenable to this technique in a variant of
the classical ‘static condensation’.
Consider the general system

$$Kx = b.$$

Given a direction T, one minimises

$$\|r - \alpha T\|^2_{M^{-1}},$$

which yields

$$\alpha = \frac{\langle r,\ M^{-1} T\rangle}{\langle T,\ M^{-1} T\rangle}.$$
This can be written in many equivalent ways. We refer for example to [41, 49] for
a complete presentation. In the classical Minres algorithm, the Lanczos method is
completed by Givens rotation to achieve orthogonality of residuals.
Remark 3.4 Although Minres is a well studied and popular method, we did not use
it for three reasons.
• Preconditioning: the change of metric is often said to be a SPD preconditioning.
We shall present in Sect. 3.2 preconditioners which are symmetric but not positive
definite.
• Contact problems are constrained problems and one must have access to the
part of the residual with respect to the multiplier. This was not possible to our
knowledge with the standard Minres implementation or indeed with GMRES
(see however [51]).
• Frictional contact leads to non symmetric problems.
Our choice was rather to consider a method allowing both a general preconditioning as a change of metric and a non-symmetric matrix.
Remark 3.5 (Change of Metric) We can introduce a metric M different from the
standard euclidean metric on Rn in the left and right preconditioned GCR algorithm.
The metric M is applied in the space of solutions while the dual metric M −1 is
applied to the residuals.
We thus have two possibilities while using the preconditioner P to accelerate a GCR method. We give here both the left and right generic P-GCR using an arbitrary metric M.
We first present the left preconditioner. This is the standard procedure, as presented for preconditioning the GMRES method in [81].
Algorithm 3.2 Left preconditioned P-GCR algorithm with an arbitrary metric M
1: Initialization
   • i = 0
   • Let x_0 be the initial value.
   • r_0 = b − K x_0
   • r_0^P = P^{-1} r_0
2: while criterion > tolerance do
   • z_i = r_i^P
   • T_i = K z_i
   • z_i = P^{-1} T_i
   • From z_i, using the modified Gram-Schmidt (MGS), compute z_i^⊥ orthonormal in the M-norm to [z_0^⊥, ..., z_{i−1}^⊥]. Using the same transformation on T_i, compute T_i^⊥ based on [T_0^⊥, ..., T_{i−1}^⊥].
   • β = (r_i^P, M z_i^⊥)
   • Update
       r_{i+1} = r_i − β T_i^⊥
       r_{i+1}^P = r_i^P − β z_i
       x_{i+1} = x_i + β z_i
       i = i + 1
endwhile
This is the method which was used in most of our numerical results.
Algorithm 3.3 Right preconditioned P-GCR algorithm with an arbitrary metric M^{-1}
1: Initialization
   • i = 0
   • Let x_0 be the initial value.
   • r_0 = b − K x_0
2: while criterion > tolerance do
   • z_i = P^{-1} r_i
   • T_i = K z_i
   • From T_i, using the MGS, compute T_i^⊥ orthonormal in the M^{-1}-norm to [T_0^⊥, ..., T_{i−1}^⊥]. Using the same transformation on z_i, compute z_i^⊥ based on [z_0^⊥, ..., z_{i−1}^⊥].
   • β = ⟨r_i, M^{-1} T_i^⊥⟩
   • Update
       x_{i+1} = x_i + β z_i^⊥
       r_{i+1} = r_i − β T_i^⊥
       i = i + 1
endwhile
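As an illustration, here is a compact sketch of the right preconditioned GCR with the standard Euclidean metric (M = I); K_mv and P_solve are caller-supplied operators (assumptions) and restarts are omitted for brevity:

import numpy as np

def gcr(K_mv, b, P_solve, tol=1e-10, maxit=100):
    x = np.zeros_like(b)
    r = b - K_mv(x)
    Zs, Ts = [], []                   # stored directions z_i and T_i = K z_i
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        z = P_solve(r)                # z_i = P^{-1} r_i
        T = K_mv(z)                   # T_i = K z_i
        for Zj, Tj in zip(Zs, Ts):    # modified Gram-Schmidt on T_i,
            s = T @ Tj                # same transformation applied to z_i
            T -= s * Tj
            z -= s * Zj
        nrm = np.linalg.norm(T)
        T /= nrm
        z /= nrm
        Zs.append(z); Ts.append(T)
        beta = r @ T                  # beta = <r, T_i^perp>
        x += beta * z
        r -= beta * T
    return x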
Remark 3.6 (Choice of Metric) If M^{-1} is the identity, that is the standard metric of R^n, this reduces to the classical algorithm. This raises the question of the choice of M. To fix ideas, let us consider two cases frequently met.
When K and the preconditioner P are both SPD, we would want to obtain a method equivalent to the preconditioned conjugate gradient method. As we have seen in Remark 3.2, the norm should then be defined using P^{1/2} K^{-1} P^{1/2}. If the preconditioner is good, this is close to the identity. Thus M = I is not a bad choice when we have a good preconditioner.
For a symmetric system, if the preconditioner is SPD, we could take M = P. One could then read this as a variant of Minres.
Algorithm 3.4 Modified Gram-Schmidt (MGS)
   • T_i given
2: for j = i − 1, ..., 0 do
   • s_j = ⟨M T_i, T_j^⊥⟩
   • T_i = T_i − s_j T_j^⊥
endfor
   • T_i^⊥ = T_i / ‖T_i‖_M
For the modified Gram-Schmidt method, one thus needs to compute M Ti at every
step as Ti is changed, which could be expensive. This can be avoided at the cost of
storing in the stack both Tj⊥ and MTj⊥ .
We now come to our central concern: solving mixed problems. We now have an
indefinite system. In that case, iterative methods will often diverge or stagnate [45].
For example, without preconditioning a conjugate gradient method applied to an
indefinite system will diverge.
Remark 3.7 (We Have a Dream) As we have already stated, we are dealing with a problem in two variables. We dream of a method which would really take this fact into account. The preconditioner that we shall develop will. The GCR method is then not really satisfying even if it provides good results.
Alas, nothing is perfect in our lower world.
To fix ideas, let us consider a direct application of a right preconditioned GCR algorithm. Let then A be as in (3.1) and b = (f, g) ∈ R^n × R^m. We are looking for x = (u, p) and we use the right preconditioned GCR as in Algorithm 3.3. As before we introduce an arbitrary metric. In this case the metric takes into account the mixed nature of the problem and is characterised by two different matrices M_u and M_p giving a (M_u, M_p)-norm for (u, p).
Obviously, if both M_u and M_p are identities we have the usual right preconditioned GCR algorithm.
Algorithm 3.5 Right preconditioned Mixed-P-GCR algorithm with arbitrary metric (M_u, M_p)
1: Initialization
   • i = 0
   • Let x_0 = (u_0, p_0) be the initial value.
   • r_u = f − A u_0 − B^t p_0, r_p = g − B u_0 and r = (r_u, r_p)
2: while criterion > tolerance do
   • Update
       x_{i+1} = x_i + β z_i^⊥
       r_{i+1} = r_i − β T_i^⊥
       i = i + 1
end while
Remark 3.8 In the above GCR algorithm, we could decompose β into two components,

$$\beta = \beta_u + \beta_p, \qquad \beta_u = \langle r_u,\ M_u T_{iu}^{\perp}\rangle, \qquad \beta_p = \langle r_p,\ M_p T_{ip}^{\perp}\rangle.$$
with A an indefinite matrix of the form (3.1) or more generally for the non
symmetric case (3.2). Our first step toward a general solver for systems (3.6) will be
r_u = f − A u_0 − B^t p_0,  r_p = g − B u_0.
Using the factorisation (3.8) to obtain (u, p) = (u0 + δu, p0 + δp) leads to three
subsystems, two with the matrix A and one with the matrix S.
Algorithm 3.6

$$\begin{cases} \delta u^* = A^{-1} r_u,\\ \delta p = S^{-1}\big(B\,\delta u^* - r_p\big),\\ \delta u = A^{-1}\big(r_u - B^t \delta p\big) = \delta u^* - A^{-1} B^t \delta p. \end{cases} \qquad (3.9)$$
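A direct transcription of Algorithm 3.6 as a sketch (dense algebra and exact solvers, for small problems only; in practice A^{-1} and S^{-1} are only approximated):

import numpy as np

def block_solve(A, B, ru, rp):
    """Solve the saddle point system via the factorisation (3.8)."""
    S = B @ np.linalg.solve(A, B.T)              # Schur complement (dense)
    du_star = np.linalg.solve(A, ru)             # du* = A^{-1} ru
    dp = np.linalg.solve(S, B @ du_star - rp)    # dp = S^{-1}(B du* - rp)
    du = du_star - np.linalg.solve(A, B.T @ dp)  # du = du* - A^{-1} B^t dp
    return du, dp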
It must be noted that this is symmetric as it is based on (3.8). Particular attention
must be given to the solution
With the exception of very special cases, S is a full matrix and building it explicitly is not feasible. Even computing matrix-vector products (needed by any iterative solver) would require an exact solver for A, which may become impossible for very large problems. Therefore, except in rare cases where systems in A can be solved exactly, one works with an approximation Ã of A and with the approximate Schur complement

$$\tilde S = B \tilde A^{-1} B^t. \qquad (3.12)$$

Using the factorisation (3.11), we may associate (3.8) with the matrix

$$\mathcal{A} = \begin{pmatrix} \tilde A & B^t \\ B & B\tilde A^{-1} B^t - \tilde S \end{pmatrix}.$$
1: Initialization
   • i = 0
   • Let z_u and r_p^0 be given values.
   • z_p = 0
2: while criterion > tolerance or maximum number of iterations do
end while
One could also use the more general version of Algorithm 3.3 with a change of metric.
Remark 3.13 The computation of z_u is optional. It avoids an additional use of Ã^{-1} when this is included in the preconditioner 3.7, at the price of some additional work in the Gram-Schmidt process.
If A and M_S are symmetric, we can also use the simpler conjugate gradient method, which takes the following form:
1: Initialization
   • i = 0
   • z_u and r_p given values
2: while criterion > tolerance or maximum number of iterations do
   • zz_p = M_S^{-1} r_p
   • If i > 0, α = (zz_p, r_p)/(zz_p^0, r_p^0)
   • w_p = zz_p + α w_p^0
   • zz_u = −A^{-1} B^t w_p
   • T_p = B zz_u
   • β = (w_p, r_p)/(T_p, w_p)
   • w_p^0 = w_p, zz_p^0 = zz_p, r_p^0 = r_p
   • Update
       z_p = z_p + β zz_p
       z_u = z_u − β zz_u
       r_p = r_p − β T_p
       i = i + 1
end while
When Ã = A, we recall that S̃ = S and, for sufficiently strict convergence criteria, the last algorithm corresponds to a solver for S. Then, when included in Algorithm 3.7, it yields solutions of (3.6), that is, Algorithm 3.7 coincides with Algorithm 3.6. In [43], this was called the Uzawa algorithm, which we can summarise as formed of the two steps illustrated in Fig. 3.2.
• Solve the unconstrained problem Au = f.
• Project this solution, in the norm defined by A, on the set Bu = g.
In [43] Uzawa's algorithm was presented in its simplest form, depending on an arbitrary parameter β. The parameter β must then be chosen properly and depends on the spectrum of BA^{-1}B^t. This was studied in detail and convergence follows as in the classical analysis of gradient methods (an acceleration by a conjugate gradient method was also considered). Using this method implies that one has an efficient solver for A.
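A sketch of this basic Uzawa iteration with a fixed parameter β (solve_A, standing for an efficient solver for A, is an assumption supplied by the caller):

import numpy as np

def uzawa(solve_A, B, f, g, beta, tol=1e-10, maxit=500):
    p = np.zeros(B.shape[0])
    for _ in range(maxit):
        u = solve_A(f - B.T @ p)      # unconstrained solve in u
        r = B @ u - g                 # constraint residual
        if np.linalg.norm(r) < tol:
            break
        p += beta * r                 # gradient step on the dual problem
    return u, p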
Many variants of this method are proposed in the literature (see [15, 81]). Several authors have studied variants of the method, both numerically and theoretically. Elman and Golub [40] proposed the so-called inexact Uzawa method, whose theoretical analysis is made in [23]. Bai et al. [10] presented the so-called parametrised method. More recently, Ma and Zang in [66] dealt with the so-called corrected method. They have shown that it converges faster than the classical method and several of its variants under certain assumptions. However, their approach comes up against the question of determining the optimal parameter.
We now come to the use of Algorithm 3.7: the General Mixed Preconditioner.
• We must first choose Ã^{-1}. For this we rely on standard and well proven iterative methods. Whenever possible we precondition these methods by a multigrid procedure.
• We also need an approximate solver for the approximate Schur complement S̃.
• We also choose a norm N in which we minimise residuals. This will most of the time be the euclidean norm, but better choices are possible.
If for S̃ we use Algorithm 3.9 or Algorithm 3.8 (i.e. we solve S̃), then we use a limited number of iterations. Choosing S̃ = S (i.e. fully converging Algorithm 3.9 or Algorithm 3.8) is a possibility, but in general it is not a good idea to solve too well something that you will throw away at the next iteration. We shall thus develop the case where only one iteration is done.
Remark 3.14 One should note that using a better Ã and more iterations for S̃ is a direct way to make things better.
• If Ã = A we have the Uzawa algorithms or close variants.
• If S̃ is solved exactly, we have a form of the projected gradient method.
This being said we shall focus on a simple form of Algorithm 3.7.
Algorithm 3.10 A simple mixed preconditioner
1: Initialization
   • r_u, r_p given
   • z_u = A^{-1} r_u
   • r_p = B z_u − r_p
2: Approximation of S^{-1}
   • z_p = M_S^{-1} r_p
   • zz_u = A^{-1} B^t z_p
   • T_p = B zz_u
   • β = (r_p, T_p)/(T_p, T_p)
3: Final computation
   • z_p = β z_p
   • z_u = z_u − β zz_u
4: End
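In code, Algorithm 3.10 can be sketched as follows; solve_A and solve_MS stand for the approximate solvers A^{-1} (or Ã^{-1}) and M_S^{-1} and are assumptions supplied by the caller:

import numpy as np

def mixed_preconditioner(solve_A, solve_MS, B, ru, rp):
    # 1: initialization
    zu = solve_A(ru)                 # zu = A^{-1} ru
    rp_t = B @ zu - rp               # updated dual residual
    # 2: one step approximating S^{-1}
    zp = solve_MS(rp_t)              # zp = M_S^{-1} rp_t
    zzu = solve_A(B.T @ zp)          # zzu = A^{-1} B^t zp
    Tp = B @ zzu
    beta = (rp_t @ Tp) / (Tp @ Tp)   # one-dimensional minimisation
    # 3: final computation
    return zu - beta * zzu, beta * zp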
A simpler variant, with a fixed parameter, reads:
   • z_u = A^{-1} r_u
   • r_p = r_p − B z_u
   • z_p = β M_S^{-1} r_p
The problem here is that β has to be determined by the user, while in the previous version everything was automatic. The choice of parameters has been discussed in [43]. Moreover, as we said above, the last part of Algorithm 3.7 requires an extra resolution.
where M_Q is the matrix defining the metric on Q. We have noted in Remark 2.13 that S then becomes S + ε M_Q and that M_S is changed into (1 + ε) M_S if M_Q = M_S.
The preconditioner of Algorithm 3.10 is easily adapted to the perturbed case. If one uses M_S = M_Q one should change,
   • r_p = B z_u − ε M_Q p − r_p,
   • z_p = (1/(1 + ε)) M_Q^{-1} r_p,
   • T_p = B zz_u − ε M_Q z_p.
When the preconditioner 3.10 is employed for the modified problem of Sect. 2.1.4,
one should also modify the computation of residuals in the associated CG or GCR
method, taking into account for example Remark 3.9.
We shall now illustrate the behaviour of these algorithms on some examples. In all the cases considered here, the space of multipliers Q can be identified with its dual. In fact we shall have in all cases Q = L²(Ω).
The first example will be the mixed approximation of a Poisson problem with the simplest Raviart-Thomas element. This will allow us to consider a real discrete Augmented Lagrangian method and its impact on iterative solvers. We shall then consider incompressible problems in elasticity in both the linear and non linear cases. Finally we shall introduce an application to the Navier-Stokes equations.
The first example that we consider is the mixed formulation of a Dirichlet problem using Raviart-Thomas elements. As an application, we can think of a potential fluid flow problem in porous media, as in the simulation of Darcy's law in reservoir simulation. This can be seen as the paradigm of mixed methods and the simplest case of a whole family of problems. Higher order elements and applications to a mixed formulation of elasticity problems are described in [20].
Let V = H(div; Ω) and

$$Q = L^2(\Omega).$$

In this context, the inf-sup condition [20] is verified and the bilinear form a(·, ·) defined on V × V by

$$a(u, v) = \int_\Omega u\cdot v\, dx$$

is coercive on the kernel of the divergence operator but not on the whole space V.
The mixed formulation (4.3) is the optimality condition of the following inf-sup problem

$$\inf_{v\in V}\ \sup_{q\in Q}\ \frac{1}{2}\int_\Omega |v|^2\, dx - \int_{\Gamma_N} g\cdot v\, ds + \int_\Omega q\, \operatorname{div} v\, dx + \int_\Omega f\, q\, dx. \qquad (4.4)$$
The properties of these spaces and related ones are well described in [20]. Indeed they have been built to be applied to problem (4.3). If we take

$$Q_h = \{q_h : q_h|_K \in P_k(K)\ \ \forall K \in \mathcal{T}_h\},$$
The discrete problem is clearly of the form (2.22) and indefinite. In [20, p. 427], this
was considered ‘a considerable source of trouble’. Let us consider things in some
detail.
• The matrix A is built from a scalar product in L²(Ω).
• The operator B is the standard divergence.
• B^t is a kind of finite volume gradient.
• The Schur complement BA^{-1}B^t is a (strange) form of the Laplace operator acting on piecewise constants.
It was shown in [12] how, by using a diagonalised (lumped) matrix for A, one indeed obtains a finite volume method. In the two-dimensional case, it is known (see [68]) that the solution can be obtained from the non-conforming discretisation of the Laplacian.
Another general approach (see [6] for example) is to impose the interface
continuity of the normal components in Vh by Lagrange multipliers to generate a
positive definite form.
We shall rather stick to the indefinite formulation and show how the methods that we developed in Chap. 3 can be applied.
The first point is that we have a lack of coercivity on the whole space. The result of Proposition 2.2 does not hold, and the convergence of Uzawa's method, for example, is not guaranteed.
Solving the equation with the augmented Lagrangian method gives us the inf-sup problem

$$\inf_v\ \sup_q\ \frac{1}{2}\int_\Omega |v|^2\, dx + \frac{\alpha}{2}\int_\Omega |\operatorname{div} v + f|^2\, dx - \int_{\Gamma_N} g\cdot v\, ds + \int_\Omega q\, \operatorname{div} v\, dx + \int_\Omega f\, q\, dx, \qquad (4.6)$$
where α > 0 is a scalar representing the regularisation parameter. It is well known that equation (4.4) is equivalent to (4.6), for which the optimality conditions are
$$\begin{cases} \displaystyle \int_\Omega u\cdot v\, dx + \alpha\int_\Omega \operatorname{div} u\, \operatorname{div} v\, dx + \int_\Omega p\, \operatorname{div} v\, dx = \int_{\Gamma_N} g\cdot v\, ds - \alpha\int_\Omega f\, \operatorname{div} v\, dx & \forall\, v \in V,\\[2mm] \displaystyle \int_\Omega \operatorname{div} u\, q\, dx = -\int_\Omega f\, q\, dx & \forall\, q \in Q. \end{cases} \qquad (4.7)$$
Here, for all α greater than zero, the coercivity is satisfied on V since

$$a(v, v) + \alpha\int_\Omega |\operatorname{div} v|^2\, dx \ge \min(1, \alpha)\,\|v\|_V^2 \qquad \forall\, v \in V,$$

with

$$\|v\|_V^2 = \int_\Omega |v|^2\, dx + \int_\Omega |\operatorname{div} v|^2\, dx.$$
A consequence is that the two forms of the augmented Lagrangian (2.30) and
(2.32) are the same and that using them will not change the solution of the problem.
Obviously (2.32) corresponds to the linear system of equations associated to the
discrete version of (4.7). As we said earlier, for the Raviart-Thomas elements M_S = R is a block diagonal matrix associated with the L² scalar product on Q_h and we are in the perfect situation (Remark 2.11) for the augmented Lagrangian. The discrete system reads

$$\begin{pmatrix} A + \alpha B^t R^{-1} B & B^t \\ B & 0 \end{pmatrix}\begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} g - \alpha B^t M_D^{-1} f \\ f \end{pmatrix}.$$
$$V = \{v \mid v \in (H^1(\Omega))^d,\ v = 0 \text{ on } \Gamma_D\},$$

with the strain tensor

$$\varepsilon_{ij}(v) = \frac{1}{2}\big(\partial_i v_j + \partial_j v_i\big) \qquad (4.8)$$

and its deviator

$$\varepsilon^D = \varepsilon - \frac{1}{3}\operatorname{tr}(\varepsilon)\, I.$$

One then has

$$|\varepsilon^D(v)|^2 = |\varepsilon(v)|^2 - \frac{1}{3}\operatorname{tr}(\varepsilon(v))^2. \qquad (4.9)$$
To define our problem, we have to define some parameters. Elasticity problems are usually described by the Young modulus E and the Poisson ratio ν. We shall rather employ the Lamé coefficients μ, λ:

$$\mu = \frac{E}{2(1+\nu)}, \qquad \lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}, \qquad (4.10)$$

or equivalently,

$$\inf_{v\in V}\ \mu\int_\Omega |\varepsilon(v)|^2\, dx + \frac{\lambda}{2}\int_\Omega |\operatorname{div} v - g|^2\, dx - \int_\Omega f\cdot v\, dx. \qquad (4.13)$$
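As a small illustration of (4.10) (not from the book), here is a helper converting the engineering constants to the Lamé coefficients; the printed values correspond to E = 10² MPa and ν = 0.4, as in the experiments of Sect. 4.2:

def lame(E, nu):
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # blows up as nu -> 1/2
    return mu, lam

print(lame(1e2, 0.4))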
It is well known that a brute force use of (4.14) or (4.13) can lead to bad results for large values of λ (or as the Poisson ratio ν approaches its maximal value of 1/2). In extreme cases, one gets a locking phenomenon, that is, an identically zero solution.
The standard way to circumvent this locking phenomenon is to switch to a mixed formulation with a suitable choice of elements. Essentially, we introduce the variable

$$p = \lambda(\operatorname{div} u - g),$$
and, denoting (u, p) ∈ V × Q the saddle point, the optimality conditions are

$$\begin{cases} \displaystyle 2\mu\int_\Omega \varepsilon(u) : \varepsilon(v)\, dx + \int_\Omega p\, \operatorname{div} v\, dx = \int_\Omega f\cdot v\, dx & \forall\, v \in V,\\[2mm] \displaystyle \int_\Omega \operatorname{div} u\, q\, dx - \frac{1}{\lambda}\int_\Omega p\, q\, dx = \int_\Omega g\, q\, dx & \forall\, q \in Q. \end{cases} \qquad (4.15)$$
In the limiting case of λ becoming infinite, the case that we want to consider, the second equation of (4.15) becomes

$$\int_\Omega \operatorname{div} u\, q\, dx = \int_\Omega g\, q\, dx \qquad \forall\, q \in Q,$$

and we shall need the inf-sup condition

$$\sup_{v\in V} \frac{b(v, q)}{\|v\|} \ge \beta\,\|q\| \qquad \forall\, q \in Q.$$
Remark 4.2 If we use formulation (4.12) instead of (4.13) as the starting point, the bilinear form

$$a^D(u, v) = \int_\Omega \varepsilon^D(u) : \varepsilon^D(v)\, dx$$

is coercive on the kernel of B. Indeed, by Korn's inequality and (4.9), there exists a constant α such that

$$\mu\int_\Omega |\varepsilon^D(v)|^2\, dx = \mu\int_\Omega\Big(|\varepsilon(v)|^2 - \frac{1}{3}\operatorname{tr}(\varepsilon(v))^2\Big)dx \ge \alpha\,\|v\|^2 - \frac{\mu}{3}\int_\Omega \operatorname{tr}(\varepsilon(v))^2\, dx \qquad \forall\, v \in V.$$

This makes it possible for the matrix A^D defined by the bilinear form a^D(·, ·) to be singular, a situation which is not acceptable in our algorithms.
The numerical solution of the Stokes problem has been the object of a huge number of articles and books.
The deformation maps a point X of the reference configuration to

$$x = X + u(X).$$

The deformation gradient F of this transformation is given by

$$\mathbf{F} = \frac{\partial x}{\partial X} = \mathbf{I} + \nabla_X u$$

and its determinant det F is denoted J. Note that ∇_X stands for the gradient with respect to the variable X. The Cauchy-Green tensor is then defined as C = F^T F and its principal invariants I₁, I₂ and I₃ are given by

$$I_1 = \mathbf{C} : \mathbf{I}, \qquad I_2 = \frac{1}{2}\big(I_1^2 - \mathbf{C} : \mathbf{C}\big), \qquad I_3 = \det(\mathbf{C}) = J^2.$$
As in the case of linear elasticity, the boundary is composed of (at least) two parts: Γ_D where a Dirichlet condition is given and Γ_N where a Neumann (pressure) condition g is imposed.
• The Neo-Hookean model
Although there are many formulations of Neo-Hookean models for compressible materials, they share a unique elastic potential energy function, or strain energy function W as named in [73]. Following [91], we define a particular Neo-Hookean material where the potential energy function W is given by

$$W = \frac{\mu}{2}(I_1 - 3) + \frac{\lambda}{4}(J^2 - 1) - \Big(\frac{\lambda}{2} + \mu\Big)\ln J.$$

This can also be written in the decoupled (Mooney-Rivlin type) form

$$W = \frac{\mu}{2}\big(I_3^{-1/3} I_1 - 3\big) + \frac{\kappa}{2}(J - 1)^2,$$

where κ, the bulk modulus, μ₁₀ and μ₀₁ are parameters characterizing the material.
The elasticity problem consists in minimizing the potential energy W under appro-
priate boundary conditions. The weak formulation on the reference configuration
can be written as a nonlinear problem,
$$\int_{\Omega_0} (\mathbf{F}\cdot \mathbf{S}) : \nabla_X v\, dX = \int_{\Omega_0} f\cdot v\, dX + \int_{\Gamma_N} g\cdot v\, d\gamma \qquad (4.17)$$

for any v in a proper functional space and where S = 2∂W/∂C is the second Piola-Kirchhoff stress tensor. More details on the formulation can be found in [36].
$$\tilde{\mathbf{S}} = \mathbf{S} - p\, J\, \mathbf{C}^{-1}.$$
We are interested in the really incompressible case when the bulk modulus becomes infinite. As we have seen in the linear case, this may lead to an ill conditioned or even singular matrix in u. To get good results, we shall again introduce a stabilisation parameter K̂. We then define Ŝ using this artificial small bulk modulus, and solve

$$\begin{cases} \displaystyle \int_{\Omega_0} (\mathbf{F}\cdot\hat{\mathbf{S}}) : \nabla_X v\, dX - \int_{\Omega_0} p\, J\, \mathbf{F}^{-T} : \nabla_X v\, dX = \int_{\Gamma_N} g\cdot v\, dS + \int_{\Omega_0} f\cdot v\, dX & \forall\, v \in V,\\[2mm] \displaystyle \int_{\Omega_0} (J - 1)\, q\, dX = 0 & \forall\, q \in Q. \end{cases} \qquad (4.19)$$
In compact form, the problem is to find (u, p) such that

$$R_1((u, p), v) = 0, \qquad R_2((u, p), q) = 0.$$

The Newton linearisation then involves the bilinear forms

$$a_n(\delta u, v) = \frac{\partial R_1((u^n, p^n), v)}{\partial u}\cdot\delta u = \int_{\Omega_0} \tilde{\mathbf{S}}(u^n) : \big(\nabla_X(\delta u)\big)^T\cdot\nabla_X v\, dX + \int_{\Omega_0} \mathbf{C}(u^n) : \big(\mathbf{F}^T(u^n)\cdot\nabla_X(\delta u)\big) : \big(\mathbf{F}^T(u^n)\cdot\nabla_X v\big)\, dX,$$

$$b_n(v, \delta p) = \frac{\partial R_1((u^n, p^n), v)}{\partial p}\cdot\delta p = -\int_{\Omega_0} J\,\delta p\, \mathbf{F}^{-T}(u^n) : \nabla_X v\, dX,$$

$$c_n(\delta p, q) = \frac{\partial R_2((u^n, p^n), q)}{\partial p}\cdot\delta p = -\frac{1}{\hat K}\int_{\Omega_0} (\delta p)\, q\, dX.$$
The linearised variational formulation is, knowing (u^n, p^n), the previous solution, to find (δu, δp) such that

$$\begin{cases} a_n(\delta u, v) + b_n(v, \delta p) = -\langle R_1(u^n, p^n), v\rangle & \forall\, v \in V,\\ b_n(\delta u, q) - c_n(\delta p, q) = -\langle R_2(u^n, p^n), q\rangle & \forall\, q \in Q. \end{cases} \qquad (4.20)$$
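To make the structure of the nonlinear solve explicit, here is a schematic Newton loop around (4.20); it is a sketch only, and assemble_system and solve_saddle_point are hypothetical callbacks standing for the assembly of the linearised blocks and for the mixed solvers of Chap. 3:

import numpy as np

def newton_mixed(u, p, assemble_system, solve_saddle_point,
                 tol=1e-8, maxit=20):
    for _ in range(maxit):
        An, Bn, Cn, R1, R2 = assemble_system(u, p)   # blocks of (4.20)
        if max(np.linalg.norm(R1), np.linalg.norm(R2)) < tol:
            break
        du, dp = solve_saddle_point(An, Bn, Cn, -R1, -R2)
        u, p = u + du, p + dp                        # Newton update
    return u, p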
Remark 4.5 An important point is that the linearised system (4.20) depends on
some initial value of the displacement. In general, we do not have the equivalent
of Korn’s inequality and the matrix can in fact be singular if one has chosen as
initial guess a bifurcation point.
To illustrate the behaviour of our algorithms, we first consider the simple case of
linear elasticity on a three-dimensional problem. The results will also be applicable
to the Stokes problem.
To obtain an approximation of (4.16), we have to choose a finite element space
Vh ⊂ V and a space Qh ⊂ Q. In our numerical experiments, we consider a three-
dimensional problem. We thus had to make a choice of a suitable finite element
approximation. The catalogue of possibilities has been well studied [20] and we
made a choice which seemed appropriate with respect to the standard engineering
applications. We employ tetrahedral elements, a choice motivated by our eventual
interest in mesh adaptation and we want to respect the inf-sup condition without
having to use elements of too high degree. The popular Taylor-Hood element was
retained:
• A piecewise quadratic approximation for the displacement u and a piecewise
linear approximation of the pressure p.
Remark 4.6 This choice of element is good but there is a restriction on the construction of elements at the boundary: no element should have all its vertices on the boundary. This might happen on an edge if no special care is taken when the mesh is built. A bubble can be added to the displacement to avoid this restriction, but at the price of more degrees of freedom.
In order to use the real augmented Lagrangian (2.32) , one would need a
discontinuous pressure element, which would make MS = R block diagonal. For
three dimensional problems such elements are of high polynomial degree [20] or
induce a loss in the order of convergence.
We therefore consider the discrete regularised Lagrangian corresponding to (4.16):

$$\begin{cases} \displaystyle 2\mu\int_\Omega \varepsilon(u_h) : \varepsilon(v_h)\, dx + \hat\lambda\int_\Omega (\operatorname{div} u_h - g)\, \operatorname{div} v_h\, dx + \int_\Omega p_h\, \operatorname{div} v_h\, dx = \int_\Omega f\cdot v_h\, dx & \forall\, v_h \in V_h,\\[2mm] \displaystyle \int_\Omega \operatorname{div} u_h\, q_h\, dx = \int_\Omega g\, q_h\, dx & \forall\, q_h \in Q_h. \end{cases} \qquad (4.21)$$
Approximate Solver in u
Our mixed solver relies on an approximate solver for the problem in u. We shall explore various possibilities for this choice, corresponding to different choices for Ã^{-1} in (3.8).
• The direct solver is denoted LU.
• The conjugate gradient method CG, GMRES or GCR methods can also be used.
• We also consider the HP method of [37] already discussed in Sect. 3.1.4. In this method, the quadratic approximation P₂ is split into a linear P₁ part defined on the vertices and a complementary P₂ part defined on the edges, and the matrix A is split into four submatrices

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}.$$
Remark 4.9 (Variable Coefficients) Related problems for non-Newtonian flows lead to variable coefficients. In [48] one considers the choice of the approximation M_S (Sect. 2.2.3) to the Schur complement, which also defines a scalar product on Q_h. They show that if the bilinear form

$$2\mu\int_\Omega \varepsilon(u) : \varepsilon(v)\, dx$$

is changed into

$$2\int_\Omega \mu(x)\,\varepsilon(u) : \varepsilon(v)\, dx, \qquad (4.22)$$

the matrix M_S should be modified accordingly. From Sect. 2.3, one should then change the regularised form (4.16) into

$$2\int_\Omega \mu(x)\,\varepsilon(u) : \varepsilon(v)\, dx + \hat\lambda\int_\Omega \mu(x)(\operatorname{div} u - g)\,\operatorname{div} v\, dx + \int_\Omega p\,\operatorname{div} v\, dx = \int_\Omega f\cdot v\, dx \qquad \forall\, v \in V.$$
We consider a simple academic example: a cube [0, 1]³ clamped at the bottom (the plane z = 0) is subjected to a vertical displacement imposed on the top (plane z = 1) (see Fig. 4.1).
We consider a linear incompressible material with a Young modulus E = 10² MPa and, depending on the numerical experiment, an artificial Poisson coefficient ν̂ varying between 0.0 and 0.4. Four meshes of sizes h = 0.5, 0.25, 0.125, 0.0625 (respectively 2 187, 14 739, 107 811 and 823 875 degrees of freedom) will be considered. Exceptionally, for Fig. 4.3, a very coarse mesh (h = 0.5) will also be used. Although this can be seen as a simple problem, it must be noted that it is a true three-dimensional case.
Remark 4.10 We shall present some examples illustrating the use of our mixed solvers. In these experiments, we have imposed a rather strict tolerance of 10⁻¹⁰ on the l²-norm of the residual in p.
We have introduced in (4.21) a 'regularised formulation' parametrised by λ̂, which we may associate with an artificial Poisson ratio ν̂. We emphasise that this is not an Augmented Lagrangian: the penalty term is introduced for the continuous divergence-free condition and not for the discrete one, so that, for the discretisation that we employ, the penalty parameter must be small in order not to perturb the solution.
[Fig. 4.1: the cube test case, with u(x, y, 0) := [0, 0, 0] imposed on the bottom face and u(x, y, 1) := [0, 0, −2] on the top face]
[Fig. 4.2 Linear elasticity problem with GCR(3)-HP-AMG as primal solver: convergence in l²-norm of the primal and dual residuals according to the artificial Poisson ratio ν̂]
[Fig. 4.3 Linear elasticity problem with GCR(3)-HP-AMG as primal solver: ‖u_ν̂ − u₀‖_{L²} with respect to h according to the value of ν̂]
[Fig. 4.4: convergence of the residual in p for different values of the regularisation parameter ε]
We then consider the same test case and, following Sect. 3.17, we solve by the same algorithm which we used for the regularised problem.
As we had discussed in Sect. 2.3.4, one sees in Fig. 4.4 that increasing ε accelerates the convergence in p. In this test the acceleration does not justify the extra iteration in (δu, δp). We conclude that, at least with the solver employed, this method would be useful only if the acceleration of the convergence of p is very important.
When the solver in u is LU, the algorithm becomes the standard Uzawa method
(Sect. 3.2.2). For the problem that we consider, the number of iterations in p is then
independent of the mesh size as is the condition number of the dual problem (see
[90]). It is interesting that this property holds even with our iterative solver in u as
can be seen in Table 4.4.
Table 4.4 Linear elasticity problem with GCR(3)-HP-AMG as primal solver: global number of iterations with ν̂ = 0.4 according to the size of the mesh

Number of degrees of freedom | 2187 | 14,739 | 107,811 | 823,875
Number of iterations         | 21   | 22     | 22      | 21
Table 4.5 Elasticity problem: Algorithm 3.7 method. Performance with optimal (automated)
parameter β
Value of n (CG(n)) 1 2 3 4 5
# iterations 662 188 134 130 117
CPU time (s) 128 53 44 52 61
Table 4.6 Elasticity problem: Algorithm 3.7, number of iterations and CPU time according to the value of β

β            | 1        | 5   | 7   | 8   | 9
# iterations | Max iter | 649 | 471 | 414 | diverged
CPU time (s) | 283      | 188 | 127 | 109 | –
Table 4.7 Elasticity problem: performance of GCR's method preconditioned with Algorithm 3.7

Value of n (CG(n)) | 1   | 2  | 3  | 4  | 5
# iterations       | 117 | 91 | 82 | 98 | 81
CPU time (s)       | 29  | 30 | 33 | 47 | 51
One sees that the optimal β is slightly higher than 8 and that the computing time is about double that obtained with a computed β. Moreover, the optimal value has to be guessed.
In the last comparison we use a GCR to accelerate the preceding solver; once again, different numbers of iterations are taken for the CG(n). A minimal sketch of such a restarted GCR is given below.
In Table 4.7 the CPU time increases with n. However, choosing a value of n between 1 and 3 achieves a good reduction of CPU time with respect to the optimal value of Table 4.5.
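As an illustration of the outer accelerator, here is a generic sketch of a restarted, preconditioned GCR(n); A and M_inv are callables standing for the operator and the preconditioner, and this is a textbook version rather than the implementation used for the tests.

```python
# A minimal restarted, preconditioned GCR(n): at each inner step a new
# search direction is preconditioned, A-orthogonalised against the previous
# ones, and the residual is minimised along it.
import numpy as np

def gcr(A, b, M_inv, n=3, tol=1e-10, max_it=500):
    x = np.zeros_like(b)
    r = b - A(x)
    it = 0
    while np.linalg.norm(r) > tol and it < max_it:
        P, Q = [], []                    # directions p_j and q_j = A p_j
        for _ in range(n):               # restart length n
            p = M_inv(r)
            q = A(p)
            for pj, qj in zip(P, Q):     # orthogonalise q against previous q_j
                beta = np.dot(q, qj)
                p, q = p - beta * pj, q - beta * qj
            nq = np.linalg.norm(q)
            p, q = p / nq, q / nq
            P.append(p); Q.append(q)
            alpha = np.dot(r, q)         # minimal-residual step
            x, r = x + alpha * p, r - alpha * q
            it += 1
            if np.linalg.norm(r) <= tol:
                break
    return x, it
```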
The next point that we will address is related to the use of the HP solver. We are interested in the effect of HP-AMG and HP-LU when employed in the solver in u.
In the next numerical tests we use a Mixed-GMP-GCR with different variants of Algorithm 3.7 with Algorithm 3.10, using solvers in u based on HP-AMG or HP-LU.
We present in Fig. 4.5 the convergence of the residuals on the finer mesh for
different solvers in u based on the HP-AMG.
One sees that the better the solver in u, the better the convergence. Furthermore, the gain becomes negligible if this solver is good enough, while using HP in PREONLY mode seems to be rather poor. However, the picture is quite different if one considers computing time. This is what we present in Table 4.8.
For all three meshes, we present the computing time for the different versions of
the mixed-GMP-GCR solver. Each row corresponds to the use of a specific version
of the solver in u. For coarse grids, the direct solver is much more efficient than the
iterative ones but this advantage rapidly disappears when the mesh gets finer. The
solvers using HP-AMG are clearly the best for large meshes. The good news is that
there is little difference as soon as a good enough solver is employed.
Remark 4.12 (GCR or GMRES?) In the previous results, we have chosen GCR(n) for the solution in u. One might wonder about this choice. Although mathematically equivalent, the two methods may behave differently in practice, as the following results illustrate.
[Figure: residual norms of u and p, and number of iterations, for GCR(n)-HP or GMRES(n)-HP as preconditioner, according to n]
Table 4.9 Elasticity problem with a fine mesh (823 875 dof): CPU time according to the preconditioner of GMRES(n)-HP of the primal problem

GMRES(n) | 1      | 2      | 3      | 4      | 5
HP-AMG   | 32,163 | 22,051 | 21,581 | 21,381 | 19,980
HP-LU    | 50,314 | 39,616 | 48,920 | 33,563 | 50,829
In this section, we consider the solution of non linear elasticity problems for incompressible materials. We thus consider a Newton method which brings us to solve a sequence of linearised problems. We take as examples the neo-Hookean and Mooney-Rivlin models. As we know, for non-linear elasticity, the coercivity of the linearised system (4.17) is not guaranteed. It might indeed fail near bifurcation points and this would require techniques [61] which are beyond the scope of our presentation. We shall focus on two points.
• The algorithms presented for the linear case are directly applicable to the linearised problems which we now consider.
• The stabilising terms are important.
In our numerical tests, we apply the Mixed-GMP-GCR method to the linearised problem; a schematic view of the Newton loop is sketched below. The solver in u is GCR(3) preconditioned by HP-AMG.
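Schematically, the Newton strategy reads as follows; residual, assemble_linearised and mixed_gmp_gcr are hypothetical placeholders for the residual evaluation, the tangent assembly and the Mixed-GMP-GCR solver of Chap. 3, so only the structure of the loop is meant.

```python
# A schematic Newton loop for the nonlinear problems of this section;
# all solver callables are hypothetical placeholders.
from numpy.linalg import norm

def newton(u, p, residual, assemble_linearised, mixed_gmp_gcr,
           tol=1e-10, max_it=20):
    for k in range(max_it):
        R_u, R_p = residual(u, p)                 # nonlinear residuals
        if max(norm(R_u), norm(R_p)) < tol:       # strict tolerance, cf. Remark 4.10
            break
        K, B = assemble_linearised(u, p)          # linearised (tangent) system
        du, dp = mixed_gmp_gcr(K, B, -R_u, -R_p)  # inner mixed solve
        u, p = u + du, p + dp                     # Newton update
    return u, p
```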
Neo-Hookean Material
Fig. 4.7 Non linear elasticity problem (Neo-Hookean): convergence in l 2 -norm of the residuals
for Poisson ratio ν̂ = 0 (left) and ν̂ = 0.4 (right)
The stabilising term is parametrised by an artificial bulk modulus k̂ corresponding to an artificial Poisson ratio ν̂. We can see in Fig. 4.7 how the stabilising term accelerates the convergence of the problem.
In this result, GCR(3)-HP-AMG is used as the solver in u. Each decreasing curve corresponds to a Newton iteration. Stabilisation does provide a better non linear behaviour. All the parameters of the algorithm are computed automatically.
Mooney-Rivlin Material
Fig. 4.8 Non linear elasticity problem (Mooney-Rivlin): solution of the problem for K̂ = 0. On the left using a displacement-only formulation and on the right using a mixed formulation
Fig. 4.9 Non linear elasticity problem in mixed formulation: convergence in l²-norm of the residuals for K̂ = 0 for the first two Newton steps
When we stabilise with K̂ > 0 we have a better convergence (see Fig. 4.10). In this case an optimal value of K̂ is 5 and is independent of the size of the problem. As we could expect, taking K̂ too large has an adverse effect on the condition number of the problem in u and the algorithm slows down.
Finally we would like to confirm that the computed parameter β of Algorithm 3.10 is still correct for this non linear problem. In order to do so, we consider the same Mooney-Rivlin case as above using K̂ = 10 for stabilisation and test it with Algorithm 3.11 using some fixed values of β. Figure 4.11 allows us to compare the convergence for these fixed values of β with the behaviour when using β_opt (that is, using Algorithm 3.10).
Obviously the choice of fixed values of β was not totally random: 'educated guesses' were involved, giving reasonable numerical behaviour. We can see again that the computed value of β is close to the optimal value for this parameter. This, once again, justifies the use of Algorithm 3.10; it also shows that the preconditioner is independent of the problem and avoids more or less justified guesses.
Fig. 4.10 Non linear elasticity problem (Mooney-Rivlin): convergence in l²-norm of the residuals according to the artificial bulk modulus K̂ (K̂ = 5, 7, 10, 20, 50)
Fig. 4.11 Non linear elasticity problem (Mooney-Rivlin): convergence in l²-norm of the residuals according to the value of β (β = 20, 25, 30, 40 and β_opt)
\[
\sigma = -pI + 2\mu\, \varepsilon(u).
\]
We thus define,
\[
V = \{ v \in (H^1(\Omega))^d \mid v = 0 \text{ on } \Gamma_D \}, \qquad Q = L^2(\Omega),
\]
\[
a(u, v) = \int_\Omega \varepsilon(u) : \varepsilon(v)\, dx, \qquad
c(u, v, w) = \int_\Omega (u \cdot \operatorname{grad} v) \cdot w\, dx.
\]
We also denote
\[
(u, v) = \int_\Omega u \cdot v\, dx
\]
and we write the Navier-Stokes equations for the fluid velocity u and pressure p as
\[
\frac{\partial u}{\partial t} + u \cdot \operatorname{grad} u - 2\mu \operatorname{div} \varepsilon(u) + \operatorname{grad} p = f, \qquad \operatorname{div} u = 0.
\]
Here, we choose a uniform time step δt and a backward (also called implicit) Euler time discretisation. For the spatial discretisation we choose a finite element approximation (V_h, Q_h) for the velocity and pressure. At time t^k = kδt < T, knowing (u_h^{k−1}, p_h^{k−1}) we consider the system
\[
\begin{cases}
(u_h^k, v_h) + \delta t\, c(u_h^k, u_h^k, v_h) + 2\mu\,\delta t\, a(u_h^k, v_h) - \delta t\,(p_h^k, \operatorname{div} v_h) = (u_h^{k-1}, v_h) + \delta t\,(f, v_h) & \forall v_h \in V_h \\[4pt]
(\operatorname{div} u_h^k, q_h) = 0 & \forall q_h \in Q_h.
\end{cases}
\tag{4.25}
\]
Remark 4.13 We present this simple implicit time discretisation to fix ideas. Our development is in no way restricted to this example. In (4.25) we have a non linear problem for u_h^k for which we can consider a Newton linearisation. One could also consider a semi-implicit formulation with c(u_h^{k−1}, u_h^k, v_h) instead of c(u_h^k, u_h^k, v_h).
Let us denote p̃ = δt p. We can write (4.25) in the form,
\[
\begin{cases}
(u_h^k, v_h) - (\tilde p_h^k, \operatorname{div} v_h) + \delta t\, c(u_h^k, u_h^k, v_h) + 2\mu\,\delta t\, a(u_h^k, v_h) = (u_h^{k-1}, v_h) + \delta t\,(f, v_h) & \forall v_h \in V_h \\[4pt]
(\operatorname{div} u_h^k, q_h) = 0 & \forall q_h \in Q_h.
\end{cases}
\]
In a projection method, one first computes an intermediate velocity ũ_h, solution of
\[
(\tilde u_h, v_h) + \delta t\, c(\tilde u_h, \tilde u_h, v_h) + 2\mu\,\delta t\, a(\tilde u_h, v_h) = (u_h^{k-1}, v_h) + (\tilde p_h^{k-1}, \operatorname{div} v_h) + \delta t\,(f, v_h) \quad \forall v_h \in V_h \tag{4.26}
\]
then project ũ_h on the divergence-free subspace by solving formally,
\[
\begin{cases}
-\Delta\, \delta\tilde p = \operatorname{div} \tilde u_h \\[4pt]
\dfrac{\partial\, \delta\tilde p}{\partial n} = 0 \quad \text{on } \Gamma_D \\[4pt]
\delta\tilde p = 0 \quad \text{on } \Gamma_N
\end{cases}
\tag{4.27}
\]
and setting
\[
u_h^k = \tilde u_h - \operatorname{grad} \delta\tilde p, \qquad \tilde p^k = \tilde p^{k-1} + \delta\tilde p.
\]
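The sequence (4.26)–(4.27) can be summarised as one prediction-projection step; solve_momentum, solve_poisson, div and grad below are hypothetical placeholders for the corresponding discrete operators, and only the structure of the scheme is mirrored.

```python
# A sketch of one time step of the projection scheme (4.26)-(4.27); all
# operator callables are hypothetical placeholders.
def projection_step(u_old, p_tilde_old, dt,
                    solve_momentum, solve_poisson, div, grad):
    u_tilde = solve_momentum(u_old, p_tilde_old, dt)  # prediction, Eq. (4.26)
    dp = solve_poisson(div(u_tilde))                  # -Δ δp~ = div u~, Eq. (4.27)
    u_new = u_tilde - grad(dp)                        # projection step
    p_tilde_new = p_tilde_old + dp                    # pressure update
    return u_new, p_tilde_new
```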
The basic flaw in this approach is that this projection takes place in H(div, Ω) and not in H¹(Ω). Tangential boundary values are lost. Many possibilities have been explored to cure this and we shall propose one below.
In a finite element formulation, the exact meaning of ũ_h − grad δp̃ must also be made precise. We consider a non standard way of doing this.
Referring to our mixed formulation (4.3) and defining δu_h = u_h^k − ũ_h, the Poisson problem (4.27) can be written as finding δu_h ∈ V_{0h}, solution of (4.29), or of its mixed form (4.30).
With the discretisation considered, this will not yield a fully consistent augmented Lagrangian and the stabilising parameter will have to be kept small. We are
fortunate: our experiments of Sect. 4.1 show that even a small value will be enough
to get a good convergence.
Remark 4.14 (Why This Strange Mixed Form) Using the mixed form (4.30) makes u_h and grad p_h belong to the same finite element space, which is a nice property.
We recall that the normal condition δu_h · n = 0 must be imposed explicitly on Γ_D in either (4.29) or (4.30). If no condition is imposed on δu_h · n, one then imposes δp̃_h = 0 in a weak form.
Since we have no control on the tangential part of grad p_h on Γ_D, this method potentially leaves us with a tangential boundary condition which is not satisfied. The simplest remedy would be an iteration on the projection. To fix ideas, we shall use the semi-implicit problem
\[
\begin{cases}
(u_h^k, v_h) + \delta t\, c(u_h^{k-1}, u_h^k, v_h) + 2\mu\,\delta t\, a(u_h^k, v_h) - (\tilde p_h^k, \operatorname{div} v_h) = (u_h^{k-1}, v_h) + \delta t\,(f, v_h) & \forall v_h \in V_h \\[4pt]
(\operatorname{div} u_h^k, q_h) = 0 & \forall q_h \in Q_h.
\end{cases}
\tag{4.31}
\]
Finally we consider the direct use of the same Algorithms 3.7 and 3.10 that we used for incompressible elasticity. To get a better coercivity we add the regularised penalty terms and we change the first equation of (4.25) into (4.32). This is a non linear problem which should be linearised. We can then apply the Mixed-GMP-GCR method to the resulting linearised form.
In order to show that this technique is feasible, we consider a very simple example. We consider Ω = ]0, 1[ × ]0, 1[ and an artificial (manufactured) solution.
Results for the method (4.32) are presented in Table 4.10, which gives the global number of iterations and the CPU time in seconds to reach T = 1. This is done for different values of the time step δt and the regularity coefficient α. The table shows the interest of the stabilisation parameter α, which can reduce the number of global iterations and the CPU time. The optimal value of the stabilisation parameter is also stable with respect to the time step. We used an iterative solver (precisely CG(10)) for solving the problem in u, a choice which could clearly be improved. The mesh is composed of 3200 triangles and 6561 degrees of freedom.
The following figures show the convergence for the problem with α = 0.1 and δt = 10⁻¹. Figure 4.12 presents the convergence of both primal and dual residuals for the Newton iterations of a single arbitrary time step (here the fifth one, corresponding to t = 0.5).
Table 4.10 Navier-Stokes problem: number of iterations and CPU time in seconds according to the time step (δt) and the regularity coefficient (α)

α          | δt = 0.05: # it. | CPU (s) | δt = 0.1: # it. | CPU (s) | δt = 0.25: # it. | CPU (s)
0          | 13,593 | 945 | 6701 | 475 | 3542 | 254
1 × 10⁻²   | 10,567 | 739 | 5167 | 366 | 2765 | 178
1 × 10⁻¹   | 5709   | 412 | 3203 | 233 | 2231 | 161
2.5 × 10⁻¹ | 5440   | 396 | 3756 | 267 | 3150 | 221
1          | 8315   | 580 | 6767 | 468 | 6033 | 441
Fig. 4.12 Convergence of the primal and dual residuals for the Newton iterations of a single time step (t = 0.5)
This chapter presents solution methods for sliding contact. We shall first develop some issues related to functional spaces: indeed, in contact problems, we have a case where the space of multipliers is not identified with its dual. To address this, we first consider the case where a Dirichlet boundary condition is imposed by a Lagrange multiplier and present the classical obstacle problem as a simplified model for the contact problem. We shall then give a description of the contact problem and its discretisation, with a numerical example.
Remark 5.2 (Sobolev Spaces) The space H_{00}^{1/2}(Γ_C) was introduced in [62]. The elements of this space are, in a weak sense, null at the boundary of Γ_C and thus match the zero boundary conditions on Γ_0. The dual of H_{00}^{1/2}(Γ_C) is H^{−1/2}(Γ_C). The scalar product on H_{00}^{1/2}(Γ_C) is usually defined by an interpolation norm which also defines a Ritz operator R from H_{00}^{1/2}(Γ_C) onto H^{−1/2}(Γ_C). We shall consider later discrete versions of these operators.
To simplify the notation we write (λ, μ)_{1/2} for the scalar product in H_{00}^{1/2}. We want to find u ∈ V, λ ∈ Λ solution of
\[
\begin{cases}
a(u, v) + b(v, \lambda) = (f, v) & \forall v \in V, \\[4pt]
b(u, \mu) = (g, \mu)_{1/2} & \forall \mu \in \Lambda.
\end{cases}
\tag{5.3}
\]
This problem is well posed. Indeed the bilinear form a(u, v) is coercive and we have an inf-sup condition. To show this, we use the fact that there exists a continuous lifting L from H_{00}^{1/2}(Γ_C) into V. Denoting v_λ = Lλ, we have |v_λ|_V ≤ C|λ|_{1/2} and
\[
\sup_v \frac{b(v, \lambda)}{|v|_V} \ge \frac{b(v_\lambda, \lambda)}{|v_\lambda|_V} \ge \frac{1}{C}\, |\lambda|_{1/2}.
\]
1/2
Remark 5.3 (This May Seem a Non Standard Formulation!) Taking λ ∈ H00 (C )
may seem a little strange. Indeed, a more standard formulation would define
where R is the Ritz operator on . We therefore have the choice of working with λ
or with λ , and this choice will be dictated by numerical considerations.
We may then introduce the operator B′ = RB from V onto Λ′.
Remark 5.4 We have considered a case where the choice of Λ and Λ′ is simple. In more realistic situations Λ is a subspace of H^{1/2}(Γ_C) corresponding to the traces of the elements of a subspace of H¹(Ω). The space Λ′ is then the dual of Λ, which in general will contain boundary terms.
As a simple numerical procedure to solve (5.3) we could use Algorithm 3.7. A central point of this algorithm would be to compute,
\[
z_\lambda = S^{-1} r_\lambda = S^{-1} (Bu - g).
\]
We rapidly present some results that should help to understand the spaces with which we have to work. In the present case, R is the Ritz operator from Λ into Λ′, that is from H_{00}^{1/2}(Γ_D) onto H^{−1/2}(Γ_D). This corresponds to the Dirichlet-to-Neumann (Steklov-Poincaré) operator.
We can also define on H_{00}^{1/2}(Γ_C) a norm
\[
|v|_{1/2}^2 = \int_{\Gamma_C} |v|^2\, dx + \int_{\Gamma_C} |\operatorname{grad}^{1/2} v|^2\, dx \tag{5.4}
\]
As regular functions are dense in H_{00}^{1/2}(Γ_C), we could say that an element of H^{−1/2}(Γ_C) is a sum
\[
\lambda_0 + \operatorname{div}^{1/2} \lambda_1
\]
with λ_0 ∈ L²(Γ_C) and λ_1 ∈ (L²(Γ_C))^{n−1}.
To solve the Dirichlet problem we can now build a Neumann problem using the SP operator. Assuming we have some initial guess λ⁰, we shall first solve the corresponding Neumann problem.
Remark 5.7 We have thus written our problem in the form (5.3). One could argue that we did nothing, as the Steklov-Poincaré operator implies the solution of a Dirichlet problem. This will become useful whenever λ has an importance by itself. This will be the case in a similar formulation of the contact problem, where λ is the physically important contact pressure.
where (λ_h, μ_h)_{1/2,h} is some discrete norm in Λ_h. We look for u_h and λ_h solution of
\[
\begin{cases}
a(u_h, v_h) + b(v_h, \lambda_h) = (f, v_h) & \forall v_h \in V_h, \\[4pt]
b(u_h, \mu_h) = (g_h, \mu_h)_{1/2,h} & \forall \mu_h \in \Lambda_h.
\end{cases}
\tag{5.7}
\]
The scalar product defines a discrete Ritz operator R_h from Λ_h onto Λ′_h. In the framework of Sect. 2.2.2 and Remark 2.5 we can associate to u_h and λ_h their coordinates u and λ on given bases. R will be the matrix associated to R_h and we define the matrix B′ by
\[
\langle B' u, \lambda \rangle = (B_h u_h, \lambda_h)_{1/2,h}.
\]
As in the continuous case, we can write the problem using λ′_h ∈ Λ′_h or λ_h ∈ Λ_h. The two formulations are equivalent for the equality condition u = g but this will not be the case for the inequality condition u ≥ g.
We shall first rely on a discrete Steklov-Poincaré operator which will enable us to define a 'perfect' discrete scalar product in Λ_h.
We shall also denote by B the matrix associated to B_h, the same notation as for the continuous operator B, as they are used in different contexts.
The goal is to associate to an element r_h ∈ Λ′_h an element λ_h in Λ_h. To do so, we first build φ_h^r, a function of V_h such that B_h φ_h^r = r_h, and we solve a Dirichlet problem where v_h^0 = 0 on Γ_C and the bilinear form is as in (5.2). Let φ_h = φ_h^0 + φ_h^r. We now define SP_h r_h = λ_h, an element of Λ_h, by (5.8).
This would enable us to solve the problem (5.7) just as we had done for the continuous case:
• Given λ_h^0, solve the Neumann problem.
We first note that to compute SP_h r_h as in (5.8), we need only compute a(φ_h, v_h^i) for all v_h^i associated to a node i on Γ_C, as in Fig. 5.1. A small concrete illustration is sketched below.
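On the algebraic level, this construction is a Schur complement on the boundary degrees of freedom. As a small illustration, assuming the stiffness matrix has been partitioned into interior (I) and Γ_C (G) blocks:

```python
# Applying the discrete Steklov-Poincaré operator to boundary values r:
# a Dirichlet lifting solve in the interior, then evaluation of
# a(φ_h, v_h^i) at the boundary nodes; algebraically this is the Schur
# complement (A_GG - A_GI A_II^{-1} A_IG) r.  Dense solves are used here
# for illustration only.
import numpy as np

def apply_SP(A_II, A_IG, A_GI, A_GG, r):
    phi_I = np.linalg.solve(A_II, -A_IG @ r)  # interior values with φ = r on Γ_C
    return A_GG @ r + A_GI @ phi_I            # λ_i = a(φ_h, v_h^i), i on Γ_C
```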
It is also interesting to see what this result looks like. To fix ideas, let us consider a piecewise linear approximation for V_h. Referring to Fig. 5.1, we obtain at node i an expression with two terms. But this is not all: the second term depends on the tangential derivative of φ^r near the boundary, and could be thought of as a half order derivative of the boundary value r. We have a second difference of the form
\[
\frac{h_y}{h_x}\, (r_{i+1} - 2r_i + r_{i-1}).
\]
One could also assemble the matrix SP_{ij} = a(φ_h^i, φ_h^j); whether this is worthwhile depends on the cases where this matrix form would be used and the cleverness with which we solve the Dirichlet problems.
We shall ultimately solve our discrete problem (5.7) by Algorithm 3.7 with a suitable preconditioner. In Algorithm 3.10, our standard preconditioner, we compute an approximation of the Schur complement B Ã^{−1} B^t and approximate its inverse by M_S. In this case, the iteration is done in λ′.
One can see SP_h as a representation of the Schur complement. To employ it in Algorithm 3.7, one could rely on an approximation S̃P_h.
• One could think of building S̃P_h on a subdomain around Γ_C and not on the whole Ω.
• The computation of S̃P_h could also be done using a simpler problem, for instance using a Laplace operator instead of an elasticity operator.
• The Dirichlet problem defining S̃P_h could be solved only approximately with Ã.
One should now modify Algorithm 3.10, as we are now iterating in λ and not in λ′. We must also note that we have an approximation S̃P of the Schur complement S.
• r_u, r_λ given,
• z_u = Ã^{−1} r_u,
• r̃_λ = r_λ + B z_u,
• z̃_λ = S̃P r̃_λ,
• z_λ = β z̃_λ,
• z_u = z_u − β Ã^{−1} B^t z̃_λ.
End
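A possible reading of these steps as code is the following sketch; A_tilde_inv and S̃P_tilde are hypothetical callables standing for Ã⁻¹ and S̃P, and β is the scaling parameter of Algorithm 3.10.

```python
# A sketch of the modified preconditioning step; the solver callables are
# hypothetical placeholders for the approximate primal solve and the
# approximate Steklov-Poincaré operator.
def apply_preconditioner(A_tilde_inv, B, SP_tilde, beta, r_u, r_lam):
    z_u = A_tilde_inv(r_u)                             # approximate primal solve
    r_lam_tilde = r_lam + B @ z_u                      # multiplier residual
    z_lam_tilde = SP_tilde(r_lam_tilde)                # back to the space of λ
    z_lam = beta * z_lam_tilde                         # scaling by β
    z_u = z_u - beta * A_tilde_inv(B.T @ z_lam_tilde)  # primal correction
    return z_u, z_lam
```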
The Choice of Λ_h
We have considered the case where Λ_h is the trace of V_h. We could also take a subspace of the traces. A simple and important example would be to have piecewise quadratic elements for V_h and a piecewise linear subspace for Λ_h. We refer to [24] for an analysis of the inf-sup condition and the choice of spaces. Why would we do this? Essentially because of β in the inf-sup condition: a larger Λ_h means a smaller β_h. This is a standard point in the analysis of mixed methods: augmenting the space of multipliers makes the inf-sup condition harder to satisfy. We then have two consequences.
• As λ′_h is a representation of λ_h which converges in H^{−1/2}, a richer Λ_h will produce an oscillatory looking λ′_h.
• A smaller β_h will mean a slower convergence of the solver.
If we take a reduced Λ_h it must be noted that the solution is changed, as B_h u_h = g_h means
\[
P_h (B u_h - g_h) = 0,
\]
with P_h a projection onto Λ_h, and not
\[
B u = g \quad \text{on } \Gamma_C.
\]
To get a discrete version, we have to choose V_h and Λ_h and we must give a sense to λ_h ≥ 0.
For the choice of V_h, we take a standard approximation of H¹(Ω). For Λ_h we take the traces of V_h or, more generally, a subspace. For example, if V_h is made of quadratic functions, we can take for Λ_h the piecewise quadratic traces or a piecewise linear subspace.
To define Λ_h^+ the obvious choice is to ask for nodal values to be positive. As we can see in Fig. 5.2 this works well for a piecewise linear approximation. For a piecewise quadratic approximation one sees that positivity at the nodes does not yield positivity everywhere. Piecewise linear approximations are thus more attractive, even if this is not mandatory.
In Sect. 5.2 we shall use the active set strategy [3, 54, 55] which we already discussed in Sect. 3.1.3.
This is an iterative procedure, determining the zone where the equality condition u = g must be imposed. We define the contact status dividing Γ_C into two parts (a schematic loop is sketched after this list).
• Active zone: λ_h > 0, or λ_h = 0 and Br_h < 0
• Inactive zone: λ_h = 0 and Br_h ≥ 0
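A schematic version of this procedure is the following sketch; solve_with_status is a hypothetical routine solving the problem with u = g imposed on the current active zone and returning the updated nodal multipliers and residuals.

```python
# A schematic active-set loop following the status rule above; the solver
# callable is a hypothetical placeholder.
def active_set_loop(lam, r, solve_with_status, max_it=50):
    for k in range(max_it):
        active = (lam > 0) | (r < 0)        # contact status of each node
        lam, r = solve_with_status(active)  # impose u = g on the active zone
        if (((lam > 0) | (r < 0)) == active).all():
            break                           # status unchanged: converged
    return lam, r
```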
We consider, for example as in Fig. 5.3, an elastic body in contact with a rigid surface. We shall restrict ourselves to the case of frictionless contact, as the frictional case would need a much more complex development. We refer to [34] for a more general presentation. We thus look for a displacement u minimising some elasticity potential J(v) under suitable boundary conditions. In the case of linear elasticity, we would have
\[
J(v) = \mu \int_\Omega |\varepsilon(v)|^2\, dx - \int_\Omega f \cdot v\, dx.
\]
Fig. 5.3 Contact with a horizontal rigid plane: an elastic body subjected to a displacement u. Illustration of the oriented distance computed on the potential contact surface Γ_C
The non-penetration condition reads d(u) ≥ 0 on Γ_C, and the contact problem consists in solving
\[
\inf_{d(v) \ge 0} J(v).
\]
Contact Pressure
The next step is to introduce a Lagrange multiplier λ ∈ Λ for the constraint: the contact pressure [35, 59]. We thus transform our minimisation problem into the saddle-point problem,
where the operator A(u) represents the constitutive law of the material. From this system we deduce the Kuhn-Tucker conditions,
\[
\begin{cases}
\lambda_n \ge 0, \\[4pt]
(d(u), \mu) \ge 0 \quad \forall \mu \ge 0, \\[4pt]
(d(u), \lambda_n) = 0.
\end{cases}
\]
\[
d'(u^0) \cdot \delta u = \delta u \cdot n
\]
Here δu is the correction of the initial value u⁰, and we have linearised both the constitutive law of the material, represented by the operator A, and the distance function. The Kuhn-Tucker conditions then become
\[
\begin{cases}
\lambda_n \ge 0, \\[4pt]
(g_n^0 - \delta u \cdot n, \mu_n) \ge 0 \quad \forall \mu_n \ge 0, \\[4pt]
(g_n^0 - \delta u \cdot n, \lambda_n) = 0.
\end{cases}
\]
\[
b(v, \lambda_n) = (v \cdot n, \lambda_n)_{1/2},
\]
that is, with the scalar product in Λ ⊂ H^{1/2}(Γ_C). As we have seen in Remark 5.3, we can also write (5.17) with
\[
\langle \lambda', v \cdot n \rangle = \langle R\lambda, v \cdot n \rangle.
\]
We recall that this is the same formulation written in two different ways.
and the associated matrix R. In our computations we shall rely on Algorithm 3.10. An important issue is the choice of M_S and its inverse M_S^{−1}. The natural choice here is M_S = R. From Proposition 2.2, to obtain a convergence independent of the mesh size, R should define on Λ_h a scalar product coherent with the scalar product of H^{1/2}, in order to have an inf-sup condition independent of h.
We thus have to make a compromise: a better R yields a better convergence but may be harder to compute.
In the results presented below, we use the simple approximation by the L² scalar product in Λ_h and the matrix R becomes M_0, defined by
\[
\langle M_0 \lambda, \mu \rangle = \int_{\Gamma_C} \lambda_h \mu_h\, ds. \tag{5.18}
\]
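For instance, with piecewise linear multipliers on a one-dimensional contact boundary with nodes x (a simplified setting chosen only for illustration), M_0 is the usual boundary mass matrix:

```python
# A minimal assembly of the boundary mass matrix M_0 of (5.18) for
# piecewise linear multipliers on a 1D contact boundary with nodes x.
import numpy as np

def boundary_mass(x):
    n = len(x)
    M = np.zeros((n, n))
    for e in range(n - 1):                 # loop over boundary segments
        h = x[e + 1] - x[e]
        M[e:e+2, e:e+2] += h / 6.0 * np.array([[2.0, 1.0],
                                               [1.0, 2.0]])
    return M                               # <M0 λ, μ> = ∫_ΓC λ_h μ_h ds
```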
We thus have
\[
u = \{ u_h(y_j),\ 1 \le j \le N_V \}, \qquad
\lambda = \{ \lambda_h(x_i),\ 1 \le i \le N_\Lambda \}.
\]
Denoting by ⟨·, ·⟩ the scalar product in R^{N_V} or R^{N_Λ}, we thus have the matrices associated with the operators R_h and B_h.
\[
\lambda_{nh}(x_i) = \lambda_h(x_i) \cdot n_i \tag{5.19}
\]
We denote by Λ_{nh} the subset of Λ_h of normal vectors of the form (5.19) and
\[
\Lambda_{nh}^+ = \{ \lambda_{nh} \in \Lambda_{nh} \mid \lambda_n(x_i) \ge 0 \}.
\]
Defining B_n u = {(Bu)_i · n_i}, the constraint reads
\[
\langle B_n u, \mu_n^+ \rangle \ge 0 \quad \forall \mu_n^+.
\]
\[
\inf_v \sup_{\lambda_n} \frac{1}{2} \langle Av, v \rangle + \langle R\lambda_n, R^{-1} B_n v - g \rangle - \langle f, v \rangle \tag{5.20}
\]
that is also
\[
\inf_v \sup_{\lambda_n} \frac{1}{2} \langle Av, v \rangle + \langle \lambda_n, B_n v - Rg \rangle - \langle f, v \rangle. \tag{5.21}
\]
This problem can clearly be solved by the algorithms of Chap. 3. We must however
introduce a way to handle the inequality constraint. To do this we first need the
notion of contact status.
Contact Status
Let r_n = B_n u − g be the normal residual. A basic tool in the algorithms which follow will be the contact status. It will be represented pointwise by the operator P(λ, r_n) defined by,
\[
\begin{cases}
\text{if } \lambda_n = 0: & (1)\ \text{if } r_n \le 0 \text{ then } P(\lambda, r_n) = 0, \\
& (2)\ \text{if } r_n > 0 \text{ then } P(\lambda, r_n) = r_n, \\
\text{if } \lambda_n > 0: & (3)\ P(\lambda, r_n) = r_n.
\end{cases}
\]
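This pointwise rule transcribes directly; lam and r_n below are arrays of nodal multipliers and normal residuals.

```python
# The contact-status operator P(λ, r_n), transcribed pointwise:
# P = r_n when λ_n > 0 (case 3) or r_n > 0 (case 2), and 0 otherwise (case 1).
import numpy as np

def contact_status(lam, r_n):
    return np.where((lam > 0) | (r_n > 0), r_n, 0.0)
```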
We can now present our solution strategy for the sliding contact problem.
A Newton Method
To illustrate the behaviour of the algorithm we present the case of an elastic cube
being forced on a rigid sphere (see Fig. 5.4). In this case a displacement is imposed
on the upper surface of the cube. We want to illustrate the behaviour of the algorithm
as the mesh gets refined (see Table 5.1).
Table 5.1 Iteration’s number and CPU time according to the total number of degrees of freedom
(dof) of u and the solving method for the primal system
14,739 dof 107,811 dof 823,875 dof
# it. CPU (s) # it. CPU (s ) # it. CPU (s)
LU 32 6.85 39 208.46 41 13099
GCR(10,HP-LU) 40 8.96 42 59.06 53 477.36
Fig. 5.5 Convergence for rn . On top a coarse mesh (14,739 dof for u). Bottom left a mesh with
107,811 dof, on the right a mesh with 823,875 dof
In theory, this is not fundamentally different from the problem (2.22). From the numerical point of view, things are not so simple, as the presence of two independent constraints brings new difficulties in the building of algorithms and preconditioners. We present some ideas which indeed yield more questions than answers.
\[
\inf_v \sup_{p, \lambda_n} \frac{1}{2} \langle Av, v \rangle + \langle \lambda_n, Cv - r_\lambda \rangle + \langle p, Bv - r_p \rangle - \langle r_u, v \rangle.
\]
This is indeed a problem similar to (6.1). We shall first consider a naive solution
technique, the interlaced method and then reconsider the use of approximate
factorisations.
It would be natural to rewrite the system (6.1) as a problem with a single constraint. To do so we introduce block matrices, writing the system as
\[
\begin{pmatrix} \bar A & \bar C^t \\ \bar C & 0 \end{pmatrix}
\begin{pmatrix} \bar u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} \bar r_u \\ \bar r_\lambda \end{pmatrix}
\]
with
\[
\bar A = \begin{pmatrix} A & B^t \\ B & 0 \end{pmatrix}, \quad
\bar C = \begin{pmatrix} C & 0 \end{pmatrix}, \quad
\bar u = \begin{pmatrix} u \\ p \end{pmatrix}, \quad
\bar r_u = \begin{pmatrix} r_u \\ r_p \end{pmatrix} \quad \text{and} \quad
\bar r_\lambda = r_\lambda.
\]
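With SciPy, assembling these block matrices is immediate; A, B and C are assumed given as sparse matrices of compatible sizes.

```python
# Building the single-constraint block form: the primal block couples
# (u, p), while the contact constraint acts on u only.
import scipy.sparse as sp

def single_constraint_blocks(A, B, C):
    A_bar = sp.bmat([[A, B.T], [B, None]], format="csr")
    Z = sp.csr_matrix((C.shape[0], B.shape[0]))   # zero block: C ignores p
    C_bar = sp.bmat([[C, Z]], format="csr")
    return A_bar, C_bar
```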
Remark 6.2 (Projected Gradient) If we refer to Sect. 3.1.3, one sees that what we are doing is iterating in the divergence-free subspace, provided the problems in (u, p) are solved precisely.
To illustrate the behaviour of the algorithm, we first consider the case of an accurate solution in (u, p). This will be done as in Sect. 4.2.4. For the solve in u, we take either a direct LU solver or some iterations of GCR preconditioned by HP-LU. This is clearly an expensive procedure, only used for comprehension.
We present the convergence in λ for the intermediate mesh of Sect. 5.2.3 with 107,811 degrees of freedom.
As can be expected, the results are comparable for the two solvers in u, as we use essentially the same information to update λ. Indeed, if we have an accurate solution in (u, p), it should not be dependent on the way it is computed (Fig. 6.1).
When an incomplete solution in (u, p) is considered, by limiting the number of iterations permitted in the Mixed-GMP-GCR method, the iterative solution in λ_n loses effectiveness (Fig. 6.2).
Fig. 6.1 Convergence of the residual according to the primal solver with the intermediate mesh
Fig. 6.2 Convergence of the residual with an incomplete solution in (u, p)
Although it works, it is clearly an expensive method (Table 6.1) and we did not
push the test to finer meshes. We now consider another approach which looked more
promising.
We shall denote
\[
S = \begin{pmatrix} S_{BB} & S_{BC} \\ S_{CB} & S_{CC} \end{pmatrix}
\]
and compute
\[
z_u^* = \tilde A^{-1} r_u, \qquad
\tilde r_p = B z_u^* - r_p, \qquad
\tilde r_\lambda = C z_u^* - r_\lambda. \tag{6.1}
\]
• Solve
\[
\begin{pmatrix} S_{BB} & S_{BC} \\ S_{CB} & S_{CC} \end{pmatrix}
\begin{pmatrix} z_p \\ z_\lambda \end{pmatrix}
=
\begin{pmatrix} \tilde r_p \\ \tilde r_\lambda \end{pmatrix} \tag{6.2}
\]
• Compute
\[
z_u = z_u^* - \tilde A^{-1} B^t z_p - \tilde A^{-1} C^t z_\lambda \tag{6.3}
\]
The key is thus to solve (6.2). There is clearly no general way of doing this. Starting from the point of view that we have approximate solvers for S_{BB} and S_{CC}, we may use a Gauss-Seidel iteration for z_p and z_λ. In the simplest case we do one iteration for each problem, as in the sketch below.
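In code, one such sweep could read as follows; solve_BB and solve_CC stand for the approximate solvers of the diagonal blocks.

```python
# One block Gauss-Seidel sweep for system (6.2): update z_p with the
# current z_λ, then update z_λ with the new z_p.
def gauss_seidel_sweep(solve_BB, solve_CC, S_BC, S_CB,
                       r_p, r_lam, z_p, z_lam):
    z_p = solve_BB(r_p - S_BC @ z_lam)
    z_lam = solve_CC(r_lam - S_CB @ z_p)
    return z_p, z_lam
```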
Table 6.2 Number of iterations and CPU time (s) according to the size of the mesh and the solving method for the primal system

              | 14,739 dof      | 107,811 dof      | 823,875 dof
              | # it. | CPU (s) | # it. | CPU (s)  | # it. | CPU (s)
LU            | 55    | 8.36    | 88    | 120.94   | 124   | 3432.87
GCR(10)       | 129   | 92.97   | 141   | 724.54   | 209   | 7940.04
CG(10,GAMG)   | 129   | 62.98   | 99    | 332.34   | 162   | 4459.59
GCR(5,HP-AMG) | 174   | 36.66   | 159   | 191.78   | 159   | 1533.71
Fig. 6.3 Convergence curves for the normal residual (left) and hydrostatic pressure p (right)
according to the primal problem solving method when the middle mesh is considered
Fig. 6.4 Convergence curves for the normal residual (left) and hydrostatic pressure p (right)
according to the primal problem solving method when the finest mesh is considered
stagnation, keeping in mind that LU becomes more and more inefficient (Figs. 6.3
and 6.4).
Table 6.3 Alternating method: number of iterations/CPU (s) according to the size of the mesh and the solving method for the primal system

              | 14,739 dof      | 107,811 dof      | 823,875 dof
              | # it. | CPU (s) | # it. | CPU (s)  | # it. | CPU (s)
LU            | 66    | 7       | 110   | 93       | 120   | 2585
GCR(10)       | 110   | 35      | 120   | 294      | 150   | 2966
GCR(5,HP-LU)  | 120   | 17      | 150   | 139      | 160   | 1168
Fig. 6.6 Alternating method: convergence curves for the normal residual (left) and hydrostatic
pressure (p) according to the primal problem solving method when the middle mesh is considered
Fig. 6.7 Alternating method: convergence curves for the normal residual (left) and the hydrostatic pressure p (right) according to the primal problem solving method when the finest mesh is considered
This is a crude first test and things could clearly be improved. In particular, one could think of marrying the alternating idea with the sequential technique of the previous Sect. 6.3.1. There is room for new ideas...
Chapter 7
Conclusion
We hope to have shown that solving mixed problems can be accomplished efficiently. This work is clearly not exhaustive and we have indeed tried to open the way for future research. We have relied on rather classical iterative methods as building bricks. However, we think that we have assembled these bricks in some new ways. We have also insisted on developing methods as free as possible of user-dependent parameters.
We have also considered Augmented Lagrangians, either in an exact or a regularised version. This was done with the idea that direct methods should be avoided for large scale computations and that penalty terms destroy the condition number of the penalised system.
• For mixed formulations based on elements satisfying the equilibrium conditions, we have shown that the Augmented Lagrangian is efficient. The problem presented was very simple but we think that the results could be extended to more realistic situations. In the case of mixed elasticity in which a symmetry condition has to be imposed [19], one would have to deal with two constraints as in Chap. 6. The situation would be better than in the example presented there, as the equilibrium constraint is amenable to a real augmented Lagrangian.
• Problems involving incompressible elasticity are of central importance in many
applications. Unfortunately, they are often solved with poor methods using
penalty and low order elements. We have shown that continuous pressure
elements, which are essential for accurate three-dimensional computations at
reasonable cost, are manageable and can be accelerated by a stabilisation term.
• For contact problems, we have considered some possible alternative avenues to the standard approximations, where the constraint is treated in L² instead of the correct H^{1/2}. This is still an open area. We have shown that using the more classical formulation, one can obtain results for large meshes with good efficiency.
• For problems involving two constraints, we have explored some possibilities and
many variants are possible.
The methods that we discussed can in most cases be employed for parallel computing. We did not venture in this direction, which would require a research work by itself. We must nevertheless emphasise that the PETSc package which we used is intrinsically built for parallel computing.
Bibliography
1. B. Achchab, J.F. Maître, Estimate of the constant in two strengthened C.B.S. inequalities for F.E.M. systems of 2D elasticity: application to multilevel methods and a posteriori error estimators. Numer. Linear Algebra Appl. 3(2), 147–159 (1996)
2. ADINA Inc., Instability of two-term Mooney-Rivlin model. https://www.adina.com/newsgH48.shtml
3. P. Alart, A. Curnier, A mixed formulation for frictional contact problems prone to Newton like
solution methods. Comput. Methods Appl. Mech. Eng. 92(3), 353–375 (1991)
4. G. Allaire, S.M. Kaber, Numerical Linear Algebra. Texts in Applied Mathematics, vol. 55
(Springer, New York, 2008)
5. M. Arioli, D. Kourounis, D. Loghin, Discrete fractional Sobolev norms for domain decompo-
sition preconditioning. IMA J. Numer. Anal. 33(1), 318–342 (2013)
6. D.N. Arnold, F. Brezzi, Mixed and nonconforming finite element methods: implementation,
postprocessing and error estimates. ESAIM: Math. Model. Numer. Anal. 19(1), 7–32 (1985)
7. W.E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue
problem. Quart. Appl. Math. 9(1):17–29 (1951)
8. K.J. Arrow, L. Hurwicz, H. Uzawa, Studies in Linear and Non-linear Programming. Stanford
Mathematical Studies in the Social Sciences, vol. 2 (Stanford University Press, Stanford, 1958)
9. O. Axelsson, G. Lindskog, On the rate of convergence of the preconditioned conjugate gradient
method. Numer. Math. 48(5), 499–523 (1986)
10. Z.Z. Bai, B.N. Parlett, Z.Q. Wang, On generalized successive overrelaxation methods for
augmented linear systems. Numer. Math. 102(1), 1–38 (2005)
11. S. Balay, S. Abhyankar, M. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, A. Dener,
V. Eijkhout, W. Gropp, D. Karpeyev, D. Kaushik, M. Knepley, D. May, L. Curfman McInnes,
R. Mills, T. Munson, K. Rupp, P. Sanan, B. Smith, S. Zampini, H. Zhang, H. Zhang, PETSc
Users Manual: Revision 3.10. (Argonne National Lab. (ANL), Argonne, 2018)
12. J. Baranger, J.F. Maitre, F. Oudin, Connection between finite volume and mixed finite element
methods. ESAIM Math. Model. Numer. Anal. 30(4), 445–465 (1996)
13. K.J. Bathe, F. Brezzi, Stability of finite element mixed interpolations for contact problems.
Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali.
Rendiconti Lincei. Matematica e Applicazioni 12(3), 167–183 (2001)
14. L. Beilina, E. Karchevskii, M. Karchevskii, Solving systems of linear equations, in Numerical
Linear Algebra: Theory and Applications (Springer, Cham, 2017), pp. 249–289
15. M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems. Acta Numer.
14, 1–137 (2005)
40. H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point
problems. SIAM J. Numer. Anal. 31(6), 1645–1661 (1994)
41. H.C. Elman, D.J. Silvester, A.J. Wathen, Finite Elements and Fast Iterative Solvers: With
Applications in Incompressible Fluid Dynamics. Numerical Mathematics and Scientific Com-
putation (Oxford University Press, Oxford, 2005)
42. A. Fortin, A. Garon, Les Élements Finis de La Théorie à La Pratique. GIREF, Université Laval
(2018)
43. M. Fortin, R. Glowinski, Augmented Lagrangian Methods: Applications to the Numerical
Solution of Boundary-Value Problems. Studies in Mathematics and its Applications, vol. 15
(North-Holland, Amsterdam, 1983)
44. M.J. Gander, Optimized Schwarz methods. SIAM J. Numer. Anal. 44(2), 699–731 (2006)
45. A. Ghai, C. Lu, X. Jiao, A comparison of preconditioned Krylov subspace methods for large-
scale nonsymmetric linear systems. Numer. Linear Algebra Appl. 26(1), e2215 (2019)
46. G.H. Golub, C. Greif, On solving block-structured indefinite linear systems. SIAM J. Sci.
Comput. 24(Part 6), 2076–2092 (2003)
47. G.H. Golub, C.F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical
Sciences, 3rd edn. (Johns Hopkins University Press, Baltimore, 1996)
48. P.P. Grinevich, M.A. Olshanskii, An iterative method for the Stokes-type problem with variable viscosity. SIAM J. Sci. Comput. 31(5), 3959–3978 (2010)
49. A. Günnel, R. Herzog, E. Sachs, A note on preconditioners and scalar products in Krylov
subspace methods for self-adjoint problems in Hilbert space. Electron. Trans. Numer. Anal.
41, 13–20 (2014)
50. W. Hackbusch, Iterative Solution of Large Sparse Systems of Equations. Applied Mathematical
Sciences, vol. 95, 2nd edn. (Springer, Amsterdam, 2016)
51. R. Herzog, K.M. Soodhalter, A modified implementation of Minres to monitor residual
subvector norms for block systems. SIAM J. Sci. Comput. 39(6), A2645–A2663 (2017)
52. M.R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems. J. Res.
Natl. Bureau Standards 49(6), 409–436 (1952)
53. G.A. Holzapfel, Nonlinear Solid Mechanics : A Continuum Approach for Engineering (Wiley,
New York, 2000)
54. S. Hüeber, B.I. Wohlmuth, A primal-dual active set strategy for non-linear multibody contact
problems. Comput. Methods Appl. Mech. Eng. 194(27), 3147–3166 (2005)
55. S. Hüeber, G. Stadler, B.I. Wohlmuth, A primal-dual active set algorithm for three-dimensional contact problems with Coulomb friction. SIAM J. Sci. Comput. 30(2), 572–596 (2009)
56. K. Ito, K. Kunisch, Augmented Lagrangian methods for nonsmooth, convex optimization in
Hilbert spaces. Nonlinear Anal. 41(5), 591–616 (2000)
57. K. Ito, K. Kunisch, Optimal control of elliptic variational inequalities. Appl. Math. Optim. Int.
J. Appl. Stoch. 41(3), 343–364 (2000)
58. E.G. Johnson, A.O. Nier, Angular aberrations in sector shaped electromagnetic lenses for
focusing beams of charged particles. Phys. Rev. 91(1), 10–17 (1953)
59. N. Kikuchi, J.T. Oden, Contact Problems in Elasticity: A Study of Variational Inequalities and
Finite Element Methods. SIAM Studies in Applied Mathematics, vol. 8 (SIAM, Philadelphia,
1988)
60. C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential
and integral operators. J. Res. Natl. Bureau Standards 45, 255–282 (1950)
61. S. Léger, Méthode lagrangienne actualisée pour des problèmes hyperélastiques
en très grandes déformations. Ph.D. Thesis, Université Laval, Canada (2014).
https://corpus.ulaval.ca/jspui/handle/20.500.11794/25402
62. J.L. Lions, E. Magenes, Non-Homogeneous Boundary Value Problems and Applications.
Grundlehren der mathematischen wissenschaften; bd. 181, vol. 1 (Springer, Berlin, 1972)
63. D.C. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization. Math.
Program. 45(1–3), 503–528 (1989)
64. D. Loghin, A.J. Wathen, Analysis of preconditioners for saddle-point problems. SIAM J. Sci.
Comput. 25(6), 2029–2049 (2004)
65. D.G. Luenberger, The conjugate residual method for constrained minimization problems.
SIAM J. Numer. Anal. 7(3), 390–398 (1970)
66. C.F. Ma, Q.Q. Zheng, The corrected Uzawa method for solving saddle point problems. Numer.
Linear Algebra Appl. 22(4), 717–730 (2015)
67. K.-A. Mardal, R. Winther, Preconditioning discretizations of systems of partial differential
equations. Numer. Linear Algebra Appl. 18(1), 1–40 (2011)
68. L.D. Marini, An inexpensive method for the evaluation of the solution of the lowest order
Raviart-Thomas mixed method. SIAM J. Numer. Anal. 22(3), 493–496 (1985)
69. H.O. May, The conjugate gradient method for unilateral problems. Comput. Struct. 22(4), 595–598 (1986)
70. S.F. McCormick, Multigrid Methods. Frontiers in applied mathematics, vol. 3 (Society for
Industrial and Applied Mathematics, Philadelphia, 1987)
71. G.A. Meurant, Computer Solution of Large Linear Systems. Studies in Mathematics and Its
Applications, vol. 28 (North-Holland, Amsterdam, 1999)
72. J. Nocedal, S. Wright, Numerical Optimization. Springer Series in Operations Research and
Financial Engineering, 2nd edn. (Springer, New York, 2006)
73. R.W. Ogden, Non-linear Elastic Deformations. Ellis Horwood Series in Mathematics and Its
Applications (Ellis Horwood, Chichester, 1984)
74. C.C. Paige, M.A. Saunders, Solution of sparse indefinite systems of linear equations. SIAM J.
Numer. Anal. 12(4), 617–629 (1975)
75. J. Pestana, A.J. Wathen, Natural preconditioning and iterative methods for saddle point
systems. SIAM Rev. 57(1), 71–91 (2015)
76. L. Plasman, J. Deteix, D. Yakoubi, A projection scheme for Navier-Stokes with variable
viscosity and natural boundary condition. Int. J. Numer. Methods Fluids 92(12), 1845–1865
(2020)
77. A. Quarteroni, A. Valli, Theory and application of Steklov-Poincaré operators for boundary-
value problems, in Applied and Industrial Mathematics: Venice - 1, 1989, ed. by R. Spigler,
Mathematics and Its Applications (Springer, Dordrecht, 1991)
78. A. Quarteroni, R. Sacco, F. Saleri, Méthodes Numériques: Algorithmes, Analyse et Applica-
tions (Springer, Milano, 2007)
79. R.T. Rockafellar, The multiplier method of Hestenes and Powell applied to convex program-
ming. J. Optim. Theory Appl. 12(6), 555–562 (1973)
80. Y. Saad, A flexible inner-outer preconditioned GMRES algorithm. SIAM J. Sci. Comput. 14(2),
461–469 (1993)
81. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edn. (Society for Industrial and
Applied Mathematics, Philadelphia, 2003)
82. B.V. Shah, R.J. Buehler, O. Kempthorne, Some algorithms for minimizing a function of several
variables. J. Soc. Ind. Appl. Math. 12(1), 74–92 (1964)
83. D.J. Silvester, V. Simoncini, An optimal iterative solver for symmetric indefinite systems
stemming from mixed approximation. ACM Trans. Math. Softw. 37(4), (2011)
84. J.C. Simo, T.J.R. Hughes, Computational Inelasticity. Interdisciplinary Applied Mathematics,
vol. 7. (Springer, New York, 1998)
85. R. Temam, Navier–Stokes Equations: Theory and Numerical Analysis. Studies in Mathematics and Its Applications (North-Holland, Amsterdam, 1977)
86. L.N. Trefethen, D. Bau, Numerical Linear Algebra (Society for Industrial and Applied
Mathematics, Philadelphia, 1997)
87. A. van der Sluis, H.A. van der Vorst, The rate of convergence of conjugate gradients. Numer.
Math. 48(5), 543–560 (1986)
88. H.A. van der Vorst, Iterative Krylov Methods for large Linear Systems. Cambridge Monographs
on Applied and Computational Mathematics, vol. 13 (Cambridge University Press, New York,
2003)
89. R. Verfürth, A posteriori error estimation and adaptive mesh-refinement techniques. J. Comput.
Appl. Math. 50(1), 67–83 (1994)
90. A.J. Wathen, Realistic eigenvalue bounds for the Galerkin mass matrix. IMA J. Numer. Anal.
7(4), 449–457 (1987)
91. P. Wriggers, Computational Contact Mechanics, 2nd edn. (Springer, Berlin, 2006)