$$\langle r\, x_l, y_{l-1}\rangle^{(l-1)} = \langle p^{*} x_l, y_{l-1}\rangle^{(l-1)} = \langle x_l, p\, y_{l-1}\rangle^{(l)}.$$

If p is the canonical prolongation, then r is called the canonical restriction.

For the representation matrices,

$$r = \frac{S_{l-1}}{S_l}\, p^T. \qquad (5.105)$$

In example (5.102) for $d = 2$ with $h_l = h_{l-1}/2$ we have $S_{l-1}/S_l = 1/4$. Due to $P_l\, p = P_{l-1}$, the canonical restriction of $\mathbb{R}^{M_l}$ on $\mathbb{R}^{M_{l-1}}$ satisfies

$$r R_l = R_{l-1},$$

where $R_l : V_l \to \mathbb{R}^{M_l}$ is defined as the adjoint of $P_l$,

$$\langle P_l x_l, v_l\rangle_0 = \langle x_l, R_l v_l\rangle^{(l)} \quad \text{for all } x_l \in \mathbb{R}^{M_l},\ v_l \in V_l,$$

because for any $y_{l-1} \in \mathbb{R}^{M_{l-1}}$ and for $v_{l-1} \in V_{l-1} \subset V_l$,

$$\langle r R_l v_{l-1}, y_{l-1}\rangle^{(l-1)} = \langle R_l v_{l-1}, p\, y_{l-1}\rangle^{(l)} = \langle v_{l-1}, P_l\, p\, y_{l-1}\rangle_0$$
$$= \langle v_{l-1}, P_{l-1} y_{l-1}\rangle_0 = \langle R_{l-1} v_{l-1}, y_{l-1}\rangle^{(l-1)}.$$
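As a concrete illustration: on nested 1D grids with $h_l = h_{l-1}/2$ the scaling factor is $S_{l-1}/S_l = 1/2$ (the $d = 1$ analogue of the value $1/4$ for $d = 2$), and $r = (S_{l-1}/S_l)\, p^T$ applied to the linear-interpolation prolongation yields the classical full-weighting stencil $[1/4, 1/2, 1/4]$. A minimal sketch, assuming the 1D finite difference Laplacian as model matrix and illustrative grid sizes:

```python
# Sketch: canonical prolongation (linear interpolation) and restriction
# r = (S_{l-1}/S_l) p^T on nested 1D grids; S_{l-1}/S_l = 1/2 for d = 1.
# Grid sizes and the finite difference model matrix are assumptions.

def prolongation(n_coarse):
    """Linear interpolation R^{n_coarse} -> R^{2*n_coarse + 1} (interior nodes)."""
    n_fine = 2 * n_coarse + 1
    p = [[0.0] * n_coarse for _ in range(n_fine)]
    for j in range(n_coarse):
        i = 2 * j + 1               # fine index of the j-th coarse node
        p[i][j] = 1.0               # value at the common node is kept
        p[i - 1][j] = 0.5           # new fine nodes get interpolated halves
        p[i + 1][j] = 0.5
    return p

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def laplacian(n, h):
    """(1/h^2) tridiag(-1, 2, -1), the 1D model matrix."""
    return [[(2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)) / h ** 2
             for j in range(n)] for i in range(n)]

p = prolongation(3)                  # coarse h = 1/4, fine h = 1/8
r = [[0.5 * p[i][j] for i in range(7)] for j in range(3)]   # r = (1/2) p^T
# each row of r is the full-weighting stencil [1/4, 1/2, 1/4]
galerkin = matmul(matmul(r, laplacian(7, 1 / 8)), p)        # r A_l p
coarse = laplacian(3, 1 / 4)
```

The last two lines verify that the product $r A_l p$ reproduces the level-$(l-1)$ model matrix exactly, anticipating the Galerkin product discussed below.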

Using (5.105) we see the equivalence of equation (5.98) to

$$(r A_l p)\, y_{l-1} = r\, d_l. \qquad (5.106)$$

Setting $v := P_{l-1} \tilde{y}_{l-1}$ for a perhaps only approximative solution $\tilde{y}_{l-1}$ of (5.106), the coarse grid correction will be finished by addition of $P_l^{-1} v$. Due to

$$P_l^{-1} v = P_l^{-1} P_{l-1} P_{l-1}^{-1} v = p \left( P_{l-1}^{-1} v \right),$$

the coarse grid correction is

$$x_l^{(k+2/3)} = x_l^{(k+1/3)} + p\, \tilde{y}_{l-1}.$$

The above-mentioned facts suggest the following structure of a general multigrid method: For discretizations defining a hierarchy of discrete problems,

$$A_l x_l = b_l,$$

one needs prolongations

$$p : \mathbb{R}^{M_{k-1}} \to \mathbb{R}^{M_k}$$

248 5. Iterative Methods for Systems of Linear Equations

and restrictions

$$r : \mathbb{R}^{M_k} \to \mathbb{R}^{M_{k-1}}$$

for $k = 1, \ldots, l$ and the matrices $\tilde{A}_{k-1}$ for the error equations. The coarse grid correction steps (5.93) and (5.95) hence take the following form:

Solve (with $\mu$ steps of the multigrid method)

$$\tilde{A}_{l-1} y_{l-1} = r\left( b_l - A_l x_l^{(k+1/3)} \right)$$

and set

$$x_l^{(k+2/3)} = x_l^{(k+1/3)} + p\, y_{l-1}.$$
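These steps can be sketched as a recursive procedure. The following is a minimal illustration under assumptions that the text does not prescribe: 1D Poisson model matrices, damped Jacobi with $\omega = 2/3$ as smoother $S_l$, linear interpolation as $p$, full weighting as $r$, and $\nu = 2$, $\mu = 1$:

```python
# Sketch of the multigrid cycle (illustrative choices throughout: 1D Poisson
# matrices, damped Jacobi smoothing, linear interpolation p, full weighting r).

def laplacian(n, h):
    return [[(2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)) / h ** 2
             for j in range(n)] for i in range(n)]

def matvec(a, x):
    return [sum(a[i][k] * x[k] for k in range(len(x))) for i in range(len(a))]

def jacobi(a, x, b, nu, omega=2.0 / 3.0):
    for _ in range(nu):
        ax = matvec(a, x)
        x = [x[i] + omega * (b[i] - ax[i]) / a[i][i] for i in range(len(x))]
    return x

def restrict(res):                       # full weighting r
    return [0.25 * res[i - 1] + 0.5 * res[i] + 0.25 * res[i + 1]
            for i in range(1, len(res) - 1, 2)]

def prolong(y):                          # linear interpolation p
    fine = [0.0] * (2 * len(y) + 1)
    for j, v in enumerate(y):
        fine[2 * j] += 0.5 * v
        fine[2 * j + 1] += v
        fine[2 * j + 2] += 0.5 * v
    return fine

def mg(level, mats, x, b, nu=2, mu=1):
    a = mats[level]
    if level == 0:                       # coarsest grid (one unknown): exact solve
        return [b[0] / a[0][0]]
    x = jacobi(a, x, b, nu)              # a priori smoothing
    ax = matvec(a, x)
    d = restrict([b[i] - ax[i] for i in range(len(x))])
    y = [0.0] * len(d)
    for _ in range(mu):                  # mu coarse grid (multigrid) steps
        y = mg(level - 1, mats, y, d, nu, mu)
    x = [xi + c for xi, c in zip(x, prolong(y))]     # correction x + p y
    return jacobi(a, x, b, nu)           # a posteriori smoothing

levels = 5                               # n_k = 2^{k+1} - 1, h_k = 2^{-(k+1)}
mats = [laplacian(2 ** (k + 1) - 1, 2.0 ** -(k + 1)) for k in range(levels)]
x, b = [0.0] * 31, [1.0] * 31            # -u'' = 1, u(0) = u(1) = 0
for _ in range(12):                      # twelve V-cycles (mu = 1)
    x = mg(levels - 1, mats, x, b)
residual = max(abs(bi - ai) for bi, ai in zip(b, matvec(mats[-1], x)))
```

For this right-hand side the discrete solution agrees with $u(x) = x(1-x)/2$ at the nodes, which gives a simple correctness check of the converged iterate.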

The above choice

$$\tilde{A}_{l-1} = r A_l p$$

is called the Galerkin product. For Galerkin approximations this coincides with the discretization matrix of the same type on the grid of level $l-1$ due to (5.97). This is also a common choice for other discretizations and then an alternative to the Galerkin product. In view of the choice of $p$ and $r$ we should observe the validity of (5.104). An interpolational definition of the prolongation on the basis of (finite element) basis functions as for example (5.101) (see also example (5.99)) is also common in other discretizations. In more difficult problems, as for example those with (dominant) convection in addition to diffusive transport processes, nonsymmetric problems arise with a small constant of $V$-ellipticity. Here the use of matrix-dependent, that means $A_l$-dependent, prolongations and restrictions is recommended.

5.5.3 Effort and Convergence Behaviour

In order to judge the efficiency of a multigrid method the number of operations per iteration and the number of iterations (required to reach an error level $\varepsilon$, see (5.4)) have to be estimated. Due to the recursive structure, the first number is not immediately clear. The aim is to have only the optimal amount of $O(M_l)$ operations for sparse matrices. For this the dimensions of the auxiliary problems have to decrease sufficiently. This is expressed by the following:

There exists a constant $C > 1$ such that

$$M_{l-1} \le M_l / C \quad \text{for } l \in \mathbb{N}. \qquad (5.107)$$

Hence we assume an infinite hierarchy of problems and/or grids, which also corresponds to the asymptotic point of view of a discretization from Section 3.4. Relation (5.107) is thus a condition for a refinement strategy. For the model problem of the Friedrichs–Keller triangulation of a rectangle (see Figure 2.9) in the case of a regular "red" refinement we have $h_l = h_{l-1}/2$. Thus $C = 4$, and for analogous constructions in $d$ space dimensions

5.5. Multigrid Method 249

$C = 2^d$. The matrices that appear should be sparse, so that for level $l$ the following holds:

smoothing step: $C_S M_l$ operations,
error calculation and restriction: $C_D M_l$ operations,
prolongation and correction: $C_C M_l$ operations.

Then we can prove the following (see [16, p. 326]):

If the number $\mu$ of multigrid steps in the recursion satisfies

$$\mu < C, \qquad (5.108)$$

then the number of operations for an iteration step for a problem on level $l$ can be estimated by

$$C(\nu) M_l. \qquad (5.109)$$

Here $\nu$ is the number of a priori and a posteriori smoothing steps and

$$C(\nu) = \frac{\nu C_S + C_D + C_C}{1 - \mu/C} + O\!\left((\mu/C)^l\right).$$

The requirement (5.108) will be satisfied in general through the restriction to $\mu = 1$, $\mu = 2$. Analogously, the memory requirement is $O(M_l)$, since

$$\sum_{k=0}^{l} M_k \le \frac{C}{C-1}\, M_l.$$
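Both geometric-series estimates, the operation count and the memory bound, are easy to check numerically. A small sketch with $C = 4$ (the $d = 2$ model situation); the cost constants and the values of $\nu$, $\mu$ are arbitrary illustrative choices:

```python
# Numerical check of the work and memory estimates with C = 4 (the d = 2
# model situation); cost constants and nu, mu are illustrative assumptions.

C, mu, nu = 4, 2, 2
c_s, c_d, c_c = 5.0, 4.0, 3.0            # cost per unknown: smooth, defect, correct
levels = 12
M = [4 ** k for k in range(levels)]      # M_{l-1} = M_l / C exactly

# work recurrence: W_l = (nu c_S + c_D + c_C) M_l + mu W_{l-1}
W = [(nu * c_s + c_d + c_c) * M[0]]
for k in range(1, levels):
    W.append((nu * c_s + c_d + c_c) * M[k] + mu * W[-1])

leading = (nu * c_s + c_d + c_c) / (1.0 - mu / C)   # leading constant of C(nu)
work_ok = W[-1] < leading * M[-1]        # mu < C makes the sum geometric
mem_ok = sum(M) <= C / (C - 1) * M[-1]   # memory: sum M_k <= C/(C-1) M_l
```

The check `work_ok` exercises exactly the condition $\mu < C$: for $\mu \ge C$ the recursion no longer sums to a geometric series and the $O(M_l)$ bound fails.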

Whether this larger effort (of equal complexity) in comparison to other methods discussed is justified will be decided by the rate of convergence. The multigrid method is a linear stationary method. The iteration matrix $M_l^{TGM}$ of the two-grid method results from

$$x_l^{(k+1/2)} = S_l^\nu x_l^{(k)},$$
$$x_l^{(k+1)} = x_l^{(k+1/2)} + p\, A_{l-1}^{-1} r\left( b_l - A_l x_l^{(k+1/2)} \right)$$

to

$$M_l^{TGM} = \left(I - p A_{l-1}^{-1} r A_l\right) S_l^\nu. \qquad (5.110)$$

Also, the consistency of the method follows immediately if the smoothing iteration is consistent.

The analysis of the convergence of the multigrid method can be reduced to the analysis of the two-grid method, since the iteration matrix is a modification of $M_l^{TGM}$ (see [16, p. 328]). For a large class of a priori and a posteriori smoothing operators as well as of restrictions and prolongations it can be shown (see [16, p. 347]) that there exists a constant $\varrho \in (0, 1)$ independent of the discretization parameter $h_l$ such that $\varrho(M^{TGM}) \le \varrho$. Combined with (5.109) this shows that multigrid methods are optimal in their complexity. This also shows their potential superiority compared with all other methods described.


In the following we will only indicate the schematic procedure to prove this assertion. It is sufficient to prove the following two properties, where the spectral norm is used as the matrix norm, that is, the matrix norm that is induced by the Euclidean vector norm.

(1) Smoothing property: There exists $C_S > 0$ such that

$$\|A_l S_l^\nu\| \le \frac{C_S}{\nu}\, \|A_l\|.$$

(2) Approximation property: There exists $C_A > 0$ such that

$$\|A_l^{-1} - p A_{l-1}^{-1} r\| \le C_A\, \|A_l\|^{-1}. \qquad (5.111)$$

Due to

$$M^{TGM} = \left(A_l^{-1} - p A_{l-1}^{-1} r\right) A_l S_l^\nu,$$

we can conclude that

$$\|M^{TGM}\| \le \|A_l^{-1} - p A_{l-1}^{-1} r\|\, \|A_l S_l^\nu\| \le \frac{C_S C_A}{\nu},$$

which means that for sufficiently large $\nu$,

$$\|M^{TGM}\| \le \varrho < 1$$

with $\varrho$ independent of $l$.

The smoothing property is of an algebraic nature, but for the proof of the approximation property we will use, at least indirectly, the original variational formulation of the boundary value problem and the corresponding error estimate. Therefore, we discuss only the smoothing property for, as an example, the relaxed Richardson method for a symmetric positive definite matrix $A_l$, i.e.,

$$S_l = I_l - \omega A_l \quad \text{with } \omega \in \left(0, \frac{1}{\lambda_{\max}(A_l)}\right].$$

Let $\{z_i\}_{i=1}^{M_l}$ be an orthonormal basis of eigenvectors of $A_l$. For any initial vector $x^{(0)}$ represented in this basis as $x^{(0)} = \sum_{i=1}^{M_l} c_i z_i$ it follows that (compare (5.68))

$$\|A_l S_l^\nu x^{(0)}\|^2 = \sum_{i=1}^{M_l} \lambda_i^2 (1 - \lambda_i \omega)^{2\nu} c_i^2 = \omega^{-2} \sum_{i=1}^{M_l} (\lambda_i \omega)^2 (1 - \lambda_i \omega)^{2\nu} c_i^2$$
$$\le \omega^{-2} \left( \max_{\xi \in [0,1]} \xi (1 - \xi)^\nu \right)^2 \sum_{i=1}^{M_l} c_i^2.$$

The function $\xi \mapsto \xi(1-\xi)^\nu$ has its maximum at $\xi_{\max} = (\nu+1)^{-1}$; thus

$$\xi_{\max} \left(1 - \xi_{\max}\right)^\nu = \frac{1}{\nu+1} \left(1 - \frac{1}{\nu+1}\right)^{\nu} = \frac{1}{\nu} \left(\frac{\nu}{\nu+1}\right)^{\nu+1} \le \frac{1}{e\nu}.$$
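Both claims, that the maximum sits at $\xi_{\max} = (\nu+1)^{-1}$ and that its value is at most $1/(e\nu)$, can be checked numerically (a sketch; the sampled values of $\nu$ are arbitrary):

```python
# Sketch: check xi_max = 1/(nu+1) and the 1/(e*nu) bound; nu values arbitrary.
import math

for nu in (1, 2, 5, 10, 50):
    xi_max = 1.0 / (nu + 1)
    peak = xi_max * (1.0 - xi_max) ** nu
    grid = max((i / 1000.0) * (1.0 - i / 1000.0) ** nu for i in range(1001))
    assert grid <= peak + 1e-12          # no grid point beats (nu+1)^{-1}
    assert peak <= 1.0 / (math.e * nu)   # the estimate used above
```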


Hence

$$\|A_l S_l^\nu x^{(0)}\| \le \frac{1}{\omega e \nu}\, \|x^{(0)}\|,$$

which implies

$$\|A_l S_l^\nu\| \le \frac{1}{\omega e \nu}.$$

Since the inclusion $\omega \in (0, 1/\lambda_{\max}(A_l)]$ can be written in the form $\omega = \sigma / \|A_l\|$ with $\sigma \in (0, 1]$, we have $C_S = 1/(\sigma e)$.
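Since the argument above is carried out eigenvalue by eigenvalue, a diagonal positive definite $A_l$ already exercises the full statement. A sketch checking $\|A_l S_l^\nu\| \le 1/(\omega e \nu) = (C_S/\nu)\,\|A_l\|$; the spectrum and $\sigma$ are arbitrary test choices:

```python
# Sketch: smoothing property for a diagonal s.p.d. A_l (no loss of generality
# for the eigenvalue-wise argument); spectrum and sigma are test choices.
import math

lam = [0.5, 3.0, 10.0, 40.0, 100.0]      # eigenvalues of A_l; ||A_l|| = 100
sigma = 1.0                               # omega = sigma / ||A_l||, sigma in (0,1]
omega = sigma / max(lam)

for nu in (1, 2, 4, 8, 16):
    # for diagonal A_l: ||A_l S_l^nu|| = max_i lambda_i (1 - omega lambda_i)^nu
    norm = max(l * (1.0 - omega * l) ** nu for l in lam)
    # ||A_l S_l^nu|| <= 1/(omega e nu), i.e. (C_S/nu)||A_l|| with C_S = 1/(sigma e)
    assert norm <= 1.0 / (omega * math.e * nu)
```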

The approximation property can be motivated in the following way. The fine grid solution $x_l$ of $A_l x_l = d_l$ is replaced in the coarse grid correction by $p\, x_{l-1}$ from $A_{l-1} x_{l-1} = d_{l-1} := r d_l$. Therefore, $p\, x_{l-1} \approx A_l^{-1} d_l$ should hold. The formulation (5.111) thus is just a quantitative version of this requirement. Since in the symmetric case $\|A_l\|^{-1}$ is simply the reciprocal value of the largest eigenvalue, (3.140) in Theorem 3.45 establishes the relation to the statements of convergence in Section 3.4. For a more exact analysis of convergence and a more extensive description of this topic we refer to the cited literature (see also [17]).

Exercises

5.12 Determine the prolongation and restriction according to (5.101) and (5.104) for the case of a linear ansatz on a Friedrichs–Keller triangulation.

5.13 Prove the consistency of the two-grid method (5.110) in the case of a consistent smoothing iteration.

5.6 Nested Iterations

As in Section 5.5 we assume that besides the system of equations

$$A_l x_l = b_l$$

with $M_l$ unknowns, there are given analogous low-dimensional systems of equations

$$A_k x_k = b_k, \quad k = 0, \ldots, l-1, \qquad (5.112)$$

with $M_k$ unknowns, where $M_0 < M_1 < \cdots < M_l$. Let all systems of equations be an approximation of the same continuous problem such that an error estimate of the type

$$\|u - P_l x_l\| \le C_A h_l^\alpha$$

holds, with $P_l$ according to (5.90) and $\alpha > 0$. Here $\|\cdot\|$ is a norm on the basic space $V$, and the constant $C_A$ generally depends on the solution $u$


of the continuous problem. The discretization parameter $h_l$ determines the dimension $M_l$: In the simplest case of a uniform refinement, $h_l^d \sim 1/M_l$ holds in $d$ space dimensions. One may also expect that for the discrete solution,

$$\|p\, x_{k-1} - x_k\|_k \le C_1 C_A h_k^\alpha, \quad k = 1, \ldots, l,$$

holds with a constant $C_1 > 0$. Here $\|\cdot\|_k$ is a norm on $\mathbb{R}^{M_k}$, and the mapping $p = p_{k-1,k} : \mathbb{R}^{M_{k-1}} \to \mathbb{R}^{M_k}$ is a prolongation, for example the canonical prolongation introduced in Section 5.5. In this case the estimate can be rigorously proven with the definition of the canonical prolongation $p = P_k^{-1} P_{k-1}$:

$$\|p\, x_{k-1} - x_k\|_k = \left\| P_k^{-1} \left( P_{k-1} x_{k-1} - P_k x_k \right) \right\|_k$$
$$\le \|P_k^{-1}\|_{L[V_k, \mathbb{R}^{M_k}]}\, \|P_{k-1} x_{k-1} - P_k x_k\|$$
$$\le \|P_k^{-1}\|_{L[V_k, \mathbb{R}^{M_k}]} \left( C_A h_k^\alpha + C_A h_{k-1}^\alpha \right) \le C_1 C_A h_k^\alpha$$

with

$$C_1 = \max_{j=1,\ldots,l} \|P_j^{-1}\|_{L[V_j, \mathbb{R}^{M_j}]} \left( 1 + \left( \frac{h_{j-1}}{h_j} \right)^{\alpha} \right).$$

Let the system of equations be solved with an iterative method given by the fixed-point mapping $\Phi_k$, $k = 0, \ldots, l$, which means that $x_k$ according to (5.112) satisfies $x_k = \Phi_k(x_k, b_k)$. Then it is sufficient to determine an iterate $\tilde{x}_l$ with an accuracy

$$\|\tilde{x}_l - x_l\|_l \le \tilde{C}_A h_l^\alpha \qquad (5.113)$$

with $\tilde{C}_A := C_A / \|P_l\|_{L[\mathbb{R}^{M_l}, V]}$, because then we also have

$$\|P_l \tilde{x}_l - P_l x_l\| \le C_A h_l^\alpha.$$

If one does not have a good initial iterate from the concrete context, the algorithm of nested iterations explained in Table 5.7 can be used. It is indeed a finite process.

The question is how to choose the iteration numbers $m_k$ such that (5.113) finally holds, and whether the arising overall effort is acceptable. An answer to this question is provided by the following theorem:

Theorem 5.19 Let the iterative method $\Phi_k$ have the contraction number $\varrho_k$ with respect to $\|\cdot\|_k$. Assume that there exist constants $C_2, C_3 > 0$ such that

$$\|p\|_{L[\mathbb{R}^{M_{k-1}}, \mathbb{R}^{M_k}]} \le C_2, \qquad h_{k-1} \le C_3 h_k,$$

for all $k = 1, \ldots, l$. If the iteration numbers $m_k$ for the nested iterations are chosen in such a way that

$$\varrho_k^{m_k} \le 1 / \left( C_2 C_3^{\alpha} + C_1 \|P_l\| \right), \qquad (5.114)$$


Choose $m_k$, $k = 1, \ldots, l$.
Let $\tilde{x}_0$ be an approximation of $x_0$, for example $\tilde{x}_0 = x_0 = A_0^{-1} b_0$.
For $k = 1, \ldots, l$:
    $\tilde{x}_k^{(0)} := p\, \tilde{x}_{k-1}$.
    Perform $m_k$ iterations:
        $\tilde{x}_k^{(i)} := \Phi_k\big(\tilde{x}_k^{(i-1)}, b_k\big)$, $\quad i = 1, \ldots, m_k$.
    Set $\tilde{x}_k := \tilde{x}_k^{(m_k)}$.

Table 5.7. Nested Iteration.

then

$$\|\tilde{x}_k - x_k\|_k \le \tilde{C}_A h_k^\alpha$$

for all $k = 1, \ldots, l$, provided that this estimate holds for $k = 0$.

Proof: The proof is given by induction on $k$. Assume that the assertion is true for $k - 1$. This implies

$$\|\tilde{x}_k - x_k\|_k \le \varrho_k^{m_k}\, \|p\, \tilde{x}_{k-1} - x_k\|_k$$
$$\le \varrho_k^{m_k} \left( \|p(\tilde{x}_{k-1} - x_{k-1})\|_k + \|p\, x_{k-1} - x_k\|_k \right)$$
$$\le \varrho_k^{m_k} \left( C_2 \tilde{C}_A h_{k-1}^\alpha + C_1 C_A h_k^\alpha \right)$$
$$\le \varrho_k^{m_k} \left( C_2 C_3^{\alpha} + C_1 \|P_l\| \right) \tilde{C}_A h_k^\alpha \le \tilde{C}_A h_k^\alpha$$

by (5.114). $\Box$
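A minimal runnable version of Table 5.7 on a 1D Poisson hierarchy; the concrete choices ($\Phi_k$ realized by Gauss–Seidel sweeps, fixed $m_k$, linear interpolation as $p$, the grid sizes) are illustrative assumptions, not prescribed by the text. The final iterate is compared with the exact discrete solution on the finest level:

```python
# Sketch of the nested iteration of Table 5.7 (all concrete choices are
# illustrative: 1D Poisson hierarchy, Gauss-Seidel as Phi_k, fixed m_k).
import math

def solve_tridiag(n, h, b):
    """Direct solve of (1/h^2) tridiag(-1,2,-1) x = b (reference solution x_k)."""
    diag, off = 2.0 / h ** 2, -1.0 / h ** 2
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = off / diag, b[0] / diag
    for i in range(1, n):
        m = diag - off * c[i - 1]
        c[i] = off / m
        d[i] = (b[i] - off * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def gauss_seidel(x, b, h, sweeps):       # Phi_k applied `sweeps` times
    for _ in range(sweeps):
        for i in range(len(x)):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < len(x) - 1 else 0.0
            x[i] = (h ** 2 * b[i] + left + right) / 2.0
    return x

def prolong(y):                          # linear interpolation p
    fine = [0.0] * (2 * len(y) + 1)
    for j, v in enumerate(y):
        fine[2 * j] += 0.5 * v
        fine[2 * j + 1] += v
        fine[2 * j + 2] += 0.5 * v
    return fine

def rhs(n, h):                           # f(x) = pi^2 sin(pi x)
    return [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]

m_k = 80
n, h = 3, 0.25                           # coarsest level: n_0 = 3, h_0 = 1/4
xt = solve_tridiag(n, h, rhs(n, h))      # start: x~_0 := x_0 (exact on level 0)
for _ in range(2):                       # levels k = 1, 2: prolong, then iterate
    n, h = 2 * n + 1, h / 2.0
    xt = gauss_seidel(prolong(xt), rhs(n, h), h, m_k)

x_ref = solve_tridiag(n, h, rhs(n, h))   # exact discrete solution x_l
err = max(abs(u - v) for u, v in zip(xt, x_ref))
```

The point of the construction is visible here: the prolonged coarse iterate is already an accurate initial guess, so a fixed number of sweeps per level keeps the final error on the scale of the discretization error.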

Theorem 5.19 allows the calculation of the necessary number of iterations for the inner iteration from the norms $\|p\|_{L[\mathbb{R}^{M_{k-1}}, \mathbb{R}^{M_k}]}$, $\|P_k^{-1}\|_{L[V_k, \mathbb{R}^{M_k}]}$ and the constants $h_{k-1}/h_k$ for $k = 1, \ldots, l$, as well as the order of convergence $\alpha$ of the discretization.

In order to estimate the necessary effort according to (5.114) more exactly, the dependence of $\varrho_k$ on $k$ must be known. In the following we consider only the situation, known from the multigrid method, of a method of optimal complexity

$$\varrho_k \le \varrho < 1.$$

Here, in contrast to other methods, the number of iterations can be chosen constant ($m_k = m$ for all $k = 1, \ldots, l$). If, furthermore, the estimate (5.107) holds with the constant $C$, then analogously to the consideration in Section 5.5 the total number of operations for the nested iteration can


be estimated by