
\[
\frac{1}{\tau}\bigl(u(t_{n+1}) - u(t_n)\bigr)
= u'(t_{n+1}) + \frac{1}{\tau}\int_{t_n}^{t_{n+1}} (t_n - s)\, u''(s)\, ds \,.
\]

Multiplying the first equation by $1-\Theta$ and the second one by $\Theta$, the summation of the results yields
\[
\frac{1}{\tau}\bigl(u(t_{n+1}) - u(t_n)\bigr)
= \Theta u'(t_{n+1}) + (1-\Theta)\, u'(t_n)
+ \frac{1}{\tau}\int_{t_n}^{t_{n+1}} \bigl[\Theta t_n + (1-\Theta) t_{n+1} - s\bigr]\, u''(s)\, ds \,.
\]
Since $|\Theta t_n + (1-\Theta) t_{n+1} - s| \le \tau$ for $s \in [t_n, t_{n+1}]$, the second term in the decomposition (7.128)
of $w_n$ can be estimated as
\[
\Bigl\| \frac{1}{\tau}\bigl(u(t_{n+1}) - u(t_n)\bigr) - \Theta u'(t_{n+1}) - (1-\Theta)\, u'(t_n) \Bigr\|_0
\le \int_{t_n}^{t_{n+1}} \|u''(s)\|_0 \, ds \,.
\]
To estimate the first term in (7.128), Taylor expansion with integral
remainder is applied to the function $v(t) := (R_h - I)u(t)$. Then we have
\[
\frac{1}{\tau}\bigl((R_h - I)u(t_{n+1}) - (R_h - I)u(t_n)\bigr)
= \frac{1}{\tau}\int_{t_n}^{t_{n+1}} \bigl[(R_h - I)u(s)\bigr]' \, ds \,.
\]

With the assumption on $u$, and using the fact that the derivative and the elliptic
projection commute, we get
\[
\Bigl\| \frac{1}{\tau}\bigl((R_h - I)u(t_{n+1}) - (R_h - I)u(t_n)\bigr) \Bigr\|_0
\le \frac{1}{\tau}\int_{t_n}^{t_{n+1}} \|(R_h - I)u'(s)\|_0 \, ds \,.
\]

With (7.127) and summing the estimates for $\|w_n\|_0$, we obtain the following
result:
Theorem 7.31 Let $a$ be a $V$-elliptic, continuous bilinear form, $u_{0h} \in V_h$,
$u_0 \in V$, $\Theta \in [\frac{1}{2}, 1]$. If $u \in C^2([0,T], V)$, then

\[
\|u(t_n) - U^n\|_0 \le \|u_{0h} - R_h u_0\|_0 + \|(I - R_h)u(t_n)\|_0
+ \int_0^{t_n} \|(I - R_h)u'(s)\|_0 \, ds + \tau \int_0^{t_n} \|u''(s)\|_0 \, ds \,.
\]

Remark 7.32 (i) Under stronger smoothness assumptions on $u$ and by detailed
considerations, it can also be shown that the Crank–Nicolson method
($\Theta = \frac{1}{2}$) is of order 2 in $\tau$.
(ii) Contrary to the semidiscrete situation (Theorem 7.12), the fully
discrete estimate does not reflect any exponential decay in time.
Utilizing the error estimate for the elliptic projection as in Section 7.2 (cf.
Theorem 7.12) and assuming $u_0 \in V \cap H^2(\Omega)$, we have
\[
\|u(t_n) - U^n\|_0 \le \|u_{0h} - u_0\|_0
+ C h^2 \Bigl( \|u_0\|_2 + \|u(t_n)\|_2 + \int_0^{t_n} \|u'(s)\|_2 \, ds \Bigr)
+ \tau \int_0^{t_n} \|u''(s)\|_0 \, ds \,.
\]

If, in addition, $\|u_{0h} - u_0\|_0 \le C h^2 \|u_0\|_2$, we obtain

\[
\|u(t_n) - U^n\|_0 \le C(u)\,(h^2 + \tau) \,,
\]

with $C(u) > 0$ depending on the solution $u$ (and thus on $u_0$) but not
depending on $h$ and $\tau$.
To conclude this section, we give without proof a summary of error
estimates for all possible values of $\Theta$:
\[
\|u(t_n) - U^n\|_0 \le
\begin{cases}
C(u)\,(h^2 + \tau)\,, & \text{if } \Theta \in [\frac{1}{2}, 1]\,,\\
C(u)\,(h^2 + \tau^2)\,, & \text{if } \Theta = \frac{1}{2}\,,\\
C(u)\,h^2\,, & \text{if } \Theta \in [0, 1] \text{ and } \tau \le \sigma h^2\,,
\end{cases}
\tag{7.129}
\]
where $\sigma > 0$ is a constant upper bound of the step size ratio $\tau/h^2$.
The occurrence of such a restriction is not surprising, since similar
requirements have already appeared for the finite difference method.
We also mention that the above restriction to a constant step size $\tau$ is made
only for simplicity of notation. If a variable step size $\tau_{n+1}$ is used (which
is typically determined by a step size control strategy), then the number $\tau$
in Theorem 7.31 is to be replaced by $\max_{n=0,\dots,N-1} \tau_{n+1}$.
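The orders in $\tau$ collected in (7.129) can be checked empirically. The following minimal Python sketch (an editorial illustration, not part of the original text) applies the one-step-theta method to the scalar model problem $u' = \lambda u$, $u(0) = 1$, for which the scheme reduces to a simple recursion, and estimates the observed order by repeatedly halving $\tau$; the model problem, the value $\lambda = -1$, and all parameter choices are assumptions made only for this illustration.

```python
# Empirical check of the tau-orders in (7.129) for the one-step-theta method,
# applied to the scalar model problem u' = lam*u, u(0) = 1 (a minimal sketch;
# the model problem and all parameters are illustrative choices).
import math

def theta_method(lam, theta, tau, T):
    """One-step-theta method for u' = lam*u on [0, T] with constant step tau."""
    u = 1.0
    for _ in range(round(T / tau)):
        # (u_{n+1} - u_n)/tau = lam*(theta*u_{n+1} + (1 - theta)*u_n)
        u *= (1.0 + tau * lam * (1.0 - theta)) / (1.0 - tau * lam * theta)
    return u

lam, T = -1.0, 1.0
for theta in (1.0, 0.5):                   # implicit Euler vs. Crank-Nicolson
    errs = [abs(theta_method(lam, theta, 2.0 ** -k, T) - math.exp(lam * T))
            for k in range(4, 9)]
    rates = [math.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
    print(f"Theta = {theta}:", ["%.2f" % r for r in rates])
# Expected: observed orders near 1 for Theta = 1 and near 2 for Theta = 1/2.
```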

Order of Convergence Estimates for the Finite Volume Method
We now consider the one-step-theta method for the finite volume method
as introduced in (7.75).
The error analysis will run in a similar way as for the finite element
method.
We write
\[
u(t_n) - U^n = u(t_n) - R_h u(t_n) + R_h u(t_n) - U^n =: \varrho(t_n) + \theta^n \,,
\]
where $R_h$ is the auxiliary operator defined in (7.63). So for the first term
of the right-hand side, an estimate is already known.
From the definition (7.63) and (7.32), we immediately derive the
following identity:
\[
\begin{aligned}
&\Bigl\langle \frac{1}{\tau}(\theta^{n+1} - \theta^n), v_h \Bigr\rangle_{0,h} + a_h\bigl(\Theta\theta^{n+1} + (1-\Theta)\theta^n, v_h\bigr) \\
&\quad = \Bigl\langle \frac{1}{\tau}\bigl(R_h u(t_{n+1}) - R_h u(t_n)\bigr), v_h \Bigr\rangle_{0,h} + a_h\bigl(\Theta R_h u(t_{n+1}) + (1-\Theta) R_h u(t_n), v_h\bigr) \\
&\qquad - \Bigl\langle \frac{1}{\tau}(U^{n+1} - U^n), v_h \Bigr\rangle_{0,h} - a_h\bigl(\Theta U^{n+1} + (1-\Theta) U^n, v_h\bigr) \\
&\quad = \Bigl\langle \frac{1}{\tau}\bigl(R_h u(t_{n+1}) - R_h u(t_n)\bigr), v_h \Bigr\rangle_{0,h} + a\bigl(\Theta u(t_{n+1}) + (1-\Theta) u(t_n), v_h\bigr) - \langle f^{n+\Theta}, v_h \rangle_{0,h} \\
&\quad = \Bigl\langle \frac{1}{\tau}\bigl(R_h u(t_{n+1}) - R_h u(t_n)\bigr), v_h \Bigr\rangle_{0,h} - \langle \Theta u'(t_{n+1}) + (1-\Theta) u'(t_n), v_h \rangle_0 \\
&\qquad + \langle f^{n+\Theta}, v_h \rangle_0 - \langle f^{n+\Theta}, v_h \rangle_{0,h} \\
&\quad = \Bigl\langle \frac{1}{\tau}\bigl(R_h u(t_{n+1}) - R_h u(t_n)\bigr), v_h \Bigr\rangle_{0,h} - \langle \Theta u'(t_{n+1}) + (1-\Theta) u'(t_n), v_h \rangle_{0,h} \\
&\qquad + \langle \Theta u'(t_{n+1}) + (1-\Theta) u'(t_n), v_h \rangle_{0,h} - \langle \Theta u'(t_{n+1}) + (1-\Theta) u'(t_n), v_h \rangle_0 \\
&\qquad + \langle f^{n+\Theta}, v_h \rangle_0 - \langle f^{n+\Theta}, v_h \rangle_{0,h} \\
&\quad = \langle w^n, v_h \rangle_{0,h} + r^n(v_h) \,,
\end{aligned}
\]
where
\[
w^n := \frac{1}{\tau}\bigl(R_h u(t_{n+1}) - R_h u(t_n)\bigr) - \Theta u'(t_{n+1}) - (1-\Theta)\, u'(t_n)
\]
and
\[
r^n(v_h) := \langle \Theta u'(t_{n+1}) + (1-\Theta) u'(t_n), v_h \rangle_{0,h} - \langle \Theta u'(t_{n+1}) + (1-\Theta) u'(t_n), v_h \rangle_0
+ \langle f^{n+\Theta}, v_h \rangle_0 - \langle f^{n+\Theta}, v_h \rangle_{0,h} \,.
\]

Under the assumptions of Theorem 6.15, we know that $a_h(v_h, v_h) \ge 0$ for
all $v_h \in V_h$. The particular choice of the test function as $v_h = \tilde v_h :=
\Theta\theta^{n+1} + (1-\Theta)\theta^n$ yields, similarly to the finite element case, for $\Theta \in [\frac{1}{2}, 1]$ the
estimate
\[
\begin{aligned}
\bigl( \|\theta^{n+1}\|_{0,h} - \|\theta^n\|_{0,h} \bigr)\bigl( \Theta\|\theta^{n+1}\|_{0,h} + (1-\Theta)\|\theta^n\|_{0,h} \bigr)
&\le \tau \bigl[ \langle w^n, \tilde v_h \rangle_{0,h} + r^n(\tilde v_h) \bigr] \\
&\le \tau \Bigl[ \|w^n\|_{0,h} + \sup_{v_h \in V_h} \frac{r^n(v_h)}{\|v_h\|_{0,h}} \Bigr] \|\tilde v_h\|_{0,h} \\
&\le \tau \Bigl[ \|w^n\|_{0,h} + \sup_{v_h \in V_h} \frac{r^n(v_h)}{\|v_h\|_{0,h}} \Bigr] \bigl( \Theta\|\theta^{n+1}\|_{0,h} + (1-\Theta)\|\theta^n\|_{0,h} \bigr) .
\end{aligned}
\]

Dividing each side by the common factor $\Theta\|\theta^{n+1}\|_{0,h} + (1-\Theta)\|\theta^n\|_{0,h}$, we get
\[
\|\theta^{n+1}\|_{0,h} \le \|\theta^n\|_{0,h} + \tau \Bigl[ \|w^n\|_{0,h} + \sup_{v_h \in V_h} \frac{r^n(v_h)}{\|v_h\|_{0,h}} \Bigr] .
\]

The recursive application of this inequality leads to
\[
\|\theta^{n+1}\|_{0,h} \le \|\theta^0\|_{0,h} + \tau \sum_{j=0}^{n} \|w^j\|_{0,h} + \tau \sum_{j=0}^{n} \sup_{v_h \in V_h} \frac{r^j(v_h)}{\|v_h\|_{0,h}} \,. \tag{7.130}
\]

The representation of $w^j$ obtained in the subsection on the finite element
method yields the following estimate:
\[
\|w^j\|_{0,h} \le \frac{1}{\tau} \int_{t_j}^{t_{j+1}} \|(R_h - I)u'(s)\|_{0,h} \, ds + \int_{t_j}^{t_{j+1}} \|u''(s)\|_{0,h} \, ds \,.
\]

Furthermore, by Lemma 7.14, we have
\[
|r^j(v_h)| \le C h \bigl[ \Theta |u'(t_{j+1})|_{1,\infty} + (1-\Theta) |u'(t_j)|_{1,\infty} + |f^{j+\Theta}|_{1,\infty} \bigr] \|v_h\|_{0,h} \,.
\]

Using both estimates in (7.130), we obtain
\[
\begin{aligned}
\|\theta^{n+1}\|_{0,h}
&\le \|\theta^0\|_{0,h} + C \Bigl[ \int_0^{t_{n+1}} \|(R_h - I)u'(s)\|_{0,h} \, ds + \tau \int_0^{t_{n+1}} \|u''(s)\|_{0,h} \, ds \Bigr] \\
&\quad + C h \tau \Bigl[ (1-\Theta)|u'(0)|_{1,\infty} + \sum_{j=1}^{n} |u'(t_j)|_{1,\infty} + \Theta |u'(t_{n+1})|_{1,\infty} + \sum_{j=0}^{n} |f^{j+\Theta}|_{1,\infty} \Bigr] \\
&\le \|\theta^0\|_{0,h} + C \Bigl[ \int_0^{t_{n+1}} \|(R_h - I)u'(s)\|_{0,h} \, ds + \tau \int_0^{t_{n+1}} \|u''(s)\|_{0,h} \, ds \Bigr] \\
&\quad + C h \Bigl[ \sup_{s \in (0,t_{n+1})} |u'(s)|_{1,\infty} + \sup_{s \in (0,t_{n+1})} |f(s)|_{1,\infty} \Bigr] .
\end{aligned}
\]

This is the basic estimate. The final estimate is easily obtained by the
same approach as in the finite element method. In summary, we have the
following result.
Theorem 7.33 In addition to the assumptions of Theorem 6.15, consider
the finite volume method on Donald diagrams. Furthermore, let $u_{0h} \in V_h$,
$u_0 \in V \cap H^2(\Omega)$, $f \in C([0,T], C^1(\Omega))$, $\Theta \in [\frac{1}{2}, 1]$. Then, if $u(t)$ is sufficiently
smooth, the following estimate is valid:

\[
\begin{aligned}
\|u(t_n) - U^n\|_{0,h}
&\le \|u_{0h} - u_0\|_{0,h} + C h \Bigl[ \|u_0\|_2 + \|u(t_n)\|_2 + \int_0^{t_n} \|u'(s)\|_2 \, ds \\
&\quad + \sup_{s \in (0,t_n)} |u'(s)|_{1,\infty} + \sup_{s \in (0,t_n)} |f(s)|_{1,\infty} \Bigr]
+ C \tau \int_0^{t_n} \|u''(s)\|_{0,h} \, ds \,.
\end{aligned}
\]




Exercise
7.17 Verify Remark 7.32.
8 Iterative Methods for Nonlinear Equations




In the same way as linear (initial-) boundary value problems lead, by the
discretization techniques discussed in this book, to (sequences of) systems of linear
equations, we get nonlinear systems of equations of a similar type from nonlinear
problems. Two of them will be treated in this chapter. As in Sections 1.2,
3.4, 7.3, and 6.2.4, we have to answer the question of the quality of the
approximation, and as in Section 2.5 and Chapter 5, the question of the
approximate solution of the systems of equations. We will focus on the
latter in this chapter.
In general, the problem may be formulated in different equivalent
settings, namely:
\[
\text{Find } x \in U \text{ with } f(x) = b \,. \tag{8.1}
\]
\[
\text{Find } x \in U \text{ with } f(x) = 0 \,. \tag{8.2}
\]
Then $x$ is called a root of (8.2) and a zero of $f$.
\[
\text{Find } x \in U \text{ with } f(x) = x \,. \tag{8.3}
\]
Then $x$ is called a fixed point.

Here $U \subset \mathbb{R}^m$, $f : U \to \mathbb{R}^m$ is a mapping, and $b \in \mathbb{R}^m$. The transition from
one formulation to another follows by redefining $f$ in evident ways: for
instance, (8.1) takes the form (8.2) with $\tilde f(x) := f(x) - b$, and (8.2) takes
the form (8.3) with $\tilde f(x) := x - f(x)$.
In most cases, a root or a fixed point cannot be calculated (with exact
arithmetic) in a finite number of operations, but only by an iterative
method, i.e., by a mapping
\[
\Phi : U \to U \,,
\]

so that (as in (5.7)) for the sequence
\[
x^{(k+1)} := \Phi\bigl(x^{(k)}\bigr) \tag{8.4}
\]
with given $x^{(0)}$ we get
\[
x^{(k)} \to x \quad \text{for } k \to \infty \,. \tag{8.5}
\]
Here $x$ is the solution of (8.1), (8.2), or (8.3).
As we already stated in Section 5.1, in the case of a continuous $\Phi$ it
follows from (8.4), (8.5) that the limit $x$ satisfies
\[
x = \Phi(x) \,. \tag{8.6}
\]
This means that (8.6) should imply that $x$ is a solution of (8.1), (8.2), or
(8.3). The extension of the definition of consistency in Section 5.1 requires
the inverse implication.
Concerning the error level that we should achieve in relation to the approximation
error of the discretization, the statements in the introduction
of Chapter 5 still hold. In addition to the criteria of comparison for linear
stationary methods, we now have to take into account the following:
methods may, if they converge at all, converge only locally, which leads to the
following definition:

Definition 8.1 If in the above situation (8.5) holds for all $x^{(0)} \in U$ (i.e.,
for arbitrary starting values), then $(x^{(k)})_k$ is called globally convergent. If
an open $\tilde U \subset U$ exists such that (8.5) holds for all $x^{(0)} \in \tilde U$, then $(x^{(k)})_k$
is called locally convergent. In the latter case $\tilde U$ is called the range of the
iteration.
On the other hand, we may observe a faster convergence than the linear
convergence introduced in (5.3):

Definition 8.2 Let $(x^{(k)})_k$ be a sequence in $\mathbb{R}^m$, $x \in \mathbb{R}^m$, and $\|\cdot\|$ a norm
on $\mathbb{R}^m$. The sequence $(x^{(k)})_k$ converges linearly to $x$ with respect to $\|\cdot\|$ if
there exists a $C$ with $0 < C < 1$ such that
\[
\|x^{(k+1)} - x\| \le C \|x^{(k)} - x\| \quad \text{for all } k \in \mathbb{N} \,.
\]
The sequence $(x^{(k)})_k$ converges with order of convergence $p > 1$ to $x$ if
$x^{(k)} \to x$ for $k \to \infty$ and if there exists a $C > 0$ such that
\[
\|x^{(k+1)} - x\| \le C \|x^{(k)} - x\|^p \quad \text{for all } k \in \mathbb{N} \,.
\]
The sequence $(x^{(k)})_k$ converges superlinearly to $x$ if
\[
\lim_{k \to \infty} \frac{\|x^{(k+1)} - x\|}{\|x^{(k)} - x\|} = 0 \,.
\]

The case $p = 2$ is also called quadratic convergence. Thus, while a linearly
converging method guarantees a reduction of the error by a constant factor
$C$, this reduction is improved step by step in the case of superlinear or
higher-order convergence. When we encounter quadratic convergence, for
example, the number of significant digits is doubled in every step (minus
a fixed number), so that usually only a small number of iterations will be
necessary. For this reason variants of the quadratically converging Newton
method (Section 8.2) are attractive. But the restriction of local convergence
may require modifications to enlarge the range of convergence.
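Definition 8.2 also suggests a practical way to read off $p$ from computed iterates: if $e_k := \|x^{(k)} - x\|$ behaves like $C e_{k-1}^p$, then $p \approx \log(e_{k+1}/e_k)/\log(e_k/e_{k-1})$. The following Python sketch (an editorial illustration; the test problem $f(x) = x^2 - 2$ is an assumption made here, not taken from the text) applies this to Newton's method and returns values near 2, in line with the quadratic convergence just mentioned.

```python
# Estimating the order of convergence p of Definition 8.2 from iterates:
# if e_k := |x^(k) - x| ~ C * e_{k-1}^p, then consecutive error ratios give
# p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).  (A sketch; test case assumed.)
import math

def observed_orders(iterates, x_exact):
    e = [abs(x - x_exact) for x in iterates]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

# Newton's method (cf. Section 8.2) for f(x) = x^2 - 2, whose zero is sqrt(2).
xs = [2.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))    # x - f(x)/f'(x)

print(observed_orders(xs, math.sqrt(2.0)))       # values close to p = 2
```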
To evaluate the complexity of a numerical method, the number of elementary
operations for an iteration has to be considered. By an elementary
operation we also want to understand the evaluation of functions like the
sine, although this is much more costly than an ordinary floating-point
operation. A typical subproblem during an iteration cycle is the solution
of a system of linear equations, analogously to the simpler systems of the
form (5.10) occurring in linear stationary problems. Besides the effort to
assemble this system of equations, we have to account for the work to
solve it, which can be done with one of the methods described in Section
2.5 and Chapter 5, i.e., in particular, again with an iterative method.
We call this a secondary or inner iteration, which is attractive because of
the sparse structure of the matrices originating from the discretization, as
already discussed in Chapter 5. Here an inexact variant may be useful, with
which the inner iteration is performed only up to a precision that conserves
the convergence properties of the outer iteration. The numerical cost for
the assembling may, in fact, exceed the cost for the inner
iteration. Hence methods with low cost for the assembling (but worse
convergence) should also be considered. Keeping this in mind, we devote
an introductory section to fixed-point iterations, which are, roughly
speaking, methods in which the iteration function $\Phi$ coincides with the mapping $f$.
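As a concrete illustration of such an inner-outer structure, the following Python sketch (added here for illustration; the small test system, the Jacobi inner solver, and the tolerance coupling are all assumptions, not taken from the text) runs an outer Newton-type iteration in which the linear system for the correction is solved only approximately, up to a precision tied to the current outer residual.

```python
# A sketch of an inexact outer/inner iteration: the outer Newton step solves
# J(x) d = -F(x) only approximately by an inner Jacobi iteration whose
# tolerance is tied to the outer residual. Test system and tolerances are
# illustrative assumptions only.
import math

def F(x):
    return [3.0 * x[0] - math.cos(x[1]), 3.0 * x[1] - math.sin(x[0])]

def J(x):
    # Jacobian of F; strictly diagonally dominant, so Jacobi converges
    return [[3.0, math.sin(x[1])], [-math.cos(x[0]), 3.0]]

def jacobi(A, b, tol, max_sweeps=50):
    """Inner iteration: Jacobi sweeps for the 2x2 system A d = b."""
    d = [0.0, 0.0]
    for _ in range(max_sweeps):
        d_new = [(b[0] - A[0][1] * d[1]) / A[0][0],
                 (b[1] - A[1][0] * d[0]) / A[1][1]]
        if max(abs(d_new[i] - d[i]) for i in range(2)) <= tol:
            return d_new
        d = d_new
    return d

x = [1.0, 1.0]
for k in range(20):                        # outer iteration
    Fx = F(x)
    res = max(abs(v) for v in Fx)
    if res <= 1e-12:
        break
    # inexact variant: inner precision proportional to the outer residual
    d = jacobi(J(x), [-v for v in Fx], tol=0.1 * res)
    x = [x[0] + d[0], x[1] + d[1]]
print(k, x)                                # converges in a few outer steps
```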



8.1 Fixed-Point Iterations
For the fixed-point formulation (8.3) the choice $\Phi := f$ is evident according
to (8.6); in other words, the fixed-point iteration reads
\[
x^{(k+1)} := f\bigl(x^{(k)}\bigr) \,. \tag{8.7}
\]
To diminish the distance of two succeeding members of the sequence, i.e., to have
\[
\bigl\|\Phi(x^{(k+1)}) - \Phi(x^{(k)})\bigr\| = \bigl\|x^{(k+2)} - x^{(k+1)}\bigr\| < \bigl\|x^{(k+1)} - x^{(k)}\bigr\| \,,
\]
it is sufficient that the iteration function (here $\Phi = f$) be contractive (see
Appendix A.4).
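A minimal Python sketch of the iteration (8.7), added here for illustration (the map $g(x) = \frac{1}{2}\cos x$ and the stopping tolerance are assumptions made only for this example): since $\sup_x |g'(x)| = \frac{1}{2} < 1$, the map is contractive on all of $\mathbb{R}$ (cf. Lemma 8.3 below), and the iteration converges for every starting value.

```python
# Fixed-point iteration (8.7) for a contractive map (a sketch; the map and
# tolerance are illustrative). g(x) = 0.5*cos(x) has sup|g'| = 0.5 < 1,
# so it is a contraction on all of R and the iteration converges globally.
import math

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=100):
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) <= tol:     # distance of two succeeding iterates
            return x_new, k
        x = x_new
    return x, max_iter

xbar, steps = fixed_point_iteration(lambda x: 0.5 * math.cos(x), x0=3.0)
print(xbar, steps)                    # xbar satisfies xbar = 0.5*cos(xbar)
```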
Sufficient conditions for a contraction are given by the following lemma:

Lemma 8.3 Let $U \subset \mathbb{R}^m$ be open and convex, and $g : U \to \mathbb{R}^m$
continuously differentiable. If
\[
\sup_{x \in U} \|Dg(x)\| =: L < 1
\]
holds, where $\|\cdot\|$ in $\mathbb{R}^{m,m}$ is compatible with $\|\cdot\|$ in $\mathbb{R}^m$, then $g$ is contracting
in $U$.

Proof: Exercise 8.1. $\Box$

Therefore, if $U \subset \mathbb{R}^m$ is open, $f : U \subset \mathbb{R}^m \to \mathbb{R}^m$ is continuously
differentiable, and if there exists some $\tilde x \in U$ with $\|Df(\tilde x)\| < 1$, then there
