Φ(τ, t, ξ) = −(I + τΘA_h)^{−1} [A_h ξ − q_h(t + Θτ)] .
Thus nonexpansiveness models the fact that perturbations, in particular errors, are not amplified in time by the numerical method. This is considerably weaker than the exponential decay of the continuous solution (see (7.18)), which would be too strong a requirement.
Having in mind (7.79)–(7.81), and expecting the (real parts of the) eigenvalues to be positive, the following restriction is sufficient:
Definition 7.18 A one-step method is called A-stable if its application to the scalar model problem (7.80)
ξ′ + λξ = 0 ,  t ∈ (0, T) ,
ξ(0) = ξ_0 ,
yields a nonexpansive method for all complex parameters λ with Re λ > 0 and arbitrary step sizes τ > 0.
Because of (7.81) we have
ξ̃^{n+1} − ξ^{n+1} = R(−λτ) [ξ̃^n − ξ^n]
for two approximations of the one-step-theta method applied to (7.80). This shows that the condition
|R(z)| ≤ 1  for all z with Re z < 0
is sufficient for the A-stability of the method. More generally, any one-step method that can be written for (7.80) in the form
ξ^{n+1} = R(−λ_i τ) ξ^n   (7.82)
is nonexpansive iff
|R(−λ_i τ)| ≤ 1 .   (7.83)
The one-step-theta method is nonexpansive for (7.41) in the case of an eigenvector basis if (7.83) holds for all eigenvalues λ_i and step size τ. A convenient formulation can be achieved by the notion of the domain of stability.
Definition 7.19 Given a stability function R : ℂ → ℂ, the set
S_R := {z ∈ ℂ : |R(z)| < 1}
is called a domain of (absolute) stability of the one-step method ξ^{n+1} = R(−λτ) ξ^n .
Example 7.20 For the one-step-theta method we have:
(1) For Θ = 0, S_R is the (open) unit disk with centre z = −1.
(2) For Θ = 1/2, S_R coincides with the left complex half-plane (except for the imaginary axis).
(3) For Θ = 1, S_R is the whole complex plane except for the closed unit disk with centre z = 1.
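The following minimal sketch samples these domains numerically. It assumes the closed form R(z) = (1 + (1 − Θ)z)/(1 − Θz) for the stability function of the one-step-theta method (consistent with the three statements above, but not spelled out in this section) and simply marks grid points with |R(z)| < 1:

    import numpy as np

    def R(z, theta):
        # assumed stability function of the one-step-theta method
        return (1.0 + (1.0 - theta) * z) / (1.0 - theta * z)

    x = np.linspace(-4.0, 4.0, 801)
    Z = x[None, :] + 1j * x[:, None]          # sample grid in the complex plane

    with np.errstate(divide="ignore", invalid="ignore"):
        for theta in (0.0, 0.5, 1.0):
            S = np.abs(R(Z, theta)) < 1.0     # sampled version of S_R
            print(theta, int(S.sum()))

    # consistency check for theta = 0: S_R should be the disk |z + 1| < 1
    assert np.array_equal(np.abs(R(Z, 0.0)) < 1.0, np.abs(Z + 1.0) < 1.0)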
The notion of A-stability reflects the fact that the property |e^{−λτ}| ≤ 1 for Re λ > 0 is satisfied by the function R(−λτ), too:
Corollary 7.21 For a continuous stability function R the one-step method ξ^{n+1} = R(−λτ) ξ^n is A-stable if the closure S̄_R of its domain of stability contains the left complex half-plane.
Thus the Crank–Nicolson and the implicit Euler methods are A-stable, but not the explicit Euler method. To have nonexpansiveness of the explicit Euler method, we need the requirement
|1 − λ_i τ| = |R(−λ_i τ)| ≤ 1 ,   (7.84)
which is a step size restriction: For positive λ_i it reads
τ ≤ 2/max{λ_i | i = 1, . . . , M} .   (7.85)
For the example of the five-point stencil discretization of the heat equation on a rectangle with Dirichlet boundary conditions according to (7.37)–(7.39), condition (7.84) reads
| 1 − (2τ/h²) (2 − cos(νπh/a) − cos(µπh/b)) | ≤ 1   (7.86)
for all ν = 1, . . . , l − 1, µ = 1, . . . , m − 1.
The following condition is sufficient (and for l, m → ∞ also necessary):
τ/h² ≤ 1/4 .   (7.87)
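As a quick plausibility check, the following sketch evaluates (7.86) for the eigenvalues λ_{ν,µ} = (2/h²)(2 − cos(νπh/a) − cos(µπh/b)) implied by (7.86); the rectangle dimensions and the grid width are made-up values for illustration only:

    import numpy as np

    # made-up rectangle (0,a) x (0,b) and grid width h with l = a/h, m = b/h
    a, b, h = 1.0, 2.0, 1.0 / 32.0
    l, m = round(a / h), round(b / h)

    nu = np.arange(1, l)[:, None]
    mu = np.arange(1, m)[None, :]
    lam = 2.0 / h**2 * (2.0 - np.cos(nu * np.pi * h / a) - np.cos(mu * np.pi * h / b))

    tau = h**2 / 4.0                            # the sufficient bound (7.87)
    print(np.max(np.abs(1.0 - tau * lam)))      # stays <= 1, i.e. (7.86) holds
    print(tau <= 2.0 / np.max(lam))             # the generic restriction (7.85)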
For the finite element method a similar estimate holds in a more general context. Under the assumptions of Theorem 3.45 we conclude from its proof (see (3.141)) that
max{λ_i | i = 1, . . . , M} ≤ C ( min_{K∈T_h} h_K )^{−2}
holds for the eigenvalues of B_h^{−1} Â_h, where B_h = E_h^T E_h is the mass matrix and Â_h the stiffness matrix, and thus also for A_h = E_h B_h^{−1} Â_h E_h^{−1}. Here C is a constant independent of h.
Therefore, we have
τ ( min_{K∈T_h} h_K )^{−2} ≤ 2/C   (7.88)
as a sufficient condition for the nonexpansiveness of the method, with a specific constant depending on the stability constant of the bilinear form and the constant from Theorem 3.43, (2).
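A minimal one-dimensional illustration of this eigenvalue bound (P1 elements on a uniform mesh of (0,1) with homogeneous Dirichlet conditions, i.e., a much more special situation than Theorem 3.45) looks as follows; the tridiagonal matrices are the standard ones and are written out here only for this experiment:

    import numpy as np

    # 1D P1 elements on (0,1), mesh width h = 1/n, homogeneous Dirichlet data
    def max_generalized_eigenvalue(n):
        h = 1.0 / n
        e = np.ones(n - 2)
        A_hat = (2.0 * np.eye(n - 1) - np.diag(e, 1) - np.diag(e, -1)) / h    # stiffness
        B = (4.0 * np.eye(n - 1) + np.diag(e, 1) + np.diag(e, -1)) * h / 6.0  # mass
        return np.linalg.eigvals(np.linalg.solve(B, A_hat)).real.max()

    for n in (16, 32, 64):
        h = 1.0 / n
        print(n, max_generalized_eigenvalue(n) * h**2)   # bounded, here close to 12

The printed products of the largest eigenvalue with h² stay bounded, in agreement with the h^{−2} scaling.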

These step size restrictions diminish the attractiveness of the explicit Euler method, and so implicit versions are often used. But also in the A-stable case there are distinctions in the behaviour of the stability functions. Comparing them, we see that
for Θ = 1/2 :  R(−x) → −1  for x → ∞ ;
for Θ = 1 :    R(−x) → 0   for x → ∞ .   (7.89)
This means that for the implicit Euler method the influence of large eigenvalues is damped more strongly the larger they are, corresponding to the exponential function to be approximated, whereas the Crank–Nicolson method preserves these components nearly undamped in an oscillatory manner. This may lead to problems for “rough” initial data or discontinuities between initial data and Dirichlet boundary conditions. On the other hand, the implicit Euler method may also damp solution components too strongly, making the solution “too” smooth.
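The following sketch makes this difference concrete for a single stiff eigenvalue; the closed form of R is again an assumption (the same one as in the earlier sketch), and the values of λ and τ are arbitrary illustrative choices:

    import numpy as np

    def R(z, theta):
        # assumed one-step-theta stability function, as in the earlier sketch
        return (1.0 + (1.0 - theta) * z) / (1.0 - theta * z)

    lam, tau = 1.0e4, 1.0e-2                  # a stiff eigenvalue, a moderate step size
    z = -lam * tau
    for theta, name in ((0.5, "Crank-Nicolson"), (1.0, "implicit Euler")):
        amp = R(z, theta)
        print(name, [amp ** n for n in range(1, 5)])
    # Crank-Nicolson: factors close to (-1)^n, the stiff component survives,
    #                 oscillating in sign and almost undamped;
    # implicit Euler: factors (1 + lam*tau)^(-n), the component is damped away.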
The corresponding notion is the following:
Definition 7.22 One-step methods whose stability function satisfies
R(z) → 0  for Re z → −∞
are called L-stable.
An intermediate position is filled by the strongly A-stable methods. They are characterized by the properties
• |R(z)| < 1 for all z with Re z < 0,
• lim_{Re z→−∞} |R(z)| < 1.

Example 7.23
(1) Among the one-step-theta methods, only the implicit Euler method (Θ = 1) is L-stable.
(2) The Crank–Nicolson method (Θ = 1/2) is not strongly A-stable, because of (7.89).
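For Θ > 0 the assumed stability function R(z) = (1 + (1 − Θ)z)/(1 − Θz) of the one-step-theta family tends to −(1 − Θ)/Θ as Re z → −∞, which sorts the family into these classes. The intermediate value Θ = 3/4 below is only an illustrative example and does not appear in the text:

    def R(z, theta):
        # assumed one-step-theta stability function
        return (1.0 + (1.0 - theta) * z) / (1.0 - theta * z)

    x = 1.0e8                                  # proxy for the limit Re z -> -infinity
    for theta in (0.5, 0.75, 1.0):
        print(theta, R(-x, theta))             # tends to -(1 - theta)/theta
    # theta = 0.5 : limit -1   -> A-stable, but neither strongly A-stable nor L-stable
    # theta = 0.75: limit -1/3 -> strongly A-stable, but not L-stable
    # theta = 1.0 : limit  0   -> L-stable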
The nonexpansiveness of a one-step method can also be characterized by a norm condition for the solution operator E_{h,τ}.
Theorem 7.24 Let the spatial discretization matrix A_h have a basis of eigenvectors w_i orthogonal with respect to the scalar product ⟨·,·⟩_h, with eigenvalues λ_i, i = 1, . . . , M. Consider the problem (7.41) and its discretization in time by a one-step method with a linear solution representation
u_h^n = E_{h,τ}^n u_0   (7.90)
for q_h = 0, where E_{h,τ} ∈ ℝ^{M,M}, and a stability function R such that (7.82) and
E_{h,τ} w_i = R(−λ_i τ) w_i   (7.91)
for i = 1, . . . , M. Then the following statements are equivalent:
(1) The one-step method is nonexpansive for the model problem (7.80) and all eigenvalues λ_i of A_h.
(2) The one-step method is nonexpansive for the problem (7.41), with respect to the norm ‖·‖_h induced by ⟨·,·⟩_h.
(3) ‖E_{h,τ}‖_h ≤ 1 in the matrix norm ‖·‖_h induced by the vector norm ‖·‖_h.

Proof: We prove (1) ⇒ (3) ⇒ (2) ⇒ (1):
(1) ⇒ (3): According to (7.83), (1) is characterized by
|R(−λ_i τ)| ≤ 1   (7.92)
for the eigenvalues λ_i.
For the eigenvector w_i with eigenvalue λ_i we have (7.91), and thus, for an arbitrary u_0 = Σ_{i=1}^M c_i w_i,
‖E_{h,τ} u_0‖_h^2 = ‖ Σ_{i=1}^M c_i E_{h,τ} w_i ‖_h^2 = ‖ Σ_{i=1}^M c_i R(−λ_i τ) w_i ‖_h^2 = Σ_{i=1}^M c_i^2 |R(−λ_i τ)|^2 ‖w_i‖_h^2
because of the orthogonality of the w_i, and analogously,
‖u_0‖_h^2 = Σ_{i=1}^M c_i^2 ‖w_i‖_h^2 ,
and finally, because of (7.92),
‖E_{h,τ} u_0‖_h^2 ≤ Σ_{i=1}^M c_i^2 ‖w_i‖_h^2 = ‖u_0‖_h^2 ,
which is assertion (3).
(3) ⇒ (2): is obvious.
(2) ⇒ (1):
|R(−λ_i τ)| ‖w_i‖_h = ‖R(−λ_i τ) w_i‖_h = ‖E_{h,τ} w_i‖_h ≤ ‖w_i‖_h .   □
Thus, nonexpansiveness is often identical to what is (vaguely) called stability:
Definition 7.25 A one-step method with a solution representation E_{h,τ} for q_h = 0 is called stable with respect to the vector norm ‖·‖_h if
‖E_{h,τ}‖_h ≤ 1
in the induced matrix norm ‖·‖_h.
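For a concrete one-step-theta discretization both criteria of Theorem 7.24 can be checked numerically. The sketch below assumes the representation E_{h,τ} = (I + τΘA_h)^{−1}(I − τ(1 − Θ)A_h) and uses a symmetric model matrix, so that the Euclidean scalar product plays the role of ⟨·,·⟩_h; all sizes are illustrative:

    import numpy as np

    def one_step_theta_propagator(A, tau, theta):
        # E_{h,tau} = (I + tau*theta*A)^(-1) (I - tau*(1-theta)*A); this closed
        # form is an assumption here (cf. the representation used in (7.70)).
        I = np.eye(A.shape[0])
        return np.linalg.solve(I + tau * theta * A, I - tau * (1.0 - theta) * A)

    # a symmetric positive definite model matrix (1D three-point stencil)
    M = 50
    h = 1.0 / (M + 1)
    e = np.ones(M - 1)
    A = (2.0 * np.eye(M) - np.diag(e, 1) - np.diag(e, -1)) / h**2

    tau = 1.0e-3
    for theta in (0.0, 0.5, 1.0):
        E = one_step_theta_propagator(A, tau, theta)
        lam = np.linalg.eigvalsh(A)
        R = (1.0 - tau * (1.0 - theta) * lam) / (1.0 + tau * theta * lam)
        # criteria (1) and (3) of Theorem 7.24 give the same verdict
        print(theta, np.linalg.norm(E, 2) <= 1.0 + 1e-12, np.max(np.abs(R)) <= 1.0 + 1e-12)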

Till now we have considered only homogeneous boundary data and right-hand sides. At least for the one-step-theta method this is not a restriction:
Theorem 7.26 Consider the one-step-theta method under the assumptions of Theorem 7.24, with λ_i ≥ 0, i = 1, . . . , M, and with τ such that the method is stable. Then the solution is stable in the initial condition u_0 and the right-hand side q_h in the following sense:
‖u_h^n‖_h ≤ ‖u_0‖_h + τ Σ_{k=1}^n ‖q_h(t_k − Θτ)‖_h .   (7.93)


Proof: From the solution representation (7.70) we conclude that
‖u_h^n‖_h ≤ ‖E_{h,τ}‖_h^n ‖u_0‖_h + τ Σ_{k=1}^n ‖E_{h,τ}‖_h^{n−k} ‖(I + τΘA_h)^{−1}‖_h ‖q_h(t_k − Θτ)‖_h   (7.94)
using the submultiplicativity of the matrix norm.
We have the estimate
‖(I + Θτ A_h)^{−1} w_i‖_h = 1/(1 + Θτλ_i) ‖w_i‖_h ≤ ‖w_i‖_h ,
and thus, as in the proof of Theorem 7.24, (1) ⇒ (3),
‖(I + Θτ A_h)^{−1}‖_h ≤ 1
concludes the proof.   □
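A direct numerical check of (7.93) for a random symmetric positive semidefinite A_h and random right-hand side values can look as follows; the update with source term is the assumed one-step-theta scheme (cf. (7.70)), and the Euclidean norm again plays the role of ‖·‖_h:

    import numpy as np

    rng = np.random.default_rng(0)

    # random symmetric positive semidefinite A_h and random right-hand sides
    M = 30
    Q = np.linalg.qr(rng.standard_normal((M, M)))[0]
    A = Q @ np.diag(rng.uniform(0.0, 100.0, M)) @ Q.T

    tau, N = 0.05, 40
    for theta in (0.5, 1.0):                   # stable for every tau here
        u0 = rng.standard_normal(M)
        u, rhs_sum = u0.copy(), 0.0
        for k in range(1, N + 1):
            q = rng.standard_normal(M)         # stands in for q_h(t_k - theta*tau)
            u = np.linalg.solve(np.eye(M) + tau * theta * A,
                                u - tau * (1.0 - theta) * (A @ u) + tau * q)
            rhs_sum += np.linalg.norm(q)
        # estimate (7.93), with the Euclidean norm in the role of the h-norm
        print(theta, np.linalg.norm(u) <= np.linalg.norm(u0) + tau * rhs_sum)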

The stability condition requires step size restrictions for Θ < 1/2, which have been discussed above for Θ = 0.
The requirement of stability can be weakened to
‖E_{h,τ}‖_h ≤ 1 + Kτ   (7.95)
for some constant K > 0, which in the situation of Theorem 7.24 is equivalent to
|R(−λτ)| ≤ 1 + Kτ
for all eigenvalues λ of A_h. Because of
(1 + Kτ)^n ≤ exp(Knτ) ,
in (7.93) the additional factor exp(KT) appears, and correspondingly exp(K(n − k)τ) in the sum. If the process is to be considered only on a small time interval, this becomes part of the constant, but for large time horizons the estimate becomes inconclusive.
On the other hand, for the one-step-theta method with 1/2 < Θ ≤ 1 the estimate ‖E_{h,τ}‖_h ≤ 1, and thus the constants in (7.93), can be sharpened to ‖E_{h,τ}‖_h ≤ R(−λ_min τ), where λ_min is the smallest eigenvalue of A_h, reflecting the exponential decay. For example, for Θ = 1, the (error in the) initial data is damped with the factor
‖E_{h,τ}^n‖_h = R(−λ_min τ)^n = 1/(1 + λ_min τ)^n ,
which for τ ≤ τ_0 for some fixed τ_0 > 0 can be estimated by
exp(−λ̄nτ) for some λ̄ > 0
(for instance λ̄ := ln(1 + λ_min τ_0)/τ_0, since ln(1 + λ_min τ)/τ is decreasing in τ).
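A short numerical confirmation of this bound, with arbitrarily chosen values of λ_min and τ_0 and the rate λ̄ suggested above, is the following sketch:

    import numpy as np

    lam_min, tau0 = 5.0, 0.1                     # arbitrary illustrative values
    lam_bar = np.log(1.0 + lam_min * tau0) / tau0

    for tau in (0.1, 0.05, 0.01):                # all tau <= tau0
        n = np.arange(1, 101, dtype=float)
        damping = (1.0 + lam_min * tau) ** (-n)
        bound = np.exp(-lam_bar * n * tau)
        print(tau, bool(np.all(damping <= bound + 1e-15)))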

We conclude this section with an example.
Example 7.27 (Prothero–Robinson model) Let g ∈ C¹[0, T] be given. We consider the initial value problem
ξ′ + λ(ξ − g) = g′ ,  t ∈ (0, T) ,
ξ(0) = ξ_0 .
Obviously, g is a particular solution of the differential equation, so the general solution is
ξ(t) = e^{−λt} [ξ_0 − g(0)] + g(t) .
In the special case g(t) = arctan t, λ = 500, and for the indicated values of ξ_0, Figure 7.1 shows the qualitative behaviour of the solution.

[Figure 7.1. Prothero–Robinson model; the indicated values are ξ_0 = 400, 50, 0, −100.]

It is worth mentioning that the figure is extremely scaled: the continuous line (for ξ_0 = 0) seems to be straight, but it is the graph of g.
The explicit Euler method for this model is
ξ^{n+1} = (1 − λτ) ξ^n + τ [g′(t_n) + λg(t_n)] .
According to the above considerations, it is nonexpansive only if |1 − λτ| ≤ 1, i.e., λτ ≤ 2, holds. For large numbers λ, this is a very restrictive step size condition; see also the discussion of (7.85) to (7.87).
Due to their better stability properties, implicit methods such as the Crank–Nicolson and the implicit Euler methods do not have such step size restrictions. Nevertheless, the application of implicit methods is not free from surprises. For example, in the case of large numbers λ, an order reduction can occur.
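The contrast can be reproduced with a few lines of code. The sketch below integrates the Prothero–Robinson problem with g(t) = arctan t and λ = 500 (as in Example 7.27) by the explicit and the implicit Euler method; the final time, the step sizes, and ξ_0 = 1 are arbitrary illustrative choices:

    import numpy as np

    # Prothero-Robinson model with g(t) = arctan(t) and lambda = 500 (Example 7.27)
    lam, T = 500.0, 1.0
    g = np.arctan

    def dg(t):
        return 1.0 / (1.0 + t * t)               # g'(t)

    def explicit_euler(xi0, N):
        tau, xi = T / N, xi0
        for n in range(N):
            t = n * tau
            xi = xi + tau * (dg(t) - lam * (xi - g(t)))
        return xi

    def implicit_euler(xi0, N):
        tau, xi = T / N, xi0
        for n in range(N):
            t = (n + 1) * tau
            xi = (xi + tau * (dg(t) + lam * g(t))) / (1.0 + lam * tau)
        return xi

    exact = np.exp(-lam * T) * (1.0 - g(0.0)) + g(T)     # exact value for xi_0 = 1
    for N in (200, 400):                                  # lam*tau = 2.5 and 1.25
        print(N, explicit_euler(1.0, N) - exact, implicit_euler(1.0, N) - exact)
    # explicit Euler blows up for lam*tau > 2 and stays bounded for lam*tau <= 2;
    # the implicit Euler method is unproblematic for both step sizes.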



Exercises
7.10 Determine the corresponding domain of stability S_R of the one-step-theta method for the following values of the parameter Θ: 0, 1/2, 1.


7.11 Show the L-stability of the implicit Euler method.

7.12 (a) Show that the discretization
ξ^n = ξ^{n−2} + 2τ f(t_{n−1}, ξ^{n−1}) ,  n = 2, . . . , N
(midpoint rule), applied to the model equation ξ′ = f(t, ξ) with f(t, ξ) = −λξ and λ > 0, leads, for a sufficiently small step size τ > 0, to a general solution that can be additively decomposed into a decaying and an increasing (in absolute value) oscillating component.
(b) Show that the oscillating component can be damped if additionally the quantity ξ_*^N is computed (modified midpoint rule):
ξ_*^N = (1/2) [ ξ^N + ξ^{N−1} + τ f(t_N, ξ^N) ] .

7.13 Let m ∈ ℕ be given. Find a polynomial R_m(z) = 1 + z + Σ_{j=2}^m γ_j z^j (γ_j ∈ ℝ) such that the corresponding domain of absolute stability for R(z) := R_m(z) contains an interval of the negative real axis that is as large as possible.



7.5 The Maximum Principle for the One-Step-Theta Method
In Section 1.4 we have seen that for a discrete problem of the form (1.31) there is a hierarchy of properties ranging from a comparison principle to a strong maximum principle, which is in turn implied by a hierarchy of conditions, partly summarized as (1.32) or (1.32)*. To remind the reader, we regroup these conditions accordingly:

The collection of conditions (1.32), (1), (2), (3) i), (4)* is called (IM).
(IM) implies the inverse monotonicity of A_h (Theorem 1.12, (1.39)).
The collection of conditions (IM), (5) is called (CP).
(CP) implies a comparison principle in the sense of Corollary 1.13.
The collection of conditions (CP), (6)* is called (MP)*.
(MP)* implies a maximum principle in the form of Theorem 1.10 (1.38).
Alternatively, the collection of conditions (CP), (6)# (see Exercise 1.13) is called (MP).
(MP) implies a maximum principle in the form of Theorem 1.9 (1.34).
Finally, the collection of conditions (CP), (6), (4) (instead of (4)*), (7) is called (SMP).
(SMP) implies a strong maximum principle in the sense of Theorem 1.9.
An L∞-stability estimate in the sense of Theorem 1.14 is closely related. This will be taken up in the next section.
In the following we will discuss the above-mentioned properties for the one-step-theta method, cast into the form (1.31), on the basis of corresponding properties of the underlying elliptic problem and its discretization. It will turn out that under a reasonable condition (see (7.100)), condition (4)* (and thus (3) ii)) will not be necessary for the elliptic problem. This reflects the fact that, contrary to the elliptic problem, for the parabolic problem also the case of a pure Neumann boundary condition (where no degrees of freedom are prescribed and thus none are eliminated) is allowed, since the initial condition acts as a Dirichlet boundary condition.
Assuming that the discretization of the underlying elliptic problem is of the form (1.31), we return to the notation M = M_1 + M_2, where M_2 is the number of degrees of freedom eliminated, and thus A_h, B_h ∈ ℝ^{M_1,M_1}. We write the discrete problem according to (7.66) as one large system of equations for the unknown
u_h = (u_h^1, u_h^2, . . . , u_h^N)^T ,   (7.96)
in which the vectors of grid values u_h^i ∈ ℝ^{M_1} are collected into one large vector of dimension M̄_1 := N · M_1. Thus the grid points in Ω × (0, T) are
