1.6 Let $\Omega \subset \mathbb{R}^2$ be a bounded domain. For a sufficiently smooth function $u : \Omega \to \mathbb{R}$ determine the difference formula with an order as high as possible to approximate $\partial_{11} u(x_1, x_2)$, using the 9 values $u(x_1 + \gamma_1 h, x_2 + \gamma_2 h)$, where $\gamma_1, \gamma_2 \in \{-1, 0, 1\}$.

1.7 Let $\Omega \subset \mathbb{R}^2$ be a bounded domain. Show that in (1.21) there exists no choice of $c_{ij}$ such that for an arbitrary smooth function $u : \Omega \to \mathbb{R}$,
\[
  |\Delta u(x) - \Delta_h u(x)| \le C h^3
\]
is valid with a constant $C$ independent of $h$.

1.8 For the example (1.24), (1.25), investigate the order of consistency both for the discretization (1.26) and for (1.27) in the maximum norm. Are there improvements possible considering the discrete $L^2$-norm? (See (1.18).)

1.9 Consider example (1.24) with
\[
  \Gamma_1 := \{(a, y) \mid y \in (0, b)\} \cup \{(x, b) \mid x \in (0, a]\}, \qquad
  \Gamma_3 := \Gamma \setminus \Gamma_1 ,
\]
and discuss the applicability of the one-sided and the symmetric difference quotients for the approximation of the Neumann boundary condition, in particular with respect to the properties (1.15). In which way does the boundary condition at $(a, b)$, where no unique normal exists, have to be interpreted?

1.10 Generalize the discussion concerning the five-point stencil discretization (including the order of convergence) to the boundary value problem
\[
  -\Delta u + r u = f \quad\text{in } \Omega, \qquad
  u = g \quad\text{on } \partial\Omega,
\]
for $r > 0$ and $\Omega := (0, a) \times (0, b)$. To approximate the reactive term $ru$, the following schemes in the notation of (1.21) are to be used:
(a) $c_{0,0} = 1$, $c_{ij} = 0$ otherwise,
(b) $c_{0,0} > 0$; $c_{0,1}, c_{1,0}, c_{0,-1}, c_{-1,0} \ge 0$; $c_{ij} = 0$ otherwise; and $\sum_{i,j=-1}^{1} c_{ij} = 1$.




1.4 Maximum Principles and Stability
In this section the proof of the stability estimate (1.20), which is still missing, will be given. For this purpose we develop a more general framework, in which we will later also discuss the finite element method (see Section 3.9) and time-dependent problems (see Section 7.5). The boundary value problem (1.1), (1.2) satisfies a (weak) maximum principle in the following sense: If $f$ is continuous and $f(x) \le 0$ for all $x \in \Omega$ (for short, $f \le 0$), then
\[
  \max_{x \in \bar\Omega} u(x) \le \max_{x \in \partial\Omega} u(x) .
\]

This maximum principle is also strong in the following sense: The maximum of $u$ on $\bar\Omega$ can be attained in $\Omega$ only if $u$ is constant (see, for example, [13], also for the following assertions). By exchanging $u, f, g$ with $-u, -f, -g$, respectively, we see that there is an analogous (strong) minimum principle. The same holds for more general linear differential equations as in (1.28), which may also contain convective parts (that is, first-order derivatives). But if the equation contains a reactive part (that is, a term without derivatives), as in the example
\[
  -\Delta u + r u = f \quad\text{in } \Omega
\]
with a continuous function $r : \Omega \to \mathbb{R}$ such that $r(x) \ge 0$ for $x \in \Omega$, then there is a weak maximum principle only in the following form: If $f \le 0$, then
\[
  \max_{x \in \bar\Omega} u(x) \le \max\Bigl\{ \max_{x \in \partial\Omega} u(x),\, 0 \Bigr\} .
\]
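The extra $0$ on the right-hand side cannot be dropped. A small illustration (not from the text, but elementary to verify) on $\Omega = (0,1)^2$ with $r \equiv 2$: the function
\[
  u(x, y) := -\cosh\bigl(x - \tfrac12\bigr)\cosh\bigl(y - \tfrac12\bigr)
  \quad\text{satisfies}\quad
  -\Delta u + 2u = 0 \le 0 ,
\]
but
\[
  \max_{x \in \bar\Omega} u = u\bigl(\tfrac12, \tfrac12\bigr) = -1
  \;>\; \max_{x \in \partial\Omega} u = -\cosh\tfrac12 \approx -1.13 ,
\]
so the classical estimate $\max_{\bar\Omega} u \le \max_{\partial\Omega} u$ fails here, while indeed $-1 \le \max\{-\cosh\tfrac12,\, 0\} = 0$.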

The weak maximum principle directly implies assertions about the dependence of the solution $u$ of the boundary value problem on the data $f$ and $g$, that is, stability properties. One can follow the same approach in investigating the discretization. For the basic example we have

Theorem 1.8 Let $u_h$ be a grid function on $\bar\Omega_h$ defined by (1.7), (1.8), and suppose $f_{ij} \le 0$ for all $i = 1, \dots, l-1$, $j = 1, \dots, m-1$. If $u_h$ attains its maximum on $\Omega_h \cup \partial\Omega_h^*$ at a point $(i_0 h, j_0 h) \in \Omega_h$, then the following holds:
\[
  u_h \ \text{is constant on}\ \Omega_h \cup \partial\Omega_h^* .
\]
Here
\[
  \partial\Omega_h^* := \partial\Omega_h \setminus \{(0, 0), (a, 0), (0, b), (a, b)\} .
\]
In particular, we have
\[
  \max_{(x,y) \in \Omega_h} u_h(x, y) \le \max_{(x,y) \in \partial\Omega_h^*} u_h(x, y) .
\]



Proof: Let $\bar u := u_h(i_0 h, j_0 h)$. Then because of (1.7) and $f_{ij} \le 0$ we have
\[
  4 \bar u \;\le\; \sum_{(k,l) \in N_{(i_0,j_0)}} u_h(kh, lh) \;\le\; 4 \bar u ,
\]
since in particular $u_h(kh, lh) \le \bar u$ for $(k,l) \in N_{(i_0,j_0)}$. Here we used the notation
\[
  N_{(i_0,j_0)} = \{(i_0 - 1, j_0),\, (i_0 + 1, j_0),\, (i_0, j_0 + 1),\, (i_0, j_0 - 1)\}
\]
for the set of indices of the neighbours of $(i_0 h, j_0 h)$ in the five-point stencil. From these inequalities we conclude that
\[
  u_h(kh, lh) = \bar u \quad\text{for } (k,l) \in N_{(i_0,j_0)} .
\]
If we apply this argument to the neighbours in $\Omega_h$ of the grid points $(kh, lh)$ for $(k,l) \in N_{(i_0,j_0)}$, and then continue in the same way with the sets of neighbours in $\Omega_h$ arising in every such step, then finally, for each grid point $(ih, jh) \in \Omega_h \cup \partial\Omega_h^*$, the claimed identity $u_h(ih, jh) = \bar u$ is achieved.  □

The exceptional set of vertices $\partial\Omega_h \setminus \partial\Omega_h^*$ does not participate in any difference stencil, so that the values there are of no relevance for $u_h$.
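The following small Python sketch (not part of the text; the grid size, the data $f$, $g$, and all variable names are chosen here only for illustration) assembles the five-point scheme on the unit square with a right-hand side $f \le 0$ and confirms the inequality of Theorem 1.8 numerically.

    import numpy as np

    # Five-point discretization of -Laplace(u) = f on (0,a) x (0,b) with
    # Dirichlet data g; small grid, dense matrix for clarity.
    a, b, h = 1.0, 1.0, 0.1
    l, m = round(a / h), round(b / h)              # grid points i = 0..l, j = 0..m
    f = lambda x, y: -1.0                          # right-hand side, f <= 0
    g = lambda x, y: np.sin(3.0 * x) + y           # some boundary data

    N = (l - 1) * (m - 1)                          # number of interior points
    idx = lambda i, j: (j - 1) * (l - 1) + (i - 1) # row-wise numbering

    A = np.zeros((N, N))
    rhs = np.zeros(N)
    for j in range(1, m):
        for i in range(1, l):
            r = idx(i, j)
            A[r, r] = 4.0 / h**2
            rhs[r] = f(i * h, j * h)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 1 <= ii <= l - 1 and 1 <= jj <= m - 1:
                    A[r, idx(ii, jj)] = -1.0 / h**2      # interior neighbour
                else:
                    rhs[r] += g(ii * h, jj * h) / h**2   # known boundary value

    uh = np.linalg.solve(A, rhs)                   # interior values of u_h

    # boundary values on the reduced boundary (corners excluded, as in the text)
    bdry = ([g(i * h, 0.0) for i in range(1, l)] + [g(i * h, b) for i in range(1, l)]
            + [g(0.0, j * h) for j in range(1, m)] + [g(a, j * h) for j in range(1, m)])

    print(uh.max() <= max(bdry) + 1e-12)           # expected: True (Theorem 1.8)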
We want to generalize this result and therefore consider a system of equations as in (1.10), (1.11):
\[
  A_h u_h = q_h = -\hat A_h \hat u_h + f , \tag{1.31}
\]
where $A_h \in \mathbb{R}^{M_1, M_1}$ as in (1.10), $\hat A_h \in \mathbb{R}^{M_1, M_2}$ as in (1.11), $u_h, f \in \mathbb{R}^{M_1}$, and $\hat u_h \in \mathbb{R}^{M_2}$. This may be interpreted as the discretization of a boundary value problem obtained by the finite difference method or any other approach, without restrictions on the dimensionality of the domain. Dirichlet boundary conditions are required on at least one part of the boundary. The entries of the vector $u_h$ can then be interpreted as the unknown values at the grid points in $\Omega_h \cup \partial\Omega_h^{(1)}$, where $\partial\Omega_h^{(1)}$ corresponds to a part of $\partial\Omega$ (with flux or mixed boundary conditions). Analogously, the vector $\hat u_h$ (indexed from $M_1 + 1$ to $M_1 + M_2$) corresponds to the values fixed by the Dirichlet boundary conditions on $\partial\Omega_h^{(2)}$. Again let $M = M_1 + M_2$ and
\[
  \tilde A_h := \bigl( A_h \;\big|\; \hat A_h \bigr) \in \mathbb{R}^{M_1, M} .
\]
This means in particular that the dimensions $M_1$ and $M_2$ are not fixed, but are in general unbounded as $h \to 0$.
Guided by (1.15), we require the following general assumptions (1.32) for the rest of the section:

(1) $(A_h)_{rr} > 0$ for all $r = 1, \dots, M_1$,
(2) $(A_h)_{rs} \le 0$ for all $r, s = 1, \dots, M_1$ such that $r \ne s$,
(3) (i) $\sum_{s=1}^{M_1} (A_h)_{rs} \ge 0$ for all $r = 1, \dots, M_1$,
    (ii) for at least one index the strict inequality holds,
(4) $A_h$ is irreducible,
(5) $(\hat A_h)_{rs} \le 0$ for all $r = 1, \dots, M_1$, $s = M_1 + 1, \dots, M$,
(6) $\sum_{s=1}^{M} (\tilde A_h)_{rs} \ge 0$ for all $r = 1, \dots, M_1$,
(7) for every $s = M_1 + 1, \dots, M$ there exists $r \in \{1, \dots, M_1\}$ such that $(\hat A_h)_{rs} \ne 0$.
Generalizing the notation above, for $r \in \{1, \dots, M_1\}$ the indices $s \in \{1, \dots, M\} \setminus \{r\}$ for which $(\tilde A_h)_{rs} \ne 0$ are called neighbours, and they are assembled to form the set $N_r$. Therefore, the irreducibility of $A_h$ means that arbitrary $r, s \in \{1, \dots, M_1\}$ can be connected by neighbourhood relationships.

The condition (7) is not a restriction: It only avoids the inclusion of known values $(\hat u_h)_s$ that do not influence the solution of (1.31) at all. For the five-point stencil on the rectangle, these are the values at the corner points. Because of condition (7), every index $r \in \{M_1 + 1, \dots, M\}$ is connected to every index $s \in \{1, \dots, M_1\}$ by means of neighbourhood relationships.

The conditions (2) and (3) imply the weak diagonal dominance of $A_h$. Note that the conditions are formulated redundantly: Condition (3) also follows from (5) through (7).
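A small Python sketch (the function names and the dense-array assumption are mine, not the book's) that checks the conditions of (1.32) for a given pair $(A_h, \hat A_h)$; irreducibility is tested as strong connectivity of the directed graph defined by the nonzero entries of $A_h$:

    import numpy as np
    from collections import deque

    def reachable(M, start, tol=1e-14):
        # indices reachable from `start` along nonzero entries of M (directed graph)
        seen, queue = {start}, deque([start])
        while queue:
            r = queue.popleft()
            for s in np.flatnonzero(np.abs(M[r]) > tol):
                if int(s) not in seen:
                    seen.add(int(s))
                    queue.append(int(s))
        return seen

    def check_conditions(A, Ahat, tol=1e-14):
        """Return a dict stating which of the conditions (1)-(7) in (1.32) hold."""
        M1 = A.shape[0]
        Atil = np.hstack([A, Ahat])                 # tilde A_h = (A_h | hat A_h)
        off = A - np.diag(np.diag(A))
        rowsA = A.sum(axis=1)
        return {
            "(1)": bool(np.all(np.diag(A) > 0)),
            "(2)": bool(np.all(off <= tol)),
            "(3)": bool(np.all(rowsA >= -tol) and np.any(rowsA > tol)),
            # (4) irreducibility: every index reachable from index 0 in the graphs
            #     of A and of A^T (strong connectivity)
            "(4)": len(reachable(A, 0, tol)) == M1 and len(reachable(A.T, 0, tol)) == M1,
            "(5)": bool(np.all(Ahat <= tol)),
            "(6)": bool(np.all(Atil.sum(axis=1) >= -tol)),
            # (7) every column of hat A_h (every Dirichlet index) has a nonzero entry
            "(7)": bool(np.all(np.any(np.abs(Ahat) > tol, axis=0))),
        }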
To simplify the notation we will use the following conventions, where $u, v$ and $A, B$ are vectors and matrices, respectively, of suitable dimensions:
\[
\begin{aligned}
  u \ge 0 \quad&\text{if and only if}\quad (u)_i \ge 0 \ \text{ for all indices } i ,\\
  u \ge v \quad&\text{if and only if}\quad u - v \ge 0 ,\\
  A \ge 0 \quad&\text{if and only if}\quad (A)_{ij} \ge 0 \ \text{ for all indices } (i, j) ,\\
  A \ge B \quad&\text{if and only if}\quad A - B \ge 0 .
\end{aligned}
\tag{1.33}
\]

Theorem 1.9 We consider (1.31) under the assumptions (1.32). Furthermore, let $f \le 0$. Then a strong maximum principle holds: If the components of $\tilde u_h = \begin{pmatrix} u_h \\ \hat u_h \end{pmatrix}$ attain a nonnegative maximum for some index $r \in \{1, \dots, M_1\}$, then all the components are equal. In particular, a weak maximum principle is fulfilled:
\[
  \max_{r \in \{1, \dots, M\}} (\tilde u_h)_r \;\le\; \max\Bigl\{ 0,\ \max_{r \in \{M_1+1, \dots, M\}} (\hat u_h)_r \Bigr\} . \tag{1.34}
\]


Proof: Let $\bar u = \max_{s \in \{1, \dots, M\}} (\tilde u_h)_s$, and let $\bar u = (u_h)_r$ for some $r \in \{1, \dots, M_1\}$. Because of (1.32) (2), (5), (6), the $r$th row of (1.31) implies
\[
  (A_h)_{rr}\, \bar u
  \;\le\; -\sum_{s \in N_r} \bigl(\tilde A_h\bigr)_{rs} (\tilde u_h)_s
  \;=\; \sum_{s \in N_r} \bigl|\bigl(\tilde A_h\bigr)_{rs}\bigr|\, (\tilde u_h)_s
  \;\le\; \sum_{s \in N_r} \bigl|\bigl(\tilde A_h\bigr)_{rs}\bigr|\, \bar u
  \;\le\; (A_h)_{rr}\, \bar u , \tag{1.35}
\]
where the assumption $\bar u \ge 0$ is used in the last estimate. Therefore, everywhere equality has to hold. Since the second inequality is valid also for every single term and $(\tilde A_h)_{rs} \ne 0$ by the definition of $N_r$, we finally conclude that
\[
  (\tilde u_h)_s = \bar u \quad\text{for all } s \in N_r .
\]
This allows us to apply this argument to all $s \in N_r \cap \{1, \dots, M_1\}$, then to the corresponding sets of neighbours, and so on, until the assertion is proven.  □
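A tiny concrete instance of the abstract setting (the numbers are chosen here for illustration and are not from the book) confirms (1.34) numerically:

    import numpy as np

    # Small instance of (1.31): three-point discretization of -u'' = f on (0,1)
    # with 3 interior unknowns (M1 = 3) and 2 Dirichlet values (M2 = 2).
    h = 0.25
    A = np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  2.]]) / h**2          # A_h
    Ahat = np.array([[-1.,  0.],
                     [ 0.,  0.],
                     [ 0., -1.]]) / h**2            # hat A_h (boundary coupling)
    f = np.array([-1., -1., -1.])                   # f <= 0
    uhat = np.array([0.3, -0.2])                    # Dirichlet values

    u = np.linalg.solve(A, -Ahat @ uhat + f)        # solve (1.31)
    utilde = np.concatenate([u, uhat])
    print(utilde.max() <= max(0.0, uhat.max()) + 1e-12)   # (1.34) holds: True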

The requirement of irreducibility can be weakened if instead of (1.32) (6) we have

(6)$^*$ $\displaystyle\sum_{s=1}^{M} \bigl(\tilde A_h\bigr)_{rs} = 0$ for all $r = 1, \dots, M_1$.

Then condition (4) can be replaced by the requirement

(4)$^*$ For every $r_1 \in \{1, \dots, M_1\}$ such that
\[
  \sum_{s=1}^{M_1} (A_h)_{r_1 s} = 0 \tag{1.36}
\]
there are indices $r_2, \dots, r_{l+1}$ such that $(A_h)_{r_i r_{i+1}} \ne 0$ for $i = 1, \dots, l$ and
\[
  \sum_{s=1}^{M_1} (A_h)_{r_{l+1} s} > 0 . \tag{1.37}
\]

These modified conditions, without (7), will be denoted by (1.32)$^*$.

Motivated by the example above, we call a point $r \in \{1, \dots, M_1\}$ far from the boundary if (1.36) holds and close to the boundary if (1.37) holds; the points $r \in \{M_1 + 1, \dots, M\}$ are called boundary points.
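The following sketch (hypothetical helper name, dense arrays, my own choice of tolerance) checks condition (4)$^*$ in exactly this terminology: from every row that is far from the boundary, a row close to the boundary must be reachable through nonzero entries of $A_h$.

    import numpy as np
    from collections import deque

    def check_4_star(A, tol=1e-14):
        """Condition (4)*: from every 'far from the boundary' row (zero row sum)
        a 'close to the boundary' row (positive row sum) is reachable via a chain
        of nonzero entries of A_h."""
        rows = A.sum(axis=1)
        close = set(int(s) for s in np.flatnonzero(rows > tol))
        for r1 in np.flatnonzero(np.abs(rows) <= tol):
            seen, queue, found = {int(r1)}, deque([int(r1)]), False
            while queue and not found:
                r = queue.popleft()
                for s in np.flatnonzero(np.abs(A[r]) > tol):
                    if int(s) in close:
                        found = True
                        break
                    if int(s) not in seen:
                        seen.add(int(s))
                        queue.append(int(s))
            if not found:
                return False
        return True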
Theorem 1.10 We consider (1.31) under the assumption (1.32)$^*$. If $f \le 0$, then
\[
  \max_{r \in \{1, \dots, M\}} (\tilde u_h)_r \;\le\; \max_{r \in \{M_1+1, \dots, M\}} (\hat u_h)_r . \tag{1.38}
\]


Proof: We use the same notation and the same arguments as in the proof of Theorem 1.9. In the last estimate of (1.35) equality now holds, so that no sign condition on $\bar u$ is necessary. Because of (4)$^*$ the maximum will also be attained at a point close to the boundary and therefore also at its neighbours. Because of (6)$^*$ a boundary point belongs to these neighbours as well, which proves the assertion.  □
From the maximum principles we immediately conclude a comparison principle:

Lemma 1.11 We assume (1.32) or (1.32)$^*$. Let $u_{h1}, u_{h2} \in \mathbb{R}^{M_1}$ be solutions of
\[
  A_h u_{hi} = -\hat A_h \hat u_{hi} + f_i \quad\text{for } i = 1, 2
\]
for given $f_1, f_2 \in \mathbb{R}^{M_1}$, $\hat u_{h1}, \hat u_{h2} \in \mathbb{R}^{M_2}$, which satisfy $f_1 \le f_2$, $\hat u_{h1} \le \hat u_{h2}$. Then
\[
  u_{h1} \le u_{h2} .
\]

Proof: From $A_h (u_{h1} - u_{h2}) = -\hat A_h (\hat u_{h1} - \hat u_{h2}) + f_1 - f_2$ we can conclude with Theorem 1.9 or 1.10 that
\[
  \max_{r \in \{1, \dots, M_1\}} (u_{h1} - u_{h2})_r \le 0 .  \qquad □
\]

This implies in particular the uniqueness of a solution of (1.31) for arbitrary $\hat u_h$ and $f$, and also the regularity (invertibility) of $A_h$.
In the following we denote by $0$ the zero vector and by $\mathbf{0}$ the zero matrix, all of whose components are equal to $0$. An immediate consequence of Lemma 1.11 is the following

Theorem 1.12 Let $A_h \in \mathbb{R}^{M_1, M_1}$ be a matrix with the properties (1.32) (1)–(3) (i), (4)$^*$, and let $u_h \in \mathbb{R}^{M_1}$. Then
\[
  A_h u_h \ge 0 \quad\text{implies}\quad u_h \ge 0 . \tag{1.39}
\]

Proof: To be able to apply Lemma 1.11, one has to construct a matrix $\hat A_h \in \mathbb{R}^{M_1, M_2}$ such that (1.32)$^*$ holds. Obviously, this is possible. Then one can choose
\[
  u_{h2} := u_h , \quad f_2 := A_h u_{h2} , \quad \hat u_{h2} := 0 , \qquad
  u_{h1} := 0 , \quad f_1 := 0 , \quad \hat u_{h1} := 0
\]
to conclude the assertion. Because of $\hat u_{hi} := 0$ for $i = 1, 2$, the specific definition of $\hat A_h$ plays no role.  □
A matrix with the property (1.39) is called inverse monotone. An equivalent requirement is
\[
  v_h \ge 0 \;\Rightarrow\; A_h^{-1} v_h \ge 0 ,
\]
and therefore, by choosing the unit vectors as $v_h$,
\[
  A_h^{-1} \ge 0 .
\]
Inverse monotone matrices that also satisfy (1.32) (1), (2) are called M-matrices.
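As a quick numerical illustration (not from the text; the one-dimensional three-point matrix is used here only because it is compact to write down and has the same structural properties), inverse monotonicity can be checked directly:

    import numpy as np

    # Three-point discretization of -u'' on (0,1) with n interior points.
    n = 8
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    print(np.all(np.diag(A) > 0))                  # (1.32) (1)
    print(np.all(A - np.diag(np.diag(A)) <= 0))    # (1.32) (2)
    print(np.all(np.linalg.inv(A) >= -1e-12))      # inverse monotone: A^{-1} >= 0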
Finally, we can weaken the assumptions for the validity of the comparison principle.

Corollary 1.13 Suppose that $A_h \in \mathbb{R}^{M_1, M_1}$ is inverse monotone and (1.32) (5) holds. Let $u_{h1}, u_{h2} \in \mathbb{R}^{M_1}$ be solutions of
\[
  A_h u_{hi} = -\hat A_h \hat u_{hi} + f_i \quad\text{for } i = 1, 2
\]
for given $f_1, f_2 \in \mathbb{R}^{M_1}$, $\hat u_{h1}, \hat u_{h2} \in \mathbb{R}^{M_2}$ that satisfy $f_1 \le f_2$, $\hat u_{h1} \le \hat u_{h2}$. Then
\[
  u_{h1} \le u_{h2} .
\]

Proof: Multiplying the equation
\[
  A_h (u_{h1} - u_{h2}) = -\hat A_h (\hat u_{h1} - \hat u_{h2}) + f_1 - f_2
\]
from the left by the matrix $A_h^{-1}$, we get
\[
  u_{h1} - u_{h2}
  = -\underbrace{A_h^{-1}}_{\ge 0}\,\underbrace{\hat A_h}_{\le 0}\,
    \underbrace{(\hat u_{h1} - \hat u_{h2})}_{\le 0}
  + \underbrace{A_h^{-1}}_{\ge 0}\,\underbrace{(f_1 - f_2)}_{\le 0}
  \;\le\; 0 .  \qquad □
\]
The importance of Corollary 1.13 lies in the fact that there exist discretization methods for which the matrix $\tilde A_h$ does not satisfy, e.g., condition (1.32) (6) or (6)$^*$, but nevertheless $A_h^{-1} \ge 0$. A typical example of such a method is the finite volume method described in Chapter 6.

In the following we denote by $\mathbf{1}$ a vector (of suitable dimension) whose components are all equal to $1$.
Theorem 1.14 We assume (1.32) (1)–(3), (4)$^*$, (5). Furthermore, let $w_h^{(1)}, w_h^{(2)} \in \mathbb{R}^{M_1}$ be given such that
\[
  A_h w_h^{(1)} \ge \mathbf{1} , \qquad A_h w_h^{(2)} \ge -\hat A_h \mathbf{1} . \tag{1.40}
\]
Then a solution of $A_h u_h = -\hat A_h \hat u_h + f$ satisfies
\[
  (1)\quad -\bigl( |f|_\infty\, w_h^{(1)} + |\hat u_h|_\infty\, w_h^{(2)} \bigr)
  \;\le\; u_h \;\le\; |f|_\infty\, w_h^{(1)} + |\hat u_h|_\infty\, w_h^{(2)} ,
\]
\[
  (2)\quad |u_h|_\infty \;\le\; \bigl| w_h^{(1)} \bigr|_\infty |f|_\infty
  + \bigl| w_h^{(2)} \bigr|_\infty |\hat u_h|_\infty .
\]

Under the assumptions (1.32) (1)–(3), (4)$^*$, and (1.40), the matrix norm $\|\cdot\|_\infty$ induced by $|\cdot|_\infty$ satisfies
\[
  \bigl\| A_h^{-1} \bigr\|_\infty \le \bigl| w_h^{(1)} \bigr|_\infty .
\]


Proof: Since $-|f|_\infty \mathbf{1} \le f \le |f|_\infty \mathbf{1}$, and since the analogous statement is valid for $\hat u_h$, the vector $v_h := |f|_\infty\, w_h^{(1)} + |\hat u_h|_\infty\, w_h^{(2)} - u_h$ satisfies
\[
  A_h v_h \;\ge\; |f|_\infty \mathbf{1} - f - \hat A_h \bigl( |\hat u_h|_\infty \mathbf{1} - \hat u_h \bigr) \;\ge\; 0 ,
\]
where we have also used $-\hat A_h \ge 0$ in the last estimate. Therefore, the right inequality of (1) follows from Theorem 1.12; the left inequality can be shown analogously.
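To connect Theorem 1.14 with the five-point scheme, the following sketch (not taken from the book; it follows the usual choice of a comparison function) takes $w_h^{(1)}$ as the grid restriction of $w(x, y) = x(a - x)/2$, which satisfies $-\Delta w = 1$ and $w \ge 0$, and checks the first condition in (1.40) as well as the resulting bound $\|A_h^{-1}\|_\infty \le |w_h^{(1)}|_\infty \le a^2/8$ numerically:

    import numpy as np

    # Five-point matrix A_h (interior unknowns only) on (0,a) x (0,b),
    # assembled via Kronecker products; ordering: index i (x-direction) innermost.
    a, b, h = 1.0, 1.0, 0.05
    l, m = round(a / h), round(b / h)
    T = lambda n: 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A = (np.kron(np.eye(m - 1), T(l - 1)) + np.kron(T(m - 1), np.eye(l - 1))) / h**2

    x = h * np.arange(1, l)                     # interior x-coordinates
    w1 = np.tile(x * (a - x) / 2.0, m - 1)      # w^(1): restriction of x(a-x)/2

    print(np.all(A @ w1 >= 1.0 - 1e-10))        # first condition in (1.40)
    Ainv = np.linalg.inv(A)
    norm_Ainv = np.abs(Ainv).sum(axis=1).max()  # induced infinity norm of A_h^{-1}
    print(norm_Ainv <= w1.max() + 1e-10)        # ||A_h^{-1}||_inf <= |w^(1)|_inf <= a^2/8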
