Then,

L_h e = L_h u − L_h u_h = L_h u − f_h = τ_h.    (12.22)

It can be shown (see Exercise 5) that

‖τ_h‖_h^2 ≤ 3 ( ‖f‖_h^2 + ‖f‖_{L^2(0,1)}^2 ),    (12.23)

from which it follows that the norm of the discrete second-order derivative of the discretization error is bounded, provided that the norms of f on the right-hand side of (12.23) are also bounded.
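The behaviour of the consistency error τ_h = L_h u − f_h can also be observed numerically. The following is a minimal sketch (in Python, assuming NumPy; the test pair u(x) = sin(πx), f = π² sin(πx) is an assumption chosen here for illustration): halving h divides the discrete maximum norm of τ_h by roughly four, in agreement with its second-order behaviour.

```python
import numpy as np

def tau(n):
    """Consistency error tau_h(x_j) = L_h u(x_j) - f(x_j) for -u'' = f,
    with the centred 3-point discrete operator L_h."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)
    f = np.pi**2 * np.sin(np.pi * x)
    # L_h u at interior nodes: (-u_{j-1} + 2u_j - u_{j+1}) / h^2
    Lhu = (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2
    return np.max(np.abs(Lhu - f[1:-1]))

t1, t2 = tau(20), tau(40)
print(f"tau(h) = {t1:.3e}, tau(h/2) = {t2:.3e}, ratio = {t1 / t2:.2f}")
assert abs(t1 / t2 - 4.0) < 0.1   # second-order decay
```

The observed ratio close to 4 reflects the O(h^2) consistency of the centred scheme.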

12.2.2 Convergence Analysis

The finite difference solution u_h can be characterized by a discrete Green's function as follows. For a given grid point x_k, define a grid function G^k ∈ V_h^0 as the solution to the following problem

L_h G^k = e^k,    (12.24)

12.2 Finite Difference Approximation 539

where e^k ∈ V_h^0 satisfies e^k(x_j) = δ_kj, 1 ≤ j ≤ n−1. It is easy to see that G^k(x_j) = h G(x_j, x_k), where G is the Green's function introduced in (12.4) (see Exercise 6). For any grid function g ∈ V_h^0 we can define the grid function

w_h = T_h g,    w_h = Σ_{k=1}^{n−1} g(x_k) G^k.    (12.25)

Then

L_h w_h = Σ_{k=1}^{n−1} g(x_k) L_h G^k = Σ_{k=1}^{n−1} g(x_k) e^k = g.

In particular, the solution u_h of (12.10) satisfies u_h = T_h f; therefore

u_h = Σ_{k=1}^{n−1} f(x_k) G^k,   and   u_h(x_j) = h Σ_{k=1}^{n−1} G(x_j, x_k) f(x_k).    (12.26)
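The identity G^k(x_j) = h G(x_j, x_k) can be verified numerically: the entries of the inverse of the finite difference matrix coincide, up to round-off, with h times the continuous Green's function. The following is a sketch (in Python, assuming NumPy; the grid size n = 8 and the explicit formula G(x, s) = min(x, s)(1 − max(x, s)) for the Green's function of (12.4) are assumptions made here for illustration).

```python
import numpy as np

n = 8                     # number of subintervals (illustrative choice)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Finite difference matrix for -u'' on (0,1) with homogeneous
# Dirichlet conditions: (1/h^2) tridiag(-1, 2, -1), size n-1
A = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2

def G(xi, s):
    """Assumed Green's function of -u'' = f, u(0) = u(1) = 0."""
    return min(xi, s) * (1.0 - max(xi, s))

Ainv = np.linalg.inv(A)
for j in range(1, n):
    for k in range(1, n):
        # (A^{-1})_{jk} = G^k(x_j) = h G(x_j, x_k)
        assert abs(Ainv[j - 1, k - 1] - h * G(x[j], x[k])) < 1e-12
print("G^k(x_j) = h G(x_j, x_k) verified")
```

For this constant-coefficient problem the identity is exact (not merely O(h^2)), which is what (12.26) expresses.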

Theorem 12.1 Assume that f ∈ C^2([0,1]). Then the nodal error e(x_j) = u(x_j) − u_h(x_j) satisfies

‖u − u_h‖_{h,∞} ≤ (h^2/96) ‖f''‖_∞,    (12.27)

i.e., u_h converges to u (in the discrete maximum norm) with second order with respect to h.

Proof. We start by noticing that, thanks to the representation (12.25), the following discrete counterpart of (12.5) holds:

‖u_h‖_{h,∞} ≤ (1/8) ‖f‖_{h,∞}.    (12.28)

Indeed, we have

|u_h(x_j)| ≤ h Σ_{k=1}^{n−1} G(x_j, x_k) |f(x_k)| ≤ ‖f‖_{h,∞} h Σ_{k=1}^{n−1} G(x_j, x_k) = ‖f‖_{h,∞} (1/2) x_j (1 − x_j) ≤ (1/8) ‖f‖_{h,∞},

since, if g ≡ 1, then T_h g is such that T_h g(x_j) = (1/2) x_j (1 − x_j) (see Exercise 7).

Inequality (12.28) provides a stability result in the discrete maximum norm for the finite difference solution u_h. Using (12.22), by the same argument used to prove (12.28) we obtain

‖e‖_{h,∞} ≤ (1/8) ‖τ_h‖_{h,∞}.

Finally, the thesis (12.27) follows owing to (12.20). ✷

Observe that for the derivation of the convergence result (12.27) we have used both stability and consistency. In particular, the discretization error is of the same order (with respect to h) as the consistency error τ_h.
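The second-order convergence stated in Theorem 12.1 can be checked experimentally. Below is a sketch (in Python, assuming NumPy; the manufactured solution u = sin(πx), f = π² sin(πx) is an assumption for illustration, for which ‖f''‖_∞ = π⁴): the observed order is 2 and the nodal error respects the bound h²π⁴/96.

```python
import numpy as np

def solve_fd(n):
    """Centred finite differences for -u'' = f on (0,1), u(0) = u(1) = 0,
    with f = pi^2 sin(pi x) so that u = sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    uh = np.linalg.solve(A, f)
    err = np.max(np.abs(np.sin(np.pi * x[1:-1]) - uh))   # discrete max norm
    return h, err

h1, e1 = solve_fd(16)
h2, e2 = solve_fd(32)
rate = np.log(e1 / e2) / np.log(2.0)
print(f"errors: {e1:.3e}, {e2:.3e}, observed order = {rate:.2f}")

# Bound (12.27): ||u - u_h||_{h,inf} <= h^2/96 * ||f''||_inf, here ||f''||_inf = pi^4
assert e1 <= h1**2 / 96 * np.pi**4
assert e2 <= h2**2 / 96 * np.pi**4
assert abs(rate - 2.0) < 0.1
```

Halving h reduces the error by a factor close to 4, as predicted.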

540 12. Two-Point Boundary Value Problems

12.2.3 Finite Differences for Two-Point Boundary Value Problems with Variable Coefficients

A two-point boundary value problem more general than (12.1)-(12.2) is the following one

Lu(x) = −(J(u)(x))' + γ(x)u(x) = f(x),  0 < x < 1,
u(0) = d_0,  u(1) = d_1,    (12.29)

where

J(u)(x) = α(x) u'(x),    (12.30)

d_0 and d_1 are assigned constants and α, γ and f are given functions that are continuous in [0,1]. Finally, γ(x) ≥ 0 in [0,1] and α(x) ≥ α_0 > 0 for a suitable α_0. The auxiliary variable J(u) is the flux associated with u and very often has a precise physical meaning.

For the approximation, it is convenient to introduce on [0,1] a new grid made by the midpoints x_{j+1/2} = (x_j + x_{j+1})/2 of the intervals [x_j, x_{j+1}] for j = 0, ..., n−1. Then, a finite difference approximation of (12.29) is given by: find u_h ∈ V_h such that

L_h u_h(x_j) = f(x_j)  for all j = 1, ..., n−1,
u_h(x_0) = d_0,  u_h(x_n) = d_1,    (12.31)

where L_h is defined for j = 1, ..., n−1 as

L_h w_h(x_j) = − ( J_{j+1/2}(w_h) − J_{j−1/2}(w_h) ) / h + γ_j w_j.    (12.32)

We have defined γ_j = γ(x_j) and, for j = 0, ..., n−1, the approximate fluxes are given by

J_{j+1/2}(w_h) = α_{j+1/2} (w_{j+1} − w_j) / h,    (12.33)

with α_{j+1/2} = α(x_{j+1/2}).

The finite difference scheme (12.31)-(12.32) with the approximate fluxes (12.33) can still be cast in the form (12.7) by setting

A_fd = h^{−2} tridiag_{n−1}(−a, d, −a) + diag_{n−1}(c),    (12.34)

where

a = (α_{3/2}, α_{5/2}, ..., α_{n−3/2})^T ∈ R^{n−2},
d = (α_{1/2} + α_{3/2}, α_{3/2} + α_{5/2}, ..., α_{n−3/2} + α_{n−1/2})^T ∈ R^{n−1},
c = (γ_1, ..., γ_{n−1})^T ∈ R^{n−1}.

The matrix (12.34) is symmetric positive definite and is also strictly diagonally dominant if γ > 0.
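The scheme (12.31)-(12.33) can be assembled and tested in a few lines. The following is a sketch (in Python, assuming NumPy; the data α(x) = 1 + x, γ = 1 and the manufactured solution u = x(1 − x), which yields f = 1 + 5x − x², are assumptions chosen for illustration).

```python
import numpy as np

# Conservative scheme (12.31)-(12.33) for -(alpha u')' + gamma u = f.
# Manufactured data (illustrative assumption):
#   alpha(x) = 1 + x, gamma(x) = 1, u(x) = x(1-x)  =>  f(x) = 1 + 5x - x^2
alpha = lambda t: 1.0 + t
gamma = lambda t: np.ones_like(t)
f     = lambda t: 1.0 + 5.0 * t - t**2
uex   = lambda t: t * (1.0 - t)

n = 10
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
am = alpha(x[:-1] + h / 2)           # alpha_{j+1/2}, j = 0, ..., n-1

# A_fd = h^{-2} tridiag(-a, d, -a) + diag(c), cf. (12.34)
d = am[:-1] + am[1:]                 # alpha_{j-1/2} + alpha_{j+1/2}, j = 1..n-1
A = (np.diag(d) - np.diag(am[1:-1], 1) - np.diag(am[1:-1], -1)) / h**2 \
    + np.diag(gamma(x[1:-1]))
uh = np.linalg.solve(A, f(x[1:-1])) # homogeneous Dirichlet data here

err = np.max(np.abs(uex(x[1:-1]) - uh))
print(f"max nodal error = {err:.2e}")
assert err < 1e-10
```

For this particular pair (quadratic u, linear α) the midpoint fluxes are exact, so the nodal error is pure round-off; for general smooth data the scheme is second-order accurate.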

The convergence analysis of the scheme (12.31)-(12.32) can be carried

out by extending straightforwardly the techniques developed in Sections

12.2.1 and 12.2.2.

We conclude this section by addressing boundary conditions that are more general than those considered in (12.29). For this purpose we assume that

u(0) = d_0,  J(u)(1) = g_1,

where d_0 and g_1 are two given data. The boundary condition at x = 1 is called a Neumann condition, while the condition at x = 0 (where the value of u is assigned) is a Dirichlet boundary condition. The finite difference discretization of the Neumann boundary condition can be performed by using the mirror imaging technique. For any sufficiently smooth function ψ we write its truncated Taylor expansion at x_n as

ψ_n = ( ψ_{n−1/2} + ψ_{n+1/2} ) / 2 − (h^2/16) ( ψ''(η_n) + ψ''(ξ_n) )

for suitable η_n ∈ (x_{n−1/2}, x_n), ξ_n ∈ (x_n, x_{n+1/2}). Taking ψ = J(u) and neglecting the term containing h^2 yields

J_{n+1/2}(u_h) = 2g_1 − J_{n−1/2}(u_h).    (12.35)

Notice that the point x_{n+1/2} = x_n + h/2 and the corresponding flux J_{n+1/2} do not really exist (indeed, x_{n+1/2} is called a "ghost" point); rather, the flux there is generated by linear extrapolation from the nearby nodes x_{n−1/2} and x_n. The finite difference equation (12.32) at the node x_n reads

( J_{n−1/2}(u_h) − J_{n+1/2}(u_h) ) / h + γ_n u_n = f_n.

Using (12.35) to express J_{n+1/2}(u_h), we finally get the second-order accurate approximation

−(α_{n−1/2}/h^2) u_{n−1} + ( α_{n−1/2}/h^2 + γ_n/2 ) u_n = g_1/h + f_n/2.

This formula suggests an easy modification of the matrix and right-hand side entries in the finite difference system (12.7).
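The modified last row can be tried out directly. Below is a sketch (in Python, assuming NumPy) for the special case α = 1, γ = 0, i.e. −u'' = f with u(0) = d_0 and u'(1) = g_1; the manufactured solution u(x) = x², for which f = −2 and g_1 = 2, is an assumption chosen for illustration.

```python
import numpy as np

# Ghost-point (mirror imaging) treatment of the Neumann condition at x = 1,
# sketched for alpha = 1, gamma = 0:  -u'' = f, u(0) = d0, u'(1) = g1.
# Manufactured data (illustrative assumption): u(x) = x^2.
n = 10
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
d0, g1 = 0.0, 2.0
f = lambda t: -2.0 + 0.0 * t

# Unknowns u_1, ..., u_n (u_0 = d0 is eliminated into the right-hand side)
A = np.zeros((n, n))
b = np.zeros(n)
for j in range(1, n):                 # interior rows: standard 3-point stencil
    A[j - 1, j - 1] = 2.0 / h**2
    if j > 1:
        A[j - 1, j - 2] = -1.0 / h**2
    A[j - 1, j] = -1.0 / h**2
    b[j - 1] = f(x[j])
b[0] += d0 / h**2

# Last row: -u_{n-1}/h^2 + (1/h^2 + gamma_n/2) u_n = g1/h + f_n/2
A[n - 1, n - 2] = -1.0 / h**2
A[n - 1, n - 1] = 1.0 / h**2
b[n - 1] = g1 / h + f(x[n]) / 2.0

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - x[1:]**2))
print(f"max nodal error = {err:.2e}")
assert err < 1e-9
```

For this quadratic solution the discretization (including the ghost-point closure) is exact, so the error is round-off only; in general the treatment preserves second-order accuracy.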

For a further generalization of the boundary conditions in (12.29) and their discretization using finite differences we refer to Exercise 10, where boundary conditions of the form λu + μu' = g at both the endpoints of (0,1) are considered for u (Robin boundary conditions).

For a thorough presentation and analysis of finite difference approximations of two-point boundary value problems, see, e.g., [Str89] and [HGR96].


12.3 The Spectral Collocation Method

Other discretization schemes can be derived which exhibit the same structure as the finite difference problem (12.10), with a discrete operator L_h being defined in a different manner, though.

Actually, numerical approximations of the second derivative other than the centred finite difference one can be set up, as described in Section 10.10.3. A noticeable instance is provided by the spectral collocation method. In that case we assume the differential equation (12.1) to be set on the interval (−1,1) and choose the nodes {x_0, ..., x_n} to coincide with the n+1 Legendre-Gauss-Lobatto nodes introduced in Section 10.4. Besides, u_h is a polynomial of degree n. For coherence, we will use throughout the section the index n instead of h.

The spectral collocation problem reads:

find u_n ∈ P_n^0 :  L_n u_n(x_j) = f(x_j),  j = 1, ..., n−1,    (12.36)

where P_n^0 is the set of polynomials p ∈ P_n such that p(−1) = p(1) = 0.

Besides, L_n v = L I_n v for any continuous function v, where I_n v ∈ P_n is the interpolant of v at the nodes {x_0, ..., x_n} and L denotes the differential operator at hand which, in the case of equation (12.1), coincides with −d^2/dx^2. Clearly, if v ∈ P_n then L_n v = Lv.

The algebraic form of (12.36) becomes

A_sp u = f,

where u_j = u_n(x_j), f_j = f(x_j), j = 1, ..., n−1, and the spectral collocation matrix A_sp ∈ R^{(n−1)×(n−1)} is equal to −D̃^2, where D̃ is the matrix obtained from the pseudo-spectral differentiation matrix (10.73) by eliminating the first and the (n+1)-th rows and columns.
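The whole construction can be sketched in a few lines of Python (assuming NumPy). Two caveats: the differentiation matrix is built here from a Legendre Vandermonde rather than from the explicit formula (10.73), and the second-derivative matrix is formed by squaring D before deleting the boundary rows and columns (both are implementation choices of this sketch, not prescriptions of the text). The test problem −u'' = π² sin(πx) on (−1,1), with exact solution sin(πx), is an assumption for illustration.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes(n):
    """n+1 Legendre-Gauss-Lobatto nodes: the endpoints and the roots of P_n'."""
    interior = leg.Legendre.basis(n).deriv().roots()
    return np.sort(np.concatenate(([-1.0], interior.real, [1.0])))

n = 16
x = lgl_nodes(n)

# Pseudo-spectral differentiation matrix D = V' V^{-1},
# with Vandermonde V_{jk} = P_k(x_j) in the Legendre basis
V  = np.column_stack([leg.Legendre.basis(k)(x) for k in range(n + 1)])
Vd = np.column_stack([leg.Legendre.basis(k).deriv()(x) for k in range(n + 1)])
D  = Vd @ np.linalg.inv(V)

# Collocation matrix for L = -d^2/dx^2 with homogeneous Dirichlet data:
# square D first, then delete the first and last rows and columns
A_sp = -(D @ D)[1:-1, 1:-1]

# Collocation problem (12.36) for -u'' = pi^2 sin(pi x), exact u = sin(pi x)
f = np.pi**2 * np.sin(np.pi * x[1:-1])
un = np.linalg.solve(A_sp, f)
err = np.max(np.abs(un - np.sin(np.pi * x[1:-1])))
print(f"n = {n}: max nodal error = {err:.2e}")
assert err < 1e-6
```

Already for n = 16 the nodal error is near machine precision, illustrating the spectral (faster than any fixed power of 1/n) accuracy of the method for smooth data.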

For the analysis of (12.36) we can introduce the following discrete scalar product

(u, v)_n = Σ_{j=0}^{n} u(x_j) v(x_j) w_j,    (12.37)

where the w_j are the weights of the Legendre-Gauss-Lobatto quadrature formula (see Section 10.4). Then (12.36) is equivalent to

(L_n u_n, v_n)_n = (f, v_n)_n  ∀ v_n ∈ P_n^0.    (12.38)

Since (12.37) is exact for u, v such that uv ∈ P_{2n−1} (see Section 10.2), then

(L_n v_n, v_n)_n = (L_n v_n, v_n) = ‖v_n'‖_{L^2(−1,1)}^2  ∀ v_n ∈ P_n^0.

Besides,

(f, v_n)_n ≤ ‖f‖_n ‖v_n‖_n ≤ √6 ‖f‖_∞ ‖v_n‖_{L^2(−1,1)},


where ‖f‖_∞ denotes the maximum of f in [−1,1] and we have used the fact that ‖f‖_n ≤ √2 ‖f‖_∞ and the equivalence result

‖v_n‖_{L^2(−1,1)} ≤ ‖v_n‖_n ≤ √3 ‖v_n‖_{L^2(−1,1)}  ∀ v_n ∈ P_n

(see [CHQZ88], p. 286).
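The basic properties of the quadrature underlying (12.37) can be checked numerically. The following sketch (in Python, assuming NumPy) uses the explicit LGL weight formula w_j = 2/(n(n+1) P_n(x_j)²), which is taken from standard references rather than from the text: the weights sum to 2, the rule is exact up to degree 2n−1 (and not beyond), and ‖f‖_n ≤ √2 ‖f‖_∞ follows since the weights are positive and sum to 2.

```python
import numpy as np
from numpy.polynomial import legendre as leg

n = 6
Pn = leg.Legendre.basis(n)
x = np.sort(np.concatenate(([-1.0], Pn.deriv().roots().real, [1.0])))
w = 2.0 / (n * (n + 1) * Pn(x)**2)   # assumed LGL weight formula

# The weights sum to the length of (-1,1) ...
assert abs(w.sum() - 2.0) < 1e-12

# ... and the rule is exact up to degree 2n-1: try x^(2n-2)
assert abs((x**(2 * n - 2) * w).sum() - 2.0 / (2 * n - 1)) < 1e-12
# but not for degree 2n (the exactness degree 2n-1 is sharp)
assert abs((x**(2 * n) * w).sum() - 2.0 / (2 * n + 1)) > 1e-6

# ||f||_n <= sqrt(2) ||f||_inf, since the weights are positive and sum to 2
f = np.cos(x)
assert np.sqrt((f**2 * w).sum()) <= np.sqrt(2.0) * np.max(np.abs(f))
print("LGL quadrature checks passed")
```
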

Taking v_n = u_n in (12.38) and using the Poincaré inequality (12.16), we finally obtain

‖u_n'‖_{L^2(−1,1)} ≤ √6 C_P ‖f‖_∞,

which ensures that problem (12.36) has a unique solution which is stable.

As for consistency, we can notice that

τ_n(x_j) = (L_n u − f)(x_j) = (−(I_n u)'' − f)(x_j) = ((u − I_n u)'')(x_j),

and this right-hand side tends to zero as n → ∞ provided that u ∈ C^2([−1,1]).

Let us now establish a convergence result for the spectral collocation approximation of (12.1). In the following, C is a constant independent of n that can assume different values at different places.

Moreover, we denote by H^s(a,b), for s ≥ 1, the space of the functions f ∈ C^{s−1}(a,b) such that f^{(s−1)} is continuous and piecewise differentiable, so that f^{(s)} exists except at a finite number of points and belongs to L^2(a,b). The space H^s(a,b) is known as the Sobolev function space of order s and is endowed with the norm ‖·‖_{H^s(a,b)} defined in (10.35).

Theorem 12.2 Let f ∈ H^s(−1,1) for some s ≥ 1. Then

‖(u − u_n)'‖_{L^2(−1,1)} ≤ C n^{−s} ( ‖f‖_{H^s(−1,1)} + ‖u‖_{H^{s+1}(−1,1)} ).    (12.39)

Proof. Note that u_n satisfies

(u_n', v_n') = (f, v_n)_n  ∀ v_n ∈ P_n^0,

where (u, v) = ∫_{−1}^{1} u v dx is the scalar product of L^2(−1,1). Similarly, u satisfies

(u', v') = (f, v)  ∀ v ∈ C^1([−1,1]) such that v(−1) = v(1) = 0

(see (12.43) of Section 12.4). Then

((u − u_n)', v_n') = (f, v_n) − (f, v_n)_n =: E(f, v_n),  ∀ v_n ∈ P_n^0.

It follows that

((u − u_n)', (u − u_n)') = ((u − u_n)', (u − I_n u)') + ((u − u_n)', (I_n u − u_n)')
                         = ((u − u_n)', (u − I_n u)') + E(f, I_n u − u_n).

We recall the following result (see (10.36))

|E(f, v_n)| ≤ C n^{−s} ‖f‖_{H^s(−1,1)} ‖v_n‖_{L^2(−1,1)}.


Then

|E(f, I_n u − u_n)| ≤ C n^{−s} ‖f‖_{H^s(−1,1)} ( ‖I_n u − u‖_{L^2(−1,1)} + ‖u − u_n‖_{L^2(−1,1)} ).

We now recall Young's inequality (see Exercise 8)

ab ≤ μa^2 + b^2/(4μ),  ∀ a, b ∈ R, ∀ μ > 0.    (12.40)
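Young's inequality follows from expanding (√μ a − b/(2√μ))² ≥ 0; a quick numerical sanity check (in Python, assuming NumPy, over random samples and the values of μ used below in the proof):

```python
import numpy as np

# Young's inequality (12.40): a*b <= mu*a^2 + b^2/(4*mu),
# a consequence of (sqrt(mu)*a - b/(2*sqrt(mu)))^2 >= 0
rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = rng.normal(size=1000)
for mu in (0.25, 0.5, 1.0, 2.0):
    assert np.all(a * b <= mu * a**2 + b**2 / (4.0 * mu) + 1e-12)
print("Young's inequality verified on random samples")
```
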

Using this inequality we obtain

( (u − u_n)', (u − I_n u)' ) ≤ (1/4) ‖(u − u_n)'‖_{L^2(−1,1)}^2 + ‖(u − I_n u)'‖_{L^2(−1,1)}^2,

and also (using the Poincaré inequality (12.16))

C n^{−s} ‖f‖_{H^s(−1,1)} ‖u − u_n‖_{L^2(−1,1)} ≤ C C_P n^{−s} ‖f‖_{H^s(−1,1)} ‖(u − u_n)'‖_{L^2(−1,1)}
    ≤ (C C_P)^2 n^{−2s} ‖f‖_{H^s(−1,1)}^2 + (1/4) ‖(u − u_n)'‖_{L^2(−1,1)}^2.

Finally,

C n^{−s} ‖f‖_{H^s(−1,1)} ‖I_n u − u‖_{L^2(−1,1)} ≤ (1/2) C^2 n^{−2s} ‖f‖_{H^s(−1,1)}^2 + (1/2) ‖I_n u − u‖_{L^2(−1,1)}^2.

Using the interpolation error estimate (10.22) for u − I_n u we finally obtain the desired error estimate (12.39). ✷

12.4 The Galerkin Method

We now derive the Galerkin approximation of problem (12.1)-(12.2), which is the basic ingredient of the finite element method and the spectral method, widely employed in the numerical approximation of boundary value problems.

12.4.1 Integral Formulation of Boundary Value Problems

We consider a problem which is slightly more general than (12.1), namely

−(αu')'(x) + (βu')(x) + (γu)(x) = f(x),  0 < x < 1,    (12.41)

with u(0) = u(1) = 0, where α, β and γ are continuous functions on [0,1] with α(x) ≥ α_0 > 0 for any x ∈ [0,1]. Let us now multiply (12.41) by a function v ∈ C^1([0,1]), hereafter called a "test function", and integrate over the interval [0,1]:

∫_0^1 αu'v' dx + ∫_0^1 βu'v dx + ∫_0^1 γuv dx = ∫_0^1 fv dx + [αu'v]_0^1,


where we have used integration by parts on the first integral. If the function v is required to vanish at x = 0 and x = 1, we obtain

∫_0^1 αu'v' dx + ∫_0^1 βu'v dx + ∫_0^1 γuv dx = ∫_0^1 fv dx.

We will denote by V the test function space. This consists of all functions v that are continuous, vanish at x = 0 and x = 1, and whose first derivative is piecewise continuous, i.e., continuous everywhere except at a finite number of points in [0,1] where the left and right limits v'_− and v'_+ exist but do not necessarily coincide.
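The integration-by-parts identity above can be verified symbolically for concrete data. The following sketch (in Python, assuming SymPy; the polynomial choices of α, β, γ, u and v are assumptions made purely for illustration) builds f from (12.41) and checks that both sides of the integral identity agree when v vanishes at the endpoints.

```python
import sympy as sp

x = sp.symbols('x')
# Polynomial sample data (illustrative assumptions only)
alpha = 1 + x
beta  = x
gamma = sp.Integer(2)
u = x * (1 - x)          # satisfies u(0) = u(1) = 0
v = x**2 * (1 - x)       # test function with v(0) = v(1) = 0

# Right-hand side consistent with (12.41): f = -(alpha u')' + beta u' + gamma u
f = -sp.diff(alpha * sp.diff(u, x), x) + beta * sp.diff(u, x) + gamma * u

lhs = sp.integrate(alpha * sp.diff(u, x) * sp.diff(v, x), (x, 0, 1)) \
    + sp.integrate(beta * sp.diff(u, x) * v, (x, 0, 1)) \
    + sp.integrate(gamma * u * v, (x, 0, 1))
rhs = sp.integrate(f * v, (x, 0, 1))   # boundary term [alpha u' v]_0^1 vanishes

assert sp.simplify(lhs - rhs) == 0
print("weak form verified:", lhs, "=", rhs)
```

Since v vanishes at both endpoints, the boundary term [αu'v]_0^1 drops out, which is exactly the mechanism exploited in the weak formulation.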

V is actually a vector space, which is denoted by H^1_0(0,1). Precisely,

H^1_0(0,1) = { v ∈ L^2(0,1) : v' ∈ L^2(0,1), v(0) = v(1) = 0 },    (12.42)

where v' is the distributional derivative of v, whose definition is given in Section 12.4.2.

We have therefore shown that if a function u ∈ C^2([0,1]) satisfies (12.41), then u is also a solution of the following problem:

find u ∈ V :  a(u, v) = (f, v)  for all v ∈ V,    (12.43)

where now (f, v) = ∫_0^1 fv dx denotes the scalar product of L^2(0,1) and

a(u, v) = ∫_0^1 αu'v' dx + ∫_0^1 βu'v dx + ∫_0^1 γuv dx    (12.44)