FIGURE 10.4. Finite difference approximation of f'(x_i): backward (solid line), forward (dotted line) and centred (dashed line)

Obviously, instead of (10.58) we could employ a centred incremental ratio, obtaining the following approximation

u_i^{CD} = \frac{f(x_{i+1}) - f(x_{i-1})}{2h}, \qquad 1 \le i \le n-1.    (10.61)

Scheme (10.61) is a special instance of (10.57), obtained by setting m = 0, α_0 = 1, m' = 1, β_{-1} = 1/2, β_0 = 0, β_1 = -1/2.

The right side of (10.61) is called the centred finite difference and geometrically amounts to replacing f'(x_i) with the slope of the straight line passing through the points (x_{i-1}, f(x_{i-1})) and (x_{i+1}, f(x_{i+1})) (see Figure 10.4). Resorting again to Taylor's series, we get

f'(x_i) - u_i^{CD} = -\frac{h^2}{6} f'''(\xi_i).    (10.62)

Formula (10.61) thus provides a second-order approximation to f'(x_i) with respect to h.
Finally, with a similar procedure, we can derive a backward finite difference scheme, where

u_i^{BD} = \frac{f(x_i) - f(x_{i-1})}{h}, \qquad 1 \le i \le n,    (10.63)

which is affected by the following error

f'(x_i) - u_i^{BD} = \frac{h}{2} f''(\xi_i).    (10.64)
The values of the parameters in (10.57) are m = 0, α_0 = 1, m' = 1 and β_{-1} = 0, β_0 = 1, β_1 = -1.
Higher-order schemes, as well as finite difference approximations of higher-order derivatives of f, can be constructed using Taylor's expansions of higher order. A remarkable example is the approximation of f''; if f ∈ C^4([a, b]) we easily get

f''(x_i) = \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{h^2} - \frac{h^2}{24}\left[ f^{(4)}(x_i + \theta_i h) + f^{(4)}(x_i - \omega_i h) \right], \qquad 0 < \theta_i, \omega_i < 1.
The following centred finite difference scheme can thus be derived

u_i = \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{h^2}, \qquad 1 \le i \le n-1,    (10.65)

which is affected by the error

f''(x_i) - u_i = -\frac{h^2}{24}\left[ f^{(4)}(x_i + \theta_i h) + f^{(4)}(x_i - \omega_i h) \right].    (10.66)

Formula (10.65) provides a second-order approximation to f''(x_i) with respect to h.
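The orders stated above are easy to check numerically. The following lines are a minimal MATLAB sketch (not part of the original text): they evaluate the schemes (10.61), (10.63) and (10.65) for f(x) = exp(x) on [0, 1] and print the maximum nodal errors on successively halved grids, so that the error-reduction factors (about 4, 2 and 4, respectively) reveal the orders two, one and two.

% convergence check for (10.61), (10.63) and (10.65); a sketch, not Program 91
f = @(x) exp(x); df = @(x) exp(x); d2f = @(x) exp(x);
for n = [20 40 80 160]
  h = 1/n; x = (0:n)'*h;     % nodes x_0,...,x_n on [0,1]
  i = 2:n;                   % interior nodes x_1,...,x_{n-1} (MATLAB indices)
  ucd = (f(x(i+1)) - f(x(i-1)))/(2*h);            % centred difference (10.61)
  ubd = (f(x(i)) - f(x(i-1)))/h;                  % backward difference (10.63)
  u2  = (f(x(i+1)) - 2*f(x(i)) + f(x(i-1)))/h^2;  % second derivative (10.65)
  fprintf('n=%4d  CD %.2e  BD %.2e  D2 %.2e\n', n, ...
          max(abs(df(x(i))-ucd)), max(abs(df(x(i))-ubd)), max(abs(d2f(x(i))-u2)));
end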


10.10.2 Compact Finite Differences
More accurate approximations are provided by using the following formula (which we call compact differences)

\alpha u_{i-1} + u_i + \alpha u_{i+1} = \frac{\beta}{2h}(f_{i+1} - f_{i-1}) + \frac{\gamma}{4h}(f_{i+2} - f_{i-2})    (10.67)

for i = 2, ..., n-1. We have set, for brevity, f_i = f(x_i).
The coefficients α, β and γ are to be determined in such a way that the relations (10.67) yield values u_i that approximate f'(x_i) up to the highest order with respect to h. For this purpose, the coefficients are selected in such a way as to minimize the consistency error (see Section 2.2)

\sigma_i(h) = \alpha f^{(1)}_{i-1} + f^{(1)}_i + \alpha f^{(1)}_{i+1} - \left[ \frac{\beta}{2h}(f_{i+1} - f_{i-1}) + \frac{\gamma}{4h}(f_{i+2} - f_{i-2}) \right]    (10.68)

which comes from “forcing” f to satisfy the numerical scheme (10.67). For brevity, we set f^{(k)}_i = f^{(k)}(x_i), k = 1, 2, ...

Precisely, assuming that f ∈ C^5([a, b]) and expanding it in a Taylor's series around x_i, we find

f_{i\pm 1} = f_i \pm h f^{(1)}_i + \frac{h^2}{2} f^{(2)}_i \pm \frac{h^3}{6} f^{(3)}_i + \frac{h^4}{24} f^{(4)}_i \pm \frac{h^5}{120} f^{(5)}_i + O(h^6),

f^{(1)}_{i\pm 1} = f^{(1)}_i \pm h f^{(2)}_i + \frac{h^2}{2} f^{(3)}_i \pm \frac{h^3}{6} f^{(4)}_i + \frac{h^4}{24} f^{(5)}_i + O(h^5).

Substituting into (10.68) we get

\sigma_i(h) = (2\alpha + 1) f^{(1)}_i + \alpha h^2 f^{(3)}_i + \alpha \frac{h^4}{12} f^{(5)}_i - (\beta + \gamma) f^{(1)}_i - \left( \frac{\beta}{6} + \frac{2\gamma}{3} \right) h^2 f^{(3)}_i - \frac{h^4}{60}\left( \frac{\beta}{2} + 8\gamma \right) f^{(5)}_i + O(h^6).
Second-order methods are obtained by equating to zero the coefficient of f^{(1)}_i, i.e., by requiring 2α + 1 = β + γ, while schemes of order 4 are obtained by equating to zero also the coefficient of f^{(3)}_i, yielding 6α = β + 4γ; finally, methods of order 6 are obtained by setting to zero also the coefficient of f^{(5)}_i, i.e., 10α = β + 16γ.
The linear system formed by these last three equations has a nonsingular
matrix. Thus, there exists a unique scheme of order 6 that corresponds to
the following choice of the parameters

α = 1/3, β = 14/9, γ = 1/9,    (10.69)

while there exist infinitely many methods of second and fourth order. Among these, a popular scheme has coefficients α = 1/4, β = 3/2 and γ = 0. Schemes of higher order can be generated at the expense of further expanding the computational stencil.
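As a quick check (not part of the original text), the three order conditions above can be solved as a 3×3 linear system; the following MATLAB lines recover the choice (10.69).

% order conditions 2*alpha+1 = beta+gamma, 6*alpha = beta+4*gamma,
% 10*alpha = beta+16*gamma, written as M*p = rhs with p = [alpha; beta; gamma]
M   = [2 -1 -1; 6 -1 -4; 10 -1 -16];
rhs = [-1; 0; 0];
p   = M \ rhs;          % expected: [1/3; 14/9; 1/9]
disp(p')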
Traditional finite difference schemes correspond to setting α = 0 and allow for computing explicitly the approximant of the first derivative of f at a node, in contrast with compact schemes, which require in any case the solution of a linear system of the form Au = Bf (where the notation has the obvious meaning).
To make the system solvable, it is necessary to provide values for the variables u_i with i < 0 and i > n. A particularly favorable instance is that where f is a periodic function of period b - a, in which case u_{i+n} = u_i for any i ∈ Z. In the nonperiodic case, system (10.67) must be supplemented with suitable relations at the nodes near the boundary of the approximation interval. For example, the first derivative at x_0 can be computed using the relation

u_0 + \alpha u_1 = \frac{1}{h}(A f_0 + B f_1 + C f_2 + D f_3),

and requiring that

A = -\frac{3 + \alpha + 2D}{2}, \qquad B = 2 + 3D, \qquad C = -\frac{1 - \alpha + 6D}{2},

in order for the scheme to be at least second-order accurate (see [Lel92] for the relations to enforce in the case of higher-order methods). Finally, we notice that, for any given order of accuracy, compact schemes have a stencil smaller than that of standard finite differences.
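Under the reading that the boundary relation involves f_0, ..., f_3, the second-order accuracy of the coefficients above can be verified directly; the consistency conditions encoded below are a sketch of that check, not part of the original text.

% with nodes x_0,...,x_3 the conditions for (at least) second order read
% A+B+C+D = 0,  B+2*C+3*D = 1+alpha,  (B+4*C+9*D)/2 = alpha
alpha = 1/4; d = 0;                      % sample alpha; d plays the role of D
A = -(3 + alpha + 2*d)/2;  B = 2 + 3*d;  C = -(1 - alpha + 6*d)/2;
residuals = [A+B+C+d, B+2*C+3*d-(1+alpha), (B+4*C+9*d)/2-alpha]   % all zero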

Program 91 provides an implementation of the compact finite difference schemes (10.67) for the approximation of the derivative of a given function f which is assumed to be periodic on the interval [a, b). The input parameters alpha, beta and gamma contain the coefficients of the scheme, a and b are the endpoints of the interval, f is a string containing the expression of f and n denotes the number of subintervals in which [a, b] is partitioned. The output vectors u and x contain the computed approximate values u_i and the node coordinates. Notice that setting alpha=gamma=0 and beta=1 we recover the centred finite difference approximation (10.61).

Program 91 - compdiff : Compact difference schemes
function [u, x] = compdiff(alpha,beta,gamma,a,b,n,f)
h=(b-a)/(n+1); x=[a:h:b]; fx = eval(f);
A = eye(n+2)+alpha*diag(ones(n+1,1),1)+alpha*diag(ones(n+1,1),-1);
rhs = 0.5*beta/h*(fx(4:n+1)-fx(2:n-1))+0.25*gamma/h*(fx(5:n+2)-fx(1:n-2));
if gamma == 0
rhs=[0.5*beta/h*(fx(3)-fx(1)), rhs, 0.5*beta/h*(fx(n+2)-fx(n))];
A(1,1:n+2) = zeros(1,n+2);
A(1,1) = 1; A(1,2)=alpha; A(1,n+1)=alpha;
rhs=[0.5*beta/h*(fx(2)-fx(n+1)), rhs];
A(n+2,1:n+2) = zeros(1,n+2);
A(n+2,n+2) = 1; A(n+2,n+1)=alpha; A(n+2,2)=alpha;
rhs=[rhs, 0.5*beta/h*(fx(2)-fx(n+1))];
else
rhs=[0.5*beta/h*(fx(3)-fx(1))+0.25*gamma/h*(fx(4)-fx(n+1)), rhs];
A(1,1:n+2) = zeros(1,n+2);
A(1,1) = 1; A(1,2)=alpha; A(1,n+1)=alpha;
rhs=[0.5*beta/h*(fx(2)-fx(n+1))+0.25*gamma/h*(fx(3)-fx(n)), rhs];
rhs=[rhs,0.5*beta/h*(fx(n+2)-fx(n))+0.25*gamma/h*(fx(2)-fx(n-1))];
A(n+2,1:n+2) = zeros(1,n+2);
A(n+2,n+2) = 1; A(n+2,n+1)=alpha; A(n+2,2)=alpha;
rhs=[rhs,0.5*beta/h*(fx(2)-fx(n+1))+0.25*gamma/h*(fx(3)-fx(n))];
end
u = A \ rhs';
return

Example 10.4 Let us consider the approximate evaluation of the derivative of the function f(x) = sin(x) on the interval [0, 2π]. Figure 10.5 shows the logarithm of the maximum nodal errors for the second-order centred finite difference scheme (10.61) and for the fourth and sixth-order compact difference schemes introduced above, as a function of p = log(n).




FIGURE 10.5. Maximum nodal errors for the second-order centred finite difference scheme (solid line) and for the fourth (dashed line) and sixth-order (dotted line) compact difference schemes as functions of p = log(n)
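The behaviour in Figure 10.5 can be reproduced qualitatively with Program 91; the driver loop below is a sketch added here, not part of the original text.

% errors of the fourth-order (1/4,3/2,0) and sixth-order (1/3,14/9,1/9)
% compact schemes for f(x)=sin(x), whose exact derivative is cos(x)
for n = [4 8 16 32 64]
  [u4,x] = compdiff(1/4,3/2,0,0,2*pi,n,'sin(x)');
  [u6,x] = compdiff(1/3,14/9,1/9,0,2*pi,n,'sin(x)');
  fprintf('n=%3d  err4=%.3e  err6=%.3e\n', n, ...
          max(abs(u4'-cos(x))), max(abs(u6'-cos(x))));
end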


Another nice feature of compact schemes is that they maximize the range of well-resolved waves, as we are going to explain. Assume that f is a real and periodic function on [0, 2π], that is, f(0) = f(2π). Using the same notation as in Section 10.9, we let N be an even positive integer and set h = 2π/N. Then replace f by its truncated Fourier series

f_N(x) = \sum_{k=-N/2}^{N/2-1} f_k e^{ikx}.

Since the function f is real-valued, f_k = \bar{f}_{-k} for k = 1, ..., N/2 and f_0 = \bar{f}_0. For the sake of convenience, introduce the normalized wave number w_k = kh = 2πk/N and perform a scaling of the coordinates setting s = x/h. As a consequence, we get

f_N(x(s)) = \sum_{k=-N/2}^{N/2-1} f_k e^{iksh} = \sum_{k=-N/2}^{N/2-1} f_k e^{i w_k s}.    (10.70)

Taking the first derivative of (10.70) with respect to s yields a function whose Fourier coefficients are f'_k = i w_k f_k. We can thus estimate the approximation error on (f_N)' by comparing the exact coefficients f'_k with the corresponding ones obtained by an approximate derivative, in particular, by comparing the exact wave number w_k with the approximate one, say w_{k,app}.

Let us neglect the subscript k and perform the comparison over the whole interval [0, π) where w_k is varying. It is clear that methods based on the Fourier expansion have w_app = w if w ≠ π (w_app = 0 if w = π). The family of schemes (10.67) is instead characterized by the wave number

w_{app}(z) = \frac{a \sin(z) + (b/2)\sin(2z) + (c/3)\sin(3z)}{1 + 2\alpha\cos(z) + 2\beta\cos(2z)}, \qquad z ∈ [0, π)
(see [Lel92]). Figure 10.6 displays a comparison among the wave numbers of several schemes, of compact and non-compact type. The range of values for which the wave number computed by the numerical scheme adequately approximates the exact wave number is the set of well-resolved waves. As a consequence, if w_min is the smallest well-resolved wave, the difference 1 - w_min/π represents the fraction of waves that are unresolved by the numerical scheme. As can be seen in Figure 10.6, the standard finite difference schemes approximate correctly the exact wave number only for small wave numbers.



FIGURE 10.6. Computed wave numbers for centred finite differences (10.61) (a) and for compact schemes of fourth (b), sixth (c) and tenth (d) order, compared with the exact wave number (the straight line). On the x axis the normalized coordinate s is represented
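A comparison of this kind can be drawn with a few MATLAB lines (a sketch, not from the text). For the tridiagonal family (10.67), in the notation of (10.67) the wave number above reduces to w_app(z) = (β sin(z) + (γ/2) sin(2z))/(1 + 2α cos(z)), while the standard centred difference (10.61) gives w_app(z) = sin(z).

% modified wave numbers: exact, centred difference (10.61), and the
% fourth- and sixth-order compact schemes of the family (10.67)
z = linspace(0, pi, 200);
wapp = @(al,be,ga) (be*sin(z) + 0.5*ga*sin(2*z)) ./ (1 + 2*al*cos(z));
plot(z, z, z, sin(z), z, wapp(1/4,3/2,0), z, wapp(1/3,14/9,1/9));
legend('exact', 'centred FD', 'compact, order 4', 'compact, order 6');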



10.10.3 Pseudo-Spectral Derivative
An alternative way for numerical differentiation consists of approximating the first derivative of a function f with the exact first derivative of the polynomial Π_n f interpolating f at the nodes {x_0, ..., x_n}. Exactly as happens for Lagrange interpolation, using equally spaced nodes does not yield stable approximations to the first derivative of f for n large. For this reason, we limit ourselves to considering the case where the nodes are nonuniformly distributed according to the Gauss-Lobatto-Chebyshev formula.

For simplicity, assume that I = [a, b] = [-1, 1] and, for n ≥ 1, take in I the Gauss-Lobatto-Chebyshev nodes as in (10.21). Then, consider the Lagrange interpolating polynomial Π^{GL}_{n,w} f introduced in Section 10.3. We define the pseudo-spectral derivative of f ∈ C^0(I) to be the derivative of the polynomial Π^{GL}_{n,w} f

D_n f = (Π^{GL}_{n,w} f)' ∈ P_{n-1}(I).

The error made in replacing f' with D_n f is of exponential type, that is, it only depends on the smoothness of the function f. More precisely, there exists a constant C > 0 independent of n such that

\| f' - D_n f \|_w \le C n^{1-m} \| f \|_{m,w},    (10.71)

for any m ≥ 2 such that the norm \| f \|_{m,w}, introduced in (10.23), is finite.
Recalling (10.19) and using (10.27) yields

(D_n f)(\bar{x}_i) = \sum_{j=0}^{n} f(\bar{x}_j)\, \bar{l}\,'_j(\bar{x}_i), \qquad i = 0, ..., n,    (10.72)

so that the pseudo-spectral derivative at the interpolation nodes can be computed knowing only the nodal values of f and of \bar{l}\,'_j. These values can be computed once for all and stored in a matrix D ∈ R^{(n+1)×(n+1)}, D_{ij} = \bar{l}\,'_j(\bar{x}_i) for i, j = 0, ..., n, called a pseudo-spectral differentiation matrix. Relation (10.72) can thus be cast in matrix form as f' = Df, letting f = [f(\bar{x}_i)] and f' = [(D_n f)(\bar{x}_i)] for i = 0, ..., n.
The entries of D have the following explicit form (see [CHQZ88], p. 69)

D_{lj} = \begin{cases}
\dfrac{d_l}{d_j}\, \dfrac{(-1)^{l+j}}{\bar{x}_l - \bar{x}_j}, & l \ne j, \\[1ex]
-\dfrac{\bar{x}_j}{2(1 - \bar{x}_j^2)}, & 1 \le l = j \le n-1, \\[1ex]
-\dfrac{2n^2 + 1}{6}, & l = j = 0, \\[1ex]
\dfrac{2n^2 + 1}{6}, & l = j = n,
\end{cases}    (10.73)
where the coefficients d_l have been defined in Section 10.3 (see also Example 5.13 concerning the approximation of the multiple eigenvalue λ = 0 of D).
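The matrix (10.73) is easy to assemble. The lines below are a minimal MATLAB sketch, not taken from the text; they assume that the Gauss-Lobatto-Chebyshev nodes of (10.21) are ordered increasingly as x_j = -cos(πj/n) and that d_0 = d_n = 2, d_l = 1 otherwise (both assumptions, consistent with the sign pattern in (10.73) but not restated in this excerpt).

% assemble the pseudo-spectral differentiation matrix D of (10.73)
n  = 16;
xb = -cos(pi*(0:n)'/n);                 % assumed node ordering, from -1 to 1
d  = ones(n+1,1); d([1 end]) = 2;       % assumed coefficients d_l
D  = zeros(n+1);
for l = 0:n
  for j = 0:n
    if l ~= j
      D(l+1,j+1) = (d(l+1)/d(j+1)) * (-1)^(l+j) / (xb(l+1) - xb(j+1));
    elseif l >= 1 && l <= n-1
      D(l+1,l+1) = -xb(l+1) / (2*(1 - xb(l+1)^2));
    end
  end
end
D(1,1)     = -(2*n^2 + 1)/6;            % entry for l = j = 0
D(n+1,n+1) =  (2*n^2 + 1)/6;            % entry for l = j = n
% quick test: for f(x) = exp(x) the nodal error of D*f decays spectrally
f = exp(xb); max(abs(D*f - exp(xb)))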
To compute the pseudo-spectral derivative of a function f over the generic
interval [a, b], we only have to resort to the change of variables considered
