Let us now investigate how the interpolating polynomial is affected by a perturbation f̃(xi) of the data f(xi) relative to the nodes xi, with i = 0, ..., n, in an interval
[a, b]. The perturbation may be due, for instance, to the effect of rounding
errors, or may be caused by an error in the experimental measure of the
data.
Denoting by Πn f̃ the interpolating polynomial on the set of values f̃(xi),
we have

‖Πn f − Πn f̃‖∞ = max_{a≤x≤b} | Σ_{j=0}^{n} (f(xj) − f̃(xj)) lj(x) | ≤ Λn(X) max_{i=0,...,n} |f(xi) − f̃(xi)|.

As a consequence, small changes in the data give rise to small changes
in the interpolating polynomial only if the Lebesgue constant is small.
This constant plays the role of the condition number for the interpolation
problem.
As previously noticed, Λn grows as n → ∞ and, in particular, in the case
of Lagrange interpolation on equally spaced nodes, it can be proved that
(see [Nat65])

Λn(X) ≃ 2^{n+1} / (e n log n),

where e ≃ 2.7183 is Napier's number. This shows that, for n large,
this form of interpolation can become unstable. Notice also that so far we
have completely neglected the errors generated by the interpolation process
in constructing Πn f. However, it can be shown that the effect of such errors
is generally negligible (see [Atk89]).
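To make this concrete, the Lebesgue constant can be estimated numerically by sampling the function Σ_j |lj(x)| on a fine grid. The following MATLAB sketch is not a program from the book; the value n = 21 and the grid sizes are chosen only for illustration, and the result is compared with the asymptotic expression above.

n = 21;
x = linspace(-1,1,n+1);               % equally spaced interpolation nodes
z = linspace(-1,1,5001);              % fine sampling grid
lambda = zeros(size(z));
for j = 1:n+1
  lj = ones(size(z));
  for k = [1:j-1, j+1:n+1]
    lj = lj .* (z - x(k)) / (x(j) - x(k));   % characteristic polynomial l_j
  end
  lambda = lambda + abs(lj);
end
Lambda = max(lambda);                 % approximately 24000 for n = 21 (see Example 8.2)
asympt = 2^(n+1)/(exp(1)*n*log(n));   % asymptotic estimate quoted above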

FIGURE 8.3. Instability of Lagrange interpolation. In solid line Π21 f, on unperturbed data, in dashed line Π21 f̃, on perturbed data, for Example 8.2

Example 8.2 On the interval [−1, 1] let us interpolate the function f(x) =
sin(2πx) at 22 equally spaced nodes xi. Next, we generate a perturbed set of values
f̃(xi) of the function evaluations f(xi) = sin(2πxi), with max_{i=0,...,21} |f̃(xi) −
f(xi)| ≃ 9.5 · 10⁻⁴. In Figure 8.3 we compare the polynomials Π21 f and Π21 f̃:
notice how the difference between the two interpolating polynomials, around the
end points of the interpolation interval, is considerably larger than the imposed
perturbation (actually, ‖Π21 f − Π21 f̃‖∞ ≃ 2.1635 and Λ21 ≃ 24000). •
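This experiment can be reproduced, for instance, with Program 65 along the following lines. This is only a sketch: the random perturbation below is one possible choice and is not the perturbation actually used for Figure 8.3.

f  = @(t) sin(2*pi*t);
x  = linspace(-1,1,22);               % 22 equally spaced nodes
z  = linspace(-1,1,1001);             % evaluation grid
y  = f(x);
yt = y + 9.5e-4*(2*rand(size(y))-1);  % perturbed data, |yt - y| <= 9.5e-4
p  = interpol(x,y,z);                 % Pi_21 f
pt = interpol(x,yt,z);                % Pi_21 f on the perturbed data
diffmax = max(abs(p-pt));             % much larger than 9.5e-4 near the end points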



8.2 Newton Form of the Interpolating Polynomial
The Lagrange form (8.4) of the interpolating polynomial is not the most
convenient from a practical standpoint. In this section we introduce an
alternative form characterized by a cheaper computational cost. Our goal
is the following:
given n + 1 pairs {xi, yi}, i = 0, ..., n, we want to represent Πn (with
Πn(xi) = yi for i = 0, ..., n) as the sum of Πn−1 (with Πn−1(xi) = yi for
i = 0, ..., n − 1) and a polynomial of degree n which depends on the nodes
xi and on only one unknown coefficient. We thus set

Πn(x) = Πn−1(x) + qn(x),   (8.13)

where qn ∈ Pn. Since qn(xi) = Πn(xi) − Πn−1(xi) = 0 for i = 0, ..., n − 1,
it must necessarily be that

qn(x) = an(x − x0) · · · (x − xn−1) = an ωn(x).

To determine the unknown coefficient an, suppose that yi = f(xi), i =
0, ..., n, where f is a suitable function, not necessarily known in explicit
form. Since Πn f(xn) = f(xn), from (8.13) it follows that

an = (f(xn) − Πn−1 f(xn)) / ωn(xn).   (8.14)

The coefficient an is called the n-th Newton divided difference and is generally
denoted by

an = f[x0, x1, ..., xn]   (8.15)

for n ≥ 1. As a consequence, (8.13) becomes

Πn f(x) = Πn−1 f(x) + ωn(x) f[x0, x1, ..., xn].   (8.16)

If we let y0 = f(x0) = f[x0] and ω0 = 1, by recursion on n we can obtain
from (8.16) the following formula

Πn f(x) = Σ_{k=0}^{n} ωk(x) f[x0, ..., xk].   (8.17)

Uniqueness of the interpolating polynomial ensures that the above expression
yields the same interpolating polynomial generated by the Lagrange
form. Form (8.17) is commonly known as the Newton divided difference
formula for the interpolating polynomial.

Program 65 provides an implementation of Newton's formula. The input
vectors x and y contain the interpolation nodes and the corresponding
functional evaluations of f, respectively, while vector z contains the abscissae
where the polynomial Πn f is to be evaluated. This polynomial is stored in
the output vector f.

Program 65 - interpol : Lagrange polynomial using Newton's formula
function [f] = interpol (x,y,z)
% Evaluates at the abscissae z the polynomial interpolating the data y
% at the nodes x, in Newton form (one row of y per set of data).
[m n] = size(y);
for j = 1:m
a (:,1) = y (j,:)';
for i = 2:n
% column i: divided differences f[x(1),...,x(i-1),x(k)], k = i,...,n;
% the diagonal entries a(i,i) are the Newton coefficients
a (i:n,i) = ( a(i:n,i-1)-a(i-1,i-1) )./(x(i:n)-x(i-1))';
end
% nested (Horner-type) evaluation of the Newton form
f(j,:) = a(n,n).*(z-x(n-1)) + a(n-1,n-1);
for i = 2:n-1
f(j,:) = f(j,:).*(z-x(n-i))+a(n-i,n-i);
end
end
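A hypothetical call to interpol, using the function of Example 8.3 below on arbitrarily chosen equally spaced nodes, might read as follows.

x = linspace(0,2,7);                  % interpolation nodes x_0,...,x_6
y = 1 + sin(3*x);                     % data f(x_i)
z = linspace(0,2,201);                % abscissae where Pi_6 f is evaluated
p = interpol(x,y,z);                  % values of the interpolating polynomial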

8.2.1 Some Properties of Newton Divided Differences
The n-th divided difference f[x0, ..., xn] = an can be further characterized
by noticing that it is the coefficient of x^n in Πn f. Isolating such a coefficient
from (8.5) and equating it with the corresponding coefficient in the Newton
formula (8.17), we end up with the following explicit representation

f[x0, ..., xn] = Σ_{i=0}^{n} f(xi) / ω′_{n+1}(xi).   (8.18)

This formula has remarkable consequences:

1. the value attained by the divided difference is invariant with respect
to permutations of the indexes of the nodes. This property can be
profitably employed when stability problems suggest exchanging the
indexes (for example, if x is the point where the polynomial must be
computed, it is convenient to introduce a permutation of the indexes
such that |x − xk| ≤ |x − xk−1| with k = 1, ..., n);

2. if f = αg + βh for some α, β ∈ R, then

f[x0, ..., xn] = α g[x0, ..., xn] + β h[x0, ..., xn];

3. if f = gh, the following formula (called the Leibniz formula) holds
(see [Die93]; a numerical check is sketched after this list)

f[x0, ..., xn] = Σ_{j=0}^{n} g[x0, ..., xj] h[xj, ..., xn];

4. an algebraic manipulation of (8.18) (see Exercise 7) yields the following
recursive formula for computing divided differences

f[x0, ..., xn] = (f[x1, ..., xn] − f[x0, ..., xn−1]) / (xn − x0),   n ≥ 1.   (8.19)
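As an illustration of property 3, the Leibniz formula can be checked numerically with Program 66 introduced below. The following sketch is not from the book: the functions g and h and the nodes are chosen arbitrarily, and the indexing relies on the storage convention of dividif, whose entry d(i,j) holds the divided difference of order j − 1 ending at the i-th node.

g = @(t) exp(t); h = @(t) cos(t); f = @(t) g(t).*h(t);
x = linspace(0,1,5); n = length(x)-1;       % nodes x_0,...,x_n with n = 4
dg = dividif(x,g(x)); dh = dividif(x,h(x)); df = dividif(x,f(x));
s = 0;
for j = 0:n
  s = s + dg(j+1,j+1)*dh(n+1,n+1-j);        % g[x_0,...,x_j]*h[x_j,...,x_n]
end
% s and df(n+1,n+1) agree up to rounding: both equal f[x_0,...,x_n]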

Program 66 implements the recursive formula (8.19). The evaluations of f
at the interpolation nodes x are stored in vector y, while the output matrix
d (lower triangular) contains the divided differences, which are stored in
the following form

x0   f[x0]
x1   f[x1]   f[x0,x1]
x2   f[x2]   f[x1,x2]      f[x0,x1,x2]
...  ...     ...           ...               ...
xn   f[xn]   f[xn−1,xn]    f[xn−2,xn−1,xn]   ...   f[x0,...,xn]

The coefficients involved in the Newton formula are the diagonal entries of
the matrix.

Program 66 - dividif : Newton divided differences

function [d]=dividif(x,y)
% Returns the lower triangular matrix d of Newton divided differences
% of the data y at the nodes x; d(i,j) = f[x(i-j+1),...,x(i)].
[n,m]=size(y);
if n == 1, n = m; end
n = n-1; d = zeros (n+1,n+1); d (:,1) = y';
for j = 2:n+1
for i = j:n+1
% recursive formula (8.19)
d (i,j) = ( d (i-1,j-1)-d (i,j-1))/(x (i-j+1)-x (i));
end
end
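A hypothetical call on the data of Example 8.3 below (before the node x = 0.2 is added) reads:

x = [0 0.4 0.8 1.2 1.6 2.0];          % interpolation nodes
y = 1 + sin(3*x);                     % corresponding evaluations of f
d = dividif(x,y);                     % lower triangular divided difference table
coef = diag(d);                       % Newton coefficients f[x_0,...,x_k]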


Using (8.19), n(n + 1) sums and n(n + 1)/2 divisions are needed to generate
the whole matrix. If a new evaluation of f were available at a new
node xn+1, only the calculation of a new row of the matrix would be required
(f[xn, xn+1], ..., f[x0, x1, ..., xn+1]). Thus, in order to construct
Πn+1 f from Πn f, it suffices to add to Πn f the term an+1 ωn+1(x), with a
computational cost of (n + 1) divisions and 2(n + 1) sums. For the sake of
notational simplicity, we write below Dr fi = f[xi, xi+1, ..., xr].
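A possible way to carry out this update in MATLAB, given the table d produced by Program 66 for the nodes stored in x, is sketched below; the names xnew, ynew and row are purely illustrative, and 1-based indexing of the nodes is assumed (x(1) = x0).

np1 = length(x);                      % n+1 nodes currently in the table
row = zeros(1,np1+1);
row(1) = ynew;                        % f[x_{n+1}]
for j = 2:np1+1
  % divided difference of order j-1 ending at the new node, from (8.19)
  row(j) = (row(j-1) - d(np1,j-1))/(xnew - x(np1-j+2));
end
% row(np1+1) = f[x_0,...,x_{n+1}] is the only new Newton coefficient:
% Pi_{n+1} f = Pi_n f + row(np1+1)*omega_{n+1}(x)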


Example 8.3 In Table 8.1 we show the divided differences on the interval (0, 2)
for the function f(x) = 1 + sin(3x). The values of f and the corresponding divided
differences have been computed using 16 significant figures, although only the first
5 figures are reported. If the value of f were available at the node x = 0.2, updating
the divided difference table would only require computing the entries denoted
in italics in Table 8.1.



xi     f(xi)     f[xi, xi−1]   D2 fi    D3 fi    D4 fi    D5 fi    D6 fi
0      1.0000
0.2    1.5646     2.82
0.4    1.9320     1.83         -2.46
0.8    1.6755    -0.64         -4.13    -2.08
1.2    0.5575    -2.79         -2.69     1.43     2.93
1.6    0.0038    -1.38          1.76     3.71     1.62    -0.81
2.0    0.7206     1.79          3.97     1.83    -1.17    -1.55    -0.36
TABLE 8.1. Divided differences for the function f(x) = 1 + sin(3x) in the case
in which the evaluation of f at x = 0.2 is also available. The newly computed
values are denoted by italics

Notice that f[x0, ..., xn] = 0 for any f ∈ Pn−1. This property, however,
is not always verified numerically, since the computation of divided
differences might be highly affected by rounding errors.

Example 8.4 Consider again the divided differences for the function f(x) =
1 + sin(3x), this time on the interval (0, 0.0002). The function behaves like 1 + 3x in a
sufficiently small neighbourhood of 0, so that we expect to find smaller numbers as
the order of the divided differences increases. However, the results obtained running
Program 66, and shown in Table 8.2 in exponential notation up to the first 4
significant figures (although 16 digits have been employed in the calculations),
exhibit a substantially different pattern. The small rounding errors introduced in
the computation of the low-order divided differences have dramatically propagated
to the higher-order divided differences.


xi        f(xi)     f[xi, xi−1]   D2 fi       D3 fi    D4 fi      D5 fi
0         1.0000
4.0e-5    1.0001    3.000
8.0e-5    1.0002    3.000         -5.39e-4
1.2e-4    1.0004    3.000         -1.08e-3    -4.50
1.6e-4    1.0005    3.000         -1.62e-3    -4.49     1.80e+1
2.0e-4    1.0006    3.000         -2.15e-3    -4.49    -7.23      -1.2e+5

TABLE 8.2. Divided differences for the function f(x) = 1 + sin(3x) on the interval
(0, 0.0002). Notice the completely wrong value in the last column (it should be
approximately equal to 0), due to the propagation of rounding errors throughout
the algorithm




8.2.2 The Interpolation Error Using Divided Differences
Consider the nodes x0, ..., xn and let Πn f be the interpolating polynomial
of f on such nodes. Now let x be a node distinct from the previous ones;
letting xn+1 = x, we denote by Πn+1 f the interpolating polynomial of f
at the nodes xk, k = 0, ..., n + 1. Using the Newton divided differences
formula, we get

Πn+1 f(t) = Πn f(t) + (t − x0) · · · (t − xn) f[x0, ..., xn, t].

Since Πn+1 f(x) = f(x), we obtain the following formula for the interpolation
error at t = x

En(x) = f(x) − Πn f(x) = Πn+1 f(x) − Πn f(x)
      = (x − x0) · · · (x − xn) f[x0, ..., xn, x]   (8.20)
      = ωn+1(x) f[x0, ..., xn, x].

Assuming f ∈ C^{n+1}(Ix) and comparing (8.20) with (8.7) yields

f[x0, ..., xn, x] = f^(n+1)(ξ) / (n + 1)!   (8.21)

for a suitable ξ ∈ Ix. Since (8.21) resembles the remainder of the Taylor
series expansion of f, the Newton formula (8.17) for the interpolating
polynomial is often regarded as being a truncated expansion around x0,
provided that |xn − x0| is not too big.
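Relations (8.20)-(8.21) can be checked numerically by combining Programs 65 and 66, as in the following sketch; the function, the nodes and the evaluation point are chosen only for illustration.

f  = @(t) 1 + sin(3*t);
x  = linspace(0,2,6);                 % nodes x_0,...,x_5 (n = 5)
xb = 0.3;                             % evaluation point x, not a node
err  = f(xb) - interpol(x,f(x),xb);   % left-hand side of (8.20)
d    = dividif([x xb],f([x xb]));     % table with x appended as last node
werr = prod(xb - x)*d(end,end);       % omega_{n+1}(x)*f[x_0,...,x_n,x]
% err and werr coincide up to rounding errors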


8.3 Piecewise Lagrange Interpolation
In Section 8.1.1 we have outlined the fact that, for equally spaced interpolating
nodes, uniform convergence of Πn f to f is not guaranteed as
n → ∞. On the other hand, using equally spaced nodes is clearly computationally
convenient and, moreover, Lagrange interpolation of low degree is
sufficiently accurate, provided sufficiently small interpolation intervals are
considered.
Therefore, it is natural to introduce a partition Th of [a, b] into K subintervals
Ij = [xj, xj+1] of length hj, with h = max_{0≤j≤K−1} hj, such that
[a, b] = ∪_{j=0}^{K−1} Ij, and then to employ Lagrange interpolation on each Ij
using n + 1 equally spaced nodes x_j^{(i)}, 0 ≤ i ≤ n, with a small n.
For k ≥ 1, we introduce on Th the piecewise polynomial space

X_h^k = { v ∈ C⁰([a, b]) : v|Ij ∈ Pk(Ij) ∀ Ij ∈ Th }   (8.22)

which is the space of the continuous functions over [a, b] whose restrictions
on each Ij are polynomials of degree ≤ k. Then, for any continuous
function f in [a, b], the piecewise interpolation polynomial Π_h^k f coincides
on each Ij with the interpolating polynomial of f|Ij at the n + 1 nodes
x_j^{(i)}, 0 ≤ i ≤ n. As a consequence, if f ∈ C^{k+1}([a, b]), using (8.7) within
each interval we obtain the following error estimate

‖f − Π_h^k f‖∞ ≤ C h^{k+1} ‖f^(k+1)‖∞.   (8.23)

Note that a small interpolation error can be obtained even for low k provided
that h is sufficiently "small".

Example 8.5 Let us go back to the function of Runge's counterexample. Now,
piecewise polynomials of degree k = 1 and k = 2 are employed. We check
experimentally the behavior of the error as h decreases. In Table 8.3 we show
the absolute errors measured in the maximum norm over the interval [−5, 5] and
the corresponding estimates of the convergence order p with respect to h. Except
when using an excessively small number of subintervals, the results confirm the
theoretical estimate (8.23), that is p = k + 1.

h           ‖f − Π_h^1 f‖∞    p        ‖f − Π_h^2 f‖∞    p
5           0.4153                     0.0835
2.5         0.1787            1.216    0.0971            -0.217
1.25        0.0631            1.501    0.0477             1.024
0.625       0.0535            0.237    0.0082             2.537
0.3125      0.0206            1.374    0.0010             3.038
0.15625     0.0058            1.819    1.3828e-04         2.856
0.078125    0.0015            1.954    1.7715e-05         2.964
TABLE 8.3. Interpolation error for piecewise Lagrange interpolation of degree
k = 1 and k = 2, in the case of Runge's function (8.12); p denotes the trend of
the exponent of h. Notice that, as h → 0, p → k + 1, as predicted by (8.23)
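The k = 1 column of Table 8.3 can be reproduced, for instance, with the built-in MATLAB function interp1 (piecewise linear interpolation); the sketch below assumes that (8.12) denotes Runge's function f(x) = 1/(1 + x²) on [−5, 5].

f  = @(t) 1./(1 + t.^2);
z  = linspace(-5,5,10001);            % fine grid for the maximum norm
hs = 5./2.^(0:6);                     % h = 5, 2.5, ..., 0.078125
e  = zeros(size(hs));
for m = 1:length(hs)
  x = -5:hs(m):5;                     % equally spaced breakpoints
  e(m) = max(abs(f(z) - interp1(x,f(x),z)));
end
p = log2(e(1:end-1)./e(2:end));       % estimated order: p tends to k+1 = 2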

Besides estimate (8.23), convergence results in integral norms exist (see
[QV94], [EEHJ96]). For this purpose, we introduce the following space

L²(a, b) = { f : (a, b) → R : ∫_a^b |f(x)|² dx < +∞ },   (8.24)

with

‖f‖_{L²(a,b)} = ( ∫_a^b |f(x)|² dx )^{1/2}.   (8.25)

Formula (8.25) defines a norm for L²(a, b). (We recall that norms and seminorms
of functions can be defined in a manner similar to what was done in
Definition 1.17 in the case of vectors.) We warn the reader that the integral
of the function |f|² in (8.24) has to be understood in the Lebesgue sense (see,
e.g., [Rud83]). In particular, f need not be continuous everywhere.

Theorem 8.3 Let 0 ≤ m ≤ k + 1, with k ≥ 1, and assume that f^(m) ∈
L²(a, b) for 0 ≤ m ≤ k + 1; then there exists a positive constant C, independent
of h, such that

‖(f − Π_h^k f)^(m)‖_{L²(a,b)} ≤ C h^{k+1−m} ‖f^(k+1)‖_{L²(a,b)}.   (8.26)

In particular, for k = 1 and m = 0 or m = 1, we obtain

‖f − Π_h^1 f‖_{L²(a,b)} ≤ C1 h² ‖f''‖_{L²(a,b)},
‖(f − Π_h^1 f)'‖_{L²(a,b)} ≤ C2 h ‖f''‖_{L²(a,b)},   (8.27)

for two suitable positive constants C1 and C2.
Proof. We only prove (8.27) and refer to [QV94], Chapter 3 for the proof of
(8.26) in the general case.
