then, set z_i^(k+1) = ξ. Thus, the nonlinear Gauss-Seidel method converts problem (7.1) into the successive solution of n scalar nonlinear equations.

In the case of system (7.64), each of these equations is linear in the unknown z_i^(k+1) (since ξ does not explicitly appear in the bracketed term at the right side of (7.64)). This allows for its exact solution in one step.

In the case of system (7.65), equation (7.66) is genuinely nonlinear with respect to ξ, and is solved by taking one step of a fixed-point iteration.

The nonlinear Gauss-Seidel method (7.66) has been implemented in MATLAB to solve systems (7.64) and (7.65) in the case of the initial triangulation shown in Figure 7.7 (left). This triangulation covers the external region of a two-dimensional wing section of type NACA 2316. The grid contains NT = 534 triangles and n = 198 internal nodes.


The algorithm reached convergence in 42 iterations for both kinds of regularization, having used as stopping criterion the test ‖z^(k+1) − z^(k)‖_∞ ≤ 10^−4. In Figure 7.7 (right) the discretization grid obtained after the regularization by areas is shown (a similar result has been provided by the regularization by edges). Notice the higher uniformity of the triangles with respect to those of the starting grid.
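The sweep described above can be sketched generically. This is a minimal Python substitute for the MATLAB implementation mentioned in the text: one fixed-point step per component per sweep, with a stopping test of the same form as the one used above. The 2×2 system here is a made-up illustrative example, not (7.64) or (7.65).

```python
import math

def nonlinear_gauss_seidel(phi, z0, tol=1e-4, nmax=200):
    """Nonlinear Gauss-Seidel sweep: for each component i, take one
    fixed-point step z_i <- phi_i(z) with the other components frozen
    at their most recent values; stop when the max-norm update <= tol."""
    z = list(z0)
    for k in range(nmax):
        z_old = list(z)
        for i, phi_i in enumerate(phi):
            z[i] = phi_i(z)   # one fixed-point step for component i
        if max(abs(a - b) for a, b in zip(z, z_old)) <= tol:
            return z, k + 1
    return z, nmax

# Hypothetical contractive 2x2 system: z1 = cos(z2)/3, z2 = sin(z1)/2
phi = [lambda z: math.cos(z[1]) / 3.0,
       lambda z: math.sin(z[0]) / 2.0]
z, iters = nonlinear_gauss_seidel(phi, [0.0, 0.0])
```

Because each scalar equation is attacked with a single fixed-point step, a sweep is cheap; convergence of the outer loop rests on the contractivity of the component maps.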

FIGURE 7.7. Triangulation before (left) and after (right) the regularization

7.5 Exercises

1. Prove (7.8) for the m-step Newton-SOR method.

[Hint: use the SOR method for solving a linear system Ax = b with A = D − E − F and express the k-th iterate as a function of the initial datum x^(0), obtaining

x^(k+1) = x^(0) + (M^(k+1) − I)x^(0) + (M^k + … + I)B^(−1)b,

where B = ω^(−1)(D − ωE) and M = B^(−1)ω^(−1)[(1 − ω)D + ωF]. Since B^(−1)A = I − M and

(I + … + M^k)(I − M) = I − M^(k+1),

then (7.8) follows by suitably identifying the matrix and the right side of the system.]
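The closed form in the hint can be checked numerically. The sketch below uses an arbitrary small SPD matrix and an illustrative ω (neither is from the text): it runs k + 1 plain SOR steps and compares the result with x^(0) + (M^(k+1) − I)x^(0) + (M^k + … + I)B^(−1)b, and also verifies B^(−1)A = I − M.

```python
import numpy as np

# Illustrative SPD system (not from the text)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
omega = 1.2

D = np.diag(np.diag(A))
E = -np.tril(A, -1)          # splitting A = D - E - F
F = -np.triu(A, 1)

B = (D - omega * E) / omega  # B = omega^{-1} (D - omega E)
Binv = np.linalg.inv(B)
M = Binv @ (((1 - omega) * D + omega * F) / omega)

x0 = np.zeros(3)
k = 5

# k+1 plain SOR steps: x <- M x + B^{-1} b
x = x0.copy()
for _ in range(k + 1):
    x = M @ x + Binv @ b

# Closed form from the hint
P = sum(np.linalg.matrix_power(M, j) for j in range(k + 1))  # I + M + ... + M^k
x_closed = x0 + (np.linalg.matrix_power(M, k + 1) - np.eye(3)) @ x0 + P @ (Binv @ b)
```

The two vectors agree to rounding error, which is exactly the identity the hint asks you to exploit.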

2. Prove that using the gradient method for minimizing f(x) = x^2 with the directions p^(k) = −1 and the parameters α_k = 2^(−k+1) does not yield the minimizer of f.
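A quick numerical check of this exercise (the starting point is an arbitrary choice, not from the text): since the step lengths sum to the finite value Σ_k 2^(−k+1) = 4, the iterates x^(k+1) = x^(k) + α_k p^(k) stall at x^(0) − 4 and never reach the minimizer x = 0.

```python
x = 5.0                        # arbitrary starting point (assumption)
for k in range(60):
    alpha = 2.0 ** (-k + 1)    # alpha_k = 2^{-k+1}: steps sum to 4
    x = x + alpha * (-1.0)     # direction p^(k) = -1
# the limit is 5 - 4 = 1, not the minimizer 0 of f(x) = x^2
```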

3. Show that for the steepest descent method applied to minimizing a quadratic functional f of the form (7.35) the following inequality holds

f(x^(k+1)) ≤ ((λ_max − λ_min)/(λ_max + λ_min))^2 f(x^(k)),

where λ_max, λ_min are the eigenvalues of maximum and minimum modulus, respectively, of the matrix A that appears in (7.35).

[Hint: proceed as done for (7.38).]
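The inequality can be observed experimentally. The sketch assumes (7.35) is the usual quadratic functional; here we take f(x) = ½ xᵀAx with an illustrative SPD matrix and zero linear term, so that min f = 0 and the per-step ratio f(x^(k+1))/f(x^(k)) is meaningful. Exact line search is used for the steepest descent step.

```python
import numpy as np

# Assumed form of (7.35): f(x) = 0.5 x^T A x, A SPD (illustrative choice)
A = np.diag([1.0, 3.0, 10.0])         # eigenvalues 1, 3, 10
lam_min, lam_max = 1.0, 10.0
c = ((lam_max - lam_min) / (lam_max + lam_min)) ** 2

def f(x):
    return 0.5 * x @ A @ x

x = np.array([1.0, 1.0, 1.0])
ratios = []
for _ in range(20):
    r = -A @ x                         # steepest descent direction -grad f
    alpha = (r @ r) / (r @ (A @ r))    # exact line search step length
    x_new = x + alpha * r
    ratios.append(f(x_new) / f(x))
    x = x_new
```

Every observed ratio stays below ((λ_max − λ_min)/(λ_max + λ_min))^2, here (9/11)^2 ≈ 0.669, as the exercise claims.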

4. Check that the parameters α_k of Exercise 2 do not fulfill the conditions (7.31) and (7.32).

5. Consider the function f : R^n → R introduced in (7.58) and check that it is uniformly convex on R^n, that is,

λf(u) + (1 − λ)f(v) − f(λu + (1 − λ)v) > (1/2)λ(1 − λ)‖u − v‖_A^2

for any u, v ∈ R^n with u ≠ v and 0 < λ < 1.

[Hint: notice that cosh(·) is a convex function.]

6. To solve the nonlinear system

−(1/81) cos x_1 + (1/9) x_2^2 + (1/3) sin x_3 = x_1,
(1/3) sin x_1 + (1/3) cos x_3 = x_2,
−(1/9) cos x_1 + (1/3) x_2 + (1/6) sin x_3 = x_3,

use the fixed-point iteration x^(n+1) = Ψ(x^(n)), where x = (x_1, x_2, x_3)^T and Ψ(x) is the left-hand side of the system. Analyze the convergence of the iteration to compute the fixed point α = (0, 1/3, 0)^T.

[Solution: the fixed-point method is convergent since ‖Ψ′(α)‖_∞ = 1/2.]
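The iteration can be run directly. In the sketch below the coefficients 1/81, 1/9, 1/3, 1/6 are as reconstructed from the display above (one can verify Ψ(0, 1/3, 0) = (0, 1/3, 0)); the starting guess is an arbitrary choice.

```python
import math

def Psi(x):
    """Fixed-point map: the left-hand side of the system above."""
    x1, x2, x3 = x
    return (-math.cos(x1) / 81 + x2 ** 2 / 9 + math.sin(x3) / 3,
            math.sin(x1) / 3 + math.cos(x3) / 3,
            -math.cos(x1) / 9 + x2 / 3 + math.sin(x3) / 6)

x = (0.1, 0.1, -0.1)     # arbitrary starting guess
for _ in range(100):
    x = Psi(x)
# x approaches the fixed point alpha = (0, 1/3, 0)
```

The observed error shrinks roughly by a factor 1/2 per iteration, consistent with ‖Ψ′(α)‖_∞ = 1/2.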

7. Using Program 50 implementing Newton's method, determine the global maximizer of the function

f(x) = e^{−x²/2} − (1/4) cos(2x)

and analyze the performance of the method (input data: xv=1; toll=1e-6; nmax=500). Solve the same problem using the fixed-point iteration

x^(k+1) = g(x^(k)),  with  g(x) = sin(2x) [e^{x²/2}(x sin(2x) + 2 cos(2x)) − 2] / [2(x sin(2x) + 2 cos(2x))].

Analyze the performance of this second scheme, both theoretically and experimentally, and compare the results obtained using the two methods.

[Solution: the function f has a global maximum at x = 0. This point is a multiple zero of f′ (both f′ and f″ vanish there), so Newton's method is only linearly convergent. Conversely, the proposed fixed-point method is third-order convergent.]
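A Python sketch of the Newton part (in place of the MATLAB Program 50 referenced above): Newton's method is applied to f′(x) = −x e^{−x²/2} + (1/2) sin(2x). Near 0 one has f′(x) ≈ −x³/6, a multiple zero, so the error decreases only by a roughly constant factor per step, i.e. linear convergence.

```python
import math

# f(x) = exp(-x^2/2) - cos(2x)/4; Newton's method applied to f'
fp = lambda x: -x * math.exp(-x ** 2 / 2) + 0.5 * math.sin(2 * x)       # f'
fpp = lambda x: (x ** 2 - 1) * math.exp(-x ** 2 / 2) + math.cos(2 * x)  # f''

x = 1.0                       # xv = 1, as in the input data above
errs = []
for _ in range(35):
    x = x - fp(x) / fpp(x)    # Newton step on f'
    errs.append(abs(x))       # the maximizer is x = 0
```

The consecutive error ratios settle near a constant (about 2/3 for this multiplicity), far from the quadratic behavior Newton exhibits at a simple zero.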

8

Polynomial Interpolation

This chapter is addressed to the approximation of a function which is known

through its nodal values.

Precisely, given m + 1 pairs (x_i, y_i), the problem consists of finding a function Φ = Φ(x) such that Φ(x_i) = y_i for i = 0, …, m, the y_i being some given values; we then say that Φ interpolates {y_i} at the nodes {x_i}. We speak of polynomial interpolation if Φ is an algebraic polynomial, of trigonometric approximation if Φ is a trigonometric polynomial, or of piecewise polynomial interpolation (or spline interpolation) if Φ is only locally a polynomial.

The numbers yi may represent the values attained at the nodes xi by a

function f that is known in closed form, as well as experimental data. In the

former case, the approximation process aims at replacing f with a function that is simpler to deal with, in particular in view of its numerical integration or differentiation. In the latter case, the primary goal of approximation is to

provide a compact representation of the available data, whose number is

often quite large.

Polynomial interpolation is addressed in Sections 8.1 and 8.2, while piece-

wise polynomial interpolation is introduced in Sections 8.3, 8.4 and 8.5. Finally, univariate and parametric splines are addressed in Sections 8.6 and

8.7. Interpolation processes based on trigonometric or algebraic orthogonal

polynomials will be considered in Chapter 10.


8.1 Polynomial Interpolation

Let us consider n + 1 pairs (x_i, y_i). The problem is to find a polynomial Π_m ∈ P_m, called an interpolating polynomial, such that

Π_m(x_i) = a_m x_i^m + … + a_1 x_i + a_0 = y_i,  i = 0, …, n.  (8.1)

The points x_i are called interpolation nodes. If n ≠ m the problem is over- or under-determined and will be addressed in Section 10.7.1. If n = m, the following result holds.

Theorem 8.1 Given n + 1 distinct points x_0, …, x_n and n + 1 corresponding values y_0, …, y_n, there exists a unique polynomial Π_n ∈ P_n such that Π_n(x_i) = y_i for i = 0, …, n.

Proof. To prove existence, let us use a constructive approach, providing an expression for Π_n. Denoting by {l_i}_{i=0}^n a basis for P_n, Π_n admits a representation on such a basis of the form Π_n(x) = Σ_{i=0}^n b_i l_i(x) with the property that

Π_n(x_i) = Σ_{j=0}^n b_j l_j(x_i) = y_i,  i = 0, …, n.  (8.2)

If we define

l_i ∈ P_n : l_i(x) = Π_{j=0, j≠i}^n (x − x_j)/(x_i − x_j),  i = 0, …, n,  (8.3)

then l_i(x_j) = δ_ij and we immediately get from (8.2) that b_i = y_i.

The polynomials {l_i, i = 0, …, n} form a basis for P_n (see Exercise 1). As a consequence, the interpolating polynomial exists and has the following form (called Lagrange form)

Π_n(x) = Σ_{i=0}^n y_i l_i(x).  (8.4)

To prove uniqueness, suppose that another interpolating polynomial Ψ_m of degree m ≤ n exists, such that Ψ_m(x_i) = y_i for i = 0, …, n. Then, the difference polynomial Π_n − Ψ_m vanishes at n + 1 distinct points x_i and thus coincides with the null polynomial. Therefore, Ψ_m = Π_n.

An alternative approach to prove existence and uniqueness of Π_n is provided in Exercise 2. □
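The construction in the proof translates directly into code. A minimal sketch (naive evaluation, fine for small n; the data are arbitrary):

```python
def lagrange_basis(nodes, i, x):
    """Characteristic polynomial l_i of (8.3) evaluated at x."""
    val = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            val *= (x - xj) / (nodes[i] - xj)
    return val

def interpolate(nodes, values, x):
    """Lagrange form (8.4): sum_i y_i l_i(x)."""
    return sum(y * lagrange_basis(nodes, i, x)
               for i, y in enumerate(values))

nodes = [0.0, 1.0, 2.0]
values = [1.0, 3.0, 2.0]   # arbitrary data y_i
```

The delta property l_i(x_j) = δ_ij makes the coefficients b_i = y_i immediate, which is exactly what the proof exploits.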

It can be checked that (see Exercise 3)

Π_n(x) = Σ_{i=0}^n y_i ω_{n+1}(x)/((x − x_i) ω′_{n+1}(x_i))  (8.5)

where ω_{n+1} is the nodal polynomial of degree n + 1 defined as

ω_{n+1}(x) = Π_{i=0}^n (x − x_i).  (8.6)
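Formula (8.5) can be verified numerically against a known interpolant. In this sketch ω′_{n+1} is evaluated by the product rule; the data are arbitrary, and x must not coincide with a node (the formula divides by x − x_i).

```python
def omega(nodes, x):
    """Nodal polynomial (8.6): product of (x - x_i)."""
    p = 1.0
    for xi in nodes:
        p *= (x - xi)
    return p

def omega_prime(nodes, x):
    """Derivative of the nodal polynomial, via the product rule."""
    s = 0.0
    for i in range(len(nodes)):
        p = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                p *= (x - xj)
        s += p
    return s

def interp_85(nodes, values, x):
    """Formula (8.5); x must not coincide with any node."""
    return sum(y * omega(nodes, x) / ((x - xi) * omega_prime(nodes, xi))
               for xi, y in zip(nodes, values))

nodes = [0.0, 1.0, 2.0]
values = [1.0, 3.0, 2.0]
# the quadratic through (0,1), (1,3), (2,2) takes the value 2.375 at 0.5
# and -4 at -1, which (8.5) reproduces
```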

Formula (8.4) is called the Lagrange form of the interpolating polynomial,

while the polynomials li (x) are the characteristic polynomials. In Figure

8.1 we show the characteristic polynomials l2 (x), l3 (x) and l4 (x), in the

case of degree n = 6, on the interval [-1,1] where equally spaced nodes are

taken, including the end points.


FIGURE 8.1. Lagrange characteristic polynomials

Notice that |li (x)| can be greater than 1 within the interpolation interval.

If yi = f (xi ) for i = 0, . . . , n, f being a given function, the interpolating

polynomial Πn (x) will be denoted by Πn f (x).

8.1.1 The Interpolation Error

In this section we estimate the interpolation error that is made when replacing a given function f with its interpolating polynomial Π_n f at the nodes x_0, x_1, …, x_n (for further results, we refer the reader to [Wen66], [Dav63]).

Theorem 8.2 Let x_0, x_1, …, x_n be n + 1 distinct nodes and let x be a point belonging to the domain of a given function f. Assume that f ∈ C^{n+1}(I_x), where I_x is the smallest interval containing the nodes x_0, x_1, …, x_n and x. Then the interpolation error at the point x is given by

E_n(x) = f(x) − Π_n f(x) = (f^{(n+1)}(ξ)/(n + 1)!) ω_{n+1}(x),  (8.7)

where ξ ∈ I_x and ω_{n+1} is the nodal polynomial of degree n + 1.


Proof. The result is obviously true if x coincides with any of the interpolation nodes. Otherwise, define, for any t ∈ I_x, the function G(t) = E_n(t) − ω_{n+1}(t)E_n(x)/ω_{n+1}(x). Since f ∈ C^{(n+1)}(I_x) and ω_{n+1} is a polynomial, G ∈ C^{(n+1)}(I_x) and it has n + 2 distinct zeros in I_x, since

G(x_i) = E_n(x_i) − ω_{n+1}(x_i)E_n(x)/ω_{n+1}(x) = 0,  i = 0, …, n,
G(x) = E_n(x) − ω_{n+1}(x)E_n(x)/ω_{n+1}(x) = 0.

Then, thanks to the mean value theorem, G′ has n + 1 distinct zeros and, by recursion, G^{(j)} admits n + 2 − j distinct zeros. As a consequence, G^{(n+1)} has a unique zero, which we denote by ξ. On the other hand, since E_n^{(n+1)}(t) = f^{(n+1)}(t) and ω_{n+1}^{(n+1)}(x) = (n + 1)!, we get

G^{(n+1)}(t) = f^{(n+1)}(t) − ((n + 1)!/ω_{n+1}(x)) E_n(x),

which, evaluated at t = ξ, gives the desired expression for E_n(x). □
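The bound implied by (8.7) can be checked numerically. Taking f = cos on [0, 1] (an illustrative choice), every derivative of f is bounded by 1, so |f(x) − Π_n f(x)| ≤ |ω_{n+1}(x)|/(n + 1)! at every x; the sketch below tests this on a fine grid for n = 4.

```python
import math

def lagrange_interp(nodes, values, x):
    """Naive Lagrange interpolation (8.4)."""
    total = 0.0
    for i, y in enumerate(values):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (nodes[i] - xj)
        total += y * li
    return total

# f = cos on [0, 1]: |f^{(n+1)}| <= 1, so (8.7) yields
# |f(x) - Pi_n f(x)| <= |omega_{n+1}(x)| / (n+1)!
n = 4
nodes = [i / n for i in range(n + 1)]
values = [math.cos(t) for t in nodes]

ok = True
for m in range(101):
    x = m / 100
    err = abs(math.cos(x) - lagrange_interp(nodes, values, x))
    w = 1.0
    for t in nodes:
        w *= (x - t)
    bound = abs(w) / math.factorial(n + 1)
    ok = ok and err <= bound + 1e-15
```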

8.1.2 Drawbacks of Polynomial Interpolation on Equally Spaced Nodes and Runge's Counterexample

In this section we analyze the behavior of the interpolation error (8.7) as n tends to infinity. For this purpose, for any function f ∈ C^0([a, b]), define its maximum norm

‖f‖_∞ = max_{x∈[a,b]} |f(x)|.  (8.8)

Then, let us introduce a lower triangular matrix X of infinite size, called the interpolation matrix on [a, b], whose entries x_ij, for i, j = 0, 1, …, represent points of [a, b], with the assumption that on each row the entries are all distinct.

Thus, for any n ≥ 0, the (n + 1)-th row of X contains n + 1 distinct values that we can identify as nodes, so that, for a given function f, we can uniquely define an interpolating polynomial Π_n f of degree n at those nodes (any polynomial Π_n f depends on X, as well as on f).

Having fixed f and an interpolation matrix X, let us define the interpolation error

E_{n,∞}(X) = ‖f − Π_n f‖_∞,  n = 0, 1, …  (8.9)

Next, denote by p*_n ∈ P_n the best approximation polynomial, for which

E*_n = ‖f − p*_n‖_∞ ≤ ‖f − q_n‖_∞  ∀q_n ∈ P_n.

The following comparison result holds (for the proof, see [Riv74]).


Property 8.1 Let f ∈ C^0([a, b]) and X be an interpolation matrix on [a, b]. Then

E_{n,∞}(X) ≤ E*_n (1 + Λ_n(X)),  n = 0, 1, …  (8.10)

where Λ_n(X) denotes the Lebesgue constant of X, defined as

Λ_n(X) = ‖ Σ_{j=0}^n |l_j^{(n)}| ‖_∞,  (8.11)

and where l_j^{(n)} ∈ P_n is the j-th characteristic polynomial associated with the (n + 1)-th row of X, that is, satisfying l_j^{(n)}(x_nk) = δ_jk, j, k = 0, 1, …, n.

Since E*_n does not depend on X, all the information concerning the effects of X on E_{n,∞}(X) must be looked for in Λ_n(X). Although there exists an interpolation matrix X* such that Λ_n(X*) is minimized, it is not in general a simple task to determine its entries explicitly. We shall see in Section 10.3 that the zeros of the Chebyshev polynomials provide on the interval [−1, 1] an interpolation matrix with a very small value of the Lebesgue constant.

On the other hand, for any possible choice of X, there exists a constant C > 0 such that (see [Erd61])

Λ_n(X) > (2/π) log(n + 1) − C,  n = 0, 1, …
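The Lebesgue constant can be estimated directly by maximizing Σ_j |l_j^(n)| over a fine grid. The sketch below does this for equally spaced nodes on [−1, 1] (an illustrative choice of X) and compares the result with the lower bound (2/π) log(n + 1); for equispaced nodes the actual growth is far faster than logarithmic.

```python
import math

def lebesgue_constant(nodes, samples=2000):
    """Estimate Lambda_n = max_x sum_j |l_j(x)| on a fine grid."""
    a, b = min(nodes), max(nodes)
    best = 0.0
    for m in range(samples + 1):
        x = a + (b - a) * m / samples
        s = 0.0
        for i in range(len(nodes)):
            li = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (x - xj) / (nodes[i] - xj)
            s += abs(li)
        best = max(best, s)
    return best

# Equally spaced nodes on [-1, 1]: Lambda_n grows rapidly with n
lams = {}
for n in (5, 10, 15):
    nodes = [-1 + 2 * i / n for i in range(n + 1)]
    lams[n] = lebesgue_constant(nodes)
```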

This property shows that Λ_n(X) → ∞ as n → ∞. This fact has important consequences: in particular, it can be proved (see [Fab14]) that, given an interpolation matrix X on an interval [a, b], there always exists a continuous function f on [a, b] such that Π_n f does not converge uniformly (that is, in the maximum norm) to f. Thus, polynomial interpolation does not allow for approximating every continuous function, as demonstrated by the following example.

Example 8.1 (Runge's counterexample) Suppose we approximate the following function

f(x) = 1/(1 + x²),  −5 ≤ x ≤ 5,  (8.12)

using Lagrange interpolation on equally spaced nodes. It can be checked that some points x exist within the interpolation interval such that

lim_{n→∞} |f(x) − Π_n f(x)| ≠ 0.

In particular, Lagrange interpolation diverges for |x| > 3.63… This phenomenon is particularly evident in the neighborhood of the end points of the interpolation interval, as shown in Figure 8.2, and is due to the choice of equally spaced nodes. We shall see in Chapter 10 that resorting to suitably chosen nodes will allow for uniform convergence of the interpolating polynomial to the function f to hold. •
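The divergence is easy to reproduce. The sketch below evaluates the interpolation error of (8.12) at the sample point x = 4.8 (inside the region |x| > 3.63… where divergence occurs) for increasing n on equally spaced nodes:

```python
def lagrange_interp(nodes, values, x):
    """Naive Lagrange interpolation (8.4)."""
    total = 0.0
    for i, y in enumerate(values):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (nodes[i] - xj)
        total += y * li
    return total

f = lambda x: 1.0 / (1.0 + x * x)   # Runge's function (8.12)

# error of Pi_n f at x = 4.8 for equally spaced nodes on [-5, 5]
errors = []
for n in (5, 10, 15, 20):
    nodes = [-5 + 10 * i / n for i in range(n + 1)]
    values = [f(t) for t in nodes]
    errors.append(abs(f(4.8) - lagrange_interp(nodes, values, 4.8)))
```

Rather than shrinking, the error near the end points grows with n, which is the phenomenon visible in Figure 8.2.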



FIGURE 8.2. Lagrange interpolation on equally spaced nodes for the function f(x) = 1/(1 + x²): the interpolating polynomials Π_5 f and Π_10 f are shown in dotted and dashed lines, respectively

8.1.3 Stability of Polynomial Interpolation

Let us consider a set of function values f (xi ) which is a perturbation