
\int_{x_0}^{x_n} (s - x_0)\cdots(s - x_{n-1})(s - x_n)(x_n - s)\,ds
  = -h^{n+3} \int_0^n \tau(\tau - 1)\cdots(\tau - n + 1)(\tau - n)^2\,d\tau.

Finally, letting t = n − τ and combining this result with (9.23), we get (9.19). □

Relations (9.19) and (9.20) are a priori estimates for the quadrature error
(see Chapter 2, Section 2.3). Their use in generating a posteriori error estimates
within adaptive algorithms will be examined in Section 9.7.
In the case of closed Newton-Cotes formulae, we show in Table 9.3, for
1 ≤ n ≤ 6, the degree of exactness (which we denote henceforth by r_n) and
the absolute value of the constant \mathcal{M}_n = M_n/(n + 2)! (if n is even) or
\mathcal{K}_n = K_n/(n + 1)! (if n is odd).
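For instance (a worked reading of Table 9.3, added here for illustration), combining the entry for n = 2 with the estimate (9.19) gives

|E_2(f)| = \frac{|M_2|}{4!}\, h^5 \bigl|f^{(4)}(\xi)\bigr| = \frac{h^5}{90}\,\bigl|f^{(4)}(\xi)\bigr|,

which is the familiar error of the Cavalieri-Simpson rule (h being the spacing between the quadrature nodes).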

Example 9.2 The purpose of this example is to assess the importance of the
regularity assumption on f for the error estimates (9.19) and (9.20). Consider
the closed Newton-Cotes formulae, for 1 ≤ n ≤ 6, to approximate the integral
\int_0^1 x^{5/2}\,dx = 2/7 ≃ 0.2857. Since f is only C^2([0, 1]), we do not expect a substantial
increase of the accuracy as n gets larger. This is indeed confirmed by Table
9.4, where the results obtained by running Program 74 are reported.

 n   r_n   \mathcal{M}_n   \mathcal{K}_n
 1    1        --            1/12
 2    3       1/90            --
 3    3        --            3/80
 4    5       8/945           --
 5    5        --            275/12096
 6    7       9/1400          --

TABLE 9.3. Degree of exactness and error constants for closed Newton-Cotes
formulae

For n = 1, ..., 6, we have denoted by E_n^c(f) the absolute value of the error,
by q_n^c the computed order of infinitesimal and by q_n^s the corresponding theoretical
value predicted by (9.19) and (9.20) under optimal regularity assumptions on f.
As is clearly seen, q_n^c is definitely less than the potential theoretical value q_n^s. •




 n   E_n^c(f)           q_n^c   q_n^s
 1   0.2143              3       3
 2   1.196 · 10^{-3}     3.2     5
 3   5.753 · 10^{-4}     3.8     5
 4   5.009 · 10^{-5}     4.7     7
 5   3.189 · 10^{-5}     2.6     7
 6   7.857 · 10^{-6}     3.7     9

TABLE 9.4. Error in the approximation of \int_0^1 x^{5/2}\,dx




Example 9.3 From a brief analysis of the error estimates (9.19) and (9.20), we could
be led to believe that only non-smooth functions can be a source of trouble when
dealing with Newton-Cotes formulae. Thus, it is a little surprising to see results
like those in Table 9.5, concerning the approximation of the integral

I(f) = \int_{-5}^{5} \frac{1}{1 + x^2}\,dx = 2\arctan 5 \simeq 2.747,     (9.24)

where f(x) = 1/(1 + x^2) is Runge's function (see Section 8.1.2), which belongs to
C^∞(R). The results clearly demonstrate that the error remains almost unchanged
as n grows. This is due to the fact that singularities on the imaginary axis may
also affect the convergence properties of a quadrature formula; this is indeed the
case with the function at hand, which exhibits two singularities at ±\sqrt{-1} (see
[DR75], pp. 64-66).


 n   E_n(f)     n   E_n(f)     n   E_n(f)
 1   0.8601     3   0.2422     5   0.1599
 2  -1.474      4   0.1357     6  -0.4091

TABLE 9.5. Relative error E_n(f) = [I(f) − I_n(f)]/I_n(f) in the approximate
evaluation of (9.24) using closed Newton-Cotes formulae

To increase the accuracy of an interpolatory quadrature rule, it is by
no means convenient to increase the value of n. By doing so, the same
drawbacks of Lagrange interpolation on equally spaced nodes would arise.
For example, the weights of the closed Newton-Cotes formula with n = 8
do not have the same sign (see Table 9.6 and recall that w_i = w_{n−i} for
i = 0, ..., n − 1).

 n     w_0           w_1            w_2            w_3            w_4           r_n   \mathcal{M}_n
 8   3956/14175   23552/14175   -3712/14175    41984/14175   -18160/14175       9    2368/467775

TABLE 9.6. Weights of the closed Newton-Cotes formula with 9 nodes
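The values in Table 9.6 (the weights being expressed in units of h, consistently with Program 74 below) can be checked by integrating the Lagrange characteristic polynomials over [0, n]; a minimal MATLAB sketch, added here for illustration:

n = 8;  w = zeros(1, n+1);
for i = 0:n
  p = 1;                                   % coefficients of prod_{j~=i} (t - j)/(i - j)
  for j = [0:i-1, i+1:n]
    p = conv(p, [1 -j]) / (i - j);
  end
  P = polyint(p);                          % antiderivative of the characteristic polynomial
  w(i+1) = polyval(P, n) - polyval(P, 0);  % w_i/h = int_0^n l_i(t) dt
end
disp(w)    % some entries are negative, in agreement with Table 9.6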

Such sign changes can give rise to numerical instabilities, due to rounding errors (see
Chapter 2), and make the formula useless in practice, as happens for
all the Newton-Cotes formulae using more than 8 nodes. As an alternative,
one can resort to composite formulae, whose error analysis is addressed in
Section 9.4, or to Gaussian formulae, which will be dealt with in Chapter
10 and which yield maximum degree of exactness with a distribution of
non-equally spaced nodes.

The closed Newton-Cotes formulae, for 1 ≤ n ≤ 6, are implemented in
Program 74.

Program 74 - newtcot : Closed Newton-Cotes formulae

function int = newtcot(a,b,n,fun)
% Closed Newton-Cotes formula on [a,b] with n+1 equally spaced nodes (1 <= n <= 6);
% fun is a string in the scalar variable x, evaluated through eval.
h=(b-a)/n; n2=fix(n/2);
if n > 6, disp('maximum value of n equal to 6'); int=[]; return; end
a03=1/3; a08=1/8; a45=1/45; a288=1/288; a140=1/140;
alpha=[0.5 0 0 0; ...
a03 4*a03 0 0; ...
3*a08 9*a08 0 0; ...
14*a45 64*a45 24*a45 0; ...
95*a288 375*a288 250*a288 0; ...
41*a140 216*a140 27*a140 272*a140];   % row n stores the weights w_0,...,w_{fix(n/2)}
x=a; y(1)=eval(fun);
for j=2:n+1, x=x+h; y(j)=eval(fun); end; int=0;
for j=1:n2+1, int=int+y(j)*alpha(n,j); end;
for j=n2+2:n+1, int=int+y(j)*alpha(n,n-j+2); end; int=int*h;   % symmetry w_i = w_{n-i}
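As a usage sketch (added here for illustration), the errors E_n^c(f) of Table 9.4 can be reproduced as follows, assuming newtcot.m is available on the MATLAB path:

fun = 'x^(5/2)';     % integrand of Example 9.2, as a string in the scalar x
Iex = 2/7;           % exact value of int_0^1 x^(5/2) dx
for n = 1:6
  fprintf('n = %d   error = %10.3e\n', n, abs(Iex - newtcot(0, 1, n, fun)));
end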




9.4 Composite Newton-Cotes Formulae
The examples of Section 9.2 have already pointed out that composite
Newton-Cotes formulae can be constructed by replacing f with its com-
posite Lagrange interpolating polynomial, introduced in Section 8.1.

The general procedure consists of partitioning the integration interval
[a, b] into m subintervals T_j = [y_j, y_{j+1}] with y_j = a + jH for j = 0, ..., m,
where H = (b − a)/m. Then, over each subinterval, an interpolatory
formula with nodes {x_k^{(j)}, 0 ≤ k ≤ n} and weights {α_k^{(j)}, 0 ≤ k ≤ n} is
used. Since

I(f) = \int_a^b f(x)\,dx = \sum_{j=0}^{m-1} \int_{T_j} f(x)\,dx,


a composite interpolatory quadrature formula is obtained by replacing I(f)
with

I_{n,m}(f) = \sum_{j=0}^{m-1} \sum_{k=0}^{n} \alpha_k^{(j)} f(x_k^{(j)}).     (9.25)

The quadrature error is defined as E_{n,m}(f) = I(f) − I_{n,m}(f). In particular,
over each subinterval T_j one can resort to a Newton-Cotes formula with
n + 1 equally spaced nodes: in such a case, the weights α_k^{(j)} = h w_k are still
independent of T_j.
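As an illustration of (9.25) (a minimal sketch added here, not one of the book's programs), the composite Cavalieri-Simpson rule, i.e. (9.25) with n = 2, can be coded as follows for a vectorized function handle f:

function int = simpcomp(a, b, m, f)
% Composite Cavalieri-Simpson rule: (9.25) with n = 2 on each of the m subintervals.
H = (b - a)/m;                  % width of each subinterval T_j
y = a + (0:m)*H;                % endpoints y_0,...,y_m
c = y(1:m) + H/2;               % midpoints of the subintervals
int = (H/6) * ( sum(f(y(1:m))) + 4*sum(f(c)) + sum(f(y(2:m+1))) );

For instance, simpcomp(-5, 5, 8, @(x) 1./(1+x.^2)) approximates the integral (9.24).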
Using the same notation as in Theorem 9.2, the following convergence
result holds for composite formulae.

Theorem 9.3 Let a composite Newton-Cotes formula, with n even, be
used. If f ∈ C^{n+2}([a, b]), then

E_{n,m}(f) = \frac{b - a}{(n + 2)!}\, \frac{M_n}{(n + 2)^{n+3}}\, H^{n+2} f^{(n+2)}(\xi),     (9.26)

where ξ ∈ (a, b). Therefore, the quadrature error is an infinitesimal in H
of order n + 2 and the formula has degree of exactness equal to n + 1.
For a composite Newton-Cotes formula, with n odd, if f ∈ C^{n+1}([a, b]) then

E_{n,m}(f) = \frac{b - a}{(n + 1)!}\, \frac{K_n}{n^{n+2}}\, H^{n+1} f^{(n+1)}(\eta),     (9.27)

where η ∈ (a, b). Thus, the quadrature error is an infinitesimal in H of
order n + 1 and the formula has degree of exactness equal to n.
Proof. We only consider the case where n is even. Using (9.19), and noticing
that M_n does not depend on the integration interval, we get

E_{n,m}(f) = \sum_{j=0}^{m-1} \bigl[ I(f)|_{T_j} - I_n(f)|_{T_j} \bigr]
  = \sum_{j=0}^{m-1} \frac{M_n}{(n + 2)!}\, h_j^{n+3} f^{(n+2)}(\xi_j),

where, for j = 0, ..., m − 1, h_j = |T_j|/(n + 2) = (b − a)/(m(n + 2)); this time,
ξ_j is a suitable point of T_j. Since (b − a)/m = H, we obtain

E_{n,m}(f) = \frac{M_n}{(n + 2)!}\, \frac{b - a}{m (n + 2)^{n+3}}\, H^{n+2} \sum_{j=0}^{m-1} f^{(n+2)}(\xi_j),

from which, applying Theorem 9.1 with u(x) = f^{(n+2)}(x) and δ_j = 1 for j =
0, ..., m − 1, (9.26) immediately follows. A similar procedure can be followed to
prove (9.27). □
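Spelling out the last step (a remark added for clarity): Theorem 9.1, applied to the continuous function f^{(n+2)} with unit weights δ_j = 1, guarantees the existence of a point ξ ∈ (a, b) such that

\sum_{j=0}^{m-1} f^{(n+2)}(\xi_j) = m\, f^{(n+2)}(\xi),

so that the factor m cancels and the previous expression reduces to (9.26).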

We notice that, for n fixed, E_{n,m}(f) → 0 as m → ∞ (i.e., as H → 0).
This ensures the convergence of the numerical integral to the exact value
I(f). We notice also that the degree of exactness of composite formulae
coincides with that of the simple formulae, whereas their order of infinitesimal
(with respect to H) is reduced by 1 with respect to the order of infinitesimal
(in h) of the simple formulae.
In practical computations it is convenient to resort to a local interpolation
of low degree (typically n ≤ 2, as done in Section 9.2); this leads to composite
quadrature rules with positive weights, which minimizes the effect of
rounding errors.

Example 9.4 For the same integral (9.24) considered in Example 9.3, we show
in Table 9.7 the behavior of the absolute error as a function of the number of
subintervals m, in the case of the composite midpoint, trapezoidal and Cavalieri-
Simpson formulae. Convergence of I_{n,m}(f) to I(f) as m increases can be clearly
observed. Moreover, we notice that E_{0,m}(f) ≃ E_{1,m}(f)/2 for m ≥ 32 (see Exercise 1).

  m    |E_{0,m}|        |E_{1,m}|        |E_{2,m}|
  1    7.253            2.362            4.04
  2    1.367            2.445            9.65 · 10^{-2}
  8    3.90 · 10^{-2}   3.77 · 10^{-2}   1.35 · 10^{-2}
  32   1.20 · 10^{-4}   2.40 · 10^{-4}   4.55 · 10^{-8}
  128  7.52 · 10^{-6}   1.50 · 10^{-5}   1.63 · 10^{-10}
  512  4.70 · 10^{-7}   9.40 · 10^{-7}   6.36 · 10^{-13}

TABLE 9.7. Absolute error for composite quadratures in the computation of
(9.24)
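As a quick check (using the simpcomp sketch given after (9.25) above; an illustration added here, not one of the book's programs), the column |E_{2,m}| can be reproduced, up to rounding, as follows:

f = @(x) 1./(1 + x.^2);  Iex = 2*atan(5);     % integrand and exact value of (9.24)
for m = [1 2 8 32 128 512]
  fprintf('m = %4d   |E_{2,m}| = %10.3e\n', m, abs(Iex - simpcomp(-5, 5, m, f)));
end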



Convergence of I_{n,m}(f) to I(f) can be established under less stringent
regularity assumptions on f than those required by Theorem 9.3. In this
regard, the following result holds (see [IK66], pp. 341-343, for the proof).

Property 9.1 Let f ∈ C^0([a, b]) and assume that the weights α_k^{(j)} in (9.25)
are nonnegative. Then

\lim_{m \to \infty} I_{n,m}(f) = \int_a^b f(x)\,dx, \qquad \forall n \ge 0.

Moreover,

\Bigl| \int_a^b f(x)\,dx - I_{n,m}(f) \Bigr| \le 2(b - a)\,\Omega(f; H),

where

\Omega(f; H) = \sup\bigl\{\, |f(x) - f(y)| : \ x, y \in [a, b],\ x \ne y,\ |x - y| \le H \,\bigr\}

is the modulus of continuity of the function f.


9.5 Hermite Quadrature Formulae
Thus far we have considered quadrature formulae based on Lagrange inter-
polation (simple or composite). More accurate formulae can be devised by
resorting to Hermite interpolation (see Section 8.4).
Suppose that the 2(n + 1) values f(x_k), f'(x_k) are available at n + 1 distinct
points x_0, ..., x_n; then the Hermite interpolating polynomial of f is given
by

H_{2n+1}f(x) = \sum_{i=0}^{n} f(x_i) L_i(x) + \sum_{i=0}^{n} f'(x_i) M_i(x),     (9.28)

where the polynomials L_k, M_k ∈ P_{2n+1} are defined, for k = 0, ..., n, as

L_k(x) = \Bigl[ 1 - (x - x_k)\, \frac{\omega''_{n+1}(x_k)}{\omega'_{n+1}(x_k)} \Bigr] l_k^2(x),
\qquad
M_k(x) = (x - x_k)\, l_k^2(x).

Integrating (9.28) over [a, b], we get the quadrature formula of type (9.4)

I_n(f) = \sum_{k=0}^{n} \alpha_k f(x_k) + \sum_{k=0}^{n} \beta_k f'(x_k),     (9.29)

where

\alpha_k = I(L_k), \qquad \beta_k = I(M_k), \qquad k = 0, \ldots, n.

Formula (9.29) has degree of exactness equal to 2n + 1. Taking n = 1, the
so-called corrected trapezoidal formula is obtained

I_1^{corr}(f) = \frac{b - a}{2}\bigl[f(a) + f(b)\bigr] + \frac{(b - a)^2}{12}\bigl[f'(a) - f'(b)\bigr],     (9.30)

with weights α_0 = α_1 = (b − a)/2, β_0 = (b − a)^2/12 and β_1 = −β_0. Assuming
f ∈ C^4([a, b]), the quadrature error associated with (9.30) is

E_1^{corr}(f) = \frac{h^5}{720}\, f^{(4)}(\xi), \qquad h = b - a,     (9.31)

with ξ ∈ (a, b). Notice the increase of accuracy from O(h^3) to O(h^5) with
respect to the corresponding expression (9.12); the error (9.31) is of the same
order as that of the Cavalieri-Simpson formula (9.15). The composite formula
can be generated in a similar manner

I_{1,m}^{corr}(f) = \frac{b - a}{m}\Bigl\{ \tfrac{1}{2}\bigl[f(x_0) + f(x_m)\bigr]
  + f(x_1) + \ldots + f(x_{m-1}) \Bigr\}
  + \frac{(b - a)^2}{12\, m^2}\bigl[f'(a) - f'(b)\bigr],     (9.32)

where the assumption that f ∈ C^1([a, b]) gives rise to the cancellation of
the first derivatives at the nodes x_k, with k = 1, ..., m − 1.
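To make the cancellation explicit (a short derivation added here for clarity), apply (9.30) on each subinterval [x_j, x_{j+1}] of width H = (b − a)/m and sum over j; the first-derivative contributions telescope:

\sum_{j=0}^{m-1} \Bigl\{ \frac{H}{2}\bigl[f(x_j) + f(x_{j+1})\bigr] + \frac{H^2}{12}\bigl[f'(x_j) - f'(x_{j+1})\bigr] \Bigr\}
= H \Bigl\{ \tfrac{1}{2}\bigl[f(x_0) + f(x_m)\bigr] + f(x_1) + \ldots + f(x_{m-1}) \Bigr\} + \frac{H^2}{12}\bigl[f'(a) - f'(b)\bigr],

which is precisely (9.32), since H^2/12 = (b − a)^2/(12 m^2).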

Example 9.5 Let us check experimentally the error estimate (9.31) in the simple
(m = 1) and composite (m > 1) cases, running Program 75 for the approximate
computation of the integral (9.18). Table 9.8 reports the behavior of the absolute
error as H is halved (that is, as m is doubled), together with the ratio R_m between
two consecutive errors. This ratio, as happens in the case of the Cavalieri-Simpson
formula, tends to 16, demonstrating that formula (9.32) has order of infinitesimal
equal to 4. Comparing Table 9.8 with the corresponding Table 9.1, we can also
notice that |E_{1,m}^{corr}(f)| ≃ 4|E_{2,m}(f)| (see Exercise 9). •




  m   E_{1,m}^{corr}(f)   R_m     m    E_{1,m}^{corr}(f)   R_m      m     E_{1,m}^{corr}(f)   R_m
  1   3.4813              --      8    4.4 · 10^{-3}       6.1      64    1.1 · 10^{-6}       15.957
  2   1.398               2.4     16   2.9 · 10^{-4}       14.9     128   7.3 · 10^{-8}       15.990
  4   2.72 · 10^{-2}      51.4    32   1.8 · 10^{-5}       15.8     256   4.5 · 10^{-9}       15.997

TABLE 9.8. Absolute error for the corrected trapezoidal formula in the computation
of the integral (9.18), I(f) = \int_0^{2\pi} x e^{-x}\cos(2x)\,dx





The corrected composite trapezoidal quadrature is implemented in Pro-
gram 75, where dfun contains the expression of the derivative of f .

Program 75 - trapmodc : Composite corrected trapezoidal formula
function int = trapmodc(a,b,m,fun,dfun)
% Composite corrected trapezoidal formula (9.32) on [a,b] with m subintervals;
% fun is a (vectorized) string in x, dfun a function handle or name for f'.
h=(b-a)/m; x=[a:h:b]; y=eval(fun);
f1a=feval(dfun,a); f1b=feval(dfun,b);
int=h*(0.5*y(1)+sum(y(2:m))+0.5*y(m+1))+(h^2/12)*(f1a-f1b);
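A usage sketch, added here for illustration with a hypothetical test integral (not taken from the text): checking that the error ratio tends to 16 on \int_0^1 e^x\,dx = e − 1.

fun = 'exp(x)';  dfun = @(x) exp(x);   % integrand (vectorized string in x) and its derivative
Iex = exp(1) - 1;  errold = NaN;
for m = [1 2 4 8 16 32]
  err = abs(Iex - trapmodc(0, 1, m, fun, dfun));
  fprintf('m = %2d   error = %10.3e   ratio = %6.2f\n', m, err, errold/err);
  errold = err;
end

The printed ratios should approach 16, in agreement with the fourth order of infinitesimal of (9.32).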




9.6 Richardson Extrapolation
The Richardson extrapolation method is a procedure which combines several
approximations of a certain quantity α_0 in a smart way to yield a more
accurate approximation of α_0. More precisely, assume that a method is
available to approximate α_0 by a quantity A(h) that is computable for any
value of the parameter h ≠ 0. Moreover, assume that, for a suitable k ≥ 0,
A(h) can be expanded as follows:

A(h) = \alpha_0 + \alpha_1 h + \ldots + \alpha_k h^k + R_{k+1}(h),     (9.33)

where |R_{k+1}(h)| ≤ C_{k+1} h^{k+1}. The constant C_{k+1} and the coefficients α_i,
for i = 0, ..., k, are independent of h. Henceforth, α_0 = \lim_{h→0} A(h).
Writing (9.33) with δh instead of h, for 0 < δ < 1 (typically, δ = 1/2),
we get

A(\delta h) = \alpha_0 + \alpha_1(\delta h) + \ldots + \alpha_k(\delta h)^k + R_{k+1}(\delta h).

Subtracting (9.33) multiplied by δ from this expression then yields

B(h) = \frac{A(\delta h) - \delta A(h)}{1 - \delta}
  = \alpha_0 + \hat{\alpha}_2 h^2 + \ldots + \hat{\alpha}_k h^k + \hat{R}_{k+1}(h),

having defined, for k ≥ 2, \hat{α}_i = α_i(δ^i − δ)/(1 − δ) for i = 2, ..., k, and
\hat{R}_{k+1}(h) = [R_{k+1}(δh) − δ R_{k+1}(h)]/(1 − δ).
Notice that \hat{α}_i = 0 iff α_i = 0. In particular, if α_1 ≠ 0, then A(h) is a first-
order approximation of α_0, while B(h) is at least second-order accurate.
More generally, if A(h) is an approximation of α_0 of order p, then the
quantity B(h) = [A(δh) − δ^p A(h)]/(1 − δ^p) approximates α_0 up to order
p + 1 (at least).
Proceeding by induction, the following Richardson extrapolation algorithm
is generated: setting n ≥ 0, h > 0 and δ ∈ (0, 1), we construct the sequences

A_{m,0} = A(\delta^m h), \quad m = 0, \ldots, n,
\qquad
A_{m,q+1} = \frac{A_{m,q} - \delta^{q+1} A_{m-1,q}}{1 - \delta^{q+1}},
\quad q = 0, \ldots, n - 1, \quad m = q + 1, \ldots, n.     (9.34)
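A minimal MATLAB sketch of (9.34) is added here for illustration (the function name richextr and its interface are not from the text); it builds the extrapolation table column by column, with T(m+1,q+1) storing A_{m,q}:

function T = richextr(A, h, delta, n)
% Richardson extrapolation table (9.34); A is a function handle returning A(h).
T = zeros(n+1, n+1);
for m = 0:n
  T(m+1,1) = A(delta^m * h);          % A_{m,0} = A(delta^m h)
end
for q = 0:n-1
  for m = q+1:n                       % A_{m,q+1} from A_{m,q} and A_{m-1,q}
    T(m+1,q+2) = (T(m+1,q+1) - delta^(q+1)*T(m,q+1)) / (1 - delta^(q+1));
  end
end

In line with the discussion above, each column gains (at least) one order of accuracy with respect to the previous one.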
