
\[
A_{m,q+1} = \frac{A_{m,q} - \delta^{q+1} A_{m-1,q}}{1 - \delta^{q+1}}, \qquad q = 0, \ldots, n-1, \quad m = q+1, \ldots, n,
\]
which can be represented by the diagram below
A_{0,0}
A_{1,0} → A_{1,1}
A_{2,0} → A_{2,1} → A_{2,2}
A_{3,0} → A_{3,1} → A_{3,2} → A_{3,3}
  ⋮         ⋮         ⋮         ⋮
A_{n,0} → A_{n,1} → A_{n,2} → A_{n,3} → ⋯ → A_{n,n}
where the arrows indicate the way in which the previously computed terms contribute to the construction of the “new” ones.
The following result can be proved (see [Com95], Proposition 4.1).

Property 9.2 For n ≥ 0 and δ ∈ (0, 1),

\[
A_{m,n} = \alpha_0 + O\big((\delta^m h)^{n+1}\big), \qquad m = 0, \ldots, n. \tag{9.35}
\]

In particular, for the terms in the first column (n = 0) the convergence rate to α_0 is O(δ^m h), while for those of the last one it is O((δ^m h)^{n+1}), i.e., n times higher.

Example 9.6 Richardson extrapolation has been employed to approximate at x = 0 the derivative of the function f(x) = xe^{-x} cos(2x), introduced in Example 9.1. For this purpose, algorithm (9.34) has been executed with A(h) = [f(x + h) − f(x)]/h, δ = 0.5, n = 5 and h = 0.1. Table 9.9 reports the sequence of absolute errors E_{m,k} = |α_0 − A_{m,k}|. The results demonstrate that the error decays as predicted by (9.35).


E_{m,0}          E_{m,1}          E_{m,2}          E_{m,3}          E_{m,4}           E_{m,5}
0.113            –                –                –                –                 –
5.3 · 10^{-2}    6.1 · 10^{-3}    –                –                –                 –
2.6 · 10^{-2}    1.7 · 10^{-3}    2.2 · 10^{-4}    –                –                 –
1.3 · 10^{-2}    4.5 · 10^{-4}    2.8 · 10^{-5}    5.5 · 10^{-7}    –                 –
6.3 · 10^{-3}    1.1 · 10^{-4}    3.5 · 10^{-6}    3.1 · 10^{-8}    3.0 · 10^{-9}     –
3.1 · 10^{-3}    2.9 · 10^{-5}    4.5 · 10^{-7}    1.9 · 10^{-9}    9.9 · 10^{-11}    4.9 · 10^{-12}
TABLE 9.9. Errors in the Richardson extrapolation for the approximate evaluation of f′(0), where f(x) = xe^{-x} cos(2x)
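The recurrence (9.34) is short enough to be coded directly. The following MATLAB lines are a minimal sketch (not one of the book's Programs) reproducing the setting of Example 9.6; the exact value f′(0) = 1 is used to recover the errors of Table 9.9.

```matlab
% Richardson extrapolation (9.34) for the forward difference
% A(h) = [f(x+h) - f(x)]/h of Example 9.6.
f  = @(x) x.*exp(-x).*cos(2*x);   % function of Example 9.1
x0 = 0; h = 0.1; delta = 0.5; n = 5;
A  = zeros(n+1);                  % A(m+1,q+1) stores A_{m,q}
for m = 0:n                       % first column: A_{m,0} = A(delta^m*h)
  hm = delta^m*h;
  A(m+1,1) = (f(x0+hm) - f(x0))/hm;
end
for q = 0:n-1                     % extrapolation sweeps of (9.34)
  for m = q+1:n
    A(m+1,q+2) = (A(m+1,q+1) - delta^(q+1)*A(m,q+1))/(1 - delta^(q+1));
  end
end
E = abs(1 - A);                   % errors of Table 9.9 (entries with q <= m)
```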



9.6.1 Romberg Integration
The Romberg integration method is an application of Richardson extrap-
olation to the composite trapezoidal rule. The following result, known as
the Euler-MacLaurin formula, will be useful (for its proof see, e.g., [Ral65],
pp. 131-133, and [DR75], pp. 106-111).

Property 9.3 Let f ∈ C^{2k+2}([a, b]), for k ≥ 0, and let us approximate α_0 = ∫_a^b f(x) dx by the composite trapezoidal rule (9.14). Letting h_m = (b − a)/m for m ≥ 1,

\[
I_{1,m}(f) = \alpha_0 + \sum_{i=1}^{k} \frac{B_{2i}}{(2i)!}\, h_m^{2i} \left[ f^{(2i-1)}(b) - f^{(2i-1)}(a) \right] + \frac{B_{2k+2}}{(2k+2)!}\, h_m^{2k+2} (b-a)\, f^{(2k+2)}(\eta), \tag{9.36}
\]

where η ∈ (a, b) and B_{2j} = (−1)^{j−1} ∑_{n=1}^{+∞} 2(2j)!/(2nπ)^{2j}, for j ≥ 1, are
the Bernoulli numbers.
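The leading term of the expansion (9.36) is easily checked numerically. The following MATLAB sketch is our own illustration (the test integrand f(x) = e^x is an assumption, not taken from the text): it compares the trapezoidal error with (B_2/2!) h_m^2 [f′(b) − f′(a)], where B_2 = 1/6.

```matlab
% Numerical check of the leading term of the Euler-MacLaurin formula (9.36).
f = @(x) exp(x); df = @(x) exp(x);   % assumed test integrand and its derivative
a = 0; b = 1; alpha0 = exp(1) - 1;   % exact value of the integral
for m = [10 20 40]
  h = (b - a)/m; x = linspace(a, b, m+1);
  I1m  = h*(sum(f(x)) - (f(a) + f(b))/2);   % composite trapezoidal rule (9.14)
  lead = (1/6)/2*h^2*(df(b) - df(a));       % (B_2/2!) h_m^2 [f'(b) - f'(a)]
  fprintf('m = %3d  error = %10.3e  leading term = %10.3e\n', m, I1m - alpha0, lead)
end
```

Halving h_m reduces both columns by a factor of four, in agreement with the even-power expansion.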

Equation (9.36) is a special case of (9.33) where h = h_m^2 and A(h) = I_{1,m}(f); notice that only even powers of the parameter h_m appear in the expansion.
The Richardson extrapolation algorithm (9.34) applied to (9.36) gives

\[
\begin{aligned}
A_{m,0} &= A(\delta^m h), & & m = 0, \ldots, n, \\[1mm]
A_{m,q+1} &= \frac{A_{m,q} - \delta^{2(q+1)} A_{m-1,q}}{1 - \delta^{2(q+1)}}, & & q = 0, \ldots, n-1, \quad m = q+1, \ldots, n.
\end{aligned} \tag{9.37}
\]

Setting h = b − a and δ = 1/2 in (9.37), and denoting by T(h_s) = I_{1,s}(f) the composite trapezoidal formula (9.14) over s = 2^m subintervals of width h_s = (b − a)/2^m, for m ≥ 0, the algorithm (9.37) becomes

\[
\begin{aligned}
A_{m,0} &= T\big((b-a)/2^m\big), & & m = 0, \ldots, n, \\[1mm]
A_{m,q+1} &= \frac{4^{q+1} A_{m,q} - A_{m-1,q}}{4^{q+1} - 1}, & & q = 0, \ldots, n-1, \quad m = q+1, \ldots, n.
\end{aligned}
\]

This is the Romberg numerical integration algorithm. Recalling (9.35), the following convergence result holds for Romberg integration:

\[
A_{m,n} = \int_a^b f(x)\, dx + O\big(h_s^{2(n+1)}\big), \qquad n \ge 0.
\]

Example 9.7 Table 9.10 shows the results obtained by running Program 76 to compute the quantity α_0 in the two cases α_0^{(1)} = ∫_0^π e^x cos(x) dx = −(e^π + 1)/2 and α_0^{(2)} = ∫_0^1 √x dx = 2/3.
The maximum size n has been set equal to 9. In the second and third columns we show the absolute errors E_k^{(r)} = |α_0^{(r)} − A_{k+1,k+1}^{(r)}|, for r = 1, 2 and k = 0, …, 7.
The convergence to zero is much faster for E_k^{(1)} than for E_k^{(2)}. Indeed, the first integrand function is infinitely differentiable whereas the second is only continuous.

k   E_k^{(1)}         E_k^{(2)}         k   E_k^{(1)}          E_k^{(2)}
0   22.71             0.1670            4   8.923 · 10^{-7}    1.074 · 10^{-3}
1   0.4775            2.860 · 10^{-2}   5   6.850 · 10^{-11}   3.790 · 10^{-4}
2   5.926 · 10^{-2}   8.910 · 10^{-3}   6   5.330 · 10^{-14}   1.340 · 10^{-4}
3   7.410 · 10^{-5}   3.060 · 10^{-3}   7   0                  4.734 · 10^{-5}
TABLE 9.10. Romberg integration for the approximate evaluation of ∫_0^π e^x cos(x) dx (error E_k^{(1)}) and ∫_0^1 √x dx (error E_k^{(2)})


The Romberg algorithm is implemented in Program 76.




Program 76 - romberg : Romberg integration
function [A]=romberg(a,b,n,fun)
% First column: composite trapezoidal rule (Program trapezc) on 2^(i-1) subintervals
for i=1:(n+1), A(i,1)=trapezc(a,b,2^(i-1),fun); end
% Extrapolation sweeps of the Romberg recurrence
for j=2:(n+1), for i=j:(n+1),
A(i,j)=(4^(j-1)*A(i,j-1)-A(i-1,j-1))/(4^(j-1)-1); end; end
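A possible usage sketch follows; it assumes that trapezc, the composite trapezoidal program invoked by romberg, is available on the MATLAB path with the signature trapezc(a,b,m,fun).

```matlab
% Hypothetical usage of Program 76 on the first integral of Example 9.7.
fun = @(x) exp(x).*cos(x);
A = romberg(0, pi, 9, fun);              % n = 9 extrapolation levels
err = abs(A(end,end) + (exp(pi)+1)/2)    % compare with Table 9.10
```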




9.7 Automatic Integration
An automatic numerical integration program, or automatic integrator, is a set of algorithms which yield an approximation of the integral I(f) = ∫_a^b f(x) dx, within a given absolute tolerance, ε_a, or relative tolerance, ε_r, prescribed by the user.
With this aim, the program generates a sequence {I_k, E_k}, for k = 1, …, N, where I_k is the approximation of I(f) at the k-th step of the computational process, E_k is an estimate of the error I(f) − I_k, and N is a suitable fixed integer.
The sequence terminates at the s-th level, with s ≤ N, such that the automatic integrator fulfills the following requirement on the accuracy:

\[
\max\{\varepsilon_a, \varepsilon_r |\tilde{I}(f)|\} \ge |E_s| \;\big(\simeq |I(f) - I_s|\big), \tag{9.38}
\]

where Ĩ(f) is a reasonable guess of the integral I(f) provided as an input datum by the user. Otherwise, the integrator returns the last computed approximation I_N, together with a suitable error message that warns the user of the algorithm's failure to converge.
Ideally, an automatic integrator should:

(a) provide a reliable criterion for determining |E_s| that allows for monitoring the convergence check (9.38);

(b) ensure an efficient implementation, which minimizes the number of functional evaluations for yielding the desired approximation I_s.

In computational practice, for each k ≥ 1, moving from level k to level k + 1 of the automatic integration process can be done according to two different strategies, which we define as non adaptive or adaptive.
In the non adaptive case, the law of distribution of the quadrature nodes is fixed a priori and the quality of the estimate I_k is refined by increasing the number of nodes corresponding to each level of the computational process. An example of an automatic integrator that is based on such a procedure is provided by the composite Newton-Cotes formulae on m and

2m subintervals, respectively, at levels k and k + 1, as described in Section
9.7.1.
In the adaptive case, the positions of the nodes are not set a priori; at each level k of the process they depend on the information that has been stored during the previous k − 1 levels. An adaptive automatic integration algorithm proceeds by partitioning the interval [a, b] into successive subdivisions which are characterized by a nonuniform density of the nodes, this density being typically higher in a neighborhood of strong gradients or singularities of f. An example of an adaptive integrator based on the Cavalieri-Simpson formula is described in Section 9.7.2.



9.7.1 Non Adaptive Integration Algorithms
In this section, we employ the composite Newton-Cotes formulae. Our aim is to devise a criterion for estimating the absolute error |I(f) − I_k| by using Richardson extrapolation. From (9.26) and (9.27) it turns out that, for m ≥ 1 and n ≥ 0, the quadrature error associated with I_{n,m}(f) has order of infinitesimal equal to H^{n+p}, with p = 2 for n even and p = 1 for n odd, where m, n and H = (b − a)/m are the number of partitions of [a, b], the number of quadrature nodes over each subinterval and the constant length of each subinterval, respectively. By doubling the value of m (i.e., halving the stepsize H) and proceeding by extrapolation, we get

\[
I(f) - I_{n,2m}(f) \simeq \frac{1}{2^{n+p}} \left[ I(f) - I_{n,m}(f) \right]. \tag{9.39}
\]

The use of the symbol ≃ instead of = is due to the fact that the point ξ or η, where the derivative in (9.26) and (9.27) must be evaluated, changes when passing from m to 2m subintervals. Solving (9.39) with respect to I(f) yields the following absolute error estimate for I_{n,2m}(f):

\[
I(f) - I_{n,2m}(f) \simeq \frac{I_{n,2m}(f) - I_{n,m}(f)}{2^{n+p} - 1}. \tag{9.40}
\]

If the composite Simpson rule is considered (i.e., n = 2), (9.40) predicts a reduction of the absolute error by a factor of 15 when passing from m to 2m subintervals. Notice also that only 2m − 1 extra functional evaluations are needed to compute the new approximation I_{1,2m}(f) starting from I_{1,m}(f). Relation (9.40) is an instance of an a posteriori error estimate (see Chapter 2, Section 2.3). It is based on the combined use of an a priori estimate (in this case, (9.26) or (9.27)) and of two evaluations of the quantity to be approximated (the integral I(f)) for two different values of the discretization parameter (that is, H = (b − a)/m).

Example 9.8 Let us employ the a posteriori estimate (9.40) in the case of the composite Simpson formula (n = p = 2), for the approximation of the integral

\[
\int_0^{\pi} (e^{x/2} + \cos 4x)\, dx = 2(e^{\pi/2} - 1) \simeq 7.621,
\]

where we require the absolute error to be less than 10^{-4}. For k = 0, 1, …, set h_k = (b − a)/2^k and denote by I_{2,m(k)}(f) the integral of f computed using the composite Simpson formula on a grid of size h_k with m(k) = 2^k intervals. We can thus assume as a conservative estimate of the quadrature error the following quantity:

\[
|E_k| = |I(f) - I_{2,m(k)}(f)| \simeq \frac{1}{10} |I_{2,2m(k)}(f) - I_{2,m(k)}(f)| = |\tilde{E}_k|, \qquad k \ge 1. \tag{9.41}
\]

Table 9.11 shows the sequence of the estimated errors |Ẽ_k| and of the corresponding absolute errors |E_k| that have actually been made by the numerical integration process. Notice that, when convergence has been achieved, the error estimated by (9.41) is definitely higher than the actual error, due to the conservative choice above.


k   |Ẽ_k|    |E_k|    k   |Ẽ_k|           |E_k|
0   –        3.156    2   0.10            4.52 · 10^{-5}
1   0.42     1.047    3   5.8 · 10^{-6}   2 · 10^{-9}
TABLE 9.11. Non adaptive automatic Simpson rule for the approximation of ∫_0^π (e^{x/2} + cos 4x) dx
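The integrator of this example can be sketched in a few MATLAB lines. The composite Simpson sum below is written inline as our own stand-in for the book's program; the loop stops as soon as the conservative estimate (9.41) falls below the prescribed tolerance 10^{-4}.

```matlab
% Non adaptive Simpson integration with the a posteriori estimate (9.41).
f = @(x) exp(x/2) + cos(4*x);
a = 0; b = pi; tol = 1e-4;
% composite Simpson rule over m equal subintervals (inline stand-in)
simpc = @(m) (b-a)/(6*m)*(f(a) + f(b) + 2*sum(f(a+(1:m-1)*(b-a)/m)) ...
                          + 4*sum(f(a+((1:m)-0.5)*(b-a)/m)));
Iold = simpc(1);                   % k = 0: m(0) = 1 interval
for k = 1:10
  Inew  = simpc(2^k);              % doubling: m(k) = 2^k intervals
  Etild = abs(Inew - Iold)/10;     % conservative estimate (9.41)
  if Etild <= tol, break, end
  Iold = Inew;
end
```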


An alternative approach for fulfilling the constraints (a) and (b) consists of employing a nested sequence of special Gaussian quadratures I_k(f) (see Chapter 10), having increasing degree of exactness for k = 1, …, N. These formulae are constructed in such a way that, denoting by S_{n_k} = {x_1, …, x_{n_k}} the set of quadrature nodes relative to quadrature I_k(f), S_{n_k} ⊂ S_{n_{k+1}} for any k = 1, …, N − 1. As a result, for k ≥ 1, the formula at the (k+1)-th level employs all the nodes of the formula at level k, and this makes nested formulae quite effective for computer implementation.
As an example, we recall the Gauss-Kronrod formulae with 10, 21, 43 and 87 points, that are available in [PdKÜK83] (in this case, N = 4). The Gauss-Kronrod formulae have degree of exactness r_{n_k} (optimal) equal to 2n_k − 1, where n_k is the number of nodes for each formula, with n_1 = 10 and n_{k+1} = 2n_k + 1 for k = 1, 2, 3. The criterion for devising an error estimate is based on comparing the results given by two successive formulae I_{n_k}(f) and I_{n_{k+1}}(f) with k = 1, 2, 3, and then terminating the computational process at the level k such that (see also [DR75], p. 321)

\[
|I_{k+1} - I_k| \le \max\{\varepsilon_a, \varepsilon_r |I_{k+1}|\}.
\]
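In MATLAB the strategy can be sketched generically as follows; nested_quad, rules, epsa and epsr are hypothetical names (the cell array rules standing, e.g., for the four Gauss-Kronrod formulae above, each returning its approximation of I(f)).

```matlab
function I = nested_quad(rules, epsa, epsr)
% Sketch of an automatic integrator based on a nested sequence of formulae.
% rules{k}() returns the k-th approximation of I(f), with increasing exactness.
Iold = rules{1}();
for k = 2:numel(rules)
  Inew = rules{k}();
  if abs(Inew - Iold) <= max(epsa, epsr*abs(Inew))
    I = Inew; return;              % termination criterion above is fulfilled
  end
  Iold = Inew;
end
error('nested_quad: failure to converge within N levels');
```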

9.7.2 Adaptive Integration Algorithms
The goal of an adaptive integrator is to yield an approximation of I(f) within a fixed tolerance ε by a non uniform distribution of the integration stepsize along the interval [a, b]. An optimal algorithm is able to adapt automatically the choice of the steplength according to the behavior of the integrand function, by increasing the density of the quadrature nodes where the function exhibits stronger variations.
In view of describing the method, it is convenient to restrict our attention to a generic subinterval [α, β] ⊆ [a, b]. Recalling the error estimates for the Newton-Cotes formulae, it turns out that the evaluation of the derivatives of f, up to a certain order, is needed to set a stepsize h such that a fixed accuracy is ensured, say ε(β − α)/(b − a). This procedure, which is unfeasible in practical computations, is carried out by an automatic integrator as follows. We consider throughout this section the Cavalieri-Simpson formula (9.15), although the method can be extended to other quadrature rules.
Set I_f(α, β) = ∫_α^β f(x) dx, h = h_0 = (β − α)/2 and

\[
S_f(\alpha, \beta) = \frac{h_0}{3} \left[ f(\alpha) + 4 f(\alpha + h_0) + f(\beta) \right].
\]

From (9.16) we get

\[
I_f(\alpha, \beta) - S_f(\alpha, \beta) = -\frac{h_0^5}{90}\, f^{(4)}(\xi), \tag{9.42}
\]

where ξ is a point in (α, β). To estimate the error I_f(α, β) − S_f(α, β) without using explicitly the function f^{(4)}, we employ again the Cavalieri-Simpson formula over the union of the two subintervals [α, (α + β)/2] and [(α + β)/2, β], obtaining, for h = h_0/2 = (β − α)/4,

(h0 /2)5
If (±, β) ’ Sf,2 (±, β) = ’ f (4) (ξ) + f (4) (·) ,
90
where ξ ∈ (±, (± + β)/2), · ∈ ((± + β)/2, β) and Sf,2 (±, β) = Sf (±, (± +
β)/2) + Sf ((± + β)/2, β).
Let us now make the assumption that f^{(4)}(ξ) ≃ f^{(4)}(η) (which is true, in general, only if the function f^{(4)} does not vary “too much” on [α, β]). Then,

\[
I_f(\alpha, \beta) - S_{f,2}(\alpha, \beta) \simeq -\frac{1}{16}\, \frac{h_0^5}{90}\, f^{(4)}(\xi), \tag{9.43}
\]
with a reduction of the error by a factor of 16 with respect to (9.42), corresponding to the halving of the steplength. Comparing (9.42) and (9.43), we get the estimate

\[
\frac{h_0^5}{90}\, f^{(4)}(\xi) \simeq \frac{16}{15}\, E_f(\alpha, \beta),
\]

where E_f(α, β) = S_f(α, β) − S_{f,2}(α, β). Then, from (9.43), we have

\[
|I_f(\alpha, \beta) - S_{f,2}(\alpha, \beta)| \simeq \frac{|E_f(\alpha, \beta)|}{15}. \tag{9.44}
\]

We have thus obtained a formula that allows for easily computing the error made by using composite Cavalieri-Simpson numerical integration on the generic interval [α, β]. Relation (9.44), as well as (9.40), is another instance of an a posteriori error estimate. It combines the use of an a priori estimate (in this case, (9.16)) and of two evaluations of the quantity to be approximated (the integral I(f)) for two different values of the discretization parameter h.
In practice, it might be convenient to adopt a more conservative error estimate, precisely

\[
|I_f(\alpha, \beta) - S_{f,2}(\alpha, \beta)| \simeq |E_f(\alpha, \beta)|/10.
\]

Moreover, to ensure a global accuracy on [a, b] equal to the fixed tolerance ε, it will suffice to enforce that the error E_f(α, β) satisfies on each single subinterval [α, β] ⊆ [a, b] the following constraint:

\[
\frac{|E_f(\alpha, \beta)|}{10} \le \varepsilon\, \frac{\beta - \alpha}{b - a}. \tag{9.45}
\]
The adaptive automatic integration algorithm can be described as follows.
Denote by:
1. A: the active integration interval, i.e., the interval where the integral is being computed;

2. S: the integration interval already examined, for which the error test (9.45) has been successfully passed;

3. N: the integration interval yet to be examined.

At the beginning of the integration process we have N = [a, b], A = N and S = ∅, while the situation at the generic step of the algorithm is depicted in Figure 9.3. Set J_S(f) ≃ ∫_a^α f(x) dx, with J_S(f) = 0 at the beginning of the process; if the algorithm successfully terminates, J_S(f) yields the desired approximation of I(f). We also denote by J_{(α,β)}(f) the approximate integral of f over the “active” interval [α, β]. This interval is drawn in bold in Figure 9.3. At each step of the adaptive integration method the following decisions are taken:

1. if the local error test (9.45) is passed, then:
(i) J_S(f) is increased by J_{(α,β)}(f), that is, J_S(f) ← J_S(f) + J_{(α,β)}(f);
(ii) we let S ← S ∪ A, A = N (corresponding to the path (I) in Figure 9.3), β = b.

[Figure 9.3: status at a generic step of the adaptive algorithm, with S = [a, α] the examined interval, A = [α, β] the active interval (in bold) and N = [β, b] the interval yet to be examined.]
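Although the interval bookkeeping with S, A and N continues in the text, the same strategy can be sketched compactly by recursion. In the following MATLAB sketch, adaptsim is a hypothetical function name: each call applies the test (9.45) to the active interval and bisects it on failure, the error allowance being split in proportion to the subinterval lengths, as (9.45) requires. The top-level call is J = adaptsim(f,a,b,tol), since (β − α)/(b − a) = 1 there.

```matlab
function J = adaptsim(f, alpha, beta, tolab)   % tolab = eps*(beta-alpha)/(b-a)
% Recursive sketch of adaptive Cavalieri-Simpson integration, based on
% the error test (9.45) with the estimator Ef(alpha,beta) of (9.44).
h0 = (beta - alpha)/2; gamma = alpha + h0;     % midpoint of [alpha,beta]
S  = h0/3*(f(alpha) + 4*f(gamma) + f(beta));   % Sf(alpha,beta)
S2 = h0/6*(f(alpha) + 4*f(alpha+h0/2) + 2*f(gamma) ...
           + 4*f(gamma+h0/2) + f(beta));       % Sf,2(alpha,beta)
if abs(S - S2)/10 <= tolab                     % local test (9.45)
  J = S2;                                      % accept the active interval
else                                           % bisect and halve the allowance
  J = adaptsim(f, alpha, gamma, tolab/2) + adaptsim(f, gamma, beta, tolab/2);
end
```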
