
Here m ≥ 1 denotes the (fixed) number of corrector iterations that are carried out in the following steps [E], [C]: for k = 0, 1, ..., m − 1

\[
[E] \quad f_{n+1}^{(k)} = f(t_{n+1}, u_{n+1}^{(k)}),
\]
\[
[C] \quad u_{n+1}^{(k+1)} = \sum_{j=0}^{p} a_j u_{n-j}^{(m)} + h b_{-1} f_{n+1}^{(k)} + h \sum_{j=0}^{p} b_j f_{n-j}^{(m-1)}.
\]

These implementations of the predictor-corrector technique are referred to as P(EC)^m. Another implementation, denoted by P(EC)^m E, consists of also updating the function f at the end of the process, and is given by

\[
[P] \quad u_{n+1}^{(0)} = \sum_{j=0}^{\tilde{p}} \tilde{a}_j u_{n-j}^{(m)} + h \sum_{j=0}^{\tilde{p}} \tilde{b}_j f_{n-j}^{(m)},
\]

and for k = 0, 1, ..., m − 1,

\[
[E] \quad f_{n+1}^{(k)} = f(t_{n+1}, u_{n+1}^{(k)}),
\]
\[
[C] \quad u_{n+1}^{(k+1)} = \sum_{j=0}^{p} a_j u_{n-j}^{(m)} + h b_{-1} f_{n+1}^{(k)} + h \sum_{j=0}^{p} b_j f_{n-j}^{(m)},
\]

followed by

\[
[E] \quad f_{n+1}^{(m)} = f(t_{n+1}, u_{n+1}^{(m)}).
\]
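In code, one step of the scheme maps directly onto the [P][E][C]...[E] sequence above. The Python sketch below is illustrative only (the document's own MATLAB implementation is Program 93 below); the argument layout is hypothetical: history arrays ordered most-recent-first, and `bm1` standing for b_{-1}.

```python
import numpy as np

def pec_step(f, tn, u_hist, f_hist, h, at, bt, a, b, bm1, m, pece=True):
    """One step of P(EC)^m (pece=False) or P(EC)^mE (pece=True) for a
    scalar ODE y' = f(t, y).  u_hist/f_hist hold the most recent u- and
    f-values, most recent first (a hypothetical layout); at, bt are the
    predictor coefficients, a, b and bm1 = b_{-1} those of the
    corrector.  Requires m >= 1."""
    tn1 = tn + h
    # [P] predict u_{n+1}^{(0)} from past values
    u = np.dot(at, u_hist) + h * np.dot(bt, f_hist)
    for _ in range(m):
        fk = f(tn1, u)                                  # [E]
        # [C] one corrector sweep using the new f_{n+1}^{(k)}
        u = np.dot(a, u_hist) + h * bm1 * fk + h * np.dot(b, f_hist)
    # final [E] only in the P(EC)^mE variant
    f_new = f(tn1, u) if pece else fk
    return u, f_new
```

With at = bt = [1] (forward Euler predictor), a = [1], b = [1/2], bm1 = 1/2 and m = 1 this reproduces Heun's method of Example 11.8 below.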

Example 11.8 Heun's method (11.10) can be regarded as a predictor-corrector method whose predictor is the forward Euler method, while the corrector is the Crank-Nicolson method.

Another example is provided by the Adams-Bashforth method of order 2

(11.50) and the Adams-Moulton method of order 3 (11.51). Its corresponding

11.7 Predictor-Corrector Methods 505

PEC implementation is: given $u_0^{(0)} = u_0^{(1)} = u_0$, $u_1^{(0)} = u_1^{(1)} = u_1$ and $f_0^{(0)} = f(t_0, u_0^{(0)})$, $f_1^{(0)} = f(t_1, u_1^{(0)})$, compute for $n = 1, 2, \ldots$,

\[
\begin{aligned}
[P] \quad & u_{n+1}^{(0)} = u_n^{(1)} + \frac{h}{2}\left[ 3 f_n^{(0)} - f_{n-1}^{(0)} \right],\\
[E] \quad & f_{n+1}^{(0)} = f(t_{n+1}, u_{n+1}^{(0)}),\\
[C] \quad & u_{n+1}^{(1)} = u_n^{(1)} + \frac{h}{12}\left[ 5 f_{n+1}^{(0)} + 8 f_n^{(0)} - f_{n-1}^{(0)} \right],
\end{aligned}
\]

while the PECE implementation is: given $u_0^{(0)} = u_0^{(1)} = u_0$, $u_1^{(0)} = u_1^{(1)} = u_1$ and $f_0^{(1)} = f(t_0, u_0^{(1)})$, $f_1^{(1)} = f(t_1, u_1^{(1)})$, compute for $n = 1, 2, \ldots$,

\[
\begin{aligned}
[P] \quad & u_{n+1}^{(0)} = u_n^{(1)} + \frac{h}{2}\left[ 3 f_n^{(1)} - f_{n-1}^{(1)} \right],\\
[E] \quad & f_{n+1}^{(0)} = f(t_{n+1}, u_{n+1}^{(0)}),\\
[C] \quad & u_{n+1}^{(1)} = u_n^{(1)} + \frac{h}{12}\left[ 5 f_{n+1}^{(0)} + 8 f_n^{(1)} - f_{n-1}^{(1)} \right],\\
[E] \quad & f_{n+1}^{(1)} = f(t_{n+1}, u_{n+1}^{(1)}).
\end{aligned}
\]

•
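As a concrete illustration, the PECE loop above can be written out directly. The sketch below (not from the text) uses an arbitrary test problem y' = −y; the extra starting value y1 needed by the two-step predictor is assumed to be supplied, e.g. by one Runge-Kutta step.

```python
import math

def ab2_am3_pece(f, t0, y0, y1, h, n_steps):
    """PECE pairing of AB2 (predictor) with AM3 (corrector), written
    out from the scheme above; y1 is the extra starting value."""
    t = [t0, t0 + h]
    u = [y0, y1]
    fv = [f(t[0], u[0]), f(t[1], u[1])]                    # f_0^{(1)}, f_1^{(1)}
    for _ in range(n_steps - 1):
        tn1 = t[-1] + h
        up = u[-1] + h / 2 * (3 * fv[-1] - fv[-2])         # [P]
        fp = f(tn1, up)                                    # [E]
        uc = u[-1] + h / 12 * (5 * fp + 8 * fv[-1] - fv[-2])  # [C]
        t.append(tn1)
        u.append(uc)
        fv.append(f(tn1, uc))                              # final [E]
    return t, u

# y' = -y, y(0) = 1: the corrector's third-order accuracy is observed
ts, us = ab2_am3_pece(lambda t, y: -y, 0.0, 1.0, math.exp(-0.01), 0.01, 100)
err = abs(us[-1] - math.exp(-ts[-1]))
```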

Before studying the convergence of predictor-corrector methods, we introduce a simplification in the notation. Usually the number of steps of the predictor is greater than that of the corrector, so we define the number of steps of the predictor-corrector pair to be the number of steps of the predictor. This number will be denoted henceforth by p. Owing to this definition, we no longer demand that the coefficients of the corrector satisfy |a_p| + |b_p| ≠ 0. Consider for example the predictor-corrector pair

\[
\begin{aligned}
[P] \quad & u_{n+1}^{(0)} = u_n^{(1)} + h f(t_{n-1}, u_{n-1}^{(0)}),\\
[C] \quad & u_{n+1}^{(1)} = u_n^{(1)} + \frac{h}{2}\left[ f(t_n, u_n^{(0)}) + f(t_{n+1}, u_{n+1}^{(0)}) \right],
\end{aligned}
\]

for which p = 2 (even though the corrector is a one-step method). Consequently, the first and second characteristic polynomials of the corrector method will be $\rho(r) = r^2 - r$ and $\sigma(r) = (r^2 + r)/2$ instead of $\rho(r) = r - 1$ and $\sigma(r) = (r + 1)/2$.

In any predictor-corrector method, the truncation error of the predictor combines with that of the corrector, generating a new truncation error which we are going to examine. Let q̃ and q be, respectively, the orders of the predictor and the corrector, and assume that $y \in C^{\hat{q}+1}$, where $\hat{q} = \max(\tilde{q}, q)$.

506 11. Numerical Solution of Ordinary Differential Equations

Then

\[
\begin{aligned}
y(t_{n+1}) - \sum_{j=0}^{\tilde{p}} \tilde{a}_j y(t_{n-j}) - h \sum_{j=0}^{\tilde{p}} \tilde{b}_j f(t_{n-j}, y_{n-j})
&= \tilde{C}_{\tilde{q}+1} h^{\tilde{q}+1} y^{(\tilde{q}+1)}(t_n) + \mathcal{O}(h^{\tilde{q}+2}),\\
y(t_{n+1}) - \sum_{j=0}^{p} a_j y(t_{n-j}) - h \sum_{j=-1}^{p} b_j f(t_{n-j}, y_{n-j})
&= C_{q+1} h^{q+1} y^{(q+1)}(t_n) + \mathcal{O}(h^{q+2}),
\end{aligned}
\]

where $\tilde{C}_{\tilde{q}+1}$ and $C_{q+1}$ are the error constants of the predictor and the corrector method, respectively. The following result holds.

Property 11.3 Let the predictor method have order q̃ and the corrector method have order q. Then:

- If q̃ ≥ q (or q̃ < q with m > q − q̃), then the predictor-corrector method has the same order and the same PLTE as the corrector.
- If q̃ < q and m = q − q̃, then the predictor-corrector method has the same order as the corrector, but different PLTE.
- If q̃ < q and m ≤ q − q̃ − 1, then the predictor-corrector method has order equal to q̃ + m (thus less than q).

In particular, notice that if the predictor has order q − 1 and the corrector has order q, the PEC scheme suffices to obtain a method of order q. Moreover, the P(EC)^m E and P(EC)^m schemes always have the same order and the same PLTE.
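The truncation-error expansions and error constants above can be checked numerically. For instance, for the AB2 predictor, the residual on an exact solution divided by h³y'''(t_n) should approach its error constant C̃₃ = 5/12 (the standard Adams value, stated here as an assumption since Table 11.1 is not reproduced in this excerpt). A Python sketch:

```python
import math

def ab2_residual(y, f, tn, h):
    """Residual y(t_{n+1}) - y(t_n) - h[(3/2) f_n - (1/2) f_{n-1}]
    of the two-step Adams-Bashforth formula on an exact solution y."""
    return (y(tn + h) - y(tn)
            - h * (1.5 * f(tn, y(tn)) - 0.5 * f(tn - h, y(tn - h))))

# For y = e^t we have y'''(0) = 1, so the ratio tends to 5/12 = 0.4166...
h = 1e-3
C = ab2_residual(math.exp, lambda t, y: y, 0.0, h) / h**3
```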

Combining the Adams-Bashforth method of order q with the corresponding Adams-Moulton method of the same order we obtain the so-called ABM method of order q. It is possible to estimate its PLTE as

\[
\frac{C_{q+1}}{C_{q+1}^{\ast} - C_{q+1}} \left( u_{n+1}^{(m)} - u_{n+1}^{(0)} \right),
\]

where $C_{q+1}^{\ast}$ and $C_{q+1}$ are the error constants given in Table 11.1. Accordingly, the steplength h can be decreased if the estimate of the PLTE exceeds a given tolerance and increased otherwise (for the adaptivity of the step length in a predictor-corrector method, see [Lam91], pp. 128-147).
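In code, this estimate (Milne's device) and the resulting step control might look as follows. The constants used in the demo, C*₃ = 5/12 (AB2) and C₃ = −1/12 (AM2) for the second-order ABM pair, are standard Adams values assumed here, and the controller itself is a hypothetical sketch, not taken from the text.

```python
def plte_estimate(u_pred, u_corr, C_pred, C_corr):
    """PLTE estimate of the corrector from the predictor-corrector gap,
    per the formula above: C/(C* - C) * (u^{(m)} - u^{(0)})."""
    return C_corr / (C_pred - C_corr) * (u_corr - u_pred)

def new_steplength(h, est, tol, q):
    """Hypothetical controller: shrink h when the PLTE estimate exceeds
    the tolerance, grow it otherwise (growth/shrink capped)."""
    fac = 0.9 * (tol / max(abs(est), 1e-16)) ** (1.0 / (q + 1))
    return h * min(2.0, max(0.5, fac))

# Second-order ABM pair: C* = 5/12 (AB2), C = -1/12 (AM2)
est = plte_estimate(1.000, 1.006, 5 / 12, -1 / 12)   # approximately -1e-3
```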

Program 93 provides an implementation of the P(EC)^m E methods. The input parameters at, bt, a, b contain the coefficients ã_j, b̃_j (j = 0, ..., p̃) of the predictor and the coefficients a_j (j = 0, ..., p), b_j (j = −1, ..., p) of the corrector. Moreover, f is a string containing the expression of f(t, y), h is the stepsize, t0 and tf are the endpoints of the time integration interval, u0 is the vector of the initial data and m is the number of corrector inner iterations. The input variable pece must be set equal to 'y' if the P(EC)^m E scheme is selected; otherwise, the P(EC)^m scheme is chosen.

Program 93 - predcor : Predictor-corrector scheme

function [u,t]=predcor(a,b,at,bt,h,f,t0,u0,tf,pece,m)
p = max(length(a),length(b)-1); pt = max(length(at),length(bt));
q = max(p,pt); if length(u0) < q, return, end
t = [t0:h:t0+(q-1)*h]; u = u0; y = u0; fe = eval(f);
k = q;
for t = t0+q*h:h:tf
  % predictor step [P]
  ut = sum(at.*u(k:-1:k-pt+1))+h*sum(bt.*fe(k:-1:k-pt+1));
  y = ut; foy = eval(f);
  % explicit part of the corrector
  uv = sum(a.*u(k:-1:k-p+1))+h*sum(b(2:p+1).*fe(k:-1:k-p+1));
  k = k+1;
  % m corrector iterations [E][C]; b(1) stores b_{-1}
  for j = 1:m
    fy = foy; up = uv + h*b(1)*fy; y = up; foy = eval(f);
  end
  if (pece=='y' | pece=='Y')
    fe = [fe, foy];   % P(EC)^mE: store the updated f value
  else
    fe = [fe, fy];    % P(EC)^m
  end
  u = [u, up];
end
t = [t0:h:tf];

Example 11.9 Let us check the performance of the P(EC)^m E method on the Cauchy problem y'(t) = e^{−y(t)} for t ∈ [0, 1] with y(0) = 0. The exact solution is y(t) = log(1 + t). In all the numerical experiments, the corrector method is the Adams-Moulton third-order scheme (AM3), while the explicit Euler (AB1) and the Adams-Bashforth second-order (AB2) methods are used as predictors. Figure 11.8 shows that the pair AB2-AM3 (m = 1) yields a third-order convergence rate, while AB1-AM3 (m = 1) is only first-order accurate. Taking m = 2 recovers the third-order convergence rate of the corrector for the AB1-AM3 pair.
•
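The rates reported in Figure 11.8 can be reproduced by halving h and computing log₂ of the error ratio. The sketch below uses Heun's method (the one-step predictor-corrector of Example 11.8) as a stand-in solver, since the full multistep driver is given in Program 93; the order-estimation idea carries over unchanged.

```python
import math

def heun(f, t0, y0, h, n):
    """Heun's method: Euler predictor + one Crank-Nicolson correction."""
    t, y = t0, y0
    for _ in range(n):
        yp = y + h * f(t, y)                       # [P] + [E]
        y = y + h / 2 * (f(t, y) + f(t + h, yp))   # [C]
        t += h
    return y

def observed_order(f, exact, t0, y0, T, h):
    """log2 of the error ratio between step sizes h and h/2."""
    n = round((T - t0) / h)
    e1 = abs(heun(f, t0, y0, h, n) - exact(T))
    e2 = abs(heun(f, t0, y0, h / 2, 2 * n) - exact(T))
    return math.log2(e1 / e2)

# The Cauchy problem of Example 11.9: y' = exp(-y), y(0) = 0, y = log(1+t)
p = observed_order(lambda t, y: math.exp(-y), lambda t: math.log(1 + t),
                   0.0, 0.0, 1.0, 1e-2)
```

Since Heun's method is second order, `p` comes out close to 2; running the AB/AM pairs through the same routine reproduces the slopes of Figure 11.8.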

As for the absolute stability, the characteristic polynomial of P(EC)^m methods reads

\[
\Pi_{P(EC)^m}(r) = b_{-1} r^{p} \left( \rho(r) - h\lambda\,\sigma(r) \right)
+ \frac{H^m (1 - H)}{1 - H^m} \left( \tilde{\rho}(r)\sigma(r) - \rho(r)\tilde{\sigma}(r) \right),
\]

while for P(EC)^m E we have

\[
\Pi_{P(EC)^m E}(r) = \rho(r) - h\lambda\,\sigma(r)
+ \frac{H^m (1 - H)}{1 - H^m} \left( \tilde{\rho}(r) - h\lambda\,\tilde{\sigma}(r) \right).
\]


FIGURE 11.8. Convergence rate for P(EC)^m E methods as a function of log(h). The symbol ∇ refers to the AB2-AM3 method (m = 1), ◦ to AB1-AM3 (m = 1) and □ to AB1-AM3 with m = 2

We have set H = hλb_{−1} and denoted by ρ̃ and σ̃ the first and second characteristic polynomials of the predictor method, respectively. The polynomials ρ and σ are related to the first and second characteristic polynomials of the corrector, as explained after Example 11.8. Notice that in both cases the characteristic polynomial tends to the corresponding polynomial of the corrector method, since the function H^m(1 − H)/(1 − H^m) tends to zero as m tends to infinity.
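This can be probed numerically: for a given hλ, assemble the P(EC)^m E characteristic polynomial above and check the root condition |r_i| ≤ 1. The Python sketch below (the helper name and the coefficient layout, in decreasing powers of r, are conventions of this sketch) uses Heun's method viewed as PECE (Euler predictor, Crank-Nicolson corrector, m = 1) as a sanity check, since its stability function is the familiar 1 + hλ + (hλ)²/2.

```python
import numpy as np

def in_pecme_region(hl, rho, sigma, rho_t, sigma_t, bm1, m):
    """True if hl = h*lambda lies in the absolute-stability region of
    P(EC)^mE, via the root condition on the characteristic polynomial
    above (all coefficient lists in decreasing powers of r)."""
    H = hl * bm1
    w = H**m * (1 - H) / (1 - H**m)       # -> 0 as m -> infinity
    pi = (np.asarray(rho, dtype=complex) - hl * np.asarray(sigma)
          + w * (np.asarray(rho_t, dtype=complex) - hl * np.asarray(sigma_t)))
    return bool(np.all(np.abs(np.roots(pi)) <= 1 + 1e-10))

# Heun as PECE (m = 1): corrector rho = r - 1, sigma = (r+1)/2, b_{-1} = 1/2;
# predictor (forward Euler) rho~ = r - 1, sigma~ = 1
rho, sigma = [1, -1], [0.5, 0.5]
rho_t, sigma_t = [1, -1], [0, 1]
```

Scanning a grid of complex hλ values with this test tiles out stability regions like those of Figure 11.9.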

Example 11.10 If we consider the ABM methods with a number of steps p, the characteristic polynomials are $\rho(r) = \tilde{\rho}(r) = r(r^{p-1} - r^{p-2})$, while σ(r) is r times the second characteristic polynomial of the (unextended) corrector. In Figure 11.9 (right) the stability regions for the ABM methods of order 2 are plotted. In the case of the ABM methods of order 2, 3 and 4, the corresponding stability regions can be ordered by size, namely, from the largest to the smallest, those of the PECE, P(EC)²E, predictor and PEC methods. The one-step ABM method is an exception to the rule, the largest region being the one corresponding to the predictor method (see Figure 11.9, left).
•

11.8 Runge-Kutta (RK) Methods

When evolving from the forward Euler method (11.7) toward higher-order

methods, linear multistep methods (MS) and Runge-Kutta methods (RK)

adopt two opposite strategies.

Like the Euler method, MS schemes are linear with respect to both u_n and f_n = f(t_n, u_n), require only one functional evaluation at each time step, and their accuracy can be increased at the expense of increasing the number of steps. RK methods, on the other hand, retain the structure of one-step methods and increase their accuracy at the price of more functional evaluations at each time level, thus sacrificing linearity.


FIGURE 11.9. Stability regions for the ABM methods of order 1 (left) and 2 (right)

A consequence is that RK methods are better suited than MS methods to stepsize adaptation, whereas estimating the local error is more difficult for RK methods than for MS methods.

In its most general form, an RK method can be written as

\[
u_{n+1} = u_n + h F(t_n, u_n, h; f), \qquad n \geq 0, \tag{11.70}
\]

where F is the increment function defined as follows:

\[
\begin{aligned}
F(t_n, u_n, h; f) &= \sum_{i=1}^{s} b_i K_i,\\
K_i &= f\Big(t_n + c_i h,\; u_n + h \sum_{j=1}^{s} a_{ij} K_j\Big), \qquad i = 1, 2, \ldots, s,
\end{aligned} \tag{11.71}
\]

and s denotes the number of stages of the method. The coefficients {a_ij}, {c_i} and {b_i} fully characterize an RK method and are usually collected in the so-called Butcher array

\[
\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \ldots & a_{1s}\\
c_2 & a_{21} & a_{22} & & a_{2s}\\
\vdots & \vdots & & \ddots & \vdots\\
c_s & a_{s1} & a_{s2} & \ldots & a_{ss}\\
\hline
 & b_1 & b_2 & \ldots & b_s
\end{array}
\qquad \text{or} \qquad
\begin{array}{c|c}
\mathbf{c} & A\\
\hline
 & \mathbf{b}^T
\end{array}
\]

where $A = (a_{ij}) \in \mathbb{R}^{s \times s}$, $\mathbf{b} = (b_1, \ldots, b_s)^T \in \mathbb{R}^s$ and $\mathbf{c} = (c_1, \ldots, c_s)^T \in \mathbb{R}^s$. We shall henceforth assume that the following condition holds:

\[
c_i = \sum_{j=1}^{s} a_{ij}, \qquad i = 1, \ldots, s. \tag{11.72}
\]
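For an explicit method (A strictly lower triangular, as discussed next), the stages K_i of (11.71) can be evaluated in sequence. A Python sketch, instantiated with the classical 4-stage tableau (a standard example satisfying (11.72), not taken from the text):

```python
import math
import numpy as np

def rk_step(f, tn, un, h, A, b, c):
    """One step u_{n+1} = u_n + h*sum(b_i K_i) of an explicit RK method
    given its Butcher array; K_i uses only already-computed K_j, j < i."""
    s = len(b)
    K = np.zeros(s)
    for i in range(s):
        K[i] = f(tn + c[i] * h, un + h * np.dot(A[i, :i], K[:i]))
    return un + h * np.dot(b, K)

# Classical fourth-order RK tableau; note c = row sums of A, i.e. (11.72)
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)

# One step on y' = y reproduces the degree-4 Taylor polynomial of e^h
u1 = rk_step(lambda t, y: y, 0.0, 1.0, 0.1, A, b, c)
```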


If the coefficients a_ij in A are equal to zero for j ≥ i, with i = 1, 2, ..., s, then each K_i can be explicitly computed in terms of the i − 1 coefficients K_1, ..., K_{i−1} that have already been determined. In such a case the RK method is explicit. Otherwise, it is implicit and computing the coefficients K_i requires solving a nonlinear system of size s.

The increase in computational effort makes implicit schemes quite expensive; an acceptable compromise is provided by semi-implicit RK methods, in which a_ij = 0 for j > i, so that each K_i is the solution of the nonlinear equation

\[
K_i = f\Big(t_n + c_i h,\; u_n + h a_{ii} K_i + h \sum_{j=1}^{i-1} a_{ij} K_j\Big).
\]

A semi-implicit scheme thus requires the solution of s independent nonlinear equations.
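For a scalar problem, each such stage equation can be solved by a few Newton iterations. In the sketch below, `dfdy` (the user-supplied ∂f/∂y) and `prev_sum` (the already-computed sum Σ_{j<i} a_{ij}K_j) are conveniences of this illustration, not notation from the text.

```python
def solve_stage(f, dfdy, tn, un, h, ci, aii, prev_sum, tol=1e-12, maxit=50):
    """Newton iteration for the semi-implicit stage equation above:
    K = f(tn + ci*h, un + h*aii*K + h*prev_sum)."""
    K = f(tn + ci * h, un + h * prev_sum)   # explicit guess (as if aii = 0)
    for _ in range(maxit):
        arg = un + h * aii * K + h * prev_sum
        g = K - f(tn + ci * h, arg)                      # residual
        dg = 1.0 - h * aii * dfdy(tn + ci * h, arg)      # its derivative
        K_next = K - g / dg
        if abs(K_next - K) < tol:
            return K_next
        K = K_next
    return K

# Linear test y' = -y: the stage equation is solvable by hand,
# K = -(un + h*prev_sum) / (1 + h*aii)
K = solve_stage(lambda t, y: -y, lambda t, y: -1.0, 0.0, 1.0, 0.1, 0.5, 0.5, 0.0)
```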

The local truncation error τ_{n+1}(h) at node t_{n+1} of the RK method (11.70) is defined through the residual equation

\[
h \tau_{n+1}(h) = y_{n+1} - y_n - h F(t_n, y_n, h; f),
\]

where y(t) is the exact solution of the Cauchy problem (11.1) and y_n = y(t_n). Method (11.70) is consistent if τ(h) = max_n |τ_n(h)| → 0 as h → 0. It can be shown