Corollary 11.1 (Equivalence theorem) A consistent multistep method is convergent iff it is zero-stable and the error on the initial data tends to zero as h tends to zero.

We conclude this section with the following result, which establishes an

upper limit for the order of multistep methods (see [Dah63]).

Property 11.1 (First Dahlquist barrier) There is no zero-stable, p-step linear multistep method with order greater than p + 1 if p is odd, or p + 2 if p is even.

11.6.4 Absolute Stability of Multistep Methods

Consider again the difference equation (11.54), which was obtained by applying the MS method (11.45) to the model problem (11.24). According to (11.33), its solution takes the form

u_n = Σ_{j=1}^{k'} [ Σ_{s=0}^{m_j − 1} γ_{sj} n^s ] [r_j(hλ)]^n,    n = 0, 1, . . .

where r_j(hλ), j = 1, . . . , k', are the distinct roots of the characteristic polynomial (11.55), and m_j denotes the multiplicity of r_j(hλ). In view of (11.25), it is clear that the absolute root condition introduced by Definition 11.12 is necessary and sufficient to ensure that the multistep method (11.45) is absolutely stable for h ≤ h0.
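By way of illustration (a Python sketch, not code from this text), the absolute root condition can be checked numerically for a specific method. Here we take the two-step Adams-Bashforth (AB2) scheme, whose characteristic polynomial is π(r; hλ) = ρ(r) − hλ σ(r) with ρ(r) = r² − r and σ(r) = (3r − 1)/2:

```python
import numpy as np

def char_roots_ab2(h_lambda):
    """Roots r_j(h*lambda) of pi(r; h*lambda) = rho(r) - h*lambda*sigma(r)
    for AB2: rho(r) = r^2 - r, sigma(r) = (3r - 1)/2."""
    return np.roots([1.0, -1.0 - 1.5 * h_lambda, 0.5 * h_lambda])

def satisfies_absolute_root_condition(h_lambda):
    # absolute root condition: every root strictly inside the unit disk
    return bool(all(abs(r) < 1.0 for r in char_roots_ab2(h_lambda)))

print(satisfies_absolute_root_condition(-0.5))   # True
print(satisfies_absolute_root_condition(-1.5))   # False
```

For hλ = −0.5 both roots lie inside the unit disk, while hλ = −1.5 falls outside the (bounded) AB2 region of absolute stability.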

Among the methods enjoying the absolute stability property, the preference should go to those for which the region of absolute stability A, introduced in (11.26), is as wide as possible or even unbounded. Among these are the A-stable methods introduced at the end of Section 11.3.3 and

500 11. Numerical Solution of Ordinary Di¬erential Equations

the θ-stable methods, for which A contains the angular region defined by z ∈ C such that −θ < π − arg(z) < θ, with θ ∈ (0, π/2). A-stable methods are of remarkable importance when solving stiff problems (see Section 11.10).


FIGURE 11.4. Regions of absolute stability for A-stable (left) and θ-stable (right) methods

The following result, whose proof is given in [Wid67], establishes a relation

between the order of a multistep method, the number of its steps and its

stability properties.

Property 11.2 (Second Dahlquist barrier) A linear explicit multistep method can be neither A-stable nor θ-stable. Moreover, there is no A-stable linear multistep method with order greater than 2. Finally, for any θ ∈ (0, π/2), there only exist θ-stable p-step linear multistep methods of order p for p = 3 and p = 4.

Let us now examine the regions of absolute stability of several MS methods. The regions of absolute stability of both the explicit and the implicit Adams schemes shrink progressively as the order of the method increases. In Figure 11.5 (left) we show the regions of absolute stability for the AB methods examined in Section 11.5.1, with the exception of the Forward Euler method, whose region is shown in Figure 11.3. The regions of absolute stability of the Adams-Moulton schemes, except for the Crank-Nicolson method, which is A-stable, are represented in Figure 11.5 (right).

FIGURE 11.5. Outer contours of the regions of absolute stability for Adams-Bashforth methods (left), ranging from second to fourth order (AB2, AB3 and AB4), and for Adams-Moulton methods (right), from third to fifth order (AM3, AM4 and AM5). Notice that the region of the AB3 method extends into the half-plane with positive real part. The region for the explicit Euler (AB1) method was drawn in Figure 11.3

In Figure 11.6 the regions of absolute stability of some of the BDF methods introduced in Section 11.5.2 are drawn. They are unbounded and always contain the negative real numbers. These stability features make BDF methods quite attractive for solving stiff problems (see Section 11.10).
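The claim that the BDF regions contain the whole negative real axis can be probed numerically. The sketch below (illustrative Python, assuming the standard BDF3 and AB3 coefficients) evaluates the maximum root modulus of π(r) = ρ(r) − hλ σ(r) at hλ = −10, a point far outside any Adams region:

```python
import numpy as np

def max_root_modulus(rho, sigma, h_lambda):
    """Largest |r_j| among the roots of pi(r) = rho(r) - h*lambda*sigma(r);
    rho and sigma are coefficient lists, highest degree first."""
    n = max(len(rho), len(sigma))
    pad = lambda c: [0.0] * (n - len(c)) + list(c)
    pi = np.array(pad(rho)) - h_lambda * np.array(pad(sigma))
    return max(abs(r) for r in np.roots(pi))

# BDF3:  u_{n+1} = 18/11 u_n - 9/11 u_{n-1} + 2/11 u_{n-2} + (6/11) h f_{n+1}
rho_bdf3, sigma_bdf3 = [1, -18/11, 9/11, -2/11], [6/11, 0, 0, 0]
# AB3:   u_{n+1} = u_n + h (23 f_n - 16 f_{n-1} + 5 f_{n-2}) / 12
rho_ab3, sigma_ab3 = [1, -1, 0, 0], [0, 23/12, -16/12, 5/12]

print(max_root_modulus(rho_bdf3, sigma_bdf3, -10.0) < 1)  # True: BDF3 still stable
print(max_root_modulus(rho_ab3, sigma_ab3, -10.0) < 1)    # False: AB3 is not
```

At hλ = −10 all BDF3 roots remain inside the unit disk, while the AB3 polynomial has a root of large modulus, consistent with the bounded Adams regions of Figure 11.5.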


FIGURE 11.6. Inner contours of the regions of absolute stability for the three-step (BDF3), five-step (BDF5) and six-step (BDF6) BDF methods. Unlike the Adams methods, these regions are unbounded and extend outside the limited portion shown in the figure

Remark 11.3 Some authors (see, e.g., [BD74]) adopt an alternative definition of absolute stability by replacing (11.25) with the milder property

∃ C > 0 : |u_n| ≤ C, as t_n → +∞.

According to this new definition, the absolute stability of a numerical method should be regarded as the counterpart of the asymptotic stability (11.6) of the Cauchy problem. The new region of absolute stability A* would be

A* = {z ∈ C : ∃ C > 0, |u_n| ≤ C, ∀ n ≥ 0}

and it would not necessarily coincide with A. For example, in the case of the midpoint method A is empty (thus, the method is unconditionally absolutely unstable), while A* = {z = iα, α ∈ [−1, 1]}. In general, if A is nonempty, then A* is its closure. We notice that zero-stable methods are those for which the region A* contains the origin z = 0 of the complex plane.
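The midpoint example can be verified directly: the method u_{n+1} = u_{n−1} + 2hf_n applied to the model problem has characteristic polynomial r² − 2zr − 1 with z = hλ, whose roots have product −1, so their moduli multiply to 1. A short numerical check (illustrative Python, not from the text):

```python
import numpy as np

def midpoint_root_moduli(z):
    # midpoint method u_{n+1} = u_{n-1} + 2 h f_n applied to y' = lambda*y:
    # characteristic polynomial r^2 - 2 z r - 1, where z = h*lambda
    return sorted(abs(r) for r in np.roots([1.0, -2.0 * z, -1.0]))

# for z = i*alpha with |alpha| < 1 the two roots are distinct,
# both of modulus 1, so the solution stays bounded (z belongs to A*)
print(np.allclose(midpoint_root_moduli(0.5j), [1.0, 1.0]))  # True
# any z with negative real part sends one root outside the unit disk
print(max(midpoint_root_moduli(-0.1)) > 1.0)                # True
```

The second check illustrates why A is empty: every z off the imaginary segment produces a root of modulus greater than 1.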

To conclude, let us notice that the strong root condition (11.57) implies, for a linear problem, that

∀ h ≤ h0, ∃ C > 0 : |u_n| ≤ C(|u_0| + . . . + |u_p|), ∀ n ≥ p + 1.    (11.66)

We say that a method is relatively stable if it satisfies (11.66). Clearly, (11.66) implies zero-stability, but the converse does not hold.

Figure 11.7 summarizes the main conclusions that have been drawn in this section about stability, convergence and root conditions, in the particular case of a consistent method applied to the model problem (11.24).

Root condition  ⇐=  Strong root condition  ⇐=  Absolute root condition
      ⇕                      ⇕                          ⇕
Convergence ⇐⇒ Zero stability  ⇐=  (11.66)  ⇐=  Absolute stability

FIGURE 11.7. Relations between root conditions, stability and convergence for a consistent method applied to the model problem (11.24)

11.7 Predictor-Corrector Methods

When solving a nonlinear Cauchy problem of the form (11.1), at each time step implicit schemes require dealing with a nonlinear equation. For instance, if the Crank-Nicolson method is used, we get the nonlinear equation

u_{n+1} = u_n + (h/2) [f_n + f_{n+1}] = Ψ(u_{n+1}),


that can be cast in the form Φ(u_{n+1}) = 0, where Φ(u_{n+1}) = u_{n+1} − Ψ(u_{n+1}). To solve this equation the Newton method would give

u_{n+1}^{(k+1)} = u_{n+1}^{(k)} − Φ(u_{n+1}^{(k)}) / Φ'(u_{n+1}^{(k)}),

for k = 0, 1, . . . , until convergence, and would require an initial datum u_{n+1}^{(0)} sufficiently close to u_{n+1}. Alternatively, one can resort to fixed-point iterations

u_{n+1}^{(k+1)} = Ψ(u_{n+1}^{(k)})    (11.67)

for k = 0, 1, . . . , until convergence. In such a case, the global convergence condition for the fixed-point method (see Theorem 6.1) sets a constraint on the discretization stepsize of the form

h < 1 / (|b_{−1}| L)    (11.68)

where L is the Lipschitz constant of f with respect to y. In practice, except for the case of stiff problems (see Section 11.10), this restriction on h is not significant, since accuracy considerations put a much more restrictive constraint on h. However, each iteration of (11.67) requires one evaluation of the function f, and the computational cost can be reduced by providing a good initial guess u_{n+1}^{(0)}. This can be done by taking one step of an explicit MS method and then iterating on (11.67) for a fixed number m of iterations. By doing so, the implicit MS method that is employed in the fixed-point scheme "corrects" the value of u_{n+1} "predicted" by the explicit MS method. A procedure of this sort is called a predictor-corrector method, or PC method. There are many ways in which a predictor-corrector method can be implemented.
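As an illustration (a Python sketch, not code from this text), one Crank-Nicolson step can be solved by the fixed-point iteration (11.67), with a forward Euler step supplying the initial guess u_{n+1}^{(0)}:

```python
import math

def crank_nicolson_step(f, t_n, u_n, h, n_iter=10):
    """One Crank-Nicolson step, u_{n+1} = u_n + h/2 (f_n + f_{n+1}),
    solved by the fixed-point iteration (11.67)."""
    f_n = f(t_n, u_n)
    u = u_n + h * f_n                              # initial guess u^{(0)}: forward Euler
    for _ in range(n_iter):                        # u^{(k+1)} = Psi(u^{(k)})
        u = u_n + 0.5 * h * (f_n + f(t_n + h, u))
    return u

# model problem y' = -y, y(0) = 1, whose exact solution is exp(-t)
u, h = 1.0, 0.1
for n in range(10):
    u = crank_nicolson_step(lambda t, y: -y, n * h, u, h)
print(abs(u - math.exp(-1.0)) < 1e-3)              # True: second-order accuracy
```

Here the contraction factor of the iteration is |b_{−1}| h L = h/2, well below 1, so a handful of iterations suffices; for a stiff problem the constraint (11.68) would force h to be impractically small.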

In its basic version, the value u_{n+1}^{(0)} is computed by an explicit (p̃ + 1)-step method, called the predictor (here identified by the coefficients {ã_j, b̃_j}):

[P]   u_{n+1}^{(0)} = Σ_{j=0}^{p̃} ã_j u_{n−j}^{(1)} + h Σ_{j=0}^{p̃} b̃_j f_{n−j}^{(0)},

where f_k^{(0)} = f(t_k, u_k^{(0)}) and the u_k^{(1)} are the solutions computed by the PC method at the previous steps, or are the initial conditions. Then, we evaluate the function f at the new point (t_{n+1}, u_{n+1}^{(0)}) (evaluation step)

[E]   f_{n+1}^{(0)} = f(t_{n+1}, u_{n+1}^{(0)}),

and finally, one single fixed-point iteration is carried out using an implicit MS scheme of the form (11.45)

[C]   u_{n+1}^{(1)} = Σ_{j=0}^{p} a_j u_{n−j}^{(1)} + h b_{−1} f_{n+1}^{(0)} + h Σ_{j=0}^{p} b_j f_{n−j}^{(0)}.


This second step of the procedure, which is actually explicit, is called the corrector. The overall procedure is denoted for short as the PEC or P(EC)^1 method, in which P and C denote one application at time t_{n+1} of the predictor and the corrector methods, respectively, while E indicates one evaluation of the function f.
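The three steps [P], [E], [C] can be sketched in a few lines of Python (illustrative; the pairing of an AB2 predictor with a Crank-Nicolson corrector is our choice, not one prescribed by the text):

```python
import math

def pec_step(f, t_n, u_n, f_n, f_nm1, h):
    # [P] predictor: two-step Adams-Bashforth
    u_pred = u_n + h * (1.5 * f_n - 0.5 * f_nm1)
    # [E] one evaluation of f at the predicted value
    f_pred = f(t_n + h, u_pred)
    # [C] one explicit corrector iteration: Crank-Nicolson
    return u_n + 0.5 * h * (f_n + f_pred)

g = lambda t, y: -y              # model problem, exact solution exp(-t)
h, N = 0.05, 20
u = [1.0, math.exp(-h)]          # u_0 and the start-up value u_1 (exact here)
for n in range(1, N):
    t_n = n * h
    u.append(pec_step(g, t_n, u[n], g(t_n, u[n]), g((n - 1) * h, u[n - 1]), h))
print(abs(u[N] - math.exp(-1.0)) < 1e-3)   # True on this smooth problem
```

Note that the resulting scheme is fully explicit: the implicit corrector is applied only once, to the predicted value, rather than iterated to convergence.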

The strategy above can be generalized by performing m > 1 iterations at each step t_{n+1}. The corresponding methods are called predictor-multicorrector schemes and compute u_{n+1}^{(0)} at time step t_{n+1} using the predictor in the following form

[P]   u_{n+1}^{(0)} = Σ_{j=0}^{p̃} ã_j u_{n−j}^{(m)} + h Σ_{j=0}^{p̃} b̃_j f_{n−j}^{(m−1)}.