f is uniformly Lipschitz continuous with respect to y.

In view of the stability analysis of the Cauchy problem, we consider the following problem

z′(t) = f(t, z(t)) + δ(t),  t ∈ I,
z(t0) = y0 + δ0,    (11.4)

where δ0 ∈ R and δ is a continuous function on I. Problem (11.4) is derived from (11.1) by perturbing both the initial datum y0 and the source function f. Let us now characterize the sensitivity of the solution z to those perturbations.

11.1 The Cauchy Problem 471

Definition 11.1 ([Hah67], [Ste71] or [PS91]). Let I be a bounded set. The Cauchy problem (11.1) is stable in the sense of Liapunov (or stable) on I if, for any perturbation (δ0, δ(t)) satisfying

|δ0| < µ,  |δ(t)| < µ  ∀t ∈ I,

with µ > 0 sufficiently small to guarantee that the solution to the perturbed problem (11.4) does exist, then

∃C > 0 independent of µ such that |y(t) − z(t)| < Cµ, ∀t ∈ I.    (11.5)

If I is unbounded from above, we say that (11.1) is asymptotically stable if, as well as being Liapunov stable on any bounded interval, it also satisfies

|y(t) − z(t)| → 0  as t → +∞.    (11.6)

The requirement that the Cauchy problem is stable is equivalent to requiring that it is well-posed in the sense stated in Chapter 2.

The uniform Lipschitz continuity of f with respect to y suffices to ensure the stability of the Cauchy problem. Indeed, letting w(t) = z(t) − y(t), we have

w′(t) = f(t, z(t)) − f(t, y(t)) + δ(t).

Therefore,

w(t) = δ0 + ∫_{t0}^{t} [f(s, z(s)) − f(s, y(s))] ds + ∫_{t0}^{t} δ(s) ds,  ∀t ∈ I.

Thanks to the previous assumptions, it follows that

|w(t)| ≤ (1 + |t − t0|) µ + L ∫_{t0}^{t} |w(s)| ds.

Applying the Gronwall lemma (which we include below for the reader's ease) yields

|w(t)| ≤ (1 + |t − t0|) µ e^{L|t−t0|},  ∀t ∈ I,

and, thus, (11.5) with C = (1 + K_I) e^{L K_I}, where K_I = max_{t∈I} |t − t0|.
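The estimate just obtained lends itself to a direct numerical check. The sketch below is our own illustration, not from the text: it uses the hypothetical model problem y′ = −y (for which L = 1), perturbs both the initial datum and the source term by µ, and verifies that the distance between the two solutions stays within the bound Cµ of (11.5).

```python
import math

# Model problem: y'(t) = -y(t), so f is Lipschitz in y with L = 1.
# y(0) = 1 on I = (0, 2).  All concrete values are illustrative choices.
def f(t, y):
    return -y

t0, T, y0 = 0.0, 2.0, 1.0
L = 1.0
mu = 1e-3      # perturbation size
h = 1e-4       # fine step: Euler as a stand-in for the exact solutions

# Integrate the original problem and the perturbed problem (11.4)
# with delta0 = mu and delta(t) = mu.
y, z = y0, y0 + mu
max_dev = 0.0
t = t0
while t < t0 + T:
    y += h * f(t, y)
    z += h * (f(t, z) + mu)    # perturbed source term
    t += h
    max_dev = max(max_dev, abs(y - z))

K_I = T                        # max |t - t0| over I
C = (1.0 + K_I) * math.exp(L * K_I)
print(max_dev <= C * mu)       # the bound (11.5) holds
```

For this particular f the deviation remains of size µ, well inside the bound Cµ = (1 + K_I) e^{L K_I} µ.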

Lemma 11.1 (Gronwall) Let p be an integrable nonnegative function on the interval (t0, t0 + T), and let g and φ be two continuous functions on [t0, t0 + T], g being nondecreasing. If φ satisfies the inequality

φ(t) ≤ g(t) + ∫_{t0}^{t} p(τ) φ(τ) dτ,  ∀t ∈ [t0, t0 + T],


then

φ(t) ≤ g(t) exp(∫_{t0}^{t} p(τ) dτ),  ∀t ∈ [t0, t0 + T].

For the proof, see, for instance, [QV94], Lemma 1.4.1.

The constant C that appears in (11.5) can be very large and, in general, depends on the right endpoint of the interval I, as in the proof above.

For that reason, the property of asymptotic stability is more suitable for

describing the behavior of the dynamical system (11.1) as t → +∞ (see

[Arn73]).

As is well known, only a restricted number of nonlinear ODEs can be solved in closed form (see, for instance, [Arn73]). Moreover, even when this is possible, it is not always a straightforward task to find an explicit expression of the solution; for example, consider the (very simple) equation y′ = (y − t)/(y + t), whose solution is only implicitly defined by the relation (1/2) log(t² + y²) + tan⁻¹(y/t) = C, where C is a constant depending on the initial condition.

For this reason we are interested in numerical methods, since these can

be applied to any ODE under the sole condition that it admits a unique

solution.

11.2 One-Step Numerical Methods

Let us address the numerical approximation of the Cauchy problem (11.1). Fix 0 < T < +∞ and let I = (t0, t0 + T) be the integration interval; correspondingly, for h > 0, let tn = t0 + nh, with n = 0, 1, 2, …, Nh, be the sequence of discretization nodes partitioning I into subintervals In = [tn, tn+1]. The width h of these subintervals is called the discretization stepsize. Notice that Nh is the maximum integer such that t_{Nh} ≤ t0 + T. Let uj be the approximation at node tj of the exact solution y(tj); this value will henceforth be denoted by yj. Similarly, fj denotes the value f(tj, uj). We obviously set u0 = y0.

Definition 11.2 A numerical method for the approximation of problem

(11.1) is called a one-step method if ∀n ≥ 0, un+1 depends only on un .

Otherwise, the scheme is called a multistep method.

For now, we focus our attention on one-step methods. Here are some of

them:


1. forward Euler method

un+1 = un + hfn ; (11.7)

2. backward Euler method

un+1 = un + hfn+1 . (11.8)

In both cases, y′ is approximated through a finite difference: forward and backward differences are used in (11.7) and (11.8), respectively. Both finite differences are first-order approximations of the first derivative of y with respect to h (see Section 10.10.1).
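As a minimal concrete illustration of the two schemes (our own example, not the book's), consider y′ = −y, y(0) = 1, whose exact solution is e^{−t}; since f(t, y) = −y is linear in y, the implicit equation of the backward Euler step can be solved by hand.

```python
import math

# Model problem y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
f = lambda t, y: -y
t0, T, y0, h = 0.0, 1.0, 1.0, 0.01
N = int(round(T / h))

u_fe, u_be = y0, y0
for n in range(N):
    tn = t0 + n * h
    u_fe = u_fe + h * f(tn, u_fe)   # forward Euler (11.7): explicit update
    # backward Euler (11.8): u_{n+1} = u_n + h f(t_{n+1}, u_{n+1});
    # for f(t, y) = -y this implicit equation is linear: u_{n+1} = u_n / (1 + h)
    u_be = u_be / (1.0 + h)

exact = math.exp(-(t0 + T))
print(abs(u_fe - exact), abs(u_be - exact))   # both errors behave like O(h)
```

With h = 0.01 both errors come out of size roughly h/2, consistent with first-order accuracy.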

3. trapezoidal (or Crank-Nicolson) method

un+1 = un + (h/2) [fn + fn+1].    (11.9)

This method stems from approximating the integral on the right side of

(11.2) by the trapezoidal quadrature rule (9.11).

4. Heun method

un+1 = un + (h/2) [fn + f(tn+1, un + hfn)].    (11.10)

This method can be derived from the trapezoidal method substituting

f (tn+1 , un + hf (tn , un )) for f (tn+1 , un+1 ) in (11.9) (i.e., using the forward

Euler method to compute un+1 ).
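The predictor-corrector mechanics of Heun's method can be sketched as follows, again on the illustrative model problem y′ = −y of our choosing (not from the text):

```python
import math

# Heun's method (11.10) on y' = -y, y(0) = 1 (illustrative choice).
f = lambda t, y: -y
t0, T, y0, h = 0.0, 1.0, 1.0, 0.01
N = int(round(T / h))

u = y0
for n in range(N):
    tn = t0 + n * h
    k = f(tn, u)                                  # slope at t_n
    u_pred = u + h * k                            # forward Euler predictor
    u = u + (h / 2.0) * (k + f(tn + h, u_pred))   # trapezoidal corrector

err = abs(u - math.exp(-T))
print(err)   # decays like O(h^2)
```

Note that only evaluations of f appear: the substitution of the predictor for un+1 has removed the implicitness of (11.9).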

In this last case, we notice that the aim is to transform an implicit method

into an explicit one. Addressing this concern, we recall the following.

Definition 11.3 (explicit and implicit methods) A method is called explicit if un+1 can be computed directly in terms of (some of) the previous values uk, k ≤ n. A method is said to be implicit if un+1 depends implicitly on itself through f.

Methods (11.7) and (11.10) are explicit, while (11.8) and (11.9) are implicit. The latter require the solution of a nonlinear problem at each time step whenever f depends nonlinearly on its second argument.
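For a nonlinear f, the implicit equation can be solved, for instance, by a fixed-point iteration. The sketch below is one possible approach of our own (Newton's method is an alternative; the function name and test problem y′ = −y³ are illustrative assumptions, not the book's prescription), performing a single backward Euler step:

```python
# One step of the backward Euler method (11.8) when f is nonlinear in y:
# u_{n+1} solves u_{n+1} = u_n + h f(t_{n+1}, u_{n+1}).
# Fixed-point iteration converges for h small enough (contraction).
def backward_euler_step(f, tn, un, h, tol=1e-12, maxit=100):
    u = un                              # initial guess: the previous value
    for _ in range(maxit):
        u_new = un + h * f(tn + h, u)   # fixed-point map
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

f = lambda t, y: -y**3                  # f depends nonlinearly on y
u1 = backward_euler_step(f, 0.0, 1.0, 0.1)
# u1 satisfies the implicit equation u1 = 1 - 0.1*u1**3 up to the tolerance
print(abs(u1 - (1.0 - 0.1 * u1**3)) < 1e-10)
```
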

A remarkable family of one-step methods is that of the Runge-Kutta methods, which will be analyzed in Section 11.8.

11.3 Analysis of One-Step Methods

Any one-step explicit method for the approximation of (11.1) can be cast in the concise form

un+1 = un + hΦ(tn, un, fn; h),  0 ≤ n ≤ Nh − 1,  u0 = y0,    (11.11)


where Φ(·, ·, ·; ·) is called an increment function. Letting as usual yn = y(tn), analogously to (11.11) we can write

yn+1 = yn + hΦ(tn, yn, f(tn, yn); h) + µn+1,  0 ≤ n ≤ Nh − 1,    (11.12)

where µn+1 is the residual arising at the point tn+1 when we pretend that the exact solution "satisfies" the numerical scheme. Let us write the residual as

µn+1 = h τn+1(h).

The quantity τn+1(h) is called the local truncation error (LTE) at the node tn+1. We thus define the global truncation error to be the quantity

τ(h) = max_{0 ≤ n ≤ Nh−1} |τn+1(h)|.

Notice that τ(h) depends on the solution y of the Cauchy problem (11.1).

The forward Euler method is a special instance of (11.11), where

Φ(tn, un, fn; h) = fn,

while to recover Heun's method we must set

Φ(tn, un, fn; h) = (1/2) [fn + f(tn + h, un + hfn)].

A one-step explicit scheme is fully characterized by its increment function Φ. This function, in all the cases considered thus far, is such that

lim_{h→0} Φ(tn, yn, f(tn, yn); h) = f(tn, yn),  ∀tn ≥ t0.    (11.13)

Property (11.13), together with the obvious relation yn+1 − yn = h y′(tn) + O(h^2), ∀n ≥ 0, allows one to obtain from (11.12) that lim_{h→0} τn(h) = 0 for 0 ≤ n ≤ Nh − 1. In turn, this condition ensures that

lim_{h→0} τ(h) = 0,

which expresses the consistency of the numerical method (11.11) with the Cauchy problem (11.1). In general, a method is said to be consistent if its LTE is infinitesimal with respect to h. Moreover, a scheme has order p if, ∀t ∈ I, the solution y(t) of the Cauchy problem (11.1) fulfills the condition

τ(h) = O(h^p)  for h → 0.    (11.14)

Using Taylor expansions, as was done in Section 11.2, it can be proved that

the forward Euler method has order 1, while the Heun method has order 2

(see Exercises 1 and 2).
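These orders can also be observed empirically by halving the stepsize and measuring the decay rate of the error; the small experiment below is our own sketch (model problem y′ = −y on [0, 1] and the helper `integrate` are illustrative assumptions, not from the text):

```python
import math

# Empirical order check: halving h should halve the forward Euler error
# (order 1) and quarter the Heun error (order 2).
f = lambda t, y: -y
exact = math.exp(-1.0)

def integrate(h, heun=False):
    """Error at t = 1 for y' = -y, y(0) = 1, with step h."""
    u, t = 1.0, 0.0
    for _ in range(int(round(1.0 / h))):
        k = f(t, u)
        if heun:
            u += (h / 2.0) * (k + f(t + h, u + h * k))   # Heun (11.10)
        else:
            u += h * k                                   # forward Euler (11.7)
        t += h
    return abs(u - exact)

for heun in (False, True):
    e1, e2 = integrate(0.01, heun), integrate(0.005, heun)
    p = math.log2(e1 / e2)       # observed order
    print(round(p))
```

The observed orders match the theoretical ones: 1 for forward Euler, 2 for Heun.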


11.3.1 The Zero-Stability

Let us formulate a requirement analogous to the one for Liapunov stability

(11.5), specifically for the numerical scheme. If (11.5) is satisfied with a

constant C independent of h, we shall say that the numerical problem is

zero-stable. Precisely:

Definition 11.4 (zero-stability of one-step methods) The numerical method (11.11) for the approximation of problem (11.1) is zero-stable if

∃h0 > 0, ∃C > 0 : ∀h ∈ (0, h0],  |z_n^{(h)} − u_n^{(h)}| ≤ Cµ,  0 ≤ n ≤ Nh,    (11.15)

where z_n^{(h)}, u_n^{(h)} are, respectively, the solutions of the problems

z_{n+1}^{(h)} = z_n^{(h)} + h [Φ(tn, z_n^{(h)}, f(tn, z_n^{(h)}); h) + δn+1],  z_0^{(h)} = y0 + δ0,    (11.16)

u_{n+1}^{(h)} = u_n^{(h)} + h Φ(tn, u_n^{(h)}, f(tn, u_n^{(h)}); h),  u_0^{(h)} = y0,    (11.17)

for 0 ≤ n ≤ Nh − 1, under the assumption that |δk| ≤ µ, 0 ≤ k ≤ Nh.

Zero-stability thus requires that, in a bounded interval, (11.15) holds for any value h ≤ h0. This property deals, in particular, with the behavior of the numerical method in the limit case h → 0, which justifies the name zero-stability. Zero-stability is therefore a distinguishing property of the numerical method itself, not of the Cauchy problem (which, indeed, is stable due to the uniform Lipschitz continuity of f). Property (11.15) ensures that the numerical method has a weak sensitivity with respect to small changes in the data and is thus stable in the sense of the general definition given in Chapter 2.

Remark 11.1 The constant C in (11.15) is independent of h (and thus

of Nh ), but it can depend on the width T of the integration interval I.

Actually, (11.15) does not exclude a priori the constant C from being an

unbounded function of T .

The requirement that a numerical method be stable arises, before anything else, from the need to keep under control the (unavoidable) errors introduced by the finite arithmetic of the computer. Indeed, if the numerical method were not zero-stable, the rounding errors made on y0, as well as those made in the process of computing f(tn, un), would make the computed solution completely useless.

476 11. Numerical Solution of Ordinary Di¬erential Equations

Theorem 11.1 (Zero-stability) Consider the explicit one-step method (11.11) for the numerical solution of the Cauchy problem (11.1). Assume that the increment function Φ is Lipschitz continuous with respect to the second argument, with constant Λ independent of h and of the nodes tj ∈ [t0, t0 + T], that is,

∃h0 > 0, ∃Λ > 0 : ∀h ∈ (0, h0],
|Φ(tn, u_n^{(h)}, f(tn, u_n^{(h)}); h) − Φ(tn, z_n^{(h)}, f(tn, z_n^{(h)}); h)| ≤ Λ |u_n^{(h)} − z_n^{(h)}|,  0 ≤ n ≤ Nh.    (11.18)

Then, method (11.11) is zero-stable.

Proof. Setting w_j^{(h)} = z_j^{(h)} − u_j^{(h)}, by subtracting (11.17) from (11.16) we obtain, for j = 0, …, Nh − 1,

w_{j+1}^{(h)} = w_j^{(h)} + h [Φ(tj, z_j^{(h)}, f(tj, z_j^{(h)}); h) − Φ(tj, u_j^{(h)}, f(tj, u_j^{(h)}); h)] + hδj+1.

Summing over j gives, for n = 1, …, Nh,

w_n^{(h)} = w_0^{(h)} + h Σ_{j=0}^{n−1} δj+1 + h Σ_{j=0}^{n−1} [Φ(tj, z_j^{(h)}, f(tj, z_j^{(h)}); h) − Φ(tj, u_j^{(h)}, f(tj, u_j^{(h)}); h)],

so that, by (11.18),

|w_n^{(h)}| ≤ |w_0^{(h)}| + h Σ_{j=0}^{n−1} |δj+1| + hΛ Σ_{j=0}^{n−1} |w_j^{(h)}|,  1 ≤ n ≤ Nh.    (11.19)

Applying the discrete Gronwall lemma, given below, we obtain

|w_n^{(h)}| ≤ (1 + hn) µ e^{nhΛ},  1 ≤ n ≤ Nh.

Then (11.15) follows from noticing that hn ≤ T and setting C = (1 + T) e^{ΛT}. □

Notice that zero-stability implies the boundedness of the solution when f

is linear with respect to the second argument.

Lemma 11.2 (discrete Gronwall) Let kn be a nonnegative sequence and φn a sequence such that

φ0 ≤ g0,
φn ≤ g0 + Σ_{s=0}^{n−1} ps + Σ_{s=0}^{n−1} ks φs,  n ≥ 1.

If g0 ≥ 0 and pn ≥ 0 for any n ≥ 0, then

φn ≤ (g0 + Σ_{s=0}^{n−1} ps) exp(Σ_{s=0}^{n−1} ks),  n ≥ 1.

For the proof, see, for instance, [QV94], Lemma 1.4.2. In the specific case of the Euler method, checking the property of zero-stability can be done directly using the Lipschitz continuity of f (we refer the reader to the end of Section 11.3.2). In the case of multistep methods, the analysis will lead to the verification of a purely algebraic property, the so-called root condition

(see Section 11.6.3).
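The discrete Gronwall lemma admits a quick numerical sanity check. In the sketch below (with small illustrative sequences of our own choosing), φn is defined with equality in the hypothesis, and the stated upper bound is then verified for every n:

```python
import math

# Sanity check of the discrete Gronwall lemma (Lemma 11.2) on concrete data.
g0 = 0.5
p = [0.1, 0.2, 0.05, 0.15, 0.1]    # p_s >= 0
k = [0.3, 0.1, 0.2, 0.25, 0.05]    # k_s >= 0 (nonnegative sequence)

# Take phi_n with equality in the hypothesis of the lemma.
phi = [g0]
for n in range(1, len(p) + 1):
    phi.append(g0 + sum(p[:n]) + sum(k[s] * phi[s] for s in range(n)))

# Verify the conclusion: phi_n <= (g0 + sum p_s) * exp(sum k_s).
for n in range(1, len(phi)):
    bound = (g0 + sum(p[:n])) * math.exp(sum(k[:n]))
    assert phi[n] <= bound
print("discrete Gronwall bound verified")
```
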

11.3.2 Convergence Analysis

Definition 11.5 A method is said to be convergent if

|un − yn| ≤ C(h)  ∀n = 0, …, Nh,

where C(h) is an infinitesimal with respect to h. In that case, it is said to be convergent with order p if ∃C > 0 such that C(h) = Ch^p.

We can prove the following theorem.

Theorem 11.2 (Convergence) Under the same assumptions as in Theorem 11.1, we have

|yn − un| ≤ (|y0 − u0| + nh τ(h)) e^{nhΛ},  1 ≤ n ≤ Nh.    (11.20)

Therefore, if the consistency assumption (11.13) holds and |y0 − u0| → 0 as h → 0, then the method is convergent. Moreover, if |y0 − u0| = O(h^p) and the method has order p, then it is also convergent with order p.

Proof. Setting wj = yj − uj, subtracting (11.11) from (11.12) and proceeding as in the proof of the previous theorem yields inequality (11.19), with the understanding that

w0 = y0 − u0  and  δj+1 = τj+1(h).

The estimate (11.20) is then obtained by applying again the discrete Gronwall lemma. From the fact that nh ≤ T and τ(h) = O(h^p), we can conclude that |yn − un| ≤ Ch^p, with C depending on T and Λ but not on h. □

A consistent and zero-stable method is thus convergent. This property is

known as the Lax-Richtmyer theorem or equivalence theorem (the converse:

“a convergent method is zero-stable” being obviously true). This theorem,

which is proven in [IK66], was already advocated in Section 2.2.1 and is a

central result in the analysis of numerical methods for ODEs (see [Dah56] or


[Hen62] for linear multistep methods, [But66], [MNS74] for wider classes

of methods). It will be considered again in Section 11.5 for the analysis of

multistep methods.

We carry out in detail the convergence analysis in the case of the forward

Euler method, without resorting to the discrete Gronwall lemma. In the

first part of the proof we assume that any operation is performed in exact

arithmetic and that u0 = y0 .

Denote by en+1 = yn+1 − un+1 the error at node tn+1, with n = 0, 1, …, and notice that

en+1 = (yn+1 − u*_{n+1}) + (u*_{n+1} − un+1),    (11.21)

where u*_{n+1} = yn + hf(tn, yn) is the solution obtained after one step of the forward Euler method starting from the initial datum yn (see Figure 11.1). The first addendum in (11.21) accounts for the consistency error, the second one for the accumulation of these errors. Then

yn+1 − u*_{n+1} = h τn+1(h),  u*_{n+1} − un+1 = en + h [f(tn, yn) − f(tn, un)].
