
y′ = w,    w′ = −y

or, vectorially, as a single first order differential equation

(y, w)′ = (w, −y).
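
If the reader cares to see the reduction pay off numerically, the following Python
sketch (an illustrative aside; the use of Euler's method and the choice of step
number are arbitrary) integrates the first order system (y, w)′ = (w, −y) with
y(0) = 0, w(0) = 1 and recovers an approximation to sin t.

    import math

    def euler_second_order(t_end=math.pi, n_steps=100_000):
        """Explicit Euler for the system (y, w)' = (w, -y) with y(0) = 0, w(0) = 1,
        i.e. the first order form of y'' = -y, whose exact solution is y = sin t."""
        h = t_end / n_steps
        y, w = 0.0, 1.0
        for _ in range(n_steps):
            y, w = y + h * w, w - h * y   # update both components simultaneously
        return y

    print(abs(euler_second_order() - math.sin(math.pi)))   # small; the error is O(1/n_steps)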

Lemma 12.2.8. Suppose g : R^(n+1) → R is continuous, t0 ∈ R, yj ∈ R for
0 ≤ j ≤ n − 1 and δ > 0. Suppose, further, that there exists a K > 0 such
that (K + 1)δ < 1 and

|g(t, u) − g(t, v)| ≤ K‖u − v‖

for all t ∈ [t0 − δ, t0 + δ] and all u, v ∈ R^n. Then there exists a unique,
n times differentiable, function y : (t0 − δ, t0 + δ) → R with

y^(n)(t) = g(t, y(t), y′(t), . . . , y^(n−1)(t)) and y^(j)(t0) = yj for 0 ≤ j ≤ n − 1.

Proof. This uses the trick described above. We define

f(t, u1, u2, . . . , un) = (u2, u3, . . . , un, g(t, u1, u2, . . . , un)).

The differential equation

y′(t) = f(t, y(t))

is equivalent to the system of equations

yj′(t) = fj(t, y(t))        [1 ≤ j ≤ n]

which for our choice of f becomes

yj′(t) = yj+1(t)            [1 ≤ j ≤ n − 1]
yn′(t) = g(t, y1(t), y2(t), . . . , yn(t)).

Taking y(t) = y1(t), this gives us yj(t) = y^(j−1)(t) and

y^(n)(t) = g(t, y(t), y′(t), . . . , y^(n−1)(t)),

which is precisely the differential equation we wish to solve. Our boundary
conditions

y^(j)(t0) = yj for 0 ≤ j ≤ n − 1

now take the form y(t0) = y0 with

y0 = (y0, y1, . . . , yn−1),

and we have reduced our problem to that studied in Exercise 12.2.7.
To prove existence and uniqueness we need only verify that f satisfies the
appropriate Lipschitz condition. But

‖f(t, u) − f(t, v)‖
= ‖(u2 − v2, u3 − v3, . . . , un − vn, g(t, u1, u2, . . . , un) − g(t, v1, v2, . . . , vn))‖
≤ ‖u − v‖ + |g(t, u1, u2, . . . , un) − g(t, v1, v2, . . . , vn)| ≤ (K + 1)‖u − v‖,

so we are done.
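
The construction used in this proof is entirely mechanical, and a few lines of
code make that plain. The sketch below (an illustration only; the function names
are mine, not the text's) builds the vector field f of the proof from a given g,
after which any method for first order systems applies to the nth order equation.

    def reduce_order(g, n):
        """Return the vector field f(t, u) = (u2, ..., un, g(t, u)) from the proof
        of Lemma 12.2.8, where u = (u1, ..., un) plays the role of (y, y', ..., y^(n-1))."""
        def f(t, u):
            assert len(u) == n
            return list(u[1:]) + [g(t, u)]
        return f

    # Example: y'' = -y corresponds to g(t, (u1, u2)) = -u1.
    f = reduce_order(lambda t, u: -u[0], 2)
    print(f(0.0, [0.0, 1.0]))   # [1.0, -0.0], i.e. (w, -y) at (y, w) = (0, 1)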


12.3 Local to global
We proved Theorem 12.2.3 for functions f with

|f(t, u) − f(t, v)| ≤ K|u − v|

for all t ∈ [t0 − δ, t0 + δ] and all u and v. However, this condition is more
restrictive than is necessary.

Theorem 12.3.1. Suppose η > 0 and f : [t0 − η, t0 + η] × [y0 − η, y0 + η] → R
is a continuous function satisfying the condition

|f(t, u) − f(t, v)| ≤ K|u − v|

whenever t ∈ [t0 − η, t0 + η] and u, v ∈ [y0 − η, y0 + η]. Then we can find
a δ > 0 with η ≥ δ such that there exists a unique differentiable function
y : (t0 − δ, t0 + δ) → R which satisfies the equation y′(t) = f(t, y(t)) for all
t ∈ (t0 − δ, t0 + δ) together with the boundary condition y(t0) = y0.
Proof. This is an easy consequence of Theorem 12.2.3. Define a function
f̃ : R² → R as follows.

f̃(t, y) = f(t, y)          if |t − t0| ≤ η, |y − y0| ≤ η,
f̃(t, y) = f(t0 + η, y)     if t > t0 + η, |y − y0| ≤ η,
f̃(t, y) = f(t0 − η, y)     if t < t0 − η, |y − y0| ≤ η,
f̃(t, y) = f̃(t, y0 + η)     if y > y0 + η,
f̃(t, y) = f̃(t, y0 − η)     if y < y0 − η.
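
In computational terms f̃ is obtained by clamping both arguments of f to the
square [t0 − η, t0 + η] × [y0 − η, y0 + η] before evaluating it, which reproduces
the five cases above; clamping cannot increase the Lipschitz constant, since
|clamp(u) − clamp(v)| ≤ |u − v|. The following Python sketch (my own illustration,
not part of the proof) records this.

    def extend(f, t0, y0, eta):
        """Return the extension f-tilde used in the proof of Theorem 12.3.1: it
        agrees with f on the square [t0-eta, t0+eta] x [y0-eta, y0+eta] and,
        outside it, each argument is clamped to its nearest admissible value."""
        def clamp(x, lo, hi):
            return min(max(x, lo), hi)

        def f_tilde(t, y):
            return f(clamp(t, t0 - eta, t0 + eta), clamp(y, y0 - eta, y0 + eta))

        return f_tilde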

We observe that f̃ is continuous and

|f̃(t, u) − f̃(t, v)| ≤ K|u − v|

for all t, u and v.
If we choose δ̃ > 0 with Kδ̃ < 1, then Theorem 12.2.3 tells us that there
exists a unique differentiable function ỹ : (t0 − δ̃, t0 + δ̃) → R which satisfies
the equation ỹ′(t) = f̃(t, ỹ(t)) for all t ∈ (t0 − δ̃, t0 + δ̃) together with the
boundary condition ỹ(t0) = y0. Since ỹ is continuous, we can find a δ > 0
with η ≥ δ, δ̃ ≥ δ and

|ỹ(t) − y0| < η

for all t ∈ (t0 − δ, t0 + δ). If we set y = ỹ|_(t0−δ,t0+δ) (the restriction of ỹ to
(t0 − δ, t0 + δ)), then

(t, y(t)) ∈ [t0 − η, t0 + η] × [y0 − η, y0 + η]

and so

f̃(t, y(t)) = f(t, y(t))

for all t ∈ (t0 − δ, t0 + δ), so y is the unique solution of

y′(t) = f(t, y(t))

as required.
Exercise 12.3.2. (i) Describe f̃ in words.
(ii) It is, I think, clear that f̃ is continuous and

|f̃(t, u) − f̃(t, v)| ≤ K|u − v|

for all t, u and v. Carry out some of the detailed checking which would be
required if someone demanded a complete proof.
Theorem 12.3.1 tells us that, under very wide conditions, the differential
equation has a local solution through each (t0, y0). Does it have a global
solution, that is, if f : R² → R is well behaved, can we find a solution for the
equation y′(t) = f(t, y(t)) which is defined for all t ∈ R? Our first result in
this direction is positive.

Theorem 12.3.3. Suppose f : R² → R is a continuous function satisfying
the following condition. There exists a K : [0, ∞) → [0, ∞) such that

|f(t, u) − f(t, v)| ≤ K(R)|u − v|

whenever |t| ≤ R. Then, given any (t0, y0) ∈ R², there exists a unique y : R → R
which is differentiable and satisfies the equation y′(t) = f(t, y(t)) for all
t ∈ R together with the boundary condition y(t0) = y0.

Note that it makes no difference how fast K(R) increases.

Proof. This proof is worth studying since it is of a type which occurs in several
places in more advanced work. We refer to the equation y′(t) = f(t, y(t)) for
all t ∈ R together with the boundary condition y(t0) = y0 as 'the system'.
Our result will follow if we can show that the system has a unique solution
on [t0, ∞) and on (−∞, t0]. The proof is essentially the same for the two cases,
so we show that the system has a unique solution on [t0, ∞). Observe that,
if we can show that the system has a unique solution on [t0, T) for all T > t0,
we shall have shown that the system has a unique solution on [t0, ∞). (Write
yT : [t0, T) → R for the solution on [t0, T). If S ≥ T then yS(t) = yT(t) for all
t ∈ [t0, T) by uniqueness. Thus we can define y : [t0, ∞) → R by y(t) = yT(t)
for all t ∈ [t0, T). By construction y is a solution of the system on [t0, ∞).
If w : [t0, ∞) → R is a solution of the system on [t0, ∞) then, by uniqueness
on [t0, T), w(t) = yT(t) = y(t) for all t0 ≤ t < T and all T > t0. Since T
was arbitrary, w(t) = y(t) for all t ∈ [t0, ∞).) We can thus concentrate our
efforts on showing that the system has a unique solution on [t0, T) for all
T > t0.
Existence. Let

E = {τ > t0 : the system has a solution on [t0, τ)}.

By Theorem 12.3.1, E is non-empty. If E is bounded, it has a supremum T0,
say. Choose R0 > |T0| + 2 and set K0 = K(R0). By hypothesis,

|f(t, u) − f(t, v)| ≤ K0|u − v|

whenever |t − T0| < 2. Choose δ0 > 0 such that 1 > δ0, T0 − t0 > 2δ0 and
K0δ0 < 1. Since T0 is the supremum of E, we can find T1 ∈ E such that
T1 > T0 − δ0/3. Let y : [t0, T1) → R be a solution of the system and let
T2 = T1 − δ0/3. By Theorem 12.3.1, there exists a unique w : (T2 − δ0, T2 + δ0) → R
such that

w′(t) = f(t, w(t)),   w(T2) = y(T2).

The uniqueness of w means that w(t) = y(t) for all t where both y and w
are defined (that is, on (T2 − δ0, T1)). Setting

ỹ(t) = y(t)   for t0 ≤ t < T1,
ỹ(t) = w(t)   for T2 − δ0 < t < T2 + δ0,

we see that ỹ : [t0, T2 + δ0) → R is a solution of the system. Since T2 + δ0 > T1,
we have a contradiction. Thus, by reductio ad absurdum, E is unbounded
and the system has a solution on [t0, T) for all T > t0.
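
The existence half of the argument is an analytic counterpart of numerical
continuation: solve on a short interval, restart from the value reached, and
repeat. The Python sketch below is illustrative only; a crude fixed-step Euler
integration stands in for the local solution supplied by Theorem 12.3.1, and the
interval length and step counts are arbitrary choices.

    def continue_solution(f, t0, y0, T, piece_len=0.1, n_sub=1000):
        """Extend a solution of y' = f(t, y), y(t0) = y0 from t0 to T by patching
        together 'local solutions' on intervals of length piece_len (each local
        solution is approximated here by Euler steps)."""
        t, y = t0, y0
        while t < T:
            right = min(t + piece_len, T)
            h = (right - t) / n_sub
            for _ in range(n_sub):        # crude local solution on [t, right]
                y += h * f(t, y)
                t += h
            t = right                     # guard against rounding drift
        return y

    # Example: y' = y, y(0) = 1, so y(1) should be close to e = 2.71828...
    print(continue_solution(lambda t, y: y, 0.0, 1.0, 1.0))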

Uniqueness. We need to show that if T > t0 and y and w are solutions of the
system on [t0, T), then y(t) = w(t) for all t ∈ [t0, T). The proof is similar to,
but simpler than, the existence proof just given. Let

E = {T > τ ≥ t0 : y(t) = w(t) for all t ∈ [t0, τ]}.

Since t0 ∈ E, we know that E is non-empty. By definition, E is bounded and
so has a supremum T0. If T0 = T we are done. If not, T0 < T. By continuity,
y(T0) = w(T0). As before, choose R0 > |T0| + 2 and set K0 = K(R0). By
hypothesis,

|f(t, u) − f(t, v)| ≤ K0|u − v|

whenever |t − T0| < 2. Choose δ0 > 0 such that 1 > δ0, T0 − t0 > 2δ0,
T − T0 > 2δ0, and K0δ0 < 1. By Theorem 12.3.1, there exists a unique
z : (T0 − δ0, T0 + δ0) → R such that

z′(t) = f(t, z(t)),   z(T0) = y(T0).

By uniqueness, y(t) = z(t) = w(t) for all t ∈ (T0 − δ0, T0 + δ0). It follows that
y(t) = w(t) for all t ∈ [t0, T0 + δ0) and so, by continuity, for all t ∈ [t0, T0 + δ0].
Thus T0 + δ0 ∈ E, contradicting the definition of T0. The desired result follows
by contradiction.

Exercise 12.3.4. Suppose f : (a, b) × R → R is a continuous function
such that, given any t1 ∈ (a, b) and any y1 ∈ R, we can find an η(t1, y1) > 0
and a K(t1, y1) such that

|f(t, u) − f(t, v)| ≤ K(t1, y1)|u − v|

whenever

t ∈ [t1 − η(t1, y1), t1 + η(t1, y1)] and u, v ∈ [y1 − η(t1, y1), y1 + η(t1, y1)].

Show that, if y, w : (a, b) → R are differentiable functions such that

y′(t) = f(t, y(t)), w′(t) = f(t, w(t)) for all t ∈ (a, b)

and y(t0) = w(t0) for some t0 ∈ (a, b), then y(t) = w(t) for all t ∈ (a, b).

Exercise 12.3.5. Use Example 1.1.3 to show that, in the absence of the
fundamental axiom, we cannot expect even very well behaved differential
equations to have unique solutions.

Looking at Theorem 12.3.3, we may ask if we can replace the condition

|f(t, u) − f(t, v)| ≤ K(R)|u − v| whenever |t| ≤ R

by the condition

|f(t, u) − f(t, v)| ≤ K(R)|u − v| whenever |t|, |u|, |v| ≤ R.

Unless the reader is very alert, the answer comes as a surprise followed almost
at once by surprise that the answer came as a surprise.

Example 12.3.6. Let f(t, y) = 1 + y². Then

|f(t, u) − f(t, v)| ≤ 2R|u − v|

whenever |t|, |u|, |v| ≤ R. However, given t0, y0 ∈ R, there does not exist a
differentiable function y : R → R such that y′(t) = f(t, y(t)) for all t ∈ R and
y(t0) = y0.

Proof. Observe first that

|f(t, u) − f(t, v)| = |u² − v²| = |u + v||u − v| ≤ (|u| + |v|)|u − v| ≤ 2R|u − v|

whenever |t|, |u|, |v| ≤ R.
We can solve the equation

y′ = 1 + y²

formally by considering

dy/(1 + y²) = dt

and obtaining

tan⁻¹ y = t + a,

so that y(t) = tan(t + a) for some constant a. We choose α ∈ (t0 − π/2, t0 + π/2)
so that y0 = tan(t0 − α) satisfies the initial condition and thus obtain

y(t) = tan(t − α)

for α − π/2 < t < α + π/2. We check that we have a solution by direct
differentiation. Exercise 12.3.4 tells us that this is the only solution. Since
tan(t − α) → ∞ as t → α + π/2 through values of t < α + π/2, the required
result follows.
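
A numerical experiment makes the blow-up vivid. The sketch below (an aside, not
part of the proof; the step size and the escape bound are arbitrary) applies
Euler's method to y′ = 1 + y² with y(0) = 0 and records the time at which the
computed values exceed a large bound; that time is close to π/2, where tan t
ceases to exist.

    import math

    def blow_up_time(h=1e-5, bound=1e6):
        """Euler's method for y' = 1 + y^2, y(0) = 0.  The exact solution is
        tan t, defined only for t < pi/2, and the iteration escapes any fixed
        bound at a time close to pi/2."""
        t, y = 0.0, 0.0
        while abs(y) < bound:
            y += h * (1.0 + y * y)
            t += h
        return t

    print(blow_up_time(), math.pi / 2)   # the two numbers are close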

(An alternative proof is outlined in Exercise K.265.)

Exercise 12.3.7. (i) Sketch, on the same diagram, various solutions of
y′(t) = 1 + y(t)² with different initial conditions.
(ii) Identify the point in our proof of Theorem 12.3.3 where the argument
fails for the function f(t, y) = 1 + y².

We may think of local solutions as a lot of jigsaw pieces. Just looking
at the pieces does not tell us whether they fit together to form a complete
jigsaw.
Here is another example which brings together ideas from various parts
of the book. Although the result is extremely important, I suggest that the
reader does not bother too much with the details of the proof.
Lemma 12.3.8. If z0 ∈ C \ {0} and w0 ∈ C, then the differential equation

f′(z) = 1/z

has a solution with f(z0) = w0 in the set

B(z0, |z0|) = {z ∈ C : |z − z0| < |z0|}.

However, the same differential equation

f′(z) = 1/z

has no solution valid in C \ {0}.
Proof. Our first steps reflect the knowledge gained in results like Example 11.5.16
and Exercise 11.5.20. The power series ∑_(j=1)^∞ (−1)^(j+1) z^j/j has radius
of convergence 1. We define h : B(0, 1) → C by

h(z) = ∑_(j=1)^∞ (−1)^(j+1) z^j/j.

Since we can differentiate term by term within the radius of convergence, we
have

h′(z) = ∑_(j=1)^∞ (−1)^(j+1) z^(j−1) = ∑_(j=0)^∞ (−1)^j z^j = 1/(1 + z)

for all |z| < 1. Thus, if we set f(z) = w0 + h((z − z0)z0^(−1)) for z ∈ B(z0, |z0|),
the chain rule gives

f′(z) = (1/z0) × 1/(1 + (z − z0)z0^(−1)) = 1/z

as desired. Simple calculation gives f(z0) = w0.
The second part of the proof is, as one might expect, closely linked to
Example 5.6.13. Suppose, if possible, that there exists an f : C \ {0} → C
satisfying the differential equation

f′(z) = 1/z.

By replacing f by f − f(1), we may suppose that f(1) = 0. Define A : R → C
by A(t) = −if(e^(it)). Writing A(t) = a(t) + ib(t) with a(t) and b(t) real, we
see that A′(t) = a′(t) + ib′(t) exists with value

A′(t) = lim_(δt→0) (−if(e^(i(t+δt))) + if(e^(it)))/δt
      = lim_(δt→0) [(f(e^(i(t+δt))) − f(e^(it)))/(e^(i(t+δt)) − e^(it))] × [(e^(iδt) − 1)/δt] × (−ie^(it))
      = f′(e^(it)) i(−ie^(it)) = e^(it)/e^(it) = 1.

Thus A(t) = t + A(0) = t. In particular,

0 = A(0) = −if(1) = −if(e^(2πi)) = A(2π) = 2π,

which is absurd. Thus no function of the type desired can exist.
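
The obstruction uncovered in the second half of the proof can also be seen
numerically: the integral of 1/z once around the unit circle is 2πi rather than 0,
and −i times this value is the discrepancy A(2π) − A(0) = 2π obtained above. The
Python sketch below (an illustration, not part of the proof) approximates the
contour integral by a Riemann sum over chords of the circle.

    import cmath, math

    def integral_of_reciprocal_around_unit_circle(n=100_000):
        """Riemann-sum approximation to the integral of 1/z once around the unit
        circle, using the points z_k = exp(2*pi*i*k/n); the exact value is 2*pi*i."""
        total = 0 + 0j
        for k in range(n):
            z0 = cmath.exp(2j * math.pi * k / n)
            z1 = cmath.exp(2j * math.pi * (k + 1) / n)
            total += (z1 - z0) / z0       # (1/z) dz along the chord from z0 to z1
        return total

    print(integral_of_reciprocal_around_unit_circle())   # close to 6.2832j, that is 2*pi*i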
Exercise 12.3.9. The proof above is one of the kind where the principal
characters wear masks. Go through the above proof using locutions like 'the
thing that ought to behave like log z if log z existed and behaved as we think
it ought'.
Exercise 12.3.10. Write

Bj = {z ∈ C : |z − e^(πij/3)| < 1}.

Show that there exists a function f1 : ⋃_(j=0)^3 Bj → C with

f1′(z) = 1/z,   f1(1) = 0

and a function f2 : ⋃_(j=3)^6 Bj → C with

f2′(z) = 1/z,   f2(1) = 0.

Find f1 − f2 on B0 and on B3.

We have a lot of beautifully fitting jigsaw pieces but when we put too
many together they overlap instead of forming a complete picture. Much of
complex variable theory can be considered as an extended meditation on
Lemma 12.3.8.
If the reader is prepared to allow a certain amount of hand waving, here
is another example of this kind of problem. Consider the circle T obtained
by 'rolling up the real line like a carpet' so that the point θ is identified
with the point θ + 2π. If we seek a solution of the equation

f′′(θ) + λ²f(θ) = 0

where λ is real and positive, then we can always obtain 'local solutions'
f(θ) = sin(λθ + θ0) valid on any small part of the circle we choose, but only
if λ is an integer can we extend it to the whole circle. When we start doing
analysis on spheres, cylinders, tori and more complicated objects, the problem
of whether we can combine 'local solutions' to form consistent 'global solutions'
becomes more and more central.
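
The integrality condition can be checked directly: sin(λ(θ + 2π) + θ0) = sin(λθ + θ0)
for every θ exactly when λ is an integer, so only then does the local solution
define a function on the circle. The short Python sketch below (an aside; the
sample points, tolerance and choice of θ0 are arbitrary) tests this for a few
values of λ.

    import math

    def extends_to_circle(lam, theta0=0.3, samples=1000, tol=1e-9):
        """Test whether f(theta) = sin(lam*theta + theta0), a local solution of
        f'' + lam**2 * f = 0, satisfies f(theta + 2*pi) = f(theta) and therefore
        defines a function on the whole circle."""
        points = [2 * math.pi * k / samples for k in range(samples)]
        return all(
            abs(math.sin(lam * (t + 2 * math.pi) + theta0)
                - math.sin(lam * t + theta0)) < tol
            for t in points
        )

    for lam in [1, 2, 3, 0.5, 1.5, math.sqrt(2)]:
        print(lam, extends_to_circle(lam))   # True only for integer lam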
The next exercise is straightforward and worthwhile but long.
Exercise 12.3.11. (i) State and prove the appropriate generalisation of
Theorem 12.3.3 to deal with a vectorial differential equation

y′(t) = f(t, y(t)).

(ii) Use (i) to obtain the following generalisation of Lemma 12.2.8. Suppose
g : R^(n+1) → R is a continuous function satisfying the following condition.
There exists a K : [0, ∞) → R such that

|g(t, u) − g(t, v)| ≤ K(R)‖u − v‖

whenever |t| ≤ R. Then, given any (t0, y0, y1, . . . , yn−1) ∈ R^(n+1), there exists
a unique n times differentiable function y : R → R with

y^(n)(t) = g(t, y(t), y′(t), . . . , y^(n−1)(t)) and y^(j)(t0) = yj for 0 ≤ j ≤ n − 1.

Exercise 12.3.12. In this book we have given various approaches to the
exponential and trigonometric functions. Using the material of this section, we