\[
|y(a) - y_N^{[h_N]}| \le \Bigl(\frac{Lh_N}{2K} + \frac{\epsilon}{h_N K}\Bigr)\bigl(e^{aK} - 1\bigr). \tag{E}
\]

What happens to the error estimate (that is, the right-hand side of (E)) as
N → ∞ (that is, as the step length approaches 0)? Explain in simple terms
why we should expect this.
(ii) Which positive value of h minimises Lh/(2K) + ε/(hK)? (iii) Show that
the best bound on the error that we can guarantee using (E) is proportional
to ε^{1/2}.
(iv) Just as in Exercise K.267, we ask whether we can improve substantially
on (iii). Once again, we consider the special case f(t, u) = u, y_0 = 1
and a = 1. A little experimentation shows that we can solve (D) if we choose
ε_r = -ε(1 - rh), giving


\[
\tilde y_{r+1} = (1 + N^{-1})\tilde y_r - \epsilon(1 - rN^{-1}). \tag{F}
\]

Show that, for this choice of ε_r, we obtain

\[
\tilde y_r = (1 + N^{-1})^r - \epsilon r
\]

and so
\[
|y(1) - \tilde y_N^{[h_N]}| = e - (1 + N^{-1})^N + \epsilon N \ge \frac{h_N}{2} + \frac{\epsilon}{h_N} \ge 2^{1/2}\epsilon^{1/2}.
\]

Thus the answer to our question is no.
There exist several different ways of improving on Euler's method but a
good understanding of Euler's method is very helpful in understanding how
anybody came to think of these more advanced techniques.
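[Readers who like to see the trade-off of (ii) and (iii) numerically may try a
sketch along the following lines; the choice of f, ε and the step counts are
mine and not part of the exercise.]

```python
import math

def perturbed_euler(f, y0, a, N, eps):
    """Euler's method with a perturbation of size at most eps added at each step,
    mimicking the perturbed scheme discussed above (a crude model of round-off)."""
    h = a / N
    y = y0
    for r in range(N):
        y = y + h * f(r * h, y) - eps * (1 - r * h)   # |perturbation| <= eps
    return y

# Special case f(t, u) = u, y(0) = 1, a = 1, so that y(1) = e.
eps = 1e-6
for N in [10, 100, 1000, 10_000, 100_000]:
    err = abs(math.e - perturbed_euler(lambda t, u: u, 1.0, 1.0, N, eps))
    print(f"N = {N:7d}   error = {err:.2e}")
# The error first decreases roughly like h/2 and then grows like eps/h,
# so the best achievable error is of order eps**(1/2), as in part (iii).
```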

Exercise K.269. [12.3, H, ↑] (i) Let a > 0. By considering the differential
equation y'(x) = g(x) and using Exercise K.266, show that there exists a
constant κ_a such that, if g : R → R is differentiable, |g'(t)| ≤ M for all
t ∈ [0, a], N is a strictly positive integer and Nh = a, then

\[
\Bigl|\int_0^a g(t)\,dt - h\sum_{r=0}^{N-1} g(rh)\Bigr| \le M\kappa_a h.
\]

(It is easy to obtain this result directly with better values for κ_a. See, for
example, Exercise K.125 (v) or just think a bit.)
(ii) Suppose that a = π, Nh = a with N a strictly positive integer and
G(t) = sin²(Nt). Show that

\[
\Bigl|\int_0^a G(t)\,dt - h\sum_{r=0}^{N-1} G(rh)\Bigr| \ge 10^{-2}\sup_{t\in[0,a]}|G(t)|.
\]

(iii) Consider the situation described in Exercise K.266. Suppose that we
replace the condition |f_{,1}(t, u)| + |f_{,2}(t, u)f(t, u)| ≤ L by the weaker condition
|f(t, u)| ≤ L for all (t, u) ∈ R². If a > 0 and ε > 0 are fixed, is it possible to
find an N_0 (depending only on K, L, a and ε) such that

\[
|y(a) - y_N^{[h_N]}| \le \epsilon
\]

for all N ≥ N_0?
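[The contrast between parts (i) and (ii) is easy to see numerically. The
following sketch, with a smooth integrand of my own choosing for comparison,
sets it against the resonant integrand of part (ii).]

```python
import numpy as np

def left_riemann(fn, a, N):
    """Left-endpoint Riemann sum h * sum_{r=0}^{N-1} fn(rh) with h = a/N."""
    h = a / N
    return h * fn(np.arange(N) * h).sum()

a = np.pi
for N in [10, 100, 1000]:
    # A fixed smooth integrand: int_0^pi cos t dt = 0; the error is of order h.
    smooth_err = abs(0.0 - left_riemann(np.cos, a, N))
    # The resonant integrand of part (ii): every sample point rh is a zero of
    # sin^2(Nt), so the sum vanishes while the integral equals pi/2.
    resonant_err = abs(a / 2 - left_riemann(lambda t: np.sin(N * t) ** 2, a, N))
    print(f"N = {N:5d}   smooth error = {smooth_err:.2e}   resonant error = {resonant_err:.3f}")
```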
Exercise K.270. (Leibniz rule.) [12.3, G] (i) If u, v : R → R are n
times differentiable at x show, by induction or otherwise, that the product
uv is n times differentiable at x with

\[
(uv)^{(n)}(x) = \sum_{r=0}^{n}\binom{n}{r}u^{(n-r)}(x)v^{(r)}(x).
\]

(This is called the Leibniz rule.)
(ii) If y : R → R is defined by

\[
(1 + x^2)^{1/2}\,y(x) = \log\bigl(x + (1 + x^2)^{1/2}\bigr),
\]

show that y is differentiable on (0, ∞) with

\[
(1 + x^2)y'(x) + xy(x) = 1.
\]

Show, by using induction and the Leibniz rule, that y is infinitely
differentiable and find the Taylor series

\[
\sum_{n=0}^{\infty}\frac{y^{(n)}(0)}{n!}x^n.
\]

(iii) (This part uses sledgehammers to crack a nut and may be omitted
without loss.) Find the radius of convergence of the Taylor series and, by
using results on the differentiation of power series and the uniqueness of the
solution of differential equations, show that, within the radius of convergence,

\[
y(x) = \sum_{n=0}^{\infty}\frac{y^{(n)}(0)}{n!}x^n.
\]
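[A quick symbolic check of part (ii), and of the first few Taylor coefficients,
can be made along the following lines; the sketch uses sympy and is, of course,
no substitute for the induction asked for.]

```python
import sympy as sp

x = sp.symbols('x')
y = sp.log(x + sp.sqrt(1 + x**2)) / sp.sqrt(1 + x**2)   # the y of part (ii)

# Check the relation (1 + x^2) y'(x) + x y(x) = 1.
print(sp.simplify((1 + x**2) * sp.diff(y, x) + x * y - 1))   # should print 0

# The Taylor coefficients y^(n)(0)/n!; differentiating the relation n times
# with the Leibniz rule gives a recursion for y^(n)(0), which this reflects.
print(sp.series(y, x, 0, 8))   # expect x - 2*x**3/3 + 8*x**5/15 - 16*x**7/35 + O(x**8)
```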


Exercise K.271. [12.4, P, S] Let K : [0, 1]² → R and g : [0, 1] → R be
continuous. Explain why there exists an M > 0 such that |K(s, t)| ≤ M
for all (s, t) ∈ [0, 1]². Suppose that |λ| < M^{-1}. By finding an appropriate
contraction mapping T : C([0, 1]) → C([0, 1]) show that there exists a unique
f ∈ C([0, 1]) such that

\[
f(t) = g(t) + \lambda\int_0^1 K(s, t)f(s)\,ds.
\]
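[A discretised version of the contraction argument makes a pleasant
experiment. In the sketch below the kernel, the function g and the value of λ
are illustrative choices of mine, subject only to |λ| sup|K| < 1.]

```python
import numpy as np

n = 400
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]
K = np.exp(-np.abs(s[:, None] - s[None, :]))     # K(s, t); sup |K| = 1, so take M = 1
g = np.sin(np.pi * s)
lam = 0.5                                         # |lam| < M^{-1}, so T is a contraction

# (Tf)(t) = g(t) + lam * integral_0^1 K(s, t) f(s) ds, with a crude quadrature in s.
T = lambda f: g + lam * h * (K * f[:, None]).sum(axis=0)

f = np.zeros(n)
for _ in range(200):
    f = T(f)
print("fixed-point residual:", np.max(np.abs(f - T(f))))   # should be very small
```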

Exercise K.272. (The Wronskian.) [12.4, M] This should be treated
as an exercise in calculus rather than analysis. We write

\[
\begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{vmatrix}
= \det\begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{pmatrix}.
\]
(i) If u_1, u_2, ..., w_3 are all differentiable show that

\[
\frac{d}{dt}\begin{vmatrix} u_1(t) & u_2(t) & u_3(t)\\ v_1(t) & v_2(t) & v_3(t)\\ w_1(t) & w_2(t) & w_3(t)\end{vmatrix}
= \begin{vmatrix} u_1'(t) & u_2'(t) & u_3'(t)\\ v_1(t) & v_2(t) & v_3(t)\\ w_1(t) & w_2(t) & w_3(t)\end{vmatrix}
+ \begin{vmatrix} u_1(t) & u_2(t) & u_3(t)\\ v_1'(t) & v_2'(t) & v_3'(t)\\ w_1(t) & w_2(t) & w_3(t)\end{vmatrix}
+ \begin{vmatrix} u_1(t) & u_2(t) & u_3(t)\\ v_1(t) & v_2(t) & v_3(t)\\ w_1'(t) & w_2'(t) & w_3'(t)\end{vmatrix}.
\]

(ii) If u_1, u_2 and u_3 are three solutions of

\[
y'''(t) + a(t)y''(t) + b(t)y'(t) + c(t)y(t) = 0,
\]

we define their Wronskian W by

\[
W(t) = \begin{vmatrix} u_1(t) & u_2(t) & u_3(t)\\ u_1'(t) & u_2'(t) & u_3'(t)\\ u_1''(t) & u_2''(t) & u_3''(t)\end{vmatrix}.
\]

Use part (i) and results about determinants to show that

\[
W'(t) = -a(t)W(t).
\]

(iii) Generalise parts (i) and (ii). Reread the proof of part (i) of Lemma 12.4.3
in the light of this exercise.
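[The identity of part (ii) is easy to check symbolically on a concrete
equation. In the sketch below the constant-coefficient equation and its
solutions are my own illustrative choice.]

```python
import sympy as sp

t = sp.symbols('t')
# (D - 1)(D - 2)(D - 3)y = y''' - 6y'' + 11y' - 6y = 0, so here a(t) = -6.
u = [sp.exp(t), sp.exp(2 * t), sp.exp(3 * t)]
a = -6

W = sp.Matrix(3, 3, lambda i, j: sp.diff(u[j], t, i)).det()   # rows: u, u', u''
print(sp.simplify(sp.diff(W, t) + a * W))                     # 0, i.e. W' = -a W
```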

Exercise K.273. [12.4, T, S] The functions f, g : R → R are everywhere
differentiable and their Wronskian f g' − g f' never vanishes. By applying
Rolle's theorem to f/g, or otherwise, show that if f has zeros at a and b with
a < b then g must have a zero strictly between a and b. Deduce that 'the
zeros of f and g are intertwined'.
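[As a toy illustration (my own choice, not part of the exercise), take f = sin
and g = cos: the Wronskian f g' − g f' = −1 never vanishes, and the zeros kπ
of f and kπ + π/2 of g do indeed alternate.]

```python
import numpy as np

zeros = sorted([(k * np.pi, 'f') for k in range(5)] +
               [(k * np.pi + np.pi / 2, 'g') for k in range(5)])
print(''.join(label for _, label in zeros))   # 'fgfgfgfgfg': the zeros alternate
```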

Exercise K.274. [12.4, P] If f_1, f_2 : [a, b] → R are once differentiable we
define the Wronskian W(f_1, f_2) by

\[
W(f_1, f_2) = \det\begin{pmatrix} f_1 & f_2\\ f_1' & f_2'\end{pmatrix}.
\]

Show that if f_1 and f_2 are linearly dependent (that is, we can find λ_1, λ_2 ∈ R,
not both zero, such that λ_1 f_1(t) + λ_2 f_2(t) = 0 for all t ∈ [a, b]) then the
Wronskian W(f_1, f_2) is identically zero. By considering functions of the type
g with g(x) = 0 for x ≤ c, g(x) = (x − c)² for x ≥ c, or otherwise, show that
the converse is false.
Suppose now that f_1, f_2 are twice continuously differentiable. Show, by
considering the Wronskian

\[
W(f, f_1, f_2) = \det\begin{pmatrix} f & f_1 & f_2\\ f' & f_1' & f_2'\\ f'' & f_1'' & f_2''\end{pmatrix}
\]

and using results on the existence and uniqueness of differential equations
with given initial conditions, that the following result holds.
There exists a differential equation of the form

\[
y''(x) + a_1(x)y'(x) + a_2(x)y(x) = 0
\]

whose solutions are exactly the functions of the form λ_1 f_1 + λ_2 f_2 with
λ_1, λ_2 ∈ R (in more sophisticated language, having f_1 and f_2 as a basis for the
space of solutions) if and only if W(f_1, f_2) is non-zero everywhere on [a, b].
Generalise the results of this question to n functions f_1, f_2, ..., f_n.
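[The determinant W(y, f_1, f_2) gives the equation explicitly, which a symbolic
sketch makes concrete. The pair f_1 = e^x, f_2 = xe^x below is my own
illustrative choice; its Wronskian e^{2x} never vanishes.]

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
f1, f2 = sp.exp(x), x * sp.exp(x)

# W(y, f1, f2) = 0 is a second-order equation having f1, f2 as a basis of solutions.
W = sp.Matrix([[y(x), f1, f2],
               [sp.diff(y(x), x), sp.diff(f1, x), sp.diff(f2, x)],
               [sp.diff(y(x), x, 2), sp.diff(f1, x, 2), sp.diff(f2, x, 2)]]).det()
Wf = f1 * sp.diff(f2, x) - f2 * sp.diff(f1, x)       # W(f1, f2) = exp(2x), never zero

print(sp.simplify(W / Wf))   # expect y''(x) - 2*y'(x) + y(x), i.e. y'' - 2y' + y = 0
```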

Exercise K.275. [12.4, M] (This question should be treated as a mathematical
methods one.)
(i) Consider a differential equation

\[
y''(x) + p(x)y'(x) + q(x)y(x) = 0. \tag{$\ast$}
\]

Suppose that u is a solution of (∗). Show that if we write y(x) = u(x)v(x)
and w(x) = v'(x) then (∗) takes the form of a first order differential equation
in w. Show that y(x) = (A + B∫_0^x w(t) dt)u(x) gives the general solution
of (∗).
(ii) Show that y(x) = exp x gives a solution of

\[
xy''(x) - (2x + 1)y'(x) + (x + 1)y(x) = 0
\]

and use the method of (i) to find the general solution.
(iii) We know that y(x) = exp x gives a solution of

\[
y''(x) - 2y'(x) + y(x) = 0.
\]

Find the general solution by the method of (i).
(iv) Show how, if we know one solution of

\[
y'''(x) + p_1(x)y''(x) + p_2(x)y'(x) + p_3(x)y(x) = 0,
\]

we can reduce the problem of finding a general solution to that of solving a
second order linear differential equation. Generalise.
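[Part (ii), with the equation as reconstructed above, can be pushed through a
computer algebra system. The sketch below substitutes y = v e^x, observes that
the result is first order in w = v', and solves that first-order equation.]

```python
import sympy as sp

x = sp.symbols('x')
v, w = sp.Function('v'), sp.Function('w')
u = sp.exp(x)                     # the known solution of part (ii)
y = u * v(x)                      # reduction of order: y = u(x) v(x)

ode = x * sp.diff(y, x, 2) - (2 * x + 1) * sp.diff(y, x) + (x + 1) * y
print(sp.expand(sp.simplify(ode / u)))        # expect x*v''(x) - v'(x)

# The equation for w = v' is x w'(x) = w(x), whose solutions are w = C*x,
# so v = A + B*x**2 and the general solution is y = (A + B*x**2)*exp(x).
print(sp.dsolve(x * sp.diff(w(x), x) - w(x), w(x)))   # w(x) = C1*x
```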

Exercise K.276. (Method of variation of parameters.) [12.4, M, ↑]
(Like Question K.275 this question should be treated as a mathematical
methods one.)
Consider a differential equation

\[
y''(x) + p(x)y'(x) + q(x)y(x) = 0. \tag{$\ast$}
\]

Let y_1 and y_2 be solutions of (∗) whose Wronskian W(x) = y_1(x)y_2'(x) −
y_1'(x)y_2(x) never vanishes. Suppose that we wish to solve the differential
equation

\[
y''(x) + p(x)y'(x) + q(x)y(x) = f(x). \tag{$\ast\ast$}
\]

(i) In view of the success of the method of Question K.275 we might be
tempted to look for a solution of the form

y(x) = u_1(x)y_1(x) + u_2(x)y_2(x).

Make this substitution in (∗∗).
(ii) At the end of (i) you obtained a differential equation (†) for u_1 and
u_2. It is a useful heuristic guide (though not always a reliable one) that two
unknowns require two equations so we add the further equation

\[
u_1'(x)y_1(x) + u_2'(x)y_2(x) = 0. \tag{$\dagger\dagger$}
\]

Use (††) to simplify (†) and then use the pair of equations to find u_1 and u_2.
Now find the general solution of (∗∗).
(iii) Discuss the relation of your result to the Green's function result given
in Theorem 12.4.6.
(iv) Use the method of this question to find the general solution of

\[
y''(x) - y(x) = \frac{2}{1 + e^x}.
\]
(v) Try and extend the ideas of this question to the solution of

\[
y'''(x) + p(x)y''(x) + q(x)y'(x) + r(x)y(x) = f(x)
\]

when three solutions y_1, y_2, y_3 with non-zero Wronskian (see Question K.272)
are known.
[It would be hard to think of an example more opposed to the modern view
that 'understanding precedes doing'. But even the most convinced proponent
of modern ways must admit that it has a certain charm.]
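[Part (iv) can be checked symbolically. The following sketch carries out the
variation-of-parameters recipe for y'' − y = 2/(1 + e^x) with the basis
y_1 = e^x, y_2 = e^{−x}.]

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(x), sp.exp(-x)                      # basis of solutions of y'' - y = 0
f = 2 / (1 + sp.exp(x))
W = y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2       # Wronskian, here identically -2

# Variation of parameters: u1' = -y2*f/W, u2' = y1*f/W, particular solution u1*y1 + u2*y2.
u1 = sp.integrate(-y2 * f / W, x)
u2 = sp.integrate(y1 * f / W, x)
yp = sp.simplify(u1 * y1 + u2 * y2)

print(yp)
print(sp.simplify(sp.diff(yp, x, 2) - yp - f))      # 0: yp solves y'' - y = f
# The general solution is then yp + A*exp(x) + B*exp(-x).
```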
Exercise K.277. [12.4, P] Let C(S) be the space of continuous functions
on the unit square

\[
S = \{(x, y) : 0 \le x \le 1,\ 0 \le y \le 1\}
\]

equipped with the uniform norm ‖K‖_∞ = sup_{(x,y)∈S} |K(x, y)| and let C(I) be
the space of continuous functions on the unit interval I = [0, 1] equipped with
the uniform norm. Let L be the space of continuous linear maps T : C(I) → C(I)
equipped with the operator norm. If K ∈ C(S) we set

\[
(T_K(f))(x) = \int_0^1 K(x, y)f(y)\,dy.
\]

Show that T_K ∈ L.
Prove or disprove each of the following statements.
(i) If K_n → K in C(S), then T_{K_n} → T_K in L.
(ii) If T_{K_n} → T_K in L, then K_n → K in C(S).
(iii) The mapping K → T_K is an injective map from C(S) to L.
(iv) The mapping K → T_K is a surjective map from C(S) to L.
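[A rough numerical illustration of the bound ‖T_K f‖_∞ ≤ ‖K‖_∞ ‖f‖_∞, which lies
behind statement (i); the kernel and the discretisation are my own choices.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
ygrid = np.linspace(0.0, 1.0, n)
h = ygrid[1] - ygrid[0]
K = np.cos(3 * ygrid[:, None] * ygrid[None, :])   # a continuous kernel on the square

ratios = []
for _ in range(50):
    f = rng.standard_normal(n)
    Tf = h * (K * f[None, :]).sum(axis=1)         # crude quadrature in y
    ratios.append(np.max(np.abs(Tf)) / np.max(np.abs(f)))
print("largest ratio ||T_K f||/||f|| :", max(ratios))
print("||K||_inf                     :", np.max(np.abs(K)))
```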

Exercise K.278. [12.4, T] (i) In Exercise K.132, you showed that if g :
R² → R is a differentiable function with continuous partial derivatives then

\[
G(x) = \int_0^x g(x, t)\,dt
\]

is differentiable with

\[
G'(x) = g(x, x) + \int_0^x g_{,1}(x, t)\,dt.
\]

Review this exercise.
(ii) Suppose that a, b, f : R → R are continuous. Explain why there
exists a unique twice differentiable y_1 : R → R such that

\[
y_1''(t) + a(t)y_1'(t) + b(t)y_1(t) = 0, \qquad y_1(0) = 0,\ y_1'(0) = 1,
\]

and a unique twice differentiable y_2 : R → R such that

\[
y_2''(t) + a(t)y_2'(t) + b(t)y_2(t) = 0, \qquad y_2(1) = 0,\ y_2'(1) = 1.
\]

We make the following key assumption: y_1(1) ≠ 0.
Show that, if we define H, H̃ : R² → R by

\[
H(s, t) = y_1(t)y_2(s)W(s)^{-1}, \qquad \tilde H(s, t) = y_2(t)y_1(s)W(s)^{-1},
\]

where W = y_1 y_2' − y_1' y_2 is the Wronskian of y_1 and y_2, then H and H̃ are
twice differentiable functions with continuous second partial derivatives and

\[
H_{,22}(s, t) + a(t)H_{,2}(s, t) + b(t)H(s, t) = 0, \qquad H(s, 0) = 0,
\]
\[
\tilde H_{,22}(s, t) + a(t)\tilde H_{,2}(s, t) + b(t)\tilde H(s, t) = 0, \qquad \tilde H(s, 1) = 0.
\]
Check that H(t, t) = H̃(t, t) and H̃_{,2}(t, t) − H_{,2}(t, t) = 1.
(iii) We define G : R² → R by G(s, t) = H(s, t) for t ≤ s and G(s, t) =
H̃(s, t) for s ≤ t. If we set

\[
y(t) = \int_0^1 G(s, t)f(s)\,ds = \int_0^t \tilde H(s, t)f(s)\,ds + \int_t^1 H(s, t)f(s)\,ds,
\]

show, using part (i) and the properties of H and H̃ established in part (ii)
(but not using the definitions of H and H̃), that y is twice differentiable and
satisfies

\[
y''(t) + a(t)y'(t) + b(t)y(t) = f(t)
\]
together with the conditions y(0) = y(1) = 0.
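[A concrete symbolic instance (my own choice: a(t) = 0, b(t) = −1 and f(t) = t,
so that the equation is y'' − y = t) shows the construction in action.]

```python
import sympy as sp

t, s = sp.symbols('t s')
y1 = sp.sinh(t)          # y1(0) = 0, y1'(0) = 1
y2 = sp.sinh(t - 1)      # y2(1) = 0, y2'(1) = 1; note y1(1) = sinh 1 != 0
W = sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)   # = sinh(1)

f = s                    # the forcing term f(s) = s
H  = y1 * y2.subs(t, s) / W              # H(s, t),       used for t <= s
Ht = y2 * y1.subs(t, s) / W              # H-tilde(s, t), used for s <= t
y = sp.simplify(sp.integrate(Ht * f, (s, 0, t)) + sp.integrate(H * f, (s, t, 1)))

print(y)                                                      # expect sinh(t)/sinh(1) - t
print(sp.simplify(sp.diff(y, t, 2) - y - t))                  # 0: y'' - y = t
print(sp.simplify(y.subs(t, 0)), sp.simplify(y.subs(t, 1)))   # 0 0: boundary conditions
```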

Exercise K.279. [12.4, M] Consider the differential equation

\[
y^{(4)}(x) - k^4 y(x) = f(x)
\]

for y : [0, 1] → R subject to boundary conditions y(0) = y'(0) = 0, y(1) =
y'(1) = 0 where f : [0, 1] → R is continuous and k is real.
By extending the discussion of the Green's function in Section 12.4 show
that, provided that k does not take certain exceptional values to be identified,
the system has the solution

\[
y(x) = \int_0^1 G(x, t)f(t)\,dt
\]

where

\[
G(x, t) =
\begin{cases}
A(\sinh kx - \sin kx) + B(\cosh kx - \cos kx) & \text{if } 0 \le x \le t,\\
C(\sinh k(1 - x) - \sin k(1 - x)) + D(\cosh k(1 - x) - \cos k(1 - x)) & \text{if } t \le x \le 1,
\end{cases}
\]

and A, B, C, D are given by

\[
\begin{pmatrix} A\\ B\\ C\\ D\end{pmatrix} = M^{-1}\begin{pmatrix} 1\\ 0\\ 0\\ 0\end{pmatrix},
\]

where M is a 4 × 4 matrix to be given explicitly.
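[A numerical sanity check that the boundary-value problem is well posed for a
generic k can be made with scipy; the values of k and f below are my own.]

```python
import numpy as np
from scipy.integrate import solve_bvp

k = 2.0
f = lambda x: np.sin(np.pi * x)

def rhs(x, Y):                       # Y = (y, y', y'', y''')
    return np.vstack([Y[1], Y[2], Y[3], k**4 * Y[0] + f(x)])

def bc(Ya, Yb):                      # clamped conditions y = y' = 0 at both ends
    return np.array([Ya[0], Ya[1], Yb[0], Yb[1]])

x = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((4, x.size)))
print(sol.status, sol.message)       # status 0: the solver converged to a solution
print("max |y| =", np.max(np.abs(sol.sol(x)[0])))
```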

Exercise K.280. [12.4, M] Consider the equation for a damped harmonic
oscillator

\[
y''(t) + 2\beta y'(t) + \omega^2 y(t) = f(t),
\]

where β and ω are strictly positive real numbers and y : [0, ∞) → R satisfies
y(0) = y'(0) = 0. Use Green's function methods to show that, if β > ω,

\[
y(t) = \alpha^{-1}\int_0^t f(s)e^{-\beta(t-s)}\sinh\bigl(\alpha(t - s)\bigr)\,ds,
\]

where α is the positive square root of β² − ω², and obtain a similar result in
the cases when ω > β and ω = β.
It is known that the driving force f is non-zero only when t is very small.
Sketch the behaviour of y(t) for large t and determine the value of β which
causes y to die away as fast as possible. (Interpreting this last phrase in a
reasonable manner is part of your task.)
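[The formula in the overdamped case can be checked numerically against a
direct solution of the initial value problem; the forcing and the parameter
values below are my own choices.]

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, omega = 2.0, 1.0
alpha = np.sqrt(beta**2 - omega**2)
f = lambda t: np.exp(-10 * t)        # a force concentrated near t = 0

# Direct numerical solution of y'' + 2*beta*y' + omega^2*y = f, y(0) = y'(0) = 0.
sol = solve_ivp(lambda t, Y: [Y[1], f(t) - 2 * beta * Y[1] - omega**2 * Y[0]],
                (0.0, 5.0), [0.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)

# Green's function formula y(t) = (1/alpha) int_0^t f(s) e^{-beta(t-s)} sinh(alpha(t-s)) ds.
def green(t, n=4000):
    s = np.linspace(0.0, t, n)
    g = f(s) * np.exp(-beta * (t - s)) * np.sinh(alpha * (t - s))
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s)) / alpha   # trapezoid rule

for t in [0.5, 1.0, 2.0, 4.0]:
    print(t, sol.sol(t)[0], green(t))   # the two columns should agree closely
```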

Exercise K.281. [13.1, T] (Although this sequence of exercises seems to
find a natural place here they could have been placed earlier and linked with
Section 11.1.) Let (U, ‖ ‖_U) be a complete normed vector space. By
Exercise 11.1.15, the space L(U, U) of continuous linear maps with the
operator norm is complete.
(i) If T ∈ L(U, U) and ‖T‖ < 1, show that the sequence S_n = Σ_{j=0}^{n} T^j
is Cauchy. Deduce that S_n converges in the operator norm to a limit
S = Σ_{j=0}^{∞} T^j. Show also that ‖S‖ ≤ (1 − ‖T‖)^{-1} and
‖I − S‖ ≤ ‖T‖(1 − ‖T‖)^{-1}.
(ii) We continue with the notation and assumptions of (i). By looking at
S_n(I − T), show that S(I − T) = I. Show also that (I − T)S = I. Conclude
that, if A ∈ L(U, U) and ‖I − A‖ < 1, then A is invertible and

\[
\|A^{-1}\| \le (1 - \|I - A\|)^{-1} \quad\text{and}\quad \|I - A^{-1}\| \le \|I - A\|(1 - \|I - A\|)^{-1}.
\]
(iii) Suppose that B ∈ L(U, U) is invertible and ‖B − C‖ < ‖B^{-1}‖^{-1}. If we
set A = B^{-1}C, show that ‖I − A‖ < 1 and so A = B^{-1}C is invertible. Show
that A^{-1}B^{-1}C = I and CA^{-1}B^{-1} = BAA^{-1}B^{-1} = I, and so C is
invertible with inverse A^{-1}B^{-1}. Show, further, that

\[
\|C^{-1}\| \le \|B^{-1}\|(1 - \|B^{-1}\|\|B - C\|)^{-1} \quad\text{and}\quad
\|B^{-1} - C^{-1}\| \le \|B^{-1}\|^2\|B - C\|(1 - \|B^{-1}\|\|B - C\|)^{-1}.
\]
(iv) Let E be the set of invertible C ∈ L(U, U). Show that E is open in
(L(U, U), ‖ ‖). Show further that, if we define Θ : E → E by Θ(C) = C^{-1},
then Θ is a continuous function.
(v) Returning to the discussion in (i) and (ii) show that if ‖T‖ < 1 then

\[
\|(I - T)^{-1} - I - T\| \le \|T\|^2(1 - \|T\|)^{-1}.
\]
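[The estimates of (i) and (v) are easy to see in finite dimensions, where
L(U, U) is just a space of matrices. The matrix and the norm below are
illustrative choices of mine.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
T = rng.standard_normal((n, n))
T *= 0.4 / np.linalg.norm(T, 2)                 # scale so that ||T|| = 0.4 < 1
I = np.eye(n)
norm = lambda A: np.linalg.norm(A, 2)

# Partial sums S_m = sum_{j=0}^m T^j converge to (I - T)^{-1}.
S, P = np.zeros((n, n)), I.copy()
for _ in range(60):
    S, P = S + P, P @ T
inv = np.linalg.inv(I - T)
print("||S - (I-T)^{-1}||      :", norm(S - inv))
print("||S|| and 1/(1-||T||)   :", norm(S), 1 / (1 - 0.4))
print("||(I-T)^{-1} - I - T||  :", norm(inv - I - T))
print("bound ||T||^2/(1-||T||) :", 0.4**2 / (1 - 0.4))
```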

