
TABLE 6.7. Newton's method for the approximation of the root of f(x) = e^{-x} - η = 0; the stopping test is based on the control of the increment

6.6 Post-processing Techniques for Iterative Methods
We conclude this chapter by introducing two algorithms that aim at accelerating the convergence of iterative methods for finding the roots of a function.


6.6.1 Aitken's Acceleration
We describe this technique in the case of linearly convergent fixed-point methods, referring to [IK66], pp. 104-108, for the case of methods of higher order.
Consider a fixed-point iteration that is linearly converging to a zero α of a given function f. Denoting by λ an approximation of φ′(α) to be suitably determined and recalling (6.18), we have, for k ≥ 1,

\[
\alpha = \frac{x^{(k)} - \lambda x^{(k-1)}}{1-\lambda}
       = \frac{x^{(k)} - \lambda x^{(k)} + \lambda x^{(k)} - \lambda x^{(k-1)}}{1-\lambda}
       = x^{(k)} + \frac{\lambda}{1-\lambda}\,\bigl(x^{(k)} - x^{(k-1)}\bigr). \tag{6.32}
\]
Aitken's method provides a simple way of computing λ that is able to accelerate the convergence of the sequence {x^{(k)}} to the root α. With this aim, let us consider, for k ≥ 2, the following ratio

\[
\lambda^{(k)} = \frac{x^{(k)} - x^{(k-1)}}{x^{(k-1)} - x^{(k-2)}}, \tag{6.33}
\]
and check that

\[
\lim_{k\to\infty} \lambda^{(k)} = \phi'(\alpha). \tag{6.34}
\]

Indeed, for k sufficiently large

\[
x^{(k+2)} - \alpha \simeq \phi'(\alpha)\,\bigl(x^{(k+1)} - \alpha\bigr),
\]
and thus, elaborating (6.33), we get

\[
\lim_{k\to\infty} \lambda^{(k)}
= \lim_{k\to\infty} \frac{x^{(k)} - x^{(k-1)}}{x^{(k-1)} - x^{(k-2)}}
= \lim_{k\to\infty} \frac{(x^{(k)} - \alpha) - (x^{(k-1)} - \alpha)}{(x^{(k-1)} - \alpha) - (x^{(k-2)} - \alpha)}
\]
\[
= \lim_{k\to\infty}
\frac{\dfrac{x^{(k)} - \alpha}{x^{(k-1)} - \alpha} - 1}
     {1 - \dfrac{x^{(k-2)} - \alpha}{x^{(k-1)} - \alpha}}
= \frac{\phi'(\alpha) - 1}{1 - \dfrac{1}{\phi'(\alpha)}}
= \phi'(\alpha),
\]

which is (6.34). Substituting in (6.32) λ with its approximation λ^{(k)} given by (6.33) yields the updated estimate of α

\[
\alpha \simeq x^{(k)} + \frac{\lambda^{(k)}}{1-\lambda^{(k)}}\,\bigl(x^{(k)} - x^{(k-1)}\bigr), \tag{6.35}
\]


which, rigorously speaking, is significant only for a sufficiently large k. However, assuming that (6.35) holds for any k ≥ 2, we denote by x̃^{(k)} the new approximation of α that is obtained by plugging (6.33) back into (6.35)

\[
\widetilde{x}^{(k)} = x^{(k)} -
\frac{\bigl(x^{(k)} - x^{(k-1)}\bigr)^2}
     {\bigl(x^{(k)} - x^{(k-1)}\bigr) - \bigl(x^{(k-1)} - x^{(k-2)}\bigr)},
\qquad k \ge 2. \tag{6.36}
\]

This relation is known as Aitken's extrapolation formula.
Letting, for k ≥ 2,
\[
\Delta x^{(k)} = x^{(k)} - x^{(k-1)}, \qquad
\Delta^2 x^{(k)} = \Delta\bigl(\Delta x^{(k)}\bigr) = \Delta x^{(k+1)} - \Delta x^{(k)},
\]

formula (6.36) can be written as

\[
\widetilde{x}^{(k)} = x^{(k)} - \frac{\bigl(\Delta x^{(k)}\bigr)^2}{\Delta^2 x^{(k-1)}},
\qquad k \ge 2. \tag{6.37}
\]


Form (6.37) explains the reason why method (6.36) is more commonly known as Aitken's Δ² method.
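In code, one Aitken step takes the three values x, φ(x), φ(φ(x)), extrapolates via (6.36), and restarts from the extrapolated value, as Program 55 later in this section does. Below is a minimal Python sketch; the function name, its arguments and the increment-based stopping rule are our illustrative choices.

```python
import math

def aitken_fp(phi, x0, tol=1e-10, nmax=100):
    """Aitken extrapolation: apply (6.36) to x, phi(x), phi(phi(x))
    and restart from the extrapolated value."""
    x = x0
    for _ in range(nmax):
        x1 = phi(x)                  # x^(k+1) = phi(x^(k))
        x2 = phi(x1)                 # x^(k+2) = phi(x^(k+1))
        den = x2 - 2.0 * x1 + x      # second difference of the iterates
        if den == 0.0:               # increments vanished: converged
            return x2
        x_new = x - (x1 - x) ** 2 / den
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# linearly convergent map of Example 6.10 below, with fixed point alpha = 1
phi1 = lambda x: (math.exp(x) + x) / (math.exp(x) + 1.0)
root = aitken_fp(phi1, 2.0)
```

Each step costs two evaluations of φ; for a linearly convergent φ the extrapolated sequence converges with order 2 (Property 6.7).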
For the convergence analysis of Aitken's method, it is useful to write (6.36) as a fixed-point method in the form (6.17), by introducing the iteration function
\[
\phi_\Delta(x) = \frac{x\,\phi(\phi(x)) - \bigl[\phi(x)\bigr]^2}{\phi(\phi(x)) - 2\phi(x) + x}. \tag{6.38}
\]

This function is indeterminate at x = α since φ(α) = α; however, by applying L'Hôpital's rule one can easily check that lim_{x→α} φ_Δ(x) = α

under the assumption that φ is differentiable at α and φ′(α) ≠ 1. Thus, φ_Δ is consistent and has a continuous extension at α, the same being also true if α is a multiple root of f. Moreover, it can be shown that the fixed points of (6.38) coincide with those of φ even in the case where α is a multiple root of f (see [IK66], pp. 104-106).
From (6.38) we conclude that Aitken's method can be applied to a fixed-point method x = φ(x) of arbitrary order. Actually, the following convergence result holds.

Property 6.7 (convergence of Aitken's method) Let x^{(k+1)} = φ(x^{(k)}) be a fixed-point iteration of order p ≥ 1 for the approximation of a simple zero α of a function f. If p = 1, Aitken's method converges to α with order 2, while if p ≥ 2 the convergence order is 2p − 1. In particular, if p = 1, Aitken's method is convergent even if the fixed-point method is not. If α has multiplicity m ≥ 2 and the method x^{(k+1)} = φ(x^{(k)}) is first-order convergent, then Aitken's method converges linearly, with convergence factor C = 1 − 1/m.

Example 6.10 Consider the computation of the simple zero α = 1 of the function f(x) = (x − 1)eˣ. For this, we use three fixed-point methods whose iteration functions are, respectively, φ₀(x) = log(xeˣ), φ₁(x) = (eˣ + x)/(eˣ + 1) and φ₂(x) = (x² − x + 1)/x (for x ≠ 0). Notice that, since |φ′₀(1)| = 2, the corresponding fixed-point method is not convergent, while in the other two cases the methods have order 1 and 2, respectively.
Let us check the performance of Aitken's method, running Program 55 with x^{(0)} = 2, toll = 10⁻¹⁰ and working in complex arithmetic. Notice that in the case of φ₀ this produces complex numbers if x^{(k)} happens to be negative. According to Property 6.7, Aitken's method applied to the iteration function φ₀ converges in 8 steps to the value x^{(8)} = 1.000002 + i 0.000002. In the other two cases, the method of order 1 converges to α in 18 iterations, to be compared with the 4 iterations required by Aitken's method, while in the case of the iteration function φ₂ convergence holds in 7 iterations against the 5 iterations required by Aitken's method.

Aitken's method is implemented in Program 55. The input/output parameters are the same as those of previous programs in this chapter.

Program 55 - aitken : Aitken's extrapolation
function [xvect,xdif,fx,nit]=aitken(x0,nmax,toll,phi,fun)
% AITKEN  Aitken's extrapolation (6.36) for the fixed-point map phi.
%   The strings phi and fun are evaluated in the variable x.
nit=0; xvect=[x0]; x=x0; fxn=eval(fun);
fx=[fxn]; xdif=[]; err=toll+1;
while err >= toll & nit <= nmax
  nit=nit+1; xv=xvect(nit); x=xv; phix=eval(phi);   % phi(x^(k))
  x=phix; phixx=eval(phi);                          % phi(phi(x^(k)))
  den=phixx-2*phix+xv;                              % Delta^2 x^(k)
  if den == 0, err=toll*1.e-01;                     % converged: force exit
  else, xn=(xv*phixx-phix^2)/den;                   % extrapolation (6.38)
    xvect=[xvect; xn]; xdif=[xdif; abs(xn-xv)];
    x=xn; fxn=abs(eval(fun)); fx=[fx; fxn]; err=fxn;
  end
end



6.6.2 Techniques for Multiple Roots
As previously noticed in deriving Aitken's acceleration, taking the incremental ratios of successive iterates λ^{(k)} in (6.33) provides a way to estimate the asymptotic convergence factor φ′(α).
This information can be employed also to estimate the multiplicity of the root of a nonlinear equation and, as a consequence, it provides a tool for modifying Newton's method in order to recover its quadratic convergence (see (6.23)). Indeed, defining the sequence m^{(k)} through the relation λ^{(k)} = 1 − 1/m^{(k)} and recalling (6.22), it follows that m^{(k)} tends to m as k → ∞.
If the multiplicity m is known a priori, it is clearly convenient to use the modified Newton method (6.23). In other cases, the following adaptive Newton algorithm can be used

\[
x^{(k+1)} = x^{(k)} - m^{(k)}\,\frac{f(x^{(k)})}{f'(x^{(k)})}, \qquad k \ge 2, \tag{6.39}
\]

where we have set

\[
m^{(k)} = \frac{1}{1-\lambda^{(k)}} = \frac{x^{(k-1)} - x^{(k-2)}}{2x^{(k-1)} - x^{(k)} - x^{(k-2)}}. \tag{6.40}
\]
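As a complement, the adaptive iteration (6.39)-(6.40) can be sketched in a few lines of Python. This is our illustrative translation, not Program 56 below; in particular the safeguard max(1, ·) on the multiplicity estimate is our own choice.

```python
import math

def adaptive_newton(f, df, x0, tol=1e-10, nmax=100):
    """Newton iteration (6.39) with the multiplicity estimate (6.40)."""
    xs = [x0]
    m = 1.0
    for _ in range(nmax):
        x = xs[-1]
        fx, dfx = f(x), df(x)
        if dfx == 0.0:                      # vanishing derivative: stop
            break
        xs.append(x - m * fx / dfx)         # step (6.39)
        if len(xs) >= 3:                    # m^(k) from the last three iterates
            den = 2.0 * xs[-2] - xs[-1] - xs[-3]
            if den != 0.0:
                m = max(1.0, abs((xs[-2] - xs[-3]) / den))
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs[-1], m

# double zero alpha = 1 (multiplicity m = 2) of f(x) = (x - 1)^2 e^x
root, m_est = adaptive_newton(lambda x: (x - 1.0) ** 2 * math.exp(x),
                              lambda x: (x - 1.0) * (x + 1.0) * math.exp(x),
                              0.5)
```

On this double root the multiplicity estimate settles near 2, restoring fast convergence where standard Newton would only converge linearly.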

Example 6.11 Let us check the performance of Newton's method in its three versions proposed so far (standard (6.16), modified (6.23) and adaptive (6.39)), to approximate the multiple zero α = 1 of the function f(x) = (x² − 1)ᵖ log x (for p ≥ 1 and x > 0). The desired root has multiplicity m = p + 1. The values p = 2, 4, 6 have been considered and x^{(0)} = 0.8, toll = 10⁻¹⁰ have always been taken in the numerical computations.
The obtained results are summarized in Table 6.8, where for each method the number of iterations nit required to converge is reported. In the case of the adaptive method, beside the value of nit we have also shown in braces the estimate m^{(nit)} of the multiplicity m that is yielded by Program 56.

  m   standard   adaptive       modified
  3      51      13 (2.9860)       4
  5      90      16 (4.9143)       5
  7     127      18 (6.7792)       5
TABLE 6.8. Solution of the problem (x² − 1)ᵖ log x = 0 in the interval [0.5, 1.5], with p = 2, 4, 6

In Example 6.11, the adaptive Newton method converges more rapidly than the standard method, but less rapidly than the modified Newton method. It must be noticed, however, that the adaptive method yields as a useful by-product a good estimate of the multiplicity of the root, which can be profitably employed in a deflation procedure for the approximation of the roots of a polynomial.

Algorithm (6.39), with the adaptive estimate (6.40) of the multiplicity of the root, is implemented in Program 56. To avoid the onset of numerical instabilities, the update of m^{(k)} is performed only when the variation between two consecutive iterates has diminished sufficiently. The input/output parameters are the same as those of previous programs in this chapter.

Program 56 - adptnewt : Adaptive Newton's method
function [xvect,xdif,fx,nit,m] = adptnewt(x0,nmax,toll,fun,dfun)
% ADPTNEWT  Adaptive Newton method (6.39)-(6.40); the strings fun
%   and dfun are evaluated in the variable x.
xvect=x0; nit=0; r=[1]; err=toll+1; m=[1]; xdif=[];
while (nit < nmax) & (err > toll)
  nit=nit+1; x=xvect(nit); fx(nit)=eval(fun); f1x=eval(dfun);
  if (f1x == 0), disp(' Stop due to vanishing derivative '); return; end;
  x=x-m(nit)*fx(nit)/f1x; xvect=[xvect;x]; fx=[fx;eval(fun)];
  rd=err; err=abs(xvect(nit+1)-xvect(nit)); xdif=[xdif;err];
  ra=err/rd; r=[r;ra]; diff=abs(r(nit+1)-r(nit));  % variation of lambda^(k)
  if (diff < 1.e-3) & (r(nit+1) > 1.e-2),          % update m only when safe
    m(nit+1)=max(m(nit),1/abs(1-r(nit+1)));        % estimate (6.40)
  else, m(nit+1)=m(nit); end
end




6.7 Applications
We apply the iterative methods for nonlinear equations considered so far to the solution of two problems arising in the study of the thermal properties of gases and in electronics, respectively.


6.7.1 Analysis of the State Equation for a Real Gas
For one mole of a perfect gas, the state equation Pv = RT establishes a relation between the pressure P of the gas (in Pascals [Pa]), the specific volume v (in cubic meters per kilogram [m³ Kg⁻¹]) and its temperature T (in Kelvin [K]), R being the universal gas constant, expressed in [J Kg⁻¹ K⁻¹] (joules per kilogram per Kelvin).
For a real gas, the deviation from the state equation of perfect gases is described by the van der Waals model, which takes into account the intermolecular interaction and the space occupied by molecules of finite size (see [Sla63]).

Denoting by α and β the gas constants according to the van der Waals model, in order to determine the specific volume v of the gas, once P and T are known, we must solve the nonlinear equation
\[
f(v) = \Bigl(P + \frac{\alpha}{v^2}\Bigr)(v - \beta) - RT = 0. \tag{6.41}
\]
With this aim, let us consider Newton's method (6.16) in the case of carbon dioxide (CO₂), at the pressure P = 10 [atm] (equal to 1013250 [Pa]) and at the temperature T = 300 [K]. In such a case, α = 188.33 [Pa m⁶ Kg⁻²] and β = 9.77 · 10⁻⁴ [m³ Kg⁻¹]; as a comparison, the solution computed by assuming that the gas is perfect is ṽ ≃ 0.056 [m³ Kg⁻¹].
We report in Table 6.9 the results obtained by running Program 50 for different choices of the initial guess v^{(0)}. We have denoted by Nit the number of iterations needed by Newton's method to converge to the root v* of f(v) = 0 using an absolute tolerance equal to the roundoff unit.

  v^{(0)}   10⁻⁴   10⁻²   10⁻³   10⁻¹
  Nit        47      7     21      5
TABLE 6.9. Convergence of Newton's method to the root of equation (6.41)
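The Newton iteration for (6.41) is easy to reproduce. The sketch below is ours; since this excerpt does not state the value of R for CO₂, we take R = 8314.472/44.01 ≈ 188.9 [J Kg⁻¹ K⁻¹] (universal molar constant over molar mass), an assumed value consistent with the perfect-gas solution ṽ ≈ 0.056 quoted above.

```python
# Newton's method for the van der Waals equation (6.41), CO2 data.
P, T = 1013250.0, 300.0          # pressure [Pa], temperature [K]
alpha, beta = 188.33, 9.77e-4    # van der Waals constants for CO2
R = 8314.472 / 44.01             # gas constant for CO2 (assumed value)

def f(v):
    return (P + alpha / v**2) * (v - beta) - R * T

def df(v):                       # f'(v) = P - alpha*v^-2 + 2*alpha*beta*v^-3
    return P - alpha / v**2 + 2.0 * alpha * beta / v**3

v = 1e-1                         # initial guess v^(0) (fast case of Table 6.9)
for _ in range(100):
    dv = f(v) / df(v)
    v -= dv
    if abs(dv) < 1e-12:          # stop on the increment
        break
```

Starting from v^{(0)} = 10⁻¹ the iteration settles near 0.0535, consistently with Table 6.9; guesses below v_m are much slower, as discussed next.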

The computed approximation of v* is v^{(Nit)} ≃ 0.0535. To analyze the causes of the strong dependence of Nit on the value of v^{(0)}, let us examine the derivative f′(v) = P − αv⁻² + 2αβv⁻³. For v > 0, f′(v) = 0 at v_M ≃ 1.99 · 10⁻³ [m³ Kg⁻¹] (relative maximum) and at v_m ≃ 1.25 · 10⁻² [m³ Kg⁻¹] (relative minimum), as can be seen in the graph of Figure 6.8 (left).
A choice of v^{(0)} in the interval (0, v_m) (with v^{(0)} ≠ v_M) thus necessarily leads to a slow convergence of Newton's method, as demonstrated in Figure 6.8 (right), where the sequence {|v^{(k+1)} − v^{(k)}|}, for k ≥ 0, is shown as a solid circled line.
A possible remedy consists of resorting to a polyalgorithmic approach, based on the sequential use of the bisection method and Newton's method (see Section 6.2.1). Running the bisection-Newton method with the endpoints of the search interval equal to a = 10⁻⁴ [m³ Kg⁻¹] and b = 0.1 [m³ Kg⁻¹] and an absolute tolerance of 10⁻³ [m³ Kg⁻¹] yields overall convergence of the algorithm to the root v* in 11 iterations, with an accuracy of the order of the roundoff unit. The plot of the sequence {|v^{(k+1)} − v^{(k)}|}, for k ≥ 0, is shown as a solid starred line in Figure 6.8 (right).
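The polyalgorithm can be sketched as follows (our illustration; the value of R for CO₂ is not stated in this excerpt, so R = 8314.472/44.01 is assumed): bisection halves [a, b] until the interval is shorter than the coarse tolerance 10⁻³, then Newton polishes the midpoint.

```python
# Bisection warm-start followed by Newton for equation (6.41).
P, T = 1013250.0, 300.0
alpha, beta = 188.33, 9.77e-4
R = 8314.472 / 44.01             # gas constant for CO2 (assumed value)

def f(v):
    return (P + alpha / v**2) * (v - beta) - R * T

def df(v):
    return P - alpha / v**2 + 2.0 * alpha * beta / v**3

a, b = 1e-4, 0.1                 # search interval; f(a) < 0 < f(b)
while b - a > 1e-3:              # bisection down to the coarse tolerance
    c = 0.5 * (a + b)
    if f(a) * f(c) <= 0.0:
        b = c
    else:
        a = c

v = 0.5 * (a + b)                # Newton phase, started inside the basin
for _ in range(50):
    dv = f(v) / df(v)
    v -= dv
    if abs(dv) < 1e-12:
        break
```

The bisection phase places the Newton starting point safely to the right of v_m, where the quadratic convergence of Newton's method is recovered.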


6.7.2 Analysis of a Nonlinear Electrical Circuit
Let us consider the electrical circuit in Figure 6.9 (left), where v and j denote respectively the voltage drop across the device D (called a tunneling diode) and the current flowing through D, while R and E are a resistor and a voltage generator of given values.

FIGURE 6.8. Graph of the function f in (6.41) (left); increments |v^{(k+1)} − v^{(k)}| computed by Newton's method (circled curve) and by the bisection-Newton method (starred curve) (right)

The circuit is commonly employed as a biasing circuit for electronic devices working at high frequency (see [Col66]). In such applications the parameters R and E are designed in such a way that v attains a value internal to the interval in which g′(v) < 0, where g is the function describing the relation between current and voltage for D, drawn in Figure 6.9 (right). Explicitly, g(v) = α(e^{v/β} − 1) − μv(v − γ), for suitable constants α, β, γ and μ.

FIGURE 6.9. Tunneling diode circuit (left) and working point computation (right)

Our aim is to determine the working point of the circuit at hand, that is, the values attained by v and j for given parameters R and E. For that, we write Kirchhoff's law for the voltages across the loop, obtaining the following nonlinear equation
\[
f(v) = v\Bigl(\frac{1}{R} + \mu\gamma\Bigr) - \mu v^2 + \alpha\bigl(e^{v/\beta} - 1\bigr) - \frac{E}{R} = 0. \tag{6.42}
\]
From a graphical standpoint, finding the working point of the circuit amounts to determining the intersection between the function g and the straight line of equation j = (E − v)/R, as shown in Figure 6.9 (right). Assume the following (real-life) values for the parameters of the problem: E/R = 1.2 · 10⁻⁴ [A], α = 10⁻¹² [A], β⁻¹ = 40 [V⁻¹], μ = 10⁻³ [AV⁻²] and γ = 0.4 [V]. The solution of (6.42), which is also unique for the considered values of the parameters, is v* ≃ 0.3 [V].
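A working-point computation can be sketched by applying bisection to (6.42). The text gives the ratio E/R but not R itself, so the value R = 3.3 · 10³ [Ω] below is purely our assumption for illustration; with it, E = 0.396 [V].

```python
import math

alpha, beta_inv = 1e-12, 40.0    # [A], [V^-1]
mu, gamma = 1e-3, 0.4            # [A V^-2], [V]
R = 3.3e3                        # [Ohm] (assumed value, not from the text)
E = 1.2e-4 * R                   # recovers the given ratio E/R = 1.2e-4 [A]

def g(v):                        # tunneling-diode characteristic
    return alpha * (math.exp(beta_inv * v) - 1.0) - mu * v * (v - gamma)

def f(v):                        # equation (6.42): g(v) = (E - v)/R
    return g(v) + (v - E) / R

a, b = 0.0, 0.5                  # f changes sign on [0, 0.5] for these values
while b - a > 1e-12:             # bisection to near machine accuracy
    c = 0.5 * (a + b)
    if f(a) * f(c) <= 0.0:
        b = c
    else:
        a = c
v_star = 0.5 * (a + b)
```

With this assumed R the working point falls near v ≈ 0.29 [V], in the negative-slope region of g, qualitatively matching the v* ≃ 0.3 [V] quoted above.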
To approximate v*, we compare the main iterative methods introduced in this chapter. We have taken v^{(0)} = 0 [V] for Newton's method, ξ = 0
