and, as a consequence, can lead to inaccurate results. For the sake of stability,
it is therefore convenient to approximate first the root r1 of minimum
modulus, which is the most sensitive to the ill-conditioning of the problem (see
Example 2.7, Chapter 2), and then to continue with the successive roots
r2, . . ., until the root of maximum modulus is computed. To localize r1, the
techniques described in Section 5.1 or the method of Sturm sequences can
be used (see [IK66], p. 126).
A further increase in accuracy can be obtained, once an approximation of
the root rj is available from the deflation process, by going back to the
original polynomial pn and generating through the Newton-Horner method
(6.29) a new approximation to rj, taking the deflated estimate as the initial
guess rj^(0). This combination of deflation and successive correction of the
root is called the Newton-Horner method with refinement.
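As a reminder of the iteration being referenced (restated here for convenience; qn-1(x; z) denotes the quotient of the synthetic division of pn(x) by x - z, so that p'n(z) = qn-1(z; z)), each refinement step applies the Newton-Horner recursion (6.29) to the original polynomial pn,

rj^(k+1) = rj^(k) - pn(rj^(k)) / qn-1(rj^(k); rj^(k)),      k ≥ 0,

started from the estimate produced by deflation. In this way the final corrections are evaluated on pn itself rather than on a deflated, and hence perturbed, polynomial.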

Example 6.7 Let us examine the performance of the Newton-Horner method in
two cases: in the first one the polynomial admits real roots, while in the second
one there are two pairs of complex conjugate roots. To single out the importance
of refinement, we have implemented (6.29) both switching it on and off (methods
NwtRef and Nwt, respectively). The approximate roots obtained using method Nwt
are denoted by rj, while sj are those computed by method NwtRef. As for the
numerical experiments, the computations have been done in complex arithmetic,
with x^(0) = 0 + i0, i being the imaginary unit, nmax = 100 and toll = 10^-5. The
tolerance for the stopping test in the refinement cycle has been set to 10^-3 · toll.

1) p5(x) = x^5 + x^4 - 9x^3 - x^2 + 20x - 12 = (x - 1)^2 (x - 2)(x + 2)(x + 3).

We report in Tables 6.3(a) and 6.3(b) the approximate roots rj (j = 1, . . . , 5)
and the number of Newton iterations (Nit) needed to get each of them; in the
case of method NwtRef we also show the number of extra Newton iterations for
the refinement (Extra).


  (a) Nwt                            (b) NwtRef
  rj                        Nit      sj                         Nit   Extra
  0.99999348047830           17      0.9999999899210124          17     10
  1 - i 3.56·10^-25            6      1 - i 2.40·10^-28            6     10
  2 - i 2.24·10^-13            9      2 + i 1.12·10^-22            9      1
  -2 - i 1.70·10^-10           7      -2 + i 8.18·10^-22           7      1
  -3 + i 5.62·10^-6            1      -3 - i 7.06·10^-21           1      2

TABLE 6.3. Roots of the polynomial p5. Roots computed by the Newton-Horner
method without refinement (left), and with refinement (right)

Notice the marked increase in the accuracy of the computed roots due to refinement,
achieved with only a few extra iterations.

2) p6(x) = x^6 - 2x^5 + 5x^4 - 6x^3 + 2x^2 + 8x - 8.

The zeros of p6 are the complex numbers {1, -1, 1 ± i, ±2i}. We report below,
denoting them by rj (j = 1, . . . , 6), the approximations to the roots of p6 obtained
using method Nwt, with a number of iterations equal to 2, 1, 1, 7, 7 and 1,
respectively. Alongside, we also show the corresponding approximations sj computed
by method NwtRef, obtained with a maximum of 2 extra iterations.


        Nwt                                NwtRef
  r1    1                            s1    1
  r2    -0.99 - i 9.54·10^-17        s2    -1 + i 1.23·10^-32
  r3    1 + i                        s3    1 + i
  r4    1 - i                        s4    1 - i
  r5    -1.31·10^-8 + i 2            s5    -5.66·10^-17 + i 2
  r6    -i 2                         s6    -i 2
TABLE 6.4. Roots of the polynomial p6 obtained using the Newton-Horner
method without (left) and with (right) refinement


A coding of the Newton-Horner algorithm is provided in Program 53. The
input parameters are A (a vector containing the polynomial coefficients), n
(the degree of the polynomial), toll (tolerance on the maximum variation
between successive iterates in Newton's method), x0 (initial value, with
x^(0) ∈ R), nmax (maximum number of admissible iterations for Newton's

method) and iref (if iref = 1, then the refinement procedure is activated).
For dealing with the general case of complex roots, the initial datum is
automatically converted into the complex number z = x^(0) + i x^(0), where
i = √-1.
The program returns as output the variables xn (a vector containing the
sequence of iterates for each zero of pn (x)), iter (a vector containing the
number of iterations needed to approximate each root), itrefin (a vector
containing the Newton iterations required to refine each estimate of the
computed root) and root (vector containing the computed roots).

Program 53 - newthorn : Newton-Horner method with refinement



function [xn,iter,root,itrefin]=newthorn(A,n,toll,x0,nmax,iref)
% NEWTHORN Newton-Horner method with deflation and (optional) refinement.
apoly=A;                                % keep the original polynomial for the refinement
for i=1:n, it=1; xn(it,i)=x0+sqrt(-1)*x0; err=toll+1; Ndeg=n-i+1;
  if (Ndeg == 1), it=it+1; xn(it,i)=-A(2)/A(1);
  else
    while (it < nmax & err > toll)
      [px,B]=horner(A,Ndeg,xn(it,i));   % px = p(x), B = coefficients of the quotient
      [pdx,C]=horner(B,Ndeg-1,xn(it,i));% pdx = p'(x)
      it=it+1;
      if (pdx ~= 0), xn(it,i)=xn(it-1,i)-px/pdx;
        err=max(abs(xn(it,i)-xn(it-1,i)),abs(px));
      else
        disp(' Stop due to a vanishing p'' ');
        err=0; xn(it,i)=xn(it-1,i);
      end
    end
  end
  A=B;                                  % deflation: continue with the quotient polynomial
  if (iref==1), alfa=xn(it,i); itr=1; err=toll+1;
    while ((err > toll*1e-3) & (itr < nmax))   % refinement on the original polynomial
      [px,B]=horner(apoly,n,alfa);
      [pdx,C]=horner(B,n-1,alfa); itr=itr+1;
      if (pdx~=0)
        alfa2=alfa-px/pdx;
        err=max(abs(alfa2-alfa),abs(px)); alfa=alfa2;
      else
        disp(' Stop due to a vanishing p'' '); err=0;
      end
    end; itrefin(i)=itr-1; xn(it,i)=alfa;
  end
  iter(i)=it-1; root(i)=xn(it,i); x0=root(i);
end
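Both Program 53 and Program 54 below rely on the routine horner of Program 52, which performs the synthetic division of a polynomial by x - z; that routine is not listed in this section. A minimal sketch consistent with the calls [px,B]=horner(A,Ndeg,x) used above (coefficients stored in A by decreasing powers, second argument equal to the polynomial degree) could read as follows; it is an illustrative reconstruction, not the text of Program 52.

function [y,B]=horner(A,n,z)
% Synthetic division of the degree-n polynomial with coefficients
% A = [a1,...,a(n+1)] (decreasing powers) by (x - z): y = p(z), while
% B holds the n coefficients of the quotient q, with
% p(x) = (x - z) q(x) + p(z); in particular p'(z) = q(z).
B(1)=A(1);
for j=2:n
  B(j)=A(j)+B(j-1)*z;
end
y=A(n+1)+B(n)*z;

With such a routine on the path, the first case of Example 6.7 can be reproduced by a call of the form (shown only as an illustration)
A = [1 1 -9 -1 20 -12]; [xn,iter,root,itrefin]=newthorn(A,5,1e-5,0,100,1);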

6.4.3 The Muller Method
A second example of deflation employs Muller's method for finding an
approximation to the root r at step 1. of the procedure described in Section
6.4.1 (see [Mul56]). Unlike Newton's or secant methods, Muller's method is
able to compute complex zeros of a given function f, even starting from a
real initial datum; moreover, its order of convergence is almost quadratic.
The action of Muller's method is sketched in Figure 6.5. The scheme extends
the secant method, replacing the linear polynomial introduced in
(6.13) with a second-degree polynomial as follows. Given three distinct
values x^(0), x^(1) and x^(2), the new point x^(3) is determined by setting
p2(x^(3)) = 0, where p2 ∈ P2 is the unique polynomial that interpolates
f at the points x^(i), i = 0, 1, 2, that is, p2(x^(i)) = f(x^(i)) for i = 0, 1, 2.
Therefore,




FIGURE 6.5. The first step of Muller's method


p2(x) = f(x^(2)) + (x - x^(2)) f[x^(2), x^(1)] + (x - x^(2))(x - x^(1)) f[x^(2), x^(1), x^(0)],


where

f[ξ, η] = (f(η) - f(ξ))/(η - ξ),      f[ξ, η, τ] = (f[η, τ] - f[ξ, η])/(τ - ξ)

are the divided differences of order 1 and 2 associated with the points ξ, η
and τ (see Section 8.2.1). Noticing that x - x^(1) = (x - x^(2)) + (x^(2) - x^(1)),
we get

p2(x) = f(x^(2)) + w (x - x^(2)) + f[x^(2), x^(1), x^(0)] (x - x^(2))^2,

having defined

w = f[x^(2), x^(1)] + (x^(2) - x^(1)) f[x^(2), x^(1), x^(0)]
  = f[x^(2), x^(1)] + f[x^(2), x^(0)] - f[x^(0), x^(1)],

the last equality being a consequence of the symmetry of the divided differences.

Requiring that p2(x^(3)) = 0, it follows that

x^(3) = x^(2) + ( -w ± [w^2 - 4 f(x^(2)) f[x^(2), x^(1), x^(0)]]^(1/2) ) / ( 2 f[x^(2), x^(1), x^(0)] ).

Similar computations must be carried out to obtain x^(4) starting from x^(1), x^(2)
and x^(3) and, more generally, to find x^(k+1) starting from x^(k-2), x^(k-1)
and x^(k), with k ≥ 2, according to the following formula (notice that the
numerator has been rationalized)

x^(k+1) = x^(k) - 2 f(x^(k)) / ( w ∓ [w^2 - 4 f(x^(k)) f[x^(k), x^(k-1), x^(k-2)]]^(1/2) ).     (6.30)

The sign in (6.30) is chosen in such a way that the modulus of the denominator
is maximized. Assuming that f ∈ C^3(J) in a suitable neighborhood J
of the root α, with f'(α) ≠ 0, the order of convergence is almost quadratic.
Precisely, the error e^(k) = α - x^(k) obeys the following relation (see [Hil87]
for the proof)

lim_(k→∞) |e^(k+1)| / |e^(k)|^p = (1/6) |f'''(α)/f'(α)|,      p ≃ 1.84.
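To make formula (6.30) concrete before specializing it to polynomials, the sketch below applies the bare Muller iteration (no deflation, no refinement) to a generic function handle fun; the routine name mullersketch and its argument list are illustrative choices and do not belong to the programs of this chapter.

function [x,nit]=mullersketch(fun,x0,x1,x2,toll,nmax)
% Bare Muller iteration (6.30): at each step the sign in the denominator
% is chosen so as to maximize its modulus.
x=[x0 x1 x2]; nit=0;
for k=3:nmax
  f0=fun(x(k-2)); f1=fun(x(k-1)); f2=fun(x(k));
  f01=(f1-f0)/(x(k-1)-x(k-2)); f12=(f2-f1)/(x(k)-x(k-1));
  f012=(f12-f01)/(x(k)-x(k-2));            % divided differences
  w=f12+(x(k)-x(k-1))*f012;
  d=sqrt(w^2-4*f2*f012);                   % possibly complex
  if abs(w+d)>=abs(w-d), den=w+d; else, den=w-d; end
  x(k+1)=x(k)-2*f2/den; nit=nit+1;
  if abs(x(k+1)-x(k))<toll, break; end
end
x=x(end);

Note that a negative discriminant automatically produces complex iterates through the complex square root, which is how the method can reach complex zeros even when the three starting values are real.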


Example 6.8 Let us employ Muller's method to approximate the roots of the
polynomial p6 examined in Example 6.7. The tolerance on the stopping test
is toll = 10^-6, while x^(0) = -5, x^(1) = 0 and x^(2) = 5 are the inputs to
(6.30). We report in Table 6.5 the approximate roots of p6, denoted by sj and
rj (j = 1, . . . , 6), where, as in Example 6.7, sj and rj have been obtained by
switching the refinement procedure on and off, respectively. To compute the roots
rj, 12, 11, 9, 9, 2 and 1 iterations are needed, respectively, while only one extra
iteration is taken to refine all the roots.

        Nwt                                NwtRef
  r1    1 + i 2.2·10^-15             s1    1 + i 9.9·10^-18
  r2    -1 - i 8.4·10^-16            s2    -1
  r3    0.99 + i                     s3    1 + i
  r4    0.99 - i                     s4    1 - i
  r5    -1.1·10^-15 + i 1.99         s5    i 2
  r6    -1.0·10^-15 - i 2            s6    -i 2
TABLE 6.5. Roots of polynomial p6 with Muller's method without (rj) and with
(sj) refinement


In this example too, one can notice the effectiveness of the refinement procedure,
based on Newton's method, on the accuracy of the solution yielded by (6.30). •

The Muller method is implemented in Program 54, in the special case
where f is a polynomial of degree n. The deflation process also includes a
refinement phase; the evaluation of f(x^(k-2)), f(x^(k-1)) and f(x^(k)), with
k ≥ 2, is carried out using Program 52. The input/output parameters are
analogous to those described in Program 53.

Program 54 - mulldefl : Muller's method with refinement

function [xn,iter,root,itrefin]=mulldefl(A,n,toll,x0,x1,x2,nmax,iref)
% MULLDEFL Muller's method with deflation and (optional) refinement.
apoly=A;                                 % keep the original polynomial for the refinement
for i=1:n
  xn(1,i)=x0; xn(2,i)=x1; xn(3,i)=x2; it=0; err=toll+1; k=2; Ndeg=n-i+1;
  if (Ndeg == 1), it=it+1; k=0; xn(it,i)=-A(2)/A(1);
  else
    while ((err > toll) & (it < nmax))
      k=k+1; it=it+1; [f0,B]=horner(A,Ndeg,xn(k-2,i));
      [f1,B]=horner(A,Ndeg,xn(k-1,i)); [f2,B]=horner(A,Ndeg,xn(k,i));
      f01=(f1-f0)/(xn(k-1,i)-xn(k-2,i)); f12=(f2-f1)/(xn(k,i)-xn(k-1,i));
      f012=(f12-f01)/(xn(k,i)-xn(k-2,i)); w=f12+(xn(k,i)-xn(k-1,i))*f012;
      arg=w^2-4*f2*f012; d1=w-sqrt(arg); d2=w+sqrt(arg);
      den=max(d1,d2);                    % for complex data, max picks the larger modulus
      if (den~=0), xn(k+1,i)=xn(k,i)-(2*f2)/den;   % formula (6.30)
        err=abs(xn(k+1,i)-xn(k,i));
      else
        disp(' Vanishing denominator '); return;
      end
    end
  end
  radix=xn(k+1,i);
  if (iref==1)
    alfa=radix; itr=1; err=toll+1;
    while ((err > toll*1e-3) & (itr < nmax))     % Newton refinement on the original polynomial
      [px,B]=horner(apoly,n,alfa); [pdx,C]=horner(B,n-1,alfa);
      if (pdx == 0), disp(' Vanishing derivative '); err=0; end
      itr=itr+1;
      if (pdx~=0), alfa2=alfa-px/pdx;
        err=abs(alfa2-alfa); alfa=alfa2;
      end
    end; itrefin(i)=itr-1; xn(k+1,i)=alfa; radix=alfa;
  end
  iter(i)=it; root(i)=radix;
  [px,B]=horner(A,Ndeg,xn(k+1,i)); A=B;  % deflation at the computed (refined) root
end
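As an illustration only, the roots of p6 in Example 6.8 can be approximated by a call of the form below; the parameter values are those of Example 6.8, except for nmax, which is not specified there and is set here, arbitrarily, to 100.

A = [1 -2 5 -6 2 8 -8];                                  % coefficients of p6
[xn,iter,root,itrefin] = mulldefl(A,6,1e-6,-5,0,5,100,1);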




6.5 Stopping Criteria
Suppose that {x^(k)} is a sequence converging to a zero α of the function
f. In this section we provide some stopping criteria for terminating the
iterative process that approximates α. In analogy with Section 4.6, where
the case of iterative methods for linear systems was examined, there are
two possible criteria: a stopping test based on the residual and one based
on the increment. Below, ε is a fixed tolerance on the approximate calculation of

α and e^(k) = α - x^(k) denotes the absolute error. We shall moreover assume
that f is continuously differentiable in a suitable neighborhood of the root.

1. Control of the residual: the iterative process terminates at the first
step k such that |f(x^(k))| < ε.

Situations can arise where the test turns out to be either too restrictive or
excessively optimistic (see Figure 6.6). Applying the estimate (6.6) to the
case at hand (m being the multiplicity of the root α) yields

|e^(k)| / |α|  ≲  ( m! / ( |f^(m)(α)| |α|^m ) )^(1/m) |f(x^(k))|^(1/m).


In particular, in the case of simple roots, the error is bounded by the residual
through the factor 1/|f'(α)|, so that the following conclusions can be drawn:

1. if |f'(α)| ≃ 1, then |e^(k)| ≃ ε; therefore, the test provides a satisfactory
indication of the error;

2. if |f'(α)| ≪ 1, the test is not reliable since |e^(k)| could be quite large
with respect to ε;

3. if, finally, |f'(α)| ≫ 1, we get |e^(k)| ≪ ε and the test is too restrictive.

We refer to Figure 6.6 for an illustration of the last two cases.

FIGURE 6.6. Two situations where the stopping test based on the residual
is either too restrictive (when |e^(k)| ≪ |f(x^(k))|, left) or too optimistic (when
|e^(k)| ≫ |f(x^(k))|, right)

The conclusions that we have drawn agree with those in Example 2.4.
Indeed, when f'(α) ≃ 0, the condition number of the problem f(x) = 0 is
very high and, as a consequence, the residual does not provide a significant
indication of the error.

2. Control of the increment: the iterative process terminates as soon as
|x^(k+1) - x^(k)| < ε.



Let {x^(k)} be generated by the fixed-point method x^(k+1) = φ(x^(k)). Using
the mean value theorem, we get

e^(k+1) = φ(α) - φ(x^(k)) = φ'(ξ^(k)) e^(k),

where ξ^(k) lies between x^(k) and α. Then,

x^(k+1) - x^(k) = e^(k) - e^(k+1) = [1 - φ'(ξ^(k))] e^(k),

so that, assuming that we can replace φ'(ξ^(k)) with φ'(α), it follows that

e^(k) ≃ (1 / (1 - φ'(α))) (x^(k+1) - x^(k)).     (6.31)
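For Newton's method, for instance, a short computation (added here for completeness) explains why φ'(α) = 0 at a simple root: with φ(x) = x - f(x)/f'(x) one finds

φ'(x) = f(x) f''(x) / [f'(x)]^2,

so that φ'(α) = 0 whenever f(α) = 0 and f'(α) ≠ 0; by (6.31) the increment is then a sharp estimator of the error.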


FIGURE 6.7. Behavior of γ = 1/(1 - φ'(α)) as a function of φ'(α)

As shown in Figure 6.7, we can conclude that the test:
- is unsatisfactory if φ'(α) is close to 1;
- provides an optimal balance between increment and error for methods
of order 2, for which φ'(α) = 0, as is the case for Newton's method;
- is still satisfactory if -1 < φ'(α) < 0.
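To make the two criteria concrete, the sketch below shows a minimal Newton loop that can be stopped either on the residual or on the increment; the routine name newtonstop, the flag stop and the handles fun and dfun are illustrative choices and do not correspond to programs of this book.

function [x,nit]=newtonstop(fun,dfun,x0,toll,nmax,stop)
% Newton's method with a residual-based (stop='res') or an
% increment-based (stop='inc') stopping test.
x=x0; nit=0; err=toll+1;
while (err > toll) & (nit < nmax)
  dfx=dfun(x);
  if (dfx == 0), disp(' Vanishing derivative '); break; end
  xnew=x-fun(x)/dfx; nit=nit+1;
  if strcmp(stop,'res')
    err=abs(fun(xnew));              % control of the residual
  else
    err=abs(xnew-x);                 % control of the increment
  end
  x=xnew;
end

With fun = @(x) exp(-x)-1e-9, dfun = @(x) -exp(-x) and x0 = 0 one can repeat, for different tolerances, experiments in the spirit of Example 6.9 below.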

Example 6.9 The zero of the function f(x) = e^(-x) - η is given by α = -log(η).
For η = 10^-9, α ≃ 20.723 and f'(α) = -e^(-α) ≃ -10^-9. We are thus in the case
where |f'(α)| ≪ 1 and we wish to examine the behaviour of Newton's method in
the approximation of α when the two stopping criteria above are adopted in the
computations.
We show in Tables 6.6 and 6.7 the results obtained using the test based on the
control of the residual (1) and of the increment (2), respectively. We have taken
x^(0) = 0 and used two different values of the tolerance. The number of iterations
required by the method is denoted by nit.
According to (6.31), since φ'(α) = 0, the stopping test based on the increment
proves to be reliable for both values (which differ considerably) of the stopping
tolerance ε. The test based on the residual, instead, yields an acceptable estimate
of the root only for very small tolerances, while it is completely wrong for large
values of ε.


  ε         nit   |f(x^(nit))|   |α - x^(nit)|   |α - x^(nit)|/α
  10^-10     22   5.9·10^-11     5.7·10^-2       0.27·10^-2
  10^-3       7   9.1·10^-4      13.7            66.2·10^-2
TABLE 6.6. Newton's method for the approximation of the root of
f(x) = e^(-x) - η = 0. The stopping test is based on the control of the residual

  ε         nit   |x^(nit) - x^(nit-1)|   |α - x^(nit)|   |α - x^(nit)|/α
  10^-10     26   8.4·10^-13              0               0
