
(7.43)    s_{n+1} = -\left( I + \frac{s_{n+1} s_n^T}{\|s_n\|_2^2} \right) B_n^{-1} F(x_{n+1}).

Instead, we solve (7.43) for sn+1 to obtain
(7.44)    s_{n+1} = -\frac{B_n^{-1} F(x_{n+1})}{1 + s_n^T B_n^{-1} F(x_{n+1})/\|s_n\|_2^2}.

By Proposition 7.3.1, the denominator in (7.44),

1 + s_n^T B_n^{-1} F(x_{n+1})/\|s_n\|_2^2 = 1 + v_n^T B_n^{-1} u_n,

is nonzero unless B_{n+1} is singular.
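The nonsingularity condition behind this statement is easy to check numerically. The sketch below (not from the text; a minimal 2×2 illustration in plain Python, with all helper names invented here) verifies that the Sherman–Morrison formula (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1}/(1 + v^T A^{-1} u) reproduces the directly computed inverse whenever the denominator 1 + v^T A^{-1} u is nonzero.

```python
# Sanity check of the Sherman-Morrison rank-one update formula on a
# 2x2 example, using plain lists (no external dependencies).

def inv2(A):
    """Inverse of a 2x2 matrix via the explicit cofactor formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def rank1_update_inverse(Ainv, u, v):
    """(A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u)."""
    Ainv_u = matvec(Ainv, u)                       # column A^{-1} u
    vT_Ainv = [v[0] * Ainv[0][0] + v[1] * Ainv[1][0],
               v[0] * Ainv[0][1] + v[1] * Ainv[1][1]]  # row v^T A^{-1}
    denom = 1.0 + v[0] * Ainv_u[0] + v[1] * Ainv_u[1]  # must be nonzero
    return [[Ainv[i][j] - Ainv_u[i] * vT_Ainv[j] / denom
             for j in range(2)] for i in range(2)]
```

For the Broyden update, A = B_n, u_n = F(x_{n+1}), and v_n = s_n/\|s_n\|_2^2, so the denominator computed here is exactly the denominator in (7.44).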
The storage requirement, of n + O(1) vectors for the nth iterate, is of the
same order as GMRES, making Broyden's method competitive in storage with
GMRES as a linear solver [67]. Our implementation is similar to GMRES(m),
restarting with B_0 when storage is exhausted.
We can use (7.42) and (7.44) to describe an algorithm. The termination
criteria are the same as for Algorithm nsol in § 5.6. We use a restarting
approach in the algorithm, adding to the input list a limit nmax on the number
of Broyden iterations taken before a restart and a limit maxit on the number
of nonlinear iterations. Note that B_0 = I is implicit in the algorithm. Our
looping convention is that the loop in step 2(e)ii is skipped if n = 0; this is
consistent with setting the empty matrix product to the identity. Our notation
and algorithmic framework are based on [67].
Algorithm 7.3.1. brsol(x, F, \tau, maxit, nmax)
1. r_0 = \|F(x)\|_2, n = -1, s_0 = -F(x), itc = 0.
2. Do while itc < maxit
   (a) n = n + 1; itc = itc + 1
   (b) x = x + s_n
   (c) Evaluate F(x)
   (d) If \|F(x)\|_2 \le \tau_r r_0 + \tau_a exit.
   (e) if n < nmax then
       i. z = -F(x)
       ii. for j = 0, n - 1
              z = z + s_{j+1} s_j^T z / \|s_j\|_2^2
       iii. s_{n+1} = z / (1 - s_n^T z / \|s_n\|_2^2)
   (f) if n = nmax then
       n = -1; s_0 = -F(x);

Buy this book from SIAM at http://www.ec-securehost.com/SIAM/FR16.html.
Copyright ©1995 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.
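The algorithm above can be sketched compactly in code. The following is an illustrative, dependency-free Python sketch of Algorithm 7.3.1 (plain lists stand in for vectors; the names brsol, tau_r, tau_a mirror the text, while the default maxit and nmax values are arbitrary). It is not the MATLAB brsol from the collection of codes.

```python
# Broyden's method with B_0 = I, the Sherman-Morrison product form for
# applying B_n^{-1}, and restarts, following Algorithm 7.3.1 (brsol).

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def brsol(x, F, tau_r, tau_a, maxit=40, nmax=40):
    x = list(x)
    r0 = norm(F(x))
    steps = [[-fi for fi in F(x)]]       # s_0 = -F(x); B_0 = I is implicit
    itc = 0
    while itc < maxit:
        itc += 1
        x = [xi + si for xi, si in zip(x, steps[-1])]   # x = x + s_n
        Fx = F(x)
        if norm(Fx) <= tau_r * r0 + tau_a:              # termination test
            return x, itc
        n = len(steps) - 1
        if n < nmax:
            # z = -B_n^{-1} F(x) via the stored steps (loop 2(e)ii);
            # the loop is skipped when n = 0 (empty product = identity).
            z = [-fi for fi in Fx]
            for j in range(n):
                c = dot(steps[j], z) / dot(steps[j], steps[j])
                z = [zi + c * si for zi, si in zip(z, steps[j + 1])]
            sn = steps[-1]
            denom = 1.0 - dot(sn, z) / dot(sn, sn)      # (7.44) denominator
            steps.append([zi / denom for zi in z])      # s_{n+1}
        else:
            steps = [[-fi for fi in Fx]]                # restart with B_0 = I
    return x, itc
```

Storing only the steps {s_j} and the working vector z is what gives the n + 3 vector storage count discussed below.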
If n < nmax, then the nth iteration of brsol, which computes x_n and
s_{n+1}, requires O(nN) floating-point operations and storage of n + 3 vectors:
the steps \{s_j\}_{j=0}^{n}, x, z, and F(x), where z and F(x) may occupy the same
storage location. This requirement of one more vector than the algorithm
in [67] needed is a result of the nonlinearity. If n = nmax, the iteration
will restart with x_{nmax}, not compute s_{nmax+1}, and therefore need storage for
nmax + 2 vectors. Our implementation of brsol in the collection of MATLAB
codes stores z and F separately in the interest of clarity and therefore requires
nmax + 3 vectors of storage.
The Sherman–Morrison approach in Algorithm brsol is more efficient, in
terms of both time and storage, than the dense matrix approach proposed
in [85]. However, the dense matrix approach makes it possible to detect
ill conditioning in the approximate Jacobians. If the data is sufficiently
good, bounded deterioration implies that the Broyden matrices will be well
conditioned, and superlinear convergence implies that only a few iterates will
be needed. In view of all this, we feel that the approach of Algorithm brsol
is the best, especially for large problems.
A MATLAB implementation of brsol is provided in the collection of
MATLAB codes.

7.4. Examples for Broyden's method

In all the examples in this section we use the l^2 norm multiplied by 1/\sqrt{N}.
We use the MATLAB code brsol.

7.4.1. Linear problems. The theory in this chapter indicates that Broyden's
method can be viewed as an alternative to GMRES as an iterative method
for nonsymmetric linear equations. This point has been explored in some detail
in [67], where more elaborate numerical experiments than those given here are
presented, and in several papers on infinite-dimensional problems as well [91],
[104], [120]. As an example we consider the linear PDE (3.33) from § 3.7. We
use the same mesh, right-hand side, coefficients, and solution as the example
in § 3.7.
We compare Broyden's method without restarts (solid line) to Broyden's
method with restarts every three iterates (dashed line). For linear problems,
we allow for increases in the nonlinear residual. We set \tau_a = \tau_r = h^2 where
h = 1/32. In Fig. 7.1 we present results for (3.33) preconditioned with the
Poisson solver fish2d.
Broyden's method terminated after 9 iterations at a cost of roughly 1.5
million floating-point operations, a slightly higher cost than the GMRES
solution. The restarted iteration required 24 iterations and 3.5 million floating-point operations.
point operations. One can also see highly irregular performance from the



Buy this book from SIAM at http://www.ec-securehost.com/SIAM/FR16.html.
Copyright ©1995 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.




128 ITERATIVE METHODS FOR LINEAR AND NONLINEAR EQUATIONS

restarted method.
In the linear case [67] one can store only the residuals and recover the
terminal iterate upon exit. Storage of the residual, the vector z, and the
sequence of steps saves a vector over the nonlinear case.

Fig. 7.1. Broyden's method for the linear PDE. (Semilog plot of relative residual versus iterations, 0 to 25.)


7.4.2. Nonlinear problems. We consider the H-equation from Chapters 5
and 6 and a nonlinear elliptic partial differential equation. Broyden's method
and Newton-GMRES both require more storage as iterations progress, Newton-GMRES
as the inner iteration progresses and Broyden's method as the outer
iteration progresses. The cost in function evaluations for Broyden's method is
one for each nonlinear iteration, and for Newton-GMRES one for each outer
iteration and one for each inner iteration. Hence, if function evaluations are
very expensive, the cost of a solution by Broyden's method could be much less
than that of one by Newton-GMRES. If storage is at a premium, however,
and the number of nonlinear iterations is large, Broyden's method could be at
a disadvantage, as it might need to be restarted many times. The examples
in this section, as well as those in § 8.4, illustrate the differences between the two
methods. Our general recommendation is to try both.

Chandrasekhar H-equation. As before we solve the H-equation on a 100-point
mesh with c = .9 (Fig. 7.2) and c = .9999 (Fig. 7.3). We compare
Broyden's method without restarts (solid line) to Broyden's method with
restarts every three iterates (dashed line). For c = .9 both methods required six
iterations for convergence. The implementation without restarts required 153
thousand floating-point operations and the restarted method 150 thousand, a substantial
improvement over the Newton-GMRES implementation.




For c = .9999, the restarted iteration has much more trouble. The
unrestarted iteration converges in 10 iterations at a cost of roughly 249 thousand
floating-point operations. The restarted iteration requires 18 iterations and
407 thousand floating-point operations. Even so, both implementations performed
better than Newton-GMRES.

Fig. 7.2. Broyden's method for the H-equation, c = .9. (Semilog plot of relative nonlinear residual versus iterations, 0 to 6.)
Fig. 7.3. Broyden's method for the H-equation, c = .9999. (Semilog plot of relative nonlinear residual versus iterations, 0 to 18.)

One should take some care in drawing general conclusions. While Broyden's
method performed better than Newton-GMRES on this example, the storage
in Newton-GMRES is used in the inner iteration, while that in Broyden's method
is used in the nonlinear iteration. If many nonlinear iterations are needed, it may be
that Broyden's method needs restarts while the GMRES iteration for the
linear problem for the Newton step does not.

Convection-diffusion equation. We consider the nonlinear convection-diffusion
equation from § 6.4.2. We use the same mesh width h = 1/32 on
a uniform square grid. We set C = 20, u_0 = 0, and \tau_r = \tau_a = h^2. We
preconditioned with the Poisson solver fish2d.
We allowed for an increase in the residual, which happened on the second
iterate. This is not supported by the local convergence theory for nonlinear
problems; however, it can be quite successful in practice. One can do this in
brsol by setting the third entry in the parameter list to 1, as one would if the
problem were truly linear. In § 8.4.3 we will return to this example.
In Fig. 7.4 we compare Broyden's method without restarts (solid line) to
Broyden's method with restarts every 8 iterates (dashed line). The unrestarted
Broyden iteration converges in 12 iterations at a cost of roughly 2.2 million
floating-point operations. This is a modest improvement over the Newton-GMRES
cost (reported in § 6.4.2) of 2.6 million floating-point operations.
However, the residual norms do not decrease monotonically in the Broyden
iteration (we fix this in § 8.3.2) and more storage was used.

Fig. 7.4. Broyden's method for nonlinear PDE. (Semilog plot of relative nonlinear residual versus iterations, 0 to 16.)

At most five inner GMRES iterations were required. Therefore the GMRES
inner iteration for the Newton-GMRES approach needed storage for at most
10 vectors (the Krylov basis, s, x_c, F(x_c), F(x_c + hv)) to accommodate the
Newton-GMRES iteration. The Broyden iteration, when restarted every n
iterates, requires storage of n + 2 vectors: \{s_j\}_{j=0}^{n-1}, x, and z (when stored in the
same place as F(x)). Hence restarting the Broyden iteration every 8 iterations
most closely corresponds to the Newton-GMRES requirements. This approach
took 15 iterations to terminate at a cost of 2.4 million floating-point operations,
but the convergence was extremely irregular after the restart.





7.5. Exercises on Broyden's method
7.5.1. Prove Proposition 7.2.1.

7.5.2. Prove Proposition 7.3.1.

7.5.3. Extend Proposition 7.3.1 by showing that if A is a nonsingular N \times N
matrix, U is N \times M, and V is M \times N, then A + UV is nonsingular if and
only if I + V A^{-1} U is a nonsingular M \times M matrix. If this is the case,
then

(A + UV)^{-1} = A^{-1} - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1}.

This is the Sherman–Morrison–Woodbury formula [68], [199]. See [102]
for further generalizations.
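The identity in this exercise can be checked numerically before proving it. The sketch below (not a solution to the exercise; N = 3, M = 2, plain Python, with an invented Gauss–Jordan helper inv) compares the directly computed inverse of A + UV against the Sherman–Morrison–Woodbury formula.

```python
# Numerical sanity check of the Sherman-Morrison-Woodbury formula
# (A + U V)^{-1} = A^{-1} - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1}.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def inv(A):
    """Matrix inverse via Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [x / piv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def woodbury_inverse(Ainv, U, V):
    """Apply the Woodbury formula; the M x M 'core' solve replaces an N x N one."""
    core = inv(madd(eye(len(V)), matmul(V, matmul(Ainv, U))))
    correction = matmul(matmul(Ainv, U), matmul(core, matmul(V, Ainv)))
    return msub(Ainv, correction)
```

With M = 1 this reduces to the Sherman–Morrison formula used in (7.44); the point of the general form is that only an M × M system need be inverted when applying a rank-M update.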

7.5.4. Duplicate the results reported in § 7.4. Vary the parameters in the
equations and the number of vectors stored by Broyden's method, and
report on their effects on performance. What happens in the differential
equation examples if preconditioning is omitted?

7.5.5. Solve the H-equation with Broyden's method and c = 1. Set \tau_a = \tau_r =
10^{-8}. What q-factor for linear convergence do you see? Can you account
for this by applying the secant method to the equation x^2 = 0? See [53]
for a complete explanation.

7.5.6. Try to solve the nonlinear convection-diffusion equation in § 6.4 with
Broyden's method using various values for the parameter C. How does
the performance of Broyden's method differ from Newton-GMRES?
