
CGNR on the unpreconditioned problem, i.e., CG applied to $L^* L u = L^* f$.
The matrix corresponding to the discretization of $L^* L$ has a large condition
number. The solid line corresponds to the CGNR applied to the preconditioned
problem, i.e., CG applied to $L^* G^2 L u = L^* G^2 f$. We limited the number of
iterations to 310 = 10m and the unpreconditioned formulation of CGNR had
made very little progress after 310 iterations. This is a result of squaring
the large condition number. The preconditioned formulation did quite well,
converging in eight iterations. The MATLAB flops command indicates that
the unpreconditioned iteration required 13.7 million floating-point operations








and the preconditioned iteration 2.4 million. However, as you are asked to
investigate in Exercise 3.9.8, the behavior of even the preconditioned iteration
can also be poor if the coefficients of the first derivative terms are too large.
CGNE has similar performance; see Exercise 3.9.9.
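To make the normal-equations formulation concrete, here is a minimal MATLAB
sketch of both CGNR variants. It is not one of the codes from the collection:
Lmul (apply the discretized operator L), Ltrans (apply its adjoint), and
Gsolve (apply the fast Poisson preconditioner G) are hypothetical helper
functions, MATLAB's built-in pcg stands in for a CG code, and the tolerance is
an arbitrary illustrative value.

    tol = 1.0e-3; maxit = 310;                      % illustrative tolerance; 310 iteration limit
    % CGNR, unpreconditioned: CG applied to the normal equations L'*L u = L'*f.
    Aun = @(u) Ltrans(Lmul(u));                     % u -> L'*(L*u); symmetric positive definite
    [u_un, flag_un] = pcg(Aun, Ltrans(f), tol, maxit);
    % CGNR, preconditioned: CG applied to L'*G^2*L u = L'*G^2*f.
    Apre = @(u) Ltrans(Gsolve(Gsolve(Lmul(u))));    % u -> L'*(G*(G*(L*u)))
    [u_pre, flag_pre] = pcg(Apre, Ltrans(Gsolve(Gsolve(f))), tol, maxit);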

[Plot: Relative Residual (log scale, 10^0 down to 10^-4) versus Iterations (0 to 350); solid line: preconditioned CGNR, dashed line: unpreconditioned CGNR.]




Fig. 3.3. CGNR for 2-D elliptic equation.

The examples for Bi-CGSTAB use the code bicgstab from the collection
of MATLAB codes. This code has the same input/output arguments as gmres.
We applied Bi-CGSTAB to both the preconditioned and unpreconditioned
forms of (3.33) with the same data and termination criterion as for the GMRES
example.
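As an illustration only, the calls might look like the following MATLAB
sketch. It uses MATLAB's built-in bicgstab rather than the collection's code
(whose argument list follows gmres); Lmul and Gsolve are the same hypothetical
operator and preconditioner helpers as above, and the left-preconditioned form
is assumed to be G L u = G f.

    tol = 1.0e-3; maxit = 40;                                                   % illustrative values
    [u_un,  flag_un ] = bicgstab(@(u) Lmul(u),         f,         tol, maxit);  % unpreconditioned
    [u_pre, flag_pre] = bicgstab(@(u) Gsolve(Lmul(u)), Gsolve(f), tol, maxit);  % left preconditioned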
We plot the iteration histories in Fig. 3.4. As in the previous examples, the
solid line corresponds to the preconditioned iteration and the dashed line to
the unpreconditioned iteration. Neither convergence history is monotonic, with
the unpreconditioned iteration being very irregular, but not varying by many
orders of magnitude as a CGS or Bi-CG iteration might. The preconditioned
iteration terminated in six iterations (12 matrix-vector products) and required
roughly 1.7 million floating-point operations. The unpreconditioned iteration
took 40 iterations and 2.2 million floating-point operations. We see a
considerable improvement in cost over CGNR and, since our unpreconditioned
matrix-vector product is so inexpensive, a better cost/iterate than GMRES in
the unpreconditioned case.
We repeat the experiment with TFQMR as our linear solver. We use
the code tfqmr that is included in our collection. In Fig. 3.5 we plot the
convergence histories in terms of the approximate residual $\sqrt{2k+1}\,\tau_{2k}$ given
by (3.31). We plot only full iterations ($m = 2k$) except, possibly, for the
final iterate. The approximate residual is given in terms of the quasi-residual
norm $\tau_m$ rather than the true residual, and the graphs are monotonic.









[Plot: Relative Residual (log scale, 10^1 down to 10^-4) versus Iterations (0 to 40); solid line: preconditioned Bi-CGSTAB, dashed line: unpreconditioned Bi-CGSTAB.]




Fig. 3.4. Bi-CGSTAB for 2-D elliptic equation.
[Plot: Relative Approximate Residual (log scale, 10^0 down to 10^-5) versus Iterations (0 to 70).]




Fig. 3.5. TFQMR for 2-D elliptic equation.


Our termination criterion is based on the quasi-residual and (3.31) as described in
§ 3.6.4, stopping the iteration when $\tau_m \sqrt{m+1}/\|b\|_2 \le h^2$. This led to actual
relative residuals of $7 \times 10^{-5}$ for the unpreconditioned iteration and $2 \times 10^{-4}$
for the preconditioned iteration, indicating that it is reasonable to use the
quasi-residual estimate (3.31) to make termination decisions.
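As a small illustration (the mesh width h = 1/32 is an assumption consistent
with a 31 x 31 interior grid, not a value quoted here), the quasi-residual test
can be coded as:

    % Quasi-residual termination test based on (3.31); tau_m is the TFQMR
    % quasi-residual norm after m half-steps and b is the right-hand side.
    h = 1/32;                                        % assumed mesh width
    done = tau_m*sqrt(m+1)/norm(b,2) <= h^2;         % stop when true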
The unpreconditioned iteration required roughly 4.1 million floating-point
operations and 67 iterations for termination. The preconditioned iteration
was much more efficient, taking 1.9 million floating-point operations and 7
iterations. The plateaus in the graph of the convergence history for the








unpreconditioned iteration are related (see [196]) to spikes in the corresponding
graph for a CGS iteration.









3.9. Exercises on GMRES
3.9.1. Give an example of a matrix that is not diagonalizable.

3.9.2. Prove Theorem 3.1.4 and Theorem 3.1.5.

3.9.3. Prove that the Arnoldi process (unless it terminates prematurely with a
solution) produces a basis for $\mathcal{K}_{k+1}$ that satisfies (3.8).

3.9.4. Duplicate Table 3.1 and add two more columns corresponding to classical
Gram–Schmidt orthogonalization with reorthogonalization at every step
and determine when the test in (3.10) detects loss of orthogonality. How
do your conclusions compare with those in [160]?

3.9.5. Let $A = J_\lambda$ be an elementary Jordan block of size $N$ corresponding to
an eigenvalue $\lambda \ne 0$. This means that
\[
J_\lambda = \begin{pmatrix}
\lambda & 1 & & & \\
 & \lambda & 1 & & \\
 & & \ddots & \ddots & \\
 & & & \lambda & 1 \\
 & & & & \lambda
\end{pmatrix}.
\]
Give an example of $x_0$, $b \in R^N$, and $\lambda \in R$ such that the GMRES
residuals satisfy $\|r_k\|_2 = \|r_0\|_2$ for all $k < N$.

3.9.6. Consider the centered difference discretization of
\[
-u'' + u' + u = 1, \quad u(0) = u(1) = 0.
\]
Solve this problem with GMRES (without preconditioning) and then
apply as a preconditioner the map $M$ defined by
\[
-(Mf)'' = f, \quad (Mf)(0) = (Mf)(1) = 0.
\]
That is, precondition with a solver for the high-order term in the
differential equation using the correct boundary conditions. Try this
for meshes with 50, 100, 200, . . . points. How does the performance of
the iteration depend on the mesh? Try other preconditioners such as
Gauss–Seidel and Jacobi. How do other methods such as CGNR, CGNE,
Bi-CGSTAB, and TFQMR perform?

3.9.7. Modify the MATLAB GMRES code to allow for restarts. One way to do
this is to call the gmres code with another code which has an outer loop
to control the restarting as in Algorithm gmresm. See [167] for another
example. How would you test for failure in a restarted algorithm? What
kinds of failures can occur? Repeat Exercise 3.9.6 with a limit of three
GMRES iterations before a restart.








3.9.8. Duplicate the results in § 3.7 and § 3.8. Compare the execution times
in your computing environment. Experiment with different values of m.
Why might GMRES(m) be faster than GMRES? Vary the dimension and
the coefficient $a_y$. Try $a_y = 5y$. Why is the performance of the methods
sensitive to $a_y$ and $a_x$?

3.9.9. Apply CGNE to the PDE in § 3.7. How does the performance differ from
CGNR and GMRES?

3.9.10. Consider preconditioning from the right. For the PDE in § 3.7, apply
GMRES, CGNR, Bi-CGSTAB, TFQMR, and/or CGNE, to the equation
LGw = f and then recover the solution from u = Gw. Compare timings
and solution accuracy with those obtained by left preconditioning.

3.9.11. Are the solutions to (3.33) in § 3.7 and § 3.8 equally accurate? Compare
the final results with the known solution.

3.9.12. Implement Bi-CG and CGS and try them on some of the examples in
this chapter.
















Chapter 4

Basic Concepts and Fixed-Point Iteration




This part of the book is about numerical methods for the solution of systems
of nonlinear equations. Throughout this part, we will let $\| \cdot \|$ denote a norm
on $R^N$ as well as the induced matrix norm.
We begin by setting the notation. We seek to solve
(4.1) F (x) = 0.
Here $F: R^N \to R^N$. We denote the $i$th component of $F$ by $f_i$. If the
components of $F$ are differentiable at $x \in R^N$ we define the Jacobian matrix
$F'(x)$ by
\[
F'(x)_{ij} = \frac{\partial f_i}{\partial x_j}(x).
\]
The Jacobian matrix is the vector analog of the derivative. We may express
the fundamental theorem of calculus as follows.
Theorem 4.0.1. Let $F$ be differentiable in an open set $\Omega \subset R^N$ and let
$x^* \in \Omega$. Then for all $x \in \Omega$ sufficiently near $x^*$
\[
F(x) - F(x^*) = \int_0^1 F'(x^* + t(x - x^*))(x - x^*)\, dt.
\]
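As a small illustration (not from the text), for $N = 2$ and
$F(x) = (x_1^2 - x_2, \; x_1 x_2 - 1)^T$ the Jacobian is
\[
F'(x) = \begin{pmatrix} 2x_1 & -1 \\ x_2 & x_1 \end{pmatrix}.
\]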


4.1. Types of convergence
Iterative methods can be classified by the rate of convergence.
Definition 4.1.1. Let $\{x_n\} \subset R^N$ and $x^* \in R^N$. Then
• $x_n \to x^*$ q-quadratically if $x_n \to x^*$ and there is $K > 0$ such that
\[
\|x_{n+1} - x^*\| \le K \|x_n - x^*\|^2.
\]
• $x_n \to x^*$ q-superlinearly with q-order $\alpha > 1$ if $x_n \to x^*$ and there is
$K > 0$ such that
\[
\|x_{n+1} - x^*\| \le K \|x_n - x^*\|^\alpha.
\]
• $x_n \to x^*$ q-superlinearly if
\[
\lim_{n \to \infty} \frac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|} = 0.
\]






• $x_n \to x^*$ q-linearly with q-factor $\sigma \in (0,1)$ if
\[
\|x_{n+1} - x^*\| \le \sigma \|x_n - x^*\|
\]
for $n$ sufficiently large.
Definition 4.1.2. An iterative method for computing $x^*$ is said to be
locally (q-quadratically, q-superlinearly, q-linearly, etc.) convergent if the
iterates converge to $x^*$ (q-quadratically, q-superlinearly, q-linearly, etc.) given
that the initial data for the iteration is sufficiently good.
Note that a q-superlinearly convergent sequence is also q-linearly convergent
with q-factor $\sigma$ for any $\sigma > 0$. A q-quadratically convergent sequence is
q-superlinearly convergent with q-order 2.
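As a concrete illustration (not from the text), the sequence
\[
x_n = 2^{-2^n}, \qquad x_{n+1} = x_n^2,
\]
converges to $0$ q-quadratically with $K = 1$, whereas $x_n = 2^{-n}$ converges to $0$
only q-linearly, with q-factor $\sigma = 1/2$.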
In general, a q-superlinearly convergent method is preferable to a q-linearly
convergent one if the cost of a single iterate is the same for both methods. We
will see that often a method that is more slowly convergent, in terms of the
convergence types defined above, can have a cost/iterate so low that the slow
iteration is more efficient.
Sometimes errors are introduced into the iteration that are independent of
the errors in the approximate solution. An example of this would be a problem
in which the nonlinear function F is the output of another algorithm that
provides error control, such as adjustment of the mesh size in an approximation
of a differential equation. One might only compute F to low accuracy in the
initial phases of the iteration to compute a solution and tighten the accuracy
as the iteration progresses. The r-type convergence classification enables us to
describe that idea.
Definition 4.1.3. Let $\{x_n\} \subset R^N$ and $x^* \in R^N$. Then $\{x_n\}$ converges
to $x^*$ r-(quadratically, superlinearly, linearly) if there is a sequence $\{\xi_n\} \subset R$
converging q-(quadratically, superlinearly, linearly) to zero such that
\[
\|x_n - x^*\| \le \xi_n.
\]
We say that $\{x_n\}$ converges r-superlinearly with r-order $\alpha > 1$ if $\xi_n \to 0$
q-superlinearly with q-order $\alpha$.

4.2. Fixed-point iteration
Many nonlinear equations are naturally formulated as fixed-point problems
(4.2) $x = K(x)$
where $K$, the fixed-point map, may be nonlinear. A solution $x^*$ of (4.2) is
called a fixed point of the map $K$. Such problems are nonlinear analogs of the
linear fixed-point problems considered in Chapter 1. In this section we analyze
convergence of fixed-point iteration, which is given by
(4.3) $x_{n+1} = K(x_n)$.
This iterative method is also called nonlinear Richardson iteration, Picard
iteration, or the method of successive substitution.
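As a minimal sketch of (4.3), not taken from the text, the following MATLAB
fragment applies fixed-point iteration to the scalar map K(x) = cos(x); the
initial iterate, tolerance, and iteration limit are arbitrary illustrative choices.

    K = @(x) cos(x);               % fixed-point map; x = cos(x) has a unique fixed point
    x = 1.0;                       % initial iterate x_0
    for n = 1:100
        xnew = K(x);               % x_{n+1} = K(x_n)
        if abs(xnew - x) <= 1.0e-12, x = xnew; break, end
        x = xnew;
    end
    % x is now approximately 0.739085, the fixed point of cos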
