
        ⎡ α_1   β_2               ⎤
        ⎢ γ_2   α_2    ⋱          ⎥
T_m  =  ⎢        ⋱     ⋱    β_m   ⎥
        ⎣ 0          γ_m    α_m   ⎦

As in the symmetric case, the bi-orthogonalization Lanczos algorithm can
be utilized to solve the linear system (3.2). For this purpose, for m fixed,
once the bases {v_i}_{i=1}^m and {z_i}_{i=1}^m have been constructed, it suffices to set

x^(m) = x^(0) + V_m y^(m),

where y^(m) is the solution to the linear system T_m y^(m) = ‖r^(0)‖_2 e_1. It
is also possible to introduce a stopping criterion based on the residual,
without computing it explicitly, since

‖r^(m)‖_2 = |γ_{m+1} e_m^T y^(m)| ‖v_{m+1}‖_2.

An implementation of the Lanczos method for unsymmetric systems is
given in Program 25. If a breakdown of the algorithm occurs, i.e., if γ_{k+1} =
0, the method stops, returning a negative value in the output variable
niter, which otherwise denotes the number of iterations necessary to reduce
the initial residual by a factor toll.
170 4. Iterative Methods for Solving Linear Systems

Program 25 - lanczosnosym : The Lanczos method for unsymmetric systems

function [xk,nres,niter]=lanczosnosym(A,b,x0,m,toll)
r0=b-A*x0; nres0=norm(r0,2);
if nres0 ~= 0
  V=r0/nres0; Z=V; gamma(1)=0; beta(1)=0; k=1; nres=1;
  while k <= m & nres > toll
    vk=V(:,k); zk=Z(:,k);
    if k==1, vk1=0*vk; zk1=0*zk;
    else, vk1=V(:,k-1); zk1=Z(:,k-1); end
    alpha(k)=zk'*A*vk;                        % three-term recurrences
    tildev=A*vk-alpha(k)*vk-beta(k)*vk1;
    tildez=A'*zk-alpha(k)*zk-gamma(k)*zk1;
    gamma(k+1)=sqrt(abs(tildez'*tildev));
    if gamma(k+1) == 0, k=m+2;                % breakdown
    else
      beta(k+1)=(tildez'*tildev)/gamma(k+1);
      Z=[Z,tildez/beta(k+1)]; V=[V,tildev/gamma(k+1)];
    end
    if k~=m+2
      if k==1, Tk=alpha;
      else, Tk=diag(alpha)+diag(beta(2:k),1)+diag(gamma(2:k),-1); end
      yk=Tk \ (nres0*[1,0*[1:k-1]]');         % solve T_k y = ||r0|| e_1
      nres=abs(gamma(k+1)*[0*[1:k-1],1]*yk)/nres0;
      xk=x0+V(:,1:k)*yk; k=k+1;
    end
  end
else
  xk=x0; nres=0; k=1;
end
if k==m+2, niter=-k; else, niter=k-1; end

Example 4.10 Let us solve the linear system with matrix A = tridiag_100(−0.5, 2,
−1) and right-hand side b selected in such a way that the exact solution is
x = (1, . . . , 1)^T. Using Program 25 with toll = 10^−13 and a randomly generated
x0, the algorithm converges in 59 iterations. Figure 4.10 shows the convergence
history, reporting the graph of ‖r^(k)‖_2 / ‖r^(0)‖_2 as a function of the number
of iterations.

We conclude recalling that some variants of the unsymmetric Lanczos
method have been devised, characterized by a reduced computational cost.
We refer the interested reader to the bibliography below for a









FIGURE 4.10. Graph of the residual normalized to the initial residual as a
function of the number of iterations for the Lanczos method applied to the
system in Example 4.10

complete description of the algorithms and to the programs included in the
MATLAB version of the public domain library templates for their efficient
implementation [BBC+ 94].
1. The bi-conjugate gradient method (BiCG): it can be derived from the
unsymmetric Lanczos method in the same way as the conjugate gra-
dient method is obtained from the FOM method [Fle75];

2. the Quasi-Minimal Residual method (QMR): it is analogous to the
GMRES method, the only difference being the fact that the Arnoldi
orthonormalization process is replaced by the Lanczos bi-orthogonali-
zation process;

3. the conjugate gradient squared method (CGS): the matrix-vector prod-
ucts involving the transposed matrix A^T are removed. A variant of
this method, known as BiCGStab, is characterized by a more regular
convergence than provided by the CGS method (see [Son89]).
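As an illustration of item 1, the BiCG iteration can be sketched in a few lines. The following is a minimal, unpreconditioned sketch with no breakdown safeguards; the 2x2 test matrix and right-hand side are made up for the example and are not from the text.

```python
# Minimal BiCG sketch (no preconditioning, no breakdown safeguards).
# The test matrix and right-hand side are illustrative only.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bicg(A, b, maxit=10, tol=1e-12):
    n = len(b)
    At = [[A[j][i] for j in range(n)] for i in range(n)]  # transpose of A
    x = [0.0] * n
    r = b[:]                  # residual r = b - A*x with x = 0
    rt = r[:]                 # shadow residual for the A^T recurrence
    p, pt = r[:], rt[:]
    for _ in range(maxit):
        rho = dot(rt, r)
        Ap = matvec(A, p)
        alpha = rho / dot(pt, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rt = [ri - alpha * qi for ri, qi in zip(rt, matvec(At, pt))]
        if dot(r, r) ** 0.5 < tol:
            break
        beta = dot(rt, r) / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        pt = [ri + beta * pi for ri, pi in zip(rt, pt)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]  # unsymmetric 2x2 example
b = [5.0, 7.0]                # chosen so the exact solution is (1, 1)
x = bicg(A, b)
```

In exact arithmetic the Krylov space of a 2x2 system is exhausted after two steps, so the sketch recovers the exact solution up to rounding.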

4.6 Stopping Criteria
In this section we address the problem of how to estimate the error intro-
duced by an iterative method and the number kmin of iterations needed to
reduce the initial error by a factor µ.
In practice, k_min can be obtained by estimating the convergence rate of
(4.2), i.e. the rate at which e^(k) → 0 as k tends to infinity. From (4.4),
we get

‖e^(k)‖ / ‖e^(0)‖ ≤ ‖B^k‖,

so that ‖B^k‖ is an estimate of the reducing factor of the norm of the error
after k steps. Typically, the iterative process is continued until ‖e^(k)‖ has
reduced with respect to ‖e^(0)‖ by a certain factor µ < 1, that is

‖e^(k)‖ ≤ µ ‖e^(0)‖.   (4.68)

If we assume that ρ(B) < 1, then Property 1.13 implies that there exists
a suitable matrix norm ‖·‖ such that ‖B‖ < 1. As a consequence, ‖B^k‖
tends to zero as k tends to infinity, so that (4.68) can be satisfied for a
sufficiently large k such that ‖B^k‖ ≤ µ holds. However, since ‖B^k‖ < 1, the
previous inequality amounts to requiring that

k ≥ log(µ) / log ‖B^k‖ = − log(µ) / R_k(B),   (4.69)

where R_k(B) is the average convergence rate introduced in Definition 4.2.
From a practical standpoint, (4.69) is useless, being nonlinear in k; if, how-
ever, the asymptotic convergence rate is adopted instead of the average
one, the following estimate for k_min is obtained:

k_min ≃ − log(µ) / R(B).   (4.70)
This latter estimate is usually rather optimistic, as confirmed by Example 4.11.
Example 4.11 For the matrix A3 of Example 4.2, in the case of the Jacobi
method, letting µ = 10^−5, condition (4.69) is satisfied with k_min = 16, while
(4.70) yields k_min = 15, with a good agreement between the two estimates.
Instead, on the matrix A4 of Example 4.2, we find that (4.69) is satisfied with
k_min = 30, while (4.70) yields k_min = 26.
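The two estimates can also be compared numerically. The following Python sketch uses a small made-up 2x2 iteration matrix B (not one of the matrices of Example 4.2): it finds the smallest k satisfying (4.69) by direct evaluation of ‖B^k‖ in the infinity norm, and computes the asymptotic estimate (4.70) from the spectral radius.

```python
import math

# Hypothetical 2x2 iteration matrix (illustrative, not from Example 4.2).
B = [[0.5, 0.1],
     [0.2, 0.4]]
mu = 1e-5

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm_inf(X):
    # Infinity norm: maximum absolute row sum.
    return max(sum(abs(v) for v in row) for row in X)

# Spectral radius of a 2x2 matrix via the characteristic polynomial.
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = tr * tr - 4.0 * det          # positive here: real eigenvalues
rho = max(abs((tr + math.sqrt(disc)) / 2), abs((tr - math.sqrt(disc)) / 2))

# Asymptotic estimate (4.70): k_min ~ -log(mu) / R(B), with R(B) = -log(rho).
k_asym = math.ceil(-math.log(mu) / (-math.log(rho)))

# Condition (4.69): smallest k such that ||B^k|| <= mu.
Bk = [row[:] for row in B]
k = 1
while norm_inf(Bk) > mu:
    Bk = matmul(Bk, B)
    k += 1
```

For this mildly non-normal B the two estimates coincide; for strongly non-normal iteration matrices the transient growth of ‖B^k‖ makes (4.70) noticeably optimistic, consistent with Example 4.11.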

4.6.1 A Stopping Test Based on the Increment
From the recursive error relation e^(k+1) = B e^(k), we get

‖e^(k+1)‖ ≤ ‖B‖ ‖e^(k)‖.   (4.71)

Using the triangular inequality we get

‖e^(k+1)‖ ≤ ‖B‖ ( ‖e^(k+1)‖ + ‖x^(k+1) − x^(k)‖ ),

from which it follows that

‖x − x^(k+1)‖ ≤ (‖B‖ / (1 − ‖B‖)) ‖x^(k+1) − x^(k)‖.   (4.72)

In particular, taking k = 0 in (4.72) and applying (4.71) recursively, we also get

‖x − x^(k+1)‖ ≤ (‖B‖^(k+1) / (1 − ‖B‖)) ‖x^(1) − x^(0)‖,

which can be used to estimate the number of iterations necessary to fulfill
the condition ‖e^(k+1)‖ ≤ µ, for a given tolerance µ.
In practice, ‖B‖ can be estimated as follows: since

x^(k+1) − x^(k) = −(x − x^(k+1)) + (x − x^(k)) = B(x^(k) − x^(k−1)),

a lower bound of ‖B‖ is provided by c = δ_{k+1}/δ_k, where δ_{j+1} = ‖x^(j+1) −
x^(j)‖, with j = k − 1, k. Replacing ‖B‖ by c, the right-hand side of (4.72)
suggests using the following indicator for ‖e^(k+1)‖:

ε^(k+1) = δ²_{k+1} / (δ_k − δ_{k+1}).   (4.73)

Due to the kind of approximation of ‖B‖ that has been used, the reader is
warned that ε^(k+1) should not be regarded as an upper bound for ‖e^(k+1)‖.
However, ε^(k+1) often provides a reasonable indication of the true error
behavior, as we can see in the following example.

Example 4.12 Consider the linear system Ax = b with

    ⎡ 4   1   1 ⎤        ⎡  6  ⎤
A = ⎢ 2  −9   0 ⎥,   b = ⎢ −7  ⎥,
    ⎣ 0  −8  −6 ⎦        ⎣ −14 ⎦

which admits the unit vector as exact solution. Let us apply the Jacobi method
and estimate the error at each step by using (4.73). Figure 4.11 shows an ac-
ceptable agreement between the behavior of the error ‖e^(k+1)‖_∞ and that of
its estimate ε^(k+1).
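This experiment can be sketched in plain Python as follows. The zero initial guess is an assumption (the text does not specify x^(0)); the matrix and right-hand side are those of the example above.

```python
# Jacobi iteration on the system of Example 4.12, with the error
# indicator (4.73) computed from two successive increments.
A = [[4.0, 1.0, 1.0],
     [2.0, -9.0, 0.0],
     [0.0, -8.0, -6.0]]
b = [6.0, -7.0, -14.0]
exact = [1.0, 1.0, 1.0]

def jacobi_step(x):
    # x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
    return [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
            for i in range(3)]

def norm_inf(v):
    return max(abs(t) for t in v)

x_prev = [0.0, 0.0, 0.0]        # assumed initial guess
x_cur = jacobi_step(x_prev)
for k in range(40):
    x_next = jacobi_step(x_cur)
    delta_k = norm_inf([x_cur[i] - x_prev[i] for i in range(3)])
    delta_k1 = norm_inf([x_next[i] - x_cur[i] for i in range(3)])
    if delta_k > delta_k1:      # indicator (4.73) is meaningful only then
        eps_est = delta_k1 ** 2 / (delta_k - delta_k1)
    err = norm_inf([x_next[i] - exact[i] for i in range(3)])
    x_prev, x_cur = x_cur, x_next
```

Printing err and eps_est along the iteration reproduces qualitatively the behavior of Figure 4.11: the two quantities typically agree to within an order of magnitude.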











FIGURE 4.11. Absolute error (in solid line) versus the error estimated by (4.73)
(dashed line). The number of iterations is indicated on the x-axis

4.6.2 A Stopping Test Based on the Residual
A different stopping criterion consists of continuing the iteration until
‖r^(k)‖ ≤ µ, µ being a fixed tolerance. Note that

‖x − x^(k)‖ = ‖A^(−1)(b − A x^(k))‖ = ‖A^(−1) r^(k)‖ ≤ ‖A^(−1)‖ µ.

Considering instead a normalized residual, i.e. stopping the iteration as
soon as ‖r^(k)‖ / ‖b‖ ≤ µ, we obtain the following control on the relative
error:

‖x − x^(k)‖ / ‖x‖ ≤ ‖A^(−1) r^(k)‖ / ‖x‖ ≤ K(A) ‖r^(k)‖ / ‖b‖ ≤ µ K(A).
In the case of preconditioned methods, the residual is replaced by the pre-
conditioned residual, so that the previous criterion becomes

‖P^(−1) r^(k)‖ / ‖P^(−1) r^(0)‖ ≤ µ,

where P is the preconditioning matrix.
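The relative-error bound above can be checked numerically. The following sketch uses the infinity norm and a made-up 2x2 system with a hypothetical approximate iterate; all values are illustrative, not from the text.

```python
# Check that ||x - xk|| / ||x|| <= K(A) * ||rk|| / ||b||  (infinity norm)
# on a small made-up example.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]                  # chosen so the exact solution is (1, 1)
x = [1.0, 1.0]
xk = [1.01, 0.99]               # a hypothetical approximate iterate

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def norm_inf_vec(v):
    return max(abs(t) for t in v)

def norm_inf_mat(M):
    return max(sum(abs(v) for v in row) for row in M)

# Inverse of a 2x2 matrix, and the condition number K(A) = ||A|| ||A^-1||.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]
K = norm_inf_mat(A) * norm_inf_mat(Ainv)

Axk = matvec(A, xk)
rk = [b[i] - Axk[i] for i in range(2)]          # residual of the iterate
rel_err = norm_inf_vec([x[i] - xk[i] for i in range(2)]) / norm_inf_vec(x)
bound = K * norm_inf_vec(rk) / norm_inf_vec(b)
```

For well-conditioned matrices like this one the bound is tight; for ill-conditioned A the factor K(A) makes the residual test much less informative about the true error.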

4.7 Applications
In this section we consider two examples arising in electrical network anal-
ysis and structural mechanics which lead to the solution of large sparse
linear systems.

4.7.1 Analysis of an Electric Network
We consider a purely resistive electric network (shown in Figure 4.12, left)
which consists of a connection of n stages S (Figure 4.12, right) through
the series resistances R. The circuit is completed by the driving current
generator I0 and the load resistance RL. As an example, a purely resistive
network is a model of a signal attenuator for low-frequency applications
where capacitive and inductive effects can be neglected. The connecting
points between the electrical components will be referred to henceforth as
nodes and are progressively labeled as drawn in the figure. For n ≥ 1, the
total number of nodes is 4n. Each node is associated with a value of the
electric potential Vi, i = 0, . . . , 4n, which are the unknowns of the problem.

The nodal analysis method is employed to solve the problem. Precisely,
the Kirchhoff current law is written at any node of the network, leading
to the linear system ỸṼ = Ĩ, where Ṽ ∈ R^(N+1) is the vector of nodal
potentials, Ĩ ∈ R^(N+1) is the load vector and the entries of the matrix Ỹ ∈


FIGURE 4.12. Resistive electric network (left) and resistive stage S (right)

R^((N+1)×(N+1)), for i, j = 0, . . . , 4n, are given by

        ⎧  Σ_{j∈k(i)} G_ij ,   for i = j,
Ỹ_ij =  ⎨
        ⎩  −G_ij ,             for i ≠ j,

where k(i) is the index set of the neighboring nodes of node i and G_ij =
1/R_ij is the admittance between node i and node j, R_ij denoting the
resistance between the two nodes i and j. Since the potential is defined
up to an additive constant, we arbitrarily set V0 = 0 (ground potential). As
a consequence, the number of independent nodes for potential difference
computations is N = 4n − 1 and the linear system to be solved becomes
YV = I, where Y ∈ R^(N×N), V ∈ R^N and I ∈ R^N are obtained by
eliminating the first row and column in Ỹ and the first entry in Ṽ and Ĩ,
respectively.
The matrix Y is symmetric, diagonally dominant and positive definite. This
last property follows by noting that

Ṽ^T Ỹ Ṽ = Σ_{i=1} G_ii V_i² + Σ_{i,j=1} G_ij (V_i − V_j)²,

which is always a positive quantity, being equal to zero only if V = 0. The
sparsity pattern of Y in the case n = 3 is shown in Figure 4.13 (left), while
the spectral condition number of Y as a function of the number of blocks n
is reported in Figure 4.13 (right). Our numerical computations have been
carried out setting the resistance values equal to 1 Ω, while I0 = 1 A.
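The assembly of the admittance matrix and the elimination of the ground node can be sketched in Python for a small resistor graph. The edge list below is made up for the example (it is not the network of Figure 4.12); the Cholesky factorization at the end is one way to confirm positive definiteness of the reduced matrix.

```python
# Nodal-analysis assembly: Y_ii accumulates the admittances incident to
# node i, and Y_ij = -G_ij for each resistor between nodes i and j.
# Node 0 is the ground node. Resistor values (in ohms) are illustrative.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (1, 3): 2.0}

n_nodes = 4
Y = [[0.0] * n_nodes for _ in range(n_nodes)]
for (i, j), R in edges.items():
    G = 1.0 / R                 # admittance of the resistor between i and j
    Y[i][i] += G
    Y[j][j] += G
    Y[i][j] -= G
    Y[j][i] -= G

# Ground node 0 (V0 = 0): drop the first row and column.
Yr = [row[1:] for row in Y[1:]]

# The reduced matrix is symmetric ...
assert all(abs(Yr[i][j] - Yr[j][i]) < 1e-14
           for i in range(3) for j in range(3))

# ... and positive definite: a plain Cholesky factorization succeeds,
# with strictly positive diagonal entries.
def cholesky(M):
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (M[i][i] - s) ** 0.5   # would fail if not SPD
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

L = cholesky(Yr)
```

The full matrix Y is singular (its rows sum to zero); it is the grounding of node 0 that makes the reduced system uniquely solvable.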
In Figure 4.14 we report the convergence history of several non-precondi-
tioned iterative methods in the case n = 5, corresponding to a matrix of
size 19 × 19. The plots show the Euclidean norms of the residual normalized
to the initial residual. The dashed curve refers to the Gauss-Seidel method,
the dash-dotted line refers to the gradient method, while the solid and
circled lines refer respectively to the conjugate gradient (CG) and SOR
methods (with an optimal value of the relaxation parameter ω ≃ 1.76,
computed according to (4.19) since Y is block tridiagonal, symmetric and
positive definite). The SOR method converges in 109 iterations, while the
CG method converges in 10 iterations.
We have also considered the solution of the system at hand by the
conjugate gradient (CG) method using the Cholesky version of the ILU(0) and

