

FIGURE 4.13. Sparsity pattern of Y for n = 3 (left) and spectral condition number of Y as a function of n (right)

FIGURE 4.14. Convergence history of several non-preconditioned iterative methods


MILU(0) preconditioners, where drop tolerances equal to µ = 10^{-2}, 10^{-3} have been chosen for the MILU(0) preconditioner (see Section 4.3.2). Calculations with both preconditioners have been done using the MATLAB functions cholinc and michol. Table 4.2 shows the convergence iterations of the method for n = 5, 10, 20, 40, 80, 160 and for the considered values of µ. We report in the second column the number of nonzero entries in the Cholesky factor of matrix Y, in the third column the number of iterations needed by the CG method without preconditioning to converge, while the columns ICh(0) and MICh(0) with µ = 10^{-2} and µ = 10^{-3} show the same information for the CG method using the incomplete Cholesky and modified incomplete Cholesky preconditioners, respectively.
The entries of the table are the numbers of iterations needed to converge, and the numbers in brackets are the nonzero entries of the L-factor of the corresponding preconditioners. Notice the decrease of the number of iterations as µ decreases, as expected. Notice also the increase of the number of iterations as the size of the problem grows.

  n       nz    CG    ICh(0)       MICh(0) µ = 10^{-2}   MICh(0) µ = 10^{-3}
  5      114    10      9 (54)       6 (78)                4 (98)
 10      429    20     15 (114)      7 (173)               5 (233)
 20     1659    40     23 (234)     10 (363)               6 (503)
 40     6519    80     36 (474)     14 (743)               7 (1043)
 80    25839   160     62 (954)     21 (1503)             10 (2123)
160   102879   320    110 (1914)    34 (3023)             14 (4283)

TABLE 4.2. Convergence iterations for the preconditioned CG method
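A note on reproducing runs of this kind: the functions cholinc and michol belong to older MATLAB releases, while current releases provide the incomplete Cholesky factorization through ichol. The sketch below illustrates the type of comparison reported in Table 4.2; it is not the book's program, and it uses a generic sparse symmetric positive definite model matrix (a 2D five-point Laplacian with an arbitrary grid parameter) in place of the matrix Y constructed earlier in this application.

   % Sketch: plain CG versus CG preconditioned by incomplete Cholesky factors,
   % for a model SPD matrix standing in for Y.
   n = 40;                                   % grid parameter (assumption)
   e = ones(n,1);
   T = spdiags([-e 4*e -e], -1:1, n, n);
   S = spdiags([-e zeros(n,1) -e], -1:1, n, n);
   Y = kron(speye(n), T) + kron(S, speye(n));      % 2D five-point Laplacian
   b = Y*ones(n^2,1);                              % right-hand side with known solution
   tol = 1e-13;  maxit = 1000;
   [~,~,~,it0] = pcg(Y, b, tol, maxit);                          % no preconditioning
   L0 = ichol(Y);                                                % IC(0) factor, no fill-in
   [~,~,~,it1] = pcg(Y, b, tol, maxit, L0, L0');
   opts = struct('type','ict','droptol',1e-3,'michol','on');     % modified IC, drop tolerance 10^{-3}
   L1 = ichol(Y, opts);
   [~,~,~,it2] = pcg(Y, b, tol, maxit, L1, L1');
   fprintf('iterations: CG %d, ICh(0)-CG %d, MICh(10^-3)-CG %d\n', it0, it1, it2);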


4.7.2 Finite Difference Analysis of Beam Bending
Consider the beam clamped at the endpoints that is drawn in Figure 4.15 (left). The structure, of length L, is subject to a distributed load P, varying along the free coordinate x and expressed in [Kg m^{-1}]. We assume henceforth that the beam has a uniform rectangular section, of width r and depth s, moment of inertia J = rs^3/12 and Young's modulus E, expressed in [m^4] and [Kg m^{-2}], respectively.

FIGURE 4.15. Clamped beam under the load P(x), with displacement u(x) (left); convergence histories for the preconditioned conjugate gradient method in the solution of system (4.76), for n = 10, 60, 110 (right)

The transverse bending of the beam, under the assumption of small displacements, is governed by the following fourth-order differential equation

       (EJ u'')''(x) = P(x),   0 < x < L,                                   (4.74)

where u = u(x) denotes the vertical displacement. The following boundary conditions (at the endpoints x = 0 and x = L)

       u(0) = u(L) = 0,   u'(0) = u'(L) = 0,                                (4.75)

model the effect of the two clampings (vanishing displacements and rotations). To solve numerically the boundary-value problem (4.74)-(4.75), we use the finite difference method (see Section 10.10.1 and Exercise 11 of Chapter 12).

With this aim, let us introduce the discretization nodes x_j = jh, with h = L/N_h and j = 0, ..., N_h, and substitute at each node x_j the fourth-order derivative with an approximation through centered finite differences. Letting f(x) = P(x)/(EJ), f_j = f(x_j) and denoting by η_j the (approximate) nodal displacement of the beam at node x_j, the finite difference discretization of (4.74)-(4.75) is

       η_{j-2} − 4η_{j-1} + 6η_j − 4η_{j+1} + η_{j+2} = h^4 f_j,   ∀ j = 2, ..., N_h − 2,
       η_0 = η_1 = η_{N_h−1} = η_{N_h} = 0.                                              (4.76)

The null displacement boundary conditions in (4.76), which have been imposed at the first and the last two nodes of the grid, require that N_h ≥ 4. Notice that a fourth-order scheme has been used to approximate the fourth-order derivative, while, for the sake of simplicity, a first-order approximation has been employed to deal with the boundary conditions (see Section 10.10.1).
The N_h − 3 discrete equations (4.76) yield a linear system of the form Ax = b where the unknown vector x ∈ R^{N_h−3} and the load vector b ∈ R^{N_h−3} are given respectively by x = (η_2, η_3, ..., η_{N_h−2})^T and b = (f_2, f_3, ..., f_{N_h−2})^T, while the coefficient matrix A ∈ R^{(N_h−3)×(N_h−3)} is pentadiagonal and symmetric, given by A = pentadiag_{N_h−3}(1, −4, 6, −4, 1).
The matrix A is symmetric and positive definite. Therefore, to solve the system Ax = b, the SSOR preconditioned conjugate gradient method (see Section 4.21) and the Cholesky factorization method have been employed. In the remainder of the section, the two methods are identified by the symbols (CG) and (CH).
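A minimal MATLAB sketch of this construction is given below; it is not Program 20, and the values of N_h, L, E, J and the load P(x) are placeholders chosen only for illustration.

   % Assemble the pentadiagonal system (4.76) and solve it both by conjugate
   % gradients (CG) and by a Cholesky factorization (CH).
   Nh = 100;  Lb = 1;  h = Lb/Nh;            % beam length and grid size (assumptions)
   E = 1;  J = 1;  P = @(x) ones(size(x));   % unit uniform load (assumption)
   m = Nh - 3;  e = ones(m,1);
   A = spdiags([e -4*e 6*e -4*e e], -2:2, m, m);   % pentadiag_{Nh-3}(1,-4,6,-4,1)
   xj = (2:Nh-2)'*h;                               % nodes carrying the unknowns eta_2,...,eta_{Nh-2}
   b = h^4 * P(xj)/(E*J);                          % right-hand side h^4 f_j
   eta_cg = pcg(A, b, 1e-12, 1000);                % CG (A is s.p.d.); the text uses toll = 10^{-15}
   R = chol(A);  eta_ch = R\(R'\b);                % CH: Cholesky solve, A = R'R
   fprintf('max |CG - CH| = %.2e\n', max(abs(eta_cg - eta_ch)));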
The convergence histories of CG are reported in Figure 4.15 (right), where the sequences ‖r^{(k)}‖_2/‖b‖_2, for the values n = 10, 60, 110, are plotted, r^{(k)} = b − Ax^{(k)} being the residual at the k-th step. The results have been obtained using Program 20, with toll = 10^{-15} and ω = 1.8 in (4.22). The initial vector x^{(0)} has been set equal to the null vector.
As a comment to the graphs, it is worth noting that CG has required 7, 33 and 64 iterations to converge, respectively, with a maximum absolute error of 5·10^{-15} with respect to the solution produced by CH. The latter has an overall computational cost of 136, 1286 and 2436 flops respectively, to be compared with the corresponding 3117, 149424 and 541647 flops of method CG. As for the performance of the SSOR preconditioner, we remark that the spectral condition number of matrix A is equal to 192, 3.8·10^5 and 4.5·10^6, respectively, while the corresponding values in the preconditioned case are 65, 1.2·10^4 and 1.3·10^5.
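A rough numerical illustration of this last remark can be obtained by comparing the extreme generalized eigenvalues of A with and without one standard form of the SSOR preconditioner. The sketch below does this for an arbitrary matrix size and is only an approximation of the quantities reported above, which were computed by Program 20.

   % Spectral condition number of the beam matrix A, with and without a
   % standard SSOR preconditioner (omega = 1.8 as in the text); the size m
   % is an arbitrary choice for illustration.
   m = 97;  e = ones(m,1);
   A = full(spdiags([e -4*e 6*e -4*e e], -2:2, m, m));
   omega = 1.8;
   D = diag(diag(A));  Elow = tril(A,-1);                              % A = D + Elow + Elow'
   P = (D + omega*Elow) * (D \ (D + omega*Elow')) / (omega*(2-omega)); % SSOR preconditioner
   lamA = eig(A);                % A is s.p.d.: real positive eigenvalues
   lamP = eig(A, P);             % eigenvalues of P^{-1}A (generalized problem)
   fprintf('K2(A) = %.3e,  K2(P^{-1}A) = %.3e\n', max(lamA)/min(lamA), max(lamP)/min(lamP));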

4.8 Exercises
1. The spectral radius of the matrix

       B = [ a  4 ]
           [ 0  a ]

is ρ(B) = a. Check that if 0 < a < 1, then ρ(B) < 1, while ‖B^m‖_2^{1/m} can be greater than 1.
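A quick numerical check of this statement, for an arbitrarily chosen value of a, is:

   % Exercise 1, numerical check: rho(B) = a < 1, yet ||B^m||_2^{1/m} can
   % exceed 1 for moderate m; it approaches rho(B) only as m grows.
   a = 0.9;  B = [a 4; 0 a];
   for m = [1 2 5 10 50 100]
       fprintf('m = %3d   ||B^m||_2^{1/m} = %8.4f\n', m, norm(B^m)^(1/m));
   end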
2. Let A ∈ R^{n×n} be a strictly diagonally dominant matrix by rows. Show
that the Gauss-Seidel method for the solution of the linear system (3.2) is
convergent.
3. Check that the matrix A = tridiag(−1, α, −1), with α ∈ R, has eigenvalues

       λ_j = α − 2cos(jθ),   j = 1, ..., n,

where θ = π/(n + 1), and the corresponding eigenvectors are

       q_j = [sin(jθ), sin(2jθ), ..., sin(njθ)]^T.

Under which conditions on α is the matrix positive definite?
[Solution : α ≥ 2.]
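A short numerical verification of the eigenvalue formula, for one arbitrary choice of n and α, reads:

   % Exercise 3, numerical check of the eigenvalues of tridiag(-1, alpha, -1).
   n = 10;  alpha = 2;  theta = pi/(n+1);
   e = ones(n,1);
   A = full(spdiags([-e alpha*e -e], -1:1, n, n));
   lam = alpha - 2*cos((1:n)'*theta);             % claimed eigenvalues
   fprintf('max deviation = %.2e\n', max(abs(sort(eig(A)) - sort(lam))));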
4. Consider the pentadiagonal matrix A = pentadiag_n(−1, −1, 10, −1, −1). Assume n = 10 and A = M + N + D, with D = diag(8, ..., 8) ∈ R^{10×10}, M = pentadiag_10(−1, −1, 1, 0, 0) and N = M^T. To solve Ax = b, analyze the convergence of the following iterative methods

   (a)  (M + D)x^{(k+1)} = −Nx^{(k)} + b,

   (b)  Dx^{(k+1)} = −(M + N)x^{(k)} + b,

   (c)  (M + N)x^{(k+1)} = −Dx^{(k)} + b.

[Solution : denoting respectively by ρ_a, ρ_b and ρ_c the spectral radii of the iteration matrices of the three methods, we have ρ_a = 0.1450, ρ_b = 0.5 and ρ_c = 12.2870, which implies convergence for methods (a) and (b) and divergence for method (c).]
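These spectral radii are easy to verify numerically; a minimal sketch follows.

   % Exercise 4, numerical check of the spectral radii of the three splittings.
   n = 10;  e = ones(n,1);
   A = full(spdiags([-e -e 10*e -e -e], -2:2, n, n));   % pentadiag_10(-1,-1,10,-1,-1)
   D = 8*eye(n);
   M = full(spdiags([-e -e e], -2:0, n, n));            % pentadiag_10(-1,-1,1,0,0)
   N = M';                                              % so that A = M + N + D
   rho = @(B) max(abs(eig(B)));
   rho_a = rho(-(M+D)\N);                               % method (a)
   rho_b = rho(-D\(M+N));                               % method (b)
   rho_c = rho(-(M+N)\D);                               % method (c)
   fprintf('rho_a = %.4f, rho_b = %.4f, rho_c = %.4f\n', rho_a, rho_b, rho_c);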
5. For the solution of the linear system Ax = b with

       A = [ 1  2 ],    b = [ 3 ],
           [ 2  3 ]         [ 5 ]

consider the following iterative method

       x^{(k+1)} = B(θ)x^{(k)} + g(θ),    k ≥ 0,   with x^{(0)} given,

where θ is a real parameter and

       B(θ) = (1/4) [  2θ^2 + 2θ + 1   −2θ^2 + 2θ + 1 ],    g(θ) = [ 1/2 − θ ].
                    [ −2θ^2 + 2θ + 1    2θ^2 + 2θ + 1 ]            [ 1/2 − θ ]

Check that the method is consistent ∀ θ ∈ R. Then, determine the values of θ for which the method is convergent and compute the optimal value of θ (i.e., the value of the parameter for which the convergence rate is maximum).
[Solution : the method is convergent iff −1 < θ < 1/2 and the convergence rate is maximum if θ = (1 − √3)/2.]
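Using the matrix B(θ) written above, the spectral radius can be inspected numerically; the plot range in the sketch below is an arbitrary choice.

   % Exercise 5, numerical check: rho(B(theta)) < 1 for -1 < theta < 1/2,
   % with its minimum near theta = (1-sqrt(3))/2.
   rho = @(t) max(abs(eig([2*t^2+2*t+1, -2*t^2+2*t+1; -2*t^2+2*t+1, 2*t^2+2*t+1]/4)));
   theta = linspace(-1.2, 0.7, 400);
   r = arrayfun(rho, theta);
   plot(theta, r);  xlabel('\theta');  ylabel('\rho(B(\theta))');
   [rmin, i] = min(r);
   fprintf('minimizer ~ %.4f (exact: (1-sqrt(3))/2 = %.4f), rho_min ~ %.4f\n', ...
           theta(i), (1-sqrt(3))/2, rmin);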
6. To solve the following block linear system

       [ A1  B  ] [ x ]   [ b1 ]
       [ B   A2 ] [ y ] = [ b2 ],

consider the two methods

   (1)  A1 x^{(k+1)} + By^{(k)} = b1,    Bx^{(k)} + A2 y^{(k+1)} = b2;

   (2)  A1 x^{(k+1)} + By^{(k)} = b1,    Bx^{(k+1)} + A2 y^{(k+1)} = b2.

Find sufficient conditions in order for the two schemes to be convergent for any choice of the initial data x^{(0)}, y^{(0)}.
[Solution : method (1) is a decoupled system in the unknowns x^{(k+1)} and y^{(k+1)}. Assuming that A1 and A2 are invertible, method (1) converges if ρ(A1^{-1}B) < 1 and ρ(A2^{-1}B) < 1. In the case of method (2) we have a coupled system to solve at each step in the unknowns x^{(k+1)} and y^{(k+1)}. Solving formally the first equation with respect to x^{(k+1)} (which requires A1 to be invertible) and substituting into the second one, we see that method (2) is convergent if ρ(A2^{-1}BA1^{-1}B) < 1 (again A2 must be invertible).]
7. Consider the linear system Ax = b with

       A = [ 62  24   1   8  15 ]         [ 110 ]
           [ 23  50   7  14  16 ]         [ 110 ]
           [  4   6  58  20  22 ],   b =  [ 110 ].
           [ 10  12  19  66   3 ]         [ 110 ]
           [ 11  18  25   2  54 ]         [ 110 ]
(1) Check if the Jacobi and Gauss-Seidel methods can be applied to solve the system. (2) Check if the stationary Richardson method with optimal parameter can be applied with P = I and P = D, where D is the diagonal part of A, and compute the corresponding values of α_opt and ρ_opt.
[Solution : (1): matrix A is neither diagonally dominant nor symmetric positive definite, so that we must compute the spectral radii of the iteration matrices of the Jacobi and Gauss-Seidel methods to verify if they are convergent. It turns out that ρ_J = 0.9280 and ρ_GS = 0.3066, which implies convergence for both methods. (2): in the case P = I all the eigenvalues of A are positive, so that the Richardson method can be applied, yielding α_opt = 0.015 and ρ_opt = 0.6452. If P = D the method is still applicable and α_opt = 0.8510, ρ_opt = 0.6407.]
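These quantities can be checked numerically as in the sketch below, which uses the standard formulas α_opt = 2/(λ_min + λ_max) and ρ_opt = (λ_max − λ_min)/(λ_max + λ_min) for the optimal stationary Richardson method.

   % Exercise 7, numerical check: spectral radii of the Jacobi and Gauss-Seidel
   % iteration matrices and the optimal stationary Richardson parameters.
   A = [62 24  1  8 15;
        23 50  7 14 16;
         4  6 58 20 22;
        10 12 19 66  3;
        11 18 25  2 54];
   D = diag(diag(A));  E = -tril(A,-1);  F = -triu(A,1);   % splitting A = D - E - F
   rhoJ  = max(abs(eig(D\(E+F))));                         % Jacobi
   rhoGS = max(abs(eig((D-E)\F)));                         % Gauss-Seidel
   lamI = sort(real(eig(A)));       % eigenvalues of A (real and positive here)
   lamD = sort(real(eig(D\A)));     % eigenvalues of D^{-1}A
   alphaI = 2/(lamI(1)+lamI(end));  rhoI = (lamI(end)-lamI(1))/(lamI(end)+lamI(1));
   alphaD = 2/(lamD(1)+lamD(end));  rhoD = (lamD(end)-lamD(1))/(lamD(end)+lamD(1));
   fprintf('rhoJ = %.4f, rhoGS = %.4f\n', rhoJ, rhoGS);
   fprintf('P = I: alpha_opt = %.4f, rho_opt = %.4f;  P = D: alpha_opt = %.4f, rho_opt = %.4f\n', ...
           alphaI, rhoI, alphaD, rhoD);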
8. Consider the linear system Ax = b with

       A = [ 5   7   6   5 ]         [ 23 ]
           [ 7  10   8   7 ],   b =  [ 32 ].
           [ 6   8  10   9 ]         [ 33 ]
           [ 5   7   9  10 ]         [ 31 ]

Analyze the convergence properties of the Jacobi and Gauss-Seidel methods applied to the system above in their point and block forms (for a 2×2 block partition of A).
[Solution : both methods are convergent, the block form being the faster one. Moreover, ρ^2(B_J) = ρ(B_GS).]
9. To solve the linear system Ax = b, consider the iterative method (4.6), with P = D + ωF and N = −βF − E, ω and β being real numbers. Check that the method is consistent only if β = 1 − ω. In such a case, express the eigenvalues of the iteration matrix as a function of ω and determine for which values of ω the method is convergent, as well as the value of ω_opt, assuming that A = tridiag_10(−1, 2, −1).
[Hint : Take advantage of the result in Exercise 3.]
10. Let A ∈ R^{n×n} be such that A = (1 + ω)P − (N + ωP), with P^{-1}N nonsingular and with real eigenvalues 1 > λ_1 ≥ λ_2 ≥ ... ≥ λ_n. Find the values of ω ∈ R for which the following iterative method

       (1 + ω)Px^{(k+1)} = (N + ωP)x^{(k)} + b,    k ≥ 0,

converges ∀ x^{(0)} to the solution of the linear system (3.2). Determine also the value of ω for which the convergence rate is maximum.
[Solution : ω > −(1 + λ_n)/2; ω_opt = −(λ_1 + λ_n)/2.]
11. Consider the linear system

       Ax = b   with   A = [ 3  2 ],   b = [  2 ].
                           [ 2  6 ]        [ −8 ]

Write the associated functional Φ(x) and give a graphical interpretation of the solution of the linear system. Perform some iterations of the gradient method, after proving convergence for it.
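A possible sketch of such iterations, with an arbitrary starting point, is given below; here the associated functional is Φ(x) = (1/2) x^T A x − x^T b, whose unique minimizer is the solution of the system.

   % Exercise 11, a few steps of the gradient (steepest descent) method.
   A = [3 2; 2 6];  b = [2; -8];
   x = [0; 0];                         % arbitrary initial guess (assumption)
   for k = 1:10
       r = b - A*x;                    % residual, i.e. -grad Phi(x)
       alpha = (r'*r)/(r'*(A*r));      % exact line search along r
       x = x + alpha*r;
   end
   disp(x')                            % approaches the exact solution (2, -2)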
12. Check that in the gradient method x^{(k+2)} is not an optimal direction with respect to r^{(k)}.
13. Show that the coefficients α_k and β_k in the conjugate gradient method can be written in the alternative form (4.45).
[Solution: notice that Ap^{(k)} = (r^{(k)} − r^{(k+1)})/α_k and thus (Ap^{(k)})^T r^{(k+1)} = −‖r^{(k+1)}‖_2^2/α_k. Moreover, α_k (Ap^{(k)})^T p^{(k)} = ‖r^{(k)}‖_2^2.]

14. Prove the three-term recursive relation (4.46) for the residual in the conjugate gradient method.
[Solution: subtract from both sides of Ap^{(k)} = (r^{(k)} − r^{(k+1)})/α_k the quantity (β_{k−1}/α_k) r^{(k)} and recall that Ap^{(k)} = Ar^{(k)} − β_{k−1}Ap^{(k−1)}. Then, expressing the residual r^{(k)} as a function of r^{(k−1)}, one immediately gets the desired relation.]
5
Approximation of Eigenvalues and
Eigenvectors




In this chapter we deal with approximations of the eigenvalues and eigenvectors of a matrix A ∈ C^{n×n}. Two main classes of numerical methods exist for this purpose: partial methods, which compute the extremal eigenvalues of A (that is, those having maximum and minimum modulus), and global methods, which approximate the whole spectrum of A.
It is worth noting that methods which are introduced to solve the matrix
eigenvalue problem are not necessarily suitable for calculating the matrix
eigenvectors. For example, the power method (a partial method, see Section
5.3) provides an approximation to a particular eigenvalue/eigenvector pair.
The QR method (a global method, see Section 5.5) instead computes the
real Schur form of A, a canonical form that displays all the eigenvalues of
A but not its eigenvectors. These eigenvectors can be computed, starting
