
∑_{k=0}^{∞} A^k = (I − A)^{-1}.    (1.25)

As a result, if ρ(A) < 1 the matrix I − A is invertible and the following inequalities hold

1/(1 + ‖A‖) ≤ ‖(I − A)^{-1}‖ ≤ 1/(1 − ‖A‖),    (1.26)

where ‖·‖ is an induced matrix norm such that ‖A‖ < 1.
Proof. Let us prove (1.24). Let ρ(A) < 1; then there exists µ > 0 such that ρ(A) < 1 − µ and thus, thanks to Property 1.13, there exists a consistent matrix norm ‖·‖ such that ‖A‖ ≤ ρ(A) + µ < 1. From the fact that ‖A^k‖ ≤ ‖A‖^k < 1 and from the definition of convergence, it turns out that the sequence A^k tends to zero as k → ∞. Conversely, assume that lim_{k→∞} A^k = 0 and let λ denote an eigenvalue of A. Then A^k x = λ^k x, x (≠ 0) being an eigenvector associated with λ, so that

lim_{k→∞} λ^k = 0. As a consequence, |λ| < 1 and because this is true for a generic eigenvalue one gets ρ(A) < 1, as desired. Relation (1.25) can be obtained by noting first that the eigenvalues of I − A are given by 1 − λ(A), λ(A) being the generic eigenvalue of A. On the other hand, since ρ(A) < 1, we deduce that I − A is nonsingular. Then, from the identity

(I − A)(I + A + . . . + A^n) = I − A^{n+1}

and taking the limit for n tending to infinity, the thesis follows, since

(I − A) ∑_{k=0}^{∞} A^k = I.


Finally, thanks to Theorem 1.3, the equality ‖I‖ = 1 holds, so that

1 = ‖I‖ ≤ ‖I − A‖ ‖(I − A)^{-1}‖ ≤ (1 + ‖A‖) ‖(I − A)^{-1}‖,

giving the first inequality in (1.26). As for the second part, noting that I = I − A + A and multiplying both sides on the right by (I − A)^{-1}, one gets (I − A)^{-1} = I + A(I − A)^{-1}. Passing to the norms, we obtain

‖(I − A)^{-1}‖ ≤ 1 + ‖A‖ ‖(I − A)^{-1}‖,

and thus the second inequality follows, since ‖A‖ < 1. ∎

Remark 1.1 The assumption that there exists an induced matrix norm ‖·‖ such that ‖A‖ < 1 is justified by Property 1.13, recalling that A is convergent and, therefore, ρ(A) < 1.

Notice that (1.25) suggests an algorithm to approximate the inverse of a
matrix by a truncated series expansion.
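
As an illustration, the following minimal sketch (written by us in Python with NumPy; the names neumann_inverse and n_terms are our own, not the book's) approximates (I − A)^{-1} by summing the first terms of the series, assuming ρ(A) < 1 so that the series converges:

    import numpy as np

    def neumann_inverse(A, n_terms):
        """Approximate (I - A)^{-1} by the truncated series I + A + ... + A^n_terms.

        Meaningful only when rho(A) < 1, so that the geometric series converges.
        """
        n = A.shape[0]
        term = np.eye(n)          # current power A^k, starting from A^0 = I
        partial_sum = np.eye(n)
        for _ in range(n_terms):
            term = term @ A       # next power A^{k+1}
            partial_sum += term
        return partial_sum

    # Example with a matrix whose spectral radius is well below 1
    A = np.array([[0.1, 0.2],
                  [0.3, 0.1]])
    approx = neumann_inverse(A, 25)
    exact = np.linalg.inv(np.eye(2) - A)
    print(np.max(np.abs(approx - exact)))   # truncation error, close to zero

The closer ρ(A) is to zero, the fewer terms are needed, since the truncation error is bounded by ‖A‖^{n+1}/(1 − ‖A‖).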



1.12 Positive Definite, Diagonally Dominant and M-matrices
Definition 1.22 A matrix A ∈ C^{n×n} is positive definite in C^n if the number (Ax, x) is real and positive ∀x ∈ C^n, x ≠ 0. A matrix A ∈ R^{n×n} is positive definite in R^n if (Ax, x) > 0 ∀x ∈ R^n, x ≠ 0. If the strict inequality is substituted by the weak one (≥), the matrix is called positive semidefinite.

Example 1.8 Matrices that are positive definite in R^n are not necessarily symmetric. An instance is provided by matrices of the form

A = [  2       α ]
    [ −2 − α   2 ]    (1.27)

for α ≠ −1. Indeed, for any non-null vector x = (x_1, x_2)^T in R^2

(Ax, x) = 2(x_1^2 + x_2^2 − x_1 x_2) > 0.

Notice that A is not positive definite in C^2. Indeed, if we take a complex vector x we find out that the number (Ax, x) is not real-valued in general. •
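
A quick numerical check (a sketch of ours in Python/NumPy, not from the book; the value of α is arbitrary) confirms both claims: x^T Ax stays positive for real vectors, while z^H Az picks up an imaginary part for complex ones.

    import numpy as np

    alpha = 3.0                        # any alpha != -1 keeps A nonsymmetric
    A = np.array([[2.0, alpha],
                  [-2.0 - alpha, 2.0]])

    # Real vectors: (Ax, x) = 2(x1^2 + x2^2 - x1 x2) > 0 for every x != 0
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2)
    print(x @ A @ x)                   # positive

    # Complex vectors: (Az, z) = z^H A z is not real in general
    z = np.array([1.0 + 1.0j, 2.0 - 0.5j])
    print(np.vdot(z, A @ z))           # nonzero imaginary part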

Definition 1.23 Let A ∈ R^{n×n}. The matrices

A_S = (1/2)(A + A^T),    A_SS = (1/2)(A − A^T)

are respectively called the symmetric part and the skew-symmetric part of A. Obviously, A = A_S + A_SS. If A ∈ C^{n×n}, the definitions modify as follows: A_S = (1/2)(A + A^H) and A_SS = (1/2)(A − A^H).

The following property holds

Property 1.15 A real matrix A of order n is positive definite iff its symmetric part A_S is positive definite.

Indeed, it suffices to notice that, due to (1.12) and the definition of A_SS, x^T A_SS x = 0 ∀x ∈ R^n. For instance, the matrix in (1.27) has a positive definite symmetric part, since

A_S = (1/2)(A + A^T) = [  2  −1 ]
                       [ −1   2 ].

This holds more generally (for the proof see [Axe94]).

Property 1.16 Let A ∈ C^{n×n} (respectively, A ∈ R^{n×n}); if (Ax, x) is real-valued ∀x ∈ C^n, then A is hermitian (respectively, symmetric).

An immediate consequence of the above results is that matrices that are positive definite in C^n satisfy the following characterizing property.

Property 1.17 A square matrix A of order n is positive definite in C^n iff it is hermitian and has positive eigenvalues. Thus, a positive definite matrix is nonsingular.

In the case of positive definite real matrices in R^n, results more specific than those presented so far hold only if the matrix is also symmetric (this is the reason why many textbooks deal only with symmetric positive definite matrices). In particular

Property 1.18 Let A ∈ R^{n×n} be symmetric. Then, A is positive definite iff one of the following properties is satisfied:

1. (Ax, x) > 0 ∀x ≠ 0 with x ∈ R^n;

2. the eigenvalues of the principal submatrices of A are all positive;
3. the dominant principal minors of A are all positive (Sylvester criterion);
4. there exists a nonsingular matrix H such that A = H^T H.
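
Criterion 3, for instance, is straightforward to test numerically. The sketch below is ours (Python/NumPy, hypothetical function name); note that in practice an attempted Cholesky factorization would be cheaper and more stable than computing determinants:

    import numpy as np

    def is_spd_sylvester(A, tol=1e-12):
        """Sylvester criterion: a symmetric A is positive definite iff
        all its leading (dominant) principal minors are positive."""
        n = A.shape[0]
        return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

    A = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])   # symmetric tridiagonal matrix
    print(is_spd_sylvester(A))          # True: minors are 2, 3, 4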
All the diagonal entries of a positive definite matrix are positive. Indeed, if e_i is the i-th vector of the canonical basis of R^n, then e_i^T A e_i = a_ii > 0. Moreover, it can be shown that if A is symmetric positive definite, the entry with the largest modulus must be a diagonal entry (these last two properties are therefore necessary conditions for a matrix to be positive definite).
We finally notice that if A is symmetric positive definite and A^{1/2} is the only positive definite matrix that is a solution of the matrix equation X^2 = A, the norm

‖x‖_A = ‖A^{1/2} x‖_2 = (Ax, x)^{1/2}    (1.28)

defines a vector norm, called the energy norm of the vector x. Related to the energy norm is the energy scalar product given by (x, y)_A = (Ax, y).
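
In code, the energy norm can be evaluated either directly through (Ax, x)^{1/2} or via the matrix square root. A small sketch of ours (Python/NumPy, hypothetical names), with A^{1/2} built from the spectral decomposition:

    import numpy as np

    def energy_norm(A, x):
        """Energy norm ||x||_A = (Ax, x)^{1/2}, for symmetric positive definite A."""
        return np.sqrt(x @ A @ x)

    A = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])
    x = np.array([1.0, 1.0])

    # Equivalent computation through A^{1/2} = V diag(sqrt(w)) V^T
    w, V = np.linalg.eigh(A)
    A_half = V @ np.diag(np.sqrt(w)) @ V.T
    print(energy_norm(A, x), np.linalg.norm(A_half @ x))   # same value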

Definition 1.24 A matrix A ∈ R^{n×n} is called diagonally dominant by rows if

|a_ii| ≥ ∑_{j=1, j≠i}^{n} |a_ij|,    i = 1, . . . , n,

while it is called diagonally dominant by columns if

|a_ii| ≥ ∑_{j=1, j≠i}^{n} |a_ji|,    i = 1, . . . , n.

If the inequalities above hold in a strict sense, A is called strictly diagonally dominant (by rows or by columns, respectively).

A strictly diagonally dominant matrix that is symmetric with positive diagonal entries is also positive definite.
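
The definition translates into a one-line test per row or column; a possible sketch (ours, in Python/NumPy, hypothetical names) follows:

    import numpy as np

    def diagonally_dominant(A, by="rows", strict=False):
        """Definition 1.24: |a_ii| >= (or >) the sum of the off-diagonal
        moduli on each row (or column)."""
        M = A if by == "rows" else A.T
        diag = np.abs(np.diag(M))
        off = np.sum(np.abs(M), axis=1) - diag
        return bool(np.all(diag > off)) if strict else bool(np.all(diag >= off))

    A = np.array([[4.0, -1.0, -1.0],
                  [-1.0, 4.0, -1.0],
                  [-1.0, -1.0, 4.0]])
    print(diagonally_dominant(A, strict=True))   # True; A is also symmetric
                                                 # with positive diagonal, hence
                                                 # positive definite by the remark above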

Definition 1.25 A nonsingular matrix A ∈ R^{n×n} is an M-matrix if a_ij ≤ 0 for i ≠ j and if all the entries of its inverse are nonnegative.

M-matrices enjoy the so-called discrete maximum principle, that is, if A is an M-matrix and Ax ≤ 0, then x ≤ 0 (where the inequalities are meant componentwise). In this connection, the following result can be useful.

Property 1.19 (M-criterion) Let a matrix A satisfy a_ij ≤ 0 for i ≠ j. Then A is an M-matrix if and only if there exists a vector w > 0 such that Aw > 0.

Finally, M-matrices are related to strictly diagonally dominant matrices
by the following property.

Property 1.20 A matrix A ∈ R^{n×n} that is strictly diagonally dominant by rows and whose entries satisfy the relations a_ij ≤ 0 for i ≠ j and a_ii > 0, is an M-matrix.
For further results about M-matrices, see for instance [Axe94] and [Var62].
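
Both Definition 1.25 and the M-criterion can be verified numerically; a sketch of ours (Python/NumPy, hypothetical names) on a tridiagonal example:

    import numpy as np

    def is_m_matrix(A, tol=1e-12):
        """Definition 1.25: nonpositive off-diagonal entries and a
        (numerically) nonnegative inverse."""
        off_diag = A[~np.eye(A.shape[0], dtype=bool)]
        return bool(np.all(off_diag <= tol) and np.all(np.linalg.inv(A) >= -tol))

    A = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])
    print(is_m_matrix(A))    # True

    # M-criterion (Property 1.19): w = (3, 4, 3)^T > 0 gives Aw = (2, 2, 2)^T > 0
    w = np.array([3.0, 4.0, 3.0])
    print(A @ w)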


1.13 Exercises
1. Let W1 and W2 be two subspaces of R^n. Prove that if V = W1 ⊕ W2, then dim(V) = dim(W1) + dim(W2), while in general

dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).

[Hint: Consider a basis for W1 ∩ W2 and first extend it to W1, then to W2, verifying that the basis formed by the set of the obtained vectors is a basis for the sum space.]
2. Check that the following set of vectors

v_i = (x_1^{i−1}, x_2^{i−1}, . . . , x_n^{i−1}),    i = 1, 2, . . . , n,

forms a basis for R^n, x_1, . . . , x_n being a set of n distinct points of R.
3. Exhibit an example showing that the product of two symmetric matrices may be nonsymmetric.
4. Let B be a skew-symmetric matrix, namely, B^T = −B. Let A = (I + B)(I − B)^{-1} and show that A^{-1} = A^T.
5. A matrix A ∈ C^{n×n} is called skew-hermitian if A^H = −A. Show that the diagonal entries of A must be purely imaginary numbers.
6. Let A, B and A + B be invertible matrices of order n. Show that also A^{-1} + B^{-1} is nonsingular and that

(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B = B(A + B)^{-1}A.

[Solution: since A^{-1} + B^{-1} = B^{-1}(A + B)A^{-1}, one gets (A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B. The second equality is proved similarly by factoring out B and A, respectively from left and right.]
7. Given the nonsymmetric real matrix

A = [  0   1   1 ]
    [  1   0  −1 ]
    [ −1  −1   0 ]

check that it is similar to the diagonal matrix D = diag(1, 0, −1) and find its eigenvectors. Is this matrix normal?
[Solution: the matrix is not normal.]

8. Let A be a square matrix of order n. Check that if P(A) = ∑_{k=0}^{n} c_k A^k and λ(A) are the eigenvalues of A, then the eigenvalues of P(A) are given by λ(P(A)) = P(λ(A)). In particular, prove that ρ(A^2) = [ρ(A)]^2.
9. Prove that a matrix of order n having n distinct eigenvalues cannot be
defective. Moreover, prove that a normal matrix cannot be defective.
10. Commutativity of matrix product. Show that if A and B are square matri-
ces that share the same set of eigenvectors, then AB = BA. Prove, by a
counterexample, that the converse is false.
11. Let A be a normal matrix whose eigenvalues are λ_1, . . . , λ_n. Show that the singular values of A are |λ_1|, . . . , |λ_n|.
12. Let A ∈ C^{m×n} with rank(A) = n. Show that A^† = (A^T A)^{-1} A^T enjoys the following properties

(1) A^† A = I_n;  (2) A^† A A^† = A^†, A A^† A = A;  (3) if m = n, A^† = A^{-1}.

13. Show that the Moore-Penrose pseudo-inverse matrix A^† is the only matrix that minimizes the functional

min_{X∈C^{n×m}} ‖AX − I_m‖_F,

where ‖·‖_F is the Frobenius norm.

14. Prove Property 1.10.
[Solution: For any x, x̄ ∈ V show that |‖x‖ − ‖x̄‖| ≤ ‖x − x̄‖. Assuming that dim(V) = n and expanding the vector w = x − x̄ on a basis of V, show that ‖w‖ ≤ C‖w‖_∞, from which the thesis follows by imposing in the first obtained inequality that ‖w‖_∞ ≤ µ.]
15. Prove Property 1.11 in the case A ∈ R^{n×m} with m linearly independent columns.
[Hint: First show that ‖·‖_A fulfills all the properties characterizing a norm: positiveness (A has linearly independent columns, thus if x ≠ 0, then Ax ≠ 0, which proves the thesis), homogeneity and triangle inequality.]
16. Show that for a rectangular matrix A ∈ R^{m×n}

‖A‖_F^2 = σ_1^2 + . . . + σ_p^2,

where p is the minimum between m and n, σ_i are the singular values of A and ‖·‖_F is the Frobenius norm.
17. Assuming p, q = 1, 2, ∞, F, recover the following table of equivalence constants c_pq such that ∀A ∈ R^{n×n}, ‖A‖_p ≤ c_pq ‖A‖_q.

    c_pq    q=1    q=2    q=∞    q=F
    p=1      1     √n      n     √n
    p=2     √n      1     √n      1
    p=∞      n     √n      1     √n
    p=F     √n     √n     √n      1

18. A matrix norm for which ‖|A|‖ = ‖A‖ is called an absolute norm, having denoted by |A| the matrix of the absolute values of the entries of A. Prove that ‖·‖_1, ‖·‖_∞ and ‖·‖_F are absolute norms, while ‖·‖_2 is not. Show that for this latter

(1/√n) ‖A‖_2 ≤ ‖|A|‖_2 ≤ √n ‖A‖_2.
2 Principles of Numerical Mathematics




The basic concepts of consistency, stability and convergence of a numerical method will be introduced in a very general context in the first part of the chapter: they provide the common framework for the analysis of any method considered henceforth. The second part of the chapter deals with the computer finite representation of real numbers and the analysis of error propagation in machine operations.


2.1 Well-posedness and Condition Number of a Problem

Consider the following problem: find x such that

F(x, d) = 0    (2.1)

where d is the set of data which the solution depends on and F is the functional relation between x and d. According to the kind of problem that is represented in (2.1), the variables x and d may be real numbers, vectors or functions. Typically, (2.1) is called a direct problem if F and d are given and x is the unknown, an inverse problem if F and x are known and d is the unknown, and an identification problem when x and d are given while the functional relation F is the unknown (these latter problems will not be covered in this volume).
Problem (2.1) is well posed if it admits a unique solution x which depends with continuity on the data. We shall use the terms well posed and stable interchangeably and we shall deal henceforth only with well-posed problems.
A problem which does not enjoy the property above is called ill posed or unstable, and before undertaking its numerical solution it has to be regularized, that is, suitably transformed into a well-posed problem (see, for instance, [Mor84]). Indeed, one cannot expect the numerical method to cure the pathologies of an intrinsically ill-posed problem.

Example 2.1 A simple instance of an ill-posed problem is finding the number of real roots of a polynomial. For example, the polynomial p(x) = x^4 − x^2(2a − 1) + a(a − 1) exhibits a discontinuous variation of the number of real roots as a continuously varies in the real field. We have, indeed, 4 real roots if a ≥ 1, 2 if a ∈ [0, 1), while no real roots exist if a < 0. •
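
This discontinuity is easy to observe numerically. The sketch below is ours (Python/NumPy; real_root_count is a hypothetical helper): since p factors so that x^2 equals either a or a − 1, the count jumps at a = 0 and a = 1.

    import numpy as np

    def real_root_count(a, tol=1e-9):
        """Count the real roots of p(x) = x^4 - (2a - 1)x^2 + a(a - 1)."""
        coeffs = [1.0, 0.0, -(2.0 * a - 1.0), 0.0, a * (a - 1.0)]
        roots = np.roots(coeffs)
        return int(np.sum(np.abs(roots.imag) < tol))

    for a in (-0.5, 0.5, 1.5):
        print(a, real_root_count(a))   # prints 0, 2 and 4 roots respectively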

Continuous dependence on the data means that small perturbations on the data d yield “small” changes in the solution x. Precisely, denoting by δd an admissible perturbation on the data and by δx the consequent change in the solution, in such a way that

F(x + δx, d + δd) = 0,    (2.2)

then

∀η > 0, ∃K(η, d) : ‖δd‖ < η ⇒ ‖δx‖ ≤ K(η, d) ‖δd‖.    (2.3)

The norms used for the data and for the solution may not coincide, whenever d and x represent variables of different kinds.
With the aim of making this analysis more quantitative, we introduce the following definition.

Definition 2.1 For problem (2.1) we define the relative condition number to be

K(d) = sup_{δd∈D} (‖δx‖/‖x‖) / (‖δd‖/‖d‖),    (2.4)

where D is a neighborhood of the origin and denotes the set of admissible perturbations on the data for which the perturbed problem (2.2) still makes sense. Whenever d = 0 or x = 0, it is necessary to introduce the absolute condition number, given by

K_abs(d) = sup_{δd∈D} ‖δx‖/‖δd‖.    (2.5)




Problem (2.1) is called ill-conditioned if K(d) is “big” for any admissible
datum d (the precise meaning of “small” and “big” is going to change
depending on the considered problem).
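
When no closed-form expression for K(d) is available, a rough estimate can be obtained by sampling random perturbations and evaluating the ratio in (2.4) directly. The following crude Monte-Carlo sketch is ours (Python/NumPy, all names hypothetical), shown on the classically ill-conditioned subtraction of nearly equal numbers:

    import numpy as np

    def relative_condition(G, d, n_samples=5000, eps=1e-6, seed=0):
        """Estimate K(d) in (2.4) by sampling small random perturbations
        delta_d and taking the largest observed ratio."""
        rng = np.random.default_rng(seed)
        x = G(d)
        worst = 0.0
        for _ in range(n_samples):
            dd = eps * rng.standard_normal(d.shape)
            dx = G(d + dd) - x
            ratio = (np.linalg.norm(dx) / np.linalg.norm(x)) \
                    / (np.linalg.norm(dd) / np.linalg.norm(d))
            worst = max(worst, ratio)
        return worst

    # x = d1 - d2 with d1 close to d2: small data errors, large relative change
    G = lambda d: np.array([d[0] - d[1]])
    print(relative_condition(G, np.array([1.0, 0.999])))   # of order 1e3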

The property of a problem of being well-conditioned is independent of
the numerical method that is being used to solve it. In fact, it is possible
to generate stable as well as unstable numerical schemes for solving well-
conditioned problems. The concept of stability for an algorithm or for a
numerical method is analogous to that used for problem (2.1) and will be
made precise in the next section.

Remark 2.1 (Ill-posed problems) Even in the case in which the condition number does not exist (formally, it is infinite), it is not necessarily true that the problem is ill-posed. In fact, there exist well-posed problems (for instance, the search of multiple roots of algebraic equations, see Example 2.2) for which the condition number is infinite, but such that they can be reformulated in equivalent problems (that is, having the same solutions) with a finite condition number.

If problem (2.1) admits a unique solution, then there necessarily exists a mapping G, that we call resolvent, between the sets of the data and of the solutions, such that

x = G(d), that is F(G(d), d) = 0.    (2.6)

According to this definition, (2.2) yields x + δx = G(d + δd). Assuming
