

which is the desired factorization of A.
This identity can be generalized as follows. Setting

$$
\mathbf{m}_k = (0, \ldots, 0, m_{k+1,k}, \ldots, m_{n,k})^T \in \mathbb{R}^n
$$

and defining

$$
\mathrm{M}_k =
\begin{bmatrix}
1      & \ldots & 0          & 0      & \ldots & 0 \\
\vdots & \ddots & \vdots     & \vdots &        & \vdots \\
0      & \ldots & 1          & 0      & \ldots & 0 \\
0      & \ldots & -m_{k+1,k} & 1      &        & 0 \\
\vdots &        & \vdots     & \vdots & \ddots & \vdots \\
0      & \ldots & -m_{n,k}   & 0      & \ldots & 1
\end{bmatrix}
= \mathrm{I}_n - \mathbf{m}_k \mathbf{e}_k^T
$$

as the k-th Gaussian transformation matrix, one finds out that

$$
(\mathrm{M}_k)_{ip} = \delta_{ip} - (\mathbf{m}_k \mathbf{e}_k^T)_{ip} = \delta_{ip} - m_{ik}\,\delta_{kp}, \qquad i, p = 1, \ldots, n.
$$

On the other hand, from (3.31) we have that
$$
a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)} = \sum_{p=1}^{n} \left(\delta_{ip} - m_{ik}\,\delta_{kp}\right) a_{pj}^{(k)}, \qquad i, j = k+1, \ldots, n,
$$

or, equivalently,
$$
\mathrm{A}^{(k+1)} = \mathrm{M}_k \mathrm{A}^{(k)}. \qquad (3.35)
$$
As a consequence, at the end of the elimination process the matrices $\mathrm{M}_k$, with $k = 1, \ldots, n-1$, and the matrix $\mathrm{U}$ have been generated such that

$$
\mathrm{M}_{n-1}\mathrm{M}_{n-2}\cdots\mathrm{M}_1\mathrm{A} = \mathrm{U}.
$$
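To make (3.35) concrete, the following MATLAB sketch (the data and variable names are ours, purely illustrative) builds $\mathrm{M}_k$ explicitly and checks that it annihilates the entries of column $k$ below the diagonal:

% Build the k-th Gaussian transformation matrix Mk = I - mk*ek'
% for an illustrative 4x4 matrix whose first column is already eliminated.
n = 4; k = 2;
Ak = [4 3 2 1; 0 3 1 2; 0 1 5 3; 0 2 1 4];
mk = zeros(n,1);
mk(k+1:n) = Ak(k+1:n,k)/Ak(k,k);   % multipliers m(i,k); pivot assumed nonzero
ek = zeros(n,1); ek(k) = 1;
Mk = eye(n) - mk*ek';              % Mk = I - mk*ek', cf. the definition above
Ak1 = Mk*Ak;                       % Ak1(k+1:n,k) is now zero, as in (3.35)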
The matrices $\mathrm{M}_k$ are unit lower triangular, with inverse given by

$$
\mathrm{M}_k^{-1} = 2\mathrm{I}_n - \mathrm{M}_k = \mathrm{I}_n + \mathbf{m}_k \mathbf{e}_k^T, \qquad (3.36)
$$

where we have used the fact that $(\mathbf{m}_i \mathbf{e}_i^T)(\mathbf{m}_j \mathbf{e}_j^T)$ is equal to the null matrix whenever $i \leq j$, since in that case $\mathbf{e}_i^T \mathbf{m}_j = 0$. As a consequence

$$
\begin{aligned}
\mathrm{A} &= \mathrm{M}_1^{-1}\mathrm{M}_2^{-1}\cdots\mathrm{M}_{n-1}^{-1}\mathrm{U} \\
&= (\mathrm{I}_n + \mathbf{m}_1\mathbf{e}_1^T)(\mathrm{I}_n + \mathbf{m}_2\mathbf{e}_2^T)\cdots(\mathrm{I}_n + \mathbf{m}_{n-1}\mathbf{e}_{n-1}^T)\,\mathrm{U} \\
&= \left(\mathrm{I}_n + \sum_{i=1}^{n-1}\mathbf{m}_i\mathbf{e}_i^T\right)\mathrm{U} \\
&=
\begin{bmatrix}
1      & 0      & \ldots & \ldots    & 0 \\
m_{21} & 1      & \ddots &           & \vdots \\
\vdots & m_{32} & \ddots & \ddots    & \vdots \\
\vdots & \vdots & \ddots & \ddots    & 0 \\
m_{n1} & m_{n2} & \ldots & m_{n,n-1} & 1
\end{bmatrix}
\mathrm{U}. \qquad (3.37)
\end{aligned}
$$

Defining $\mathrm{L} = (\mathrm{M}_{n-1}\mathrm{M}_{n-2}\cdots\mathrm{M}_1)^{-1} = \mathrm{M}_1^{-1}\cdots\mathrm{M}_{n-1}^{-1}$, it follows that

$$
\mathrm{A} = \mathrm{LU}.
$$

We notice that, due to (3.37), the subdiagonal entries of L are the multipliers $m_{ik}$ produced by GEM, while the diagonal entries are equal to one.

Once the matrices L and U have been computed, solving the linear system consists only of solving successively the two triangular systems

$$
\mathrm{L}\mathbf{y} = \mathbf{b},
$$
$$
\mathrm{U}\mathbf{x} = \mathbf{y}.
$$
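A minimal MATLAB sketch of the two triangular sweeps (assuming L, U, the right side b and the order n are available in the workspace; the loop-based form is ours, for illustration):

% Forward substitution on Ly = b (L has unit diagonal).
y = b;
for i = 2:n
  y(i) = b(i) - L(i,1:i-1)*y(1:i-1);
end
% Back substitution on Ux = y (diagonal entries of U assumed nonzero).
x = zeros(n,1);
for i = n:-1:1
  x(i) = (y(i) - U(i,i+1:n)*x(i+1:n))/U(i,i);
end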

The computational cost of the factorization process is obviously the same
as that required by GEM.

The following result establishes a link between the leading principal minors of a matrix and its LU factorization induced by GEM.

Theorem 3.4 Let $\mathrm{A} \in \mathbb{R}^{n \times n}$. The LU factorization of A with $l_{ii} = 1$ for $i = 1, \ldots, n$ exists and is unique iff the principal submatrices $\mathrm{A}_i$ of A of order $i = 1, \ldots, n-1$ are nonsingular.
Proof. The existence of the LU factorization can be proved following the steps
of the GEM. Here we prefer to pursue an alternative approach, which allows for
proving at the same time both existence and uniqueness and that will be used
again in later sections.
Let us assume that the leading minors $\mathrm{A}_i$ of A are nonsingular for $i = 1, \ldots, n-1$ and prove, by induction on $i$, that under this hypothesis the LU factorization of $\mathrm{A}(= \mathrm{A}_n)$ with $l_{ii} = 1$ for $i = 1, \ldots, n$, exists and is unique.
The property is obviously true if $i = 1$. Assume therefore that there exists a unique LU factorization of $\mathrm{A}_{i-1}$ of the form $\mathrm{A}_{i-1} = \mathrm{L}^{(i-1)}\mathrm{U}^{(i-1)}$ with $l_{kk}^{(i-1)} = 1$ for $k = 1, \ldots, i-1$, and show that there exists a unique factorization also for $\mathrm{A}_i$. We partition $\mathrm{A}_i$ by block matrices as
$$
\mathrm{A}_i =
\begin{bmatrix}
\mathrm{A}_{i-1} & \mathbf{c} \\
\mathbf{d}^T     & a_{ii}
\end{bmatrix}
$$

and look for a factorization of Ai of the form
$$
\mathrm{A}_i = \mathrm{L}^{(i)}\mathrm{U}^{(i)} =
\begin{bmatrix}
\mathrm{L}^{(i-1)} & \mathbf{0} \\
\mathbf{l}^T       & 1
\end{bmatrix}
\begin{bmatrix}
\mathrm{U}^{(i-1)} & \mathbf{u} \\
\mathbf{0}^T       & u_{ii}
\end{bmatrix}, \qquad (3.38)
$$

having also partitioned by blocks the factors $\mathrm{L}^{(i)}$ and $\mathrm{U}^{(i)}$. Computing the product of these two factors and equating by blocks the elements of $\mathrm{A}_i$, it turns out that the vectors $\mathbf{l}$ and $\mathbf{u}$ are the solutions to the linear systems $\mathrm{L}^{(i-1)}\mathbf{u} = \mathbf{c}$, $\mathbf{l}^T\mathrm{U}^{(i-1)} = \mathbf{d}^T$.

On the other hand, since $0 \neq \det(\mathrm{A}_{i-1}) = \det(\mathrm{L}^{(i-1)})\det(\mathrm{U}^{(i-1)})$, the matrices $\mathrm{L}^{(i-1)}$ and $\mathrm{U}^{(i-1)}$ are nonsingular and, as a result, $\mathbf{u}$ and $\mathbf{l}$ exist and are unique. Thus, there exists a unique factorization of $\mathrm{A}_i$, where $u_{ii}$ is the unique solution of the equation $u_{ii} = a_{ii} - \mathbf{l}^T\mathbf{u}$. This completes the induction step of the proof.
It now remains to prove that, if the factorization at hand exists and is unique, then the first $n-1$ leading minors of A must be nonsingular. We shall distinguish the case where A is singular from the case where it is nonsingular.
Let us start from the latter and assume that the LU factorization of A with $l_{ii} = 1$ for $i = 1, \ldots, n$ exists and is unique. Then, due to (3.38), we have $\mathrm{A}_i = \mathrm{L}^{(i)}\mathrm{U}^{(i)}$ for $i = 1, \ldots, n$. Thus

$$
\det(\mathrm{A}_i) = \det(\mathrm{L}^{(i)})\det(\mathrm{U}^{(i)}) = \det(\mathrm{U}^{(i)}) = u_{11}u_{22}\cdots u_{ii}, \qquad (3.39)
$$

from which, taking $i = n$ and A nonsingular, we obtain $u_{11}u_{22}\cdots u_{nn} \neq 0$, and thus, necessarily, $\det(\mathrm{A}_i) = u_{11}u_{22}\cdots u_{ii} \neq 0$ for $i = 1, \ldots, n-1$.
Now let A be a singular matrix and assume that (at least) one diagonal entry of U is equal to zero. Denote by $u_{kk}$ the null entry of U with minimum index $k$. Thanks to (3.38), the factorization can be computed without trouble until the $(k+1)$-th step. From that step on, since the matrix $\mathrm{U}^{(k)}$ is singular, existence and uniqueness of the vector $\mathbf{l}^T$ are certainly lost, and, thus, the same holds for the uniqueness of the factorization. In order for this not to occur before the process has factorized the whole matrix A, the $u_{kk}$ entries must all be nonzero up to the index $k = n-1$ included, and thus, due to (3.39), all the leading minors $\mathrm{A}_k$ must be nonsingular for $k = 1, \ldots, n-1$. □

From the above theorem we conclude that, if an $\mathrm{A}_i$, with $i = 1, \ldots, n-1$, is singular, then the factorization may either not exist or not be unique.

Example 3.3 Consider the matrices

$$
\mathrm{B} = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}, \qquad
\mathrm{C} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad
\mathrm{D} = \begin{bmatrix} 0 & 1 \\ 0 & 2 \end{bmatrix}.
$$

According to Theorem 3.4, the singular matrix B, having nonsingular leading minor $\mathrm{B}_1 = 1$, admits a unique LU factorization. The remaining two examples show that, if the assumptions of the theorem are not fulfilled, the factorization may fail to exist or to be unique.
Actually, the nonsingular matrix C, with $\mathrm{C}_1$ singular, does not admit any factorization, while the (singular) matrix D, with $\mathrm{D}_1$ singular, admits an infinite number of factorizations of the form $\mathrm{D} = \mathrm{L}_\beta \mathrm{U}_\beta$, with

$$
\mathrm{L}_\beta = \begin{bmatrix} 1 & 0 \\ \beta & 1 \end{bmatrix}, \qquad
\mathrm{U}_\beta = \begin{bmatrix} 0 & 1 \\ 0 & 2-\beta \end{bmatrix}, \qquad \forall \beta \in \mathbb{R}.
$$
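The non-uniqueness for D is immediate to verify numerically; a small MATLAB check (ours) reads:

% Verify that D = Lb*Ub for several values of beta.
D = [0 1; 0 2];
for beta = [-1 0 3]
  Lb = [1 0; beta 1];
  Ub = [0 1; 0 2-beta];
  assert(isequal(Lb*Ub, D))   % holds for every beta
end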



In the case where the LU factorization is unique, we point out that, because $\det(\mathrm{A}) = \det(\mathrm{LU}) = \det(\mathrm{L})\det(\mathrm{U}) = \det(\mathrm{U})$, the determinant of A is given by

$$
\det(\mathrm{A}) = u_{11} \cdots u_{nn}.
$$

Let us now recall the following property (referring for its proof to [GL89]
or [Hig96]).

Property 3.2 If A is a matrix diagonally dominant by rows or by columns, then the LU factorization of A exists. In particular, if A is diagonally dominant by columns, then $|l_{ij}| \leq 1$ $\forall i, j$.
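As a quick numerical illustration (the test matrix is ours), one can factorize a matrix diagonally dominant by columns with Program 4 of Section 3.3.3 below and inspect the multipliers:

% A is diagonally dominant by columns: |a(k,k)| exceeds the sum of the
% absolute values of the other entries in column k.
A = [5 1 2; -1 6 1; 2 -2 7];
B = lu_kji(A);                 % Program 4 below; no pivoting is needed here
L = tril(B,-1) + eye(3);       % unit diagonal of L is implicit
max(abs(L(:)))                 % <= 1, in agreement with Property 3.2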

In the proof of Theorem 3.4 we exploited the fact that the diagonal entries of L are equal to 1. In a similar manner, we could have fixed to 1 the diagonal entries of the upper triangular matrix U, obtaining a variant of GEM that will be considered in Section 3.3.4.
The freedom in setting either the diagonal entries of L or those of U implies that several LU factorizations exist, which can be obtained one from the other by multiplication with a suitable diagonal matrix (see Section 3.4.1).


3.3.2 The Effect of Rounding Errors
If rounding errors are taken into account, the factorization process induced
by GEM yields two matrices, L and U, such that LU = A + δA, δA being a
perturbation matrix. The size of such a perturbation can be estimated by

$$
|\delta \mathrm{A}| \leq \frac{nu}{1 - nu}\,|\mathrm{L}|\,|\mathrm{U}|, \qquad (3.40)
$$

where $u$ is the roundoff unit (for the proof of this result we refer to [Hig89]).
From (3.40) it is seen that the presence of small pivotal entries can make the right side of the inequality virtually unbounded, with a consequent loss of control on the size of the perturbation matrix $\delta\mathrm{A}$. The interest is thus in finding estimates like (3.40) of the form

$$
|\delta \mathrm{A}| \leq g(u)\,|\mathrm{A}|,
$$

where $g(u)$ is a suitable function of $u$. For instance, assuming that L and U have nonnegative entries, then since $|\mathrm{L}|\,|\mathrm{U}| = |\mathrm{LU}|$ one gets

$$
|\mathrm{L}|\,|\mathrm{U}| = |\mathrm{LU}| = |\mathrm{A} + \delta\mathrm{A}| \leq |\mathrm{A}| + |\delta\mathrm{A}| \leq |\mathrm{A}| + \frac{nu}{1 - nu}\,|\mathrm{L}|\,|\mathrm{U}|, \qquad (3.41)
$$

from which the desired bound is achieved by taking $g(u) = nu/(1 - 2nu)$.
The technique of pivoting, examined in Section 3.5, keeps the size of the
pivotal entries under control and makes it possible to obtain estimates like
(3.41) for any matrix.

3.3.3 Implementation of LU Factorization
Since L is a lower triangular matrix with diagonal entries equal to 1 and U
is upper triangular, it is possible (and convenient) to store the LU factor-
ization directly in the same memory area that is occupied by the matrix A.
More precisely, U is stored in the upper triangular part of A (including the
diagonal), whilst L occupies the lower triangular portion of A (the diagonal
entries of L are not stored since they are implicitly assumed to be 1).
A coding of the algorithm is reported in Program 4. The output matrix
A contains the overwritten LU factorization.
Program 4 - lu_kji : LU factorization of matrix A, kji version

function [A] = lu_kji(A)
% On exit, the upper triangle of A (diagonal included) contains U, while
% the strict lower triangle contains the multipliers of L (unit diagonal).
[n,n]=size(A);
for k=1:n-1
A(k+1:n,k)=A(k+1:n,k)/A(k,k);    % multipliers m(i,k) of column k
for j=k+1:n, for i=k+1:n
A(i,j)=A(i,j)-A(i,k)*A(k,j);     % SAXPY update of the trailing block
end, end
end
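A usage sketch (with an arbitrary test matrix of ours): the factors are recovered from the overwritten output and the residual is checked, in the spirit of (3.40):

A = [4 -1 0; -1 4 -1; 0 -1 4];   % arbitrary test matrix
B = lu_kji(A);
U = triu(B);                      % upper triangle, diagonal included
L = tril(B,-1) + eye(size(B));    % add back the implicit unit diagonal
norm(A - L*U)                     % residual of the order of the roundoff unit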

This implementation of the factorization algorithm is commonly referred to as the kji version, due to the order in which the cycles are executed. In a more appropriate notation, it is called the SAXPY-kji version, due to the fact that the basic operation of the algorithm, which consists of multiplying a scalar A by a vector X, summing another vector Y and then storing the result, is usually called SAXPY (i.e., Scalar A X Plus Y).
The factorization can of course be executed by following a different order. In general, the forms in which the cycle on index i precedes the cycle on j are called row-oriented, whilst the others are called column-oriented. As usual, this terminology refers to the fact that the matrix is accessed by rows or by columns.
An example of LU factorization, jki version and column-oriented, is given in Program 5. This version is commonly called GAXPY-jki, since its basic operation (a matrix-vector product) is called GAXPY, which stands for Generalized sAXPY (see [DGK84] for further details). In the GAXPY operation the scalar A of the SAXPY operation is replaced by a matrix.
Program 5 - lu_jki : LU factorization of matrix A, jki version

function [A] = lu_jki(A)
% Column-oriented (GAXPY) variant: column j is first updated with the
% contributions of the previous columns, then the multipliers are formed.
[n,n]=size(A);
for j=1:n
for k=1:j-1, for i=k+1:n
A(i,j)=A(i,j)-A(i,k)*A(k,j);              % update column j
end, end
for i=j+1:n, A(i,j)=A(i,j)/A(j,j); end    % multipliers m(i,j)
end

3.3.4 Compact Forms of Factorization
Remarkable variants of the LU factorization are the Crout factorization and the Doolittle factorization, also known as compact forms of the Gauss elimination method. This name is due to the fact that these approaches require fewer intermediate results than the standard GEM to generate the factorization of A.
Computing the LU factorization of A is formally equivalent to solving the following nonlinear system of $n^2$ equations

$$
a_{ij} = \sum_{r=1}^{\min(i,j)} l_{ir} u_{rj}, \qquad (3.42)
$$

the unknowns being the $n^2 + n$ coefficients of the triangular matrices L and U. If we arbitrarily set $n$ coefficients to 1, for example the diagonal entries of L or of U, we end up with the Doolittle and Crout methods, respectively, which provide an efficient way to solve system (3.42).
In fact, supposing that the first $k-1$ columns of L and U are available and setting $l_{kk} = 1$ (Doolittle method), the following equations are obtained from (3.42)

$$
a_{kj} = \sum_{r=1}^{k-1} l_{kr} u_{rj} + u_{kj}, \qquad j = k, \ldots, n,
$$
$$
a_{ik} = \sum_{r=1}^{k-1} l_{ir} u_{rk} + l_{ik} u_{kk}, \qquad i = k+1, \ldots, n.
$$

Note that these equations can be solved sequentially with respect to the unknowns $u_{kj}$ and $l_{ik}$. From the Doolittle compact method we thus obtain first the $k$-th row of U and then the $k$-th column of L, as follows: for $k = 1, \ldots, n$

$$
u_{kj} = a_{kj} - \sum_{r=1}^{k-1} l_{kr} u_{rj}, \qquad j = k, \ldots, n,
$$
$$
l_{ik} = \frac{1}{u_{kk}} \left( a_{ik} - \sum_{r=1}^{k-1} l_{ir} u_{rk} \right), \qquad i = k+1, \ldots, n. \qquad (3.43)
$$
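A direct transcription of (3.43) into MATLAB, returning explicit factors, may be sketched as follows (under the assumptions of Theorem 3.4; the function name is ours, and Program 6 below gives the in-place version):

function [L,U] = doolittle(A)
% Compact Doolittle factorization: row k of U, then column k of L,
% exactly as in (3.43). Empty ranges contribute zero when k = 1.
n = size(A,1);
L = eye(n); U = zeros(n);
for k = 1:n
  for j = k:n
    U(k,j) = A(k,j) - L(k,1:k-1)*U(1:k-1,j);
  end
  for i = k+1:n
    L(i,k) = (A(i,k) - L(i,1:k-1)*U(1:k-1,k))/U(k,k);
  end
end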

The Crout factorization is generated similarly, computing first the $k$-th column of L and then the $k$-th row of U: for $k = 1, \ldots, n$

$$
l_{ik} = a_{ik} - \sum_{r=1}^{k-1} l_{ir} u_{rk}, \qquad i = k, \ldots, n,
$$
$$
u_{kj} = \frac{1}{l_{kk}} \left( a_{kj} - \sum_{r=1}^{k-1} l_{kr} u_{rj} \right), \qquad j = k+1, \ldots, n,
$$

where we set $u_{kk} = 1$. Recalling the notations introduced above, the Doolittle factorization is nothing but the ijk version of GEM.
We provide in Program 6 the implementation of the Doolittle scheme. Notice that now the main computation is a dot product, so this scheme is also known as the DOT-ijk version of GEM.
Program 6 - lu_ijk : LU factorization of matrix A, ijk version

function [A] = lu_ijk(A)
% Row-oriented (DOT) version: for each row i, the multipliers (strict
% lower triangle) and the entries of U are computed via dot products.
[n,n]=size(A);
for i=1:n
for j=2:i
A(i,j-1)=A(i,j-1)/A(j-1,j-1);                    % multiplier l(i,j-1)
for k=1:j-1, A(i,j)=A(i,j)-A(i,k)*A(k,j); end    % dot-product update
end
for j=i+1:n
for k=1:i-1, A(i,j)=A(i,j)-A(i,k)*A(k,j); end    % row i of U
end
end



3.4 Other Types of Factorization
We now address factorizations suitable for symmetric and rectangular matrices.

3.4.1 The LDM$^T$ Factorization
It is possible to devise other types of factorizations of A by removing the hypothesis that the diagonal elements of L are equal to one. Specifically, we will address some variants where the factorization of A is of the form

$$
\mathrm{A} = \mathrm{LDM}^T,
$$

where L, $\mathrm{M}^T$ and D are lower triangular, upper triangular and diagonal matrices, respectively.
After the construction of this factorization, the resolution of the system can be carried out by solving first the lower triangular system $\mathrm{L}\mathbf{y} = \mathbf{b}$, then the diagonal one $\mathrm{D}\mathbf{z} = \mathbf{y}$, and finally the upper triangular system $\mathrm{M}^T\mathbf{x} = \mathbf{z}$, with a cost of $n^2 + n$ flops. In the symmetric case, we obtain $\mathrm{M} = \mathrm{L}$ and the LDL$^T$ factorization can be computed with half the cost (see Section 3.4.2).
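In MATLAB the three sweeps can be sketched as follows (the factors L, D, MT and the right side b are assumed available in the workspace; the backslash operator exploits triangularity):

y = L \ b;          % unit lower triangular sweep
z = y ./ diag(D);   % diagonal sweep, n flops
x = MT \ z;         % upper triangular sweep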
The LDM$^T$ factorization enjoys a property analogous to the one stated in Theorem 3.4 for the LU factorization. In particular, the following result holds.

Theorem 3.5 If all the principal minors of a matrix $\mathrm{A} \in \mathbb{R}^{n \times n}$ are nonzero, then there exist a unique diagonal matrix D, a unique unit lower triangular matrix L and a unique unit upper triangular matrix $\mathrm{M}^T$, such that $\mathrm{A} = \mathrm{LDM}^T$.

Proof. By Theorem 3.4 we already know that there exists a unique LU factorization of A with $l_{ii} = 1$ for $i = 1, \ldots, n$. If we set the diagonal entries of D equal to $u_{ii}$ (nonzero because U is nonsingular), then $\mathrm{A} = \mathrm{LU} = \mathrm{LD}(\mathrm{D}^{-1}\mathrm{U})$. Upon defining $\mathrm{M}^T = \mathrm{D}^{-1}\mathrm{U}$, which is a unit upper triangular matrix, the existence of the LDM$^T$ factorization follows. The uniqueness of the LDM$^T$ factorization is a consequence of the uniqueness of the LU factorization. □
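The proof is constructive; a minimal MATLAB sketch of the construction (assuming the LU factors L and U of A are at hand, e.g. from Program 4) reads:

D  = diag(diag(U));   % D collects the diagonal of U
MT = D \ U;           % M^T = D^{-1}U, unit upper triangular
% Now A = L*D*MT holds, up to roundoff.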

The above proof shows that, since the diagonal entries of D coincide with those of U, we could compute L, $\mathrm{M}^T$ and D starting from the LU factorization of A: it suffices to compute $\mathrm{M}^T$ as $\mathrm{D}^{-1}\mathrm{U}$. Nevertheless, this algorithm has the same cost as the standard LU factorization. Likewise,
