$$\int_\Omega K(x)\, \nabla\varphi_j(x) \cdot \nabla\varphi_i(x)\, dx$$

(cf. Section 3.5). For the standard reference element, which we use from now on, we have $b^{(1)} = a^{(2)} - a^{(1)}$, $b^{(2)} = a^{(3)} - a^{(1)}$. Here $a^{(i)}$, $i = 1, 2, 3$, are the locally numbered nodes of $K$ interpreted as vectors of $\mathbb{R}^2$.

From now on we also make use of the special form of the stiffness matrix and obtain

$$\tilde{A}^{(m)}_{ij} = \gamma_1 \int_{\hat{K}} \partial_{\hat{x}_1} N_j\, \partial_{\hat{x}_1} N_i \, d\hat{x}
+ \gamma_2 \int_{\hat{K}} \left( \partial_{\hat{x}_1} N_j\, \partial_{\hat{x}_2} N_i + \partial_{\hat{x}_2} N_j\, \partial_{\hat{x}_1} N_i \right) d\hat{x}
+ \gamma_3 \int_{\hat{K}} \partial_{\hat{x}_2} N_j\, \partial_{\hat{x}_2} N_i \, d\hat{x} \qquad (2.52)$$

with
$$\begin{aligned}
\gamma_1 &:= c_{11}\, |\det(B)| = \frac{1}{|\det(B)|} \left( a^{(3)} - a^{(1)} \right) \cdot \left( a^{(3)} - a^{(1)} \right), \\
\gamma_2 &:= c_{12}\, |\det(B)| = -\frac{1}{|\det(B)|} \left( a^{(2)} - a^{(1)} \right) \cdot \left( a^{(3)} - a^{(1)} \right), \\
\gamma_3 &:= c_{22}\, |\det(B)| = \frac{1}{|\det(B)|} \left( a^{(2)} - a^{(1)} \right) \cdot \left( a^{(2)} - a^{(1)} \right).
\end{aligned}$$

In the implementation it is advisable to compute the values $\gamma_i$ just once from the local geometrical information given in the form of the vertices $a^{(i)} = a_{r_i}$ and to store them permanently.

Thus we obtain for the local stiffness matrix
$$\tilde{A}^{(m)} = \gamma_1 S_1 + \gamma_2 S_2 + \gamma_3 S_3 \qquad (2.53)$$

with
$$S_1 := \left( \int_{\hat{K}} \partial_{\hat{x}_1} N_j\, \partial_{\hat{x}_1} N_i \, d\hat{x} \right)_{ij},$$

80 2. Finite Element Method for Poisson Equation

$$S_2 := \left( \int_{\hat{K}} \partial_{\hat{x}_1} N_j\, \partial_{\hat{x}_2} N_i + \partial_{\hat{x}_2} N_j\, \partial_{\hat{x}_1} N_i \, d\hat{x} \right)_{ij}, \qquad
S_3 := \left( \int_{\hat{K}} \partial_{\hat{x}_2} N_j\, \partial_{\hat{x}_2} N_i \, d\hat{x} \right)_{ij}.$$

An explicit computation of the matrices $S_i$ is possible because the integrands are constant, and these matrices, too, can be stored permanently:
$$S_1 = \frac{1}{2} \begin{pmatrix} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
S_2 = \frac{1}{2} \begin{pmatrix} 2 & -1 & -1 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{pmatrix}, \qquad
S_3 = \frac{1}{2} \begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{pmatrix}.$$
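As an illustration, the local stiffness matrix (2.53) can be computed directly from the three vertices: the $\gamma_i$ are evaluated once per element and combined with the stored matrices $S_1, S_2, S_3$. The following NumPy sketch is ours (the name `local_stiffness` does not appear in the text) and assumes the Poisson case treated in this chapter:

```python
import numpy as np

# The element matrices S1, S2, S3 are constant; they are computed once
# and stored permanently.
S1 = 0.5 * np.array([[1, -1, 0], [-1, 1, 0], [0, 0, 0]], dtype=float)
S2 = 0.5 * np.array([[2, -1, -1], [-1, 0, 1], [-1, 1, 0]], dtype=float)
S3 = 0.5 * np.array([[1, 0, -1], [0, 0, 0], [-1, 0, 1]], dtype=float)

def local_stiffness(a1, a2, a3):
    """Local stiffness matrix (2.53) from the vertices a^(1), a^(2), a^(3)."""
    a1, a2, a3 = (np.asarray(a, dtype=float) for a in (a1, a2, a3))
    b1 = a2 - a1                          # b^(1) = a^(2) - a^(1)
    b2 = a3 - a1                          # b^(2) = a^(3) - a^(1)
    det_B = b1[0] * b2[1] - b1[1] * b2[0]
    gamma1 = (b2 @ b2) / abs(det_B)       # (a^(3)-a^(1)).(a^(3)-a^(1)) / |det B|
    gamma2 = -(b1 @ b2) / abs(det_B)
    gamma3 = (b1 @ b1) / abs(det_B)
    return gamma1 * S1 + gamma2 * S2 + gamma3 * S3
```

For the reference triangle itself, $\gamma_1 = \gamma_3 = 1$, $\gamma_2 = 0$, and the familiar local matrix $S_1 + S_3$ results.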

The right-hand side $(q_h)_i = \int_\Omega f(x)\, \varphi_i(x)\, dx$ can be treated in a similar manner:

$$(q_h)_i = \sum_{m=1}^{N} q_i^{(m)}$$

with
$$q_i^{(m)} = \int_{K_m} f(x)\, \varphi_i(x)\, dx \qquad \bigl(= 0 \ \text{if}\ a_i \notin K_m \bigr)\,.$$

Again, we transform the global numbering $\bigl(q_{r_i}^{(m)}\bigr)_{i=1,2,3}$ for the triangle $K_m = \operatorname{conv}\{a_{r_1}, a_{r_2}, a_{r_3}\}$ into the local numbering $\bigl(\tilde{q}_i^{(m)}\bigr)_{i=1,2,3}$. Analogously to the determination of the entries of the stiffness matrix, we have

$$\tilde{q}_i^{(m)} = \int_{\hat{K}} f(F(\hat{x}))\, \varphi_{r_i}(F(\hat{x}))\, |\det(B)|\, d\hat{x}
= \int_{\hat{K}} \hat{f}(\hat{x})\, N_i(\hat{x})\, |\det(B)|\, d\hat{x}\,,$$
where $\hat{f}(\hat{x}) := f(F(\hat{x}))$ for $\hat{x} \in \hat{K}$.

In general, this integral cannot be evaluated exactly. Therefore, it has to

be approximated by a quadrature rule.

A quadrature rule for $\int_{\hat{K}} g(\hat{x})\, d\hat{x}$ is of the type
$$\sum_{k=1}^{R} \omega_k\, g\bigl(\hat{b}^{(k)}\bigr)$$

with certain weights $\omega_k$ and quadrature points $\hat{b}^{(k)}$. As an example, we take the trapezoidal rule (cf. (2.38)), where
$$\hat{b}^{(1)} = \hat{a}_1 = (0, 0)\,, \quad \hat{b}^{(2)} = \hat{a}_2 = (1, 0)\,, \quad \hat{b}^{(3)} = \hat{a}_3 = (0, 1)\,, \qquad \omega_k = \frac{1}{6}\,, \ k = 1, 2, 3\,.$$

2.4. The Implementation of the Finite Element Method: Part 1 81

Thus for arbitrary but fixed quadrature rules, we have
$$\tilde{q}_i^{(m)} \approx \sum_{k=1}^{R} \omega_k\, \hat{f}\bigl(\hat{b}^{(k)}\bigr)\, N_i\bigl(\hat{b}^{(k)}\bigr)\, |\det(B)|\,. \qquad (2.54)$$

Of course, the application of different quadrature rules on different elements is possible, too. The values $N_i\bigl(\hat{b}^{(k)}\bigr)$, $i = 1, 2, 3$, $k = 1, \ldots, R$, should be evaluated just once and should be stored. The discussion on the use of quadrature rules will be continued in Sections 3.5.2 and 3.6.
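A minimal sketch of the approximation (2.54) with the trapezoidal rule follows; the names `local_load`, `B_HAT`, `N_AT_B` are ours, and we use that for this rule $N_i(\hat{b}^{(k)}) = \delta_{ik}$:

```python
import numpy as np

# Trapezoidal rule on the reference triangle (cf. (2.38)): quadrature
# points at the vertices, all weights 1/6.
B_HAT = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
WEIGHTS = [1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0]

# Values N_i(b^(k)), evaluated once and stored; for the trapezoidal
# rule N_i(b^(k)) = delta_ik.
N_AT_B = np.eye(3)

def local_load(f, a1, a2, a3):
    """Approximate local load vector (2.54) on the triangle conv{a1, a2, a3}."""
    a1, a2, a3 = (np.asarray(a, dtype=float) for a in (a1, a2, a3))
    B = np.column_stack((a2 - a1, a3 - a1))   # F(x^) = B x^ + a^(1)
    det_B = abs(np.linalg.det(B))
    q = np.zeros(3)
    for k in range(len(WEIGHTS)):
        # f^(b^(k)) = f(F(b^(k)))
        q += WEIGHTS[k] * f(B @ B_HAT[k] + a1) * N_AT_B[:, k] * det_B
    return q
```

For $f \equiv 1$ the entries sum to the element area, as they should.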

In summary, the following algorithm provides the assembling of the stiffness matrix and the right-hand side:

Loop over all elements $m = 1, \ldots, N$:

• Allocating a local numbering to the nodes based on the element-node table: $1 \to r_1$, $2 \to r_2$, $3 \to r_3$.

• Assembling of the element stiffness matrix $\tilde{A}^{(m)}$ according to (2.51) or (2.53).

• Assembling of the right-hand side according to (2.54).

• Loop over $i, j = 1, 2, 3$:
$$(A_h)_{r_i r_j} := (A_h)_{r_i r_j} + \tilde{A}^{(m)}_{ij}\,, \qquad (q_h)_{r_i} := (q_h)_{r_i} + \tilde{q}_i^{(m)}\,.$$

For the sake of efficiency of this algorithm, it is necessary to adjust the memory structure to the particular situation; we will see how this can be done in Section 2.5.
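The element loop above can be sketched as follows; for clarity this uses a dense global matrix rather than the adapted storage of Section 2.5, and the name `assemble` as well as the 0-based node indices are our choices:

```python
import numpy as np

def assemble(num_nodes, elements, local_matrices, local_vectors):
    """Scatter the element contributions A~(m), q~(m) into the global
    stiffness matrix and load vector (dense storage for clarity)."""
    A = np.zeros((num_nodes, num_nodes))
    q = np.zeros(num_nodes)
    for nodes, A_loc, q_loc in zip(elements, local_matrices, local_vectors):
        # local numbering 1 -> r1, 2 -> r2, 3 -> r3 via the element-node table
        for i in range(3):
            q[nodes[i]] += q_loc[i]
            for j in range(3):
                A[nodes[i], nodes[j]] += A_loc[i, j]
    return A, q
```

Entries of nodes that never share an element remain zero, which is exactly the sparsity exploited in Section 2.5.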

2.4.3 Realization of Dirichlet Boundary Conditions: Part 1

Nodes where a Dirichlet boundary condition is prescribed must be labeled specially, here, for instance, by the convention $M = M_1 + M_2$, where the nodes numbered from $M_1 + 1$ to $M$ correspond to the Dirichlet boundary nodes. In more general cases, other realizations are to be preferred.

In the first step of assembling the stiffness matrix and the load vector, the Dirichlet nodes are treated like all the other ones. After this, the Dirichlet nodes are considered separately. If such a node has the number $j$, the boundary condition is included by the following procedure:

Replace the $j$th row and the $j$th column (for conservation of the symmetry) of $A_h$ by the $j$th unit vector and $(q_h)_j$ by $g(a_j)$, if $u(x) = g(x)$ is prescribed for $x \in \partial\Omega$. If the $j$th column is replaced by the unit vector, the right-hand side $(q_h)_i$ for $i \ne j$ must be modified to $(q_h)_i - (A_h)_{ij}\, g(a_j)$. In other words, the contributions caused by the Dirichlet boundary condition are included into the right-hand side. This is exactly the elimination that led to the form (1.10), (1.11) in Chapter 1.
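This procedure can be sketched in a few lines (the function name `apply_dirichlet` and the 0-based indexing are ours):

```python
import numpy as np

def apply_dirichlet(A, q, j, g_j):
    """Include the condition u(a_j) = g(a_j) in the assembled system."""
    q -= A[:, j] * g_j    # move column j to the right-hand side;
                          # the jth entry is overwritten below
    A[j, :] = 0.0         # jth row -> jth unit vector
    A[:, j] = 0.0         # jth column too, to conserve symmetry
    A[j, j] = 1.0
    q[j] = g_j            # the jth equation becomes u_j = g(a_j)
    return A, q
```

The modified system keeps the symmetry of $A_h$, so symmetric solvers remain applicable.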


2.5 Solving Sparse Systems of Linear Equations by Direct Methods

Let $A$ be an $M \times M$ matrix. Given a vector $q \in \mathbb{R}^M$, we consider the system of linear equations
$$A\xi = q\,.$$

The matrices arising from the finite element discretization are sparse; i.e., they have a bounded number of nonzero entries per row independent of the dimension of the system of equations. For the simple example of Section 2.2, this bound is determined by the number of neighbouring nodes (see (2.37)). Methods for solving systems of equations should take advantage of the sparse structure. For iterative methods, which will be examined in Chapter 5, this is easier to achieve than for direct methods. Therefore, the importance of direct methods has decreased. Nevertheless, in adapted form and for small or medium size problems, they are still the method of choice.

Elimination without Pivoting using Band Structure

In the general case, where the matrix $A$ is assumed only to be nonsingular, there exist $M \times M$ matrices $P$, $L$, $U$ such that
$$PA = LU\,.$$
Here $P$ is a permutation matrix, $L$ is a scaled lower triangular matrix, and $U$ is an upper triangular matrix; i.e., they have the form

$$L = \begin{pmatrix} 1 & & 0 \\ & \ddots & \\ l_{ij} & & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} u_{11} & & u_{ij} \\ & \ddots & \\ 0 & & u_{MM} \end{pmatrix}.$$

This decomposition corresponds to the Gaussian elimination method with pivoting. The method is very easy and has favourable properties with respect to the sparse structure if pivoting is not necessary (i.e., $P = I$, $A = LU$). Then the matrix $A$ is called LU factorizable.

Denote by $A_k$ the leading principal submatrix of $A$ of dimension $k \times k$, i.e.,
$$A_k := \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} \end{pmatrix},$$

and suppose that it has already been factorized as $A_k = L_k U_k$. This is obviously possible for $k = 1$: $A_1 = (a_{11}) = (1)(a_{11})$. The matrix $A_{k+1}$ can be represented in the form of a block matrix
$$A_{k+1} = \begin{pmatrix} A_k & b \\ c^T & d \end{pmatrix}$$

2.5. Direct Methods for Sparse Linear Systems 83

with $b, c \in \mathbb{R}^k$, $d \in \mathbb{R}$.

Using the ansatz
$$L_{k+1} = \begin{pmatrix} L_k & 0 \\ l^T & 1 \end{pmatrix}, \qquad U_{k+1} = \begin{pmatrix} U_k & u \\ 0 & s \end{pmatrix}$$
with unknown vectors $u, l \in \mathbb{R}^k$ and $s \in \mathbb{R}$, it follows that
$$A_{k+1} = L_{k+1} U_{k+1} \iff L_k u = b\,, \quad U_k^T l = c\,, \quad l^T u + s = d\,. \qquad (2.55)$$

From this, we have the following result:

Let $A$ be nonsingular. Then lower and upper triangular matrices $L$, $U$ exist with $A = LU$ if and only if $A_k$ is nonsingular for all $1 \le k \le M$. In this case, $L$ and $U$ are determined uniquely. (2.56)
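Relation (2.55) yields a "bordering" construction of $L$ and $U$, adding one row and column per step. A sketch (the name `lu_bordering` is ours; `np.linalg.solve` stands in for the triangular forward and back substitutions):

```python
import numpy as np

def lu_bordering(A):
    """LU factorization without pivoting, built up via (2.55):
    at step k solve L_k u = b and U_k^T l = c, then s = d - l^T u."""
    M = A.shape[0]
    L = np.zeros((M, M))
    U = np.zeros((M, M))
    L[0, 0] = 1.0
    U[0, 0] = A[0, 0]                        # A_1 = (a_11) = (1)(a_11)
    for k in range(1, M):
        b, c, d = A[:k, k], A[k, :k], A[k, k]
        u = np.linalg.solve(L[:k, :k], b)    # L_k u = b
        l = np.linalg.solve(U[:k, :k].T, c)  # U_k^T l = c
        U[:k, k] = u
        U[k, k] = d - l @ u                  # s = d - l^T u
        L[k, :k] = l
        L[k, k] = 1.0
    return L, U
```

By (2.56) the loop runs to completion precisely when all leading principal submatrices are nonsingular.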

Furthermore, from (2.55) we have the following important consequences: If the first $l$ components of the vector $b$ are equal to zero, then this is valid for the vector $u$, too:
$$\text{If } b = \begin{pmatrix} 0 \\ \beta \end{pmatrix}, \text{ then } u \text{ also has the structure } u = \begin{pmatrix} 0 \\ \upsilon \end{pmatrix}.$$
Similarly,
$$c = \begin{pmatrix} 0 \\ \gamma \end{pmatrix} \text{ implies the structure } l = \begin{pmatrix} 0 \\ \lambda \end{pmatrix}.$$

For example, if the matrix $A$ has a structure as shown in Figure 2.16, then the zeros outside of the surrounded entries are preserved after the LU factorization. Before we introduce appropriate definitions to generalize these results, we want to consider the special case of symmetric matrices.

$$A = \begin{pmatrix}
* & 0 & * & 0 & 0 \\
0 & * & * & 0 & * \\
* & * & * & * & * \\
0 & 0 & * & * & 0 \\
0 & * & * & 0 & *
\end{pmatrix}$$

Figure 2.16. Profile of a matrix. (Entries marked $*$ are nonzero; in the original figure, bars surround the entries belonging to the envelope.)

If $A$ is as before nonsingular and LU factorizable, then $U = D L^T$ with a diagonal matrix $D = \operatorname{diag}(d_i)$, and therefore
$$A = L D L^T.$$
This is true because $A$ has the form $A = L D \tilde{U}$, where the upper triangular matrix $\tilde{U}$ satisfies the scaling condition $\tilde{u}_{ii} = 1$ for all $i = 1, \ldots, M$. Such a factorization is unique, and thus
$$A = A^T \ \text{implies}\ L^T = \tilde{U}, \ \text{therefore}\ A = L D L^T.$$


If in particular $A$ is symmetric and positive definite, then also $d_i > 0$ is valid. Thus exactly one matrix $\tilde{L}$ of the form
$$\tilde{L} = \begin{pmatrix} l_{11} & & 0 \\ & \ddots & \\ l_{ij} & & l_{MM} \end{pmatrix} \quad \text{with } l_{ii} > 0 \text{ for all } i$$
exists such that
$$A = \tilde{L} \tilde{L}^T,$$
the so-called Cholesky decomposition.

We have
$$\tilde{L}_{\text{Chol}} = L_{\text{Gauss}} \sqrt{D}\,, \quad \text{where } \sqrt{D} := \operatorname{diag}\bigl(\sqrt{d_i}\bigr).$$

This shows that the Cholesky method for the determination of the Cholesky factor $\tilde{L}$ also preserves certain zeros of $A$ in the same way as the Gaussian elimination without pivoting.
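The identity $\tilde{L}_{\text{Chol}} = L_{\text{Gauss}}\sqrt{D}$ can be sketched by computing the $LDL^T$ factorization and scaling its columns; the name `cholesky_from_ldlt` is ours:

```python
import numpy as np

def cholesky_from_ldlt(A):
    """Cholesky factor L~ = L_Gauss sqrt(D) of a symmetric positive
    definite A, via the LDL^T factorization (no pivoting needed)."""
    M = A.shape[0]
    L = np.eye(M)
    D = np.zeros(M)
    for k in range(M):
        D[k] = A[k, k] - (L[k, :k] ** 2) @ D[:k]
        for i in range(k + 1, M):
            L[i, k] = (A[i, k] - (L[i, :k] * L[k, :k]) @ D[:k]) / D[k]
    return L * np.sqrt(D)    # scale column i of L by sqrt(d_i)
```

Positive definiteness guarantees $d_i > 0$, so the square roots are real and the resulting diagonal entries are positive.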

In what follows, we want to specify the set of zeros that is preserved by Gaussian elimination without pivoting. We will not require the matrix to be symmetric, but for the sake of simplicity we consider a matrix with a symmetric distribution of its entries.

Definition 2.19 Let $A \in \mathbb{R}^{M \times M}$ be a matrix such that $a_{ii} \ne 0$ for $i = 1, \ldots, M$ and
$$a_{ij} \ne 0 \ \text{if and only if}\ a_{ji} \ne 0 \quad \text{for all } i, j = 1, \ldots, M. \qquad (2.57)$$
We define, for $i = 1, \ldots, M$,
$$f_i(A) := \min\bigl\{\, j \mid a_{ij} \ne 0,\ 1 \le j \le i \,\bigr\}.$$
Then
$$m_i(A) := i - f_i(A)$$
is called the $i$th (left-hand side) row bandwidth of $A$.

The bandwidth of a matrix $A$ that satisfies (2.57) is the number
$$m(A) := \max_{1 \le i \le M} m_i(A) = \max\bigl\{\, i - j \mid a_{ij} \ne 0,\ 1 \le j \le i \le M \,\bigr\}.$$
The band of the matrix $A$ is
$$B(A) := \bigl\{\, (i,j),\ (j,i) \mid i - m(A) \le j \le i,\ 1 \le i \le M \,\bigr\}.$$
The set
$$\operatorname{Env}(A) := \bigl\{\, (i,j),\ (j,i) \mid f_i(A) \le j \le i,\ 1 \le i \le M \,\bigr\}$$
is called the hull or envelope of $A$. The number
$$p(A) := M + 2 \sum_{i=1}^{M} m_i(A)$$
is called the profile of $A$.


The profile is the number of elements of $\operatorname{Env}(A)$. For the matrix $A$ in Figure 2.16 we have $(m_1(A), \ldots, m_5(A)) = (0, 0, 2, 1, 3)$, $m(A) = 3$, and $p(A) = 17$.
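These quantities are straightforward to compute from the zero structure. The sketch below (the name `profile_info` is ours) uses 0-based indices; the differences $m_i$, the bandwidth $m(A)$, and $p(A) = M + 2\sum_i m_i(A)$ come out exactly as in Definition 2.19:

```python
def profile_info(A):
    """Row bandwidths m_i(A), bandwidth m(A) and profile p(A) of
    Definition 2.19, for a matrix with symmetric zero structure
    (0-based indices; the values agree with the 1-based definition)."""
    M = len(A)
    # f_i(A): column index of the first nonzero entry in row i (j <= i)
    f = [min(j for j in range(i + 1) if A[i][j] != 0) for i in range(M)]
    m_i = [i - f[i] for i in range(M)]
    return m_i, max(m_i), M + 2 * sum(m_i)
```

Applied to the matrix of Figure 2.16, it reproduces the values stated above.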

Summarizing the above considerations, we have proved the following

theorem:

Theorem 2.20 Let A be a matrix with the symmetric structure (2.57).

Then the Cholesky method or the Gaussian elimination without pivoting

preserves the hull and in particular the bandwidth.

The hull may contain zeros that will be replaced by (nonzero) entries during the decomposition process. Therefore, in order to keep this fill-in small, the profile should be as small as possible.

Furthermore, in order to exploit the matrix structure for an efficient assembling and storage, this structure (or some estimate of it) should be known in advance, before the computation of the matrix entries is started.

For example, if $A$ is a stiffness matrix with the entries
$$a_{ij} = a(\varphi_j, \varphi_i) = \int_\Omega \nabla\varphi_j \cdot \nabla\varphi_i \, dx\,,$$

then the property
$$a_{ij} \ne 0 \ \Longrightarrow\ a_i,\ a_j \ \text{are neighbouring nodes}$$
can be used for the definition of a (possibly too large) symmetric matrix structure. This is also valid for the case of a nonsymmetric bilinear form and thus a nonsymmetric stiffness matrix. Also in this case, the definition of $f_i(A)$ can be replaced by
$$f_i(A) := \min\bigl\{\, j \mid 1 \le j \le i,\ j \ \text{is a neighbouring node of}\ i \,\bigr\}.$$
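Such an a-priori structure estimate can be obtained from the element-node table alone, before any matrix entry is computed. A sketch (the name `fi_from_mesh` is ours; nodes are 0-based and each node counts as its own neighbour, since $a_{ii} \ne 0$):

```python
def fi_from_mesh(num_nodes, elements):
    """Estimate f_i(A) from the element-node table: the smallest node
    index sharing an element with node i (0-based; a node is its own
    neighbour)."""
    f = list(range(num_nodes))
    for elem in elements:
        lo = min(elem)          # smallest node number in this element
        for i in elem:
            f[i] = min(f[i], lo)
    return f
```

The resulting values bound $f_i(A)$ from below, so the envelope they describe contains the true envelope.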

Since the characterization (2.56) of the possibility of the Gaussian elimination without pivoting cannot be checked directly, we have to specify sufficient conditions. Examples for such conditions are the following (see [34]):

• $A$ is symmetric and positive definite,

• $A$ is an M-matrix. Sufficient conditions for this property were given in (1.32) and (1.32)$^*$. In Section 3.9, geometrical conditions for the family of triangulations $(\mathcal{T}_h)_h$ will be derived that guarantee that the finite element discretization considered here creates an M-matrix.

Data Structures

For sparse matrices, it is appropriate to store only the components within the band or the hull. A symmetric matrix $A \in \mathbb{R}^{M \times M}$ with bandwidth $m$ can be stored in $M(m+1)$ memory positions. By means of the index


conversion $a_{ik} \mapsto b_{i,\,k-i+m+1}$ for $k \le i$, the matrix
$$\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1,m+1} & & \\
a_{21} & a_{22} & \cdots & & \ddots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots &
\end{pmatrix}$$
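The index conversion can be sketched as follows (the name `to_band` is ours; with 0-based indices the stored position becomes $b_{i,\,k-i+m}$):

```python
import numpy as np

def to_band(A, m):
    """Store the lower band (k <= i) of a symmetric M x M matrix with
    bandwidth m in an M x (m+1) array: a_ik -> b_{i, k-i+m} (0-based)."""
    M = A.shape[0]
    B = np.zeros((M, m + 1))
    for i in range(M):
        for k in range(max(0, i - m), i + 1):
            B[i, k - i + m] = A[i, k]   # diagonal ends up in column m
    return B
```

This needs $M(m+1)$ positions instead of $M^2$; by symmetry the entries with $k > i$ need not be stored.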