
|x|_p := (∑_{j=1}^d |x_j|^p)^{1/p} ,  x = (x_1, ..., x_d)^T ∈ R^d, d ∈ N, p ∈ [1, ∞)
|x|_∞ := max_{j=1,...,d} |x_j|   maximum norm of the vector x ∈ R^d, d ∈ N
|x| := |x|_2   Euclidean norm of the vector x ∈ R^d, d ∈ N
x · y := x^T y = ∑_{j=1}^d x_j y_j   scalar product of the vectors x, y ∈ R^d
⟨x, y⟩_A := y^T A x = y · Ax   energy product of the vectors x, y ∈ R^d w.r.t. a symmetric, positive definite matrix A
|α| := |α|_1   order (or length) of the multi-index α ∈ N_0^d, d ∈ N
I   identity matrix or identity operator
e_j   jth unit vector in R^m, j = 1, ..., m
diag(λ_i) = diag(λ_1, ..., λ_m)   diagonal matrix in R^{m,m} with diagonal entries λ_1, ..., λ_m ∈ C

A^T   transpose of the matrix A
A^{-T}   transpose of the inverse matrix A^{-1}
det A   determinant of the square matrix A
λ_min(A)   minimum eigenvalue of a matrix A with real eigenvalues
λ_max(A)   maximum eigenvalue of a matrix A with real eigenvalues
σ(A)   set of eigenvalues (spectrum) of the square matrix A
ρ(A)   spectral radius of the square matrix A
m(A)   bandwidth of the symmetric matrix A
Env(A)   hull of the square matrix A
p(A)   profile of the square matrix A
B_ε(x_0) := {x : ‖x − x_0‖ < ε}   open ball in a normed space
B̄_ε(x_0) := {x : ‖x − x_0‖ ≤ ε}   closed ball in a normed space
diam(G)   diameter of the set G ⊂ R^d
|G|_n   n-dimensional (Lebesgue) measure of the set G ⊂ R^n, n ∈ {1, ..., d}
|G| := |G|_d   d-dimensional (Lebesgue) measure of the set G ⊂ R^d
vol(G)   length (d = 1), area (d = 2), volume (d = 3) of "geometric bodies" G ⊂ R^d
int G   interior of the set G
∂G   boundary of the set G
Ḡ   closure of the set G
span G   linear hull of the set G
conv G   convex hull of the set G
|G|   cardinal number of the discrete set G
ν   outer unit normal w.r.t. the set G ⊂ R^d
Ω   domain in R^d, d ∈ N
Γ := ∂Ω   boundary of the domain Ω ⊂ R^d

supp ϕ   support of the function ϕ
f^{-1}   inverse of the mapping f
f[G]   image of the set G under the mapping f
f^{-1}[G]   preimage of the set G under the mapping f
f|_K   restriction of f : G → R to a subset K ⊂ G
‖v‖_X   norm of the element v of the normed space X
dim X   dimension of the finite-dimensional linear space X
L[X, Y]   set of linear, continuous operators acting from the normed space X into the normed space Y
X′ := L[X, R]   dual space of the real normed space X
O(·), o(·)   Landau symbols of asymptotic analysis
δ_ij (i, j ∈ N_0)   Kronecker symbol, i.e., δ_ii = 1 and δ_ij = 0 if i ≠ j

Differential expressions

∂_l (l ∈ N)   symbol for the partial derivative w.r.t. the lth variable
∂_t (t ∈ R)   symbol for the partial derivative w.r.t. the variable t
∂^α (α ∈ N_0^d multi-index)   αth partial derivative
∇ := (∂_1, ..., ∂_d)^T   Nabla operator (symbolic vector)

∆   Laplace operator
∂_μ := μ · ∇   directional derivative w.r.t. the vector μ
DΦ := ∂Φ/∂x := (∂_j Φ_i)_{i,j=1}^m   Jacobi matrix or functional matrix of a differentiable mapping Φ : R^m → R^m

Coefficients in differential expressions

K   diffusion coefficient (a square matrix function)
c   convection coefficient (a vector function)
r   reaction coefficient

Discretization methods
V_h   ansatz space
X_h   extended ansatz space without any homogeneous Dirichlet boundary conditions
a_h   approximated bilinear form
b_h   approximated linear form

Function spaces (see also Appendix A.5)
P_k(G)   set of polynomials of maximum degree k on G ⊂ R^d
C(G) = C^0(G)   set of continuous functions on G
C^l(G) (l ∈ N)   set of l-times continuously differentiable functions on G
C^∞(G)   set of infinitely often continuously differentiable functions on G
C(Ḡ) = C^0(Ḡ)   set of bounded and uniformly continuous functions on G
C^l(Ḡ) (l ∈ N)   set of functions with bounded and uniformly continuous derivatives up to the order l on G
C^∞(Ḡ)   set of functions, all partial derivatives of which are bounded and uniformly continuous on G
C_0(G) = C_0^0(G)   set of continuous functions on G with compact support
C_0^l(G) (l ∈ N)   set of l-times continuously differentiable functions on G with compact support
C_0^∞(G)   set of infinitely often continuously differentiable functions on G with compact support
L^p(G) (p ∈ [1, ∞))   set of Lebesgue-measurable functions on G whose absolute value, raised to the pth power, is Lebesgue-integrable
L^∞(G)   set of measurable, essentially bounded functions on G
⟨·, ·⟩_{0,G}   scalar product in L^2(G) †
‖·‖_{0,G}   norm in L^2(G) †
‖·‖_{0,p,G} (p ∈ [1, ∞])   norm in L^p(G) †
‖·‖_{∞,G}   norm in L^∞(G) †
W_p^l(G) (l ∈ N, p ∈ [1, ∞])   set of l-times weakly differentiable functions from L^p(G), with derivatives in L^p(G)
‖·‖_{l,p,G} (l ∈ N, p ∈ [1, ∞])   norm in W_p^l(G) †
|·|_{l,p,G} (l ∈ N, p ∈ [1, ∞])   seminorm in W_p^l(G) †

H^l(G) := W_2^l(G) (l ∈ N)
⟨·, ·⟩_{l,G} (l ∈ N)   scalar product in H^l(G) †
‖·‖_{l,G} (l ∈ N)   norm in H^l(G) †
|·|_{l,G} (l ∈ N)   seminorm in H^l(G) †
⟨·, ·⟩_{0,h}   discrete L^2(Ω)-scalar product
‖·‖_{0,h}   discrete L^2(Ω)-norm
L^2(∂G)   set of square Lebesgue-integrable functions on the boundary ∂G
H_0^1(G)   set of functions from H^1(G) with vanishing trace on ∂G
C([0, T], X) = C^0([0, T], X)   set of continuous functions on [0, T] with values in the normed space X
C^l([0, T], X) (l ∈ N)   set of l-times continuously differentiable functions on [0, T] with values in the normed space X
L^p((0, T), X) (p ∈ [1, ∞])   Lebesgue space of functions on [0, T] with values in the normed space X

† Convention: In the case G = Ω, this specification is omitted.



A.2 Basic Concepts of Analysis
A subset G ⊂ R^d is called a set of measure zero if, for any number ε > 0, a
countable family of balls B_j with d-dimensional volume ε_j > 0 exists such
that

    ∑_{j=1}^∞ ε_j < ε   and   G ⊂ ⋃_{j=1}^∞ B_j .

Two functions f, g : G → R are called equal almost everywhere (in short:
equal a.e., notation: f ≡ g) if the set {x ∈ G : f(x) ≠ g(x)} is of measure
zero.
In particular, a function f : G → R is called vanishing almost everywhere
if it is equal to the constant function zero almost everywhere.
A function f : G → R is called measurable if there exists a sequence
(f_i)_i of step functions f_i : G → R such that f_i → f for i → ∞ almost
everywhere.
In what follows, G denotes a subset of Rd , d ∈ N.
(i) A point x = (x_1, x_2, ..., x_d)^T ∈ R^d is called a boundary point of G
if every open neighbourhood (for example, an open ball) of x contains a
point of G as well as a point of the complementary set R^d \ G.
(ii) The collection of all boundary points of G is called the boundary of
G and is denoted by ∂G.
(iii) The set Ḡ := G ∪ ∂G is called the closure of G.
(iv) The set G is called closed if G = Ḡ.

(v) The set G is called open if G ∩ ∂G = ∅.
(vi) The set G \ ∂G is called the interior of G and is denoted by int G.
A subset G ⊂ R^d is called connected if for arbitrary distinct points
x1 , x2 ∈ G there exists a continuous curve in G connecting them.
The set G is called convex if any two points from G can be connected by
a straight-line segment in G.
A nonempty, open, and connected set G ⊂ R^d is called a domain in R^d.
By α = (α_1, ..., α_d)^T ∈ N_0^d a so-called multi-index is denoted. Multi-
indices are a popular tool to abbreviate some elaborate notation. For
example,

    ∂^α := ∏_{i=1}^d ∂_i^{α_i} ,   α! := ∏_{i=1}^d α_i! ,   |α| := ∑_{i=1}^d α_i .

The number |α| is called the order (or length) of the multi-index α.
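
To make the multi-index calculus concrete, here is a small Python sketch (an illustration, not taken from the text; the function u and the multi-index α below are arbitrary choices) that evaluates |α|, α!, and ∂^α u for a smooth function u of d = 3 variables:

    import math
    import sympy as sp

    x1, x2, x3 = sp.symbols("x1 x2 x3")
    u = x1**2 * sp.sin(x2) * sp.exp(x3)          # a smooth function, d = 3
    alpha = (2, 1, 0)                            # multi-index alpha in N_0^3

    order = sum(alpha)                                             # |alpha| = 3
    alpha_factorial = math.prod(math.factorial(a) for a in alpha)  # alpha! = 2

    # apply partial^alpha = partial_1^2 partial_2^1 partial_3^0 to u
    d_alpha_u = u
    for var, a in zip((x1, x2, x3), alpha):
        d_alpha_u = sp.diff(d_alpha_u, var, a)

    print(order, alpha_factorial, d_alpha_u)     # 3  2  2*exp(x3)*cos(x2)
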
For a continuous function ϕ : G → R, the set supp ϕ := {x ∈ G : ϕ(x) ≠ 0}
denotes the support of ϕ.


A.3 Basic Concepts of Linear Algebra
A square matrix A ∈ R^{n,n} with entries a_ij is called symmetric if a_ij = a_ji
holds for all i, j ∈ {1, ..., n}.
A matrix A ∈ R^{n,n} is called positive definite if x · Ax > 0 for all x ∈
R^n \ {0}.
Given a polynomial p ∈ P_k, k ∈ N_0, of the form

    p(z) = ∑_{j=0}^k a_j z^j   with a_j ∈ C, j ∈ {0, ..., k},

and a matrix A ∈ C^{n,n}, the corresponding matrix polynomial of A is defined by

    p(A) := ∑_{j=0}^k a_j A^j .
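
For illustration, the following short NumPy sketch (not part of the text; the helper name matrix_polynomial is a hypothetical choice) evaluates p(A) by Horner's scheme, using A^0 = I:

    import numpy as np

    def matrix_polynomial(coeffs, A):
        """Evaluate p(A) = a_0 I + a_1 A + ... + a_k A^k for coeffs = [a_0, ..., a_k]."""
        n = A.shape[0]
        P = coeffs[-1] * np.eye(n)
        for a in reversed(coeffs[:-1]):
            P = P @ A + a * np.eye(n)            # Horner step: P <- P A + a I
        return P

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    print(matrix_polynomial([1.0, -2.0, 1.0], A))   # p(z) = (z - 1)^2, so p(A) = (A - I)^2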


Eigenvalues and Eigenvectors
Let A ∈ C^{n,n}. A number λ ∈ C is called an eigenvalue of A if

    det(A − λI) = 0 .

If λ is an eigenvalue of A, then any vector x ∈ C^n \ {0} such that

    Ax = λx   (⇔ (A − λI)x = 0)

is called an eigenvector of A associated with the eigenvalue λ.

The polynomial p_A(λ) := det(A − λI) is called the characteristic
polynomial of A.
The set of all eigenvalues of a matrix A is called the spectrum of A,
denoted by σ(A).
If all eigenvalues of a matrix A are real, then the numbers λ_max(A) and
λ_min(A) denote the largest, respectively smallest, of these eigenvalues.
The number ρ(A) = max_{λ∈σ(A)} |λ| is called the spectral radius of A.
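
Numerically, the spectrum, the extreme eigenvalues, and the spectral radius can be obtained, for example, with NumPy (a small sketch for an arbitrarily chosen symmetric matrix, not an example from the text):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                   # symmetric, hence real eigenvalues

    spectrum = np.linalg.eigvals(A)              # sigma(A)
    lam_min, lam_max = spectrum.real.min(), spectrum.real.max()
    rho = np.abs(spectrum).max()                 # spectral radius rho(A)

    print(np.sort(spectrum.real), lam_min, lam_max, rho)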

Norms of Vectors and Matrices
The norm of a vector x ∈ R^n, n ∈ N, is a real-valued function x ↦ |x|
satisfying the following three properties:
(i) |x| ≥ 0 for all x ∈ R^n , and |x| = 0 ⇔ x = 0 ,
(ii) |αx| = |α| |x| for all α ∈ R , x ∈ R^n ,
(iii) |x + y| ≤ |x| + |y| for all x, y ∈ R^n .
For example, the most frequently used vector norms are
(a) the maximum norm:

    |x|_∞ := max_{j=1,...,n} |x_j| .                                  (A3.1)

(b) the p-norm, p ∈ [1, ∞):

    |x|_p := (∑_{j=1}^n |x_j|^p)^{1/p} .                              (A3.2)

The important case p = 2 yields the so-called Euclidean norm:

    |x|_2 := (∑_{j=1}^n x_j^2)^{1/2} .                                (A3.3)

The three most important norms (that is, p = 1, 2, ∞) in R^n are equivalent
in the following sense: The inequalities

    (1/√n) |x|_2 ≤ |x|_∞ ≤ |x|_2 ≤ √n |x|_∞ ,
    (1/n)  |x|_1 ≤ |x|_∞ ≤ |x|_1 ≤ n |x|_∞ ,
    (1/√n) |x|_1 ≤ |x|_2 ≤ |x|_1 ≤ √n |x|_2

are valid for all x ∈ R^n.
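
The three norms and the equivalence inequalities above are easy to check numerically; the following sketch (an illustration with an arbitrary vector, not from the text) does so with NumPy:

    import numpy as np

    x = np.array([3.0, -4.0, 1.0])
    n = x.size

    norm_1   = np.abs(x).sum()                   # |x|_1
    norm_2   = np.sqrt((x**2).sum())             # |x|_2, Euclidean norm
    norm_inf = np.abs(x).max()                   # |x|_inf, maximum norm

    # the three equivalence chains stated above
    assert norm_2 / np.sqrt(n) <= norm_inf <= norm_2 <= np.sqrt(n) * norm_inf
    assert norm_1 / n          <= norm_inf <= norm_1 <= n * norm_inf
    assert norm_1 / np.sqrt(n) <= norm_2   <= norm_1 <= np.sqrt(n) * norm_2
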
The norm of a matrix A ∈ R^{n,n} is a real-valued function A ↦ ‖A‖
satisfying the following four properties:
(i) ‖A‖ ≥ 0 for all A ∈ R^{n,n} , and ‖A‖ = 0 ⇔ A = 0 ,
(ii) ‖αA‖ = |α| ‖A‖ for all α ∈ R , A ∈ R^{n,n} ,
(iii) ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all A, B ∈ R^{n,n} ,
(iv) ‖AB‖ ≤ ‖A‖ ‖B‖ for all A, B ∈ R^{n,n} .

In comparison with the definition of a vector norm, we include here an
additional property (iv), which is called the submultiplicative property. It
restricts the general set of matrix norms to the practically important class
of submultiplicative norms.
The most common matrix norms are
(a) the total norm:

    ‖A‖_G := n max_{1≤i,k≤n} |a_ik| ,                                 (A3.4)

(b) the Frobenius norm:

    ‖A‖_F := (∑_{i,k=1}^n a_ik^2)^{1/2} ,                             (A3.5)

(c) the maximum row sum:

    ‖A‖_∞ := max_{1≤i≤n} ∑_{k=1}^n |a_ik| ,                           (A3.6)

(d) the maximum column sum:

    ‖A‖_1 := max_{1≤k≤n} ∑_{i=1}^n |a_ik| .                           (A3.7)

All these matrix norms are equivalent. For example, we have

    (1/n) ‖A‖_G ≤ ‖A‖_p ≤ ‖A‖_G ≤ n ‖A‖_p ,   p ∈ {1, ∞} ,

or

    (1/n) ‖A‖_G ≤ ‖A‖_F ≤ ‖A‖_G ≤ n ‖A‖_F .
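
The four matrix norms (A3.4)-(A3.7), and one of the equivalence chains above, can likewise be checked directly from the definitions (an illustrative sketch with an arbitrary matrix, not from the text):

    import numpy as np

    A = np.array([[1.0, -2.0],
                  [3.0,  0.5]])
    n = A.shape[0]

    total_norm  = n * np.abs(A).max()            # ||A||_G, (A3.4)
    frobenius   = np.sqrt((A**2).sum())          # ||A||_F, (A3.5)
    max_row_sum = np.abs(A).sum(axis=1).max()    # ||A||_inf, (A3.6)
    max_col_sum = np.abs(A).sum(axis=0).max()    # ||A||_1, (A3.7)

    # NumPy provides the last three directly
    assert np.isclose(frobenius,   np.linalg.norm(A, 'fro'))
    assert np.isclose(max_row_sum, np.linalg.norm(A, np.inf))
    assert np.isclose(max_col_sum, np.linalg.norm(A, 1))

    # one of the equivalence chains from above
    assert total_norm / n <= frobenius <= total_norm <= n * frobenius
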
Note that the spectral radius ρ(A) is not a matrix norm, as the following
simple example shows:
For

    A = ( 0  1 )
        ( 0  0 ) ,

we have A ≠ 0 but ρ(A) = 0.
However, for any matrix norm ‖·‖ the following relation is valid:

    ρ(A) ≤ ‖A‖ .                                                      (A3.8)
Very often, matrices and vectors simultaneously appear as a product
Ax. In order to be able to handle such situations, there should be a certain
correlation between matrix and vector norms.
A matrix norm ‖·‖ is called mutually consistent or compatible with the
vector norm |·| if the inequality

    |Ax| ≤ ‖A‖ |x|                                                    (A3.9)

is valid for all x ∈ R^n and all A ∈ R^{n,n}.

Examples of mutually consistent norms are

    ‖A‖_G or ‖A‖_∞ with |x|_∞ ,
    ‖A‖_G or ‖A‖_1 with |x|_1 ,
    ‖A‖_G or ‖A‖_F with |x|_2 .

In many cases, the bound for |Ax| given by (A3.9) is not sharp enough;
i.e., for x ≠ 0 we may just have that

    |Ax| < ‖A‖ |x| .

Therefore, the question arises of how to find, for a given vector norm, a
compatible matrix norm such that in (A3.9) equality holds for at least
one element x ≠ 0.
Given a vector norm |x|, the number

    ‖A‖ := sup_{x∈R^n\{0}} |Ax| / |x| = sup_{x∈R^n : |x|=1} |Ax|

is called the induced or subordinate matrix norm.
The induced norm is compatible with the given vector norm. It is the
smallest norm among all matrix norms that are compatible with the given
vector norm |x|.
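
As a numerical illustration (a sketch under the assumption that random sampling only approximates the supremum; the matrix is an arbitrary choice), one can estimate the norm induced by |·|_∞ by sampling vectors with |x|_∞ = 1. The estimate never exceeds the maximum row sum (A3.6), which is in fact the norm induced by |·|_∞ (a standard result not derived in this excerpt), and approaches it as the sampling gets denser:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[1.0, -2.0],
                  [3.0,  0.5]])

    # sample points on the unit sphere of |.|_inf by rescaling random vectors
    x = rng.uniform(-1.0, 1.0, size=(10000, 2))
    x /= np.abs(x).max(axis=1, keepdims=True)    # now |x_i|_inf = 1 for every row

    estimate = np.abs(x @ A.T).max()             # approximates sup |Ax|_inf over |x|_inf = 1
    max_row_sum = np.abs(A).sum(axis=1).max()    # ||A||_inf, see (A3.6)

    print(estimate, max_row_sum)                 # estimate <= 3.5 and close to 3.5
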
To illustrate the definition of the induced matrix norm, the matrix norm
induced by the Euclidean vector norm is derived:
