for each subset ∆ of the real line,

$$P(\Delta) = \int_{\Delta} d\mu(\lambda) \qquad (A.26)$$

is the projection operator onto the subspace of vectors ψ such that $\left\|(\lambda - A)^{-1}\psi\right\| < \infty$ for all λ ∉ ∆ (Riesz and Sz.-Nagy, 1955, Chap. VIII, Sec. 120).

A linear operator U is unitary if it preserves inner products, i.e.

$$(U\psi, U\varphi) = (\psi, \varphi) \qquad (A.27)$$

for any pair of vectors ψ, φ in the Hilbert space. A necessary and sufficient condition for unitarity is that the operator is norm preserving, i.e.

$$(U\psi, U\psi) = (\psi, \psi) \ \text{ for all } \psi \ \text{ if and only if } U \text{ is unitary} \,. \qquad (A.28)$$

The spectral resolution for a unitary operator with a pure point spectrum is

$$U = \sum_n e^{i\theta_n} P_n \,, \quad \theta_n \text{ real} \,, \qquad (A.29)$$

and for a continuous spectrum

$$U = \int e^{i\theta} \, d\mu(\theta) \,, \quad \theta \text{ real} \,. \qquad (A.30)$$

A linear operator N is said to be a normal operator if

$$\left[N, N^{\dagger}\right] = 0 \,. \qquad (A.31)$$

The hermitian and unitary operators are both normal. The hermitian operators $N_1 = \left(N + N^{\dagger}\right)/2$ and $N_2 = \left(N - N^{\dagger}\right)/2i$ satisfy $N = N_1 + i N_2$ and $[N_1, N_2] = 0$. Normal operators therefore have the spectral resolutions

$$N = \sum_n \left(x_n P_{1n} + i y_n P_{2n}\right) \,, \quad [P_{1n}, P_{2m}] = 0 \qquad (A.32)$$

for a point spectrum, and

$$N = \int_{\Delta_1} x \, d\mu_1(x) + i \int_{\Delta_2} y \, d\mu_2(y) \,, \quad [d\mu_1(x), d\mu_2(y)] = 0 \qquad (A.33)$$

for a continuous spectrum.

A.3.4 Matrices

A linear operator X acting on an N-dimensional Hilbert space, with basis $f^{(1)}, \ldots, f^{(N)}$, is represented by the N × N matrix

$$X_{mn} = \left(f^{(m)}, X f^{(n)}\right) \,. \qquad (A.34)$$

The operator and its matrix are both called X. The matrix for the product XY of two operators is the matrix product

$$(XY)_{mn} = \sum_{k=1}^{N} X_{mk} Y_{kn} \,. \qquad (A.35)$$

The determinant of X is defined as

$$\det(X) = \sum_{n_1 \cdots n_N} \epsilon_{n_1 \cdots n_N} \, X_{1 n_1} \cdots X_{N n_N} \,, \qquad (A.36)$$

where the generalized alternating tensor is

$$\epsilon_{n_1 \cdots n_N} = \begin{cases} 1 & n_1 \cdots n_N \text{ is an even permutation of } 12 \cdots N \,, \\ -1 & n_1 \cdots n_N \text{ is an odd permutation of } 12 \cdots N \,, \\ 0 & \text{otherwise} \,. \end{cases} \qquad (A.37)$$

The trace of X is

$$\operatorname{Tr} X = \sum_{n=1}^{N} X_{nn} \,. \qquad (A.38)$$
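The permutation sum (A.36) can be checked directly against a library determinant for a small matrix. The following sketch uses NumPy, which is an assumption of ours rather than part of the text, and the helper name `det_by_permutations` is hypothetical:

```python
import itertools
import numpy as np

def det_by_permutations(X):
    """Evaluate det(X) from the alternating-tensor sum (A.36)."""
    N = X.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(N)):
        # Parity of the permutation: (-1)^(number of inversions)
        inversions = sum(1 for i in range(N) for j in range(i + 1, N)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for row, col in enumerate(perm):
            prod *= X[row, col]
        total += sign * prod
    return total

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
assert np.isclose(det_by_permutations(X), np.linalg.det(X))
# The trace (A.38) is just the sum of the diagonal elements.
assert np.isclose(np.trace(X), sum(X[n, n] for n in range(4)))
```

The N! terms in the permutation sum make this definition impractical for large N; library routines use LU factorization instead, so the comparison above is purely a consistency check.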

The transpose matrix $X^{T}$ is defined by $X^{T}_{nm} = X_{mn}$. The adjoint matrix $X^{\dagger}$ is the complex conjugate of the transpose: $X^{\dagger}_{nm} = X^{*}_{mn}$. A matrix X is symmetric if $X = X^{T}$, self-adjoint or hermitian if $X^{\dagger} = X$, and unitary if $X^{\dagger} X = X X^{\dagger} = I$, where I is the N × N identity matrix. Unitary transformations preserve the inner product. The hermitian and unitary matrices both belong to the larger class of normal matrices defined by $X^{\dagger} X = X X^{\dagger}$.

A matrix X is positive definite if all of its eigenvalues are real and non-negative. This immediately implies that the determinant and trace of the matrix are both non-negative. An equivalent definition is that X is positive definite if

$$\varphi^{\dagger} X \varphi \geqslant 0 \qquad (A.39)$$

for all vectors φ. For a positive-definite matrix X, there is a matrix Y such that $X = Y Y^{\dagger}$.
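One explicit construction of such a factor Y is the Cholesky decomposition, which applies when the eigenvalues are strictly positive. A minimal NumPy sketch (NumPy being our assumption, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(5)

# Build a positive-definite X as A A† (hermitian with non-negative eigenvalues).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
X = A @ A.conj().T

# Eigenvalues are real and non-negative, so det and trace are non-negative.
evals = np.linalg.eigvalsh(X)
assert np.all(evals >= 0)
assert np.linalg.det(X).real >= 0 and np.trace(X).real >= 0

# phi† X phi >= 0 for an arbitrary vector phi, cf. (A.39)
phi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert (phi.conj() @ X @ phi).real >= -1e-12

# An explicit Y with X = Y Y†: the (lower-triangular) Cholesky factor.
# This requires strictly positive eigenvalues, which holds here with
# probability one for a random A.
Y = np.linalg.cholesky(X)
assert np.allclose(Y @ Y.conj().T, X)
```

The factor Y is not unique: for any unitary W, the matrix YW also satisfies $(YW)(YW)^{\dagger} = X$.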

The normal matrices have the following important properties (Mac Lane and Birkhoff, 1967, Sec. XI-10).


Theorem A.1 (i) If f is an eigenvector of the normal matrix Z with eigenvalue z, then f is an eigenvector of $Z^{\dagger}$ with eigenvalue $z^{*}$, i.e. $Zf = zf \;\Rightarrow\; Z^{\dagger} f = z^{*} f$.

(ii) Every normal matrix has a complete, orthonormal set of eigenvectors.

Thus hermitian matrices have real eigenvalues and unitary matrices have eigenvalues

of modulus 1.
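Theorem A.1 can be illustrated numerically. The sketch below (using NumPy, which is our assumption and not part of the text) builds a random normal matrix as $U \,\mathrm{diag}(z)\, U^{\dagger}$ with U unitary, then checks both parts of the theorem:

```python
import numpy as np

rng = np.random.default_rng(1)

# A unitary U from the QR factorization of a random complex matrix,
# and a normal matrix Z = U diag(z) U† with random complex eigenvalues.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, _ = np.linalg.qr(A)
z = rng.standard_normal(5) + 1j * rng.standard_normal(5)
Z = U @ np.diag(z) @ U.conj().T

assert np.allclose(Z @ Z.conj().T, Z.conj().T @ Z)   # [Z, Z†] = 0, cf. (A.31)

vals, vecs = np.linalg.eig(Z)
# (ii) the eigenvectors form a complete orthonormal set
assert np.allclose(vecs.conj().T @ vecs, np.eye(5), atol=1e-8)
# (i) Zf = zf implies Z†f = z*f, checked for each eigenpair
for zval, fvec in zip(vals, vecs.T):
    assert np.allclose(Z.conj().T @ fvec, np.conj(zval) * fvec, atol=1e-8)
```

Replacing z by real values reproduces the hermitian case (real eigenvalues), and replacing it by phases $e^{i\theta}$ reproduces the unitary case (eigenvalues of modulus 1).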

A.4 Fourier transforms

A.4.1 Continuous transforms

In the mathematical literature it is conventional to denote the Fourier (integral) transform of a function f(x) of a single, real variable by

$$\widetilde{f}(k) = \int_{-\infty}^{\infty} dx \, f(x) \, e^{-ikx} \,, \qquad (A.40)$$

so that the inverse Fourier transform is

$$f(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, \widetilde{f}(k) \, e^{ikx} \,. \qquad (A.41)$$

The virtue of this notation is that it reminds us that the two functions are, generally, drastically different, e.g. if f(x) = 1, then $\widetilde{f}(k) = 2\pi\delta(k)$.

On the other hand, the tilde is a typographical nuisance in any discussion involving many uses of the Fourier transform. For this reason, we will sacrifice precision for

convenience. In our convention, the Fourier transform is indicated by the same letter,

and the distinction between the functions is maintained by paying attention to the

arguments.

The Fourier transform pair is accordingly written as

$$f(k) = \int_{-\infty}^{\infty} dx \, f(x) \, e^{-ikx} \,, \qquad (A.42)$$

$$f(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \, f(k) \, e^{ikx} \,. \qquad (A.43)$$

This is analogous to the familiar idea that the meaning of a vector V is independent

of the coordinate system used, despite the fact that the components (Vx , Vy , Vz ) of

V are changed by transforming to a new coordinate system. From this point of view,

the functions f(x) and f(k) are simply different representations of the same physical quantity. Confusion is readily avoided by paying attention to the physical significance

of the arguments, e.g. x denotes a point in position space, while k denotes a point

in the reciprocal space or k-space.

If the position-space function f(x) is real, then the Fourier transform satisfies

$$f^{*}(k) = [f(k)]^{*} = f(-k) \,. \qquad (A.44)$$
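The reality condition (A.44) has a direct discrete analogue: for a real sequence, the FFT (whose sign convention matches (A.42)) obeys $F^{*}(k) = F(-k)$ with indices taken mod N. A NumPy sketch (NumPy being our assumption, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(64)   # a real "position-space" signal
F = np.fft.fft(f)             # discrete analogue of (A.42)

N = len(f)
k = np.arange(N)
# Discrete form of (A.44): for real f, conj(F[k]) equals F[-k] (mod N)
assert np.allclose(np.conj(F), F[(-k) % N])
```

This conjugate symmetry is why real-input transforms (e.g. `np.fft.rfft`) need to store only about half of the frequency components.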

When the position variable x is replaced by the time t, it is customary in physics to use the opposite sign convention:

$$f(\omega) = \int_{-\infty}^{\infty} dt \, f(t) \, e^{i\omega t} \,, \qquad (A.45)$$

$$f(t) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \, f(\omega) \, e^{-i\omega t} \,. \qquad (A.46)$$

Fourier transforms of functions of several variables, typically f(r), are defined similarly:

$$f(\mathbf{k}) = \int d^{3}r \, f(\mathbf{r}) \, e^{-i\mathbf{k}\cdot\mathbf{r}} \,, \qquad (A.47)$$

$$f(\mathbf{r}) = \int \frac{d^{3}k}{(2\pi)^{3}} \, f(\mathbf{k}) \, e^{i\mathbf{k}\cdot\mathbf{r}} \,, \qquad (A.48)$$

where the integrals are over position space and reciprocal space (k-space) respectively. If f(r) is real then

$$f^{*}(\mathbf{k}) = f(-\mathbf{k}) \,. \qquad (A.49)$$

Combining these conventions for a space-time function f(r, t) yields the transform pair

$$f(\mathbf{k}, \omega) = \int d^{3}r \int dt \, f(\mathbf{r}, t) \, e^{-i(\mathbf{k}\cdot\mathbf{r} - \omega t)} \,, \qquad (A.50)$$

$$f(\mathbf{r}, t) = \int \frac{d^{3}k}{(2\pi)^{3}} \int \frac{d\omega}{2\pi} \, f(\mathbf{k}, \omega) \, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)} \,. \qquad (A.51)$$

The last result is simply the plane-wave expansion of f(r, t). If f(r, t) is real, then the Fourier transform satisfies

$$f^{*}(\mathbf{k}, \omega) = f(-\mathbf{k}, -\omega) \,. \qquad (A.52)$$

Two related and important results on Fourier transforms, which we quote for the one- and three-dimensional cases, are Parseval's theorem:

$$\int dt \, f^{*}(t) \, g(t) = \int \frac{d\omega}{2\pi} \, f^{*}(\omega) \, g(\omega) \,, \qquad (A.53)$$

$$\int d^{3}r \, f^{*}(\mathbf{r}) \, g(\mathbf{r}) = \int \frac{d^{3}k}{(2\pi)^{3}} \, f^{*}(\mathbf{k}) \, g(\mathbf{k}) \,, \qquad (A.54)$$

and the convolution theorem:

$$h(t) = \int dt' \, f(t - t') \, g(t') \ \text{ if and only if } \ h(\omega) = f(\omega) \, g(\omega) \,, \qquad (A.55)$$

$$h(\omega) = \int \frac{d\omega'}{2\pi} \, f(\omega - \omega') \, g(\omega') \ \text{ if and only if } \ h(t) = f(t) \, g(t) \,, \qquad (A.56)$$

$$h(\mathbf{r}) = \int d^{3}r' \, f(\mathbf{r} - \mathbf{r}') \, g(\mathbf{r}') \ \text{ if and only if } \ h(\mathbf{k}) = f(\mathbf{k}) \, g(\mathbf{k}) \,, \qquad (A.57)$$

$$h(\mathbf{k}) = \int \frac{d^{3}k'}{(2\pi)^{3}} \, f(\mathbf{k} - \mathbf{k}') \, g(\mathbf{k}') \ \text{ if and only if } \ h(\mathbf{r}) = f(\mathbf{r}) \, g(\mathbf{r}) \,. \qquad (A.58)$$

These results are readily derived by using the delta function identities (A.95) and (A.96).
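Both theorems can be verified in their discrete forms with the FFT: in the discrete analogue of (A.53) the factor $1/2\pi$ becomes $1/N$, and the convolution in (A.55) becomes circular. A NumPy sketch (NumPy being our assumption, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)
F, G = np.fft.fft(f), np.fft.fft(g)

# Discrete Parseval's theorem, cf. (A.53): sum f* g = (1/N) sum F* G
assert np.allclose(np.sum(np.conj(f) * g), np.sum(np.conj(F) * G) / N)

# Discrete convolution theorem, cf. (A.55): the inverse FFT of the
# pointwise product F G is the circular convolution of f and g.
h = np.real(np.fft.ifft(F * G))
h_direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                     for n in range(N)])
assert np.allclose(h, h_direct)
```

The direct sum costs $O(N^2)$ operations while the FFT route costs $O(N \log N)$, which is the practical content of the convolution theorem in numerical work.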


A.4.2 Fourier series

It is often useful to simplify the mathematics of the one-dimensional continuous transform by considering the functions to be defined on a finite interval (−L/2, L/2) and imposing periodic boundary conditions. The basis vectors are still of the form $u_k(x) = C \exp(ikx)$, but the periodicity condition, $u_k(-L/2) = u_k(L/2)$, restricts k to the discrete values

$$k = \frac{2\pi n}{L} \quad (n = 0, \pm 1, \pm 2, \ldots) \,. \qquad (A.59)$$

Normalization requires $C = 1/\sqrt{L}$, so the transform is

$$f_k = \frac{1}{\sqrt{L}} \int_{-L/2}^{L/2} dx \, f(x) \, e^{-ikx} \,, \qquad (A.60)$$

and the inverse transform f(x) is

$$f(x) = \frac{1}{\sqrt{L}} \sum_{k} f_k \, e^{ikx} \,. \qquad (A.61)$$

The continuous transform is recovered in the limit L → ∞ by first using eqn (A.60) to conclude that

$$\sqrt{L} \, f_k \to f(k) \ \text{ as } \ L \to \infty \,, \qquad (A.62)$$

and writing the inverse transform as

$$f(x) = \frac{1}{L} \sum_{k} \sqrt{L} \, f_k \, e^{ikx} \,. \qquad (A.63)$$

The difference between neighboring k-values is ∆k = 2π/L, so this equation can be recast as

$$f(x) = \sum_{k} \frac{\Delta k}{2\pi} \, \sqrt{L} \, f_k \, e^{ikx} \to \int \frac{dk}{2\pi} \, f(k) \, e^{ikx} \,. \qquad (A.64)$$

In Cartesian coordinates the three-dimensional discrete transform is defined on a rectangular parallelepiped with dimensions $L_x$, $L_y$, $L_z$. The one-dimensional results then imply

$$f_{\mathbf{k}} = \frac{1}{\sqrt{V}} \int_{V} d^{3}r \, f(\mathbf{r}) \, e^{-i\mathbf{k}\cdot\mathbf{r}} \,, \qquad (A.65)$$

where the k-vector is restricted to

$$\mathbf{k} = \frac{2\pi n_x}{L_x} \, \mathbf{u}_x + \frac{2\pi n_y}{L_y} \, \mathbf{u}_y + \frac{2\pi n_z}{L_z} \, \mathbf{u}_z \,, \qquad (A.66)$$

and V = Lx Ly Lz . The inverse transform is

$$f(\mathbf{r}) = \frac{1}{\sqrt{V}} \sum_{\mathbf{k}} f_{\mathbf{k}} \, e^{i\mathbf{k}\cdot\mathbf{r}} \,, \qquad (A.67)$$

and the integral transform is recovered by

$$\sqrt{V} \, f_{\mathbf{k}} \to f(\mathbf{k}) \ \text{ as } \ V \to \infty \,. \qquad (A.68)$$

The sum and integral over k are related by

$$\frac{1}{V} \sum_{\mathbf{k}} \to \int \frac{d^{3}k}{(2\pi)^{3}} \,, \qquad (A.69)$$

which in turn implies

$$V \delta_{\mathbf{k}, \mathbf{k}'} \to (2\pi)^{3} \, \delta(\mathbf{k} - \mathbf{k}') \,. \qquad (A.70)$$

A.5 Laplace transforms

Another useful idea, which is closely related to the one-dimensional Fourier transform, is the Laplace transform defined by

$$\widetilde{f}(\zeta) = \int_{0}^{\infty} dt \, e^{-\zeta t} \, f(t) \,. \qquad (A.71)$$

In this case, we will use the standard mathematical notation $\widetilde{f}(\zeta)$, since we do not use Laplace transforms as frequently as Fourier transforms. The inverse transform is

$$f(t) = \int_{\zeta_0 - i\infty}^{\zeta_0 + i\infty} \frac{d\zeta}{2\pi i} \, e^{\zeta t} \, \widetilde{f}(\zeta) \,. \qquad (A.72)$$

The line $(\zeta_0 - i\infty, \zeta_0 + i\infty)$ in the complex ζ-plane must lie to the right of any poles in the transform function $\widetilde{f}(\zeta)$.

The identity

$$\widetilde{\left(\frac{df}{dt}\right)}(\zeta) = \zeta \widetilde{f}(\zeta) - f(0) \qquad (A.73)$$

is useful in treating initial value problems for sets of linear, differential equations. Thus to solve the equations

$$\frac{df_n}{dt} = \sum_{m} V_{nm} f_m \,, \qquad (A.74)$$

with a constant matrix V, and initial data $f_n(0)$, one takes the Laplace transform to get

$$\zeta \widetilde{f}_n(\zeta) - \sum_{m} V_{nm} \widetilde{f}_m(\zeta) = f_n(0) \,. \qquad (A.75)$$

This set of algebraic equations can be solved to express $\widetilde{f}_n(\zeta)$ in terms of $f_n(0)$. Inverting the Laplace transform yields the solution in the time domain.
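As a concrete instance of this procedure, solving (A.75) gives $\widetilde{f}(\zeta) = (\zeta I - V)^{-1} f(0)$, whose inverse transform is $f(t) = e^{Vt} f(0)$. The sketch below (using NumPy, which is our assumption and not part of the text) builds $e^{Vt}$ from the eigendecomposition of a diagonalizable V and checks the result against the original differential equation (A.74):

```python
import numpy as np

rng = np.random.default_rng(4)
V = rng.standard_normal((3, 3))
f0 = rng.standard_normal(3)

# For diagonalizable V = S diag(v) S^-1, exp(Vt) = S diag(exp(v t)) S^-1.
vals, S = np.linalg.eig(V)

def f(t):
    """Time-domain solution f(t) = exp(Vt) f(0)."""
    return np.real(S @ np.diag(np.exp(vals * t)) @ np.linalg.inv(S) @ f0)

# Check that f(t) satisfies df/dt = V f, using a central difference,
# and that the initial data are reproduced.
t, h = 0.7, 1e-6
dfdt = (f(t + h) - f(t - h)) / (2 * h)
assert np.allclose(dfdt, V @ f(t), atol=1e-5)
assert np.allclose(f(0.0), f0)
```

The eigenvalues of V are precisely the poles of $\widetilde{f}_n(\zeta)$, which connects this construction to the residue formula (A.80) below.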

The convolution theorem for Laplace transforms is

$$\int_{0}^{t} dt' \, g(t - t') \, f(t') = \int_{\zeta_0 - i\infty}^{\zeta_0 + i\infty} \frac{d\zeta}{2\pi i} \, \widetilde{g}(\zeta) \, \widetilde{f}(\zeta) \, e^{\zeta t} \,, \qquad (A.76)$$

where the integration contour is to the right of any poles of both $\widetilde{g}(\zeta)$ and $\widetilde{f}(\zeta)$.


An important point for applications to physics is that poles in the Laplace trans-

form correspond to exponential time dependence. For example, the function f (t) =

exp (zt) has the transform

$$\widetilde{f}(\zeta) = \frac{1}{\zeta - z} \,. \qquad (A.77)$$

More generally, consider a function $\widetilde{f}(\zeta)$ with N simple poles in ζ:

$$\widetilde{f}(\zeta) = \frac{1}{(\zeta - z_1) \cdots (\zeta - z_N)} \,, \qquad (A.78)$$

where the complex numbers $z_1, \ldots, z_N$ are all distinct. The inverse transform is

$$f(t) = \int_{\zeta_0 - i\infty}^{\zeta_0 + i\infty} \frac{d\zeta}{2\pi i} \, \frac{e^{\zeta t}}{(\zeta - z_1) \cdots (\zeta - z_N)} \,, \qquad (A.79)$$

where ζ0 > max[Re z1 , . . . , Re zN ]. The contour can be closed by a large semicircle in

the left half plane, and for N > 1 the contribution from the semicircle can be neglected.

The integral is therefore given by the sum of the residues,

$$f(t) = \sum_{n=1}^{N} e^{z_n t} \prod_{j \neq n} \frac{1}{z_n - z_j} \,, \qquad (A.80)$$

which explicitly exhibits f (t) as a sum of exponentials.
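The residue sum (A.80) can be checked against the defining integral (A.71) by numerical quadrature. In the sketch below (NumPy being our assumption, not part of the text) the poles are placed in the left half plane so that the transform converges for real ζ > 0:

```python
import numpy as np

# Three distinct simple poles with negative real parts.
z = np.array([-1.0, -2.0, -3.0])

def f(t):
    """Inverse transform (A.80): sum of residues of e^{zeta t} f~(zeta)."""
    total = np.zeros_like(t)
    for n, zn in enumerate(z):
        coeff = np.prod([1.0 / (zn - zj) for j, zj in enumerate(z) if j != n])
        total += coeff * np.exp(zn * t)
    return total

# Forward check via (A.71): integral_0^inf e^{-zeta t} f(t) dt should
# equal 1/((zeta - z1)(zeta - z2)(zeta - z3)), cf. (A.78).
zeta = 0.5
t = np.linspace(0.0, 40.0, 400001)
y = np.exp(-zeta * t) * f(t)
integral = np.sum(y[1:] + y[:-1]) * (t[1] - t[0]) / 2.0   # trapezoid rule
exact = 1.0 / np.prod(zeta - z)
assert abs(integral - exact) < 1e-6
```

Truncating the integral at t = 40 is harmless here because the slowest term decays like $e^{-(\zeta + 1)t}$; poles with positive real parts would instead signal exponentially growing solutions.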

A.6 Functional analysis

A.6.1 Linear functionals

In normal usage, a function, e.g. f(x), is a rule assigning a unique value to each value of its argument. The argument is typically a point in some finite-dimensional space, e.g. the real numbers R, the complex numbers C, three-dimensional space R³, etc. The values of the function are also points in a finite-dimensional space. For example, the classical electric field is represented by a function E(r) that assigns a vector, a point in R³, to each position r in R³.

A rule, X, assigning a value to each point f in an infinite-dimensional space M (which is usually a space of functions) is called a functional and written as X[f]. The square brackets surrounding the argument are intended to distinguish functionals from functions of a finite number of variables.

If M is a vector space, e.g. a Hilbert space, then a functional Y[f] that obeys

$$Y[\alpha f + \beta g] = \alpha Y[f] + \beta Y[g] \,, \qquad (A.81)$$

for all scalars α, β and all functions f, g ∈ M, is called a linear functional. The family, M′, of linear functionals on M is called the dual space of M. The dual space is also a vector space, with linear combinations of its elements defined by

$$(\alpha X + \beta Y)[f] = \alpha X[f] + \beta Y[f] \qquad (A.82)$$

for all f ∈ M.


A.6.2 Generalized functions

In Section 3.1.2 the definition (3.18) and the rule (3.21) are presented with the cavalier disregard for mathematical niceties that is customary in physics. There are however