

and $\varphi_k = e^{i(k-N/2)x}$. Denoting by
$$S_N = \mathrm{span}\{\varphi_k,\ 0 \le k \le N-1\},$$
if $f \in L^2(0,2\pi)$, its truncation of order $N$ satisfies the following optimal approximation property in the least-squares sense
$$\|f - f_N\|_{L^2(0,2\pi)} = \min_{g \in S_N} \|f - g\|_{L^2(0,2\pi)}.$$

Set $h = 2\pi/N$ and $x_j = jh$, for $j = 0,\dots,N-1$, and introduce the following discrete scalar product
$$(f,g)_N = h \sum_{j=0}^{N-1} f(x_j)\,\bar g(x_j). \qquad (10.51)$$

Replacing $(f,\varphi_k)$ in (10.50) with $(f,\varphi_k)_N$, we get the discrete Fourier coefficients of the function $f$
$$\tilde f_k = \frac{1}{N}\sum_{j=0}^{N-1} f(x_j)\,e^{-ikjh}\,e^{ij\pi} = \frac{1}{N}\sum_{j=0}^{N-1} f(x_j)\,W_N^{(k-N/2)j} \qquad (10.52)$$


for $k = 0,\dots,N-1$, where
$$W_N = \exp\!\left(-i\,\frac{2\pi}{N}\right)$$
is the principal root of unity of order $N$. According to (10.4), the trigonometric polynomial
$$\Pi^F_N f(x) = \sum_{k=0}^{N-1} \tilde f_k\, e^{i(k-N/2)x} \qquad (10.53)$$
is called the discrete Fourier series of order $N$ of $f$.

Lemma 10.1 The following property holds
$$(\varphi_l, \varphi_j)_N = h \sum_{k=0}^{N-1} e^{-ik(l-j)h} = 2\pi\,\delta_{jl}, \qquad 0 \le l, j \le N-1, \qquad (10.54)$$
where $\delta_{jl}$ is the Kronecker symbol.

Proof. For $l = j$ the result is immediate. Thus, assume $l \ne j$; we have that
$$\sum_{k=0}^{N-1} e^{-ik(l-j)h} = \frac{1 - e^{-i(l-j)hN}}{1 - e^{-i(l-j)h}} = 0.$$
Indeed, the numerator is $1 - (\cos(2\pi(l-j)) - i\sin(2\pi(l-j))) = 1 - 1 = 0$, while the denominator cannot vanish: it would vanish iff $(j-l)h$ were a multiple of $2\pi$, i.e., $|j-l| \ge N$, which is impossible since $0 \le l, j \le N-1$. $\diamond$

Thanks to Lemma 10.1, the trigonometric polynomial $\Pi^F_N f$ is the Fourier interpolate of $f$ at the nodes $x_j$, that is,
$$\Pi^F_N f(x_j) = f(x_j), \qquad j = 0, 1, \dots, N-1.$$
Indeed, using (10.52) and (10.54) in (10.53) it follows that
$$\Pi^F_N f(x_j) = \sum_{k=0}^{N-1} \tilde f_k\, e^{ikjh}\, e^{-ijh\frac{N}{2}} = \frac{1}{N}\sum_{l=0}^{N-1} f(x_l)\, e^{i(l-j)h\frac{N}{2}} \sum_{k=0}^{N-1} e^{-ik(l-j)h} = f(x_j).$$

Therefore, looking at the first and last equality, we get
$$f(x_j) = \sum_{k=0}^{N-1} \tilde f_k\, e^{i(k-\frac{N}{2})jh} = \sum_{k=0}^{N-1} \tilde f_k\, W_N^{-(k-\frac{N}{2})j}, \qquad j = 0,\dots,N-1. \qquad (10.55)$$

The mapping $\{f(x_j)\} \to \{\tilde f_k\}$ described by (10.52) is called the Discrete Fourier Transform (DFT), while the mapping (10.55) from $\{\tilde f_k\}$ to $\{f(x_j)\}$ is called the inverse transform (IDFT). Both DFT and IDFT can be written in matrix form as $\{\tilde f_k\} = T\{f(x_j)\}$ and $\{f(x_j)\} = C\{\tilde f_k\}$, where $T \in \mathbb{C}^{N\times N}$, $C$ denotes the inverse of $T$ and
$$T_{kj} = \frac{1}{N}\, W_N^{(k-\frac{N}{2})j}, \qquad k, j = 0,\dots,N-1,$$
$$C_{jk} = W_N^{-(k-\frac{N}{2})j}, \qquad j, k = 0,\dots,N-1.$$
A naive implementation of the matrix-vector computation in the DFT and IDFT would require $N^2$ operations. Using the FFT (Fast Fourier Transform) algorithm, only $O(N\log_2 N)$ flops are needed, provided that $N$ is a power of 2, as will be explained in Section 10.9.2.
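As an illustration (a minimal sketch, not taken from the book), the matrices $T$ and $C$ can be assembled entrywise from the definitions above and the identity $TC = I$ checked numerically; the sample vector used below is an arbitrary choice.

% Build T and C from their entries and verify that C is the inverse of T.
N  = 8; h = 2*pi/N;
WN = exp(-1i*h);                         % principal N-th root of unity
[k,j] = ndgrid(0:N-1,0:N-1);             % k along rows, j along columns
T = (1/N)*WN.^((k-N/2).*j);              % T(k+1,j+1) = WN^((k-N/2)j)/N
C = (WN.^(-(k-N/2).*j)).';               % C(j+1,k+1) = WN^(-(k-N/2)j)
norm(T*C - eye(N))                       % of the order of eps
fx = exp(sin((0:N-1)'*h));               % samples of a smooth function
norm(C*(T*fx) - fx)                      % DFT followed by IDFT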

The function $\Pi^F_N f \in S_N$ introduced in (10.53) is the solution of the minimization problem $\|f - \Pi^F_N f\|_N \le \|f - g\|_N$ for any $g \in S_N$, where $\|\cdot\|_N = (\cdot,\cdot)_N^{1/2}$ is a discrete norm for $S_N$. In the case where $f$ is periodic with all its derivatives up to order $s$ ($s \ge 1$), an error estimate analogous to that for Chebyshev and Legendre interpolation holds
$$\|f - \Pi^F_N f\|_{L^2(0,2\pi)} \le C N^{-s}\,\|f\|_s$$
and also
$$\max_{0\le x\le 2\pi} |f(x) - \Pi^F_N f(x)| \le C N^{1/2-s}\,\|f\|_s.$$

In a similar manner, we also have
$$|(f, v_N) - (f, v_N)_N| \le C N^{-s}\,\|f\|_s\,\|v_N\|$$
for any $v_N \in S_N$ and, in particular, setting $v_N = 1$, we have the following error for the quadrature formula (10.51)
$$\left|\int_0^{2\pi} f(x)\,dx - h\sum_{j=0}^{N-1} f(x_j)\right| \le C N^{-s}\,\|f\|_s$$

(see [CHQZ88], Chapter 2, for the proof).
Notice that $h\sum_{j=0}^{N-1} f(x_j)$ is nothing else than the composite trapezoidal rule for approximating the integral $\int_0^{2\pi} f(x)\,dx$. Therefore, such a formula

turns out to be extremely accurate when dealing with periodic and smooth
integrands.
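As a quick illustration (not from the book), the accuracy of this rule for a smooth periodic integrand can be observed in a few lines of MATLAB; the exact value used below, $\int_0^{2\pi} e^{\sin x}\,dx = 2\pi I_0(1)$, involves the modified Bessel function $I_0$, and the test integrand is an arbitrary choice.

% Trapezoidal sum h*sum f(x_j) for the periodic integrand f(x) = exp(sin(x));
% the error decays faster than any power of 1/N.
I_exact = 2*pi*besseli(0,1);
for N = [4 8 16 32]
  h = 2*pi/N; x = (0:N-1)*h;
  err = abs(h*sum(exp(sin(x))) - I_exact);
  fprintf('N = %2d   error = %.3e\n', N, err);
end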

Programs 88 and 89 provide an implementation of the DFT and IDFT. The input parameter f is a string containing the function $f$ to be transformed, while fc is a vector of size N containing the values $\tilde f_k$.

Program 88 - dft : Discrete Fourier transform

function fc = dft(N,f)
h = 2*pi/N; x = [0:h:2*pi*(1-1/N)]; fx = eval(f); wn = exp(-i*h);
for k=0:N-1
  s = 0;
  for j=0:N-1
    s = s + fx(j+1)*wn^((k-N/2)*j);   % accumulate the sum in (10.52)
  end
  fc(k+1) = s/N;
end
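A possible use of Program 88 (an illustrative sketch, not one of the book's programs; the test function and the grid size are arbitrary choices) is to check the error estimates above for the smooth periodic function $f(x) = e^{\sin x}$: the interpolate (10.53) is evaluated on a fine grid and the maximum error decays rapidly as $N$ grows.

% Spectral decay of the interpolation error for f(x) = exp(sin(x)),
% assuming dft.m (Program 88) is available on the path.
xx = linspace(0,2*pi,2000);
for N = [8 16 32]
  fc = dft(N,'exp(sin(x))');
  PF = zeros(size(xx));
  for k = 0:N-1
    PF = PF + fc(k+1)*exp(1i*(k-N/2)*xx);    % evaluate (10.53) on xx
  end
  fprintf('N = %2d   max error = %.3e\n', N, max(abs(PF - exp(sin(xx)))));
end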


Program 89 - idft : Inverse discrete Fourier transform

function fv = idft(N,fc)
h = 2*pi/N; wn = exp(-i*h);
for k=0:N-1
  s = 0;
  for j=0:N-1
    s = s + fc(j+1)*wn^(-k*(j-N/2));   % accumulate the sum in (10.55)
  end
  fv(k+1) = s;
end
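The two programs can be checked one against the other; the following lines (a hypothetical usage example, with an arbitrary test function) verify that the IDFT of the DFT returns the nodal values of $f$ up to rounding errors.

% Round trip: idft(N,dft(N,f)) should reproduce the samples f(x_j).
N  = 16; x = (0:N-1)*2*pi/N;
fc = dft(N,'exp(sin(x))');          % discrete Fourier coefficients (10.52)
fv = idft(N,fc);                    % nodal values via (10.55)
max(abs(fv - exp(sin(x))))          % of the order of machine precision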




10.9.1 The Gibbs Phenomenon
Consider the discontinuous function $f(x) = x/\pi$ for $x \in [0,\pi]$ and equal to $x/\pi - 2$ for $x \in (\pi, 2\pi]$, and compute its DFT using Program 88. The interpolate $\Pi^F_N f$ is shown in Figure 10.3 (above) for $N = 8, 16, 32$. Notice the spurious oscillations around the point of discontinuity of $f$, whose maximum amplitude, however, tends to a finite limit. The arising of these oscillations is known as the Gibbs phenomenon and is typical of functions with isolated jump discontinuities; it affects the behavior of the truncated Fourier series not only in the neighborhood of the discontinuity but also over the entire interval, as can be clearly seen in the figure. The convergence rate of the truncated series for functions with jump discontinuities is linear in $N^{-1}$ at every given non-singular point of the interval of definition of the function (see [CHQZ88], Section 2.1.4).
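The experiment can be reproduced, for instance, with the following sketch (not one of the book's programs; the fine grid of 1000 points is an arbitrary choice), which computes the coefficients of the sawtooth with Program 88 and evaluates the interpolate (10.53).

% Fourier interpolation of the sawtooth: note the oscillations near x = pi.
N   = 32;
saw = @(x) (x<=pi).*(x/pi) + (x>pi).*(x/pi-2);       % the sawtooth function
fc  = dft(N,'(x<=pi).*(x/pi) + (x>pi).*(x/pi-2)');   % coefficients (10.52)
xx  = linspace(0,2*pi,1000);
PF  = zeros(size(xx));
for k = 0:N-1
  PF = PF + fc(k+1)*exp(1i*(k-N/2)*xx);              % interpolate (10.53)
end
plot(xx,real(PF),xx,saw(xx))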

FIGURE 10.3. Above: Fourier interpolate of the sawtooth function (thick solid line) for N = 8 (dash-dotted line), 16 (dashed line) and 32 (thin solid line). Below: the same information is plotted in the case of the Lanczos smoothing.

Since the Gibbs phenomenon is related to the slow decay of the Fourier coefficients of a discontinuous function, smoothing procedures can be profitably employed to attenuate the higher-order Fourier coefficients. This can be done by multiplying each coefficient $\tilde f_k$ by a factor $\sigma_k$ such that $\sigma_k$ is a decreasing function of $k$. An example is provided by the Lanczos smoothing
$$\sigma_k = \frac{\sin\bigl(2(k - N/2)(\pi/N)\bigr)}{2(k - N/2)(\pi/N)}, \qquad k = 0,\dots,N-1. \qquad (10.56)$$

The effect of applying the Lanczos smoothing to the computation of the DFT of the above function $f$ is shown in Figure 10.3 (below): the oscillations have almost completely disappeared.
For a deeper analysis of this subject we refer to [CHQZ88], Chapter 2.
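Continuing the sketch given above for the sawtooth (one possible implementation, with the indeterminate factor at $k = N/2$ set to its limit value 1), the factors (10.56) can be applied to the coefficients before evaluating the interpolate.

% Damp the coefficients computed above with the Lanczos factors (10.56).
t     = 2*((0:N-1) - N/2)*(pi/N);
sigma = sin(t)./t;  sigma(t==0) = 1;     % sin(t)/t, with the limit 1 at t = 0
fcs   = fc.*sigma;                       % smoothed coefficients
PFs   = zeros(size(xx));
for k = 0:N-1
  PFs = PFs + fcs(k+1)*exp(1i*(k-N/2)*xx);
end
plot(xx,real(PFs),xx,saw(xx))            % the oscillations are strongly damped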


10.9.2 The Fast Fourier Transform
As pointed out in the previous section, computing the discrete Fourier transform (DFT) or its inverse (IDFT) as a matrix-vector product would require $N^2$ operations. In this section we illustrate the basic steps of the Cooley-Tukey algorithm [CT65], commonly known as the Fast Fourier Transform (FFT). The computation of a DFT of order $N$ is split into DFTs of order $p_0,\dots,p_m$, where $\{p_i\}$ are the prime factors of $N$. If $N$ is a power of 2, the computational cost has the order of $N\log_2 N$ flops.
A recursive algorithm to compute the DFT when $N$ is a power of 2 is described in the following. Let $f = (f_i)^T$, $i = 0,\dots,N-1$, and set
$$p(x) = \frac{1}{N}\sum_{j=0}^{N-1} f_j x^j.$$
Then, computing the DFT of the vector $f$ amounts to

evaluating $p(W_N^{k-\frac{N}{2}})$ for $k = 0,\dots,N-1$. Let us introduce the polynomials
$$p_e(x) = \frac{1}{N}\left(f_0 + f_2 x + \dots + f_{N-2}\, x^{\frac{N}{2}-1}\right),$$
$$p_o(x) = \frac{1}{N}\left(f_1 + f_3 x + \dots + f_{N-1}\, x^{\frac{N}{2}-1}\right).$$
Notice that
$$p(x) = p_e(x^2) + x\,p_o(x^2),$$
from which it follows that the computation of the DFT of $f$ can be carried out by evaluating the polynomials $p_e$ and $p_o$ at the points $W_N^{2(k-\frac{N}{2})}$, $k = 0,\dots,N-1$. Since
$$W_N^{2(k-\frac{N}{2})} = W_N^{2k-N} = W_N^{2k}\exp(i2\pi) = \exp\!\left(-i\,\frac{2\pi k}{N/2}\right) = W_{N/2}^{k},$$
it turns out that we must evaluate $p_e$ and $p_o$ at the principal roots of unity of order $N/2$. In this manner the DFT of order $N$ is rewritten in terms of two DFTs of order $N/2$; of course, we can recursively apply this procedure again to $p_e$ and $p_o$. The process is terminated when the degree of the last generated polynomials is equal to one.
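The splitting can be checked numerically with a short script (illustrative only; the coefficient vector and the evaluation point are arbitrary choices): $p(x)$ and $p_e(x^2) + x\,p_o(x^2)$ coincide, in particular at the points $W_N^{k-N/2}$.

% Check of the even/odd splitting p(x) = pe(x^2) + x*po(x^2).
N  = 8; f = rand(N,1);
p  = @(x) polyval(flipud(f)/N,x);          % p(x) = (1/N)*(f0 + f1*x + ...)
pe = @(x) polyval(flipud(f(1:2:N))/N,x);   % even-indexed coefficients
po = @(x) polyval(flipud(f(2:2:N))/N,x);   % odd-indexed coefficients
x  = exp(-2i*pi/N)^(3-N/2);                % one of the points W_N^(k-N/2)
abs(p(x) - (pe(x^2) + x*po(x^2)))          % of the order of eps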

In Program 90 we propose a simple implementation of the FFT recursive algorithm. The input parameters are the vector f containing the NN values $f_k$, where NN is a power of 2.

Program 90 - fftrec : FFT algorithm in the recursive version

function [fftv]=fftrec(f,NN)
N = length(f); w = exp(-2*pi*sqrt(-1)/N);
if N == 2
  fftv = f(1)+w.^[-NN/2:NN-1-NN/2]*f(2);
else
  a1 = f(1:2:N); b1 = f(2:2:N);            % even- and odd-indexed entries
  a2 = fftrec(a1,NN); b2 = fftrec(b1,NN);
  for k=-NN/2:NN-1-NN/2
    fftv(k+1+NN/2) = a2(k+1+NN/2) + b2(k+1+NN/2)*w^k;
  end
end
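As a consistency check (a hypothetical usage example, assuming both Programs 88 and 90 are on the path, with an arbitrary test function): the recursive program, as written, omits the factor $1/N$ appearing in (10.52), so its output divided by $N$ should coincide with the coefficients returned by Program 88 whenever $N$ is a power of 2.

% Compare the recursive FFT with the direct summation of Program 88.
N   = 16; h = 2*pi/N; x = (0:N-1)*h;
fc1 = dft(N,'exp(sin(x))');              % direct computation, O(N^2)
fc2 = fftrec(exp(sin(x)),N)/N;           % recursive FFT, O(N log2 N)
max(abs(fc1 - fc2))                      % of the order of eps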

Remark 10.4 An FFT procedure can also be set up when $N$ is not a power of 2. The simplest approach consists of adding some zero samples to the original sequence $\{f_i\}$ in such a way as to obtain a total number of $\tilde N = 2^p$ values. This technique, however, does not necessarily yield the correct result. Therefore, an effective alternative is based on partitioning the Fourier matrix $C$ into subblocks of smaller size. Practical FFT implementations can handle both strategies (see, for instance, the fft package available in MATLAB).
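The warning about zero padding can be illustrated with MATLAB's built-in fft (the only assumption here is the standard fft function; the data are arbitrary): apart from the first entry, which is the sum of the samples in both cases, the 8-point transform of the padded sequence differs from the 6-point transform of the original one.

% Zero padding changes the transform.
v  = (1:6).';
y6 = fft(v);                 % 6-point transform
y8 = fft([v; 0; 0]);         % 8-point transform of the padded sequence
[y6(1:3) y8(1:3)]            % only the first entries (the sums) coincide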

10.10 Approximation of Function Derivatives
A problem which is often encountered in numerical analysis is the approximation of the derivative of a function $f(x)$ on a given interval $[a,b]$. A natural approach to it consists of introducing in $[a,b]$ $n+1$ nodes $\{x_k,\ k = 0,\dots,n\}$, with $x_0 = a$, $x_n = b$ and $x_{k+1} = x_k + h$, $k = 0,\dots,n-1$, where $h = (b-a)/n$. Then, we approximate $f'(x_i)$ using the nodal values $f(x_k)$ as
$$h\sum_{k=-m}^{m} \alpha_k u_{i-k} = \sum_{k=-m'}^{m'} \beta_k f(x_{i-k}), \qquad (10.57)$$
where $\{\alpha_k\}$, $\{\beta_k\} \in \mathbb{R}$ are $2(m+m')+2$ coefficients to be determined and $u_k$ is the desired approximation to $f'(x_k)$.
A non-negligible issue in the choice of scheme (10.57) is the computational efficiency. In this respect, it is worth noting that, if $m \ne 0$, determining the values $\{u_i\}$ requires the solution of a linear system.
The set of nodes which are involved in constructing the derivative of $f$ at a certain node is called a stencil. The band of the matrix associated with system (10.57) increases as the stencil gets larger.


10.10.1 Classical Finite Difference Methods
The simplest way to generate a formula like (10.57) consists of resorting to the definition of the derivative. If $f'(x_i)$ exists, then
$$f'(x_i) = \lim_{h\to 0^+} \frac{f(x_i + h) - f(x_i)}{h}. \qquad (10.58)$$

Replacing the limit with the incremental ratio, with $h$ finite, yields the approximation
$$u_i^{FD} = \frac{f(x_{i+1}) - f(x_i)}{h}, \qquad 0 \le i \le n-1. \qquad (10.59)$$
Relation (10.59) is a special instance of (10.57) obtained by setting $m = 0$, $\alpha_0 = 1$, $m' = 1$, $\beta_{-1} = 1$, $\beta_0 = -1$, $\beta_1 = 0$.
The right side of (10.59) is called the forward finite difference, and the approximation being used corresponds to replacing $f'(x_i)$ with the slope of the straight line passing through the points $(x_i, f(x_i))$ and $(x_{i+1}, f(x_{i+1}))$, as shown in Figure 10.4.
To estimate the error that is made, it suffices to expand $f$ in a Taylor series, obtaining
$$f(x_{i+1}) = f(x_i) + h f'(x_i) + \frac{h^2}{2} f''(\xi_i), \qquad \text{with } \xi_i \in (x_i, x_{i+1}).$$

We assume henceforth that $f$ has the required regularity, so that
$$f'(x_i) - u_i^{FD} = -\frac{h}{2} f''(\xi_i). \qquad (10.60)$$
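Estimate (10.60) predicts first-order convergence; this can be checked with a few lines of MATLAB (an illustrative sketch, with $f(x) = \sin x$ and $x_i = 1$ as arbitrary choices).

% First-order accuracy of the forward finite difference (10.59)-(10.60):
% halving h roughly halves the error, consistently with the O(h) estimate.
xi = 1;
for h = 10.^(-(1:5))
  uFD = (sin(xi+h) - sin(xi))/h;          % forward difference
  fprintf('h = %.0e   error = %.3e\n', h, abs(uFD - cos(xi)));
end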


[Figure 10.4: graphical interpretation of the finite difference approximation of $f'(x_i)$, with the nodal values $f(x_{i-1})$, $f(x_i)$ and $f(x_{i+1})$ marked.]

