and A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.
The right-hand sides in example (5.4.2) depend on past values only.
However, dependencies on current values can also be included (so-
called simultaneous dynamic model [1]). Consider the modification of
the bivariate process (5.4.2)
y_1(t) = a_{11} y_1(t-1) + a_{12} y_2(t) + e_1(t)
y_2(t) = a_{21} y_1(t) + a_{22} y_2(t-1) + e_2(t)    (5.4.4)
The matrix form of this process is

\begin{pmatrix} 1 & -a_{12} \\ -a_{21} & 1 \end{pmatrix}
\begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} =
\begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix}
\begin{pmatrix} y_1(t-1) \\ y_2(t-1) \end{pmatrix} +
\begin{pmatrix} e_1(t) \\ e_2(t) \end{pmatrix}    (5.4.5)
Multiplying both sides of (5.4.5) by the inverse of the left-hand matrix yields

\begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} =
(1 - a_{12} a_{21})^{-1}
\begin{pmatrix} a_{11} & a_{12} a_{22} \\ a_{11} a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} y_1(t-1) \\ y_2(t-1) \end{pmatrix} +
(1 - a_{12} a_{21})^{-1}
\begin{pmatrix} 1 & a_{12} \\ a_{21} & 1 \end{pmatrix}
\begin{pmatrix} e_1(t) \\ e_2(t) \end{pmatrix}    (5.4.6)
Equation (5.4.6) shows that simultaneous dynamic models can also be represented in VAR form.
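As a quick numerical check of (5.4.6) (exercise 4 below asks for the derivation), one can compare the explicit inverse with a direct matrix inversion. A minimal sketch in Python with numpy, using arbitrary coefficient values:

```python
import numpy as np

# Arbitrary coefficients for the simultaneous model (5.4.4),
# chosen only so that 1 - a12*a21 != 0 and the inverse exists.
a11, a12, a21, a22 = 0.5, 0.3, 0.2, 0.4

M = np.array([[1.0, -a12],
              [-a21, 1.0]])                  # left-hand matrix of (5.4.5)
M_inv = np.array([[1.0, a12],
                  [a21, 1.0]]) / (1.0 - a12 * a21)
assert np.allclose(M_inv, np.linalg.inv(M))  # the explicit inverse used in (5.4.6)

# Reduced-form VAR(1) matrix of (5.4.6)
A = np.array([[a11, 0.0], [0.0, a22]])
print(M_inv @ A)  # equals (1 - a12*a21)^(-1) [[a11, a12*a22], [a11*a21, a22]]
```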
In the general case of n-variate time series, VAR(p) has the form [2]
y(t) = a_0 + A_1 y(t-1) + ... + A_p y(t-p) + ε(t)    (5.4.7)
where y(t), a_0, and ε(t) are n-dimensional vectors and A_i (i = 1, ..., p) are n × n matrices. Generally, the white noises ε(t) are mutually independent. Let us introduce

A_p(L) = I_n - A_1 L - ... - A_p L^p    (5.4.8)
where I_n is the n × n identity matrix. Then equation (5.4.7) can be presented as
A_p(L) y(t) = a_0 + ε(t)    (5.4.9)
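To make the VAR(p) notation concrete, the following sketch simulates a bivariate VAR(1) of the form (5.4.7) and recovers its coefficient matrix by least squares. It assumes the statsmodels package; the coefficient values are illustrative only:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1), i.e. (5.4.7) with p = 1:
# y(t) = a0 + A1 y(t-1) + eps(t)
a0 = np.array([0.1, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
y = np.zeros((1000, 2))
for t in range(1, 1000):
    y[t] = a0 + A1 @ y[t - 1] + 0.1 * rng.standard_normal(2)

results = VAR(y).fit(1)   # least-squares estimates of a0 and A1
print(results.coefs[0])   # estimated A1, close to the true matrix above
```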
Two covariance-stationary processes x(t) and y(t) are jointly covariance-stationary if their covariance Cov(x(t), y(t-s)) depends on lag s only. The condition for covariance-stationarity of VAR(p) is a generalization of (5.1.11) for AR(p). Namely, all values of z satisfying the equation
|I_n - A_1 z - ... - A_p z^p| = 0    (5.4.10)
must lie outside the unit circle. Equivalently, all solutions of the
equation

|I_n λ^p - A_1 λ^{p-1} - ... - A_p| = 0    (5.4.11)
must satisfy the condition |λ| < 1.
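In practice, condition (5.4.11) can be verified by computing the eigenvalues of the companion matrix of the VAR, whose characteristic equation coincides with (5.4.11). A minimal numpy sketch (the helper name is mine):

```python
import numpy as np

def is_var_stationary(A_list):
    """Check covariance-stationarity of a VAR(p): all eigenvalues of the
    companion matrix (whose characteristic equation is (5.4.11)) must
    satisfy |lambda| < 1."""
    n, p = A_list[0].shape[0], len(A_list)
    companion = np.zeros((n * p, n * p))
    companion[:n, :] = np.hstack(A_list)       # top block row: A1, ..., Ap
    companion[n:, :-n] = np.eye(n * (p - 1))   # shifted identity blocks
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1))

# Scalar AR(2) case: y(t) = 1.2 y(t-1) - 0.32 y(t-2) + e(t)
print(is_var_stationary([np.array([[1.2]]), np.array([[-0.32]])]))
# True: the companion eigenvalues are 0.8 and 0.4
```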
The problem of whether the lagged values of process y can improve
prediction of process x (so-called Granger causality) is often posed in
forecasting. It is said that if y fails to Granger-cause x, then the
following condition holds for all s > 0
MSE(E[x(t+s) | x(t), x(t-1), ...]) = MSE(E[x(t+s) | x(t), x(t-1), ..., y(t), y(t-1), ...])    (5.4.12)
In this case, y is called an exogenous variable with respect to x. For example, y_2(t) is exogenous with respect to y_1(t) in (5.4.2) if a_{12} = 0. General methods for testing Granger causality are described in [2].
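For illustration, statsmodels provides a standard implementation of such tests. The sketch below simulates a system of type (5.4.2) in which y2 drives y1, and then tests whether the second column Granger-causes the first; the parameter values are arbitrary:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)

# Simulate a system of type (5.4.2) where y2 drives y1 but not vice versa
e = 0.5 * rng.standard_normal((500, 2))
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t, 1] = 0.5 * y[t - 1, 1] + e[t, 1]
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.4 * y[t - 1, 1] + e[t, 0]

# Tests the null that the SECOND column does not Granger-cause the first;
# here the null should be rejected (small p-values)
grangercausalitytests(y, maxlag=2)
```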
The last notion that is introduced in this section is cointegration.
Two processes are cointegrated if they both have unit roots (i.e., they
both are I(1) ), but some linear combination of these processes is
stationary (i.e., is I(0)). This definition can be extended to an arbitrary number of processes. As an example, consider a bivariate model
y_1(t) = a y_2(t) + e_1(t)
y_2(t) = y_2(t-1) + e_2(t)    (5.4.13)
Both processes y_1(t) and y_2(t) have unit roots: y_2(t) is a random walk, and y_1(t) inherits its stochastic trend. However, the process
z(t) = y_1(t) - a y_2(t)    (5.4.14)
is stationary; indeed, z(t) = e_1(t). Details of testing the integration hypothesis are described in [2]. Implications of cointegration in financial data analysis are discussed in [3].
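A sketch of how (5.4.13)-(5.4.14) can be checked numerically with the augmented Dickey-Fuller test from statsmodels (the value a = 0.7 is an arbitrary choice):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)

# Simulate (5.4.13): y2 is a random walk, y1 = a*y2 + noise
a = 0.7
e1, e2 = rng.standard_normal(1000), rng.standard_normal(1000)
y2 = np.cumsum(e2)
y1 = a * y2 + e1

# ADF unit-root test: the p-value should be large for y1 (unit root not
# rejected) and small for the cointegrating combination z = y1 - a*y2
print('y1 p-value:', adfuller(y1)[1])
print('z  p-value:', adfuller(y1 - a * y2)[1])
```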
5.5 REFERENCES FOR FURTHER READING
AND ECONOMETRIC SOFTWARE
A good concise introduction into time series analysis is given by Franses [1]. A comprehensive presentation of the subject can be found in the monographs by Hamilton [2] and Green [6]. Important specifics of time series analysis in finance, particularly for the analysis and forecasting of volatility, are discussed by Alexander in [3]. In this chapter, only time series on homogeneous grids were considered. Specifics of the analysis of tick-by-tick data on non-homogeneous grids are discussed in [7]. It should be noted that exercises with econometric software packages are very helpful for learning the subject. Besides generic scientific software such as SAS, S-Plus, and Matlab, which have modules for time series analysis, several dedicated econometric software packages are available: PCGive, RATS, Shazam, and TSP. While these packages may have trial and student versions, EasyReg, offered by H. J. Bierens, has sufficient capability for an introductory course and is free of charge.


5.6 EXERCISES
1. Verify equations (5.1.25)–(5.1.27).
2. Verify if the process y(t) = 1.2 y(t-1) - 0.32 y(t-2) + e(t) (where e(t) is IID) is covariance-stationary.
3. Estimate the linear dividend growth rate from the dividends paid in recent years (verify these data on the AMEX website: http://www.amex.com): 2000 – $1.51, 2001 – $1.42, 2002 – $1.50, and 2003 – $1.63.
4. Verify equation (5.4.6) for the processes (5.4.4).
Chapter 6


Fractals




In short, fractals are geometric objects that are constructed by repeating geometric patterns at smaller and smaller scales. The fractal theory is a beautiful theory that describes beautiful objects. Development of the fractal theory and its financial applications has been greatly influenced by Mandelbrot [1]. In this chapter, a short introduction to the fractal theory relevant to financial applications is given. In Section 6.1, the basic definitions of the fractal theory are provided. Section 6.2 is devoted to the concept of multifractals, which has been receiving a lot of attention in recent research on financial time series.

6.1 BASIC DEFINITIONS
Self-similarity is the defining property of fractals. This property implies that the geometric patterns are isotropic, meaning that shape transformations along all coordinate axes are the same. If the geometric patterns are not isotropic, say the object is contracted along the y-axis with a scale different from that along the x-axis, the object is said to be self-affine. The difference between self-similarity and self-affinity is obvious for geometric objects. However, only self-affinity is relevant for the graphs of financial time series [1]. Indeed, since time and prices are measured in different units, their scaling factors cannot be compared.



Figure 6.1 Deterministic (a) and stochastic (b) fractals with the same fractal dimension D = ln(5)/ln(3).


If the geometric pattern used in fractal design is deterministic, the
resulting object is named a deterministic fractal. Consider an example
in path (a) of Figure 6.1 where a square is repeatedly divided into nine
small squares and four of them that have even numbers are deleted
(the squares are numerated along rows). If four squares are deleted at
random, one obtains a random fractal (one of such fractals is depicted
in path (b) of Figure 6.1). While the deterministic and stochastic
fractals in Figure 6.1 look quite different, they have the same fractal
dimension. Let us outline the physical sense of this notion.
Consider a jagged line, such as a coastline. It is embedded in a plane; thus, its dimension is lower than two. Yet, the more zigzagged the line is, the greater the part of the plane it covers. One may then expect that the dimension of a coastline is higher than one and depends on the measure of its jaggedness. Another widely used example is a crumpled paper ball. It is embedded in three-dimensional space. Yet, the
volume of a paper ball depends on the sizes of its creases. Therefore,
its dimension is expected to be in the range of two to three. Thus, we
come to the notion of the fractal (non-integer) dimension for objects
that cannot be accurately described within the framework of Euclidean geometry.
There are several technical definitions for the fractal dimension [2]. The most popular one is the box-counting dimension. It implies mapping the grid boxes of size h (e.g., squares and cubes for the two-dimensional and three-dimensional spaces, respectively) onto the object of interest. The number of boxes that fill the object is N(h) ∼ h^{-D}. The fractal dimension D is then the limit

D = lim_{h→0} [ln N(h) / ln(1/h)]    (6.1.1)

The box-counting dimension has another equivalent definition with a fixed unit size of the grid box and varying object size L:

D = lim_{L→∞} [ln N(L) / ln(L)]    (6.1.2)

The fractal dimension for both the deterministic and stochastic fractals in Figure 6.1 equals D = ln(5)/ln(3) ≈ 1.465. Random fractals exhibit self-similarity only in a statistical sense. Therefore, scale invariance is a more appropriate concept for random fractals than self-similarity.
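A minimal numpy sketch of the box-counting estimate for a set of points in the plane; in practice, the limit (6.1.1) is approximated by the slope of ln N(h) versus ln(1/h) over a range of box sizes (the function name is mine):

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate D by regressing ln N(h) on ln(1/h), cf. (6.1.1).
    points: (n, 2) array; sizes: sequence of box sizes h."""
    counts = []
    for h in sizes:
        # Assign each point to a grid box and count distinct occupied boxes
        occupied = np.unique(np.floor(points / h), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```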
Iterated function systems are commonly used for generating fractals. The two-dimensional iterated function algorithm for N fixed points can be presented as

X(k+1) = r X(k) + (1 - r) X_F(i)
Y(k+1) = r Y(k) + (1 - r) Y_F(i)    (6.1.3)

In (6.1.3), r is the scaling parameter; X_F(i) and Y_F(i) are the coordinates of the fixed point i, i = 1, 2, ..., N. The fixed point i is selected at random at every iteration. A famous example with N = 3, the Sierpinski triangle, is shown in Figure 6.2.
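A short sketch of the iterated function algorithm (6.1.3), sometimes called the chaos game (function and parameter names are mine). Combined with the box-counting sketch above, it should yield D ≈ ln(3)/ln(2) ≈ 1.585 for the Sierpinski triangle:

```python
import numpy as np

def iterated_function_fractal(fixed_points, r=0.5, n_iter=20000, seed=0):
    """Iterate (6.1.3): move a fraction (1 - r) of the way toward a
    randomly chosen fixed point at every step (the 'chaos game')."""
    rng = np.random.default_rng(seed)
    fp = np.asarray(fixed_points, dtype=float)
    x = fp[0].copy()              # start at a fixed point (on the attractor)
    out = np.empty((n_iter, 2))
    for k in range(n_iter):
        x = r * x + (1.0 - r) * fp[rng.integers(len(fp))]
        out[k] = x
    return out

# Sierpinski triangle: N = 3 fixed points, r = 0.5
pts = iterated_function_fractal([(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)])
print(box_counting_dimension(pts, [0.25, 0.125, 0.0625, 0.03125]))
```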
Now, let us turn to the random processes relevant to financial time
series. If a random process X(t) is self-affine, then it satisfies the
scaling rule
X(ct) = c^H X(t)    (6.1.4)
Figure 6.2 The Sierpinski triangle with r = 0.5.


The parameter H is named the Hurst exponent. Let us introduce the
fractional Brownian motion BH (t). This random process satisfies
the following conditions for all t and T [1]
E[B_H(t+T) - B_H(t)] = 0    (6.1.5)
E[(B_H(t+T) - B_H(t))^2] = T^{2H}    (6.1.6)

When H = 1/2, the fractional Brownian motion is reduced to the regular Brownian motion. For the fractional Brownian motion, the correlation between the past average E[B_H(t) - B_H(t-T)]/T and the future average E[B_H(t+T) - B_H(t)]/T equals

C = 2^{2H-1} - 1    (6.1.7)
Obviously, this correlation does not depend on T. If 1/2 < H < 1, then C > 0, and it is said that B_H(t) is a persistent process. Namely, if B_H(t) grew in the past, it will most likely grow in the immediate future.
Conversely, if B_H(t) decreased in the past, it will most probably continue to fall. Thus, persistent processes maintain trend. In the opposite case (0 < H < 1/2, C < 0), the process is named anti-persistent. It is also said that anti-persistent processes are mean reverting; for example, if the current process innovation is positive, then the next one will most likely be negative, and vice versa. There is a simple relationship between the box-counting fractal dimension and the Hurst exponent

D = 2 - H    (6.1.8)
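For experimentation, a fractional Brownian path can be sampled exactly from its covariance function, Cov(B_H(s), B_H(u)) = (s^{2H} + u^{2H} - |s - u|^{2H})/2, via Cholesky factorization. A sketch (function name mine), practical for a few hundred points:

```python
import numpy as np

def fractional_brownian_motion(n, H, T=1.0, seed=0):
    """Sample B_H on n grid points of (0, T] by Cholesky factorization of
    the exact covariance Cov(B_H(s), B_H(u)) = (s^2H + u^2H - |s-u|^2H)/2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

t, bh = fractional_brownian_motion(256, H=0.8)  # persistent path, D = 2 - H = 1.2
```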
The fractal dimension of a time series can be estimated using Hurst's rescaled range (R/S) analysis [1, 3]. Consider the data set x_i (i = 1, ..., N) with mean m_N and standard deviation s_N. To define the rescaled range, the partial sums S_k must be calculated

S_k = \sum_{i=1}^{k} (x_i - m_N),  1 ≤ k ≤ N    (6.1.9)

The rescaled range equals

R/S = [max(S_k) - min(S_k)] / s_N,  1 ≤ k ≤ N    (6.1.10)

The value of R/S is always greater than zero since max(S_k) > 0 and min(S_k) < 0. For given R/S, the Hurst exponent can be estimated using the relation

R/S = (aN)^H    (6.1.11)
where a is a constant. The R/S analysis is superior to many other methods of determining long-range dependencies. But this approach has a noted shortcoming, namely, high sensitivity to short-range memory [4].
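A minimal implementation of the R/S estimate of H: compute R/S by (6.1.9)-(6.1.10) over non-overlapping windows of several sizes N, and regress ln(R/S) on ln(N) as suggested by (6.1.11). The windowing scheme and function names are my illustration:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic (6.1.9)-(6.1.10) for one window of data."""
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())          # partial sums S_k
    return (s.max() - s.min()) / x.std()

def hurst_exponent(x, window_sizes):
    """Estimate H via (6.1.11): regress ln(R/S) on ln(N), averaging R/S
    over non-overlapping windows of each size N."""
    rs = [np.mean([rescaled_range(x[i:i + w])
                   for i in range(0, len(x) - w + 1, w)])
          for w in window_sizes]
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(4)
# Roughly 1/2 for white noise (classical R/S is biased upward in small samples)
print(hurst_exponent(rng.standard_normal(4096), [16, 32, 64, 128, 256]))
```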

6.2 MULTIFRACTALS
Let us turn to the generic notion of multifractals (see, e.g., [5]). Consider the map filled with the set of boxes that is used for the box-counting fractal dimension. What matters for the fractal concept is whether a given box belongs to the fractal. The basic idea behind the notion of multifractals is that every box is assigned a measure μ that characterizes some probability density (e.g., the intensity of color
between the white and black limits). The so-called multiplicative process (or cascade) defines the rule according to which the measure is fragmented when the object is partitioned into smaller components. The fragmentation ratios that are used in this process are named multipliers. The multifractal measure is characterized with the Hölder exponent α

α = lim_{h→0} [ln μ(h) / ln(h)]    (6.2.1)

where h is the box size. Let us denote the number of boxes with given h and α by N_h(α). The distribution of the Hölder exponents in the limit h → 0 is sometimes called the multifractal spectrum

f(α) = -lim_{h→0} [ln N_h(α) / ln(h)]    (6.2.2)

The distribution f(α) can be treated as a generalization of the fractal dimension for multifractal processes.
Let us describe the simplest multifractal, namely the binomial measure μ on the interval [0, 1] (see [5] for details). In the binomial cascade, two positive multipliers, m_0 and m_1, are chosen so that m_0 + m_1 = 1. At step k = 0, the uniform probability measure for mass distribution, μ_0 = 1, is used. At the next step (k = 1), the measure μ_1 spreads mass in the proportion m_0 : m_1 between the intervals [0, 1/2] and [1/2, 1], respectively. Thus, μ_1[0, 1/2] = m_0 and μ_1[1/2, 1] = m_1. In the next steps, every interval is again divided into two subintervals and the mass of the interval is distributed between them in the proportion m_0 : m_1. For example, at k = 2: μ_2[0, 1/4] = m_0 m_0, μ_2[1/4, 1/2] = μ_2[1/2, 3/4] = m_0 m_1, μ_2[3/4, 1] = m_1 m_1, and so on. At the kth iteration, mass is partitioned into 2^k intervals of length 2^{-k}.
Let us introduce the notion of the binary expansion 0.b_1 b_2 ... b_k for the point x = b_1 2^{-1} + b_2 2^{-2} + ... + b_k 2^{-k}, where 0 ≤ x ≤ 1 and b_i ∈ {0, 1}. Then the measure of every dyadic interval I_{0.b_1 b_2 ... b_k} of length 2^{-k} equals

μ_{0.b_1 b_2 ... b_k} = \prod_{i=1}^{k} m_{b_i} = m_0^n m_1^{k-n}    (6.2.3)

where n is the number of digits 0 in the address 0.b_1 b_2 ... b_k of the interval's left end, and (k - n) is the number of digits 1. Since the subinterval mass is preserved at every step, the cascade is called
conservative or microcanonical. The first five steps of the binomial cascade with m_0 = 0.6 are depicted in Figure 6.3.

Figure 6.3 Binomial cascade with m_0 = 0.6: (a) k = 0, (b) k = 1, (c) k = 2, (d) k = 3, (e) k = 4, (f) k = 5.
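A compact sketch of the conservative binomial cascade (the function name is mine): each iteration splits every interval in two and distributes its mass in the proportion m_0 : m_1, reproducing the products in (6.2.3):

```python
import numpy as np

def binomial_cascade(m0, k):
    """Conservative binomial cascade: after k steps, return the measures
    of the 2^k dyadic intervals, i.e. the products in (6.2.3)."""
    measure = np.array([1.0])
    for _ in range(k):
        # Split every interval in two; divide its mass in proportion m0 : m1
        measure = np.column_stack((m0 * measure, (1 - m0) * measure)).ravel()
    return measure

mu = binomial_cascade(0.6, 10)
assert np.isclose(mu.sum(), 1.0)   # mass is conserved at every step
```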
The multifractal spectrum of the binomial cascade equals

f(α) = -\frac{α_{max} - α}{α_{max} - α_{min}} \log_2\frac{α_{max} - α}{α_{max} - α_{min}} - \frac{α - α_{min}}{α_{max} - α_{min}} \log_2\frac{α - α_{min}}{α_{max} - α_{min}}    (6.2.4)
The distribution (6.2.4) is confined to the interval [α_min, α_max]. If m_0 ≥ 0.5, then α_min = -log_2(m_0) and α_max = -log_2(1 - m_0). The binomial cascade can be generalized in two directions. First, one can introduce a multinomial cascade by increasing the number of subintervals to N > 2. Note that the condition
\sum_{i=0}^{N-1} m_i = 1    (6.2.5)
is needed for preserving the conservative character of the cascade. Secondly, the values of m_i can be randomized rather than assigned fixed values. A cascade with randomized m_i is called canonical. In this case, condition (6.2.5) is satisfied only on average, that is

E[\sum_{i=0}^{N-1} m_i] = 1    (6.2.6)

An example of a randomized cascade that has an explicit expression for the multifractal spectrum is the lognormal cascade [6]. In this process, the multiplier M that distributes the mass of an interval is determined by the lognormal distribution (i.e., -log_2(M) is drawn from a Gaussian distribution). If the Gaussian mean and variance are λ and σ², respectively, then the conservative character of the cascade, E[M] = 0.5, is preserved when

λ = 1 + σ² ln(2)/2

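This relation follows from E[2^{-U}] = 2^{-λ + σ² ln(2)/2} for U ∼ N(λ, σ²); a quick Monte Carlo check (variable names mine):

```python
import numpy as np

rng = np.random.default_rng(3)

# Lognormal multiplier: M = 2**(-U) with U ~ N(lam, s2).
# E[M] = 2**(-lam + s2*ln(2)/2), so lam = 1 + s2*ln(2)/2 gives E[M] = 0.5.
s2 = 0.25
lam = 1.0 + s2 * np.log(2.0) / 2.0
M = 2.0 ** (-rng.normal(lam, np.sqrt(s2), 10**6))
print(M.mean())   # should be close to 0.5
```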