deterministic one. This result allows one to derive Ito's lemma.
Consider a function F(y, t) that depends on both a deterministic variable, t, and a stochastic one, y(t). Let us expand the differential of F(y, t) into a Taylor series, retaining terms linear in dt and bearing in mind equation (4.3.3):
dF(y, t) = \frac{\partial F}{\partial y} dy + \frac{\partial F}{\partial t} dt + \frac{1}{2} \frac{\partial^2 F}{\partial y^2} (dy)^2
         = \frac{\partial F}{\partial y} dy + \left[ \frac{\partial F}{\partial t} + \frac{\sigma^2}{2} \frac{\partial^2 F}{\partial y^2} \right] dt        (4.3.4)
The Ito expression (4.3.4) has an additional term in comparison with the differential of a function with deterministic independent variables. Namely, the term (σ²/2)(∂²F/∂y²) dt has a stochastic nature. If y(t) is the Brownian motion (4.3.1), then
 
dF(y, t) = \frac{\partial F}{\partial y} [\mu dt + \sigma dW(t)] + \left[ \frac{\partial F}{\partial t} + \frac{\sigma^2}{2} \frac{\partial^2 F}{\partial y^2} \right] dt
         = \left[ \mu \frac{\partial F}{\partial y} + \frac{\partial F}{\partial t} + \frac{\sigma^2}{2} \frac{\partial^2 F}{\partial y^2} \right] dt + \sigma \frac{\partial F}{\partial y} dW(t)        (4.3.5)
Let us consider the function F = W² as a simple example of applying Ito's lemma. In this case, μ = 0, σ = 1, and equation (4.3.5) reduces to

dF = dt + 2W dW        (4.3.6)
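This relation is easy to check numerically: summing the right-hand side of (4.3.6) along a simulated Wiener path should reproduce W(T)². The following minimal sketch (NumPy; the step count and seed are arbitrary choices, not from the text) does exactly that.

```python
import numpy as np

# Monte Carlo check of (4.3.6): for F = W^2, dF = dt + 2*W*dW.
# Accumulating dF over a fine grid should reproduce F(T) = W(T)^2.
rng = np.random.default_rng(0)               # seed chosen arbitrarily
n, T = 100_000, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)         # Wiener increments
W = np.concatenate(([0.0], np.cumsum(dW)))   # path with W(0) = 0

# Ito rule: W enters at the left endpoint of each sub-interval
F_ito = np.sum(dt + 2.0 * W[:-1] * dW)
print(F_ito, W[-1] ** 2)                     # the two values nearly coincide
```

The residual between the two values is of order sqrt(dt) and vanishes as the grid is refined.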
Finally, we specify the Ito expression for the geometric Brownian motion F = exp[y(t)]. Since in this case ∂F/∂y = ∂²F/∂y² = F and ∂F/∂t = 0,

dF = \left( \mu + \frac{\sigma^2}{2} \right) F dt + \sigma F dW(t)        (4.3.7)
Hence, if F is the geometric Brownian motion, its relative change, dF/F,
behaves as the arithmetic Brownian motion.
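As an illustration, one can integrate (4.3.7) with a simple Euler scheme and compare the result with the exact geometric Brownian motion F = exp[y(t)], y(t) = μt + σW(t), taking y(0) = 0. This is only a sketch; the parameter values, step count, and seed are arbitrary choices.

```python
import numpy as np

# Euler integration of dF = (mu + sigma^2/2) F dt + sigma F dW, eq. (4.3.7),
# versus the exact solution F(T) = exp[mu*T + sigma*W(T)] with F(0) = 1.
rng = np.random.default_rng(1)            # seed chosen arbitrarily
mu, sigma, T, n = 0.1, 0.2, 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)      # Wiener increments

# One Euler step multiplies F by (1 + (mu + sigma^2/2) dt + sigma dW)
F_euler = np.prod(1.0 + (mu + 0.5 * sigma**2) * dt + sigma * dW)
F_exact = np.exp(mu * T + sigma * dW.sum())
print(F_euler, F_exact)                   # relative difference shrinks with dt
```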
Ito's lemma is a pillar of option pricing theory. It will be used for deriving the classical Black-Scholes equation in Section 9.4.

4.4 STOCHASTIC INTEGRAL
Now that the stochastic differential has been introduced, let us discuss how to perform its integration. First, the Riemann-Stieltjes integral should be defined. Consider a deterministic function f(t) on the interval t ∈ [0, T]. In order to calculate the Riemann integral of f(t) over [0, T], this interval is divided into n sub-intervals t_0 = 0 < t_1 < \ldots < t_n = T, and the following sum is computed:
S_n = \sum_{i=1}^{n} f(\tau_i)(t_i - t_{i-1})        (4.4.1)

where τ_i ∈ [t_{i-1}, t_i]. The Riemann integral is the limit of S_n:

\int_0^T f(t) dt = \lim S_n, \quad \max(t_i - t_{i-1}) \to 0 \text{ for all } i        (4.4.2)

Note that the limit (4.4.2) exists only if the function f(t) is sufficiently smooth. Another type of integral is the Stieltjes integral. Let us define the differential of a function g(x):

dg = g(x + dx) - g(x)        (4.4.3)

Then the Stieltjes integral of f(t) with respect to g(t) on the interval t ∈ [0, T] is defined through the sum
S_n = \sum_{i=1}^{n} f(\tau_i)[g(t_i) - g(t_{i-1})]        (4.4.4)

where τ_i ∈ [t_{i-1}, t_i]:

\int_0^T f(t) dg(t) = \lim S_n, \quad \max(t_i - t_{i-1}) \to 0 \text{ for all } i        (4.4.5)

If g(t) has a derivative, then dg ≈ (dg/dt) dt = g′(t) dt, and the sum (4.4.4) can be written as
S_n = \sum_{i=1}^{n} f(\tau_i) g'(\tau_i)(t_i - t_{i-1})        (4.4.6)

The similarity between the Riemann sum (4.4.1) and the Stieltjes sum (4.4.6) leads to the definition of the Riemann-Stieltjes integral. For deterministic functions, the Riemann-Stieltjes integral does not depend on the particular choice of the points τ_i within the intervals [t_{i-1}, t_i]. However, if the function f(t) is random, the sum S_n does depend on the choice of τ_i. Consider, for example, the sum (4.4.4) for the case f(t) = g(t) = W(t), where W(t) is the Wiener process. It follows from (4.1.16) that
E[S_n] = E\left[ \sum_{i=1}^{n} W(\tau_i) \{ W(t_i) - W(t_{i-1}) \} \right]
       = \sum_{i=1}^{n} [\min(\tau_i, t_i) - \min(\tau_i, t_{i-1})] = \sum_{i=1}^{n} (\tau_i - t_{i-1})        (4.4.7)

Let us set for all i

\tau_i = a t_i + (1 - a) t_{i-1}, \quad 0 \le a \le 1        (4.4.8)

Substitution of (4.4.8) into (4.4.7) leads to E[S_n] = aT. Hence, the sum (4.4.7) depends on the arbitrary parameter a and can therefore have any value between 0 and T. Within the Ito formalism, the value a = 0 is chosen, so that τ_i = t_{i-1}. The stochastic Ito integral is defined as
\int_0^T f(t) dW(t) = \operatorname{ms-lim}_{n \to \infty} \sum_{i=1}^{n} f(t_{i-1})[W(t_i) - W(t_{i-1})]        (4.4.9)

The notation ms-lim stands for the mean-square limit: the difference between the Ito integral on the left-hand side of (4.4.9) and the sum on the right-hand side has a variance that approaches zero as n increases to infinity. Thus, (4.4.9) is equivalent to

\lim_{n \to \infty} E\left[ \left( \int_0^T f(t) dW(t) - \sum_{i=1}^{n} f(t_{i-1}) \{ W(t_i) - W(t_{i-1}) \} \right)^2 \right] = 0        (4.4.10)
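The dependence of E[S_n] on the parameter a in (4.4.8) is easy to demonstrate by simulation: averaging the sum (4.4.4) with f = g = W over many paths gives approximately aT for the left-endpoint (a = 0), midpoint (a = 0.5), and right-endpoint (a = 1) rules. A sketch with arbitrary grid size, path count, and seed:

```python
import numpy as np

# Monte Carlo estimate of E[S_n] for S_n = sum W(tau_i)[W(t_i) - W(t_{i-1})]
# with tau_i = a*t_i + (1-a)*t_{i-1}; the expectation should be close to a*T.
rng = np.random.default_rng(2)                 # seed chosen arbitrarily
T, n, paths = 1.0, 250, 10_000
dt = T / (2 * n)                               # twice-finer grid gives midpoints
dW = rng.normal(0.0, np.sqrt(dt), (paths, 2 * n))
W = np.concatenate((np.zeros((paths, 1)), np.cumsum(dW, axis=1)), axis=1)

left, mid, right = W[:, 0:-2:2], W[:, 1:-1:2], W[:, 2::2]
incr = right - left                            # W(t_i) - W(t_{i-1})
results = {}
for a, W_tau in ((0.0, left), (0.5, mid), (1.0, right)):
    results[a] = float(np.mean(np.sum(W_tau * incr, axis=1)))
    print(a, results[a])                       # approximately a*T (here T = 1)
```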

Let us consider the integral

I(t_2, t_1) = \int_{t_1}^{t_2} W(t) dW(t)        (4.4.11)

as an example of calculating the Ito integral. If the function W(t) were deterministic, the Riemann-Stieltjes integral I_{R-S}(t_2, t_1) would equal

I_{R-S}(t_2, t_1) = 0.5 [W(t_2)^2 - W(t_1)^2]        (4.4.12)
However, when W(t) is the Wiener process, the Ito integral I_I(t_2, t_1) leads to a somewhat unexpected result:

I_I(t_2, t_1) = 0.5 [W(t_2)^2 - W(t_1)^2 - (t_2 - t_1)]        (4.4.13)

This follows directly from equation (4.3.6). The result (4.4.13) can also be derived directly from the definition of the Ito integral (see Exercise 4). Note that the mean of the Ito integral (4.4.11) equals zero:

E[I_I(t_2, t_1)] = 0        (4.4.14)
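Both (4.4.13) and (4.4.14) can be verified directly from the left-endpoint sums (4.4.9). In the sketch below (grid size, path count, and seed are arbitrary choices), every simulated path satisfies (4.4.13) up to a discretization error, and the sample mean is close to zero.

```python
import numpy as np

# Left-endpoint (Ito) sums for I = int_0^T W dW, compared per path with the
# closed form 0.5*[W(T)^2 - T] of (4.4.13); the sample mean checks (4.4.14).
rng = np.random.default_rng(3)              # seed chosen arbitrarily
T, n, paths = 1.0, 1_000, 5_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
W = np.concatenate((np.zeros((paths, 1)), np.cumsum(dW, axis=1)), axis=1)

I_sum = np.sum(W[:, :-1] * dW, axis=1)      # Ito sum (4.4.9) for each path
I_closed = 0.5 * (W[:, -1] ** 2 - T)        # equation (4.4.13) with t1 = 0
print(np.max(np.abs(I_sum - I_closed)), np.mean(I_sum))
```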

The difference between the right-hand sides of (4.4.12) and (4.4.13) is determined by the particular choice of a = 0 in (4.4.8). Stratonovich has offered another definition of the stochastic integral by choosing a = 0.5. In contrast to equation (4.4.9), the Stratonovich integral is defined as

\int_0^T f(t) dW(t) = \operatorname{ms-lim}_{n \to \infty} \sum_{i=1}^{n} f\left( \frac{t_{i-1} + t_i}{2} \right) [W(t_i) - W(t_{i-1})]        (4.4.15)

For the integrand in (4.4.11), the Stratonovich integral I_S(t_2, t_1) coincides with the Riemann-Stieltjes integral:

I_S(t_2, t_1) = 0.5 [W(t_2)^2 - W(t_1)^2]        (4.4.16)
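Replacing the left endpoints in the sums with midpoints reproduces (4.4.16) numerically. A sketch with arbitrary grid size, path count, and seed:

```python
import numpy as np

# Midpoint (Stratonovich) sums (4.4.15) for the integrand W(t): per path they
# approach the Riemann-Stieltjes value 0.5*W(T)^2 of (4.4.16) (here W(0) = 0).
rng = np.random.default_rng(4)             # seed chosen arbitrarily
T, n, paths = 1.0, 1_000, 3_000
dt = T / (2 * n)                           # twice-finer grid supplies midpoints
dW = rng.normal(0.0, np.sqrt(dt), (paths, 2 * n))
W = np.concatenate((np.zeros((paths, 1)), np.cumsum(dW, axis=1)), axis=1)

I_strat = np.sum(W[:, 1:-1:2] * (W[:, 2::2] - W[:, :-2:2]), axis=1)
I_rs = 0.5 * W[:, -1] ** 2                 # equation (4.4.16) with t1 = 0
print(np.max(np.abs(I_strat - I_rs)))      # discretization error only
```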
The Ito and Stratonovich formulations can be transformed into each other. In particular, the Ito stochastic differential equation (4.3.1)

dy_I(t) = \mu dt + \sigma dW(t)        (4.4.17)

is equivalent to the Stratonovich equation

dy_S(t) = \left( \mu - 0.5 \sigma \frac{\partial \sigma}{\partial y} \right) dt + \sigma dW(t)        (4.4.18)
Applications of stochastic calculus in finance are based almost exclusively on the Ito theory. Consider, for example, the integral

\int_{t_1}^{t_2} \sigma(t) dW(t)        (4.4.19)

If no correlation between the function σ(t) and the innovation dW(t) is assumed, then the Ito approximation is a natural choice. In this case, the function σ(t) is said to be a nonanticipating function [1, 2]. However, if the innovations dW(t) are correlated (so-called non-white noise), then the Stratonovich approximation appears to be the adequate theory [1, 6].

4.5 MARTINGALES
The martingale methodology plays an important role in the modern theory of finance [2, 7, 8]. A martingale is a stochastic process X(t) that satisfies the following condition:

E[X(t+1) \,|\, X(t), X(t-1), \ldots] = X(t)        (4.5.1)

The equivalent definition is given by

E[X(t+1) - X(t) \,|\, X(t), X(t-1), \ldots] = 0        (4.5.2)
Both definitions are easily generalized to the continuous-time presentation, in which the time interval dt between two sequential moments t+1 and t approaches zero (dt → 0). The notion of a martingale is rooted in gambling theory and is closely associated with the notion of a fair game, in which none of the players has an advantage. Condition (4.5.1) implies that the expectation of the gambler's wealth at time t+1, conditioned on the entire history of the game, is equal to the gambler's wealth at time t. Similarly, equation (4.5.2) means that the expectation of winning at every round of the game, conditioned on the history of the game, equals zero. In other words, a martingale has no trend. A process with a positive trend is named a submartingale; a process with a negative trend is called a supermartingale.
The martingale hypothesis applied to asset prices states that the expectation of the future price is simply the current price. This assumption is closely related to the Efficient Market Hypothesis discussed in Section 2.3. Generally, asset prices are not martingales, for they incorporate risk premium. Indeed, there must be some reward offered to investors for bearing the risks associated with holding assets. It can be shown, however, that prices with discounted risk premium are martingales [3].
An important property of the Ito integral is that it is a martingale. Consider, for example, the integral (4.4.19) approximated with the sum (4.4.9). Because the innovations dW(t) are unpredictable, it follows from (4.4.14) that
E\left[ \int_t^{t+\Delta t} \sigma(z) dW(z) \right] = 0        (4.5.3)

Therefore,
E\left[ \int_0^{t+\Delta t} \sigma(z) dW(z) \right] = \int_0^t \sigma(z) dW(z)        (4.5.4)

and the integral (4.4.19) satisfies the martingale definition. Note that for the Brownian motion with drift (4.2.14)

E[y(t + dt)] = E\left[ y(t) + \int_t^{t+dt} dy \right] = y(t) + \mu dt        (4.5.5)
Hence, the Brownian motion with drift is not a martingale. However, the process

z(t) = y(t) - \mu t        (4.5.6)

is a martingale, since

E[z(t + dt)] = z(t)        (4.5.7)

This result also follows from the Doob-Meyer decomposition theorem, which states that a continuous submartingale X(t) at 0 ≤ t < ∞ with finite expectation E[X(t)] < ∞ can be decomposed into a continuous martingale and an increasing deterministic process.
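The statements (4.5.5)-(4.5.7) are easy to illustrate by simulation: averaged over many paths, the drifted Brownian motion gains μdt per step, while the compensated process z(t) = y(t) − μt shows no trend. The parameter values and seed below are arbitrary choices.

```python
import numpy as np

# Brownian motion with drift versus its compensated (martingale) version:
# E[y(T)] is close to mu*T, while E[z(T)] with z = y - mu*t stays near zero.
rng = np.random.default_rng(5)                # seed chosen arbitrarily
mu, sigma, T, n, paths = 0.5, 0.3, 1.0, 100, 50_000
dt = T / n
dy = mu * dt + sigma * rng.normal(0.0, np.sqrt(dt), (paths, n))
y = np.cumsum(dy, axis=1)                     # paths with y(0) = 0
t = dt * np.arange(1, n + 1)
z = y - mu * t                                # compensated process (4.5.6)
print(np.mean(y[:, -1]), np.mean(z[:, -1]))   # near mu*T and 0, respectively
```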

4.6 REFERENCES FOR FURTHER READING
Theory and applications of stochastic processes in the natural sciences are described in [1, 6]. A good introduction to stochastic calculus in finance is given in [2]. For a mathematically inclined reader, presentations of the stochastic theory with increasing levels of technical detail can be found in [7, 8].

4.7 EXERCISES
1. Simulate daily price returns using the geometric Brownian motion (4.3.7) for four years. Use equation (4.2.15) for approximating ΔW. Assume that S(0) = 10, μ = 10%, and σ = 20% (μ and σ are given per annum). Assume 250 working days per annum.
2. Prove that

\int_{t_1}^{t_2} W(s)^n dW(s) = \frac{1}{n+1} [W(t_2)^{n+1} - W(t_1)^{n+1}] - \frac{n}{2} \int_{t_1}^{t_2} W(s)^{n-1} ds

Hint: Calculate d(W^{n+1}) using Ito's lemma.

3. Solve the Ornstein-Uhlenbeck equation that describes the mean-reverting process, in which the solution fluctuates around its mean:

dX = -\mu X dt + \sigma dW, \quad \mu > 0

Hint: Introduce the variable Y = X exp(μt).
*4. Derive the result (4.4.13) directly from the definition of the Ito integral (4.4.9).
Chapter 5

Time Series Analysis

Time series analysis has become an indispensable theoretical tool in financial and economic research. Section 5.1 is devoted to the commonly used univariate autoregressive and moving average models. The means for modeling trends and seasonality effects are described in Section 5.2. Processes with non-stationary variance (conditional heteroskedasticity) are discussed in Section 5.3. Finally, the specifics of multivariate time series are introduced in Section 5.4.

5.1 AUTOREGRESSIVE AND MOVING AVERAGE
MODELS
5.1.1 AUTOREGRESSIVE MODEL
First, we shall consider a univariate time series y(t) for a process that is observed at moments t = 0, 1, \ldots, n (see, e.g., [1, 2]). The time series in which the observation at moment t depends linearly on several lagged observations at moments t-1, t-2, \ldots, t-p,

y(t) = a_1 y(t-1) + a_2 y(t-2) + \ldots + a_p y(t-p) + \epsilon(t), \quad t > p        (5.1.1)

is called the autoregressive process of order p, or AR(p). The term ε(t) in (5.1.1) is the white noise that satisfies the conditions (4.2.6). The lag operator, defined by L^p y(t) = y(t-p), is often used for describing time series. Note that L^0 y(t) = y(t). Equation (5.1.1) in terms of the lag operator has the form

A_p(L) y(t) = \epsilon(t)        (5.1.2)
where

A_p(L) = 1 - a_1 L - a_2 L^2 - \ldots - a_p L^p        (5.1.3)

The operator A_p(L) is called the AR polynomial in the lag operator of order p. Let us consider the AR(1) process that starts with a random shock. Its definition implies that
y(0) = \epsilon(0), \quad y(1) = a_1 y(0) + \epsilon(1), \quad y(2) = a_1 y(1) + \epsilon(2) = a_1^2 \epsilon(0) + a_1 \epsilon(1) + \epsilon(2), \ldots

Hence, by induction,

y(t) = \sum_{i=0}^{t} a_1^i \epsilon(t-i)        (5.1.4)

The mean and variance of AR(1) equal, respectively,

E[y(t)] = 0, \quad \operatorname{Var}[y(t)] = \sigma^2 / (1 - a_1^2)        (5.1.5)

Obviously, the contributions of the "old" noise converge with time to zero when |a_1| < 1. As a result, this process does not drift too far from its mean. This feature is named mean reversion.
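A long simulated AR(1) path makes both (5.1.5) and mean reversion visible. A minimal sketch (the coefficient, noise variance, sample length, and seed are arbitrary choices):

```python
import numpy as np

# AR(1) simulation y(t) = a1*y(t-1) + e(t): the sample mean and variance
# should approach 0 and sigma^2/(1 - a1^2), respectively, per (5.1.5).
rng = np.random.default_rng(6)       # seed chosen arbitrarily
a1, sigma, n = 0.7, 1.0, 200_000
e = rng.normal(0.0, sigma, n)
y = np.empty(n)
y[0] = e[0]                          # the process starts with a random shock
for t in range(1, n):
    y[t] = a1 * y[t - 1] + e[t]

print(np.mean(y), np.var(y))         # compare with 0 and sigma^2/(1 - a1^2)
```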
The process with a_1 = 1 is called the random walk:

y(t) = y(t-1) + \epsilon(t)        (5.1.6)

In this case, equation (5.1.4) reduces to

y(t) = \sum_{i=0}^{t} \epsilon(t-i)

The noise contributions to the random walk do not weaken with time. Therefore, the random walk does not exhibit mean reversion. Now, consider the process that represents the first difference:

x(t) = y(t) - y(t-1) = \epsilon(t)        (5.1.7)

Obviously, past noise has only a transitory character for the process x(t). Therefore, x(t) is mean-reverting. Some processes must be differenced several times in order to exclude non-transitory noise shocks. The processes differenced d times are named integrated of order d and denoted I(d). The differencing operator is used for describing an I(d) process:

\Delta_i^d = (1 - L_i)^d, \quad i, d = \ldots, -2, -1, 0, 1, 2, \ldots        (5.1.8)
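The effect of differencing is easy to see in code: the first difference of a simulated random walk (5.1.6) recovers the white-noise shocks exactly, and its sample autocorrelation is near zero. A sketch with an arbitrary sample length and seed:

```python
import numpy as np

# Differencing a random walk: y is I(1); x = (1 - L) y equals the white
# noise e, so a single differencing reduces the process to I(0), cf. (5.1.7).
rng = np.random.default_rng(7)       # seed chosen arbitrarily
e = rng.normal(0.0, 1.0, 10_000)     # white-noise shocks
y = np.cumsum(e)                     # random walk y(t) = y(t-1) + e(t)
x = np.diff(y)                       # first difference x(t) = y(t) - y(t-1)

lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(np.allclose(x, e[1:]), lag1)   # shocks recovered; autocorrelation ~ 0
```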