
deterministic one. This result allows one to derive Ito's lemma.
Consider a function F(y, t) that depends on both a deterministic
variable, t, and a stochastic variable, y(t). Let us expand the differential
of F(y, t) into the Taylor series, retaining linear terms and bearing in
mind equation (4.3.3):

$$dF(y, t) = \frac{\partial F}{\partial y}\,dy + \frac{\partial F}{\partial t}\,dt + \frac{1}{2}\frac{\partial^2 F}{\partial y^2}\,(dy)^2
= \frac{\partial F}{\partial y}\,dy + \left(\frac{\partial F}{\partial t} + \frac{\sigma^2}{2}\frac{\partial^2 F}{\partial y^2}\right)dt \qquad (4.3.4)$$
The Ito expression (4.3.4) has an additional term in comparison with
the differential of a function with deterministic independent variables.
Namely, the term $\frac{\sigma^2}{2}\frac{\partial^2 F}{\partial y^2}\,dt$ has a stochastic nature. If y(t) is the
Brownian motion (4.3.1), then

$$dF(y, t) = \frac{\partial F}{\partial y}\,[\mu\,dt + \sigma\,dW(t)] + \left(\frac{\partial F}{\partial t} + \frac{\sigma^2}{2}\frac{\partial^2 F}{\partial y^2}\right)dt
= \left(\mu\frac{\partial F}{\partial y} + \frac{\partial F}{\partial t} + \frac{\sigma^2}{2}\frac{\partial^2 F}{\partial y^2}\right)dt + \sigma\frac{\partial F}{\partial y}\,dW(t) \qquad (4.3.5)$$
Let us consider the function $F = W^2$ as a simple example of
applying Ito's lemma. In this case, $\mu = 0$, $\sigma = 1$, and equation
(4.3.5) reduces to

$$dF = dt + 2W\,dW \qquad (4.3.6)$$
Finally, we specify the Ito expression for the geometric Brownian
motion $F = \exp[y(t)]$. Since in this case $\frac{\partial F}{\partial y} = \frac{\partial^2 F}{\partial y^2} = F$ and $\frac{\partial F}{\partial t} = 0$,
then

$$dF = \left(\mu + \frac{\sigma^2}{2}\right)F\,dt + \sigma F\,dW(t) \qquad (4.3.7)$$

Hence, if F is the geometric Brownian motion, its relative change, dF/F,
behaves as the arithmetic Brownian motion.
Ito's lemma is a pillar of option pricing theory. It will be
used for deriving the classical Black-Scholes equation in Section 9.4.
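
As a quick numerical illustration (added here, not part of the original text), the following Python sketch checks the drift term in (4.3.7): for $F = \exp[y(t)]$ with $y(t) = \mu t + \sigma W(t)$, equation (4.3.7) implies $E[F(T)] = F(0)\exp[(\mu + \sigma^2/2)T]$. The parameter values and the use of NumPy are arbitrary choices.

```python
import numpy as np

# A minimal sketch: Monte Carlo check of the drift (mu + sigma^2/2) in (4.3.7)
# for the geometric Brownian motion F = exp[y(t)], y(t) = mu*t + sigma*W(t).
rng = np.random.default_rng(0)

mu, sigma, T, n_paths = 0.10, 0.20, 1.0, 1_000_000
W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)   # W(T) ~ N(0, T)
F_T = np.exp(mu * T + sigma * W_T)                # F(T) = exp[y(T)], F(0) = 1

print(F_T.mean(), np.exp((mu + 0.5 * sigma**2) * T))  # both ~ 1.127
```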

4.4 STOCHASTIC INTEGRAL
Now that the stochastic differential has been introduced, let us
discuss how to perform its integration. First, the Riemann-Stieltjes
integral should be defined. Consider a deterministic function f(t)
on the interval $t \in [0, T]$. In order to calculate the Riemann integral
of f(t) over the interval [0, T], this interval is divided into n sub-intervals
$t_0 = 0 < t_1 < \ldots < t_n = T$ and the following sum should be computed

$$S_n = \sum_{i=1}^{n} f(\tau_i)(t_i - t_{i-1}) \qquad (4.4.1)$$

where $\tau_i \in [t_{i-1}, t_i]$. The Riemann integral is the limit of $S_n$

$$\int_0^T f(t)\,dt = \lim S_n, \quad \max(t_i - t_{i-1}) \to 0 \text{ for all } i \qquad (4.4.2)$$

Note that the limit (4.4.2) exists only if the function f(t) is sufficiently
smooth. Another type of integral is the Stieltjes integral. Let us define
the differential of a function g(x)

$$dg = g(x + dx) - g(x) \qquad (4.4.3)$$

Then the Stieltjes integral of f(t) with respect to g(t) on the interval
$t \in [0, T]$ is defined via the sum

$$S_n = \sum_{i=1}^{n} f(\tau_i)[g(t_i) - g(t_{i-1})] \qquad (4.4.4)$$

where $\tau_i \in [t_{i-1}, t_i]$:

$$\int_0^T f(t)\,dg(t) = \lim S_n, \quad \text{where } \max(t_i - t_{i-1}) \to 0 \text{ for all } i \qquad (4.4.5)$$
If g(t) has a derivative, then $dg \approx \frac{dg}{dt}\,dt = g'(t)\,dt$, and the sum (4.4.4)
can be written as

$$S_n = \sum_{i=1}^{n} f(\tau_i)\,g'(\tau_i)(t_i - t_{i-1}) \qquad (4.4.6)$$

Similarity between the Riemann sum (4.4.1) and the Stieltjes sum
(4.4.6) leads to the definition of the Riemann-Stieltjes integral. The
Riemann-Stieltjes integral over deterministic functions does not
depend on the particular choice of the point $\tau_i$ within the intervals
$[t_{i-1}, t_i]$. However, if the function f(t) is random, the sum $S_n$ does
depend on the choice of $\tau_i$. Consider, for example, the sum (4.4.4) for
the case f(t) = g(t) = W(t) (where W(t) is the Wiener process). It
follows from (4.1.16) that

$$E[S_n] = E\left[\sum_{i=1}^{n} W(\tau_i)\{W(t_i) - W(t_{i-1})\}\right]
= \sum_{i=1}^{n} [\min(\tau_i, t_i) - \min(\tau_i, t_{i-1})] = \sum_{i=1}^{n} (\tau_i - t_{i-1}) \qquad (4.4.7)$$

Let us set for all i

$$\tau_i = a\,t_i + (1 - a)\,t_{i-1}, \quad 0 \le a \le 1 \qquad (4.4.8)$$

Substitution of (4.4.8) into (4.4.7) leads to $E[S_n] = aT$. Hence, the
sum (4.4.7) depends on the arbitrary parameter a and therefore can
have any value. Within the Ito formalism, the value a = 0 is chosen,
so that $\tau_i = t_{i-1}$. The stochastic Ito integral is defined as

$$\int_0^T f(t)\,dW(t) = \text{ms-}\!\lim_{n \to \infty} \sum_{i=1}^{n} f(t_{i-1})[W(t_i) - W(t_{i-1})] \qquad (4.4.9)$$

The notation ms-lim stands for the mean-square limit. It means that
the difference between the Ito integral in the left-hand side of (4.4.9)
and the sum in the right-hand side of (4.4.9) has a variance that
approaches zero as n increases to infinity. Thus, (4.4.9) is equivalent to

$$\lim_{n \to \infty} E\left[\left(\int_0^T f(t)\,dW(t) - \sum_{i=1}^{n} f(t_{i-1})\{W(t_i) - W(t_{i-1})\}\right)^2\right] = 0 \qquad (4.4.10)$$

Let us consider the integral

$$I(t_2, t_1) = \int_{t_1}^{t_2} W(t)\,dW(t) \qquad (4.4.11)$$

as an example of calculating the Ito integral. If the function W(t) is
deterministic, then the Riemann-Stieltjes integral $I_{R\text{-}S}(t_2, t_1)$ equals

$$I_{R\text{-}S}(t_2, t_1) = 0.5\,[W(t_2)^2 - W(t_1)^2] \qquad (4.4.12)$$

However, when W(t) is the Wiener process, the Ito integral $I_I(t_2, t_1)$
leads to a somewhat unexpected result

$$I_I(t_2, t_1) = 0.5\,[W(t_2)^2 - W(t_1)^2 - (t_2 - t_1)] \qquad (4.4.13)$$
This follows directly from equation (4.3.6). The result (4.4.13) can also
be derived directly from the definition of the Ito integral
(see Exercise 4). Note that the mean of the Ito integral (4.4.11)
equals zero

$$E[I_I(t_2, t_1)] = 0 \qquad (4.4.14)$$
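
The result (4.4.13) can also be verified numerically. Here is a Python sketch added for illustration (not from the text; parameters arbitrary): it forms the left-endpoint sums of (4.4.9) for $f = W$ over $[0, T]$ and compares them with $0.5[W(T)^2 - T]$, i.e. (4.4.13) with $t_1 = 0$ and $t_2 = T$.

```python
import numpy as np

# A minimal sketch: the left-endpoint (Ito) sums for the integral of W dW
# over [0, T] converge in mean square to 0.5*[W(T)^2 - T].
rng = np.random.default_rng(1)

T, n, n_paths = 1.0, 1_000, 2_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
W = np.cumsum(dW, axis=1)                                # W(t_i), i = 1..n
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # W(t_{i-1})

ito_sum = np.sum(W_left * dW, axis=1)   # sum of W(t_{i-1}) * [W(t_i) - W(t_{i-1})]
exact = 0.5 * (W[:, -1] ** 2 - T)       # right-hand side of (4.4.13)

print(np.mean((ito_sum - exact) ** 2))  # mean-square error, vanishes as n grows
print(ito_sum.mean())                   # ~ 0, consistent with (4.4.14)
```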

The difference between the right-hand sides of (4.4.12) and (4.4.13) is
determined by the particular choice of a = 0 in (4.4.8). Stratonovich
has offered another definition of the stochastic integral by choosing
a = 0.5. In contrast to equation (4.4.9), the Stratonovich integral is
defined as

$$\int_0^T f(t)\,dW(t) = \text{ms-}\!\lim_{n \to \infty} \sum_{i=1}^{n} f\!\left(\frac{t_{i-1} + t_i}{2}\right)[W(t_i) - W(t_{i-1})] \qquad (4.4.15)$$

For the integrand in (4.4.11), the Stratonovich integral $I_S(t_2, t_1)$
coincides with the Riemann-Stieltjes integral

$$I_S(t_2, t_1) = 0.5\,[W(t_2)^2 - W(t_1)^2] \qquad (4.4.16)$$
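
For comparison, a short sketch of the midpoint evaluation in (4.4.15), again added here for illustration (not from the text): simulating W on a grid of 2n points makes the odd-indexed points the values of W at the interval midpoints, and the resulting sum approaches the Riemann-Stieltjes value (4.4.16).

```python
import numpy as np

# A minimal sketch: evaluate the sum (4.4.15) for f = W using W at the
# midpoints (t_{i-1} + t_i)/2; the result approaches 0.5*W(T)^2, i.e. (4.4.16).
rng = np.random.default_rng(2)

T, n = 1.0, 2_000
h = T / (2 * n)                          # half-step of the fine grid
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), size=2 * n))])

W_mid = W[1::2]                          # W((t_{i-1} + t_i)/2), i = 1..n
W_grid = W[0::2]                         # W(t_i), i = 0..n

strat_sum = np.sum(W_mid * np.diff(W_grid))
print(strat_sum, 0.5 * W_grid[-1] ** 2)  # the two values should be close
```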
Both the Ito and Stratonovich formulations can be transformed into
each other. In particular, the Ito stochastic differential equation (4.3.1)

$$dy_I(t) = \mu\,dt + \sigma\,dW(t) \qquad (4.4.17)$$

is equivalent to the Stratonovich equation

$$dy_S(t) = \left(\mu - 0.5\,\sigma\frac{\partial \sigma}{\partial y}\right)dt + \sigma\,dW(t) \qquad (4.4.18)$$
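
To make the correction term in (4.4.18) concrete, here is a small simulation sketch added for illustration (not from the text; parameters arbitrary). Both schemes below integrate the formal equation $dy = \sigma y\,dW$: the Euler-Maruyama scheme converges to its Ito interpretation, whereas the Heun (midpoint) scheme converges to its Stratonovich interpretation, so the sample means differ by the factor $\exp(0.5\,\sigma^2 T)$, in line with the extra drift $0.5\,\sigma\,\partial\sigma/\partial y = 0.5\,\sigma^2 y$.

```python
import numpy as np

# A minimal sketch of the Ito-Stratonovich drift correction (4.4.18) for
# dy = sigma*y*dW: Euler-Maruyama keeps E[y(T)] = y0 (Ito), while the Heun
# scheme gives E[y(T)] = y0*exp(0.5*sigma^2*T) (Stratonovich).
rng = np.random.default_rng(3)

sigma, y0 = 0.5, 1.0
T, n, n_paths = 1.0, 1_000, 20_000
dt = T / n

y_ito = np.full(n_paths, y0)
y_strat = np.full(n_paths, y0)
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    y_ito = y_ito + sigma * y_ito * dW                         # Euler-Maruyama (Ito)
    y_pred = y_strat + sigma * y_strat * dW                    # Heun predictor
    y_strat = y_strat + 0.5 * sigma * (y_strat + y_pred) * dW  # Heun corrector (Stratonovich)

print(y_ito.mean())                                # ~ 1.0
print(y_strat.mean(), np.exp(0.5 * sigma**2 * T))  # both ~ 1.133
```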
The applications of stochastic calculus in finance are based almost
exclusively on the Ito theory. Consider, for example, the integral

$$\int_{t_1}^{t_2} \sigma(t)\,dW(t) \qquad (4.4.19)$$

If no correlation between the function $\sigma(t)$ and the innovation dW(t)
is assumed, then the Ito approximation is a natural choice. In this
case, the function $\sigma(t)$ is said to be a nonanticipating function [1, 2].
However, if the innovations dW(t) are correlated (so-called non-white
noise), then the Stratonovich approximation appears to be an adequate
theory [1, 6].

4.5 MARTINGALES
The martingale methodology plays an important role in the
modern theory of finance [2, 7, 8]. A martingale is a stochastic process
X(t) that satisfies the following condition

$$E[X(t+1)\,|\,X(t), X(t-1), \ldots] = X(t) \qquad (4.5.1)$$

The equivalent definition is given by

$$E[X(t+1) - X(t)\,|\,X(t), X(t-1), \ldots] = 0 \qquad (4.5.2)$$
Both definitions are easily generalized to the continuous-time presentation,
in which the time interval, dt, between two successive moments
t + 1 and t approaches zero (dt → 0). The notion of a martingale is
rooted in gambling theory. It is closely associated with the notion
of a fair game, in which none of the players has an advantage. The
condition (4.5.1) implies that the expectation of the gambler's wealth at
time t + 1, conditioned on the entire history of the game, is equal to the
gambler's wealth at time t. Similarly, equation (4.5.2) means that the
expected winnings at every round of the game, conditioned on the
history of the game, equal zero. In other words, a martingale has no
trend. A process with a positive trend is called a submartingale;
a process with a negative trend is called a supermartingale.
The martingale hypothesis applied to the asset prices states that the
expectation of future price is simply the current price. This assumption
is closely related to the Efficient Market Hypothesis discussed in
Section 2.3. Generally, the asset prices are not martingales for they
incorporate risk premium. Indeed, there must be some reward offered
to investors for bearing the risks associated with keeping the assets. It
can be shown, however, that the prices with discounted risk premium
are martingales [3].
An important property of the Ito integral is that it is a martingale.
Consider, for example, the integral (4.4.19) approximated with the
sum (4.4.9). Because the innovations dW(t) are unpredictable, it
follows from (4.4.14) that

$$E\left[\int_t^{t+\Delta t} \sigma(z)\,dW(z)\right] = 0 \qquad (4.5.3)$$

Therefore,

$$E\left[\int_0^{t+\Delta t} \sigma(z)\,dW(z)\right] = \int_0^t \sigma(z)\,dW(z) \qquad (4.5.4)$$

and the integral (4.4.19) satisfies the martingale definition. Note that
for the Brownian motion with drift (4.2.14)
$$E[y(t + dt)] = E\left[y(t) + \int_t^{t+dt} dy\right] = y(t) + \mu\,dt \qquad (4.5.5)$$

Hence, the Brownian motion with drift is not a martingale. However,
the process

$$z(t) = y(t) - \mu t \qquad (4.5.6)$$

is a martingale, since

$$E[z(t + dt)] = z(t) \qquad (4.5.7)$$

This result also follows from the Doob-Meyer decomposition theorem,
which states that a continuous submartingale X(t) at $0 \le t < \infty$ with
finite expectation $E[X(t)] < \infty$ can be decomposed into a continuous
martingale and an increasing deterministic process.
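
A small numerical check of (4.5.5)-(4.5.7), added here for illustration (not from the text; parameters arbitrary): over one step dt, the increments of y average to $\mu\,dt$, while the increments of $z = y - \mu t$ average to zero.

```python
import numpy as np

# A minimal sketch: y(t+dt) = y(t) + mu*dt + sigma*dW has expected increment
# mu*dt (not a martingale); z(t) = y(t) - mu*t has expected increment ~ 0.
rng = np.random.default_rng(4)

mu, sigma, dt, n_paths = 0.3, 0.2, 0.01, 100_000
y0 = 1.0

dy = mu * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=n_paths)
y_next = y0 + dy

print(y_next.mean() - y0, mu * dt)     # increment of y: ~ mu*dt = 0.003
print((y_next - mu * dt).mean() - y0)  # increment of z = y - mu*t: ~ 0
```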


4.6 REFERENCES FOR FURTHER READING
Theory and applications of stochastic processes in the natural
sciences are described in [1, 6]. A good introduction to stochastic
calculus in finance is given in [2]. For a mathematically inclined
reader, a presentation of the stochastic theory with an increasing
level of technical detail can be found in [7, 8].


4.7 EXERCISES
1. Simulate daily price returns using the geometric Brownian
motion (4.3.7) for four years. Use equation (4.2.15) for approximating
$\Delta W$. Assume that S(0) = 10, $\mu$ = 10%, and $\sigma$ = 20% ($\mu$ and
$\sigma$ are given per annum). Assume 250 working days per annum.
2. Prove that

$$\int_{t_1}^{t_2} W(s)^n\,dW(s) = \frac{1}{n+1}\left[W(t_2)^{n+1} - W(t_1)^{n+1}\right] - \frac{n}{2}\int_{t_1}^{t_2} W(s)^{n-1}\,ds$$

Hint: Calculate $d(W^{n+1})$ using Ito's lemma.

3. Solve the Ornstein-Uhlenbeck equation that describes the mean-reverting
process in which the solution fluctuates around its mean

$$dX = -\mu X\,dt + \sigma\,dW, \quad \mu > 0$$

Hint: introduce the variable $Y = X \exp(\mu t)$.
*4. Derive the result (4.4.13) directly from the definition of the
Ito integral (4.4.9).
Chapter 5


Time Series Analysis




Time series analysis has become an indispensable theoretical tool in
financial and economic research. Section 5.1 is devoted to the
commonly used univariate autoregressive and moving average models.
The means for modeling trends and seasonality effects are described
in Section 5.2. The processes with non-stationary variance (conditional
heteroskedasticity) are discussed in Section 5.3. Finally,
the specifics of the multivariate time series are introduced in
Section 5.4.


5.1 AUTOREGRESSIVE AND MOVING AVERAGE
MODELS
5.1.1 AUTOREGRESSIVE MODEL
First, we shall consider a univariate time series y(t) for a process
that is observed at moments t = 0, 1, ..., n (see, e.g., [1, 2]). The time
series in which the observation at moment t depends linearly on
several lagged observations at moments t − 1, t − 2, ..., t − p,

$$y(t) = a_1 y(t-1) + a_2 y(t-2) + \ldots + a_p y(t-p) + \varepsilon(t), \quad t > p \qquad (5.1.1)$$

is called the autoregressive process of order p, or AR(p). The term $\varepsilon(t)$ in
(5.1.1) is the white noise that satisfies the conditions (4.2.6). The lag
operator, $L^p y(t) = y(t-p)$, is often used for describing time series. Note that
$L^0 y(t) = y(t)$. Equation (5.1.1) in terms of the lag operator has the form

$$A_p(L)\,y(t) = \varepsilon(t) \qquad (5.1.2)$$

where

$$A_p(L) = 1 - a_1 L - a_2 L^2 - \ldots - a_p L^p \qquad (5.1.3)$$
The operator $A_p(L)$ is called the AR polynomial in the lag operator of
order p. Let us consider AR(1) that starts with a random shock. Its
definition implies that

$$y(0) = \varepsilon(0), \quad y(1) = a_1 y(0) + \varepsilon(1), \quad
y(2) = a_1 y(1) + \varepsilon(2) = a_1^2\,\varepsilon(0) + a_1\,\varepsilon(1) + \varepsilon(2), \;\ldots$$

Hence, by induction,

$$y(t) = \sum_{i=0}^{t} a_1^{\,i}\,\varepsilon(t - i) \qquad (5.1.4)$$

Mean and variance of AR(1) equal, respectively,

$$E[y(t)] = 0, \quad Var[y(t)] = \sigma^2 / (1 - a_1^2) \qquad (5.1.5)$$
Obviously, the contributions of the "old" noise converge with time to
zero when $|a_1| < 1$. As a result, this process does not drift too far from
its mean. This feature is named mean reversion.
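
A brief simulation sketch, added here for illustration (not from the text; parameters arbitrary), of the AR(1) properties (5.1.4)-(5.1.5): with $|a_1| < 1$, the sample mean stays near zero and the sample variance approaches $\sigma^2/(1 - a_1^2)$.

```python
import numpy as np

# A minimal sketch: simulate AR(1), y(t) = a1*y(t-1) + eps(t), with |a1| < 1
# and check the zero mean and the variance sigma^2 / (1 - a1^2) from (5.1.5).
rng = np.random.default_rng(5)

a1, sigma, n = 0.8, 1.0, 200_000
eps = rng.normal(0.0, sigma, size=n)

y = np.zeros(n)
for t in range(1, n):
    y[t] = a1 * y[t - 1] + eps[t]

print(y.mean())                         # ~ 0
print(y.var(), sigma**2 / (1 - a1**2))  # both ~ 2.78
```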
The process with $a_1 = 1$ is called the random walk:

$$y(t) = y(t-1) + \varepsilon(t) \qquad (5.1.6)$$

In this case, equation (5.1.4) reduces to

$$y(t) = \sum_{i=0}^{t} \varepsilon(t - i)$$

The noise contributions to the random walk do not weaken with time.
Therefore, the random walk does not exhibit mean reversion. Now,
consider the process that represents the first difference

$$x(t) = y(t) - y(t-1) = \varepsilon(t) \qquad (5.1.7)$$
Obviously, past noise has only a transitory character for the process
x(t). Therefore, x(t) is mean-reverting. Some processes must be
differenced several times in order to exclude non-transitory noise
shocks. The processes differenced d times are named integrated of
order d and denoted as I(d). The differencing operator is used for
describing an I(d) process:

$$\Delta_i^{\,d} = (1 - L^i)^d, \quad i, d = \ldots, -2, -1, 0, 1, 2, \ldots \qquad (5.1.8)$$
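
A short sketch of the integration order, added here for illustration (not from the text): a random walk is I(1), so one application of the differencing operator $(1 - L)$ recovers the white-noise shocks.

```python
import numpy as np

# A minimal sketch: differencing a random walk once, x(t) = (1 - L) y(t),
# returns the original white-noise shocks, so x(t) is stationary.
rng = np.random.default_rng(6)

n = 100_000
eps = rng.normal(0.0, 1.0, size=n)
y = np.cumsum(eps)                 # random walk: y(t) = y(t-1) + eps(t)

x = np.diff(y)                     # x(t) = y(t) - y(t-1) = eps(t)
print(np.allclose(x, eps[1:]))     # True: differencing removes the unit root
print(x.var())                     # ~ 1, the white-noise variance
```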
