
MBA level, much of the material pertinent to this chapter is given
in [3].

EXERCISES
1. Familiarize yourself with the financial market data available on the Internet (e.g., http://www.finance.yahoo.com). Download the weekly closing prices of the exchange-traded fund SPDR that replicates the S&P 500 index (ticker SPY) for 1996–2003. Calculate simple weekly returns for this data sample (we shall use these data for other exercises).
2. Calculate the present value of SPY for 2004 if the asset risk premium equals (a) 3% and (b) 4%. The SPY dividends in 2003 were $1.63. Assume a dividend growth rate of 5% (see Exercise 5.3 for a more accurate estimate) and a risk-free rate of 3%. What risk premium was priced into SPY at the end of 2004 according to the discounted-cash-flow theory?
3. Simulate the rational bubble using the Blanchard-Watson model (2.2.18). Define ε(t) = P_U(t) − 0.5, where P_U is the standard uniform distribution (explain why the relation ε(t) = P_U(t) cannot be used). Use p = 0.75 and R = 0.1 as the initial values for studying the model's sensitivity to the input parameters (a simulation sketch is given after this list).
4. Is there an arbitrage opportunity for the following set of exchange rates: GBP/USD = 1.7705, EUR/USD = 1.1914, EUR/GBP = 0.6694?
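For Exercise 3, a minimal simulation sketch is given below. It assumes the Blanchard-Watson form of (2.2.18), in which the bubble survives with probability p and grows as B(t+1) = (1 + R)B(t)/p + ε(t+1), and collapses to B(t+1) = ε(t+1) otherwise; the function name and the number of steps are illustrative.

```python
# A sketch of the Blanchard-Watson rational bubble, assuming the form of
# (2.2.18) stated in the lead-in. The shock e(t) = U(t) - 0.5 has zero mean,
# which the rational-bubble construction requires; this is why e(t) = U(t)
# itself cannot be used.
import random

def blanchard_watson(p=0.75, R=0.1, steps=200, seed=1):
    rng = random.Random(seed)
    B = 0.0
    path = [B]
    for _ in range(steps):
        e = rng.random() - 0.5          # zero-mean uniform shock on [-0.5, 0.5]
        if rng.random() < p:            # bubble survives; it must grow faster
            B = (1.0 + R) / p * B + e   # than (1 + R) to compensate for crashes
        else:                           # bubble bursts
            B = e
        path.append(B)
    return path

print(blanchard_watson()[:10])
```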
Chapter 3


Probability Distributions




This chapter begins with the basic notions of mathematical statistics that form the framework for analysis of financial data (see, e.g., [1–3]). In Section 3.2, a number of distributions widely used in statistical data analysis are listed. The stable distributions that have become popular in Econophysics research are discussed in Section 3.3.


3.1 BASIC DEFINITIONS
Consider the random variable (or variate) X. The probability density function P(x) defines the probability to find X between a and b:

$$\Pr(a \le X \le b) = \int_a^b P(x)\,dx \qquad (3.1.1)$$

The probability density must be a non-negative function and must
satisfy the normalization condition

$$\int_{X_{min}}^{X_{max}} P(x)\,dx = 1 \qquad (3.1.2)$$

where the interval [X_min, X_max] is the range of all possible values of X. In fact, the infinite limits [−∞, ∞] can always be used since P(x) may be set to zero outside the interval [X_min, X_max]. As a rule, the infinite integration limits are further omitted.
Another way of describing a random variable is to use the cumulative distribution function

$$\Pr(X \le b) = \int_{-\infty}^{b} P(x)\,dx \qquad (3.1.3)$$

Obviously, probability satisfies the condition
$$\Pr(X > b) = 1 - \Pr(X \le b) \qquad (3.1.4)$$
Two characteristics are used to describe the probable values of a random variable X: mean (or expectation) and median. The mean of X is the average of all possible values of X weighted with the probability density P(x):

$$m \equiv E[X] = \int x P(x)\,dx \qquad (3.1.5)$$

The median of X is the value M for which

$$\Pr(X > M) = \Pr(X < M) = 0.5 \qquad (3.1.6)$$
The median is the preferable characteristic of the most probable value for strongly skewed data samples. Consider a sample of lottery tickets that has one "lucky" ticket winning one million dollars and 999 "losers." The mean win in this sample is $1000, which does not realistically describe the lottery outcome. The median value of zero is a much more relevant characteristic in this case.
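A few lines of Python make the arithmetic of this example explicit (the sample contains 1,000 tickets in total):

```python
# Mean versus median for the lottery example: one $1,000,000 winner among
# 1,000 tickets gives a mean win of $1,000 but a median win of $0.
import statistics

wins = [1_000_000] + [0] * 999
print(statistics.mean(wins))    # 1000
print(statistics.median(wins))  # 0
```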
The expectation of a random variable calculated using some available information I_t (that may change with time t) is called conditional expectation. The conditional probability density is denoted by P(x|I_t). Conditional expectation equals

$$E[X_t \,|\, I_t] = \int x P(x \,|\, I_t)\,dx \qquad (3.1.7)$$

Variance, Var, and the standard deviation, σ, are the conventional estimates of the deviations from the mean value of X

$$\mathrm{Var}[X] \equiv \sigma^2 = \int (x - m)^2 P(x)\,dx \qquad (3.1.8)$$
In the financial literature, the standard deviation of price is used to characterize the price volatility.
The higher-order moments of the probability distributions are
defined as
$$m_n \equiv E[X^n] = \int x^n P(x)\,dx \qquad (3.1.9)$$

According to this definition, mean is the first moment (m ≡ m_1), and variance can be expressed via the first two moments, σ² = m_2 − m_1².
Two other important parameters, skewness S and kurtosis K, are related to the third and fourth moments, respectively:

$$S = E[(x - m)^3]/\sigma^3, \quad K = E[(x - m)^4]/\sigma^4 \qquad (3.1.10)$$
Both parameters, S and K, are dimensionless. Zero skewness implies that the distribution is symmetric around its mean value. Positive and negative values of skewness indicate long positive tails and long negative tails, respectively. Kurtosis characterizes the distribution peakedness. Kurtosis of the normal distribution equals three. The excess kurtosis, Ke = K − 3, is often used as a measure of deviation from the normal distribution. In particular, positive excess kurtosis (or leptokurtosis) indicates more frequent small and large deviations from the mean value than is typical for the normal distribution, at the expense of medium-sized ones. Leptokurtosis leads to a sharper central peak and to so-called fat tails in the distribution. Negative excess kurtosis indicates that deviations from the mean are mostly of medium size. In this case, the distribution flattens around its mean value while the distribution tails decay faster than the tails of the normal distribution.
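As an illustration, the moment-based shape parameters of (3.1.10) can be estimated directly from a data sample; a minimal sketch (the function name is ours):

```python
# Sample skewness S = E[(x - m)^3]/s^3 and excess kurtosis
# Ke = E[(x - m)^4]/s^4 - 3, following (3.1.10).
import numpy as np

def skewness_kurtosis(x):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()                               # population standard deviation
    S = np.mean((x - m) ** 3) / s ** 3
    Ke = np.mean((x - m) ** 4) / s ** 4 - 3.0
    return S, Ke

rng = np.random.default_rng(0)
print(skewness_kurtosis(rng.normal(size=100_000)))  # both close to 0
```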
The joint distribution of two random variables X and Y is the generalization of the cumulative distribution (see 3.1.3):

$$\Pr(X \le b,\ Y \le c) = \int_{-\infty}^{b} \int_{-\infty}^{c} h(x, y)\,dx\,dy \qquad (3.1.11)$$

In (3.1.11), h(x, y) is the joint density that satisfies the normalization
condition
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y)\,dx\,dy = 1 \qquad (3.1.12)$$
Two random variables are independent if their joint density function is simply the product of the univariate density functions: h(x, y) = f(x)g(y). Covariance between two variates provides a measure of their simultaneous change. Consider two variates, X and Y, that have the means m_X and m_Y, respectively. Their covariance equals

$$\mathrm{Cov}(x, y) \equiv \sigma_{XY} = E[(x - m_X)(y - m_Y)] = E[xy] - m_X m_Y \qquad (3.1.13)$$

Obviously, covariance reduces to variance if X = Y: σ_XX = σ_X².
Positive covariance between two variates implies that these variates tend to change simultaneously in the same direction rather than in opposite directions. Conversely, negative covariance between two variates implies that when one variate grows, the second one tends to fall, and vice versa. Another popular measure of simultaneous change is the correlation coefficient

$$\mathrm{Corr}(x, y) = \mathrm{Cov}(x, y)/(\sigma_X \sigma_Y) \qquad (3.1.14)$$

The values of the correlation coefficient are within the range [−1, 1].
In the general case with N variates X_1, ..., X_N (where N > 2), correlations among variates are described with the covariance matrix. Its elements equal

$$\mathrm{Cov}(x_i, x_j) \equiv \sigma_{ij} = E[(x_i - m_i)(x_j - m_j)] \qquad (3.1.15)$$
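A short sketch of estimating the covariance matrix (3.1.15) and the correlation coefficient (3.1.14) from simulated data (the construction of y is purely illustrative):

```python
# Covariance matrix and correlation coefficient for two simulated variates.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)      # built to co-move with x

cov = np.cov(x, y)                         # 2x2 covariance matrix, sigma_ij
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(cov)
print(corr, np.corrcoef(x, y)[0, 1])       # the two estimates agree
```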


3.2 IMPORTANT DISTRIBUTIONS
There are several important probability distributions used in quantitative finance. The uniform distribution has a constant value within the given interval [a, b] and equals zero outside this interval:

$$P_U(x) = \begin{cases} 0, & x < a \ \text{or} \ x > b \\ 1/(b - a), & a \le x \le b \end{cases} \qquad (3.2.1)$$
The uniform distribution has the following mean and higher-order moments:

$$m_U = 0.5(a + b), \quad \sigma_U^2 = (b - a)^2/12, \quad S_U = 0, \quad Ke_U = -6/5 \qquad (3.2.2)$$
The case with a = 0 and b = 1 is called the standard uniform distribution. Many computer languages and software packages have a library function for generating the standard uniform distribution.
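For instance, the moments in (3.2.2) are easy to verify with such a library function; a quick numerical check for the standard uniform case (a = 0, b = 1):

```python
# Sample moments of the standard uniform distribution: the variance should
# approach 1/12 and the excess kurtosis -6/5, per (3.2.2).
import numpy as np

u = np.random.default_rng(0).uniform(size=1_000_000)
m = u.mean()
var = u.var()
Ke = np.mean((u - m) ** 4) / u.std() ** 4 - 3.0
print(m, var, Ke)   # approx 0.5, 0.0833, -1.2
```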
The binomial distribution is a discrete distribution of obtaining n successes out of N trials, where the result of each trial is true with probability p and is false with probability q = 1 − p (so-called Bernoulli trials):

$$P_B(n;\, N, p) = C_N^n p^n q^{N-n} = C_N^n p^n (1 - p)^{N-n}, \quad C_N^n = \frac{N!}{n!(N - n)!} \qquad (3.2.3)$$
The factor C_N^n is called the binomial coefficient. Mean and higher-order moments for the binomial distribution are equal, respectively, to

$$m_B = Np, \quad \sigma_B^2 = Np(1 - p), \quad S_B = (q - p)/\sigma_B, \quad Ke_B = (1 - 6pq)/\sigma_B^2 \qquad (3.2.4)$$
In the case of large N and large (N − n), the binomial distribution approaches the form

$$P_B(n) = \frac{1}{\sqrt{2\pi}\,\sigma_B} \exp[-(n - m_B)^2/2\sigma_B^2], \quad N \to \infty, \ (N - n) \to \infty \qquad (3.2.5)$$

which coincides with the normal (or Gaussian) distribution (see 3.2.9). In the case with p ≪ 1, the binomial distribution approaches the Poisson distribution.
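Both limits can be checked numerically; a sketch using scipy.stats (the parameter values are illustrative):

```python
# The two binomial limits: for large N the binomial pmf (3.2.3) approaches
# the normal density (3.2.5); for small p it approaches the Poisson pmf (3.2.7).
import numpy as np
from scipy.stats import binom, norm, poisson

N, p = 1000, 0.5
n = np.arange(450, 551)
gauss = norm.pdf(n, loc=N * p, scale=np.sqrt(N * p * (1 - p)))
print(np.abs(binom.pmf(n, N, p) - gauss).max())                  # small

N, p = 1000, 0.005                                # p << 1, n_bar = N*p = 5
n = np.arange(0, 20)
print(np.abs(binom.pmf(n, N, p) - poisson.pmf(n, N * p)).max())  # small
```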
The Poisson distribution describes the probability of n successes in N trials, assuming that the mean number of successes n̄ is proportional to the number of trials: n̄ = pN:

$$P_P(n, N) = \frac{N!}{n!(N - n)!} \left(\frac{\bar{n}}{N}\right)^n \left(1 - \frac{\bar{n}}{N}\right)^{N-n} \qquad (3.2.6)$$
As the number of trials N becomes very large (N → ∞), equation (3.2.6) approaches the limit

$$P_P(n) = \bar{n}^n e^{-\bar{n}}/n! \qquad (3.2.7)$$
Mean, variance, skewness, and excess kurtosis of the Poisson distribution are equal, respectively, to

$$m_P = \sigma_P^2 = \bar{n}, \quad S_P = \bar{n}^{-1/2}, \quad Ke_P = \bar{n}^{-1} \qquad (3.2.8)$$

The normal (Gaussian) distribution has the form

$$P_N(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp[-(x - m)^2/2\sigma^2] \qquad (3.2.9)$$
It is often denoted N(m, σ). Skewness and excess kurtosis of the normal distribution equal zero. The transform z = (x − m)/σ converts the normal distribution into the standard normal distribution

$$P_{SN}(z) = \frac{1}{\sqrt{2\pi}} \exp[-z^2/2] \qquad (3.2.10)$$
Note that the probability for the standard normal variate to assume a value in the interval [0, z] can be used as the definition of the error function erf(x):

$$\frac{1}{\sqrt{2\pi}} \int_0^z \exp(-x^2/2)\,dx = 0.5\,\mathrm{erf}(z/\sqrt{2}) \qquad (3.2.11)$$

Then the cumulative distribution function for the standard normal
distribution equals
$$\mathrm{Pr}_{SN}(z) = 0.5\,[1 + \mathrm{erf}(z/\sqrt{2})] \qquad (3.2.12)$$
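Since erf is available in standard math libraries, (3.2.12) translates directly into code; a minimal Python version:

```python
# The cumulative standard normal distribution via the error function, (3.2.12).
import math

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(std_normal_cdf(0.0))   # 0.5
print(std_normal_cdf(1.96))  # about 0.975
```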
According to the central limit theorem, the probability density distri-
bution for a sum of N independent random variables with finite
variances and finite means approaches the normal distribution as N
grows to infinity. Due to exponential decay of the normal distribu-
tion, large deviations from its mean rarely appear. The normal distri-
bution plays an extremely important role in all kinds of applications.
The Box-Muller method is often used for modeling the normal distribution with a given uniform distribution [4]. Namely, if two numbers x_1 and x_2 are drawn from the standard uniform distribution, then y_1 and y_2 are the standard normal variates

$$y_1 = [-2 \ln x_1]^{1/2} \cos(2\pi x_2), \quad y_2 = [-2 \ln x_1]^{1/2} \sin(2\pi x_2) \qquad (3.2.13)$$
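A minimal implementation of (3.2.13), with the draw of x_1 shifted away from zero to avoid log(0):

```python
# The Box-Muller transform: two standard uniform draws x1, x2 yield two
# independent standard normal variates y1, y2, per (3.2.13).
import math
import random

def box_muller(rng=random):
    x1 = 1.0 - rng.random()   # in (0, 1]; avoids log(0)
    x2 = rng.random()
    r = math.sqrt(-2.0 * math.log(x1))
    return r * math.cos(2.0 * math.pi * x2), r * math.sin(2.0 * math.pi * x2)

sample = [box_muller()[0] for _ in range(100_000)]
mean = sum(sample) / len(sample)
var = sum((y - mean) ** 2 for y in sample) / len(sample)
print(mean, var)   # close to 0 and 1, as expected for N(0, 1)
```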
Mean and variance of the multivariate normal distribution with N variates can be easily calculated via the univariate means m_i and covariances σ_ij

$$m_N = \sum_{i=1}^{N} m_i, \quad \sigma_N^2 = \sum_{i,j=1}^{N} \sigma_{ij} \qquad (3.2.14)$$
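Reading (3.2.14) as the mean and variance of the sum of the N variates, a sampling check with an illustrative 3-variate covariance matrix:

```python
# For a sum of N correlated normal variates, the mean is the sum of the
# means and the variance is the sum of all covariance-matrix elements.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.1, -0.2, 0.3])
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 2.0, 0.4],
                [0.1, 0.4, 1.5]])
x = rng.multivariate_normal(mu, cov, size=500_000)
s = x.sum(axis=1)              # the sum of the N = 3 variates
print(s.mean(), mu.sum())      # the two means agree
print(s.var(), cov.sum())      # variance = sum over all sigma_ij
```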

The lognormal distribution is a distribution in which the logarithm of a variate has the normal form

$$P_{LN}(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp[-(\ln x - m)^2/2\sigma^2] \qquad (3.2.15)$$
Mean, variance, skewness, and excess kurtosis of the lognormal distribution can be expressed in terms of the parameters σ and m:

$$m_{LN} = \exp(m + 0.5\sigma^2), \quad \sigma_{LN}^2 = [\exp(\sigma^2) - 1] \exp(2m + \sigma^2),$$
$$S_{LN} = [\exp(\sigma^2) - 1]^{1/2} [\exp(\sigma^2) + 2], \quad Ke_{LN} = \exp(4\sigma^2) + 2\exp(3\sigma^2) + 3\exp(2\sigma^2) - 6 \qquad (3.2.16)$$
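A quick sampling check of the first two formulas in (3.2.16) (parameter values are illustrative):

```python
# Lognormal mean and variance: exponentiate a normal variate with
# parameters m and sigma and compare sample moments with (3.2.16).
import numpy as np

m, sigma = 0.1, 0.4
x = np.exp(np.random.default_rng(0).normal(m, sigma, size=1_000_000))
print(x.mean(), np.exp(m + 0.5 * sigma**2))
print(x.var(), (np.exp(sigma**2) - 1) * np.exp(2 * m + sigma**2))
```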
The Cauchy distribution (Lorentzian) is an example of a stable distribution (see the next section). It has the form

$$P_C(x) = \frac{b}{\pi [b^2 + (x - m)^2]} \qquad (3.2.17)$$
A specific feature of the Cauchy distribution is that all its moments are infinite. The case with b = 1 and m = 0 is named the standard Cauchy distribution

$$P_C(x) = \frac{1}{\pi [1 + x^2]} \qquad (3.2.18)$$
Figure 3.1 depicts the distribution of the weekly returns of the exchange-traded fund SPDR that replicates the S&P 500 index (ticker SPY) for 1996–2003 in comparison with the standard normal and the standard Cauchy distributions (see Exercise 3).
The extreme value distributions can be introduced with the Fisher-Tippett theorem. According to this theorem, if the cumulative distribution function F(x) = Pr(X ≤ x) for a random variable X exists, then the cumulative distribution of the maximum values of X, H_ξ(x) = Pr(X_max ≤ x), has the following asymptotic form

$$H_\xi(x) = \begin{cases} \exp[-(1 + \xi(x - m_{max})/\sigma_{max})^{-1/\xi}], & \xi \ne 0 \\ \exp[-\exp(-(x - m_{max})/\sigma_{max})], & \xi = 0 \end{cases} \qquad (3.2.19)$$

where 1 + ξ(x − m_max)/σ_max > 0 in the case with ξ ≠ 0. In (3.2.19), m_max and σ_max are the location and scale parameters, respectively; ξ is the shape parameter and 1/ξ is named the tail index.
Figure 3.1 The standardized distribution of the weekly returns of the S&P 500 SPDR (SPY) for 1996–2003 in comparison with the standard normal and the standard Cauchy distributions.
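A sketch of how such a comparison can be assembled, assuming the weekly returns from Exercise 1 have been saved to a hypothetical file spy_weekly_returns.txt (one return per line); the returns are standardized and binned against the normal and Cauchy densities:

```python
# Empirical density of standardized returns versus the standard normal
# (3.2.10) and standard Cauchy (3.2.18) densities, in the spirit of Fig. 3.1.
import numpy as np
from scipy.stats import cauchy, norm

returns = np.loadtxt("spy_weekly_returns.txt")   # hypothetical file, see Exercise 1
z = (returns - returns.mean()) / returns.std()   # standardized returns

hist, edges = np.histogram(z, bins=50, range=(-6, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Columns: bin center, empirical density, normal density, Cauchy density
print(np.column_stack((centers, hist, norm.pdf(centers), cauchy.pdf(centers)))[:5])
```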




The Fisher-Tippett theorem does not define the values of the parameters m_max and σ_max. However, special methods have been developed for their estimation [5].
It is said that the cumulative distribution function F(x) is in the domain of attraction of H_ξ(x). The tail behavior of the distribution F(x) defines the shape parameter. The Gumbel distribution corresponds to the case with ξ = 0. Distributions with thin tails, such as the normal, lognormal, and exponential distributions, have the Gumbel domain of attraction. The case with ξ > 0 is named the Frechet distribution. The Frechet domain of attraction corresponds to distributions with fat tails, such as the Cauchy distribution and the Pareto distribution (see the next section). Finally, the case with ξ < 0 defines the Weibull distribution. Distributions of this type (e.g., the uniform distribution) have a finite tail.
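The limiting cdf (3.2.19) is straightforward to evaluate directly; a minimal sketch covering all three cases (ξ > 0 Frechet, ξ = 0 Gumbel, ξ < 0 Weibull):

```python
# Direct implementation of the extreme-value cdf H_xi(x) in (3.2.19).
import math

def gev_cdf(x, xi, m_max=0.0, s_max=1.0):
    z = (x - m_max) / s_max
    if xi == 0.0:                        # Gumbel limit
        return math.exp(-math.exp(-z))
    t = 1.0 + xi * z
    if t <= 0.0:                         # outside the support of H_xi
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

print(gev_cdf(1.0, 0.0), gev_cdf(1.0, 0.5), gev_cdf(1.0, -0.5))
```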
3.3 STABLE DISTRIBUTIONS AND SCALE INVARIANCE
The principal property of a stable distribution is that the sum of variates has the same distribution shape as that of the addends (see, e.g., [6] for details). Both the Cauchy distribution and the normal distribution are stable. This means, in particular, that the sum of two normal variates with the same mean and variance is also a normal variate (see Exercise 2). The general definition of the stable distributions was given by Levy. Therefore, the stable distributions are also called the Levy distributions.
Consider the Fourier transform F(q) of the probability distribution
function f(x)
$$F(q) = \int f(x)\, e^{iqx}\,dx \qquad (3.3.1)$$

The function F(q) is also called the characteristic function of the
stochastic process. It can be shown that the logarithm of the charac-
teristic function for the Levy distribution has the following form
$$\ln F_L(q) = \begin{cases} imq - \gamma |q|^\alpha \,[1 - i\beta\delta \tan(\pi\alpha/2)], & \text{if } \alpha \ne 1 \\ imq - \gamma |q| \,[1 + 2i\beta\delta \ln(|q|)/\pi], & \text{if } \alpha = 1 \end{cases} \qquad (3.3.2)$$
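For illustration, (3.3.2) can be evaluated numerically. The factor δ is left as an explicit argument because its definition falls outside this excerpt; a common convention, assumed here, is δ = q/|q|:

```python
# Log-characteristic function of the Levy distribution, per (3.3.2).
import math

def log_levy_cf(q, alpha, beta, gamma, m, delta):
    """delta is passed in explicitly; a common convention is delta = q/|q|."""
    if alpha != 1.0:
        return 1j * m * q - gamma * abs(q) ** alpha * \
            (1.0 - 1j * beta * delta * math.tan(math.pi * alpha / 2.0))
    return 1j * m * q - gamma * abs(q) * \
        (1.0 + 2.0j * beta * delta * math.log(abs(q)) / math.pi)

# alpha = 2, beta = 0 recovers the normal case: ln F = imq - gamma*q^2
print(log_levy_cf(1.5, 2.0, 0.0, 1.0, 0.0, 1.0))   # about -2.25+0j
```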
