MBA level, much of the material pertinent to this chapter is given
in [3].

EXERCISES
1. Familiarize yourself with the financial market data available on weekly closing prices of the exchange-traded fund SPDR that replicates the S&P 500 index (ticker SPY) for 1996–2003. Calculate simple weekly returns for this data sample (we shall use these data for other exercises).
2. Calculate the present value of SPY for 2004 if the asset risk premium is equal to (a) 3% and (b) 4%. The SPY dividends in 2003 were $1.63. Assume a dividend growth rate of 5% (see Exercise 5.3 for a more accurate estimate). Assume a risk-free rate of 3%. What risk premium was priced into SPY at the end of 2004 according to the discounted-cash-flow theory?
3. Simulate the rational bubble using the Blanchard-Watson model (2.2.18). Define e(t) = P_U(t) − 0.5 where P_U is the standard uniform distribution (explain why the relation e(t) = P_U(t) cannot be used). Use p = 0.75 and R = 0.1 as the initial values for studying the model sensitivity to the input parameters.
4. Is there an arbitrage opportunity for the following set of exchange rates: GBP/USD = 1.7705, EUR/USD = 1.1914, EUR/GBP = 0.6694?
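For Exercise 4, the check reduces to comparing the quoted EUR/GBP cross rate with the rate implied by the two USD quotes. A minimal sketch in Python (the rates are taken from the exercise; treating any mismatch as an arbitrage opportunity ignores transaction costs):

```python
# Triangular-arbitrage check for the quotes in Exercise 4.
gbp_usd = 1.7705   # USD per GBP
eur_usd = 1.1914   # USD per EUR
eur_gbp = 0.6694   # GBP per EUR (quoted cross rate)

# Cross rate implied by the two USD quotes: GBP per EUR.
implied_eur_gbp = eur_usd / gbp_usd
mismatch = eur_gbp - implied_eur_gbp

print(f"implied EUR/GBP = {implied_eur_gbp:.4f}, quoted = {eur_gbp:.4f}")
# A nonzero mismatch signals an arbitrage opportunity: buy EUR via the
# cheaper route and sell it via the dearer one.
print(f"mismatch = {mismatch:+.4f}")
```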
Chapter 3

Probability Distributions

This chapter begins with the basic notions of mathematical statistics that form the framework for analysis of financial data (see, e.g., [1–3]). In Section 3.2, a number of distributions widely used in statistical data analysis are listed. The stable distributions that have become popular in Econophysics research are discussed in Section 3.3.

3.1 BASIC DEFINITIONS
Consider the random variable (or variate) X. The probability density function P(x) defines the probability of finding X between a and b:

Pr(a ≤ X ≤ b) = ∫_a^b P(x) dx    (3.1.1)

The probability density must be a non-negative function and must satisfy the normalization condition

∫_{X_min}^{X_max} P(x) dx = 1    (3.1.2)

where the interval [X_min, X_max] is the range of all possible values of X.
In fact, the infinite limits [−∞, ∞] can always be used since P(x) may be set to zero outside the interval [X_min, X_max]. As a rule, the infinite integration limits are omitted in what follows.
Another way of describing a random variable is to use the cumulative distribution function

Pr(X ≤ b) = ∫_{−∞}^b P(x) dx    (3.1.3)

Obviously, probability satisfies the condition

Pr(X > b) = 1 − Pr(X ≤ b)    (3.1.4)
Two characteristics are used to describe the probable values of a random variable X: mean (or expectation) and median. The mean of X is the average of all possible values of X weighted with the probability density P(x):

m ≡ E[X] = ∫ x P(x) dx    (3.1.5)
The median of X is the value, M, for which

Pr(X > M) = Pr(X < M) = 0.5    (3.1.6)
The median is the preferable characteristic of the most probable value for strongly skewed data samples. Consider a sample of lottery tickets that has one "lucky" ticket winning one million dollars and 999 "losers." The mean win in this sample is $1000, which does not realistically describe the lottery outcome. The median, zero, is a much more relevant characteristic in this case.
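The lottery example can be reproduced directly; this sketch just builds the 1000-ticket sample from the text and compares the two location measures:

```python
from statistics import mean, median

# One "lucky" ticket winning $1,000,000 and 999 "losers".
winnings = [1_000_000] + [0] * 999

print(mean(winnings))    # the mean win is $1000 -- a misleading "typical" win
print(median(winnings))  # the median is zero -- the realistic outcome
```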
The expectation of a random variable calculated using some available information I_t (that may change with time t) is called the conditional expectation. The conditional probability density is denoted by P(x|I_t). The conditional expectation equals

E[X_t | I_t] = ∫ x P(x|I_t) dx    (3.1.7)
Variance, Var, and the standard deviation, σ, are the conventional estimates of the deviations from the mean value of X:

Var[X] ≡ σ² = ∫ (x − m)² P(x) dx    (3.1.8)

In financial literature, the standard deviation of price is used to
characterize the price volatility.
The higher-order moments of a probability distribution are defined as

m_n ≡ E[X^n] = ∫ x^n P(x) dx    (3.1.9)

According to this definition, the mean is the first moment (m ≡ m_1), and the variance can be expressed via the first two moments: σ² = m_2 − m².
Two other important parameters, skewness S and kurtosis K, are related to the third and fourth moments, respectively:

S = E[(x − m)³]/σ³,   K = E[(x − m)⁴]/σ⁴    (3.1.10)
Both parameters, S and K, are dimensionless. Zero skewness implies that the distribution is symmetric around its mean value. Positive and negative values of skewness indicate long positive tails and long negative tails, respectively. Kurtosis characterizes the peakedness of the distribution. The kurtosis of the normal distribution equals three. The excess kurtosis, K_e = K − 3, is often used as a measure of deviation from the normal distribution. In particular, positive excess kurtosis (leptokurtosis) indicates more frequent large deviations from the mean value than is typical for the normal distribution. Leptokurtosis leads to a sharper central peak and to so-called fat tails in the distribution. Negative excess kurtosis indicates that large deviations from the mean are rarer than for the normal distribution: the distribution flattens around its mean value and its tails decay faster than the tails of the normal distribution.
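Definitions (3.1.10) translate directly into code. A sketch computing sample skewness and excess kurtosis (plain moment estimators, with no small-sample bias corrections), checked on a symmetric and a strongly skewed toy sample of our own choosing:

```python
import math

def skewness(xs):
    # S = E[(x - m)^3] / sigma^3, see (3.1.10)
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return sum((x - m) ** 3 for x in xs) / len(xs) / s ** 3

def excess_kurtosis(xs):
    # Ke = E[(x - m)^4] / sigma^4 - 3, see (3.1.10)
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 4 for x in xs) / len(xs) / var ** 2 - 3

symmetric = [-2, -1, 0, 1, 2]
print(skewness(symmetric))         # zero skewness for a symmetric sample
right_skewed = [0, 0, 0, 0, 10]
print(skewness(right_skewed) > 0)  # long positive tail -> positive S
```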
The joint distribution of two random variables X and Y is the generalization of the cumulative distribution (see 3.1.3):

Pr(X ≤ b, Y ≤ c) = ∫_{−∞}^b ∫_{−∞}^c h(x, y) dx dy    (3.1.11)

In (3.1.11), h(x, y) is the joint density that satisfies the normalization condition

∫_{−∞}^{∞} ∫_{−∞}^{∞} h(x, y) dx dy = 1    (3.1.12)

Two random variables are independent if their joint density function is simply the product of the univariate density functions: h(x, y) = f(x)g(y). Covariance between two variates provides a measure of their simultaneous change. Consider two variates, X and Y, that have the means m_X and m_Y, respectively. Their covariance equals

Cov(x, y) ≡ σ_XY = E[(x − m_X)(y − m_Y)] = E[xy] − m_X m_Y    (3.1.13)

Obviously, covariance reduces to variance if X = Y: σ_XX = σ_X². Positive covariance between two variates implies that these variates tend to change simultaneously in the same direction rather than in opposite directions. Conversely, negative covariance between two variates implies that when one variate grows, the second one tends to fall, and vice versa. Another popular measure of simultaneous change is the correlation coefficient

Corr(x, y) = Cov(x, y)/(σ_X σ_Y)    (3.1.14)
The values of the correlation coefficient are within the range [−1, 1]. In the general case with N variates X_1, ..., X_N (where N > 2), correlations among variates are described with the covariance matrix, whose elements equal

Cov(x_i, x_j) ≡ σ_ij = E[(x_i − m_i)(x_j − m_j)]    (3.1.15)
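Definitions (3.1.13) and (3.1.14) can be sketched without any statistics library; the toy data below are our own illustration of perfectly correlated and anti-correlated variates:

```python
import math

def cov(xs, ys):
    # Cov(x, y) = E[xy] - mX * mY, see (3.1.13)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum(x * y for x, y in zip(xs, ys)) / n - mx * my

def corr(xs, ys):
    # Corr(x, y) = Cov(x, y) / (sigmaX * sigmaY), see (3.1.14)
    return cov(xs, ys) / math.sqrt(cov(xs, xs) * cov(ys, ys))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]           # y = 2x: the variates move together
print(corr(x, y))                  # correlation at the upper bound, 1
print(corr(x, [-v for v in y]))    # anti-correlated: the lower bound, -1
```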

3.2 IMPORTANT DISTRIBUTIONS
There are several important probability distributions used in quantitative finance. The uniform distribution has a constant value within a given interval [a, b] and equals zero outside this interval:

P_U(x) = 0 for x < a or x > b;   P_U(x) = 1/(b − a) for a ≤ x ≤ b    (3.2.1)
The uniform distribution has the following mean and higher-order moments:

m_U = (a + b)/2,   σ_U² = (b − a)²/12,   S_U = 0,   K_eU = −6/5    (3.2.2)

The case with a = 0 and b = 1 is called the standard uniform distribution. Many computer languages and software packages have a library function for generating standard uniform random numbers.

The binomial distribution is a discrete distribution of obtaining n successes out of N trials, where the result of each trial is true with probability p and false with probability q = 1 − p (so-called Bernoulli trials):

P_B(n; N, p) = C_N^n p^n q^(N−n) = C_N^n p^n (1 − p)^(N−n),   C_N^n = N!/[n!(N − n)!]    (3.2.3)

The factor C_N^n is called the binomial coefficient. Mean and higher-order moments for the binomial distribution equal, respectively,

m_B = Np,   σ_B² = Np(1 − p),   S_B = (q − p)/σ_B,   K_eB = (1 − 6pq)/σ_B²    (3.2.4)
In the case of large N and large (N − n), the binomial distribution approaches the form

P_B(n) = (1/(√(2π) σ_B)) exp[−(n − m_B)²/(2σ_B²)],   N → ∞, (N − n) → ∞    (3.2.5)

which coincides with the normal (or Gaussian) distribution (see 3.2.9). In the case with p ≪ 1, the binomial distribution approaches the Poisson distribution.
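The limit (3.2.5) is easy to check numerically. This sketch compares the exact binomial probability (3.2.3) with its normal approximation at the mean of the distribution; the values N = 1000 and p = 0.5 are our own illustration:

```python
import math

def binom_pmf(n, N, p):
    # P_B(n; N, p) = C(N, n) p^n (1 - p)^(N - n), see (3.2.3)
    return math.comb(N, n) * p ** n * (1 - p) ** (N - n)

def normal_approx(n, N, p):
    # Normal density with m_B = Np and sigma_B^2 = Np(1 - p), see (3.2.5)
    m, var = N * p, N * p * (1 - p)
    return math.exp(-(n - m) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

N, p = 1000, 0.5
n = N // 2  # at the mean, where the approximation is best
print(binom_pmf(n, N, p), normal_approx(n, N, p))  # the two nearly coincide
```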
The Poisson distribution describes the probability of n successes in N trials, assuming that the mean number of successes ν is proportional to the number of trials: ν = pN,

P_P(n; N) = (N!/[n!(N − n)!]) (ν/N)^n (1 − ν/N)^(N−n)    (3.2.6)

As the number of trials N becomes very large (N → ∞), equation (3.2.6) approaches the limit

P_P(n) = ν^n e^(−ν)/n!    (3.2.7)

Mean, variance, skewness, and excess kurtosis of the Poisson distribution equal, respectively,

m_P = σ_P² = ν,   S_P = ν^(−1/2),   K_eP = ν^(−1)    (3.2.8)
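The limit (3.2.7) can be verified against the finite-N form (3.2.6); the values ν = 3 and n = 2 are our own illustration:

```python
import math

def poisson_pmf(n, nu):
    # P_P(n) = nu^n e^(-nu) / n!, the N -> infinity limit (3.2.7)
    return nu ** n * math.exp(-nu) / math.factorial(n)

def binom_form(n, nu, N):
    # Finite-N form (3.2.6) with p = nu / N
    p = nu / N
    return math.comb(N, n) * p ** n * (1 - p) ** (N - n)

nu = 3.0
for N in (10, 100, 10_000):
    print(N, binom_form(2, nu, N))  # approaches the Poisson value below
print(poisson_pmf(2, nu))
```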

The normal (Gaussian) distribution has the form

P_N(x) = (1/(√(2π) σ)) exp[−(x − m)²/(2σ²)]    (3.2.9)

It is often denoted N(m, σ). Skewness and excess kurtosis of the normal distribution equal zero. The transform z = (x − m)/σ converts the normal distribution into the standard normal distribution

P_SN(z) = (1/√(2π)) exp[−z²/2]    (3.2.10)
Note that the probability for the standard normal variate to assume a value in the interval [0, z] can be used as the definition of the error function erf(x):

(1/√(2π)) ∫_0^z exp(−x²/2) dx = 0.5 erf(z/√2)    (3.2.11)

Then the cumulative distribution function for the standard normal distribution equals

Pr_SN(z) = 0.5[1 + erf(z/√2)]    (3.2.12)
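Relation (3.2.12) maps directly onto the error function available in most standard libraries; a minimal sketch:

```python
import math

def std_normal_cdf(z):
    # Pr_SN(z) = 0.5 * [1 + erf(z / sqrt(2))], see (3.2.12)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(std_normal_cdf(0.0))   # 0.5 by symmetry
print(std_normal_cdf(1.96))  # about 0.975, the familiar two-sided 95% level
```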
According to the central limit theorem, the probability density distribution for a sum of N independent random variables with finite variances and finite means approaches the normal distribution as N grows to infinity. Due to the exponential decay of the normal distribution, large deviations from its mean rarely appear. The normal distribution plays an extremely important role in all kinds of applications. The Box-Muller method is often used for generating normal variates from a given uniform distribution [4]. Namely, if two numbers x_1 and x_2 are drawn from the standard uniform distribution, then

y_1 = [−2 ln x_1]^(1/2) cos(2πx_2),   y_2 = [−2 ln x_1]^(1/2) sin(2πx_2)    (3.2.13)

are standard normal variates.
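Transform (3.2.13) is straightforward to implement. A sketch that draws standard normal variates from standard uniform ones and checks their sample moments (the seed and sample size are our own choices):

```python
import math
import random

def box_muller(rng):
    # Two standard uniforms -> two standard normal variates, see (3.2.13)
    x1, x2 = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(x1))
    return r * math.cos(2 * math.pi * x2), r * math.sin(2 * math.pi * x2)

rng = random.Random(42)
sample = [y for _ in range(50_000) for y in box_muller(rng)]
n = len(sample)
mean = sum(sample) / n
var = sum((y - mean) ** 2 for y in sample) / n
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")  # close to 0 and 1
```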
Mean and variance of the multivariate normal distribution with N variates can be easily calculated via the univariate means m_i and covariances σ_ij:

m_N = Σ_{i=1}^{N} m_i,   σ_N² = Σ_{i,j=1}^{N} σ_ij    (3.2.14)

The lognormal distribution is a distribution in which the logarithm of a variate has the normal form

P_LN(x) = (1/(xσ√(2π))) exp[−(ln x − m)²/(2σ²)]    (3.2.15)
Mean, variance, skewness, and excess kurtosis of the lognormal distribution can be expressed in terms of the parameters σ and m:

m_LN = exp(m + 0.5σ²),
σ_LN² = [exp(σ²) − 1] exp(2m + σ²),
S_LN = [exp(σ²) − 1]^(1/2) [exp(σ²) + 2],
K_eLN = exp(4σ²) + 2 exp(3σ²) + 3 exp(2σ²) − 6    (3.2.16)
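The first two formulas of (3.2.16) can be sanity-checked by exponentiating normal variates; the parameter values m = 0.1 and σ = 0.4 are our own illustration:

```python
import math
import random

m, sigma = 0.1, 0.4
mean_ln = math.exp(m + 0.5 * sigma ** 2)                        # (3.2.16)
var_ln = (math.exp(sigma ** 2) - 1) * math.exp(2 * m + sigma ** 2)

# A lognormal variate is exp of a normal one, so simulate it that way.
rng = random.Random(7)
xs = [math.exp(rng.gauss(m, sigma)) for _ in range(200_000)]
emp_mean = sum(xs) / len(xs)
emp_var = sum((x - emp_mean) ** 2 for x in xs) / len(xs)
print(f"theory: mean={mean_ln:.4f}, var={var_ln:.4f}")
print(f"sample: mean={emp_mean:.4f}, var={emp_var:.4f}")
```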
The Cauchy distribution (Lorentzian) is an example of a stable distribution (see the next section). It has the form

P_C(x) = b/{π[b² + (x − m)²]}    (3.2.17)

A specific feature of the Cauchy distribution is that all its moments are infinite. The case with b = 1 and m = 0 is named the standard Cauchy distribution

P_C(x) = 1/[π(1 + x²)]    (3.2.18)
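The standard Cauchy distribution (3.2.18) has the closed-form cumulative distribution F(x) = 0.5 + arctan(x)/π, so inverse-transform sampling is a one-liner. A sketch illustrating the fat tails (seed and sample size are our own choices):

```python
import math
import random

def standard_cauchy(rng):
    # Inverse of F(x) = 0.5 + atan(x)/pi applied to a standard uniform draw
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(0)
xs = [standard_cauchy(rng) for _ in range(100_000)]
# Fat tails: |x| > 10 occurs with probability 1 - 2*atan(10)/pi ~ 6.3%,
# far more often than for a normal variate, where it is negligible.
frac_far = sum(abs(x) > 10 for x in xs) / len(xs)
print(f"fraction with |x| > 10: {frac_far:.3f}")
```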
Figure 3.1 depicts the distribution of the weekly returns of the exchange-traded fund SPDR that replicates the S&P 500 index (ticker SPY) for 1996–2003 in comparison with the standard normal and the standard Cauchy distributions (see Exercise 3).
The extreme value distributions can be introduced with the Fisher-Tippett theorem. According to this theorem, if the cumulative distribution function F(x) = Pr(X ≤ x) for a random variable X exists, then the cumulative distribution of the maximum values of X, H_ξ(x) = Pr(X_max ≤ x), has the following asymptotic form:

H_ξ(x) = exp[−(1 + ξ(x − m_max)/σ_max)^(−1/ξ)],   ξ ≠ 0
H_ξ(x) = exp[−exp(−(x − m_max)/σ_max)],           ξ = 0    (3.2.19)

where 1 + ξ(x − m_max)/σ_max > 0 in the case with ξ ≠ 0. In (3.2.19), m_max and σ_max are the location and scale parameters, respectively; ξ is the shape parameter and 1/ξ is named the tail index.
[Figure 3.1. The standardized distribution of the weekly returns of the S&P 500 SPDR (SPY) for 1996–2003 in comparison with the standard normal and the standard Cauchy distributions.]

The Fisher-Tippett theorem does not define the values of the parameters m_max and σ_max. However, special methods have been developed for their estimation [5].
It is said that the cumulative distribution function F(x) is in the domain of attraction of H_ξ(x). The tail behavior of the distribution F(x) defines the shape parameter. The Gumbel distribution corresponds to the case with ξ = 0. Distributions with thin tails, such as the normal, lognormal, and exponential distributions, belong to the Gumbel domain of attraction. The case with ξ > 0 is named the Frechet distribution. The Frechet domain of attraction corresponds to distributions with fat tails, such as the Cauchy distribution and the Pareto distribution (see the next section). Finally, the case with ξ < 0 defines the Weibull distribution. Distributions of this type (e.g., the uniform distribution) have a finite tail.
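The three cases of (3.2.19) can be wrapped in a single function; a sketch (the default location m_max = 0 and scale σ_max = 1, and the evaluation points, are our own illustration):

```python
import math

def gev_cdf(x, xi, m_max=0.0, s_max=1.0):
    # H_xi(x) from (3.2.19): Frechet (xi > 0), Weibull (xi < 0), Gumbel (xi = 0)
    z = (x - m_max) / s_max
    if xi == 0.0:
        return math.exp(-math.exp(-z))       # Gumbel limit
    t = 1.0 + xi * z
    if t <= 0.0:                             # outside the support
        return 0.0 if xi > 0 else 1.0        # below lower / above upper endpoint
    return math.exp(-t ** (-1.0 / xi))

print(gev_cdf(0.0, 0.0))    # Gumbel at the location point: exp(-1) ~ 0.368
print(gev_cdf(5.0, 0.5))    # Frechet case: fat upper tail, still below 1
print(gev_cdf(5.0, -0.5))   # Weibull case: finite upper endpoint, so 1.0
```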

3.3 STABLE DISTRIBUTIONS AND SCALE INVARIANCE

The principal property of a stable distribution is that a sum of its variates has the same distribution shape as the addends (see, e.g., [6] for details). Both the Cauchy distribution and the normal distribution are stable. This means, in particular, that the sum of two normal variates with the same mean and variance is also normally distributed (see Exercise 2). The general definition of the stable distributions was given by Levy. Therefore, the stable distributions are also called the Levy distributions.
Consider the Fourier transform F(q) of the probability distribution function f(x):

F(q) = ∫ f(x) e^(iqx) dx    (3.3.1)

The function F(q) is also called the characteristic function of the stochastic process. It can be shown that the logarithm of the characteristic function for the Levy distribution has the following form:

ln F_L(q) = imq − γ|q|^α [1 − iβδ tan(πα/2)],   if α ≠ 1
ln F_L(q) = imq − γ|q| [1 + 2iβδ ln(|q|)/π],    if α = 1    (3.3.2)