
Mathematical Finance First

Lecture
Biswo Poudel
Syllabus/Grading
• Textbook: An Elementary Introduction to Mathematical Finance, by Sheldon Ross, Third Edition (2011)
• Grading: Homework: 20%, Midterm: 30%, Final (cumulative): 50%
Probability
• Sample space, S: the set of all possible outcomes of the experiment. For example, if there are m possible outcomes 1, 2, …, m, then S = {1, 2, …, m}.
• The flip-a-coin example has S = {h, t}.
• Event, A: any set of possible outcomes of the experiment is called an event. We say event A occurs when the outcome of the experiment is a point in A. Note that
  P(A) = Σ_{i∈A} p_i
Complement
• Complement of an event A: it is the event in which A doesn't occur, denoted A^C. The fact: P(A) = 1 − P(A^C).
• Null event, ∅: it is the complement of the sample space and contains no outcomes, so P(∅) = 0.
• Union of A and B, A∪B: all outcomes that are in A or in B.
• Intersection of A and B, A∩B: all outcomes that are in both A and B.
More
• Note: P(A∪B) = P(A) + P(B) − P(AB).
• If the events are mutually exclusive, P(AB) = 0.
• Conditional probability: P(A|B) = P(AB)/P(B).
• Example: A coin is flipped twice. Assuming that all four points in the sample space S = {(h,h), (h,t), (t,h), (t,t)} are equally likely, what is the conditional probability that both flips land on heads given that (a) the first flip lands on heads and (b) at least one flip lands on heads?
Solution
• Let A be the event that both flips land on heads and B the event that the first flip lands on heads. Then P(AB) = 1/4 and P(B) = 1/2, so P(A|B) = 1/2.
• For (b), A is the same as above; B is the event that at least one flip lands on heads, which has probability 3/4. Then P(AB) = 1/4 and P(B) = 3/4, so P(A|B) = 1/3.
• Multiplication theorem of probability: P(AB) = P(B)·P(A|B).
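The two-flip example can be verified by brute-force enumeration of the sample space; a minimal sketch in Python (the helper names are ours, not from the slides):

```python
from fractions import Fraction

# The four equally likely outcomes of two coin flips.
space = [("h", "h"), ("h", "t"), ("t", "h"), ("t", "t")]

def prob(event):
    """P(event) = (# outcomes in event) / (# outcomes in S)."""
    return Fraction(len([s for s in space if event(s)]), len(space))

def cond_prob(a, b):
    """P(A|B) = P(AB) / P(B)."""
    return prob(lambda s: a(s) and b(s)) / prob(b)

both_heads = lambda s: s == ("h", "h")
first_heads = lambda s: s[0] == "h"
at_least_one = lambda s: "h" in s

print(cond_prob(both_heads, first_heads))   # 1/2
print(cond_prob(both_heads, at_least_one))  # 1/3
```

Exact fractions avoid any floating-point noise in the two answers.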
Independence

• If P(A|B) = P(A), we say A is independent of B.
• Random variable: a numerical quantity whose value is determined by the outcome of an experiment is known as a random variable.
• Expectation of a random variable X:
  E[X] = Σ_{j=1}^n x_j P[X = x_j]

• Bernoulli random variable: takes value 1 with probability p and value 0 with probability 1 − p.
• Rules of expectation:
  (a) E(X + c) = E(X) + c for a constant c
  (b) For r.v.s X_1, X_2, …, X_k:  E[Σ_{j=1}^k X_j] = Σ_{j=1}^k E(X_j)
Binomial distribution
• A bunch of Bernoullis.
• P(X = i) = C(n, i) p^i (1 − p)^{n−i}
• On the other hand, define X = Σ_{i=1}^n X_i, where X_i is the outcome of one Bernoulli trial. Then
  E[X] = Σ_{i=1}^n E(X_i) = np
Proposition
• Consider n independent trials, each of which is a success with probability p. Then, given that there is a total of i successes in the n trials, each of the C(n, i) subsets of i trials is equally likely to be the set of trials that resulted in successes.
• Proof: Let T be any subset of size i of the set {1, 2, …, n}, let A be the event that all trials in T are successes and the remaining n − i trials are failures, and let X be the number of successes in the n trials. Then
  P(A | X = i) = P(A, X = i)/P(X = i) = p^i (1 − p)^{n−i} / [C(n, i) p^i (1 − p)^{n−i}] = 1/C(n, i)
Variance, covariance, correlations
• Many expressions for the variance, but two are prominent:
  (a) var(X) = E(X²) − [E(X)]²
  (b) var(X) = E[(X − E(X))²]
• Proposition: If X_1, X_2, …, X_k are independent random variables, then
  var(Σ_{j=1}^k X_j) = Σ_{j=1}^k var(X_j)
• Similarly,
  (a) COV(X, Y) = E[(X − E(X))(Y − E(Y))]
  (b) COV(X, Y) = E(XY) − E(X)E(Y)
• Correlation is defined as
  ρ(X, Y) = COV(X, Y) / √(var(X) var(Y))
• Also, COV(Σ_{i=1}^n X_i, Σ_{j=1}^m Y_j) = Σ_{i=1}^n Σ_{j=1}^m COV(X_i, Y_j)
• And var(Σ_{i=1}^n X_i) = Σ_{i=1}^n var(X_i) + 2 Σ_{i=1}^n Σ_{j>i} cov(X_i, X_j)
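The last identity (and both covariance formulas) can be checked on a toy joint distribution; the pmf below is an arbitrary illustration, not from the lecture:

```python
# A small joint pmf on (x, y) pairs (values chosen for illustration only).
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

def E(f):
    """Expectation of f(X, Y) under the joint pmf."""
    return sum(pr * f(x, y) for (x, y), pr in joint.items())

ex, ey = E(lambda x, y: x), E(lambda x, y: y)
var_x = E(lambda x, y: x**2) - ex**2           # var(X) = E(X^2) - [E(X)]^2
var_y = E(lambda x, y: y**2) - ey**2
cov = E(lambda x, y: x * y) - ex * ey          # COV(X,Y) = E(XY) - E(X)E(Y)
var_sum = E(lambda x, y: (x + y)**2) - E(lambda x, y: x + y)**2

# var(X + Y) should equal var(X) + var(Y) + 2 cov(X, Y).
print(abs(var_sum - (var_x + var_y + 2 * cov)) < 1e-12)  # True
```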
Conditional Expectation
• Two expressions to remember:
  E[X | Y = y] = Σ_x x · P{X = x | Y = y}
  E[X] = Σ_y E[X | Y = y] · P(Y = y)
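The second formula (the tower property) can likewise be checked on a small joint pmf; the numbers below are illustrative:

```python
# Toy joint pmf: (x, y) -> probability (values are illustrative).
joint = {(1, 0): 0.1, (2, 0): 0.3, (1, 1): 0.4, (2, 1): 0.2}

ys = {y for _, y in joint}
p_y = {y: sum(pr for (x, yy), pr in joint.items() if yy == y) for y in ys}
e_x_given_y = {y: sum(x * pr for (x, yy), pr in joint.items() if yy == y) / p_y[y]
               for y in ys}

lhs = sum(x * pr for (x, _), pr in joint.items())      # E[X] directly
rhs = sum(e_x_given_y[y] * p_y[y] for y in ys)         # sum_y E[X|Y=y] P(Y=y)
print(abs(lhs - rhs) < 1e-12)  # True
```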
Normal Distribution
• Continuous distribution.
• The density function of a normal r.v. X is given by
  f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)},  −∞ < x < ∞
• Mean and variance are E(X) = μ, var(X) = σ².
• The area under the curve given by the above density function gives the probability
  P(a ≤ X ≤ b) = ∫_a^b f(u) du
• The cumulative distribution function is given by
  F(x) = ∫_{−∞}^x f(u) du
Standard Normal Random Variable
• The normal r.v. with mean 0 and variance 1 is called the standard normal random variable and is denoted by Z. Its density is
  f(x) = (1/√(2π)) e^{−x²/2},  −∞ < x < ∞
• The function Φ(x) is the cumulative distribution function of the standard normal r.v. and gives the area between −∞ and x. By symmetry, Φ(−x) = 1 − Φ(x). Furthermore, we can express the following:
  P{a ≤ Z ≤ b} = Φ(b) − Φ(a)
Alternative Approximation
• The following approximation used to be employed to estimate the cdf of the standard normal: for x ≥ 0,
  Φ(x) ≈ 1 − (1/√(2π)) e^{−x²/2} (a₁y + a₂y² + a₃y³ + a₄y⁴ + a₅y⁵),
  where y = 1/(1 + 0.2316419x); a₁ = 0.319381530; a₂ = −0.356563782; a₃ = 1.781477937; a₄ = −1.821255978; a₅ = 1.330274429; and for negative arguments, Φ(−x) = 1 − Φ(x).
• IMPORTANT PROPERTY: If X is a normal random variable with mean μ and variance σ², then (X − μ)/σ is a standard normal variable. Furthermore,
  E[αX₁ + βX₂] = αE(X₁) + βE(X₂)
  Var(αX₁ + βX₂) = α² var(X₁) + β² var(X₂),
  where X₁ and X₂ are any two independent normally distributed random variables.
Cousin Log Normal
• A variable Y is lognormally distributed if its log is normally distributed. Let X be a normal random variable with mean μ and variance σ². Then define Y = e^X. Y is a lognormally distributed random variable with
  E[Y] = e^{μ + σ²/2};  Var(Y) = e^{2μ + σ²}(e^{σ²} − 1)
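The mean formula can be sanity-checked by numerically integrating e^x against the normal density; the μ and σ below are illustrative:

```python
import math

mu, sigma = 0.1, 0.25

def normal_pdf(x):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# E[e^X] via a midpoint Riemann sum over +-10 standard deviations.
lo, hi, steps = mu - 10 * sigma, mu + 10 * sigma, 200_000
h = (hi - lo) / steps
ey_numeric = sum(math.exp(lo + (k + 0.5) * h) * normal_pdf(lo + (k + 0.5) * h)
                 for k in range(steps)) * h

ey_formula = math.exp(mu + sigma**2 / 2)   # E[Y] = e^(mu + sigma^2/2)
print(abs(ey_numeric - ey_formula) < 1e-6)  # True
```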
Some math
• Let S(n) denote the price of a certain security at the end of n additional weeks, n ≥ 1. Let the price evolve in such a way that the ratios
  S(n)/S(n−1), n ≥ 1, are iid lognormal with parameters μ = 0.0165, σ = 0.0730.
• (a) We first find the probability that the price is up after one week:
  P{S(1) > S(0)} = P{S(1)/S(0) > 1} = P{log(S(1)/S(0)) > 0}
  Since log(S(1)/S(0)) is normal with μ = 0.0165, σ = 0.0730, this equals
  P{Z > −0.0165/0.0730} = P{Z > −0.2260} = P{Z < 0.2260} ≈ 0.5894
Cont..
• Since weekly price changes are independent, the probability that the price will also increase in week 2 is 0.5894. So the probability that the price will increase two weeks in a row is
  (0.5894)² ≈ 0.3474
• Now, let's calculate the probability that the price at the end of period 2 is higher than it is today, i.e. P{S(2) > S(0)}. Note that
  P{S(2) > S(0)} = P{S(2)/S(0) > 1} = P{(S(2)/S(1))·(S(1)/S(0)) > 1} = P{log(S(2)/S(1)) + log(S(1)/S(0)) > 0}
  Each log ratio is normal with the mean and variance given above, so the sum is normal with mean 2 × 0.0165 = 0.033 and variance 2 × 0.0730². Hence the probability is
  P{Z > −0.033/(0.0730√2)} ≈ 0.6254
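The three probabilities in this example can be reproduced with the erf-based normal cdf:

```python
import math

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mu, sigma = 0.0165, 0.0730

p_up_one_week = 1 - Phi(-mu / sigma)            # P{S(1) > S(0)}
p_up_twice = p_up_one_week**2                   # two independent weekly rises
p_s2_above_s0 = 1 - Phi(-2 * mu / (sigma * math.sqrt(2)))  # sum of two log ratios

print(round(p_up_one_week, 4), round(p_up_twice, 4), round(p_s2_above_s0, 4))
# 0.5894 0.3474 0.6254
```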
Central Limit Theorem
• The sum of a large number of random variables, all having the same distribution, is approximately normal. That is, let S_n = Σ_{i=1}^n X_i, with the X_i iid with mean μ and variance σ². The Central Limit Theorem states that for large n, S_n will approximately be a normal random variable with expected value nμ and variance nσ². Hence we can approximate:
  P{(S_n − nμ)/(σ√n) ≤ x} ≈ Φ(x)
• An application: a coin is tossed 100 times. What is the probability that heads appears fewer than 40 times?
  Solution: This is clearly a case of the binomial distribution, with mean np = 50 and variance np(1 − p) = 25.
  Now, P(X < 40) ≈ P{(X − 50)/√25 < (40 − 50)/√25} = Φ(−2) ≈ 0.0228.
  The exact binomial answer is 0.0176. We can improve this estimate by using 39.5 rather than 40 (a continuity correction) in the expression above: following the same steps, P(X ≤ 39.5) ≈ Φ(−2.1) ≈ 0.0179.
Homework 1
• Exercise 1.18, 2.5, 2.6, 2.7, 2.8, 2.10
• Due date: 05/01 in class.
