
Financial Risk
4th quarter 2022/23
Lecture 5: Extreme value statistics wrap-up

[Title-slide illustrations: Corona risks; the big recession 2009; windstorm insurance, storm Gudrun, January 2005: 326 MEuro loss, 72 % due to forest losses, 4 times larger than the second largest]
1
Read

• S. Coles: An Introduction to Statistical Modeling of Extreme Values, pages 30-34, 45-53, 74-91, 92-104
• S. Lauridsen: Estimation of Value at Risk by Extreme Value Methods
• H. Rootzen, N. Tajvidi: Extreme value statistics and wind storm losses, a case study
• Course lecture slides

Use

• E. Gilleland: “Computing Software” as an aid in choosing what software to use

Links are available under “Anslag” in Canvas; remember that you need to connect
to Chalmers/GU with a VPN when you want to follow the links.
2
Extreme value statistics (EVS) is the branch of statistics
developed to handle extreme risks

The philosophy of EVS is simple: extreme events, such as extreme
water levels or extreme financial losses, are often quite different
from ordinary everyday behavior, and ordinary behavior then has
little to say about extremes, so only other extreme events
give useful information about future extreme events.

3
Apple losses $\left(= -100 \times \dfrac{\text{price tomorrow} - \text{price today}}{\text{price today}}\right)$, one year back

[Plot: daily Apple losses over quarter 4, quarter 1, quarter 2 and quarter 3, with the level $u = 1.92$ and the maximum quarterly loss excesses of $u = 1.92$ marked]

How large is the risk of a big quarterly loss? BM
How large is the risk of a big loss tomorrow? PoT
4
The block maxima method (Coles p. 45-53)

Obtain observations $x_1, \dots, x_n$ of block maxima (e.g. weekly or yearly maxima)

• Assume the observations are i.i.d. and have a GEV distribution
• Use $x_1, \dots, x_n$ to estimate the GEV parameters
• Use the fitted GEV to compute estimates and confidence intervals for, e.g.,
  quantiles of the yearly maximum distribution (= VaR) or of the Expected Shortfall
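
A minimal sketch of this recipe in Python (synthetic data, not the course's reference software; note that scipy's genextreme uses shape $c = -\gamma$ relative to the parametrization in these slides):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic yearly maxima, only to make the sketch self-contained
rng = np.random.default_rng(42)
yearly_maxima = genextreme.rvs(c=-0.2, loc=10.0, scale=2.0, size=50, random_state=rng)

# Maximum likelihood fit of the GEV distribution (scipy shape c = -gamma)
c_hat, mu_hat, sigma_hat = genextreme.fit(yearly_maxima)
gamma_hat = -c_hat

# Estimated 99% quantile of the yearly maximum distribution (a VaR-type quantity)
q99 = genextreme.ppf(0.99, c_hat, loc=mu_hat, scale=sigma_hat)
print(f"gamma = {gamma_hat:.2f}, mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}, 99% quantile = {q99:.2f}")
```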

5
The asymptotic motivation: What does

$$P\!\left(\frac{M_n - b_n}{a_n} \le x\right) \longrightarrow G_\gamma(x), \qquad n \to \infty,$$

mean in practice? That for large $n$, $P(M_n \le a_n x + b_n) \approx G_\gamma(x)$, or,
with $\mu = b_n$ and $\sigma = a_n$, that $P(M_n \le x) \approx G_\gamma\!\left(\frac{x - \mu}{\sigma}\right)$,
i.e. that $M_n$ approximately has a GEV distribution with parameters $\mu, \sigma, \gamma$.

Since all the parameters are unknown anyway, we are left with
the problem of estimating $\mu, \sigma, \gamma$ from data, i.e. to use the
Block Maxima method.
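
Written out, the GEV distribution function referred to here, with shape $\gamma$, location $\mu$ and scale $\sigma$, is

$$G_{\mu,\sigma,\gamma}(x) = \exp\left\{-\left(1 + \gamma\,\frac{x-\mu}{\sigma}\right)_{\!+}^{-1/\gamma}\right\},$$

with the Gumbel case $\exp\{-e^{-(x-\mu)/\sigma}\}$ as the limit $\gamma \to 0$.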

6
The Peaks over thresholds (PoT) method (Coles p. 74-91)

Choose a (high) threshold $u$ and obtain observations of the $N$ excesses of the threshold
and of the times $t_1, \dots, t_N$ of exceedance

• Assume the excesses are i.i.d. and GP distributed and that $t_1, \dots, t_N$ are occurrence times
  of an independent Poisson process, so that $N$ has a Poisson distribution
• Use the observed excesses to estimate the GP parameters and $N$ to estimate the
  mean of the Poisson distribution
• Estimate the tail $\bar F(x) = 1 - F(x) = \bar F(u)\,\bar F_u(x - u)$, where $\bar F_u(x - u)$ = the conditional
  tail function of the threshold excesses, by

$$\hat{\bar F}(x) = \frac{N}{n}\,\hat{\bar F}_u(x - u)$$
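
A minimal sketch of this tail estimator (synthetic data; scipy's genpareto shape $c$ plays the role of $\gamma$):

```python
import numpy as np
from scipy.stats import genpareto, t as student_t

# Synthetic daily losses, only to make the sketch self-contained
rng = np.random.default_rng(0)
losses = student_t.rvs(df=4, size=2000, random_state=rng)

u = np.quantile(losses, 0.95)          # threshold
excesses = losses[losses > u] - u      # the N threshold excesses
N, n = len(excesses), len(losses)

# Fit the GP distribution to the excesses (location fixed at 0)
gamma_hat, _, sigma_hat = genpareto.fit(excesses, floc=0)

# Tail estimate: F_bar(x) = (N/n) * F_bar_u(x - u) at a high level x
x = 8.0
tail_prob = (N / n) * genpareto.sf(x - u, c=gamma_hat, scale=sigma_hat)
print(f"u = {u:.2f}, estimated P(X > {x}) = {tail_prob:.5f}")
```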

7

Why not just estimate $\bar F(x)$ by $N(x)/n$, where $N(x)$ is the number of observations exceeding $x$?
Because if $x$ is a very high level then $N(x)$ is small or zero, and then this estimator is useless --
and it is such very large $x$-values we are interested in.

Assume we have computed estimators $\hat\sigma, \hat\gamma$ of the parameters in the GP distribution
$\bar F_u(x) = \bar H(x)$. Then

$$\hat{\bar F}(x) = \frac{N}{n}\,\hat{\bar H}(x - u) = \frac{N}{n}\left(1 + \hat\gamma\,\frac{x - u}{\hat\sigma}\right)_{\!+}^{-1/\hat\gamma}$$

Solving $1 - \hat{\bar F}(x_p) = p$ for $x_p$ we get an estimator of the $p$-th quantile of $X$:

$$\hat x_p = u + \frac{\hat\sigma}{\hat\gamma}\left[\left(\frac{n}{N}(1 - p)\right)^{-\hat\gamma} - 1\right], \qquad \text{for } p > 1 - \frac{N}{n}$$
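
A self-contained sketch of this quantile (VaR-type) estimator, again on synthetic data; the cross-check at the end uses the fact that the tail estimate evaluated at $\hat x_p$ should return $1 - p$:

```python
import numpy as np
from scipy.stats import genpareto, t as student_t

# Synthetic losses and a PoT fit, to make the sketch self-contained
rng = np.random.default_rng(1)
losses = student_t.rvs(df=4, size=2000, random_state=rng)
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u
N, n = len(excesses), len(losses)
gamma_hat, _, sigma_hat = genpareto.fit(excesses, floc=0)

# Quantile estimator x_p, valid for p > 1 - N/n
p = 0.999
x_p = u + (sigma_hat / gamma_hat) * ((n / N * (1 - p)) ** (-gamma_hat) - 1)

# Cross-check: the estimated tail probability at x_p should equal 1 - p
check = (N / n) * genpareto.sf(x_p - u, c=gamma_hat, scale=sigma_hat)
print(f"x_p = {x_p:.2f}, tail estimate at x_p = {check:.5f} (should be {1 - p})")
```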

8
The distribution of block maxima can be obtained from a PoT model

• suppose that values larger than $u$ occur as a Poisson process with intensity $\lambda$ and that
  this process is independent of the sizes of the excesses
• suppose the excesses are i.i.d. and have GP distribution $H(x) = 1 - \left(1 + \gamma\,\dfrac{x - u}{\sigma}\right)^{-1/\gamma}$
• let $M_T$ = the largest of the observations in the time interval $[0, T]$.

Then,

$$P(M_T \le x) = \exp\left\{-\left(1 + \gamma\,\frac{x - u - \big((\lambda T)^{\gamma} - 1\big)\sigma/\gamma}{\sigma(\lambda T)^{\gamma}}\right)_{\!+}^{-1/\gamma}\right\}, \qquad \text{for } x > u$$
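
This is a GEV distribution with shape $\gamma$, location $u + \big((\lambda T)^{\gamma} - 1\big)\sigma/\gamma$ and scale $\sigma(\lambda T)^{\gamma}$. A small numerical sanity check of that identity, with arbitrarily chosen parameter values (scipy's genextreme shape is $-\gamma$):

```python
import numpy as np
from scipy.stats import genpareto, genextreme

# Arbitrarily chosen PoT parameters: GP shape/scale, threshold, exceedance rate, horizon
gamma, sigma, u = 0.3, 2.0, 1.92
lam, T = 5.0, 1.0

x = np.linspace(u + 0.1, u + 20, 50)

# Direct PoT computation: P(no exceedance of the level x in [0, T])
p_pot = np.exp(-lam * T * genpareto.sf(x - u, c=gamma, scale=sigma))

# The same probability written as a GEV distribution function
mu_star = u + sigma * ((lam * T) ** gamma - 1) / gamma
sigma_star = sigma * (lam * T) ** gamma
p_gev = genextreme.cdf(x, c=-gamma, loc=mu_star, scale=sigma_star)

print(np.allclose(p_pot, p_gev))  # True
```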

9
Maximum likelihood inference (Coles p. 30-34)

• parameters are estimated by (numerically) maximizing the likelihood function
• functions of parameters are estimated by plugging the estimated parameters into
  the function
• the simplest confidence intervals for parameters are obtained from the inverse of the
  observed information matrix
• confidence intervals for functions of parameters, e.g. VaR or ES, are obtained from
  the delta method
• profile likelihood confidence intervals are often more accurate
• likelihood ratio test to check if simpler models are OK, e.g. if one can use the
  same value of $\gamma$ for two series of returns (see the sketch below)
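
A hedged illustration of such a likelihood ratio test; for brevity it tests the simpler Gumbel model ($\gamma = 0$) against the full GEV on one synthetic series, rather than the two-series example above, but the mechanics are the same:

```python
import numpy as np
from scipy.stats import genextreme, gumbel_r, chi2

# Synthetic block maxima, to make the sketch self-contained
rng = np.random.default_rng(7)
maxima = genextreme.rvs(c=-0.15, loc=10.0, scale=2.0, size=80, random_state=rng)

# Full model: GEV (3 parameters); simpler model: Gumbel, i.e. gamma = 0 (2 parameters)
c_hat, mu1, s1 = genextreme.fit(maxima)
mu0, s0 = gumbel_r.fit(maxima)

ll_gev = genextreme.logpdf(maxima, c_hat, loc=mu1, scale=s1).sum()
ll_gum = gumbel_r.logpdf(maxima, loc=mu0, scale=s0).sum()

# Likelihood ratio (deviance) statistic, approximately chi-squared with 1 df under the simpler model
deviance = 2 * (ll_gev - ll_gum)
p_value = chi2.sf(deviance, df=1)
print(f"deviance = {deviance:.2f}, p-value = {p_value:.3f}")
```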
10
Dependence: extremes come in small clusters (Coles p. 92-104)

• extremal index $\theta$ = 1/"mean cluster length"
• typically $P(M_n \le x) \approx F(x)^{n\theta}$, where $M_n$ is the maximum of $n$ variables and
  $F(x)$ is the distribution function of one variable (see the simulation sketch below)
• typically the clusters are approximately i.i.d., but with dependence within clusters
• typically the tail of the cluster maxima is approximately the same as the tail of $F(x)$ (strange but
  true)
• typically the GEV distributions are the only possible limit distributions
• the block maxima method often works just as for i.i.d. observations

stationarity??
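
A hedged simulation check of the $P(M_n \le x) \approx F(x)^{n\theta}$ approximation, for a simple moving-maximum process whose extremal index is known to be $\theta = 1/2$ (large values come in pairs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, x = 1000, 5000, 2000.0
theta = 0.5  # extremal index of the order-2 moving-maximum process below

# X_i = max(Z_i, Z_{i-1}) with Z_i i.i.d. standard Frechet, so exceedances come in pairs
Z = 1.0 / (-np.log(rng.uniform(size=(reps, n + 1))))   # standard Frechet by inversion
X = np.maximum(Z[:, 1:], Z[:, :-1])

p_mc = np.mean(X.max(axis=1) <= x)   # Monte Carlo estimate of P(M_n <= x)
F_x = np.exp(-2.0 / x)               # marginal df of X_i: P(X_i <= x) = exp(-2/x)
print(p_mc, F_x ** (n * theta))      # the two numbers should be close
```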
11
The PoT method for stationary time series
1. Decluster: identify approximately i.i.d. clusters of large values by the
   a) Block method: divide the observations up into blocks of a fixed length r;
      all values in a block which exceed the level u form a cluster
   b) Blocks-runs method: the first cluster starts at the first exceedance of u
      and contains all excesses of u within a fixed length r thereafter.
      The second cluster starts at the next exceedance of u and contains
      all excesses of u within r thereafter, and so on ...
   c) Runs method: the first cluster starts with the first exceedance of u
      and stops as soon as there is a value below u, the second cluster
      starts with the next exceedance of u, and so on ...

2. Estimate the extremal index, e.g. by $\hat\theta$ = (number of clusters) / (number of exceedances)

3. PoT: use the standard i.i.d. PoT model, but with the excesses replaced by the
   cluster maxima, and the exceedance times replaced by the times when the cluster
   maxima occur. (A bit of a miracle that this works. Proof not given here.)
   A sketch in code follows after this list.

4. Use the extremal index $\theta$ to switch between the block maxima and PoT descriptions

12
5. Use the formula for i.i.d. variables with the excesses replaced by
   cluster maxima and the number of excesses replaced by the
   number of clusters.
   The i.i.d. formula: suppose the excesses are GP distributed and occur as
   a Poisson process, with intensity $\lambda$, which is independent of the sizes of the excesses.
   Let $M_T$ be the maximum in the interval $[0, T]$ and $x > u$. Then

   $$P(M_T \le x) = \exp\left\{-\lambda T \left(1 + \gamma\,\frac{x - u}{\sigma}\right)_{\!+}^{-1/\gamma}\right\} = \text{EV distribution!}$$
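
A rough, self-contained sketch of steps 1c, 2, 3 and 5 on synthetic data (runs declustering, extremal-index estimate, GP fit to the cluster-maxima excesses); the AR(1) example series and the variable names are illustrative assumptions, not taken from the course material:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic stationary series with clustering of large values: AR(1) with heavy-ish tailed noise
rng = np.random.default_rng(3)
eps = rng.standard_t(df=4, size=5000)
x = np.zeros_like(eps)
for i in range(1, len(eps)):
    x[i] = 0.7 * x[i - 1] + eps[i]

u = np.quantile(x, 0.97)   # threshold
exceed = x > u

# Runs declustering (run length 1): a cluster ends as soon as one value drops below u
clusters, current = [], []
for value, is_exc in zip(x, exceed):
    if is_exc:
        current.append(value)
    elif current:
        clusters.append(current)
        current = []
if current:
    clusters.append(current)

theta_hat = len(clusters) / int(exceed.sum())          # extremal index estimate
cluster_maxima = np.array([max(c) for c in clusters])

# Standard PoT fit, but applied to the cluster-maxima excesses
gamma_hat, _, sigma_hat = genpareto.fit(cluster_maxima - u, floc=0)
print(f"theta_hat = {theta_hat:.2f}, gamma_hat = {gamma_hat:.2f}, sigma_hat = {sigma_hat:.2f}")
```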
13
Remember

If one does not understand the real-world situation well enough,
the best quantitative tools will not help. Taleb’s Turkey example:

14
