ACCT5919 - Lecture 7 T3 2024

ACCT5919 - Business Risk Management

Lecture 7 – Capital at Risk and Performance Measurement


Agenda
COURSE ADMINISTRATION
Quizzes – any questions?
Group Video Presentation – any questions?

LECTURE – Measuring Risk


ISO Methodology
Broad Approaches – Quantitative and Qualitative
Statistical Approach
Distributions and Averages
VaR and EaR
Economic Capital
Performance Measures – RAROC and RORAC

Class Discussion – Culture and Controls

Quiz
Measuring Risk
Measuring risk is a fundamental step in the risk analysis stage of the risk management process.
It provides the basis for risk evaluation against risk appetite, to help prioritise the effective allocation of resources to develop appropriate risk treatment strategies.
By understanding the magnitude of risks (inherent and residual), organisations can evaluate proactive measures on a cost/benefit basis to minimise potential losses and maximise opportunities.
ISO specifies that both probability (likelihood) and impact (consequence) must be measured.
Measuring Risk (Cont.)
Consequences – based on multiple impact types, usually related to stakeholder expectations. Many are expressed in non-financial terms but can still be quantified. Statistical methods can still be used if past data is captured consistently in the same way, though this is done less commonly than for financial losses. Define the measures in the criteria.
Likelihood – the data set can be affected by measurement decisions such as event definition (grouping events with common causes) and a consistent basis for recognising event timing: time of cause, time of detection, or time of impact. These need to be specified in the criteria and recorded accurately.
Presentation
• plot each risk/event type on a single risk matrix against a common appetite for all risks
• implies single likelihood/consequence for each risk
• exposures are not aggregated for a total enterprise measure

                        Consequences
Likelihood        Notable  Minor  Moderate  Major  Extreme
Almost Certain       M       H       H        E       E
Likely               M       M       H        H       E
Moderate             L       M       M        H       H
Unlikely             L       L       M        M       H
Rare                 L       L       L        M       M
Aggregating Risk Measures
For most risk/event types, there is a range of likelihood and consequences possible, e.g.,
natural disaster. This depends on how each risk/event is defined.

An aggregated view of total organisational financial loss exposure, especially against the buffers available to bear losses, is needed. This allows capital/reserve levels to be managed on an enterprise-wide basis.

Aggregate loss distribution – the shape will vary by risk type. The correlation between risks may also need to be considered, especially for financial, credit, and market-related risks.

[Figure: example aggregate loss curve with a suggestion of how to bear losses.]
Broad Measurement Approaches
Qualitative Measurement
• Ultimately the judgement is subjective, and it is susceptible to the issues of human
biases and errors.
• It may be based on past data to provide information to the decision-maker, but the
measure is not statistically derived.
• Many risks may not have useful and relevant past data due to vastly different current
and immediate future context and drivers of that risk.
Usefulness
• Some risks are relatively new with little data on impacts.
• Some risks do not lend themselves easily to quantification or have multiple impacts –
e.g., the primary or major impact is non-financial.
• Record-keeping of past data may not be accurate or complete for some risks, e.g.,
large risks with multiple impacts over multiple years.
• Short amount of time to decide – interim measurement.
• Typical risk types where this is applied – people and customer related.
Broad Measurement Approaches (Cont.)
Quantitative Measurement
• Ostensibly eliminates human subjective judgement.
• Based on past risk loss events – relevant and complete and accurate data required.
• Model has assumptions (variables and relationships) – which may not always be
valid e.g. normal market conditions.
• Forward looking approach to risk management can challenge usefulness of statistical
models – simulations require understanding of drivers of different risk levels.
Usefulness
• Risk is best represented by loss events – mostly financial.
• Useful historical data available at a reasonable cost.
• Representative and comparable data beyond the organisation is available – eg,
industry data.
• Typical risk types where this is applied – credit and market-related in the banking
industry.
Understanding the assumptions and drivers of risk is just as important – that is
what needs to be monitored and managed proactively.
Using Probability Theory and
Statistics in Risk Measurement
Probability-theory-based methods are commonly used for financial risks, especially
credit and market-related risks in the banking industry, or investments generally.
These methods use historical data. Simulations (e.g., Monte Carlo) and sensitivity
analysis are also used to go beyond historical data sets; these are useful in developing
an understanding of the variables or drivers of different outcomes.
For our course we will focus on two measures:
• Value-at-Risk (VaR): VaR attempts to quantify the potential loss for an asset,
investment or portfolio over a specified time period, at a certain confidence level.
VaR provides an estimate of the worst-case loss that an investment or portfolio is
likely (expected) to experience under normal market conditions.
• Earnings at risk (EaR): EaR attempts to determine the potential decline in an
organisation’s future earnings or profitability resulting from adverse events or risks
for a specified time period at a certain confidence level. These risks are both
internal and external risks which have a financial impact. Revenue or profit can be
used as the measure.
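The simulation approach mentioned above can be sketched in a few lines. This is an illustration only, not the course's prescribed method; the distribution, mean, standard deviation, and sample size are all hypothetical assumptions:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical portfolio: annual value change assumed normal,
# with mean 0 and standard deviation 10,000 (both assumed figures).
simulated_pnl = sorted(random.gauss(0, 10_000) for _ in range(100_000))

# 95% VaR: the loss at the 5th percentile of simulated outcomes.
var_95 = -simulated_pnl[int(0.05 * len(simulated_pnl))]
print(round(var_95))  # close to 1.65 SD, i.e. roughly 16,500, under normality
```

Replacing the normal draw with resampled historical data or a fitted fat-tailed distribution is how such simulations go beyond the normality assumption.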
Statistics Refresher
In the context of risk measurement, sample data is used to provide insight into the amount of
risk that may exist, in particular, the variability (or volatility). Statistics aims to provide some
tools to help us make sense of sample data. The aim of this is broadly:
• to describe the data and its structure (distribution), and
• to infer meaning and possible broader implications from the data.

Example: The following sample of 50 pulse rates, ordered by the number of beats per minute:

62 64 65 66 68 70 71 71 72 72
73 74 74 75 75 76 77 77 77 78
78 78 79 79 79 80 80 80 80 81
81 81 81 82 82 82 83 83 85 85
86 87 87 88 89 90 90 92 94 96
Describing the Distribution
Range:
The range relates to the maximum and minimum observations. The range of the distribution
is 62 to 96.
Average – three concepts:
Median:
• This is the middle value of the distribution.
• This involves finding the value where there are the same number of observations
above that value as there are below.
• In the previous distribution, containing 50 values, this would be the value between the
25th and 26th observations, i.e., between 79 & 80.
• We split the difference, and the median would be 79.5.

Mean:
• Calculated by adding all of the values of the observations in a distribution together
and dividing by the number of observations.
• In our previous example, the sum of the values is 3,955, and there are 50
observations.
• The mean is therefore 79.1.
• This is the most common average measure.
• This is commonly referred to as the “Expected Value”.
• The Mean is a key measure that will be used in the process of risk measurement.
Describing the Distribution (Cont.)
Mode:
• Is the most frequent observation.
• The histogram shows that the two observations with the highest frequency are 80 and 81.
• The distribution, therefore, has two modes.

[Histogram: "Sample of 50 Students" – heart beats per minute on the horizontal axis, number of observations on the vertical axis.]
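The averages quoted on these slides can be checked with the Python standard library; a minimal sketch using the 50 pulse rates above:

```python
# Descriptive statistics for the 50-pulse-rate sample, standard library only.
from statistics import mean, median, multimode

pulse = [62, 64, 65, 66, 68, 70, 71, 71, 72, 72,
         73, 74, 74, 75, 75, 76, 77, 77, 77, 78,
         78, 78, 79, 79, 79, 80, 80, 80, 80, 81,
         81, 81, 81, 82, 82, 82, 83, 83, 85, 85,
         86, 87, 87, 88, 89, 90, 90, 92, 94, 96]

print(min(pulse), max(pulse))  # range: 62 to 96
print(median(pulse))           # 79.5 (midpoint of the 25th and 26th values)
print(mean(pulse))             # 79.1 (sum of 3,955 over 50 observations)
print(multimode(pulse))        # [80, 81] – the distribution is bimodal
```

`multimode` (Python 3.8+) returns every value tied for the highest frequency, which is what identifies the two modes here.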
Describing the Distribution (Cont.)
Central tendency:
• For this set of data there are four "averages": 79.5, 79.1, 80 and 81.
• There is often a tendency for observations to centre around a particular value.
• The “averages” are all measures of this central tendency.
• The most common measure of this tendency is the Mean.

Volatility and Shape:

Also of interest is the relative dispersion of the data observations. The shape of the distribution
allows identification of:
• Where particular observations lie within the distribution in relation to others, and
• What values are common and uncommon.
The shape of the histogram also provides a measure of the "volatility" (dispersion) of the data
observations. This is often expressed in terms of deviation from the mean: a relatively more
volatile (more dispersed) set of data will have observations spread over a wider range of values
around the mean, and hence a larger standard deviation.
The distribution can take many varied shapes.
Describing the Distribution (Cont.)
A risk may have a typical shape due to its nature.

Many operational risks have an F-distribution shape – often described as a long tail.
Normal Distributions
If we take many observations for some risks, there is often a tendency for the resultant distribution to acquire
the shape of a Normal Distribution as follows:

[Figure: standard normal and other normal distribution curves, showing the proportion of observations within each standard deviation band.]

Standard Normal Distributions have a useful property in relation to their Standard Deviation (as shown
above). If we assume normality, the amount of data observations covered by each standard deviation is
fixed as per the above graph.

For normal distributions the mean, median and the mode are the same value.
Volatility Calculation Methods
In the context of risk measurement, we are trying to assess the potential future outcomes based on a
“sample” of actual historical observed outcomes – what level of volatility from the mean (expected)
value could there be based on differing confidence levels. The higher the confidence level selected the
more conservative the measure of volatility will be.

The sample of historical data needs to cover the range of conditions and value drivers that we wish to
cover in our future estimations – define and evaluate the relevance of the historical data set.

Two methods are possible:

Statistical Method
We can assume the distribution of the historical data is in the shape of a standard normal distribution
even if it is not. So, confidence levels can be expressed in terms of a number of standard deviations.
This can be done for convenience and speed of calculation of the volatility value.

Percentile Method
The alternative is to assume that future outcomes "mirror exactly" past observations, i.e., have an
identical distribution to the past. Accordingly, the volatility value is based on the actual shape of
the historical distribution: we calculate it from the values of the observations that actually fall at
the required confidence level.
One-Sided Tail Standard Normal
Distribution
The required confidence level can be expressed as a factor of the standard deviation as below:

[Figure: one-tailed standard normal curve showing the area covered moving out from the mean in standard deviation bands: 50% + 34.1%, then 13.6%, then 2.3%.]
Confidence level (one tail)    85%   90%   95%   97.5%  99%   99.9%
Equivalent approximate SDs     1     1.25  1.65  1.95   2.33  3
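The factors in the table above are convenient rounded approximations. The exact one-tailed factors can be recovered from the inverse CDF of the standard normal distribution; a sketch using the Python standard library:

```python
# One-tailed confidence levels and their standard-deviation equivalents,
# from the inverse CDF of the standard normal distribution.
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1
for conf in (0.85, 0.90, 0.95, 0.975, 0.99, 0.999):
    print(f"{conf:.1%} -> {z.inv_cdf(conf):.3f} standard deviations")
```

This gives 1.645 for 95%, 1.960 for 97.5%, 2.326 for 99% and 3.090 for 99.9% – the table rounds these to 1.65, 1.95, 2.33 and 3.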
VaR and EaR
Use of the "at risk" values below reflects an organisation's desire to manage its exposure
to such volatility to a level that is within the appetite of the organisation (losses can be borne by
the loss buffers in place).
Use of higher confidence levels reflects a lower appetite for risk.
Statistical principles can be utilised to calculate Value at Risk (VaR) and Earnings at Risk (EaR)
as an estimate of the volatility in values:

Value at Risk (VaR)

VaR measures the worst expected loss (measured in terms of value changes) that an
organisation is likely to suffer over a given time interval under normal market conditions at
a given confidence level.
Earnings at Risk (EaR)

The term "earnings at risk" refers to the potential financial loss or negative impact on an
organisation's earnings due to various risks and uncertainties. It represents the
vulnerability of an organisation's earnings or profits to adverse events or changes in the
business environment. EaR models develop a distribution of potential earnings changes
due to risk to calculate the "worst expected change in earnings" over a given time interval
(normally annual) under normal market conditions at a given confidence level.
Calculation Methods – VaR and EaR
Statistical Method

VaR = Standard Deviation (SD) value × Confidence Level SD equivalent
(VaR equals Economic Capital in the Asset Volatility Model.)

EaR = Standard Deviation (SD) value × Confidence Level SD equivalent
Economic Capital = EaR / r, where r = the risk-free rate

EaR considers the amount of capital required to be invested at the risk-free rate to
offset the probable level of earnings volatility.

Note: Depending on the time period of the data, you may need to annualise the standard
deviation. For example, to annualise a monthly standard deviation for an EaR calculation,
multiply it by the square root of 12 (the number of monthly periods in one year).
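The formulas above translate directly into code. A minimal sketch, with all the input figures (monthly SD of 100, 95% confidence factor of 1.65, risk-free rate of 5%) chosen purely for illustration:

```python
import math

def annualise_sd(sd_monthly: float) -> float:
    """Annualise a monthly standard deviation (square-root-of-time rule)."""
    return sd_monthly * math.sqrt(12)

def ear(sd_annual: float, sd_factor: float) -> float:
    """EaR = annual SD of earnings x confidence-level SD equivalent."""
    return sd_annual * sd_factor

def economic_capital(ear_value: float, risk_free_rate: float) -> float:
    """Capital that, invested at the risk-free rate, offsets the EaR."""
    return ear_value / risk_free_rate

sd_annual = annualise_sd(100)             # ≈ 346.41
ear_95 = ear(sd_annual, 1.65)             # ≈ 571.58
capital = economic_capital(ear_95, 0.05)  # ≈ 11,431.54
```

Note how sensitive the capital figure is to the risk-free rate in the denominator: halving r doubles the economic capital implied by the same EaR.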
Example VaR Calculation
Value                  -30   -20   -10    0    10    20    30
Probability (%)          5    10    20   30    20    10     5
Cum. probability (%)   100    95    85   65    35    15     5

What is the VaR (worst expected loss) of this Distribution if the risk manager wants to
be 95% confident about the estimate?
The distribution mean is zero and the standard deviation is 14.49.
Example VaR Calculation (Cont.)
Percentile Method
Assumes that the observations in the distribution are the only "possible" set of future
outcomes – the future reflects the past.
VaR is the deviation from the mean using the previous data.
The value at 95% cumulative probability (covering 95% of the observations) is -20.
The mean is 0.
Hence VaR is |-20 - 0| = 20 (use the absolute value of the deviation from the mean – a loss).

Statistical Method
Assumes that future outcomes are normally distributed.
Uses the standard deviation and the confidence level factor.
The standard deviation is 14.49.
The 95% confidence level equivalent SD factor, assuming a standard normal distribution, is 1.65.
VaR = 1.65 × 14.49 = 23.9
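The worked example above can be reproduced in a short script; a sketch of both methods against the same discrete distribution:

```python
# VaR at 95% for the slide's example distribution, by both methods.
values = [-30, -20, -10, 0, 10, 20, 30]
probs  = [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05]

mean = sum(v * p for v, p in zip(values, probs))                      # 0
sd = sum(p * (v - mean) ** 2 for v, p in zip(values, probs)) ** 0.5   # ≈ 14.49

# Percentile method: accumulate probability from the best outcome down
# until 95% of the distribution is covered, as in the slide's table.
cum = 0.0
for v, p in zip(reversed(values), reversed(probs)):
    cum += p
    if cum >= 0.95 - 1e-9:          # small tolerance for float rounding
        var_percentile = abs(v - mean)   # |-20 - 0| = 20
        break

# Statistical method: assume normality, use the 95% one-tailed factor.
var_statistical = 1.65 * sd         # ≈ 23.9
```

The two methods disagree (20 vs 23.9) because the statistical method imposes a normal shape on a distribution that is not normal – exactly the assumption flagged in the "Statistical Method" slide.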
What is Economic Capital?
Three definitions:

1. The amount needed to cover the potential diminution in the value of assets and
other exposures (losses) over a given time period at a given statistical confidence
level.

2. The amount of the shareholders' investment that is at risk in the business or a
particular business line.

3. The amount required which, if invested at the risk-free rate, would cover the
"potential" downside earnings.

Economic Capital is not necessarily the same as book or accounting capital, which is
the value of historical transactions (a combination of previous profits retained in the
organisation and capital raised from external stakeholders).

Economic Capital indicates the ability of a business to sustain or absorb the
financial impact of risk events in the future.
Economic Capital and Capital
Planning
Economic capital acts as a buffer against future unexpected reductions in value.

It cannot be limited solely to a multiple of past results. It also needs to take into account
planned activities, i.e., the expansion or contraction of activities.

Economic Capital is therefore also required to “finance” the risks associated with future
expansion.

Organisations need to plan how to raise additional required capital in the appropriate time
frame to provide the buffer when additional risks or higher exposures come into existence.
For example, returns available to owners may be retained in the organisation to provide a
buffer for future expansion.

In practice the level of capital will be managed to balance its source as a buffer for future
unexpected losses and the need to provide an appropriate level of return to capital
providers.
Two Approaches to Estimating
Economic Capital
Two broad approaches have been developed, namely:

1. The bottom-up approach - Asset volatility models

This looks at the volatility of outcomes – individual asset or asset type value
changes.

These are VaR models, reflecting definition 1.

2. The top-down approach - Earnings volatility models

This looks at the volatility of the results of business activity as a whole.

These are EaR models – examine changes in earnings since earnings will change
as a result of risk events, reflecting definition 3.
Asset Volatility vs Earnings Volatility
Models
Scope of Analysis
• Asset volatility: Asset volatility models primarily focus on analysing the volatility or
variability of a company's financial assets, such as stocks, bonds, or other investments.
These models assess the potential fluctuations in the market value of assets and their
associated risks.
• Earnings volatility: Earnings volatility models, on the other hand, focus specifically on
analysing the volatility of a company's or business division's earnings or profits. These
models assess the variability in the company's income or profit over a given period,
typically by analysing historical earnings data.
Asset Volatility vs Earnings Volatility
Models (Cont.)
Inputs and Variables
• Asset volatility: Asset volatility models use historical price or return data of financial
assets, often based on market-provided data where they are traded, or other sources of
valuation where market data is not available. The models may also consider factors like
market indices, correlations, and macroeconomic variables to assess asset volatility.
• Earnings volatility: Earnings volatility models typically analyse historical earnings data,
including revenue, costs, and other income statement components based on accounting
records. These models may also incorporate factors like industry-specific trends,
business cycles, or company-specific variables to assess earnings volatility.
Asset Volatility vs Earnings Volatility
Models (Cont.)
Objectives and Applications
• Asset volatility: Asset volatility models are primarily used to assess the risk and
potential return of investment portfolios or individual financial assets. They help
investors and portfolio managers make informed decisions about asset allocation, risk
diversification, and investment strategies. These models are commonly used in portfolio
optimisation, option pricing, and risk management in financial markets.
• Earnings volatility: Earnings volatility models are focused on understanding the
volatility of a company's profitability. They help management and analysts assess the
stability and predictability of earnings, evaluate the financial health of the company,
and identify potential risks that may impact future earnings. These models are often
used in financial statement analysis, valuation, and strategic planning.
Asset Volatility vs Earnings Volatility
Models (Cont.)
Limitations and Considerations
• Asset volatility: Asset volatility models may not fully capture the underlying risks and
uncertainties specific to a company's operations or industry. They primarily reflect
market-driven volatility and may not account for company-specific factors. Additionally,
asset volatility models assume that historical volatility will be a reasonable predictor of
future volatility, which may not always hold true. Frequent and detailed data may allow
a good understanding of the drivers of changes in the value of the asset or investment.
• Earnings volatility: Earnings volatility models rely on historical earnings data and may
not capture all the potential risks or changes in the company's current and future
business environment. They may overlook non-recurring events, seasonality, or specific
factors impacting earnings. Moreover, earnings volatility models may not account for
the market's reaction to earnings announcements or other external factors that
influence stock prices. Stock prices may not always reflect the underlying financial
performance of the company and its volatility. Infrequent data at the organisation level
may not be informative as to the drivers of changes in earnings.
Asset Volatility (VaR) v. Earnings
Volatility Models (Cont.)
Asset volatility models (VaR) primarily focus on analysing the volatility of financial
assets and are used for investment decision-making.

Earnings volatility models focus on assessing the volatility of a company's earnings
and are used for financial performance evaluation and risk assessment within the
company itself.

Both models serve different purposes and provide insights into different aspects of
financial analysis and risk management:
• VaR can be used for pricing; earnings cannot.
• VaR is forward-looking; earnings is backward-looking.
• VaR leads to control action; earnings is limited in its ability to lead to control action.
• VaR requires modelling of correlations for an organisation-level view; earnings covers
all risks relevant to the whole business or division.
• VaR requires heavy statistical analysis; earnings requires limited statistical analysis.
• VaR has questionable results in the aggregation of risks; earnings is linked to the
shareholder view of value.
Incorporating Risk into Performance
Measures
Traditional Performance Measures
• Fail to measure performance against risk.
• ROA and ROE – provide no indication of relative risk – lack comparability.

Risk-Adjusted Performance Measures


Use economic capital as the denominator, which is risk-based and provides comparability
across projects/investments:
• Risk Adjusted Return On Capital (RAROC - Asset Volatility Model)
• Return On Risk Adjusted Capital (RORAC - Earnings Volatility Model)
Calculating RAROC and RORAC
RAROC

RAROC = Adjusted Income / Economic Capital

Note:
Adjusted Income = PV(Revenue – Operating Costs) – Expected Loss (i.e., the Mean)
Revenue – operating costs = net income.
Economic Capital = VaR

RORAC

RORAC = Income / Risk-Adjusted Capital

Note:
Income = PV(Revenue – Operating Costs)
Risk-Adjusted Capital = Economic Capital = EaR / r

Present Value Calculation

Calculate PV where appropriate and where values are available. E.g., the PV of net income
(Yr 1: 5,000; Yr 2: 8,000; Yr 3: 7,000) over 3 years at a discount rate of 8%
= 5,000 + 8,000/1.08 + 7,000/1.08^2.
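Putting the pieces together, a RAROC calculation under the lecture's formulas might look like the sketch below. The expected loss and economic capital figures are hypothetical, chosen only to show the mechanics; the PV convention (first year undiscounted) follows the slide example:

```python
def present_value(cashflows, rate):
    """PV with the first cash flow undiscounted, as in the slide example."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

net_income = [5_000, 8_000, 7_000]    # Yr 1 to Yr 3, from the slide example
pv = present_value(net_income, 0.08)  # 5,000 + 8,000/1.08 + 7,000/1.08^2

expected_loss = 2_000      # hypothetical mean loss
economic_capital = 40_000  # hypothetical VaR

raroc = (pv - expected_loss) / economic_capital
print(f"PV = {pv:,.2f}, RAROC = {raroc:.1%}")
```

RORAC would use the unadjusted PV in the numerator and EaR/r as the denominator instead; only the placement of the risk adjustment differs.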
Using Risk Based Performance
Measures
Financial measures provide an incomplete and imprecise lens:
• While mathematically precise, they are based on estimates and assumptions for variables,
e.g., future revenue and expenses, discount rates, the risk-free rate, and the relevance of
historical data.
• Consider need to update assumptions for variables for projects – re-assess level of
uncertainty.
• Consider qualitative and non-financial factors as well – e.g., nature and scale and
timeframe of investment, brand reputation.
• Consider project against organisational capability and values/mission.
• Use to diversify a portfolio of projects – need to understand risk drivers for different
projects.
• Need to consider result in the context of risk appetite – benchmark risk-adjusted returns.

Good risk management must consider both qualitative and quantitative factors.
