P1.T2. Quantitative Analysis Bionic Turtle FRM Practice Questions Chapter 13: Simulation and Bootstrapping
21.5.1. Mary wants to approximate the expected value of an option. She conducts a Monte
Carlo simulation and, based on her initial sample size of n = 100, she estimates the fair value of
the option is $72.00; the value is the mean of the sample. She observes a sample standard
deviation of $29.00. She wants the total width of the 95.0% confidence interval (CI) to be less
than 5.0% of the option's fair value. If she makes the (unrealistic) assumption that the sample
standard deviation will remain constant at $29.00 as she increases the sample size, what is
nearest to her minimum required sample size?
a) 50
b) 1,000
c) 20,000
d) 400,000
21.5.2. The following code snippet generates a vector of 100 random values from the uniform
distribution--per a typical pseudo-random number generator--that serves as a vector of random
probabilities from zero to 1.0; the function runif(n) generates a vector of n random numbers.
From this vector of random probabilities are retrieved a vector of corresponding quantiles (aka,
per the inverse cumulative distribution function or inverse CDF) according to two distributions:
the normal and the student's t with six degrees of freedom, df = 6. The final two lines simply
calculate the excess kurtosis of each sample: Z_kurt is the excess kurtosis of the sample of
random normal variables, and t_kurt is the excess kurtosis of the sample of random student's t
variables.
If we re-run this simulation (program) several times and observe the values, which of the
following is most likely to be TRUE about the outputs Z_kurt and t_kurt?
21.5.3. Nora wants to value a six-month at-the-money call option. Her program has five small
sections and is displayed below. The first section ("Assumptions") assigns values to the inputs:
S(0) = $100.00, K = $100.00, risk-free rate = 3.0%, volatility (sigma) = 28.0%, and term to
maturity = six months (0.5 years). The first simulation ("Simulation #1") generates the call prices
(i.e., c_price_MCS) according to a Monte Carlo simulation. The second simulation ("Simulation
#2") performs a very simple simulation to generate the call option price (i.e., c_price_simple).
The next section computes the call option price (i.e., c_price_BSM) according to the Black-
Scholes-Merton (BSM) option-pricing model. The final section simply prints the three different
call prices that were obtained earlier: c_price_MCS, c_price_simple, and c_price_BSM.
About Nora's program and its approach to estimating the price of the six-month call option, each
of the following statements is true EXCEPT which is false?
Answers:
21.5.1. B. 1,000
We seek a 95.0% confidence interval (CI) that has a width given by 5.0% * $72.00 = $3.60.
Because the CI width = 2*1.96*SE where SE = S/sqrt(n), such that CI = 2*[S/sqrt(n)]*1.96, it follows that sqrt(n)
= 2*S*1.96/CI and n = (2*S*1.96/CI)^2 = (2*$29.00*1.96/$3.60)^2 = 997.2, or about 1,000.
Note: this question was inspired by GARP's Question 13.9 (which contains an error because it
does not square the final result).
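
To check the arithmetic, here is a minimal R sketch (our own illustration, not part of the original; the variable names are ours):

S <- 29.00                     # assumed (constant) sample standard deviation
fair_value <- 72.00            # initial estimate of the option's fair value
ci_width <- 0.05 * fair_value  # desired total CI width = $3.60
z <- qnorm(0.975)              # two-sided 95% critical value, about 1.96
n_min <- (2 * S * z / ci_width)^2
ceiling(n_min)                 # roughly 998, i.e., about 1,000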
21.5.2. D. True: Each run will produce identical values of Z_kurt and t_kurt but it will be
the case that Z_kurt <> 0 and t_kurt <> 3.0
The excess kurtosis of a normal distribution is zero. The excess kurtosis of a student's t
distribution is given by 6/(df - 4); when df = 6, its excess kurtosis is 6/(6 - 4) = 3.0. Because the
pseudo-random number generator--the runif(n) function mentioned in the question--produces a
random (finite) sample, each sampled excess kurtosis will vary from these exact theoretical
values. However, the first line is important: set.seed(37). This line guarantees that each run will
produce identical values, because each run will generate the same random vector.
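
The snippet itself is not reproduced in this document, but a minimal R reconstruction consistent with the description might look like the following; the excess-kurtosis helper and intermediate variable names are our assumptions:

set.seed(37)            # fixes the PRNG state, so every run is identical
U  <- runif(100)        # 100 uniform random probabilities on (0, 1)
Z  <- qnorm(U)          # inverse CDF: standard normal quantiles
t6 <- qt(U, df = 6)     # inverse CDF: student's t quantiles, df = 6
# excess kurtosis computed directly (sample fourth standardized moment minus 3)
excess_kurt <- function(x) mean((x - mean(x))^4) / mean((x - mean(x))^2)^2 - 3
Z_kurt <- excess_kurt(Z)
t_kurt <- excess_kurt(t6)
c(Z_kurt, t_kurt)       # identical on every re-run because of set.seed(37)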
21.5.3. B. False. This fails (c_price_simple equals only $1.71) because the mean of the
future stock distribution (which includes future stock prices below the strike price) is
insufficient to value the option. Instead, the first simulation is the correct approach and
could be efficiently written as mean(exp(-r*T)*pmax(ST-K,0)).
Please note that (D) is true because a seed value was NOT initialized; i.e., the set.seed()
function was not called, so each run will generate different pseudo-random numbers. Therefore
simulation #1 will produce different values on each run, although they will cluster near the
expected value. The analytical BSM call price, of course, will return the same value of
$8.594 because it does not depend on a simulation.
1. Education, Pearson. Quantitative Analysis. Pearson Learning Solutions, 2020. VitalBook file.
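
For illustration, a hedged R sketch of the first (correct) simulation alongside one plausible reading of the "simple" simulation and the analytical BSM price; this is our own sketch, not Nora's exact program, and the seed is ours:

set.seed(123)
S0 <- 100; K <- 100; r <- 0.03; sigma <- 0.28; T <- 0.5
n  <- 100000
Z  <- rnorm(n)
ST <- S0 * exp((r - 0.5 * sigma^2) * T + sigma * sqrt(T) * Z)   # simulated terminal prices
c_price_MCS    <- mean(exp(-r * T) * pmax(ST - K, 0))   # correct: average of discounted payoffs
c_price_simple <- exp(-r * T) * (mean(ST) - K)           # fails: ignores the payoff floor at zero
d1 <- (log(S0 / K) + (r + 0.5 * sigma^2) * T) / (sigma * sqrt(T))
d2 <- d1 - sigma * sqrt(T)
c_price_BSM <- S0 * pnorm(d1) - K * exp(-r * T) * pnorm(d2)      # analytical price, about $8.59
c(c_price_MCS, c_price_simple, c_price_BSM)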
21.6.1. The code snippet below estimates the value of a call option with Monte Carlo simulation
(MCS) where the sample size, n = 10,000. However, this code attempts to improve on the
standard MCS by generating a second random variable, U2, which is given by U2 = 1 - U1
where both are uniform random variables. The function qnorm(U1) executes an inverse
transformation: Z1 and Z2 are vectors of random standard normal variables (i.e., quantiles
retrieved via the inverse CDF). Which of the following statements is TRUE?
a) This is a successful attempt to employ an antithetic variate because its standard error
will be lower than standard n = 10,000 MCS
b) This is a failed attempt to employ antithetic variates because the pairs of random
variables are uncorrelated
c) This is a successful attempt to employ control variates because the analytical (aka,
closed-form) option price informs the error bias
d) This is neither an attempt to employ antithetic variate nor a control variate
21.6.2. The following code snippet generates a bootstrap sample in order to estimate a
portfolio's potential losses (i.e., loss quantiles) over the next week, which is five cumulative
trading days.
The code assumes a vector of the history of last year's (n = 250) actual daily returns,
r_daily_act. The sample(r_daily_act, n, replace = TRUE) function samples from the vector with
replacement. In regard to this bootstrap, each of the following is true EXCEPT, which is false?
a) This bootstrap procedure is non-parametric because it does not specify a statistical (aka,
parametric) distribution
b) Like (in common with) the Monte Carlo simulation, this bootstrap depends on a uniform
pseudo-random number generator (PRNG)
c) As an iid bootstrap, this procedure reflects an assumption that the return observations
are independent over time, but we could conduct a circular block bootstrap (CBB) if the
returns are dependent over time
d) An advantage of this bootstrap is that it will generate some daily losses that exceed the
worst daily loss in the actual sample, but the Monte Carlo simulation (MCS) data
generating process (DGP) cannot generate daily losses that did not occur in the actual
sample; i.e., MCS DGP suffers the Black Swan problem, but this bootstrap does not.
21.6.3. Robert seeks to price an exotic path-dependent option, call it Price(E), but there exists
no analytical solution available to value such a derivative. Of course, Robert does have a Black-
Scholes option pricing model (BSM OPM). His BSM OPM is a closed-form solution, but it will
only price a vanilla option; e.g., European or non-path-dependent. Let's refer to the analytical
price of this vanilla option as BSM(V).
He employs a Monte Carlo simulation (MCS) to estimate the price of his exotic, path-dependent
option, call this price MCS(E). In this MCS, he generates M sample paths of the future stock
price in order to compute M sample payoffs and retrieves the discounted mean to obtain an
estimate of the exotic option's value. Further, he re-uses the same simulated sample paths to
inform a simulated value of the vanilla option, call this MCS(V). To summarize, in regard to his
vanilla option, he has estimates for both an analytical and simulated price, BSM(V) and MCS(V).
In regard to his exotic option, he is not able to estimate BSM(E), but he has generated MCS(E).
He has now retrieved both a simulated and an analytical price for the vanilla option; the difference
is given by BSM(V) - MCS(V). He applies this error to revise his estimate of the exotic option per
the following: Revised exotic price = MCS(E) + [BSM(V) - MCS(V)].
Bob's goal, of course, is to improve the accuracy of his estimate, as evidenced by a reduction in
the variance. A good control variate should have two properties. One is that it should be
inexpensive to construct, and this control variate meets that test. What is the other property that
Bob seeks?
Answers:
21.6.1. A. TRUE: This is a successful attempt to employ an antithetic variate because its
standard error will be lower than standard n = 10,000 MCS
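
For intuition, a hedged R sketch of the antithetic-variate idea described in the question (the payoff details and the seed are our illustrative assumptions, not the original snippet): because Z2 = qnorm(1 - U1) = -Z1, each antithetic pair is perfectly negatively correlated, and averaging within pairs lowers the variance of the overall estimate.

set.seed(42)
n  <- 10000
S0 <- 100; K <- 100; r <- 0.03; sigma <- 0.28; T <- 0.5
U1 <- runif(n / 2)
U2 <- 1 - U1          # antithetic complement of U1
Z1 <- qnorm(U1)       # inverse transform to standard normals
Z2 <- qnorm(U2)       # equals -Z1 by symmetry
disc_payoff <- function(Z) {
  ST <- S0 * exp((r - 0.5 * sigma^2) * T + sigma * sqrt(T) * Z)
  exp(-r * T) * pmax(ST - K, 0)
}
pair_means <- (disc_payoff(Z1) + disc_payoff(Z2)) / 2   # average each antithetic pair
mean(pair_means)      # estimate with a lower standard error than plain MCS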
21.6.2. D. FALSE. Because the bootstrap samples cannot retrieve a daily loss greater
than the worst in the set! On the other hand, the Monte Carlo's data generating process
(DGP) indeed can generate an actual loss that exceeds anything in the sample. However,
please note that because the bootstrap is with replacement, the cumulative 5-day return can
exceed the worst 5-day return in the actual sample.
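
A hedged R sketch of this iid bootstrap of cumulative 5-day returns (the return history below is a placeholder for illustration; the seed and simulation count are ours):

set.seed(7)
r_daily_act <- rnorm(250, mean = 0.0004, sd = 0.01)  # placeholder for last year's actual daily returns
n_sims <- 10000
week_returns <- replicate(n_sims, {
  draws <- sample(r_daily_act, 5, replace = TRUE)  # resample five daily returns with replacement
  prod(1 + draws) - 1                              # compound into a cumulative 5-day return
})
quantile(week_returns, probs = c(0.01, 0.05))      # loss quantiles over the next week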
21.6.3. The other property Bob seeks is that the control variate should be highly correlated with g(X), the quantity being simulated.

Using the notation of the reading, the unknown function is g(X) such that the Monte Carlo
simulation approximates the expected value, E[g(X)]; each simulated value can be decomposed
as g(X(i)) = E[g(X(i))] + η(i), where η(i) is a mean-zero error. The control variate is another random
variable, h(X(i)), that is correlated with g(X(i)) and hence with its error. In this question, Bob is
estimating E[Price(E)] = E[g(X)] while BSM(V) is the control variate. According to the reading, "a good control variate
should have two properties:
First, it should be inexpensive to construct from x(i). If the control variate is slow to
compute, then larger variance reductions—holding the computational cost fixed—may
be achieved by increasing the number of simulations, b, rather than constructing the
control variates.
Second, a control variate should have a high correlation with g(X). The optimal
combination parameter β that minimizes the approximation error is estimated using the
regression: g(X(i)) = α + βh(X(i)) + ν(i)"2
2. Education, Pearson. Quantitative Analysis. Pearson Learning Solutions, 2020. VitalBook file.
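
A hedged R sketch of Bob's control-variate adjustment; the path-generation details, the choice of an arithmetic-average Asian call as the exotic payoff, and all parameter values are our illustrative assumptions:

set.seed(11)
S0 <- 100; K <- 100; r <- 0.03; sigma <- 0.28; T <- 0.5
M <- 10000; steps <- 126; dt <- T / steps
Z     <- matrix(rnorm(M * steps), nrow = M)
paths <- S0 * t(apply(exp((r - 0.5 * sigma^2) * dt + sigma * sqrt(dt) * Z), 1, cumprod))
MCS_E <- mean(exp(-r * T) * pmax(rowMeans(paths) - K, 0))   # simulated exotic (Asian) price
MCS_V <- mean(exp(-r * T) * pmax(paths[, steps] - K, 0))    # simulated vanilla price, same paths
d1 <- (log(S0 / K) + (r + 0.5 * sigma^2) * T) / (sigma * sqrt(T)); d2 <- d1 - sigma * sqrt(T)
BSM_V <- S0 * pnorm(d1) - K * exp(-r * T) * pnorm(d2)       # analytical vanilla price
MCS_E + (BSM_V - MCS_V)                                     # control-variate adjusted exotic price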
600.1. Although simulation methods might be employed in each of the following situations (or
"use cases"), which situation below LEAST requires the use of a simulation method?
a) Estimating the value of an exotic option when an analytical pricing formula is unavailable
b) Determining the effect on financial markets of substantial changes in the macroeconomic
environment
c) Calibrating the size of a Treasury bond trade in order to hedge the duration risk of a
corporate bond portfolio
d) Stress-testing risk management models to determine whether they generate capital
requirements sufficient to cover losses in all situations
600.2. Peter, the analyst, conducted a Monte Carlo simulation in two stages. He presented it to
the firm's risk committee, who deemed it useful and robust. In regard to his simulation, each of
the following is plausible EXCEPT, which is unlikely?
a) In the first stage, the probability distribution specified for errors was not the normal
distribution
b) In the second stage, the parameter of interest is a portfolio value rather than a
regression coefficient
c) In the first stage, a full structural model informed the data generating process (DGP)
rather than a pure time series model
d) In the second stage, the quantity (N), which is the number of replications, was limited to
a low value because the desired confidence was high
600.3. Paul is a researcher who is using Monte Carlo simulation in order to determine what
effect heteroscedasticity has upon the size and power of a test for autocorrelation. If the
variance of the estimate, var(x), of his quantity of interest is 36.0 and he requires a standard
error of the estimate, S(x), to be no greater than 0.10, how many replications does his
simulation require?
a) 360
b) 3,600
c) 10,000
d) 36,000
Answers:
600.1. C. Calibrating the size of a Treasury bond trade in order to hedge the duration risk of a corporate bond portfolio. A duration hedge ratio has a straightforward analytical solution, so this situation least requires simulation.

In regard to the true choices (A), (B), and (D), Brooks writes, "Examples from econometrics of where
simulation may be useful include:
Quantifying the simultaneous equations bias induced by treating an endogenous
variable as exogenous
Determining the appropriate critical values for a Dickey-Fuller test
Determining what effect heteroscedasticity has upon the size and power of a test for
autocorrelation.
Simulations are also often extremely useful tools in finance, in situations such as:
The pricing of exotic options, where an analytical pricing formula is unavailable
Determining the effect on financial markets of substantial changes in the macroeconomic
environment
‘Stress-testing’ risk management models to determine whether they generate capital
requirements sufficient."3
600.2. D. In the second stage, the quantity (N), which is the number of replications, was
limited to a low value because the desired confidence was high. This is the implausible
choice: a higher desired confidence (i.e., greater accuracy) requires more replications, not fewer.

600.3. B. 3,600
Because S(x) = sqrt[var(x)/N], it follows that S(x)^2 = var(x)/N and N = var(x)/S(x)^2. In this case,
N = 36.0/(0.10^2) = 3,600.
3. Chris Brooks, Introductory Econometrics for Finance, 3rd Ed. (Cambridge, UK: Cambridge University Press, 2014)
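
As a quick arithmetic check of 600.3 (our own illustration):

var_x     <- 36.0
se_target <- 0.10
N <- var_x / se_target^2   # from S(x) = sqrt[var(x)/N]  =>  N = var(x)/S(x)^2
N                          # 3,600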
601.1. Betty is an analyst using Monte Carlo simulation to price an exotic option. Her simulation
consists of 10,000 replications where the key random variable is a random standard normal
because the underlying process is geometric Brownian motion (GBM). For example, in Excel, a
random standard normal value is achieved with an inverse transformation of a random uniform
variable by way of the nested function NORM.S.INV(RAND()). In this case, each random
standard normal, z(i) = N(0,1), is the random draw that becomes an input into the option price
function. Her simulation succeeds in producing an estimate for the option's price, but Betty is
concerned the confidence interval around her estimate is too large. If her aim is to reduce the
standard error, which of the following approaches is NEAREST to the antithetic variate
technique?
a) She simulates 5,000 pairs of random z(i) and -z(i) such that each pair has perfectly
negative covariance
b) She quadruples the number of replications which will reduce the standard error by 50%
because the sqrt(four) is equal to two
c) She imposes a condition of i.i.d. (independent and identically distributed) on the series
of z(i), which eliminates the covariance term
d) She introduces low-discrepancy sequencing, which leads the Monte Carlo standard errors
to be reduced in direct proportion to the number of replications rather than in proportion
to the square root of the number of replications
601.2. Betty is an analyst using Monte Carlo simulation to price an Asian option. An Asian
option is a path-dependent option because its payoff depends on the arithmetic average of the
price of the underlying asset during the option's life. She successfully prices the option by using
10,000 replications; the simulated Asian price is denoted P(A). However, she wants to reduce
the simulation's sampling error. Which of the following approaches is NEAREST to the control
variate technique?
a) She simulates the value of path-dependency as a variable, V(PD), and adds this to the
analytical solution to the value of an Asian option WITHOUT path dependency
b) She simulates the prices of a vanilla European option, P(BS), and also analytically prices
the same, denoted P*(BS); she then estimates the Asian option price as given by [P(BS)
+ P*(BS)] - P(A)
c) She simulates the prices of a vanilla European option, P(BS), and also analytically prices
the same, denoted P*(BS); she then estimates the Asian option price as given by P(A) +
[P*(BS) - P(BS)]
d) In this scenario, she does not have a good control variate approach because she cannot
find a control statistic that is highly correlated to her statistic of interest
601.3. According to Brooks, which of the following is TRUE about random number re-usage?
a) Random number re-usage is the best way to increase the accuracy of an estimate
b) Random number re-usage can reduce the variability of the difference in estimates
across experiments
c) Random number re-usage is typically a high priority because it tends to greatly reduce
computational time
d) Although random number re-usage is not advisable across experiments, it makes great
sense within a Monte Carlo experiment
Answers:
601.1. A. TRUE: She simulates 5,000 pairs of random z(i) and -z(i) such that each pair has
perfectly negative covariance
Brooks: "The antithetic variate technique involves taking the complement of a set of random
numbers and running a parallel simulation on those. For example, if the driving stochastic force
is a set of T N(0,1) draws, denoted u(t), for each replication, an additional replication with errors
given by -u(t) is also used. It can be shown that the Monte Carlo standard error is reduced when
antithetic variates are used.”4
601.2. C. TRUE: She simulates the prices of a vanilla European option, P(BS), and also
analytically prices the same, denoted P*(BS); she then estimates the Asian option price
as given by P(A) + [P*(BS) - P(BS)]
Brooks: "13.3.2 Control variates: The application of control variates involves employing
a variable similar to that used in the simulation, but whose properties are known prior to
the simulation. Denote the variable whose properties are known by (y) and that whose
properties are under simulation by (x). The simulation is conducted on (x) and also on
(y), with the same sets of random number draws being employed in both cases.
Denoting the simulation estimates of (x) and (y) by x̂ and ŷ, respectively, a new estimate
of (x) can be derived from: x* = y + (x̂ - ŷ) = x̂ + (y - ŷ). Again, it can be shown that the
Monte Carlo sampling error of this quantity, x*, will be lower than that of (x) provided that
a certain condition holds."4
For further reference, from Hull's (unassigned) Chapter 21 of Options, Futures, and
Other Derivatives: "Control Variate Technique: A technique known as the control
variate technique can improve the accuracy of the pricing of an American option. This
involves using the same tree to calculate the value of both the American option, f (A),
and the corresponding European option, f (E). The Black–Scholes–Merton price of the
European option, f (BSM), is also calculated. The error when the tree is used to price the
European option, f (BSM) - f (E), is assumed equal to the error when the tree is used to
price the American option. This gives the estimate of the price of the American option as:
f (A) + [f(BSM) - f(E)]."5
4. Chris Brooks, Introductory Econometrics for Finance, 3rd Ed. (Cambridge, UK: Cambridge University Press, 2014)
5. Hull, Options, Futures, and Other Derivatives, 9th Edition, Pearson (November 16, 2017)
601.3. B. TRUE: Random number re-usage can reduce the variability of the difference in
estimates across experiments
Brooks: "13.3.3 Random number re-usage across experiments: Although of course, it would
not be sensible to re-use sets of random number draws within a Monte Carlo experiment, using
the same sets of draws across experiments can greatly reduce the variability of the difference in
the estimates across those experiments. For example, it may be of interest to examine the
power of the Dickey-Fuller test for samples of size 100 observations and for different values of
Φ (to use the notation of chapter 8). Thus, for each experiment involving a different value of Φ,
the same set of standard normal random numbers could be used to reduce the sampling
variation across experiments. However, the accuracy of the actual estimates in each case will
not be increased, of course.
Another possibility involves taking long series of draws and then slicing them up into several
smaller sets to be used in different experiments. For example, Monte Carlo simulation may be
used to price several options of different times to maturity, but which are identical in all other
respects. Thus, if six-month, three-month and one-month horizons were of interest, sufficient
random draws to cover six months would be made. Then the six-months’ worth of draws could
be used to construct two replications of a three-month horizon, and six replications for the one-
month horizon. Again, the variability of the simulated option prices across maturities would be
reduced, although the accuracies of the prices themselves would not be increased for a given
number of replications.
Random number re-usage is unlikely to save computational time, for making the random draws
usually takes a very small proportion of the overall time taken to conduct the whole
experiment."6
6. Chris Brooks, Introductory Econometrics for Finance, 3rd Ed. (Cambridge, UK: Cambridge University Press, 2014)
P1.T2.602. Bootstrapping
Learning Objectives: Describe the bootstrapping method and its advantage over Monte
Carlo simulation. Describe the pseudo-random number generation method and how a
good simulation design alleviates the effects the choice of the seed has on the properties
of the generated series. Describe situations where the bootstrapping method is
ineffective. Describe disadvantages of the simulation approach to financial problem
solving.
602.1. What is the crucial difference between bootstrapping and Monte Carlo simulation?
a) One uses artificial data, but the other requires actual data
b) One requires random number generation, but the other does not rely on randomness
c) One requires a distributional assumption, but the other does not permit a distributional
assumption
d) There is no crucial difference between bootstrapping and Monte Carlo simulation
602.3. Peter used a simple Monte Carlo simulation to estimate the price of an Asian option. In
his first step, he specified a geometric Brownian motion (GBM) which is the same process used
in the Black-Scholes-Merton model. His boss Sally observes, "This is nice work Peter, but the
drawback to this approach is that you've assumed underlying returns are normally distributed.
Yet we know that returns are fat-tailed in practice." How can Peter overcome this objection and
include a fat-tailed assumption in his model?
a) He could assume the errors follow a GARCH process
b) He could assume the errors are drawn from a fat-tailed distribution; e.g., student's t
c) He could either assume the errors follow a GARCH process or assume the errors are
drawn from a fat-tailed distribution
d) Monte Carlo simulation cannot overcome this objection; this is a disadvantage of Monte
Carlo simulation in comparison to bootstrapping
Answers:
602.1. A. One uses artificial data, but the other requires actual data
Brooks: "13.4 Bootstrapping Bootstrapping is related to simulation, but with one crucial
difference. With simulation, the data are constructed completely artificially. Bootstrapping, on
the other hand, is used to obtain a description of the properties of empirical estimators by using
the sample data points themselves, and it involves sampling repeatedly with replacement from
the actual data. Many econometricians were initially highly skeptical of the usefulness of the
technique, which appears, at first sight, to be some kind of magic trick – creating useful
additional information from a given sample. Indeed, Davison and Hinkley (1997, p. 3) state that
the term ‘bootstrap’ in this context comes from an analogy with the fictional character Baron
Munchhausen, who got out from the bottom of a lake by pulling himself up by his bootstraps."7
602.2. C. Both when there are outliers in the data and when the data are not independent
Brooks: "13.4.2 Situations where the bootstrap will be ineffective. There are at least two
situations where the bootstrap, as described above, will not work well. (1) Outliers in the data: If
there are outliers in the data, the conclusions of the bootstrap may be affected. In particular, the
results for a given replication may depend critically on whether the outliers appear (and how
often) in the bootstrapped sample. (2) Non-independent data: Use of the bootstrap implicitly
assumes that the data are independent of one another. This would obviously not hold if, for
example, there were autocorrelation in the data. A potential solution to this problem is to use a
‘moving block bootstrap’. Such a method allows for the dependence in the series by sampling
whole blocks of observations at a time. These, and many other issues relating to the theory and
practical usage of the bootstrap are given in Davison and Hinkley (1997); see also Efron (1979,
1982). It is also worth noting that variance reduction techniques are also available under the
bootstrap, and these work in a very similar way to those described above in the context of pure
simulation."7
602.3. C. He could either assume the errors follow a GARCH process or assume the errors
are drawn from a fat-tailed distribution
Brooks: "13.8.1 Simulating the price of a financial option using a fat-tailed underlying
process. A fairly limiting and unrealistic assumption in the above methodology for pricing
options is that the underlying asset returns are normally distributed, whereas in practice, it is
well known that asset returns are fat-tailed. There are several ways to remove this assumption.
First, one could employ draws from a fat-tailed distribution, such as a Student's t, in step 2 above.
Another method, which would generate a distribution of returns with fat tails, would be to
assume that the errors and therefore the returns follow a GARCH process. To generate draws
from a GARCH process, do the steps shown in box 13.6."7
7. Chris Brooks, Introductory Econometrics for Finance, 3rd Ed. (Cambridge, UK: Cambridge University Press, 2014)
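
A minimal R sketch of the fat-tailed alternative described in 602.3; the rescaled Student's t shocks and all parameter values are our illustrative assumptions (assuming a GARCH process for the errors is the other route mentioned):

set.seed(99)
n <- 10000; df <- 6
S0 <- 100; r <- 0.03; sigma <- 0.28; T <- 0.5
eps_norm <- rnorm(n)                          # thin-tailed (normal) shocks
eps_t    <- rt(n, df) / sqrt(df / (df - 2))   # Student's t shocks rescaled to unit variance
ST_norm <- S0 * exp((r - 0.5 * sigma^2) * T + sigma * sqrt(T) * eps_norm)
ST_t    <- S0 * exp((r - 0.5 * sigma^2) * T + sigma * sqrt(T) * eps_t)
# the simulated ST_t distribution exhibits fatter tails than ST_norm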
400.1. Barbara, the analyst, has been tasked to choose a univariate probability distribution in
order to create a simulation model of future market returns. One of her key steps utilizes the chi-
square distribution. Among the following four methods reviewed by Pachamanova and Fabozzi,
which approach is she most likely using?
a) Bootstrapping
b) Assume a distribution, then use historical data to estimate parameters
c) Use historical data to find a distribution
d) Ignore the past and look forward with a subjective choice of distribution
400.2. Peter the Analyst has generated 800 independent scenarios of future single-period
portfolio values. He observes the mean (average) of his simulated output distribution and
determines a 95.0% confidence interval with a length of approximately $300.00; i.e., length is
the difference between the upper and lower bound of the confidence interval. Peter's manager
wants him to increase the accuracy of his estimate of the population's mean by reducing the
length of the confidence interval to about $60.00. How many scenarios should Peter run?
a) 800; no change in trials but increase the confidence level
b) 4,000
c) 7,200
d) 20,000
400.3. According to Pachamanova and Fabozzi, each of the following is true about
understanding and interpreting the results generated by Monte Carlo simulation, EXCEPT which
is false?
a) A simulation model applies an input probability distribution(s) to a deterministic model in
order to generate many scenarios (a.k.a., trials) which produce output variables and the
corresponding output probability distribution
b) Despite several advantages, the key weakness (drawback) of simulations is an inability
to generate statistical measures of central tendency and volatility for the output
probability distribution
c) Simulation is similar to statistical sampling in that we try to represent uncertainty by
generating scenarios, that is, “sampling” values for the output parameter of interest from
an underlying probability distribution
d) The simulated output's minimum and the maximum are highly sensitive to the number of
simulated values and whether the simulated values in the tails of the distribution provide
good representation for the tails of the distribution
Answers:
400.1. C. Use historical data to find a distribution (see forum for additional explanation)
400.2. D. 20,000. The new CI length = $300 * 1/SQRT(x/800); setting this equal to $60.00 implies
SQRT(x/800) = 5, such that x = 25*800 = 20,000; i.e., a 25-fold increase in trials achieves
1/SQRT(25) = 1/5th the interval length.
400.3. B. False. The output distribution is not parametric but can be analyzed like any
sample dataset
Pachamanova and Fabozzi in Summary: "The distributions of output variables can be analyzed
by statistical techniques. Statistics of interest include measures of central tendency (average,
mode), measures of volatility (standard deviation, percentiles), skewness, and kurtosis. Minima
and maxima from simulations should be interpreted with care because they often depend on the
input assumptions and are very sensitive to the number of trials in the simulation."8
8. Dessislava Pachamanova and Frank Fabozzi, Simulation and Optimization in Finance (Hoboken, NJ: John Wiley & Sons, 2010).
401.1. In arguing in favor of a simulation model approach, Risk Analyst William makes the
following five arguments:
I. Neither the sum of normal random variables nor the product of lognormal random
variables has a closed-form (analytical) solution
II. Simulation enables us to evaluate (approximately) a complex function of a random
variable.
III. Simulation enables us to visualize the probability distribution resulting from compounding
probability distributions for multiple input variables.
IV. Simulation allows us to incorporate correlations between input variables.
V. Simulation is a low-cost tool for checking the effect of changing a strategy on an output
variable of interest.
401.2. Confronted with discretization error bias, which of the following is the most likely remedy?
a) Substitute a closed-form expression
b) Assume a mean-reverting process
c) Reduce the time interval length and increase the number of steps
d) Increase the time interval length and decrease the number of steps
401.3. Four simulations all produce an identical sample standard deviation, denoted S. Which
simulation is the most efficient if time-adjusted?
a) 1,000 scenarios requiring 1.0 hour
b) 5,000 scenarios requiring 3.0 hours
c) 10,000 scenarios requiring 9.0 hours
d) 30,000 scenarios requiring 25.0 hours
Answers:
401.1. Arguments II., III., IV., and V. are valid advantages of simulation; argument I. is incorrect (see the note below the quote).

Pachamanova and Fabozzi: "Despite its simplicity, this example allows us to point out
one of the advantages of simulation modeling over pure mathematical modeling.
Simulation enables us to evaluate (approximately) a function of a random variable ...
There are three other important advantages of simulation that can only be appreciated in
more complex situations. The first one is that simulation enables us to visualize the
probability distribution resulting from compounding probability distributions for multiple
input variables. The second is that it allows us to incorporate correlations between input
variables. The third is that simulation is a low-cost tool for checking the effect of
changing a strategy on an output variable of interest."9
In regard to (I.), please note: the sum of normal random variables is normal, and the
product of lognormal random variables is lognormal. These are convenient exceptions to
the more common reality that sums and products of random variables tend not to be
analytically obvious.
401.2. C. Reduce the time interval length and increase the number of steps
9. Dessislava Pachamanova and Frank Fabozzi, Simulation and Optimization in Finance (Hoboken, NJ: John Wiley & Sons, 2010).
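
Regarding 401.2, a hedged R sketch of the remedy: shrink the time-step length (dt) and increase the number of steps in the discretized process. The Euler scheme, path counts, and parameter values are our illustrative assumptions:

S0 <- 100; K <- 100; r <- 0.03; sigma <- 0.28; T <- 0.5
mcs_call_euler <- function(steps, n_paths = 20000) {
  dt <- T / steps
  S  <- rep(S0, n_paths)
  for (i in 1:steps) {
    S <- S * (1 + r * dt + sigma * sqrt(dt) * rnorm(n_paths))  # one Euler step of GBM
  }
  mean(exp(-r * T) * pmax(S - K, 0))                           # discounted average payoff
}
set.seed(5)
mcs_call_euler(steps = 2)     # coarse grid: larger discretization error
mcs_call_euler(steps = 250)   # fine grid: discretization error shrinks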
402.1. The cumulative distribution function (CDF) of the exponential distribution is given by F(x)
= 1 - exp(-λ*x). Assume an exponential distribution with a lambda (λ) parameter equal to 0.50. If
we apply the inverse transform method, what random number (x) is nearest to the value
produced if our random uniform number generated on the unit interval [0,1] happens to be
0.8650?
a) 1.0
b) 2.0
c) 3.0
d) 4.0
402.2. Each of the following satisfies a condition for a good pseudo-random number generator
EXCEPT which does not?
a) The numbers generated in a single sequence are difficult to reproduce; i.e., pseudo-
random numbers "should not be replicable."
b) The numbers in the generated sequence are uniformly distributed between 0 and 1.
c) The sequence has a long cycle; i.e., it takes many iterations before the sequence begins
repeating itself
d) The numbers in the sequence are not autocorrelated.
402.3. Consider the following four assertions about random number generators:
Answers:
402.1. D. 4.0
If F(x) = 1 - exp(-λ*x), then exp(-λ*x) = 1 - F(x), and x = -LN[1-F(x)]/λ. In this case, x = -LN[1-
0.8650]/0.5 = 4.0050
10. Dessislava Pachamanova and Frank Fabozzi, Simulation and Optimization in Finance (Hoboken, NJ: John Wiley & Sons, 2010).
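
As a quick check of 402.1 via the inverse transform (our own illustration):

lambda <- 0.50
u <- 0.8650
x <- -log(1 - u) / lambda   # inverts F(x) = 1 - exp(-lambda * x)
x                           # about 4.005; equivalently, qexp(u, rate = lambda)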