
Nested sampling

As we have seen, nested sampling is a way to calculate evidences,


Z = ∫ L(θ) π(θ) dθ, (1)

with L(θ) the likelihood function and π(θ) the prior probability density. (A stepwise
overview of the algorithm is given at the end of this document.) However, the technique
can in principle be used to compute any (multi-dimensional) integral.

1. Consider the following function of two variables, where x, y ∈ [−0.5, 0.5]:

F(x, y) = 1/(2π σx σy) exp(−x²/(2σx²) − y²/(2σy²)). (2)

To begin with, pick σx = 0.1 and σy = 0.5. Suppose we want to calculate the integral
∫_{−0.5}^{0.5} dx ∫_{−0.5}^{0.5} dy F(x, y). (3)

If you wanted to compute this by means of nested sampling, what quantities can naturally
be identified with evidence, likelihood function, and prior?
Write a nested sampling routine to calculate (3). In doing so,

• Use print statements to monitor how the smallest likelihood for the set of live points
evolves.

• Similarly, keep track of how the prior mass Xk evolves.

• Keep track of how the evidence evolves.

You will need to choose a termination condition. Based on what you are seeing, which
condition appears to be well-suited? Also experiment with increasing the number of live
points, to see how many are needed before the end result is no longer appreciably affected.
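For orientation, here is a minimal sketch of such a routine in Python (not part of the original exercise sheet). It identifies the uniform density on the square with the prior and F with the likelihood, so that the evidence Z is exactly the integral (3). The live-point count M, the tolerance, and the simple rejection-based replacement step are illustrative choices, and the prior mass is shrunk from X = 1 in every step (the standard convention), which differs slightly from the X0 = 1 bookkeeping in the algorithm summary below.

```python
import numpy as np

def log_F(x, y, sx=0.1, sy=0.5):
    """Log of the integrand F(x, y) of Eq. (2)."""
    return -np.log(2 * np.pi * sx * sy) - x**2 / (2 * sx**2) - y**2 / (2 * sy**2)

def nested_sampling(M=200, tol=1e-3, seed=0):
    """Nested sampling for Eq. (3): uniform prior on [-0.5, 0.5]^2, F as likelihood.

    Returns Z together with the discarded points and their weights Lk * dXk
    (these are reused for the posterior in exercises 4 and 7).
    """
    rng = np.random.default_rng(seed)
    live = rng.uniform(-0.5, 0.5, size=(M, 2))    # live points drawn from the prior
    logL = log_F(live[:, 0], live[:, 1])
    X_prev, Z, k = 1.0, 0.0, 0                    # full prior mass before any discard
    points, weights = [], []
    while True:
        i = np.argmin(logL)                       # worst live point, likelihood Lk
        L_k = np.exp(logL[i])
        X_k = rng.random() ** (1.0 / M) * X_prev  # shrink by t ~ p(t) = M t^(M-1)
        w_k = L_k * (X_prev - X_k)                # Lk * dXk
        Z += w_k
        points.append(live[i].copy())
        weights.append(w_k)
        if k % 200 == 0:                          # monitor L_min, Xk and Z
            print(f"step {k}: L_min = {L_k:.4g}  X = {X_k:.4g}  Z = {Z:.6f}")
        # terminate once even the best live likelihood times the remaining
        # prior mass could no longer change Z appreciably
        if np.exp(logL.max()) * X_k < tol * Z:
            break
        # replace the discarded point: draw from the prior until L > Lk
        # (simple rejection; fine here, but slow for very peaked integrands)
        while True:
            cand = rng.uniform(-0.5, 0.5, size=2)
            if log_F(cand[0], cand[1]) > logL[i]:
                break
        live[i] = cand
        logL[i] = log_F(cand[0], cand[1])
        X_prev = X_k
        k += 1
    Z += X_k * np.exp(logL).mean()                # leftover live-point contribution
    return Z, np.array(points), np.array(weights)

Z, points, weights = nested_sampling()
print("Z ≈", Z)
```

The termination test used here, stopping once even the largest live likelihood times the remaining prior mass could no longer change Z noticeably, is one common choice; the exercise asks you to judge for yourself which condition works well.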

2. In order to check to what extent your calculation of the integral (3) is correct, compute
it in some other way, e.g. using Mathematica or Matlab.
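If you prefer to stay in Python rather than use Mathematica or Matlab, a sketch with scipy serves the same purpose; for this separable Gaussian the box integral also has a closed form in terms of error functions, which gives an independent check.

```python
import numpy as np
from scipy import integrate, special

sx, sy = 0.1, 0.5

def F(y, x):
    """Integrand of Eq. (3); dblquad expects the arguments in (y, x) order."""
    return np.exp(-x**2 / (2 * sx**2) - y**2 / (2 * sy**2)) / (2 * np.pi * sx * sy)

numeric, err = integrate.dblquad(F, -0.5, 0.5, -0.5, 0.5)
analytic = special.erf(0.5 / (np.sqrt(2) * sx)) * special.erf(0.5 / (np.sqrt(2) * sy))
print(numeric, analytic)   # both ≈ 0.6827
```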

3. Pick different values for σx, σy from the ones above. How is the nested sampling result
affected depending on whether the integrand F(x, y) is more peaked or less peaked in one
or both variables x, y?

4. The nested sampling process also gives you an approximation of the posterior density
function; in our case, p(xk, yk) = L(xk, yk) ∆Xk / Z, where (xk, yk) is the live point discarded
in step k of the algorithm, L is the likelihood, ∆Xk the difference in prior mass between
steps k and k − 1, and Z the evidence. Plot p(xk, yk) as a 2D histogram. Had you expected
the result?
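A minimal plotting sketch, reusing the `points` and `weights` arrays returned by the nested-sampling sketch of exercise 1 (the weights divided by Z approximate p(xk, yk)):

```python
import matplotlib.pyplot as plt

# weighted 2D histogram of the discarded points; weights wk / Z ≈ p(xk, yk)
plt.hist2d(points[:, 0], points[:, 1], bins=40,
           range=[[-0.5, 0.5], [-0.5, 0.5]], weights=weights / Z)
plt.xlabel("x")
plt.ylabel("y")
plt.colorbar(label="posterior weight")
plt.show()
```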

5. Now introduce a non-flat prior density distribution:

π(x, y) = 9 (x − 0.5)² (y − 0.5)². (4)

Verify that this prior is normalized. Find a way to randomly draw points from the above
distribution.
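One way to do this is inverse-transform sampling: the prior factorizes as π(x, y) = [3(x − 0.5)²] · [3(y − 0.5)²], each factor has CDF C(x) = (x − 0.5)³ + 1 on [−0.5, 0.5], and solving C(x) = u gives x = 0.5 − (1 − u)^(1/3), where 1 − u is again uniform on [0, 1]. A sketch:

```python
import numpy as np

def draw_prior(n, rng):
    """Draw n points (x, y) from pi(x, y) = 9 (x - 0.5)^2 (y - 0.5)^2."""
    v = rng.random(size=(n, 2))     # v uniform on [0, 1], plays the role of 1 - u
    return 0.5 - v ** (1.0 / 3.0)   # inverse CDF, applied componentwise

rng = np.random.default_rng(1)
pts = draw_prior(100_000, rng)
print(pts.mean(axis=0))             # should be close to the analytic mean (-0.25, -0.25)
```

For exercise 6 it then suffices to use such a routine in place of the uniform draws, in both the initialization and the replacement step of your nested sampling code.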

6. Implement the non-flat prior of Eq. (4) in your nested sampling code.

7. Again obtain an approximation to the posterior density function as in exercise 4 above,
and plot it. Find a way to obtain an approximation to the posterior density function for x
by itself.
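Assuming your modified routine again returns the discarded points and their weights in the format of the exercise-1 sketch, the marginal posterior for x can be obtained by histogramming the x coordinates alone with the same weights (marginalizing over y amounts to ignoring it):

```python
import matplotlib.pyplot as plt

plt.hist(points[:, 0], bins=40, range=(-0.5, 0.5), weights=weights / Z)
plt.xlabel("x")
plt.ylabel("marginal posterior weight")
plt.show()
```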

Nested sampling: The algorithm

The nested sampling algorithm proceeds as follows:

1. Initialization: Pick live points θ1, …, θM randomly from the prior density distribution π(θ).

2. Step k ≥ 0:

• Find the live point θk with the smallest likelihood, Lk .

• If k = 0 then associate with it a prior mass X0 = 1. If k > 0 then associate with
it a prior mass Xk = tk Xk−1, where Xk−1 is the prior mass from step k − 1, and
tk is drawn randomly from the distribution p(t) = M t^(M−1) (a one-line recipe
for drawing tk is sketched after the algorithm).

• Remove θk from the list of live points.

• Pick a new point θ′ randomly from the prior density distribution. Keep doing
this until L(θ′) > Lk; then add that point to the list of live points.

• Unless some appropriate termination condition is reached, go to the next step, k + 1.

3. Approximate the evidence by


Z ≈ Σ_{k=0}^{N} Lk ∆Xk (5)

with ∆Xk = Xk−1 − Xk and N the number of steps taken, and obtain samples of the
posterior density distribution through

p(θk) = Lk ∆Xk / Z. (6)
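One detail worth spelling out: since p(t) = M t^(M−1) has cumulative distribution P(t) = t^M, a sample tk can be drawn by inverse transform from a single uniform number u, as in this Python one-liner (M being the number of live points; the value below is only an example):

```python
import numpy as np

M = 200                             # number of live points (example value)
rng = np.random.default_rng()
t = rng.random() ** (1.0 / M)       # t = u^(1/M), so t ~ p(t) = M t^(M-1)
```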
