Module 3 - SMS
5.1 Properties of Random Numbers
A sequence of random numbers R1, R2, ... must have two important statistical properties: uniformity and independence. Each random number Ri is an independent sample drawn from a continuous uniform distribution between zero and 1. That is, the pdf is given by
f(x) = 1, 0 ≤ x ≤ 1
       0, otherwise
Pseudo means false, so false random numbers are being generated. The goal of any generation scheme is to produce a sequence of numbers between zero and 1 that simulates, or imitates, the ideal properties of uniform distribution and independence as closely as possible. When generating pseudo-random numbers, certain problems or errors can occur. These errors, or departures from ideal randomness, are all related to the properties stated previously. Some examples include the following:
a) The mean of the generated numbers may be too high or too low.
b) Several numbers above the mean may be followed by several numbers below the mean.
Usually, random numbers are generated by a digital computer as part of the simulation. Numerous
methods can be used to generate the values. In selecting among these methods, or routines, there are a
number of important considerations.
1. The routine should be fast. The total cost can be managed by selecting a computationally efficient
method of random-number generation.
2. The routine should be portable to different computers and, ideally, to different programming languages. This is desirable so that the simulation program produces the same results wherever it is executed.
3. The routine should have a sufficiently long cycle. The cycle length, or period, represents the length of
the random-number sequence before previous numbers begin to repeat themselves in an earlier order.
Thus, if 10,000 events are to be generated, the period should be many times that long.
A special case of cycling is degenerating. A routine degenerates when the same random numbers appear repeatedly. Such an occurrence is certainly unacceptable, and it can happen rapidly with some methods.
4. The random numbers should be replicable. Given the starting point (or conditions), it should be possible to generate the same set of random numbers, completely independent of the system that is being simulated. This is helpful for debugging purposes and is a means of facilitating comparisons between systems.
5. Most important, and as indicated previously, the generated random numbers should closely approximate the ideal statistical properties of uniformity and independence.
A widely used technique, initially proposed by Lehmer [1951], produces a sequence of integers X1, X2, ... between zero and m − 1 according to the following recursive relationship:
Xi+1 = (aXi + c) mod m,  i = 0, 1, 2, ...   (7.1)
The initial value X0 is called the seed, a is called the constant multiplier, c is the increment, and m is the
modulus.
If c ≠ 0 in Equation (7.1), the form is called the mixed congruential method. When c = 0, the form is
known as the multiplicative congruential method.
The selection of the values for a, c, m and X0 drastically affects the statistical properties and the cycle
length. An example will illustrate how this technique operates.
EXAMPLE 1 Use the linear congruential method to generate a sequence of random numbers with X0 =
27, a= 17, c = 43, and m = 100.
Here, the integer values generated will all be between zero and 99 because of the value of the modulus. These random integers should appear to be uniformly distributed on the integers zero to 99.
Random numbers between zero and 1 can be generated by
Ri = Xi/m,  i = 1, 2, ...   (7.2)
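As a quick sketch, the recursion of Equations (7.1) and (7.2) can be coded directly; the function below (name chosen here for illustration) reproduces Example 1:

```python
def lcg(seed, a, c, m, n):
    """Generate n random numbers R_i = X_i / m using X_{i+1} = (a*X_i + c) mod m."""
    x = seed
    values = []
    for _ in range(n):
        x = (a * x + c) % m      # Equation (7.1)
        values.append(x / m)     # Equation (7.2)
    return values

# Example 1: X0 = 27, a = 17, c = 43, m = 100
print(lcg(seed=27, a=17, c=43, m=100, n=4))
# -> [0.02, 0.77, 0.52, 0.27]; X4 = 27 equals the seed, so this generator repeats with period 4
```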
Basic Relationship:
The most natural choice for m is one that is equal to the capacity of a computer word:
m = 2^b (binary machine), where b is the number of bits in the computer word;
m = 10^d (decimal machine), where d is the number of digits in the computer word.
EXAMPLE 2: Let m = 10^2 = 100, a = 19, c = 0, and X0 = 63, and generate a sequence of random integers using Equation (7.1) with c = 0.
When m is a power of 10, say m = 10^b, the modulo operation is accomplished by saving the b rightmost (decimal) digits.
As computing power has increased, the complexity of the systems that we are able to simulate has also increased. One fruitful approach is to combine two or more multiplicative congruential generators in such a way that the combined generator has good statistical properties and a longer period. The following result from L'Ecuyer [1988] suggests how this can be done: if Wi,1, Wi,2, ..., Wi,k are any independent, discrete-valued random variables (not necessarily identically distributed), and one of them, say Wi,1, is uniformly distributed on the integers 0 to m1 − 2, then
Wi = (Wi,1 + Wi,2 + ... + Wi,k) mod (m1 − 1)
is uniformly distributed on the integers 0 to m1 − 2. To see how this result can be used to form combined generators, let Xi,1, Xi,2, ..., Xi,k be the ith output from k different multiplicative congruential generators, where the jth generator has prime modulus mj and the multiplier aj is chosen so that the period is mj − 1. Then the jth generator is producing integers Xi,j that are approximately uniformly distributed on 1 to mj − 1, and Wi,j = Xi,j − 1 is approximately uniformly distributed on 0 to mj − 2. L'Ecuyer [1988] therefore suggests combined generators of the form
Xi = ( Σ_{j=1}^{k} (−1)^(j−1) Xi,j ) mod (m1 − 1)
with
Ri = Xi / m1,           Xi > 0
     (m1 − 1) / m1,     Xi = 0
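As an illustration, the sketch below combines k = 2 multiplicative congruential generators in exactly this way, using the moduli and multipliers reported by L'Ecuyer [1988] (m1 = 2,147,483,563 with a1 = 40,014 and m2 = 2,147,483,399 with a2 = 40,692); the function name and seeds are chosen here for illustration:

```python
M1, A1 = 2147483563, 40014   # first multiplicative generator (prime modulus)
M2, A2 = 2147483399, 40692   # second multiplicative generator (prime modulus)

def combined_generator(seed1, seed2, n):
    """Combine two multiplicative congruential generators as suggested by L'Ecuyer [1988]."""
    x1, x2 = seed1, seed2
    out = []
    for _ in range(n):
        x1 = (A1 * x1) % M1
        x2 = (A2 * x2) % M2
        x = (x1 - x2) % (M1 - 1)                  # X_i = (X_i,1 - X_i,2) mod (m1 - 1)
        r = x / M1 if x > 0 else (M1 - 1) / M1    # map to a random number in (0, 1)
        out.append(r)
    return out

# seeds must satisfy 1 <= seed1 <= m1 - 1 and 1 <= seed2 <= m2 - 1
print(combined_generator(seed1=12345, seed2=67890, n=3))
```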
5.4 Tests for Random Numbers
The desirable properties of random numbers, uniformity and independence, are checked with two categories of tests:
1. Frequency test. Uses the Kolmogorov-Smirnov or the chi-square test to compare the distribution of the set of numbers generated to a uniform distribution.
2. Autocorrelation test. Tests the correlation between numbers and compares the sample correlation to the expected correlation of zero.
5.4.1 Frequency Tests
A basic test that should always be performed to validate a new generator is the test of uniformity. Two different methods of testing are available:
1. the Kolmogorov-Smirnov (KS) test, and
2. the chi-square test.
• Both of these tests measure the degree of agreement between the distribution of a sample of
generated random numbers and the theoretical uniform distribution.
• Both tests are based on the null hypothesis of no significant difference between the sample distribution and the theoretical distribution.
1. The Kolmogorov-Smirnov test. This test compares the continuous cdf, F(x), of the uniform distribution to the empirical cdf, SN(x), of the sample of N observations. By definition,
F(x) = x, 0 ≤ x ≤ 1
If the sample from the random-number generator is R1, R2, ..., RN, then the empirical cdf, SN(x), is defined by
SN(x) = (number of R1, R2, ..., RN that are ≤ x) / N
The Kolmogorov-Smirnov test is based on the largest absolute deviation between F(x) and SN(x) over the range of the random variable. That is, it is based on the statistic
D = max |F(x) − SN(x)|
For testing against a uniform cdf, the test procedure follows these steps:
Step 1: Rank the data from smallest to largest. Let R(i) denote the ith smallest observation, so that
R(1) ≤ R(2) ≤ ... ≤ R(N)
Step 2: Compute
D+ = max_{1 ≤ i ≤ N} { i/N − R(i) }
D− = max_{1 ≤ i ≤ N} { R(i) − (i − 1)/N }
Step 3: Compute D = max(D+, D−).
Step 4: Determine the critical value, Dα, from Table A.8 for the specified significance level α and the given sample size N.
Step 5: If the sample statistic D is greater than the critical value Dα, the null hypothesis that the data are a sample from a uniform distribution is rejected. If D ≤ Dα, we conclude that no difference has been detected between the true distribution of {R1, R2, ..., RN} and the uniform distribution.
EXAMPLE 6: Suppose that the five numbers 0.44, 0.81, 0.14, 0.05, 0.93 were generated, and it is
desired to perform a test for uniformity using the Kolmogorov-Smirnov test with a level of significance α
of 0.05.
Step 1: Rank the data from smallest to largest. 0.05, 0.14, 0.44, 0.81, 0.93
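Step 2: Compute D+ and D−. With N = 5, the values i/N are 0.20, 0.40, 0.60, 0.80, 1.00, so
i/N − R(i):  0.15, 0.26, 0.16, −0.01, 0.07   →  D+ = 0.26
R(i) − (i − 1)/N:  0.05, −0.06, 0.04, 0.21, 0.13   →  D− = 0.21
Step 3: D = max(D+, D−) = max(0.26, 0.21) = 0.26.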
Step 4: Determine the critical value, Dα, from Table A.8 for the specified significance level α and the
given sample size N. Here α=0.05, N=5 then value of Dα = 0.565
Step 5: Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.
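The same computation can be sketched in Python (the function name is chosen here for illustration; the critical value Dα still comes from Table A.8):

```python
def ks_uniformity_statistic(sample):
    """Compute D = max(D+, D-) for the Kolmogorov-Smirnov test of uniformity on (0, 1)."""
    r = sorted(sample)                                          # Step 1: rank the data
    n = len(r)
    d_plus = max((i + 1) / n - r[i] for i in range(n))          # Step 2: D+
    d_minus = max(r[i] - i / n for i in range(n))               # Step 2: D-
    return max(d_plus, d_minus)                                 # Step 3

d = ks_uniformity_statistic([0.44, 0.81, 0.14, 0.05, 0.93])
print(round(d, 2))   # 0.26 -- compare with D_0.05 = 0.565 from Table A.8 (Steps 4 and 5)
```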
2. The chi-square test. The chi-square test uses the sample statistic
χ0² = Σ_{i=1}^{n} (Oi − Ei)² / Ei
where Oi is the observed number in the ith class, Ei is the expected number in the ith class, n is the number of classes, and N is the number of observations. For the uniform distribution with equal-length classes, Ei = N/n.
Note: The sampling distribution of χ0² is approximately the chi-square distribution with n − 1 degrees of freedom.
Example 7: Use the chi-square test with α = 0.05 to test whether the data shown below are uniformly
distributed. The test uses n = 10 intervals of equal length, namely [0, 0.1), [0.1, 0.2)... [0.9, 1.0).
(REFER TABLE A.6)
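Because the data table for Example 7 is not reproduced above, the sketch below only shows how the statistic would be computed once the observed counts Oi for the ten intervals are tallied (function name chosen here for illustration):

```python
def chi_square_statistic(observed_counts, total_n):
    """Compute chi_0^2 = sum (O_i - E_i)^2 / E_i for equally likely classes."""
    n_classes = len(observed_counts)
    expected = total_n / n_classes          # E_i = N / n for the uniform distribution
    return sum((o - expected) ** 2 / expected for o in observed_counts)

# observed = [...]  # counts of the data falling in [0,0.1), [0.1,0.2), ..., [0.9,1.0)
# chi0 = chi_square_statistic(observed, total_n=sum(observed))
# Compare chi0 with the chi-square critical value for n - 1 = 9 degrees of freedom (Table A.6).
```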
5.4.2 Tests for Auto-correlation
The tests for auto-correlation are concerned with the dependence between numbers in a sequence. The list of the 30 numbers appears to have the effect that every 5th number has a very large value. If this is a regular pattern, we can't really say the sequence is random.
The test computes the auto-correlation between every m numbers (m is also known as the lag), starting with the ith number. Thus, the autocorrelation ρim between the numbers Ri, Ri+m, Ri+2m, ..., Ri+(M+1)m would be of interest, where M is the largest integer such that i + (M + 1)m ≤ N and N is the total number of values in the sequence.
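As a sketch, the test statistic can be computed as follows, assuming the usual estimator ρ̂im = [1/(M + 1)] Σ_{k=0}^{M} R_{i+km} R_{i+(k+1)m} − 0.25 with standard deviation sqrt(13M + 7) / [12(M + 1)] (these formulas are not reproduced above; the function name is chosen here for illustration):

```python
import math

def autocorrelation_test(seq, i, m, z_crit=1.96):
    """Estimate rho_im for lag m starting at the ith number (1-indexed) and return Z0.

    Uses rho_hat = (1/(M+1)) * sum R_{i+km} * R_{i+(k+1)m} - 0.25 and
    sigma = sqrt(13M + 7) / (12(M + 1)); z_crit = 1.96 corresponds to alpha = 0.05.
    """
    n = len(seq)
    big_m = (n - i) // m - 1                  # largest M with i + (M + 1)m <= N
    rho_hat = sum(seq[i - 1 + k * m] * seq[i - 1 + (k + 1) * m]
                  for k in range(big_m + 1)) / (big_m + 1) - 0.25
    sigma = math.sqrt(13 * big_m + 7) / (12 * (big_m + 1))
    z0 = rho_hat / sigma
    return z0, abs(z0) <= z_crit              # True: do not reject independence
```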
EXAMPLE: Test whether the 3rd, 8th, 13th, and so on, numbers in the sequence at the beginning of this section are auto-correlated. (Use α = 0.05.) Here, i = 3 (beginning with the third number), m = 5 (every five numbers), N = 30 (30 numbers in the sequence), and M = 4 (the largest integer such that 3 + (M + 1)5 ≤ 30).
Solution:
2. Random Variate Generation Techniques:
• INVERSE TRANSFORMATION TECHNIQUE
• ACCEPTANCE-REJECTION TECHNIQUE
All these techniques assume that a source of uniform (0, 1) random numbers R1, R2, ... is available, where each Ri has pdf f_R(x) = 1 for 0 ≤ x ≤ 1 (and 0 otherwise) and cdf F_R(x) = x for 0 ≤ x ≤ 1.
Note: The random variable may be either discrete or continuous.
2.1 Inverse Transform Technique
The inverse transform technique can be used to sample from the exponential, the uniform, the Weibull, and the triangular distributions.
2.1.1 Exponential Distribution
The exponential distribution has probability density function (pdf) given by
f(x) = λe^(−λx), x ≥ 0
       0,        x < 0
and mean E(Xi) = 1/λ, so 1/λ is the mean interarrival time. The goal here is to develop a procedure for generating values X1, X2, X3, ... which have an exponential distribution.
The inverse transform technique can be utilized, at least in principle, for any distribution, but it is most useful when the cdf, F(x), is of such simple form that its inverse, F⁻¹, can be easily computed.
Step 1: Compute the cdf of the desired random variable X. For the exponential distribution, the cdf is
F(x) = 1 − e^(−λx), x ≥ 0
Step 2: Set F(X) = R on the range of X. For the exponential distribution, this becomes
1 − e^(−λX) = R
Since X is a random variable (with the exponential distribution in this case), 1 − e^(−λX) is also a random variable, here called R. As will be shown later, R has a uniform distribution over the interval (0, 1).
Step 3: Solve the equation F(X) = R for X in terms of R. For the exponential distribution, the solution proceeds as follows:
1 − e^(−λX) = R
e^(−λX) = 1 − R
−λX = ln(1 − R)
X = −(1/λ) ln(1 − R)   ... (5.1)
Equation (5.1) is called a random-variate generator for the exponential distribution. In general, Equation (5.1) is written as X = F⁻¹(R). Generating a sequence of values is accomplished through Step 4.
Step 4: Generate (as needed) uniform random numbers R1, R2, R3, ... and compute the desired random variates by
Xi = F⁻¹(Ri)
so that
Xi = −(1/λ) ln(1 − Ri),  i = 1, 2, 3, ...   ... (5.2)
One simplification that is usually employed in Equation (5.2) is to replace 1 − Ri by Ri, to yield
Xi = −(1/λ) ln Ri   ... (5.3)
which is justified because both Ri and 1 − Ri are uniformly distributed on (0, 1).
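A minimal sketch of Steps 1 through 4 in Python (the function name and the example rate λ = 1 are chosen here for illustration):

```python
import math
import random

def exponential_variate(lam, r=None):
    """Inverse transform for the exponential distribution: X = -(1/lambda) * ln(1 - R)."""
    if r is None:
        r = random.random()          # uniform (0, 1) random number
    return -math.log(1.0 - r) / lam  # Equation (5.2); using ln(R) as in (5.3) works equally well

# generate three exponential variates with rate lambda = 1
print([round(exponential_variate(lam=1.0), 4) for _ in range(3)])
```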
Solution: For the uniform distribution on [a, b], the cdf is
F(x) = 0,                x < a
       (x − a)/(b − a),  a ≤ x ≤ b
       1,                x > b
Setting F(X) = (X − a)/(b − a) = R and solving for X gives the random-variate generator X = a + (b − a)R.
The Weibull distribution has pdf
f(x) = (β/α^β) x^(β−1) e^(−(x/α)^β), x ≥ 0
where α > 0 and β > 0 are the scale and shape parameters.
The steps for the Weibull distribution are as follows:
Step 1: The cdf is F(x) = 1 − e^(−(x/α)^β), x ≥ 0.
Step 2: Set F(X) = 1 − e^(−(X/α)^β) = R.
Step 3: Solving for X in terms of R yields X = α[−ln(1 − R)]^(1/β).
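A minimal sketch of the resulting Weibull generator (function name and parameter values chosen here for illustration):

```python
import math
import random

def weibull_variate(alpha, beta, r=None):
    """Inverse transform for the Weibull distribution: X = alpha * (-ln(1 - R)) ** (1/beta)."""
    if r is None:
        r = random.random()
    return alpha * (-math.log(1.0 - r)) ** (1.0 / beta)

print(round(weibull_variate(alpha=2.0, beta=1.5, r=0.73), 4))  # a single variate for R = 0.73
```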
2.2 Acceptance-Rejection Technique
This technique is particularly useful when the inverse cdf does not exist in closed form.
Illustration: To generate random variates X ~ U(1/4, 1).
Procedure:
Step 1: Generate a random number R ~ U(0, 1).
Step 2a: If R ≥ 1/4, accept X = R.
Step 2b: If R < 1/4, reject R and return to Step 1.
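A compact sketch of this acceptance-rejection procedure (function name chosen here for illustration):

```python
import random

def uniform_quarter_to_one():
    """Acceptance-rejection for X ~ U(1/4, 1): keep generating R ~ U(0,1) until R >= 1/4."""
    while True:
        r = random.random()   # Step 1
        if r >= 0.25:         # Step 2a: accept
            return r
        # Step 2b: reject and try again

print([round(uniform_quarter_to_one(), 3) for _ in range(5)])
```

Since each R is accepted with probability 3/4, on average 4/3 random numbers are needed per accepted variate.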
2.2.1 Poisson Distribution
A Poisson random variable, N, with mean α > 0 has pmf
p(n) = P(N = n) = e^(−α) α^n / n!,  n = 0, 1, 2, ...
N can be interpreted as the number of arrivals from a Poisson arrival process during one unit of time.
• The times between arrivals in the process are exponentially distributed with rate α.
• Thus there is a relationship between the (discrete) Poisson distribution and the (continuous) exponential distribution, namely
N = n if and only if A1 + A2 + ... + An ≤ 1 < A1 + A2 + ... + An + An+1
where the Ai are the exponentially distributed interarrival times.
The procedure for generating a Poisson random variate, N, is given by the following steps:
Step 1: Set n = 0 and P = 1.
Step 2: Generate a random number Rn+1 and replace P by P · Rn+1.
Step 3: If P < e^(−α), accept N = n. Otherwise, reject the current n, increase n by one, and return to Step 2.
Example: Generate three Poisson variates with mean α = 0.2, given the random numbers
0.4357, 0.4146, 0.8353, 0.9952, 0.8004
Solution: First compute e^(−α) = e^(−0.2) = 0.8187.
Step 1: Set n = 0, P = 1.
Step 2: R1 = 0.4357, so P = 1 · R1 = 0.4357.
Step 3: Since P = 0.4357 < e^(−α) = 0.8187, accept N = 0. Repeat the above procedure for the next variate.
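Continuing in the same way with the remaining random numbers: R2 = 0.4146 < 0.8187, so the second variate is also N = 0. For the third variate, P = 0.8353 ≥ 0.8187 (reject, n = 1), then P = 0.8353 × 0.9952 = 0.8313 ≥ 0.8187 (reject, n = 2), then P = 0.8313 × 0.8004 = 0.6654 < 0.8187, so N = 2 is accepted. The three Poisson variates are therefore 0, 0, and 2, using exactly the five random numbers listed.
A short sketch of the same procedure in Python (the function name is chosen here for illustration):

```python
import math
import random

def poisson_variate(alpha, rng=random.random):
    """Acceptance-rejection (product-of-uniforms) generator for a Poisson(alpha) variate."""
    n, p = 0, 1.0                 # Step 1
    threshold = math.exp(-alpha)
    while True:
        p *= rng()                # Step 2: multiply in the next uniform random number
        if p < threshold:         # Step 3: accept N = n
            return n
        n += 1                    # otherwise reject, increase n, and repeat Step 2

print([poisson_variate(0.2) for _ in range(3)])
```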
Gamma distribution: