Assignment 1
Lab Report
Course Title: Computer Simulation & Modeling Lab
Course Code: CSE 404

Submitted By:
Name: Md Shohanur Rahman
ID: 19CSE046
Year: 4th
Semester: 1st
Session: 2019-2020

Submitted To:
Dr. Syful Islam
Assistant Professor, Dept. of CSE, BSMRSTU

Date: 22-12-2024
Experiment Title-1: Learmonth-Lewis Generator for Pseudo-Random Number Generation.
Code:
import numpy as np

a = 75           # Multiplier
c = 0            # Increment
m = 2**31 - 1    # Modulus, a large prime number (2^31 - 1)
x = 0.1          # Initial seed value
for i in range(1, 100):
    x = np.mod((a * x + c), m)   # LCG recurrence
    print(x / m)                 # normalize to [0, 1)
Input/Output:
3.4924596564343477e-09
2.619344742325761e-07
1.9645085567443206e-05
0.0014733814175582405
0.11050360631686804
0.28777047376510245
0.5827855323826827
0.708914928701201
0.1686196525900716
0.6464739442553715
0.48554581915286643
0.4159364364649806
… (output continues through the 99th value)
Discussion:
Initialization:
The constants a, c, and m are defined, along with the initial seed value x = 0.1.
Loop:
The recurrence x = (a·x + c) mod m is applied 99 times; each result is divided by m so the printed values fall in the interval [0, 1).
Experiment Title-2: Linear Congruential Generator (LCG) with a Small Modulus.
Code:
import numpy as np

a = 2   # Multiplier
c = 4   # Increment
m = 5   # Modulus
x = 3   # Initial seed value
for i in range(1, 10):
    x = np.mod((a * x + c), m)
    print(x)
Input/Output:
0
4
2
3
0
4
2
3
0
Discussion:
Initialization:
The constants a = 2, c = 4, and m = 5 are defined, along with the initial seed value x = 3.
LCG Formula:
The LCG generates the next value in the sequence using the formula: x_{n+1} = (a · x_n + c) mod m
Here, x_n is the current value of x, and x_{n+1} is the next value to be generated.
np.mod computes the remainder after division by m (i.e., the modulus operation).
Loop:
The recurrence is applied nine times, each iteration replacing x with (a·x + c) mod m and printing the result.
Output:
Since m = 5, every generated value lies in {0, 1, 2, 3, 4}; with such a small modulus the generator immediately falls into the short repeating cycle 0, 4, 2, 3, which illustrates why practical generators use a large modulus.
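The recurrence can also be wrapped in a small reusable helper; a minimal sketch (the function lcg and its parameters are illustrative, not part of the original listing):

def lcg(a, c, m, seed, count):
    # yield `count` values of the recurrence x_{n+1} = (a*x_n + c) mod m
    x = seed
    for _ in range(count):
        x = (a * x + c) % m
        yield x

print(list(lcg(2, 4, 5, 3, 9)))   # [0, 4, 2, 3, 0, 4, 2, 3, 0]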
Experiment Title-3: Random Number Generation Using Python's random Module.
Code:
import random

# Unseeded floating-point values in [0.0, 1.0)
for i in range(20):
    print('%05.4f' % random.random(), end=' ')
print()

# Seeding the generator makes the sequence reproducible
random.seed(1)
for i in range(20):
    print('%05.4f' % random.random(), end=' ')
print()

# Uniform floats in [1, 100]
for i in range(20):
    print('%6.4f' % random.uniform(1, 100), end=' ')
print()

# Integers in [-100, 100]
for i in range(20):
    print(random.randint(-100, 100), end=' ')
print()

# Multiples of 5 in [0, 100)
for i in range(20):
    print(random.randrange(0, 100, 5), end=' ')
print()

# Sampling 5 distinct items from a sequence
DataList = range(10, 100, 10)
print("Initial Data List = ", DataList)
DataSample = random.sample(DataList, k=5)
print("Sample Data List = ", DataSample)
Input/Output:
Discussion:
This experiment showcases various techniques to generate and manipulate random data using Python's random
module. The experiment includes generating random floating-point and integer values, selecting random items from a
list, and creating random samples. Seeding the random number generator with random.seed() ensures that the
output is reproducible, which is useful for testing and debugging. The experiment effectively illustrates how to harness
the power of the random module for different scenarios where randomness is required.
Experiment-4: Chi-Square Goodness-of-Fit Test for Uniform Distribution Using Linear Congruential
Generator (LCG)
Code:
import numpy as np
import matplotlib.pyplot as plt

a = 75
c = 0
m = 2**31 - 1
x = 0.1
N = 100   # number of pseudo-random values

u = np.array([])
for i in range(0, N):
    x = np.mod((a * x + c), m)
    u = np.append(u, x / m)
    print(u[i])

s = 20         # number of subintervals
Ns = N / s     # expected count per subinterval
S = np.arange(0, 1, 0.05)
counts = np.empty(S.shape, dtype=int)
V = 0
for i in range(0, s):
    counts[i] = len(np.where((u >= S[i]) & (u < S[i] + 0.05))[0])
    V = V + (counts[i] - Ns)**2 / Ns   # chi-square statistic
print("R = ", counts)
print("V = ", V)

Ypos = np.arange(len(S))   # bar positions (this definition was missing from the original listing)
plt.bar(Ypos, counts)
plt.show()
Input/Output:
Discussion:
This experiment uses a Linear Congruential Generator (LCG) to generate pseudo-random numbers. The
uniformity of these numbers over the interval [0, 1) is tested using the Chi-Square Goodness-of-Fit Test.
1. LCG: Generates a sequence of numbers based on a linear formula, normalized to fall within [0, 1).
2. Uniformity Test: The interval [0, 1) is divided into equal subintervals. The Chi-Square test compares
the observed count of numbers in each subinterval against the expected count if the distribution were
uniform.
3. Results:
Chi-Square Statistic: Indicates how close the distribution is to uniform. A lower value
suggests a more uniform distribution.
Bar Chart: Visually shows the distribution of numbers across the subintervals.
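To turn the statistic V into a pass/fail decision, it can be compared against the chi-square critical value for s - 1 = 19 degrees of freedom. A minimal sketch using SciPy (SciPy is not used in the original listing):

from scipy.stats import chi2

alpha = 0.05
critical = chi2.ppf(1 - alpha, df=19)   # approximately 30.14
print("Critical value:", critical)
# If V < critical, the hypothesis of uniformity is not rejected at the 5% level.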
Experiment Title-5: Simulation of a Binomial Distribution.
Code:
import numpy as np
import matplotlib.pyplot as plt
N = 1000
n = 10
p = 0.5
P1 = np.random.binomial(n,p,N)
plt.figure()
plt.hist(P1, density=True, alpha=0.8, histtype='bar', color = 'green', ec='black')
plt.show()
Input/Output:
Discussion:
This experiment simulates and visualizes a binomial distribution using Python's NumPy library. The
binomial distribution is a discrete probability distribution that describes the number of successes in a fixed
number of independent Bernoulli trials, each with the same probability of success.
Steps Involved:
1. Draw N = 1,000 samples from a Binomial(n = 10, p = 0.5) distribution using np.random.binomial.
2. Plot a density-normalized histogram of the results; the bars peak near the expected value n·p = 5, as the check below confirms.
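As a quick sanity check (not part of the original listing), the normalized histogram should approach the binomial PMF P(X = k) = C(n, k) · p^k · (1 - p)^(n - k):

from math import comb

n, p = 10, 0.5
for k in range(n + 1):
    pmf = comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, round(pmf, 4))   # peaks at k = 5 with probability about 0.2461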
Experiment Title-6: Comparison of Normal Distributions with Different Means and Standard Deviations.
Code:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# Three distributions with the same spread but different means
mu = 10
sigma = 2
P1 = np.random.normal(mu, sigma, 1000)
mu = 5
sigma = 2
P2 = np.random.normal(mu, sigma, 1000)
mu = 15
sigma = 2
P3 = np.random.normal(mu, sigma, 1000)
plt.figure()
Plot1 = sns.distplot(P1)   # note: distplot is deprecated in newer seaborn; histplot(..., kde=True) is the modern equivalent
Plot2 = sns.distplot(P2)
Plot3 = sns.distplot(P3)
plt.show()

# Three distributions with the same mean but different spreads
mu = 10
sigma = 2
P4 = np.random.normal(mu, sigma, 1000)
mu = 10
sigma = 1
P5 = np.random.normal(mu, sigma, 1000)   # P5/P6 assignments restored; they were missing from the original listing
mu = 10
sigma = 0.5
P6 = np.random.normal(mu, sigma, 1000)
plt.figure()
Plot4 = sns.distplot(P4)
Plot5 = sns.distplot(P5)
Plot6 = sns.distplot(P6)
plt.show()
Input/Output:
Discussion:
This experiment generates and compares multiple normal distributions with different means and standard
deviations. The purpose is to observe how changes in the mean and standard deviation affect the shape and
position of the distributions.
Steps Involved:
The plots are created using Seaborn's distplot function, which shows the probability density
function (PDF) of the data, giving a smooth curve that represents the distribution.
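A quick numerical check (illustrative, not part of the original listing) confirms that the sample statistics track the parameters passed to np.random.normal:

import numpy as np

P1 = np.random.normal(10, 2, 1000)
print(np.mean(P1))   # close to mu = 10
print(np.std(P1))    # close to sigma = 2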
Experiment Title-7: Simulation of a Uniform Distribution.
Code:
import numpy as np
import matplotlib.pyplot as plt

a = 1
b = 100
N = 10000
X2 = np.random.uniform(a, b, N)
plt.figure()
plt.plot(X2)
plt.show()
plt.figure()
plt.hist(X2, density=True, histtype='stepfilled', alpha=0.2)
plt.show()
Input/Output:
Discussion:
This experiment demonstrates the characteristics of a uniform distribution by generating random samples
from it with two different sample sizes, and then visualizing the results through line plots and histograms.
Steps Involved:
1. Draw N samples from Uniform(a, b) using np.random.uniform.
2. Plot the raw sequence: the values scatter evenly between a and b with no visible trend.
3. Plot a density histogram: for a uniform distribution it should be approximately flat, and it flattens further as the sample size grows.
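For Uniform(a, b), the theoretical mean is (a + b) / 2 and the variance is (b - a)^2 / 12; a small check (illustrative, not part of the original listing):

import numpy as np

a, b, N = 1, 100, 10000
X2 = np.random.uniform(a, b, N)
print(np.mean(X2), (a + b) / 2)      # both near 50.5
print(np.var(X2), (b - a)**2 / 12)   # both near 816.75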
Experiment Title-8: Sampling Distribution of the Sample Mean.
Code:
import random
import numpy as np
import matplotlib.pyplot as plt

DataPop = list(np.random.uniform(0, 100, 10000))  # population assumed uniform; this definition was missing from the listing
SamplesMeans = []
for i in range(0, 1000):
    DataExtracted = random.sample(DataPop, k=100)
    DataExtractedMean = np.mean(DataExtracted)
    SamplesMeans.append(DataExtractedMean)
plt.figure()
plt.hist(SamplesMeans, density=True, histtype='stepfilled', alpha=0.2)
plt.show()
Input/Output:
Discussion:
This experiment explores the concept of sampling distribution by generating a large uniform
population, drawing random samples from it, and analyzing the distribution of sample means.
Steps Involved:
1. Generate a population of 10,000 uniformly distributed values.
2. Draw 1,000 random samples of size 100 and record each sample's mean.
3. Plot the histogram of the sample means; by the central limit theorem it is approximately normal and centered on the population mean, as the check below illustrates.
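By the central limit theorem, the standard deviation of the sample means should be close to the population standard deviation divided by the square root of the sample size; a quick check (illustrative, not part of the original listing):

import numpy as np

DataPop = np.random.uniform(0, 100, 10000)
SampleMeans = [np.mean(np.random.choice(DataPop, 100)) for _ in range(1000)]
print(np.std(SampleMeans))              # empirical spread of the sample means
print(np.std(DataPop) / np.sqrt(100))   # CLT prediction, about 2.89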
Experiment Title-9: Numerical Integration Using the Monte Carlo Method.
Code:
import random
import numpy as np
import matplotlib.pyplot as plt
random.seed(2)
f = lambda x: x**2
a = 0.0
b = 3.0
NumSteps = 1000000
XIntegral = []
YIntegral = []
XRectangle = []
YRectangle = []

# Scan the interval to locate the bounding rectangle of f; this scan is a
# reconstruction, since ymin and ymax were otherwise undefined in the listing
ymin = f(a)
ymax = ymin
for i in range(NumSteps):
    x = a + (b - a) * float(i) / NumSteps
    y = f(x)
    if y < ymin: ymin = y
    if y > ymax: ymax = y

A = (b - a) * (ymax - ymin)   # area of the bounding rectangle
N = 1000000
M = 0
for k in range(N):
    x = a + (b - a) * random.random()
    y = ymin + (ymax - ymin) * random.random()
    if y <= f(x):               # point falls under the curve
        M += 1
        XIntegral.append(x)
        YIntegral.append(y)
    else:
        XRectangle.append(x)
        YRectangle.append(y)
NumericalIntegral = M / N * A   # fraction of hits times rectangle area
print ("Numerical integration = " + str(NumericalIntegral))
XLin=np.linspace(a,b)
YLin=[]
for x in XLin:
YLin.append(f(x))
plt.axis ([0, b, 0, f(b)])
plt.plot (XLin,YLin, color="red" , linewidth="4")
plt.scatter(XIntegral, YIntegral, color="blue", marker =".")
plt.scatter(XRectangle, YRectangle, color="yellow", marker =".")
plt.title ("Numerical Integration using Monte Carlo method")
plt.show()
Input/Output:
Discussion:
This experiment demonstrates numerical integration using the Monte Carlo method to estimate the integral of the function f(x) = x^2 over the interval [a, b] = [0, 3]. The Monte Carlo method uses random sampling to estimate the area under the curve of a function.
Steps Involved:
1. Enclose the curve in a bounding rectangle of area A.
2. Scatter N random points uniformly inside the rectangle and count the number M that fall under the curve.
3. Estimate the integral as (M / N) · A, and plot accepted (blue) and rejected (yellow) points; a cross-check follows below.
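The estimate can be verified analytically, since the integral of x^2 from 0 to 3 is 27/3 = 9; a tiny cross-check (illustrative, SciPy is not used in the original listing):

from scipy import integrate

exact, _ = integrate.quad(lambda x: x**2, 0.0, 3.0)
print(exact)   # 9.0; the Monte Carlo estimate should land close to this value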
Experiment Title-10: Monte Carlo Method for Pi Estimation.
Code:
import math
import random
import numpy as np
import matplotlib.pyplot as plt

N = 10000
M = 0
XCircle = []
YCircle = []
XSquare = []
YSquare = []
for p in range(N):
x=random.random()
y=random.random()
if(x**2+y**2 <= 1):
M+=1
XCircle.append(x)
YCircle.append(y)
else:
XSquare.append(x)
YSquare.append(y)
Pi = 4*M/N
XLin=np.linspace(0,1)
YLin=[]
for x in XLin:
YLin.append(math.sqrt(1-x**2))
plt.axis ("equal")
plt.grid (which="major")
plt.plot (XLin , YLin, color="red" , linewidth="4")
plt.scatter(XCircle, YCircle, color="yellow", marker =".")
plt.scatter(XSquare, YSquare, color="blue" , marker =".")
plt.title ("Monte Carlo method for Pi estimation")
plt.show()
Input/Output:
Discussion:
This experiment uses the Monte Carlo method to estimate the value of π by simulating random points
within a unit square and determining how many fall inside a quarter circle inscribed within that
square.
Steps Involved:
1. Setup:
o Generate Random Points:
N = 10,000 random points are generated in the unit square [0, 1] × [0, 1].
2. Check Points Inside Circle:
o Circle Condition:
Points are checked to see if they fall inside the quarter circle of radius 1, centered at the origin. The condition is x^2 + y^2 ≤ 1.
o Count Points:
Points inside the circle are counted (M), and their coordinates are stored separately from
points outside the circle.
3. Estimate π:
o Calculation:
The estimate of π is calculated from the ratio of points inside the circle to the total number of points, scaled by 4: π ≈ 4 × M / N
4. Visualization:
o Plot:
The quarter circle is plotted in red.
Points inside the circle are shown in yellow, while points outside are shown in blue.
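The accuracy of the estimate improves slowly, since the standard error of a Monte Carlo estimate shrinks like 1/sqrt(N); a small illustration (the helper estimate_pi is hypothetical, not part of the original listing):

import random

def estimate_pi(n):
    hits = sum(random.random()**2 + random.random()**2 <= 1 for _ in range(n))
    return 4 * hits / n

for n in (100, 10000, 1000000):
    print(n, estimate_pi(n))   # estimates tighten around 3.1416 as n grows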
Experiment Title-11: Simulation of a One-Dimensional Random Walk.
Code:
from random import seed, random
from matplotlib import pyplot

seed(1)
RWPath = list()
RWPath.append(-1 if random() < 0.5 else 1)   # first step: -1 or 1
for i in range(1, 1000):                     # remaining 999 steps (loop reconstructed from the Steps below)
    RWPath.append(RWPath[i-1] + (-1 if random() < 0.5 else 1))
pyplot.plot(RWPath)
pyplot.show()
Input/Output:
Discussion:
This experiment simulates a one-dimensional random walk using a simple algorithm where each step is
determined by a random choice between moving forward or backward.
Steps Involved:
1. Initialization:
Seed Random Generator: Ensures reproducibility by setting the random seed.
Start Path: Initialize the random walk path with the first step being either -1 or 1, chosen randomly.
2. Simulating the Random Walk:
Generate Steps:
For each subsequent step (total of 999 steps):
Step Direction: A random choice is made to move either -1 or 1.
Update Position: The new position is computed by adding this step to the
previous position.
Record Position: The updated position is appended to the path.
3. Plotting the Path:
Visualization:
The path of the random walk is plotted, showing how the position changes over time.
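The same walk can be expressed in a few lines with NumPy's cumulative sum, a common idiom for random walks (an alternative sketch, not the original approach):

import numpy as np
import matplotlib.pyplot as plt

steps = np.random.choice([-1, 1], size=1000)   # 1,000 random steps of -1 or +1
path = np.cumsum(steps)                        # running position of the walker
plt.plot(path)
plt.show()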
Experiment Title-12: Weather Forecasting Using a Markov Chain
Code:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(3)
StatesData = ["Sunny","Rainy"]
TransitionStates = [["SuSu","SuRa"],["RaRa","RaSu"]]
TransitionMatrix = [[0.80, 0.20], [0.75, 0.25]]   # each row ordered [stay, switch] to match TransitionStates
WeatherForecasting = list()
NumDays = 365
TodayPrediction = StatesData[0]
for i in range(1, NumDays):   # daily loop and Rainy branch restored; the original listing showed only the Sunny branch
    if TodayPrediction == "Sunny":
        TransCondition = np.random.choice(TransitionStates[0], replace=True, p=TransitionMatrix[0])
        if TransCondition == "SuSu":
            pass
        else:
            TodayPrediction = "Rainy"
    elif TodayPrediction == "Rainy":
        TransCondition = np.random.choice(TransitionStates[1], replace=True, p=TransitionMatrix[1])
        if TransCondition == "RaRa":
            pass
        else:
            TodayPrediction = "Sunny"
    WeatherForecasting.append(TodayPrediction)
    print(TodayPrediction)
plt.plot(WeatherForecasting)
plt.show()
plt.figure()
plt.hist(WeatherForecasting)
plt.show()
Input/Output:
Discussion:
This experiment simulates weather forecasting using a Markov Chain model. The model predicts weather
states (Sunny or Rainy) for each day based on transition probabilities defined by a transition matrix.
Steps Involved:
1. Setup:
o States and Transition Matrix:
StatesData: The possible weather states are "Sunny" and "Rainy".
TransitionStates: Defines possible transitions (e.g., "SuSu" for Sunny to Sunny, "SuRa" for
Sunny to Rainy).
TransitionMatrix: Probability matrix specifying the likelihood of transitions between states: [[0.80, 0.20], [0.75, 0.25]], with each row ordered [stay, switch] to match TransitionStates, where:
0.80 is the probability of remaining Sunny given it was Sunny, and 0.20 is the probability of changing to Rainy.
0.75 is the probability of remaining Rainy given it was Rainy, and 0.25 is the probability of changing to Sunny.
2. Forecasting Process:
o Initial State: The weather starts as "Sunny".
o Simulation Loop:
For each day, determine the next state based on the current state and the transition
probabilities.
Update Prediction:
Use np.random.choice to randomly select the next state based on the transition
probabilities.
Update the state for the next iteration.
3. Recording and Plotting Results:
o Weather Forecasting:
Append the forecasted weather for each day to the WeatherForecasting list.
o Plots:
Line Plot: Shows the sequence of weather states over the 365 days.
Histogram: Displays the frequency of each weather state over the year.
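Over a long run, the share of sunny days should approach the chain's stationary distribution, which can be checked by repeatedly applying the transition matrix (an illustrative sketch; rows here are ordered Sunny, Rainy in the conventional row-stochastic form):

import numpy as np

P = np.array([[0.80, 0.20],
              [0.25, 0.75]])    # row i gives P(next state | current state i)
dist = np.array([1.0, 0.0])     # day 1: Sunny
for _ in range(200):            # iterate until the distribution stabilizes
    dist = dist @ P
print(dist)                     # approximately [0.556, 0.444]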
Experiment Title-13: Bootstrap Estimation of the Population Mean.
Code:
import random
import numpy as np
import matplotlib.pyplot as plt
PopData = list()
random.seed(7)
for i in range(1000):
DataElem = 50 * random.random()
PopData.append(DataElem)
PopSampleMean = list()
for i in range(10000):
SampleI = random.choices(PopData, k=100)
PopSampleMean.append(np.mean(SampleI))
plt.hist(PopSampleMean)
plt.show()
MeanPopSampleMean = np.mean(PopSampleMean)
print("The mean of the Bootstrap estimator is ",MeanPopSampleMean)
MeanPopData = np.mean(PopData)
print("The mean of the population is ",MeanPopData)
PopSample = random.choices(PopData, k=100)   # one simple random sample; this definition was missing from the listing
MeanPopSample = np.mean(PopSample)
print("The mean of the simple random sample is ", MeanPopSample)
Input/Output:
Discussion:
Step-by-Step Breakdown:
1. Population Data Creation:
o Generated 1,000 random values (0-50) as the population.
o True population mean: 24.97.
2. Bootstrap Sampling:
o Drew 10,000 bootstrap resamples (each of size 100) from the population.
o Calculated the mean of each resample.
3. Comparison:
o The simple random sample mean (25.35) deviates more from the true mean than the bootstrap mean (24.95), showing the bootstrap estimator's robustness.
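A natural extension is a bootstrap confidence interval taken from the percentiles of the resampled means; a minimal sketch (illustrative, not part of the original experiment):

import numpy as np

# reuses PopSampleMean, the list of 10,000 bootstrap sample means computed above
means = np.array(PopSampleMean)
lo, hi = np.percentile(means, [2.5, 97.5])
print("95% bootstrap CI for the population mean: [%.2f, %.2f]" % (lo, hi))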
Experiment Title-14: Gradient Descent Method for Finding the Minimum of a Quadratic Function.
Code:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1,3,100)
y=x**2-2*x+1
fig = plt.figure()
axdef = fig.add_subplot(1, 1, 1)
axdef.spines['left'].set_position('center')
axdef.spines['bottom'].set_position('zero')
axdef.spines['right'].set_color('none')
axdef.spines['top'].set_color('none')
axdef.xaxis.set_ticks_position('bottom')
axdef.yaxis.set_ticks_position('left')
plt.plot(x,y, 'r')
plt.show()
ActualX = 3                  # starting point
LearningRate = 0.01
PrecisionValue = 0.000001
PreviousStepSize = 1
MaxIteration = 10000
IterationCounter = 0
GradientFunction = lambda x: 2*x - 2   # f'(x) for f(x) = x^2 - 2x + 1

# Descent loop reconstructed from the Discussion below
while PreviousStepSize > PrecisionValue and IterationCounter < MaxIteration:
    PreviousX = ActualX
    ActualX = ActualX - LearningRate * GradientFunction(PreviousX)   # step downhill
    PreviousStepSize = abs(ActualX - PreviousX)
    IterationCounter += 1
print("Number of iterations = ", IterationCounter)
print("X value of f(x) minimum = ", ActualX)
Input/Output:
Discussion:
1. Graph of the Function:
The function f(x) = x^2 - 2x + 1 = (x - 1)^2 is a parabola whose minimum at x = 1 is visible in the plot.
2. Stopping Criteria:
The loop stops when the step size falls below the precision (0.000001) or when the maximum of 10,000 iterations is reached.
3. Convergence: The algorithm converges to x = 1.0, which matches the analytical solution for the minimum of the function.
Number of Iterations: It takes 246 iterations to reach the desired precision, indicating efficient convergence.
Effectiveness: The gradient descent method successfully finds the minimum by leveraging the slope of the function.
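For this quadratic the update rule has a closed form: the gap to the minimum shrinks by the constant factor (1 - 2·LearningRate) each step, so convergence is geometric. A short check (illustrative, not part of the original listing):

lr = 0.01
x = 3.0
for k in range(5):
    x = x - lr * (2*x - 2)   # gradient descent step on f(x) = x^2 - 2x + 1
    print(k + 1, x, x - 1)   # the gap x - 1 shrinks by a factor of 0.98 each step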
Experiment Title-15: Newton-Raphson Method for Finding the Minimum of a Cubic Function.
Code:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,3,100)
y=x**3 -2*x**2 -x + 2
fig = plt.figure()
axdef = fig.add_subplot(1, 1, 1)
axdef.spines['left'].set_position('center')
axdef.spines['bottom'].set_position('zero')
axdef.spines['right'].set_color('none')
axdef.spines['top'].set_color('none')
axdef.xaxis.set_ticks_position('bottom')
axdef.yaxis.set_ticks_position('left')
plt.plot(x,y, 'r')
plt.show()
ActualX = 3                  # starting point
PrecisionValue = 0.000001
PreviousStepSize = 1
MaxIteration = 10000
IterationCounter = 0
FirstDerivative = lambda x: 3*x**2 - 4*x - 1   # f'(x)
SecondDerivative = lambda x: 6*x - 4           # f''(x)

# Iteration loop reconstructed from the update rule in the Discussion below
while PreviousStepSize > PrecisionValue and IterationCounter < MaxIteration:
    PreviousX = ActualX
    ActualX = ActualX - FirstDerivative(PreviousX) / SecondDerivative(PreviousX)   # Newton step on f'
    PreviousStepSize = abs(ActualX - PreviousX)
    IterationCounter += 1
print("Number of iterations = ", IterationCounter)
print("X value of f(x) minimum = ", ActualX)
Input/Output:
Discussion:
1. Graph and Function Analysis:
The function f(x) = x^3 - 2x^2 - x + 2 is a cubic polynomial, and its graph shows one local minimum and one local maximum.
The local minimum is located at approximately x ≈ 1.55 (the positive root of f'(x) = 0), as confirmed by the Newton-Raphson method.
2. Newton-Raphson Method:
Goal: The Newton-Raphson method finds the stationary points of f(x) by solving f'(x) = 0.
Steps:
o First Derivative: f'(x) = 3x^2 - 4x - 1.
o Second Derivative: f''(x) = 6x - 4.
o Starting from x = 3, the method iteratively updates x using: x_new = x_old - f'(x_old) / f''(x_old)
3. Stopping Criteria:
The algorithm stops when the step size |x_new - x_old| is less than the precision (0.000001) or when the maximum number of iterations (10,000) is reached.
4. Convergence:
Newton's method converges quadratically near a root; starting from x = 3 it settles on the minimum at x ≈ 1.55 within a handful of iterations, far fewer than gradient descent required.
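The stationary points can be verified independently by finding the roots of f'(x) = 3x^2 - 4x - 1 (an illustrative cross-check, not part of the original listing):

import numpy as np

roots = np.roots([3, -4, -1])   # roots of the first derivative
print(roots)                    # approximately [1.549, -0.215]
# f''(1.549) > 0, so x ≈ 1.549 is the local minimum;
# f''(-0.215) < 0, so x ≈ -0.215 is the local maximum.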
Experiment Title-16: Simulation of Standard Brownian Motion.
Code:
import numpy as np
import matplotlib.pyplot as plt

n = 1000  # number of steps
# Brownian path as the cumulative sum of scaled Gaussian increments (completion of the truncated listing)
SBMotion = np.cumsum(np.random.randn(n) / np.sqrt(n))
plt.plot(SBMotion)
plt.show()
Input/Output:
Discussion:
1. Brownian Motion Definition:
Brownian motion is a continuous-time stochastic process that starts at zero and has independent, normally distributed increments; its variance grows linearly with time.
2. Simulation Approach:
The path is approximated by accumulating n = 1,000 Gaussian steps, each scaled by 1/sqrt(n), so that the simulated process has unit variance at the final step.
3. Interpretation of Results:
The plotted path wanders irregularly around zero with no deterministic trend, and rerunning with a different seed produces an entirely different trajectory.
4. Applications:
Brownian motion underlies models of physical diffusion, particle movement, and asset-price dynamics (e.g., geometric Brownian motion in financial simulation).
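Several independent paths can be drawn at once to show how the spread of the process grows over time (an illustrative extension of the experiment):

import numpy as np
import matplotlib.pyplot as plt

n, paths = 1000, 5
for _ in range(paths):
    plt.plot(np.cumsum(np.random.randn(n) / np.sqrt(n)))   # one Brownian path per curve
plt.title("Five independent Brownian paths")
plt.show()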
Experiment Title-17: Monte Carlo Simulation of Project Completion Time Using Triangular Distributions.
Code:
import pandas as pd
import random
import numpy as np
N = 10000
TotalTime=[]
T = np.empty(shape=(N,6))
TaskTimes=[[3,5,8],
[2,4,7],
[3,5,9],
[4,6,10],
[3,5,9],
[2,6,8]]
Lh=[]
for i in range(6):
Lh.append((TaskTimes[i][1]-TaskTimes[i][0])/(TaskTimes[i][2]-TaskTimes[i][0]))
for p in range(N):
    for i in range(6):
        trand = random.random()
        if trand < Lh[i]:   # inverse-CDF sampling from the triangular distribution
            T[p][i] = TaskTimes[i][0] + np.sqrt(trand * (TaskTimes[i][1] - TaskTimes[i][0]) * (TaskTimes[i][2] - TaskTimes[i][0]))
        else:
            T[p][i] = TaskTimes[i][2] - np.sqrt((1 - trand) * (TaskTimes[i][2] - TaskTimes[i][1]) * (TaskTimes[i][2] - TaskTimes[i][0]))
    TotalTime.append(T[p][0] + np.maximum(T[p][1], T[p][2]) + np.maximum(T[p][3], T[p][4]) + T[p][5])

# DataFrame of the sampled task times (construction restored; Data was otherwise undefined; column names assumed)
Data = pd.DataFrame(T, columns=['Task1', 'Task2', 'Task3', 'Task4', 'Task5', 'Task6'])
pd.set_option('display.max_columns', None)
print(Data.describe())
hist = Data.hist(bins=10)
Input/Output:
Discussion:
1. Objective:
Simulate project completion times using task durations drawn from triangular distributions for six tasks.
The total project completion time is calculated as: Total = T1 + max(T2, T3) + max(T4, T5) + T6, i.e., tasks 2 and 3 (and likewise 4 and 5) run in parallel, so only the longer of each pair contributes.
2. Triangular Distribution:
Each task duration is sampled from a triangular distribution defined by its minimum, most likely, and maximum times: TaskTimes[i] = [a, m, b], where a is the minimum, m is the most likely, and b is the maximum.
A uniform random value trand is generated and inverted through the triangular CDF: if trand < (m - a)/(b - a), the sample is a + sqrt(trand · (m - a) · (b - a)); otherwise it is b - sqrt((1 - trand) · (b - m) · (b - a)).
3. Simulation:
For each of the N = 10,000 iterations, all six task durations are sampled and the total project time is computed from the project logic above.
A large number of simulations ensures an accurate statistical representation of the possible outcomes.
4. Statistical Insights:
The minimum project time represents the fastest scenario where tasks finish with their minimum durations.
The mean project time reflects the average outcome across all simulations, accounting for variability in task
durations.
The maximum project time reflects the slowest scenario.
5. Visualizations:
Histograms of individual task times validate the triangular distribution of each task.
A distribution plot of total project times (optional) can illustrate the variability in project completion time.
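NumPy also ships a built-in triangular sampler that can cross-check the hand-rolled inverse-CDF sampling above; an illustrative comparison for the first task (a = 3, m = 5, b = 8):

import numpy as np

samples = np.random.triangular(3, 5, 8, size=10000)   # built-in triangular sampler
print(np.mean(samples), (3 + 5 + 8) / 3)              # both near the theoretical mean of about 5.33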