
Bangabandhu Sheikh Mujibur Rahman Science and
Technology University, Gopalganj-8100

Lab Report
Course Title: Computer Simulation & Modeling Lab
Course Code: CSE 404

Submitted By:
Name: Md Shohanur Rahman
ID: 19CSE046
Year: 4th
Semester: 1st
Session: 2019-2020

Submitted To:
Dr. Syful Islam
Assistant Professor, Dept. of CSE, BSMRSTU

Date: 22-12-2024
Experiment Title-1: Learmonth–Lewis Generator for Pseudo-Random Number Generation.

Code:

import numpy as np

a = 75           # Multiplier
c = 0            # Increment
m = 2**31 - 1    # Modulus, a large prime number
x = 0.1          # Initial seed value

for i in range(1, 100):
    x = np.mod((a * x + c), m)  # Apply the LCG formula
    u = x / m                   # Normalize to get a uniform random number in [0, 1)
    print(u)                    # Output the uniform random number

Input/Output:
3.4924596564343477e-09
2.619344742325761e-07
1.9645085567443206e-05
0.0014733814175582405
0.11050360631686804
0.28777047376510245
0.5827855323826827
0.708914928701201
0.1686196525900716
0.6464739442553715
0.48554581915286643
0.4159364364649806
... (sequence continues up to the 99th value)

Discussion:

 Initialization:
o The constants a, c, and m are defined, along with the initial seed value x = 0.1.

 Loop:
o The loop runs 99 times (from i = 1 to i = 99), generating a sequence of 99 pseudo-random numbers.
o In each iteration, the LCG formula is applied to generate the next value of x.
o The result x is then divided by m to produce a random number u in the range [0, 1), which is printed to the console.
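A reusable version of this generator is sketched below; the refactoring is not part of the submitted code, and the function name learmonth_lewis is a label chosen here for illustration.

import numpy as np

def learmonth_lewis(seed, n, a=75, m=2**31 - 1):
    """Return n uniform pseudo-random numbers in [0, 1)."""
    x = seed
    out = np.empty(n)
    for i in range(n):
        x = (a * x) % m   # increment c is 0 for this generator
        out[i] = x / m
    return out

# Reproduces the first values printed above
print(learmonth_lewis(seed=0.1, n=5))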
Experiment Title-2: Linear Congruential Generator (LCG) for Pseudo-Random Number Generation

Code:
import numpy as np

a = 2
c = 4
m = 5
x = 3

for i in range(1, 10):
    x = np.mod((a * x + c), m)
    print(x)

Input/Output:

0
4
2
3
0
4
2
3
0

Discussion:

 Initialization:
o The constants are defined:
  Multiplier (a): 2
  Increment (c): 4
  Modulus (m): 5
  Seed (x): 3 (initial value)

 LCG Formula:
o The LCG generates the next value in the sequence using the formula x_{n+1} = (a · x_n + c) mod m.
o Here, x_n is the current value of x, and x_{n+1} is the next value to be generated.
o np.mod computes the remainder after division by m (i.e., the modulus operation).

 Loop:
o The loop runs 9 times (from i = 1 to i = 9), generating and printing 9 pseudo-random numbers.
o In each iteration, the current value of x is updated using the LCG formula.
o Because the modulus is only 5, the sequence repeats quickly: 0, 4, 2, 3, 0, 4, 2, 3, 0 (period 4).

 Output:
o The code prints the sequence of numbers generated by the LCG.
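Since the modulus is tiny, the cycle can also be found empirically. The following snippet is an added illustration, not part of the original report:

# Detect the cycle of the LCG with a = 2, c = 4, m = 5, seed 3
a, c, m, x = 2, 4, 5, 3
seen = {}
step = 0
while x not in seen:
    seen[x] = step
    x = (a * x + c) % m
    step += 1
print("Cycle length =", step - seen[x], "starting at step", seen[x])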

Experiment Title-3: Generating Random Numbers and Selections

Code:
import random

for i in range(20):
    print('%05.4f' % random.random(), end=' ')
print()

random.seed(1)

for i in range(20):
    print('%05.4f' % random.random(), end=' ')
print()

for i in range(20):
    print('%6.4f' % random.uniform(1, 100), end=' ')
print()

for i in range(20):
    print(random.randint(-100, 100), end=' ')
print()

for i in range(20):
    print(random.randrange(0, 100, 5), end=' ')
print()

CitiesList = ['Rome', 'New York', 'London', 'Berlin', 'Moscow',
              'Los Angeles', 'Paris', 'Madrid', 'Tokyo', 'Toronto']
for i in range(10):
    CitiesItem = random.choice(CitiesList)
    print("Randomly selected item from Cities list is - ", CitiesItem)

DataList = list(range(10, 100, 10))
print("Initial Data List = ", DataList)
DataSample = random.sample(DataList, k=5)
print("Sample Data List = ", DataSample)

Input/Output:

Discussion:
This experiment showcases various techniques to generate and manipulate random data using Python's random
module. The experiment includes generating random floating-point and integer values, selecting random items from a
list, and creating random samples. Seeding the random number generator with random.seed() ensures that the
output is reproducible, which is useful for testing and debugging. The experiment effectively illustrates how to harness
the power of the random module for different scenarios where randomness is required.
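As a quick added illustration of the reproducibility point (not part of the original code), re-seeding the generator replays the exact same draws:

import random

random.seed(1)
first = [random.random() for _ in range(3)]
random.seed(1)
second = [random.random() for _ in range(3)]
print(first == second)  # True: same seed, same sequence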
Experiment Title-4: Chi-Square Goodness-of-Fit Test for Uniform Distribution Using a Linear Congruential Generator (LCG)

Code:

import numpy as np
import matplotlib.pyplot as plt

a = 75
c = 0
m = 2**31 - 1
x = 0.1
u = np.array([])

N = 100     # number of samples
s = 20      # number of subintervals
Ns = N / s  # expected count per subinterval

# Generate N samples (the original loop stopped at 10, which would not
# match the expected counts used in the test below)
for i in range(N):
    x = np.mod((a * x + c), m)
    u = np.append(u, x / m)
    print(u[i])

S = np.arange(0, 1, 0.05)
counts = np.empty(S.shape, dtype=int)
V = 0
for i in range(s):
    counts[i] = len(np.where((u >= S[i]) & (u < S[i] + 0.05))[0])
    V = V + (counts[i] - Ns)**2 / Ns

print("R = ", counts)
print("V = ", V)

Ypos = np.arange(len(counts))
plt.bar(Ypos, counts)
plt.show()

Input/Output:
Discussion:

This experiment uses a Linear Congruential Generator (LCG) to generate pseudo-random numbers. The
uniformity of these numbers over the interval [0, 1) is tested using the Chi-Square Goodness-of-Fit Test.

1. LCG: Generates a sequence of numbers based on a linear formula, normalized to fall within [0, 1).
2. Uniformity Test: The interval [0, 1) is divided into equal subintervals. The Chi-Square test compares
the observed count of numbers in each subinterval against the expected count if the distribution were
uniform.
3. Results:

Chi-Square Statistic (V): Indicates how close the distribution is to uniform. A lower value suggests a more uniform distribution; V is judged against the chi-square critical value with s − 1 = 19 degrees of freedom.

Bar Chart: Visually shows the distribution of numbers across the subintervals.
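The hand-computed statistic can be cross-checked with SciPy, assuming it is installed; this snippet is an added sketch, not part of the original lab code:

import numpy as np
from scipy import stats

# Regenerate the 100 LCG samples from the experiment
a, c, m, x = 75, 0, 2**31 - 1, 0.1
u = []
for _ in range(100):
    x = (a * x + c) % m
    u.append(x / m)

# Bin into 20 equal subintervals; chisquare defaults to uniform expected counts
observed, _ = np.histogram(u, bins=20, range=(0.0, 1.0))
chi2, pvalue = stats.chisquare(observed)
print("Chi-square statistic =", chi2)
print("p-value =", pvalue)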

Experiment Title-5: Simulation of Binomial Distribution Using a Random Generator

Code:

import numpy as np
import matplotlib.pyplot as plt

N = 1000
n = 10
p = 0.5

P1 = np.random.binomial(n,p,N)

plt.figure()
plt.hist(P1, density=True, alpha=0.8, histtype='bar', color = 'green', ec='black')
plt.show()

Input/Output:
Discussion:

This experiment simulates and visualizes a binomial distribution using Python's NumPy library. The
binomial distribution is a discrete probability distribution that describes the number of successes in a fixed
number of independent Bernoulli trials, each with the same probability of success.

Steps Involved:

1. Binomial Distribution Parameters:
o n (Number of Trials): The number of independent trials (in this case, 10).
o p (Probability of Success): The probability of success on each trial (here, 0.5).
o N (Number of Experiments): The number of times the binomial experiment is repeated (1,000 in this case).
2. Generating the Data:
o The np.random.binomial(n, p, N) function generates N random outcomes of a
binomial distribution with n trials and probability p of success.
3. Plotting the Histogram:
o The histogram visualizes the distribution of the generated binomial outcomes.
o Density Plot: The histogram is normalized to show the probability density, giving an idea of
how frequently each outcome occurs relative to the others.
o Visual Attributes: The histogram bars are green with black edges, providing a clear view of
the distribution.
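A hedged extension (not in the original report): overlaying the theoretical PMF from scipy.stats.binom on the simulated histogram makes the agreement visible, assuming SciPy is available.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

N, n, p = 1000, 10, 0.5
samples = np.random.binomial(n, p, N)

k = np.arange(0, n + 1)
plt.hist(samples, bins=np.arange(-0.5, n + 1.5, 1), density=True,
         alpha=0.6, color='green', ec='black', label='simulated')
plt.plot(k, binom.pmf(k, n, p), 'ro-', label='theoretical PMF')
plt.legend()
plt.show()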

Experiment Title-6: Comparison of Multiple Normal Distributions


Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

mu = 10
sigma = 2
P1 = np.random.normal(mu, sigma, 1000)

mu = 5
sigma = 2
P2 = np.random.normal(mu, sigma, 1000)

mu = 15
sigma = 2
P3 = np.random.normal(mu, sigma, 1000)

Plot1 = sns.distplot(P1)
Plot2 = sns.distplot(P2)
Plot3 = sns.distplot(P3)

mu = 10
sigma = 2
P4 = np.random.normal(mu, sigma, 1000)

mu = 10
sigma = 1
P5 = np.random.normal(mu, sigma, 1000)

mu = 10
sigma = 0.5
P6 = np.random.normal(mu, sigma, 1000)

plt.figure()
Plot4 = sns.distplot(P4)
Plot5 = sns.distplot(P5)
Plot6 = sns.distplot(P6)
plt.show()

Input/Output:

Discussion:

This experiment generates and compares multiple normal distributions with different means and standard
deviations. The purpose is to observe how changes in the mean and standard deviation affect the shape and
position of the distributions.

Steps Involved:

1. Generating Normal Distributions:


o First Set (P1, P2, P3):
 Three normal distributions are generated with the same standard deviation (sigma = 2)
but different means (mu = 10, mu = 5, mu = 15).
o Second Set (P4, P5, P6):
 Three more normal distributions are generated with the same mean (mu = 10) but
different standard deviations (sigma = 2, sigma = 1, sigma = 0.5).
2. Visualization:
o First Plot (P1, P2, P3):
 The distributions are plotted on the same graph to compare how their means affect the
position along the x-axis, while having the same spread (standard deviation).
o Second Plot (P4, P5, P6):
 These distributions are plotted together to compare how different standard deviations
affect the spread (width) of the curves, with the same central mean.

The plots are created using Seaborn's distplot function (deprecated in newer Seaborn releases in favor of histplot and kdeplot), which shows the probability density function (PDF) of the data, giving a smooth curve that represents the distribution.

3. Interpreting the Plots:


o First Plot: As the mean (mu) shifts, the entire distribution moves along the x-axis without changing its shape, since the standard deviation remains constant.
o Second Plot: As the standard deviation (sigma) decreases, the distribution becomes narrower and taller, indicating that data points are more concentrated around the mean.
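Because distplot is deprecated, a modern equivalent of the first plot would use kdeplot; this is an added sketch, not the code that was submitted:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

for mu in (5, 10, 15):
    sns.kdeplot(np.random.normal(mu, 2, 1000), label="mu=%d, sigma=2" % mu)
plt.legend()
plt.show()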

Experiment Title-7: Visualization of Uniform Distribution with Different Sample Sizes


Code:
import numpy as np
import matplotlib.pyplot as plt
a=1
b=100
N=100
X1=np.random.uniform(a,b,N)
plt.plot(X1)
plt.show()
plt.figure()
plt.hist(X1, density=True, histtype='stepfilled', alpha=0.2)
plt.show()

a=1
b=100
N=10000
X2=np.random.uniform(a,b,N)

plt.figure()
plt.plot(X2)
plt.show()

plt.figure()
plt.hist(X2, density=True, histtype='stepfilled', alpha=0.2)
plt.show()

Input/Output:
Discussion:

This experiment demonstrates the characteristics of a uniform distribution by generating random samples
from it with two different sample sizes, and then visualizing the results through line plots and histograms.

Steps Involved:

1. Generating Uniform Distribution Data:


o First Sample (X1):
 N = 100 random samples are generated from a uniform distribution ranging from a = 1 to b = 100.
o Second Sample (X2):
 N = 10,000 random samples are generated from the same uniform distribution.
2. Visualization:
o Line Plots:
 The line plots of X1 and X2 show the sequence of random values. For the smaller sample
(X1), the plot is more jagged, while the larger sample (X2) appears more continuous due to
the higher number of data points.
o Histograms:
 The histograms illustrate the distribution of the generated samples.
 For X1, the histogram might not appear perfectly flat due to the small sample size.
 For X2, the histogram should appear more uniform, reflecting the true nature of the
uniform distribution, with a more even spread of data across the interval.
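The "flatness" of each sample can be quantified rather than eyeballed. The following Kolmogorov-Smirnov check is an added sketch (assuming SciPy is available), not part of the original experiment:

import numpy as np
from scipy import stats

a, b = 1, 100
for N in (100, 10000):
    X = np.random.uniform(a, b, N)
    # Compare against the uniform CDF on [a, b]; args are (loc, scale)
    stat, pvalue = stats.kstest(X, 'uniform', args=(a, b - a))
    print("N=%d: KS statistic=%.4f, p-value=%.4f" % (N, stat, pvalue))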
Experiment Title-8: Sampling Distribution of the Mean from a Uniform Population
Code:
import random
import numpy as np
import matplotlib.pyplot as plt

a = 1
b = 100
N = 10000
DataPop = list(np.random.uniform(a, b, N))
plt.hist(DataPop, density=True, histtype='stepfilled', alpha=0.2)
plt.show()

SamplesMeans = []
for i in range(0, 1000):
    DataExtracted = random.sample(DataPop, k=100)
    DataExtractedMean = np.mean(DataExtracted)
    SamplesMeans.append(DataExtractedMean)

plt.figure()
plt.hist(SamplesMeans, density=True, histtype='stepfilled', alpha=0.2)
plt.show()

Input/Output:

Discussion:

This experiment explores the concept of sampling distribution by generating a large uniform
population, drawing random samples from it, and analyzing the distribution of sample means.

Steps Involved:

1. Generating the Population:


o Uniform Population (DataPop):
 10,000 random values are generated from a uniform distribution between a = 1 and b = 100. This represents the entire population of data points.
o Histogram of Population:
 A histogram is plotted to visualize the uniform distribution of the entire population. It shows how the data is spread across the range from 1 to 100, illustrating the flat, even distribution of values.
2. Sampling and Calculating Means:
o Sampling:
 1,000 random samples are extracted from the population, with each sample consisting of
100 data points.
 The random.sample() function is used to draw these samples without replacement from
the population.
o Calculating Means:
 For each sample, the mean is calculated using np.mean(), and these means are stored in
the list SamplesMeans.
3. Analyzing the Distribution of Sample Means:
o Histogram of Sample Means:
 A histogram is plotted of the means of the 1,000 samples. This histogram represents the distribution of sample means, showing how the average value of each sample varies. The means cluster around the population mean (about 50.5) and are approximately normally distributed, as the Central Limit Theorem predicts; a worked check follows below.
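A minimal numerical check of the Central Limit Theorem prediction, added here for illustration (not in the original report): the standard deviation of the sample means should be close to sigma/√n.

import numpy as np

a, b, N, n = 1, 100, 10000, 100
pop = np.random.uniform(a, b, N)

means = [np.mean(np.random.choice(pop, size=n, replace=False))
         for _ in range(1000)]

theoretical_se = (b - a) / np.sqrt(12) / np.sqrt(n)  # sigma / sqrt(n)
print("Observed std of sample means =", np.std(means))
print("Theoretical standard error   =", theoretical_se)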

Experiment Title-9: Numerical Integration Using the Monte Carlo Method

Code:
import random
import numpy as np
import matplotlib.pyplot as plt

random.seed(2)
f = lambda x: x**2
a = 0.0
b = 3.0
NumSteps = 1000000
XIntegral = []
YIntegral = []
XRectangle = []
YRectangle = []

ymin = f(a)
ymax = ymin
for i in range(NumSteps):
    x = a + (b - a) * float(i) / NumSteps
    y = f(x)
    if y < ymin:
        ymin = y
    if y > ymax:
        ymax = y

A = (b - a) * (ymax - ymin)
N = 1000000
M = 0
for k in range(N):
    x = a + (b - a) * random.random()
    y = ymin + (ymax - ymin) * random.random()
    if y <= f(x):
        M += 1
        XIntegral.append(x)
        YIntegral.append(y)
    else:
        XRectangle.append(x)
        YRectangle.append(y)

NumericalIntegral = M / N * A
print("Numerical integration = " + str(NumericalIntegral))

XLin = np.linspace(a, b)
YLin = []
for x in XLin:
    YLin.append(f(x))

plt.axis([0, b, 0, f(b)])
plt.plot(XLin, YLin, color="red", linewidth=4)
plt.scatter(XIntegral, YIntegral, color="blue", marker=".")
plt.scatter(XRectangle, YRectangle, color="yellow", marker=".")
plt.title("Numerical Integration using Monte Carlo method")
plt.show()

Input/Output:

Discussion:

This experiment demonstrates numerical integration using the Monte Carlo method to estimate the integral of the function f(x) = x² over a given interval [a, b]. The Monte Carlo method uses random sampling to estimate the area under the curve of a function.

Steps Involved:

1. Setup and Function Definition:


o Function (f): The function to be integrated is f(x) = x².
o Interval: The integration is performed over the interval [a, b], where a = 0.0 and b = 3.0.
2. Calculate Bounding Box:
o Bounding Box Dimensions:
 The minimum (ymin) and maximum (ymax) values of the function f(x) over the interval are calculated. This defines the bounding box used for the Monte Carlo sampling.
o Area of Bounding Box (A):
 The area of the bounding box is calculated as: A = (b − a) × (ymax − ymin)
3. Monte Carlo Integration:
o Random Sampling:
 N = 1,000,000 random points are sampled within the bounding box.
 For each sample, a random x value and a random y value are generated.
o Count Points Under Curve:
 If the random y value is less than or equal to f(x), the point is considered to be under
the curve.
 Points that fall under the curve are recorded as XIntegral and YIntegral, while
points above the curve are recorded as XRectangle and YRectangle.
o Estimate Integral:
 The integral is estimated by the ratio of points under the curve to the total number of points, multiplied by the area of the bounding box: NumericalIntegral = (M / N) × A
 Where M is the number of points under the curve. For f(x) = x² on [0, 3], the exact value is 3³/3 = 9, so the estimate should be close to 9.
4. Plotting the Results:
o Plot Function and Points:
 The function f(x) is plotted in red.
 Points under the curve (blue) and points above the curve (yellow) are plotted to visualize
the Monte Carlo sampling process.
o Visualization:
 The plot shows the function f(x), the sampled points, and the areas where the points fall either under or above the curve.
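An alternative "sample mean" Monte Carlo estimator computes the same integral without a bounding box; this is an added sketch for comparison, not part of the original lab:

import random

random.seed(2)
f = lambda x: x**2
a, b, N = 0.0, 3.0, 1000000

total = 0.0
for _ in range(N):
    x = a + (b - a) * random.random()
    total += f(x)

estimate = (b - a) * total / N  # (b - a) times the average of f
print("Sample-mean estimate =", estimate)  # exact value is 3**3 / 3 = 9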

Experiment Title-10: Monte Carlo Method for Estimating π.


Code:
import math
import random
import numpy as np
import matplotlib.pyplot as plt

N = 10000
M = 0

XCircle=[]
YCircle=[]
XSquare=[]
YSquare=[]

for p in range(N):
    x = random.random()
    y = random.random()
    if x**2 + y**2 <= 1:
        M += 1
        XCircle.append(x)
        YCircle.append(y)
    else:
        XSquare.append(x)
        YSquare.append(y)

Pi = 4 * M / N

print("N=%d M=%d Pi=%.2f" % (N, M, Pi))

XLin = np.linspace(0, 1)
YLin = []
for x in XLin:
    YLin.append(math.sqrt(1 - x**2))

plt.axis("equal")
plt.grid(which="major")
plt.plot(XLin, YLin, color="red", linewidth=4)
plt.scatter(XCircle, YCircle, color="yellow", marker=".")
plt.scatter(XSquare, YSquare, color="blue", marker=".")
plt.title("Monte Carlo method for Pi estimation")

plt.show()

Input/Output:
Discussion:
This experiment uses the Monte Carlo method to estimate the value of π by simulating random points
within a unit square and determining how many fall inside a quarter circle inscribed within that
square.

Steps Involved:

1. Setup:
o Generate Random Points:
 N = 10,000 random points are generated in the unit square [0, 1] × [0, 1].
2. Check Points Inside Circle:
o Circle Condition:
 Points are checked to see if they fall inside the quarter circle of radius 1, centered at the origin. The condition is x² + y² ≤ 1.
o Count Points:
 Points inside the circle are counted (M), and their coordinates are stored separately from
points outside the circle.
3. Estimate π:
o Calculation:
 The estimate of π is calculated using the ratio of points inside the circle to the total number of points, scaled by 4: π ≈ 4 × M / N
4. Visualization:
o Plot:
 The quarter circle is plotted in red.
 Points inside the circle are shown in yellow, while points outside are shown in blue.
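The estimate improves roughly like 1/√N, which the following added snippet (not in the original report) illustrates:

import math
import random

random.seed(0)
for N in (100, 10000, 1000000):
    M = sum(1 for _ in range(N)
            if random.random()**2 + random.random()**2 <= 1)
    est = 4 * M / N
    print("N=%9d: Pi ~ %.5f, abs error = %.5f" % (N, est, abs(est - math.pi)))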

Experiment Title-11: 1D Random Walk Simulation.

Code:
from random import seed
from random import random
from matplotlib import pyplot

seed(1)
RWPath = list()

# First step: -1 or +1 with equal probability
RWPath.append(-1 if random() < 0.5 else 1)

for i in range(1, 1000):
    ZNValue = -1 if random() < 0.5 else 1
    XNValue = RWPath[i-1] + ZNValue
    RWPath.append(XNValue)

pyplot.plot(RWPath)
pyplot.show()

Input/Output:
Discussion:
This experiment simulates a one-dimensional random walk using a simple algorithm where each step is
determined by a random choice between moving forward or backward.

Steps Involved:

1. Initialization:
 Seed Random Generator: Ensures reproducibility by setting the random seed.
 Start Path: Initialize the random walk path with the first step being either -1 or 1, chosen randomly.
2. Simulating the Random Walk:
 Generate Steps:
 For each subsequent step (total of 999 steps):
 Step Direction: A random choice is made to move either -1 or 1.
 Update Position: The new position is computed by adding this step to the
previous position.
 Record Position: The updated position is appended to the path.
3. Plotting the Path:
 Visualization:
 The path of the random walk is plotted, showing how the position changes over time.
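A hedged extension (not part of the submitted code): averaging many independent walks shows the root-mean-square displacement grows like √n.

import numpy as np

n_steps, n_walks = 1000, 500
steps = np.random.choice([-1, 1], size=(n_walks, n_steps))
paths = np.cumsum(steps, axis=1)

rms_final = np.sqrt(np.mean(paths[:, -1] ** 2))
print("RMS displacement after", n_steps, "steps =", rms_final)
print("Theoretical sqrt(n) =", np.sqrt(n_steps))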
Experiment Title-12: Weather Forecasting Using a Markov Chain
Code:
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(3)
StatesData = ["Sunny", "Rainy"]

# Each row lists transitions out of a state, ordered to match the
# probabilities in the corresponding row of TransitionMatrix
# (the original code listed ["RaRa", "RaSu"], which inverted the
# rainy-day probabilities described in the discussion)
TransitionStates = [["SuSu", "SuRa"], ["RaSu", "RaRa"]]
TransitionMatrix = [[0.80, 0.20], [0.25, 0.75]]

WeatherForecasting = list()
NumDays = 365
TodayPrediction = StatesData[0]

print("Weather initial condition =", TodayPrediction)

for i in range(1, NumDays):
    if TodayPrediction == "Sunny":
        TransCondition = np.random.choice(TransitionStates[0], replace=True,
                                          p=TransitionMatrix[0])
        if TransCondition == "SuSu":
            pass
        else:
            TodayPrediction = "Rainy"
    elif TodayPrediction == "Rainy":
        TransCondition = np.random.choice(TransitionStates[1], replace=True,
                                          p=TransitionMatrix[1])
        if TransCondition == "RaRa":
            pass
        else:
            TodayPrediction = "Sunny"

    WeatherForecasting.append(TodayPrediction)
    print(TodayPrediction)

plt.plot(WeatherForecasting)
plt.show()

plt.figure()
plt.hist(WeatherForecasting)
plt.show()

Input/Output:
Discussion:
This experiment simulates weather forecasting using a Markov Chain model. The model predicts weather
states (Sunny or Rainy) for each day based on transition probabilities defined by a transition matrix.

Steps Involved:

1. Setup:
o States and Transition Matrix:
 StatesData: The possible weather states are "Sunny" and "Rainy".
 TransitionStates: Defines possible transitions (e.g., "SuSu" for Sunny to Sunny, "SuRa" for
Sunny to Rainy).
 TransitionMatrix: Probability matrix specifying the likelihood of transitions between states:
 [[0.80, 0.20], [0.25, 0.75]] where:
 0.80 is the probability of remaining Sunny given it was Sunny, and 0.20 is
the probability of changing to Rainy.
 0.25 is the probability of changing to Sunny given it was Rainy, and 0.75 is
the probability of remaining Rainy.
2. Forecasting Process:
o Initial State: The weather starts as "Sunny".
o Simulation Loop:
 For each day, determine the next state based on the current state and the transition
probabilities.
 Update Prediction:
 Use np.random.choice to randomly select the next state based on the transition
probabilities.
 Update the state for the next iteration.
3. Recording and Plotting Results:
o Weather Forecasting:
 Append the forecasted weather for each day to the WeatherForecasting list.
o Plots:
 Line Plot: Shows the sequence of weather states over the 365 days.
 Histogram: Displays the frequency of each weather state over the year.
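The long-run fraction of sunny days can be checked against the stationary distribution of the transition matrix; this linear-algebra cross-check is an added sketch, not part of the original report:

import numpy as np

P = np.array([[0.80, 0.20],   # Sunny -> [Sunny, Rainy]
              [0.25, 0.75]])  # Rainy -> [Sunny, Rainy]

# Stationary distribution: left eigenvector of P for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print("Stationary distribution [Sunny, Rainy] =", pi)  # about [0.556, 0.444]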

Experiment Title-13: Bootstrap Estimator Analysis for Population Mean Estimation

Code:
import random
import numpy as np
import matplotlib.pyplot as plt

PopData = list()

random.seed(7)

for i in range(1000):
    DataElem = 50 * random.random()
    PopData.append(DataElem)

PopSample = random.choices(PopData, k=100)

PopSampleMean = list()
for i in range(10000):
    SampleI = random.choices(PopData, k=100)
    PopSampleMean.append(np.mean(SampleI))

plt.hist(PopSampleMean)
plt.show()

MeanPopSampleMean = np.mean(PopSampleMean)
print("The mean of the Bootstrap estimator is ",MeanPopSampleMean)

MeanPopData = np.mean(PopData)
print("The mean of the population is ",MeanPopData)

MeanPopSample = np.mean(PopSample)
print("The mean of the simple random sample is ",MeanPopSample)

Input/Output:

Discussion:
Step-by-Step Breakdown:
1. Population Data Creation:
o Generated 1,000 random values (0–50) as population.
o True population mean: 24.97.

2. Simple Random Sampling:


o Drew a random sample of 100 points.
o Sample mean: 25.35 (slight deviation due to variability).

3. Bootstrap Sampling:
o Resampled 10,000 bootstrap samples (size 100).
o Calculated means of each sample.

4. Histogram of Bootstrap Means:


o Displayed the distribution of sample means.
o Approximately normal due to Central Limit Theorem.

5. Bootstrap Mean Estimation:


o Mean of bootstrap means: 24.95 (close to true mean: 24.97).

6. Comparison:
o Simple sample mean (25.35) deviates more than bootstrap mean (24.95), showing bootstrap's robustness.
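A natural next step, added here as a hedged sketch (not in the original report), is a 95% percentile confidence interval from the bootstrap distribution:

import random
import numpy as np

random.seed(7)
PopData = [50 * random.random() for _ in range(1000)]

BootMeans = [np.mean(random.choices(PopData, k=100)) for _ in range(10000)]
low, high = np.percentile(BootMeans, [2.5, 97.5])
print("95%% CI for the mean: [%.2f, %.2f]" % (low, high))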

Experiment Title-14: Gradient Descent for Minimizing a Quadratic Function

Code:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1,3,100)
y=x**2-2*x+1

fig = plt.figure()
axdef = fig.add_subplot(1, 1, 1)
axdef.spines['left'].set_position('center')
axdef.spines['bottom'].set_position('zero')
axdef.spines['right'].set_color('none')
axdef.spines['top'].set_color('none')
axdef.xaxis.set_ticks_position('bottom')
axdef.yaxis.set_ticks_position('left')

plt.plot(x,y, 'r')
plt.show()

Gradf = lambda x: 2*x-2

ActualX = 3
LearningRate = 0.01
PrecisionValue = 0.000001
PreviousStepSize = 1
MaxIteration = 10000
IterationCounter = 0

while PreviousStepSize > PrecisionValue and IterationCounter < MaxIteration:
    PreviousX = ActualX
    ActualX = ActualX - LearningRate * Gradf(PreviousX)
    PreviousStepSize = abs(ActualX - PreviousX)
    IterationCounter = IterationCounter + 1
    print("Number of iterations = ", IterationCounter,
          "\nActual value of x is = ", ActualX)

print("X value of f(x) minimum = ", ActualX)

Input/Output:

Discussion:
1. Graph of the Function:

 The plot of f(x) = x² − 2x + 1 is a parabola opening upwards.
 The minimum value occurs at the vertex of the parabola, which is expected at x = 1.0.

2. Gradient Descent Algorithm:

 Goal: Minimize the function f(x) using gradient descent.
 The derivative (gradient) of f(x) is f′(x) = 2x − 2.
 Starting from an initial guess of x = 3, the algorithm iteratively updates x in the direction of the negative gradient, scaled by the learning rate (0.01).

3. Stopping Criteria:

 The algorithm stops when:
o The step size (|ActualX − PreviousX|) is less than the precision value (0.000001), or
o The iteration count exceeds the maximum allowed iterations (10,000).

4. Observations from Results:

 Convergence: The algorithm converges to x = 1.0, which matches the analytical solution for the minimum of the function.
 Number of Iterations: It takes 246 iterations to reach the desired precision, indicating efficient convergence.
 Effectiveness: The gradient descent method successfully finds the minimum by leveraging the slope of the function.
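The learning rate controls how many iterations are needed; this added experiment (not in the original report) reruns the same descent with different rates:

Gradf = lambda x: 2 * x - 2

for lr in (0.001, 0.01, 0.1, 0.5):
    x, steps = 3.0, 0
    while abs(lr * Gradf(x)) > 1e-6 and steps < 100000:
        x = x - lr * Gradf(x)
        steps += 1
    print("learning rate %.3f: x = %.6f after %d iterations" % (lr, x, steps))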

Experiment Title-15: Newton-Raphson Method for Finding the Minimum of a Cubic Function.
Code:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0,3,100)
y=x**3 -2*x**2 -x + 2

fig = plt.figure()
axdef = fig.add_subplot(1, 1, 1)
axdef.spines['left'].set_position('center')
axdef.spines['bottom'].set_position('zero')
axdef.spines['right'].set_color('none')
axdef.spines['top'].set_color('none')
axdef.xaxis.set_ticks_position('bottom')
axdef.yaxis.set_ticks_position('left')

plt.plot(x,y, 'r')
plt.show()

print('Value of x at the minimum of the function', x[np.argmin(y)])

FirstDerivative = lambda x: 3*x**2-4*x -1


SecondDerivative = lambda x: 6*x-4

ActualX = 3
PrecisionValue = 0.000001
PreviousStepSize = 1
MaxIteration = 10000
IterationCounter = 0

while PreviousStepSize > PrecisionValue and IterationCounter < MaxIteration:
    PreviousX = ActualX
    ActualX = ActualX - FirstDerivative(PreviousX) / SecondDerivative(PreviousX)
    PreviousStepSize = abs(ActualX - PreviousX)
    IterationCounter = IterationCounter + 1
    print("Number of iterations = ", IterationCounter,
          "\nActual value of x is = ", ActualX)

print("X value of f(x) minimum = ", ActualX)

Input/Output:

Discussion:
1. Graph and Function Analysis:

 The function f(x) = x³ − 2x² − x + 2 is a cubic polynomial, and its graph shows one local minimum and one local maximum.
 The local minimum is located at approximately x ≈ 1.55 (the positive root of f′(x) = 0, x = (4 + √28)/6 ≈ 1.549), as confirmed by the Newton-Raphson method.

2. Newton-Raphson Method:

 Goal: The Newton-Raphson method finds the stationary points of f(x) by solving f′(x) = 0.
 Steps:
o First Derivative: f′(x) = 3x² − 4x − 1.
o Second Derivative: f″(x) = 6x − 4.
o Starting from x = 3, the method iteratively updates x using: x_new = x_old − f′(x_old) / f″(x_old).

3. Stopping Criteria:

 The algorithm stops when the step size (|x_new − x_old|) is less than the precision (0.000001) or when the maximum iterations (10,000) are reached.

4. Convergence:

 The method converges to x ≈ 1.55 after 6 iterations.
 This demonstrates the efficiency of the Newton-Raphson method for finding critical points.
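As a cross-check, SciPy's bounded scalar minimizer finds the same point; this is an added sketch assuming SciPy is installed, not part of the submitted code:

from scipy.optimize import minimize_scalar

f = lambda x: x**3 - 2*x**2 - x + 2
res = minimize_scalar(f, bounds=(0, 3), method='bounded')
print("Minimum found by SciPy at x =", res.x)  # about 1.5486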

Experiment Title-16: Simulation of Brownian Motion Using Python

Code:
import numpy as np
import matplotlib.pyplot as plt

# Set random seed for reproducibility


np.random.seed(4)

# Number of steps
n = 1000

# Scaling factor: each increment is scaled by sqrt(1/n)

SQN = 1 / np.sqrt(n)

# Generate random normal values


ZValues = np.random.randn(n)

# Initialize Brownian motion


Yk = 0
SBMotion = []

# Simulate the Brownian motion


for k in range(n):
    Yk = Yk + SQN * ZValues[k]
    SBMotion.append(Yk)

# Plot the Brownian motion


plt.plot(SBMotion)
plt.title("Simulated Brownian Motion")
plt.xlabel("Steps")
plt.ylabel("Value")
plt.show()

Input/Output:

Discussion:
1. Brownian Motion Definition:

 Brownian motion, also known as a Wiener process, is a continuous-time stochastic process.


 It is widely used to model random phenomena in fields like physics, finance, and biology.

2. Simulation Approach:

 The simulation discretizes Brownian motion into n steps.
 Each increment dW_k is modeled as Z_k · √Δt, where Z_k ~ N(0, 1).
 Here, Δt = 1/n, so the scaling factor is 1/√n.

3. Interpretation of Results:

 The plot represents a single realization of a Brownian motion path.


 The motion is random, but it satisfies properties of Brownian motion:
o Starts at 0.
o Independent increments.
o Normally distributed step sizes.
o Fluctuations scale with time.

4. Applications:

 Physics: Describing particle movement in fluids.


 Finance: Modeling stock prices and option pricing (geometric Brownian motion).
 Mathematics: Studying properties of stochastic processes.
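A vectorized one-liner reproduces the same path (added for comparison, not in the original code): np.cumsum replaces the explicit loop.

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(4)
n = 1000
SBMotion = np.cumsum(np.random.randn(n) / np.sqrt(n))
plt.plot(SBMotion)
plt.title("Simulated Brownian Motion (vectorized)")
plt.show()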
Experiment Title-17: Simulation of Project Completion Time Using Triangular Distributions (Monte Carlo).

Code:
import pandas as pd
import random
import numpy as np

N = 10000

TotalTime = []
T = np.empty(shape=(N, 6))

# [minimum, most likely, maximum] duration for each of the six tasks
TaskTimes = [[3, 5, 8],
             [2, 4, 7],
             [3, 5, 9],
             [4, 6, 10],
             [3, 5, 9],
             [2, 6, 8]]

# Lh[i] = (m - a) / (b - a), the CDF value at the mode of task i
Lh = []
for i in range(6):
    Lh.append((TaskTimes[i][1] - TaskTimes[i][0]) / (TaskTimes[i][2] - TaskTimes[i][0]))

for p in range(N):
    for i in range(6):
        trand = random.random()
        if trand < Lh[i]:
            T[p][i] = TaskTimes[i][0] + np.sqrt(trand * (TaskTimes[i][1] - TaskTimes[i][0])
                                                * (TaskTimes[i][2] - TaskTimes[i][0]))
        else:
            T[p][i] = TaskTimes[i][2] - np.sqrt((1 - trand) * (TaskTimes[i][2] - TaskTimes[i][1])
                                                * (TaskTimes[i][2] - TaskTimes[i][0]))
    TotalTime.append(T[p][0] + np.maximum(T[p][1], T[p][2])
                     + np.maximum(T[p][3], T[p][4]) + T[p][5])

Data = pd.DataFrame(T, columns=['Task1', 'Task2', 'Task3', 'Task4', 'Task5', 'Task6'])

pd.set_option('display.max_columns', None)
print(Data.describe())

hist = Data.hist(bins=10)

print("Minimum project completion time = ", np.amin(TotalTime))
print("Mean project completion time = ", np.mean(TotalTime))
print("Maximum project completion time = ", np.amax(TotalTime))

Input/Output:
Discussion:

1. Objective:

Simulate project completion times using task durations drawn from triangular distributions for six tasks. Mirroring the code, the total project completion time is calculated as: TotalTime = Task1 + max(Task2, Task3) + max(Task4, Task5) + Task6, reflecting that Tasks 2-3 and Tasks 4-5 run in parallel.

2. Triangular Distribution:

 Each task duration is sampled from a triangular distribution defined by its minimum, most likely, and maximum times: TaskTimes[i] = [a, m, b], where a is the minimum, m is the most likely, and b is the maximum.
 A random value trand is generated and inverted through the triangular CDF: if trand < (m − a)/(b − a), the sample is a + √(trand · (m − a) · (b − a)); otherwise it is b − √((1 − trand) · (b − m) · (b − a)).
3. Monte Carlo Simulation:

 For each of the N = 10,000 iterations, task durations are sampled and the total project time is computed based on the project logic.
 A large number of simulations ensures an accurate statistical representation of possible outcomes.

4. Statistical Insights:

 The minimum project time represents the fastest scenario where tasks finish with their minimum durations.
 The mean project time reflects the average outcome across all simulations, accounting for variability in task
durations.
 The maximum project time reflects the slowest scenario.
5. Visualizations:

 Histograms of individual task times validate the triangular distribution of each task.
 A distribution plot of total project times (optional) can illustrate the variability in project completion time.
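NumPy also ships a triangular sampler, so the inverse-CDF code above can be replaced by a single call; this alternative is an added sketch, not the version that was submitted:

import numpy as np

N = 10000
TaskTimes = [[3, 5, 8], [2, 4, 7], [3, 5, 9], [4, 6, 10], [3, 5, 9], [2, 6, 8]]

# Column i holds N draws for task i
T = np.column_stack([np.random.triangular(a, m, b, size=N)
                     for a, m, b in TaskTimes])

TotalTime = (T[:, 0] + np.maximum(T[:, 1], T[:, 2])
             + np.maximum(T[:, 3], T[:, 4]) + T[:, 5])
print("Mean project completion time =", TotalTime.mean())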
