
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“JNANA SANGAMA”, BELAGAVI - 590 018

ASSIGNMENT REPORT on

“Numerical Methods and Applications”

Submitted by

Shalini L 4SF21CS144

In partial fulfillment of the requirements for the VII semester

BACHELOR OF ENGINEERING

in

COMPUTER SCIENCE & ENGINEERING

Under the Guidance of

Mr. Sunil Kumar K

Assistant Professor, Department of Civil Engineering

at

Sahyadri College of Engineering and Management,

An Autonomous Institution, Mangaluru 2024-25


MODULE 1
1. Gauss Elimination Method
Algorithm for the Gauss Elimination Method
1. Start
2. Declare the variables and read the order of the matrix n.
3. Take the coefficients of the linear equation as:
Do for k=1 to n
Do for j=1 to n+1
Read a[k][j]
End for j
End for k
4. Do for k=1 to n-1
Do for i=k+1 to n
Do for j=k+1 to n+1
a[i][j] = a[i][j] – a[i][k] /a[k][k] * a[k][j]
End for j
End for i
End for k
5. Compute x[n] = a[n][n+1]/a[n][n]
6. Do for k=n-1 to 1
sum = 0
Do for j=k+1 to n
sum = sum + a[k][j] * x[j]
End for j
x[k] = 1/a[k][k] * (a[k][n+1] – sum)
End for k
7. Display the result x[k]
8. Stop

Flowchart
Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

def gauss_elimination(matrix):
    n = len(matrix)
    # Forward elimination (convert matrix to upper triangular form)
    for k in range(n):
        # Partial pivoting: find the row with the largest element in column k
        max_row = max(range(k, n), key=lambda i: abs(matrix[i][k]))
        matrix[k], matrix[max_row] = matrix[max_row], matrix[k]
        for i in range(k + 1, n):
            factor = matrix[i][k] / matrix[k][k]
            for j in range(k, n + 1):  # Including the last column (right-hand side)
                matrix[i][j] -= factor * matrix[k][j]

    # Back substitution (solve for the variables)
    x = [0] * n
    x[n - 1] = matrix[n - 1][n] / matrix[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = matrix[i][n]
        for j in range(i + 1, n):
            x[i] -= matrix[i][j] * x[j]
        x[i] /= matrix[i][i]
    return x


def main():
    # Number of equations (a 3x3 system is assumed here)
    n = 3
    # Input the augmented matrix for the system of equations
    print("Enter the augmented matrix (coefficients and constants):")
    matrix = []
    for i in range(n):
        row = list(map(float, input().split()))
        matrix.append(row)
    # Perform Gauss elimination to solve the system
    solution = gauss_elimination(matrix)
    # Display the result
    print("The solution is:")
    for i in range(n):
        print(f"x[{i + 1}] = {solution[i]:.2f}")


if __name__ == "__main__":
    main()
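
As a quick sanity check, the routine can be run on a small system with a known solution and compared against NumPy's solver. The snippet below is an illustrative addition (the sample system 2x + y − z = 8, −3x − y + 2z = −11, −2x + y + 2z = −3, whose solution is x = 2, y = 3, z = −1, is an assumed example and not part of the report).

import numpy as np

# Illustrative check (assumed sample system, not part of the original report).
augmented = [[2.0, 1.0, -1.0, 8.0],
             [-3.0, -1.0, 2.0, -11.0],
             [-2.0, 1.0, 2.0, -3.0]]
print(gauss_elimination([row[:] for row in augmented]))  # expected: [2.0, 3.0, -1.0]
A = np.array(augmented)[:, :3]
b = np.array(augmented)[:, 3]
print(np.linalg.solve(A, b))  # NumPy reference solution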

Output:

2. Gauss-Jacobi Method
Algorithm for the Gauss-Jacobi Method
1. Start
2. Declare the variables and read the order of the matrix n
3. Read the stopping criteria er
4. Read the coefficients a[i][j] as
Do for i=1 to n
Do for j=1 to n
Read a[i][j]
Repeat for j
Repeat for i
5. Read the coefficients b[i] for i=1 to n
6. Initialize x0[i] = 0 for i=1 to n
7. Set key=0
8. For i=1 to n
Set sum = b[i]
For j=1 to n
If (j not equal to i)
Set sum = sum – a[i][j] * x0[j]
Repeat j
x[i] = sum/a[i][i]
If absolute value of ((x[i] – x0[i]) / x[i]) > er, then
Set key = 1
Set x0[i] = x[i]
Repeat i
9. If key = 1, then
Go to step 7
10. Otherwise print results

Flowchart
Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

import numpy as np

def gauss_jacobi(A, b, tol=1e-10, max_iterations=100):
    # Ensure A and b are NumPy arrays for easy indexing
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)

    # Initialize the solution vector x with zeros (or any initial guess)
    n = len(b)
    x = np.zeros(n)

    # Iterative process
    for iteration in range(max_iterations):
        x_new = np.zeros_like(x)

        # Update each component of the solution vector
        for i in range(n):
            # Compute the sum for the current equation
            sum_ = b[i] - np.dot(A[i, :], x) + A[i, i] * x[i]
            x_new[i] = sum_ / A[i, i]

        # Check for convergence (if the difference is within tolerance)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            print(f"Converged in {iteration + 1} iterations.")
            return x_new

        # Update the solution vector for the next iteration
        x = x_new

    # If max iterations are reached without convergence, return the current result
    print("Max iterations reached, no convergence.")
    return x


def main():
    # Example system of equations:
    # 4x -  y + 2z = 4
    # 3x + 5y - 2z = 3
    # 2x -  y + 3z = 7
    A = [
        [4, -1, 2],
        [3, 5, -2],
        [2, -1, 3]
    ]
    b = [4, 3, 7]

    # Solve the system using the Gauss-Jacobi method
    solution = gauss_jacobi(A, b)

    # Display the solution
    print("The solution is:")
    for i, xi in enumerate(solution):
        print(f"x[{i+1}] = {xi:.6f}")


if __name__ == "__main__":
    main()
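
The Jacobi iteration is guaranteed to converge when the coefficient matrix is strictly diagonally dominant (a sufficient but not necessary condition); the sample matrix above is only weakly dominant in its last two rows, so checking this property before iterating can be informative. The helper below is an illustrative addition, not part of the original report.

import numpy as np

# Illustrative helper (assumed addition): report whether every row of A is
# strictly diagonally dominant, i.e. |a_ii| > sum of |a_ij| for j != i.
def is_strictly_diagonally_dominant(A):
    A = np.array(A, dtype=float)
    for i in range(len(A)):
        off_diagonal = np.sum(np.abs(A[i])) - abs(A[i, i])
        if abs(A[i, i]) <= off_diagonal:
            return False
    return True

print("Strictly diagonally dominant:",
      is_strictly_diagonally_dominant([[4, -1, 2], [3, 5, -2], [2, -1, 3]]))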

Output:
MODULE 2
1. Newton's Forward Interpolation Method
Algorithm for Newton's Forward Interpolation Method
Step 1: Start the program
Step 2: Read n (number of data points)
Step 3: For i = 0 to n − 1
Read x[i] and y[i][0]
End i
Step 4: Construct the forward difference table
For j = 1 to n − 1
For i = 0 to n − 1 − j
y[i][j] = y[i + 1][j − 1] − y[i][j − 1]
End i
End j
Step 5: Print the forward difference table
For i = 0 to n − 1
For j = 0 to n − 1 − i
Print y[i][j]
End j
Next line
End i
Step 6: Read a (point of interpolation)
Step 7: Assign h = x[1] − x[0] (step length)
Step 8: Assign u = (a − x[0])/h
Step 9: Assign sum = y[0][0] and p = 1.0
Step 10: For j = 1 to n − 1
p = p ∗ (u − j + 1)/j
sum = sum + p ∗ y[0][j]
End j
Step 11: Display a and sum
Step 12: Stop

Flowchart
Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

def newton_forward_difference(x_values, y_values, target_x):
    """
    Implements Newton's Forward Difference Formula for interpolation.

    Parameters:
    x_values (list): List of evenly spaced x values.
    y_values (list): List of corresponding f(x) values.
    target_x (float): The x value for which f(x) needs to be interpolated.

    Returns:
    float: Interpolated value of f(x) at target_x.
    """
    n = len(x_values)

    # Step 1: Create the forward difference table
    difference_table = [y_values[:]]  # Initialize with y_values as the first row
    for i in range(1, n):
        row = []
        for j in range(n - i):
            row.append(difference_table[i - 1][j + 1] - difference_table[i - 1][j])
        difference_table.append(row)

    # Step 2: Calculate u
    h = x_values[1] - x_values[0]  # Spacing between x values
    u = (target_x - x_values[0]) / h

    # Step 3: Apply the Newton forward difference formula
    result = y_values[0]  # Initialize with f(x0)
    u_term = 1            # Holds terms like u, u(u-1), u(u-1)(u-2), ...
    factorial = 1         # Factorial for the denominator
    for i in range(1, n):
        u_term *= (u - (i - 1))  # Update u term
        factorial *= i           # Update factorial
        result += (u_term * difference_table[i][0]) / factorial

    return result


# Example usage
if __name__ == "__main__":
    # Input data
    x_values = [1, 2, 3, 4, 5]      # x values
    y_values = [1, 8, 27, 64, 125]  # Corresponding f(x) values (f(x) = x^3)
    target_x = 3.5                  # Value of x to interpolate

    interpolated_value = newton_forward_difference(x_values, y_values, target_x)
    print(f"Interpolated value at x = {target_x}: {interpolated_value:.6f}")
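
Because the sample data are exact values of f(x) = x^3 and five points determine the degree-4 interpolating polynomial uniquely, the interpolant reproduces the cubic exactly; for instance the value at x = 3.5 should be 3.5^3 = 42.875 up to rounding. The small loop below is an illustrative addition that reuses the example data above.

# Illustrative check (not part of the original report): interpolating exact x^3
# data should reproduce x^3 at any point inside the table.
for t in (1.5, 2.5, 3.5, 4.5):
    approx = newton_forward_difference(x_values, y_values, t)
    print(f"x = {t}: interpolated = {approx:.6f}, exact x^3 = {t ** 3:.6f}")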

Output:

Newton's Backward Interpolation Method


Algorithm
Step 1: Start the program
Step 2: Read n (number of data points)
Step 3: For i = 0 to n − 1
Read x[i] and y[i][0]
End i
Step 4: Construct the backward difference table
For j = 1 to n − 1
For i = j to n − 1
y[i][j] = y[i][j − 1] − y[i − 1][j − 1]
End i
End j
Step 5: Print the backward difference table
For i = 0 to n − 1
For j = 0 to i
Print y[i][j]
End j
End i
Step 6: Read a (point of interpolation)
Step 7: Assign h = x[1] − x[0] (step length)
Step 8: Assign u = (a − x[n − 1])/h
Step 9: Assign sum = y[n − 1][0] and p = 1.0
Step 10: For j = 1 to n − 1
p = p ∗ (u + j − 1)/j
sum = sum + p ∗ y[n − 1][j]
End j
Step 11: Display a and sum
Step 12: Stop

Flowchart:

Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

def newton_backward_difference(x_values, y_values, target_x):
    """
    Implements Newton's Backward Difference Formula for interpolation.

    Parameters:
    x_values (list): List of evenly spaced x values.
    y_values (list): List of corresponding f(x) values.
    target_x (float): The x value for which f(x) needs to be interpolated.

    Returns:
    float: Interpolated value of f(x) at target_x.
    """
    n = len(x_values)

    # Step 1: Create the backward difference table
    difference_table = [y_values[:]]  # Initialize with y_values as the first row
    for i in range(1, n):
        row = []
        for j in range(n - i):
            row.append(difference_table[i - 1][j + 1] - difference_table[i - 1][j])
        difference_table.append(row)

    # Step 2: Calculate u
    h = x_values[1] - x_values[0]      # Spacing between x values
    u = (target_x - x_values[-1]) / h  # Based on the last x value

    # Step 3: Apply the Newton backward difference formula
    result = y_values[-1]  # Initialize with f(x_n)
    u_term = 1             # Holds terms like u, u(u+1), u(u+1)(u+2), ...
    factorial = 1          # Factorial for the denominator
    for i in range(1, n):
        u_term *= (u + (i - 1))  # Update u term
        factorial *= i           # Update factorial
        result += (u_term * difference_table[i][-1]) / factorial

    return result


# Example usage
if __name__ == "__main__":
    # Input data
    x_values = [0, 1, 2, 3, 4]   # x values
    y_values = [1, 2, 4, 8, 16]  # Corresponding f(x) values
    target_x = 3.5               # Value of x to interpolate

    interpolated_value = newton_backward_difference(x_values, y_values, target_x)
    print(f"Interpolated value at x = {target_x}: {interpolated_value:.6f}")
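
A convenient way to validate the routine is to compare it with the unique degree-4 polynomial fitted through the same five points: both represent the same interpolating polynomial, so their values at target_x should agree to rounding error. The sketch below is an illustrative addition that reuses x_values, y_values and target_x from the example above.

# Illustrative cross-check (not part of the original report): numpy.polyfit with
# degree n-1 reproduces the same interpolating polynomial.
import numpy as np
coeffs = np.polyfit(x_values, y_values, len(x_values) - 1)
print(f"polyfit value at x = {target_x}: {np.polyval(coeffs, target_x):.6f}")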

Output:

2. Lagrange Interpolation Method
Algorithm:
1. Start
2. Input: Ask the user for the number of known data points, n.
3. Input: Ask the user to input values of x and corresponding y. Store these values in arrays
x[] and y[] respectively.
4. Display: Show the input data of x[] and y[] to the user so that they can correct any incorrect
data or re-input any missing data.
5. Input: Ask the user to input the value of x at which the value of y is to be interpolated
(x_input).
6. Compute:
o Initialize y_result = 0.
o For i = 0 to n-1:
▪ Initialize term = 1.
▪ For j = 0 to n-1 (where j ≠ i):
▪ Compute term *= (x_input - x[j]) / (x[i] - x[j]).
▪ Update y_result += y[i] * term.
7. Output: Display the value of y_result corresponding to x_input.
8. Input: Ask the user to input 1 to run the program again.
9. If the user inputs 1, go back to step 2. Otherwise, proceed to step 10.
10. End

Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

import numpy as np

def lagrange_interpolation(x_points, y_points, x):
    """
    Perform Lagrange interpolation to find the value of the polynomial at a given point.

    Parameters:
    x_points (list or ndarray): The x-coordinates of the data points.
    y_points (list or ndarray): The y-coordinates of the data points.
    x (float): The x-value at which the interpolation is to be evaluated.

    Returns:
    float: The interpolated y-value corresponding to x.
    """
    n = len(x_points)
    result = 0.0
    for i in range(n):
        # Calculate the Lagrange basis polynomial L_i(x), scaled by y_i
        term = y_points[i]
        for j in range(n):
            if j != i:
                term *= (x - x_points[j]) / (x_points[i] - x_points[j])
        # Add the term to the result
        result += term
    return result


# Example usage
x_points = np.array([1, 2, 3])
y_points = np.array([1, 4, 9])  # This is y = x^2, for example
x_value = 2.5                   # The x-value where we want to interpolate

interpolated_value = lagrange_interpolation(x_points, y_points, x_value)
print(f"Interpolated value at x = {x_value}: {interpolated_value}")
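
Since the sample data lie on the parabola y = x^2, the three-point Lagrange polynomial is that parabola itself, so the interpolated value at x = 2.5 should equal 2.5^2 = 6.25. As a cross-check, the sketch below compares against SciPy's Lagrange helper; it is an illustrative addition and assumes SciPy is installed.

# Illustrative cross-check (not part of the original report, requires SciPy):
# scipy.interpolate.lagrange builds the same polynomial, so both values agree.
from scipy.interpolate import lagrange as scipy_lagrange
poly = scipy_lagrange(x_points, y_points)
print(f"SciPy Lagrange value at x = {x_value}: {poly(x_value)}")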
Output:
MODULE 3
1. Trapezoidal Method
Algorithm for the Trapezoidal Method
o Start

o Define and declare the function f(x)

o Input the initial boundary value x0, the final boundary value xn and the length of each interval h
o Calculate the number of strips, n = (final boundary value − initial boundary value)/length of interval
o Perform the following operations in a loop for i = 0 to n:
x[i] = x0 + i*h
y[i] = f(x[i])
print y[i]
o Initialize s = 0
o Do the following using a loop for i = 1 to n − 1:
s = s + y[i]
o ans = h/2 * (y[0] + y[n] + 2*s)
o Print ans
o Stop

Flowchart
Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

import numpy as np

def trapezoidal_rule(f, a, b):
    """
    Approximates the definite integral of a function f using the Trapezoidal Rule.

    Parameters:
    f (function): The integrand, a function of x.
    a (float): The lower limit of integration.
    b (float): The upper limit of integration.

    Returns:
    float: The approximated integral of f from a to b.
    """
    # Apply the Trapezoidal Rule formula
    integral = (b - a) / 2 * (f(a) + f(b))

    return integral


# Example usage
def example_function(x):
    return np.sin(x)  # Example: f(x) = sin(x)

a = 0      # Lower limit of integration
b = np.pi  # Upper limit of integration

result = trapezoidal_rule(example_function, a, b)

print(f"Approximated integral of sin(x) from {a} to {b}: {result}")
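
Note that the function above applies the rule over the whole interval as a single strip, so for sin(x) on [0, pi] it returns (pi/2)*(sin 0 + sin pi) = 0, far from the true value 2. A minimal composite variant is sketched below as an illustrative addition (the function name composite_trapezoidal and the choice n = 100 are assumptions, not part of the report); it splits [a, b] into n strips and sums the individual trapezoids.

# Illustrative composite version (not part of the original report): split [a, b]
# into n strips, weight the end points by 1/2 and the interior points by 1.
def composite_trapezoidal(f, a, b, n=100):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

print(f"Composite estimate with 100 strips: {composite_trapezoidal(example_function, a, b, 100)}")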


Output:

2. Simpson’s 1/3rd rule:


Algorithm for Simpson’s 1/3rd rule:
1. Start
2. Define function f(x)
3. Read the lower limit of integration, the upper limit of
integration and the number of sub-intervals n (n must be even)
4. Calculate: step size h = (upper limit − lower limit)/n
5. Set: integration value = f(lower limit) + f(upper limit)
6. Set: i = 1
7. If i > n − 1 then go to step 11
8. Calculate: k = lower limit + i * h
9. If i mod 2 = 0 then
Integration value = Integration Value + 2 * f(k)
Otherwise
Integration Value = Integration Value + 4 * f(k)
End If
10. Increment i by 1, i.e. i = i + 1, and go to step 7
11. Calculate: Integration value = Integration value * step size/3
12. Display Integration value as required answer
13. Stop
Flowchart:

Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

def simpson_13(f, a, b, n):
    # Ensure that n is even
    if n % 2 == 1:
        n += 1  # If n is odd, make it even

    # Calculate the width of the subintervals
    h = (b - a) / n

    # Initialize the sums
    sum_odd = 0
    sum_even = 0

    # Sum of the odd-indexed terms (excluding the first and last term)
    for i in range(1, n, 2):
        sum_odd += f(a + i * h)

    # Sum of the even-indexed terms (excluding the first and last term)
    for i in range(2, n, 2):
        sum_even += f(a + i * h)

    # Calculate the final result using Simpson's 1/3rd rule
    result = (h / 3) * (f(a) + 4 * sum_odd + 2 * sum_even + f(b))

    return result


# Example usage:
import math

# Define the function to integrate
def f(x):
    return math.sin(x)

# Integration limits
a = 0
b = math.pi

# Number of subintervals (must be even)
n = 6

# Call Simpson's 1/3 rule
result = simpson_13(f, a, b, n)
print(f"Approximate value of the integral: {result}")
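
The exact value of the integral of sin(x) over [0, pi] is 2, and the error of Simpson's 1/3rd rule shrinks like h^4, so doubling n should cut the error by roughly a factor of 16. The small loop below is an illustrative addition that reuses f, a and b from the example above.

# Illustrative convergence check (not part of the original report): the exact
# integral is 2, and the error should fall roughly 16x each time n is doubled.
for n_test in (2, 4, 8, 16):
    approx = simpson_13(f, a, b, n_test)
    print(f"n = {n_test:2d}: approximation = {approx:.8f}, error = {abs(approx - 2):.2e}")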
Output:
MODULE 4
1. Taylor Series
Taylor Series Method Algorithm
1. Start
2. Input: the function f(x, y) defining the differential equation y' = f(x, y), together with expressions for the
higher derivatives y'' and y''' (obtained by differentiating f).
3. Input: the initial condition x0, y0, the step size h and the number of steps.
4. Initialize: set x = x0 and y = y0.
5. Compute: for each step:
o Evaluate y' = f(x, y), y'' and y''' at the current point (x, y).
o Compute the next value from the truncated Taylor expansion:
y_next = y + h*y' + (h^2/2!)*y'' + (h^3/3!)*y'''
o Update x = x + h and y = y_next.
6. Output: display the computed (x, y) values at each step.
7. End

Flowchart:
Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

import math

# Example: Solve y'(x) = y - x with y(0) = 1 using the Taylor series method

# Function representing the differential equation: y'(x) = f(x, y)
def f(x, y):
    return y - x  # Example equation: y' = y - x

# Second derivative: y'' = d/dx(y - x) = y' - 1 = (y - x) - 1
def f_prime(x, y):
    return y - x - 1

# Third derivative: y''' = d/dx(y'') = y'' = (y - x) - 1
def f_double_prime(x, y):
    return y - x - 1

# Taylor series method
def taylor_series_method(f, f_prime, f_double_prime, y0, x0, h, steps):
    x = x0
    y = y0

    print(f"x = {x}, y = {y}")  # Initial values

    # Iterative computation for each step
    for step in range(steps):
        # Calculate the next y using the truncated Taylor series expansion
        y_next = (y + h * f(x, y)
                  + (h**2 / 2) * f_prime(x, y)
                  + (h**3 / 6) * f_double_prime(x, y))

        # Update x and y
        x += h
        y = y_next

        # Output the current x and y values
        print(f"x = {x}, y = {y}")

    return x, y

# Example usage:
y0 = 1      # Initial condition y(0) = 1
x0 = 0      # Initial x value
h = 0.1     # Step size
steps = 10  # Number of iterations

# Solve the ODE y' = y - x using the Taylor series method
taylor_series_method(f, f_prime, f_double_prime, y0, x0, h, steps)
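
For this particular equation the exact solution is y = x + 1 (its derivative is 1, which equals y − x), so the numerical values can be checked directly against it. The short check below is an illustrative addition that reuses the functions and parameters defined above.

# Illustrative check (not part of the original report): the exact solution of
# y' = y - x with y(0) = 1 is y = x + 1, so the computed y should track x + 1.
x_end, y_end = taylor_series_method(f, f_prime, f_double_prime, y0, x0, h, steps)
print(f"Computed y({x_end:.1f}) = {y_end:.6f}, exact value = {x_end + 1:.6f}")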

Output:

2. Euler's Method
Algorithm
1. Start
2. Define the function f(x, y)
3. Get the values of x0, y0, h and xn
*Here x0 and y0 are the initial conditions,
h is the step size,
xn is the point at which the solution is required
4. n = (xn − x0)/h
5. Start a loop from i = 1 to n
6. y = y0 + h*f(x0, y0)
x = x0 + h
7. Print the values of x and y
8. Check if x < xn
If yes, assign x0 = x and y0 = y
If no, go to step 9
9. End loop i
10. Stop

Flowchart

Code:

#Name: Shalini L
#USN : 4SF21CS144
#Course Name : Numerical Methods And Applications

def euler_method(f, x0, y0, h, n):
    """
    Implements Euler's Method for solving an ODE.

    Parameters:
    f (function): Function defining the ODE dy/dx = f(x, y).
    x0 (float): Initial x value.
    y0 (float): Initial y value.
    h (float): Step size.
    n (int): Number of steps.

    Returns:
    list: List of (x, y) values approximating the solution.
    """
    results = [(x0, y0)]  # Store initial values
    for _ in range(n):
        # Current values
        x, y = results[-1]
        # Compute the next value using Euler's formula
        y_next = y + h * f(x, y)
        x_next = x + h
        # Append the result
        results.append((x_next, y_next))

    return results


# Example usage
if __name__ == "__main__":
    # Define the ODE dy/dx = f(x, y)
    def f(x, y):
        return y - x**2 + 1  # Example: dy/dx = y - x^2 + 1

    # Initial conditions
    x0 = 0    # Initial x
    y0 = 0.5  # Initial y
    h = 0.2   # Step size
    n = 10    # Number of steps

    # Compute the solution
    results = euler_method(f, x0, y0, h, n)

    # Print the results
    for x, y in results:
        print(f"x = {x:.2f}, y = {y:.6f}")
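
This initial value problem has the closed-form solution y = (x + 1)^2 − 0.5*e^x, which makes the first-order (O(h)) error of Euler's method easy to see. The comparison below is an illustrative addition that reuses the results list from the example above.

import math

# Illustrative accuracy check (not part of the original report): compare Euler's
# values against the exact solution y(x) = (x + 1)**2 - 0.5*exp(x).
for x, y in results:
    exact = (x + 1) ** 2 - 0.5 * math.exp(x)
    print(f"x = {x:.2f}, euler = {y:.6f}, exact = {exact:.6f}, error = {abs(y - exact):.6f}")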

Output:
MODULE 5
1. Laplace Equation
Algorithm for Solving the Laplace Equation
Step 1: Start
Step 2: Input the number of grid points in the x-direction (nx) and the y-direction (ny).
Step 3: Input the tolerance (tol) for convergence and the maximum number of iterations (max_iter).
Step 4: Initialize the grid:
• Create a grid with ny rows and nx columns, initializing all values to 0.

• Set the boundary conditions:


o Set the top boundary (first row) to a specified value (e.g., 1).
o Set the bottom boundary (last row) to a specified value (e.g., 0).
o Set the left boundary (first column) to a specified value (e.g., 0).
o Set the right boundary (last column) to a specified value (e.g., 0).
Step 5: Iterate until convergence or maximum iterations:
• Repeat the following steps for a number of iterations up to max_iter:

o Copy the current grid to another temporary grid (u_old).


o For each interior point (i.e., points not on the boundary):
▪ Update the value of the grid at that point using the average of its
neighboring points (up, down, left, right).
▪ u[i,j]=1/4(u[i+1,j]+u[i−1,j]+u[i,j+1]+u[i,j−1])
Step 6: Check for convergence:
• Calculate the maximum difference between the current grid and the previous grid.

• If the maximum difference is smaller than the tolerance (tol), print "Converged" and stop
the iteration.
Step 7: Check for maximum iterations:
• If the iteration reaches max_iter without convergence, print "Maximum iterations reached
without convergence" and stop.
Step 8: Return the final grid with the solution to the Laplace equation.
Step 9: End

Flowchart:
Code:
import numpy as np

def laplace_solver(nx, ny, tol=1e-5, max_iter=5000):
    """
    Solves the 2D Laplace equation using the finite difference method.

    Parameters:
    nx (int): Number of grid points in the x-direction.
    ny (int): Number of grid points in the y-direction.
    tol (float): Convergence tolerance.
    max_iter (int): Maximum number of iterations.

    Returns:
    np.ndarray: Solution grid for the Laplace equation.
    """
    # Initialize the solution grid
    u = np.zeros((ny, nx))

    # Boundary conditions
    u[0, :] = 1   # Top boundary
    u[-1, :] = 0  # Bottom boundary
    u[:, 0] = 0   # Left boundary
    u[:, -1] = 0  # Right boundary

    # Iterative solver
    for iteration in range(max_iter):
        u_old = u.copy()

        # Update interior points using the finite difference formula
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                u[i, j] = 0.25 * (u_old[i+1, j] + u_old[i-1, j]
                                  + u_old[i, j+1] + u_old[i, j-1])

        # Check for convergence
        diff = np.max(np.abs(u - u_old))
        if diff < tol:
            print(f"Converged after {iteration + 1} iterations.")
            break
    else:
        print("Maximum iterations reached without convergence.")

    return u


# Example usage
if __name__ == "__main__":
    nx, ny = 20, 20  # Grid size
    solution = laplace_solver(nx, ny)

    # Print the solution
    print("Solution grid:")
    print(solution)

    # Visualization
    try:
        import matplotlib.pyplot as plt
        plt.imshow(solution, origin="lower", cmap="viridis", extent=[0, 1, 0, 1])
        plt.colorbar(label="u(x, y)")
        plt.title("Solution to Laplace's Equation")
        plt.xlabel("x")
        plt.ylabel("y")
        plt.show()
    except ImportError:
        print("Matplotlib not installed. Skipping visualization.")
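
The nested Python loops over interior points dominate the run time for larger grids. An equivalent vectorized Jacobi sweep using NumPy slicing is sketched below as an illustrative addition (the function name jacobi_sweep is an assumption); it performs the same neighbour-averaging update for all interior points at once.

import numpy as np

# Illustrative vectorized sweep (assumed addition): all interior points are
# replaced by the average of their four neighbours; boundaries stay untouched.
def jacobi_sweep(u_old):
    u_new = u_old.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u_old[2:, 1:-1] + u_old[:-2, 1:-1]
                                + u_old[1:-1, 2:] + u_old[1:-1, :-2])
    return u_new

Replacing the body of the iteration loop with u = jacobi_sweep(u_old) should leave the results unchanged while typically running much faster on large grids.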

Output:
2. Poisson Equation

Algorithm for Solving the Poisson Equation


Step 1: Start
Step 2: Input:
• nx: Number of grid points in the x-direction.

• ny: Number of grid points in the y-direction.


• f(x, y): The source term function (right-hand side of the Poisson equation).
• tol: Convergence tolerance (default value 1e-5).
• max_iter: Maximum number of iterations (default value 5000).
Step 3: Initialize the grid:
• Create a 2D array u of size (ny, nx) and initialize all values to 0.

• Define grid spacing h = 1.0 / (nx - 1).


Step 4: Create the mesh for coordinates:
• Generate 1D arrays x and y representing the grid points in the x and y directions, both
ranging from 0 to 1.
• Use np.meshgrid(x, y) to create a 2D grid X and Y.
Step 5: Compute the source term F:
• Calculate F = f(X, Y), where f is the source term function.

Step 6: Iterative solver:


• For each iteration (from 1 to max_iter):

o Save a copy of the current grid u as u_old for comparison.


o For each interior grid point (i, j):
▪ Update u[i, j] using the finite difference formula:
u[i,j] = 0.25 × (u[i+1,j] + u[i−1,j] + u[i,j+1] + u[i,j−1] − h^2 × F[i,j])
Step 7: Check for convergence:
• Calculate the maximum difference diff = max(abs(u − u_old)).
• If diff < tol, print "Converged after X iterations", stop the iteration and go to Step 9.
Step 8: Check for maximum iterations:
• If max_iter iterations are reached without convergence, print "Maximum iterations reached
without convergence" and stop.
Step 9: Return the solution grid u.
Step 10: Stop
Flowchart:

Code:
import numpy as np

def poisson_solver(nx, ny, f, tol=1e-5, max_iter=5000):
    """
    Solves the 2D Poisson equation using the finite difference method.

    Parameters:
    nx (int): Number of grid points in the x-direction.
    ny (int): Number of grid points in the y-direction.
    f (function): Function representing the source term f(x, y).
    tol (float): Convergence tolerance.
    max_iter (int): Maximum number of iterations.

    Returns:
    np.ndarray: Solution grid for the Poisson equation.
    """
    # Initialize the solution grid
    u = np.zeros((ny, nx))

    # Define grid spacing (assuming a unit square domain)
    h = 1.0 / (nx - 1)

    # Create a grid for evaluating f(x, y)
    x = np.linspace(0, 1, nx)
    y = np.linspace(0, 1, ny)
    X, Y = np.meshgrid(x, y)

    # Compute the source term on the grid
    F = f(X, Y)

    # Iterative solver
    for iteration in range(max_iter):
        u_old = u.copy()

        # Update interior points using the finite difference formula
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                u[i, j] = 0.25 * (u_old[i+1, j] + u_old[i-1, j]
                                  + u_old[i, j+1] + u_old[i, j-1]
                                  - h**2 * F[i, j])

        # Check for convergence
        diff = np.max(np.abs(u - u_old))
        if diff < tol:
            print(f"Converged after {iteration + 1} iterations.")
            break
    else:
        print("Maximum iterations reached without convergence.")

    return u


# Example usage
if __name__ == "__main__":
    # Define the source term f(x, y)
    def f(x, y):
        return 2 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)  # Example: 2*pi^2 * sin(pi*x) * sin(pi*y)

    # Grid size
    nx, ny = 40, 40  # Higher resolution grid

    # Solve the Poisson equation
    solution = poisson_solver(nx, ny, f)

    # Print the solution
    print("Solution grid:")
    print(solution)
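
Because the source term in the example has a simple closed form, the numerical result can be checked against an analytical solution: with the sign convention used by the update formula above (the discrete Laplacian of u equals f), f = 2*pi^2*sin(pi*x)*sin(pi*y) and zero boundary values give u(x, y) = -sin(pi*x)*sin(pi*y). The comparison sketch below is an illustrative addition that reuses nx, ny and solution from the example above.

# Illustrative accuracy check (not part of the original report): compare with
# the exact solution u(x, y) = -sin(pi*x) * sin(pi*y) for this source term.
x = np.linspace(0, 1, nx)
y = np.linspace(0, 1, ny)
X, Y = np.meshgrid(x, y)
u_exact = -np.sin(np.pi * X) * np.sin(np.pi * Y)
print("Maximum absolute error:", np.max(np.abs(solution - u_exact)))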

Output:
