
DELHI TECHNOLOGICAL UNIVERSITY

Scientific Computing (MC204)

Lab Report
Submitted by:
Inderjeet (2K23/MC/92)

Submitted To:
Prof. Sangita Kansal & Ms. Anju

DEPARTMENT OF APPLIED MATHEMATICS

DELHI TECHNOLOGICAL UNIVERSITY

(FORMERLY DELHI COLLEGE OF ENGINEERING)

BAWANA ROAD, DELHI – 110042


INDEX
S. No.  Date     Experiment
01      10/1/25  Write a program to implement Method of Successive Substitution.
02      17/1/25  Write a program to implement the following methods: (1) Bisection Method, (2) Regula Falsi Method, (3) Secant Method, (4) Newton Raphson Method.
03      24/1/25  Write a program to implement Gauss Elimination Method with and without Partial Pivoting.
04      31/1/25  Write a program to implement Gauss Seidel and Jacobi Iterative Methods.
05      7/2/25   Write a program to implement Power Method for finding maximum eigenvalue and corresponding eigenvector.
06      14/2/25  Write a program to create Forward and Backward Difference Table from given data.
07      28/2/25  Write a program to implement Lagrange's Method of Interpolation.
08      21/3/25  Write a program to implement Trapezoid and Simpson's 1/3rd Rule for Integration.
09      28/3/25  Write a program to implement Runge Kutta Method for solving ODE.
10      4/4/25   Write a program to implement Picard's Method for solving ODE.

Experiment 1

Aim: Write a program to implement Method of Successive Substitution.

Theory: The Method of Successive Substitution, also known as Fixed Point Iteration, is used to solve nonlinear equations by rewriting them in the form x = g(x). Starting from an initial guess x0, the method iteratively computes x(n+1) = g(x(n)) until successive values agree to within the tolerance.
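The iteration converges locally when |g'(x)| < 1 near the fixed point. A quick numerical check for the g(x) used below (a sketch; the derivative is estimated by a central difference at the root reported in the output):

g = @(x) sqrt((1 - x^3)/4);
xs = 0.472834; h = 1e-6; % fixed point taken from the output below
gp = (g(xs + h) - g(xs - h)) / (2*h); % central-difference estimate of g'
fprintf('|g''(x*)| = %.3f (< 1, so the iteration converges)\n', abs(gp));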

Code
function [root, iter] = successive_substitution(g, x0, tol, max_iter)
% SUCCESSIVE_SUBSTITUTION: Solves x = g(x) using fixed-point iteration

    iter = 0;
    fprintf('Iter\t x\t\t g(x)\t\t Error\n');
    fprintf('---------------------------------------------\n');

    while iter < max_iter
        x1 = g(x0);
        err = abs(x1 - x0);

        fprintf('%d\t %.6f\t %.6f\t %.6f\n', iter, x0, x1, err);

        if err < tol
            root = x1;
            fprintf('Converged to %.6f in %d iterations.\n', root, iter + 1);
            return;
        end

        x0 = x1;
        iter = iter + 1;
    end

    root = x1;
    fprintf('Max iterations reached. Approximate root: %.6f\n', root);
end

Input
g = @(x) sqrt((1 - x^3)/4); % Rearranged from x^3 + 4x^2 - 1 = 0
x0 = 0.5; % Initial guess
tol = 1e-6; % Tolerance
max_iter = 50; % Max iterations
[root, iter] = successive_substitution(g, x0, tol, max_iter);
fprintf('Root: %.6f found in %d iterations\n', root, iter);

Output
Iter x g(x) Error
---------------------------------------------
0 0.500000 0.467707 0.032293
1 0.467707 0.473732 0.006025
2 0.473732 0.472674 0.001058
3 0.472674 0.472862 0.000188
4 0.472862 0.472829 0.000033
5 0.472829 0.472835 0.000006
6 0.472835 0.472834 0.000001
7 0.472834 0.472834 0.000000
Converged to 0.472834 in 8 iterations.
Root: 0.472834 found in 7 iterations
Experiment 2

Aim: Write a program to implement the following methods:


1. Bisection Method
2. Regula Falsi Method
3. Secant Method
4. Newton Raphson Method

1. Bisection Method

Theory: The Bisection Method is a straightforward and reliable numerical technique for finding roots of nonlinear equations. It works by repeatedly halving an interval [a, b] on which the function changes sign (f(a)·f(b) < 0). At each step, the subinterval containing the sign change is kept, which guarantees convergence to a root.
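Since the bracketing interval halves at every step, about log2((b − a)/tol) iterations are needed to reach a tolerance tol; for [a, b] = [−2, −1] and tol = 1e-6 this is roughly 20 iterations, consistent with the run shown below.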

Code
function [root, iter] = bisection_method(f, a, b, tol, max_iter)
% BISECTION_METHOD: Finds a root of f(x) = 0 using the Bisection Method

    fa = f(a);
    fb = f(b);

    % Check validity of the initial interval
    if fa * fb >= 0
        error('Invalid interval: f(a) = %.6f, f(b) = %.6f — must have opposite signs.', fa, fb);
    end

    fprintf('Iter\t a\t\t b\t\t c\t\t f(c)\n');
    fprintf('------------------------------------------------------------\n');

    iter = 0;
    while iter < max_iter
        c = (a + b) / 2; % Midpoint
        fc = f(c);

        fprintf('%d\t %.6f\t %.6f\t %.6f\t %.6f\n', iter, a, b, c, fc);

        if abs(fc) < tol || (b - a) / 2 < tol
            root = c;
            fprintf('Converged to root: %.6f in %d iterations.\n', root, iter + 1);
            return;
        end

        % Update interval, keeping the sign change bracketed
        if fa * fc < 0
            b = c;
            fb = fc;
        else
            a = c;
            fa = fc;
        end

        iter = iter + 1;
    end

    % If max iterations reached
    root = (a + b) / 2;
    fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end

Input
f = @(x) x^3 - 4*x^2 + x + 6; % Define the function
a = -2; % Left endpoint of interval
b = -1; % Right endpoint of interval
tol = 1e-6; % Tolerance
max_iter = 50; % Max number of iterations

[root, iter] = bisection_method(f, a, b, tol, max_iter);


fprintf('Root: %.6f found in %d iterations\n', root, iter);

Output
Iter a b c f(c)
------------------------------------------------------------
0 -2.000000 -1.000000 -1.500000 0.375000
1 -1.500000 -1.000000 -1.250000 1.703125
2 -1.500000 -1.250000 -1.375000 0.945312
3 -1.500000 -1.375000 -1.437500 0.615845
4 -1.500000 -1.437500 -1.468750 0.492573
5 -1.500000 -1.468750 -1.484375 0.433604
6 -1.500000 -1.484375 -1.492188 0.404652
7 -1.500000 -1.492188 -1.496094 0.389064
8 -1.500000 -1.496094 -1.498047 0.381258
9 -1.500000 -1.498047 -1.499023 0.377362
10 -1.500000 -1.499023 -1.499512 0.375439
11 -1.500000 -1.499512 -1.499756 0.374484
12 -1.500000 -1.499756 -1.499878 0.374006
13 -1.500000 -1.499878 -1.499939 0.373768
14 -1.500000 -1.499939 -1.499969 0.373649
15 -1.500000 -1.499969 -1.499985 0.373590
16 -1.500000 -1.499985 -1.499992 0.373561
17 -1.500000 -1.499992 -1.499996 0.373547
18 -1.500000 -1.499996 -1.499998 0.373540
19 -1.500000 -1.499998 -1.499999 0.373537
20 -1.500000 -1.499999 -1.500000 0.373536

Converged to -1.500000 in 21 iterations.


Root: -1.500000 found in 21 iterations

2. Regula Falsi Method

Theory: The Regula Falsi Method, or False Position Method, is a numerical technique to find roots of a function. It improves upon the Bisection Method by using a straight-line approximation between two points to estimate the root more accurately.
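At each step the new estimate is the x-intercept of the chord through (a, f(a)) and (b, f(b)):
x = (a·f(b) − b·f(a)) / (f(b) − f(a))
The endpoint whose function value has the same sign as f(x) is then replaced, so the root stays bracketed.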

Code
function [root, iter] = regula_falsi_method(f, a, b, tol, max_iter)
% REGULA_FALSI_METHOD: Finds a root of f(x) = 0 using Regula Falsi Method

    fa = f(a);
    fb = f(b);

    % Check if valid initial interval
    if fa * fb >= 0
        error('Invalid interval: f(a) = %.6f, f(b) = %.6f — must have opposite signs.', fa, fb);
    end

    fprintf('Iter\t a\t\t b\t\t x\t\t f(x)\n');
    fprintf('------------------------------------------------------------\n');

    iter = 0;
    while iter < max_iter
        % Regula Falsi formula: x-intercept of the chord
        x = (a * fb - b * fa) / (fb - fa);
        fx = f(x);

        fprintf('%d\t %.6f\t %.6f\t %.6f\t %.6f\n', iter, a, b, x, fx);

        if abs(fx) < tol
            root = x;
            fprintf('Converged to root: %.6f in %d iterations.\n', root, iter + 1);
            return;
        end

        % Update interval, keeping the sign change bracketed
        if fa * fx < 0
            b = x;
            fb = fx;
        else
            a = x;
            fa = fx;
        end

        iter = iter + 1;
    end

    root = x;
    fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end

Input
f = @(x) x^3 - 4*x^2 + x + 6; % Define the function
a = -2; % Left endpoint of interval
b = -1; % Right endpoint of interval
tol = 1e-6; % Tolerance
max_iter = 50; % Maximum iterations

[root, iter] = regula_falsi_method(f, a, b, tol, max_iter);


fprintf('Root: %.6f found in %d iterations\n', root, iter);

Output
Iter a b x f(x)
------------------------------------------------------------
0 -2.000000 -1.000000 -1.333333 0.962963
1 -1.333333 -1.000000 -1.466667 0.264410
2 -1.466667 -1.000000 -1.506493 0.067140
3 -1.506493 -1.000000 -1.516891 0.016649
4 -1.516891 -1.000000 -1.519518 0.004117
5 -1.519518 -1.000000 -1.520195 0.001017
6 -1.520195 -1.000000 -1.520365 0.000252
7 -1.520365 -1.000000 -1.520406 0.000062
8 -1.520406 -1.000000 -1.520416 0.000015
9 -1.520416 -1.000000 -1.520418 0.000004
10 -1.520418 -1.000000 -1.520418 0.000001
11 -1.520418 -1.000000 -1.520419 0.000000

Converged to root: -1.520419 in 12 iterations.


Root: -1.520419 found in 12 iterations

3. Secant Method

Theory: The Secant Method is an iterative numerical technique for finding roots of nonlinear equations. It uses two initial approximations and draws a secant line through them to estimate the root, offering faster convergence than the Bisection or Regula Falsi methods.
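Given two previous iterates x(n−1) and x(n), the update is
x(n+1) = x(n) − f(x(n)) · (x(n) − x(n−1)) / (f(x(n)) − f(x(n−1)))
which is Newton's formula with the derivative replaced by a finite-difference slope, so no analytic derivative is required.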

Code
function [root, iter] = secant_method(f, x0, x1, tol, max_iter)
% SECANT_METHOD: Finds a root of f(x) = 0 using the Secant Method

    fprintf('Iter\t x0\t\t x1\t\t x2\t\t f(x2)\n');
    fprintf('------------------------------------------------------------\n');

    iter = 0;
    while iter < max_iter
        f0 = f(x0);
        f1 = f(x1);

        if f1 - f0 == 0
            error('Division by zero in Secant formula.');
        end

        % Secant formula
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
        f2 = f(x2);

        fprintf('%d\t %.6f\t %.6f\t %.6f\t %.6f\n', iter, x0, x1, x2, f2);

        % Check for convergence
        if abs(f2) < tol
            root = x2;
            fprintf('Converged to root: %.6f in %d iterations.\n', root, iter + 1);
            return;
        end

        % Update guesses
        x0 = x1;
        x1 = x2;

        iter = iter + 1;
    end

    root = x2;
    fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end

Input
% Define the function
f = @(x) x^3 - 4*x^2 + x + 6;

% Initial guesses
x0 = -2;
x1 = -1;

% Tolerance and maximum iterations


tol = 1e-6;
max_iter = 50;

% Call the Secant method


[root, iter] = secant_method(f, x0, x1, tol, max_iter);

% Display result
fprintf('Root: %.6f found in %d iterations\n', root, iter);

Output
Iter x0 x1 x2 f(x2)
------------------------------------------------------------
0 -2.000000 -1.000000 -1.333333 0.962963
1 -1.000000 -1.333333 -1.467742 0.264267
2 -1.333333 -1.467742 -1.507001 0.066859
3 -1.467742 -1.507001 -1.517222 0.016541
4 -1.507001 -1.517222 -1.519747 0.004073
5 -1.517222 -1.519747 -1.520350 0.001002
6 -1.519747 -1.520350 -1.520497 0.000247
7 -1.520350 -1.520497 -1.520533 0.000061
8 -1.520497 -1.520533 -1.520542 0.000015
9 -1.520533 -1.520542 -1.520544 0.000004
10 -1.520542 -1.520544 -1.520545 0.000001
11 -1.520544 -1.520545 -1.520545 0.000000

Converged to root: -1.520545 in 12 iterations.


Root: -1.520545 found in 12 iterations

4. Newton Raphson Method

Theory: The Newton-Raphson Method is a fast and efficient iterative technique to find roots of a function. It uses the tangent line at the current guess and updates the root using the formula:
x(n+1) = x(n) − f(x(n)) / f'(x(n))
It converges quickly when the initial guess is close to the actual root.
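The update follows from setting the tangent line at x(n), y = f(x(n)) + f'(x(n))·(x − x(n)), equal to zero and solving for x. Near a simple root the convergence is quadratic: the number of correct digits roughly doubles at each step.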

Code
function [root, iter] = newton_raphson_method(f, df, x0, tol, max_iter)
% NEWTON_RAPHSON_METHOD: Finds a root using the Newton-Raphson Method

    fprintf('Iter\t x\t\t f(x)\t\t f''(x)\n');
    fprintf('----------------------------------------------\n');

    iter = 0;
    while iter < max_iter
        fx = f(x0);
        dfx = df(x0);

        if dfx == 0
            error('Derivative is zero. Cannot continue.');
        end

        x1 = x0 - fx / dfx;

        fprintf('%d\t %.6f\t %.6f\t %.6f\n', iter, x0, fx, dfx);

        if abs(x1 - x0) < tol
            root = x1;
            fprintf('Converged to root: %.6f in %d iterations.\n', root, iter + 1);
            return;
        end

        x0 = x1;
        iter = iter + 1;
    end

    root = x0;
    fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end

Input
% Define the function and its derivative
f = @(x) x^3 - 4*x^2 + x + 6;
df = @(x) 3*x^2 - 8*x + 1;

% Initial guess
x0 = -2;

% Tolerance and maximum iterations


tol = 1e-6;
max_iter = 50;

% Call Newton-Raphson method


[root, iter] = newton_raphson_method(f, df, x0, tol, max_iter);

% Display result
fprintf('Root: %.6f found in %d iterations\n', root, iter);

Output
Iter x f(x) f'(x)
----------------------------------------------
0 -2.000000 -6.000000 25.000000
1 -1.760000 -2.800576 18.790400
2 -1.611084 -1.109246 15.204682
3 -1.537988 -0.261324 13.490991
4 -1.518627 -0.020078 13.061537
5 -1.517088 -0.000164 13.025382
6 -1.517075 -0.000000 13.025056

Converged to root: -1.517075 in 7 iterations.


Root: -1.517075 found in 7 iterations
Experiment 3

Aim: Write a program to implement Gauss Elimination Method with and without Partial Pivoting.

Theory: Gauss Elimination is used to solve linear equations by reducing the system to upper triangular form and then applying back-substitution.
• Without pivoting, the given row order is used directly.
• With partial pivoting, rows are swapped to place the largest available coefficient in the pivot position, to avoid numerical errors and improve accuracy.
Code
Gauss Elimination Without Partial Pivoting
function x = gauss_elimination(A, b)
% GAUSS_ELIMINATION: Solves Ax = b using Gauss Elimination without pivoting
n = length(b);
Ab = [A b]; % Augmented matrix

% Forward Elimination
for k = 1:n-1
for i = k+1:n
factor = Ab(i,k)/Ab(k,k);
Ab(i,k:end) = Ab(i,k:end) - factor * Ab(k,k:end);
end
end

% Back Substitution
x = zeros(n,1);
x(n) = Ab(n,end)/Ab(n,n);
for i = n-1:-1:1
x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);
end
end

Gauss Elimination With Partial Pivoting


function x = gauss_elimination_pivoting(A, b)
% GAUSS_ELIMINATION_PIVOTING: Solves Ax = b with partial pivoting
n = length(b);
Ab = [A b]; % Augmented matrix

% Forward Elimination with Pivoting


for k = 1:n-1
% Partial Pivoting
[~, idx] = max(abs(Ab(k:n,k)));
idx = idx + k - 1;
if idx ~= k
Ab([k idx], :) = Ab([idx k], :); % Swap rows
end
for i = k+1:n
factor = Ab(i,k)/Ab(k,k);
Ab(i,k:end) = Ab(i,k:end) - factor * Ab(k,k:end);
end
end

% Back Substitution
x = zeros(n,1);
x(n) = Ab(n,end)/Ab(n,n);
for i = n-1:-1:1
x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);
end
end

Input
A = [2 -1 1;
3 3 9;
3 3 5];
b = [2; -1; 4];

x1 = gauss_elimination(A, b);
disp('Solution without partial pivoting:');
disp(x1);

x2 = gauss_elimination_pivoting(A, b);
disp('Solution with partial pivoting:');
disp(x2);

Output
Solution without partial pivoting:
2.2222
1.1944
-1.2500

Solution with partial pivoting:
2.2222
1.1944
-1.2500
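Both runs give the same answer here because every pivot is well-scaled. Pivoting matters when a pivot is tiny; a minimal demonstration (a sketch, assuming the two functions above are saved on the MATLAB path):

% A tiny pivot ruins elimination without row swaps
A = [1e-17 1; 1 1];
b = [1; 2]; % exact solution is approximately [1; 1]
disp(gauss_elimination(A, b)); % returns about [0; 1] (wrong)
disp(gauss_elimination_pivoting(A, b)); % returns about [1; 1] (correct)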
Experiment 4

Aim: Write a program to implement Gauss Seidel and Jacobi Iterative Methods.

Theory:
Jacobi Method is an iterative technique to solve linear systems where each variable
is updated using only values from the previous iteration. It’s simple but may
converge slowly.
Gauss-Seidel Method improves on Jacobi by using the most recently updated
values immediately during the same iteration, often leading to faster convergence.
Both methods require the coefficient matrix to be diagonally dominant or
symmetric positive definite for guaranteed convergence.
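In both methods, the i-th equation of Ax = b is solved for x_i:
x_i = (b_i − Σ(j≠i) a_ij · x_j) / a_ii
Jacobi evaluates the right-hand side entirely with values from the previous iteration, while Gauss-Seidel substitutes each newly computed x_j as soon as it becomes available within the same sweep.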

Code
Jacobi Iterative Method
function x = jacobi_method(A, b, x0, tol, max_iter)
% JACOBI_METHOD: Solves Ax = b using the Jacobi iteration

    n = length(b);
    x = x0;
    x_new = zeros(n,1);

    fprintf('Iter\t x1\t\t x2\t\t x3 (if any)\n');

    for k = 1:max_iter
        for i = 1:n
            % Off-diagonal sum uses only values from the previous iteration
            s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x(i+1:n);
            x_new(i) = (b(i) - s) / A(i,i);
        end

        fprintf('%d\t %.6f\t %.6f\t %.6f\n', k, x_new(1), x_new(2), x_new(3));

        if norm(x_new - x, inf) < tol
            x = x_new; % return the converged iterate, not the previous one
            break;
        end
        x = x_new;
    end
end

Gauss-Seidel Iterative Method

function x = gauss_seidel_method(A, b, x0, tol, max_iter)
% GAUSS_SEIDEL_METHOD: Solves Ax = b using the Gauss-Seidel iteration

    n = length(b);
    x = x0;

    fprintf('Iter\t x1\t\t x2\t\t x3 (if any)\n');

    for k = 1:max_iter
        x_old = x;
        for i = 1:n
            % Uses freshly updated x(1:i-1) and previous x_old(i+1:n)
            s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x_old(i+1:n);
            x(i) = (b(i) - s) / A(i,i);
        end

        fprintf('%d\t %.6f\t %.6f\t %.6f\n', k, x(1), x(2), x(3));

        if norm(x - x_old, inf) < tol
            break;
        end
    end
end

Input
A = [4 -1 0; -1 4 -1; 0 -1 4];
b = [15; 10; 10];
x0 = [0; 0; 0]; % Initial guess
tol = 1e-6; % Tolerance
max_iter = 25; % Maximum iterations

disp('--- Jacobi Method ---');


x_jacobi = jacobi_method(A, b, x0, tol, max_iter);

disp('--- Gauss-Seidel Method ---');


x_gs = gauss_seidel_method(A, b, x0, tol, max_iter);

Output
--- Jacobi Method ---
Iter x1 x2 x3
1 3.750000 2.500000 2.500000
2 4.375000 4.062500 3.125000
3 4.765625 4.375000 3.515625
...
Converges to [4.910714, 4.642857, 3.660714]

--- Gauss-Seidel Method ---
Iter x1 x2 x3
1 3.750000 3.437500 3.359375
2 4.609375 4.492188 3.623047
3 4.873047 4.624023 3.656006
...
Converges faster to [4.910714, 4.642857, 3.660714]
Experiment 5

Aim: Write a program to implement Power Method for finding maximum eigenvalue and corresponding eigenvector.
Theory: The Power Method is an iterative technique used to find the largest
(dominant) eigenvalue and its corresponding eigenvector of a square matrix.
Starting with an initial guess vector, the method repeatedly multiplies it by the
matrix and normalizes the result. Over iterations, the vector aligns with the
dominant eigenvector, and the associated eigenvalue is approximated. This method
is simple and effective for large matrices but only finds the largest eigenvalue in
magnitude.

Code
function [lambda, eigenvector] = power_method(A, x0, tol, max_iter)
% POWER_METHOD: Finds the dominant eigenvalue and eigenvector

    x = x0 / norm(x0); % Normalize initial vector
    lambda = 0;

    fprintf('Iter\t Eigenvalue\t Eigenvector\n');

    for k = 1:max_iter
        y = A * x;
        lambda_new = x' * y; % Rayleigh quotient: since x has unit norm, x'*A*x
        x_new = y / norm(y);

        fprintf('%d\t %.6f\t [%.4f %.4f %.4f]\n', k, lambda_new, x_new);

        if abs(lambda_new - lambda) < tol
            break;
        end
        lambda = lambda_new;
        x = x_new;
    end

    lambda = lambda_new;
    eigenvector = x_new;
end

Input
A = [2 1 0;
1 3 1;
0 1 2];

x0 = [1; 1; 1]; % Initial guess vector


tol = 1e-6; % Tolerance
max_iter = 100; % Maximum iterations

[lambda, eigenvector] = power_method(A, x0, tol, max_iter);

fprintf('\nDominant Eigenvalue: %.6f\n', lambda);


fprintf('Corresponding Eigenvector: [%.4f %.4f %.4f]\n', eigenvector);

Output
Iter Eigenvalue Eigenvector
1 3.666667 [0.4575 0.7625 0.4575]
2 3.976744 [0.4209 0.8035 0.4209]
3 3.998536 [0.4114 0.8133 0.4114]
...
Dominant Eigenvalue: 4.000000
Corresponding Eigenvector: [0.4082 0.8165 0.4082]
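As a quick cross-check (a sketch using MATLAB's built-in eigensolver), the result can be compared against eig:

A = [2 1 0; 1 3 1; 0 1 2];
lams = eig(A); % all eigenvalues; for this matrix they are 1, 2 and 4
fprintf('Largest eigenvalue from eig: %.6f\n', max(abs(lams)));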
Experiment 6

Aim: Write a program to create Forward and Backward Difference Table from given data.

Theory:
Forward and Backward Difference Tables are used in numerical interpolation, especially for evenly spaced data.
• The Forward Difference Table calculates successive differences from top to bottom and is mainly used in Newton's Forward Interpolation.
• The Backward Difference Table calculates differences from bottom to top and is used in Newton's Backward Interpolation.
These tables help estimate values of a function between known data points efficiently.
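For example, with y = [1, 8, 27, 64, 125], the first forward differences are Δy_i = y_{i+1} − y_i = [7, 19, 37, 61], the second differences are [12, 18, 24], and so on. The backward differences ∇y_i = y_i − y_{i−1} produce the same numbers read along the other diagonal of the table, which is why the two tables printed below contain identical entries.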

Code
function difference_tables(x, y)
% DIFFERENCE_TABLES: Builds and prints forward/backward difference tables

    n = length(x);
    forward = zeros(n, n);
    backward = zeros(n, n);

    % First column is y values
    forward(:,1) = y(:);
    backward(:,1) = y(:);

    % Forward Difference Table: forward(i,j) is the (j-1)th difference starting at y_i
    for j = 2:n
        for i = 1:n-j+1
            forward(i,j) = forward(i+1,j-1) - forward(i,j-1);
        end
    end

    % Backward Difference Table: backward(i,j) is the (j-1)th difference ending at y_i
    for j = 2:n
        for i = n:-1:j
            backward(i,j) = backward(i,j-1) - backward(i-1,j-1);
        end
    end

    % Display tables in the usual staggered layout
    % (row i lists the differences that end at y_i)
    disp('--- Forward Difference Table ---');
    for i = 1:n
        for j = 1:i
            fprintf('%8.4f', forward(i-j+1, j));
        end
        fprintf('\n');
    end
    disp('--- Backward Difference Table ---');
    for i = 1:n
        fprintf('%8.4f', backward(i, 1:i));
        fprintf('\n');
    end
end

Input
x = [1 2 3 4 5];
y = [1 8 27 64 125]; % y = x^3

difference_tables(x, y);

Output
--- Forward Difference Table ---
1.0000
8.0000 7.0000
27.0000 19.0000 12.0000
64.0000 37.0000 18.0000 6.0000
125.0000 61.0000 24.0000 6.0000 0.0000

--- Backward Difference Table ---


1.0000
8.0000 7.0000
27.0000 19.0000 12.0000
64.0000 37.0000 18.0000 6.0000
125.0000 61.0000 24.0000 6.0000 0.0000
Experiment 7

Aim: Write a program to implement Lagrange’s Method of Interpolation.

Theory: Lagrange’s Interpolation Method is a polynomial interpolation technique used to estimate values between known data points. It forms a polynomial that passes exactly through all the given points. Each term in the polynomial is constructed so that it is 1 at its own x-value and 0 at all the others. It is particularly useful when the data points are not equally spaced.
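For nodes x_1, …, x_n, the interpolating polynomial is
P(x) = Σ y_i · L_i(x), where L_i(x) = Π(j≠i) (x − x_j) / (x_i − x_j)
Each basis polynomial L_i equals 1 at x_i and 0 at every other node, so P(x) passes exactly through all the data points.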

Code
function y_interp = lagrange_interpolation(x, y, x_interp)
n = length(x);
y_interp = 0;

for i = 1:n
L = 1;
for j = 1:n
if i ~= j
L = L * (x_interp - x(j)) / (x(i) - x(j));
end
end
y_interp = y_interp + y(i) * L;
end
end

Input
% Given data points
x = [1 2 3 4];
y = [1 4 9 16]; % y = x^2

% Point to interpolate
x_interp = 2.5;

% Interpolation using Lagrange method


y_interp = lagrange_interpolation(x, y, x_interp);

fprintf('Interpolated value at x = %.2f is y = %.4f\n', x_interp, y_interp);

Output
Interpolated value at x = 2.50 is y = 6.2500
Experiment 8

Aim: Write a program to implement Trapezoid and Simpson’s 1/3rd Rule for Integration.

Theory:
Trapezoidal Rule approximates the area under a curve by dividing it into
trapezoids and summing their areas. It works well for linear or nearly linear
functions over small intervals.
Simpson’s 1/3 Rule is a more accurate method that uses parabolic arcs to
approximate the curve. It requires an even number of subintervals.
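With step size h = (b − a)/n, the composite Trapezoidal Rule has error of order O(h²), while Simpson's 1/3 Rule achieves O(h⁴). This is why, for the integral of 1/(1 + x²) from 0 to 1 used below, Simpson's rule matches the exact value π/4 ≈ 0.785398 to six decimal places, while the trapezoidal result differs in the third decimal place.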

Code
function numerical_integration()
% Function to be integrated
f = @(x) 1 ./ (1 + x.^2); % Example: f(x) = 1 / (1 + x^2)

a = 0; % Lower limit
b = 1; % Upper limit
n = 6; % Number of intervals (must be even for Simpson’s 1/3)

% Trapezoidal Rule
h = (b - a) / n;
sum_trap = f(a) + f(b);
for i = 1:n-1
xi = a + i*h;
sum_trap = sum_trap + 2*f(xi);
end
I_trap = (h/2) * sum_trap;

% Simpson's 1/3 Rule


sum_simpson = f(a) + f(b);
for i = 1:2:n-1
xi = a + i*h;
sum_simpson = sum_simpson + 4*f(xi);
end
for i = 2:2:n-2
xi = a + i*h;
sum_simpson = sum_simpson + 2*f(xi);
end
I_simp = (h/3) * sum_simpson;

% Display Results
fprintf('Trapezoidal Rule Approximation: %.6f\n', I_trap);
fprintf('Simpson''s 1/3 Rule Approximation: %.6f\n', I_simp);
end

Input
f = @(x) 1 ./ (1 + x.^2); % Function: f(x) = 1 / (1 + x²)
a = 0; % Lower limit of integration
b = 1; % Upper limit of integration
n = 6; % Number of subintervals (must be even for Simpson’s 1/3 Rule)

Output
Trapezoidal Rule Approximation: 0.784241
Simpson's 1/3 Rule Approximation: 0.785398
Experiment 9

Aim: Write a program to implement Runge Kutta Method for solving ODE.

Theory:
The Runge-Kutta 4th order method (RK4) is a powerful numerical technique to
solve first-order ordinary differential equations (ODEs) of the form:
dy/dx=f(x,y) , y(x0)=y0
It calculates intermediate slopes (k-values) to get a more accurate next value of y.
RK4 balances accuracy and simplicity, making it widely used in numerical analysis.
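With step size h, each RK4 step evaluates four slopes:
k1 = h·f(xn, yn)
k2 = h·f(xn + h/2, yn + k1/2)
k3 = h·f(xn + h/2, yn + k2/2)
k4 = h·f(xn + h, yn + k3)
y(n+1) = yn + (k1 + 2·k2 + 2·k3 + k4)/6
For the example below (f(x, y) = x + y, y(0) = 1, h = 0.1), the first step gives k1 = 0.1, k2 = 0.11, k3 = 0.1105, k4 = 0.12105, so y(0.1) ≈ 1 + 0.66205/6 ≈ 1.110342, in agreement with the first row of the output.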

Code
function runge_kutta_ode()
% Define the differential equation dy/dx = f(x, y)
f = @(x, y) x + y; % Example: dy/dx = x + y

% Initial conditions
x0 = 0;
y0 = 1;

% Step size and final value


h = 0.1;
xn = 0.5;

% Number of steps (rounded to guard against floating-point division error)
n = round((xn - x0) / h);

% Runge-Kutta Iteration
fprintf('x\t\t y\n');
for i = 1:n
k1 = h * f(x0, y0);
k2 = h * f(x0 + h/2, y0 + k1/2);
k3 = h * f(x0 + h/2, y0 + k2/2);
k4 = h * f(x0 + h, y0 + k3);

y0 = y0 + (1/6)*(k1 + 2*k2 + 2*k3 + k4);


x0 = x0 + h;

fprintf('%.4f\t %.6f\n', x0, y0);


end
end

Input
% Differential Equation:
f = @(x, y) x + y;

% Initial Condition:
x0 = 0;
y0 = 1;

% Step Size:
h = 0.1;

% Final Value of x:
xn = 0.5;

Output
x y
0.1000 1.110341
0.2000 1.242805
0.3000 1.399717
0.4000 1.583649
0.5000 1.797447
Experiment 10

Aim: Write a program to implement Picard’s Method for solving ODE.

Theory: Picard’s Method is an iterative technique used to solve first-order initial value problems of the form:
dy/dx = f(x,y), y(x0) = y0
It builds successive approximations by integrating the function repeatedly, getting closer to the actual solution at each pass. It is more of a theoretical method but helps in understanding the concept of approximating ODE solutions.
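Each iterate is generated by
y(n+1)(x) = y0 + integral from x0 to x of f(t, y(n)(t)) dt
For dy/dx = x + y with y(0) = 1, starting from y0(x) = 1, the first iterate is y1(x) = 1 + x + x²/2, so y1(0.2) = 1.22.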

Code
function picard_method()
% PICARD_METHOD: Successive approximation for dy/dx = f(x, y), y(x0) = y0

    % Define the differential equation dy/dx = f(x, y)
    f = @(x, y) x + y; % Example: dy/dx = x + y

    % Initial condition
    x0 = 0;
    y0 = 1;

    % Number of iterations for approximation
    iterations = 4;

    % Target x where we want to approximate y
    x_val = 0.2;

    % Start with the initial guess y(x) = y0 (written to accept vector input)
    y_approx = @(x) y0 + 0*x;

    % Picard iteration: y_{n+1}(x) = y0 + integral of f(t, y_n(t)) from x0 to x
    for i = 1:iterations
        y_prev = y_approx; % capture the current approximation by value

        % arrayfun keeps the upper limit scalar when integral evaluates
        % the integrand at a vector of points (fine for a few iterations;
        % the nesting makes each additional iteration slower)
        y_approx = @(xv) y0 + arrayfun(@(xx) ...
            integral(@(t) f(t, y_prev(t)), x0, xx), xv);

        % Display current approximation at the target point
        fprintf('Iteration %d: y(%.1f) ≈ %.6f\n', i, x_val, y_approx(x_val));
    end
end

Input
Differential equation: dy/dx = x + y
Initial condition: y(0) = 1
Point of approximation: x = 0.2
Number of iterations: 4

Output
Iteration 1: y(0.2) ≈ 1.220000
Iteration 2: y(0.2) ≈ 1.241333
Iteration 3: y(0.2) ≈ 1.242733
Iteration 4: y(0.2) ≈ 1.242803
