SCIENTIFIC COMPUTING
UNIVERSITY
Lab Report
Submitted by:
Inderjeet (2K23/MC/92)
Submitted To:
Prof. Sangita Kansal & Ms. Anju
Experiment 1
Theory: The Method of Successive Substitution, also known as Fixed-Point Iteration, solves a nonlinear equation by rewriting it in the form x = g(x). Starting from an initial guess x0, the method computes x1 = g(x0), x2 = g(x1), and so on, until two successive values agree to within a specified tolerance.
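The g(x) used in the code below comes from rearranging the cubic x^3 + 4x^2 - 1 = 0: isolating 4x^2 and taking the square root gives

$$x = \sqrt{\frac{1 - x^3}{4}} = g(x),$$

and the iteration x_{k+1} = g(x_k) converges because |g'(x)| < 1 near the root x ≈ 0.4728.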
Code
function [root, iter] = successive_substitution(g, x0, tol, max_iter)
% successive_substitution: Solves x = g(x) using fixed-point iteration
iter = 0;
fprintf('Iter\t x\t\t g(x)\t\t Error\n');
fprintf('---------------------------------------------\n');
while iter < max_iter
x1 = g(x0); % Next iterate
err = abs(x1 - x0); % Change between successive iterates
fprintf('%d\t %.6f\t %.6f\t %.6f\n', iter, x0, x1, err);
if err < tol
root = x1;
fprintf('Converged to %.6f in %d iterations.\n', root, iter + 1);
return;
end
x0 = x1;
iter = iter + 1;
end
root = x1;
fprintf('Max iterations reached. Approximate root: %.6f\n', root);
end
Input
g = @(x) sqrt((1 - x^3)/4); % Example: derived from x = g(x)
x0 = 0.5; % Initial guess
tol = 1e-6; % Tolerance
max_iter = 50; % Max iterations
[root, iter] = successive_substitution(g, x0, tol, max_iter);
fprintf('Root: %.6f found in %d iterations\n', root, iter);
Output
Iter x g(x) Error
---------------------------------------------
0 0.500000 0.467707 0.032293
1 0.467707 0.473732 0.006025
2 0.473732 0.472674 0.001058
3 0.472674 0.472862 0.000188
4 0.472862 0.472829 0.000033
5 0.472829 0.472835 0.000006
6 0.472835 0.472834 0.000001
7 0.472834 0.472834 0.000000
Converged to 0.472834 in 8 iterations.
Root: 0.472834 found in 7 iterations
Experiment 2
1. Bisection Method
Code
function [root, iter] = bisection_method(f, a, b, tol, max_iter)
% BISECTION_METHOD: Finds a root of f(x) = 0 using the Bisection Method
fa = f(a);
fb = f(b);
iter = 0;
while iter < max_iter
c = (a + b) / 2; % Midpoint
fc = f(c);
fprintf('%d\t %.6f\t %.6f\t %.6f\t %.6f\n', iter, a, b, c, fc);
if abs(fc) < tol || (b - a)/2 < tol
break; % Converged
end
if fa * fc < 0 % Root lies in [a, c]
b = c; fb = fc;
else % Root lies in [c, b]
a = c; fa = fc;
end
iter = iter + 1;
end
root = c;
end
Input
f = @(x) x^3 - 4*x^2 + x + 6; % Define the function
a = -2; % Left endpoint of interval
b = -1; % Right endpoint of interval
tol = 1e-6; % Tolerance
max_iter = 50; % Max number of iterations
[root, iter] = bisection_method(f, a, b, tol, max_iter);
Output
Iter a b c f(c)
------------------------------------------------------------
0 -2.000000 -1.000000 -1.500000 0.375000
1 -1.500000 -1.000000 -1.250000 1.703125
2 -1.500000 -1.250000 -1.375000 0.945312
3 -1.500000 -1.375000 -1.437500 0.615845
4 -1.500000 -1.437500 -1.468750 0.492573
5 -1.500000 -1.468750 -1.484375 0.433604
6 -1.500000 -1.484375 -1.492188 0.404652
7 -1.500000 -1.492188 -1.496094 0.389064
8 -1.500000 -1.496094 -1.498047 0.381258
9 -1.500000 -1.498047 -1.499023 0.377362
10 -1.500000 -1.499023 -1.499512 0.375439
11 -1.500000 -1.499512 -1.499756 0.374484
12 -1.500000 -1.499756 -1.499878 0.374006
13 -1.500000 -1.499878 -1.499939 0.373768
14 -1.500000 -1.499939 -1.499969 0.373649
15 -1.500000 -1.499969 -1.499985 0.373590
16 -1.500000 -1.499985 -1.499992 0.373561
17 -1.500000 -1.499992 -1.499996 0.373547
18 -1.500000 -1.499996 -1.499998 0.373540
19 -1.500000 -1.499998 -1.499999 0.373537
20 -1.500000 -1.499999 -1.500000 0.373536
2. Regula Falsi Method
Code
function [root, iter] = regula_falsi_method(f, a, b, tol, max_iter)
% REGULA_FALSI_METHOD: Finds a root of f(x) = 0 using Regula Falsi Method
fa = f(a);
fb = f(b);
iter = 0;
while iter < max_iter
% Regula Falsi formula
x = (a * fb - b * fa) / (fb - fa);
fx = f(x);
fprintf('%d\t %.6f\t %.6f\t %.6f\t %.6f\n', iter, a, b, x, fx);
if abs(fx) < tol % Converged
root = x;
return;
end
% Update interval
if fa * fx < 0
b = x;
fb = fx;
else
a = x;
fa = fx;
end
iter = iter + 1;
end
root = x;
fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end
Input
f = @(x) x^3 - 4*x^2 + x + 6; % Define the function
a = -2; % Left endpoint of interval
b = -1; % Right endpoint of interval
tol = 1e-6; % Tolerance
max_iter = 50; % Maximum iterations
[root, iter] = regula_falsi_method(f, a, b, tol, max_iter);
Output
Iter a b x f(x)
------------------------------------------------------------
0 -2.000000 -1.000000 -1.333333 0.962963
1 -1.333333 -1.000000 -1.466667 0.264410
2 -1.466667 -1.000000 -1.506493 0.067140
3 -1.506493 -1.000000 -1.516891 0.016649
4 -1.516891 -1.000000 -1.519518 0.004117
5 -1.519518 -1.000000 -1.520195 0.001017
6 -1.520195 -1.000000 -1.520365 0.000252
7 -1.520365 -1.000000 -1.520406 0.000062
8 -1.520406 -1.000000 -1.520416 0.000015
9 -1.520416 -1.000000 -1.520418 0.000004
10 -1.520418 -1.000000 -1.520418 0.000001
11 -1.520418 -1.000000 -1.520419 0.000000
3. Secant Method
Code
function [root, iter] = secant_method(f, x0, x1, tol, max_iter)
% SECANT_METHOD: Finds a root of f(x) = 0 using the Secant Method
iter = 0;
while iter < max_iter
f0 = f(x0);
f1 = f(x1);
if f1 - f0 == 0
error('Division by zero in Secant formula.');
end
% Secant formula
x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
f2 = f(x2);
fprintf('%d\t %.6f\t %.6f\t %.6f\t %.6f\n', iter, x0, x1, x2, f2);
if abs(f2) < tol % Converged
root = x2;
return;
end
% Update guesses
x0 = x1;
x1 = x2;
iter = iter + 1;
end
root = x2;
fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end
Input
% Define the function
f = @(x) x^3 - 4*x^2 + x + 6;
% Initial guesses
x0 = -2;
x1 = -1;
tol = 1e-6; % Tolerance
max_iter = 50; % Maximum iterations
[root, iter] = secant_method(f, x0, x1, tol, max_iter);
% Display result
fprintf('Root: %.6f found in %d iterations\n', root, iter);
Output
Iter x0 x1 x2 f(x2)
------------------------------------------------------------
0 -2.000000 -1.000000 -1.333333 0.962963
1 -1.000000 -1.333333 -1.467742 0.264267
2 -1.333333 -1.467742 -1.507001 0.066859
3 -1.467742 -1.507001 -1.517222 0.016541
4 -1.507001 -1.517222 -1.519747 0.004073
5 -1.517222 -1.519747 -1.520350 0.001002
6 -1.519747 -1.520350 -1.520497 0.000247
7 -1.520350 -1.520497 -1.520533 0.000061
8 -1.520497 -1.520533 -1.520542 0.000015
9 -1.520533 -1.520542 -1.520544 0.000004
10 -1.520542 -1.520544 -1.520545 0.000001
11 -1.520544 -1.520545 -1.520545 0.000000
4. Newton-Raphson Method
Code
function [root, iter] = newton_raphson_method(f, df, x0, tol, max_iter)
% NEWTON_RAPHSON_METHOD: Finds a root using the Newton-Raphson Method
iter = 0;
while iter < max_iter
fx = f(x0);
dfx = df(x0);
if dfx == 0
error('Derivative is zero. Cannot continue.');
end
fprintf('%d\t %.6f\t %.6f\t %.6f\n', iter, x0, fx, dfx);
x1 = x0 - fx / dfx; % Newton step
if abs(x1 - x0) < tol % Converged
root = x1;
return;
end
x0 = x1;
iter = iter + 1;
end
root = x0;
fprintf('Maximum iterations reached. Approximate root: %.6f\n', root);
end
Input
% Define the function and its derivative
f = @(x) x^3 - 4*x^2 + x + 6;
df = @(x) 3*x^2 - 8*x + 1;
% Initial guess
x0 = -2;
tol = 1e-6; % Tolerance
max_iter = 50; % Maximum iterations
[root, iter] = newton_raphson_method(f, df, x0, tol, max_iter);
% Display result
fprintf('Root: %.6f found in %d iterations\n', root, iter);
Output
Iter x f(x) f'(x)
----------------------------------------------
0 -2.000000 -6.000000 25.000000
1 -1.760000 -2.800576 18.790400
2 -1.611084 -1.109246 15.204682
3 -1.537988 -0.261324 13.490991
4 -1.518627 -0.020078 13.061537
5 -1.517088 -0.000164 13.025382
6 -1.517075 -0.000000 13.025056
Experiment 3
Theory: Gaussian Elimination solves a linear system Ax = b by reducing the augmented matrix [A | b] to upper-triangular form (forward elimination) and then recovering the unknowns from the last equation upward (back substitution).
Code
function x = gauss_elimination(A, b)
% GAUSS_ELIMINATION: Solves Ax = b by Gaussian elimination
n = length(b);
Ab = [A b]; % Augmented matrix
% Forward Elimination
for k = 1:n-1
for i = k+1:n
factor = Ab(i,k)/Ab(k,k);
Ab(i,k:end) = Ab(i,k:end) - factor * Ab(k,k:end);
end
end
% Back Substitution
x = zeros(n,1);
x(n) = Ab(n,end)/Ab(n,n);
for i = n-1:-1:1
x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);
end
end
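The input below also calls gauss_elimination_pivoting, whose listing does not appear above. A minimal sketch of such a variant, assuming the same interface as gauss_elimination with a largest-pivot row swap added at each elimination step:

function x = gauss_elimination_pivoting(A, b)
% GAUSS_ELIMINATION_PIVOTING: Gaussian elimination with partial pivoting
n = length(b);
Ab = [A b]; % Augmented matrix
for k = 1:n-1
% Partial pivoting: bring the largest-magnitude pivot into row k
[~, p] = max(abs(Ab(k:n, k)));
p = p + k - 1;
Ab([k p], :) = Ab([p k], :);
for i = k+1:n
factor = Ab(i,k)/Ab(k,k);
Ab(i,k:end) = Ab(i,k:end) - factor * Ab(k,k:end);
end
end
% Back Substitution
x = zeros(n,1);
x(n) = Ab(n,end)/Ab(n,n);
for i = n-1:-1:1
x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);
end
end

Pivoting guards against dividing by a zero or very small pivot, which plain elimination would hit whenever Ab(k,k) happens to vanish.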
Input
A = [2 -1 1;
3 3 9;
3 3 5];
b = [2; -1; 4];
x1 = gauss_elimination(A, b);
disp('Solution without partial pivoting:');
disp(x1);
x2 = gauss_elimination_pivoting(A, b);
disp('Solution with partial pivoting:');
disp(x2);
Output
Solution without partial pivoting:
-1.0000
-2.0000
1.0000
Experiment 4
Theory:
Jacobi Method is an iterative technique to solve linear systems where each variable
is updated using only values from the previous iteration. It’s simple but may
converge slowly.
Gauss-Seidel Method improves on Jacobi by using the most recently updated
values immediately during the same iteration, often leading to faster convergence.
Both methods require the coefficient matrix to be diagonally dominant or
symmetric positive definite for guaranteed convergence.
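In symbols, for Ax = b the two updates differ only in which iterate appears on the right-hand side:

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)}\Big) \quad \text{(Jacobi)}$$

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j > i} a_{ij}\, x_j^{(k)}\Big) \quad \text{(Gauss-Seidel)}$$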
Code
Jacobi Iterative Method
function x = jacobi_method(A, b, x0, tol, max_iter)
% JACOBI_METHOD: Solves Ax = b iteratively using the Jacobi method
n = length(b);
x = x0;
x_new = zeros(n,1);
fprintf('--- Jacobi Method ---\n');
fprintf('Iter\t x1\t\t x2\t\t x3\n');
for iter = 1:max_iter
for i = 1:n
% Each component uses only values from the previous iteration
x_new(i) = (b(i) - A(i,:)*x + A(i,i)*x(i)) / A(i,i);
end
fprintf('%d\t %f\t %f\t %f\n', iter, x_new);
if norm(x_new - x, inf) < tol
x = x_new; % Converged
return;
end
x = x_new;
end
end
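The theory above also covers Gauss-Seidel, but its listing does not appear here; a minimal sketch with the same interface as jacobi_method, where the only change is that each freshly updated component is used immediately:

function x = gauss_seidel_method(A, b, x0, tol, max_iter)
% GAUSS_SEIDEL_METHOD: Jacobi variant that reuses updated values immediately
n = length(b);
x = x0;
fprintf('--- Gauss-Seidel Method ---\n');
fprintf('Iter\t x1\t\t x2\t\t x3\n');
for iter = 1:max_iter
x_old = x;
for i = 1:n
% x already holds the new iteration's values for components j < i
x(i) = (b(i) - A(i,:)*x + A(i,i)*x(i)) / A(i,i);
end
fprintf('%d\t %f\t %f\t %f\n', iter, x);
if norm(x - x_old, inf) < tol
return; % Converged
end
end
end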
Input
A = [4 -1 0; -1 4 -1; 0 -1 4];
b = [15; 10; 10];
x0 = [0; 0; 0]; % Initial guess
tol = 1e-6; % Tolerance
max_iter = 25; % Maximum iterations
x = jacobi_method(A, b, x0, tol, max_iter);
Output
--- Jacobi Method ---
Iter x1 x2 x3
1 3.750000 2.500000 2.500000
2 4.375000 3.125000 3.125000
3 4.531250 3.375000 3.437500
...
Converges around [5.000000, 5.000000, 5.000000]
Theory: The Power Method is an iterative technique used to find the largest
(dominant) eigenvalue and its corresponding eigenvector of a square matrix.
Starting with an initial guess vector, the method repeatedly multiplies it by the
matrix and normalizes the result. Over iterations, the vector aligns with the
dominant eigenvector, and the associated eigenvalue is approximated. This method
is simple and effective for large matrices but only finds the largest eigenvalue in
magnitude.
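Concretely, each step normalizes the matrix-vector product and estimates the eigenvalue with the Rayleigh quotient:

$$x^{(k+1)} = \frac{A\,x^{(k)}}{\lVert A\,x^{(k)} \rVert}, \qquad \lambda^{(k)} = \frac{\big(x^{(k)}\big)^{T} A\, x^{(k)}}{\big(x^{(k)}\big)^{T} x^{(k)}}$$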
Code
function [lambda, eigenvector] = power_method(A, x0, tol, max_iter)
% POWER_METHOD: Finds the dominant eigenvalue and eigenvector
x = x0 / norm(x0); % Normalized starting vector
lambda_old = 0;
for iter = 1:max_iter
y = A * x;
lambda = x' * y; % Rayleigh quotient estimate (x has unit length)
x = y / norm(y); % Normalize to get the next iterate
fprintf('%d\t %.6f\t [%s]\n', iter, lambda, num2str(x', '%.4f '));
if abs(lambda - lambda_old) < tol
break; % Eigenvalue estimate has converged
end
lambda_old = lambda;
end
eigenvector = x;
end
Input
A = [2 1 0;
1 3 1;
0 1 2];
x0 = [1; 1; 1]; % Initial guess (assumed)
tol = 1e-6; % Tolerance
max_iter = 50; % Maximum iterations
[lambda, eigenvector] = power_method(A, x0, tol, max_iter);
Output
Iter Eigenvalue Eigenvector
1 4.242641 [0.4082 0.8165 0.4082]
2 4.472136 [0.3873 0.8480 0.3645]
3 4.513516 [0.3827 0.8552 0.3526]
...
Dominant Eigenvalue: 4.561552
Corresponding Eigenvector: [0.3776 0.8615 0.3384]
Experiment 6
Theory:
Forward and Backward Difference Tables are used in numerical interpolation,
especially for evenly spaced data.
The Forward Difference Table calculates successive differences from top to
bottom and is mainly used in Newton’s Forward Interpolation.
The Backward Difference Table calculates differences from bottom to top
and is used in Newton’s Backward Interpolation.
These tables help estimate values of a function between known data points
efficiently.
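The entries of both tables are built by applying the first-order differences repeatedly:

$$\Delta y_i = y_{i+1} - y_i, \qquad \nabla y_i = y_i - y_{i-1}, \qquad \Delta^2 y_i = \Delta y_{i+1} - \Delta y_i, \;\ldots$$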
Code
function difference_tables(x, y)
n = length(x);
forward = zeros(n, n);
backward = zeros(n, n);
% Build the forward table: column 1 holds y, later columns hold differences
forward(:,1) = y(:);
for j = 2:n
for i = j:n
forward(i,j) = forward(i,j-1) - forward(i-1,j-1);
end
end
% Backward table stored right-justified so row i prints its i entries ending in y(i)
backward(:,n) = y(:);
for j = n-1:-1:1
for i = n-j+1:n
backward(i,j) = backward(i,j+1) - backward(i-1,j+1);
end
end
% Display Tables
disp('--- Forward Difference Table ---');
for i = 1:n
fprintf('%8.4f', forward(i,1:i));
fprintf('\n');
end
disp('--- Backward Difference Table ---');
for i = 1:n
fprintf('%8.4f', backward(i,n-i+1:n));
fprintf('\n');
end
end
Input
x = [1 2 3 4 5];
y = [1 8 27 64 125]; % y = x^3
difference_tables(x, y);
Output
--- Forward Difference Table ---
1.0000
8.0000 7.0000
27.0000 19.0000 12.0000
64.0000 37.0000 18.0000 6.0000
125.0000 61.0000 24.0000 6.0000 0.0000
Experiment 7
Theory: Lagrange Interpolation constructs the unique polynomial through n given data points as a weighted sum of basis polynomials L_i(x), and evaluates it at the desired point without requiring evenly spaced data.
Code
function y_interp = lagrange_interpolation(x, y, x_interp)
n = length(x);
y_interp = 0;
for i = 1:n
L = 1; % Lagrange basis polynomial L_i evaluated at x_interp
for j = 1:n
if i ~= j
L = L * (x_interp - x(j)) / (x(i) - x(j));
end
end
y_interp = y_interp + y(i) * L;
end
end
Input
% Given data points
x = [1 2 3 4];
y = [1 4 9 16]; % y = x^2
% Point to interpolate
x_interp = 2.5;
y_interp = lagrange_interpolation(x, y, x_interp);
fprintf('Interpolated value at x = %.2f is y = %.4f\n', x_interp, y_interp);
Output
Interpolated value at x = 2.50 is y = 6.2500
Experiment 8
Theory:
Trapezoidal Rule approximates the area under a curve by dividing it into
trapezoids and summing their areas. It works well for linear or nearly linear
functions over small intervals.
Simpson’s 1/3 Rule is a more accurate method that uses parabolic arcs to
approximate the curve. It requires an even number of subintervals.
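With n subintervals of width h = (b - a)/n, the two composite rules are:

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\Big[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\Big] \quad \text{(Trapezoidal)}$$

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Big[f(x_0) + 4\sum_{i\,\text{odd}} f(x_i) + 2\sum_{\substack{i\,\text{even} \\ 0 < i < n}} f(x_i) + f(x_n)\Big] \quad \text{(Simpson's 1/3)}$$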
Code
function numerical_integration()
% Function to be integrated
f = @(x) 1 ./ (1 + x.^2); % Example: f(x) = 1 / (1 + x^2)
a = 0; % Lower limit
b = 1; % Upper limit
n = 6; % Number of intervals (must be even for Simpson’s 1/3)
% Trapezoidal Rule
h = (b - a) / n;
sum_trap = f(a) + f(b);
for i = 1:n-1
xi = a + i*h;
sum_trap = sum_trap + 2*f(xi);
end
I_trap = (h/2) * sum_trap;
% Simpson's 1/3 Rule
sum_simp = f(a) + f(b);
for i = 1:n-1
xi = a + i*h;
if mod(i, 2) == 1
sum_simp = sum_simp + 4*f(xi); % Odd interior points weighted 4
else
sum_simp = sum_simp + 2*f(xi); % Even interior points weighted 2
end
end
I_simp = (h/3) * sum_simp;
% Display Results
fprintf('Trapezoidal Rule Approximation: %.6f\n', I_trap);
fprintf('Simpson''s 1/3 Rule Approximation: %.6f\n', I_simp);
end
Input
f = @(x) 1 ./ (1 + x.^2); % Function: f(x) = 1 / (1 + x²)
a = 0; % Lower limit of integration
b = 1; % Upper limit of integration
n = 6; % Number of subintervals (must be even for Simpson’s 1/3 Rule)
numerical_integration();
Output
Trapezoidal Rule Approximation: 0.785392
Simpson's 1/3 Rule Approximation: 0.785398
Experiment 9
Theory:
The Runge-Kutta 4th order method (RK4) is a powerful numerical technique to
solve first-order ordinary differential equations (ODEs) of the form:
dy/dx = f(x, y), y(x0) = y0
It calculates intermediate slopes (k-values) to get a more accurate next value of y.
RK4 balances accuracy and simplicity, making it widely used in numerical analysis.
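One RK4 step from (x_n, y_n) with step size h computes:

$$k_1 = h\,f(x_n, y_n), \quad k_2 = h\,f\Big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_1}{2}\Big), \quad k_3 = h\,f\Big(x_n + \tfrac{h}{2},\, y_n + \tfrac{k_2}{2}\Big), \quad k_4 = h\,f(x_n + h,\, y_n + k_3)$$

$$y_{n+1} = y_n + \frac{1}{6}\big(k_1 + 2k_2 + 2k_3 + k_4\big)$$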
Code
function runge_kutta_ode()
% Define the differential equation dy/dx = f(x, y)
f = @(x, y) x + y; % Example: dy/dx = x + y
% Initial conditions
x0 = 0;
y0 = 1;
% Step size and final value of x
h = 0.1;
xn = 0.5;
% Number of steps
n = (xn - x0) / h;
% Runge-Kutta Iteration
fprintf('x\t\t y\n');
for i = 1:n
k1 = h * f(x0, y0);
k2 = h * f(x0 + h/2, y0 + k1/2);
k3 = h * f(x0 + h/2, y0 + k2/2);
k4 = h * f(x0 + h, y0 + k3);
y0 = y0 + (k1 + 2*k2 + 2*k3 + k4) / 6; % Weighted average of the four slopes
x0 = x0 + h;
fprintf('%.4f\t %.6f\n', x0, y0);
end
end
Input
% Differential Equation:
f = @(x, y) x + y;
% Initial Condition:
x0 = 0;
y0 = 1;
% Step Size:
h = 0.1;
% Final Value of x:
xn = 0.5;
runge_kutta_ode();
Output
x y
0.1000 1.110341
0.2000 1.242805
0.3000 1.399717
0.4000 1.583649
0.5000 1.797447
Experiment 10
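Theory:
Picard's Method of Successive Approximation solves dy/dx = f(x, y), y(x0) = y0 by rewriting the ODE as an equivalent integral equation and iterating:

$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f\big(t,\, y_n(t)\big)\, dt, \qquad y_0(x) = y_0.$$

Each pass refines the previous approximation, and under mild conditions on f (Lipschitz continuity in y) the iterates converge to the exact solution near x0.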
Code
function picard_method()
% PICARD_METHOD: Successive approximations for dy/dx = x + y, y(0) = 1
syms x t
% Initial condition
x0 = 0;
y0 = 1;
y_approx = sym(y0); % Zeroth approximation: y_0(x) = y0
for k = 1:4 % Number of iterations
% y_k(x) = y0 + integral from x0 to x of f(t, y_{k-1}(t)) dt, with f(t, y) = t + y
y_approx = y0 + int(t + subs(y_approx, x, t), t, x0, x);
fprintf('Iteration %d: y(0.2) ≈ %.6f\n', k, double(subs(y_approx, x, 0.2)));
end
end
Input
Differential equation: dy/dx = x + y
Initial condition: y(0) = 1
Point of approximation: x = 0.2
Number of iterations: 4
Output
Iteration 1: y(0.2) ≈ 1.020000
Iteration 2: y(0.2) ≈ 1.020400
Iteration 3: y(0.2) ≈ 1.020408
Iteration 4: y(0.2) ≈ 1.020408