
202201154_Lab03

The document presents a MATLAB implementation of two optimization algorithms, Gradient Descent (fixed step size) and Steepest Descent (exact line search), applied to a quadratic function defined by a Hessian matrix, a linear term, and a constant. Both methods start from the same initial point and iteratively update the solution while tracking the squared error norm to the known optimum. The results are visualized in a plot comparing the convergence of the two methods over iterations.


Rishi Godhasara - 202201154

Question 1

clc;
clear;
close all;

Q = [2, 0; 0, 4]; % Hessian matrix (positive definite)
b = [4; -4]; % Linear term
c = 10; % Constant term

% Gradient function
grad_f = @(x) Q * x - b;
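
For reference, the gradient above corresponds to the usual quadratic model; the objective itself is not written out in the lab, so the following form is inferred from Q, b, c, and grad_f rather than quoted from it:

f(x) = \frac{1}{2}\, x^\top Q x - b^\top x + c, \qquad \nabla f(x) = Q x - b, \qquad \nabla f(x^*) = 0 \;\Rightarrow\; x^* = Q^{-1} b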

x0 = [-3; 2]; % Starting point
tol = 1e-4; % Convergence threshold
max_iter = 1000; % Maximum iterations

% Optimal solution (Q * x* = b → x* = inv(Q) * b)
x_opt = Q \ b;

% Storage for errors
error_vals_gd = zeros(max_iter, 1); % For Gradient Descent
error_vals_sd = zeros(max_iter, 1); % For Steepest Descent

%% Gradient Descent Algorithm
x = x0;
alpha_gd = 0.1; % Fixed step size
iter_gd = 0;

for k = 1:max_iter
    grad = grad_f(x);
    % Stop if the norm of the gradient is less than the threshold
    if norm(grad) < tol
        break;
    end
    % Gradient Descent update step
    x = x - alpha_gd * grad;
    % Store error norm (squared for better visualization)
    error_vals_gd(k) = norm(x - x_opt, 2)^2;
    iter_gd = iter_gd + 1; % Count iterations
end
iter_gd

iter_gd =

    52

% Trim excess zeros
error_vals_gd = error_vals_gd(1:iter_gd);
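
A brief sanity check on the fixed step size used above (this check is an addition, not part of the original lab): for a quadratic with Hessian Q, fixed-step gradient descent converges exactly when 0 < alpha < 2/lambda_max(Q); here lambda_max(Q) = 4, so any alpha below 0.5 works and alpha_gd = 0.1 is safely inside that range.

% Added sanity check: fixed-step GD on a quadratic converges iff
% 0 < alpha < 2 / max(eig(Q)). With Q = diag(2, 4) the bound is 0.5.
alpha_max = 2 / max(eig(Q));
fprintf('alpha_gd = %.2f, stability bound 2/lambda_max = %.2f\n', alpha_gd, alpha_max);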

%% Steepest Descent Algorithm
x = x0;
iter_sd = 0;

for k = 1:max_iter
    grad = grad_f(x);
    % Compute optimal step size: alpha = (grad' * grad) / (grad' * Q * grad)
    alpha_sd = (grad' * grad) / (grad' * (Q * grad));
    % Update step
    x = x - alpha_sd * grad;
    % Store error norm (squared)
    error_vals_sd(k) = norm(x - x_opt, 2)^2;
    iter_sd = iter_sd + 1; % Count iterations
    % Stop if the norm of the gradient is less than the threshold
    if norm(grad) < tol
        break;
    end
end

% Trim excess zeros
error_vals_sd = error_vals_sd(1:iter_sd);
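
The step size computed inside the loop above is the exact line-search step for a quadratic objective; a short derivation, assuming the quadratic model noted after grad_f (added for clarity, not part of the original): with g_k = \nabla f(x_k) = Q x_k - b,

\frac{d}{d\alpha} f(x_k - \alpha g_k) = -g_k^\top \bigl( Q (x_k - \alpha g_k) - b \bigr) = -g_k^\top g_k + \alpha\, g_k^\top Q g_k = 0 \;\Rightarrow\; \alpha_k = \frac{g_k^\top g_k}{g_k^\top Q g_k},

which is exactly the expression used for alpha_sd.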

%% Compare the two methods in one plot
figure;
plot(1:iter_gd, error_vals_gd, '-o', 'LineWidth', 2, ...
    'DisplayName', 'Gradient Descent');
hold on;
plot(1:iter_sd, error_vals_sd, '-x', 'LineWidth', 2, ...
    'DisplayName', 'Steepest Descent');
xlabel('Iterations (k)');
ylabel('||x_k - x*||_2^2');
title('Comparison of Gradient Descent and Steepest Descent');
legend;
grid on;
hold off;
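
Since both methods converge linearly on this quadratic, the difference in their rates is easier to read on a logarithmic error axis; a minimal alternative plot, reusing the variables computed above (semilogy is the only new call, and this figure is an addition rather than part of the lab):

% Optional (added): same comparison with a log-scaled error axis.
figure;
semilogy(1:iter_gd, error_vals_gd, '-o', 'LineWidth', 2, ...
    'DisplayName', 'Gradient Descent');
hold on;
semilogy(1:iter_sd, error_vals_sd, '-x', 'LineWidth', 2, ...
    'DisplayName', 'Steepest Descent');
xlabel('Iterations (k)');
ylabel('||x_k - x*||_2^2 (log scale)');
legend;
grid on;
hold off;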

