Practice 16 17 18


Team:

1. Marisnelvys Cabreja Consuegra


2. Dannys García Miranda
Answers
Practice 16
1),2) Gerschgorin's circle theorem states that every eigenvalue of a square matrix lies in
at least one of the disks whose centers are the diagonal entries of the matrix and whose
radii are the sums of the absolute values of the off-diagonal entries of the corresponding
rows. In this case the eigenvalues of the matrix lie very close to the centers: the values
2 and 7 coincide with circle centers, and the remaining centers approximate the other
eigenvalues.
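A minimal sketch of this computation (our own, separate from the practice code below), using row sums of the off-diagonal absolute values:

A=[2 1 1 0; -1 15 1 1; 0 1 7 1; 1 0 2 -5];
centers=diag(A) % disk centers: the diagonal entries
radii=sum(abs(A),2)-abs(diag(A)) % disk radii: row sums of the off-diagonal absolute values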

A=[ 2 1 1 0; -1 15 1 1; 0 1 7 1; 1 0 2 -5]; %Matrix


[EV, DV] = eig(A);%EV columns of eigenvectors and DV diagonal of eigenvalues
DV=diag(real(DV))
EV=real(EV)
n=size(A,1);
k=n*2*n+1; % number of inverse-iteration steps
x=ones(n,1); % initial vector x0
y=norm(x,2); % y: Euclidean norm of the vector x0
I=eye(n);
center=diag(A) % centers of the Gerschgorin circles
for i=1:n % Gerschgorin circle radii
R=sum(abs(A(i,1:n)))-abs(A(i,i)) % sum of the absolute values of the off-diagonal entries of row i
end
t=max(max(abs(DV))); % largest absolute value among the eigenvalues from eig
if t==-min(min(DV)) % the dominant eigenvalue is negative (t matches the minimum with its sign flipped)
g=-1; %sign
else
g=1; %sign
end
a=t+0.2;
B=A-a*I;
if g==1
for i=1:k % k inverse-iteration steps for a good approximation of the result
x=B\(x/y);
y=norm(x,2); % y: Euclidean norm of the vector x
xmax=x/y;
c=(xmax'*A*xmax); % Rayleigh quotient
end
else
for i=1:k % k inverse-iteration steps for a good approximation of the result
x=B\(x/y);
y=(((-1)^(k))*norm(x,2)); % y: norm of x with sign (-1)^k, to handle a negative dominant eigenvalue
xmax=x/y;
c=(xmax'*A*xmax); % Rayleigh quotient
end
end
ymax=1/y+a % estimate of the largest-magnitude eigenvalue
c=c % Rayleigh quotient
F=round(DV,4);
[row,column]=find(F==(round(g*t,4))); % locate the dominant eigenvalue among the eig results
if EV(1,row)>0 % sign of the first component of the corresponding eig eigenvector
s=1;
else
s=-1;
end
if s>0
if xmax(1,1)>0
xmax=xmax
else
xmax=-xmax
end
else
if xmax(1,1)<0
xmax=xmax
else
xmax=-xmax
end
end

Below we plot the circles to better visualize the results.

title([' \fontsize{16} \color{magenta}-5 \color{blue}2 \color{green}7 \color{red}15 '])
viscircles([2,0],2,"LineStyle","-","Color",'blue','LineWidth',1) % row 1: center 2, radius 2
hold on
viscircles([2,0],0.2,"Color",'blue')
hold on
viscircles([15,0],3,"LineStyle","-","Color",'red','LineWidth',1) % row 2: center 15, radius 3
hold on
viscircles([15,0],0.2,"Color",'red')
hold on
viscircles([7,0],2,"LineStyle","-","Color",'green','LineWidth',1) % row 3: center 7, radius 2
hold on
viscircles([7,0],0.2,"Color",'green')
hold on
viscircles([-5,0],3,"LineStyle","-","Color",'magenta','LineWidth',1) % row 4: center -5, radius 3
hold on
viscircles([-5,0],0.2,"Color",'magenta')
• The eigenvalues and eigenvectors obtained are very close to those returned by eig.
• c = 15.0568 is the Rayleigh quotient and ymax = 15.4568 is the estimate of the
largest-magnitude eigenvalue; the two are close, but the Rayleigh quotient is more
accurate, i.e. closer to the value found by the eig command (see the short check below).
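A short check of this comparison (our own sketch; it assumes A, c and ymax are still in the workspace from the code above):

ev=real(eig(A)); % eigenvalues from eig (real parts, as in the code above)
[~,idx]=max(abs(ev));
lambda=ev(idx) % eigenvalue of largest magnitude according to eig
error_c=abs(c-lambda) % error of the Rayleigh quotient
error_ymax=abs(ymax-lambda) % error of the inverse-iteration estimate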

Practice 17
1) To write the QR algorithm code, we consider the following:
A (n×n): square matrix whose eigenvalues are obtained by computing its characteristic
polynomial p(x) = |A - x*In| and factoring it, since the roots of this polynomial are the
eigenvalues of the matrix.

Iterative method:

* A (n×n) invertible (A*A^(-1) = A^(-1)*A = I, det(A) ≠ 0), expressed as A = Q*R.

(e1, e2, ..., en: column vectors of A, a basis of R^n because A is invertible). We then
apply Gram-Schmidt to compute the associated orthonormal basis (u1, u2, ..., un), and:
* Q: orthogonal matrix (Q^(-1) = Q'), because its column vectors u1, u2, ..., un form an
orthonormal basis.
* R: upper triangular, obtained by solving Q*R = A for R: R = Q^(-1)*A = Q'*A.
Since Q is invertible, RQ = Q^(-1)(QR)Q (because RQ = I*RQ = (Q^(-1)Q)RQ = Q^(-1)(QR)Q),
so RQ is always similar to QR = A; hence RQ has the same eigenvalues as A, with the same
multiplicities.
The QR algorithm then consists of starting with the QR factorization of the initial
matrix, A = Q0*R0, then factoring R0*Q0 = Q1*R1, R1*Q1 = Q2*R2, and so on; this produces a
chain of matrices all similar to the initial one, so every matrix that appears here has
the same eigenvalues with the same multiplicities.
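For reference, here is a minimal sketch (our own, not part of the practice statement) of a QR factorization by classical Gram-Schmidt, orthogonalizing each column against all previously computed vectors u1,...,u_(i-1):

A=[1 1 1; 0 1 1; 1 0 -1]; % the same test matrix used in a) below
n=size(A,1);
Q=zeros(n);
for i=1:n
w=A(:,i);
for j=1:i-1
w=w-(Q(:,j)'*A(:,i))*Q(:,j); % remove the component of the i-th column along u_j
end
Q(:,i)=w/norm(w,2);
end
R=Q'*A % upper triangular (up to rounding)

One step of the QR algorithm then replaces A by R*Q and repeats.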

a)
A=[1 1 1; 0 1 1; 1 0 -1];
n=size(A,1);
for r=1:10 % Number of iterations
% 1 Check if A is invertible
%D=det(A)
%if D==0;
%disp('The matrix is not invertible');
%else
%disp('The matrix is invertible, Continue');
%end
% 2 Column vectors of A, ei
% 3 Calculate the associated orthonormal basis using the Gram-Schmidt method, ui
e1=A(1:n,1);
u1=e1/norm(e1,2);
Q=zeros(n);
Q(1:n,1)=u1;
for i=2:n
ei=A(1:n,i);
wi=ei-sum((ei.*u1)).*u1; % scalar product: sum((ei.*u1)); note that this removes only the component along the previous u, not along all previous u's
u1=wi/norm(wi,2);
Q(1:n,i)=u1;
end

% Verify orthogonality of Q
%K=round(Q^(-1),4)==round(Q',4);
%if prod(K)==1
%disp(" It is fulfilled that Q^(-1)=Q'");
%else
%disp("Q^(-1) is not equal to Q', Q is not orthogonal");
%end
R=round(Q'*A,4);
%[QQ,RR] = qr(A);% QR factorization with the qr command, to compare results
% Verify orthogonality of QQ
%K=round(QQ^(-1),4)==round(QQ',4);
%if prod(K)==1
%disp(" It is fulfilled that QQ^(-1)=QQ'");
%else
%disp("QQ^(-1) is not equal to QQ', QQ is not orthogonal");
%end
%P=RR*QQ;
%PP=det(P)
%Q=Q
%Qinv=Q^(-1)
%Qtrasp=Q'
%QQ=QQ
%QQinv= QQ^(-1)
%QQtrasp= QQ'
%R=R
%RR=RR
A=R*Q;% We assign the value of R*Q to the matrix A to repeat the algorithm r times
end

** The code above includes commented-out checks for running different step-by-step
analyses; for what we want to analyze below, it reduces to:
A=rand(30,30);
n= size(A,1);
for r=1: 10000 % Number of iterations
e1=A(1:n,1);
u1=e1/norm(e1,2);
Q=zeros(n);
Q(1:n,1)=u1;
for i=2:n
ei=A(1:n,i);
wi=ei-sum((ei.*u1)).*u1; % scalar product: sum((ei.*u1)); note that this removes only the component along the previous u, not along all previous u's
u1=wi/norm(wi,2);
Q(1:n,i)=u1;
end
R=round(Q'*A,4);
A=R*Q;% We assign the value of R*Q to the matrix A to repeat the algorithm r times
end
S=A
F=tril(S,-1)
Norm=norm(F,2)
Eigenvalues=eig(S)
DiagEntr=diag(S)

The matrix S has all its entries equal to NaN: since the entries of the original matrix A
are small and the iterations are numerous, operations with indeterminate numerical results
(approximately 0/0) can arise, producing NaN (NaN is MATLAB's scalar representation of
"not a number"). In addition, the orthogonalization above removes only the component along
the previous column rather than along all previously computed columns, so Q is not exactly
orthogonal, which may also contribute to this breakdown. F is lower triangular with NaN
below the main diagonal, therefore its norm is also NaN, and the diagonal entries and the
eigenvalues found with the eig command are also NaN. The same happens for both matrices A
and B, since they have similar characteristics in terms of their small components and the
same high number of iterations.

These conclusions come from our own code for the QR algorithm, using our own code
for iterated QR factorization, but analyzing the above with the qr command to
perform the iterated factorization we obtain the following results:

A=rand(30,30);
for r=1: 10000 % Number of iterations
[Q,R]=qr(A);
A=R*Q;
end
S=A
F=tril(S,-1)
Norm=norm(F,2)
Eigenvalues=eig(S)
DiagEntr=diag(S)
(Outputs shown for matrix A and, for matrix B, the real part of F.)
From here it is concluded that, for A and B alike, S is practically a diagonal matrix: it
has few nonzero elements outside its main diagonal (B has only one nonzero element below
its main diagonal), and the norm of F is 1.5265 for A while for B it is 9.9725e-6, much
smaller, as expected, since B's off-diagonal entries are much smaller when compared with
the elements of its main diagonal. Because S is in practice diagonal for both A and B (all
the elements off its main diagonal tend to zero), the eigenvalues of S closely approximate
the elements of its main diagonal.

Practice 18
1)

% We have a real symmetric matrix A (n x n) and we apply a Jacobi rotation to each 2x2 principal submatrix of A; the number of such submatrices is N.
A=rand(10);
A=0.5*(A+A');
n=size(A,1);
N=n*(n-1)/2; % number of 2x2 principal submatrices (pairs i<j)
O=zeros(n);
O=triu(A,1)+O;
O=tril(A,-1)+O;
off_A=norm(O,'fro');% norm of entries outside the main diagonal
t1=cputime;
while off_A>0.001
%Iterated Jacobi rotations, cyclical Jacobi method.
for i=1:n-1
for j=i+1:n
a=A(i,i);
b=A(i,j);
bb=A(j,i);
d=A(j,j);
B=[a b; bb d];
% For any real symmetric 2x2 matrix there exists an orthogonal matrix J in R^(2x2) such that J'*A*J is diagonal.
T=(a-d)/(2*b);
r=sqrt(1+T^2);
t=sign(T)/(abs(T)+r);
q=sqrt(1+t^2);
c=1/q;
s=t*c;
% J is the Jacobi rotation matrix; its nonzero elements in rows/columns i and j are c (on the diagonal) and -s, s (off the diagonal)
J=eye(n);
J(i,i)=c;
J(i,j)=-s;
J(j,i)=s;
J(j,j)=c;
A=J'*A*J;
end
end
Eoff=(A-diag(diag(A)));
off_A=norm(Eoff,'fro');
end
t2=cputime;
t=t2-t1
A=A
Eoff= Eoff
off_A=off_A

2) The output matrix is diagonal (up to the 0.001 tolerance on the off-diagonal norm).

3)

As can be seen, the norm of the off-diagonal entries of the output matrix has decreased
considerably with respect to the off-diagonal norm of the initial matrix; therefore the
method is convergent and the output matrix J'*A*J is diagonal.
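A small sketch of how this comparison can be made explicit (our own; A0 is our own variable name for a copy of the input matrix saved before the while loop):

off_initial=norm(A0-diag(diag(A0)),'fro') % off-diagonal norm of the initial matrix
off_final=norm(A-diag(diag(A)),'fro') % off-diagonal norm of the output matrix (below the 0.001 tolerance)
ratio=off_final/off_initial % much smaller than 1, showing the decrease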

4) The execution time is:

For a matrix 10x10:

t =

0.0500

For a matrix 100x100:

t =

12.3700

For a matrix 1000x1000:

t =

9700.0014

5) Since the matrix is diagonal its exact eigenvalues are the elements of its
diagonal, so:

prod(round(sort(eig(A)),4)==round(sort(diag(A)),4))

If this result is 1, it means that the eigenvalues returned by the eig command and the
exact ones corresponding to the diagonal of the matrix are equal (to 4 decimal places).

As can be seen, the result was 1, so the eigenvalues found in both ways agree.
